The first Cloud Expo event I attended two years ago was striking for the myriad of traditional hosting companies cloud-washing their offerings (usually quite blatantly and badly, I felt). Last year what struck me was the myriad of new small vendors selling niche cloud-related product offerings – data transfer optimisation, security products, management products, etc. I wonder what the theme will be this year? It'll be interesting to see how many vendors are wearing the "I'm a cloud brokerage" badge at this week's Cloud Expo.

Whilst I was at AWS's re:Invent conference last November, one of the guest speakers at the partner day was Tiffani Bova, Distinguished Analyst at Gartner. Part of her presentation covered the topic of cloud brokerage, something Gartner have been talking about for quite a while, and something Tiffani and I had some debate about afterwards.

I must admit, it took me a while to wrap my head around the concept of cloud brokerage, partially as the pushing of the term was coming more from the analyst community than the rest of the cloud industry. William Fellows from 451 Research refers to this as "…a 'cloudemic' of firms now stamping 'cloud broker' across their service portfolios". Tiffani's view was that 90%+ of the AWS partners in the room (including Smart421) were brokers. It's such a broad definition – Gartner's, for example, is:

Cloud services brokerage (CSB) is an IT role and business model in which a company or other entity adds value to one or more (public or private) cloud services on behalf of one or more consumers of that service via three primary roles including aggregation, integration and customization brokerage.

The great thing about definitions is that you can never have enough :). Way back in mid 2011 NIST published the following definition of a Cloud Broker…

[Diagram: NIST's Cloud Broker definition]

The default view taken in society is that anyone with the title "agent" (aka broker) is looked down upon – e.g. estate agent, recruitment agent etc :). But by this definition I guess we're all brokers in one way or another, even if it's just combining toast and scrambled eggs to make the kids' breakfast in the morning (aggregation).

Looking at what Smart421 delivers for our customers – we integrate different cloud and non-cloud services, we design and implement complex cloud environments and we add a 24×7 ITIL-based service management capability on top including ongoing capacity and cost optimisation. We also add value by handling access management, enhanced security, managing billing complexities and bringing new market innovations to our customers (as cloud service providers like AWS are still innovating and releasing functionality at an amazing rate, too fast for customers to keep up generally).  I guess that means I get to wear the badge too!

In a blog post tomorrow I’ll talk some more about one of the oft-touted cloud brokerage promises – that of dynamically migrating IT workloads across cloud providers (closely related to the dreaded phrase “bursting into the cloud”), and of making deployment-time decisions about which cloud service provider to use…and why I don’t believe in it.

PS If you happen to be attending Cloud Expo Europe this week, catch me on the Smart421 stand or let me know you are there via @SmartCTO

Keynote room before we all troop in…

Here at AWS re:Invent, Werner Vogels  (@Werner) was the keynote compere on day 3, and I quickly realised the formula. On day 2, Andy Jassy announced some new stuff with more of a business value message, but the real techy new announcements were for Werner to unveil. He also made light of the fact that all AWS ever do is bleedin’ mention Netflix, saying that a new drinking game had been created where you have to down a shot every time anyone mentions them.

Anyway, there were a lot of announcements…

  • PostgreSQL on RDS – this actually got a spontaneous whoop from the gathered audience, and so there is clearly a lot of pent-up demand for it. I note that it has multi-AZ support from launch, which hasn’t been the case for other databases on RDS.
  • A new I2 instance family – for uber-high IO
  • A new C3 instance family – for uber-high CPU
  • Global secondary indexes on DynamoDB
  • Cross region Redshift snapshot copies
  • Amazon Kinesis – stream processing that can handle huge data ingest rates and deliver it to a number of consuming applications or services.
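Kinesis's ingest model is worth a moment's thought: records are spread across shards by hashing each record's partition key, so records sharing a key stay in order on one shard. A rough Python sketch of that routing idea (the real service maps an MD5 hash into per-shard hash-key ranges; modulo here is a simplification, and the names are my own):

```python
import hashlib
from collections import defaultdict

NUM_SHARDS = 4

def shard_for(partition_key, num_shards=NUM_SHARDS):
    """Route a record to a shard by hashing its partition key.
    Kinesis uses an MD5 hash mapped into per-shard key ranges;
    modulo over the digest is a simplified stand-in."""
    digest = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return digest % num_shards

shards = defaultdict(list)

def put_record(partition_key, data):
    """Append data to the shard chosen by the key. Records with the
    same partition key land on the same shard, preserving their order."""
    shards[shard_for(partition_key)].append((partition_key, data))

for i in range(10):
    put_record("device-42", f"reading-{i}")   # all on one shard, in order
put_record("device-7", "reading-0")           # may land elsewhere
```

Consumers then read each shard independently, which is what lets the service scale ingest horizontally while keeping per-key ordering.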

So that’s enough for most vendors for a year of releases really.

Later in the day I attended the Amazon WorkSpaces session (which wasn't previously on the re:Invent agenda as it hadn't been announced!) to understand this new offering a bit better. I have to say that the first impression is one of… well… it's a bit dull. What can I say – it's a Windows desktop – even if you are accessing it via a laptop, an iPad or an Android tablet. It just does what it says on the tin. It's the economics of it that are the really interesting thing. It's not a market I know really well (i.e. the price points of the key vendors), but it's fair to say that the Citrix guys on the expensive-looking stand in the vendor expo were not looking too happy – or very busy. In a kind of "my breakfast has just been eaten" way.

Interestingly I had a chat with an Amazonian on this point and his view was that Citrix could have easily launched a pay-as-you-go virtual desktop product like Amazon WorkSpaces if they had really wanted to, but I guess Citrix had the classic dilemma of whether to cannibalise their on-premise business model or not. The answer clearly is – you HAVE to do this, as someone else will regardless.

Both Citrix and VMware's share prices dipped following the announcement.

VMware’s response was as you would expect from a competitor – as detailed in Erik Frieberg’s blog post:

  • We welcome a new competitor in our market etc etc
  • Ours is better than theirs

What else could he say? But of course I strongly suspect he knows the real unspoken truth here – it’s not AWS’s first product launch you need to worry about (as it will always typically be relatively immature and lacking key features), it’s the pace of product updates that follow it that should concern you.

From a technical point of view, Amazon WorkSpaces uses the same protocol over the wire to "deliver the pixels" as VMware's product – PCoIP, a UDP protocol that both parties have licensed. Only Windows 7 desktops are supported in this preview release, running on Windows Server 2008 R2 under the covers. Other OSs are clearly in the pipeline, as is support for browser-based clients. Provisioning time is currently 15–20 minutes, but they plan to get that down to 5 minutes when it comes out of preview. Active Directory integration is supported.

The day wrapped up with the re:Play party, sponsored by Intel. I had a chat with the Intel guy to understand what their motivation was for such a strong sponsorship with AWS. Obviously AWS buy a shedload of Intel chips, but it's more than that – I just think they know they need to be onboard with the way the wind is blowing, simple as that.

The party was good I must admit, although it's never a good sign at a party when there is a queue for the men's toilets but not the women's :). There was lots of entertainment laid on – 1980s video games, Jenga, helicopter flying, laser-dodging etc – and then a set from deadmau5 – who I must confess I was not massively familiar with. Good to feel your jeans shake with the sub-bass though…


I was off playing Centipede (badly) after 30 minutes of this. On the way back from the party I spoke to a Canadian (Dov Amihod, CTO of Lagoa – a really interesting start-up doing high-quality photo-real 3D image rendering for use in retailer catalogues etc., as it's cheaper than getting a photo-shoot done) and apparently deadmau5 is really big there, but he was a bit suspicious as to why he'd want to play in front of 8,000 geeks, for Intel :)

Experience deadmau5 by watching my short clip on YouTube here.

Please Rate and Like this blog. Our readers want to know YOUR opinion so please leave a Comment.

Yesterday at AWS re:Invent, Andy Jassy delivered the main keynote. As you can see from the photo below, the event was immense – the day before I was at the APN Summit, which was AWS partners only, and that felt big.

But this was 9,000 attendees from 57 countries in a room. The photo doesn't really capture the epic scale – which struck me as kinda like a metaphor for AWS itself, i.e. the scale of the administrative operation was off the chart, it was all very efficiently managed, and it gets bigger every year!

I thought it was interesting that they didn't even "save up" the recent 10% price reduction for M3 EC2 instances (announced on 5th November) for re:Invent. To me, this just shows how baked into the business model these regular price reductions have become.

In content terms, the three main new announcements were:

  • Amazon CloudTrail – the ability to log all AWS API calls to S3 for audit and compliance purposes. This is a nice feature that we've asked for before, although its absence actually hasn't been too much of a barrier to customer adoption previously, probably because we are typically managing the entire AWS layer for a customer anyway.
  • Amazon WorkSpaces – virtual desktops-as-a-service. Interestingly, desktop "state" is maintained as you move between access devices, e.g. from laptop to tablet. We've deployed virtual desktops in AWS for a number of customer projects – either desktops for key users in a Disaster Recovery scenario, or for developers who are located around the world and need a consistent desktop with known applications installed in order to access AWS-hosted dev and test environments. So I can see us using this new feature in future projects, as I suspect the cost model will stack up well against the installation/build/ongoing patching effort of putting in a bunch of Windows Remote Desktop servers.
  • Amazon AppStream – HD quality video generation and streaming across multiple device types. This is related to another announcement that was made on 5th Nov – the new g2.2xlarge instance type, which has the GPU grunt to enable the creation of 3D applications that run in the cloud and deliver high performance 3D graphics to mobile devices, TVs etc.
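To make the CloudTrail announcement a bit more concrete, here's a sketch of pulling the interesting fields out of a CloudTrail-style log for an audit report. The field names follow CloudTrail's documented event format, but the sample record and helper function are my own illustration:

```python
import json

# Illustrative CloudTrail-style log: field names follow the documented
# event format; the values here are made up for the example.
sample_log = json.dumps({
    "Records": [
        {
            "eventTime": "2013-11-14T10:15:00Z",
            "eventSource": "ec2.amazonaws.com",
            "eventName": "RunInstances",
            "awsRegion": "eu-west-1",
            "sourceIPAddress": "198.51.100.17",
            "userIdentity": {"type": "IAMUser", "userName": "alice"},
        },
        {
            "eventTime": "2013-11-14T10:16:02Z",
            "eventSource": "s3.amazonaws.com",
            "eventName": "DeleteBucket",
            "awsRegion": "eu-west-1",
            "sourceIPAddress": "198.51.100.17",
            "userIdentity": {"type": "IAMUser", "userName": "bob"},
        },
    ]
})

def summarise_api_calls(log_text):
    """Return (user, service, action) tuples from a CloudTrail log file."""
    records = json.loads(log_text)["Records"]
    return [
        (
            r.get("userIdentity", {}).get("userName", "unknown"),
            r["eventSource"].split(".")[0],   # "ec2.amazonaws.com" -> "ec2"
            r["eventName"],
        )
        for r in records
    ]
```

In practice you'd pull the gzipped log objects from the S3 bucket CloudTrail writes to and feed each one through something like this.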

Weirdly, being at the event you get less time to look into these new product announcements, so you probably end up with less detail than if you were just reading about them on the web – after the keynote it was straight into a bunch of technical sessions.

I mainly focused on the data analytics sessions. First off, I got to hear about what NASA have been doing with data visualisation – I think all attendees expected to hear about exciting interstellar data visualisations, but it was actually about much more mundane visualisations of skills management, recruitment trends etc – and this in fact made it much more applicable to the audience's typical use cases as well. There were some great takeaways about how to maximise your chance of success which I need to write up at some point…

I then attended an excellent deep dive on Amazon Elastic MapReduce (EMR) – this covered Hadoop tuning and optimisation, architecture choices and how they impact costs, dynamically scaling clusters, when to use S3 and when to use HDFS for storage, which instance sizes to use and how to design the cluster size for a specific workload.

This was followed by some customer technical overviews of their use of Redshift. They had all migrated to Redshift from either a SQL or NoSQL architecture. For example, one customer had deployed two Redshift clusters in order to isolate read from write workloads, but I felt they had been forced to put considerable effort into building a proxy in front of Redshift to optimise performance – fundamentally because Redshift is limited to 15 concurrent queries and, for their reporting workload, they are not in control of the peaks in their users' demand for reports. So they've implemented their own query queuing and throttling mechanism, which sounds like a whole heap of technical and tricky non-differentiating work to me. A key takeaway from this session for me, though, was that the price-performance characteristic of Redshift had really worked for these customers, and given them the ability to scale at a cost that they just could not before. They were all achieving very high data ingress rates by batching up their data inserts and loading directly from S3.
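That query queuing and throttling idea is easy to picture: a counting semaphore that admits at most 15 queries at a time. This is my own minimal sketch of the concept, not that customer's actual proxy design – the `fake_execute` stand-in just records peak concurrency to show the gate works:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_QUERIES = 15   # Redshift's concurrency ceiling at the time

# Gate: the 16th caller blocks until one of the 15 slots frees up.
query_slots = threading.BoundedSemaphore(MAX_CONCURRENT_QUERIES)

def run_query(execute, sql):
    """Wait for a free slot, then run the query."""
    with query_slots:
        return execute(sql)

# Stand-in for a real Redshift call, instrumented to record peak concurrency.
in_flight = 0
peak = 0
lock = threading.Lock()

def fake_execute(sql):
    global in_flight, peak
    with lock:
        in_flight += 1
        peak = max(peak, in_flight)
    time.sleep(0.005)            # pretend the query takes a moment
    with lock:
        in_flight -= 1
    return sql

# Fire 40 "reports" at once; the semaphore keeps at most 15 in flight.
with ThreadPoolExecutor(max_workers=40) as pool:
    futures = [pool.submit(run_query, fake_execute, f"q{i}") for i in range(40)]
    done = [f.result() for f in futures]
```

A real proxy would also need fair queuing and timeouts, which is exactly the non-differentiating work complained about above.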

The final session I attended was about a Mechanical Turk use case from InfoScout. Mechanical Turk is an intriguing service as it's so different from the other AWS offerings – in fact it's not really a service at all, although it exposes a bunch of APIs – it's a marketplace. Classic Mechanical Turk use cases include translation, transcription, sentiment analysis, search engine algorithm validation etc, but InfoScout's need was for data cleaning and capture following an automated but fallible OCR process – capturing the data from pictures of shopping receipts taken on smartphones. The main takeaway for me was about how they manage quality control – i.e. how do you know, and therefore tune and optimise, the quality of the results you get from the workers executing your HITs? InfoScout use two quality control strategies:

  • Known answers – in a batch of receipt images that is handled by a Mechanical Turk worker, they inject a "known" receipt and compare the data captured with the known data on that receipt. This technique is good for clear yes/no quality checks, e.g. is this receipt from Walmart? This allows them to compute a metric for each worker as to how likely it is that their other receipts have been accurately processed.
  • Plurality – send an unprocessed receipt to more than one worker and see how consistent the returned results are. InfoScout build a confidence score based upon this and other factors such as worker tenure etc.
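Both strategies boil down to simple arithmetic. A hypothetical sketch of the two checks (my own function names and scoring; InfoScout's actual implementation will differ):

```python
from collections import Counter

def known_answer_accuracy(worker_responses, answer_key):
    """Score a worker against the 'known' receipts injected into their batch.
    Returns the fraction they got right, or None if none were injected."""
    checked = [rid for rid in worker_responses if rid in answer_key]
    if not checked:
        return None
    correct = sum(1 for rid in checked if worker_responses[rid] == answer_key[rid])
    return correct / len(checked)

def plurality_result(answers, min_agreement=0.5):
    """Send the same receipt to several workers and accept the majority
    answer only when enough of them agree on it."""
    (best, votes), = Counter(answers).most_common(1)
    confidence = votes / len(answers)
    return (best if confidence >= min_agreement else None), confidence
```

In a fuller system the plurality confidence would be blended with each worker's known-answer score and tenure, per the bullet above.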

The final event of the day was the re:invent pub crawl around 16 of the coolest bars in The Venetian and The Palazzo hotels. I’m guessing I don’t need to tell you much about that event, other than it started with sangria… :)

Tough, but someone has to do it…

Positively Noctilucent:  not all clouds were created equal

With just 3 weeks to go until its Enterprise Summit in London (17 Sept), Amazon Web Services (AWS) is on a bit of a roll. ( hashtag #AWSsummit )

Research reports by analysts at Forrester and at Gartner have again shown AWS's leadership position in a significant way. For those still looking for evidence that Cloud Computing is a robust alternative to on-premise compute and storage, these reports are as close as you are likely to get to an objective viewpoint.

In June, Forrester Research published its 'Wave' report on Enterprise Public Cloud platforms, Q2 (get the reprint here or via short URL). In it, the authors Rymer and Staten took all the mainstream IaaS / PaaS / SaaS vendors (with some absences, e.g. Savvis) and analysed them from a number of viewpoints: rapid developers, coders, DevOps pros, and application development & delivery (AD&D).

Incidentally, they also registered a blurring of the lines between the widely accepted stratified offerings: software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS). But I digress. You should read the report.

"Amazon Web Services (AWS) came out the strongest of all vendors across three of our four developer segments."

In August, Forrester’s arch rivals Gartner published its Magic Quadrant report on Cloud IaaS  ( get the reprint here or via short URL ). Analyst team Leong, Toombs, Gill, Petri and Haynes put AWS way out in front.  Other mainstream vendors (including Savvis) looked like a tightly bunched peloton on their now-famous 2 x 2 matrix.

Research clients of Forrester and Gartner place a lot of currency in these types of reports; substantial IT deals are decided on the strength of vendors’ positioning and analyst opinions.  Of course, analysts don’t always get it right, but when it comes to Cloud they have for the most part been prescient.

Regular readers of the Smart421 blog will know only too well that we have been uncharacteristically vocal in our support for the AWS Cloud technology stack. This is only because a number of UK enterprises have already engaged Smart421 to architect, deliver and support solutions on the AWS Cloud.

If you've registered to attend on 17 September, you're in for a treat with a great agenda. If you'd like to have a more pointed conversation with our Cloud Architects or Senior Management team, please feel free to visit our special microsite to book an on-site meeting.



Jeff Bezos
Photo by John Keatley, Seattle's leading photographer

Every time I hear this story, it makes me smile. From Kin Lane over at API Evangelist:

[…] one day Jeff Bezos issued a mandate, sometime back around 2002 (give or take a year):

  • All teams will henceforth expose their data and functionality through service interfaces.
  • Teams must communicate with each other through these interfaces.
  • There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
  • It doesn’t matter what technology they use.
  • All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

The mandate closed with:

Anyone who doesn’t do this will be fired. Thank you; have a nice day!

Assuming for the moment that this is true, the thing that makes me smile here isn’t the closing rhetoric. What Jeff described here is pretty well everything you need to know about successful SOA.

Look at the wording again. “All teams”. He didn’t say “all systems” or “all services”. Technology isn’t [the most] important. People are.

By focussing on teams rather than technology, Jeff ensured that Amazon's embryonic SOA was business-aligned. One simple decision was all it took. Well, that and ten years of concerted effort by one of the brightest engineering teams on the planet.

It was great to see National Rail Enquiries (NRE) win an award at the European Outsourcing Association Awards in Amsterdam last Friday (26 April).

In recognition of their SIAM (Service Integration and Management) outsourcing strategy, NRE won the award for Best Multi-sourcing Project of the Year, beating strong category finalists 60k and Centrica (Centrica won this category in 2012).

Smart421 is pleased to be a large part of that initiative, performing the Managed Services element on top of an AWS Cloud platform for several key NRE applications.

As customers struggle with the chains of traditional SI relationships, Smart421 is providing agile delivery and innovation methods in the IaaS world.

Many analysts see this as “third generation outsourcing” and a change for good – and so do I.



DATABASE at Postmasters, March 2009 – Michael Mandiberg

"NoSQL" is an unfortunate term for the current hype around non-relational database systems. Many of the ideas behind the new wave of databases presented at the NoSQL Roadshow in London are not new. More than one presenter used the term preSQL, and during a break Brian Bulkowski from Aerospike explained that Oracle had created these ideas many times over, but had not considered them commercially worthwhile. What has changed in recent years, though, is the business need. There are new problems to solve, such as online and mobile gaming and advert-serving, that require users to trade off consistency, availability and partition tolerance.

For example, Amazon require a horizontally scalable system (partition tolerance) so that whenever a customer adds an item to their shopping cart – and it can be at any time of day – it is captured in the database (availability). They are less concerned about the time it takes for this to filter through their other data stores (eventual consistency), or how long it takes to fulfil the order. In the mid-2000s Amazon realised that the relational databases of the day were not meeting their needs, and they created Dynamo, a highly available and scalable technology for key-value storage.
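The cart example can be modelled in a few lines. This toy is entirely my own (nothing like Dynamo's real design, which uses vector clocks and quorum reads/writes) and just shows the availability/consistency trade-off: a write is accepted immediately by one replica, and the others only see it after a sync pass:

```python
import copy

class EventuallyConsistentStore:
    """Toy model of Dynamo-style availability: writes land on one replica
    immediately; other replicas catch up when an anti-entropy sync runs."""

    def __init__(self, replicas=3):
        self.replicas = [{} for _ in range(replicas)]

    def put(self, key, value, replica=0):
        self.replicas[replica][key] = value      # write accepted right away

    def get(self, key, replica=0):
        return self.replicas[replica].get(key)   # may return stale data

    def sync(self):
        """Anti-entropy pass: merge every replica into every other.
        (Naive last-update-wins merge; real systems resolve conflicts
        with versioning such as vector clocks.)"""
        merged = {}
        for r in self.replicas:
            merged.update(r)
        for i in range(len(self.replicas)):
            self.replicas[i] = copy.deepcopy(merged)
```

The cart write always succeeds (availability), while a read from another replica is only guaranteed correct after sync (eventual consistency).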

Papers by Amazon on Dynamo, and Google on their technology BigTable, were a major contributor to the current early adopter market. Today there are many competitor NoSQL products, including the aggregate-oriented databases (key-value stores, document stores and column stores), and graph databases.

Key-value stores are the simplest store type, with keys mapping to binary objects. They allow low-latency writes and scale easily across multiple servers, but can only offer single key/value access. David Dawson and Marcus Kern of MIG gave an example of using the key-value store Riak as a persistence store for a bespoke queuing system in their SMS gateway product. The biggest difficulty was finding a way to simplify the retrieval of messages in the event of a node failure; their solution involved the use of predictable keys.
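The talk didn't go into the detail of MIG's scheme, but the idea of predictable keys can be sketched like this (a plain dict stands in for Riak, and all names here are hypothetical): number each message key as "<queue>:<n>", so after a failure a node can rebuild its queue state just by probing the numbered keys rather than scanning opaque ones:

```python
class PredictableKeyQueue:
    """Queue on a key/value store using predictable keys "<name>:<n>",
    so a recovering node can re-find messages by index, not by scanning.
    The `store` is any dict-like key/value store (a dict stands in for Riak)."""

    def __init__(self, store, name):
        self.store, self.name = store, name
        self.head, self.tail = 0, 0

    def enqueue(self, message):
        self.store[f"{self.name}:{self.tail}"] = message
        self.tail += 1

    def dequeue(self):
        if self.head == self.tail:
            return None
        message = self.store.pop(f"{self.name}:{self.head}")
        self.head += 1
        return message

    @classmethod
    def recover(cls, store, name):
        """After a node failure, rebuild head/tail by probing the
        predictable keys still present in the store."""
        q = cls(store, name)
        indices = sorted(
            int(k.split(":")[1]) for k in store if k.startswith(name + ":")
        )
        if indices:
            q.head, q.tail = indices[0], indices[-1] + 1
        return q
```

The pay-off is in `recover`: because the keys are predictable, surviving messages can be found deterministically, which is (in spirit) the simplification the MIG talk described.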

Column stores manage structured data, stored in columns, with multiple-attribute access. Apache Cassandra, originally developed by Facebook, is a well known example. These are also optimised for writes and used for high throughput uses such as activity feeds or message queues.

In document stores, keys are unique references to "documents" which encapsulate and encode data in a standard format. These database systems hold hierarchical data structures that reduce the need for table joins and allow for variety and evolving schemas. Akmal B. Chaudhri of IBM presented his investigations into the popularity of the various NoSQL offerings, showing that MongoDB, a document store for JSON documents, is leading the way. Some of our Smart consultants are using MongoDB from an application development perspective and are very positive about their experience so far. It will be interesting to follow these projects and understand the effects on administration, support and future change. What impact will the schemaless nature of the database have?

Graph databases use the concept of nodes and edges to store information about entities and the relationships between them. Jim Webber of Neo Technology gave an example of the use of the open source Neo4j graph database in modelling the relationships in Doctor Who.

Each type of database system has its own strengths and weaknesses, and the reality is that NoSQL databases will only be used as part of a solution. Choosing different data storage technologies for different persistence needs has been termed polyglot persistence. Every example provided at the roadshow included relational database systems alongside NoSQL technology. Wes Biggs of Adfonics, an independent advertising marketplace, explained the architecture of their solution for buying advertising on mobile devices on behalf of advertising agencies. They use relational MySQL with data on hard drives for long-running information such as campaign details, MySQL Cluster with data on flash drives for aggregate instructions such as user details, and Aerospike with data in RAM for raw instructions such as the in-flight data.

The overall message from the NoSQL Roadshow was that this is still very early days for NoSQL database systems, and it is not yet clear whether the future popularity will be closer to that of OO or relational databases.

Many presenters offered words of caution. Security is a big issue, and many of the major NoSQL database systems must be run in a trusted environment. Wes Biggs discussed the huge number of vendors and the lack of evidence for their claims. At this moment any system choice is basically faith-based, and should only be made if someone else has used the product for a similar use case; Adfonics were burned by a few early choices for a key/value store before they settled on Aerospike.

The Fusion-io presentation on their new directFS filesystem for flash drives was just one example of the obsession with performance, and the importance of the hardware architecture.

At the moment there are some exciting potentials for NoSQL technology, but anyone getting involved at this stage will be making some brave choices. It will be fascinating to see how the market shakes out over the next few months and years and I’ll be following with interest.

Update: 18 Dec 2012 – interested to learn more about MongoDB ? Click here.


The subcategory called Big Data is emerging out of the shadows and into the mainstream.

From left: Matt Wood, Chief Data Scientist at Amazon Web Services (AWS), with Robin Meehan, CTO at Smart421
Photo by Jim Templeton-Cross

What it is.

Definitions abound (who would have thought it? – quite usual in the technology market). For Big Data, we quite like the definition that originated with Doug Laney (@doug_laney), formerly META Group, now a Gartner analyst. It goes something like this:

 ” … increasing volume (amount of data), velocity (speed of data in and out), and variety (range of data types and sources)”

Gartner continue to use this “3Vs” model for describing Big Data.

Unsurprisingly, others are claiming Gartner’s construct for Big Data (see Doug’s blog post, 14 Jan 2012).

Still confused?

Put another way, Big Data is commonly understood to be:

"… a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools. The challenges include capture, curation, storage, search, sharing, analysis and visualization. The trend to larger data sets is due to the additional information derivable from analysis of a single large set of related data, as compared to separate smaller sets with the same total amount of data, allowing correlations to be found to "spot business trends, determine quality of research, prevent diseases, link legal citations, combat crime, and determine real-time roadway traffic conditions." Read more on Wikipedia.

Big Data could be executed on-premise if you have sufficient compute and storage in your corporate data centre. And some do, especially some large banks, and with good success. Several solutions are already out there on the market; Oracle's Big Data Appliance is just one example. But it does also beg the question: "why would you?"

If you don't want the CapEx of purchasing more tin, or don't want to gobble up capacity in your own data centre, then there are alternatives. For example, a cost model now exists with cloud-based compute and cloud-based storage (think of Amazon's announcement of 25 percent reductions in the price of Amazon S3, its storage solution) that puts Big Data in the Cloud well within the reach of all UK enterprises. A cost model like that is likely to win friends in procurement and in corporate governance as well as in IT.

Hinging on technologies including Apache Hadoop clusters, Amazon Elastic Map Reduce (Amazon EMR) and others, Big Data is delivering a degree of analytics and visualisation not previously possible at affordable levels.

Don't just take our word for it, ask around. We could point you to other experts in Big Data, such as Matt Wood (@mza), Chief Data Scientist at AWS.

What it isn’t.

Big Data isn't business intelligence (BI). What I mean is that Big Data isn't BI in any traditional sense of the term. It is altogether another level on from that. Granted, some of the tooling enterprises already own may be recycled for use in Big Data analytics. But it isn't another species, it's another race.

Big Data isn’t a lame attempt at reviving a management information system (MIS); those should be left to rest in peace.

What it means for you.

By now, if you’ve read this far, something should be niggling away at you that you could be missing a trick. I trust it won’t be those voices in your head again. But it might be your instincts telling you how Big Data could answer those tough business questions – y’know, those “I can’t be asked” questions that existing systems just cannot deliver.

Now, you would not necessarily get our CTO to come right out and say that Big Data is the next big thing. But evidence we are assembling so far does seem to point to a new capability to deliver. For those with an appetite to understand their business in new ways, Big Data is delivering tangible intelligence that lets them see new dimensions, new possibilities and new revenue streams.

I did get a full radar lock on something our CTO said in the summer. It was a throw away line at the time but it stuck with me and with others. So, when the time came to consider an appropriate go-to-market message for our quarter three (Q3) focus, we decided to wheel out his one-liner as part of our messaging.

“It’s not about survival of the fittest -
it’s about survival of the best informed”
Robin Meehan, CTO, Smart421 Ltd.

Making no apologies to Charles Darwin or evolutionists, the statement is resonating with decision makers in the enterprise space, not least those in the Insurance sector. Why?  Well, we think it is because a lot of the big insurers operate under many names in their brand portfolios.

The capability to see and understand impacts of brand activities, such as Insurance Quotes, delivered using Big Data analytics in the AWS Cloud, is illuminating new gains that would otherwise have remained out of reach.

Don’t forget – brand analysis is only one use case for Big Data in the Cloud.

If the world is going Big Data crazy then you need to know what it is, what it isn’t and what it means to your enterprise.

Agree?  Disagree?

UPDATE 05 Dec 2012 – Our economist friend Tim Harford (@TimHarford) sent this hilarious tweet: "The data do not lie. OR DO THEY? Muah huah huah!"

UPDATE 06 Dec 2012 – Robin and colleague Ben Baumguertel (@bbaumguertel) are attending the Big Data Analytics event in London today (organised by @WhitehallMedia ).


A colleague pointed me at an article in Computing the other day that starts off with "Retail giant Marks & Spencer is ditching Amazon as its online platform host". As we are a leading AWS Solution Provider, interesting, I thought, so I looked into it. The article itself is not misleading, but as usual the comments and interpretation it has generated have confused Amazon the retail store with Amazon Web Services (AWS) the IaaS/PaaS provider. For example, someone made the comment "One is left wondering how many of AWS's customers will move to rackspace or other companies of that ilk" – which is nonsense IMHO, and I'll explain why.

M&S's eCommerce offering is built using a white-labelled version of Amazon's retail store – see this article in ComputerWeekly from 2007 when it was announced. I'm sure that some or all of M&S's eCommerce site runs on AWS infrastructure, but that's not really the point. I totally understand a retail organisation's reluctance to use a retail competitor's platform, due to the reduced control over new functionality releases and the level of business data insight that this could give to a competitor. But those arguments do not apply to running a retail operation (with your own eCommerce software/platform) on AWS – if you follow good cloud architectural practices, e.g. encrypt data in transit and at rest, keep your encryption keys private and away from the cloud service provider etc.

In discussions with customers and vendors I see this confusion of Amazon and AWS all the time (sometimes deliberately, to spread fear, and sometimes accidentally, through lack of understanding). To be balanced here, the AWS offering has historically leaned on the Amazon brand and scale as a selling point, so I guess AWS need to keep making this distinction clear (which I've seen them do in every presentation). Anyway, either way it's a red herring… and getting a bit tedious.

It is another exciting day for Smart421, with the news that we are the first UK enterprise to be confirmed on the Amazon Partner Network (APN) as an Advanced Consulting Partner by Amazon Web Services (AWS).

AWS Logo Advanced Consulting Partner Dark

What does it really mean for Smart421 and our Customers?

In some ways, partner 'labels' are often seen as just that, labels, and can be given out like confetti. However, for Advanced Consulting Partner status, we had to put up some substantive evidence of various AWS capabilities, including Customer references, a minimum of $10,000 a month in AWS billings, a minimum of Business level AWS support (previously called Gold) and at least 10 trained AWS staff.

So before the cynics have a pop either at the program or at us, or both, I can reveal it does require proven AWS capabilities. That will sort the ‘wheat from the chaff’ so to speak :-)

Does it change anything for us? – well yes and no…..

In terms of Operations, we already have a great relationship with AWS (since 2010 as an AWS Solution Provider and since 2012 also as an AWS Direct Connect Solution Provider) with access to a range of valuable contact points, from technical contacts through to Sales and up into Senior Management in the UK and USA. From my perspective as AWS Practice Manager, the existing relationship means regular face-to-face monthly meetings with our technical contacts in AWS, access to the product teams including ‘gurus’ based in Seattle, involvement in beta trials for new features etc. so the APN will just help reinforce those good relationships.

Perhaps the biggest change will be the impact of membership of the APN for our Customers. We are now able to leverage a wealth of AWS resources on our Customers' behalf. This translates into concrete deliverables, from something as straightforward as documentation right through to support from AWS technical architects during delivery engagements. It adds up to an improved level of confidence for our Customers that our proven AWS capabilities are fully backed by AWS and its rapidly growing global eco-system.

It means far more than a partner label to us and our Customers….

