The 2011 AWS Summit on 14 June drew more than 700 people to hear Werner Vogels, Amazon CTO
Although Steve has beaten me to it :), here’s my post about this event. On Tuesday 14 June, Smart421 had a stand at this event at the QEII Centre in London, and several Smarties attended the various presentation streams. The first thing to say is that it was BUSY – nearly 700 attendees – much bigger than the previous UK-based AWS events we’ve attended. There’s clear momentum here – and interestingly the audience could be split into two camps: the techy guys, often working for quite small companies and start-ups, and the suits like myself – enterprise-level attendees, CTOs etc. On the start-up side, I bumped into Neil Chapman, a friend I worked with at BSkyB, who is launching an online audiobook service called Bardowl which might be one to watch – AWS removes the economic barriers for startups like this, which is great. The latter category included some very well-known organisations and some of our existing customers, and this reconfirmed to me that AWS now has real credibility at the FTSE250 level.
I stuck to the ‘advanced sessions’ track, and there was some griping on Twitter (#awssummit) that this track wasn’t really advanced enough. I agree that if you’ve read the various white papers from AWS then you’ll know all this material, but as always you pick up a few interesting little things. For example, I’m not sure I could previously have put a figure on the annual failure rate of disks (mentioned in the fault-tolerant design session), but now I can – approximately 3-5% for ephemeral storage and in the range 0.1-0.5% for EBS volumes. In other words, EBS is an order of magnitude better than traditional disk, but it’s still not infallible of course, so you’ve got to architect for this.
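To make those failure rates a bit more concrete, here’s a back-of-envelope sketch – the fleet size is a made-up illustration, and I’ve simply taken rough midpoints of the quoted ranges:

```python
# Rough expected-failure arithmetic using the annual failure rates (AFR)
# quoted in the session: ~4% for ephemeral disks and ~0.3% for EBS
# (midpoints of the 3-5% and 0.1-0.5% ranges -- my own choice).

def expected_annual_failures(volume_count, afr):
    """Expected number of volume failures per year for a given AFR."""
    return volume_count * afr

# A hypothetical fleet of 100 volumes:
ephemeral = expected_annual_failures(100, 0.04)   # roughly 4 failures/year
ebs = expected_annual_failures(100, 0.003)        # roughly 0.3 failures/year

print(ephemeral, ebs)
```

So even on EBS, a reasonably sized estate should expect the odd volume loss – hence the need to architect for it rather than hope it away.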
In the session covering options for high-availability databases, I learned that the failover time for multi-AZ RDS is about 2 minutes. Ray Bradford highlighted an interesting design tradeoff that AWS have made here – you want to be very sure that the primary RDS instance has actually failed before failing over to the secondary in the other AZ, hence the delay – it’s not all down to the failover mechanism itself; a significant part is the pre-failover monitoring period. One point that was not stressed was that Oracle RDS does not currently support multi-AZ failover – I’m sure this is on the unpublished roadmap, but it’s a feature that really needs to be there for Oracle RDS to be credible in the enterprise space. And whilst I’m asking for new features :), the other missing feature that is a real pain is that VPC is limited to a single AZ at present.
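That “be very sure before failing over” tradeoff is easy to sketch: rather than failing over on the first missed health check, you demand several consecutive misses. This is purely illustrative – the threshold and check semantics are my own, not how RDS is actually implemented:

```python
# Illustrative sketch of the pre-failover monitoring tradeoff: require
# several consecutive failed health checks (False) before declaring the
# primary dead.  A higher threshold means fewer spurious failovers, but
# a longer delay before a genuine failure is acted on.

def should_fail_over(health_checks, threshold=3):
    """Return True once `threshold` consecutive failed checks are seen."""
    consecutive = 0
    for ok in health_checks:
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= threshold:
            return True
    return False

print(should_fail_over([True, False, True, False, False]))  # flapping: no failover
print(should_fail_over([False, False, False]))              # sustained failure: fail over
```

The two-minute figure then makes sense: most of it is buying confidence that the failure is real.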
Smarties on the stand – talking and looking clever
Carlos Conde discussed economic optimisation approaches, and one thing I picked up that I hadn’t realised was that once you’ve been allocated a spot instance, you’ve got it for at least an hour. Thinking about it this is pretty obvious, as AWS charging is per hour – but as we don’t tend to use spot instances I hadn’t really considered it. [RM Correction – this is nonsense – from the AWS site: "if the Spot Price goes above your maximum price and your instance is terminated by Amazon EC2, you will not be charged for any partial hour of usage"]. As we tend to focus on the enterprise market, we’ve stuck to on-demand and reserved instances as they map best to the reasonably permanent workloads that enterprise customers tend to have. But spot clearly has its place even in an enterprise context (for the right use case) and can reduce AWS usage costs quite significantly. Another nice rule of thumb from Carlos was that if you are transferring more than 1TB into AWS, then shipping some physical media and uploading it via the AWS Import/Export facility is the way to go from a financial perspective – 1TB is the break-even point where the bandwidth costs of just uploading to S3 match the import cost. Obviously the timeliness characteristics of these data transfer techniques are different though.
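It’s worth remembering that time, not just cost, pushes you towards physical import at that scale. A quick back-of-envelope – the break-even figure is Carlos’s, but the link speed below is my own illustrative assumption:

```python
# How long does it take just to push data into S3 over a given uplink?
# (Decimal TB; sustained throughput assumed -- both are simplifications.)

def upload_days(terabytes, mbit_per_sec):
    """Days needed to transfer `terabytes` at a sustained `mbit_per_sec`."""
    bits = terabytes * 1e12 * 8           # decimal terabytes -> bits
    seconds = bits / (mbit_per_sec * 1e6)
    return seconds / 86400

# 1 TB over a sustained 100 Mbit/s link:
print(round(upload_days(1, 100), 1))   # just under a day
# The same 1 TB over a more typical 2011-era 10 Mbit/s office uplink:
print(round(upload_days(1, 10), 1))    # over a week
```

So at multi-terabyte volumes on ordinary connectivity, the courier starts to beat the network on elapsed time as well as cost.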
Finally, presentation of the day has to go to JJ from Amazon.com, who walked through several fascinating scenarios where the Amazon.com retail business has exploited AWS to solve particular challenges in scaling the online bookstore. I caught up with JJ afterwards and quizzed him on “do they get a special deal from AWS?” and the answer is absolutely not – so the interesting thing is that Amazon.com use a mixture of reserved, on-demand and spot instances just like the rest of us. They also use different AWS accounts to get round the fact that VPC is limited to a single AZ, as I mentioned earlier – so they have VPCs running in different AZs, but they have to use a different account for each one. Using consolidated billing means this isn’t such a pain I guess, but it is just more accounts to manage in an admin sense.
Overall, a very useful and fun day, finishing with some beers – smoothly organised by Iain Gavin/Andy Gough and co – nice one!