At today’s AWS User Group meeting, I was reminded of CohesiveFT’s VSN3 offering – this provides an overlay network capability on top of various cloud providers. The bulk of their customers are using VSN3 on AWS, but it’s available for use on other cloud providers also, and even across multiple cloud providers in a single deployment.

The historical roots of VSN3 are in pre-AWS VPC days, and at that point in history (Dec 2009 or so) it must have been a very attractive offering. In fact, providers like CohesiveFT were delivering Software Defined Networking (SDN) solutions long before the term was widely used. Now that AWS’s VPC offering has matured, the differentiators offered by this kind of solution are looking increasingly thin, but there are still some advantages of this kind of product that I can see some customers paying good money for. They are:

  • Network level encryption, i.e. automatically encrypting all traffic between all AWS instances – something that VPC doesn’t offer today; it assumes the network inside a VPC is trusted. Whilst I can’t see any compelling technical reason why I’d need this, I can see that a customer’s CISO might insist upon it, so it’s nice to know how I’d do it.
  • If you need UDP multi-cast support
  • If you want to treat different AWS regions (and maybe other cloud providers) as a single network
  • If you want a higher level of IPSec encryption than AES 128-bit
  • If you want to avoid locking yourself in to a specific cloud service provider’s approach to network management

As AWS’s pace of innovation is so high, any innovators like CohesiveFT that are initially part of the product supplier ecosystem eventually have their differentiators subsumed into the AWS core offering – and worse than that, those features are then typically offered for free (as with AWS VPC) – destroying their market in a stroke. It’s pretty brutal!

We’ve wanted to be able to use ELBs inside VPC deployments for a long time – via our AWS partnership we put this on our “AWS Christmas wish list” quite a while ago, and Santa has delivered early :)

All the extra AWS goodies are gradually making their way into VPC, and the direction of travel is great. More importantly for giving our customers confidence, it has been consistent over time – which confirms our view (as mentioned before) that VPC will become the default model for most, if not all, deployments.

Those boys in Seattle have been busy – we’ve known about these new releases for some time via our AWS partnership but have had to keep schtum, so it’s nice that they are now in the public domain and we can talk about them a bit more. Each of the items in the AWS news release would be good enough for a news release of its own for most organisations…

For us, the most immediately exciting part is that their virtual private cloud (VPC) offering is now available in multiple AZs in a region, a single VPC can span multiple AZs, and a single AWS account can have more than one VPC. In our view, these remove the last major barriers to serious production use by enterprises – as we typically want to engineer a virtual private cloud that spans AZs in the same region for resilience reasons.

The other really exciting thing about the VPC news is that Smart421 are mentioned in the press release :). We use VPC for pretty much everything we do for customers now – there’s no good reason why not, the financial model is not really affected and you get much more control. One of those customers is Haven Power, where VPC is one of the building blocks of the solution to deliver a really innovative business continuity planning (BCP) solution.

The Identity and Access Management functional enhancements are very welcome also – this is an area that has matured quite fast but from a relatively low base. Managing multiple customer and internal deployments and the accounts and permissions that go with them is challenging, and integration with Active Directory etc looks like it will be very useful.

Finally, Direct Connect is something we’ve been discussing with AWS in the UK for a while – again, coming back to my theme about making IaaS a credible solution for enterprise customers (the market Smart421 principally serves) – enterprise customers expect more private, more controlled networks between their data centres, and the use of VPNs usually concerns the internal security guys, so this is a necessary piece of the enterprise deployment architecture jigsaw. It’s only available in US East at the moment but there’s a commitment to a London deployment, so this is something I’d expect our customers to want to exploit – especially at the price point, which seems a bargain to me, e.g. a 1Gbps port used full time for a month is about $216. Peanuts – so it’d be rude not to use it.
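As a sanity check on that $216 figure, the arithmetic is trivial (a sketch: the $0.30 hourly port rate below is inferred from the quoted monthly figure, not taken from an AWS price list):

```python
# Rough Direct Connect port cost check. The hourly rate is an assumption
# back-calculated from the $216/month figure, not current AWS pricing.
HOURLY_PORT_RATE_USD = 0.30  # assumed 1 Gbps port-hour rate
HOURS_PER_MONTH = 720        # 30-day month, port allocated full time

def monthly_port_cost(hourly_rate=HOURLY_PORT_RATE_USD, hours=HOURS_PER_MONTH):
    """Cost of keeping a Direct Connect port allocated for a whole month."""
    return hourly_rate * hours

print(monthly_port_cost())  # 216.0
```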

Read the AWS blog entry for more details, and the related Smart421 news item is here.

Main entrance at Queen Elizabeth II Conference Centre, London

The 2011 AWS Summit on 14 June drew more than 700 people to hear Werner Vogels, Amazon CTO

Although Steve has beaten me to it :), here’s my post about this event. On Tuesday 14 June, Smart421 had a stand at this event at the QEII Centre in London, and several Smarties attended the various presentation streams. The first thing to say is that it was BUSY – nearly 700 attendees – much bigger than the previous UK-based AWS events we’ve attended. There’s clear momentum here – and interestingly the audience could be split into two camps: the techy guys, often working for quite small companies and start-ups, and the suits like myself – enterprise-level attendees, CTOs etc. On the start-up side, I bumped into Neil Chapman, a friend I worked with at BSkyB who is launching an online audiobook service called Bardowl, which might be one to watch – AWS removes the economic barriers for startups like this, which is great. The latter category included some very well known organisations and some of our existing customers, and this reconfirmed to me that AWS now has real credibility in the FTSE250 layer.

I stuck to the ‘advanced sessions’ track, and there was some griping on Twitter (#awssummit) that this track wasn’t really advanced enough. I agree that if you’ve read the various white papers from AWS then you’ll know all this material, but as always you pick up a few interesting little things. For example, I’m not sure I could have put a figure on the annual failure rate of disks (mentioned in the fault tolerant design session) but now I can – approximately 3-5% for ephemeral storage and in the range 0.1-0.5% for EBS volumes, i.e. EBS is an order of magnitude better than traditional disk, but it’s still not infallible of course, so you’ve got to architect for this.
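Those annual failure rates (AFRs) bite once you multiply them across a fleet; a quick back-of-the-envelope sketch (the two rates used are mid-points of the ranges quoted above, and the fleet size of 100 is just an example):

```python
# Probability of at least one volume failing within a year, given an annual
# failure rate (AFR) and n independent volumes: 1 - (1 - afr)^n.
def p_any_failure(afr: float, n: int) -> float:
    return 1 - (1 - afr) ** n

# Mid-points of the quoted ranges: ~4% for ephemeral disks, ~0.3% for EBS.
for name, afr in [("ephemeral", 0.04), ("EBS", 0.003)]:
    print(f"{name}: P(>=1 failure/yr across 100 volumes) = {p_any_failure(afr, 100):.2f}")
```

Even at EBS rates, with enough volumes a failure somewhere becomes a matter of when, not if – which is the whole point of architecting for failure.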

In the session covering options for high availability databases, I learned that the failover time for multi-AZ RDS is about 2 minutes. Ray Bradford highlighted an interesting design tradeoff that AWS have made here – you want to be very sure of a failure of the primary RDS instance before failing over to the secondary in the other AZ, hence the delay – it’s not all due to the failover time itself, it’s also significantly contributed to by the pre-failover monitoring period. One point that was not stressed was that Oracle RDS does not currently support multi-AZ failover – I’m sure this is on the unpublished roadmap, but it’s a feature that really needs to be there for Oracle RDS to be credible in the enterprise space. And whilst I’m asking for new features :), the other missing feature that is a real pain is that VPC is limited to a single AZ at present too.
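The tradeoff Ray described – waiting for several consecutive failed health checks before declaring the primary dead – can be sketched like this (a simplified illustration only, not how RDS actually implements it; the threshold value is a made-up number):

```python
# Simplified failover decision: only fail over after `threshold` consecutive
# failed health checks, trading detection speed for confidence that the
# primary really is dead rather than momentarily unreachable.
def should_fail_over(check_results, threshold=6):
    """check_results: iterable of booleans, True = healthy.
    Returns True once `threshold` consecutive failures are observed."""
    consecutive = 0
    for healthy in check_results:
        consecutive = 0 if healthy else consecutive + 1
        if consecutive >= threshold:
            return True
    return False

# A transient blip does not trigger failover; a sustained outage does.
print(should_fail_over([True, False, False, True, True]))  # False
print(should_fail_over([False] * 6))                       # True
```

A higher threshold means fewer spurious failovers but a longer outage before the secondary takes over – exactly the tension behind the ~2 minute figure.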

Stand

Smarties on the stand – talking and looking clever

Carlos Conde discussed economic optimisation approaches, and one thing I picked up that I hadn’t realised was that once you’ve been allocated a spot instance, you’ve got it for at least an hour. Thinking about it, this is pretty obvious, as AWS charging is per hour – but as we don’t tend to use spot instances I hadn’t really considered it. [RM Correction – this is nonsense – from the AWS site: “if the Spot Price goes above your maximum price and your instance is terminated by Amazon EC2, you will not be charged for any partial hour of usage”.] As we tend to focus on the enterprise market, we’ve stuck to on-demand and reserved instances as they map best to the reasonably permanent workloads that enterprise customers tend to have. But spot clearly has its place even in an enterprise context (for the right use case) and can reduce AWS usage costs quite significantly. Another nice rule of thumb from Carlos was that if you are transferring more than 1TB into AWS, then shipping some physical media and uploading it via the AWS Import facility is the way to go from a financial perspective – 1TB is the break-even point where the bandwidth costs of just uploading to S3 match the import cost. Obviously the timeliness characteristics of these data transfer techniques are different though.
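Carlos’s 1TB rule of thumb is just a break-even calculation, sketched below (the per-GB transfer price and flat import fee are illustrative placeholders chosen so the break-even lands at 1TB, not quoted AWS prices):

```python
# Break-even between uploading data over the network and shipping physical
# media via AWS Import. Both prices are illustrative assumptions.
TRANSFER_IN_USD_PER_GB = 0.08  # assumed network data-in price per GB
IMPORT_FLAT_FEE_USD = 80.0     # assumed per-device handling fee

def cheaper_to_ship(gb: float) -> bool:
    """True when network transfer would cost more than shipping a device."""
    return gb * TRANSFER_IN_USD_PER_GB > IMPORT_FLAT_FEE_USD

print(cheaper_to_ship(500))   # False: just upload it
print(cheaper_to_ship(1500))  # True: ship the disk
```

With these placeholder numbers the break-even is exactly 1000 GB; the real crossover moves with whatever the current prices are, as does the time-to-availability tradeoff Carlos noted.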

Finally, presentation of the day has to go to JJ from Amazon.com, who walked through several fascinating scenarios where the Amazon.com retail business has exploited AWS to solve particular challenges with scaling the online bookstore. I caught up with JJ afterwards and quizzed him about whether they get a special deal from AWS, and the answer is absolutely not – so the interesting thing is that Amazon.com use a mixture of reserved, on-demand and spot instances just like the rest of us. They also use different AWS accounts to get round the fact that VPC is limited to a single AZ, as I mentioned earlier – so they have VPCs running in different AZs, but have to use a different account for each one. Using consolidated billing means this isn’t such a pain I guess, but it is just more accounts to manage in an admin sense.

Overall, a very useful and fun day, finishing with some beers – smoothly organised by Iain Gavin/Andy Gough and co – nice one!

Thursday was a good day! I was heading off to the AWS Tech Summit for Developers & Architects in London with a few of my colleagues from Smart421, which I was looking forward to, especially as I have a keen interest in Cloud Computing. Given Smart421’s gaining AWS Solution Provider status in 2010, attending was a real win-win for both myself and for Smart – and to top it all off, there was the promise of free Guinness at the end to celebrate St Patrick’s Day!

Iain Gavin, AWS

Iain Gavin, UK Country Manager at AWS. Projected on screen from left: Richard Holland – Operations and Delivery Director, Eagle Genomics; Richard Churchill – Technical Director, Servicetick; Francis Barton – IT Development Manager, Costcutter.

Doors opened at the conference at 12pm, and it was clear to see how popular the event was as the entrance hall was packed. Iain Gavin (AWS UK Country Manager) confirmed that there were over 380 attendees at the conference, which was much higher than expected (usually events see a 50% drop-out) and which I think demonstrates the industry’s growing interest in and adoption of Cloud Computing.

Whilst we were all trying to find seats in the hall, I couldn’t help but notice a couple of key points rolling across the screen: that “262 billion objects are hosted in S3” and that “S3 handles 200,000 transactions per second” – both of which took me by surprise a little, if truth be told, as whilst a lot of people are “nearing the water’s edge” I hadn’t appreciated that there were that many “dipping their toes in the water”.

Anyhow, first up of the speakers was Matt Wood, one of AWS’s evangelists (the UK version of Jeff Barr), who covered off a number of the recent changes and releases to AWS, such as VPC supporting NAT addressing, the launch of CloudFormation (which allows users to spin up an entire stack – something I can see being very useful for launching environments), and S3 websites for hosting static web sites with the great SLAs and resilience of S3 – though you may need to use CloudBerry Explorer or Cyberduck to help with this, as not all of these features are available via the AWS Management Console. Despite being told that they weren’t going to repeat what we had heard before, unfortunately that was what we got!
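To give a feel for what “spinning up an entire stack” means in practice, here is a deliberately tiny CloudFormation template built as a Python dict (a hypothetical sketch: the AMI ID is a placeholder and a real stack would declare far more resources):

```python
import json

# Minimal CloudFormation template declaring a single EC2 instance.
# "ami-00000000" is a placeholder, not a real image ID.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Tiny illustrative stack: one web server instance",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-00000000",  # placeholder AMI
                "InstanceType": "t1.micro",
            },
        }
    },
}

# The JSON document below is what you would hand to CloudFormation.
print(json.dumps(template, indent=2))
```

The point is that the whole environment becomes a single versionable document rather than a sequence of console clicks, which is exactly why it is useful for standing up environments.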

Following on from Matt Wood was Francis Barton (IT Development Manager of Costcutter), an AWS customer who talked about their experience of using AWS after quite literally stumbling across it whilst looking at Elastic Bamboo as their CI tool! After looking at what it had to offer, they could see the potential it had to give their company – which works on fine margins – a highly scalable, highly resilient solution for a reasonable cost. They have managed to move their system off Oracle Application Server onto AWS EC2 instances running Apache Tomcat with relative ease, and have made great use of SQS and SNS as part of the solution. That said, he did mention they had hit a couple of issues (which was nice and refreshing – hearing about some real world experience) around JPA caching and connection pool failovers with RDS, but all things they are working to resolve.

Next up was Richard Churchill (Technical Director of ServiceTick), who has developed a software solution, SessionCam, that records a customer’s interaction with your website, capturing loads of information such as the actual images the customer saw and the customer’s behaviour and navigation around the site. As you can imagine, SessionCam captures a mass of data and sends it back asynchronously, so it is easy to see why they have over 450 EC2 instances running! They too are using SQS at the core of the application, taking advantage of the autoscaling on offer, and have found the stats you can get for SQS very useful. The other key point (I thought it was key at least) was that they had found that utilising more Micro instances yielded a far better return on investment than using larger instances (it would be great to get some stats on this from AWS’s perspective) – but I guess it all comes down to your application design and architecture in the end.
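Richard’s point about Micro instances can be framed as cost per unit of throughput (a sketch with made-up hourly prices and throughput figures – as he said, the real answer depends entirely on your application design):

```python
# Cost per 1000 requests for two instance shapes. All numbers are
# illustrative assumptions, not AWS prices or real benchmarks.
instances = {
    # name: (hourly_cost_usd, requests_per_second_handled)
    "micro": (0.02, 40),
    "large": (0.34, 500),
}

def cost_per_1000_requests(name: str) -> float:
    hourly, rps = instances[name]
    requests_per_hour = rps * 3600
    return hourly / requests_per_hour * 1000

for name in instances:
    print(f"{name}: ${cost_per_1000_requests(name):.5f} per 1000 requests")
```

With these placeholder numbers a fleet of Micros wins per request served, but only if the workload parallelises cleanly across many small instances – which is the architectural caveat.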

The final AWS customer speaker was Richard Holland (Operations and Delivery Director of Eagle Genomics), a bio-informatics company based in Cambridgeshire that uses AWS to expose data to their clients for analysis. His session ended up being rushed due to the overrun from the previous sessions, but he touched on how they had used Zeus to obtain better, more intelligent load balancing. The item that got most people’s attention, though, was that, given the sensitivity of the data they hold for their clients, they had engaged AT&T and Cognizant to carry out ethical hacks on AWS – both of which failed in their attacks. This is something I will be looking into more deeply, as it comes up repeatedly when discussing the Cloud and security within it (my colleague Joseph picked up on this very point in his blog posted earlier today). See slide 12 of 13 in Richard’s slide deck, available on SlideShare.

After a quick break, we all reconvened and started the afternoon technical sessions with Matt Wood giving a presentation on “High-Availability Websites: Build Better Websites with a Concert of AWS Products”, covering things such as patterns for availability, utilising S3 for asset hosting, using S3 websites for hosting dynamic (client side) websites, and using CloudFront for global edge caching to enhance your web site. He also touched upon the extended support now available for languages such as Ruby on Rails, Scala etc.

Carlos Conde was next, delivering his presentation on “Running Databases with AWS – How to Make the Right Choice for Your Use Case”. It was very insightful, offering up some architectural principles and patterns for using RDS, as well as using backups to create new test instances for more “real world” testing. It was good to see what RDS brings to the table, but in my view, whilst it is still limited to MySQL, I think most will stick to hosting database servers on EC2 instances – well, until Oracle instances are available on RDS – still no date on this from AWS!

Finally, Ianni Vamdelis delivered his talk on “Deploying Java Applications in the AWS Cloud: A Closer Look at Amazon Elastic Beanstalk”, which came with the great tag line “Easy to begin, impossible to outgrow!”. I think this was the highlight of the day for me, as it is in my current arena of work and I can see the masses of potential it offers for deploying your applications so easily: setting up your logging in S3 automatically, configuring Elastic IPs, health checks and load balancing, and all in an easily repeatable way – plus the plugins for Eclipse to support this are great! This surely is such a godsend! Only downside: it is not yet available in VPC :-(

All in all a good day with some great food for thought – but, as so often seems to be the case, you can’t help feeling that whilst we have come a long way we are not quite there yet, still waiting for that next release or feature to become available. That said, I have come away more impressed than before, and with a great belief that more and more of the work I do will be in the Cloud.
