A wind-battered union jack at the Cobb at Lyme Regis
Photo: Richard Austin / Rex Features via BBC News

If the wind and rain storms hitting the UK right now teach us one thing, it’s to be ready for every eventuality. The UK is taking the brunt of everything the Atlantic can throw at us. We’ve all seen the evening news reports of terrible flooding on the Somerset Levels and the rail line swinging in mid-air at Dawlish.

You cannot fail to admire the resilience of those impacted.  We salute you.

I’m not going to stray into the politics of it all.  Leave that to the professionals and the pundits. But it does seem an apt moment to remind ourselves of the vital importance of business continuity planning and having a proper disaster recovery strategy in place.

Ok, so DR planning may not be the sexiest gig on the planet, but it does offer a hero factor when DR needs to be invoked for real, such as when water starts coming up through the floor or trees crash through the buildings where you keep your servers.

Regular readers of this blog will know we’ve seen very successful strategies using on-premise IT estates, data centres, external hosting, and the Cloud. One great example I think is worth highlighting again today is the innovative approach to using the power of the AWS Cloud for disaster recovery. Tech journalists as well as Amazon picked up on the outstanding work done by Paul Armstrong and others at Haven Power, part of Drax Group.

Here’s their story.

Read Haven Power’s story on the Smart421 website here http://www.smart421.com/customers/casestudies/haven-power.asp

Find out more on the AWS website here http://aws.amazon.com/solutions/case-studies/haven-power/

Discover more of our blogs about Amazon Web Services here http://smart421.wordpress.com/tag/amazon-web-services/

Read more of our blogs about Cloud here http://smart421.wordpress.com/tag/cloud/

Our Cloud architects will be available to discuss this and other engagements at Cloud Expo Europe, at London ExCeL on 26-27 February. You can find Smart421 in the AWS Village (stand 1070).

Please Rate and Like this blog. Share it using the social icons below or via the short URL http://bit.ly/1buEWe5

Our readers want to know what YOU think, so please post a Comment.

Yesterday at AWS re:Invent, Andy Jassy delivered the main keynote. As you can see from the photo below, the event was immense – the day before I was in the APN Summit so it was AWS partners only, and that felt big.

But this was 9,000 attendees from 57 countries in a room. The photo doesn’t really capture the epic scale – which struck me as kinda like a metaphor for AWS itself, i.e. the scale of the administrative operation was off the chart, it was all very efficiently managed, and it gets bigger every year!

I thought it was interesting that they didn’t even “save up” the recent 10% price reduction for M3 EC2 instances for re:Invent – it was announced on 5th November instead. To me, this just shows how baked into the business model these regular price reductions have become.

In content terms, the three main new announcements were:

  • Amazon CloudTrail – the ability to log all AWS API calls to S3 for audit and compliance purposes (see the sketch after this list). This is a nice feature that we’ve asked for before, but it hasn’t actually been much of a barrier to customer adoption previously, probably because we are typically managing the entire AWS layer for a customer anyway.
  • Amazon WorkSpaces – virtual desktops-as-a-service. Interestingly, desktop “state” is maintained as you move between access devices, e.g. from laptop to tablet. We’ve deployed virtual desktops in AWS for a number of customer projects – either desktops for key users in a Disaster Recovery scenario, or for developers who are located around the world and need a consistent desktop with known applications installed in order to access AWS-hosted dev and test environments. So I can see us using this new feature in future projects, as I suspect the cost model will stack up well against the installation/build/ongoing patching effort of putting in a bunch of Windows Remote Desktop servers.
  • Amazon AppStream – HD-quality video generation and streaming across multiple device types. This is related to another announcement made on 5th Nov – the new g2.2xlarge instance type, which has the GPU grunt to enable the creation of 3D applications that run in the cloud and deliver high-performance 3D graphics to mobile devices, TVs etc.
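As an aside, here’s a minimal sketch of how CloudTrail can be driven programmatically. It uses today’s boto3 SDK purely for illustration (it postdates this post), and the trail and bucket names are hypothetical:

```python
import boto3

# Create a trail that delivers logs of all AWS API calls to S3.
# The bucket must already exist with a policy granting CloudTrail write access.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

cloudtrail.create_trail(
    Name="audit-trail",                  # hypothetical trail name
    S3BucketName="my-audit-log-bucket",  # hypothetical bucket name
)
cloudtrail.start_logging(Name="audit-trail")

# Later, look up recent API activity for audit/compliance review.
for event in cloudtrail.lookup_events(MaxResults=10)["Events"]:
    print(event["EventName"], event["EventTime"])
```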

Weirdly, being at the event means you actually get less time to look into these new product announcements, so you probably end up with less detail than if you were just reading about them on the web – after the keynote it was straight into a bunch of technical sessions.

I mainly focused on the data analytics sessions. First off, I got to hear about what NASA have been doing with data visualisation – I think all attendees expected to hear about exciting interstellar data visualisations, but it was actually about much more mundane visualisations of skills management, recruitment trends etc – and this in fact made it much more applicable to the audience’s typical use cases as well. There were some great takeaways about how to maximise your chance of success, which I need to write up at some point…

I then attended an excellent deep dive on Amazon Elastic MapReduce (EMR) – this covered Hadoop tuning and optimisation, architecture choices and how they impact costs, dynamically scaling clusters, when to use S3 and when to use HDFS for storage, which instance sizes to use, and how to design the cluster size for a specific workload.
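To make the S3-versus-HDFS and cluster-sizing points concrete, here’s a hedged sketch of launching a transient EMR cluster with boto3 (which postdates this post); the names, release label and instance counts are illustrative assumptions, not recommendations from the session:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# A transient cluster: input/output lives durably in S3, HDFS is only
# scratch space, and the cluster terminates when its steps complete.
response = emr.run_job_flow(
    Name="analytics-cluster",          # hypothetical cluster name
    ReleaseLabel="emr-5.36.0",         # assumed release label
    Applications=[{"Name": "Hadoop"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",   # illustrative sizes
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 5,                  # sized for the workload in hand
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Started cluster:", response["JobFlowId"])
```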

This was followed by some customer technical overviews of their use of RedShift. They had all migrated to RedShift from either a SQL or NoSQL architecture. For example, Desk.com have deployed two RedShift clusters in order to isolate read from write workloads, but I felt that they had been forced to put considerable effort into building a proxy in front of RedShift to optimise performance – fundamentally because RedShift is limited to 15 concurrent queries and, for their reporting workload, they are not in control of the peaks in their users’ demand for reports. So they’ve implemented their own query queuing and throttling mechanism, which sounds like a whole heap of tricky, non-differentiating technical work to me. A key takeaway from this session for me, though, was that the price-performance characteristic of RedShift had really worked for these customers, giving them the ability to scale at a cost they simply could not achieve before. They were all achieving very high data ingress rates by batching up their data inserts and loading directly from S3.
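Desk.com’s proxy is their own code, but the core idea – capping in-flight queries at the warehouse’s concurrency limit and queuing the rest – can be sketched in a few lines of Python (a simplified illustration, not their implementation):

```python
import threading

# RedShift (at the time of writing) allows ~15 concurrent queries,
# so cap in-flight work and make everything else wait its turn.
MAX_CONCURRENT = 15
slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def run_query(execute, sql):
    """Block until a slot is free, then run the query via the supplied driver call."""
    with slots:              # at most MAX_CONCURRENT queries in flight
        return execute(sql)
```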

The final session I attended was about a Mechanical Turk use case from InfoScout. Mechanical Turk is an intriguing service as it’s so different from the other AWS offerings – in fact it’s not really a service at all, although it exposes a bunch of APIs – it’s a marketplace. Classic Mechanical Turk use cases include translation, transcription, sentiment analysis, search engine algorithm validation etc, but InfoScout’s need was for data cleaning and capture following an automated but fallible OCR process – capturing the data from pictures of shopping receipts taken on smartphones. The main takeaway for me was how they manage quality control – i.e. how do you know, and therefore tune and optimise, the quality of the results you get from the workers executing your HITs? InfoScout use two quality control strategies (sketched in code after the list):

  • Known answers – in a batch of receipt images handled by a Mechanical Turk worker, they inject a “known” receipt and compare the data captured with the known data on that receipt. This technique is good for clear yes/no quality checks, e.g. is this receipt from Walmart? It allows them to compute a metric for each worker indicating how likely it is that their other receipts have been accurately processed.
  • Plurality – send an unprocessed receipt to more than one worker and see how consistent the returned results are. InfoScout build a confidence score based upon this and other factors such as worker tenure.
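Neither strategy is complicated in principle – here’s an illustrative Python sketch of both (my own simplification, not InfoScout’s code):

```python
from collections import Counter

def known_answer_accuracy(worker_answers, gold):
    """Score a worker against injected receipts whose correct answers are known."""
    checked = [rid for rid in gold if rid in worker_answers]
    if not checked:
        return None  # this worker hasn't seen a known-answer receipt yet
    hits = sum(worker_answers[rid] == gold[rid] for rid in checked)
    return hits / len(checked)

def plurality_answer(answers):
    """Given the same receipt processed by several workers, take the most common
    answer; the agreement ratio can feed into a per-result confidence score."""
    top, count = Counter(answers).most_common(1)[0]
    return top, count / len(answers)
```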

The final event of the day was the re:Invent pub crawl around 16 of the coolest bars in The Venetian and The Palazzo hotels. I’m guessing I don’t need to tell you much about that event, other than it started with sangria… :)

Tough, but someone has to do it…

2013 IP Expo, London

With our vantage point and close proximity to the Amazon Web Services Lab at IP Expo, we were eyewitnesses to the ongoing popularity of AWS. You would never have known it was Amazon’s first time at IP Expo; their AWS Labs sessions were packed out all day long. We didn’t see that sort of traffic anywhere else in the hall.

This sent a strong signal that only underlines the leadership position Amazon has fought for, and won, in the UK. They are in front by a country mile.

Which suited us down to the ground, of course.

We were beneficiaries of all the goodwill and warm buzz it generated. After each Lab session, attendees gravitated to the Amazon Partner Village to ask questions and to see for themselves what Amazon Partners are up to.

Enterprise Cloud. Ask us how.

Anticipating high levels of interest, we staffed our stand with senior Cloud Architects as well as with Account Managers, all of whom were able to take questions and talk about how Smart421 is helping large UK companies on their separate cloud adoption journeys.

As evidence, we showcased a set of case studies covering a selection of customer engagements: Big Data analytics on the AWS Cloud for Aviva Quotemehappy.com, Services Transition to the AWS Cloud for ATOC Rail Settlement Plan and National Rail Enquiries, and Disaster Recovery on the AWS Cloud for Haven Power – part of Drax.

These real war stories certainly caught people’s attention.   It seems that Smart421 is still one of the only IT services firms able to put up so much in the way of proof that we’ve done it for real.

As well as those case studies, my colleagues tell me that our AWS Competencies logos had a remarkable “magnetic effect” in drawing visitors to the stand.

Over-the-shoulder. An AWS Lab session

But we didn’t just absorb all the goodness from the AWS Labs.  We went out and hunted down prospects for ourselves.  Glad we did.  Evaluating the results today tells us that it really was worth all the hard work. We met some great people.

IP Expo was, for us, a trade show with a difference. As well as meeting people we already know (good), it has put us in touch with a whole new audience. OK, so we didn’t get to speak with everyone (with 12,500 attendees it would have taken some doing), but the 200 or so we did speak with were engaged enough to want to know more about us and what we are doing with AWS. Therefore, mission accomplished.

We were pleased that our colleagues in Kcom and Eclipse (other brands in the KCOM Group plc) were exhibiting. By all accounts they had a great show too.

Well, I’m off to give our stand team a well-deserved pat on the back.

We’ve got a few emails to send and some calls to make…

Please share this blog by using the social button below, or via the short URL http://bit.ly/19YEqNB

Please Rate and Like this blog. Our readers want to know YOUR opinion, so please leave a Comment.

After the polished video intro (screen shot below – catch the full keynote here), Brad Anderson (Corporate VP at Microsoft) kicked off TechEd Europe here in Madrid by highlighting the opportunities available to us in IT at the moment. He talked about innovation, helping customers differentiate themselves, and how Microsoft aim to make others great – and whilst I definitely agree with him, it was the resonance with so much of what we do at Smart421 that I really liked.

Screen shot: the opening video sequence

Four areas of innovation were discussed: People-Centric IT, Building Better Apps for People, Data, and the Cloud Platform. Aligned to these were new product release announcements, including SQL 2014 CTP, VS2013 and TFS2013 CTP, Windows Server 2012 R2 and System Center 2012 R2. Better still, a partnership with Oracle was announced, which means Oracle is fully supported on Azure and Hyper-V (and, from what I gathered, Oracle images ready to spin up in Azure).

One of the best parts (I tweeted at the time – @wanty1975 btw) was a video featuring one of our customers (Aviva), referenced as an example of mobile and Azure development combining to provide an innovative solution for their customers. Why was this so good? Well, partly because my old boss was in the video… but more so because seeing real-world examples, and close to home too, helps inspire me and makes me realise the opportunities we have available to us.

The TechEd keynote presentation

So, a brief list of new features mentioned in the keynote that you should be aware of:

  • BYOD – two-factor authentication making it easy to add devices (an MS Surface in the demo) to AD domains using Windows 8.1 (not sure what the experience would be like for other devices, though). The simple process means files are sync’d, permissions granted to corporate data, and Line of Business (LoB) applications downloaded onto the device, accessible via a company portal-like homepage.
  • VS and TFS 2013 – ALM benefits throughout the development lifecycle, but I really liked InRelease (following a recent acquisition by Microsoft), which provides workflow and approval mechanisms for moving builds through environments. I could see this really helping to take the pain away from tracking builds through environments and ensuring everyone knows what versions are where. From a dev perspective I also thought the new Code Lens and Code Map features looked really helpful. These come under the banner of Situational Awareness and are designed to reduce the time and effort needed by developers when context switching. Code Lens integrates with TFS 2013 to provide details within the code base, such as tests passing/failing, who last checked it in and any recent check-ins. It helps developers understand who has done what, where and why, much more simply than the previous version/compare routine that we’ve all experienced. Code Map provides a diagrammatic view of code (similar to architecture modelling tools), but this can be viewed at runtime in debug mode, like a call stack, helping developers familiarise themselves with code much more quickly and easily.
  • SQL 2014 – some huge performance gains achieved through in-memory OLTP and row-based rather than page-based access, thereby reducing locking issues and increasing performance. SQL will suggest which tables are ‘hot’ and recommend moving them into memory (I’m assuming they’re persisted to disk fairly frequently!). I also picked up on the ability to replicate an existing on-prem DB to Azure, to provide HA and DR via a right-click menu option… I didn’t see or hear any more evidence of this, but it sounds like a great feature. Also HDInsight and Hadoop linkage to Excel to provide visualisations and insight into Big Data.
  • Windows Server and System Center 2012 R2 – providing the tools to make hybrid cloud simple and easy, with tooling to help sync data and create replicas for DR purposes, plus significant performance benefits for virtualised machines.

So there are obviously things that I’ve missed and lots more to talk about but hopefully this represents a fair summary. I’ll also be tweeting any further nuggets throughout the week.

On Tuesday I presented on the topic of “Disaster Recovery in the cloud” at this year’s big Amazon Web Services (AWS) Enterprise event near Westminster in London – I had the privilege of sharing the speaking platform with a number of AWS speakers including Amazon.com CTO Dr Werner Vogels who delivered the opening keynote. Smart421 were sponsors of the event and so we had a stand in the Partner Expo and some colleagues in attendance.

In front of a full house, I went through one of our AWS case studies relating to the design and implementation of a Disaster Recovery (DR) solution for Haven Power, and walked through a generic DR architecture on AWS, using it to bring out various architectural considerations such as resilience, cost and complexity design trade-offs, security features and patterns, support for heterogeneous platforms, support for emergency virtual desktops for remote users, monitoring and control considerations etc.

Generic DR Architecture

I also covered some material on the various data replication strategies that can be employed to meet different recovery time and recovery point objectives (RTO and RPO) for different classes of data and applications. Get in touch if you’d like a copy of the presentation.
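To give a flavour of the replication end of that spectrum, here’s a hedged sketch of one common pattern – periodically copying EBS snapshots to a DR region, where the copy interval bounds your RPO and restore/boot time in the DR region drives your RTO. It uses today’s boto3 SDK (which postdates this talk), and the regions and identifiers are illustrative:

```python
import boto3

SOURCE_REGION = "eu-west-1"   # primary region (illustrative)
DR_REGION = "us-east-1"       # DR region (illustrative)

# Run on a schedule: the interval between copies bounds the RPO for
# this class of data; restore time in the DR region drives the RTO.
ec2_dr = boto3.client("ec2", region_name=DR_REGION)

def replicate_snapshot(snapshot_id):
    """Copy one EBS snapshot into the DR region."""
    result = ec2_dr.copy_snapshot(
        SourceRegion=SOURCE_REGION,
        SourceSnapshotId=snapshot_id,
        Description="DR copy of " + snapshot_id,
    )
    return result["SnapshotId"]
```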

I took what felt like a bit of a risk by throwing some images into the presentation to keep it a bit light-hearted, and seemed to get away with it :). There was a definite sheep theme to the slides, which was not intentional – just the way it worked out. A key takeaway for my audience was that Dolly the sheep was stuffed and is now in the National Museum of Scotland…

Overall it was a great event, very well organised (including the AV team) and well attended, with 300+ attendees from enterprise customers. AWS events get bigger every year – last year there was one big event in London, whereas this year it was split across two days, with a developer/startup focus on day one and an enterprise focus on day two. I had some excellent conversations with various customers and innovative startups, and it was also great to catch up with my AWS colleagues – I must confess to having enjoyed the benefits of using the speakers’ room. The highlight of the day for me was when I tenuously weaved a picture of Clive Sinclair with a ZX81 into my slide deck and used it as a “grey hair test” of my audience to see who knew who it was – and the fact that one person (admittedly not an IT person) thought it might be Babbage.

Those boys in Seattle have been busy – we’ve known about these new releases for some time now via our AWS partnership but have had to keep schtum, so it’s nice now that they are out in the public domain to talk about them a bit more. Each of the items in the AWS news release would be good enough for a news release on its own for most organisations…

For us, the most immediately exciting part is that their virtual private cloud (VPC) offering is now available in multiple AZs in a region, that a single VPC can span multiple AZs, and that a single AWS account can have more than one VPC. In our view, these changes remove the last major barriers to serious production use by enterprises, as we typically want to engineer a virtual private cloud that spans AZs in the same region for resilience reasons.
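For illustration, here’s roughly what that multi-AZ topology looks like when scripted – a hedged sketch using today’s boto3 SDK (which postdates this post), with hypothetical CIDR ranges and AZ names:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# One VPC spanning the region...
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# ...with a subnet in each of two availability zones, so a tier can
# fail over between AZs without leaving the private address space.
for cidr, az in [("10.0.1.0/24", "eu-west-1a"), ("10.0.2.0/24", "eu-west-1b")]:
    ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
```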

The other really exciting thing about the VPC news is that Smart421 are mentioned in the press release :). We use VPC for pretty much everything we do for customers now – there’s no good reason why not, the financial model is not really affected and you get much more control. One of those customers is Haven Power, where VPC is one of the building blocks of the solution to deliver a really innovative business continuity planning (BCP) solution.

The Identity and Access Management functional enhancements are very welcome also – this is an area that has matured quite fast but from a relatively low base. Managing multiple customer and internal deployments and the accounts and permissions that go with them is challenging, and integration with Active Directory etc looks like it will be very useful.

Finally, Direct Connect is something we’ve been discussing with AWS in the UK for a while – again, coming back to my theme about making IaaS a credible solution for enterprise customers (the market Smart421 principally serves) – enterprise customers expect to have more private, more controlled networks between their data centres, and the use of VPNs usually concerns the internal security guys, so this is a necessary piece of the enterprise deployment architecture jigsaw. It’s only available in US East at the moment, but there’s a commitment to a London deployment, so this is something I’d expect our customers to want to exploit – especially at the price point, which seems a bargain to me, e.g. a 1Gbps port used full time for a month is about $216 (which works out at roughly $0.30 per port-hour over ~720 hours). Peanuts – so it’d be rude not to use it.

Read the AWS blog entry for more details, and the related Smart421 news item is here.
