In yesterday’s blog post I summarised the cloud services broker role with some definitions, and concluded that I do indeed appear to work for one of them – and I suspect many vendors at this week’s Cloud Expo Europe (#cee14) might lead on this from a marketing point of view.

We’re delivering service intermediation/customisation and service aggregation/integration, but one thing we are not really doing (or seeing any demand for) at the moment is dynamic or semi-dynamic workload migration. It’s not just dev & test any more – these days we are migrating complex production environments onto AWS, customer after customer. But we are not providing customers with the means to dynamically move or spread those IT workloads across different cloud providers. It’s certainly something we could do from a technology perspective, and most of our deployments have some hybrid aspect to them.

The ability to migrate IT workloads dynamically (i.e. at run-time, not at deployment time) is something I sometimes see as a capability under the “cloud broker” banner, but in my view it really just doesn’t make sense – at least not at the moment.

The rate of innovation in the IaaS/PaaS/DaaS market is such that most of the other vendors are playing catch-up with AWS, as AWS continue to differentiate themselves from the following pack. This shows no sign of slowing down over the next couple of years – so the only way a migrated workload is going to work across multiple cloud vendors is if it only relies on the lowest common denominator functionality across those vendors, which is typically basic storage, virtualised compute and connectivity. Or you have to architect your solution to take into account deployment differences across the cloud providers you intend to use – and be able to effectively monitor and support each of them – twice the work and complexity, and not something you really want to debug. Did your load balancing just stop working as you expected? It worked last week… hmm… I wonder if our load-balancing configuration behaves exactly the same across all our cloud service providers? And so on…
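The lowest-common-denominator problem can be sketched in a few lines of Python. A provider-agnostic interface can only call operations that every provider supports, so anything differentiating gets stranded. All class and method names below are illustrative, not a real multi-cloud SDK:

```python
# Sketch of the lowest-common-denominator problem (illustrative names only).

class AwsLikeProvider:
    def create_vm(self, size): return f"aws-vm-{size}"
    def create_storage(self, name): return f"s3://{name}"
    # A differentiating feature only this provider offers:
    def set_lifecycle_policy(self, bucket, days):
        return f"{bucket}: archive after {days}d"

class OtherProvider:
    def create_vm(self, size): return f"other-vm-{size}"
    def create_storage(self, name): return f"blob://{name}"
    # No lifecycle-policy support here.

def portable_deploy(provider, size, bucket):
    """Only calls methods common to every provider -- the lowest common
    denominator. Provider-specific goodness (lifecycle policies, richer
    load balancing) is unreachable through this interface."""
    vm = provider.create_vm(size)
    store = provider.create_storage(bucket)
    return vm, store

for p in (AwsLikeProvider(), OtherProvider()):
    print(portable_deploy(p, "small", "backups"))
```

The portable path works everywhere, but only because it ignores exactly the features that make each provider worth using – which is the point of the paragraph above.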

Even storage – the most commoditised of the building blocks of IaaS (you would have thought) – contains some interesting differentiation: not on price any more, as Google/Microsoft/AWS are effectively price-matching these days, but on features like access control, archiving to cheaper storage, automated data life-cycle policies etc.
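As a concrete illustration of those life-cycle features, here is roughly what an automated data life-cycle rule looks like, in the style of S3’s lifecycle configuration. The field names only approximate the AWS API of the time and the prefix is made up – treat this as a sketch, not a reference:

```python
# Sketch of an automated data life-cycle rule, S3-style (field names
# approximate the AWS API; prefix and rule ID are invented).
lifecycle_rule = {
    "ID": "archive-then-expire",
    "Prefix": "logs/",
    "Status": "Enabled",
    # Move objects to cheaper archival storage after 30 days...
    "Transition": {"Days": 30, "StorageClass": "GLACIER"},
    # ...and delete them entirely after a year.
    "Expiration": {"Days": 365},
}

def describe(rule):
    t, e = rule["Transition"], rule["Expiration"]
    return (f"{rule['Prefix']}*: archive to {t['StorageClass']} "
            f"after {t['Days']}d, expire after {e['Days']}d")

print(describe(lifecycle_rule))
```

It is precisely this kind of rule that has no equivalent on a provider offering only “basic storage” – so a portable workload cannot rely on it.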

The bottom line is that if you are going to architect your applications so they can run on any cloud service provider, then you can’t easily use any of the good bits and hence your value in migrating to a cloud solution is diminished.  Not ruined, just reduced.

There are now a bunch of brokerage tools out there from vendors that claim to give you this workload migration capability, but what I’ve seen so far is disappointing. For example, one recent tool I looked at required a custom template to be created for each cloud provider – so whilst the end user might get a menu choice of “deploy to AWS” or “deploy to Azure”, under the covers you still need a bunch of experts in each cloud service provider’s technology, and those experts need to keep abreast of new functionality continually. You can create an impression of homogeneity, but it’s just a veneer.

In our experience, even in very large enterprise estates (e.g. where we’ve deployed and managed up to 750 AWS instances made up of numerous discrete environments), whilst the IT workloads might be relatively consistent in nature (e.g. there might be a corporate standard for this OS and that application server etc), there is always sufficient variance in each project’s requirements that a completely cookie-cutter approach to environment deployment and self-service just does not work.  Each project needs slightly different software, or software versions, or server specifications, or connectivity requirements etc etc – and the list goes on.  And if they didn’t – well – you’d hope the projects would be merged into a single project if they were so similar in their needs, right?
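One pragmatic alternative to a fully cookie-cutter deployment is a corporate base template with per-project overrides – the projects share what is genuinely common and vary only where they must. A minimal sketch (all field names and values invented):

```python
# Minimal sketch: a corporate base environment plus per-project
# overrides (all values are illustrative).
BASE_ENV = {
    "os": "corporate-linux",
    "app_server": "standard-appserver",
    "instance_size": "medium",
    "extra_packages": [],
}

def project_env(overrides):
    """Shallow-merge a project's specific needs over the corporate base."""
    env = dict(BASE_ENV)
    env.update(overrides)
    return env

# Two 'similar' projects that still differ, as they always do:
proj_a = project_env({"instance_size": "large"})
proj_b = project_env({"extra_packages": ["reporting-lib"]})
print(proj_a["instance_size"], proj_b["extra_packages"])
```

The base stays standard, but every project gets the slightly different software, versions or server specifications it actually needs.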

So – given that it’s taken the IT industry this long to get to the point that “as a service” is really possible, for the next couple of years at least let’s focus less on hiding away the good bits of cloud and the really useful differentiating features of the leading cloud providers, and focus more on actually exploiting them please!

PS If you happen to be attending Cloud Expo catch me on the Smart421 stand or let me know you are there via @SmartCTO

Vegas baby!

I’ve survived my first full day in Vegas at AWS re:Invent, the annual Amazon Web Services shindig, although I must admit to being jet-lagged to hell. Handily nothing ever shuts down here so waking up at 2am is not a problem :)

The first day was dedicated to the AWS Partner Network (APN) Summit, and the #1 highlight had to be the announcement that Smart421 have been awarded Premier Consulting Partner status – one of only 3 partners in the EMEA region to be recognised in this way. This is the highest level that there is globally, and it makes me really proud of what our internal AWS Practice have achieved over our four year journey with AWS – this is not something that AWS give to any old partner! It’s recognition of the great customer case studies and deployments that we’ve jointly undertaken with AWS, and the investment in deep skills that we’ve made.

The sheer scale of re:Invent is pretty staggering. The venue (The Venetian) is enormous, the rooms are massive, and there’s a huge number of attendees with a very wide variety of interests – enterprise-level, gaming, HPC, start-ups etc. As I was at the APN Summit all day, which was on its own floor, this didn’t really hit me until I went to the Expo part of the event at the end of the day – where 180+ different vendors were touting their wares. It was a struggle even to walk through the room as it was so busy – although copious amounts of food and alcohol probably helped drive attendance :).

Here’s a couple of other takeaways from the APN Summit yesterday:

  • AWS have just updated the quote that they use to demonstrate the rate at which they are scaling their infrastructure. Anyone familiar with AWS will probably have heard before that one way of quantifying their rate of infrastructure growth is in terms of comparing with the number of servers etc needed to run the Amazon.com retail business at some point back in the past. Well – AWS has carried on growing, and so this comparison metric has had to be updated. They are now adding enough server capacity every day to power Amazon.com when it was a $7bn business – which is quite an incredible statement really. Cloud computing is indeed a scale game…
  • One of the big push areas from AWS is in driving use of AWS to host specific packages such as Microsoft Exchange, various Oracle business products (i.e. not just the technology components such as database, middleware etc), SAP, Microsoft SharePoint etc. Hence yesterday they announced some new partner competencies for some of these products. Personally I don’t quite get this – in my view, the cloud model is not so compelling for these kinds of IT workloads, as they tend to be very “steady state” in nature, not particularly peaky. And if they are somewhat peaky, then you’ve usually got to have a resilient pair running all the time anyway, so the options for scaling down are limited. There’s a myriad of companies out there offering very price-competitive hosted Exchange and SharePoint models (like our sister company in fact) and they can exploit multi-tenancy across customers to drive a really low price point. Office 365 (which wraps in Exchange and SharePoint with other stuff) is also the direction of travel for many enterprise customers. Having said all that, AWS are obviously seeing traction for these more enterprise-level software deployments, otherwise they wouldn’t be aligning their partner model to it – and they are clearly not dummies, given that they are giving the rest of the IaaS/PaaS market a bit of a hiding.
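The steady-state argument in that last bullet can be made concrete with a toy cost model (all prices invented). Elastic pricing only wins when utilisation is low enough that paying a premium per hour, for only the hours you use, beats paying a flat rate around the clock:

```python
# Toy cost model for the steady-state argument (all prices invented):
# cloud on-demand costs more per hour, but you only pay for hours used.
FLAT_HOURLY = 1.0       # hosted/owned kit, paid 24x7
ON_DEMAND_HOURLY = 2.5  # elastic cloud rate, paid only while running

def monthly_cost_flat(hours_in_month=720):
    return FLAT_HOURLY * hours_in_month

def monthly_cost_cloud(utilisation, hours_in_month=720):
    # utilisation is the fraction of the month the capacity actually runs
    return ON_DEMAND_HOURLY * utilisation * hours_in_month

# A peaky workload (20% utilisation) favours the cloud;
# a steady-state one (100%, e.g. Exchange) favours flat pricing.
print(monthly_cost_cloud(0.2), "vs flat", monthly_cost_flat())
print(monthly_cost_cloud(1.0), "vs flat", monthly_cost_flat())
```

With these (made-up) numbers the break-even point is 40% utilisation – an always-on resilient pair sits well above that, which is why the scale-down story for Exchange-style workloads is weak.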

Today we have the opening keynote from Andy Jassy, and then we get into the more nitty-gritty technical sessions…

Positively Noctilucent:  not all clouds were created equal

With just 3 weeks to go until its Enterprise Summit in London (17 Sept), Amazon Web Services (AWS) is on a bit of a roll. ( hashtag #AWSsummit )

Research reports by analysts at Forrester and at Gartner have once again shown AWS’ leadership position in a significant way. For those still looking for evidence that Cloud Computing is a robust alternative to on-premise compute and storage, these reports are as close as you are likely to get to an objective viewpoint.

In June, Forrester Research published its ‘Wave’ report on Enterprise Public Cloud platforms, Q2 (get the reprint here or via short URL http://bit.ly/1dUsPC3). In it, the authors Rymer and Staten took all the mainstream IaaS / PaaS / SaaS vendors (with some absences, e.g. Savvis) and analysed them from a number of viewpoints: rapid developers, coders, DevOps pros, and application development & delivery (AD&D).

Incidentally, they also registered a blurring of the lines between the widely accepted stratified offerings of software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS). But I digress. You should read the report.

“Amazon Web Services (AWS) came out the strongest of all vendors across three of our four developer segments.”

In August, Forrester’s arch rivals Gartner published its Magic Quadrant report on Cloud IaaS  ( get the reprint here or via short URL http://gtnr.it/17Lu848 ). Analyst team Leong, Toombs, Gill, Petri and Haynes put AWS way out in front.  Other mainstream vendors (including Savvis) looked like a tightly bunched peloton on their now-famous 2 x 2 matrix.

Research clients of Forrester and Gartner place a lot of currency in these types of reports; substantial IT deals are decided on the strength of vendors’ positioning and analyst opinions.  Of course, analysts don’t always get it right, but when it comes to Cloud they have for the most part been prescient.

Regular readers of the Smart421 blog will know only too well that we have been uncharacteristically vocal in our support of the AWS Cloud technology stack. This is only because a number of UK enterprises have already engaged Smart421 to architect, deliver and support solutions on the AWS Cloud.

If you’ve registered to attend on 17 September, you’re in for a treat with a great agenda. If you’d like to have a more pointed conversation with our Cloud Architects or Senior Management team, please feel free to visit our special microsite to book an on-site meeting.

Share this blog using the short URL http://bit.ly/12LlyCF

Please Rate and Like this blog. We always welcome Comments.

Organised by the UK Windows Azure User Group, this free all day conference provided a great opportunity to catch up on the latest developments, particularly given the Microsoft announcement a couple of weeks back.

Core to this announcement was Microsoft’s move into Infrastructure-As-A-Service (IaaS), and the keynote by Scott Guthrie positioned IaaS (described as Virtual Machines) alongside Microsoft’s current Cloud offerings, which to date have focused on Platform-As-A-Service (PaaS – now labelled Cloud Services by Microsoft) and Software-As-A-Service (SaaS – Office 365 for example).

MS Cloud Day

Despite the lack of internet connectivity for a large part of the presentation (what is it with Cloud demos and loss of connectivity?!?) Scott did a great job talking through the slides, clearly describing the alignment of each of the deployment options: On-premise vs Virtual Machines vs Cloud Services vs SaaS.

In addition to Virtual Machines, the new Web Sites service was also discussed, which gives Azure customers up to 10 web sites and 1GB of storage for free (whilst in the preview period – see here for further details). The demonstration showed how easy it is if you simply want to re-host an existing web site on Azure, whether it be ASP.NET, Node.js, PHP or even classic ASP. So the new Web Sites and Virtual Machines services provide a simple route to hosting applications on the Azure platform, with the added benefit of the Azure management aids and real-time statistics – and, in the case of Web Sites, incremental deployments and continuous integration (through TFS or Git) too.

So where does this fit with PaaS? Well, Steve Plank from Microsoft provided some answers with another demonstration. With Cloud Services you get a host of services to call upon, including Storage, Database, Identity, Caching and Service Bus, and the demo showed that if you design your application from the ground up utilising these services, you benefit from an end-to-end application architecture that can be deployed and running in minutes at the click of a button. It is this architecture that really gives you the elasticity and flexibility in the places you need it.

A good day and exciting times, with the options and landscape constantly changing. Nicely summed up by another Smartie, Andy Carter – ‘I guess there’s a load more stuff I need to learn about’ – when, a couple of days after he passed the Azure certification, MS announced the new services… (well done btw!)

Amazon Web Services’ announcement yesterday about their AWS Elastic Beanstalk offering was a significant step in their inexorable movement up the stack from IaaS towards PaaS (platform as a service), offering a Java PaaS based upon an Apache/Tomcat/Linux stack.

At the same time, the SaaS players are moving in the opposite direction, as demonstrated by Salesforce.com’s recent acquisition of Heroku, the Ruby PaaS provider.

Up until now, I’d have argued that PaaS has been the least mature of all the “aaS” siblings, with significant lock-in risk, vendor failure risk etc, but this is rapidly changing, and PaaS will become the main battleground between all the vendors as the SaaS players make their offerings more and more configurable/flexible and the IaaS vendors try to simplify their typically quite techy offerings for a wider audience.

There’s good further material on this topic on Krishnan Subramanian’s blog.

Firstly, apologies for this somewhat delayed post – put it down to the usual pressures of trying to get everything closed off before year end (and a little ‘help’ from my CIO ;o) )!

So my first visit to Berlin for Microsoft’s TechEd Europe event left me feeling drained after attempting to get around as many breakout sessions (and bars) as possible. The Cloud undoubtedly dominated proceedings – although I only chose one Cloud-specific breakout session, out of the 20 or so sessions I attended I can hardly recall one where the Cloud wasn’t mentioned! It highlights just how integral the Cloud is to Microsoft’s product strategy, covering 3 distinct service offerings: Infrastructure-As-A-Service, Platform-As-A-Service and Software-As-A-Service.

The keynote (watch it here) provided a clear steer on how Microsoft see the end-to-end application landscape evolving: user-centric portable devices, hooking into Visual Studio (VS) applications/services hosted in the Cloud. On the portable devices side, the recent release of Windows Phone 7 and the ease with which applications can be written in VS2010 provided an insight into how Microsoft see mobile apps becoming pivotal for businesses (anyone can get started using the free tools). Not much to shout about here in hardware terms, but development-wise there appears to be great potential for building rich business applications for mobile devices.

Software-As-A-Service was headed up by Office 365 (Microsoft’s answer to Google Apps), which provides slick browser-based versions of your usual favourites: Word, Excel and PowerPoint. What’s impressive here is that you also get Exchange Online, SharePoint Online and Lync Online (MS’s unified communications suite encompassing IM, voice, email and web conferencing). With pricing options for both SMEs and Enterprises, organisations have the opportunity to utilise some great tools with low up-front costs that will help streamline communication and simplify company collaboration (both internal and external, via federated Lync environments).

Infrastructure-As-A-Service facilitated by Hyper-V Cloud technology provides businesses with a self-service, elastic infrastructure platform. It gives you the opportunity to utilise vast amounts of processing power without the prohibitive expenditure you’d usually face when procuring your own tin. I was impressed with the Service Templates that simplify deployment and management of applications spanning multiple tiers. Remember however, that you’re still responsible for the patching, upgrading and management of all the servers deployed, but you get the commodity based benefits the Cloud can offer. Interestingly, Microsoft mentioned that financial assistance was available to companies wishing to be guinea pigs on this new offering.

Platform-As-A-Service is obviously where Windows Azure comes into play, providing a robust and scalable platform to host business applications. This was described as the panacea and it was clear to see alignment with all development technologies attempting to abstract the solution from where it may ultimately be deployed. This was demonstrated with VS2010 deployments, particularly when using AppFabric but also SQL Server in the form of Data Tier Application Development. The development tools are evolving to make it as easy as possible to switch between Cloud and non-Cloud based deployments, and clearly Microsoft want to remove as many obstacles as possible so that choosing Cloud services is literally one click away.

The other item I want to highlight was the effectiveness of Twitter at an event like this. Not only was it useful for me to keep up with the latest celebrity gossip, but I was really impressed with the ability for Microsoft to get immediate feedback on the event and breakout sessions. I can see it providing businesses with a great medium for assessing the impact of marketing campaigns etc. but also data-mining potential discovering patterns in comments. On this theme I recommend you take a look at this. A great week and highly recommended.

A couple of days ago I was one of the presenters at an Ovum/Butler strategy briefing at the Ambassadors Hotel in London. Smart421 were sponsoring the event and I wanted to share something real relating to cloud computing – and not just give a thinly-veiled sales pitch (not really my style – and also I’ve sat through enough of those from vendors/SIs to last me a lifetime) or explain what IaaS/PaaS/SaaS are to an audience for the millionth time.

So, after some steering from the Ovum analyst who was performing the introduction to the subject, Laurent Lachal, I settled upon the subject of what organisations need to do and consider today in their Enterprise Architecture teams in order to gear up and exploit the opportunities provided by cloud computing. It’s always better to have a map before you set off on a journey, and so I explained how we would use the Open Group’s TOGAF architecture development method (ADM) to help us navigate and address all the impacts of cloud computing in an organisation. Anyway, the presentation all went well with just the right amount of pre-match nerves to get me excited before I went on – and some good audience questions. Although my colleagues tell me that apparently I have a couple of bad habits when presenting that I need to work on :). Drop us a line if you would like a copy of the presentation.

The main observations that I wanted to share in this blog post relate to the nature of the audience, as they provided an interesting insight into the cloud computing market itself. As this audience had been generated mainly from a list of Ovum subscribers, they were quite a different mix to other cloud events I’ve been to or presented at:

  • Awareness – The level of knowledge of the subject was relatively low. You might say “that’s why they’ve come to a cloud briefing session you dummy” and that would be fair, but I am contrasting this to the knowledge levels I’ve seen at other, perhaps more techy events. My conclusion – it is very easy to get misled about how well understood the subject is by the market when you work in a “consulting bubble”. There is still a significant amount of “educating the market” to be done.
  • Strategy – When the audience were asked “who has a cloud strategy/adoption plan in place today?”, the answer was – virtually nobody out of 50+ delegates. One hand went up.
  • Execution – What was clear to me from the various vendors/presenters was that no one has really done this yet. Whilst I know that there are lots of case studies that can be trotted out etc, the reality on the ground is that outside of the true early adopters, it just has not gone “mass market” yet. SaaS adoption is the exception to this rule as this is more of an evolution of the previous ASP model, but PaaS and IaaS really have tiny market take up at present in the larger enterprises. My view is that SMEs are generally adopting quicker than larger enterprises.
  • Risk – The audience did not feel particularly risk-averse. Again, I guess they had “self-selected” to attend, but I didn’t get any of that “corporate fear” paralysis sensation about data security etc that I’ve witnessed in some enterprises. So I felt like these guys were really going to make something happen over the next year or so, which I felt was particularly encouraging for both the industry and the UK economy.

One closing comment I would make about the event is that there were some points made during the day that I knew were just plain wrong, some of which fell into the FUD category. I managed to keep my mouth shut. Amazing as it may seem, I am also wrong on occasions, so I’m not having a pop here specifically – it’s just that even in the vendor/SI space we’re all still getting to grips with a disruptive technology and there is a lot of hyperbole and urban myths to be thrashed out and dispelled yet.

Continuing on from my previous post….the second session I attended at Cloudforce 2010 covered Force.com. Being familiar with the PaaS offering already, I was more interested in drilling into some of the uses to which it was being put, pros/cons etc.

The sweet spot for Force.com was described as relational data “heavy”, process centric applications. This includes the classic Lotus Notes app replacement problem that many organisations suffer from, and also replacement for (typically business-built) Excel and Access applications. Afterwards I had a chat with one of the presenters and asked the obvious counter question – for what would it not be a good fit? The sour spot (!?) for Force.com would be an application that has very few users, performs heavy computational processing and manipulates large unstructured data (PDFs, images etc). For these latter requirements you are better going down the AWS EC2/S3-style IaaS route, although potentially Force.com could still be used for any relational data management aspects and combined with AWS services (for example). I find the idea of this ‘architectural mashup’ (my words, not theirs) quite appealing – it feels like the next logical evolutionary step after “integration at the glass”, but at the same time it also feels like the increase in moving parts and dependencies in your architecture is heading in the wrong direction, i.e. more complex.

From a competitive positioning point of view, what was interesting was that Salesforce.com focused their arguments on tackling Java and .NET development, rather than discussing how they were or were not better than other PaaS providers – so to a large extent they feel they are still in the “educating the market” phase and of course as market leaders I guess they don’t feel they need to defend their offering against more direct competitors. Quite a bit of time was spent discussing industry analysis reports comparing development speed and cost between Force.com and Java/.NET – which really reminded me of the same debates between Java and .NET in years gone by. The claim made was that in general Force.com is 5x faster, and half the cost. The flip side of that argument was not really discussed, i.e. as Force.com abstracts you away more from the underlying code (80% of the work is supposed to be “point and click”) then there must be some inherent restrictions that you could avoid in Java/.NET – it’s the classic 3GL vs 4GL debate all over again but with a cloud deployment model thrown in. My conclusion is use the right tool for the right job, it’s not either/or, and Salesforce.com’s answer would be to mix and match.

Finally, I discussed lock-in and governance concerns with the presenter afterwards. Salesforce.com are evolving and enhancing their product offerings at an amazing rate, and clearly expanding out to target more and more of the enterprise IT footprint. As the post-Internet world continues to centre around fewer and fewer mega-brands, there has to be a concern with committing more and more of your IT estate to any one vendor. In addition, I have real concerns about the governance of internal Force.com implementations. Whilst there is no reason that good development lifecycle processes cannot be applied to a Force.com development, the whole point is that more of the “point and click” can be performed by users closer to the business etc, and so organisations will need to be very vigilant to prevent utter change management chaos occurring, e.g. changes to apps on the fly, redundant/duplicate data being created and never tidied up etc. Are we at the new dawn of a fresh wave of data quality and integration nightmares? I regularly see how difficult our customers find it to control change in “classic” software developments on mainframes etc, with the stronger, more vocal change agents causing the introduction of architecturally suspect changes due to insufficient or ineffective governance controls. The lowered barriers to entry from a PaaS offering such as Force.com are like pouring petrol on this fire, I suspect. We’ll see if I’m right… although I accept that predicting train wrecks in the software development world is not exactly a genius insight!

I love the fact that the job I have gets me involved in some really interesting projects where you can be at the forefront and help shape the future for companies.

So it is of no surprise that I am finding myself increasingly looking at Cloud-based solutions for our clients (and out of interest too), and finding myself thinking “hey, this is great – it helps solve so many of my issues: reducing capex, increasing business agility and reducing time to market” – but I still keep coming back to: what should I do with my data?

Data is the currency of all businesses, and it is therefore vital that it is protected; plus, in the EU there are so many different rules that we need to adhere to that when you think about handing that data over to someone else, it just doesn’t feel right. Just because I can put my data in the cloud, should I?

Will my data be safe? Who will have access to it? Where is my data? How is my data protected? What control will I have? How will my data be backed up and verified? What access will I have to my data? What happens if it all goes wrong? These are the common questions that people keep coming back to, and the ones you will need to take up with your Cloud provider. They should be happy to answer all of them – and if not, then I would seriously consider whether they are the right provider for you.

These are key questions and points that need to be understood and agreed with your provider, as they will ultimately form part of the SLR and should match your organisation’s SLAs. You will be losing some control over the data, so ask yourself: is this right for you? Remember that you will still be responsible for the data whilst it is in the Cloud, so make sure that all the associated legalities are addressed (PCI DSS, DPA etc), and have a clear exit strategy in case it all goes wrong – you should be able to get your data back, and in a usable form. You may find that the security provided by the provider, coupled with your SLAs, offers you a more secure solution than if hosted in house. Speak to other customers and other providers for references and guides on price and capabilities – as is true with so much in life, you often end up getting what you pay for.

If you can address these points then there is no reason why you shouldn’t move your data to the cloud – but ultimately you have to ask yourself: just because I can, should I? There are many people out there hosting their data in the Cloud, some have only part of their data in the Cloud, and others don’t host any data in the Cloud at all. Just because you don’t move your data now doesn’t mean that you never will – and likewise, if you do move your data to the Cloud, there is no reason why you can’t bring it back in house if it is not working for you (make sure this exit strategy is part of your agreement with the provider!!!)

If you consider the points above then I am sure you will make a well-informed and thought-out decision on whether or not to put your data in the Cloud…

One of the great benefits of being a member of the British Computer Society is being able to attend the very good conferences, forums and presentations they host and run, one of which I attended earlier this – “Software as a Service – Is it the right time to move services into or from the cloud?”.

The event was run by the CMA (part of the BCS) at the BCS headquarters in London, and was very well attended, with the likes of IBM, Shell, Sun, Fujitsu, Logica, Barron McCann, Experian, BT, Siemens, Ordnance Survey and many others sending delegates – showing the rise in interest in Cloud Computing across all sectors. There were seven presentations in all, two of which were examples of companies that have embraced Cloud Computing, and a presentation from Rahoul Bhansali from Hudson & Yorke, who has been heavily involved in the government’s G-Cloud initiative (which is something I am sure I / Smart421 will blog on later as the initiative matures). However, for this blog entry I want to focus on the two presentations that caught my eye, and overlay these with the two examples of companies embracing Cloud Computing.

The first presentation I want to focus on was from Nick Coleman of IBM (author of the Coleman Report), who delivered a presentation on “The future directions of Cloud Computing”. The presentation focussed on the message that “Cloud Computing is evolutionary and not revolutionary”, something which I wholeheartedly agree with; managed services have been around for years, virtualisation is not new, and many companies have run / hosted applications on hardware hosted by a third party – Cloud Computing is about pulling these together and enabling different combinations of SaaS, PaaS and IaaS to be put together to meet the specific and individual needs of a company. Cloud Computing is also transformative; it offers businesses, especially SMEs, a lot of benefits such as a reduction in costs, agility in deliveries, quicker time to market, easy-to-scale solutions etc. But there are still a number of considerations that need to be taken into account. Security is still the main concern with the Cloud – in 2009 just under 50% of vulnerabilities found in web apps were not patchable; with Cloud-based solutions you are not aware of what else is being hosted alongside your application; and where is your app being hosted? These questions are all answerable, but you will most likely have to dig down to get the answers, and may even end up having to pay extra to resolve them. As Nick alluded to during his presentation, Cloud Computing is transformative – businesses and people need to adapt and be willing to relinquish control over some key aspects that have traditionally been kept in house. It was these considerations that featured heavily in the presentations from the companies that had embraced Cloud Computing – Ordnance Survey and EasyNet Connect.

Richard Britton, until recently IT Director at EasyNet Connect, presented on how they migrated their complex estate of 50+ applications onto a SalesForce Cloud based solution. They also surveyed their customers, and found:

  • 73% of their customers said they would be using Cloud based solutions within 5 years
  • 37% of their customers said they would be using Cloud based solutions within 1 year
  • 66% of their large SME (50+ staff) customers are already using Cloud based solutions, or plan to do so within a year

EasyNet have definitely embraced Cloud based solutions and plan to press forward with this as they grow.

Tim Martin and Ian Hoult from Ordnance Survey presented on how the OS OpenSpace product had been migrated to the Cloud. The OpenSpace application is used by many organisations, including many local councils that need the level of detail in the maps it provides. The application receives between 5,000 and 10,000 updates on a daily basis, which used to be shipped to all of the app’s consumers on a six-weekly cycle. Each shipment equated to about 60GB of information, so had to be loaded onto multiple DVDs – a very expensive process, and a prime candidate for conversion to SaaS. In addition to the expense of shipping all of the updates, demand for information on local council web sites can fluctuate quite a bit; the example they used was the recent spike in volume (around 80% above and beyond normal use) during the cold weather, from people looking for salt bins. With this in mind, it would have been very expensive for them to build the extra capacity required to deal with a huge surge in requests. They decided to push OpenSpace into the Cloud to address these issues: they can now serve 7 million tiles, as opposed to 500,000, at a third of the delivery cost, and have a scalable and economical solution with reduced support costs – they can now add an additional server into an environment within 5 minutes if the need arises. That said, it wasn’t all plain sailing: they had to switch their database, as the licensing of their existing DB was not “Cloud friendly”, and the data has been split, so all personal data is held locally within Ordnance Survey and only the public data is hosted in the Cloud, within the EU.
All in all they have seen this as a huge success and will be delivering two more of their products into the Cloud, though next time they will spend more time looking into the contracts and licensing aspects – which leads me nicely into the second presentation I wanted to focus on, delivered by Richard Kemp of Kemp Little, a firm specialising in IT litigation, on the differences between cloud services and outsourcing.
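Taken together, the OpenSpace figures quoted above imply a striking improvement in throughput per unit cost. A rough back-of-the-envelope sketch, using only the numbers from the presentation (the absolute delivery cost was not quoted, so it is normalised to 1.0 here):

```python
# Back-of-the-envelope comparison of OS OpenSpace before and after the Cloud
# move, using only the figures quoted in the presentation. The absolute
# delivery cost was not given, so the pre-Cloud cost is normalised to 1.0.

tiles_before = 500_000        # tiles served before the migration
tiles_after = 7_000_000       # tiles served after the migration
cost_before = 1.0             # normalised pre-Cloud delivery cost
cost_after = cost_before / 3  # "a third of the cost"

throughput_gain = tiles_after / tiles_before
efficiency_gain = (tiles_after / cost_after) / (tiles_before / cost_before)

print(f"Throughput gain: {throughput_gain:.0f}x")      # 14x more tiles served
print(f"Tiles per unit cost: {efficiency_gain:.0f}x")  # 42x more tiles per pound
```

In other words, a 14x increase in tiles served combined with a two-thirds cost reduction works out at roughly 42 times as many tiles per unit of spend – a good illustration of why the migration was judged a success.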

Richard, who has worked with Google and SalesForce on this subject, was a very captivating speaker with an interesting topic – one which I think is the most important for those looking at Cloud Computing. As I mentioned earlier, Cloud Computing is evolutionary, not revolutionary; we have all been involved in outsourcing in some way, shape or form in our careers, and we all know that one of the hardest aspects is the contracts that need to be drawn up to support it. The traditional approach is a transfer out of assets, in return for which you receive a service contract. Richard said that with the ever-increasing shift to Cloud based solutions, things are not changing as much as people think; you still need to set the KPIs, SLAs and service description, but certain items are now becoming critical to these agreements, such as:

  • Performance – key service metrics need to be specified
  • Availability – with the services being accessed over the internet who bears the risk of an outage?
  • Data – data security is always a key item for any business, but when hosting that data in the Cloud you need to be sure that the data standards applied by the Cloud provider are those that you require; you should be able to access that data at any given point and, most importantly, be able to get your data back at any given point, in a usable format
  • Exit – an exit strategy is key – being able to regain control of the service in a quick and timely manner

As well as these key points, there are some differences in the approaches taken to draw up the agreements for either outsourcing or Cloud:

  • Outsourcing tends to be a custom / bespoke deal, whereas a Cloud based deal is a one-to-many / generic deal
  • With outsourcing the supplier is responsible for delivery; with Cloud based solutions the customer’s services are delivered over the internet, and the customer accepts the risks that come with that
  • Outsourcing generally works to a set up-front fee, whereas Cloud is generally pay as you use

I think the key thing is to do all of the hard thinking up front when considering a move to a Cloud based solution. Look at the key points above; remember the criticality of your data and the fact that the customer remains responsible for it (make sure you specify where it needs to be hosted, what standards need to be adhered to, and that the data cannot be used without express consent); and make sure an exit plan is agreed up front, as this helps things nearer the end of any agreement and helps avoid litigation – it is always the lawyers who win!

All in all I thought the event was very good and gave some excellent insight into what to bear in mind. In my honest opinion, now is the time to be moving items into the Cloud – SOA is fast becoming as much of an adopted standard as OO, and Cloud based solutions can be a truly cost effective, flexible and scalable option for any business!
