Cloud Computing


Following on from my post about Google, AWS and then Azure price cuts the other day, there’s an interesting summary of Rackspace’s position covered on TechCrunch. In summary, the Rackspace CTO John Engates explained that they are continuing on the same track of not matching the recent price drops – which is consistent with his blog from July last year where he said…

We at Rackspace don’t aspire to offer the lowest unit prices. We strive instead to offer the best value…

I suspect a key reason is that they can’t afford to play this game of chicken.

Looking at basic storage as it’s easiest to do a like-for-like comparison, Rackspace’s Cloud Files is still 10 cents/GB, so that’s now 3.33x the entry price for AWS S3, and 3.8x the entry cost of Google Cloud Storage. Whilst I firmly believe that agility is typically a stronger driver than cost in the enterprise market, that’s such a huge difference that I don’t see how a customer procurement department can ignore it. Rackspace is having to move up the food chain as the base services get commoditised underneath them, i.e. focusing on service management, OpenStack, DevOps etc – get (a bit more) niche or get out. I get the “focus on value” message, but it’s hard to show much differentiating value on relatively commodity services like storage. It looks like this price drop was one price drop too far for Rackspace’s pockets. And then there were 3…

PS As an illustration of the positive impact on our customers, we’ve recently re-priced a customer proposal that was already going through the Smart421 sales machine when these price cuts were announced, and it’s resulted in an immediate 17% overall AWS cost reduction. Nice.

 

It’s been a pretty amazing 48 hours or so in the mega-cloud vendor space. We’ve rather lazily got used to continual price reductions from AWS, but this round of Google vs AWS price reductions is pretty special even given this context.

First Google announced some very big price reductions – it was the storage pricing that really grabbed my attention, at 2.6 cents/GB. But for the majority of workloads the compute costs are the dominant component, and so the 32% reduction in compute costs is probably more significant for many. It’s a minor point, but the Google announcement mentioned “reintroducing Moore’s Law to the cloud” – when Moore’s Law is of course finally running out of steam, e.g. according to Intel it’ll be game over by 2020.

AWS have responded with this, but interestingly seem to be calling time on the race to the bottom – knowing, I suspect, that they have a much more credible enterprise offering than Google. On S3 they’ve almost matched Google, but not quite: 3 cents/GB, reducing to 2.75 cents/GB with volume. Perhaps the bit that I’m most excited about is the price reduction of the M3 instance range by a whopping 38% (e.g. an m3.large in US-East is reducing from $0.225/hour to $0.140/hour), given that the M3 range is often our weapon of choice these days. That’s a massive bang for your buck.
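As a quick sanity check on that headline figure – just the arithmetic, nothing more:

$ echo 'scale=3; (0.225 - 0.140) / 0.225' | bc   # .377, i.e. a ~38% reduction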

The next obvious thing to look out for is what Microsoft do with Azure pricing – the assumption is that they will match AWS as per their previous announcement to “peg” their pricing to AWS. Ouch – imagine being an exec and getting out of bed in the morning to find out that you need to drop your prices by 30-80% across the board!

[ADDED 2nd April - Microsoft have done just that - see announcement on their blog here]

So what conclusions can we draw from all this? Well here are mine:

  1. What’s cheapest today is not necessarily cheapest tomorrow – so optimise costs for the long term, not the short term. OK, if you just want some server or storage capacity for a short time then go with the cheapest I guess, but in reality I’m talking about enterprise workloads and it’s never “a short time” – storage is for life, not just for Christmas :) – and the cost of moving between providers might outweigh any benefit. Also, the costs are now so low for some workloads (e.g. if I’m playing around with some feature on AWS) that they are trivial anyway – so convenience and sticking with whatever minimises usage friction are paramount for me. So rather like choosing a bank to save your money with, where you might want to go for the savings account with the best long-term track record of consistently high interest rates rather than the headline-grabbing “bonus” offer – when selecting an IaaS cloud provider it’s their trajectory that matters (and hence their ability to leverage mega-scale). It’s not a great time to be a sub-scale (and sub-scale these days still means freakin’ huge) cloud provider unless you’ve got some specific niche offering…
  2. In general, we don’t recommend buying AWS Reserved Instances (RIs) for longer than a 1 year term. The 3 year term often makes more financial sense at the moment of purchase, but in reality prices are dropping faster year-on-year than the additional saving is worth (see the illustrative sketch after this list). This makes sense really, as AWS virtually created the IaaS market only 8 years ago, so a 3 year commitment is still a lifetime in this market. In fact, now is a great time to buy 1 year AWS RIs, as it’ll be a few months (you’d have thought!) until the next round of potential price drops – maybe timed to coincide with the next AWS Re:Invent conference in Las Vegas in November – so you’ll get the maximum saving. An exception to my point here is that sometimes 3 year RIs are useful for projects where the TCO needs to be completely fixed and predictable – i.e. cost predictability for a business case is the primary requirement.
  3. A mild concern about where all this is heading – in my view there’s enough competition in the market at present for it to be healthy (i.e. the consumer is winning at the moment), but there is a risk that all but the most massive cloud service providers are squeezed out and the resulting oligopoly means that prices start to creep up again. You could argue that Microsoft’s price pegging announcement is an early sign of an oligopoly forming – reminiscent of the behaviour of the supermarket sector in the UK (where 4 retailers have 76.5% of the market). We’re a few years away from this risk though, so I don’t think it should influence enterprises’ investment and vendor selection decisions today.
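As promised in point 2, here’s a back-of-envelope illustration of the 1 year vs 3 year RI trade-off. The hourly rates below are entirely made up for the purposes of the example – they are not actual AWS prices – and I’ve ignored upfront fees and discount tiers to keep the arithmetic simple:

# Hypothetical rates, NOT real AWS pricing: a 3yr RI locks in $0.10/hr today,
# whereas 1yr RIs start at $0.12/hr but can be re-bought ~30% cheaper each year.
$ echo '0.10 * 8760 * 3' | bc -l                 # 3yr RI total: $2628
$ echo '(0.12 + 0.084 + 0.059) * 8760' | bc -l   # three successive 1yr RIs: ~$2304

On those made-up numbers the three successive 1 year terms come out cheaper overall, despite the higher starting rate – which is exactly the effect I’m describing.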

We’re loving it – what a great time to be migrating customers’ IT workloads to a cheaper, more agile platform where the price is only going down!

What does last week’s Cloud Expo Europe tell us about the maturity of the market for cloud services in the UK? As an Amazon Web Services Premier Consulting Partner, Smart421 had a stand in the Amazon Web Services Village which gave us a great opportunity to have numerous customer conversations. Wayne Stallwood, one of the AWS Architects from our internal AWS Practice, supported our sales and marketing staff on the stand, and we compared notes afterwards to draw out the key themes.

First of all, one immediate observation was that people were more openly talking about hosting production/live systems in the cloud, i.e. not just the usual dev, DR and startup workloads. We’ve been at this cloud game for about 4 years now, so it is far from new for us – indeed we started to see this shift at least a year ago, if not longer (although as a side note, it was interesting to hear Mark Russinovich from Microsoft Azure saying “the cloud is new” this week). Some of the presentations at Cloud Expo Europe reflected that, for example a talk about DDoS hardening – very much a live-systems concern. There were also lots of questions about performance stability, resources, scalability, reliability etc – again, more enterprise-level considerations.

Smart421 stand

Balancing this though, it was somewhat alarming that some of the people coming to the stand still wanted to talk about the buzzwords without really knowing what they mean… so we had a few openers along the lines of “so this ‘big data’ thing… what does it do for me?” – and if you looked at the name badges, it was established enterprise people asking. This tells us that there’s still a huge lump of “educational debt” to overcome in the enterprise space.

I had time to attend a couple of presentations but they were pretty awful – dull vendor pitches. You need to choose carefully at these events: the attendees typically don’t pay to attend, so the bulk of the funding for the event has to be sourced from vendors, and hence they all get to present. There is always some great content though – you just need to be selective and accept you’ll get a few duds.

Instead, I devoted the bulk of my time to understanding Red Hat‘s direction of travel, especially in relation to OpenStack – I’m fascinated by the cooperation and competition in this area (e.g. from Mirantis), and I’m interested to see how the delivery of private clouds plays out as enterprises use them as a not always sensible stepping stone to the inevitable destination of public cloud. Inexplicably, though, Red Hat were squirreled away on an upper floor and poorly represented in the online show mobile app, so they were pretty hard for people to find. I also took some time to catch up with AWS colleagues old and new – including AWS Technical Evangelist Ian Massingham.

The Cloud Expo Europe event itself was co-located with Data Centre World (just over half the floor space) and Big Data Expo Europe (really just a thinly populated stream within the Cloud Expo event), and it was a bit odd to be wandering around the show floor and then stumble into the “dark side” with vendors trying to pitch cooling, racking and UPS power systems to me. I don’t want to build a data centre, ok, AWS has already taken care of that for me :).

The pure cloud content felt smaller to me than in previous years, and so as a final thought – I wonder if this reflects not so much that the cloud market is going off the boil, but the opposite: it’s now mainstream enough that it’s harder to drum up interest for events riding the latest hype?

In yesterday’s blog post I summarised the cloud services broker role with some definitions, and concluded that I do indeed appear to work for one of them – and I suspect many vendors at this week’s Cloud Expo Europe (#cee14) might lead on this from a marketing point of view.

We’re delivering service intermediation/customisation and service aggregation/integration, but one thing we are not really doing (or seeing any demand for) at the moment is dynamic or semi-dynamic workload migration. It’s not just dev & test any more – these days we are migrating complex production environments onto AWS, customer after customer. But we are not providing customers with the means to dynamically move or spread those IT workloads across different cloud providers. It’s certainly something we could do from a technology perspective, and most of our deployments have some hybrid aspect to them.

The ability to migrate IT workloads dynamically (i.e. at run-time, not at deployment time) is something I sometimes see as a capability under the “cloud broker” banner, but in my view it really just doesn’t make sense – at least not at the moment.

The rate of innovation in the IaaS/PaaS/DaaS market is such that most of the other vendors are playing catch-up with AWS, as AWS continue to differentiate themselves from the following pack. This shows no sign of slowing down over the next couple of years – so the only way a migrated workload is going to work across multiple cloud vendors is if it relies only on the lowest common denominator functionality across those vendors, which is typically basic storage, virtualised compute and connectivity. Or you have to architect your solution to take into account deployment differences across the cloud providers you intend to use – and be able to effectively monitor and support each of them. That’s twice the work and complexity, and not something you really want to debug. Did your load balancing just stop working as you expected – it worked last week… mmm… I wonder if our load balancing configuration behaves exactly the same across all our cloud service providers? And so on…

Even storage – the most commoditised of the building blocks of IaaS (you would have thought) – contains some interesting differentiation: not on price any more, as Google/Microsoft/AWS are effectively price-matching these days, but on features like access control, archiving to cheaper storage, automated data life-cycle policies etc.
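To give a concrete flavour of the life-cycle point, here’s a sketch of how you might archive ageing objects to cheaper storage on AWS. The bucket name, prefix and day counts are purely illustrative, and the exact s3api subcommand and JSON shape have evolved across CLI versions, so treat this as a sketch rather than gospel:

# Illustrative only: move objects under logs/ to Glacier after 30 days,
# then expire them after a year. "mybucket" and "logs/" are placeholders.
$ cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "archive-old-logs",
    "Prefix": "logs/",
    "Status": "Enabled",
    "Transition": { "Days": 30, "StorageClass": "GLACIER" },
    "Expiration": { "Days": 365 }
  }]
}
EOF
$ aws s3api put-bucket-lifecycle --bucket mybucket --lifecycle-configuration file://lifecycle.json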

The bottom line is that if you are going to architect your applications so they can run on any cloud service provider, then you can’t easily use any of the good bits and hence your value in migrating to a cloud solution is diminished.  Not ruined, just reduced.

There are now a bunch of brokerage tools out there from vendors that claim to give you this workload migration capability, but what I’ve seen so far is disappointing. For example, one tool I looked at recently required a custom template to be created for each cloud provider – so whilst the end user might get a menu choice of “deploy to AWS” or “deploy to Azure”, under the covers you still need a bunch of experts in each cloud service provider’s technology, and those experts need to keep abreast of new functionality continually. You can create an impression of homogeneity, but it’s just a veneer.

In our experience, even in very large enterprise estates (e.g. where we’ve deployed and managed up to 750 AWS instances made up of numerous discrete environments), whilst the IT workloads might be relatively consistent in nature (e.g. there might be a corporate standard for this OS and that application server etc), there is always sufficient variance in each project’s requirements that a completely cookie-cutter approach to environment deployment and self-service just does not work.  Each project needs slightly different software, or software versions, or server specifications, or connectivity requirements etc etc – and the list goes on.  And if they didn’t – well – you’d hope the projects would be merged into a single project if they were so similar in their needs, right?

So – given that it’s taken the IT industry this long to get to the point that “as a service” is really possible, for the next couple of years at least let’s focus less on hiding away the good bits of cloud and the really useful differentiating features of the leading cloud providers, and focus more on actually exploiting them please!

PS If you happen to be attending Cloud Expo, catch me on the Smart421 stand or let me know you are there via @SmartCTO

The first Cloud Expo event I attended 2 years ago was striking for the myriad of traditional hosting companies who were cloud-washing their offerings (usually quite blatantly and badly I felt). Last year what struck me was the myriad of new small vendors selling niche cloud-related product offerings – data transfer optimisation, security products, management products, etc. I wonder what the theme will be this year? It’ll be interesting to see how many vendors are wearing the “I’m a cloud brokerage” badge at this week’s Cloud Expo.

Whilst I was at AWS’s Re:Invent conference last November, one of the guest speakers at the partner day was Tiffani Bova, Distinguished Analyst from Gartner. Part of her presentation covered the topic of cloud brokerage, something Gartner have been talking about for quite a while, and something Tiffani and I had some debate about afterwards.

I must admit, it took me a while to wrap my head around the concept of cloud brokerage, partially as the pushing of the term was coming more from the analyst community than from the rest of the cloud industry. William Fellows from 451 Research refers to this as “…a ‘cloudemic’ of firms now stamping ‘cloud broker’ across their service portfolios”. Tiffani’s view was that 90%+ of the AWS partners in the room (including Smart421) were brokers. It’s such a broad definition – Gartner’s, for example, is:

Cloud services brokerage (CSB) is an IT role and business model in which a company or other entity adds value to one or more (public or private) cloud services on behalf of one or more consumers of that service via three primary roles including aggregation, integration and customization brokerage.

The great thing about definitions is that you can never have enough :). Way back in mid 2011 NIST published the following definition of a Cloud Broker…

[Image: NIST’s definition of a Cloud Broker]

The default view taken in society is that anyone with the title “agent” (aka broker) is looked down upon – e.g. estate agents, recruitment agents etc :). But by this definition I guess we’re all brokers in one way or another, even if it’s just combining toast and scrambled eggs to make the kid’s breakfast in the morning (aggregation).

Looking at what Smart421 delivers for our customers – we integrate different cloud and non-cloud services, we design and implement complex cloud environments and we add a 24×7 ITIL-based service management capability on top including ongoing capacity and cost optimisation. We also add value by handling access management, enhanced security, managing billing complexities and bringing new market innovations to our customers (as cloud service providers like AWS are still innovating and releasing functionality at an amazing rate, too fast for customers to keep up generally).  I guess that means I get to wear the badge too!

In a blog post tomorrow I’ll talk some more about one of the oft-touted cloud brokerage promises – that of dynamically migrating IT workloads across cloud providers (closely related to the dreaded phrase “bursting into the cloud”), and of making deployment-time decisions about which cloud service provider to use…and why I don’t believe in it.

PS If you happen to be attending Cloud Expo Europe this week, catch me on the Smart421 stand or let me know you are there via @SmartCTO

I stumbled across the fact today that Microsoft have quietly withdrawn their Content Delivery Network (CDN) offering from Azure for new customers. Several Azure users report a response from Windows Azure Technical support along these lines…

We’re in the process of building out our next generation Windows Azure Content Delivery Network (CDN) architecture, and during this time we are no longer accepting new CDN customers.

We highly encourage you to wait until we’re ready building our next generation Azure CDN service.

That’s fine – because customers who were already using it have still got it, right? Well, no. Upon reflection, this feels like a bigger deal than I first thought. In the earlier days of Azure the CDN was a top-table feature and selling point, and then with little ceremony it quietly disappears. So little ceremony, in fact, that forum posters were asking where it had gone, assuming it was a portal bug. For any cloud provider to remove a key part of their cloud service catalogue whilst they create a better one – it’s just not really on. It also makes me wonder how many people are really using it, as surely there would be more fuss in the blogosphere if they were. To be fair to Microsoft, I get the impression that if I asked for access to this functionality to be enabled for my account I might well get it, but it does raise the question of how enterprise-strength the CDN service was if they felt they had to replace it.

Come on Microsoft – this is not really good enough in the cloud era. As cloud consumers we’ve got to believe that the PaaS services that we are engineering into our solutions are going to be there tomorrow (or at least with a smooth transition capability, on my timetable), or else we won’t bother…

In just over 24 hours, we’ll be joining with IBM UK (@ibmmobileuk) and others to deliver the IGNITE event (@NimbusNinety  #NimbusIGNITE), at the Skyloft, a swish venue on 28th Floor, Millbank Tower in London.

We’ve been looking forward to it. Any initiative that brings us into contact with senior IT and business people is something we thrive on.

The organisers, Nimbus Ninety, are a class act. We like the focus Nimbus places on communities around special interest groups. Around 5,500 people belong to one or more of their communities: Business Leadership Exchange; Obis Omni; The Cloud Circle; The Big Data Insight Group.

The 2-day IGNITE event is a meeting of minds, attracting over three hundred members from across these communities. This year’s theme is squarely disruptive change:

Leveraging the disruptive opportunities of cloud, big data, mobile and social: winning strategies for success in the digital age

As co-sponsor of the annual Market Trends Report (MTR), we have been working with editor Mark Young (@Mark_J_Young) to analyse the findings. Over 260 people took the survey during September and October, which means the results are the most recent and insightful available anywhere in the IT market. The results are also highly significant.

Robin Meehan, CTO at Smart421 (@SmartCTO) will tomorrow present some of the findings as well as his own unique view on what he is seeing in the enterprise market.

Check back to this blog site tomorrow morning for a special blog post on this very subject.

The effervescent Emma Taylor, founder and managing director of Nimbus Ninety, will personally introduce and be MC for the IGNITE MTR session.

A number of Smart421 customers will be attending and we very much look forward to seeing them again. We will also be hosting a number of On Site Meetings with UK enterprises who have expressed an interest in being introduced to Smart421.

“Monetise your Enterprise assets” is our main message. We’ll be evidencing how our credentials in Enterprise Cloud, Mobile First, API Factory and Big Data make Smart421 the enterprise partner of choice for just about everyone in the room tomorrow.

Robin’s most recent public presentation was on Big Data, recorded in September 2013. Watch it on YouTube here.

Please share this blog by clicking on the social buttons below, or via the short URL here http://bit.ly/HAQn6t

Please Rate and Like this blog. Our readers want to know YOUR opinion, so please take a moment to leave a Comment.

It caught my eye the other day that Microsoft announced an equivalent to Amazon Web Services’ Direct Connect offering, i.e. the ability to connect from your premises to your cloud deployment without going over the Internet. The press release says this capability is “expected to be available in first half of 2014” – and I assume that this initial launch will be US only with Europe to follow later, although it doesn’t say.

Smart421 was a Direct Connect launch partner in the European region for AWS back in Jan 2012, although the initial US launch was way back in August 2011. So going on that basis, I can now put a crude estimate on how far behind AWS the Azure platform really is – at least two and a half years :)

Anyway, now is as good a time as any to share some brief stats from our real world experience of deploying Direct Connect for the European region. I’m not aware of much data in the public domain about Direct Connect latency measurements in the European region – so if you know of some, please comment on this post to let me know.

On a 1 gigabit connection, for an ICMP (i.e. ping) round trip we typically see a latency of circa 12-13ms for Direct Connect versus 33ms via a VPN over the Internet, i.e. about a 60% reduction in latency.
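For anyone wanting to gather comparable figures for their own environment, the measurement really is as unglamorous as it sounds – something along these lines, where the two target IPs are placeholders for an instance reachable over each path:

# Placeholder addresses: 10.0.1.5 reached via Direct Connect, 10.1.1.5 via the VPN
$ ping -c 100 10.0.1.5 | tail -1   # summary line reports rtt min/avg/max/mdev
$ ping -c 100 10.1.1.5 | tail -1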

[Chart: AWS Direct Connect vs VPN round-trip latency measurements]

This data needs to be considered carefully as there are a multitude of factors at play here depending on the specific customer environment and requirements – such as the Internet connectivity for the VPN, and crucially where the customer “on-premises” equipment is in network terms with respect to the AWS Direct Connect location in London Docklands. Also any comparison will vary depending on time of day etc. I’m deliberately not providing any stats on achieved bandwidth here as there are just too many factors involved – principally that the limiting factor is likely to be any MPLS connectivity involved in the architecture rather than Direct Connect itself.

Still – it’s interesting data nonetheless…thanks to ‘Smartie’ Wayne for compiling the data.

Please share this blog using the social buttons below or short URL http://bit.ly/1e8CCq6

Please take a moment to Rate and Like this Post. Our readers want to see YOUR opinion so please post a Comment.

A graphical user interface isn’t the only way for Amazon Web Services customers to control their cloud deployments.

Amazon has recently released the AWS Command Line Interface (CLI), capable of controlling EC2, S3, Elastic Beanstalk, Simple Workflow Service and about twenty other services – but with no support yet for Glacier, SimpleDB, EMR, CloudSearch, Data Pipeline or CloudFront.

The AWS CLI commands take the form:

$ aws SERVICE OPERATION [OPTIONS]

An example would be “aws ec2 help”.

Each operation generates its output in JSON format by default; passing “--output text” or “--output table” will produce plain-text or tabular output instead.
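For instance, the same harmless read-only call (assuming your credentials are already configured) rendered three ways:

$ aws ec2 describe-regions --output json    # the default
$ aws ec2 describe-regions --output text    # tab-separated, grep/awk-friendly
$ aws ec2 describe-regions --output table   # human-readable ASCII table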

I found trying to parse the text or tabular formats in a bash script quite tricky, so I opted to parse the JSON using the jq (JSON query) tool.

$ aws ec2 describe-instances | jq '.Reservations[].Instances[].KeyName'

The command above will return the key pair name associated with each instance.

To view the contents of an S3 bucket as a directory-style listing, the AWS CLI command below will generate the required output.

$ aws s3 ls s3://mybucket
      LastWriteTime     Length Name
      -------------     ------ ----
                           PRE myfolder/
2013-09-03 10:00:00       1234 myfile.txt

The AWS CLI can also perform recursive uploads and downloads of multiple files via a single folder-level command, and will run these transfers in parallel for increased performance.

$ aws s3 cp myfolder s3://mybucket/myfolder --recursive
upload: myfolder/file1.txt to s3://mybucket/myfolder/file1.txt
upload: myfolder/subfolder/file1.txt to s3://mybucket/myfolder/subfolder/file1.txt
…

As with all things AWS, configuring the command line tool isn’t exactly trivial, but Amazon offers an easy step-by-step install guide and extensive documentation to help new users get started.
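The initial credential setup is at least a short interactive affair – a sketch of the prompt sequence, with placeholder values:

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: eu-west-1
Default output format [None]: json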

The tool is available for Windows, Mac and Linux and also comes pre-installed on the most recent versions of the Amazon Linux AMI, Amazon’s supported and maintained Linux image for use on its EC2 cloud-computing service.

Very powerful.

Please share this blog using the social buttons below or short URL http://bit.ly/15OlOEP

Please take a moment to Rate and Like this Post. Our readers want to see YOUR opinion so please post a Comment.

At today’s AWS User Group meeting, I was reminded of CohesiveFT’s VNS3 offering – this provides an overlay network capability on top of various cloud providers. The bulk of their customers are using VNS3 on AWS, but it’s also available on other cloud providers, and even across multiple cloud providers in a single deployment.

The historical roots of VNS3 are in pre-AWS-VPC days, and at that point in history (Dec 2009 or so) it must have been a very attractive offering. In fact, providers like CohesiveFT were delivering Software Defined Networking (SDN) solutions long before the term was widely used. Now the differentiators offered by this kind of solution are looking increasingly thin as AWS’s VPC offering has matured and matured, but there are still some advantages of this kind of product that I can see some customers paying good money for. They are:

  • Network-level encryption, i.e. automatically encrypting all traffic between all AWS instances – something that VPC doesn’t offer today; it assumes the network inside a VPC is trusted. Whilst I can’t see any compelling reason why I’d need this from a technical standpoint, I can see that a customer’s CISO might insist upon it, so it’s nice to know how I’d do it.
  • If you need UDP multi-cast support
  • If you want to treat different AWS regions (and maybe other cloud providers) as a single network
  • If you want a higher level of IPSec encryption than AES 128-bit
  • If you want to avoid locking yourself in to a specific cloud service provider’s approach to network management

As AWS’s pace of innovation is so high, any innovators like CohesiveFT that start out as part of the product supplier ecosystem eventually have their differentiators subsumed into the AWS core offering – and worse than that, those features are then typically offered for free (as with AWS VPC) – destroying their market at a stroke. It’s pretty brutal!
