Following on from my post about Google, AWS and then Azure price cuts the other day, there’s an interesting summary of Rackspace’s position covered on TechCrunch. In summary, the Rackspace CTO John Engates explained that they are continuing on the same track of not matching the recent price drops – which is consistent with his blog from July last year where he said…

We at Rackspace don’t aspire to offer the lowest unit prices. We strive instead to offer the best value…

I suspect a key reason is because they can’t afford to play this game of chicken.

Looking at basic storage as it’s the easiest like-for-like comparison, Rackspace’s Cloud Files is still 10 cents/GB, so that’s now 3.33x the entry price for AWS S3, and 3.8x the entry cost of Google Cloud Storage. Whilst I firmly believe that agility is typically a stronger driver than cost in the enterprise market, that’s such a huge difference that I don’t see how a customer procurement department can ignore it. Rackspace is having to move up the food chain as the base services get commoditised underneath them, i.e. focusing on service management, OpenStack, DevOps etc – get (a bit more) niche or get out. I get the “focus on value” message, but it’s hard to show much differentiating value on relatively commodity services like storage. It looks like this price drop was one price drop too far for Rackspace’s pockets. And then there were 3…
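As a sanity check on those multiples, here’s the back-of-the-envelope arithmetic – assuming post-cut entry prices of roughly $0.03/GB/month for S3 and $0.026/GB/month for Google Cloud Storage (my figures at the time of writing; check the current price lists):

```python
# Entry-level object storage prices in $/GB/month (assumed figures, see above)
rackspace_cloud_files = 0.10
aws_s3 = 0.03      # post-cut S3 entry tier (assumption)
google_gcs = 0.026 # post-cut Google Cloud Storage price (assumption)

# Rackspace's multiple over each competitor
print(f"vs S3:  {rackspace_cloud_files / aws_s3:.2f}x")    # 3.33x
print(f"vs GCS: {rackspace_cloud_files / google_gcs:.2f}x")  # 3.85x
```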

PS As an illustration of the positive impact on our customers, we’ve recently re-priced a customer proposal that was already going through the Smart421 sales machine when these price cuts were announced, and it’s resulted in an immediate 17% overall AWS cost reduction. Nice.


I stumbled across the fact today that Microsoft have quietly withdrawn their Content Delivery Network (CDN) offering from Azure for new customers. Several Azure users report a response from Windows Azure Technical support along these lines…

We’re in the process of building out our next generation Windows Azure Content Delivery Network (CDN) architecture, and during this time we are no longer accepting new CDN customers.

We highly encourage you to wait until we’re ready building our next generation Azure CDN service.

That’s fine – because customers who were already using it have still got it, right? Well no, upon reflection, this feels like a bigger deal than I first thought. In the earlier days of Azure the CDN was a top table feature and selling point, and then with little ceremony it quietly disappears. So little ceremony in fact that forum posters were asking where it’s gone, assuming it was a portal bug. For any cloud provider to remove a key part of their cloud service catalogue whilst they create a better one – it’s just not really on. It also makes me wonder how many people are really using it, as surely there would be more fuss in the blogosphere if they were. To be fair to Microsoft, I get the impression that if I asked for access to this functionality to be enabled for my account I might well get it, but it does raise the question of how enterprise-strength the CDN service was if they felt they had to replace it.

Come on Microsoft – this is not really good enough in the cloud era. As cloud consumers we’ve got to believe that the PaaS services that we are engineering into our solutions are going to be there tomorrow (or at least with a smooth transition capability, on my timetable), or else we won’t bother…

Last week I was fortunate enough to attend my 4th annual DroidCon UK. DroidCon is a 2 day conference held at the Business Design Centre in Islington, London, which encourages Android developers and enthusiasts to come together and share their ideas, apps, and advice.

As with previous years, the first day is a bar camp (45 minute talks that are decided on the day, based on who is there and who wants to present something). The second day is more structured, with a pre-defined schedule of talks. There are also vendor booths around the main hall promoting products and services.

The purpose of this post is for me to summarise the event and mention some of the key things I took away from it.

Dependency Injection on android
The first talk discussed dependency injection and how it can be achieved on android via libraries such as RoboGuice and Dagger. A quick show of hands made it clear that very few people in the room were actually using DI. Coming from a predominantly Java/Spring background, I understand the benefits of using DI (giving objects exactly what they need, loose coupling and easier testing), but there is a certain stigma that DI holds when mentioned alongside android. RoboGuice is often assumed to be the “go-to” DI framework, however there is a new kid on the block, Dagger, from Square.
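For anyone unfamiliar with the pattern itself, it’s language-agnostic – on android you’d use RoboGuice or Dagger rather than rolling your own, but a minimal constructor-injection sketch (in Python, purely for brevity; all names here are invented for the example) shows the loose-coupling and testability benefits:

```python
# A class that constructs its own dependencies is hard to test; inject them instead.
class HttpClient:
    def get(self, url):
        return f"response from {url}"

class UserRepository:
    def __init__(self, client):  # dependency injected via the constructor
        self.client = client

    def fetch_user(self, user_id):
        return self.client.get(f"https://api.example.com/users/{user_id}")

# In a test, swap in a fake client - no framework required
class FakeClient:
    def get(self, url):
        return '{"id": 1}'

repo = UserRepository(FakeClient())
print(repo.fetch_user(1))  # prints: {"id": 1}
```

Frameworks like Dagger automate exactly this wiring (and do it at compile time, which matters on resource-constrained devices).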

Beta testing with TestFairy
Without a doubt, this was my favourite session of the conference. I was impressed with the service they offer, and it’s free! For those of you that have at some point developed an android app, chances are you’ve emailed the APK around to a few friends and asked them to check it out. They may have encountered some errors, but all you’ll likely hear is “it crashed when I clicked the blue button”, which isn’t entirely helpful.

Fortunately, the guys over at TestFairy have developed a service whereby you can upload your APK to your online account and distribute it to your friends. When your friends install and open the app, their interactions are recorded, so if there is an error you can see a video clip of exactly what they pressed to trigger it. Logs are included, along with various other useful information such as battery, CPU and memory usage. Finally, you can spot that bug that only occurs if an incoming call arrives whilst your app is open! Please, please go check them out.

OCR on android
Next up was another bar camp discussion around OCR on android and how it can be achieved using OpenCV. I was quite interested in this talk as I have done some work with OCR on android/iOS for Aviva. The discussion covered the basics of using the OpenCV library, however the emphasis was mostly on how non-trivial OCR is. I can certainly vouch for that: in order to get results that were even remotely close I needed to use blacklists and whitelists for characters, and pre-process the image. If you’re interested in OCR on android I’d highly recommend having a look at the tess-two project on github, which is a fantastic Java wrapper around the tesseract tools.

Easier SQLite with Cupboard
I haven’t delved too deeply into SQLite storage on android, but from what I did experiment with, it wasn’t the easiest of APIs to work with. Fortunately there is a library called Cupboard that makes this easier with its fluent API. Hugo Visser, the creator of Cupboard, gave a brilliant talk on how to get started using it.

Instant backends with Windows Azure
Microsoft at an android conference is not what you’d typically expect, however they did provide a great session on how to get started building mobile backends on the Azure platform. Microsoft provide an intuitive interface so you can quite quickly create and deploy a backend with social sign-in via the usual providers (Google, Facebook). You’re also able to download client libraries so that you can quickly and easily exchange data between your backend and the mobile client. Whilst this looks like a promising offering that enables you to get moving fast, I’m quite sceptical about whether ease/speed is still a benefit once your backend becomes more complex.

There is an excellent tutorial here

Mobile backend starter, from Google
Google is also on the BaaS scene, offering the Mobile Backend Starter. This is based on existing Google Cloud Platform services like App Engine and Cloud Datastore. Similar to the Azure offering, you’re able to deploy a sample backend in a few clicks and are given client libraries to make the communication as easy as using a Java API.

A colleague of mine, Charlie Simms, also attended the event; here is his take on it.

A variety of SDKs were promoted at this year’s event.

So what did it for me:

The event had 86 talks lined up, so there was plenty to choose from, but having done my MSc at Surrey and been on a Satellite Communications Workshop (http://www.surrey.ac.uk/ee/study/pd/courses/satellite_communications.htm), I was both intrigued and puzzled by the keynote on Friday morning, “Smart-Phones in Space” (http://uk.droidcon.com/2013/sessions/keynote-smart-phones-in-space-a-guide/) – why would you want to put a mobile phone in space? Dr Chris Bridges led a small team at the Surrey Space Centre (http://www.surrey.ac.uk/ssc/) to launch a Nexus One into low Earth orbit. The talk described the journey of the first of many ‘phone-sat’ projects and how hardware and software challenges were overcome to achieve this scientific feat in teaching and research. A brilliant talk – both interesting in itself and a demonstration of just how much advanced technology is packed into a smartphone.

The second thing that did it for me was the talk by Benjamin Cabé, “Leveraging Android for the Internet of Things with Eclipse M2M”. Benjamin is an open-source M2M evangelist and chair of the Eclipse M2M project (http://m2m.eclipse.org). The talk briefly introduced the Eclipse M2M projects with cool end-to-end examples, combining the use of the Android API for fleet tracking, remote monitoring, and home automation. Importantly, it also discussed the challenge of operating large fleets of M2M devices, and the need for cloud-based scalable infrastructures. With the vast amounts of data being collected from possibly billions of sources, there will be an ever greater need for cloud-based services and support for big data. The internet of things is still in its infancy, but it won’t be long before it starts to impact all our lives in the same way that mobile technology is doing today. Certainly a space to watch closely.

The weird and wonderful

If you’ve made it this far, then let’s end the post on some of the weird and wonderful ways that android is being used.

This year we had the world’s (probably) only android-powered hydroponics system for growing plants indoors.


Android powered hydroponics system from Seraku

There was also an android-enabled car, an android-powered mirror (which you can operate with your best Tom Cruise “Minority Report” impression), and a guy controlling an android-powered robot via his android-powered smartwatch.

Lego Mindstorms robot controlled via AndroidScript / Smartwatch


I feel that every year droidcon has a bit of a theme to it – well, perhaps not a theme as such, but one or two topics that are in the limelight of the conference. Last year the focus felt like it was on NFC; this year there was quite a focus on backends and security, which may pinpoint android’s evolution into the B2E space. With a presence from Motorola’s RhoMobile promoting their enterprise platform, droidcon isn’t just attracting indie developers and B2C solutions.

Please share this blog using the social buttons below or via this short URL  http://bit.ly/1astU7y

Please Rate and Like this blog.  Our readers want to know what YOU think, so please take a moment to leave a Comment.

It caught my eye the other day that Microsoft announced an equivalent to Amazon Web Services’ Direct Connect offering, i.e. the ability to connect from your premises to your cloud deployment without going over the Internet. The press release says this capability is “expected to be available in first half of 2014” – and I assume that this initial launch will be US only with Europe to follow later, although it doesn’t say.

Smart421 was a Direct Connect launch partner in the European region for AWS back in Jan 2012, although the initial US launch was way back in August 2011. So going on that basis, I can now put a crude estimate on how far behind AWS the Azure platform really is – at least two and a half years :)

Anyway, now is as good a time as any to share some brief stats from our real world experience of deploying Direct Connect for the European region. I’m not aware of much data in the public domain about Direct Connect latency measurements in the European region – so if you know of some, please comment on this post to let me know.

On a 1 gigabit connection, for an ICMP (i.e. ping) round trip we typically see a latency of circa 12-13ms for Direct Connect versus 33ms via a VPN over the Internet, i.e. about a 60% reduction in latency.
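For the record, the 60% figure is just the relative difference between the two round-trip times – a quick sketch of the arithmetic:

```python
vpn_rtt_ms = 33.0             # ICMP round trip via VPN over the Internet
direct_connect_rtt_ms = 13.0  # upper end of the 12-13ms observed over Direct Connect

reduction = 1 - direct_connect_rtt_ms / vpn_rtt_ms
print(f"latency reduction: {reduction:.0%}")  # 61%, i.e. "about 60%"
```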


This data needs to be considered carefully as there are a multitude of factors at play here depending on the specific customer environment and requirements – such as the Internet connectivity for the VPN, and crucially where the customer “on-premises” equipment is in network terms with respect to the AWS Direct Connect location in London Docklands. Also any comparison will vary depending on time of day etc. I’m deliberately not providing any stats on achieved bandwidth here as there are just too many factors involved – principally that the limiting factor is likely to be any MPLS connectivity involved in the architecture rather than Direct Connect itself.

Still – it’s interesting data nonetheless…thanks to ‘Smartie’ Wayne for compiling the data.


An interesting announcement from Google at their recent I/O conference was per-minute billing for virtual machine capacity. As we work with AWS a lot, cloud billing is a subject close to my heart – and so this caught my attention. On the surface, per-minute billing is attractive as there is some inherent wastage in the per-hour model used by AWS, Microsoft Azure etc. When estimating likely AWS usage charges for customer engagements (using the excellent AWS online calculator, which says it is still in beta testing but is actually rock solid), we take great care with the assumptions made about the number of hours that instances will be running for – the classic example of this is dev/test environments, e.g. it’s quite easy to assume 5 days per week at 10 hours a day. What we’ve found over time is that because customers and their development staff have typically been brought up on a diet of inefficient server use (i.e. make a guess at what I need, add some capacity contingency on top, pay for it upfront – it’s a sunk cost, so I don’t care how efficiently I use it), there is not a strong culture of turning off environments when not in use. Also, we control dev/test environments using our SmartSentinel cloud management tooling – and you need to allow a few minutes for instances to start up/shut down to ensure you don’t fall into an additional hour of cost (especially Windows instances :)).

So per-minute billing is attractive as it just cuts down on some over-billing when we “spill” into another hour of usage. But – and this is a big but – the logistics of IaaS billing are already complex enough that I don’t really want it to become more complex. We manage cloud billing for a number of customers, and in a 30 day month we have 720 distinct hourly measurement points where virtual machine usage charges are accrued (to keep it simpler – I’m ignoring other usage based charging, e.g. for storage etc, here). Even with this level of data, validation, reconciliation and invoicing of charges is already very complex. If that became 43,200 measurement points in a month, I think it would tip our finance team over the edge :). The complexity stems from the fact that AWS have some really attractive sophistications to their charging model – we like these features and don’t want to lose them, e.g.
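To illustrate the jump in billing data volume, here’s where those figures come from, plus the worst-case per-hour “spill” waste (the hourly rate below is an illustrative figure only, not a real AWS price):

```python
days_in_month = 30
hourly_points = days_in_month * 24      # 720 hourly measurement points per VM
per_minute_points = hourly_points * 60  # 43,200 per-minute points per VM
print(hourly_points, per_minute_points)  # prints: 720 43200

# Worst-case "spill": an instance stopped one minute into a new hour
# is still billed for the full hour under per-hour pricing
hourly_rate = 0.12  # $/hr, illustrative only
max_wasted_per_spill = hourly_rate * 59 / 60
print(f"max waste per spill: ${max_wasted_per_spill:.3f}")
```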

  • the ability to reserve instances over a 1 or 3 year time period, i.e. make a commitment and share the cost advantage with AWS
  • the ability to choose 3 different types of reservation based upon likely usage levels, e.g. 100% on all the time, or rarely on (e.g. for a DR scenario)
  • the ability for a customer to get the benefit of their reservations across their various AWS projects/deployment, e.g. if across your AWS estate on average you always have 5 m1.large instances running, but no individual project has them running all the time, you can still reserve the instances and get all the price advantage as the reduced per-hour cost is shared across the entire estate
  • volume discounts
  • ..and that’s before I even get to spot pricing!
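As a hedged illustration of that estate-wide reservation point (all rates below are invented for the example, not actual AWS pricing): if on average 5 m1.large instances are always running somewhere across the estate, reserving them pays off even though no single project runs one continuously:

```python
hours_in_month = 720
on_demand_rate = 0.26  # $/hr for an m1.large - illustrative only
reserved_rate = 0.12   # effective $/hr after amortising the reservation fee - illustrative

always_on_instances = 5  # averaged across the whole AWS estate

on_demand_cost = always_on_instances * hours_in_month * on_demand_rate
reserved_cost = always_on_instances * hours_in_month * reserved_rate
print(f"on-demand: ${on_demand_cost:.0f}/month, reserved: ${reserved_cost:.0f}/month")
print(f"saving: {1 - reserved_cost / on_demand_cost:.0%}")  # 54% with these rates
```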

These pricing model sophistications are real differentiators and allow a much more tailored cost model for specific customer deployment scenarios – and I think they are more important than per-minute metering of usage. It’ll be interesting to see if AWS or Azure follow the Google lead (as tends to happen with IaaS pricing between the big boys). Cloud billing truly is becoming a big data problem – if this carries on we’ll need to run up an on-demand AWS EMR Hadoop cluster to do billing reconciliation :)…

Unfortunately I didn’t manage to make a strong enough case to travel to Las Vegas in person :(, so I did the next best thing and watched the live media stream yesterday evening – it was just like being there, but without Tom Jones or any showgirls. The two big things from Andy Jassy (the AWS SVP) were an approx 24% storage (S3) price reduction across all regions from 1st Dec, and the launch of a limited beta version of data-warehousing-as-a-service. On the second of these, AWS Redshift (which is discussed in more detail in Jeff Barr’s post here) is a direct challenge to the existing column-oriented database world – Teradata, IBM, Oracle etc. It looks really interesting and is a classic cloud use case, so it makes sense for AWS to tackle it – it requires large volumes of storage and compute power and is a traditionally high-CapEx market sector. I’m looking forward to playing with it.

As for the S3 price reduction…well, 24% is a pretty amazing step change in pricing. What other industry sees such dramatic changes in price? I wish it was happening to UK gas & electricity pricing :). Having said that, Google storage costs currently start at $0.095 per GB per month, so it looks like AWS are price matching with Google. Microsoft Azure pricing was still at $0.125 per GB when I checked this morning, but presumably they will have to respond (to be precise – this is not quite an apples-for-apples comparison, as Azure replication is over a significant distance whereas AWS S3 replication is between AZs which are separate but typically only an undisclosed number of kilometres apart). As discussed before on the blog, I can’t see how the majority of smaller (and by that I still mean very big!) IaaS cloud players can possibly compete with this perfect storm of huge economies of scale and immensely deep pockets. Looking at our current AWS billing (which includes customers’ AWS accounts that we manage on their behalf), S3 storage costs account for <5% of the total as the lion’s share of the cost relates to compute – so more price reductions there as well please!
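Assuming the pre-cut S3 entry tier was $0.125/GB (the same figure Azure was still charging) and the new price is the Google-matching $0.095, the 24% figure checks out exactly:

```python
old_price = 0.125  # $/GB/month - S3 entry tier before the cut (assumption)
new_price = 0.095  # matching Google Cloud Storage

cut = 1 - new_price / old_price
print(f"price reduction: {cut:.0%}")  # prints: price reduction: 24%
```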

[Update 30/11/12 - Since reading Jeff's post I've realised that these cost savings also apply to EBS Snapshots (der...of course you'd expect that), so this actually makes the cost saving from this one price reduction more significant, getting up to 8% or so]

Organised by the UK Windows Azure User Group, this free all day conference provided a great opportunity to catch up on the latest developments, particularly given the Microsoft announcement a couple of weeks back.

Core to this announcement was Microsoft’s move into Infrastructure-as-a-Service (IaaS), and the keynote by Scott Guthrie positioned IaaS (described as Virtual Machines) alongside Microsoft’s current Cloud offerings, which to date have focused on Platform-as-a-Service (PaaS – now labelled Cloud Services by Microsoft) and Software-as-a-Service (SaaS – Office 365 for example).

MS Cloud Day

Despite the lack of internet connectivity for a large part of the presentation (what is it with Cloud demos and loss of connectivity?!?) Scott did a great job talking through the slides, clearly describing the alignment of each of the deployment options: On-premise vs Virtual Machines vs Cloud Services vs SaaS.

In addition to Virtual Machines, the new Web Sites service was also discussed which gives Azure customers up to 10 web-sites and 1GB of storage for free (whilst in preview period, see here for further details). The demonstration showed how easy it is if you simply want to re-host an existing web-site on Azure whether it be ASP.NET, Node.js, PHP or even classic-ASP. So the new Web Site and Virtual Machine services provide a simple route to hosting applications on the Azure platform, but there is the added benefit of the Azure management aids, real time statistics and in the case of Web Sites incremental deployments and continuous integration (through TFS or GIT) too.

So where does this fit with PaaS? Well, Steve Plank from Microsoft provided some answers with another demonstration. With Cloud Services you get a host of services to call upon, including Storage, Database, Identity, Caching and Service Bus, and the demo showed that if you design your application from the ground up utilising these services, you benefit from an end-to-end application architecture that can be deployed and running in minutes at the click of a button. It is this architecture that really gives you elasticity and flexibility in the places you need it.

A good day and exciting times, with the options and landscape constantly changing. Nicely summed up by another Smartie (Andy Carter) – just a couple of days after he passed the Azure certification, MS announced the new services: ‘I guess there’s a load more stuff I need to learn about’… (Well done btw!)

Last Tuesday was another instalment from the UK Windows Azure User Group (formerly the London Windows Azure UG), which this time included a presentation from our very own Simon Hart. As well as myself, two other Smarties also attended the event to heckle support Simon.

The session kicked off with an apology from Microsoft’s Mark Quirk (Azure Product Manager) due to last week’s Azure downtime. Mark described this as the first incident in two and a half years and although any incident like this is never a good thing, it was great to see Mark there and helped reinforce to me just how important this stuff is to Microsoft.

The first agenda item was from Yossi Dahan (MS Technical Specialist), and even though he wasn’t feeling well he did a great presentation on Azure and how it can support mobile application development. Yossi demonstrated a mobile solution called BabelCam (source available here) which goes something like this:

  1. Someone takes a picture from a mobile device of a menu which is in a foreign language.
  2. The solution then authenticates using the Azure Access Control Service (ACS).
  3. Once authenticated, stores the image on Azure as a BLOB.
  4. Passes the image to an OCR service to capture the text.
  5. The text is then passed to Bing where it is translated into English.
  6. Translated message is passed back to the mobile device.
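The flow above can be sketched as a short pipeline – the function names below are hypothetical stand-ins for illustration, not the real BabelCam or Azure APIs:

```python
def babelcam_pipeline(image, credentials):
    # 1-2. Authenticate against the Azure Access Control Service (hypothetical helper)
    token = acs_authenticate(credentials)
    # 3. Store the raw image on Azure as a blob
    blob_url = store_blob(image, token)
    # 4. OCR the image to extract the foreign-language text
    text = ocr_extract(blob_url)
    # 5-6. Translate via Bing and return the result to the device
    return bing_translate(text, target_language="en")

# Stub implementations so the sketch runs end-to-end
def acs_authenticate(creds): return "token-" + creds
def store_blob(image, token): return "https://example.blob/" + image
def ocr_extract(url): return "carte du jour"
def bing_translate(text, target_language): return f"[{target_language}] {text}"

print(babelcam_pipeline("menu.jpg", "user1"))  # prints: [en] carte du jour
```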

A couple of things stuck out for me:

  1. Due to the constraints of processing power on mobile devices, Azure enables developers to move as much logic and data as possible from the device up onto the cloud.
  2. This helps deliver more powerful applications but also means that it is easier to support multiple device OS’s (Windows, iOS and Android) using the .NET SDKs.

The main realisation I had was how influential Cloud paired with Mobile will be and the combined potential that these technologies bring. Consumer-focused mobile applications (above Enterprise apps) require the type of scalability that only the cloud can provide. Couple this with the ease with which things like ACS can be configured to authenticate users via Live ID, Yahoo!, Google Mail and Facebook, and MS are providing the building blocks for vastly scalable, quick-to-market solutions.

azure ug cropped

Following the beer and pizza (thanks sponsors), Simon Hart talked us through his soon to be published Azure reference architecture…in 45 minutes!

The solution enables you to notify your home-based media center of programmes you wish to record from any browser device. The effort and dedication shown by Simon in developing the reference architecture was obvious and the slides at the start helped frame the presentation; what initially looks to be a simple undertaking turns out to be quite a challenge. What Simon demonstrated was how all the facets of Azure can be used to overcome these challenges creating a solution that is:

  • Secure (using ACS)
  • Scalable (using the Azure Service Bus)
  • Resilient (using an active-passive DR capability across regions)

It was great to see an example pulling all these different parts together in one solution…the difficult part was grasping it all in the time! I think Simon is pencilled in for the next meeting (3rd April) to talk in more detail about the Service Bus, so it’ll be another late one for me, but well worth it.

On Tuesday night last week I attended my first London Windows Azure user group meeting – it’s the second time this new group has met, but the first one I’ve managed to make it to. My colleague Simon Hart blogged about the inaugural event here.

There were about 35 attendees and it felt like a good crowd, asking intelligent questions. I had some interesting chats during the breaks with other user group members, and I also caught up with Yossi Dahan (a Microsoft technical architect I’ve met before) – it really feels like this young user group has some momentum, so hats off to the organisers for getting it off the ground! The good pizza, chips, and beer always helps :) – this must be one of the best-catered user group meetings I’ve ever been to – there was even someone opening my beer bottle for me…

Planky (aka Steve Plank from Microsoft) presented on two topics relating to different strategies for identity federation and application access control – Azure’s Access Control Service (ACS) and Azure Connect.

Most of the presentation time was allocated to ACS – which is pretty intricate to use. Well – it’s probably fairer to say that there are plenty of moving parts and technologies to get to grips with if you want to federate identities from on-premise Active Directory using ADFS2, via ACS in Azure, to a set of applications hosted in Azure (which will typically use Windows Identity Foundation – WIF – to process the security token issued by ACS). None of it is particularly tricky in itself, but the great man himself hit some issues along the way (which always makes for a better presentation anyway :)) and I was left thinking that it was a bit of a nightmare to troubleshoot exactly why user access to the end application (the “relying party”) was being denied (see the image above) – it’s just the joys of debugging a distributed architecture I guess.

Azure Connect is essentially a VPN and IPSEC tunnel offering that I guess is very roughly equivalent to the Virtual Private Cloud (VPC) offering from AWS, but with some significant differences – it’s trying to address the same key requirement of seamless but secure network connectivity between on-premise and cloud-based networks. It’s still in beta (at least until Summer 2012) and has some inherent limitations, such as requiring a separate installation of agent software on every on-premise server that will talk to/from Azure, but it looks like an interesting technology. My main concern was whether our customers’ security teams could live with this model, as in addition to the installation requirement, it essentially bypasses any corporate firewall by creating an outbound SSL (port 443) connection to the Relay Service on Azure, effectively creating a client-to-site VPN from each individual on-premise server to the Relay Service.

So overall – a very useful and interesting evening. I’m glad I attended and I recommend that my Smart421 colleagues make the effort to attend future events (which are planned to be monthly) – the next event (register here) is on the 7th Feb and relates to “Parallel Processing with Azure and HPC Server”, so I’m personally very interested to hear how this compares to AWS’s offerings in this area.

