Internet Explorer running through workspaces on an iPad.

Unusually, I’m writing this blog post in a browser. Specifically, I’m writing it in Microsoft Internet Explorer, on Windows. Not particularly odd, you might think, except that I’m not sat in front of my laptop. I’m using my iPad.

It’s ok, you haven’t gone mad. Hell might have already frozen over when Microsoft released Office for iPad last week, but rest assured it hasn’t happened twice: MS have not released Windows iPad Edition.

Instead, I’m trying out Amazon’s new Workspaces product. This product went GA last week, after a private beta announcement which we covered back in November.

Workspaces is a virtual desktop product that allows you to run managed Windows 7 desktops in AWS with very little effort. Signing up takes minutes, and you can provision either a mostly vanilla workspace with just a few basic utilities installed, or a ‘Plus’ workspace which adds Microsoft Office Pro, and Trend Micro AV. In either case, licences for the installed software are included in the price, making it a great way to stand up a desktop machine with all the essentials fully licenced in no time.

There are two performance tiers: ‘Standard’, with 1 vCPU and 3.75GB of RAM (which sounds suspiciously similar to an m3.medium instance), or ‘Performance’, which packs 2 vCPUs and 7.5GB of RAM (m3.large, anyone?). As is commonly considered best practice, each machine has a couple of disk volumes attached: one that holds the operating system and applications (C:), and one for holding a user’s data (D:). Data on the user’s D: is automatically backed up every 12 hours.

Depending on the bundle you choose, prices range from $35 to $75 per month.

You access your workspace using an Amazon-provided client application that runs on Windows, Mac, iPad, Kindle or Android tablets.

So, that’s the basics covered. How is it to use? Honestly, from the UK it’s currently a little painful. This is to be expected, as Workspaces is currently only available in the US, so every pixel of my display is being shot across the Atlantic before I get to see it. I’m seeing latencies of just over 200ms, and Amazon recommend sub-100ms latency for a good user experience. I can confirm that both the iPad and Mac clients work well enough (spot the Apple fanboy), although in common with any iPad-based remote desktop product, the touch-your-screen-to-point-your-mouse impedance mismatch is disorientating at times. Swapping between devices seems to work much as you’d expect. If you’re logged on from your iPad, and then sign in from a desktop, your session transfers seamlessly to the desktop.

From an infrastructure/desktop manager’s perspective, it’s early days at the moment, I think. AD integration is possible, allowing users to log in with their normal credentials, as well as giving them access to local printers and (I assume) file shares. While deploying your own software is certainly possible, you’re pretty much on your own there: there is no concept of an AMI here, nor is there any support for packaging and deploying applications within the service itself. This is probably not a disaster in some senses, since most enterprises have their own deployment tools, but the lack of a custom AMI capability makes bootstrapping a workspace into the deployment tool harder than it would otherwise be.

What about use cases? We can already see a couple of things we do for customers where workspaces could replace or supplement what we currently provide:

  • Cloud DR solutions (for an example see our Haven Power case study). As things stand, the key issue preventing us from doing this is the fact that you pay for workspaces per month, regardless of how much you actually use them. Unusually for AWS, there isn’t an API allowing you to automatically provision/deprovision workspaces, making it hard to optimise the cost here.
  • Remote desktops for 3rd party users. We deployed a Windows Terminal Services farm in AWS for another of our customers, who use it to allow third parties to work on their applications. Both the applications and the terminal services farm are managed by us in AWS, and are accessed globally. In theory it would be relatively straightforward to replace the terminal services farm with Workspaces, although we’d have to be confident that the performance is adequate.

Workspaces is a promising technology, but until it’s available in EU-WEST-1, we’re unlikely to be able to adopt it except perhaps in very niche circumstances.

That’s the thing about Amazon though: Like Apple, when Amazon first release a new feature, it’s tempting to be a little underwhelmed. But then, like Apple, a few months or years later we look back at a now mature technology, and we can’t quite remember when it grew up from a metaphorical spotty teenager with potential, to an essential member of the team.

It’s this ability to start ‘simple’, but then improve and polish their products day in, day out, over and over again that has made both companies the unstoppable juggernauts they now are.

 

Please Rate and Like this blog. Share it via the social icons below or via short URL http://bit.ly/1hewn2U

Our readers want to know what YOU think, so please add a Comment. 

Please state your position
Photo: BNThermic via Police Helicopter, 14 March 2014

“What’s the difference between APIs and SOA?” A question with a million answers.

I’m not going to tell you I’m the holder of the One True Way here. I’m going to give you the Smart421 position.

To us, SOA was always about delivering easy-to-understand interfaces that expose business functionality from diverse underlying systems. We always felt this was as much about people as technology, and we never particularly bought into all the vendor hype insisting that true SOA required a massive stack of complicated tools and technologies that (what a coincidence!) only Vendor X could provide.

We’re simple straightforward folk, who like simple straightforward solutions.

Let’s take registries and repositories as an example. I once said to a colleague that one vendor’s product looked like it was designed by machines, for machines. It used language like Port Types, Bindings, Metadata, Ontologies etc. It was the dream of people who like to put acronyms on their CVs. What it didn’t do at all, from what I could see, was help developers throughout your organisation discover services, and quickly and easily understand what a service did and how to use it.

The API movement has, by necessity, stripped a lot of the complexity away. Why? Because it turns out that if you are going to expose APIs to the outside world, where people have a choice about whether they should use your API or someone else’s, ease of use and ease of discovery are critically important characteristics. Integrating with a REST(ish) API is, frankly, easier than integrating with a SOAP service. Discovering services and understanding them on a developer portal that is designed for humans is easier than poring over piles of WSDL in a freakishly complicated registry product.

If you’ve wondered why truly successful SOA initiatives are as common as unicorns, here’s why: they often overcomplicate everything. Technology, processes, standards and the services themselves become a barrier to the very innovation they set out to enable.

Of course, nobody mandated that SOA had to be complicated. Nobody forced the world to adopt WS-*. We often tried to encourage customers to delay adopting complexity until the last responsible moment. That’s not to say the WS-* standards weren’t useful. In the right contexts, they were, but they often came at the expense of accessibility and increased friction in adoption.

Unfortunately, the brutal reality is that for most people (or at least those who commission such initiatives), SOA as a term is connected indelibly with SOAP, ESBs, WS-* and rigid over-engineered change processes.

Sometimes, beautiful relationships don’t work out. Sometimes you just have to move on. So we have.

For us, APIs are the new SOA. What you might call SOA, we will refer to simply as internal APIs. On one level, this doesn’t change anything – it’s an emphasis thing. We still believe in the principles that underlie SOA, and we’ll continue to build our integration on this basis. We know that APIs aren’t a silver bullet either. We’re certainly not saying “As long as it’s an API it’s good, and any old API will do”. There’s still plenty of scope to fail, plenty of mistakes to make (and to avoid repeating), and plenty of learning to do for our customers, for us, and for the entire API ecosystem.

But whether internal or external, we’ll focus on the same thing: Crafting APIs that are beautifully easy to understand, consume and change. We’ll fight the good fight to keep the standards and technologies we use mercifully simple, lest APIs suffer the same fate as SOA. We’ll try wherever possible to make the technology fade into the background.

“A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.”
Antoine de Saint-Exupéry


I’ve been doing some research on Hypermedia APIs recently, as a result of trying to understand the latest state of the art in API design. I came across this post from David Heinemeier Hansson (@dhh) of 37signals fame.

The recurrent hoopla over hypermedia APIs is completely overblown. Embedding URLs instead of IDs is not going to guard you from breakage, it’s not going to do anything materially useful for standardizing API clients, and it doesn’t do much for discoverability.

The thrust of his post is that while adding links into your JSON resource representations is a good idea, believing that this on its own will change the world is not a good idea.

David finishes the post with the following bombshell:

We’ve been down this path to over-standardization of APIs before. It lead to the construction of the WS-deathstar. Let’s not repeat the same mistakes twice. Some times fewer standards and less ceremony is exactly what’s called for.

My reading of this is that this is a dig against efforts like HAL, which aims to provide a standard markup and media type for hypermedia APIs.

This comparison was enough to drive a 56-comment-long rant, counter-rant, counter-counter-rant comment storm that I read in full. I came away with two main impressions: 1) People can be pretty mean when angry (potential alien invaders: take note). 2) A little compromise goes a long way.

Speaking as someone who lived through the creeping plague of standards that made up WS-*, I can attest first hand that in the main (with a few notable exceptions), most of these standards hindered rather than helped those of us whose job it was to create good, business meaningful, interoperable services.

It’s not that the standards themselves were necessarily flawed (although in many cases they were), it’s that they were complex, constantly changing, and in many cases solved problems that people either didn’t have, or that HTTP (which is/was still the most common transport for SOAP messages) had already solved. Even when they did solve a problem (e.g. using WS-Security to secure messages sent over MQ), the implementation of these standards was often spotty and inconsistent, meaning that far from providing the interoperable experience the spec promised, we had to test that each SOAP client worked with each SOAP server in order to be sure we were creating services people in our organisations could actually use.

Also, it’s worth noting that the driving force behind most of the WS-* specifications was not architects out in the field, or customers, but product vendors. Being brutal about it, many in the industry take the view that product vendors effectively created a machine for adding additional checkboxes to RFPs, in a kind of procurement arms race that served nobody. I think this is somewhat cynical, in that I remember a few of these standards feeling sensible conceptually at one point or another, even though they feel stupid to me in retrospect. WS-AT, I’m looking at you. None the less, the product vendors were the driving force, and where there is smoke…

JSON over HTTP is an altogether simpler beast.

HTTP has been battle tested and honed for over two decades. HTTP has well defined semantics for finding/creating/updating/deleting resources, built-in support for authentication and authorisation, rich support for caching, content versioning and relocation, content type negotiation, compression, the works. All of these were added after the fact to SOAP, despite the fact that more often than not SOAP was running atop HTTP anyway! Not only that, but HTTP is available on (to a good approximation) every PC, server, tablet and mobile phone on the planet, providing the underpinnings of the day-to-day internet activity of nearly 2.5bn internet users worldwide. It’s fair to say it’s well tested: far better tested than even the most mature WS-* standards. Although HTTP has some extensibility in the form of ‘X-’ headers and custom content types, in practice these features are used respectfully, and almost always carry optional data: if a client or server doesn’t understand the header, they can just ignore it. There is no equivalent of SOAP’s mustUnderstand attribute in HTTP. Thankfully.
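To make the contrast concrete, here’s a toy sketch of the create/read/update/delete semantics HTTP gives you for free. There’s no real server here, and the resource paths are entirely made up; it just shows the method-to-action mapping and the status codes that fall out of it.

```python
# In-memory "resource store" keyed by URI path. Purely illustrative.
store = {}

def handle(method, path, body=None):
    """Dispatch a request against the store, returning (status, body)."""
    if method == "GET":
        return (200, store[path]) if path in store else (404, None)
    if method == "PUT":            # idempotent create-or-replace
        status = 200 if path in store else 201
        store[path] = body
        return (status, body)
    if method == "DELETE":         # idempotent removal
        store.pop(path, None)
        return (204, None)
    return (405, None)             # method not allowed

print(handle("PUT", "/widgets/1", {"name": "sprocket"}))  # (201, {'name': 'sprocket'})
print(handle("GET", "/widgets/1"))                        # (200, {'name': 'sprocket'})
print(handle("DELETE", "/widgets/1"))                     # (204, None)
print(handle("GET", "/widgets/1"))                        # (404, None)
```

Every SOAP stack had to reinvent this vocabulary; with plain HTTP it’s simply there.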

For its part, JSON is a simple data format, with an emphasis on compactness and simplicity. No namespaces. No processing instructions. Just objects, strings, numbers, booleans, nulls and arrays. It arose almost by accident, as a natural side effect of the JavaScript/ECMAScript syntax, although it was ultimately standardised in RFC 4627 in 2006.
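That really is the whole type system. The sketch below (the field names are invented) exercises every one of those value types and round-trips the document through Python’s standard library parser:

```python
import json

# One document using JSON's entire vocabulary: an object containing a
# string, a number, a boolean, a null, and an array. Nothing else exists.
doc = '{"name": "order-123", "total": 30.5, "paid": true, "notes": null, "items": ["a", "b"]}'

parsed = json.loads(doc)
print(parsed["items"])        # ['a', 'b']
print(parsed["notes"])        # None (JSON null)

# Serialising and re-parsing gives back an equal structure: no schema,
# no namespaces, no processing instructions in sight.
assert json.loads(json.dumps(parsed)) == parsed
```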

For these reasons, I think the chances of JSON/HTTP being corrupted this decade in the same way as SOAP was in the last are pretty slim. But this paragraph at the end of the HAL Internet Draft strikes fear into my heart:

Omitting forms from HAL was an intentional design decision that was made to keep it focused on linking for APIs. HAL is therefore a good candidate for use as a base media type on which to build more complex capabilities. An additional media type is planned for the future which will add form-like controls on top of HAL.

This is where DHH does have a point. I really hope this isn’t the beginning of a treadmill of extensions and standards atop JSON/HTTP, something which is currently glorious in its simplicity.

The reality is that simplicity and pragmatism are among the most valuable (and sadly rare) features in modern software development. We should tread carefully here.

I’ll be using HAL on future projects, but I’ll be using the smallest subset of it necessary to get the job done.
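For illustration, that smallest subset is essentially just HAL’s reserved `_links` object sitting alongside ordinary resource state. The resource, URIs and fields below are all made up:

```python
import json

# A minimal HAL-style representation: plain resource state plus the
# reserved "_links" object. Everything here is hypothetical.
order = {
    "_links": {
        "self": {"href": "/orders/123"},
        "customer": {"href": "/customers/7"},
    },
    "status": "shipped",
    "total": 30.00,
}

# The hypermedia idea in one line: a client follows a named link rather
# than assembling a URI from an embedded ID.
customer_uri = order["_links"]["customer"]["href"]
print(customer_uri)  # /customers/7
print(json.dumps(order))
```

No forms, no embedded resources, no extra media types: just links, which is the part that pays for itself.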

View from the Millbank Tower, London

As part of my focus on developing Smart421’s API strategy, last week I attended Mashery’s one-day Business of APIs (BAPI) conference at Altitude London, on the 29th floor of the Millbank Tower on the banks of the Thames.

I’ve been to a fair few events in the last couple of years, from the London AWS enterprise event Robin mentioned in his post this morning to the Gartner AADI conference in May.

The BAPI event had a very different flavour. Although it was a Mashery event, their presence was mainly facilitatory. There was no heavy sales pitch, and no compulsory product demo. The other thing missing was sponsor stands, which was interesting… No need to run the gauntlet with marketing folks to secure your lunch.

What was available in spades was great quality discussion, both from the presenters, and one on one with attendees. The conference was really focussed on the business of building and consuming APIs.

I particularly enjoyed Creating Success / APIs Changing Business from Kevin Flowers, CTO, Coca-Cola Enterprises. Kevin talked about the challenges of launching APIs in a large business. CCE have built a series of internal APIs for managing everything from procurement and finance to sales and service.

This fits well with our internal discussions within Smart421. We think that for many of our customers, the future of integration lies in building and consuming APIs internally first, and then selectively exposing these (or APIs derived from these) to the outside world. Test, learn, do, with the test and learn occurring mainly internally.

TomTom’s Peter Moeykens talked about their experience of building a public API around TomTom’s core map, navigation and point-of-interest assets. He talked about the journey TomTom went on, the (inevitable) missteps made, and the ‘lightbulb moment’ that was the realisation that APIs need to be thought about as products. Again, this is a view I share strongly, even for internal APIs – without taking that outside-in view of the world, your APIs will be lifeless. Peter talked about how they sold APIs internally (Answer: demos), and how they sold them externally, and perhaps more to the point, how they got their sales guys to sell them externally (Answer: showing them the kinds of applications the APIs would allow other people to build). Really great, practical advice.

There were a bunch of other great talks too, the slides for which are available on the BAPI website, with videos on the way for the most part. A few of these were only tangentially related to APIs, but were none the less well worth an hour of my time: David McCandless of Information is Beautiful fame talked about data visualisation in a highly amusing talk – well worth a watch when the videos arrive. While Paull Young of charity:water made me laugh – and then damned near cry – with his talk about the work that C:W has been doing on its digital strategy (a topic I’m hoping to talk more about here over the coming months).

So, in summary, a really great event. I’ll definitely be going again next year given a chance, and would encourage anyone interested in this space to go along as well. If you do, hang around a while afterwards – plenty of high quality discussion happened after the main event was over.


I’ve been doing some thinking recently about digital strategies for big companies. The big blue chips that us consumers deal with on a day to day basis are acutely aware that the world is changing.

We get our news from Twitter, our entertainment from Netflix, and our jobs from LinkedIn. We routinely carry in our pockets more computing power than sat on our desks just 5 years ago.

Big companies have a lot to protect. In the main, they’re already successful. They have established products that customers already buy, but it’s rare that you hear a customer talk about their bank the same way they talk about their iPad.

So what are the magic ingredients? What is it that draws us to things like Google and Netflix?

Here’s your chance – leave a comment, and let me know the last time you had a real “wow!” moment with a company that involved digital/mobile/internet technology…

Following my last post, one of our clients reached out to me with a story about their own work in developing APIs.

The company had decided to build a new customer portal, and for pragmatic delivery reasons, decided to outsource the development of the portal to a 3rd party development shop. Inevitably though, the portal needed to talk to internal systems.

Rather than just hand out the technical documentation for their back end systems and punch holes in the firewall left right and centre, our client did the sensible thing and decided to expose some proper APIs for the portal developers to consume. They’d already built APIs (services) internally before, so knew what they were doing. Since these were only being used by a ‘friendly’ development partner rather than the great unwashed, they didn’t need a heavy duty API gateway product, and could get on with the task at hand in short order.

What they found was a surprise: using a third party to deliver the portal seemed to make our client deliver better APIs than usual. This was despite the fact that they were using the same technologies and the same development teams as usual. The APIs were well documented, well structured, and didn’t leak implementation details from their back-end processing systems.

This is a great example of Conway’s Law at work. If you’ve not heard of Conway’s Law before, go and read the article. It’s one of the most important lessons I’ve learnt, and it’s something that’s all too rarely understood and applied.

Some lessons I’ve taken away from this:

  1. Good IT architecture isn’t just a result of good technology and good people, it’s the direct result of the design of your organisation.
  2. Make sure the people building and consuming your APIs aren’t on the same team. The further apart they are in organisational terms, the more likely you are to deliver good quality APIs.
  3. If you’re building a website or portal, build it on top of your APIs, don’t just connect it straight to your back end. You could even get a good creative agency to do the work here – they’ll be used to consuming APIs, and it’ll force you to build APIs of the quality that e.g. mobile developers would expect.

Over the last few months, we’ve seen a real increase in interest from customers in creating public APIs. Regardless of industry, there’s a real buzz about creating new ways of driving value for end users by creating APIs.

The irony isn’t lost on us that for the last two decades (at least!) API was a term you wouldn’t dream of using with business people, but over the last couple of years we’ve started talking about the API Economy, how to market APIs and how to sell them.

APIs have been a mainstay of both startups and internet giants for a number of years now, with companies going through quite a journey to establish proper public APIs for their developer communities. What’s new I think is that the established blue chips are starting to see the size of the opportunity here.

The confluence (dare I say nexus?) of mobile, cloud, and the need to innovate are driving companies to think about how they deliver and manage APIs, safely and effectively.

But in a culture where integrating with third parties was mainly a bespoke, point to point, paperwork heavy experience, perhaps the biggest challenge for most will be fostering a community of developers who want to call the APIs in the first place, and aligning their interests with ours.

I’m really interested to talk to customers about their problems in this space, both technical and organisational. If you’d like a chat about this, drop me a line or send me a tweet.

Photo: Industrial backdrop by Pilarts  Dreamstime Stock Photos & Stock Free Images

I’d like to propose a best practice for rolling out new features in a Service Oriented Architecture (SOA).

Traditionally, when we roll out a major new feature, we often end up causing a breaking change to the service. We’re then faced with a choice: (a) force all our consumers to upgrade to the new version, making all our consumers hate us, or (b) continue to support the old version of the service as well as the new, making only our own teams hate us. Suck it up: plan (b) is the better option, but try telling that to the guy having to patch fixes into three concurrent versions of a service.

Now, there are patterns that can help here (more on that another day), but they all still mean more work for everyone.

Also, when we first roll out a feature is exactly the moment we understand it least. We’ve got absolutely no idea how people will use it, nor whether it will even turn out to be useful. By baking the feature into a new major version of the service, we’re taking all our options away. The feature will be hard to remove if we decide it isn’t useful, and if we want to change how it works, we’re back into a major version upgrade again.

To my mind, good engineering is largely about keeping your options open. It’d be nice if we could try a new feature with a subset of consumers first, iterating quickly with just that subset, gradually adding more consumers as we get more confident.

Enter the Feature Flags pattern. Feature flags allow you to turn features on and off at a moment’s notice. At its most basic, a feature flag just turns a feature on or off for everyone at once, but the idea is often extended to allow turning on features for specific users, or collections of users. This allows you to roll out a new feature to consumers gradually, over an extended period.

So, here’s the proposal:

  • Allow consumers to pass a set of feature flags dictating which features they’d like enabled in the service.
  • Whenever you build a major new feature that would otherwise cause a breaking change, only enable it when the feature flag is passed.
  • If appropriate to your environment, control access to feature flags like you would to any other resource – e.g. you might want to restrict access in the early days to just a single consumer, making it easier to iterate.
  • Once we’re comfortable with a feature, it becomes publicly available – i.e. anyone can toggle the flag.
  • Every so often (e.g. once every couple of years), create a new major version of the service, refactoring it to include popular, battle tested features by default. Also, take this as an opportunity to clean out the cupboard and abandon any features that aren’t well used.
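The proposal above might hang together something like this sketch. The flag names, consumer IDs and response shape are all invented for illustration; the point is that flags a consumer requests get filtered against what they’re entitled to, and the breaking change only appears when a flag is granted:

```python
# Battle-tested flags anyone may toggle, versus flags restricted to
# specific consumers while we iterate. All names are hypothetical.
PUBLIC_FLAGS = {"new-pricing"}
RESTRICTED_FLAGS = {"beta-search": {"partner-a"}}  # flag -> allowed consumers

def enabled_features(consumer_id, requested):
    """Return the subset of requested flags this consumer may enable."""
    granted = set()
    for flag in requested:
        if flag in PUBLIC_FLAGS or consumer_id in RESTRICTED_FLAGS.get(flag, set()):
            granted.add(flag)
    return granted

def get_quote(consumer_id, requested_flags):
    """A service operation whose new behaviour hides behind a flag."""
    features = enabled_features(consumer_id, requested_flags)
    quote = {"price": 100}
    if "new-pricing" in features:  # would otherwise be a breaking change
        quote["breakdown"] = {"base": 90, "tax": 10}
    return quote

print(get_quote("partner-a", {"new-pricing"}))  # new shape, opted in
print(get_quote("partner-b", set()))            # old behaviour, untouched
```

Consumers who never ask for a flag never see the new shape, which is exactly what keeps the change non-breaking.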

What do you think? Comments and thoughts very welcome…

 



Jeff Bezos
Photo by John Keatley, Seattle’s leading photographer keatleyphoto.com

Every time I hear this story, it makes me smile. From Kin Lane over at API Evangelist:

[…] one day Jeff Bezos issued a mandate, sometime back around 2002 (give or take a year):

  • All teams will henceforth expose their data and functionality through service interfaces.
  • Teams must communicate with each other through these interfaces.
  • There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
  • It doesn’t matter what technology they use.
  • All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

The mandate closed with:

Anyone who doesn’t do this will be fired. Thank you; have a nice day!

Assuming for the moment that this is true, the thing that makes me smile here isn’t the closing rhetoric. What Jeff described here is pretty well everything you need to know about successful SOA.

Look at the wording again. “All teams”. He didn’t say “all systems” or “all services”. Technology isn’t [the most] important. People are.

By focussing on teams rather than technology, Jeff ensured that Amazon’s embryonic SOA was business aligned. One simple decision was all it took. Well, that and ten years of concerted effort by one of the brightest engineering teams on the planet.

When it comes to adopting cloud computing, to my mind there are three types of company:

  • Early adopters who swallow the pill in a big way. They’ll get burned, almost without exception. But they’ll come out stronger, leaner, meaner and faster than the rest. (Netflix, I’m looking at you.)
  • Those who do their homework the day it’s set. They’ll either have or will shortly select non-mission critical applications and move them into the cloud, and at the same time start looking to create new apps in the cloud albeit in a low key way. These guys will be slow and steady, but they’ll get there in the end. (Most of the 2015 FTSE 100?)
  • Those who do their homework the night it’s due. They’ll wait for everyone else to ‘take the risk’ for them, and only then start a gradual, lumbering migration. Just like at school, these guys will get outpaced by the competition. For some of them, it’ll be a terminal mistake. (Most of the current FTSE 100?)

Make no mistake, all companies will end up in the cloud eventually. How (and if) you get there is up to you.

My advice? Don’t be last.
