You will now find all our blogs at

Change, they say, is inevitable.

The idiom somehow doesn’t reflect how positive change can be. At least that’s how we see it.

Our blogs have been running on WordPress since 2008 and have attracted hundreds of thousands of views. The time had come to have all this goodness included as part of our main online presence.

We’ll retain this trusty blog site for a while longer.

Fresh content from Monday 12 May 2014 can be found only at the new address.

Join us at the new address today and save it as one of your favourites.





“Work hard. Have fun. Make history.”

If that strapline doesn’t ring any bells, it may be because you haven’t encountered an Amazon Web Services (AWS) employee yet.

At ExCeL London next week, there will be no shortage of AWS cloud geeks ready at your service. You can’t miss them. They are distinctive - usually sporting their black polo shirts and cream chinos (standard-issue AWS battledress, apparently).

But my guess is that however many Amazonians gather, it will be nowhere near the number of delegates flocking to Docklands. More UK business and IT people than ever are hungry to learn more about how they can use the AWS Cloud.

Yup, an AWS Summit is coming to town (29-30 April) and I have heard through the grapevine that online registrations have been buzzing with demand for weeks. No surprise there.

I have watched AWS Summits in the UK grow in popularity year on year. Now, it seems, only a venue the size of the ICC at ExCeL is big enough. I think that probably signals something important about people’s appetite for new approaches as well as new technology.

At least that’s why I’ll be going. And fellow Smarties from our senior execs to our account managers will be too. We think there is an important conversation to be had with more large enterprises who, just like our existing customers using the AWS cloud, are looking for the greater business agility that the cloud offers.

For that reason, we will be making lots of our clever chaps available (stand S4) to give actionable advice and war stories rather than sales patter.  We’ll be showcasing several of our customer engagements, including:

  • Disaster Recovery in the AWS Cloud for Haven Power
  • Big Data analytics in the AWS Cloud for Aviva
  • Service Transition to the AWS Cloud for ATOC
  • Super-fast migration to the AWS Cloud for Kuoni

Jeff Bezos, Amazon’s founder, has acquired a tidy fortune by anticipating the step change in online shopping, firstly with bookselling and then with online retail of … er, well…  just about everything. Those close to the action know that’s not the whole story: Bezos also foresaw the shift in compute and storage. AWS itself has grown rapidly to become the undisputed leader in the cloud computing space.

In amassing an enormous IT capability and renting it out to others on a metered-tariff basis, AWS effectively offers everyone with a valid credit card a chunk of its own IT systems. This has opened up “on-demand delivery of IT resources via the Internet with pay-as-you-go pricing.”
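The pay-as-you-go idea is simple arithmetic: metered usage multiplied by a unit rate, with no upfront spend. A minimal sketch, using made-up rates rather than actual AWS prices:

```python
# Illustrative only: the rates below are hypothetical, not real AWS prices.
def metered_cost(hours_used: float, rate_per_hour: float,
                 gb_stored: float, rate_per_gb_month: float) -> float:
    """Pay-as-you-go: you are billed for exactly what you use."""
    return hours_used * rate_per_hour + gb_stored * rate_per_gb_month

# e.g. a small server running for a month (720 hours) plus 50 GB of storage
print(round(metered_cost(720, 0.05, 50, 0.10), 2))  # → 41.0
```

The point of the model is that the bill scales down as smoothly as it scales up: turn the server off and the hours term simply stops accruing.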

Bezos likens the emergence of cloud computing to the change that happened years ago in the electricity supply industry. Instead of investing in their own individual power generation facilities, and paying to run them, businesses and consumers in increasing numbers opted to buy their electricity from operators running centralised power generators.

Visionaries such as Tesla, Edison and Insull knew the gains and the pains in the electricity market. Does Bezos know what he’s up against too? I’ve no doubt he does.

AWS is getting a helluva lot right. Innovation launches and price decreases have been relentless. Outstanding choice, reliability and security. If nothing else, AWS is a chip off the old block when it comes to Amazon’s now famous obsession with listening to what the customer wants, and getting it done.

Personally, I’m quite looking forward to seeing what AWS will be wheeling out next week. I’m not really a betting man, but a fiver says another price drop or another new whizz-bang offering.

Or maybe both.  What do you reckon?  Go on, leave a comment to let me know.


Please Rate and Like this blog. Share it using the social icons or via short URL 

Our readers want to know what YOU think so please add a Comment.



Last in our series of posts looking at API management platforms is Layer 7.

The Layer 7 API Management Solution evolved from their SOA gateway products, which Smart421 has been tracking for a number of years. Computer Associates (CA) acquired Layer 7 on 22nd April 2013, with the Layer 7 products becoming a key strategic element of CA’s security product portfolio.

The Layer 7 products can be deployed in four ways: as hardware appliances, as VMware Virtual Appliances (i.e. packaged VXDs), as Amazon AWS Machine Images, and as a traditional deployable software package. Different product ranges apply to each deployment approach, but all options use a traditional perpetual licence arrangement with annual support. Exact licence terms and costs vary by deployment approach, but in general are based on the performance of the hardware.

For companies that prefer to use hardware appliances, terms are significantly less onerous than for other appliances (e.g. IBM DataPower), as hardware and software licences are paid for separately, so replacing hardware doesn’t require a new software licence. Equally, software upgrades for appliances are provided as a standard part of annual support for as long as the hardware can support them, rather than being firmware upgrades which are provided for a shorter length of time.

Alongside their core API management products, Layer 7 have a software as a service offering known as APIfy. This proposition is currently in beta, is free to use, and could be an interesting deployment option for customers if a clear upgrade path to the full product becomes available when it leaves beta.

The Layer 7 products support all the features you would expect of an API management platform, but because this platform is based on Layer 7’s mature XML gateway product, it also supports very extensive and flexible features for traffic management, custom security, message encryption, transformation, and routing. The core API management functions have been implemented using the same SOA gateway primitives available to developers, which gives a good indication of the power of the gateway.


Advantages

  • Long history of providing high-security SOA gateway technology is an excellent foundation for deployment in blue-chip organisations with stringent security requirements. Supports a wide range of security technologies, e.g. SAML, X.509, LDAP, OAuth, OpenID and Kerberos.
  • Very flexible technology providing support for esoteric/unusual environments common in enterprises. Supports protocol transformation (even down to TCP/IP sockets), complex routing, orchestration and parallel execution.
  • Extensible with Java plugins.
  • Flexible deployment models, on prem and in-cloud.
  • Very strong scoring by both Gartner & Forrester
  • The only one of the four vendors’ offerings available from the AWS Marketplace (though still using a BYOL model).


Disadvantages

  • Unlike e.g. Apigee, there is no ‘free’ version that can be used for a production pilot with easy migration to the production version. This may change once APIfy leaves beta.
  • Traditional commercial models only – no pay-as-you-go option, although licences are available for trial use.

When would we use it?

  • Enterprises requiring high-security on premises deployment with virtual or hardware appliances.
  • Enterprises wanting to deploy a custom solution within an AWS virtual private cloud (i.e. where all components are hosted within the client’s virtual cloud rather than on the public internet).
  • Enterprises with complex integration requirements (e.g. integration with MQ, databases, TCP/IP sockets etc).


Next in our series of posts looking at API management platforms is Mashery.

Mashery scored well in both the Gartner and Forrester reports. Mashery were acquired in April last year by Intel. This has strengthened Mashery with the backing of a company the size of Intel, and also provides Intel with a way into the API management marketplace, aligning with their recent shift towards the software market (e.g. through the acquisition of McAfee).

The Mashery product provides similar features to the other products, and can be deployed both in the cloud and on-premises. Integration between Mashery and Intel’s Expressway Gateway appliance will also add comfort to those customers who are used to having a physical appliance on-premises.

Interestingly, Mashery’s marketing message revolves as much around internal APIs as public ones: something we agree with wholeheartedly.


Advantages

  • Strong, feature-rich product (including protocol translation; SAML, X.509, LDAP, OAuth and OpenID support; policy enforcement etc.).
  • On-premises, cloud and hybrid options available, which provides flexibility when engaging with customers.
  • Strong presence in the UK markets with the likes of Argos, TomTom, ASOS, Experian etc using their products.
  • Strong Developer Portal supported by Mashery I/O Docs.
  • Backing of Intel likely to lead to significant investment into the Mashery products.


Disadvantages

  • Risk of potential product consolidation as a result of the Intel acquisition, although there is no sign of this occurring yet.
  • Like Apigee, in our opinion the enterprise security story isn’t quite as strong with the core Mashery product as with some other options, although this is bolstered by integration with Intel’s Expressway appliances.
  • Level of sophistication of the integration with Expressway was unclear in our investigation. It might be brilliant, but we’d advise further investigation.

When would we use it?

  • Deployment where quality of portal experience is paramount.
  • Where a customer is an existing Expressway customer, or has a strong preference for physical appliances and/or Intel networking kit.
  • To utilise the enhanced capabilities such as pre-packaged reporting for internal and/or external use, policy enforcement or protocol translation.
Heartbleed logo

Well, if you have had your head in the sand then you might just have had a chance of missing the news of the Heartbleed Bug. I thought there was quite a good post on what it all meant on Troy Hunt’s blog, but the Codenomicon site that published the exposure is also very good.

The upshot of the bug, which is really a vulnerability in OpenSSL, is that in versions 1.0.1 to 1.0.1f (1.0.1g is now released) there is a buffer over-read vulnerability in the heartbeat service which allows attackers to read chunks of memory that may contain the secret part of the SSL key (which would allow the attacker to pretend to be you) or even users’ passwords.
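The essence of the flaw is that the heartbeat handler trusted the length field supplied by the client and echoed back that many bytes, even when the actual payload was shorter. Here is a deliberately simplified Python model of that mistake (not the real OpenSSL C code):

```python
# Simplified model of the Heartbleed over-read (not the actual OpenSSL code).
# Pretend this is process memory sitting next to the request buffer:
MEMORY = bytearray(b"hello" + b"SECRET-PRIVATE-KEY" + b"...")

def vulnerable_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Bug: the server trusts claimed_len instead of len(payload),
    # so it echoes back adjacent memory beyond the real payload.
    buf = bytearray(payload) + MEMORY[len(payload):claimed_len]
    return bytes(buf[:claimed_len])

def fixed_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # The fix: reject requests whose claimed length exceeds the payload.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds payload")
    return payload[:claimed_len]

leaked = vulnerable_heartbeat(b"hi", 20)
print(b"SECRET" in leaked)  # → True: the over-read exposes adjacent memory
```

The real bug reads up to 64KB per heartbeat, and an attacker can simply keep asking, sifting the returned memory for keys and credentials.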

Services that use the OpenSSL implementation include the Apache HTTP Server and commercial offerings based on it (e.g. from Oracle, IBM etc.), as well as other open-source servers like nginx. Many commercial products are also based on this code, including firewall appliances, VPN concentrators and load balancers.

The vulnerability has been out in the wild for over two years, so there is a good chance that a web site you use has been compromised at some time in the past. Many sites (Google, Amazon etc.) are patching the vulnerability, but you don’t know whether your password has been compromised in the past.
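A quick way to triage your own estate is to compare installed OpenSSL versions against the affected range. This sketch encodes the 1.0.1 to 1.0.1f window (the parsing is simplified and assumes plain version strings, so treat it as illustrative):

```python
# Simplified check: OpenSSL 1.0.1 through 1.0.1f are Heartbleed-vulnerable,
# 1.0.1g and the 1.0.0/0.9.8 branches are not.
def is_heartbleed_vulnerable(version: str) -> bool:
    if not version.startswith("1.0.1"):
        return False
    suffix = version[len("1.0.1"):]
    # "1.0.1" itself and letter releases a-f are affected; g is the fix.
    return suffix == "" or suffix in "abcdef"

for v in ["1.0.0", "1.0.1", "1.0.1f", "1.0.1g"]:
    print(v, is_heartbleed_vulnerable(v))
```

On most Linux boxes `openssl version` will give you the string to feed in, though distribution vendors sometimes backport the fix without bumping the version, so confirm against your vendor’s advisory too.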

This is yet another reason to use password managers to make sure you have separate passwords for all of your accounts; and for corporations, it is yet another reason to use Single Sign-On software. Even though your password will open more doors, the patch, if required, applies to many fewer systems. Even if the remote systems that are integrated with your SSO solution have been compromised, they will not have seen any passwords in their traffic, only the session key, which has a limited lifetime.

For example, in the case of ForgeRock’s OpenAM SSO solution, the authentication servers run on JEE platforms. This means that unless you are running on Tomcat and have configured it to use the native APR libraries, the OpenSSL libraries are not being used, so it will not have been vulnerable. As you will see in other discussions, even if the downstream resources are protected you need to check that upstream resources (load balancers etc.) are not vulnerable if they terminate the SSL session.

The end result is that there will be quite a few bleeding hearts. Most organisations that use SSL will need to check for vulnerabilities and patch as appropriate. Then they will need to renew all of their SSL certs on the previously compromised components. And if those certs are shared across multiple hosts (via multiple Subject Alternative Names (SANs)), then even the certs on the invulnerable resources will need to be renewed.

On top of that, most slightly paranoid consumers (including me) will want to renew their passwords once they are confident that the services they use have been patched. Personally, I would advise everyone to do it. Just because you’re not paranoid does not mean that no-one’s out to get you.


By Liliandecassai (Own work) [CC-BY-SA-3.0], via Wikimedia Commons

Impala by Liliandecassai

Impala 1.0 was launched back in July last year, and it’s been supported by AWS EMR since last December so I’ve been meaning to have a quick play and also to compare it with a classic map-reduce approach to see the performance difference. It’s not like I don’t believe the promises – I just wanted to see it for myself.

So I ran up a small cluster on AWS – with an m1.large for the master node and 2 core nodes, also running m1.large. I used the US-West region (Oregon) – which offers the same cheap price points as US-East but is 100% carbon-neutral as well :). This was all running using spot instances in a VPC. For interest, the total AWS cost for 24 normalised instance hours (I actually ran the cluster for just over 3 hours, including one false cluster start!) was $1.05.  Using developer standard units of cost, that’s nearly the price of half a cup of coffee! (or since we’re using Oregon region, a green tea?)


As I’m lazy, I used the code and datasets from the AWS tutorial – and decided to just use a simple count of records that contained the string “robin” in the email address field of a 13.3m row table as my comparison. Here’s how you define the basic table structure…

create EXTERNAL TABLE customers(
    id            BIGINT,
    name          STRING,
    date_of_birth TIMESTAMP,
    gender        STRING,
    state         STRING,
    email         STRING,
    phone         STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
LOCATION '/data/customers/';

The output is…

[] > select count(*) from customers;
Query: select count(*) from customers
| count(*) |
| 13353953 |
Returned 1 row(s) in 1.09s

[] > select count(*) from customers where email like "%robin%";
Query: select count(*) from customers where email like "%robin%"
| count(*) |
| 66702    |
Returned 1 row(s) in 1.73s

A slight aside – Impala uses run-time code generation to compile the query down to machine code using LLVM, and this introduces a compilation overhead of circa 150ms, which more than pays back on the majority of queries.  So this is where some of our 1.73s is going.  More about this here.

Pig comparison

As a glutton for punishment, I decided to use pig rather than the more usual hive for the comparison with Impala. The first thing to say – it was way harder, as the aptly named pig is just a bit more foreign to me than the SQL-like niceness of Impala…so there was some desperate checking of cheatsheets etc to remind me how best to do it…

The basic code for the same source data (already loaded into HDFS) is as follows…

CUST = LOAD 'hdfs://' USING PigStorage('|')
as (id:    chararray,
name:  chararray,
dob:   chararray,
sex:   chararray,
state: chararray,
email: chararray,
phone: chararray);
C2 = FILTER CUST BY REGEX_EXTRACT_ALL(email, '(.*)robin(.*)') IS NOT NULL;
C3 = FOREACH (GROUP C2 ALL) GENERATE COUNT(C2);
dump C3;

As you can see the pig approach ran 8 maps. The output is as follows (with all the INFO messages and some other noise removed)…

HadoopVersion PigVersion UserId StartedAt           FinishedAt          Features
2.2.0   hadoop 2014-04-10 12:11:13 2014-04-10 12:12:26 GROUP_BY,FILTER


Successfully read 13353953 records (9 bytes) from: "hdfs://"

Successfully stored 1 records (9 bytes) in: "hdfs://"



I was just trying it out, so this is not a fair test in some ways – and I didn’t try to do any optimisation of either approach. The Impala approach ran about 40x faster, and this was consistent across repeated runs.
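The 40x figure falls straight out of the timings above: the Pig job ran from 12:11:13 to 12:12:26, against Impala’s 1.73 seconds for the same count:

```python
# Speedup from the timings reported above.
def to_seconds(h: int, m: int, s: int) -> int:
    return (h * 60 + m) * 60 + s

pig_seconds = to_seconds(12, 12, 26) - to_seconds(12, 11, 13)  # wall-clock job time
impala_seconds = 1.73

print(pig_seconds, round(pig_seconds / impala_seconds))  # → 73 42
```

So 73 seconds versus 1.73 seconds: a shade over 42x, which rounds comfortably to the “about 40x” quoted.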


I checked out the CPU, IO etc and there was nothing hitting any limits, and CPU consumption when I was alternately using Impala and pig looked like this – load was even across my two core nodes, and the master had its feet up most of the time…

CPU CloudWatch metrics

I haven’t reported the data here, but I also played with some nasty 3-way joins using Impala and the results were really impressive. Obviously though it’s horses-for-courses – MapReduce-based approaches like hive and pig will soldier on when Impala has run out of memory for certain query types, or in the event of a node failure etc. But definitely a great bit of kit to have in the AWS EMR toolbag!

Next in our series of posts looking at API management platforms is 3scale.

3Scale offer a SaaS API Management Solution which differs from the other API Management Vendors in the way it handles API traffic. Rather than applying authorisation, rate limits and quotas through a centralised proxy, 3Scale provide a series of open source plugins, allowing decentralised processing of traffic. These plugins can be installed either within individual applications (APIs provided for Java, Node.js, PHP, Ruby, .NET, Perl and Python), existing ESBs, or within on-premises or cloud hosted proxy servers running NGINX, Varnish or Apache HTTP Server. 3Scale also supports integration with the Akamai Content Distribution Network allowing authentication, throttling and caching to occur at the network edges.
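To illustrate the decentralised approach, each plugin can enforce a rate limit locally, inside the application or proxy process, without routing traffic through a central gateway. This is a generic token-bucket sketch of that idea, not 3Scale’s actual plugin code:

```python
import time

# Generic token-bucket rate limiter of the kind an in-process plugin might
# apply locally (illustrative only; not 3Scale's actual plugin API).
class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec        # tokens refilled per second
        self.capacity = capacity        # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens for the time elapsed since the last call...
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        # ...then spend one token per request, rejecting when empty.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=10)
results = [bucket.allow() for _ in range(12)]
print(results.count(True))  # → 10: the burst above capacity is rejected
```

In 3Scale’s model the plugin would then report usage asynchronously to the central service for analytics and billing, so the API traffic itself never leaves your infrastructure.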

As with the other API Management vendors, 3Scale offers reporting on API utilisation for both API owners and API consumers.

Regardless of chosen deployment methodology, API traffic does not traverse or get stored within 3Scale’s infrastructure, eliminating a potential scalability bottleneck, and easing any potential concerns about security particularly given recent revelations about national intelligence agencies’ ability to conduct surveillance on private communication lines.

3Scale is a simpler product than many of the others, and therefore does not support e.g. message transformation or routing (see the disadvantages section below). Smart421 would therefore recommend 3Scale is deployed alongside existing integration infrastructure. 3Scale’s plugin architecture should allow 3Scale capabilities to be added to an existing ESB technology. Whilst they didn’t score as highly in the Gartner and Forrester reports, 3Scale do have some big-name customers such as Skype, the US Department of Energy, Telegraph Group, Transport for London and JustGiving.


Advantages

  • Simple, low pricing.
  • Free tier allows POCs and Pilots to be built and deployed cheaply and easily.
  • Clean simple architecture supporting both cloud and on-prem deployment of traffic management components.
  • IaaS deployment solutions for NGINX on AWS, Heroku and Microsoft Azure.
  • Solid core product including authentication/authorisation, developer plans, developer portals, forums and billing engine.


Disadvantages

  • Not as feature-rich as some of the competition. In particular it doesn’t provide the ability to do protocol or message transformation, although this can be achieved by creating custom NGINX Lua scripts. Orchestration of multiple underlying calls is also not supported.
  • Portal always cloud hosted, which may be a hard barrier for some customers.
  • Rated towards the back of the pack by both Gartner and Forrester.
  • Smaller company than most other players, which carries some commercial risk.  3scale secured $4.2m private funding in April 2013.

When would we use it?

  • Smaller customers for whom cost is the overriding factor.
  • Customers looking for a simple solution to combine with an existing investment in internal REST-capable ESB technology, or green field customers who will expose REST APIs directly from back-end systems.
