Technical Architecture

Infrastructure Architecture is dead, long live Infrastructure Architecture.

Cloud infrastructure has changed the role of the infrastructure architect into that of a virtual architect. The tin-and-wires approach – the HA and DR, the test and release, the follow-the-sun support – has been replaced by Cloud infrastructure. Careful sizing, location, rack allocation and the like are unnecessary for most applications.

The desktop environment is changing just as radically: desktop management, massive rollout projects and investment decisions are becoming obsolete. The use of virtual terminal software returns the desktop to the server, and BYOD removes both the need and the ability to dictate the desktop and mobile landscape. Network architecture is increasingly virtualised: within the virtual datacentre, between datacentres, and out to client devices.

It is hard to imagine that bastion of physicality, the Infrastructure Architect, dealing with purely virtual servers communicating on a virtual network, all assembled from their own virtual device. There is a serious point to this: it depends on the Application Architect designing in a way that enables the full power of the cloud to be used.

Not that it is anything new, just more pressure on the Application Architect.



Please Rate and Like this blog.  We always welcome your feedback and your opinion, so please leave a Comment.

Always nice to be name checked, isn’t it?

Whether by Amazon’s CTO, Werner Vogels, in front of 1,000+ people at an AWS Summit, or by leading technology journalist Graeme Burton, in front of Computing magazine’s entire circulation, we are always pleased when our work is noticed.

Graeme’s article ‘The mouse that roared: how the balance of power in IT is shifting’ has made it to Computing’s cover story today. And deservedly so: it shines a light on how some technologies really are disrupting traditional IT approaches.

In his piece, Graeme showcases how Cloud computing has matured to become a truly viable alternative to the locked-in, long term, on-premise approach that has held enterprises hostage to heavy capex for yonks.

His piece also illustrates how open source has risen in importance through adoption, arriving at a point where even enterprise-level customers are not only willing to try it but are demanding it as part of the technology argument.

Moreover, the confluence of cloud and open source makes the commercial use case stack up in ways previously considered impossible, especially when designing for business-critical and highly competitive, regulated industries such as rail transport.

And, as always, the real winners are our customers.

In addition to the technology argument, the article highlights the very real gains that the Association of Train Operating Companies (ATOC) decided they needed. In their cloud adoption journey, they deliberately procured for the distinct advantages of doing things in new ways, even if that meant looking beyond the traditional “big boys” in IT to more agile and flexible IT partners. Hence Graeme’s accent on how the balance of power in IT is shifting in favour of the little guys. I must admit, we were tickled to be described as a “minnow”. If that is how we are regarded by the industry, we are not so bothered. It’s what our customers think that counts.

Whereas our cloud architects and solution designers are quite used to creating amazing things, it takes a customer like ATOC to recognise how that has direct relevance to their business. At an enterprise level, entrusting their IT estate to the AWS Cloud and their application layer to a raft of open source solutions such as ForgeRock, MySQL, Jaspersoft, Hadoop/Amazon EMR and others is the right way forward for their business (supporting revenue of a stonking £7.5 billion-plus per year).

So bravo to them for their vision.

And thanks to the editors at Computing for researching and running the story. A lesson for all of us in the IT industry.

Could this be the shape of things to come?


Some things take a bit of unravelling. But to solve mysteries, you don’t have to be Sexton Blake (doubt you remember him?).

With the help of search engines, a few analysts’ reports and a bit of time, the fog quickly clears to reveal (another?) new wave coming in the IT industry.

Only this time, we’re talking databases.

Databases?  Ok – not the sexiest of subjects – I grant you – but we would do well to note the emerging trend in NoSQL and in open source distributed datastores generally.

Fear not. SQL hasn’t suddenly abdicated its crown, or become the object of sordid revelations about its private life. Far from it. SQL has deservedly won its place in the history of computing, especially for transactional databases.

But apparently not all databases were created the same (all the vendors will tell you that… and show you their glossy marketing brochures to back up their assertions – right?).

Mystery solved – NoSQL means “Not Only” SQL

NoSQL doesn’t mean literally “No” SQL. And it is this “not only” aspect that is causing a bit of a stir. NoSQL databases are created in an entirely different way compared to traditional SQL databases.

In fact there are four main kinds: key-value stores, column stores, document databases and graph databases.


Technology Landscape: No SQL

In the blog by our CTO on 28 May, Robin made mention of one such technology: a graph database called Neo4j, which was one of the things that caught his eye at Big Data London.

I first heard Neo4j explained by Ian Robinson back in February this year at SyncConf. I was somewhat riveted by the capability of a graph database, which is regarded by many as a superset of all the others.

Here at Smart421, we have already been working with a number of these technologies on customer engagements – for example with Cassandra, one of the leading column data stores, and MongoDB, which is arguably the leading document database, overtaking CouchDB.

If you’re a Solution Architect or Technical Architect, you will almost certainly be tracking these and several others.

If you’re a developer, programmer or involved in some capacity in DevOps, you will almost certainly have had a play or done something more serious with NoSQL (if not, why not?).

For what it’s worth, I’ve been quite impressed by some I’ve seen. Take Riak, a key-value distributed datastore by Basho which, although a comparatively young business, has an impressive management team exported out of Akamai and has already built a strong user base in the United States. Riak looks like it deserves more prominence over here; I’ll stick my neck out and predict it will rise to become a major name before too long. Basho will be sponsoring MobDevCon this July, where two “Smarties” will be speaking.

Basho will also be organising RICON Europe, a tech led event for those interested in all-things NoSQL which will be coming to London in October (remember – you heard it here first).

NoSQL is on the up – it’s official

As a collective, NoSQL database management systems are on the move and picking up pace. Market analysts are tracking their progress carefully.

Gartner for example has predicted that NoSQL could account for 20 per cent of market penetration as early as 2014, which seems rather astonishing until you see how Gartner arrives at its assumptions. Merv Adrian, ex-Forrester and now Research VP at Gartner (@merv), appears to have done his homework on this and he is seeing NoSQL rise from basically a standing start.

As recently as 2012, Adrian quantified NoSQL Database Management Systems as having a market penetration of 1 per cent to 5 per cent of target audience (Adrian in Lapkin, 2012, pp. 36-38), up from his 2011 assessment of less than 1 per cent of target audience (Adrian in Edjlali and Thoo, 2011, pp. 31-33).

Merv Adrian and other market watchers will be well worth listening to this year and next if you get the chance at a Gartner event; or, if you have a Gartner research subscription, perhaps you should request an inquiry call sooner rather than later.


araven07 (2011) Introduction to Graph Databases. [Recording of presentation by E. Eifrem, 14 July 2011]. Available at <> [accessed 23 May 2013].

Amazon Web Services (2013) AWS Marketplace: Riak. [Online]. Available at <> [accessed 29 May 2013].

Adrian, M. (2012) Who’s Who in NoSQL DBMS. Gartner. 07 Jun. G00228114.

Aslett, M. (2013) ‘Navigating 451 Research’s revised database landscape map’. 451 Research. 10 January. [Online]. Available at <> [accessed 25 May 2013].

Aslett, M. (2013) ‘451 Research survey highlights growing adoption of NoSQL databases’. 451 Research. 16 May. [Online]. Available at <> [accessed 25 May 2013].

De Castro, R. (2012) ‘Why I think Riak is a great NoSQL’. DZone. 30 July. [Online]. Available at <> [accessed 26 May 2013].

Eagle, L., Brooks, C. and Sadowski, A. (2013) ‘New wave databases in the cloud, part 3: SoftLayer and Basho’. 451 Research. 01 May. [Online]. Available at <> [accessed 27 May 2013].

Edjlali, R. and Thoo, E. (2011) Hype Cycle for Data Management, 2011. Gartner. 26 Jul. G00213386.

Eifrem, E. (2011) Overview of NoSQL. [Recording of presentation by E. Eifrem]. Available at <> [accessed 23 May 2013].

Kovacs, K. (2013) Cassandra vs MongoDB vs CouchDB vs Redis vs Riak vs HBase vs Couchbase vs Neo4j vs Hypertable vs ElasticSearch vs Accumulo vs VoltDB vs Scalaris. [Online]. Available at <> [accessed 08 June 2013].

Lapkin, A. (2012) Hype Cycle for Big Data, 2012. Gartner. 31 Jul. G00235042.

Novet, J. (2013) ‘Basho Technologies takes aim at more enterprises with upgrades’. GigaOM. 21 February. [Online]. Available at <> [accessed 26 May 2013].

Ricon (2013) RICON 2013. Available at <> [accessed 25 May 2013].

Villanovauniversity (2011) A Tour of the NoSQL World. [Recording of lecture by David Cassel, Senior Consultant at MarkLogic, at the Department of Computer Science, Villanova University, United States of America, 07 Nov 2011]. Available at <> [accessed 27 May 2013].


It was great to see National Rail Enquiries (NRE) win an award at the European Outsourcing Association Awards in Amsterdam last Friday (26 April).

In recognition of their SIAM (Service Integration and Management) outsourcing strategy, NRE won the award for Best Multi-sourcing Project of the Year, beating strong category finalists 60k and Centrica (Centrica won this category in 2012).

Smart421 is pleased to be a large part of that initiative, performing the Managed Services element on top of an AWS Cloud platform for several key NRE applications.

As customers struggle with the chains of traditional SI relationships, Smart421 is providing agile delivery and innovation methods in the IaaS world.

Many analysts see this as “third generation outsourcing” and a change for good – and so do I.



What is the OTA?

The OpenTravel Alliance provides a community where companies in the electronic distribution supply chain work together to create an accepted structure for electronic messages, enabling suppliers and distributors to speak the same interoperability language, trading partner to trading partner.

What does the OTA look like?

A set of XML schemas (XSDs) that define travel-related entities, organised into common and domain-specific types. Domains include Air, Rail, Hotel, Vehicle, Insurance and Cruise. An AirPriceRS example is shown below (from XMLSpy):


OTA Pros?

  • Off-the-shelf extensible set of components developed by the travel industry that can save valuable time and effort when designing your XML message structures.
  • Provides a common vocabulary.
  • Helps towards developing a canonical schema/data model.
  • The OTA is updated twice a year, and all schemas are backwards compatible.
  • Maximum flexibility – all elements and attributes are optional which allows companies to choose which parts they want to use.
  • Enables companies to derive maximum value from legacy systems by wrapping them in a service façade.

OTA Cons?

  • Provider systems may only support subsets of the OTA.
  • Companies often have their own internal vocabulary for OTA entities – mapping from one to the other can be confusing.
  • Bespoke schemas will still be required. However, XML namespaces allow OTA and bespoke vocabularies to be used side-by-side.
  • If you make any custom extensions to the OTA, these will be lost when moving to a new OTA version.
  • The flexibility of OTA entities can sometimes result in unwieldy messages.
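To illustrate the namespaces point above, the Python sketch below shows how an OTA-style element and a bespoke element can sit side-by-side in one message. The `acme` namespace and both element names are invented for illustration; the OTA namespace URI shown is the one commonly used by the 2003/05 schemas.

```python
import xml.etree.ElementTree as ET

# OTA namespace as published with the 2003/05 schemas; the bespoke
# "acme" namespace is a hypothetical internal vocabulary.
OTA = "http://www.opentravel.org/OTA/2003/05"
ACME = "http://example.com/acme/bookings"

ET.register_namespace("ota", OTA)
ET.register_namespace("acme", ACME)

# An OTA-style request wrapping a bespoke element side-by-side.
root = ET.Element(f"{{{OTA}}}AirPriceRQ")
ET.SubElement(root, f"{{{OTA}}}POS")
internal = ET.SubElement(root, f"{{{ACME}}}LoyaltyTier")
internal.text = "Gold"

xml_out = ET.tostring(root, encoding="unicode")
print(xml_out)
```

Because each vocabulary keeps its own namespace, the bespoke `LoyaltyTier` element can travel inside the OTA message without colliding with, or being mistaken for, an OTA-defined entity.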

Why use the OTA?

The choice of whether to use the OTA or a bespoke solution will ultimately depend on how applicable the OTA is for a specific travel sector and the take-up of OTA by provider systems in that sector. Smart421’s experience of working with Virgin Atlantic to develop their SOA offering is that using the OTA is beneficial.



Photo © Ciprian.d Stock Free Images & Dreamstime Stock Photos


As you already know from the blog of 5 Dec 2012, as we were preparing to tell the world about our Big Data capabilities, our CTO Robin Meehan coined a cracking one-liner:

“It’s not about survival of the fittest – it’s about survival of the best informed”

We liked it so much we decided to include it on our go-to-market materials.

Since then, we’ve witnessed some upsides, such as how our customers have decided to deliberately ‘think outside of the box’ to better understand their portfolio of brands and be better equipped to attract customers without cannibalisation of other lines of business.

A great case in point would have to be which is already taking advantage of the analytical power in cloud computing with Big Data on AWS to understand its brand and build its business.

Aviva launched in August 2011. With a national reputation and strong brand in the general insurance sector at stake, they wanted to know how their multi-touchpoint cross marketing activities impacted each brand in their enterprise portfolio.

Smart421 was invited to assist. We architected the Cloud instances using Amazon Web Services (AWS) and developed the algorithms needed to maximise the power of the customer’s own data and the Big Data analytical environment. This gave the business a level of insight not previously possible with traditional on-premise business intelligence tools and techniques.

And the customer was kind enough to go on record about what we had been able to do.

“Smart421’s Cloud architects gave us a head start on making Big Data real for us, including how business insights are really delivered, what the costs really are, and how the technology really works in our context. Their output contributes to how we differentiate ourselves in a crowded market.”
Keith Misson, Operations Director at (an Aviva company).
Naturally, we asked if we could feature this on our website, because it evidences the transformational effect a well-architected IT strategy can deliver for a business.


Big Data is a good example of how technologies developed for one use have been deployed for an altogether different use. I think that Robin’s original quip on survival has actually gone on to deliver a powerful lesson on technological exaptation.




In my previous blog post on using AWS CloudFormation to provision a Centos based environment I mentioned how at first the JSON syntax used within CloudFormation templates can be a little daunting, especially for people with limited scripting experience.

At the time of writing that blog I was using a combination of Notepad++ and the JSON Viewer plug-in to create and maintain my CloudFormation templates. One of the problems with this approach was that the templates are treated as pretty basic text files by Notepad++, and the JSON Viewer plug-in only checks the base JSON syntax, so there is no validation of the objects defined in the JSON to ensure that the correct properties are being created to allow the template to form a valid CloudFormation stack.
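To see why plain JSON validation is not enough, here is a short Python sketch (the resource and property names are invented for illustration): the template below is perfectly valid JSON, yet the misspelt property would only be caught by a CloudFormation-aware tool, such as the new editor or the ValidateTemplate API.

```python
import json

# A syntactically valid JSON document that is NOT a valid CloudFormation
# template: "InstanceTypo" is not a real EC2 property, but a plain JSON
# parser cannot know that.
template = """
{
  "Resources": {
    "MyInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": { "InstanceTypo": "m1.small" }
    }
  }
}
"""

parsed = json.loads(template)   # passes: the JSON syntax is fine
print("JSON syntax OK")

# Only a CloudFormation-aware tool can flag the bogus property name;
# to a generic JSON validator the typo sails straight through.
props = parsed["Resources"]["MyInstance"]["Properties"]
print("InstanceTypo" in props)
```

This is exactly the class of ‘finger problem’ that a schema-aware editor catches at edit time rather than at stack-creation time.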

As an early Christmas present to anyone working with CloudFormation, AWS recently announced a new CloudFormation editor as part of the AWS Toolkits for Visual Studio and Eclipse. I have now had a chance to download and experiment with the Visual Studio version, and am really impressed with how much easier it makes the creation and maintenance of CloudFormation templates.

As a .Net developer I have grown used to regularly relying upon IntelliSense features to aid with code creation, particularly when it comes to accessing and updating object properties. The CloudFormation editor provides this for the objects defined within a template as well as code snippets for all the object types you might want to define within your template. This greatly reduces the number of errors caused by ‘finger problems’ that used to occur when creating a template by hand.

The other really useful feature of the editor is the ability to estimate the costs of a stack before creating it. In the past when quoting for customers we have tended to pull together a design, plug the figures into the AWS Simple Monthly Calculator and then, once we have received the go-ahead, provision the environment. With the ability to estimate costs from a template, we are now looking at generating a base template at the design phase and then costing and building from this, which should help with the speed of environment creation and with clarity around the price implications of altering the design.

Based on our experiences so far, it’s a big thank-you to AWS Santa and his helper elves for the early Christmas present of the CloudFormation editor, and we are all looking forward to further presents from AWS throughout next year :-)


Turning disc by Toshiyuki IMAI.

With most enterprise arrays now providing the ability to allocate more storage than is physically installed and with data moving transparently between tiers of low cost and high performance with no human intervention – what could possibly go wrong?

For a long time now SAN infrastructure has been at the core of the Data Centre, satisfying a need to hoard as much data as can possibly be collected. With required capacity growing at an exponential rate, it has become common to predict massive data growth when designing services, often calculating growth for two or more years ahead. This leads to large amounts of storage being purchased in anticipation of things to come. By the time two years have passed, some of the attached systems will have become legacy, superseded by shiny new solutions, and some will simply not have grown as expected. Even those systems that have behaved as designed are probably using no more than 70% of their storage, as likely as not due to a “fudge factor” arbitrarily thrown into the mix by the architect.

Virtualised storage arrays (each in its own way, it would seem) allow us, within reason, to allocate as many devices of whatever size we like to as many hosts as we see fit, with only a small pool of physical disk. This lets us satisfy our storage-hungry analysts while buying the disk in a phased manner and delaying spend until it is necessary. Add in the reducing cost per Gigabyte over time and the benefits mount up.

At the same time, array architecture has developed in such a way that it is harder to facilitate small amounts of physical growth. With larger RAID groups becoming more common and larger disk sizes compounding the issue, traditional SAN allocations became inflexible and, for a small requirement, potentially expensive. The smallest building block becomes a RAID group, and a one-Terabyte requirement on a traditional storage array might require the business to grow the array by thirty Terabytes or more. With virtualisation comes the ability to create more devices for allocation without having to have matching physical storage installed. There is no longer a need to grow a system until the written data exceeds thresholds, and the larger building blocks are far more palatable when they are shared between all attached systems.

As with everything in life, there is a flip side. The balance in this case is increased risk: what if the physical pool fills up? At the very least, writes will have to stop, and in some cases I/O stops altogether; neither scenario is one that a business will want to contemplate. This is where careful planning and monitoring are essential.

In a simple virtualised environment, physical storage will be configured into Storage Pools and a number of virtual devices created and attached to these pools. As data is written to the logical device it will be written across the physical disks making up the storage pool.

Considerations when designing virtual solutions:

  • The number of servers that will be connected – To provision for sufficient connectivity and I/O capability. Any additional I/O profiling for attached systems, though often not available, will also be useful.
  • The average server allocation size and utilisation – To calculate required space for allocation and physical writes.
  • The maximum allocation size –  Large allocations could result in a single server having the ability to fill all available pool space.
  • Maturity of the service – Mature systems will require larger initial space with slower growth; new systems may start small and grow into the allocated space over an extended period.
  • Performance – Is there sufficient throughput for the attached servers?
  • Criticality – There is an associated risk with over provisioning. There are ways to mitigate this and in some cases systems will be too critical to the business to accept any additional risk.
  • Mixed Production, Test and development environments – Will the different environments share the same pools or should they be ring fenced?
  • Alerting  – What level of alerting is configured? At what level of utilisation do warnings start? Can unexpected write bursts from a single application be identified and highlighted?
  • Capacity for growth – Can the pool be quickly expanded?
  • Time to deploy – How quickly can the vendor react to increase the pool size?
  • Plan B – Can space be redeployed to production systems?
  • Cost reduction per GB – Flexibility may be cited as a benefit, but the main driver for virtualised storage is to drive the cost per GB down; for the most part this is achieved by deferred purchase and tiered storage.
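To make the “maximum allocation size” consideration above concrete, a simple check like the following Python sketch can flag allocation requests that could exhaust a thin pool. The thresholds and figures here are invented for illustration; real policies would reflect your own attitude to risk.

```python
def allocation_risk(installed_tb, written_tb, new_allocation_tb):
    """Flag a thin allocation that could exhaust the physical pool.

    A thin device only consumes physical space as data is written,
    so the risk is the new allocation being fully written before
    the pool can be grown. Thresholds are illustrative only.
    """
    free_tb = installed_tb - written_tb
    if new_allocation_tb >= free_tb:
        return "HIGH: a single server could fill the remaining pool"
    if new_allocation_tb >= 0.5 * free_tb:
        return "MEDIUM: review pool growth plans before allocating"
    return "LOW"

# Example: 100 TB installed, 70 TB physically written.
print(allocation_risk(100, 70, 40))   # a 40 TB request exceeds the 30 TB free
print(allocation_risk(100, 70, 10))   # a 10 TB request is comfortably low risk
```

The same arithmetic, applied across all attached hosts, gives the subscription figures discussed below.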


With thin provisioning, monitoring is essential to maintaining a low risk environment.

As a minimum, capacities must be monitored on a regular basis, with the frequency depending on the activity in the environment, the attitude to risk, and the subscription and utilisation figures. In most large environments daily monitoring should be sufficient, or in some cases even weekly.

We have found the following capacity metrics to be the most useful; we collect them each weekday so that we can produce trend analysis to support future forecasts.

For each pool on each array we collect:

  • Installed storage
  • Used
  • Virtual TB
  • Allocated
  • Subscription
  • Nett subscription

Experience shows that the larger environments have a very linear growth both in allocated and written storage and the trending figures provide quite accurate forecast estimates.
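Given that linear growth, a least-squares trend over the collected figures is often enough to estimate when a pool will fill. A Python sketch (the sample figures are invented for illustration):

```python
def days_until_full(samples, installed_tb):
    """Fit a least-squares line to (day, written_tb) samples and
    extrapolate to the day the installed capacity is reached.

    Returns None if the written figure is flat or shrinking,
    since no fill date can be forecast in that case.
    """
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(w for _, w in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * w for d, w in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # TB per day
    intercept = (sy - slope * sx) / n
    if slope <= 0:
        return None
    return (installed_tb - intercept) / slope

# Five weekday samples of written TB in a 100 TB pool, growing 0.5 TB/day.
samples = [(0, 70.0), (1, 70.5), (2, 71.0), (3, 71.5), (4, 72.0)]
print(days_until_full(samples, 100))   # 60.0 days from day 0
```

The same fit applied to the allocated figure gives early warning of subscription thresholds, well before the written data catches up.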

Performance figures should also be reviewed but a less regular basis is sufficient – weekly or even monthly.

With virtualisation improving the overall utilisation and ever larger physical disks being installed we see as a result that the I/O profile per disk is also increasing. These performance trends should be monitored and reviewed to anticipate thresholds being reached.

As a result of the increased utilisation and larger physical disks, we are also seeing the number of host connections to front-end ports (fan-out ratio) and the amount of storage allocated to front-end ports both increasing. The host ports should be monitored in the same way as the disks to anticipate performance thresholds being reached.

There is no point increasing the size of the pools to accommodate further allocations if the path to the data is already fully loaded.

This performance monitoring is specific to the capacity planning and does not replace the normal daily performance monitoring and alerting on the storage arrays.

What can be done to prepare for emergencies?

Most virtualised storage solutions have the functionality to preallocate storage. For systems that cannot afford any outage the storage can be preallocated in advance.

Understand the process with your vendors for increasing the installed physical capacity.

  • How long does the vendor take to install new storage, from request to completion?
  • Can elements (such as procurement) be prepared in advance of the need or post deployment to avoid delaying the process?
  • Can hardware be left on site to reduce hardware deployment timescales?
  • Are sufficient resources available to accept additional disk?
    • Free drive bays
    • Cache
    • Power
    • Floor space

Understand the environment.

  • What systems are attached that could be sacrificed to maintain critical services?
  • Are there device replicas in the pool for non-production purposes that could be released?

These observations are based on simple virtualised pools, most hold true for tiered storage environments but these also bring their own sets of concerns and even more metrics to follow.


If you have any experience of supporting large scale infrastructures, whether they are based on ‘old school’ tin and wires, virtual machines or cloud-based technologies, you will know that it is important to be able to create consistently repeatable platform builds. This includes ensuring that the network infrastructure, ‘server hardware’, operating systems and applications are installed and configured the same way each time.

Historically this would have been achieved via the use of the same hardware, scripted operating system installs and, in the Windows application world of my past, application packagers and installers such as Microsoft Systems Management Server.

With the advent of cloud computing the requirements for consistency are still present and just as relevant. However, the methods and tools used to create cloud infrastructures are now much more akin to application code than the shell script / batch job methods of the past (although some of those skills are still needed). The skills needed to support this are really a mix of development and sys-ops, and have led to the creation of DevOps as a role in its own right.

Recently, along with one of my colleagues, I was asked to carry out some work to create a new AWS based environment for one of our customers. The requirements for the environment were that it needed to be:

  • Consistent
  • Repeatable and quick to provision
  • Scalable (the same base architecture needed to be used for development, test and production just with differing numbers of server instances)
  • Running Centos 6.3
  • Running Fuse ESB and MySQL

To create the environment we decided to use a combination of AWS CloudFormation to provision the infrastructure and Opscode Chef to carry out the installation of application software. I focussed primarily on the CloudFormation templates while my colleague pulled together the required Chef recipes.

Fortunately we had recently had a CloudFormation training day delivered by our AWS Partner Solutions Architect, so I wasn’t entering the creation of the scripts cold, as at first the JSON syntax and the number of things you can do with CloudFormation can be a little daunting.

To help with script creation and understanding I would recommend the following:

For the environment we were creating the infrastructure requirements were:

  • VPC based
  • 5 subnets
    • Public Web – To hold web server instances
    • Public Secure – To hold bastion instances for admin access
    • Public Access – To hold any NAT instances needed for private subnets
    • Private App – To hold application instances
    • Private Data – To hold database instances
  • ELB
    • External – Web server balancing
    • Internal – Application server balancing
  • Security
    • Port restrictions between all subnets (i.e. public secure can only see SSH on app servers)

To provision this I decided that rather than one large CloudFormation template I would split the environment into a number of smaller templates:

  • VPC Template – This created the VPC, Subnets, NAT and Bastion instances
  • Security Template – This created the Security Groups between the subnets
  • Instance Templates – These created the required instance types and numbers in each subnet

This then allowed us to swap out different Instance Templates depending on the environment we were creating (i.e. development could have single instances in each subnet, whereas test could have ELB-balanced pairs and production could use features such as auto-scaling).

I won’t go into the details of the VPC and Security Templates here, suffice it to say that with the multiple template approach the outputs from the creation of one stack were used as the inputs to the next.
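As a sketch of that output-to-input chaining (the resource and output names below are illustrative, following the Res/Par naming convention used later in this post), the VPC template might declare an Outputs section like:

```json
"Outputs" : {
  "VpcId" : {
    "Description" : "VPC ID, passed as a parameter to the Security and Instance stacks",
    "Value" : { "Ref" : "ResVPC" }
  },
  "PrivateAppSubnetId" : {
    "Description" : "Private App subnet ID",
    "Value" : { "Ref" : "ResPrivateAppSubnet" }
  }
}
```

The next template then declares matching entries in its Parameters section (e.g. a ParVpcId of type String), and the values reported by the VPC stack’s outputs are supplied when the next stack is created, either by hand from the console or scripted.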

For the Instance Templates the requirement was that the instances would be running Centos 6.3 and that we would use Chef to deploy the required application components onto them. When I started looking into how we would set the instances up to do this, I found that the examples available for Centos and CloudFormation were extremely limited compared to Ubuntu or Windows. As this is the case, I would recommend working from a combination of the Opscode guide to installing Chef on Centos and AWS’s documentation on integrating AWS with Opscode Chef.

Along the way to producing the finished script there were a number of lessons which I will share with you to help with your installation. The first of these was the need to use an AMI from the AWS Marketplace. After identifying the required AMI, I tried running up a test template to see what would happen before signing up for it in the Marketplace; in CloudFormation this failed with the slightly misleading error ‘AccessDenied. User doesn’t have permission to call ec2::RunInstances’. Once I’d signed our account up for the AMI this was cured.

The next problem I encountered was really one of my own making / understanding. When looking at AMIs to use, I made sure that we had picked one that was Cloud-Init enabled; in my simplistic view I thought this meant that commands such as cfn-init, which are used within CloudFormation to carry out CloudFormation-specific tasks, would already be present. This wasn’t the case, as the cfn- commands are part of a separate bootstrap installer that needs to be included in the UserData section of the template (see below):

"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
 "#!/bin/bash -v\n",
 "function error_exit\n",
 "{\n",
 "  cfn-signal -e 1 -r \"$1\" '", { "Ref" : "ResFuseClientWaitHandle" }, "'\n",
 "  exit 1\n",
 "}\n",
 "# Install the CloudFormation tools and call init\n",
 "# Note do not remove this bit\n",
 "easy_install\n",
 "cfn-init --region ", { "Ref" : "AWS::Region" },
 " -s ", { "Ref" : "AWS::StackName" }, " -r ResInstanceFuse ",
 " --access-key ", { "Ref" : "ResAccessKey" },
 " --secret-key ", { "Fn::GetAtt" : [ "ResAccessKey", "SecretAccessKey" ] },
 " -c set1",
 " || error_exit 'Failed to run cfn-init'\n",
 "# End of CloudFormation Install and init\n",
 "# Make the Chef log folder\n",
 "mkdir /etc/chef/logs\n",
 "# Try starting the Chef client\n",
 "chef-client -j /etc/chef/roles.json --logfile /etc/chef/logs/chef.log > /tmp/initialize_chef_client.log 2>&1 || error_exit 'Failed to initialise chef client' \n",
 "# Signal success\n",
 "cfn-signal -e $? -r 'Fuse Server configuration' '", { "Ref" : "ResFuseClientWaitHandle" }, "'\n"
]]}}

As the cfn-signal command that comes as part of the bootstrap installer is used for messaging to any wait handles defined in the template, leaving the installer out can lead to long breaks at the coffee machine before any feedback is received.
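For reference, the wait handle referred to in the script needs a matching pair of resources in the template. This is a minimal sketch using the ResFuseClientWaitHandle name from the snippet above; the wait condition name, DependsOn target and timeout value are illustrative, not taken from our actual template:

```json
"ResFuseClientWaitHandle" : {
 "Type" : "AWS::CloudFormation::WaitConditionHandle"
},
"ResFuseClientWaitCondition" : {
 "Type" : "AWS::CloudFormation::WaitCondition",
 "DependsOn" : "ResInstanceFuse",
 "Properties" : {
  "Handle"  : { "Ref" : "ResFuseClientWaitHandle" },
  "Timeout" : "1200"
 }
}
```

With these in place, stack creation blocks until cfn-signal posts a success or failure to the handle, or the timeout expires.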

The final lesson was how to deploy the Chef client and its configuration to the instances. Chef is a rubygems package, so this and its supporting packages need to be present on the instance before it can be installed. Within CloudFormation, packages can be installed via the packages configuration sections of AWS::CloudFormation::Init, which for Linux supports rpm, yum and rubygems installers. Unfortunately, for the AMI we chose the available repositories didn’t contain all the packages necessary for our build; to get around this I had to install the rbel repository definitions via rpm before using a combination of yum and rubygems to install Chef:

"packages" : {
 "rpm" : {
  "rbel" : ""
 },
 "yum" : {
  "ruby" : [],
  "ruby-devel" : [],
  "ruby-ri" : [],
  "ruby-rdoc" : [],
  "gcc" : [],
  "gcc-c++" : [],
  "automake" : [],
  "autoconf" : [],
  "make" : [],
  "curl" : [],
  "dmidecode" : [],
  "rubygems" : []
 },
 "rubygems" : {
  "chef" : []
 }
}

Once Chef was installed the next job was to create the Chef configuration files and validation key on the instance. This was carried out using the “files” options within AWS::CloudFormation::Init:

"files" : {
 "/etc/chef/client.rb" : {
  "content" : { "Fn::Join" : ["", [
   "log_level :info", "\n", "log_location STDOUT", "\n",
   "chef_server_url '", { "Ref" : "ParChefServerUrl" }, "'", "\n",
   "validation_key \"/etc/chef/chef-validator.pem\"", "\n",
   "validation_client_name '", { "Ref" : "ParChefValidatorName" }, "'", "\n"
  ]]},
  "mode" : "000644",
  "owner" : "root",
  "group" : "root"
 },
 "/etc/chef/roles.json" : {
  "content" : {
   "run_list" : [ "role[esb]" ]
  },
  "mode" : "000644",
  "owner" : "root",
  "group" : "root"
 },
 "/etc/chef/chef-validator.pem" : {
  "source" : { "Fn::Join" : ["", [{ "Ref" : "ParChefKeyBucket" }, { "Ref" : "ParChefValidatorName" }, ".pem"]]},
  "mode" : "000644",
  "owner" : "root",
  "group" : "root",
  "authentication" : "S3Access"
 }
}

The hardest part of this was the validation key: as we had multiple instances wanting to use the same key, we decided to place it within an S3 bucket and pull it down from there. During the script creation I tried multiple ways of doing this, such as using s3cmd (which needed another repository and set of configuration to run), but found that using the files section worked best.
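For completeness, the "S3Access" name used in the files section has to be declared in an AWS::CloudFormation::Authentication block alongside AWS::CloudFormation::Init in the instance metadata. A minimal sketch, reusing the key and bucket references that appear in the template (the exact placement follows the AWS documentation):

```json
"AWS::CloudFormation::Authentication" : {
 "S3Access" : {
  "type" : "S3",
  "accessKeyId" : { "Ref" : "ResAccessKey" },
  "secretKey" : { "Fn::GetAtt" : [ "ResAccessKey", "SecretAccessKey" ] },
  "buckets" : [ { "Ref" : "ParChefKeyBucket" } ]
 }
}
```

This is what allows cfn-init to authenticate the S3 GET when it fetches the .pem file declared in the files section.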

Once Chef was installed, the client was started via the UserData section (basically a shell script); this then handed control of what additional software and configuration is installed on the instance to the Chef server. How much Chef does at this stage is a bit of a balancing act, as the wait handler within the template will fail the stack creation if its timeout period is exceeded.

As you can probably tell if you have got this far, the creation of the templates took quite a few iterations to get right as I learnt more about CloudFormation. When debugging, it is worth remembering to set the stack not to roll back on failure. This allows you to access the instances created and find out how far they got within the install; as the UserData section is basically a shell script with some CloudFormation hooks, more often than not the faults will be the same as you would see on a standard non-AWS Linux install. Also, for a CentOS install, remember that the contents of /var/log are your friend, as both cloud-init and cfn-init create log files there for debugging purposes.

After watching Werner Vogels’ keynote speech from AWS re:Invent, it’s clear that treating infrastructure as a programmable resource (i.e. using technologies such as CloudFormation and Chef) is somewhere organisations need to be moving towards, and based on my experience so far I will be recommending this approach on all future AWS environments we get involved with, even the small ones.

Whilst there is a bit of a learning curve, the benefits of repeatable builds, known configuration and the ability to source-control infrastructure far outweigh any shortcomings, such as the lack of granular template validation, which I’m sure will come with time.

If you have any comments or want to know more please get in touch.

Having recently spent time working on the IBM Worklight platform, I thought it would only be fair if I documented some of my findings. No disrespect to the IBM’ers, but it’s reasonably fair to say that documentation is a little sparse in places, so let’s give a little back to the community by discussing some of the hurdles. Let’s not dwell on what Worklight is, as Andy has already covered this well in a previous post; let’s just dive right into some of the technical aspects.

General Thoughts

Development on the whole is a relatively straightforward process, even for someone like me who often steers well clear of anything involving web presentation technologies (it reminds me of dark nights in the university labs, spending hours trying to get a button to align correctly the night before coursework submission *shudder*).

The Worklight Eclipse plugin provides a good drag & drop GUI builder, but with support only for Dojo. I opted to drop Dojo and go for jQuery. jQuery is very well documented, and it is easy to get help should you require it. One of the main things I like about jQuery is its showcase and examples, which are documented very well and keep the learning curve generally quite small, but also the ThemeRoller: it becomes incredibly easy to customise the default colour scheme and drop the generated CSS into your app. It always amazes me how excited the marketing guys get if you can add the corporate colour scheme to your app (thanks Joseph!).

Continuous Integration

We’re big fans of CI here, so I was quite keen to understand how easy it would be to have our Worklight apps built from the command line, and ultimately on a Jenkins CI box. The chaps over at IBM have done a fantastic job of exposing an array of Ant tasks that help with building and deploying apps; you’ll almost certainly want to read through module 42 on the getting started page, which covers these tasks:

  • adapter-builder – Use this task to build your adapter and create the .adapter file
  • adapter-deployer – Use this to deploy a .adapter file to a Worklight server (very useful for deploying to a remote AWS instance)
  • war-builder – Use this to build the server .war file that you will deploy to the application server (some manual tweaks are required)
  • app-builder – Use this to build the .wlapp files that you will deploy into your Worklight container
  • app-deployer – Use this to deploy your .wlapp files onto a Worklight server (useful again for remote deployments)

Let’s have a closer look at each of those targets and how we’re using them here at Smart421:

Getting the party started, with init

Firstly, grab the Worklight Ant jar (you’ll need to have purchased the WL Enterprise edition for this) and add it into your Ant context like so:

<target name="init">
 <echo message="Loading ANT Tool"/>
 <taskdef resource="com/worklight/ant/">
  <classpath>
   <pathelement location="./build-config/worklight-ant.jar"/>
  </classpath>
 </taskdef>
 <property environment="env"/>
</target>

Now you’re free to use the ant tasks anywhere in your build script.

Building & Deploying WL Adapters

You need to build each adapter individually, and then deploy each one. You can create the following ant targets to do that for you:

<target name="buildAdapters" depends="init">
 <echo message="Building all adapters"/>
 <!-- Build your other adapters here, same as above-->
</target>

<target name="deployAdapters" depends="init">
 <property name="WLSERVERHOST" value="http://my_aws_ip_here:8080/SmartConf"/>
 <echo message="Deploying all adapters"/>
 <!-- Deploy your other adapters here, same as above-->
</target>
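The targets above leave out the actual task invocations. As a very rough sketch of how they might look (the adapter name, paths and attribute spellings here are assumptions from memory, not from our build scripts; check them against the module 42 examples for your Worklight version):

```xml
<!-- Illustrative only: adapter name, paths and attribute names are placeholders -->
<adapter-builder folder="./adapters/MyAdapter" destinationfolder="./bin"/>
<adapter-deployer deployable="./bin/MyAdapter.adapter" worklightserverhost="${WLSERVERHOST}"/>
```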

Building the Server WAR

You can build the server war file using the war-builder task, as shown below. It is important to note, however, that I needed to do some tweaking to the war file to avoid any post-installation configuration tasks. According to the Worklight forums, there doesn’t appear to be a way to include files in the WEB-INF when the war is created, which means that once you’ve expanded the war on the application server you’d need to manually replace the default web.xml and context.xml files (to set your datasources). This can be quite frustrating, so in true Blue Peter fashion, I’m updating the war file with files I created earlier.

<target name="warBuilder" depends="init">
 <echo message="Building the war file"/>
</target>

<target name="updateWar">
 <echo message="Updating the war file"/>
 <war destfile="./bin/SmartConf.war" update="true" webxml="./build-config/web.xml">
  <metainf dir="./build-config" includes="context.xml"/>
 </war>
</target>

Building & Deploying the WL Apps

You’ll also want to automate the building and deploying of the wlapp files; you can do this with the following:

<target name="buildApps">
 <echo message="Building all WL Apps"/>
</target>

<target name="deployApps">
 <property name="WLSERVERHOST" value="http://my_aws_ip_here:8080/SmartConf"/>
 <echo message="Deploying all WL Apps"/>
</target>

Building the Native Application Distributable Binaries

You’ve survived this far, and I’m thankful to you for that; however, we’re not quite finished yet. Worklight will generate the native projects for you, but it’s your own responsibility to take those project directories and build the Android APK, the iOS IPA etc. IBM draw the line at this point, so you need to build them yourself. You can do this for all of the environments quite easily using additional Ant tasks; Android is the easiest:

<target name="client-android" depends="buildAndroid">
 <!-- Run the android native build, in its own directory -->
 <ant antfile="./apps/SmartConf/android/native/build.xml" target="release" useNativeBasedir="true"/>
 <!-- Copy up the apk into the bin area, for consistency -->
 <copy file="./apps/SmartConf/android/native/bin/SmartConf-release-unsigned.apk" tofile="./bin/SmartConfSmartConfAndroid.apk" overwrite="true"/>
</target>

Building BlackBerry and iOS apps from the command line is slightly more involved, and I feel they warrant their own blog post; alternatively, get in touch and we’d be glad to offer some assistance. Bear in mind you will need an Apple Mac to build iOS, for which we’ve installed a shared box in our build environment.

Other Gotchas

As with taking on board any emerging technology, there will always be plenty of head-scratching moments where the documentation is thin and Uncle Google doesn’t provide much help. Fortunately for you, we’re a nice bunch of guys here at Smart421, so we’ll share some of the things that had us pondering over a coffee:

  • The trailing “/” in the Worklight server host URL is required, don’t ask why, it just is.
  • The versioning conventions for Worklight are a little strange: 5.0.0.270 = v5.0 GA, but the developer edition is =
  • If you have an existing WL server installation, don’t upgrade it in place: the upgrade fails to update all components and leaves you with some obscure error messages that are hard to trace. The best plan of action is to uninstall and install again, making sure you check for updates at install time via the wizard.
  • App crashes with Unreachable host? When you build and deploy the app to your device, it has the WL server IP hardcoded into it. The next day, when you arrive at the office and hop onto the WiFi, DHCP gives you a different IP address… It’s a classic schoolboy error, but it catches us out from time to time. A simple solution, if you don’t have a spare box lying around, is to install the Worklight server on AWS and deploy to the cloud, bearing in mind that it needs to be open to your mobile devices over the Internet in a real-life installation anyway.
  • Results is undefined on an adapter call? A subtle difference here: HTTP adapters use invocationResult.results, whereas SQL adapters use invocationResult.result. That last character makes all the difference.
  • Response cannot be parsed, please contact support: this is an annoying error that you often see in the developer preview; just make sure you set the body onload to WL.Client.init(), as mentioned here.
  • Unable to use geolocation services on Android? You’re probably seeing “Caught security exception registering for location updates from the system, this should only happen in DumpRenderTree”. Make sure you have the geolocation permission in your Android manifest, as detailed here.
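The results/result difference is easy to guard against in the success callback. A small hedged sketch (the helper name is ours, not part of the Worklight API) that copes with either response shape:

```javascript
// Returns the rows from an adapter response regardless of adapter type.
// HTTP adapters populate invocationResult.results (plural), while SQL
// adapters populate invocationResult.result (singular).
function adapterRows(response) {
    var r = response.invocationResult || {};
    return r.results !== undefined ? r.results : r.result;
}
```

Calling this from your onSuccess handler means the same callback logic can serve both adapter types.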


On the whole, I was very impressed with Worklight; it offers a lot of functionality over and above the standard Cordova project. Some of the errors I’ve encountered have been a little frustrating, as often my only source of help was the forums, but I can accept that it is a product in the early stages of adoption, and it will probably go very far. I’m looking forward to working with it in the future.

If you’d like to have a look at some of the apps we’re creating, or generally just want a chat about Worklight and some of its capabilities, or Mobility in general, we’d love to hear from you.
