
A wind-battered union jack at the Cobb at Lyme Regis
Photo: Richard Austin / Rex Features via BBC News

If the wind and rain storms hitting the UK right now teach us one thing, it’s to be ready for every eventuality. The UK is taking the brunt of everything the Atlantic can throw at us. We’ve all seen the evening news reports of terrible flooding in the Somerset Levels and the rail line swinging in mid-air at Dawlish.

You cannot fail to admire the resilience of those impacted.  We salute you.

I’m not going to stray into the politics of it all.  Leave that to the professionals and the pundits. But it does seem an apt moment to remind ourselves of the vital importance of business continuity planning and having a proper disaster recovery strategy in place.

OK, so DR planning may not be the sexiest gig on the planet, but it does offer a hero factor when DR needs to be invoked for real, such as when water starts to come up through the floor or trees crash through the buildings where you keep your servers.

Regular readers of this blog will know we’ve seen very successful strategies using on-premise IT estates, data centres, external hosting, and the Cloud. One great example I think is worth highlighting again today is the innovative approach to using the power of the AWS Cloud for disaster recovery.  Tech journalists as well as Amazon picked up on the outstanding work done by Paul Armstrong and others at Haven Power, part of Drax Group.

Here’s their story.

Read Haven Power’s story on the Smart421 website here

Find out more on the on the AWS website here

Discover more of our blogs about Amazon Web Services here


Our Cloud architects will be available to discuss this and other engagements at Cloud Expo Europe, at London ExCeL on 26-27 February. You can find Smart421 in the AWS Village (stand 1070).

Please Rate and Like this blog. Share it using the social icons below or via the short URL

Our readers want to know what YOU think, so please post a Comment.

Struggling for years with an internally hosted, antiquated legacy solution, we found we just couldn’t keep pace with our evolving service management needs. As a company driven by the demands of our customers, we quite liked what we found in ServiceNow. But how did we arrive there?

Read on.


We evaluated a range of possibilities. We managed the company’s migration to new tooling as a formal project with a proof of concept stage, a pilot migration and then finally a phased migration of the services we support.

Lessons Learnt

  • “Software as a service” (SaaS) is definitely the way to go! With a growing service management portfolio, the availability of tools to support that business is essential.
  • Managing the migration as a formal project, with a defined set of requirements and a planned life-cycle, made sure that the delivery met expectations.
  • We decided not to use the professional services offered by ServiceNow, which was a big mistake. We thought we would be able to adapt the solution to meet our needs with little support and no training. Although we ended up with what was needed, it would have been achieved quicker, at higher quality and at lower cost if we had used the services available from ServiceNow.
  • Build a “regression” test pack as part of the project to enable testing of any future changes (whether made internally or arriving in new ServiceNow versions).
  • We only contracted for the services for a year, in case the tool wasn’t right for us – in hindsight we should have signed up for 3 years to fix the pricing over that term.
  • One of our core requirements was to provide our customers with a “portal”. This requirement grew: customers also wanted to manage and assign tickets, which meant those users would incur additional licence costs, so we developed a portal solution that negates the need for licensed users.
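As a flavour of how such a portal can work: ServiceNow exposes a REST Table API that a portal back-end can call through a single integration account, so end users never need licences of their own. The sketch below only builds the request; the instance name, account and query are hypothetical, and this illustrates the general pattern rather than our actual implementation.

```python
import base64
from urllib.parse import urlencode

def table_api_get(instance, table, query, user, password, limit=10):
    """Build the URL and headers for a ServiceNow Table API GET, e.g. a
    portal fetching a customer's open incidents through one integration
    account rather than per-user licences."""
    params = urlencode({
        "sysparm_query": query,
        "sysparm_limit": limit,
        "sysparm_display_value": "true",
    })
    url = f"https://{instance}.service-now.com/api/now/table/{table}?{params}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {"Authorization": f"Basic {token}", "Accept": "application/json"}
    return url, headers
```

The portal back-end then issues the request itself and renders the JSON for the customer, keeping the integration credentials server-side.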

Final word

OK, so SaaS solutions may not be everyone’s cup of tea. Some are definitely overrated. But we have seen the advantages of going the SaaS route for our business. We’ve never been totalitarian about any one technology or vendor. Our neutrality is well known, and that goes for the tools we select for our own business. We think this is healthy and suggest you think the same way. Always know where the exit door is and have a pathway out of, as well as into, any technology you deploy. But when you find something that works, persist with it, iron out the minor gotchas and be a responsive customer so your supplier knows what you like and what you don’t like.

Put the needs of your customers first, and work backwards from there.



Keynote room before we all troop in…

Here at AWS re:Invent, Werner Vogels (@Werner) was the keynote compere on day 3, and I quickly realised the formula: on day 2 Andy Jassy announced new stuff with more of a business-value message, while the real techy new announcements were left for Werner to unveil. He also made light of the fact that all AWS ever do is bleedin’ mention Netflix, saying that a new drinking game had been created where you have to down a shot every time anyone mentions them.

Anyway, there were a lot of announcements…

  • PostgreSQL on RDS – this actually got a spontaneous whoop from the gathered audience, and so there is clearly a lot of pent-up demand for it. I note that it has multi-AZ support from launch, which hasn’t been the case for other databases on RDS.
  • A new I2 instance family – for uber-high IO
  • A new C3 instance family – for uber-high CPU
  • Global secondary indexes on DynamoDB
  • Cross region Redshift snapshot copies
  • Amazon Kinesis – stream processing that can handle huge data ingest rates and deliver it to a number of consuming applications or services.
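On that last one: AWS document that Kinesis routes each record to a shard by MD5-hashing its partition key and placing the 128-bit result into one shard's hash-key range. Here's a minimal sketch of that mapping, assuming the evenly split ranges a newly created stream gets by default:

```python
import hashlib

def shard_for_key(partition_key, shard_count):
    """Map a partition key to a shard index the way Kinesis does: MD5-hash
    the key and place the 128-bit result into one of `shard_count` evenly
    sized hash-key ranges."""
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2 ** 128 // shard_count
    # min() guards the last, slightly larger range when 2**128 % shard_count != 0
    return min(h // range_size, shard_count - 1)
```

All records with the same partition key land on the same shard (and so stay ordered), which is why a high-cardinality key is needed to exploit a stream’s full ingest capacity.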

So that’s enough for most vendors for a year of releases really.

Later in the day I attended the Amazon WorkSpaces session (which wasn’t previously on the re:Invent agenda, as it hadn’t been announced!) to understand this new offering a bit better. I have to say that the first impression is one of… well… it’s a bit dull. What can I say – it’s a Windows desktop, even if you are accessing it via a laptop, an iPad or an Android tablet. It just does what it says on the tin. It’s the economics that are the really interesting thing. It’s not a market I know really well (i.e. the price points of the key vendors), but it’s fair to say that the Citrix guys on the expensive-looking stand in the vendor expo were not looking too happy – or very busy. In a kind of “my breakfast has just been eaten” way.

Interestingly, I had a chat with an Amazonian on this point, and his view was that Citrix could easily have launched a pay-as-you-go virtual desktop product like Amazon WorkSpaces if they had really wanted to, but I guess Citrix had the classic dilemma of whether or not to cannibalise their on-premise business model. The answer clearly is: you HAVE to, because someone else will regardless.

Both Citrix and VMware’s share prices dipped following the announcement.

VMware’s response was as you would expect from a competitor – as detailed in Erik Frieberg’s blog post:

  • We welcome a new competitor in our market etc etc
  • Ours is better than theirs

What else could he say? But of course I strongly suspect he knows the real unspoken truth here – it’s not AWS’s first product launch you need to worry about (as it will always typically be relatively immature and lacking key features), it’s the pace of product updates that follow it that should concern you.

From a technical point of view, Amazon WorkSpaces uses the same protocol over the wire to “deliver the pixels” as VMware’s product – PCoIP, a UDP-based protocol that both parties have licensed. Only Windows 7 desktops are supported in this preview release, running on Windows Server 2008 R2 under the covers. Other OSs are clearly in the pipeline, as is support for browser-based clients. Provisioning time is currently 15-20 minutes, but they plan to get that down to 5 minutes when it comes out of preview. Active Directory integration is supported.

The day wrapped up with the re:Play party, sponsored by Intel. I had a chat with the Intel guy to understand their motivation for such strong sponsorship of an AWS event. Obviously AWS buy a shedload of Intel chips, but it’s more than that – I just think they know they need to be on board with the way the wind is blowing, simple as that.

The party was good I must admit, although it’s never a good sign at a party when there is a queue for the men’s toilets but not the women’s :).  There was lots of entertainment laid on – 1980s video games, Jenga, helicopter flying, laser-dodging etc – and then a set from deadmau5, who I must confess I was not massively familiar with.  Good to feel your jeans shake with the sub-bass though…


I was off playing Centipede (badly) after 30 minutes of this.  On the way back from the party I spoke to a Canadian (Dov Amihod, CTO of Lagoa – a really interesting start-up doing high-quality photo-real 3D image rendering for use in retailer catalogues etc, as it’s cheaper than getting a photo-shoot done). Apparently deadmau5 is really big there, but he was a bit suspicious as to why he’d want to play in front of 8,000 geeks, for Intel :)

Experience deadmau5 by watching my short clip on YouTube here.


Yesterday at AWS re:Invent, Andy Jassy delivered the main keynote. As you can see from the photo below, the event was immense – the day before I was in the APN Summit so it was AWS partners only, and that felt big.

But this was 9,000 attendees from 57 countries in a room. The photo doesn’t really capture the epic scale – which struck me as a metaphor for AWS itself: the scale of the administrative operation was off the chart, it was all very efficiently managed, and it gets bigger every year!

I thought it was interesting that they didn’t even “save up” the recent announcement about the 10% price reduction for M3 EC2 instances that was announced on 5th November for re:Invent. To me, this just shows how baked into the business model these regular price reductions have become.

In content terms, the three main new announcements were:

  • Amazon CloudTrail – the ability to log all AWS API calls to S3 for audit and compliance purposes. This is a nice feature that we’ve asked for before, though its absence hasn’t actually been much of a barrier to customer adoption, probably because we are typically managing the entire AWS layer for a customer anyway.
  • Amazon WorkSpaces – virtual desktops-as-a-service. Interestingly, desktop “state” is maintained as you move between access devices, e.g. from laptop to tablet. We’ve deployed virtual desktops in AWS for a number of customer projects – either desktops for key users in a Disaster Recovery scenario, or for developers located around the world who need a consistent desktop with known applications installed in order to access AWS-hosted dev and test environments. So I can see us using this new service in future projects, as I suspect the cost model will stack up well against the installation/build/ongoing patching effort of putting in a bunch of Windows Remote Desktop servers.
  • Amazon AppStream – HD quality video generation and streaming across multiple device types. This is related to another announcement made on 5th Nov – the new g2.2xlarge instance type, which has the GPU grunt to enable the creation of 3D applications that run in the cloud and deliver high-performance 3D graphics to mobile devices, TVs etc.
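To give a feel for what CloudTrail actually delivers to S3, here’s a sketch that parses an abbreviated, made-up record. Real CloudTrail events carry many more fields, but the overall shape – a “Records” array of API-call events – is as shown:

```python
import json

# An abbreviated, invented CloudTrail log file (real events have more fields).
sample = json.dumps({
    "Records": [{
        "eventVersion": "1.0",
        "eventTime": "2013-11-15T17:12:30Z",
        "eventSource": "ec2.amazonaws.com",
        "eventName": "RunInstances",
        "awsRegion": "us-east-1",
        "sourceIPAddress": "203.0.113.10",
        "userIdentity": {"type": "IAMUser", "userName": "alice"},
    }]
})

def summarise(trail_json):
    """Reduce a CloudTrail log file to (when, who, what, from-where) rows."""
    records = json.loads(trail_json)["Records"]
    return [
        (r["eventTime"], r["userIdentity"].get("userName"),
         r["eventName"], r["sourceIPAddress"])
        for r in records
    ]
```

An audit job could run something like this over each object CloudTrail drops into the bucket and feed the rows into whatever compliance reporting is required.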

Weirdly, being at the event you get less time to look into these new product announcements, so you probably end up with less detail than if you were just reading about them on the web – after the keynote it was straight into a bunch of technical sessions.

I mainly focused on the data analytics sessions. First off, I got to hear about what NASA have been doing with data visualisation – I think all attendees expected to hear about exciting interstellar data visualisations, but it was actually about much more mundane visualisations of skills management, recruitment trends etc – and this in fact made it much more applicable to the audience’s typical use cases as well. There were some great takeaways about how to maximise your chance of success which I need to write up at some point…

I then attended an excellent deep dive on Amazon Elastic MapReduce (EMR) – this covered Hadoop tuning and optimisation, architecture choices and how they impact costs, dynamically scaling clusters, when to use S3 and when to use HDFS for storage, which instance sizes to use, and how to design the cluster size for a specific workload.

This was followed by some customer technical overviews of their use of Redshift. They had all migrated to Redshift from either a SQL or NoSQL architecture. One customer, for example, had deployed two Redshift clusters in order to isolate read from write workloads, but I felt they had been forced to put considerable effort into building a proxy in front of Redshift to optimise performance – fundamentally because Redshift is limited to 15 concurrent queries and, for their reporting workload, they are not in control of the peaks in their users’ demand for reports. So they’ve implemented their own query queuing and throttling mechanism, which sounds like a whole heap of tricky, non-differentiating technical work to me. A key takeaway from this session, though, was that the price-performance characteristic of Redshift had really worked for these customers, and given them the ability to scale at a cost that they just could not before. They were all achieving very high data ingress rates by batching up their data inserts and loading directly from S3.
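The queuing/throttling mechanism they described presumably boils down to something like the following sketch – a counting semaphore that caps in-flight queries at the cluster’s concurrency limit. The details of their proxy aren’t public, so the class and method names here are mine, purely illustrative:

```python
import threading

class QueryThrottle:
    """Cap the number of in-flight queries, blocking further callers until
    a slot frees up (Redshift allowed 15 concurrent queries at the time)."""

    def __init__(self, limit=15):
        self._slots = threading.BoundedSemaphore(limit)

    def run(self, query_fn, *args, **kwargs):
        # Blocks here if `limit` queries are already executing.
        with self._slots:
            return query_fn(*args, **kwargs)
```

A real proxy would submit `query_fn` calls from a thread pool; the semaphore guarantees that no more than `limit` ever hit the cluster at once, with the rest queuing transparently.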

The final session I attended was about a Mechanical Turk use case from InfoScout. Mechanical Turk is an intriguing service as it’s so different to the other AWS offerings – in fact it’s not really a service at all, although it exposes a bunch of APIs – it’s a marketplace. Classic Mechanical Turk use cases include translation, transcription, sentiment analysis, search engine algorithm validation etc, but InfoScout’s need was for data cleaning and capture following an automated but fallible OCR process – capturing the data from pictures of shopping receipts taken on smartphones. The main takeaway for me was how they manage quality control – i.e. how do you know, and therefore tune and optimise, the quality of the results you get from the workers executing your HITs? InfoScout use two quality-control strategies:

  • Known answers – in a batch of receipt images handled by a Mechanical Turk worker, they inject a “known” receipt and compare the data captured with the known data on that receipt. This technique is good for clear yes/no quality checks, e.g. is this receipt from Walmart? It allows them to compute a metric for each worker indicating how likely it is that their other receipts have been accurately processed.
  • Plurality – send the same unprocessed receipt to more than one worker and see how consistent the returned results are. InfoScout build a confidence score based on this consistency and other factors such as worker tenure.
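Both quality-control strategies are easy to sketch in code. The function and field names below are my own, not InfoScout’s – a minimal illustration of the two metrics described above:

```python
from collections import Counter

def plurality_answer(answers):
    """Majority vote across workers' answers for one receipt field;
    returns the winning answer and the agreement ratio."""
    counts = Counter(answers)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(answers)

def worker_accuracy(responses, known):
    """Fraction of seeded 'known answer' receipts this worker got right.
    `responses` maps receipt id -> worker's answer; `known` maps the
    seeded receipt ids -> the correct answer."""
    checked = [(rid, ans) for rid, ans in responses.items() if rid in known]
    if not checked:
        return None  # worker has seen no seeded receipts yet
    return sum(1 for rid, ans in checked if known[rid] == ans) / len(checked)
```

In practice the two signals would be combined: a worker’s known-answer accuracy weights how much their vote counts in the plurality consensus.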

The final event of the day was the re:invent pub crawl around 16 of the coolest bars in The Venetian and The Palazzo hotels. I’m guessing I don’t need to tell you much about that event, other than it started with sangria… :)


Tough, but someone has to do it…


Vegas baby!

I’ve survived my first full day in Vegas at AWS re:Invent, the annual Amazon Web Services shindig, although I must admit to being jet-lagged to hell. Handily nothing ever shuts down here so waking up at 2am is not a problem :)

The first day was dedicated to the AWS Partner Network (APN) Summit, and the #1 highlight had to be the announcement that Smart421 have been awarded Premier Consulting Partner status – one of only 3 partners in the EMEA region to be recognised in this way. This is the highest level that there is globally, and it makes me really proud of what our internal AWS Practice have achieved over our four year journey with AWS – this is not something that AWS give to any old partner! It’s recognition of the great customer case studies and deployments that we’ve jointly undertaken with AWS, and the investment in deep skills that we’ve made.

The sheer scale of re:Invent is pretty staggering. The venue (The Venetian) is enormous, the rooms are massive, and there’s a huge number of attendees with a very wide variety of interests – enterprise, gaming, HPC, start-ups etc. As I was at the APN Summit all day, which was on its own floor, this didn’t really hit me until I went to the Expo part of the event at the end of the day – where 180+ different vendors were touting their wares. It was a struggle even to walk through the room, it was so busy – although copious amounts of food and alcohol probably helped drive attendance :).

Here’s a couple of other takeaways from the APN Summit yesterday:

  • AWS have just updated the quote they use to demonstrate the rate at which they are scaling their infrastructure. Anyone familiar with AWS will probably have heard that one way they quantify their rate of infrastructure growth is by comparing it with the number of servers etc needed to run the Amazon.com retail business at some point back in the past. Well, AWS has carried on growing, and so this comparison metric has had to be updated: they are now adding enough server capacity every day to power Amazon.com when it was a $7bn business – which is quite an incredible statement really. Cloud computing is indeed a scale game…
  • One of the big push areas from AWS is driving use of AWS to host specific packages such as Microsoft Exchange, various Oracle business products (i.e. not just technology components such as database, middleware etc), SAP, Microsoft SharePoint etc. Hence yesterday they announced new partner competencies for some of these products. Personally I don’t quite get this. In my view the cloud model is not so compelling for these kinds of IT workloads: they tend to be very “steady state” in nature rather than peaky, and if they are somewhat peaky then you’ve usually got to have a resilient pair running all the time anyway, so the options for scaling down are limited. There’s a myriad of companies out there offering very price-competitive hosted Exchange and SharePoint models (like our sister company, in fact), and they can exploit multi-tenancy across customers to drive a really low price point. Office 365 (which wraps in Exchange and SharePoint with other stuff) is also the direction of travel for many enterprise customers. Having said all that, AWS are obviously seeing traction for these more enterprise-level software deployments, otherwise they wouldn’t be aligning their partner model to it – they are clearly not dummies, given that they are giving the rest of the IaaS/PaaS market a bit of a hiding.

Today we have the opening keynote from Andy Jassy, and then we get into the more nitty-gritty technical sessions…


2013 IP Expo, London

With our vantage point and close proximity to the Amazon Web Services Lab at IP Expo, we were eyewitnesses to the ongoing popularity of AWS. You would never have known it was Amazon’s first time at IP Expo; their AWS Lab sessions were packed out all day long. We didn’t see that sort of traffic anywhere else in the hall.

This sent a strong signal that only underlines the leader position that Amazon has fought for, and won, in the UK.  They are in front by a country mile.

Which suited us down to the ground, of course.

We were beneficiaries of all that goodwill and the warm buzz it generated. After each Lab session, attendees gravitated to the Amazon Partner Village to ask questions and to see for themselves what Amazon Partners are up to.

Enterprise Cloud. Ask us how.

Anticipating high levels of interest, we staffed our stand with senior Cloud Architects as well as with Account Managers, all of whom were able to take questions and talk about how Smart421 is helping large UK companies on their separate cloud adoption journeys.

As evidence, we showcased a set of case studies covering a selection of customer engagements: Big Data analytics on the AWS Cloud for Aviva, Services Transition to the AWS Cloud for ATOC Rail Settlement Plan and National Rail Enquiries, and Disaster Recovery on the AWS Cloud for Haven Power – part of Drax.

These real war stories certainly caught people’s attention.   It seems that Smart421 is still one of the only IT services firms able to put up so much in the way of proof that we’ve done it for real.

As well as those case studies, my colleagues tell me that our AWS Competencies logos had a remarkable “magnetic effect”, drawing visitors to the stand.


Over-the-shoulder: an AWS Lab session

But we didn’t just absorb all the goodness from the AWS Labs.  We went out and hunted down prospects for ourselves.  Glad we did.  Evaluating the results today tells us that it really was worth all the hard work. We met some great people.

IP Expo was for us a trade show with a difference. As well as meeting people we already know (good), it has put us in touch with a whole new audience. OK, so we didn’t get to speak with everyone (with 12,500 attendees it would have taken some doing), but the 200 or so we did speak with were engaged enough to want to know more about us and what we are doing with AWS. Therefore, mission accomplished.

We were pleased that our colleagues in Kcom and Eclipse (other brands in the KCOM Group plc) were exhibiting. By all accounts they had a great show too.

Well, I’m off to give our stand team a well-deserved pat on the back.

We’ve got a few emails to send and some calls to make…


Positively Noctilucent:  not all clouds were created equal

With just 3 weeks to go until its Enterprise Summit in London (17 Sept), Amazon Web Services (AWS) is on a bit of a roll (hashtag #AWSsummit).

Research reports by analysts at Forrester and at Gartner have once again demonstrated AWS’s leadership position in a significant way. For those still looking for evidence that Cloud Computing is a robust alternative to on-premise compute and storage, these reports are as close as you are likely to get to an objective viewpoint.

In June, Forrester Research published its ‘Wave’ report on Enterprise Public Cloud Platforms, Q2 (get the reprint here or via short URL). In it, the authors Rymer and Staten took all the mainstream IaaS / PaaS / SaaS vendors (with a few absences, e.g. Savvis) and analysed them from a number of viewpoints: rapid developers, coders, DevOps pros, and application development & delivery (AD&D) pros.

Incidentally, they also registered a blurring of the lines between the widely accepted stratified offerings: software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS). But I digress. You should read the report.

“Amazon Web Services (AWS) came out the strongest of all vendors across three of our four developer segments.”

In August, Forrester’s arch-rivals Gartner published their Magic Quadrant report on Cloud IaaS (get the reprint here or via short URL). The analyst team of Leong, Toombs, Gill, Petri and Haynes put AWS way out in front; other mainstream vendors (including Savvis) looked like a tightly bunched peloton on the now-famous 2 x 2 matrix.

Research clients of Forrester and Gartner place a lot of weight on these types of reports; substantial IT deals are decided on the strength of vendors’ positioning and analyst opinions. Of course, analysts don’t always get it right, but when it comes to Cloud they have for the most part been prescient.

Regular readers of the Smart421 blog will know only too well that we have been uncharacteristically vocal in our support for the AWS Cloud technology stack. This is only because a number of UK enterprises have already engaged Smart421 to architect, deliver and support solutions on the AWS Cloud.

If you’ve registered to attend on 17 September, you’re in for a treat with a great agenda. If you’d like to have a more pointed conversation with our Cloud Architects or Senior Management team, please feel free to visit our special microsite to book an on-site meeting.


Infrastructure Architecture is dead, long live Infrastructure Architecture.

Cloud infrastructure has changed the role of the infrastructure architect into that of a virtual architect. The tin-and-wires approach, the HA and DR, the test and release, the follow-the-sun support – all have been replaced by cloud infrastructure. Careful sizing, location, rack allocation etc. are unnecessary for most applications.

The desktop environment is changing radically, making desktop management, massive rollout projects and investment decisions obsolete. The use of virtual terminal software returns the desktop to the server. BYOD removes both the need and the ability to dictate the desktop and mobile landscape. Network architecture is increasingly virtualised: within the virtual datacentre, between datacentres, and out to client devices.

It is hard to imagine that bastion of physicality, the Infrastructure Architect, dealing with purely virtual servers communicating on a virtual network, all assembled from their own virtual device. There is a serious point to this: it now falls to the Application Architect to design in a way that enables the full power of the cloud to be used.

Not that it is anything new, just more pressure on the Application Architect.




After the polished video intro (screen shot below, and catch the full keynote here), Brad Anderson (Corporate VP at Microsoft) kicked off TechEd Europe here in Madrid by highlighting the opportunities available to us in IT at the moment. He talked about innovation, helping customers differentiate themselves, and how Microsoft aim to make others great – and whilst I definitely agree with him, it was the resonance with so much of what we do at Smart that I really liked.


Four areas of innovation were discussed: People-Centric IT, Building better Apps for people, Data, and the Cloud Platform. Aligned to these were new product release announcements including SQL Server 2014 CTP, VS 2013 and TFS 2013 CTP, Windows Server 2012 R2 and System Center 2012 R2. Better still, a partnership with Oracle was announced, which means Oracle is fully supported on Azure and Hyper-V (and, from what I gathered, Oracle images are ready to spin up in Azure).

One of the best parts (and I tweeted at the time – @wanty1975 btw) was a video of one of our customers (Aviva) referenced as an example of mobile and Azure development combining to provide an innovative solution for their customers. Why was this so good? Partly because my old boss was in the video… but more because seeing real-world examples, and so close to home, helps inspire me and makes me realise the opportunities we have available to us.

TechEd Pres

So here’s a brief list of new features mentioned in the keynote that you should be aware of:

  • BYOD – two-factor authentication making it easy to add devices (an MS Surface in the demo) to AD domains using Windows 8.1 (not sure what the experience would be like for other devices though). The simple process means files are sync’d, permissions granted to corporate data, and Line of Business (LoB) applications downloaded onto the device, accessible via a company portal-like homepage.
  • VS and TFS 2013 – ALM benefits throughout the development lifecycle, but I really liked InRelease (following a recent acquisition by Microsoft), which provides workflow and approval mechanisms for moving builds through environments. I could see this really helping to take the pain away from tracking builds through environments and ensuring everyone knows what versions are where. From a dev perspective I also thought the new Code Lens and Code Map features looked really helpful. These come under the banner of Situational Awareness and are designed to reduce the time and effort needed by developers when context switching. Code Lens integrates with TFS 2013 to provide details within the code base, such as tests passing/failing, who last checked a file in, and any recent check-ins. It helps developers understand who has done what, where and why, much more simply than the previous version/compare routine that we’ve all experienced. Code Map provides a diagrammatic view of code (similar to architecture modelling tools), but this can be viewed at runtime in debug mode, like a call stack, helping developers familiarise themselves with code much more quickly and easily.
  • SQL 2014 – some huge performance gains achieved through in-memory OLTP and row-based rather than page-based access, reducing locking issues and increasing performance. SQL will suggest which tables are ‘hot’ and recommend moving them into memory (I’m assuming they’re still persisted to disk fairly frequently!). I also picked up on the ability to replicate an existing on-prem DB to Azure, to provide HA and DR, via a right-click menu option… I didn’t see or hear any more evidence of this, but it sounds like a great feature. Also HDInsight and Hadoop linkage to Excel, to provide visualisations of and insight into Big Data.
  • Windows Server and System Center 2012 R2 – providing the tools to make Hybrid cloud simple and easy, with tooling to help sync data, create replicas for DR purposes and significant performance benefits for virtualised machines.

So there are obviously things I’ve missed and lots more to talk about, but hopefully this represents a fair summary. I’ll also be tweeting any further nuggets throughout the week.

It was great to see National Rail Enquiries (NRE) win an award at the European Outsourcing Association Awards in Amsterdam last Friday (26 April).

In recognition of their SIAM (Service Integration and Management) outsourcing strategy, NRE won the award for Best Multi-sourcing Project of the Year, beating strong category finalists 60k and Centrica (Centrica won this category in 2012).

Smart421 is pleased to be a large part of that initiative, performing the Managed Services element on top of an AWS Cloud platform for several key NRE applications.

As customers struggle with the chains of traditional SI relationships, Smart421 is providing agile delivery and innovation methods in the IaaS world.

Many analysts see this as “third generation outsourcing” and a change for good – and so do I.



