I attended the Big Data Analytics 2012 event at the Victoria Park Plaza, London (organised by Whitehall Media) yesterday along with our CTO, Robin Meehan. We wanted to keep in touch with what some of the big players are saying about “Big Data” and their views on analysis, with Cloudera, Oracle, SAP, Informatica, SAS UK, SGI, MapR Technologies, GigaSpaces, MicroStrategy, Pentaho and others all there (and furnished with masses of pens, notepads, pen drives, stands and the like).

The event itself was good; the usual mix of CIOs, CTOs and techies in attendance. A number of the keynote speakers (in and amongst the sales pitches) had some interesting stories and facts, such as John O’Donovan, Director of Technical Architecture & Development at the Press Association, talking about how they analysed the “masses of data” captured during the London 2012 Olympics to deliver “Content as a Service” to consumers around the globe (including the translation of the content en route), supporting up to 50K TPS. This was followed by a great clip explaining the maths behind Roberto Carlos’s improbable goal – it’s worth a look – click here to watch on YouTube.

Where do I start?

Bob Jones, Head of the CERN Openlab project, gave us an insight into some of the “Big Data” challenges they are facing at CERN – and with the generation of 1 petabyte of data per second it is clear to see why! Even after throwing away most of that data they still permanently store between 4 and 6 GB per second, and they are on track to record 30 PB of data for 2012 – and they aren’t even running at full steam yet!
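To put the scale of that filtering in perspective, here’s a quick back-of-the-envelope sketch in Python using the approximate figures quoted above (1 PB/s generated, 4–6 GB/s stored – I’ve assumed a 5 GB/s midpoint for illustration):

```python
# Back-of-the-envelope figures as quoted in the talk (approximate)
GENERATED_PER_SEC = 1e15  # ~1 petabyte of raw detector data per second
STORED_PER_SEC = 5e9      # ~4-6 GB/s kept permanently; midpoint assumed

kept_fraction = STORED_PER_SEC / GENERATED_PER_SEC
discard_ratio = GENERATED_PER_SEC / STORED_PER_SEC

print(f"Fraction of data kept: {kept_fraction:.4%}")
print(f"i.e. roughly 1 byte stored per {discard_ratio:,.0f} bytes generated")
```

In other words, only around 0.0005% of the raw data survives – everything else is filtered out before it ever hits permanent storage.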

Many other companies spoke at the event but the one that resonated with me most was the one by David Stephenson, Head of Business Analytics at eBay. It wasn’t the impressive stats such as 70-80 billion database calls per day, 50+ TB of new data stored per day or the 100+ PB of data processed daily.  It was what he called “The prize”:

“using behavioural data to understand our customers’ intent, preferences and decision-making processes”

The reason this resonated so much with me is that this is exactly “the prize” I have been working towards with one of our (Smart421’s) customers – tapping into the rich vein of information available and using it to ensure that they engage with the customer in a more relevant and timely manner.

It really does come down to the four V’s – Doug Laney’s “3Vs” construct of Big Data (Volume, Velocity and Variety) mentioned in a previous blog here, plus one further crucial point: Value.


And actually the one we all really want to focus on is the fourth: VALUE! Otherwise, why are you doing the first three anyway – right?!

It is the VALUE of the data that we seek: seeing what else we have available to us to allow us to progress, build better applications, and communicate more effectively and more relevantly. Whatever your business is, it is the VALUE that you derive from your data that really counts.

One of the points that David Stephenson made was that 85% of the eBay analytical workload is new or unknown – you don’t need to know all the questions you need answers to when you start a “Big Data” programme. Just look at what you already have, see how it can be supplemented and what is relevant to your market or other areas of the business, and take it from there!

You’ll be amazed at what you find and the impact it can have on your business! It is not all about unstructured data, or installing and using Hadoop; it is about using your data, which will most likely fall into all three of the structured, semi-structured and unstructured camps, and no one tool is going to give you a solution. It is about realising that there is an untapped resource that can give you so much – so remember, it’s all about the four V’s (well… really… it is the last V we are all wanting to get from our “Big Data”).

Nearly a month behind original indications from the AWS Oracle guys but Oracle RDS is here (released 23rd May).


Below are some quick calculations for a month’s continuous usage (excluding storage) in EU (Ireland) for the On Demand model. You will see the cost doubles each time you (in theory) double the power :-)

On Demand (License Included):
  High-Mem Extra Large DB Instance – $699.36 per month
  High-Mem Double Extra Large DB Instance – $1,398.72 per month
  High-Mem Quadruple Extra Large DB Instance – $2,797.44 per month

On Demand (BYOL):
  High-Mem Extra Large DB Instance – $550.56 per month
  High-Mem Double Extra Large DB Instance – $1,101.12 per month
  High-Mem Quadruple Extra Large DB Instance – $2,202.24 per month

The Reserved model still remains significantly cheaper over a one-year period – $305.05 per month as opposed to $699.36, but with an upfront year-1 cost of $1,850 – so year 1 costs $5,510.48 for Reserved as opposed to $8,392.32 for On Demand for the example of the High-Mem Extra Large instance.
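For anyone wanting to reproduce these numbers, here’s a minimal sketch in Python. The hourly rates are the ones implied by the monthly figures above assuming a 744-hour (31-day) month – they are assumptions back-derived from this post, not current AWS list prices:

```python
HOURS_PER_MONTH = 31 * 24  # 744 - the month length implied by the figures above

# Hourly rates implied by the monthly figures (assumed, not AWS list prices)
ON_DEMAND_LICENSE_INCLUDED = 0.94  # High-Mem XL, License Included, $/hr
ON_DEMAND_BYOL = 0.74              # High-Mem XL, BYOL, $/hr
RESERVED_HOURLY = 0.41             # High-Mem XL, 1-yr Reserved, $/hr
RESERVED_UPFRONT = 1850.00         # 1-yr Reserved upfront fee

def monthly(hourly_rate):
    """Cost of running one instance continuously for a 744-hour month."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

# Year-1 comparison for the High-Mem Extra Large instance
year1_on_demand = round(monthly(ON_DEMAND_LICENSE_INCLUDED) * 12, 2)
year1_reserved = round(RESERVED_UPFRONT + RESERVED_HOURLY * HOURS_PER_MONTH * 12, 2)

print(monthly(ON_DEMAND_LICENSE_INCLUDED))  # 699.36
print(monthly(ON_DEMAND_BYOL))              # 550.56
print(year1_on_demand, year1_reserved)      # 8392.32 5510.48
```

Doubling the instance size simply doubles the hourly rate, which is why each step up the table exactly doubles the monthly cost.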

Some of our Customers have been keen to see progress on the AWS–Oracle roadmap, and whilst any progress is always welcome news, one of the key observations at first glance of the Oracle RDS service is that there is a Multi-AZ option but the only selection currently available is ‘No’ – this is not the case for the MySQL RDS service. This will be a limiting factor, particularly for our enterprise Customers, when considering a highly resilient/highly available database solution. Some digging is required into this, but it is likely to be a consequence of the underlying AWS architecture and the ‘new ways’ of thinking in terms of a ‘shared nothing’ architecture – this currently prevents multi-node Oracle RAC implementations on AWS.

Based on the recent AWS outage (see previous blog posting here) and the reinforced message to implement your solution across AZs to ensure high resilience/availability, this Oracle release may not be enough to impress enterprise-level Customers – it may be a case of continuing to ‘watch this space’ on the progress along the AWS–Oracle roadmap…

"Small opportunities are often the beginning of great enterprises" - Demosthenes

Did you know that of the original 500 companies in the S&P 500 (Standard & Poor’s) in 1957, only 74 remained in 1997 – a staggering 85% reduction!

Why is this?

The capital markets in the world today encourage rapid and vast growth leading to greater revenues and wealth, meaning people expect companies to adapt and perform to market conditions and are not too understanding when they under-perform over the long term. For those companies that are no longer in the S&P 500, one of the key factors was that they were unable to keep up with the changing demands of the markets and to be agile in entering new high-value markets. Whilst change can be slow, it is always powerful!

It is this ability to adapt and change that helps companies survive today, and a great way to help your business achieve this agility is by adopting a service-oriented architecture (SOA). It can really help a business create an IT infrastructure that will support a truly dynamic enterprise.

SOA is all about defining services that align with and support business processes, and exposing these services so they can be re-used easily. It also allows these services to change independently, so that only the aspects of a business process that need to change (as the business adapts) actually do, minimising the impact on the rest of the IT infrastructure and the business.

The loose coupling gained from adopting a SOA also provides abstraction from the underlying systems and applications. This allows greater flexibility in consuming and invoking the capability exposed through the SOA, whilst allowing the underlying systems and platforms to be switched as and when required.
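As a minimal sketch of that loose coupling (the service and system names here are hypothetical, purely for illustration): consumers are written once against a business-aligned contract, and the underlying provider can be swapped without touching them.

```python
from abc import ABC, abstractmethod

# The business-aligned service contract - consumers depend only on this
class CustomerLookupService(ABC):
    @abstractmethod
    def get_customer_name(self, customer_id: str) -> str: ...

# One provider: an adapter over a legacy back-end (stubbed here)
class LegacyCrmAdapter(CustomerLookupService):
    def get_customer_name(self, customer_id: str) -> str:
        return f"legacy-customer-{customer_id}"

# Another provider: a newer platform, swapped in without changing consumers
class CloudCrmAdapter(CustomerLookupService):
    def get_customer_name(self, customer_id: str) -> str:
        return f"cloud-customer-{customer_id}"

def greet(service: CustomerLookupService, customer_id: str) -> str:
    # The consumer is written once, against the contract only
    return f"Hello, {service.get_customer_name(customer_id)}"

print(greet(LegacyCrmAdapter(), "42"))  # Hello, legacy-customer-42
print(greet(CloudCrmAdapter(), "42"))   # Hello, cloud-customer-42
```

The point is that `greet` (the consuming process) never knows or cares which system sits behind the contract – that is the abstraction and switchability described above.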

When you consider all of these benefits you can see the agility this provides your business.

SOA is nothing new and has been around for a number of years now (in a number of different guises), but in our opinion it is now a widely accepted, mainstream approach within IT architecture. We are seeing this with a number of our clients, who are investing and driving forwards with SOA because they have realised (and are already reaping) the benefits it brings – and how that journey can be the beginning of a great and dynamic future.

It’s pretty much a universal principle in a large enterprise: If you want something new, first you try to reuse, then you try and buy something, and only as a last resort do you build something yourself. 

Sound enough, right? It ought to be cheaper to buy something from a specialist vendor than work it out and build it from scratch yourself. They get the economies of scale, you get a reduced price, everyone’s a winner. 

The only trouble is, sometimes buy before build sucks. 

The trouble is that when you buy, you’re usually making a large up-front financial commitment to something. Not only that, but often we buy before we’ve had a chance to really work out what it is we need. So we end up buying the uber product – something that delivers our every whim and desire.

Very often, when it comes down to it, we buy 100%, use 20%, and wind up bespoking the living daylights out of the rest. The Vincent van Gogh of our vision becomes more like an H. R. Giger Alien. It costs as much to customise as it would have cost to build, and far from being a virtual utopia it becomes the treacle holding us back.

So, how do you make sure this doesn’t happen to you? Simple: Only buy what you’re absolutely sure you need. 

How do you know what you really need? Simple: Build it and see how your users use it, rework, repeat. 

Sometimes life has its little ironies…


Now I’ve had time to reflect on the Gartner Enterprise Architecture Summit, London 17-18 May (twitter hashtag #gartnerea), let’s recap starting with the closing keynote.

 The “Berlin Wall” between IT and the business – no, scrub that, –  IT and “demand-side stakeholders” (Kyte, session P5 closing keynote) still looms very large, causing IT practitioners to be apologetic rather than evangelistic about their craft. If the wall can come down, then a “reunification” can take place, but that will depend on CIOs and others being able to articulate IT in business parlance. And that will necessitate IT learning to talk about VALUE.

Only stakeholders can determine what they value, claimed Burke (session G7).  In other words, the focus needs to be more on the value received.

But is this thing called ‘value’ really so intangible and unquantifiable?

According to Kyte in his animated and charismatic closing keynote, enterprise architects must start by analysing full-life Cost (estimating the real cost of the hairy ’n’ hungry ‘dog’, not just the cost of the cute ’n’ cuddly ‘puppy’); next they need to analyse Value (taking ‘utilisation’ as a proxy for value); lastly they need to indicate Risk (expressing this with simple red/amber/green indicators). Present this as a three-column report to a chief financial officer or anyone in the C-suite (Gartner-speak for the board of directors) and they’ll get it immediately – unlike all the other reports typical of the IT department.

Visualisations and models (Allega, session G13) such as Gartner’s new “Root Anchor” model, Business Capability model, Generic Federated model, TIME model, etc. are all subservient to a Value Proposition, answering “how will this be used to deliver a business result?” Perhaps the best way to communicate that is to use Gartner’s Business Value model, authored by Michael Smith (Lapkin, session G16), especially now that the focus for chief executives has changed in the last 12 months from ‘sheer survival’ to ‘return to growth’ (Lapkin, session G16).

I listened carefully to others I met complaining that the ‘framework bashing’ during many of the sessions delivered by Gartner analysts was over-stated and unnecessarily self-serving. Thankfully, Burke seemed to strike the right balance (Burke, session G7), advising delegates to be pragmatic: start with Future-State Architecture (FSA) before ever touching Current-State, and demote any priority currently attached to selecting or rigidly following a framework. “We’re ‘doing’ TOGAF” was not something the analysts wanted to hear.

To address Future-State, and because organisations are increasingly “hyperconnected”, the trick for enterprise architects is to start thinking about “the lines between the boxes” (Robertson, session G23) and begin to work more closely with Sourcing and Risk, not just Security, as it may mean architecting-in some cloud-based capability.

I debated some of the other conference themes in conversations afterwards with delegates. Among them was an enterprise architect called Carl Chilley, who also picked up on ‘Hybrid Thinking’, hailed as a new discipline for transformation, innovation and strategy (Gall, session P2 opening keynote). This is derived from ‘Design Thinking’, which is now at the heart of Gartner’s EA thinking and modelling. Go to Twitter.com and search on the hashtag #designthinking. Chilley said that Gartner is in the process of changing a lot of their EA materials to reflect the implications of this world view. He said there is a need for “computing where people matter” (a deliberate misquote of the E. F. Schumacher tag line to the book ‘Small is Beautiful: Economics Where People Matter’), where solutions need to be people-centric.

Chilley makes a valid point. It certainly echoes the idea that value can only be determined by the demand-side stakeholders (i.e. the people in the business who use the systems that have been architected for them).

If enterprise architecture is not about technology, it should be about the “design [of] systems for humans”. Inspiring stuff for some enterprise architects, who may want to take more of their cues from social science than from computer science. More Émile Durkheim, less Tim Berners-Lee? You decide.

It does make you think whether there is more to ‘Design Thinking’ than first meets the eye. During Gall’s opening keynote several books were put forward as suggested reading. These might interest enterprise architects:

As for the challenges posed by so-called ‘Wicked Problems’ – those problems that just cannot be fully factored and for which there are no apparent solutions – we pondered what an enterprise architect’s equivalent of an Icelandic volcanic ash cloud might look like… a cloud of another type, maybe.

As a systems integrator and consultancy, we at Smart421 frequently have to justify to potential clients why they should use us, and explain what value we provide to their organisation.

Thinking on this point, which is primarily an issue for sales and marketing types, you will realise that it applies at a personal level too – as an individual, during regular personnel reviews, or when job hunting, you need to explain the value that you bring to a company. I’m not going to discuss the approaches that bring the best results in these particular situations (hey, we are all in competition at some level or other, but if you want to make a pitch to join Smart421 you can certainly check our job openings and apply – a good pitch can get you into the team).

Back to the consultancy aspect, a prospective client has to feel comfortable that we will provide a level of service that they will be happy with, and that helps them in achieving something that they may not otherwise be able to do at that time. So what are clients buying? Two things, I would say. First is the approach that we bring in working to resolve their problems. Second is the experience and expertise that we hold as a company.

Smart421 has expertise across the whole software lifecycle, right from strategy, enterprise architecture and analysis through to application design, development, delivery and support. We also handle the migration and retirement of systems at the end of their useful life. Within that range, we have knowledge of numerous software products and platforms, alternative project management approaches, and quality-controlled processes for delivery and service management, as well as our own controls for staff and finances as required for any company.

If a single person held all of these skills, how valuable do you think they would be to your company?

An application from someone that could list and validate all of this knowledge on their C.V. would seem almost unbelievable. But that is what you get if you use our services – access to all of that knowledge and expertise, provided on either an individual or team basis.

Any commercial agreement will of course define the terms of each particular engagement, so you don’t get endless access to all of this just through using an individual consultant. That consultant does however have this backup to refer to, increasing their value substantially to the end client. For a team of consultants, that provides more of the same, through greater points of contact and an enhanced collective viewpoint. If you could create a small internal team with this large amount and range of knowledge, imagine the potential benefits to your business.

When project work is handled by Smart421, this same set of skills and knowledge will be used to assure reliable delivery, using best practice and with our commitment to providing the best solution to meet client needs.

Enough of the sales pitch. I just thought it worthwhile to put forward some viewpoints about the value of using Smart421, especially from the point of view of being part of the team that has to deliver on our promises.

So next time you are considering the use of third-party resources and question the value over that of internal resources, or plain hired-in contractors, this should provide some food for thought about what you are actually buying from such a consultancy.

