This is about a recent British Computer Society (BCS) event on Stakeholder Management.

The event was entitled Creating Value through effective stakeholder management and the presenter was Alison Charles.

Date / Venue: Thursday 20th Feb 2014, 6.30 start / BCS, Southampton Street, London.

Why go to BCS meetings?

My reasoning is to:

        • Learn some new stuff
        • Get a different slant on a topic
        • Remind myself of some of the basics
        • Be able to do a better job tomorrow than I did today
        • Network
        • Enjoy the free buffet!

Back to Thursday evening, here are a few snippets ….

Who needs a slide deck? Not our presenter! Two flip-charts, some marker pens, and the confidence and knowledge to communicate effectively.

Melding extensive experience as a Project Manager (PM) with more recent experience as a Neuro-Linguistic Programming (NLP) trainer, Alison at times got the whole audience on their feet, trying two-minute exercises in pairs, some of which made us a little uncomfortable.

The first flipchart (there were only 3 or 4 in total – KISS!) was:

        • Stakeholders – impact of change for each
        • Their present mind-set – support needed (= what can they do to help the project? But have a contingency, just in case they don’t)
        • Their concerns, issues, resistance – influence, strategy, responsibility; test out success areas

Risk – a PM may guess the present mind-sets; instead, draft and use questions to get answers.

Suggestion – identify problem areas by comparing your understanding with their understanding. It’s a lot about putting yourself in their position and viewing the situation.

And that leads us on to …


Rapport – a shared world view, finding common ground, getting along, a feeling between two people.

How do we communicate?

[Pie charts: face to face vs. on the phone. Source – Mehrabian Communication Model: words 16%, tonality 84%.]

So it’s not all about what you say after all!

Back to the “R” word: rapport is not a panacea, but it can get you to a place where you can talk about the issue :>)

The 4MAT system was mentioned, and it turns out that I am a bottom-right-hand-quadrant man – I like “What?”! So I took notes that enabled me to recall stuff for this blog!

Alison shared some personal experiences including how she converted the adversity of losing her luggage on an international flight into a successful outcome making use of some emergency overnight items provided by the airline (razor, T-shirt and a condom!) – only kidding – she only used one of the items listed!

I enjoyed, and got value from, “Creating Value through effective stakeholder management”. It ticked my “why go …” boxes from above and I will be attending more BCS events.

So why not have a look at what is on offer and give it a try?

How to join the BCS.

I am Gordon Elliott a business analyst/consultant with several decades of experience working in IT. I have worked for Smart421 (part of KCOM) for the last decade and have been a BCS member for 3 years.

Tonight’s British Computer Society (BCS) event in the Davidson Building opposite the Savoy in London was the first one I’ve attended for years. The subject was the Network Rail “ORBIS” programme which focuses on data asset management.

The attraction for me was three-fold: first, I’m developing a keen interest in the railways generally thanks to our work with ATOC RSP; second, Smart421’s focus on Big Data challenges; and third, the agenda mentioned something about specialist mobile apps, which is my main interest in leading our Enterprise Mobile capability at Smart421, backed with IBM Mobility solutions.

Secure end user enterprise mobile devices

The slick presentation from the impressive Davin Crowley-Sweet and his team brought out their approach to data as an asset, and highlighted the challenges of modernising an organisation that still collects much of its information, such as survey data, on paper. The slideshow was “Prezi”-style, zooming in and out like a game of Angry Birds, and contained a LOT of detailed analysis and roadmaps, which made me glad I sat in the front row to try and take it all in.

My rough notes are summarised here but I believe the interested reader can learn much more about ORBIS (which has been “reverse-acronymed” with the name “Offering Rail Better Information Services”) from the Network Rail website. The first point to note is that this is not another BigData experiment but an extremely far-reaching business transformation programme, focussing on business processes and aiming to be business-led. There are very serious business benefits already being gained and (according to the paper linked to above)

the £327 million total cost of ORBIS is expected to yield £270 million in asset benefits plus £100 million per year in maintenance improvements.

An iPhone for every maintenance engineer

Maintenance worker using new tablet technology

The company took the far-sighted step of giving every engineer their own ruggedized iPhone, along with the request to use the built-in features as best they could and suggest what could be useful. Not surprisingly, one of the most useful features was GPS, accurately pinpointing problems with rails. So when engineering work comes along on a Sunday, consigning us travellers to the replacement bus service, the disruption should be minimised, as engineers can be guided directly to the problem location. Taking photos of rail faults with the iPhone was another obvious use, and apps are now being developed (one per month on average) that can automate some of the data capture along with the photos. I asked the presenter, Edward, afterwards about the apps, and he said there are now developments across other platforms and operating systems to use other (perhaps cheaper, Android-based) handsets. Equipping around 12,000 engineers with a ~£500 iPhone is a significant cost, although the positive aspect of the workers being given desirable devices should not be overlooked.
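Purely as an illustration of that capture pattern (the class, fields and values below are all invented; the real apps are not public), a GPS-plus-photo fault report might be modelled as:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class FaultReport:
    """A hypothetical record a track engineer's handset app might capture."""
    latitude: float              # from the handset's GPS fix
    longitude: float
    description: str
    photo_paths: List[str] = field(default_factory=list)
    reported_at: str = ""        # filled in automatically below

    def __post_init__(self) -> None:
        if not self.reported_at:
            self.reported_at = datetime.now(timezone.utc).isoformat()

# An engineer logs a defect at the point the GPS places them:
report = FaultReport(51.5074, -0.1278, "Rail head crack, down line",
                     photo_paths=["IMG_0042.jpg"])
print(report.description, report.latitude, report.longitude)
```

The point is that coordinates, photos and a timestamp arrive already structured, rather than being keyed in later from paper.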

These unstructured data inputs were combined with significant mapping data, collected using helicopters and train-mounted cameras similar to “Google Streetview”. I think they mentioned collecting a few terabytes of data from each train journey recorded in this way. That is some Big Data problem: analysing it all, and then developing sophisticated decision-support rules around which stretches of railway need maintenance work most urgently, based on factors such as curvature of the line, weather conditions, etc.
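The prioritisation described could, in toy form, be a weighted scoring rule; the weights and field names below are invented for illustration and are not Network Rail’s actual model:

```python
def maintenance_priority(segment):
    """Score a track segment for maintenance urgency.

    A toy weighted sum: tighter curvature, worse weather exposure and a
    longer gap since the last inspection all raise the score. The weights
    are invented for illustration only.
    """
    return (
        3.0 * segment["curvature"]        # 0..1, 1 = tightest curve
        + 2.0 * segment["weather_risk"]   # 0..1, e.g. flood/frost exposure
        + 1.0 * segment["days_since_inspection"] / 365.0
    )

segments = [
    {"id": "A", "curvature": 0.9, "weather_risk": 0.2, "days_since_inspection": 30},
    {"id": "B", "curvature": 0.3, "weather_risk": 0.8, "days_since_inspection": 400},
]
# Rank the most urgent segment first:
ranked = sorted(segments, key=maintenance_priority, reverse=True)
print([s["id"] for s in ranked])  # → ['B', 'A']
```

A real system would fit such weights from failure data rather than guess them, but the shape of the rule is the same.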

Some 20,000 miles of track, 40,000 bridges and tunnels, and huge electrical, telecoms and physical networks make for a highly complex set of problems to manage. The seven-year mission focuses on 5 asset types: fixed assets, fleet assets, topology, topography, and unstructured data (schematics, drawings, etc). It has several defined stages: stage 1 asked “what?” and “where?”, and stage 2 continues by joining up and optimising the collected asset data.

The Contribution of Data Analysis to saving lives

There is a strong emphasis on improving safety through improvements to data management; recommendations following rail disasters like the terrible Lambrigg crash in Cumbria, for which Network Rail are still apologising, said that the points failure could have been detected and fixed earlier with better data analysis. Literally a case of life and death software.

Davin answered questions after the presentation and made the very relevant observation that enterprises should manage their operational data in the same way as they manage any other (physical) asset: know what it is and where it is, monitor its quality, use it while it is relevant and when it reaches end of life get rid of it.

Attribution: Phillip Perry

Tonight I attended a talk by Sainsbury’s IT Director Rob Fraser (hosted by the BCS ELITE group) – who was voted Computing’s CIO of the Year 2011, no less! Whilst I do rub shoulders with the odd CIO, it’s usually on a very specific topic, so tonight was a great opportunity to hear some candid details of what it’s really been like driving through a three-year transformation programme at Sainsbury’s, what the gotchas were, what he’d do differently next time, etc.

So here’s some of his key pieces of advice and reasons for success:

  • They used an architecture-led approach. This was music to my ears to be honest – I tend to oscillate between “(enterprise) architecture is the answer” and despair at the state of the architecture profession in practice, so this gives me hope. When he landed they had an architecture team of about 5, and to drive the transformation programme they grew this to about 40 (with approx 500 staff in IT, excluding external delivery resources, that’s moving from a ratio of 1:100 to 1:12.5).
  • He hired some key staff for the IT transformation from the retail side of the business, not from IT (but supported by strong technologists). The credibility of a business representative massively helped drive change through with the rest of the business, and enabled more educated push back on the inevitable attempts at scope creep. In fact, the whole transformation strategy was very people-focused.
  • The transformation programme was tracked over a 3 year time period, and the original “plan on a page” was still in use as the benchmark to measure progress against every year (i.e. they did not suffer from a “stretching” programme), and the programme had a definite end. It started, ran, and finished – rather than the original objectives bleeding into other change activities and giving a fuzzy back-end to the work.
  • “Time to value” for any delivery was limited to 18 months maximum – as most organisations just don’t have the attention span to concentrate on anything with a longer payback.

At Smart421, we actively get involved in many communities, from the Websphere User Group to Vbug, the British Computer Society to the local chamber of commerce, but often the most rewarding community we are part of is our local community.

It is great for us to be able to get involved with our community and help others within it, such as when we donated a laptop to Private Liam King, who was injured during a tour in Helmand Province, Afghanistan, and was being treated in the Queen Elizabeth Hospital in Birmingham, many miles away from his family. The laptop enabled Liam to stay in touch with his friends and family back in Ipswich (please click here for more on this story).

Another great example of our involvement in the community was last Sunday (15th May), when Smart421 sponsored the Woodbridge 10K road race and entered a team of 20 members of staff (many of whom are pictured here). The team helped raise funds in memory of Helen Barrett, step-sister to one of our colleagues and a friend to many of us, who sadly lost her battle with cystic fibrosis last year.

The whole team put in a lot of blood, sweat and tears over hours of training, but we all enjoyed the day, with a few personal goals achieved and a great sense of the good we can do for our community: we have managed to raise over £800 so far (there is still time to donate).

It is this kind of involvement in the community and commitment, dedication and selflessness of the team that makes me proud to work for Smart421!


Woodbridge 10K – The Movie.  Released 25 May 2011.  Find it on YouTube click here

Soup I had on the way home – not warm enough but yum!

It’s been a while since I’ve managed to attend one of the BCS EA specialist group meetings, so it was good to make my way to Southampton Street for tonight’s meeting, which was a joint session with the BCS North London branch.

Both presentations were by EAs from Capgemini, and as always I got something out of both of them. Rob Rowe went first by giving a high-level view of the characteristics that an EA needs and what their role consists of (knitting fog, herding cats etc :)). As always, the audience questions provoked by the presentation were really interesting – the typical BCS audience contains a wide range of perspectives and disciplines. Much of the debate focused on the response…

…but if that’s what an Enterprise Architect does, I do that as well…

…where the claim was made by either a business analyst, solution architect, IT director, IT strategist or technical design authority. Bottom line – there’s clearly lots of overlap in these roles, and different organisations define these terms differently. It was also raised that the term “architect” shouldn’t really be used for legal reasons (but “architecture” was OK)…

Richard Noon then went through an interesting EA case study. The key theme from this presentation for me was about traceability – traceability from requirements (represented as use cases maybe, including NFRs, and a logical services catalog) through to the underpinning components and technical services that realise those requirements. It sounded like a great endeavour, and the architect designer in me liked the idea of being able to perform automated impact assessments of a change in one component by just querying an EA model. The idea of “one model to rule them all” is appealing to all modelers, but back in the real world I know that modeling the ar5e off everything rarely ends in a happy place…so I had to suspend my cynicism about how well any model would be maintained going forward etc. But I’m not knocking it – it was a compelling advert for the power of EA.
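The automated impact assessment he described, querying the model for everything affected by a change to one component, can be sketched as a reverse reachability search over a dependency graph (the element names below are hypothetical, not from the case study):

```python
from collections import deque

# A hypothetical fragment of an EA model: edges point from a dependent
# element (use case, logical service) to what realises it.
depends_on = {
    "UC-CheckStock": ["SVC-Inventory"],
    "UC-PlaceOrder": ["SVC-Orders", "SVC-Inventory"],
    "SVC-Orders":    ["CMP-OrderDB"],
    "SVC-Inventory": ["CMP-StockDB"],
}

def impacted_by(changed, model):
    """Return every element that transitively depends on `changed` --
    the kind of query a maintained EA model makes automatable."""
    # Invert the edges so we can walk from the changed element upwards.
    dependents = {}
    for src, targets in model.items():
        for t in targets:
            dependents.setdefault(t, []).append(src)
    impacted, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return impacted

print(sorted(impacted_by("CMP-StockDB", depends_on)))
# → ['SVC-Inventory', 'UC-CheckStock', 'UC-PlaceOrder']
```

Of course, the query is only as good as the model’s maintenance, which is exactly the cynicism noted above.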

For a winter’s night in London the turnout was really good, and I’m pleased that so many people gave up their evenings (especially the two presenters) – even if it means you do have to have soup at Liverpool St on the way home…

One of the great benefits of being a member of the British Computer Society is being able to attend the very good conferences, forums and presentations they host and run, one of which I attended recently – “Software as a Service – Is it the right time to move services into or from the cloud?”.

The event was run by the CMA, who are part of the BCS, at the BCS headquarters in London. It was very well attended, with the likes of IBM, Shell, Sun, Fujitsu, Logica, Barron McCann, Experian, BT, Siemens, Ordnance Survey and many others sending delegates, showing the rise in interest in Cloud Computing across all sectors. There were seven presentations in all, two of which were examples of companies that have embraced Cloud Computing, plus a presentation from Rahoul Bhansali of Hudson & Yorke, who has been heavily involved in the government’s G-Cloud initiative (something I am sure I / Smart421 will blog on later as the initiative matures). For this blog entry, however, I want to focus on the two presentations that caught my eye and overlay these with the two examples of companies embracing Cloud Computing.

The first presentation I want to focus on was from Nick Coleman of IBM (author of the Coleman Report), who spoke on “The future directions of Cloud Computing”. The presentation focussed on the message that “Cloud Computing is evolutionary, not revolutionary”, something I wholeheartedly agree with: managed services have been around for years, virtualisation is not new, and many companies have run or hosted applications on hardware hosted by a third party. Cloud Computing is about pulling these together, enabling different combinations of SaaS, PaaS and IaaS to be assembled to meet the specific and individual needs of a company. Cloud Computing is also transformative; it offers businesses, especially SMEs, a lot of benefits, such as reduced costs, agility in delivery, quicker time to market and easy-to-scale solutions. But there are still a number of considerations that need to be taken into account. Security remains the main concern with the Cloud: in 2009 just under 50% of vulnerabilities found in web apps were not patchable; with Cloud-based solutions you are not aware of what else is being hosted alongside your application; and where is your app being hosted? These questions are all answerable, but you will most likely have to dig to get the answers, and may even end up having to pay extra to resolve them. As Nick alluded to during his presentation, because Cloud Computing is transformative, businesses and people need to adapt and be willing to relinquish control over some key aspects that have traditionally been kept in house. It was these considerations that featured heavily in the presentations from the companies that had embraced Cloud Computing: Ordnance Survey and EasyNet Connect.

Richard Britton, until recently IT Director of EasyNet Connect, presented on how they migrated their complex estate of 50+ applications into a SalesForce Cloud-based solution. They also surveyed their customers and found:

  • 73% of their customers said they would be using Cloud based solutions within 5 years
  • 37% of their customers said they would be using Cloud based solutions within 1 year
  • 66% of their large SME (50+ staff) customers are already using Cloud based solutions or plan to do so within a year

EasyNet have definitely embraced Cloud based solutions and plan to press forward with this as they grow.

Tim Martin and Ian Hoult from Ordnance Survey presented on how the OS OpenSpace product had been migrated to the Cloud. The OpenSpace application is used by many companies, including many local councils that need the level of detail in the maps it provides. The application receives between 5,000 and 10,000 updates on a daily basis, which used to be shipped to all of the app’s consumers on a six-weekly cycle. This equated to about 60GB of information that had to be loaded onto multiple DVDs, so it was a very expensive process and a prime candidate for conversion to SaaS. In addition to the expense of shipping all of the updates, demand for information on local council web sites can fluctuate quite a bit; the example they used was the recent increase in volume (around 80% above normal use) during the cold weather, from people looking for salt bins. It would have been very expensive for them to build the extra capacity required to deal with such a surge in requests. They decided to push OpenSpace into the Cloud to address these issues, and can now serve 7 million tiles as opposed to 500,000, at a third of the delivery cost, with a scalable and economical solution and reduced support costs: they can now add an additional server to an environment within 5 minutes if the need arises. That said, it wasn’t all plain sailing. They had to switch their database, as the licensing of their existing DB was not “Cloud friendly”, and the data has been split so that all personal data is held locally within Ordnance Survey and only the public data is hosted in the Cloud, within the EU.

All in all they have seen this as a huge success and will be delivering two more of their products into the Cloud, but will spend more time looking into the contract and licensing aspects. That leads me nicely to the second presentation I wanted to focus on, delivered by Richard Kemp of Kemp Little, who specialise in IT litigation, on the differences between cloud services and outsourcing.

Richard, who has worked with Google and SalesForce on this subject, was a very captivating speaker with an interesting topic, one which I think is the most important for those looking at Cloud Computing. As I mentioned earlier, Cloud Computing is evolutionary, not revolutionary. We have all been involved in outsourcing in some way, shape or form in our careers, and we all know that one of the hardest aspects is the contracts that need to be drawn up to support it. The traditional approach is a transfer out of assets, in return for which you receive a service contract. Richard said that with the ever-increasing shift to Cloud-based solutions, things are not changing as much as people think; you still need to set the KPIs, SLAs and service description, but certain items are now becoming critical to these agreements, such as:

  • Performance – key service metrics need to be specified
  • Availability – with the services being accessed over the internet who bears the risk of an outage?
  • Data – data security is always a key item for any business, but when hosting data in the Cloud you need to be sure that the data standards applied by the Cloud provider are those you require; you should be able to access your data at any given point and, most importantly, be able to get it back at any given point in a usable format
  • Exit – an exit strategy is key – being able to regain control of the service in a quick and timely manner

As well as these key points, there are some differences in the approaches taken to draw up the agreements for either outsourcing or Cloud:

  • Outsourcing tends to be a custom/bespoke deal, whereas a Cloud-based deal is a one-to-many/generic deal
  • With outsourcing, the supplier is responsible for delivery; with Cloud-based solutions, the customer takes on the risk of their services being served over the internet and must accept the risks that come with that
  • Outsourcing generally works to a set up-front fee, whereas Cloud is generally pay-as-you-use

I think the key thing is to do all of the hard thinking up front when considering a move to a Cloud-based solution. Look at the key points above; remember the criticality of your data and the fact that the customer is responsible for it (make sure you specify where it needs to be hosted, what standards need to be adhered to, and that the data cannot be used without express consent); and make sure an exit plan is agreed up front, as this helps things nearer the end of any agreement and helps avoid litigation – it is always the lawyers who win!

All in all I thought the event was very good and gave some excellent insight as to what to bear in mind. In my honest opinion, now is the time to be moving items into the Cloud: SOA is fast becoming as much of an adopted standard as OO, and Cloud-based solutions can be truly cost-effective, flexible and scalable for any business!

Samuel Holcman

This evening I attended this BCS EA specialist group event at the IET building in London. The presentation was given by Samuel Holcman from EACOE, a US-based EA consultancy/training/certification organisation that is trying to break into the UK market. Consequently the presentation was a bit of a sales pitch and didn’t feel very “BCS-like”, but as always with these things, whilst most of it was the usual familiar EA messages, there were a few interesting little snippets that I picked out.

I must admit to being pretty restless during the first 40 minutes (!) when Sam went over the usual intro material – definition of EA etc. Everything he said was sensible. But how many times can you have the basic interrogatives of the Zachman framework explained to you before you start to glaze over? I appear to have reached my limit anyway! To be fair, there was some interesting historical new ground covered when he was describing the pre-Zachman seminal paper days in IBM during the tenure of Dewey Walker.

Then he launched into the “Maximising Business Sponsorship” material. I was expecting the discussion to be about how to get the business excited about the possibilities of EA, why it is important to them, and how to keep that excitement/engagement alive over an extended period of time, but Sam really focused on the initial EA engagement and how to maximise business sponsorship during those early days. In our experience this is not the problem – not saying that it’s easy, but keeping it going over the long term is tougher. We were discussing this afterwards and someone proposed an interesting theory: that “organisational memory” is about 3 years long (related to organisations’ regular personnel changes and re-orgs), so anything older than that is forgotten and tends to get performed again (such as EA initiatives). The people you dealt with 3 years ago have all been replaced and moved on, and they didn’t tell their replacements which cupboard the corporate data model was put in…

His key message was that business engagement/sponsorship requires 3 things:

  • A clear methodology and defined roles/responsibilities for the EA effort. In a discussion afterwards with my BCS colleagues the general conclusion was that business guys generally won’t want to know the method you are using (in detail at least), but will want to be reassured that you have one, i.e. you’re not making it up as you go along. The method outlined by Sam was TOGAF-esque in nature – as we know there’s only so many ways of skinning that particular cat.
  • “Human consumable” outputs. Sam outlined some sensible practices and rules of thumb here for “consumability”. However I was pretty amazed when he said that all the outputs they produce are either in Visio or Excel. I like the idea of outputs being in a business-friendly format, but maintaining them in Visio? If you needed just to rename something used in more than one diagram, then…er…oh dear. Please… :)
  • Traceability. Now, I initially thought he meant traceability from corporate goals->strategy->divisional goals->divisional strategy->projects, or something similar – but maybe I jumped to that conclusion as that is one of my areas of work at present. But he meant traceability of everything (corporate goals etc) to the actual source document section that it was harvested/discovered from. This was news to me and seems like a lot of work (and implies a kind of “plough through hundreds of documents to discover the enterprise architecture” approach), but I can see the benefits. It draws out and provides direct evidence for conflicts in the views between business stakeholders, and also demonstrates that nothing has been “made up” by the EA team.

All in all, I’m glad I attended. It’s always a bit of a hassle to go to an event like this after your day job but Sam provided good sensible reminders of the EA basics (e.g. eat your own dog food, start scope small and build on success, communications strategy is key, have a method, timebox for frequent deliveries etc) and also threw in some good provocations to keep me interested. Thanks!

Byron Cook

The other night I attended the BCS Roger Needham lecture, held at the Royal Society in London. This is an annual event that recognises outstanding research work in the IT world and provides a platform for the author to present. This year it was the turn of Dr Byron Cook from Microsoft Research, speaking on the subject of “Proving that programs eventually do something good”. Byron reminded me of a cross between Jarvis Cocker from Pulp and Smart421’s WebSphere practice manager, and there is a bit of the IT research pop star about him (if such a thing exists!).

My attendance was part of my strategy to follow Guy Kawasaki’s advice – “eat like a bird, poop like an elephant” – although I must admit that beforehand I was pretty sceptical of how useful I would find the event.

So I have to say straight off – the presentation was utterly, utterly fascinating. He was a great speaker – interesting, amusing, full of great anecdotes and just eccentric enough to be a convincing academic. To be honest, he is actually quite a practical academic, having produced real code solutions that solve real problems and are in use today. I found myself reeled in and attracted to the idea of the purity of academic research – at least until he got started on the maths and the set theory :). The basic message of the research papers he and his colleagues have worked on is that static analysis of code can prove that some programs will eventually terminate (i.e. not hang), and that some other programs will hang under certain conditions, but that this cannot be proven for all programs. The nature of some programs means that it just cannot be proven either way. Ever. Turing worked that out years ago. But his vision is that it will be possible to increase the proportion of programs in the “we definitely know if we have a problem or not” category to a useful level, giving you a kind of automatable 100% test coverage for some programs.
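For a flavour of the kind of argument such a prover looks for (my own toy example, not one from the lecture): termination is shown by exhibiting a ranking function, a value that strictly decreases on every loop iteration and is bounded below, so the loop cannot run forever.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm on non-negative integers.

    Termination argument: b is a ranking function -- each iteration
    replaces b with a % b, which is strictly smaller than the old b and
    never drops below 0, so the loop must eventually stop.
    """
    while b != 0:
        a, b = b, a % b   # new b = a % b < old b: the measure decreases
    return a

print(gcd(252, 105))  # → 21
```

Tools like the ones Byron described search for such ranking functions automatically; the undecidability result just means the search cannot succeed for every program.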

So in the future, as well as just using the static code analysis tools like CheckStyle, lint, FxCop etc that we use today, we could also perform static analysis that will prove (I say again – for some programs) that they will not hang. Impressively he has already demonstrated this by finding some bugs in Windows device driver code (the size of which was up to 30k lines) which had previously gone undiscovered despite being in use in the field. Of course there are many barriers to this becoming mainstream – two of which were that it sounded like you needed pretty huge processing power (so could only be done overnight typically and maybe using a high degree of parallel computing capacity, although fortunately the algorithm does fit parallelism quite nicely) and also that the range of ‘checkable’ programs and data structures is currently quite limited, e.g. today we can handle linked lists but not tree structures.

But Byron gave me hope that static code analysis is not a ‘dead’ area and that advances are being made that can lead us to produce better software in the future. If you want to read more, have a look on the Microsoft research site.

As threatened, I attended the BCS Enterprise Architecture specialist group meeting in London near Covent Garden yesterday. There were two interesting presentations regarding EA case studies, both of which caused lots of debate and questions from the gathered audience.

Amit Apte from SITA presented an anonymised case study from the airline industry. As is always a good idea, he started off with a provocation – that “enterprise architecture is boring” – and I think he was a little disappointed with the lack of reaction he got to that statement. I tend to agree to the extent that if it’s all done well, then it should be largely mechanical in nature and so rather dull. However in general this is not the case – in fact it is far too exciting! But the key thing that attracts me to EA is the ability to influence with a more significant level of impact, e.g. stopping the wrong change projects and starting the right ones, rather than operating at a solution architecture level and only influencing with a smaller scope. Maybe it’s a power thing, anyway…

He got a fair amount of stick from the audience about whether this was really an overview of an EA initiative or a one-off enterprise-wide solution architecture that had used a modified version of TOGAF v8 as a method, and the discussion centred around whether a sustainable EA function had been created or not.

Andrew Jolly from Deloitte presented an anonymised case study from the TMT (technology, media, and telecom) sector – concerning the creation of an EA governance capability in an organisation of circa 29k staff/partner staff. Some interesting things came out…

  • Both presentations mentioned something like “the business don’t need to know about EA”, i.e. a rather depressing but not unusual admission that selling the concept of EA to the business community is in the too-hard pile. Andrew added the sage comment that brand awareness for your EA initiative with the wider stakeholder community is key though, even if it sits behind a meaningless acronym. Call it A.A.R.D.V.A.R.K. (I’ll leave the expansion as an exercise for the reader) or whatever, but call it something, so they can hang a label on what you are doing for them.
  • The usual general advice applied – start small to demonstrate value, winning over hearts and minds by demonstration of real value rather than selling potential future value. The good old “virtuous circle” that Smart421 (and especially Richard Latham) have been banging the drum about. I was going to ask about metrics etc but someone else got in before me – and got the answer I expected which was they set up some KPIs at the start, but eventually the qualitative measures took over, i.e. did the projects the EA function had cherry-picked to engage with ‘feel’ that they were adding sufficient value.
  • One selection criterion for where to start is a project where you already have influence (probably by chance, e.g. you know the project manager personally).
  • As a mechanism for starting to embed the idea of architects contributing to projects/programmes, Andrew's suggestion was that you could provide a 'free' architect, as projects will never turn away free resource. You need to ensure they are a good resource; then some benefit will naturally emerge, the project will see it, and so be more likely to ask for (and then pay for) architecture input next time round. Obviously this requires some seed funding, which can be non-trivial to find.
  • Even in an organisation that initially appears architecturally doomed, there are generally people inside it performing a pseudo-architecture role some of their time, even if they haven't got the title and the wondrous benefits package that goes with it. Otherwise, how has the enterprise made it this far? Get these people involved in some kind of virtual team, as they hold the keys to your initial EA artefacts.
  • Publish EA materials early – don't wait for perfection, as you'll never get there. Even if they're wrong, an early viewing so that they get ripped to bits (hopefully not too badly) and then improved is a good thing. Obviously they've got to be of a certain quality though. This point really reminded me of a practice recommended in a book I read by Guy Kawasaki about releasing new products to market early, which is common practice in the software industry. His quote: "Revolutionary products don't fail because they are shipped too early; they fail because they are not revised fast enough". Hence never buy a v1.0…
  • What was the biggest risk to the EA function that had now been established? Andrew’s view was that it was “taking our eye off the ball” and losing sight of the fact that the roadmap for the EA function itself must be maintained and pursued – just like the other roadmaps that the EA function might generate for business architecture etc.

Andrew’s parting message was an interesting one – that putting in place an EA capability is a business change project in itself and so should be treated as such, i.e. get the organisation’s “change” people involved to execute the business change.

It was good to put faces to the names of some more of the movers and shakers in the UK EA world – I can now pick Amit, Andrew, Tom Graves and Sally Bean out of a line-up if required…

Specialist group chairman Mike Buck mentioned that the next event, on 6th October, is a presentation by the grand-daddy of EA, a certain Mr John Zachman, so I expect it to be very well attended…


I’ll be attending the BCS EA specialist group meeting tomorrow (10th Sept 2009) at 18:00, 5 Southampton Street, London – at this meeting there will be a presentation of case studies from specialist group members focusing on the following areas:

  • Business value delivered by EA
  • Professional development for enterprise architects
  • Governance of enterprise architecture

I’m looking forward to meeting my EA peers, and I’m keen to hear different perspectives on questions such as these…

  • If the EA effort has been going for a while, are they entering the “trough of disillusionment” and how have they addressed this?
  • What was the trigger event that meant they got funding/energy for an EA initiative? New CEO? A big disaster that they were rebounding from?
  • Are they well engaged with project/programme teams?
  • Architecture and/or project governance in place? Working well?
  • Centralised or decentralised model?
  • What lessons have they learned?
  • Views on TOGAF and other frameworks?
  • Any relationship to business process improvement teams, six sigma initiatives etc?
  • Budget, team size etc
  • EA cycle times
  • Communication approach?
  • Metrics in place?

