There have been some high-profile instances recently of a Mobile App being “Retired”:

  • LinkedIn for iPad (they are retiring versions prior to 7)
  • Flappy Bird (yes, I know this link goes nowhere – it’s been withdrawn!)

Let’s take the second one first, as it has even made CNN and the pages of TIME as incredulous tech and gaming journalists speculate about the real reasons why anyone would voluntarily sacrifice over $50k a DAY in revenues by withdrawing an App from the Apple and Google stores. Maybe Dong Nguyen just made enough money, or maybe he really was getting fed up with something that a lot of App developers forget about – how to support your App in the ever-changing world of mobile. Or maybe he just wanted to create loads of publicity before cheap imitations like this took over.

In the former example of LinkedIn, like many of you probably, I’ve been getting emails for a few days now encouraging me to upgrade:

We wanted to follow up and remind you that we’ll no longer be supporting LinkedIn iPad app versions older than 7.0 starting February 18. This will help us focus on creating even better mobile products and experiences for you.

You currently have one of these older apps, but you can download the latest app anytime from the iTunes App Store. It’s a brand new app — we think you’ll like it! With the new app you can now search for jobs — plus like, share, and comment on what you’re reading.

Have questions? Visit our Help Center for more info.

Now, this is in spite of my having updated to version 7.1 of the App almost as soon as it came out as I regularly update my Apps. Why don’t they know that and stop spamming me? Oh, I forgot, that’s what LinkedIn does best…

“So what?”  you say…

Well, one common theme is that the “idea” to “retirement” lifecycle of mobile is fast – less than a few months in Flappy Bird’s (rather extreme) case – and it seems LinkedIn have put some thought and effort into trying to ensure customers do not continue using their unsupported App version. This is accepted and understood by consumers, who most likely downloaded the thing for free anyway, but what if you’re the CEO of a company that just invested a few hundred thousand in developing some internal Apps for your employees?

Most people accept that the mobile development landscape is complicated and not getting any easier, in spite of cross-platform tools and web development paradigms. So one of the pillars of your Mobile Enterprise is managing those Apps: supporting them, providing updates as operating systems update and, before long, retiring them completely. Have you thought this through before you launch your Apps on to your staff or customers?

We are seeing common trends, and one very obvious one is that developing successfully for mobile within the Enterprise needs Agile methods to deliver value. So in a mature organisation a good choice for extending development to cover inception and longer-term management could be an extended Agile delivery lifecycle such as that provided in Disciplined Agile Delivery (DAD). This extends your standard iterations to cover the initiation and support parts of the lifecycle.

extended-agile-mobile

The important points are not to stifle innovation, nor to slow down responsiveness to your users’ demands, but to make sure you don’t waste your innovators’ time supporting out-of-date code and that you notify your users to get new versions in an intelligent way. Notifying users seems such a simple and common practice that it’s amazing Windows Phone 8 doesn’t have common notification management yet, although it’s rumoured to be coming soon as the Action Centre.

Having only just bitten the bullet and dumped my Android phone for a shiny new Nokia Windows Phone 8 handset, I’m finding first hand a lot of these subtle differences in maturity between Android, iOS and Windows Phone, but Microsoft/Nokia are catching up fast and need to be part of your mobile-first strategy.

I had a demo of Lombardi’s business process management (BPM) tooling the other day – Lombardi TeamWorks.

The first thing to note is that the tooling is Eclipse-based and uses BPMN notation – in fact at first glance I thought it was IBM WebSphere Business Modeller, as I’m more familiar with that! It looks very similar, and all the concepts are equivalent. I guess BPM is BPM, so the “problem space” is what it is and the tools will all tend to look the same; the difference comes down to the quality of each vendor’s implementation. It certainly looked easier to use than WebSphere Integration Developer (WID) – the Java code was all hidden from the designer until you want to put custom controls in JSPs or invoke external services. Debugging a process flow fired up quickly, with no need to boot up a test instance of WAS. Also, there is no real mention of BPEL, unlike in WPS (WebSphere Process Server).

There is also a SaaS modelling environment for business users called Lombardi BluePrint – this uses a simpler subset of BPMN, and so is essentially a clever online Visio-style tool. You can pull models from BluePrint into the Eclipse dev tool and flesh out the process with more technical/implementation detail, as you might using WID for Modeller models.

There’s a business portal web app for business users to launch business processes, see what’s “in their inbox” etc – very similar to the ‘Business Space’ portal that WPS v6.2 now has, so I think IBM have caught up a bit here as the ‘human task management’ part of WPS has up until now been a weak area.

Inbuilt rules engine support is weak apparently (like in WPS IMHO, hence IBM bought iLog and got JRules, amongst other things), so you’d need to integrate with external “decision services” if a strong capability was needed.

The other vendor in this space that keeps coming up in our customer engagements is Pega (strong in the business rules area), and in general my view is that they are all pretty much capable of doing the job. One word of caution: as an almost “single product” vendor, Lombardi must be at risk of being acquired at some point, which could lead to consolidation into the acquirer’s existing product lines.

BCS SouthamptonStClock

As threatened, I attended the BCS Enterprise Architecture specialist group meeting in London near Covent Garden yesterday. There were two interesting presentations regarding EA case studies, both of which caused lots of debate and questions from the gathered audience.

Amit Apte from SITA presented an anonymised case study from the airline industry. As is always a good idea, he started off with a provocation – that “enterprise architecture is boring” – and I think he was a little disappointed with the lack of reaction he got to that statement. I tend to agree to the extent that if it’s all done well, then it should be largely mechanical in nature and so rather dull. However in general this is not the case – in fact it is far too exciting! But the key thing that attracts me to EA is the ability to influence with a more significant level of impact, e.g. stopping the wrong change projects and starting the right ones, rather than operating at a solution architecture level and only influencing with a smaller scope. Maybe it’s a power thing, anyway…

He got a fair amount of stick from the audience about whether this was really an overview of an EA initiative or a one-off enterprise-wide solution architecture that had used a modified version of TOGAF v8 as a method, and the discussion centred around whether a sustainable EA function had been created or not.

Andrew Jolly from Deloitte presented an anonymised case study from the TMT (technology, media, and telecom) sector – concerning the creation of an EA governance capability in an organisation of circa 29k staff/partner staff. Some interesting things came out…

  • Both presentations mentioned something like “the business don’t need to know about EA”, i.e. a rather depressing but not unusual admission that selling the concept of EA to the business community is in the “too hard” pile. Andrew added to this the sage comment that brand awareness for your EA initiative with the wider stakeholder community is key though, even if it is just a meaningful acronym. Call it A.A.R.D.V.A.R.K. (I’ll leave the expansion as an exercise for the reader) or whatever, but call it something so they can hang a label on what you are doing for them.
  • The usual general advice applied – start small, winning hearts and minds by demonstrating real value rather than selling potential future value. The good old “virtuous circle” that Smart421 (and especially Richard Latham) have been banging the drum about. I was going to ask about metrics etc but someone else got in before me – and got the answer I expected, which was that they set up some KPIs at the start, but eventually qualitative measures took over, i.e. did the projects the EA function had cherry-picked to engage with ‘feel’ that it was adding sufficient value.
  • One of your selection criteria for where to start is a project where you have influence (probably by chance, e.g. you know the project manager personally).
  • As a mechanism of starting to embed the idea of architects contributing to projects/programmes, Andrew’s suggestion was that you could provide a ‘free’ architect as projects will never turn away free resource. You need to ensure they are a good resource and then some benefit will naturally emerge and the project will see it, and so be more likely to ask for (and then pay for) architecture input next time round. Obviously this requires some seed funding which is non-trivial to find.
  • Even in an organisation that initially appears architecturally doomed, there are generally people inside who are performing a pseudo-architecture role some of their time, even if they haven’t got the title and wondrous benefits package that goes with it. Otherwise how has the enterprise made it this far? Get these guys involved in some kind of virtual team, as they hold the keys to your initial EA artefacts.
  • Publish EA materials early – don’t wait for perfection as you’ll never get there. Even if they’re wrong, an early viewing so that they get ripped to bits (hopefully not too badly) and then improved is a good thing. Obviously they’ve got to be of a certain quality though. This point really reminded me of a practice I read about in a book by Guy Kawasaki on releasing new products to market early, which is common practice in the software industry. His quote: “Revolutionary products don’t fail because they are shipped too early; they fail because they are not revised fast enough”. Hence never buy a v1.0…
  • What was the biggest risk to the EA function that had now been established? Andrew’s view was that it was “taking our eye off the ball” and losing sight of the fact that the roadmap for the EA function itself must be maintained and pursued – just like the other roadmaps that the EA function might generate for business architecture etc.

Andrew’s parting message was an interesting one – that putting in place an EA capability is a business change project in itself and so should be treated as such, i.e. get the organisation’s “change” people involved to execute the business change.

It was good to put some faces to the names of some more of the movers and shakers in the UK EA world – I can now pick Amit, Andrew, Tom Graves and Sally Bean out of a line-up if required…

Specialist group chairman Mike Buck mentioned that the next event is a presentation by the grand-daddy of EA, a certain Mr John Zachman on October the 6th, so I expect that event to be very well attended…

architecture Istanbul

Q. How have organisations approached setting up an Integration Competency Centre (ICC)/Centre of Excellence (CoE) and what are the challenges?

A. An ICC/Integration CoE (put your organisation’s favourite name here) includes the following functions, which sometimes organisations have in separate teams, and sometimes combined…

  • Standards creation and ownership
  • Integration project governance, including the admin aspects
  • Service definition and canonical data modelling, owning data representations ‘on the wire’ etc
  • Doing real work for real projects, including the project/resource management aspects of this

Different organisations approach this in different ways. The key questions to consider include:

  • Funding model – centralised or decentralised (i.e. project based)?
  • Organisational model – if your organisation is heavily de-centralised (typically it’s grown through acquisition and the business strategy has been to leave acquired units to behave autonomously) then the ICC structure needs to reflect this in order to succeed.
  • Change PMs – are they rewarded/incentivised for supporting reuse? for minimising operational (IT and business) as well as change costs? If this is not well set up then you’re in for a continual battle.
  • Use of external product vendors – if they do not insist on the SOA standards/architectural guidelines role being fulfilled, this is an indicator that they don’t know what they are doing!
  • Are you expecting to use the supplier’s data model for integration?
  • Are you using significant offshore/remote resources? – this strongly affects how best to structure your teams

So what’s the best model? Well – of course – it depends. In general we recommend a decentralised team with some limited centralised standards support/expert help and project management/admin capability as this reflects the reality that we see with our customers – i.e. that a large non-project cost centre is not sustainable. The key is in the sustainability of the relationship between the ICC/ICoE and the individual project teams/federated parts of the organisation – if you get that functioning well with the projects feeding back ‘value’ into the ICC and the projects see the value that the ICC is providing (by removing the barriers to integration), then you’ve cracked it. This is all a subset of our ‘Sustainable Enterprise Architecture’ vision really.

Other challenges to consider

Here’s a hit list of other things to think about/look out for that we might have seen out in the field…

  • Holy wars – over technology etc, partially due to poor comms/selling of pros/cons/vision typically
  • Keeping up the energy/commitment in the long run – it’s a 10k run, not a 100m sprint. It’s always difficult for any long-term initiative to survive business cycles and staffing changes.
  • Business involvement/input – depends on whether it was a “technology sell” in the first place – but fundamentally the business guys should know best about where various services are likely to head, and therefore maximise agility down the road
  • You can only go so fast – each step on the road takes a while to bed down, and you can’t leap-frog a maturity step, so allow for bedding-in phases. A common issue we see is that initial versions of internal standards are “over-egged”, making it hard for service providers to comply with them – maybe just intellectually difficult if not technically so. There may be a bit of “showing off” going on there from new staff in the ICC who are keen to impress. Just stop it, right? I’m not interested in your use of the emerging esoteric WS-yada-yada standard when you don’t even have a service catalogue under control :o)
  • How do you manage a tier 1 supplier? – SAP, Oracle etc – to ensure you buy the right products for you, even though their consultants are all over you like a rash. A common and tempting pattern for organisations is to go for a “one stop shop” from a single vendor for HR, ERP etc so that they can minimise integration problems. I’m not saying that this doesn’t have merit, but the key point is that your integration standards need to still be yours if you want to retain any control over the selection and integration of other components in the future. Ensure your suppliers have the necessary motivators/vested interests to deliver ‘good’ services that integrate well with the wider solution (i.e. so that they don’t have a cost driver to minimise the customisation of their product, leading to non-ideal/non-conformant services). Also, if you have multiple partners/vendors involved, ensure that they have no incentive to rubbish and impede each other. As a customer, that’s no fun.
  • Be clear about deliveries from your integration partner – exactly what they will produce and how it will add value, not just trust them to come up with the rigour/’magic of the ESB’, as there’s a lot of BS out there around integration/SOA.
  • Service versioning – we tend to favour an “n+1 model”, but often we see that organisations don’t sort this out early enough as they don’t see it as a “day 1” problem, and then falter later on.
  • Service ‘hygiene’ in general – invest in it.
  • Metrics to measure and demonstrate success and business benefit, and also the effectiveness of governance processes (Robin’s rule – if your governance process never “stops” a project, it’s a sick process)
  • Have a communication plan – your heavily federated organisation isn’t going to guess what your ICC is up to and magically fall into line with it, is it? And probably the key point here – keep it alive. One-time comms happens – well – once. Once you’ve read this blog you’ll forget it in 15 minutes, if I’m lucky. Your ‘virtual ICC’ colleagues will do the same unless you keep repeating the messages. It’s the old adage – tell them what you’re going to tell them, then tell them…etc.
  • Plan for the necessary admin/management support for your ICC – especially important for the ongoing governance process/project touch points as this task won’t happen for ‘free’.
  • Define a concessions process – this is the “get out of jail” card – grant concessions to deviate from standards in special cases, document the concession and its mitigation/consequences, and set a review or mitigating-action deadline.
  • Don’t let the ICC get bogged down in delivery, such that not enough time is spent on vision/standards/governance. A tough one to avoid, and one where lots of organisations struggle. It all comes back to good project planning/resourcing really.
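To make the “n+1” versioning point above concrete, here is a minimal, hypothetical sketch (the class and endpoint names are illustrative, not from any real service catalogue): a service keeps its current contract version plus the one before it live, and anything older is retired automatically.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of an "n+1" versioning policy: the current version and its
// immediate predecessor stay published; older versions are retired.
public class ServiceVersionRegistry {
    private int current; // highest version published so far
    private final Map<Integer, String> endpoints = new TreeMap<>();

    public void publish(int version, String endpoint) {
        endpoints.put(version, endpoint);
        current = Math.max(current, version);
        // Retire everything older than (current - 1).
        endpoints.keySet().removeIf(v -> v < current - 1);
    }

    public boolean isSupported(int version) {
        return endpoints.containsKey(version);
    }

    public String endpointFor(int version) {
        if (!isSupported(version)) {
            throw new IllegalArgumentException("Version " + version + " is retired");
        }
        return endpoints.get(version);
    }
}
```

The same policy applies whether the ‘endpoint’ is a SOAP namespace, a REST URI or a queue name – the point is that the retirement rule is decided (and ideally automated) on day 1, not discovered later.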

That’s all there is to it!

Following on from the previous post, here are some more random notes about CloudBurst – pros, cons, features, limitations etc.

One thing that occurred to me was “Why is CloudBurst a hardware appliance?” – it could just be a software appliance… Well, the key reason, it seems to me, is that it holds lots of OS/WAS admin passwords etc, and so the ‘hardening’ of a hardware appliance with tamper-resistance is a key feature.

Deployment patterns and deploying

WAS hypervisor edition is actually an OVA image with 4 virtual disks and multiple WAS profiles set up already – so the hypervisor edition is a VM image rather than a customised WAS code base, it seems.

There are patterns for a single server deployment, small cluster (3 VMs), large cluster (15 VMs) etc. You can modify a standard pre-packaged VM (e.g. add a fixpack etc) and then ‘capture’ back into CloudBurst as a standard catalogue VM for use in new patterns.

Control is available over whether certain pattern properties (passwords, memory size etc) can be overridden for each instance of that pattern or not.

A key point – keep track of any changes made to your VM patterns (e.g. any tuning done) and then ‘bake’ them into the pattern in CloudBurst so that any future deployments get the changes – otherwise they’ll be lost when you redeploy the pattern.

The first image transfer to each hypervisor can take up to an hour (obviously this is environment dependent)!

IP addresses are allocated on the fly when deploying, i.e. it pulls them from a pool of available IP addresses that the admin user sets up.

Script packages

Post deployment, CloudBurst can run any script, not just wsadmin scripts – essentially it ssh’s over to the VM and uses a zip file and an executable name (e.g. <some_path>/wsadmin.sh) with some arguments (e.g. what JACL file to run). ‘wsadmin’ scripts can be used against the deployment manager VM to install an application (EAR file) into the cloud cluster. Some “wsadmin” scripts are provided out of the box for common tasks – setting up global security etc.

Management facilities
CloudBurst provides a centralised access point to the consoles for each VM/WAS instance.

You can control separate permissions for each user (with LDAP integration), so you can have multiple users using a single CloudBurst box at one time, creating their own ‘private’ patterns etc.

You can use it to control the hypervisors to create snapshots of all the VMs in a particular deployment, so for example you can run some tests and then quickly recover the entire virtual deployment (i.e. all the VMs).

License management/metering etc seems a pretty limited offering – it relies on admin REST APIs exposed by CloudBurst that are called by something like Tivoli.

CloudBurst admin console interface seems v..e..r..y.. slow to respond sometimes.

RobinAtWIUGJuly2009

After a journey characterised by a conspiracy between parking meters and failed tube signals, I made it yesterday to IBM’s Hursley Park for the WebSphere Integration User Group meeting. Here’s the photo at a sunny Winchester station as proof…ok…so Jamie and I forgot to take any photos at the event.

The keynote presentation was from Kevin Turner – ESB and Messaging Technical Strategy Manager. He covered IBM’s Smarter Planet initiative and then moved on to future architectural trends – the noteworthy points were:

  • The 2nd wave of SOA is coming – there’s a significant number of ’1st wave’ adopters out there now who have been through the joy and the pain, and have now fully understood the upfront investment required and the potential benefits. This 2nd wave is likely to consist of organisations trying to join up their islands of SOA (probably built along departmental lines due to a sensible pragmatic strategy of ‘baby steps’ SOA), and so federation of multiple ESBs will be a key theme. Governance will be crucial here if these islands are ever to be consolidated into a virtual enterprise bus that solves the problems of visibility of services across ESBs, end-to-end security and manageability etc.
  • Patterns – IBM are working on building some integration patterns support into their tooling (presumably WebSphere Business Modeller, WebSphere Integration Developer etc) to allow an expert from an organisation’s ICC/ICoE to define the standard ‘pattern’ and therefore accelerate the development of instances of that integration pattern. The integration developer might just have to supply the pattern parameters for example, with many of the decisions such as how to manage errors etc already consistently solved for them.

There were a couple of presentations about specific MQ-related Supportpacs (MO71 and IH03) which I expected to be dull due to their nature, but the presenters managed to bring them alive – especially Paul Clarke. It was clear that he’d written and refined quite an impressive beast of an application over many years and was rightly proud of it.

Kevin mentioned CloudBurst during his keynote, and I managed to get some time with an IBMer later in a hastily arranged session to discuss it some more – I’ll post details in another blog post later as there’s quite a lot to report.

That means I missed the start of the next session about the developments in the WebSphere Process Server product to better support the human interaction aspects of BPEL processes (which have been sorely needed in our experience!). Paul Smith demoed the Business Space web app from WPS v6.2 which goes some of the way to addressing these shortcomings, with better human ‘override’ control of processes (skipping a step, repeating a step etc) and better visualisation tools for business users to use to understand where a particular process instance has got to, etc. This is clearly still a developing area of the product set though.

ilog

The last session I attended was a demo of JRules following the recent iLog acquisition by IBM. An ex-iLogger, Lenny Bromberg, gave a very engaging demo which involved my colleague Jamie playing the role of “business user” to dynamically change rules that influenced the behaviour of a mock motor insurance quotation app. An interesting aspect of Lenny’s “pitch” was that essentially rules engines are ten a penny – there are several good open source ones out there if you want one – but what JRules provides is a Business Rules Management System (BRMS), i.e. all the significant extras wrapped around the actual runtime rules execution environment that you need to make it really workable, manageable and governable. This includes rule development/editing environments (developer IDE and business-facing), rule versioning, rule comparison tools, audit, simulation environments etc. Some other observations:

  • Lenny’s experience from previous projects where they have integrated with BPM solutions (like WPS etc) is that they often find that the business process definition/BPEL has become ‘spaghetti’ as the business rules are not clearly separated from the business process – and so the use of an external rules engine enforces a good business process automation design practice, and leads to more maintainable BPEL etc.
  • This is related to BRMSs in general and not specifically to JRules, but a weakness that I could see is that the rules rely on a good, stable business object model – and we know from experience with numerous customers how difficult it is to get enterprise data models together and agreed. This is the potentially shifting sand that the rules are all built upon.

Many thanks to Mike Wyvill and Keith Guttridge and others for organising the event. Well worth the £28… :o)

This must be one of the most over-blogged topics in the known universe – but this subject keeps coming up in discussions both internally within Smart421 and externally, so I thought I’d post a summary for reference purposes…

Pros

  • Agility, e.g. don’t have to wait for your infrastructure team to try something out on a dev server, or get a trial of a SaaS app up and running etc
  • Elastic – can have 20 servers up and running inside an hour and then turn them off 2 hours later after a performance run
  • Scale – scale as your business/demand grows without having to plan in detail
  • Linear cost – Pay for what you use, no upfront investment at all – can even pay by the hour for WebSphere Portal server now on Amazon (WAS coming soon)
  • A way of having a DR/business continuity strategy cost effectively
  • Flexible – want to host in the US? Then change your mind and want to host in Europe? Or both for resilience? Fine with Amazon for example…
  • Access – have access from more locations without putting additional infrastructure in place
  • Control – control your entire deployment from a single browser console (e.g. with Amazon)
  • Greener – maybe…fewer bigger and more efficient data centres rather than everyone having their own
  • Stick to the knitting – do what you do (insurance etc) not running data centres

Cons

  • Loss of control – security etc is in your service provider’s hands. Do you really know they are backing up your data ok? Data separation in a multi-tenancy architecture? SLAs defined? Is the availability really good enough? Can you export your data for backup and/or reporting purposes?
  • Regulatory – e.g. the Data Protection Act requires that data is secure and “not transferred to other countries without adequate protection”
  • Maybe more costly if you know your demand (i.e. if elasticity is not a big requirement for you – as you potentially pay a premium for it – someone has to build the data centre…)
  • Limited choice of environments – cannot have a non-standard deployment so easily, best to stick to LAMP etc
  • Very hard/impossible to migrate from one cloud provider to another I suspect at the mo – so a new form of vendor lock-in basically
  • Real risk of business users buying their own SaaS apps without any governance – and creating another generation of silo’d business apps (just this time hosted outside rather than inside!)
  • Performance/latency – not as quick if your components are hosted “not on your doorstep”
  • Extra bandwidth costs incurred
  • Is your cloud provider’s business stable – will they be there in 2 years’ time?

jbehave

I spent a really interesting hour the other day with a colleague of mine, Steve Cresswell, going through his recent work to integrate natural language capabilities into a popular open source web test tool.

He’s been integrating JBehave with Selenium.

JBehave is a framework for Behaviour-Driven Development (BDD), and takes test-driven development (TDD) for agile to the next logical level: instead of your business reps looking at your JUnit tests with you (and hoping that the comments that explain them are up to date and consistent with the code!), you create and then execute your tests in the ultimate natural format – English.

A simple test scenario might look like this…

Given I am logged in
When I enter an order for 10 books at a price of £2.10
Then I should see an order confirmation
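In JBehave, each line of a scenario like this binds to a Java step method. The sketch below is a stripped-down, hypothetical illustration of that mapping – the @Given/@When/@Then annotations from org.jbehave.core.annotations are shown as comments so the class stays self-contained, and all names are made up for this example. In Steve’s setup the When/Then steps would actually drive a browser via Selenium rather than just updating fields.

```java
import java.math.BigDecimal;

// Plain-Java model of the step class a JBehave scenario would bind to.
public class OrderSteps {
    private boolean loggedIn;
    private int quantity;
    private BigDecimal price;

    // @Given("I am logged in")
    public void givenIAmLoggedIn() {
        loggedIn = true;
    }

    // @When("I enter an order for $qty books at a price of £$price")
    public void whenIEnterAnOrder(int qty, String unitPrice) {
        quantity = qty;
        price = new BigDecimal(unitPrice);
    }

    // @Then("I should see an order confirmation")
    public boolean thenIShouldSeeAnOrderConfirmation() {
        // A confirmation only appears for a logged-in user with a priced order.
        return loggedIn && quantity > 0 && price.signum() > 0;
    }
}
```

The value of the framework is exactly this traceability: the English scenario your customer wrote is the thing that executes, via methods like these.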

Selenium is widely known in the agile community – the killer features of the Selenium IDE are that there’s very little barrier to entry for QAs and Product Owners (customers), and unlike some of the alternatives it runs in a real browser rather than a partially simulated environment. What Steve has done is to put Selenium and JBehave together in a way that the authors hadn’t expected (i.e. having Selenium “drive” JBehave, rather than JBehave drive Selenium), using AJAX + JSON to make the process more seamless…and then created a simple web app which dynamically discovers JBehave’s “given” steps and lets you invoke them.

It’s a fascinating area and one I’d like to spend more time getting into – but hey, you can’t cover it all. I love the idea of sitting down with your customer and creating definitions of success in plain English, then building the app, with those tests becoming your regression test suite – it takes away another barrier between the developer and customer worlds, which can only mean better, quicker deliveries.

With all the excitement surrounding cloud computing, I’ve been chewing over what the realities are in an enterprise of deploying into a private cloud environment. Whilst there are obviously great cloud ‘use cases’ like the ability to scale up rapidly to create a performance test environment etc – the rather more mundane fact of corporate life is that many of the deployments are relatively small scale, maybe a few servers at most to support specific and specialist business operations. But we still want to have them hosted inside our corporate private cloud – we want them deployed to a virtualised environment, with fail-over support, higher resource utilisation and managed in the same way as the rest of our IT estate and so on.

But some relatively specialised, industry-specific software packages still require the use of hardware dongles plugged into a USB port in order to enforce license agreements. So then what do you do? Well, you can ‘virtualise away’ the dongle to the extent that there is technology out there to allow VMs to connect over TCP/IP to USB ports that are actually hosted elsewhere, but that still leaves you with a hardware affinity in the data centre, e.g. you couldn’t transparently move a dongle-dependent VM from one data centre to another in the cloud. And it also gives you an additional single point of failure in the infrastructure design that needs to be addressed. Obviously you would hope that vendors would be moving away from using dongles – but some still will for the foreseeable future.

It’s a silly but real example, and the point is that the vision of everything moving to private clouds is subject to these kinds of practical considerations – the application vendors have a role to play in enabling their products for these kinds of deployment models…

As previously announced here, my colleague Richard Latham is presenting at the Open Group’s Enterprise Architecture Practitioners Conference on April 30th at 11am. The conference itself runs from 28–30th April and is located at Central Hall Westminster, Storey’s Gate in London.

Richard will be presenting on the subject of “Sustainable Enterprise Architecture” – based upon real-world experience from our recent Enterprise Architecture engagements – covering how to create EA and make it work without undue strain on resources of money, political capital, and change to an organisation’s status quo.

Here’s a summary image from his presentation to give some context…

SustainableEASmall

Richard and other Smarties will be there throughout the whole event, so if you’re there grab them and say hello!
