This is about a recent British Computer Society (BCS) event on Stakeholder Management.

The event was entitled Creating Value through effective stakeholder management and the presenter was Alison Charles.

Date / Venue: Thursday 20th Feb 2014, 6.30pm start / BCS, Southampton Street, London.

Why go to BCS meetings?

My reasoning is to:

        • Learn some new stuff
        • Get a different slant on a topic
        • Remind myself of some of the basics
        • Be able to do a better job tomorrow than I did today
        • Network
        • Enjoy the free buffet!

Back to Thursday evening – here are a few snippets…

Who needs a slide deck? Not our presenter! Two flip-charts, some marker pens, and the confidence and knowledge to communicate effectively.

Melding extensive experience as a Project Manager (PM) with more recent experience as a Neuro Linguistic Programming (NLP) trainer, Alison at times got the whole audience on their feet, trying two-minute exercises in pairs – some of which made us a little uncomfortable.

The 1st flipchart (there were only 3 or 4 – KISS!) was:

        • Stakeholders
        • Impact of change for each
        • Their present mind-set
        • Support needed (= what can they do to help the project? – but have a contingency, just in case they don’t)
        • Their concerns, issues, resistance
        • Influence, strategy, responsibility. Test out success areas
Risk – a PM may guess at the present mind-sets; instead, draft and use questions to get real answers.
Suggestion – identify problem areas by comparing your understanding with theirs. It’s largely about putting yourself in their position and viewing the situation from there.

And that leads us on to …

RAPPORT

Shared world view, finding common ground, getting along; a feeling between two people.

How do we communicate?

Face to face vs. on the phone – the Mehrabian Communication Model (pie chart omitted): on the phone, words account for just 16% of the message and tonality for 84%.

So it’s not all about what you say after all!

Back to the “R” word, it is not a panacea, but it can get you to a place where you can talk about the issue :>)

The 4MAT system (http://www.aboutlearning.com/what-is-4mat) was mentioned, and it turns out that I am a bottom right-hand quadrant man – I like ‘What’! So I took notes that enabled me to recall stuff for this blog!

Alison shared some personal experiences including how she converted the adversity of losing her luggage on an international flight into a successful outcome making use of some emergency overnight items provided by the airline (razor, T-shirt and a condom!) – only kidding – she only used one of the items listed!

I enjoyed, and got value from, “Creating Value through effective stakeholder management”. It ticked my “why go…” boxes above, and I will be attending more BCS events.

So why not have a look at what is on offer and give it a try?

How to join the BCS.

I am Gordon Elliott a business analyst/consultant with several decades of experience working in IT. I have worked for Smart421 (part of KCOM) for the last decade and have been a BCS member for 3 years.

Having recently re-read Alistair Cockburn’s book on Agile Software Development, I thought I’d repeat some of the concepts that he describes and try to expand on how those may apply to the development of Enterprise-level software, compared to Application-level software.

There is a very interesting section describing concepts for the design of a methodology for software development. Within this, the choice of a suitable approach is based on a range of criteria, including the following:

  • Methodology Size – the number of control elements in the approach
  • Ceremony – a level of precision and tolerance applied by the approach; greater ceremony corresponds to tighter controls
  • Methodology Weight – the product of size and ceremony, for comparison purposes
  • Problem Size – the number of elements in the problem domain and their inherent cross-complexity
  • Project Size – the number of people whose efforts need to be coordinated
  • System Criticality – the importance of the system, or the level of damage that may be caused by any undetected faults; Alistair uses a scale as follows: loss of comfort, loss of discretionary money, loss of irreplaceable money, loss of life
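
As a rough illustration of the “weight” comparison (the element counts and ceremony scores below are my own invented numbers for the sketch, not figures from Cockburn’s book), the idea boils down to a simple product:

```python
# Illustrative sketch only: the sizes and ceremony scores are invented for the
# example, not taken from Cockburn's book.

def methodology_weight(size: int, ceremony: int) -> int:
    """Cockburn's comparison metric: weight = number of control elements x ceremony."""
    return size * ceremony

# Hypothetical characterisations (ceremony on a 1-5 scale).
scrum_like = methodology_weight(size=9, ceremony=1)     # few artefacts, low ceremony
prince2_like = methodology_weight(size=40, ceremony=4)  # many artefacts, tight controls

print(f"Scrum-like weight: {scrum_like}, PRINCE2-like weight: {prince2_like}")
```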

Other criteria listed in his book are precision, accuracy, relevance, tolerance, visibility, scale and stability. This last one gauges how likely things are to change, explained with a telling question (‘if I were to ask the same questions today and in two weeks, how likely would I be to get the same answers?’).

The criteria I have included in the bullet list above are perhaps the most useful for classifying the different types of methodology, with the others being useful for expanding on the detail of any particular approach.

The most commonly used Agile methodologies, such as XP and Scrum, are inherently lightweight, having minimal control elements and low ceremony, making them most suited to smaller problems and projects.

Alistair includes an interesting table showing how projects can be characterised by communication load, criticality and priorities. Discussions can then position which type of methodology should, or should not, be used for a project under consideration. Those projects with smaller teams, changing requirements, flexibility in delivery, low risk and small scale are seen at one end of the scale, whilst massive programmes and projects needing strict controls against high risk (financial or life implications) will appear at the upper end of the scale.

In many cases, development of a new piece of standalone application software can benefit from the advantages of an Agile methodology. Here, early visibility of functional software is really useful, allowing ideas about the purpose and features of the application to change as development progresses. That has to be the true home of Agile software development, backed up by approaches such as Test-Driven Development to improve the quality of software from ‘day one’.

However, it would seem that for the development of multiple integrated applications in an Enterprise software architecture, this is not going to be the best approach to take. For enterprise-level systems, the requirements are dictated by the business and not just by a set of end users as for an individual application. Up-front work is required to ensure that the overall architecture and requirements are understood, and that planning allows for dependencies between systems to be addressed during the longer software development lifecycle that will be involved. This does not remove the need for good measures of progress and assurances of quality, which can be seen from an Agile approach, but the scale of the project must be borne in mind.

It is for this reason that project management disciplines such as Prince2 and quality control standards such as ISO-9001 have been defined. These are often maligned as being overly heavyweight, but the reaction of using lightweight approaches instead does not often resolve the problems of poor software development and delivery. A lot of failed projects are hampered by poor understanding of requirements, communication difficulties and false reports of progress. Agile doesn’t fix that on a large scale, although that is essentially what it does address on smaller projects.

Stepping aside from the Agile/non-Agile thoughts, I would tender that for Enterprise-level programmes, the risks of failure are perhaps equal, in percentage terms, to those of smaller and mid-sized projects. However, if those odds result in an overrun or under-delivery, then the sheer scale of an enterprise software project will mean that the costs incurred are much higher, the failings more visible, and the lack of delivery felt across a larger area of the business.

Given these (personal) views, it would seem that there is more risk in using lightweight methodologies for Enterprise-level software development as a whole, whilst still permitting their use within a programme for clearly-scoped or isolated areas of functionality. Would you seriously want to consider an informal approach to developing the software applications that are supposed to drive through large-scale business change in your enterprise? Would you not want to know what you are aiming to deliver before committing to developing chunks of that software?

Another consideration is that, if you wish to move your whole software architecture onto a new platform of integrated components, as with many SOA initiatives, perhaps more thought should go into a phased approach. ‘Big bang’ software development and delivery is always an all-or-nothing commitment. Instead, aim to develop and deliver incremental functionality, integrating new with old as you go. This may seem more expensive, due to the extra integration overhead, but it allows better management of costs, reduces risk and – as with Agile approaches – lets you put the software from each project phase into live operation if you wish.

To sum up, I feel that software development should indeed be performed in an ‘agile’ manner regardless of scale, but that when looking to deliver Enterprise-level solutions, there is a need for Enterprise-level methodologies and programme management, which is not what Agile approaches offer. I also feel that this is just what Alistair Cockburn presses for in his book; he is an Agile proponent, but he clearly recognises the need for scale to be an input to the choice of approach taken to software delivery.

I’m starting to wonder whether the word ‘Agile’, in relation to software development, is already being seen as some sort of swear-word, in just the same way as ‘Waterfall’ is frowned upon, even demonised.

Over my none-too-short career, I have worked in many different environments, using many different software development processes and quality standards. Those include military spec systems, banking and financial applications, telecommunications, pharmaceutical regulatory systems and assorted other application and integration projects. As a result, I have had to work on software with different requirements based on scale and rigour (consider BS5750, ISO9001 and FDA rules, for instance).

At Smart421, we have ISO accreditation for our software development processes as well as our ITIL/ISO-20000 service management activities. Being systems integrators, we will use whichever project management process is most suitable, or that which is requested by our clients. In this regard, our Prince2-based project management approach is our default choice, scaled to meet the needs of each particular project. We have also delivered projects using Rational Unified Process (RUP) and Scrum, as well as developing with other Agile software development approaches.

Given that expertise (mine and Smart421’s), it is quite clear that no one project development process is going to be correct for all software projects. Returning to my initial comment, it seems to me that ‘Agile’ may already be suffering from too much adoption on projects where it is not entirely suitable. This results in poor delivery and in customer expectations not being met, which is just the sort of problem that ‘Waterfall’ projects have been accused of over many years.

I’m not aiming to attack Agile, nor to defend Waterfall, but just want to raise the point that both have their merits and both have had their failures (high or low profile). The natural ground for Agile is smaller-scale developments, although that does not preclude it being used for large-scale deliverables, provided the level of rigour is increased to allow for this. That is not to say that Waterfall is the answer to delivering large projects, but it does tend to bring the associated rigour (documentation, whether seen as overhead or not) needed for such systems.

Of course, Waterfall has earned its criticism – often on very large-scale, large-budget failures. Agile may be lucky in that its failures tend to be on smaller-scale projects. A further benefit is that, if approached properly, even failed projects will (or should) deliver something of value. If an Agile project fails to do that, it doesn’t deserve to be called Agile either.

Recall the Agile Manifesto:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more

Then read Alistair Cockburn’s book ‘Agile Software Development’ and don’t just look for the worked example on XP, but read the sentiments and meaning in there. He advocates greater controls and levels of artefacts for larger projects, based on not only the scale of the problem, but on the importance of the solution. For a life-critical system, there is a need for much more rigour than on a discretionary, nice-to-have system. All quite obvious really, but something that a number of Agile proponents seem to miss.

I intend to add a further blog article about the tension between Agile processes and enterprise-level (or SOA) software, to expand on my views as to how these may or may not fit together.

In the meantime, I’d like to hope that Agile converts don’t fail to see the wood for the trees, and that not too many projects end in failure through inappropriate choice and use of such software development processes.

I’m sat on a train at the moment, from Norwich to Ipswich. This train will take 39 minutes to reach its destination. It’ll then take me 7.5 minutes to walk to the office, and 1.3 minutes to make a coffee. I’ll be ready to work at 08:17:48. Obviously, I’ll stop to take a sip of coffee at 08:17:57, 08:18:20 and 08:18:50, and a large, cup-emptying slurp at 08:19:20. Other than that though, I’ll be working straight through until 17:30:00. I will not get peckish and stop to raid the snack machine. There will be no interruptions. Nobody will, at short notice, book a meeting or pitch up at my desk for an impromptu chat about the current status of Project X or Initiative Y. My fiancé will not send me an SMS asking me to pick up a bottle of wine on the way home, and absolutely no recruitment consultants will call. All my days are like this. I never have a bad day, never fail to get my head around the task at hand first time, never struggle to think where to start. All my tasks are predictable, have well-defined goals and require no assistance from anybody who might be having a bad day. Of course, in this environment, I deliver what I said I would, when I said I would, with 100% certainty, every time.

Of course, I’m dreaming.

The reality is that nothing about the average day of the average employee is in any way precise. My train might take 39 minutes, or it might take 41, or if I’m lucky and there’s a northerly wind, it might take 38 minutes and 59 seconds. My walk will be marred by traffic lights, and I (shock) will have bad days. If I were to use the fingers of every occupant of this rush hour train to count the number of times I’ve been asked by a project manager over the last 10 years “How long will this project take? How much will it cost?”, I’d probably have no more than two fingers left to type the remainder of this post.

The trouble is, you see, I don’t know.

Don’t get me wrong, I can estimate things as well as the next guy – it’s just counting widgets at the end of the day – but I know I’ll be wrong. Of course, project managers are reasonable people; they say things like “Well, we have to be 100% confident in this estimate, so we’ll add 10% contingency to your total, OK?” No. Not OK. The issue here isn’t that I under-estimate routinely (although that might well be an issue), it’s that adding ten percent will not make any estimate 100% confident, and nor will adding thirty, one hundred or even three hundred percent, regardless of how good an estimator I am.

Anyone who remembers the whale and bowl of petunias from Douglas Adams’ The Hitchhiker’s Guide to the Galaxy will recall that, according to Adams, the sudden appearance of flora and fauna in deep space is unlikely, but nonetheless has a real, finite probability. Perhaps somewhat shockingly, this sort of thing is a reasonably well-accepted side effect of quantum mechanics; it could happen at any time, in any place. You could be just tucking into a medium (sorry, Grandé) cappuccino at your local Starbucks when something, anything, appears. House, car, cat, dog, frog, you name it. Now, just to stop you panicking over your caffeinated beverages, the chances of this happening are infinitesimally small, but it serves to illustrate the point that you never know what might get in the way of your project.

So, finally, to the point of my post. I think it’s time estimating grew up a bit, and stopped pretending that it knows all the answers. What’s needed is a way of estimating the cost and duration of a project that softens the boundaries a bit, and gives a truer picture not of how long a project will take, but of how long it might take. Imagine for a second that a programme manager knew there was a 66% chance that the project would be in by Christmas, and an 88% chance it’d be in by June. He’d be much more likely to give the board a sensible message than if all he had was a vague assurance from you that it could conceivably be done before the turkey gets cold. Equally, those of us who sometimes work on fixed-price contracts would have a much better way of assessing the risk of a given project, allowing us to reliably turn a profit while offering a good deal for the client.

It strikes me that there are two approaches that could be taken to this:

  • Use probability theory to attach a probability distribution to every input parameter in the estimating model, and then carry these through the model to give a final probability distribution at the end. Complex, nasty, not much fun unless you have a PhD in applied statistics.
  • Make the input parameters to the model fuzzy (more on what I mean by this later) and then run the estimating model over and over again, collect all the answers and build a histogram (chart) out of the results.

I’ve thought long and hard about this over the last few years, and frankly the former option is beyond the capability of my AS-level statistics. Even if it wasn’t, though, I’d be recommending the latter, and here’s why: it’s intuitive. What you’re doing is running the project over and over again, and seeing how long it takes. Project leaders get to be old and wise without actually being old and wise; they’ve already done this project 1,000 times (albeit in the mind of a machine), so they’ve got a good idea how long it might take.

Using this approach, you could build absolutely anything into your model. Productive day averages 6 hours, but varies between 2 and 20 hours on occasion? Sorted. 0.04% chance of aliens abducting your senior developer? No problem, at least for the project. The options are endless.

Nicer still, because this approach uses simulation rather than algebra, we don’t need to be too anal about how the parameters are set. If it’s easier to say “95% of the time it’ll take 5 hours, but 5% of the time it’ll take a random value between 8 and 10 hours”, then that’s fine. We don’t have to put together some strange combination of probability distributions that models this; we just run with it. Equally, if you have a set of example data to use as a basis (well, this system has N classes, and it took X long), then these values could be used directly, without having to build a complex model from them first. That said, if the guys doing the estimating do understand probability, then they can use a Poisson distribution to determine how many use cases will be delivered by a week next Friday if they so desire.
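
As a minimal sketch of that idea (illustrative only – the task count, percentages and number of runs below are assumptions of mine, not figures from this post), you can sample each task from exactly that sort of fuzzy rule, “run the project” a thousand times, and read confidence levels straight off the results:

```python
import random

def sample_task_hours() -> float:
    """Fuzzy estimate: 95% of the time the task takes ~5 hours,
    the other 5% it takes a random value between 8 and 10 hours."""
    if random.random() < 0.95:
        return 5.0
    return random.uniform(8.0, 10.0)

def simulate_project(num_tasks: int = 40, runs: int = 1000) -> list[float]:
    """Run the whole project 'runs' times and record each total duration."""
    return [sum(sample_task_hours() for _ in range(num_tasks)) for _ in range(runs)]

totals = sorted(simulate_project())

# Report confidence levels rather than a single number:
# "66% of simulated runs finished within X hours".
for pct in (50, 66, 88, 95):
    idx = int(len(totals) * pct / 100) - 1
    print(f"{pct}% of runs finished within {totals[idx]:.0f} hours")
```

The histogram of those totals is the “how long it might take” picture; swapping in a different sampling rule (a productive day that averages six hours but occasionally collapses to two, say) is a one-line change.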

Equally, because we’re actually running the project, we can apply all sorts of interesting things to the model that would be impossible using a purely statistics driven approach. For example, in an agile project, we can simulate the team size and length of sprints against the simulated sizes of the products to determine the optimum length of a sprint for the project. We could simulate the quality of deliverables based on whether code reviews are expected, and use this to estimate the impact on the length of the test cycle. Obviously, this stuff is a bit trickier to achieve than answering the usual how long/how much question, but it’s always good to know there’s scope to develop things further in the future.
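
For example, a sprint-length comparison might be sketched like this (again, every number here – backlog size, velocity, per-sprint overhead – is a made-up assumption purely to show the mechanism, not a recommendation):

```python
import random

def simulate_delivery(sprint_weeks: int, backlog_points: int = 200, runs: int = 1000) -> float:
    """Average elapsed weeks to burn down the backlog, assuming an illustrative
    velocity of ~10 points per week (varying run to run) plus roughly one day of
    planning/review overhead per sprint."""
    results = []
    for _ in range(runs):
        velocity = max(random.gauss(10, 2), 1) * sprint_weeks  # points per sprint
        sprints = backlog_points / velocity
        results.append(sprints * sprint_weeks + sprints * 0.2)  # work + overhead
    return sum(results) / runs

for weeks in (1, 2, 3, 4):
    print(f"{weeks}-week sprints: ~{simulate_delivery(weeks):.0f} elapsed weeks")
```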

The architecture underlying this kind of estimating machine is pretty trivial. I’d say, with 100% certainty, you could deliver the underlying engine in 27 hours, 5 minutes. Elephants and petunias notwithstanding.
