Modern Architecture with Bauhaus inspired elements
Photo: Architect Weekly

I’ve got to that age when one reflects on one’s career: the troubled projects, late nights, cancelled holidays, “go live” celebrations and (it’s work, honestly) client entertaining. Not quite rock-n-roll, but a rich tapestry of consultancy, and what strikes me, repeatedly, is how things change yet remain the same.

For my undergraduate thesis, I looked at three small businesses (we now call them SMEs), developing a picture of their business objectives, critical success factors and key performance indicators (motivation modelling) in order to understand what they wanted to be able to do (capability modelling) and what technology facilities (architecture building blocks) they needed.  Since then, I’ve done analysis, development, management, selling (shh!) and lots of other stuff but I’m back doing architecture again.  Like Viggo Mortensen’s character, I just can’t escape my past.

What we now call enterprise architecture was once an aspect of business analysis and before that was strategic planning.  So if things really have remained the same, what can we learn from our architecture forebears?  Our architecture is relatively young so let’s look at traditional architecture.

Physical architecture typically involves a professional team being commissioned by their client to plan, design and supervise the building of structures.  The formative academic study of architecture in the UK (University College, 1841) shaped architecture as a fusion of art and science, and I think this mostly holds for our architecture. We have clients (internal or external), we undertake commissions (contractual or not) and we plan, design & supervise.  Do we, however, see art and science in our architecture?  The importance of communicating ideas in a digestible and attractive way certainly needs artistic flair, and the development of a pleasant user experience needs artistic understanding and creativity.  The need to understand people and their inter-relationships is critical to success, so the humanities come into play, too.  Maybe architects are the polymaths of the information world.  To balance this accolade, maybe I should question whether we’re professional, but let’s make that rhetorical.

One area of difference is the reach of architectural projects.  Physical architecture is comfortably project-based and, even when developing the head offices of corporate giants, has a specific area of impact.  Our architecture, dealing with information rather than concrete and steel, has a much broader, enterprise-wide impact and hence an existence and mandate beyond the project.  This is important: it requires an elevated position for the practitioners of our architecture, and it requires us to supervise for a longer period of time.  Further, physical structures have a clear purpose (to get across the river safely, to comfortably house 1,000 staff), so benefits realisation is clearer and sooner.  As long as the bridge is inspected as safe and cars can cross the river on it, the purpose has been achieved; Norman and Richard can be given a pat on the back and can cash their cheques.  With our information structures, the basic achievement of purpose cannot be so clearly signed off, and the full realisation of benefit may take a while to confirm.  Our Zaha cannot be congratulated quite so readily.

Finally, let’s consider context.  Byzantine, Renaissance and other, older architectures are reflections of prevailing cultural norms.  Modern architectures can be said to be establishing styles and setting trends that other aspects of society later adopt; notable in this regard is Bauhaus.  Corporately, the move from evolved to architected enterprises (though early stage) emulates this lifecycle of physical architecture.

So…

  • Architecture needs a wide variety of skills: arts, humanities & science
  • Architecture is about change so buying in architecture is completely valid
  • Information architecture does not have physical form on which rules of certainty can be employed so we must accept a degree of ambiguity
  • The change from evolved to architected happened in the physical world and is rightly happening in the information world

Our history is indeed shadowing that of our cousins in physical architecture; things do remain the same.

 


Getting ready for the 2014 conference season, it struck me that the technology revolution has moved from “E” to “D” – by that I mean Digital has replaced Electronic (as in E-Commerce) as the new “must have” conference title.

I remember back in the 1970s when Digital was replacing Analogue – with the mass introduction of digital watches and calculators – so it amuses me to see it recycled. Obviously the generation adopting the word today don’t see the irony in it – nor do the companies inventing Digital divisions.

The serious point is that we’re struggling to articulate the impact of disruptive change on many axes simultaneously. Led by Cloud and Mobile, and closely followed by Social Media, Big Data, the need for a secure on-line Identity and even ‘wearable’ technology (back to my old digital watch again), how do enterprises encapsulate the change?

Cloud is probably the easiest to grasp as it is the most mature and is already releasing its technology potential, but now it needs to be recognised as a commercial disruptor – it has already impacted traditional hosting vendors and subjugated the “lock-in” contracts they felt were safe. Cloud provides not only a natural ‘leap-frogging’ for new entrants into markets by reducing up-front set-up costs, but can also be a defensive strategy for those businesses trying to adapt to meet rapidly changing customer expectations and behaviours.

Mobile is a key driver changing behaviour – the acceleration of mobile and tablet (i.e. non-PC) platform adoption is changing the location of the commercial interaction with customers. The mobility of smart phones and tablets has released the consumer from a seat at the desk (office or home PC), and consumers are “inviting” enterprises onto their commuter trains or into their sitting rooms as they ‘browse in the morning’ and ‘buy in the evening’.

Social Media has benefited from this informal interaction and given access to every review, post, tweet and blog – allowing research ahead of an eventual in-store or online purchase – and making C2C communications the primary channel for feedback. Not only do we look up facts at the dinner table using our smart phones, but we’re looking at everyone else’s opinion of that new camera, car or city-break as part of the selection process.

All that “opinion” needs a home and – adding it together with all the data produced from location tracking, monitoring and automated machine to machine communication – we have the exponential growth in the volume of data. Then you need tools and techniques to analyse that data (back to Cloud again).

Consumers are also demanding personal interaction which drives the need for Identity – allowing industries to start to drive up the quality and richness of exchanges to enhance customer experience.

So finally I come back to my original question – is my old Digital watch trendy again?


There have been some high profile instances recently of a Mobile App being “Retired”:

  • LinkedIn for iPad (they are retiring versions prior to 7)
  • Flappy Bird (yes, I know this link goes nowhere – it’s been withdrawn!)

Let’s take the second one first as that has even made CNN News and the pages of TIME as incredulous tech and gaming journalists speculate about the real reasons why anyone would voluntarily sacrifice over $50k a DAY in revenues by withdrawing an App from Apple and Google stores. Maybe Dong Nguyen just made enough money, or maybe he really was getting fed up with something that a lot of App Developers forget about – how to support your App in the ever-changing world of mobile. Or maybe he just wanted to create loads of publicity before cheap imitations like this took over.

In the former example of LinkedIn, like many of you probably,  I’ve been getting emails for a few days now encouraging me to change:

We wanted to follow up and remind you that we’ll no longer be supporting LinkedIn iPad app versions older than 7.0 starting February 18. This will help us focus on creating even better mobile products and experiences for you.

You currently have one of these older apps, but you can download the latest app anytime from the iTunes App Store. It’s a brand new app — we think you’ll like it! With the new app you can now search for jobs — plus like, share, and comment on what you’re reading.

Have questions? Visit our Help Center for more info.

Now, this is in spite of my having updated to version 7.1 of the App almost as soon as it came out as I regularly update my Apps. Why don’t they know that and stop spamming me? Oh, I forgot, that’s what LinkedIn does best…

“So what?”  you say…

Well, one common theme is that the “idea” to “retirement” lifecycle of mobile is fast – less than a few months in Flappy Bird’s (rather extreme) case – and it seems LinkedIn have put some thought and effort into trying to ensure customers did not continue using their unsupported App version. This is accepted and understood by consumers, who most likely downloaded the thing for free anyway, but what if you’re the CEO of a company that has just invested a few hundred thousand in developing some internal Apps for your employees?

Most people accept that the mobile development landscape is complicated and not getting any easier, in spite of cross-platform tools and web development paradigms, so one of the pillars of your Mobile Enterprise is managing those Apps: supporting them, providing updates as operating systems update and, before long, retiring them completely. Have you thought this through before you launch your Apps on your staff or customers?

We are seeing common trends; one very obvious one is that developing successfully for mobile within the Enterprise needs Agile methods to deliver value. So in a mature organisation, a good choice for extending development to cover inception and longer-term management could be an extended Agile delivery lifecycle such as that provided in Disciplined Agile Delivery (DAD). The lifecycle extends your standard iterations to provide the initiation and support parts of the lifecycle.

[Figure: extended Agile delivery lifecycle for mobile]

The important points are not to stifle innovation, nor to slow down responsiveness to your users’ demands, but to make sure you don’t waste your innovators’ time supporting out-of-date code, and that you notify your users to get new versions in an intelligent way. Notification of users seems such a simple and common practice that it’s amazing Windows Phone 8 doesn’t have common notification management yet, although it’s rumoured to be coming soon as the Action Centre.

Having only just bitten the bullet and dumped my Android phone for a shiny new Nokia Windows Phone 8 handset, I’m finding first hand a lot of these subtle differences in maturity between Android, iOS and Windows Phone, but Microsoft/Nokia are catching up fast and need to be part of your mobile-first strategy.

Yesterday at AWS re:Invent, Andy Jassy delivered the main keynote. As you can see from the photo below, the event was immense – the day before I was in the APN Summit so it was AWS partners only, and that felt big.

But this was 9,000 attendees from 57 countries in a room. The photo doesn’t really capture the epic scale – which struck me as kinda like a metaphor for AWS itself, i.e. the scale of the administrative operation was off the chart, it was all very efficiently managed, and it gets bigger every year!

I thought it was interesting that they didn’t even “save up” for re:Invent the recent 10% price reduction for M3 EC2 instances, which was announced on 5th November. To me, this just shows how baked into the business model these regular price reductions have become.

In content terms, the three main new announcements were:

  • Amazon CloudTrail – the ability to log all AWS API calls to S3 for audit and compliance purposes. This is a nice feature that we’ve asked for before, but actually hasn’t been too much of a barrier to customer adoption previously, probably because we are typically managing the entire AWS layer for a customer anyway.
  • Amazon WorkSpaces – virtual desktops-as-a-service. Interestingly, desktop “state” is maintained as you move between access devices, e.g. from laptop to tablet. We’ve deployed virtual desktops in AWS for a number of customer projects – either desktops for key users in a Disaster Recovery scenario, or for developers who are located around the world and need a consistent desktop with known applications installed in order to access AWS-hosted dev and test environments. So I can see us using this new feature in future projects, as I suspect the cost model will compare well against the installation/build/ongoing patching effort of putting in a bunch of Windows Remote Desktop servers.
  • Amazon AppStream – HD-quality video generation and streaming across multiple device types. This is related to another announcement that was made on 5th Nov – the new g2.2xlarge instance type, which has the GPU grunt to enable the creation of 3D applications that run in the cloud and deliver high-performance 3D graphics to mobile devices, TVs etc.

Weirdly, being at the event you get less time to look into these new product announcements, so you probably end up with less detail than if you were just reading about them on the web – after the keynote it was straight into a bunch of technical sessions.

I mainly focused on the data analytics sessions. First off, I got to hear about what NASA have been doing with data visualisation – I think all attendees expected to hear about exciting interstellar data visualisations, but it was actually about much more mundane visualisations of skills management, recruitment trends etc – and this in fact made it much more applicable to the audience’s typical use cases as well. There were some great takeaways about how to maximise your chance of success which I need to write up at some point…

I then attended an excellent deep dive on Amazon Elastic MapReduce (EMR) – this covered Hadoop tuning and optimisation, architecture choices and how they impact costs, dynamically scaling clusters, when to use S3 and when to use HDFS for storage, instance sizes to use and how to design the cluster size for a specific workload.

This was followed by some customer technical overviews of their use of RedShift. They had all migrated to RedShift from either a SQL or NoSQL architecture. For example, Desk.com have deployed two RedShift clusters in order to isolate read from write workloads, but I felt that they had been forced to put considerable effort into building a proxy in front of RedShift to optimise performance – fundamentally because RedShift is limited to 15 concurrent queries and, for their reporting workload, they are not in control of the peaks in their users’ demand for reports. So they’ve implemented their own query queuing and throttling mechanism, which sounds like a whole heap of technical and tricky non-differentiating work to me. A key takeaway from this session for me, though, was that the price-performance characteristic of RedShift had really worked for these customers and given them the ability to scale at a cost that they just could not before. They were all achieving very high data ingress rates by batching up their data inserts and loading direct from S3.
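As a very rough sketch of the kind of throttling layer described (this is not Desk.com’s implementation, and the class name is mine), a semaphore that caps in-flight queries at RedShift’s concurrency limit is the essence of it:

using System;
using System.Threading;
using System.Threading.Tasks;

public class QueryThrottle
{
    // RedShift allowed around 15 concurrent queries per cluster at the time,
    // so cap the number of in-flight queries at that limit.
    private readonly SemaphoreSlim _slots = new SemaphoreSlim(15);

    public async Task<T> RunAsync<T>(Func<Task<T>> query)
    {
        await _slots.WaitAsync();   // callers queue here when all slots are busy
        try
        {
            return await query();
        }
        finally
        {
            _slots.Release();
        }
    }
}

Anything beyond the fifteenth query simply waits its turn, which is the queuing behaviour they described – the hard part is doing that fairly when the reporting peaks are outside your control.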

The final session I attended was about a Mechanical Turk use case from InfoScout. Mechanical Turk is an intriguing service as it’s so different to the other AWS offerings – in fact it’s not a service at all really although it exposes a bunch of APIs – it’s a marketplace. Classic Mechanical Turk use cases include translation, transcription, sentiment analysis, search engine algorithm validation etc, but InfoScout’s need was for data cleaning and capture following an automated but fallible OCR process – capturing the data from pictures of shopping receipts taken on smart phones. The main takeaway for me was about how they manage quality control – i.e. how do you know and therefore tune and optimise the quality of the results you get from the workers executing your HITs? InfoScout use two quality control strategies:

  • Known answers – in a batch of receipt images that is handled by a Mechanical Turk worker, they inject a “known” receipt and compare the data captured with the known data on that receipt. This technique is good for clear yes/no quality checks, e.g. is this receipt from Walmart? This allows them to compute a metric for each worker as to how likely it is that their other receipts have been accurately processed (a minimal sketch of this check follows the list).
  • Plurality – send an unprocessed receipt to more than one worker and see how consistent the returned results are. InfoScout build a confidence score based upon this and other factors such as worker tenure.
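To make the “known answers” idea concrete, here is a minimal sketch (the type and field names are mine, not InfoScout’s) of scoring a worker against the seeded receipts:

using System;

public class WorkerStats
{
    public int KnownReceiptsSeen;
    public int KnownReceiptsCorrect;

    // Accuracy on the seeded "known" receipts, used as a proxy for how likely
    // the worker's other receipts were captured accurately.
    public double EstimatedAccuracy =>
        KnownReceiptsSeen == 0 ? 0.0 : (double)KnownReceiptsCorrect / KnownReceiptsSeen;
}

public static class KnownAnswerCheck
{
    public static void Record(WorkerStats stats, string submittedStore, string knownStore)
    {
        stats.KnownReceiptsSeen++;
        // A clear yes/no comparison, e.g. "is this receipt from Walmart?"
        if (string.Equals(submittedStore, knownStore, StringComparison.OrdinalIgnoreCase))
        {
            stats.KnownReceiptsCorrect++;
        }
    }
}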

The final event of the day was the re:invent pub crawl around 16 of the coolest bars in The Venetian and The Palazzo hotels. I’m guessing I don’t need to tell you much about that event, other than it started with sangria… :)

Tough, but someone has to do it…

St Paul’s Cathedral from Grange St Paul’s Hotel

The AWS Enterprise Summit yesterday was excellent.  I use superlatives sparingly, but it was iconic.

When registrations are as heavily over-subscribed as they were for yesterday’s event, you know you have a significant indicator of the level of interest that the AWS Cloud is generating.  And, not for the first time, this was particularly pronounced in the UK Enterprise sector.

The place was rammed.

More than 500 delegates converged on the Grange St Paul’s Hotel in London to hear two AWS directors and a raft of director and senior IT people from UK Enterprises tell their story for themselves.

When Sir Christopher Wren (1632-1723) architected St Paul’s Cathedral, he demonstrated a level of architectural expertise that surpassed mere practicality and function. Although the same cannot really be said of the architecture for every instance in every cloud, more and more engagements for large UK companies do provide a monument to others of all that is good and enduring about Cloud computing.

We think that AWS is, on balance, getting a lot right.

But perhaps we should really let the attendees speak for themselves.

Please leave your Comment below.

Infrastructure Architecture is dead, long live Infrastructure Architecture.

Cloud infrastructure has changed the role of the infrastructure architect into that of a virtual architect. The tin-and-wires approach, the HA and DR, the test and release, and the follow-the-sun support have all been replaced by Cloud infrastructure. Careful sizing, location, rack allocation etc. are unnecessary for most applications.

The desktop environment is changing radically: desktop management, massive rollout projects and investment decisions are becoming obsolete.  The use of virtual terminal software returns the desktop to the server. BYOD removes the need, and the ability, to direct the desktop and mobile landscape. Network architecture is increasingly virtualised, both within the virtual datacentre and between datacentres and client devices.

It is hard to imagine that bastion of physicality, the Infrastructure Architect, dealing with purely virtual servers communicating on a virtual network, all of which can be assembled from their own virtual device. There is a serious point to this: it depends on the Application Architect designing in such a way that the full power of the cloud can be used.

Not that it is anything new, just more pressure on the Application Architect.

 

 


After the polished video intro (screen shot below – and catch the full keynote here), Brad Anderson (Corporate VP at Microsoft) kicked off TechEd Europe here in Madrid by highlighting the opportunities available to us in IT at the moment. He talked about innovation, helping customers differentiate themselves and how Microsoft aim to make others great; so whilst I definitely agree with him, it was the resonance with so much of what we do at Smart that I really liked.

[Screenshot: TechEd Europe keynote opening sequence]

Four areas of innovation were discussed: People-Centric IT, Building better Apps for people, Data, and the Cloud Platform. Aligned to these were new product release announcements, including SQL 2014 CTP, VS 2013 and TFS 2013 CTP, Windows Server 2012 R2 and System Center 2012 R2. Better still, a partnership with Oracle was announced, which means Oracle is fully supported on Azure and Hyper-V (and, from what I gathered, Oracle images will be ready to spin up in Azure).

One of the best parts (and I tweeted at the time – @wanty1975 btw) was a video of one of our customers (Aviva) referenced as an example of mobile and Azure development combining to provide an innovative solution for their customers. Why was this so good? Well, partly because my old boss was in the video… but more because seeing real-world examples, and ones close to home too, helps inspire me and makes me realise the opportunities we have available to us.

[Image: TechEd keynote presentation]

So, a brief list of new features mentioned in the keynote that you should be aware of:

  • BYOD – two-factor authentication making it easy to add devices (a MS Surface in the demo) to AD domains using Windows 8.1 (not sure what the experience would be like for other devices, though). The simple process means files are sync’d, permissions granted to corporate data and Line of Business (LoB) applications downloaded onto the device, accessible via a company portal-like homepage.
  • VS and TFS 2013 – ALM benefits throughout the development lifecycle, but I really liked InRelease (following a recent acquisition by Microsoft), which provides workflow and approval mechanisms for moving builds through environments. I could see this really helping to take the pain away from tracking builds through environments and ensuring everyone knows what versions are where. From a dev perspective I also thought the new Code Lens and Code Map features looked really helpful. These come under the banner of Situational Awareness and are designed to reduce the time and effort needed by developers when context switching. Code Lens integrates with TFS 2013 to provide details within the code base, such as tests passing/failing, who last checked it in and any recent check-ins. It helps developers understand who has done what, where and why, much more simply than the previous version/compare routine that we’ve all experienced. Code Map provides a diagrammatic view of code (similar to architecture modelling tools), but this can be viewed at runtime in debug mode, like a call stack, helping developers familiarise themselves with code much more quickly and easily.
  • SQL 2014 – some huge performance gains achieved through in-memory OLTP and row-based access rather than page-based, therefore reducing locking issues and increasing performance. SQL will suggest which tables are ‘hot’ and recommend moving them into memory (I’m assuming they’re still persisted to disk fairly frequently!). I also picked up on the ability to replicate an existing on-prem DB to Azure, to provide HA and DR via a right-click menu option… I didn’t see or hear any more evidence of this, but it sounds like a great feature. Also HDInsight and Hadoop linkage to Excel to provide visualisations and insight into Big Data.
  • Windows Server and System Center 2012 R2 – providing the tools to make Hybrid cloud simple and easy, with tooling to help sync data, create replicas for DR purposes and significant performance benefits for virtualised machines.

So there are obviously things that I’ve missed and lots more to talk about but hopefully this represents a fair summary. I’ll also be tweeting any further nuggets throughout the week.

Caution: workforce in the road!

What would your reaction be if the workforce in the road, fixing the road, did not have any tools or machines to do the job?

Frustration at the waste of time in the resulting traffic queue?

What would be your reaction if the washing machine repair man turned up without his tool kit, without a diagram of the appliance and without access to spare parts?

Refuse to pay the bill?

A security company providing security without enough staff

Questions in Parliament?

How is it that so many Enterprise Architects can do their job without the tools of their trade?

Often Enterprise Architects are missing vital parts of their tool kit:

  • Standards
  • Principles
  • Reference architectures
  • Models of the Organisation
  • Application Landscape
  • Analysis and design tools
  • Information sources to feed the analysis tools
  • Stakeholder analysis

Worse than this, they seem to lack the basic tools to create the EA tools they need, such as the processes to maintain the models, principles, guidance and governance.

Do you wonder why EA gets a bad name?

I am not suggesting that we go back to the old EA approaches:

  • Boil the ocean documenting the current state
  • Tons of detailed standards (always out of date)
  • Heavy-handed governance that increases costs, misses deadlines and the point

…or any of the other EA anti-patterns.

TOGAF 9.x of course points us at lots of artefacts and things to do; it is supposed to. We do not have to do them all, we can mix and match – but what happens when we mix and match ourselves out of TOGAF 9.x in all but name? Are we no longer doing architecture?

There are precedents for this situation:

SSADM was created and adopted, but everyone picked the bits they liked or could do. No one could afford to complete the whole of SSADM – especially with paper and pencil (there were few tools around).  SSADM became discredited; every claim of compliance was subject to interpretation.

A similar thing happened to PRINCE.

I guess that there are many other examples of the dilution of the good practices until they are no longer effective.

Will this be the fate of TOGAF?

Are we architects no longer doing architecture?

The reason for this post is that I am still seeing Helper, Utility and Singleton code implemented in large enterprise-scale solutions today. It is almost as bad as coding in Hungarian notation. Now, practices like Hungarian notation did have a place and a purpose, notably in languages such as C++ where you did not have type safety, but things have moved on since those days, so PascalCase and camelCase are now the preferred conventions – certainly within the .NET space, with camelCase preferred in the Java space.

Microsoft has a good post on Design Guidelines for Class Library Developers.

So really, Helpers or Utilities come (in my view) from procedural languages, and by definition they tend to be static types that often equate to “general stuff” that doesn’t fit anywhere else. That is very much a procedural mind-set: when you build software in OO languages such as C# or Java, everything has a single responsibility.

This discussion has been flogged for years, especially when .NET was becoming popular, but it seems all these discussions haven’t really changed people’s mind-sets. Many developers still believe that a helper is a good thing. I am not sure how these patterns are perceived in the Java space, but in .NET they are generally considered anti-OO, and there are good reasons for this which we will go into in this post. Nick Malik from Microsoft wrote this post 6 years ago about this very topic. I have to say I agree 100% with Nick’s comments. It’s actually quite scary to read the responses Nick got to that post.

I have copied a particular comment here for reference from a chap named Nate on Nick’s blog, not to pick on Nate but purely for a discussion and example perspective:

This is all good and all, but if helper classes are undesirable, where do you put miscellaneous code that you would normally put in a helper class? Where does that method go that say, helps encode a URL? Or that function that converts an array to a comma delimited list? Or in C#, that wrapper for a certain Win32 API? In all practicality, I find that I often have these piddely tasks that don’t really lend themselves to encapsulation within an object and are pretty much on their own, but are called from myriad places. If I was coding in C++, I would place it outside of a class, but languages like C# and VB.NET don’t allow me to do that. Does it violate OO principles? Of course it does. But OO principles are not meant to be a straightjacket. All that said, I do agree that if they show up in a UML diagram, your helper class is probably doing more than isolating a “piddely task” and that something is probably wrong.

Helpers

For starters, I’d like to ask: what “miscellaneous” stuff? This is one of the biggest issues with things like Helpers (and even more so with Utilities): they start out being quite small, but before you know it they become a dumping ground for all things to man and beast. And to make things worse, they tend to be static types, so they are very hard to test.

In Nate’s response above, I’d suggest putting the URL-encoding functions in an extension method on the String class. But then you might say two things:

  1. Extension methods were not available back in 2005 when that blog post was written
  2. Extension methods are really helper classes anyway

The answer to the above is yes and yes! With point 2, though, the usage is rather different. Extension methods are helper classes in that you can only write static methods, but there is one major difference: you do not actually call the helper class directly. Instead, extension methods are an extensibility point in the .NET type system. They should usually be very short functions that do not have dependencies, so the developer using an extension method should be very clear about what the function is doing and should be able to use it in unit tests without issues.

By the way, my extension method post linked above needs updating. Extension method classes should have the name <class>Extensions, i.e. for IEnumerable extensions it should be IEnumerableExtensions. That way it is very clear which classes are extension method classes and which are not.
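As a minimal sketch of that convention (the class and method names here are mine, not from Nate’s comment or Nick’s post), a URL-encoding extension on string might look like this:

using System;

// Follows the <class>Extensions naming convention: short, static and
// dependency-free, so callers can use it in unit tests without mocking.
public static class StringExtensions
{
    public static string UrlEncode(this string value)
    {
        return Uri.EscapeDataString(value);
    }
}

Call sites then read naturally – var encoded = rawInput.UrlEncode(); – and never mention the helper class by name, which is the key difference from a traditional static Helper.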

So where should those URL-encoding functions have gone back in .NET 2.0, before extension methods existed? The answer: I would create a new class called Url, with no static methods, and make it part of the framework for my application. This then gives me the power to inject dependencies in the future should I need to, and makes testing very easy.

So, if the Url class required dependencies – e.g. in the future we might want to send a sample message to a given URL, which has a dependency on a custom communication adapter – it wouldn’t be a suitable candidate for extension methods anyway.
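To illustrate the point, here is a sketch of such a Url class (the ICommunicationAdapter interface is hypothetical – it just stands in for the custom communication adapter mentioned above):

using System;

// Hypothetical dependency that a future requirement might introduce.
public interface ICommunicationAdapter
{
    void Send(string url, string message);
}

// An instance class rather than a static helper: dependencies arrive via the
// constructor, so they can be swapped for mocks in unit tests.
public class Url
{
    private readonly ICommunicationAdapter _adapter;

    public Url(ICommunicationAdapter adapter)
    {
        _adapter = adapter;
    }

    public string Encode(string raw)
    {
        return Uri.EscapeDataString(raw);
    }

    public void SendSample(string url)
    {
        // Delegates to the injected adapter rather than calling it statically.
        _adapter.Send(url, "sample message");
    }
}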

So take the above scenario: if we had a helper class that looks like the following:

public static class Helper
{
    public static string EncodeUrl(string url)
    {
        // encode the URL and return the result
        return Uri.EscapeDataString(url);
    }
}

That looks OK – so, for now, forget about testability. But for one thing, the class name is not particularly useful. We could change it, or leave it as is; often it will get left as is. Then another developer will think, “oh, there’s a helper class for library functions – I’ll add my FoobarAppSpecificFeature method to it”. And so it goes on, until you end up with a god class.

God Classes

God classes are classes that do or know too much. They breach the Single Responsibility Principle. They are often the result of a procedural developer trying to implement an object-oriented system with a global-variable mind-set. They often do not fit in well with TDD (Test Driven Development), purely because the classes are hard to test: they rely on external dependencies which are very hard to mock out. Also, they do not model the system in any way, so maintainability comes into question – if you have god classes, you will also have god unit test classes and projects too.

Are Singletons Really bad?

I personally don’t think they are when used in conjunction with an IoC container. The reason is that a singleton used via an IoC container looks no different from a non-static instance type. This is one of the major benefits of using IoC: the life cycle is controlled by the container, not by the consumer of the class. And this configuration should live in the composition root.

There are many valid reasons you would want such a class and the fact you can piggy back on an IoC gives great power and flexibility.

Singletons when used with IoC do not prevent you from testing unlike static types which are mostly bad for reasons already mentioned and should be avoided when possible.
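As a minimal sketch of what that looks like in a composition root (using Microsoft.Extensions.DependencyInjection purely as an example container – the interface and class names are mine):

using Microsoft.Extensions.DependencyInjection;

public interface ISettingsCache
{
    string Get(string key);
}

public class SettingsCache : ISettingsCache
{
    public string Get(string key)
    {
        // Look up the cached value (details omitted for brevity).
        return key;
    }
}

public static class CompositionRoot
{
    public static ServiceProvider Build()
    {
        var services = new ServiceCollection();

        // The container owns the lifetime: one shared instance for the whole
        // application, but consumers simply take ISettingsCache in their
        // constructors and can substitute a mock in unit tests.
        services.AddSingleton<ISettingsCache, SettingsCache>();

        return services.BuildServiceProvider();
    }
}

Consumers never know or care that SettingsCache is a singleton – which is exactly why it stays testable.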

Singletons <> Static types

The moment you define a static type, you immediately write off any use of IoC and testability. That really is the key message here.

The other thing to remember is that, if you implement a static type via an extension method or other means, the consuming developer needs to understand what is actually happening and should have no interest in mocking out those calls in a unit test. If the consuming developer would want to mock out those calls, then reconsider the use of an extension method/static type.

So I think Singletons are OK, so long as they are not static types and are made singletons as part of the configuration of the container they belong to.

Utilities

Utility and helper classes are often interchangeable in terms of naming. A Utility class is often just a helper god class.

I’m interested in your views on this post, so please comment.

Over the years, we’ve seen some blunders in the Enterprise Architecture (EA) world, and some recurring themes have emerged. Being honest, we’ve even been in the room sometimes when they happen :). So we thought we’d document a few of these mistakes – here are the first three. Feel free to add comments with your own favourites – maybe we’ll collate this info together at some point and summarise it in a white paper.

Introduction

Do these things (below) for a while, and then pack your bag, because your EA function will be disbanded. OK, it might be renamed (always a good rule – if a project is failing, the first thing to do is rename it) or re-organised in some way – but that’s just the organisation thrashing around you as it tries to deal with the fact that YOU HAVE FAILED. A clear symptom is when other teams start doing what you might consider to be EA work (“hey, I should be doing that”) – that’s when you know you’ve blown it.

Great PowerPoint strategy, but no execution

Without strategy, execution is aimless. Without execution, strategy is useless (Morris Chang)

A common EA behaviour is to specify a preferred technology for some enterprise function, but to define no roadmap to create the necessary dev/test environments, train people etc – so the barriers to use are just too high for any one project to bear. The strategy is therefore utterly pointless – even damaging, in fact, as it just wastes resources.

Seduced by complexity

The engineer in us loves it – e.g. we love trying to populate the whole Zachman grid. It’s like Pokemon – you’ve “gotta catch ‘em all!”. Also, we love to have a conceptual model, then a logical model, then a physical model etc – and for some aspects of EA maybe this is appropriate. But the numerous dimensions can multiply up to create a seemingly infinite number of artefacts and viewpoints that you can never complete (and definitely cannot maintain).

If you feel a need to fill in all the boxes or else “I’m not done, I don’t have a complete EA”, just take a step back. It’s nonsense. Ask yourself these questions:

  • Why are you there? To support the business that you work for.
  • Where do they get the value? Probably 50% of the value of EA comes from the first 20% of the effort – having a vision for various aspects of your EA, and a candidate roadmap to get there.

Modeling hell

This is a special case of the “seduced by complexity” error that deserves its own special mention. There is something incredibly seductive about using a modelling tool…

  • You suddenly feel “if only I could capture everything perfectly, then the world would be perfectly understood!”…
  • …quickly followed by “and if I could capture the right meta-data, maybe I could execute some of my model, or at least have some great live reporting from it!”…
  • …slowly followed by several months of coffee and darkened rooms…
  • …and then rapidly followed by your notice period

Sure, modelling has its place (a key role in fact) but the trick is to remember why on earth you are doing it and what value you and the business as a whole will derive from it. Otherwise you’ll descend into the 9th level of modelling hell…
