Design Patterns

Infrastructure Architecture is dead, long live Infrastructure Architecture.

Cloud infrastructure has changed the role of the infrastructure architect into that of a virtual architect. The tin-and-wires approach, the HA and DR, the test and release and the follow-the-sun support have all been replaced by cloud infrastructure. Careful sizing, location, rack allocation and so on are unnecessary for most applications.

The desktop environment is changing radically: desktop management, massive rollout projects and investment decisions are becoming obsolete. The use of virtual terminal software returns the desktop to the server. BYOD removes the need, and the ability, to dictate the desktop and mobile landscape. Network architecture is increasingly virtualised, both within the virtual datacentre and between datacentres and client devices.

It is hard to imagine that bastion of physicality, the Infrastructure Architect, dealing with purely virtual servers communicating on a virtual network, all of it assembled from their own virtual device. There is a serious point to this: it depends on the Application Architect designing in a way that enables the full power of the cloud to be used.

Not that it is anything new, just more pressure on the Application Architect.




Photo: Industrial backdrop by Pilarts  Dreamstime Stock Photos & Stock Free Images


I’d like to propose a best practice for rolling out new features in a Service Oriented Architecture (SOA).

Traditionally, when we roll out a major new feature, we often end up causing a breaking change to the service. We’re then faced with a choice: (a) force all our consumers to upgrade to the new version, making all of them hate us, or (b) continue to support the old version of the service as well as the new, making only our own teams hate us. Suck it up: plan (b) is the better option, but try telling that to the guy having to patch fixes in three concurrent versions of a service.

Now, there are patterns that can help here (more on that another day), but they all still mean more work for everyone.

Also, when we first roll out a feature is exactly the moment we understand it least. We’ve got absolutely no idea how people will use it, nor whether it will even turn out to be useful. By baking the feature into a new major version of the service, we’re taking all our options away. The feature will be hard to remove if we decide it isn’t useful, and if we want to change how it works, we’re back into a major version upgrade again.

To my mind, good engineering is largely about keeping your options open. It would be nice if we could try a new feature with a subset of consumers first, iterating quickly with just that subset and gradually adding more consumers as we get more confident.

Enter the Feature Flags pattern. Feature flags allow you to turn features on and off at a moment’s notice. At its most basic, a feature flag turns a feature on or off for everyone at once, but the idea is often extended to allow turning on features for specific users, or collections of users. This lets you roll out a new feature to consumers gradually, over an extended period.

So, here’s the proposal:

  • Allow consumers to pass a set of feature flags dictating which features they’d like enabled in the service.
  • Whenever you build a major new feature that would otherwise cause a breaking change, only enable it when the feature flag is passed.
  • If appropriate to your environment, control access to feature flags like you would to any other resource – e.g. you might want to restrict access in the early days to just a single consumer, making it easier to iterate.
  • Once we’re comfortable with a feature, it becomes publicly available – i.e. anyone can toggle the flag.
  • Every so often (e.g. once every couple of years), create a new major version of the service, refactoring it to include popular, battle tested features by default. Also, take this as an opportunity to clean out the cupboard and abandon any features that aren’t well used.
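To make the proposal concrete, here is a minimal sketch of the first two bullets — a service method gated by a consumer-supplied flag. All the names here (the service, the flag) are mine for illustration, not from any real service:

```java
import java.util.Set;

// Sketch of per-request feature flags: the consumer passes the set of
// flags it wants enabled, and the service branches on them.
public class GreetingService {

    // Hypothetical flag name for a change that would otherwise be breaking
    public static final String V2_RESPONSE = "v2-response";

    public String greet(String name, Set<String> featureFlags) {
        if (featureFlags.contains(V2_RESPONSE)) {
            // New behaviour, only for consumers who opted in via the flag
            return "{\"greeting\": \"Hello, " + name + "\"}";
        }
        // Old behaviour remains the default for everyone else
        return "Hello, " + name;
    }
}
```

Consumers who never pass the flag are untouched, which is exactly what keeps the change non-breaking until the feature graduates into a major version.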

What do you think? Comments and thoughts very welcome…




Photo: Old record player by Grafphotogpaher (c) Dreamstime Stock Free Images

Those of you accustomed to developing android applications will be familiar with the mechanisms used for building android apps: having the IDE do the work for you, which is somewhat limited, or breaking out and writing ant scripts to orchestrate a build. Some of the more adventurous amongst you may be using the maven-android-plugin.

It appears that the Google Android build tools team has come to terms with the fact that ant probably isn’t the best option these days, and is moving towards replacing ant with gradle in a future version of the SDK.

Last night I was fortunate enough to attend an event with the London Android Group (@londroid) at the SkillsMatter (@skillsmatter) HQ, where Hans Dockter (@hans_d), the CEO of Gradleware, gave a presentation on what gradle is and how it can be used for building android projects, enabling developers to manage their builds with groovy-based scripts rather than XML.

What is a build framework anyway?

As Hans put it so well, in short, a build framework will “compile stuff, copy things around, and drop out an archive”.

What’s so limiting about the current options?

Ant can be considered imperative, meaning simply that you have to spoon-feed it with instructions via tasks: compile these files that reside here, copy them to this directory, jar it up, and so on. It’s not really capable of figuring things out for itself.

Maven, on the other hand, is considered declarative, meaning that you focus more on the outputs of a task, such as “I want a web application”. Providing you have your source in the right place, maven is smart enough to figure out where to find it, how to compile it, and what the output for a web application should look like. Essentially you tell maven what the inputs are and what you expect to get out of it, and maven figures out the bit in between, avoiding the need to script tasks as you would with ant.

Sounds great, so what does gradle bring to the android party?

Free beer! If only…but we get the next best thing.

Gradle attempts to take the best parts of ant and maven; by using it to build your android projects you can benefit from:

  • Very light and easy-to-manage build files, with no trawling through humongous XML files (gradle vs maven).
  • Gradle follows convention over configuration like maven. It knows where your source is for a java project (unless you decide to override it), you can read more about convention over configuration here.
  • Flexible dependency management, integrate with existing maven/ivy repositories. Different versions of dependencies for different build tasks? No problem.
  • It gives you the freedom and flexibility to define your own custom behaviour without needing to write plugins as you would if using maven. Groovy is your friend here.
  • Support for multiple projects. You don’t need separate projects for production code and integration tests; you can keep them within the same project and define different source sets. This greatly reduces the parent/child projects that can be a chore to maintain.
  • Don’t have gradle installed? Can’t install gradle easily? No worries, there is gradle wrapper for that. This is particularly useful on CloudBees Jenkins environments where you don’t have access to install gradle directly.
  • You have a free and paid-for version of the app, with some common shared library between them? Gradle handles this perfectly via product flavours.
  • In addition to free and paid for flavours, you also have builds for different architectures such as ARM and x86? Flavour groups will help you there.
  • You’re not tied to an IDE; the build scripts are IDE-independent, so your team can choose their own flavour of IDE, or build from the command line as you would on a jenkins environment.
  • Don’t want to use Eclipse, prefer IntelliJ instead? No worries, apply the idea plugin and run gradle cleanIdea idea. Boom, the IDEA project is set up and ready to go, with no messing around with project settings.
  • Easily run instrumentation tests on multiple virtual devices, no need for manual testing each time you make a change.
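As a flavour of what those build files look like, here is a rough sketch of a build.gradle for an app with free and paid flavours. The plugin and SDK versions are illustrative only and will vary with your tooling — check the android build tools page for the current ones:

```groovy
// Illustrative build.gradle - plugin and SDK versions will vary
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:0.4'
    }
}

apply plugin: 'android'

android {
    compileSdkVersion 17

    // Free and paid variants sharing the same codebase
    productFlavors {
        free {}
        paid {}
    }
}
```

Compare that to the equivalent ant or maven XML and the “very light build files” point above makes itself.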

Getting started?

  1. Watch the presentation
  2. Read through the android build tools page, plenty of information here on how to use the android gradle plugin
  3. Have a read of the gradle documentation; it’s very well documented.
  4. Check out the samples on GitHub
  5. Have a browse of the groovy documentation so you can understand the basic syntax of the language.
  6. code, code, code!

Make sure you join the London Android group and look out for future events like this; they’re free and well worth the train ticket. For those in the south east, be sure to check out SyncIpswich and SyncNorwich for free technology meetups.


In my previous blog post on using AWS CloudFormation to provision a Centos based environment I mentioned how at first the JSON syntax used within CloudFormation templates can be a little daunting, especially for people with limited scripting experience.

At the time of writing that blog I was using a combination of Notepad++ and the JSON Viewer plug-in to create and maintain my CloudFormation templates. One of the problems with this approach was that the templates are treated as pretty basic text files by Notepad++, and the JSON Viewer plug-in only checks the base JSON syntax, so there is no validation of the objects defined in the JSON to ensure that the correct properties are being created to allow the template to form a valid CloudFormation stack.

As an early Christmas present to anyone working with CloudFormation, AWS recently announced the introduction of a new CloudFormation editor as part of the AWS Toolkits for Visual Studio and Eclipse. I have now had a chance to download and experiment with the Visual Studio version and am really impressed with how much easier it makes the creation and maintenance of CloudFormation templates.

As a .Net developer I have grown used to regularly relying upon IntelliSense features to aid with code creation, particularly when it comes to accessing and updating object properties. The CloudFormation editor provides this for the objects defined within a template as well as code snippets for all the object types you might want to define within your template. This greatly reduces the number of errors caused by ‘finger problems’ that used to occur when creating a template by hand.
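For anyone who hasn’t worked with a template before, the skeleton the editor’s snippets help you build looks roughly like this — the resource name is mine and the AMI id is a placeholder, not a real image:

```json
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Illustrative template declaring a single EC2 instance",
  "Resources" : {
    "WebServer" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "InstanceType" : "t1.micro",
        "ImageId" : "ami-xxxxxxxx"
      }
    }
  }
}
```

It is exactly the property names under each resource type (InstanceType, ImageId and friends) that the editor can now validate and complete, which a generic JSON checker cannot.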

The other really useful feature of the editor is the ability to estimate the costs of a stack before creating it. In the past when quoting for customers we have tended to pull together a design, plug the figures into the AWS Simple Monthly Calculator and then, once we have received the go-ahead, provision the environment. With the ability to generate costs from a template we are now looking at producing a base template at the design phase and then costing and building from it, which should improve both the speed of environment creation and clarity around the cost of altering the design.

Based on our experiences so far, it’s a big thank-you to AWS Santa and his helper elves for the early Christmas present of the CloudFormation editor, and we are all looking forward to further presents from AWS throughout next year :-)

I thought that the mental comparison between construction and systems was something that everyone had in mind when looking at the maturity of architecture (recently the scope of an engagement for a client).
The other day I was talking to a fellow architect and it surprised me that he had never made that mental comparison, so I guess a couple of paragraphs on the topic could be interesting.
So here’s the key thought: we IT Solution Architects are the professional equivalents of our counterparts in the construction world, the Architects… what a revelation! :)
It needs to be clarified that what we ordinarily understand as a “real world” architect, the one that architects buildings, bridges, airports, etc. would be the equivalent of the Solution Architect in the systems world.
In IT we have another half-dozen architect roles (Security, Application, Integration, Infrastructure, Data, Business); those also have their equivalents in the construction world, though in that business they’re mostly known as Engineers (I guess the fact that construction relates to tangible physical elements makes it more of an engineering domain). But effectively it is the same approach: just as a Solution Architect is responsible for delivering a balanced system design that addresses the business requirements across the technology and operational disciplines, dragging in domain architects as needed, the construction architect does the exact same thing: he brings in the Security Engineers, the Electrical Engineers, etc., and each of those professionals provides their input into the overall blueprint for the building in question.
Hemienu: Vizier, Master of Works, and architect of the Great Pyramid
It is important to highlight that the construction architecture discipline has been around for a very long time now — without being precise, at least 5,000 years. And don’t try to tell me that while construction is indeed an old activity, architecture is not. Not true; it may not have been the mature discipline it is right now, but it’s been around for quite a while (think of the Great Wall of China, the Pyramids, the Colosseum; all of those were definitely not architecture-less feats).
Obviously construction architecture has matured to incredible levels nowadays; just look, for example, at how successful the construction work for the Olympics in London has been. The level of sophistication of the design patterns; the clear boundaries between the disciplines; the crystal clear definition of the design deliverables; the standardization of tools and techniques; the precision and control of delivery; the absolute repeatability and predictability of it all…
How I wish my work in the system architecture space ran under such advanced and controlled frameworks… How I wish that whenever I start a new project I didn’t have this hopeless feeling that I’m entering uncharted territory again and again and again…
So what else is new? What is so exciting about drawing this comparison? What’s the bottom line?
Well, the bottom line is that I draw consolation from the fact that IT Solution Architecture is in its absolute infancy: just short of 70 years against those 5 millennia!
So whenever I get frustrated by my project issues, by my inability to prevent the same issues from creeping over and over again into my designs, by how frustrated my project team, my stakeholders and ultimately my client get through the whole process, I always bring this image to mind.
The image is that I’m not the equivalent of those smooth and trendy architecture boutiques effortlessly designing incredible stadia with an amazing level of complexity; the image is that I’m more like the architect the pharaoh commissioned to build the pyramid for his mortal body so long ago.
I can very easily imagine, and sympathise with, the level of absolute difficulty that challenge surely posed for that anonymous architect, and sadly or not, that image brings peace and quiet to my soul… :)
There you go: 5,000 years of experience for my discipline to catch up on, but also a certainty that at some point in time we will reach that dreamed-of maturity*… though not all of it may come true in my lifetime… shame!
As those architects of old were constantly told… “the damn thing is just a pile of blocks, surely it can’t be that difficult to put together?”
* maturity that will enable us to apply those exciting building architecture concepts, like the design principle of “pace layering”, which allows the architect to define the different components of the building so that faster-changing layers, like the floor plans or seat distribution, are able to slip and evolve unobstructed by the slower-changing layers, such as the building foundations… right now in IT architecture these concepts are mostly aspirational, but their time is coming. In our current maturity state all layers in a solution change every day, if not every hour, during and after delivery, and little consideration is given to creating durable structures… it’s as if we were constantly tearing down whole buildings and replacing them with new ones. That is obviously one of the strengths of IT in itself, the pace and depth of change that systems support, but we definitely need to learn to architect in ways that let us evolve our systems by renewing “facades” and “floor plans” while keeping long-lasting components underneath for longer periods: foundations that are able and ready to support the frequent change on top of them.
PS: I haven’t touched upon Enterprise Architecture in this article; my view is that EA compares to the city planning activities that our beloved councils across the country try to deliver for our cities… I guess in that one specific space IT has already caught up and is probably doing a better job than our real-world counterparts!
PS2: when drawing my comparison to pyramid architects, I humbly think of the lesser pieces, the ones that haven’t stood the test of time, the ones built for the pharaohs no one remembers… I haven’t really architected a system that will still be around for technology tourists to explore, analyze and admire 3,000 years from today… but watch this space!

As Smart421’s partnership with Amazon Web Services (AWS) slowly moves from pure infrastructure work into full cloud-based solution architecture design, I’ve recently spent some time analysing their IaaS* platform from that angle.

My immediate goal was to better understand how the infrastructure side of my solutions is going to be affected by this paradigm shift going forward. In other words, I was preparing myself to have a conversation with my infrastructure architect, let’s call him Jimmy, and not be caught showing my complete ignorance on the matter… again. :D

To my surprise I found that AWS, and IaaS in general, has a much more profound effect on my end-to-end solution design than just the infrastructure. The key is how the “servicelisation of infrastructure” (sorry) provides a great mechanism to finally address NFRs comprehensively at the application view and close what I used to call “the leap of faith into the iron”… let me explain that.

We all know by now that our solution design is always going to start with a business architecture exercise that will feed the solution requirements, including functional and data requirements but also those dreaded non-functional requirements (NFRs)**, to our data and application architects, who in turn will produce the data model and component view; they will be followed by our infrastructure architect, who will come up with the technical platform all those application components will run on. Fine and dandy, but…

Dreaded NFRs? Why? Well, while it is quite straightforward for our data architect to create a logical data model out of the data requirements, and it is just BAU for our application architect to derive the system capabilities to cover the functional requirements, it is not easy for either of the two to cope with those NFRs… How does our data architect react to an NFR such as “the system shall process 2 million transactions per day”? Or what does our application architect think of the NFR “the site will provide the same response times independently of the location of the user”? Well, in most cases the reaction will be “Well, that’s for infrastructure to answer, isn’t it? Let’s pass those ugly things to Jimmy; he’ll know what to do with them…”.

So that leaves us with that situation we’ve all been in so many times, in which we hand a very detailed application architecture to Jimmy alongside a very long list of performance-related requirements, hoping, and here’s the leap of faith I was referring to earlier, that our infrastructure hero will know how to put a lot of heavy equipment together that somehow, magically, will achieve those very ambitious performance goals… please raise your hand if that has never happened to you, or if it hasn’t inevitably ended up in all sorts of performance issues detected just too late down the line, maybe a couple of weeks prior to go-live? Anyone?

Well, let’s see how Infrastructure as a Service may come to our rescue. Just do this simple exercise: go to the Amazon Web Services site and read all the documentation of the offering, all of it, trying to ignore the fact that it describes an infrastructure platform, and instead approach it as if you were looking at the functionality of just another software system that needs to be part of your component architecture… Interestingly enough, this made a “magic click” in my head, and suddenly I was thinking about my solution and my application architecture*** in terms of capabilities, functionalities and features that elegantly addressed all those long-hated NFRs!

I’ve put this idea to the test for a CMS solution I’ve been playing around with recently; I would never typically have defined capabilities such as a “Fast Access Data Access Layer” or a “Low Latency Distribution of Content” in my capabilities inventory, but suddenly my understanding of AWS services such as ElastiCache or CloudFront made it dead simple to think about the NFRs and translate them into discrete solution components.

And what’s even more interesting is that my design is not immediately coupled to the given IaaS platform as a result, not at all. As with the rest of the solution, these components and capabilities allow for a fit-gap exercise against the available options, to be answered by my infrastructure architect: do we achieve global performance by deploying the servers in our corporate data centres, or by deploying them in the AWS regions in the cloud? Or do I just keep my platform in a single location on my premises and use a CDN pull-zone for low-latency delivery of static content? Quite a different proposition for Jimmy than the old “make it quick, boy”!

Now the problem is addressed where it should be in the design process, at the logical level, and decomposed into a set of features that achieve full traceability from the business into the application, then into the infrastructure layer, and then back up again… the work of our infrastructure architect is now so much easier, as is the predictability of our design exercise! Life is good! :D

Well, just a thought in any case… I guess most of you have already been through this learning, but just in case you have your own Jimmys suffering, this is a nice mental approach for bridging the gap.

* Infrastructure as a Service; I’ve focused this analysis on this type of cloud offering as it is probably where the biggest gap between architecture practices exists. In big organizations with dedicated software, middleware or platform architecture functions, a similar situation will probably exist, in which case we could follow a similar approach with SaaS (e.g. Microsoft Dynamics CRM Online) or PaaS (e.g. Microsoft Azure)…

** It’s worth mentioning the usual problem of the business architecture exercise not really producing NFRs other than maybe a couple of fuzzy ones like “it must be like really really fast” or “the site needs to look gorgeous”… this article works on the bold assumption that our business analysts have been able to get blood out of stones and have coaxed the business into expressing real, tangible NFRs alongside the rest of the requirements.

*** After all, aren’t we solution architects just application architects with a good working knowledge of the other disciplines? At least, I’ll confess that’s my case…

The reason for this post is that I am still seeing Helper, Utility and Singleton code in large enterprise-scale solutions today. It is almost as bad as coding in Hungarian notation. Now, practices like Hungarian notation did have a place and a purpose, notably when using languages such as C and early C++, where the tooling gave you little help from type safety. Things have moved on since those days, so PascalCase and camelCase are now the preferred conventions, certainly within the .NET space, with camelCase preferred in the Java space.

Microsoft has a good post on Design Guidelines for Class Library Developers.

So really, Helpers and Utilities come (in my view) from procedural languages; by definition they tend to be static types that often equate to “general stuff” that doesn’t fit anywhere else. That is very much a procedural mind-set: when you build software in OO languages such as C# or Java, everything has a single responsibility.

This discussion has been flogged for years, especially when .NET was becoming popular. But it seems all these discussions haven’t really changed people’s mind-sets. Many developers still believe that a helper is a good thing. I am not sure how these patterns are perceived in the Java space, but in .NET they are generally considered anti-OO, and there are good reasons for this which we will go into in this post. Nick Malik from Microsoft wrote a post 6 years ago about this very topic. I have to say I agree 100% with Nick’s comments. It’s actually quite scary to see the responses Nick got to that post.

I have copied a particular comment here for reference from a chap named Nate on Nick’s blog, not to pick on Nate but purely for discussion and example purposes:

This is all good and all, but if helper classes are undesirable, where do you put miscellaneous code that you would normally put in a helper class? Where does that method go that say, helps encode a URL? Or that function that converts an array to a comma delimited list? Or in C#, that wrapper for a certain Win32 API? In all practicality, I find that I often have these piddely tasks that don’t really lend themselves to encapsulation within an object and are pretty much on their own, but are called from myriad places. If I was coding in C++, I would place it outside of a class, but languages like C# and VB.NET don’t allow me to do that. Does it violate OO principles? Of course it does. But OO principles are not meant to be a straightjacket. All that said, I do agree that if they show up in a UML diagram, your helper class is probably doing more than isolating a “piddely task” and that something is probably wrong.


For starters I’d ask: what “miscellaneous” stuff? This is one of the biggest issues with Helpers, and even more so with Utilities: they start out quite small, but before you know it they become a dumping ground for all things to man and beast. And to make things worse, they tend to be static, so they are very hard to test.

In Nate’s response above, I’d suggest putting the URL encoding functions in an extension method on the String class. But then you might say two things:

  1. Extension methods were not available back in 2005 when that blog post was written
  2. Extension methods are really helper classes anyway

The answer to the above is yes and yes! But with point 2, the usage is rather different. Extension methods are helper classes in the sense that you can only write static methods. There is one major difference, though: you do not actually call the helper class directly. Instead, extension methods are an extensibility point in the .NET type system. They should normally be very short functions without dependencies, so the developer using an extension method is very clear about what the function is doing and can use those functions in his unit tests without issues.

By the way, my extension method post linked above needs updating. Extension method classes should be named <class>Extensions, i.e. for IEnumerable extensions it should be IEnumerableExtensions. That way it is very clear which classes are extension method classes and which are not.

So where should those URL encoding functions have gone back in .NET 2.0, before extension methods existed? The answer: I would create a new class called Url with no static methods and make it part of my application framework. This gives me the power to inject dependencies in the future should I need to, and makes testing very easy.

And if class Url required dependencies — say in the future we wanted to send a sample message to a given URL, which depends on a custom communication adapter — it wouldn’t have been a suitable candidate for extension methods anyway.
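To sketch that idea (the post’s context is .NET, so the Java names here are purely illustrative): an ordinary instance class keeps the door open for constructor-injected dependencies, where a static helper slams it shut.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// An instance class rather than a static Helper: its collaborators arrive
// via the constructor, so they can be swapped or mocked in tests.
public class Url {

    private final String encoding;

    // A trivial injected "dependency"; a future version could take a
    // communication adapter here instead, with no change to callers' style.
    public Url(String encoding) {
        this.encoding = encoding;
    }

    public String encode(String raw) {
        try {
            return URLEncoder.encode(raw, encoding);
        } catch (UnsupportedEncodingException e) {
            throw new IllegalArgumentException("Unknown encoding: " + encoding, e);
        }
    }
}
```

Nothing about the class forces consumers into static coupling; a test simply constructs it with whatever it needs.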

So, taking the above scenario, suppose we had a helper class that looks like the following:

public static class Helper
{
    public static string EncodeUrl(string url)
    {
        // call the encoding routine
        return HttpUtility.UrlEncode(url);
    }
}

That looks OK, so for now forget about testability. But for one thing the class name is not particularly useful. We could change it, or leave it as is; often it will get left as is. Then another developer will think, “oh, there’s a helper class for library functions, I’ll add my FoobarAppSpecificFeature method to it”. And so it goes on, until you end up with a god class.

God Classes

God classes are classes that do or know too much. They breach the Single Responsibility Principle. They are often the result of a procedural developer trying to implement an object-oriented system with a global-variable mind-set. They often do not fit in well with TDD (Test Driven Development), purely because the classes are hard to test: they rely on external dependencies which are very hard to mock out. Also, they do not model the system in any way, so maintainability comes into question; if you have god classes, you will also have god unit test classes and projects too.

Are Singletons Really bad?

I personally don’t think they are when used in conjunction with an IoC container. The reason is that a singleton, when used in conjunction with an IoC container, looks no different from a non-static instance type. This is one of the major benefits of using IoC: the life cycle is controlled by the container, not by the consumer of the class. And this configuration should live in the composition root.

There are many valid reasons you might want such a class, and the fact that you can piggyback on an IoC container gives great power and flexibility.

Singletons used with IoC do not prevent you from testing, unlike static types, which are mostly bad for the reasons already mentioned and should be avoided where possible.
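The point is easiest to see with a toy container, sketched here in Java for brevity (this is purely illustrative, not any real IoC framework): singleton lifetime is a registration decision in the composition root, not something baked into the class itself.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy IoC container: it decides lifetime; the classes it manages stay
// ordinary, non-static, and therefore fully testable.
class Container {
    private final Map<Class<?>, Object> singletons = new HashMap<>();
    private final Map<Class<?>, Supplier<?>> registrations = new HashMap<>();

    <T> void registerSingleton(Class<T> type, Supplier<T> factory) {
        registrations.put(type, factory);
    }

    @SuppressWarnings("unchecked")
    <T> T resolve(Class<T> type) {
        // Create on first request, then always hand back the same instance
        return (T) singletons.computeIfAbsent(type,
                t -> registrations.get(t).get());
    }
}

// An ordinary class - nothing in its definition says "singleton"
class Clock {
    final long started = System.nanoTime();
}
```

Every consumer resolving Clock gets the same instance, yet a unit test can still construct a fresh Clock, or register a fake one, because the class itself was never made static.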

Singletons <> Static types

The moment you define a static type, you immediately write off any use of IoC and testability. That really is the key message here.

The other thing to remember is that if you do implement a static type, via an extension method or other means, the developer using it needs to understand what is actually happening, and he should have no reason to mock out those calls in a unit test. If the consuming developer would want to mock out those calls, then reconsider the use of an extension method or static type.

So I think singletons are OK, so long as they are not static types and are made singletons as part of the container configuration they belong to.


Utility and helper classes are often interchangeable in terms of naming; a utility class is often equal to a helper god class.

I’m interested in your views on this post, so please comment.

Microsoft has recently started to embrace all things from the ALT.NET community. If you’re not an ALT.NET’er, what I mean by this is that most good things in terms of current thinking, design patterns, technical architecture etc. spawn from ALT.NET and the wider community. Microsoft tends to jump on things when they become popular, then rolls them into its development tools and products. Some of these make it into actual products like Visual Studio and .NET, some get given to the patterns & practices team to implement, and some go to other development teams within Microsoft.

Some examples of this are as follows:

  1. IoC/DI – Unity comes from Castle Windsor, StructureMap, Ninject
  2. MVC – ASP.NET MVC 3 (the latest cut) comes from Castle MonoRail (and Smalltalk!)
  3. ORM – Entity Framework comes from NHibernate, among others
  4. and so on

The patterns & practices team’s job is to lay down guidance and best practices for Microsoft technology, and to provide application blocks and frameworks that work with Microsoft’s toolset. The p&p team are best known for the Enterprise Library. I find most of what comes out of the p&p team somewhat over-complex and configuration heavy, although some things, like Prism for WPF, are very good.

Note what I said above: “configuration heavy”. The industry has been moving away from angle brackets (XML) for some time. XML is often massively overused; it has quite a heavy payload, often not that suitable for embedded mobile devices, and it is quite nasty to work with, both to parse and to read.

Many platforms, tools and frameworks have in recent years been adopting “Convention over Configuration”, CoC for short. The idea is that the framework assumes a certain way of working unless told otherwise, namely through configuration; whether that configuration is XML or something else is not important.

Microsoft started to introduce CoC (at least, it’s the earliest example I can think of) with ASP.NET MVC. Since then other products have been showing signs of CoC, such as Prism for WPF and Silverlight, and WCF 4.0. Which kind of brings me to the reason for this post.

WCF in the past has required a lot of configuration even just to get a service up and running with a basic binding. The fact that I label my service contract with an attribute (ServiceContract) should be enough! At last Microsoft has recognized that you shouldn’t have to do this.

Take the following configuration which is what you had to do prior to WCF 4:
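It looked something along these lines; an illustrative sketch in which the service name, contract and configSource file names are placeholders:

```xml
<system.serviceModel>
  <services>
    <service name="MyApp.CustomerService"
             behaviorConfiguration="defaultBehaviour">
      <!-- Every service needed its endpoints spelled out by hand -->
      <endpoint address=""
                binding="basicHttpBinding"
                contract="MyApp.ICustomerService" />
      <endpoint address="mex"
                binding="mexHttpBinding"
                contract="IMetadataExchange" />
    </service>
  </services>
  <!-- Sections split into separate files via configSource -->
  <behaviors configSource="behaviours.config" />
  <bindings configSource="bindings.config" />
</system.serviceModel>
```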

My configuration above is slightly simpler than it would normally be because I am using configSource, i.e. separating out the WCF sections for easier maintainability.

But what if I wanted to knock up a really simple service that looks like the following:

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    RetrieveCustomersResponse RetrieveCustomers(RetrieveCustomersRequest r);
}

Given that I have decorated my service with the needed attributes, should I have to do any more? Why do I need to wire up a basic HTTP binding to my service in config, code or XML? In WCF 4 you don’t!

Now consider the amended config below:
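Something like this (illustrative):

```xml
<system.serviceModel>
  <!-- Intentionally empty: WCF 4 supplies default endpoints,
       bindings and behaviours for us. -->
</system.serviceModel>
```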

Notice that in the system.serviceModel section we no longer have any configuration. Will this service still work? Let’s try it by navigating to the endpoint using a web browser:

How is this possible? Well, WCF now has a default configuration comprising default bindings and behaviours. WCF now exposes four protocols by default: basic HTTP, net TCP, named pipes and MSMQ; the default mapping of schemes to bindings is known as protocol mapping in WCF 4.0. You can easily change any of these things (behaviours, protocols, bindings etc.) globally or per service in the normal way, with one difference which I’ll show in a second. But first, notice that the service above no longer exposes a WSDL document, because we no longer have any behaviours enabled and the defaults lock down the metadata exchange (MEX) endpoint.

We can fix that globally with the following small bit of config; I have also left out the log4net config here for clarity:

The behaviours config file looks like so:
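Along these lines (an illustrative sketch):

```xml
<behaviors>
  <serviceBehaviors>
    <!-- No name attribute, so this behaviour applies to every service -->
    <behavior>
      <serviceMetadata httpGetEnabled="true" />
      <serviceDebug includeExceptionDetailInFaults="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```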

Notice how we have enabled the inclusion of exception details for debugging. You would normally only do this in development, never in production. Notice the difference between this and a normal WCF behaviour? This one has no name, which means it is applied to all services. That is really powerful: you can apply blanket configuration across your enterprise without having to repeat it for every service!

Now if we ask for the WSDL, we get the following back from the WCF 4 service:

So here the convention is the pre-configured protocols and bindings; we still have the ability to change that convention if we want to, using normal config.

In the next part of this post, I’ll show examples of how to change the protocol mapping, i.e. changing the default basicHttpBinding to wsHttpBinding.
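As a taster, that change hangs off the protocolMapping section; a sketch:

```xml
<system.serviceModel>
  <protocolMapping>
    <!-- Map the http scheme to wsHttpBinding instead of the
         default basicHttpBinding -->
    <add scheme="http" binding="wsHttpBinding" />
  </protocolMapping>
</system.serviceModel>
```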

I have been using the Service Locator pattern for years, and I think it has a place in some circumstances. I’ll explain what I mean by that in a minute, but first some scene setting…

If you’re not familiar with the Service Locator then the following shows a simple class diagram that depicts its use:

The following description of the Service Locator pattern is taken from Wikipedia:

The service locator pattern is a design pattern used in software development to encapsulate the processes involved in obtaining a service with a strong abstraction layer. This pattern uses a central registry known as the “service locator” which on request returns the information necessary to perform a certain task.

Often the Service Locator is used in conjunction with other patterns such as IoC and DI. (Self-promotion here: I’ve put together an IoC/DI container with support for event aggregation for Silverlight on Windows Phone.)

If you are a Microsoft shop, then the Common Service Locator is a very popular implementation of this pattern. You can download the code here (by the patterns & practices team at Microsoft):

In fact the above website contains some providers for popular Inversion of Control containers such as StructureMap, Unity, Castle Windsor and so on.

Anyway, back to the point of this post: when should you implement a Service Locator, and is it an anti-pattern? According to this post and many others, it is. My view is that it is an anti-pattern only when used in the wrong way. The example given in the post above is an anti-pattern because the benefits of IoC are not realised, through a lack of understanding of IoC; I see this a lot in the enterprise. In fact you can use a vanilla IoC container as a service locator (kind of); it’s more of a factory than a service locator, but bear with me.

Here is an example of a good use of IoC and DI – to which many of you reading this will already be doing today and will continue to do in the future:

public class Foobar : IFoobar
{
    private readonly IFoobarAdapter _adapter;
    private readonly IFoobarAdapter2 _adapter2;

    public Foobar(IFoobarAdapter adapter, IFoobarAdapter2 adapter2)
    {
        _adapter = adapter;
        _adapter2 = adapter2;
    }

    public void DoStuffWithAdapters()
    {
        // use _adapter and _adapter2 here
    }
}

Now the above code is nothing new; we are using IoC and DI to abstract dependencies, which enables us to write code that is easily testable and maintainable. There is no requirement for Service Locator here. The code also allows any human, or any software tool that generates metrics, to see how coupled or loosely coupled our software is. From the above, NDepend (my tool of choice for metrics) can easily figure out the dependencies between the Foobar class and the adapter interfaces.

Now, what if we introduced the Service Locator pattern into the mix here? Consider the following changes to the Foobar class, which now uses Microsoft’s IServiceLocator:

public class Foobar : IFoobar
{
    private readonly IFoobarAdapter _adapter;
    private readonly IFoobarAdapter2 _adapter2;
    private readonly IServiceLocator _serviceLocator;

    public Foobar(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
        _adapter = serviceLocator.GetInstance<IFoobarAdapter>();
        _adapter2 = serviceLocator.GetInstance<IFoobarAdapter2>();
    }

    public void DoStuffWithAdapters()
    {
        // use _adapter and _adapter2 here
    }
}

So what’s happening here, and why is the new code not so good? And more importantly, why would you use Service Locator at all, as it seems to be complicating things somewhat…

Firstly, the code is not so good for the following reasons:

  1. In order to test this class, I’m going to have to new up an IServiceLocator implementation and ensure it responds to GetInstance for the types IFoobarAdapter and IFoobarAdapter2. I know I can use mocking tools like Rhino Mocks here, but I’m just giving myself more work, and I’m not getting anything from the change.
  2. It gives any human, and any tool such as NDepend, an artificial view of the actual coupling between the layers.

By the way, it’s worth pointing out here that I can’t remember whether Microsoft’s implementation of IServiceLocator includes generics. I think I might have added the generic GetInstance to the interface in order to make my implementation cleaner and easier to code.

The above could be worse still… consider the following additional “hack”:

public class Foobar : IFoobar
{
    // Note: these two can no longer be readonly, as they are
    // assigned lazily inside the methods below.
    private IFoobarAdapter _adapter;
    private IFoobarAdapter2 _adapter2;
    private readonly IServiceLocator _serviceLocator;

    public Foobar(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
    }

    public void DoStuffWithAdapters()
    {
        _adapter = _serviceLocator.GetInstance<IFoobarAdapter>();
    }

    public void DoStuffWithAdapters2()
    {
        _adapter2 = _serviceLocator.GetInstance<IFoobarAdapter2>();
    }
}

So you *might* be thinking, well, what’s wrong with that? Shame on you if you are! Foobar here is quite light in that there aren’t many lines of code, but when it grows it will be much harder to figure out Foobar’s dependencies, from both a tool’s and a human’s perspective.

So this is all leading to the conclusion that Service Locator is an anti-pattern, right? Wrong! Take the following, slightly different scenario.

I am using the command pattern and have a command that looks like so:

public class EmailCustomerConfirmationCommand
    : ICommand<EmailCustomerConfirmationContext>
{
    private readonly IEmailAdapter _emailAdapter;

    public EmailCustomerConfirmationCommand(IEmailAdapter emailAdapter)
    {
        // inject dependencies here
        _emailAdapter = emailAdapter;
    }

    public void Execute(EmailCustomerConfirmationContext context)
    {
        // use _emailAdapter to send the confirmation email
    }
}

So with the above I have a nice implementation of the command pattern (which I’ll talk about in a later post) that takes a context as a generic parameter. We register the command against its context type so it can be picked off the container easily. The command gets registered like so:
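As an illustrative sketch (the registration API here is hypothetical; the exact syntax depends on the container you use):

```csharp
// Hypothetical registration API: map the command to its context type so
// the controller can later resolve ICommand<EmailCustomerConfirmationContext>.
container.Register<ICommand<EmailCustomerConfirmationContext>,
                   EmailCustomerConfirmationCommand>();
```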


This of course will look different depending on your container of choice. Here I am using the Compact Container.

So my requirement now is to wire up the calling of the above command in my Foobar class. This is easy: I could just inject ICommand via the constructor, like I do with all the other types, right? Well, I could, but this will not scale very well, as I might potentially have tens or hundreds of commands. Imagine testing that. Foobar could be a presenter in an MVP application, a controller in an MVC application, or some sort of domain event handler. Instead, I’d like to delegate the execution of commands to another class outside Foobar, honouring the SRP (single responsibility principle). So with that, what I want to do is something like this:

public class Foobar : IFoobar
{
    private readonly IFoobarAdapter _adapter;
    private readonly IFoobarAdapter2 _adapter2;
    private readonly IController _controller;

    public Foobar(IFoobarAdapter adapter,
        IFoobarAdapter2 adapter2,
        IController controller)
    {
        _adapter = adapter;
        _adapter2 = adapter2;
        _controller = controller;
    }

    public void DoStuffWithAdapters()
    {
    }

    public void DoStuffWithAdapters2()
    {
    }

    public void SendNotificationEmailToCustomer(Customer customer)
    {
        _controller.Execute(new EmailCustomerConfirmationContext(customer));
    }
}

I have now added a dependency named IController that handles the actual execution of commands. This class could look like the following:

public class Controller : IController
{
    private readonly IServiceLocator _serviceLocator;

    public Controller(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
    }

    public void Execute<TContext>(TContext context)
    {
        // Resolve the command registered against this context type.
        var command = _serviceLocator.GetInstance<ICommand<TContext>>();

        if (command != null)
        {
            command.Execute(context);
        }

        // Clean up commands that hold disposable resources.
        var disposable = command as IDisposable;

        if (disposable != null)
        {
            disposable.Dispose();
        }
    }
}
So here the Controller class is nice and clean in that we are not injecting commands via the constructor or any other means. Instead we are asking Service Locator for a given command based on the context passed to it. The power of this is that the Controller class in this example also doesn’t know the underlying container it is working with.

So hopefully you can see the power the Service Locator gives us here. We have made our Foobar class very clean, with few dependencies, which makes it easy to test, and we have separated out the responsibility for the actual execution of commands – much like Microsoft’s WPF framework does.

Microsoft is starting to embrace these patterns. The IDependencyResolver is Microsoft’s version of Service Locator in MVC 3. I will write a blog on that soon.

So to conclude, the Service Locator pattern is *not* an anti-pattern so long as you have a good reason to use it!

In the meantime, happy coding!

I’ve been interested in CQRS – Command Query Responsibility Segregation – for some time now, and I’ve been very fond of the simple command pattern for a long while. The command pattern is baked into newer technologies such as WPF (Microsoft’s Windows Presentation Foundation). I wrote a post a while back on how to implement a nice clean command pattern here:

My post assumes you are familiar with patterns such as dependency injection and IoC containers, as these are additional concepts that promote good use of the command pattern and, in turn, CQRS.

If you’re aware of the command pattern, or have read my post linked above, you’ll see how nice it is in that it promotes the Single Responsibility Principle: an object should have only one reason to change. In other words, do not over-complicate your code. This not only makes your code easier to read, it makes it easier to test and maintain, and easier to apply rule sets such as FxCop to during build processes.

CQRS is an architectural design pattern that extends the command pattern. Udi Dahan has a great post on what CQRS is designed to solve here:

With the traditional command pattern, everything gets executed in the same way, so your queries and updates might execute within one command. You might call a repository layer that gets some data from a data store (perhaps via an ORM), then create a unit of work, make changes, then write them back to the data store. Although this is testable, as the command isn’t doing too much, it can be better to separate the two. Why would you do that? Well, it gives you the opportunity to have a cache store for queries. When a create, update or delete command executes, it also updates a separate cache, perhaps by serializing the data as JSON to a distributed cache (something like Windows Server AppFabric caching, codenamed Velocity) for quick retrieval by queries later. When a query comes in, instead of going through all those layers to get data from a database, it first looks in the distributed cache; if the data is found, it is deserialized into the domain object or data transfer object and returned to the caller. This is lightning fast.

Contrast what I explained above with a typical system: a retrieval command receives a request in the form of a context, possibly calls a repository layer, which then might call into an ORM that serves up a domain object of the thing you’re looking for. That is a lot of layers just for displaying data, and it is exactly what CQRS can simplify.

You should be able to see from this that CQRS is designed to simplify the whole process of writes vs reads. As I said, the reads could be served by some sort of distributed caching technology like Velocity or memcached.
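The read side described above can be sketched roughly as follows; every type here (IDistributedCache, ICustomerRepository, CustomerDto) is illustrative rather than a real API:

```csharp
// Sketch of a CQRS read side with a cache-first lookup.
public class CustomerQueryHandler
{
    private readonly IDistributedCache _cache;        // e.g. Velocity/memcached
    private readonly ICustomerRepository _repository; // the slow path

    public CustomerQueryHandler(IDistributedCache cache,
        ICustomerRepository repository)
    {
        _cache = cache;
        _repository = repository;
    }

    public CustomerDto GetCustomer(int id)
    {
        // The write side keeps the cache current whenever a create, update
        // or delete command executes, so most reads never touch the database.
        var cached = _cache.Get<CustomerDto>("customer:" + id);
        if (cached != null)
            return cached;

        // Cache miss: fall back to the data store and prime the cache.
        var dto = _repository.GetCustomerDto(id);
        _cache.Put("customer:" + id, dto);
        return dto;
    }
}
```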

I attended the CWDNUG (Canary Wharf Dot Net User Group) meeting on Wednesday 1st June (last night); although the execution of the presentation could have been better, the content was OK. The presentation talked about Event Sourcing too, which I think deserves another post, as it can be used with CQRS in certain scenarios.

For me, using CQRS to separate queries from commands (writes: creation, updates and deletes) while keeping the data store as an RDBMS in normal form is a good place to be. Event Sourcing, which I will talk about later, is a good fit for greenfield projects.




