UPDATE 15/01/2012: I have written a related post here: Why I prefer BitBucket over GitHub

Distributed Version Control Systems (DVCS) have been taking off for quite some time and are becoming very popular, especially in communities with large numbers of contributors. More recently, large enterprises have been adopting DVCS over centralised source control systems like Subversion, CVS and TFS – which doesn't necessarily mean they are making the right decision. More on that in another post, but for now I thought I'd talk about bitbucket.org.

Bitbucket.org is a free source code repository portal for non-commercial use with a maximum of five contributors; it supports more at a cost. Bitbucket supports both Mercurial and Git repositories.

If you have never heard of DVCS, I suggest you start learning about it by reading this.

If you've heard of GitHub (if you haven't, you have probably been living under a rock for the last year), then Bitbucket is similar, except that it supports Mercurial repositories as well as Git.

GitHub and Bitbucket alike are the new SourceForge, Google code and CodePlex of today.

I like Bitbucket because, as well as supporting both Mercurial and Git, it also supports custom domains. All you have to do is create a CNAME record in your domain's DNS pointing to bitbucket.org; when that domain is hit, Bitbucket redirects to your repositories. For example, I have set mine to http://code.simonrhart.com

The configuration in Bitbucket:

I think that is a killer feature. I will see if I can get http://code.smart421.com configured, or perhaps http://git.smart421.com

I have created a single repository called MVC 3 Examples. If you are a Git guru, simply clone that repository at:

 $ git clone https://bitbucket.org/simonrhart/mvc-3-examples.git 

If you are a Microsoft person with Visual Studio, simply download the Git tools here: http://code.google.com/p/msysgit/

Once you have done that, run the above Git command and you will have access to the entire repository. The repository is a work in progress. I plan to add more stuff to it as well as many other repositories mainly for learning. Right now, there is an example on how to write and test Action Filter providers for Inversion of Control container StructureMap in MVC 3.
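To give a flavour of that example, an action filter provider backed by StructureMap in MVC 3 can be as small as the sketch below, which lets the container push dependencies into filter attributes as MVC discovers them. This is only a rough sketch of the idea, not the exact code in the repository:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using StructureMap;

// Rough sketch: let StructureMap perform setter injection on action filter
// attributes as MVC resolves them (not the exact code in the repository).
public class StructureMapFilterAttributeFilterProvider : FilterAttributeFilterProvider
{
    private readonly IContainer _container;

    public StructureMapFilterAttributeFilterProvider(IContainer container)
    {
        _container = container;
    }

    public override IEnumerable<Filter> GetFilters(
        ControllerContext controllerContext, ActionDescriptor actionDescriptor)
    {
        var filters = base.GetFilters(controllerContext, actionDescriptor).ToList();

        // BuildUp pushes dependencies into the already-created attribute instances.
        foreach (var filter in filters)
        {
            _container.BuildUp(filter.Instance);
        }

        return filters;
    }
}
```

You would typically swap this in at application start by removing the default FilterAttributeFilterProvider from FilterProviders.Providers and adding this one in its place.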

One thing I will say about Git is that it has a very steep learning curve. It's one of those tools/technologies you have to keep using in order to understand: keep at it and it will click. I actually quite like Git now, although there was a time when I hated it. I still don't think it compares to TFS, however, although there are a couple of features in Git that I'd like to see in TFS. One is the branch graph: I'd like to see which changeset I branched from, like I can with Git – that is a killer feature.

Once you have mastered Git, install Git Extensions downloadable from Google code as it happens! Git Extensions is a Visual Studio plugin as well as a standalone application that attempts to make your Git experience better which I believe it does.

If you are new to any DVCS, learn the raw commands first, then use the extensions.

More on Git, Mercurial in future posts…as well as TeamCity -> BitBucket. Watch this space…

I installed one of the coolest applications I have seen in a long time to do with source control systems: Gource. Gource is a source control visualization program, written using OpenGL, that shows a graphical representation of a source code repository over time as it is developed. It builds an animated view of the repository from the source control logs, showing developers coming and going as they add and change source files. It is incredibly smooth and actually very relaxing to watch!

Gource works with source control systems: Git, Mercurial, SVN, Bazaar and CVS.

Gource is simple to use: you just download it and run it, passing parameters such as the branch you want to animate along with a whole bunch of other options.

Here is a recording of the Git source control system itself, from its inception. You can actually see Linus Torvalds' Gravatar in the animation as he works on Git in the early days (yes, Gource supports Gravatars):

Git:

JBoss EJB 3:

Remember the Amiga Operating System? well here it is from 1985!

See more Gource videos here: http://code.google.com/p/gource/wiki/Videos

Microsoft announce Windows 8 at //Build

You may have heard recently that Microsoft announced an early release of Windows 8 at the //Build conference in California - although Microsoft are calling this release Windows Developer Preview (otherwise known as a CTP).

You might have downloaded this build and attempted an install on release day last week. If you were lucky, the install will have succeeded. If you were unlucky, like I was and tried to use virtualization for the install, you might have received an error similar to:

DRIVER_IRQL_NOT_LESS_OR_EQUAL

I have written a blog post on this error on my other blog that you might find to help you resolve this problem specifically for the Windows 8 pre-beta build here: http://www.simonrhart.com/2011/09/windows-8-pre-beta-driverirqlnotlessore.html

NOTE: This is *only* for the pre-release of Windows 8, the error above is a very generic Microsoft Operating System error.

In MVC 3, a good way to validate your models with custom code is to derive from ValidationAttribute, override ValidationResult IsValid(object value, ValidationContext context), and then decorate the relevant properties of your view model with that attribute. Doing this gives you great power and keeps the controller simple by moving validation out of it. All too often I see controllers with too many responsibilities; validation shouldn't be one of them.

ValidationAttribute was enhanced in .NET 4 and lives in the System.ComponentModel.DataAnnotations namespace.

So for example, if I had a model called CustomerViewModel that looked like this:

public class CustomerViewModel
{
    public string Name {get; set;}    
    public int CreditLimit {get; set;}
}

And say I want to validate CreditLimit based on the customer's name: if the Name field is "Simon Hart" then the credit limit should be £100,000, otherwise it should be £1,000.

There is no magic silver bullet in MVC that will validate this model against those rules without some custom code, and that logic normally ends up in the consuming command or controller. Instead, I can move that responsibility out of the controller and into a validation attribute that MVC will call for me, by writing a custom attribute that looks something like the following:

public class CreditLimitValidatorAttribute : ValidationAttribute
{
    protected override ValidationResult IsValid(object value, ValidationContext context)
    {
        var model = context.ObjectInstance as CustomerViewModel;

        // Validate that we are dealing with the correct model.
        if (model == null)
        {
            throw new InvalidOperationException(string.Format(
                CultureInfo.InvariantCulture,
                "Context of type {0} is not supported. Expected type CustomerViewModel",
                context.ObjectInstance.GetType().Name));
        }

        if (model.Name == "Simon Hart" && model.CreditLimit > 100000)
        {
            return new ValidationResult("CreditLimit is £100,000 for you Simon");
        }

        if (model.CreditLimit > 1000)
        {
            return new ValidationResult("Unless you are Simon, you have a credit limit of £1,000");
        }

        return ValidationResult.Success;
    }
}

So that is pretty straightforward. Notice that the ValidationContext gives us access to the full object graph, so we can inspect other parts of the model if we want to. The logic in the validator is: unless you are me, you get a £1,000 credit limit as a customer, whereas I get £100,000! If these rules are violated, an error is returned to the user. Wiring up the custom attribute is as simple as this:

public class CustomerViewModel
{
    public string Name {get; set;}
    
    [CreditLimitValidator]
    public int CreditLimit {get; set;}
}

Then in the controller action, you simply need to check that the model is valid, like so:

public class CustomerController : Controller
{
    [HttpGet]
    public ActionResult ValidateCustomer(CustomerViewModel model)
    {
        if (ModelState.IsValid)
        {
            //Auth...
        }

        // not valid...
        return View();
    }
}

This is nice and clean. It leaves the controller to do what it was designed to do: worry about navigation, not validation or business logic.

So the next question is, "how do I test that attribute?". This is a very valid question, since the IsValid method we implemented is protected:

public class CreditLimitValidatorAttribute : ValidationAttribute
{
    protected override ValidationResult 
        IsValid(object value, ValidationContext context)
    {
    }
}

You could create a derived test attribute that exposes another method accepting the parameters you are interested in and calls the protected method from there. Thankfully there is an easier way: just call Validate on the ValidationAttribute, passing a context. That calls IsValid for you.

So our unit test for the above could look like the following:

[TestFixture]
public class CreditLimitValidatorAttributeTests
{
    [Test]
    public void ShouldErrorOnInvalidCreditLimitForSimonHart()
    {
        // Arrange
        var model = new CustomerViewModel();
        model.Name = "Simon Hart";
        model.CreditLimit = 999999999;

        var testContext = new ValidationContext(model, null, null);
        var attr = new CreditLimitValidatorAttribute();

        // Act
        var validationException =
            ExtendedAssert.Throws<ValidationException>(() =>
                attr.Validate(model.CreditLimit, testContext))
                as ValidationException;

        // Assert
        Assert.IsTrue(validationException.Message == "CreditLimit is £100,000 for you Simon");
    }
}

Notice the attribute throws an exception. Of course this is caught by MVC at runtime and displayed to the user. As in the above test, we can even check that the validation message coming back from the attribute is correct.

The ExtendedAssert class I am using above is just a custom class I use for testing; it simply catches expected exceptions so I do not have to litter my tests with try/catch blocks.
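For completeness, a minimal sketch of what a helper like ExtendedAssert might look like (this is an assumption about its shape, not the actual class):

```csharp
using System;
using NUnit.Framework;

// Minimal sketch of a helper like the ExtendedAssert used above (assumed shape).
public static class ExtendedAssert
{
    // Runs the action and returns the expected exception, failing the test if
    // no exception (or the wrong type) is thrown.
    public static Exception Throws<TException>(Action action) where TException : Exception
    {
        try
        {
            action();
        }
        catch (TException ex)
        {
            return ex; // the expected exception type: hand it back for further assertions
        }
        catch (Exception ex)
        {
            Assert.Fail("Expected {0} but got {1}: {2}",
                typeof(TException).Name, ex.GetType().Name, ex.Message);
        }

        Assert.Fail("Expected {0} but no exception was thrown", typeof(TException).Name);
        return null; // never reached; Assert.Fail throws
    }
}
```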

The reason for this post is that I am still seeing Helper, Utility and Singleton code in large enterprise-scale solutions today. It is almost as bad as coding in Hungarian notation. That is not to say practices like Hungarian notation never had a place and a purpose – they did, notably in languages such as C and C++ where the tooling gave you little help with types. Things have moved on since those days, and PascalCase and camelCase are now the preferred conventions, certainly within the .NET space, with camelCase preferred in the Java space.

Microsoft has a good post on Design Guidelines for Class Library Developers.

So really, Helpers and Utilities come (in my view) from procedural languages. By definition they tend to be static types holding "general stuff" that doesn't fit anywhere else, which is very much a procedural mind-set; when you build software in OO languages such as C# or Java, everything should have a single responsibility.

This discussion has been flogged for years, especially when .NET was becoming popular, but it seems all these discussions haven't really changed people's mind-sets. Many developers still believe that a helper is a good thing. I am not sure how these patterns are perceived in the Java space, but in .NET they are generally considered anti-OO, and there are good reasons for this which we will go into in this post. Nick Malik from Microsoft wrote this post 6 years ago about this very topic. I have to say I agree 100% with Nick's comments. It's actually quite scary to read the responses Nick got to that post.

I have copied a particular comment here for reference from a chap named Nate on Nick's blog – not to pick on Nate, but purely for discussion and example purposes:

This is all good and all, but if helper classes are undesirable, where do you put miscellaneous code that you would normally put in a helper class? Where does that method go that say, helps encode a URL? Or that function that converts an array to a comma delimited list? Or in C#, that wrapper for a certain Win32 API? In all practicality, I find that I often have these piddely tasks that don’t really lend themselves to encapsulation within an object and are pretty much on their own, but are called from myriad places. If I was coding in C++, I would place it outside of a class, but languages like C# and VB.NET don’t allow me to do that. Does it violate OO principles? Of course it does. But OO principles are not meant to be a straightjacket. All that said, I do agree that if they show up in a UML diagram, your helper class is probably doing more than isolating a “piddely task” and that something is probably wrong.

Helpers

For starters I'd like to ask: what "miscellaneous" stuff? This is one of the biggest issues with Helpers, and even more so with Utilities: they start out quite small, but before you know it they become a dumping ground for all things to man and beast. And to make things worse, they tend to be static, so they are very hard to test.

In response to Nate's comment above, I'd suggest putting the URL encoding functions in an extension method on the String class. But then you might say two things:

  1. Extension methods were not available back in 2005 when that blog post was written
  2. Extension methods are really helper classes anyway

The answer to both is yes! On point 2, though, the usage is rather different. Extension methods are like helper classes in that you can only write static methods, but there is one major difference: you do not actually call the helper class directly. Extension methods are an extensibility point in the .NET type system. They should usually be very short functions with no dependencies, so the developer using an extension method is clear about what it does and can call it in unit tests without issues.

By the way, my extension method post linked above needs updating: extension method classes should be named <class>Extensions, i.e. extensions for IEnumerable should live in IEnumerableExtensions. That way it is very clear which classes are extension method classes and which are not.
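So for Nate's URL-encoding example, something along these lines would do (the names here are mine, purely for illustration):

```csharp
using System;

// Sketch: URL encoding as an extension method on string rather than a Helper class.
public static class StringExtensions
{
    public static string UrlEncode(this string value)
    {
        if (value == null) throw new ArgumentNullException("value");

        // Uri.EscapeDataString performs percent-encoding of the value.
        return Uri.EscapeDataString(value);
    }
}

// Usage: var encoded = "simon hart & co".UrlEncode();
```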

So where should those URL encoding functions have gone back in .NET 2.0, before extension methods existed? I would introduce a new class called Url with no static methods and make it part of my application framework. That gives me the power to inject dependencies in the future should I need to, and makes testing very easy.

And if the Url class did require dependencies – say, in the future we wanted to send a sample message to a given URL via a custom communication adapter – it wouldn't be a suitable candidate for extension methods anyway.
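A sketch of that kind of class might look like the following; the ICommunicationAdapter dependency is hypothetical, just to show where injection would come in:

```csharp
using System;

// Hypothetical dependency the Url class might grow in future.
public interface ICommunicationAdapter
{
    void Send(string url, string message);
}

// Sketch of a small, injectable Url class instead of a static Helper.
public class Url
{
    private readonly ICommunicationAdapter _adapter;

    public Url(ICommunicationAdapter adapter)
    {
        _adapter = adapter;
    }

    public string Encode(string value)
    {
        return Uri.EscapeDataString(value);
    }

    public void SendSample(string url)
    {
        // The injected adapter can be faked in tests.
        _adapter.Send(url, "ping");
    }
}
```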

Now take the above scenario and suppose we had a helper class that looks like the following:

public static class Helper 
{ 
    public static string EncodeUrl(string url) 
    { 
        // call encoding 
        return Uri.EscapeDataString(url); 
    } 
}

That looks OK – forget about testability for now. But for one thing the class name is not particularly useful. We could change it, or leave it as is; often it gets left as is. Then another developer thinks, "oh, there's a helper class for library functions, I'll add my FoobarAppSpecificFeature method to it". And so it goes on, until you end up with a god class.

God Classes

God classes are classes that do or know too much. They breach the Single Responsibility Principle. They are often the result of a procedural developer trying to implement an object-oriented system with a global-variable mind-set. They do not fit well with TDD (Test Driven Development) because they are hard to test: they rely on external dependencies that are very hard to mock out. They also do not model the system in any way, so maintainability comes into question – if you have god classes, you will have god unit test classes and projects too.

Are Singletons Really bad?

I personally don't think they are when used in conjunction with an IoC container, because a singleton managed by the container looks no different to its consumers from any other instance type. This is one of the major benefits of IoC: the life cycle is controlled by the container, not by the consumer of the class. And this configuration should live in the composition root.

There are many valid reasons you would want such a class and the fact you can piggy back on an IoC gives great power and flexibility.
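As a sketch of what that looks like in a composition root (the types here are hypothetical and the syntax is StructureMap's, although Unity, Windsor and the rest have equivalents):

```csharp
using StructureMap;

// Hypothetical types used purely for illustration.
public interface ISettingsProvider { string Get(string key); }

public class SettingsProvider : ISettingsProvider
{
    public string Get(string key) { return key; } // stub implementation
}

public static class Bootstrapper
{
    public static IContainer Build()
    {
        return new Container(x =>
        {
            // The singleton decision lives here, in the composition root. The class
            // itself is a plain instance type, so consumers stay testable with fakes.
            x.For<ISettingsProvider>().Singleton().Use<SettingsProvider>();
        });
    }
}
```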

Singletons used with IoC do not prevent you from testing – unlike static types, which are mostly bad for the reasons already mentioned and should be avoided where possible.

Singletons <> Static types

The moment you define a static type, you immediately write off any use of IoC and testability. That really is the key message here.

The other thing to remember is that if you do implement a static type, via an extension method or other means, the consuming developer needs to understand what is actually happening and should have no interest in mocking those calls out in a unit test. If they would want to mock those calls out, reconsider the use of an extension method or static type.

So I think singletons are OK, so long as they are not static types and are made singletons as part of the configuration of the container they belong to.

Utilities

The names Utility and Helper are often interchangeable; a Utility class is frequently just a helper god class.

I’m interested in your views on this post, so please comment.

Microsoft has recently started to embrace ideas from the ALT.NET community. If you're not an ALT.NET'er, what I mean by this is that much of the current thinking around design patterns and technical architecture spawns from ALT.NET and the wider community. Microsoft tends to jump on things once they become popular and roll them into its development tools and products. Some make it into the actual products like Visual Studio and .NET, some are given to the patterns & practices team to implement, and some go to other development teams within Microsoft.

Some examples of this are as follows:

  1. IoC/DI – Unity comes from Windsor, StructureMap, Ninject
  2. MVC – ASP.NET MVC 3 (latest cut) comes from Castle MonoRail (and Smalltalk!)
  3. ORM – Entity Framework comes from NHibernate – among others
  4. etc

The patterns & practices team's job is to lay down guidance and best practice for Microsoft technology and to provide application blocks and frameworks that work with Microsoft's toolset. The p&p team is best known for the Enterprise Library. I find most of what comes out of the p&p team somewhat over-complex and configuration heavy, although some things, like Prism for WPF, are very good.

Note what I said above: "configuration heavy". The industry has been moving away from angle brackets (XML) for some time. XML is often massively overused; it has quite a heavy payload that is often unsuitable for embedded and mobile devices, and it is quite nasty to work with, both to parse and to read.

Many platforms, tools and frameworks have in recent years been adopting "Convention over Configuration", or CoC for short. The idea is that the framework assumes a conventional way of working unless told otherwise via configuration – whether that configuration is XML or something else is not important.

Microsoft first introduced CoC (at least, it is the first example I can think of) with ASP.NET MVC. Since then other products have been showing signs of CoC, such as Prism for WPF and Silverlight, and WCF 4.0 – which brings me to the reason for this post.

WCF in the past has required a lot of configuration even just to get a service up and running with a basic binding. The fact that I label my service contract with an attribute (ServiceContract) should be enough! At last Microsoft has recognized that you shouldn’t have to do this.

Take the following configuration which is what you had to do prior to WCF 4:

My configuration above is slightly simpler than it would normally be because I am using configSource, i.e. separating the WCF sections into their own files for easier maintainability.

But what if I wanted to knock up a really simple service that looks like the following:

[ServiceContract]
public interface ICustomerService
{
   [OperationContract]
   RetrieveCustomersResponse RetrieveCustomers(RetrieveCustomersRequest r);
}

Given that I have decorated my service with the necessary attributes, should I have to do any more? Why do I need to wire up a basic HTTP binding to my service in code or XML? In WCF 4 you don't!
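To make that concrete, here is roughly the kind of wiring you had to spell out yourself before WCF 4, expressed in code rather than config. This is only a sketch: the self-hosting setup, addresses and the CustomerService implementation class are my own assumptions, not part of the original example.

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

// Sketch of pre-WCF 4 hosting: the endpoint, binding and metadata behaviour all
// have to be spelled out explicitly (whether here in code or in XML config).
// CustomerService is an assumed implementation of ICustomerService.
class PreWcf4Hosting
{
    static void Main()
    {
        var baseAddress = new Uri("http://localhost:8080/CustomerService");

        using (var host = new ServiceHost(typeof(CustomerService), baseAddress))
        {
            // Explicit endpoint with an explicit binding.
            host.AddServiceEndpoint(typeof(ICustomerService), new BasicHttpBinding(), string.Empty);

            // Explicit metadata behaviour so the service exposes WSDL.
            host.Description.Behaviors.Add(new ServiceMetadataBehavior { HttpGetEnabled = true });

            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```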

Now consider the amended config below:

Notice that in the system.serviceModel sections we no longer have any configuration. Will this service still work? Let's try it by navigating to the endpoint using a web browser:

How is this possible? Well, WCF now has a default configuration comprising default bindings and behaviours. WCF exposes four protocols by default: basic HTTP, net.tcp, named pipes and MSMQ – this is known as protocol bridging in WCF 4.0. You can easily change any of these things, i.e. behaviours, protocols, bindings etc., globally or per service in the normal way, with one difference which I'll show in a second. But first, notice that the service above no longer exposes a WSDL document, because we no longer have any behaviours enabled and the defaults lock down the metadata exchange (MEX) endpoint.
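In code terms, the equivalent of that empty config is a host with no endpoints added at all; WCF 4 adds default endpoints for each base address when the host opens. Again this is just a sketch, with assumed addresses and an assumed CustomerService implementation:

```csharp
using System;
using System.ServiceModel;

// Sketch: with no endpoints configured anywhere, WCF 4 adds default endpoints
// when the host opens, using the default protocol mapping
// (http -> basicHttpBinding, net.tcp -> netTcpBinding, and so on).
class Wcf4DefaultEndpoints
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(CustomerService),
            new Uri("http://localhost:8080/CustomerService"),
            new Uri("net.tcp://localhost:8090/CustomerService")))
        {
            host.Open(); // default endpoints get created here

            foreach (var endpoint in host.Description.Endpoints)
            {
                Console.WriteLine("{0} -> {1}", endpoint.Address, endpoint.Binding.Name);
            }

            // Note: no metadata behaviour has been added, so no WSDL is exposed yet.
            Console.ReadLine();
        }
    }
}
```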

We can soon fix that globally with the following small bit of config. I have also removed the log4net config here for clarity:

The behaviours config file looks like so:

Notice how we have enabled debugging exception details. You would normally only do this in debug mode (development, not production). Notice the difference between this and a normal WCF behaviour? This one has no name, which means it is applied to all services. This feature is really powerful: you can apply blanket configuration across your enterprise without having to specify the config for every service!
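For reference, the same two behaviour settings look roughly like this in code. Note that this is a per-host sketch with a hypothetical helper; the nameless behaviour in config is more powerful because it applies to every service without touching each one:

```csharp
using System.ServiceModel;
using System.ServiceModel.Description;

// Sketch: enable metadata (WSDL) and exception details on a host in code.
// Hypothetical helper; in config, a nameless <behavior> does this for all services.
static class DevelopmentBehaviours
{
    public static void Apply(ServiceHost host)
    {
        var metadata = host.Description.Behaviors.Find<ServiceMetadataBehavior>();
        if (metadata == null)
        {
            metadata = new ServiceMetadataBehavior();
            host.Description.Behaviors.Add(metadata);
        }
        metadata.HttpGetEnabled = true;

        var debug = host.Description.Behaviors.Find<ServiceDebugBehavior>();
        if (debug == null)
        {
            debug = new ServiceDebugBehavior();
            host.Description.Behaviors.Add(debug);
        }
        // Only sensible in development builds, never in production.
        debug.IncludeExceptionDetailInFaults = true;
    }
}
```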

Now if we ask for the WSDL, we get the following back from the WCF 4 service:

So here the convention is the pre-configured protocols and bindings, and we still have the ability to override that convention using normal config if we want to.

In the next part of this post, I'll show examples of how to change the protocol bridging, i.e. changing the default basicHttpBinding to wsHttpBinding.

Anyone doing modern enterprise software engineering today will be practicing or partially practicing Continuous Integration, otherwise known as CI.

If you have never heard of CI before, Martin Fowler has a really good post on the subject here: http://martinfowler.com/articles/continuousIntegration.html

Just quickly, CI really comes down to a few key points at a minimum (some of which are lifted from Fowler's post above):

  1. Automate the build
  2. Make your build self testing
  3. Everyone commits/checks in to trunk every day
  4. Every commit/check in should build trunk on an integration machine
  5. Support for buddy builds
  6. Keep the build fast
  7. Must be able to run a local command-line single build that does everything that represents the build machines process

As far as I am concerned those 7 items are mandatory. There are of course many more that make up a really good CI strategy which I’ll go into in a future post but for now, lets concentrate on those 7 items.

I’m not going to go into detail into each item, but will pick out the ones I think are the most important.

2. Make your build self testing

The thing is everyone seems to know about CI now, which is good. But many are still getting it wrong.

Number 2 on the list needs explaining in order to get it right. I've seen so many large companies get this wrong, so let me explain.

There are normally two types of automated testing:

  1. Unit testing (by no means should the component being tested go down a rabbit hole and test a whole bunch of other components; those dependencies should be isolated out)
  2. Integration testing or acceptance testing

So if unit testing is done right, i.e. the tests are *not* integration tests and they test the smallest testable unit, then unit testing honours point 6 (keep the build fast) on the above list. Who cares whether the build is fast? Well, you normally only have a finite number of build servers, and on those servers a finite amount of CPU and RAM, which means a limited number of agents. When all your agents are busy serving multiple simultaneous check-ins, builds get queued, and the more queued builds there are, the longer it takes to see feedback from a check-in/commit.

If you don't have a buddy build system (point 5 on the list – I'll explain what a buddy build is in a minute), imagine what can happen when you get lots of queued builds: if one fails, it is very likely all the following builds will fail. That means someone has to sort out the mess, and that mess erodes the benefits of CI.

So going back to my original point, keeping the build fast is very important and often gets missed or not thought about much. I often see unit tests doing too much, meaning they take too long to execute, and unit tests that are really a hybrid of integration tests (like BDD/acceptance tests) and unit tests. Unit tests should only test a single class where possible; this keeps the build very fast.

But then what about acceptance/integration/BDD tests? Normally such tests execute end-to-end, i.e. from the UI all the way to the database or downstream system and back. If you practise BDD you'll have these tests for the main and alternate flows of all your use cases, so you will often have hundreds of them for a typical large enterprise system. These tests take much longer to run because they exercise the full stack. Sometimes the downstream system is on another network – perhaps reached via an IPsec VPN tunnel to the other side of the world, or over the internet. Whatever it might be, lumping these tests in with the unit tests is not a good idea.

So when should you run these long running integration tests and when should you run unit tests? Integration tests should normally run nightly i.e. like 02:00am when the build servers are quiet and don’t have much activity. Unit tests should run on every check-in/commit during the day for constant feedback.

My CI tool of choice is Microsoft Team Build (it comes with Team Foundation Server). Team Build 2010 allows you to easily set up controllers, so I often use one controller for integration and one for CI. These controllers can delegate to any number of build servers. With this kind of configuration, I normally have quite a meaty set of CI build servers (for running unit tests on each check-in) and a less powerful build server for deploying code and running integration tests.

The result is that the whole set of daily build activities is fast, satisfying the fast-build and constant-feedback requirements. Developers can also start an integration build any time they wish during the day without affecting the day-to-day running of the software development team.

5. Support for Buddy Builds

What the heck is a buddy build, I hear you say! I'm not sure whether this is just a Microsoft term that has come out of Redmond, but a buddy build is a way of ensuring your changes will integrate with the version on the server before they are actually committed to the branch. So how do you do that? In the old days, before software products did this for you, you would send your changes to a buddy who would merge them with his own and then attempt to compile and run the tests before you committed your changes.

In TFS 2010, Microsoft has introduced the concept of a gated check-in. When you check in, your change is submitted as a shelveset (not committed to the main branch); Team Build then executes the build against your shelveset first, and only if it succeeds are your changes committed to source control – hence it is impossible to break the build!

I'm not sure about other build server technologies like Jenkins, TeamCity, Hudson etc., but if you want to read more about TFS 2010 gated check-in, see here: http://blogs.msdn.com/b/patcarna/archive/2009/06/29/an-introduction-to-gated-check-in.aspx

7. Must be able to run a local command-line single build that does everything that represents the build machines process

Now this is very important. All too often I see that there are no command-line builds at all! And when there are, you sometimes have to run more than one to get a complete picture of whether your check-in will succeed on the server or not.

The benefits of having a local command-line build are huge; here are a few:

  1. It allows you to get a local copy of the solution running, i.e. setting up a web server, adding users, setting security permissions, deploying web sites into the web server
  2. Having one single command-line build script (a batch file or PowerShell script) that does everything your check-in will do, i.e. compile, run code analysis, run code style rules, run tests etc
  3. It enables faster development – geeks love command-line tooling!

If you have read this far, congratulations! until next time..

Happy building…

I have been using the Service Locator pattern for years and I think it has a place in some circumstances – I’ll explain what I mean by that in a minute but first for some scene setting…

If you’re not familiar with the Service Locator then the following shows a simple class diagram that depicts its use:

The following description of the Service Locator pattern is taken from Wikipedia:

The service locator pattern is a design pattern used in software development to encapsulate the processes involved in obtaining a service with a strong abstraction layer. This pattern uses a central registry known as the "service locator" which on request returns the information necessary to perform a certain task.

Often the Service Locator is used in conjunction with other patterns such as IoC and DI. (Self-promotion here: I've put together an IoC/DI container with support for event aggregation for Silverlight on Windows Phone: http://wp7.codeplex.com)

If you are a Microsoft shop, then the Common Service Locator is a very popular implementation of this pattern. You can download the code here (by the patterns & practices team at Microsoft): http://commonservicelocator.codeplex.com/

In fact the above website contains some providers for popular Inversion of Control containers such as StructureMap, Unity, Castle Windsor and so on.

Anyway, back to the point of this post: when should you implement a Service Locator, and is it an anti-pattern? According to this post and many others it is. My view is that it is only an anti-pattern if used in the wrong way. The example given in the above post is an anti-pattern because the benefits of IoC are not realised, due to a lack of understanding of IoC – I see this a lot in the enterprise. In fact you can use a vanilla IoC container as a service locator (kind of – it's more of a factory than a service locator, but bear with me).

Here is an example of a good use of IoC and DI – to which many of you reading this will already be doing today and will continue to do in the future:

public class Foobar : IFoobar
{
    private readonly IFoobarAdapter _adapter;
    private readonly IFoobarAdapter2 _adapter2;

    public Foobar(IFoobarAdapter adapter, IFoobarAdapter2 adapter2)
    {
        _adapter = adapter;
        _adapter2 = adapter2;
    }

    public void DoStuffWithAdapters()
    {
         //...
    }
}

The above code is nothing new: we are using IoC and DI to abstract dependencies, which enables us to write code that is easily testable and maintainable. There is no requirement for a Service Locator here. The code also lets any human, or any tool that generates metrics, see how tightly or loosely coupled our software is. Given the above, NDepend (my tool of choice for metrics) can easily figure out the dependencies between the Foobar class and the adapter interfaces.

Now what if we introduced the Service Locator pattern into the mix? Consider the following changes to the Foobar class, which now uses Microsoft's IServiceLocator:

public class Foobar : IFoobar
{
    private readonly IFoobarAdapter _adapter;
    private readonly IFoobarAdapter2 _adapter2;
    private readonly IServiceLocator _serviceLocator;

    public Foobar(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
        _adapter = serviceLocator.GetInstance<IFoobarAdapter>();
        _adapter2 = serviceLocator.GetInstance<IFoobarAdapter2>();
    }

    public void DoStuffWithAdapters()
    {
         //...
    }
}

So what's happening here, and why is the new code not so good? More importantly, why would you use a Service Locator at all when it seems to complicate things somewhat?

Firstly, the code is not so good for the following reasons:

  1. In order to test this class, I'm going to have to new up an IServiceLocator and ensure it responds to GetInstance for the types IFoobarAdapter and IFoobarAdapter2. I know I can use mocking tools like Rhino Mocks here (see the sketch below), but I'm just giving myself more work and getting nothing from the change.
  2. It gives any human, or any tool such as NDepend, an artificial view of the actual coupling between the layers.

By the way, it's worth pointing out that I can't remember whether Microsoft's implementation of IServiceLocator includes generics. I think I might have added a generic GetInstance to the interface to make my implementation cleaner and easier to code.
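To illustrate point 1 above, stubbing the locator for a test looks something like the following with Rhino Mocks – extra ceremony that buys you nothing over plain constructor injection. It assumes the generic GetInstance overload is available on the interface:

```csharp
using Microsoft.Practices.ServiceLocation;
using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class FoobarTests
{
    [Test]
    public void DoStuffWithAdapters_uses_both_adapters()
    {
        // Extra setup just to satisfy the locator-based constructor.
        var adapter = MockRepository.GenerateStub<IFoobarAdapter>();
        var adapter2 = MockRepository.GenerateStub<IFoobarAdapter2>();

        var locator = MockRepository.GenerateStub<IServiceLocator>();
        locator.Stub(l => l.GetInstance<IFoobarAdapter>()).Return(adapter);
        locator.Stub(l => l.GetInstance<IFoobarAdapter2>()).Return(adapter2);

        var foobar = new Foobar(locator);

        foobar.DoStuffWithAdapters();

        // Assert: verify calls on the adapter stubs as appropriate for the real implementation.
    }
}
```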

The above could be worse still… consider the following additional "hack":

public class Foobar : IFoobar
{
    private IFoobarAdapter _adapter;
    private IFoobarAdapter2 _adapter2;
    private readonly IServiceLocator _serviceLocator;

    public Foobar(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
    }

    public void DoStuffWithAdapters()
    {
        _adapter = _serviceLocator.GetInstance<IFoobarAdapter>();
    }

    public void DoStuffWithAdapters2()
    {
        _adapter2 = _serviceLocator.GetInstance<IFoobarAdapter2>();
    }
}

So you *might* be thinking, "well, what's wrong with that?" Shame on you if you are! Foobar here is quite light in that there aren't many lines of code, but when it gets larger it will be much more difficult to figure out Foobar's dependencies, from both a tool's and a human's perspective.

So this is leading to the conclusion that Service Locator is an anti-pattern, right? Wrong! Take the following slightly different scenario.

I am using the command pattern and have a command that looks like so:

public class EmailCustomerConfirmationCommand :
           ICommand<EmailCustomerConfirmationContext>
{
     private IEmailAdapter _emailAdapter;    

     public EmailCustomerConfirmationCommand(IEmailAdapter emailAdapter)
     {
          //inject dependencies here.
          _emailAdapter = emailAdapter;
     }       

     public void Execute
       (EmailCustomerConfirmationContext context)
     {
         _emailAdapter.Send(context.Customer);
     }
}

So with the above I have a nice implementation of the command pattern (I’ll talk about the command pattern in a later post) that takes a context as a generic. We register this command with the context so it can be picked off the container easily. This command gets registered like so:

container.AddComponent<ICommand<EmailCustomerConfirmationContext>,
 EmailCustomerConfirmationCommand>();

This of course will look different depending on your container of choice. Here I am using the Compact Container.

So my requirement now is to wire up the calling of the above command in my Foobar class. This is easy: I could just inject the ICommand via the constructor like I do with all the other types, right? Well, I could, but that will not scale very well, as I might potentially have tens or hundreds of commands. Imagine testing that. Foobar could be a presenter in an MVP application, a controller in an MVC application, or some sort of domain event handler. Instead, I'd like to delegate the execution of commands to another responsibility outside of the Foobar class, honouring the SRP (Single Responsibility Principle). So what I want to do is something like this:

public class Foobar : IFoobar
{
     private readonly IFoobarAdapter _adapter;
     private readonly IFoobarAdapter2 _adapter2;
     private readonly IController _controller;

     public Foobar(IFoobarAdapter adapter,
          IFoobarAdapter2 adapter2,
          IController controller)
     {
          _adapter = adapter;
          _adapter2 = adapter2;
          _controller = controller;
     } 

     public void DoStuffWithAdapters()
     {
     } 

     public void DoStuffWithAdapters2()
     {
     } 

     public void SendNotificationEmailToCustomer(Customer customer)
     {
        _controller.Execute
         (new EmailCustomerConfirmationContext(customer));
     }
}

I have now added a dependency named IController that handles the actual execution of commands. This class could look like the following:

public class Controller : IController
{
    private readonly IServiceLocator _serviceLocator;
    public Controller(IServiceLocator serviceLocator)
    {
         _serviceLocator = serviceLocator;
    } 

    public void Execute<TContext>(TContext context)
    {
         var command =
           _serviceLocator.GetInstance<ICommand<TContext>>();
         if (!command.IsNull())
         {
             command.Execute(context);
         }
         var disposable = command as IDisposable; 

         if (disposable != null)
             disposable.Dispose();
     }
}

So here the Controller class is nice and clean in that we are not injecting commands via the constructor or any other means. Instead we are asking Service Locator for a given command based on the context passed to it. The power of this is that the Controller class in this example also doesn’t know the underlying container it is working with.

So hopefully you can see the power the Service Locator gives us here. We have made our Foobar class very clean with few dependencies, which makes it easy to test, and we have separated out the responsibility for actually executing commands – much like Microsoft's WPF framework does.

Microsoft is starting to embrace these patterns. The IDependencyResolver is Microsoft’s version of Service Locator in MVC 3. I will write a blog on that soon.

So to conclude, the Service Locator pattern is *not* an anti-pattern so long as you have a good reason to use it!

In the meantime, happy coding!

I've been looking into the new-ish Razor view engine shipped with ASP.NET MVC 3 RTM recently, and I am very impressed. It has changed slightly since the last time I looked at it during the beta and RC releases. I plan to write a series of blog posts around the whole MVC 3 stack, as there seems to be great confusion around the Service Locator pattern (implemented in MVC 3 as IDependencyResolver) and how it differs from the CTP's IMvcServiceLocator. I'll also talk about how this compares to the Common Service Locator IServiceLocator CodePlex project and why Microsoft didn't use the Common Service Locator in MVC 3!

But back to the simple point of this post: how do I comment out markup in a .cshtml Razor file? Simply use the @* *@ syntax. For example:

@inherits BaseView
@{
   ViewBag.Title = "Customers";
   Layout = "~/Views/Shared/_Layout.cshtml";
}
@*The version: @VersionResolver.Version*@

So above we derive from the base type BaseView, and we were calling VersionResolver.Version (a dependency-injected property), but we have commented that line out so it will never be called. The Razor comment syntax looks different from an HTML comment, so remember it if you're using Razor.

To give some context, BaseView looks like this:

public abstract class BaseView : WebViewPage
{
   [Dependency]
   public IVersionResolver VersionResolver { get; set; }
}

Here we are using property injection (the only type of injection supported in Razor right now). I happen to be using Unity IoC container by Microsoft.
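For context, the Unity wiring behind that [Dependency] property is usually a small IDependencyResolver adapter registered at application start. The sketch below is roughly what packages such as Unity.Mvc3 ship: MVC resolves the view page type through it, which gives Unity the chance to populate [Dependency] properties.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using Microsoft.Practices.Unity;

// Sketch of the usual Unity-to-MVC 3 adapter registered via DependencyResolver.SetResolver.
public class UnityDependencyResolver : IDependencyResolver
{
    private readonly IUnityContainer _container;

    public UnityDependencyResolver(IUnityContainer container)
    {
        _container = container;
    }

    public object GetService(Type serviceType)
    {
        try
        {
            return _container.Resolve(serviceType);
        }
        catch (ResolutionFailedException)
        {
            // MVC expects null for types it should create itself.
            return null;
        }
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        try
        {
            return _container.ResolveAll(serviceType);
        }
        catch (ResolutionFailedException)
        {
            return Enumerable.Empty<object>();
        }
    }
}

// In Application_Start (VersionResolver here is a hypothetical implementation):
// var container = new UnityContainer();
// container.RegisterType<IVersionResolver, VersionResolver>();
// DependencyResolver.SetResolver(new UnityDependencyResolver(container));
```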

Until next time…

I’ve been interested in CQRS – Command Query Responsibility Segregation for sometime now and been very fond of the simple command pattern for a long while. The command pattern is baked into new technologies such as WPF (Microsoft’s Windows Presentation Foundation). I wrote a post a while back on how to implement the nice clean command pattern here: http://www.simonrhart.com/2010/03/making-use-of-command-pattern-on.html

My post assumes you are familiar with patterns such as dependency injection and IoC containers, as these are additional concepts that promote good use of the command pattern and, in turn, CQRS.

If you're aware of the command pattern or have read my post linked above, you'll see how nice it is in that it promotes the Single Responsibility Principle: an object should have only one reason to change. In other words, do not over-complicate your code. This not only makes your code easier to read, it makes it easier to test and maintain, and easier to apply rules such as FxCop to during build processes.

CQRS is an architectural design pattern that extends the command pattern. Udi Dahan has a great post on what CQRS is designed to solve here: http://www.udidahan.com/2009/12/09/clarified-cqrs/

With the traditional command pattern, everything gets executed the same way, so your queries and updates might execute within one command. You quite possibly call a repository layer that gets some data from a data store (perhaps via an ORM), then create a unit of work, make changes and write them back to the data store. Although this is testable, as the command isn't doing too much, it can be better to separate the two. Why would you do that? It gives you the opportunity to have a cache store for queries. When a create, update or delete command executes, it also updates a separate cache by serializing the data, perhaps as JSON, to a distributed cache such as Windows Server AppFabric (Velocity), for quick retrieval by query commands later. When a query comes in, instead of going through all those layers to get data from a database or data store, it first looks in the distributed cache; if the data is found, it is deserialized into the domain object or data transfer object and returned to the caller. This is lightning fast.

Contrast that with a typical system: your retrieval commands receive a request in the form of a context, which then possibly calls a repository layer, which in turn might call into an ORM that serves up a domain object for the thing you're looking for. That is a lot of layers just to display data, and it is exactly what CQRS can simplify.

You should be able to see from this that CQRS is designed to simplify the whole process of writes vs reads. As I said the reads could be some sort of distributed caching technology like Velocity or memcached.
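As a minimal sketch of that read side (the cache abstraction, DTO and repository names are mine, standing in for whichever AppFabric or memcached client and serializer you use):

```csharp
// Hypothetical cache abstraction; an implementation would serialize to JSON and
// talk to AppFabric (Velocity), memcached or similar.
public interface ICache
{
    T Get<T>(string key) where T : class;
    void Put<T>(string key, T value) where T : class;
}

// Hypothetical read-side DTO and repository.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    CustomerDto GetById(int id);
}

// Query side: try the distributed cache first, only fall back to the full
// repository/ORM/database stack on a miss.
public class CustomerQueries
{
    private readonly ICache _cache;
    private readonly ICustomerRepository _repository;

    public CustomerQueries(ICache cache, ICustomerRepository repository)
    {
        _cache = cache;
        _repository = repository;
    }

    public CustomerDto GetCustomer(int id)
    {
        var key = "customer:" + id;

        var cached = _cache.Get<CustomerDto>(key);
        if (cached != null)
        {
            return cached; // cache hit: no ORM, no database round trip
        }

        var customer = _repository.GetById(id); // cache miss: go through the layers
        _cache.Put(key, customer);
        return customer;
    }
}
```

The corresponding create/update commands would write the serialized DTO to the same cache key after persisting, which is what keeps the read side hot.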

I attended the CWDNUG (Canary Wharf Dot Net User Group) meeting on Wednesday 1st June (last night): http://www.meetup.com/cwdnug/events/18182511/ – although the execution of the presentation could have been better, the content was OK. The presentation talked about Event Sourcing too, which I think deserves another post, as it can be used with CQRS in certain scenarios.

For me, using CQRS to separate queries from commands (writes, creates and updates) while keeping the data store as a normalised RDBMS is a good place to be. Event Sourcing – which I will talk about later – is better suited to green-field projects.
