After the polished video intro (screenshot below; catch the full keynote here), Brad Anderson (Corporate VP at Microsoft) kicked off TechEd Europe in Madrid by highlighting the opportunities available to us in IT at the moment. He talked about innovation, helping customers differentiate themselves, and how Microsoft aims to make others great. Whilst I definitely agree with him, it was the resonance with so much of what we do at Smart that I really liked.

[Image: TechEd opening sequence]

Four areas of innovation were discussed: People Centric IT, Building better Apps for people, Data, and the Cloud Platform. Aligned to these were new product release announcements, including SQL 2014 CTP, VS2013 and TFS2013 CTP, Windows Server 2012 R2 and System Center 2012 R2. Better still, a partnership with Oracle was announced, which means Oracle is fully supported on Azure and Hyper-V (and, from what I gathered, Oracle images ready to spin up in Azure).

One of the best parts (and I tweeted at the time – @wanty1975, btw) was a video of one of our customers (Aviva), referenced as an example of mobile and Azure development combining to provide an innovative solution for their customers. Why was this so good? Well, partly because my old boss was in the video… but more so because seeing real-world examples, and close to home too, helps inspire me and makes me realise the opportunities we have available to us.

[Image: TechEd keynote presentation]

So, a brief list of new features mentioned in the keynote that you should be aware of:

  • BYOD – two-factor authentication making it easy to add devices (a Microsoft Surface in the demo) to AD domains using Windows 8.1 (not sure what the experience would be like for other devices though?). The simple process means files are sync’d, permissions granted to corporate data and Line of Business (LoB) applications downloaded onto the device, accessible via a company portal-like homepage.
  • VS and TFS 2013 – ALM benefits throughout the development lifecycle, but I really liked InRelease (following a recent acquisition by Microsoft), which provides workflow and approval mechanisms for moving builds through environments. I could see this really helping to take the pain away from tracking builds through environments and ensuring everyone knows what versions are where. From a dev perspective I also thought the new Code Lens and Code Map features looked really helpful. These come under the banner of Situational Awareness and are designed to reduce the time and effort needed by developers when context switching. Code Lens integrates with TFS 2013 to provide details within the code base, such as tests passing/failing, who last checked it in and any recent check-ins. It helps developers understand who has done what, where and why, much more simply than the previous version/compare routine that we’ve all experienced. Code Map provides a diagrammatic view of code (similar to architecture modelling tools), but this can be viewed at runtime in debug mode, like a call stack, helping developers familiarise themselves with code much more quickly and easily.
  • SQL 2014 – some huge performance gains achieved through in-memory OLTP and row-based access rather than page-based, thereby reducing locking issues and increasing performance. SQL will suggest which tables are ‘hot’ and recommend moving them into memory (I’m assuming they’re still persisted to disk fairly frequently!). I also picked up on the ability to replicate an existing on-prem DB to Azure, to provide HA and DR via a right-click menu option… I didn’t see or hear any more evidence of this, but it sounds like a great feature. Also HDInsight and Hadoop linkage to Excel, to provide visualisations of and insight into Big Data.
  • Windows Server and System Center 2012 R2 – providing the tools to make hybrid cloud simple and easy, with tooling to help sync data and create replicas for DR purposes, plus significant performance benefits for virtualised machines.

So there are obviously things that I’ve missed and lots more to talk about but hopefully this represents a fair summary. I’ll also be tweeting any further nuggets throughout the week.

Being relatively new to the IT profession, I wasn’t really too sure what to expect from JAX.  Over the years, I’ve been to plenty of other non-IT conferences; some of them great, some of them awful.
However, JAX 2012 did not disappoint.
Located in the depths of Park Plaza, Victoria, the venue was pretty ideal.  Spacious conference rooms that easily accommodated any interested attendees (and then some) with plenty of communal seating areas to mingle with the other conference-goers.  Plus, it’s directly opposite a Pret and, as any regular London commuter will tell you, that is no bad thing.  The event staff were very efficient and helpful and there wasn’t a single hiccup in organization over the entire two days, which is a huge indication of the amount of effort that went into the planning of such a big event.
After a brief welcome by one of the conference’s organizers, the first session was a keynote by Brian Goetz (@BrianGoetz ), a language architect from Oracle, centered around Java 8, in particular, Lambda expressions.  It was a really eye-opening insight into the evolutionary forces in play that drive forward Java as a language and a technology to adapt to the fast-moving technological landscape whilst, at the same time, maintaining backwards compatibility with legacy code.  The new features of the language, due for release in Summer ’13, have definitely captured my attention.  There are definitely a lot of features ‘borrowed’ from Scala, in particular the use of parallelism in iterating over Collections, but these can only work to Java’s advantage.  Overall, a fantastic talk and has genuinely excited me over the future of Java.
Next up, a live coding session with Sandro Mancuso ( @sandromancuso ) as he showed us his philosophy on testing and refactoring legacy code.  Whilst endorsing the traditional test-driven development (TDD) approach and the mantra of always leaving the code in a better state than you found it, quite a few of the hints and tips that he gave, such as starting testing from the shortest branch of code and refactoring from the deepest branch, are small changes that can make working with legacy code that much easier.  A useful session overall.
I’d never really had much exposure to the Big Data world before so I jumped at the opportunity to attend a few of the Big Data lectures available.  First up was a lecture on Apache Cassandra by Tom Wilkie ( @tom_wilkie ).  Amongst the frequent plugging of Acunu, a reporting tool that sits on top of Cassandra, it gave me a decent understanding of the way that Cassandra works and the advantages, such as providing realtime analytics, and the disadvantages, such as having to declare your queries up-front, that the technology has to offer.
The second Big Data lecture I attended was regarding Hadoop and its evolution.  Similarly to the previous talk, this gave me a good basic understanding of the workings of Hadoop and how it has progressed over the years.   Very interesting indeed.
Having worked on mainly Agile Scrum projects at Smart421, I was quite keen to attend Roman Pichler’s ( @romanpichler ) lecture, entitled ‘Choosing the Right Agile Innovation Practices: Scrum vs. Kanban vs. Lean Startup’.  It was really interesting to see Roman’s take on when each of these agile processes is relevant, depending on the level of risk and uncertainty in the product.  Having never had any real exposure to Kanban and Lean Startup, this was a great session to get more of a feel for these paradigms.
Next up, was Heiko Seeberger’s ( @hseeberger ) ‘Scala In Action’ talk.  I was very excited about this talk because I’ve heard great things about Scala and had been meaning to experiment with it for a while.  I was not disappointed.  This live coding session showed the basics of the Scala programming language, as well as its interoperability with Java projects.  Whilst the syntax may take a bit of getting used to for existing Java developers, the readability of the code drastically increases, condensing what may take several lines of Java code into a single line of Scala.  As well as this, there are a plethora of features, such as parallel collections and lazy values that make this language very appealing.  A fantastic presentation and I’ll definitely be coding some Scala of my own very soon.
The second of Roman Pichler’s lectures I attended was about his concept of the Product Canvas.  The Product Canvas, in a nutshell, is a document for a new piece of product functionality that captures the stakeholders’ needs, user journeys and UX design, constraints and developer stories all in one document.  It was suggested that, once created, this document is displayed for anyone in the business to see.  This is perhaps quite handy for ensuring that all parts of the business are, quite literally in this case, on the same page, but in reality, I’m not sure this would really be feasible in a fast-moving Scrum team where the requirements are constantly changing.  Definitely an interesting concept though, and it provides much food for thought on how development teams can work more closely with the rest of the business.
‘Rock Arts and Rockets’ was the title of the next keynote, from IBM’s Steve Poole.  This talk was a captivating insight into the Java community as a whole and was a cry for help, encouraging us, as Java developers, to get more involved to help drive the future of the programming language.  An eye-opening talk that was very thought-provoking.
At conferences, I’ve always found that there is usually one talk where someone has a good whinge.  Russel Winder’s ( @russel_winder ) talk on Closures didn’t let that hypothesis down.  A lot of the content in this talk was material that had been covered by Brian Goetz’s keynote and the rest seemed to be a long rant about how Java isn’t object-oriented(!) and slides of squirrels.  There wasn’t really much to take away from this talk, which was a pity, as it sounded quite interesting.
Next, was a high-energy and entertaining talk by Holly Cummins ( @holly_cummins ) and Martijn Verburg ( @karianna @diabolicaldev ), which seems odd for a lecture about Software Complexity.  Covering a range of areas concerning software complexity, such as dependency management, issues with classloading and modularity principles, there was plenty to take away from this talk and apply to my day-to-day development with Java, which was fantastic.
Another keynote followed, given by Patrick Debois of Atlassian, regarding DevOps, in particular the importance of sharing in such a development methodology. Patrick took us through the six phases of implementing the DevOps way-of-thinking into a pre-existing workspace.  The key theme throughout showed that improved communication can do wonders for development and operational teams working in harmony.  I see this methodology getting a lot more attention over the coming years.
Having recently been involved in helping out with the performance testing for a client, I thought it would be good to attend a couple of talks relating to that area that could perhaps give me some useful skills to apply.  First up was an FAQs about Performance talk by Trisha Gee ( @trisha_gee ).  Aimed at the beginner performance tester, Trisha covered a wide range of different types of performance tests that can take place, as well as considerations and common pitfalls.  A very down-to-earth session that gave me some very useful advice to apply day-to-day.
The second performance-related talk I attended was by Kirk Pepperdine on the topic of garbage collection logs.  I must admit, I’ve never really thought of analyzing garbage collection logs, purely because I wouldn’t really have a clue where to start.  However, this session showed that it’s not quite as scary as it first appears.  Kirk talked us through a few different garbage collection algorithms and how these can be applied to the heap, complete with practical examples.  Whilst one would need to do a lot more research to become fully comfortable delving into the logs themselves, this lecture gave a great overview of how to tweak the garbage collection mechanisms to make your code more performant.  Highly enjoyable!
The final lecture of the conference that I attended was entitled ‘Multi-tenancy in a Platform-as-a-Service’ and was given by Paul Fremantle ( @pzfreo ). This talk seemed like a ‘shameless plug’ for a product called Stratos, but it also highlighted the issues the team came across when building their PaaS solution, which made for an interesting final session.  It’s also made me keen to get more exposure to cloud-based solutions.
Overall, this was a fantastic conference, both in organization and in content.  It’s a great opportunity for any developer to keep up to date with the latest goings-on in the Java ecosystem.  It’s given me plenty of mind-food to munch on for the coming year.  The JAX 2012 team deserve a big pat on the back.


Having arrived at the Park Plaza Hotel early, I was the first attendee and got to chat with the girls on reception. Then I sat sipping coffee, alone, reading the gumpf in the goodies bag. Not much really, so I go check out the t-shirts and chat with some of the sponsor dudes: Mr Git-Hub has the best t-shirts.

I’m starting to get excited. Hey, there’s someone else. Looks like a geek.

Starting to get really excited now: “Lambda, Lambda, Lambda”, goes the chanting in my head.

Lambda Under the Hood

Here’s Mr Brian Goetz ( @BrianGoetz ): “It’s a cool time to be in Java in London”, he says.

An engaging speaker, he talks about the choices made to implement closures in Java. One of the biggest headaches has been retaining Serialization; backwards compatibility is mandatory.

Key words: invokedynamic, MethodHandle, LambdaFactory.

The point about closures is that they allow iteration (and more) to be delegated to the library (“client says what, library decides how”), allowing library developers to implement, in particular, parallelism in new Collections methods.

For example, a “filter(…)” method (among others) will be added to java.util.Collection so that the following code can be written:

    List<People> adults = people.filter( p -> p.getAge() >= 18 );

Default methods on interfaces (new in Java 8) allow interfaces to be evolved and extended without breaking existing code.

Mr Goetz encourages us to download the Java 8 early access releases and start playing around with this stuff. He highly recommends the following talk: “How to Think about Parallel Programming – Not”.

http://www.infoq.com/presentations/Thinking-Parallel-Programming

Cassandra: Realtime Analytics

The Acunu guy, Tom Wilkie ( @tom_wilkie ), was in a hurry. I could tell that straightaway because he was breathing fast and heavily.

“Know anything about Cassandra?”, he began: a few hands went up.

“How many attended the previous session on Cassandra?”, he continued: most hands went up but not mine, I had been in the session about Lambda.

“Weren’t she that greek bird what told the truth but no-one believed her?”, someone said, presumably to develop the ambience.

“Erm…, I don’t know”, said the Acunu guy.

What we needed to know was that Cassandra is really good at:

  • Writing random inserts
  • Range queries

And, that all the work is moved to the data ingest phase so all the queries must be pre-defined.

Acunu does realtime analytics, approximate analytics; one can pick no more than two from: “realtime”, “large-scale”, “exact”. The Acunu guy then showed us an algorithm for computing an approximate (probabilistic) number of distinct values, based on hashing the values and counting the number of their leading zeroes.
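
As a rough illustration of that leading-zeroes trick (a sketch of the general idea, not Acunu’s actual implementation – and in C# rather than anything shown in the talk): if the largest number of leading zero bits seen across the hashes of all values is n, then roughly 2^n distinct values have probably been observed.

    using System;
    using System.Collections.Generic;

    static class DistinctEstimator
    {
        static int ApproximateDistinctCount(IEnumerable<string> values)
        {
            int maxLeadingZeroes = 0;
            foreach (var value in values)
            {
                // A real implementation would use a proper hash function
                // (not GetHashCode) and combine many independent hashes
                // to reduce the variance of the estimate.
                uint hash = (uint)value.GetHashCode();
                int leadingZeroes = 0;
                for (uint mask = 0x80000000; mask != 0 && (hash & mask) == 0; mask >>= 1)
                    leadingZeroes++;
                maxLeadingZeroes = Math.Max(maxLeadingZeroes, leadingZeroes);
            }
            // It takes roughly 2^n distinct hashes before one with
            // n leading zeroes is likely to show up.
            return 1 << maxLeadingZeroes;
        }
    }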

Origin of Hadoop

A bit disappointing; we learnt:

There are two components:

  • HDFS: storage
  • MapReduce: compute

Moreover:

  • It’s scalable: affordable, distributed, reliable
  • Its style of use: schema on read
  • It started as batch only, with HBase the first non-batch component.

The speaker strongly recommended reading the following paper from Google:

http://research.google.com/archive/spanner.html

Back to Basics: Codemanship

This was an entertaining talk by Jason Gorman, essentially debunking the mire of methodologies. The speaker walked us through his 11 hype-free principles of software development.

Pretty obvious stuff really, but worth it for the following:

“No developer ever suggested using a waterfall model, nor ever would”.

“In software, perfection is so close to ‘good enough’ that we might as well aim for perfection”.

http://www.codemanship.co.uk/backtobasics.pdf

The Scrum Hangover

Two guys from UBS, Mashooq Badar ( @mashooq ) and Sandro Mancuso ( @sandromancuso ), talked through their company’s experiences over the past few years participating in projects with offshore/onshore teams.

Key items:

  • Focus on quality – reserve time for improvements (developers make the call on what to improve and when)
  • Quality is a team concern
  • Stop and fix attitude – no broken windows
  • Boy scout rule – leave the campsite cleaner than you found it.
  • Continuous and frequent delivery to production.
  • Emphasis on TDD – automated tests at all levels.
  • Business domain reflected in the code and design – readability, maintainability.
  • Simple design.
  • Healthy intolerance of bad code – develop an allergy for bad code smells.
  • DO NOT satisfy your need to use a pattern.

Connected Data

Mr Ian Robinson of Neo Technology ( @iansrobinson ) gave the best talk of the conference. I could immediately see how I could use this stuff in my genealogy project.

He explained that:

  • Semi-structured data in RDBMS leads to sparse tables and lots of null-checking logic.
  • Highly connected data in RDBMS leads to lots of join tables.

Graph databases (such as Neo4j) address semi-structured, highly connected data.

Its property graph data models are volatile and easy to evolve. They are constructed from nodes, relationships between nodes, and properties (of nodes and relationships).

Neo4j provides a language for:

  • describing graphs
  • creating nodes, relationships and properties
  • querying data
  • pattern matching

Neo4j is ready to go!

Day 2

We start, all together, in the plenary room.

Go join a JUG

“Lower your reverence levels and get involved in OpenJDK”, urges Mr Steve Poole.

“Does Java stay in a dark cave guarded by acolytes, or do we reignite the Java flame?” he asked us.

“Reignite the flame”, I could just about lipread from someone’s lips.

Mr Steve Poole gave us a brief overview of the language jungle, and the key Java ethos: compete on implementation, collaborate on specification.

Closures

Having had his thunder stolen by Mr Goetz the previous day, Mr Russel Winder ( @russel_winder ) still gave a talk on closures. He suggested that closures would bring the opportunity to “stop using 70’s techniques and use 30’s ideas and maybe finally do some OO programming (in Java)”.

Key observation: “It’s about how your data evolves. It’s not about the flow of control”.

Spaghetti

No more noodling with code!

A vaguely humorous talk about dependency management and the Liskov Substitution Principle (http://en.wikipedia.org/wiki/Liskov_substitution_principle).

The perpetual problem is getting hold of an implementation; factories (factories (factories…)) and home-rolled frameworks. Dependency injection externalises the factory problem, and we are advised to be strict about dependency management.
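
To make that concrete – a language-neutral sketch rather than anything from the talk (shown here in C#, with invented names) – constructor injection moves the choice of implementation out of the class entirely:

    public interface IPaymentGateway
    {
        void Charge(decimal amount);
    }

    public class CheckoutService
    {
        private readonly IPaymentGateway gateway;

        // No factory lookup, no new(): the implementation is chosen by
        // whoever constructs CheckoutService (a DI container, or plain
        // wiring code at the application's entry point).
        public CheckoutService(IPaymentGateway gateway)
        {
            this.gateway = gateway;
        }

        public void Checkout(decimal total)
        {
            gateway.Charge(total);
        }
    }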

Jar Hell – jars are transparent: classes encapsulate data, packages contain classes, jars contain packages, but there are no jar-scoped access modifiers.

  • There is no means for a jar to declare its dependencies: no versioning, no modularization characteristics.
  • At runtime a jar is just a collection of classes on a classpath.
  • In a normal jar, an implementation is as easy to use as an interface.
  • In fact it’s easier, as there are no worries about factories, encouraging the breaking of encapsulation.
  • Don’t think jars, think modules!
  • Understand dependencies: What other code does my code touch?, when does it connect?
  • Retro-fitting modularity is hard.

We are advised to check out OSGi, it’s a lot easier than we might think.

NB: Project Jigsaw (modularity in Java) will not appear in Java 8.

Complexity Theory & Software Development

Mr Tim Berglund ( @tlberglund ) gave an entertaining review of simple complexity theory and a demonstration using Golly (a Game of Life simulator: http://golly.sourceforge.net/).

There was very little to do with development other than that we have super-huge complex projects and should be aware and look for emergent behaviours.

He used the excellent term “afferent coupling” (the number of classes and interfaces from other packages that depend on me), and I intend to use it myself sometimes.

Stinkin’ Locks

Not for the first time we began a talk with a schematic of a processor. Mr Mike Barker explained what actually happens in the processor when locks are invoked in code. Lines of cache memory get binned, forcing the re-retrieval of data (that may just be unlucky enough to be in the same line of memory).

He introduced us to the processor memory model, and compared its brevity with the Java memory model (NB: this arrived in Java 1.5 and is a big, although underrated, thing).

We walked through some simple algorithms using the Atomics classes, and he explained why “hard-spinning” screws up hyper-threading.
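
The talk used Java’s Atomics classes; as a sketch of the same idea using the .NET equivalent (System.Threading.Interlocked – my translation, not the speaker’s code), here is a compare-and-swap (CAS) loop that increments a shared counter without taking a lock:

    using System.Threading;

    class LockFreeCounter
    {
        private int value;

        public int Increment()
        {
            int current, next;
            do
            {
                current = value;      // read the current value
                next = current + 1;   // compute the proposed new value
                // CompareExchange writes 'next' only if 'value' still equals
                // 'current'; if another thread won the race, loop and retry.
            } while (Interlocked.CompareExchange(ref value, next, current) != current);
            return next;
        }
    }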

Hmm… I’m not sure I want to use a class whose name is “Unsafe”.

http://stackoverflow.com/questions/5574241/interesting-uses-of-sun-misc-unsafe

Windows 8 and HTML/JavaScript

I rounded out my conference by attending a talk on building Metro apps for Windows 8 using HTML and JavaScript.

Pretty slick, if you want to do that sort of thing. You have to have Windows 8, and you have to have Visual Studio or it just can’t be done.

Not sure I’ll be doing this myself though.

Caution workforce in the road!

What would your reaction be if the workforce in the road, fixing the road, did not have any tools or machines to do the job?

Frustration at the waste of time in the resulting traffic queue?

What would be your reaction if the washing machine repair man turned up without his tool kit, without a diagram of the appliance and without access to spare parts?

Refuse to pay the bill?

A security company providing security without enough staff

Questions in Parliament?

How is it that so many Enterprise Architects can do their job without the tools of their trade?

Often Enterprise Architects are missing vital parts of their tool kit:

  • Standards
  • Principles
  • Reference architectures
  • Models of the Organisation
  • Application Landscape
  • Analysis and design tools
  • Information sources to feed the analysis tools
  • Stakeholder analysis

Worse than this, they seem to lack the basic tools needed to create the EA tools above, such as the processes to maintain the models, principles, guidance and governance.

Do you wonder why EA gets a bad name?

I am not suggesting that we go back to the old EA approaches:

  • Boil the ocean documenting the current state
  • Tons of detailed standards (always out of date)
  • Heavy-handed governance that increases costs, misses deadlines and the point

And any of the other EA anti-patterns.

TOGAF 9.x of course points us at lots of artefacts and things to do; it is supposed to. We do not have to do them all, we can mix and match – but what happens when we mix and match ourselves out of TOGAF 9.x in all but name? Are we no longer doing architecture?

There are precedents for this situation:

SSADM was created and adopted, but everyone picked the bits they liked or could do. No one could afford to complete the whole of SSADM – especially with paper and pencil (there were few tools around).  SSADM became discredited; every claim of compliance was subject to interpretation.

A similar thing happened to PRINCE.

I guess that there are many other examples of good practices being diluted until they are no longer effective.

Will this be the fate of TOGAF?

Are we architects no longer doing architecture?

It doesn’t always have to be a huge gig at a big plush venue that draws good speakers and skilled attendees curious to learn.

Quite often, user groups and locally-run events can be such rich sources of education and inspiration. I found Agile East Anglia to be one such group. Started by Paul Grenyer (@pjgrenyer), its deep-dive sessions on aspects of Agile have been excellent. These have included Agile User Stories, Dialogue Sheets and, most recently, Behaviour Driven Development with speaker Liz Keogh (Twitter @lunivore ).

Developers from IT teams in huge corporates and developers from new start-ups have been rubbing shoulders at this group, which has all helped to add colour to the subject areas under discussion. Content is pragmatic and realistic, and nobody is looking to make a big name for themselves; all attendees have gone home the richer for learning from each other. How refreshing.

It’s good to learn that other regional groups have twigged that there is lots of good stuff happening, and a decision has been taken to merge Agile East Anglia with Norwich StartUps and Norwich Developers Community to form a group with wider reach and deeper appeal. The new group has been branded SyncNorwich (Twitter @SyncNorwich) and the inaugural event is scheduled to take place in Norwich on 05 July 2012. Details and registration: http://www.meetup.com/SyncNorwich/events/68577412/

Meanwhile, Smart421 has got involved, hungry to fuel its deep use of Agile on numerous engagements for several enterprise Customers. For those with an interest in learning about our Smart Agile Development Process (SADP), please check out our Agile page.

Oh, and you’ll also discover a neat Android app available free for download.

Sometimes you have a requirement to generate your C# class contracts from an existing WSDL document so they can be used by consumers. It might be that the customer simply wants to keep the existing service contract and in this case only the implementation needs to be developed.

The way to do this in .NET is to use the SVCUTIL.exe command-line tool that you get when you install Visual Studio. Most people will know this tool for generating C# client proxy classes to handle service invocations and the marshalling of simple and complex data contracts on the client side. It can also handle generation of the C# service contracts – although the output is not pretty and will need to be cleaned up afterwards.

Just to note, when I say C# above, the tool also supports VB.NET, you just need to pass in the /l:VB switch. It defaults to C#.

So to generate your concrete service contract from an existing WSDL you can use the following syntax:

svcutil /dconly myservice.wsdl

If you’re lucky you’ll get a single myservice.cs source code file that contains everything, i.e. all the data contracts and service contracts that you need in order to implement your service. You’ll need to clean this up though, as the tool makes a mess of creating these contracts. You’ll also get a .XML file that can be added to your web.config or app.config file, as it specifies the address, bindings and contracts for your service. It takes the address from the WSDL document and applies it to the config file.

If you’re not so lucky, you might get an error similar to the following:

D:\Workspaces\Simon\Project_Foo\Trunk\Foo\Source\Artefacts\WSDL>svcutil myservice.wsdl
Microsoft (R) Service Model Metadata Tool
[Microsoft (R) Windows (R) Communication Foundation, Version 4.0.30319.1]
Copyright (c) Microsoft Corporation. All rights reserved.

Error: Cannot import wsdl:portType
Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.DataContractSerializerMessageContractImporter
Error: Schema with target namespace 'http://schemas.foo/Services/Foo/2009' could not be found.
XPath to Error Source: //wsdl:definitions[@targetNamespace='urn:com:foo:services:foo:function:2011']/wsdl:portType[@name='PortType']

Error: Cannot import wsdl:binding
Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='urn:com:foo:services:function:feature:2011']/wsdl:portType[@name='PortType']
XPath to Error Source: //wsdl:definitions[@targetNamespace='urn:com:foo:services:function:feature:2011']/wsdl:binding[@name='EndpointBinding']

Error: Cannot import wsdl:port
Detail: There was an error importing a wsdl:binding that the wsdl:port is dependent on.
XPath to wsdl:binding: //wsdl:definitions[@targetNamespace='urn:com:foo:services:function:feature:2011']/wsdl:binding[@name='EndpointBinding']
XPath to Error Source: //wsdl:definitions[@targetNamespace='urn:com:foo:services:function:feature:2011']/wsdl:service[@name='Service']/wsdl:port[@name='EndpointPort']

Generating files…
Warning: No code was generated.
If you were trying to generate a client, this could be because the metadata documents did not contain any valid contracts or services or because all contracts/services were discovered to exist in /reference assemblies. Verify that you passed all the metadata documents to the tool.

Warning: If you would like to generate data contracts from schemas make sure to
use the /dataContractOnly option.

Looking through the errors, the first thing that comes to mind is that the schema from a separate namespace being imported could not be found, as per the following error message:

Schema with target namespace 'http://schemas.foo/Services/Foo/2009' could not be found

This might be the case if your WSDL document looks something like the following:

<wsdl:types>
  <xs:schema>
    <xs:import namespace="http://schemas.foo.com/Services/Foo/2009" schemaLocation="Context.xsd"/>
  </xs:schema>
</wsdl:types>

You look at the above WSDL snippet and think: well, Context.xsd is present in the same folder as the WSDL document from which I’m trying to generate a code contract, so why can’t SVCUTIL find it?

It can’t find it because it’s not looking for it. You have to supply SVCUTIL with all the dependent XSD files on the command-line, along with the name and location of the WSDL document, i.e.

SVCUTIL myservice.wsdl context.xsd

Once you execute that command, you’ll get a single C# file with the same name as the WSDL document (in this case myservice.cs) containing all the code required to implement the WSDL. This includes correctly decorated service contracts (interfaces) and correctly decorated data contracts for use with WCF (Windows Communication Foundation). I admit it makes a mess of this, but it’s a starting point and saves you a great deal of time, especially if your data contracts are large and complex, as is often seen in SOA environments.

You’ll probably be familiar with the idea of checking parameters at member level when writing your classes, in order to improve the quality and reliability of your code. The following might be what you’re used to writing/seeing. In fact most of the Microsoft .NET Framework has this littered everywhere: using Red Gate’s .NET Reflector to view the .NET source will show you these checks. Of course .NET Reflector generates code based on the IL (Intermediate Language), which is not a true reflection of the actual source code, but of the intention, or what the CLR will actually execute at runtime:

public void Foo(string foo, string bar)
{
     if (foo == null)
        throw new ArgumentNullException("foo");

     if (bar == null)
        throw new ArgumentNullException("bar");
}

You might be thinking: well, what’s wrong with that? Nothing really, at the outset; it gets the job done. But what if your requirements now get slightly more complex, in that you need to check for empty strings too. So now your code might look like the following:

public void Foo(string foo, string bar)
{
     if (foo == null || foo == string.Empty)
        throw new ArgumentNullException("foo");

     if (bar == null || bar == string.Empty)
        throw new ArgumentNullException("bar");
}

You can see how the code is starting to look more complex than it should. You could actually use string.IsNullOrEmpty to achieve the above, but for the purposes of this example I used two conditions. This is what we call “plumbing”, and the more “plumbing” we have in our code, the harder it is to change and maintain. Also, the above makes code coverage more difficult and increases the cyclomatic complexity to 3 when it could be 1. Cyclomatic complexity is calculated from the number of potential paths through a piece of code: the more potential paths, the more complex the code. As you can imagine, the more parameters we have, the more complex the “guard” clauses we need to validate that data.

The first step in tackling this problem is to do what OO is partly designed for: abstract those clauses out of the important code and into another object or objects. The simple form of the above could be written as follows:

public void Foo(string foo, string bar)
{
     Guard.ArgumentNullOrEmpty(foo, "foo");
     Guard.ArgumentNullOrEmpty(bar, "bar");
}

That is all there is to it!

So now the number of paths through is reduced from 3 to 1. This not only makes testing easier, it simplifies the code and honours the SOLID principles of OOAD.

The static class Guard in this case can have many members on it that validate all different types of parameters.
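
For illustration, a minimal Guard class supporting the example above might look something like this (the member names match the usage shown, but the exact shape is my assumption rather than a canonical implementation):

using System;

public static class Guard
{
    // Throws if the argument is null.
    public static void ArgumentNull(object value, string parameterName)
    {
        if (value == null)
            throw new ArgumentNullException(parameterName);
    }

    // Throws if the string argument is null or empty.
    public static void ArgumentNullOrEmpty(string value, string parameterName)
    {
        if (string.IsNullOrEmpty(value))
            throw new ArgumentNullException(parameterName);
    }
}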

Using the Guard pattern is a really simple way to validate arguments and honour code contracts in your code. But recently there has been a programme out of Microsoft Research called… “Code Contracts” – nice name!

It is a more sophisticated Guard. With Microsoft Code Contracts there is both static and run-time checking. Static checking means that when you compile, Code Contracts will check whether the contract (pre- or post-conditions) has been satisfied or not. This is a very powerful feature, promoting more reliable and higher quality code. Runtime checking uses a binary rewriter that essentially acts like the Guard clause defined above, but offers a templated approach to it.
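
As a rough sketch of what the earlier guard clauses look like expressed as Code Contracts preconditions (Contract.Requires lives in System.Diagnostics.Contracts; the surrounding class is invented for illustration):

using System.Diagnostics.Contracts;

public class FooService
{
    public void Foo(string foo, string bar)
    {
        // Preconditions: verified by the static checker at compile time,
        // and enforced at runtime by the binary rewriter.
        Contract.Requires(!string.IsNullOrEmpty(foo));
        Contract.Requires(!string.IsNullOrEmpty(bar));

        // ... method body ...
    }
}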

Microsoft Code Contracts is an implementation of the modern day Guard pattern.

Check it out here: http://msdn.microsoft.com/en-us/devlabs/dd491992

You can download many code samples from the above site.

Have you ever wanted to package up your Windows Mobile/Phone or Windows CE applications into CAB files as part of your automated nightly build process? This can easily be achieved using either TFS 2010 or TFS 2008.

This article assumes that, if you are using TFS 2010, you are using the Upgrade Template, as this allows you to carry on using the TFSBuild.proj file that gets created in the TeamBuildTypes folder off the Team Project structure, instead of Workflow Foundation, to build and deploy your code.

Wait a minute… you’re probably thinking: why don’t I just use the CAB project template in VS and build that? Well, this would be fine if it were XML. The Windows CE CAB project template is very old and not XML-based, so you can’t build it with MSBuild as part of an automated build process. This does mean you will have to create your own INF file for the CABWIZ.exe application to process. CABWIZ is the tool that actually packages up the INF and artefacts into a distributable CAB file. In Windows CE/Mobile/Phone, the CAB file is like a Microsoft Installer (.MSI) file on the desktop.

This article: http://msdn.microsoft.com/en-us/library/ms839402.aspx shows you how to create an INF and use CABWIZ to execute it on the command-line.
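
For a flavour of what such an INF file looks like, here is a minimal, hypothetical example (the linked MSDN article is the authoritative reference; the application name and file names below are invented):

[Version]
Signature="$Windows NT$"
Provider="SimonrHart"
CESignature="$Windows CE$"

[CEStrings]
AppName="Foo"
InstallDir=%CE1%\%AppName%

[SourceDisksNames]
1=,"Binaries",,.

[SourceDisksFiles]
Foo.exe=1

[DefaultInstall]
CopyFiles=Files.App

[Files.App]
Foo.exe,,,0

[DestinationDirs]
Files.App=0,%InstallDir%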

1. Once you have your INF file, you’ll need to take it, along with CABWIZ.exe, MAKECAB.exe and the cabwiz.ddf file from your development machine, and check them all into source control. A good location for things like this is as follows: $/TeamProject/Trunk/Product/Library/cabwiz/

2. Next you need to create a project file that will actually do the execution of the CABWIZ application. Create a .proj file and give it a meaningful name, something like company.product.Installer.proj, e.g. SimonrHart.Foo.Installer.proj

The project file needs to look something like the following:

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="CreateCab">

  <Target Name="CreateCab">

    <Exec
      Command='"$(SolutionRoot)\Product\Library\cabwiz\cabwiz.exe" "$(BinariesLocation)\SimonrHart.Foo.Installer.inf" /err $(DropLocation)\cab_error.txt' />

  </Target>

</Project>

Check this file into source control; the location is not important, but it can be somewhere like the following: $/TeamProject/Trunk/Product/Source/Scripts. The benefit of this is that when you branch or have multiple streams, your scripts can differ without changing name.

Now, in the TFSBuild.proj file or in a targets file (it doesn’t matter which), create a custom target named something meaningful like Deployment, that looks something like the following:

<Target Name="Deployment">

<Message Text="Generating CAB file" Importance="high"/>

<Copy
SourceFiles="$(SolutionRoot)\Hermit\Source\Scripts
\Installer\SimonrHart.Foo.Installer.inf"
DestinationFolder="$(BinariesRoot)\Release" />

<MSBuild
Projects="$(SolutionRoot)\Product\Source\Scripts\Installer
\Simonrhart.Foo.Installer.proj"
Targets="CreateCab"
Properties="BinariesLocation=$(BinariesRoot)\Release;
SolutionRoot=$(SolutionRoot);
DropLocation=$(DropLocation)\$(BuildNumber);"
/>

<Copy
SourceFiles="$(BinariesRoot)\Release\MyProduct.cab"
DestinationFiles="$(DropLocation)\$(BuildNumber)\Product_$(BuildNumber)
_$(VersionNumberComponent).cab" />

</Target>

The above target copies the INF to the binaries folder, executes the CreateCab target to generate a CAB named MyProduct.cab, and copies the output to the drop location that’s configured as part of the Team Foundation Server build definition.

The next thing is to wire all this up in the TFSBuild.proj file to execute when the build actually runs on the build server.

The perfect place to call this target is in the AfterDropBuild event in the TFSBuild.proj file:

<Target Name="AfterDropBuild">
<Message Text="Configuration flavour is:
%(ConfigurationToBuild.FlavorToBuild)"
Importance="high"/>

<!-- Call the Deployment which creates the CAB file.-->
<CallTarget Targets="Deployment" Condition=
"'%(ConfigurationToBuild.FlavorToBuild)' == 'Release'"   />
</Target>

So this will execute after the code has been built, the tests executed, and the output dropped to the drop location.

Simply check the TFSBuild.proj file into source control, queue a build, then check the drop location for a fully packaged-up application ready for the testers to install!

The next ALM (Application Lifecycle Management) London user group has been announced and will be held on the 9th March 2011 at Hitachi Consulting’s offices in London (near London Bridge).

You can register here (it’s free): http://uk-alm-user-group-march-2011.eventbrite.com/

I always find these events pretty good. The last one (held last year) had Karel Deman from Avanade and Jamie Jones from Confused.com talking about how Microsoft’s Team Foundation Server 2010 was implemented to streamline Confused.com’s development processes. They covered the whole migration experience – from native code, multiple source code repositories and completely manual release management, build and testing environments over to a streamlined, automated ALM tool, Team Foundation Server – and, most importantly, what they learned, good and bad.

In fact Karel and Jamie did a showcase in November 2010 at the ALM Summit in Redmond. You can watch their presentation online here: http://65.55.21.250/showcase/en/us/details/b250a216-c6bd-41a6-9b3d-66046dfbb74e

The interesting thing about the presentation is how they used TFS to support 40+ developers in a high-availability environment, and the configuration and hardware they used to achieve it, including the concept of build controllers in TFS to balance CI and integration builds. They also talk a little bit about the various agile process templates available to anyone implementing TFS today. It is worth a watch.

I’m going to the ALM meet in March, so see you there!

This year’s Autumn JAX conference was actually two conferences in one. The OSGi DevCon shared two days with the main JAX event, and conference delegates were free to attend sessions from either conference. This meant that developers with little experience of OSGi really benefited from gaining insight across sessions. This year’s event was very well attended; the huge conference room in the London Novotel was packed, with people standing. The sessions were only 50 minutes each, which meant they were kept concise and to the point, so no chance of getting bored. The smaller rooms were intimate and made asking questions much less intimidating. You could feel the enthusiasm of everyone during the breaks: people were chatting and networking rather than hiding away behind their laptops. Let’s not forget the lunches either – the food was excellent, with a great choice of dishes to cater for all tastes.

The first session I attended was Ted Neward’s keynote speech, entitled ‘Rethinking Enterprise’. Last year Ted spoke about developers needing to find their unique selling point in a world of cheap developers. His speech this year started off along the same lines. He asked the question that all developers would like answered… “What do I do to ensure I don’t get replaced by a cheaper developer?” What does this have to do with rethinking the enterprise, I wondered?

We are all programmed to jump to the most obvious solution to a problem. When at school we learn a single subject then get tested on that subject. We learn another subject then get tested on that subject. Even if the questions in our test are different to what we are used to, it is a good bet that they are related to the subject we have just learnt about. As adults we attend training courses. After attending a course we return to work with our newly gained knowledge and our boss assigns us a new project. We automatically assume that the new technology we have just learnt about is going to be what we should use to implement this new project. Why else would we have been sent on the course? The same goes for enterprise applications. When someone asks us to design or implement an enterprise application we automatically think J2EE, and more often than not EJB. Admittedly this is a highly simplified and contrived example, but there is no doubt that we have pre-programmed solutions to the problems that we are asked to solve. We may think that we are choosing the ‘best practice’ solution; however, ‘best practice’ suggests that we have finished and there is no room for improvement. Ted challenged us to rethink this and not settle on the first solution that comes to mind. We should really look for ‘good practice’, as this allows us room for improvement. This kind of thinking allows us to design solutions that others may not think of, and therefore gives us that advantage and unique selling point that we need. Don’t just settle on the obvious solution; explore the less obvious and you never know what you may find.

Another interesting and more technical session was on the subject of RESTful services. This session was run by Neal Ford from ThoughtWorks. SOAP-based web services have been around for a while now and are widely used. This session started off with a brief introduction to REST and its benefits over SOAP-style web services. SOAP-based web services are often orchestrated using a heavyweight BPEL engine, which often couples together the application providing the SOAP service and the BPEL engine. The BPEL engine must maintain the state of the process, and the applications involved in the process may also maintain knowledge of the process state. This is where HATEOAS provides a mechanism to manage the state in the applications that are involved in the process. At its most basic level a REST service request is simply an HTTP request using one of the four HTTP verbs: GET, POST, DELETE and PUT. These requests are always resource-based, so you GET an order, or DELETE an order, etc. The parameter string of the request URL contains all the relevant input data. REST and HATEOAS provide the ability for the application providing the service to return the set of potential next links, i.e. state transitions. The state transitions of the process are stored in the links rather than in a heavyweight BPEL engine. A simple example of this idea that we have all seen is a search engine result page. You submit a query to a search engine and are then presented with an HTML page of results. If this were a BPEL process, the BPEL engine would know which page of the results you are viewing. If you click ‘next page’, the BPEL engine would know what page is coming next and return that for display. With HATEOAS, the links on the page contain the actual next page number in the URL. So if we are viewing page 2, the next page link would request page 3 and the previous page link would request page 1. There is no need for the search engine to know what page you are viewing. The state of your search process is implied by the possible transitions stored in the links, therefore encouraging a stateless approach. HATEOAS may well be the future of SOA, providing an alternative to the usual SOAP-based web services.
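
To make the search example concrete, a page-2 response might embed its possible transitions as links, along these lines (a hypothetical payload, not something shown in the session):

    <results query="foo" page="2">
      <!-- ...result items... -->
      <link rel="prev" href="/search?q=foo&amp;page=1"/>
      <link rel="next" href="/search?q=foo&amp;page=3"/>
    </results>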

The final area that I’m going to write about was a session entitled “Software Quality – you know it when you see it” by Erik Dörnenburg. We all want to write quality software, but how do you spot quality software? We often have to work with code that was written by someone else, or code that has been around for years, evolving as requirements change. Whether we are working on a new project or on legacy code, we need to quickly determine the problem areas and issues that exist. We need a view of the application. Our introduction to an application will probably be a diagram consisting of some boxes with lines joining them together: the 30,000 feet view. If we are lucky, a detailed design document may tell us what the application should look like, but these are often out of date and lacking in real detail, so of little use when looking at code quality. Lastly we have the code itself: the 0 feet view. Far too much detail when it comes to understanding an application. What we need is the 1,000 feet view, generated from the code itself. Erik took us through several visualisation techniques that provide a very fast way of understanding an application. The “Code City” was probably the most interesting visualisation.

This is essentially a 3D map of the application that allows us to quickly understand its structure. In this visualisation, classes are represented as buildings, while packages are represented as districts. Problem areas are easily spotted: for example, a huge building may highlight a huge, hard-to-maintain class. Dependencies between packages can be drawn onto the city, allowing us to spot problem areas very quickly. These kinds of visualisation give us that 1,000 feet view where we can see potential issues very quickly. As with all these new techniques, it is the initial setting up of the visualisation that is the time-consuming part, but as tools mature this could be the future of metrics and of tools such as Checkstyle.

Smart421 normally attends the WebSphere User Group; however, this is a very specialised event that may not interest many Java developers. I would certainly recommend the JAX conference to any Java developer, no matter what level of experience you have. In our day jobs we often concentrate on just a few technologies for several months while working on a project. We don’t get the time to explore the technologies, frameworks etc. that give us the inspiration to ‘Rethink the Enterprise’, or just give us a head start on our next project. This kind of conference is a good way to find out about cutting-edge Java technology at a useful level of detail.
