Heartbleed logo

If you have had your head in the sand, then you might just have missed the news of the Heartbleed Bug. I thought that there was quite a good post on what it all meant on Troy Hunt’s blog, but the Codenomicon site that published the exposure is also very good.

The upshot of the bug (really a vulnerability) in OpenSSL is that versions 1.0.1 to 1.0.1f (1.0.1g is now released) contain a buffer over-read vulnerability in the heartbeat extension, which allows attackers to read chunks of memory that may contain the private part of the SSL key (which would allow an attacker to impersonate you) or even users’ passwords.
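As a rough illustration (a sketch, not an official tool), here is how you might flag a vulnerable version string from `openssl version`. Note that distribution-patched builds may have backported the fix without changing the version string, so treat this as a first-pass check only:

```python
import re

# Versions 1.0.1 through 1.0.1f are vulnerable; 1.0.1g contains the fix.
VULNERABLE_LETTERS = set("abcdef")

def is_heartbleed_vulnerable(version_string):
    """Return True if an 'openssl version' string looks vulnerable."""
    m = re.search(r"\b1\.0\.1([a-z]?)\b", version_string)
    if not m:
        return False          # not a 1.0.1 release at all
    letter = m.group(1)
    # plain 1.0.1 and 1.0.1a-f are vulnerable; 1.0.1g onwards is fixed
    return letter == "" or letter in VULNERABLE_LETTERS

print(is_heartbleed_vulnerable("OpenSSL 1.0.1e 11 Feb 2013"))  # True
print(is_heartbleed_vulnerable("OpenSSL 1.0.1g 7 Apr 2014"))   # False
print(is_heartbleed_vulnerable("OpenSSL 0.9.8y 5 Feb 2013"))   # False
```

On a server you would feed this the output of `openssl version`, but for appliances and load balancers you are reliant on the vendor telling you which library version is baked in.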

Services that use the OpenSSL implementation include the Apache HTTP Server and commercial offerings based on it (e.g. by Oracle, IBM etc.), as well as other open-source servers such as nginx. Many commercial products are also based on this code, including firewall appliances, VPN concentrators, and load balancers.

The vulnerability has been out in the wild for over two years, so there is a good chance that a web-site you use has been compromised at some time in the past. Many sites (Google, Amazon etc.) are patching the vulnerability now, but you don’t know whether your password was compromised in the past.

This is yet another reason to use things like password managers, to make sure you have separate passwords for all of your accounts; and for corporations, it is yet another reason to use Single Sign-On software. Even though your password will open more doors, any patching that is required covers many fewer systems. Even if the remote systems that are integrated with your SSO solution have been compromised, they will not have seen any passwords in their traffic, only the session key, which has a limited lifetime.

For example, in the case of ForgeRock’s OpenAM SSO solution, the authentication servers run on JEE platforms. This means that, unless you are running on Tomcat and have configured it to use the native APR libraries, the OpenSSL libraries are not being used, so it will not have been vulnerable. As you will see in other discussions, even if the downstream resources are protected you need to check that upstream resources (load balancers etc.) are not vulnerable, if they terminate the SSL session.

The end result is that there will be quite a few bleeding hearts. Most organisations that use SSL will need to check for vulnerabilities, and patch as appropriate. Then they will need to renew all of their SSL certs on the previously compromised components. And if those certs are shared across multiple hosts (via multiple Subject Alternative Names (SANs)), then even the certs on the invulnerable resources will need to be renewed.

On top of that, most slightly paranoid consumers (including me), will want to renew their passwords once they are confident that the services that they use have been patched. Personally I would advise everyone to do it. Just because you’re not paranoid does not mean that no-one’s out to get you.

Please rate and Like this blog. Share it using the social icons below or via short URL http://bit.ly/1jAFCiq

Our readers want to know what YOU think, so please Comment.

You have probably already seen a press release from ForgeRock about the 11.0 release of the OpenAM access management product.

In terms of new features, one of the ones I want to play with is the new XUI user interface, which will hit prime time in v12 but is available in preview in 11.0. The JATO-based UI remains the default for now. The new UI is JavaScript-based, and easier to customise than the older interface. Both interfaces have a slightly revised branding:

New ForgeRock branding

However, that is not a headline feature. The fact that OpenAM now supports OpenID Connect is a headline feature though, which is impressive as the ink is hardly dry on the specification. OpenID Connect builds on OAuth 2.0 to provide proper authentication, rather than just bearer authorisation tokens which anyone holding them can use. I would expect the social web sites to move to this within the next year, as using OAuth 2.0 on its own for authentication is insecure.

What you may not have heard is that the training has been updated to reflect the new features, and the changes to the internal architecture. As partners of ForgeRock, we were pleased to be able to take part in the beta course of the new training, the only commitment from our side being to provide feedback and suggestions. I feel slightly fraudulent, as these consisted of “great” and “keep up the good work” respectively.

Many of the people involved in the product today were working on it within Sun (when it was called OpenSSO), and it is a sign of the commitment that ForgeRock are putting into the training that it was run by one of the chief architects of the product, Allan Foster, supported by Nathalie Hoet (who developed much of the materials) and Matthias Tristl, another experienced trainer and implementer of ForgeRock products.

The end result was, unusually for any kind of course: labs with well thought-out learning objectives and no bugs. The flow of the lectures was concentrated around building up domain knowledge, rather than the order in which an installation proceeds, which worked very well; especially for those of our team who were new to the subject area.

Somehow the content was well pitched for more experienced people too. I never felt tempted to start catching up on emails, as I did not want to miss any content. One of the other experienced delegates, who has been using the product for three years, also felt he had learned loads more. That probably also comes from having people who can explain why the product is architected the way it is, and what will be changing… and why.

One of the exercises was performing an upgrade from version 10.0, and it was pretty much trivial for a stock install. Understanding the architecture, how customisations are performed, and how they are dealt with during upgrade gives great confidence when performing upgrades. One of the stated principles from version 10 is that upgrade should be ‘painless’ (for the supported enterprise builds), and with the stated aim of a new major release roughly every year, it is definitely in users’ interest to keep up-to-date. We see it in loads of packages (not just IAM): an enterprise will wait until the product is within a year (or six months) of running out of extended support before contemplating an upgrade. By then the package is two versions out of date, and the upgrade is like starting again.

So in summary, the latest version of OpenAM includes OpenID Connect, which shows a commitment to keep up with relevant standards. The roadmap currently shows new releases every third quarter, and it is a design principle for the team that upgrades should be painless. Quite apart from it being good practice to keep any and all software up-to-date, the changes in IAM, with drivers such as social and mobile, mean that change is going to keep coming in this area. It’s a good idea to check your vendor has a good story on upgrades.

Still from "The Prisoner"

… I am not a number! …

ForgeRock has recently refreshed their roadmap for their OpenAM product, and decided to change the numbering of the next major release from 10.2 to 11.0.

“OpenAM” (for those who may not know) is itself a name-change. It springs from the same code base as the Sun “OpenSSO” product. Although the code was under the CDDL open-source license, the name was not, so ForgeRock had to change the name.

“SSO” in the ‘good old days’ (more than 3 years ago in internet time-frames) used to stand for Single Sign-On, and “AM” now stands for Access Manager.  In a similar fashion, the old OpenDS product name became OpenDJ, but that’s another story. Who knows, maybe OpenAM is a better name, given the product’s support for a whole raft of identity-federation technologies.

What’s in a Name? Or a version number come to that? Well probably ‘a fair bit’ if that name is part of your identity.

As for the number-change, though: we have a customer considering an upgrade, and what had before been considered low risk (10.1 -> 10.2) could now be considered high risk, even though it is the same code-base. I guess that uncertainty is the down-side of a numbering scheme change, but the upside is that it implies some major new improvement. We’ll know the motivation for the numbering change after the summit, but I suspect that it is down to the number of extra features, notably support for the OpenID Connect (OIDC) standard.

In the alphabet-soup that is the list of federation standards (let alone implementations), OIDC is another standard with a funny, inconsistent name. Since OIDC builds on OAuth 2.0, and not really on OpenID, it would be better called OAuth 2.0++ or something like that. But since the standard comes from the OpenID Foundation, you can’t begrudge them the name.

OIDC goes a long way to enabling the use of OAuth for Authentication, rather than just Authorisation. Vanilla OAuth is just an Authorisation protocol, and although some Relying Parties (RPs) use Twitter and/or Facebook OAuth access_tokens as implicit proof of identity… having the valet key* to a car does not prove you own the car. The OAuth tokens allow the valet to drive the car (access the resource) in your name. OAuth does not even check that the valet is the same one you gave the key to, which is a problem if you wish to allow (your) apps hosted on un-managed devices (e.g. B2C, BYOD) to access your resources.

Some people are claiming that OpenID is dead, and certainly sites seem to be moving to OAuth, but I quite like the principle of OpenID: that it only notarises your identity, rather than handing over the keys to the car or apartment. OpenID Connect provides the extra layer on top of OAuth to deal with just Authentication. It is explained quite well by Nat Sakimura (one of the standard’s authors), here, with a bit more detail here.
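To make the distinction concrete: with OIDC the RP receives an ID token, a JWT whose payload states who was authenticated, by which issuer, and for which client, whereas a bare OAuth access token carries no such claims for the client to inspect. The sketch below (illustrative only; a real RP must also verify the JWT signature, expiry and nonce, which this deliberately skips) just base64url-decodes the claims section of a toy, unsigned token:

```python
import base64, json

def decode_id_token_payload(id_token):
    """Decode (without verifying!) the claims section of a JWT ID token."""
    payload_b64 = id_token.split(".")[1]
    # JWTs use unpadded base64url; restore the padding before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy (unsigned, hypothetical) token for illustration
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
claims = {"iss": "https://op.example.com", "sub": "user-123",
          "aud": "my-client-id", "exp": 1999999999}
payload = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).rstrip(b"=").decode()
token = header + "." + payload + "."

decoded = decode_id_token_payload(token)
print(decoded["sub"])   # the authenticated subject: user-123
print(decoded["aud"])   # which client the token was issued to
```

The `aud` claim is the crucial bit the valet-key analogy is missing: an ID token is bound to one client, so a stolen token replayed to a different RP can be rejected.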

The subtle distinctions between standards require a good deal of concentration for professionals to internalise. But in addition to ensuring that you are supporting the appropriate protocol for your use-case, steps need to be taken to ensure that you are not mandating behaviours in your user community which leave them open to social-engineering attacks.

Exposure to such attacks can leave the consumer with damage to their good name (identity theft), or to the service provider’s good name (brand damage). But hey – what’s in a name? As yet few companies are putting a $-value on the benefit of avoiding brand-damage. Most companies will find that there is more in a name than they thought.

*I have never seen a valet key which gives the holder the ability to drive a car, but not open the glove box or boot. Only in America.

Please share this blog using the social button below or with short URL http://bit.ly/18OFAuL

Please Rate and Like this blog.  Our readers want know what YOU think, so please leave a Comment.

Ant Allan speaking at 2013 Gartner IAM

Ant Allan speaking at 2013 Gartner IAM Summit, London.
Photo by David Taylor

In the life of an IT professional you end up learning a good deal of information relevant to your profession, at seminars and conferences. When those conferences deal with security, as the Gartner Identity & Access Management Summit 2013 in London near Westminster last week did, then you will often take away lessons for your personal life too.

So when Jeffrey Robinson (@WritingFactory) told us in the closing keynote that 100% of us would be subject to an electronic fraud attack over the course of the next year – multiple times… he was underlining a theme that had been running through the sessions over the two fully-packed days.

Hacking and cyber attacks are no longer (and have not been for a while) a form of entertainment for script kiddies to boast about pwn’ing your server, or workstation… they are big business with plenty of resources.

Peter Armstrong, Identity Assurance Tech Director at CESG, had underlined this on Monday (11 March, 4.15pm) in his session “How to Attack an Online Service”, when he explained that part of the Ice IX attack (fuller explanation by John Leyden in The Register, 02 Feb 2012) involves setting up a call centre and diverting your house phone there, so that when the bank calls to verify the suspicious transactions that start appearing, the call centre has enough people to find someone of the same sex, age and regional accent to pretend to be you. Then, using a screen with all the details previously harvested from your hard disk, they can answer questions about your favourite football team, pet dog, music and film tastes, mother’s maiden name… et voilà!

Criminals no longer bother to rob banks directly, as consumers are much softer targets, and the risks are much lower. Cyber criminals are very unlikely to even see the inside of a courtroom, let alone the inside of a prison; and the rewards are high. Cyber crime is just not going to go away. It is an arms race.

And on the mobile front it is pretty one-sided. Apparently 30% of mobiles have malware on them, with Android being a favourite target, along with Symbian and even (surprisingly) BlackBerry.

Search the net for ZitMo for one of the big examples.

There are even trojans turning up in the curated app stores. The numbers in the un-curated stores, for hacked and rooted hardware, must be frightening. The one major defence against having all your passwords (good or bad, weak or strong) harvested by key-loggers on your desktop or laptop is good hygiene. There are even viruses which download themselves to your Android’s storage, and then infect your PC when you next sync, or mount the phone as a mass storage device.

All this raises big questions about the suitability of the Bring Your Own Device (BYOD) model.

Various exhibitors showcased Mobile Device Management (MDM) products, and walled gardens… and there are various strategies for securing applications in themselves, but most mechanisms that do not classify the device as “totally untrusted” rely on the underlying OS not being compromised. Would a firm be expected to support a rooted device in addition to a stock device? Is it easy to keep corporate data safe, even in “totally untrusted” mode?

So as well as the professional angle of what I do, the IAM conference got me thinking about my own personal digital safety.

Many thanks to our partners, ForgeRock, for inviting me along. It was good to get to know more of their excellent team there too.

Please Rate and Like this blog.  If you can, please leave a Comment.

Photo by East Coast Main Line

I have had the luck to attend the WebSphere User Group (WUG) at the Royal Society of Edinburgh on George Street again this year. This is a bi-annual event, hosted in Edinburgh in the autumn and, from this year, at IBM’s facilities on the South Bank in London in the spring.

The good luck was being able to attend at all, when maybe a third of the delegates failed to make it because of the bad weather; the bad luck was the East Coast Main Line flooding on the way down, causing a very late return home.

There were a few interesting sessions, including one on the Worklight acquisition, for developing mobile applications for a variety of target devices, including iOS and Android. Possibly more on that later. There was also a good session by Alan Chambers on sample use-cases for WebSphere eXtreme Scale, which is a distributed in-memory caching technology. This is an interesting area, which merits further attention. The slide decks for the various sessions, including ones I could not get to, are on the WUG site.

David Sayers of MidVision also gave a talk about DevOps, the set of disciplines for bringing development and operations closer to each other. Although MidVision supply a tool in this space, David was keen to stay away from specific tools, saying that there is no magic bullet, and that it’s about process and people too.

A phrase which struck a chord with me went something like: “many firms don’t want to make a change in a production system because ‘Steve’ is on holiday and he’s the only person who understands this”.

It’s a spooky coincidence, as we have just published a development policy, as part of our policy refresh, stating that all environments, and deployments to those environments, should be 100% automated.

The presentation I want to elaborate on a bit this time is “How Lightweight is the [WebSphere] Liberty Profile?”. The Liberty profile is part of WebSphere Application Server (WAS) 8.5.

Simon Maple (Twitter @sjmaple), one of IBM’s technical evangelists on WAS, explained that this profile is an OSGi-based application server kernel which only loads up libraries and subsystems as you need them. The end result is a *very* lightweight application server.

So much so, that the session involved showing the application server running on a Raspberry Pi (£20-odd computer, the size of the palm of your hand, delivered as a circuit board).

To follow this up Simon then started up a WAS instance on his older android phone which was then serving up a blogging application via the phone’s wireless hotspot. I connected to it with my phone, and posted: “Amazing!” (yes Oscar Wilde won’t be looking over his shoulder), which then showed up on his monitor, along with some more imaginative posts.

I have the tooling, which was provided on a memory key in our “info” shared area for any Smarties to download.

The Liberty profile tooling (Eclipse plugins) even runs on a Mac, along with the dev runtime. Even though this runtime is not supported in production on a Mac, this is a pretty major step for IBM; I would not have imagined it five years ago.
In terms of production use though, the Liberty profile is still a standard WAS install from the perspective of licensing… though I’m not sure how many PVUs a Raspberry Pi has.

IBM also have a new Head of WebSphere Software, Stuart Hemsley, who was keen to get feedback from the delegates, both by announcement at the keynote, and by walking around during the breaks.

Our feedback was that the application server just costs too much compared to the likes of Tomcat and JBoss, and includes technologies which are becoming less relevant (e.g. session clustering) as application architectures move towards session-statelessness. Yes, you would expect to pay a premium for a big-league vendor-supported product, but not 10x as much.

It would be a shame for IBM to lose out on market share because of pricing, when they provide excellent tooling and support, as shown by a session on performance tuning the JVM… but that (as they say) is another story.

I took a break from the wet weather in the south of England in order to enjoy the wet weather in Edinburgh, and attend the WebSphere User Group. For one thing, our Alan Philpott was also going up there to give a presentation on applying SDLC best practice to DataPower devices. Also, Bharat Bhushan would be presenting on trouble-shooting and performance tuning of the devices. I have the latter’s slides for any of our DP rangers who want a walk-through; I will post a PDF of them on SharePoint.

The WUG had a bit of a wobble last year, when some of the committee (who do their work for ‘free’) found that they no longer had the spare capacity to put in. The committee are still looking for volunteers, and some people questioned whether the WUG would continue to be viable. From the attendance levels at this recent meeting, the degree of organisation, and the quality of the presentations, you would not notice any issues.

There was an interesting set of presentations (full details here), but the stuff that caught my eye was the OSGi stream. The OSGi framework (the dynamic module system for Java) has been around for a few years, not least as the underpinning technology behind Eclipse since version 3. There is a good definition of it on Wikipedia. I have seen it before in a container used to host the Apache ServiceMix ESB (a.k.a. FUSE 4). Adrian Trenaman gives a good run-through here.

What has been interesting has been seeing this technology hitting the mainstream (IBM), and being taken seriously there. I attended two presentations, one of which covered the tooling support in Rational Application Developer (RAD) 8, and another covering the support for OSGi in WAS 8. It was previously available as a feature pack for WAS 7.0. The interesting part here was to see that the technology was being incorporated for simple pragmatic reasons (ability to load different versions of libraries into the same JVM – to be used by different applications, lowering memory footprint by sharing libraries amongst several applications) – rather than as some new-age magic.

I have grossly over-simplified in the above, but it may be a prompter for anyone who is interested to follow it up.

The other major new thing for me was WebSphere eXtreme Scale (WXS). On first seeing the title of the presentation, I thought that it would be a re-branding of either WAS ND or WAS XD. Wrong! “It’s just a cache”, as Jonathan Marshall announced at the beginning of his presentation, hugely underselling both the use of caching technologies and the product itself, before going on to show their benefits in his talk. Having recently been involved in two use-cases for caching technologies (dynamic content management, and web-session management in hugely scaling environments), both situations where databases struggle and application-server session-sharing mechanisms struggle (the old n^2 communication overhead as the number of app servers increases), I could appreciate the problems that the WXS product is trying to solve.

WXS is not the only player in this space. As well as other commercial offerings (which I had previously heard issues about) there are several open source alternatives (e.g. memcached), but since the general area is very new… not all of the products implement all of the features (e.g. active standby). Indeed there is probably not a common consensus of what all the features are. I will distribute slides on the internal sharepoint site. They should become available on the WUG site in due course. The take-away news for me is that IBM have a seemingly very capable product in this area, now, which merits investigation in some of my existing scenarios, and also new scenarios such as using the cache as a system of record.
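None of this is specific to WXS, but for anyone new to the area, the core idea these products build on, a key-value cache with a time-to-live, can be sketched in a few lines. This is an illustrative toy in Python, not the WXS API; real products add partitioning, replication and active standby on top of it:

```python
import time

class TTLCache:
    """A toy in-memory key-value cache with per-entry expiry."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}   # key -> (value, expiry_timestamp)

    def put(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:
            del self.store[key]    # lazily evict stale entries
            return None
        return value

# Web-session management is exactly this shape: session id -> session state
cache = TTLCache(ttl_seconds=60)
cache.put("session:42", {"user": "alice"})
print(cache.get("session:42"))   # {'user': 'alice'}
print(cache.get("session:99"))   # None (never stored)
```

The point of the distributed products is that the `store` dictionary is spread and replicated across a grid of JVMs, so the cache survives the loss of any one server and avoids the n^2 session-sharing chatter mentioned above.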

So even after a few years of going along to the WUG, it still has the ability to make me sit up and take notice! Long live the…

I also attended the IBM Impact 2010 event, but did not go to quite the same talks as Paul Russel. I have not got around to making a few notes until now, which for once is not a bad thing, as it has given me the opportunity to see how IBM have followed up since the event. For one thing they have made a quite comprehensive set of slide sets available for download here. I shall no doubt bone up on a couple of the sessions that I was interested in, but could not make, as I could not be in two places at one time.

Although the event centred on the WebSphere product set, it was different in flavour to the WebSphere User Group and WebSphere Integration User Group events, in that it was explicitly targeting IT decision makers (architects and upwards), whereas the user groups, apart from being independent, tend to target the people involved in the implementation a bit more.

As a quick aside, I notice that the WebSphere User Group website has had a bit of a refresh recently, to include at least some downloads from its March event, if not yet the information about its September event in Edinburgh this year – encouraging news after the hiatus resulting from leadership changes for the user group.

Back at the Impact event, there were separate streams for business areas (smarter work in Financial Services and Retail respectively), as well as the usual streams that are more centred on technology.

The financial services stream kicked off with a very interesting presentation by Nigel Solkhon, which provided a revealing view of the business environment around banking, especially post credit crunch. For example, the fourth slide of his presentation has a graph of various banks’ loan/deposit ratios, and what happened next.

Richard Brown did an interesting presentation on the use of WebSphere Business Monitor to instrument an existing technical process that had been working inconsistently, and still derive value: both in the form of reduced errors and inconsistencies, and then by using the monitoring to document the ‘real’ business process, instead of what people thought was happening.

I also attended a session on ‘solving the business case challenge’ by Andy Martin. I never knew that there were experts within IBM whose main job is helping clients come up with a business case. Especially for horizontal investments (which are often a challenge to IT shops in large organisations) this could provide value. I think that it is a resource that we could, and should, follow up on.

According to the blurb on the Impact microsite: ‘The first [my emphasis] UK Impact 2010 was a resounding success’, which leads me to think that there will be more. I shall be recommending my practice members to attend, as it is a useful and informative event.

IaaS, PaaS, SaaS and now introducing SCaaS…. Public Cloud Computing gets Real

… for me at least. And it turns out that it has been real for some firms for several years.

I attended a session held by IBM and Amazon Web Services (AWS) today at Hursley labs in Hampshire. The title of the session was: “Software as a Service: Getting started with IBM software on Amazon Web Services featuring WebSphere sMash & DB2”.

The name kind of disguised what a mind-expanding day it was going to be. There were two major aspects to the day:

  • that the commercial model for cloud computing is alive and well, and that we are at the beginning of a brand new ecosystem. This is as big as the first phase of the world wide web was, and bigger than web 2.0 by far
  • the hands on proof that all the parts worked.

AWS is a provider of IaaS, and as I found out when signing up for my AWS account, by reading the Ts & Cs, Amazon also has a subsidiary, Alexa, which provides Service Components as a Service (call it SCaaS or CaaS for short), notably search and indexing.

Here, though, we will look at two of the major offerings of AWS, namely the Simple Storage Service (S3) and the Elastic Compute Cloud (EC2). There are quite a few other offerings from AWS, which are interesting if you are using IaaS, but these core offerings illustrate why IaaS is more than a flash in the pan.

S3 is simply the provision of secure storage. Secure in the sense that it is private, and in the sense that it is persisted across multiple data-centres.

EC2 is the ability to provision (i.e. not just turn on, but create) virtual machines, specifically various flavours of Intel machines running various OSs, within minutes; and then, just as quickly, to be able to release the machines. The machine images from which EC2 instances are launched are called Amazon Machine Images (AMIs).

The story of how Amazon came to be providing infrastructure on demand, originally for its internal consumption, arises out of their desire to remove ‘undifferentiated heavy lifting’, e.g. how do we provide secure storage? (now provided by S3). They wanted to save each team having to solve similar heavy-lifting problems, so they could concentrate on adding value. The end result is that the one IaaS provider I now know something about is already in a position to provide cheaper infrastructure to most blue chips than any internal IT department, including those run by outsourcing outfits. AWS certainly will not be the only game in town; we can expect more players to join the market, which will make cloud IaaS offerings even more competitive:

  • AWS have a large bank of hypervisors (hardware on which virtual machines can run), and have sorted out the provisioning of Virtual Machines to those hypervisors such that:
    • The provisioning is controlled by the user, either through a web gui, or via web-services invocations
    • The provisioning can be billed by the hour
    • The provisioning can be automated to provide scaling on demand
  • The hypervisors (i.e. the actual hardware) tend to run at about 80% utilisation. That is about sixteen times as much as most in-house IT shops. This means that AWS has one sixteenth of the cost of hardware, power, space, and cooling for any given set of software running.
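The arithmetic behind that utilisation claim is worth spelling out. The implied in-house figure of around 5% is my own back-of-envelope assumption to make the "sixteen times" work, not a number from the session:

```python
# Rough cost-per-useful-cycle comparison based on utilisation.
# Assumption: a typical in-house server runs at ~5% utilisation,
# versus the ~80% quoted for AWS hypervisors.
aws_utilisation = 0.80
in_house_utilisation = 0.05

# For the same useful work, the under-utilised estate needs this many
# times more hardware (and hence power, space and cooling):
hardware_multiple = aws_utilisation / in_house_utilisation
print(hardware_multiple)   # 16.0
```

However you tweak the in-house figure, the gap is large enough that economies of scale alone do not explain AWS's price advantage; utilisation does most of the work.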

Before I went to the session, I was aware that one of the benefits of cloud IaaS was that it was on-demand. I had no idea that the costs for running the machines 24×7 would be so much cheaper than most blue-chip’s IT shops.

The lab sessions were set up around the use of WebSphere sMash and IBM DB2 9.2 CE, all running on EC2 virtual machine instances launched from Amazon Machine Images (AMIs).

This is very much like the session I had a couple of weeks ago with the WebSphere CloudBurst appliance. CloudBurst makes it easy to deploy multiple instances of WebSphere Application Server (WAS) to a set of hypervisors (machines which run as hosts to one or more virtual machines). The CloudBurst appliance was very slick in its operation, and a big step towards virtualisation and re-use within a datacentre.

Setting up all of this virtualisation (i.e. the hypervisor estate on which to install and un-install the various virtual machines) within an IT shop requires a fair amount of capital expenditure, not to mention time.

The big difference between the cloudburst session, and this session is that the hypervisors are already there, ready to be used in the Amazon EC2 cloud: i.e. the capacity is there already. No capital investment required. No lead time. The labs allowed us to set up and instantiate the AMIs via a web gui with little fuss. A web-services API is also provided for this, with the relevant (X.509 and other) security mechanisms, and there is a vibrant developer community writing libraries in various languages to invoke these web-services.

The AMIs come in a variety of flavours in terms of base operating system: Windows, various Linux distros, OpenSolaris. The second innovation is that commercial software is also available by the hour, just by paying a bit more on the hourly rate. So an AMI with a ‘free’ OS is cheapest ($0.0095/hr at the time of the course, and now $0.0085/hr). Windows costs a bit more (currently $0.120/hr). All of the AMIs come with the necessary software to work with the AWS infrastructure – for instance, to be able to capture snapshots of themselves (after configuration) for further cloning, or to connect to S3 storage.

IBM is also taking part, in that it is offering AMIs with several of its software products. Often the developer version is ‘free’, in that it costs the same as the minimum AMI cost with a bare OS. Then, when moving to production, the paid-for version is available as a higher-cost AMI (say $0.36/hr for DB2 Express edition, or $1.25/hr for the Workgroup edition). Similar terms exist for other IBM products (WAS, WebSphere Portal Server, sMash). This is a major step forward, as it allows the complete stack to be paid for on demand. IBM also allows you to bring your existing licenses to the infrastructure running on AWS.
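To put those hourly rates in perspective, here is a quick sketch of what ‘pay by the hour’ means over a month, using the DB2 Express figure above (the usage patterns are illustrative assumptions, not AWS billing rules):

```python
# What does an hourly-billed AMI cost if left running 24x7 for a month?
hourly_rate = 0.36          # $/hr quoted above for DB2 Express edition
hours_per_month = 24 * 30   # a 30-day month

monthly_cost = hourly_rate * hours_per_month
print(round(monthly_cost, 2))   # 259.2

# The point of on-demand, of course, is that you only pay while running,
# e.g. a development box used office hours only, 20 working days a month:
dev_hours = 8 * 20
print(round(hourly_rate * dev_hours, 2))   # 57.6
```

That difference between the always-on and office-hours figures is exactly why highly variable workloads are the compelling first case for IaaS.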

It is worth noting that although IBM is a frontrunner, it is not alone in working with AWS in this way. Microsoft and Oracle also have database instances available, for example.

In summary, by looking at AWS and IBM (so we expect the market to be at least as good as this) we can see that:

  • Cloud IaaS offerings are already here, making the infrastructure offerings of in-house IT look expensive.
  • The management of that infrastructure by the hour, and the ability to pay by the hour (no capital expenditure), is compelling, especially for organisations with highly variable load.
  • AWS has a good story around data protection, Sarbanes-Oxley, and other FUD-inducing regulatory measures.
  • Value-added ‘heavy lifting’ like storage and load-balancing greatly simplifies development and deployment (users do not have to be Cisco and EMC black-belts), thus reducing development costs.
  • There will be a growing number of value-add components (e.g. card merchant services, search engines in the cloud), such that many large solutions will soon have more components ‘in the cloud’ than ‘in our data-centre’.

So now I am going to stick my neck out and say that within three years there will be no large organisation that does not investigate cloud IaaS for new infrastructure installs (including refresh operations); and within ten years, 50% of all large organisations (£100m+ turnover) will have at least 50% of their data-centres ‘in the cloud’. The proportion will be higher (say 80%) for smaller organisations.

In part 1 I set out my requirements for evaluating and choosing an open source ESB, and promptly fall down a rabbit hole of OSGi and Maven when I get to see how much more there is to them than I was previously aware of.

From time to time we get requests on how to get started with middle-ware technology on the cheap. Here the emphasis is just on connecting service providers and consumers, without getting into anything fancy like orchestration or service repositories. Of course, in an ideal world the solution should not rule the latter out.
So here are the requirements I can distil from a few of these conversations.

  • Open source, for low cost and wider options when upgrades appear – i.e. not always being forced onto the latest version.
  • Must handle WS-Security and WS-Addressing.
  • Freedom to choose the java-XML binding framework.
  • Supports contract-first service design (as opposed to generating the service artefacts (WSDLs, schemata) from java classes).
  • The run-time is ‘light’: i.e. when service-enabling components are deployed on the same machine as the application being service-enabled, they do not gobble up all of the resources.

Contract-first development is very important in a heterogeneous environment. See arguments on the object–XML impedance mismatch here. Another way of putting it is: if you are going to stick with just one language (e.g. java), then why bother with XML in the first place – just go with some RMI technology such as RMI-IIOP. If we are using web services, then interoperability is a big consideration, and for that we have to think contract-first.
One of the reasons for separating the java-XML binding from the web-service end-point binding code is that it is great to use the same POJO to describe an entity, whether it is serialised to XML, persisted to a database, or used as a value object.
On the one hand it is good practice to work with web services in a contract-first style; on the other hand, if you use the code (specifically the domain objects to which the XML is bound) throughout your application, then you introduce a dependency on the generated XML-marshalling classes, which is not great either. In an automated build environment, it means building your gen-src directory from the schema before you can write any src code which uses it.
In the past I have got around this by generating class libraries of domain objects using JAXB, and then importing the resulting jar into any code (both client- and server-side) which manipulated these objects. The compromise at the time was that I ended up writing my own endpoints (servlets) to expose the web services – which is OK when there is not much WS-* (e.g. Addressing, Security) going on.
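As a sketch of that approach (the schema and package names here are made up purely for illustration): generate the domain classes from a schema into their own source tree, then jar them up for reuse on both client and server side.

```shell
# Create a minimal schema to generate domain objects from
# (names are illustrative, not from any real project).
mkdir -p domain/src/gen
cat > domain/customer.xsd <<'EOF'
<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/domain"
           xmlns="http://example.com/domain"
           elementFormDefault="qualified">
  <xs:element name="customer">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="id" type="xs:int"/>
        <xs:element name="name" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
EOF
# xjc (the JAXB schema compiler, bundled with JDK 6-8) turns the schema
# into Java beans under the chosen package; the result can be jar'd
# and shared by any code that manipulates these domain objects.
if command -v xjc >/dev/null 2>&1; then
  xjc -d domain/src/gen -p com.example.domain domain/customer.xsd
fi
```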

I wanted to see if the latest incarnation of the open source frameworks would enable contract-first development, with generation of the domain objects first (such that they could also be used in persistence layers and as value objects), and relatively easy handling of WS-Security and WS-Addressing.
The new kids on the block for me are ServiceMix (a.k.a. FUSE), Apache CXF, Spring-WS, and Sun's JAX-WS.

The previous time around, the players had been Apache Axis, WSIF, and JAX-RPC – oh, and I almost forgot Castor. Each of these had its own java-XML binding code, and none of the produced beans were interoperable between frameworks. Even stand-alone java-XML binding frameworks like JAXB (1.x) were not interoperable with the objects generated by the web-service bindings (e.g. JAX-RPC).

Anyway, enough of the background… The first two I wanted to look at were FUSE and Spring-WS, as they both allow contract-first development (Spring-WS will not allow anything else) and they both support Spring and its IoC (another Good Thing, but that’s a different discussion).

I had only got around to looking at FUSE when I fell down the first rabbit hole: OSGi and Maven. I have had a look at the excellent video tutorials by Adrian Trenaman of Progress (formerly IONA) Software (see the demo videos tab at the bottom of this page).
I had been aware of Maven for a while as a ‘slightly better Ant’, but the demo and a bit more digging around reveal that there are two big extra features in Maven which move the game on a whole lot further:
Firstly, there is the project template feature, whereby you can create a java project, complete with Maven build file, from a single command. The command builds the appropriate directory structure and even the correct pom.xml (the Maven equivalent of an Ant build.xml file). Although this had been demonstrated to me before, it has only really sunk in this time what a big deal it is.

We have in the past put a lot of energy into our automated build system, based around Ant. For it to work well, there is a mandated project directory structure, and a set of files and libraries has to be in the right places relative to each other; there’s a bit of a learning curve on top of Ant to understand what is going on. The template projects from Maven give you all that in one go. That becomes especially evident when you try a new kind of project, for example an OSGi plug-in project: you just run Maven with the proper archetype and bingo…
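As an illustration: the quickstart archetype lays down the standard Maven directory conventions in one go. The group and artifact ids below are placeholders, and the layout is sketched by hand so you can see what the generated project looks like.

```shell
# With Maven installed, one command generates the whole project:
#   mvn archetype:generate -DgroupId=com.example -DartifactId=myapp \
#       -DarchetypeArtifactId=maven-archetype-quickstart
# The layout it produces follows the standard Maven conventions,
# recreated here by hand for illustration:
mkdir -p myapp/src/main/java/com/example   # production sources
mkdir -p myapp/src/test/java/com/example   # test sources
touch myapp/pom.xml                        # the generated POM lives here
ls -R myapp
```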

Secondly, there is the repository system. You can configure a set of remote repositories, and just by specifying a library in your project file (pom.xml), the Maven build will fetch that library (at the version you specify) into a local repository on your machine, which is then shared amongst all of your projects. Again, you notice how powerful this is when you download a Maven-enabled project and, on first build, it fetches all of the libraries it depends on (unless you already have them locally). A large number of common shared libraries (e.g. most of the well-known Apache projects) are available in the default repositories, and it is possible to configure which external repositories are trusted and should be used.
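As a sketch, declaring a dependency in a pom.xml looks like this (the coordinates below are just an illustrative, era-appropriate example):

```xml
<dependencies>
  <!-- Maven resolves these coordinates against the configured repositories,
       downloading the jar once into the shared local repository (~/.m2). -->
  <dependency>
    <groupId>commons-lang</groupId>
    <artifactId>commons-lang</artifactId>
    <version>2.4</version>
  </dependency>
</dependencies>
```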

The repository system has effectively become just another resource, to the extent that when installing an OSGi bundle from the OSGi console (more on this next time), ‘mvn:’ is named as the protocol for a given resource. The resource is then seamlessly retrieved, either from local storage or from one of the configured remote repositories.
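For example, installing a bundle from a ServiceMix/Karaf-style console might look like this (the exact command name varies by console version, and the bundle coordinates are illustrative):

```
servicemix> osgi:install -s mvn:org.apache.camel/camel-core/1.6.0
```

The `mvn:` URL follows the `mvn:groupId/artifactId/version` pattern, so the console resolves the bundle through exactly the same repository machinery as a build would.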

All clever stuff.

So, from starting to look at open source middle-ware, I have fallen down a couple of rabbit holes. The Maven excursion is definitely going to make me sit up and give it a much closer look (talk about being a late adopter!). The second rabbit hole for me was OSGi; more on that next time. Then it will be back on track for the open source middle-ware.

