This week I attended a Cloudburst ‘Proof of Technology’ day at IBM’s Hursley Labs and I wanted to share my initial thoughts following this chance to play with the appliance. It was a scripted ‘do this, then that’ set of labs along with some presentations, so not exactly real tyre kicking, but enough for things to click into place and for some ideas about its value to form.

Colleagues have already blogged about this new appliance so I’ll keep it simple and liken it to a vending machine for virtualised WAS environments. You pick the one you want, press the button, wait and, ta-da, out it drops into your private cloud all ready to consume.

Its on-demand nature makes this compelling. Having spent significant periods of time creating virtual environments to play with various products, this appeals to my fast-food consumer tendencies, and although today it’s only WAS HV edition, DB2 HV is days away, with Process Server and Portal Server HV editions following later.

Let’s imagine you are running WAS, have a number of servers whose investment you are trying to maximise with a little virtualised sharing of hardware, and you want clean, controlled dev and test environments that can be simply and repeatably rolled out on demand and torn down when not required, with little fuss. You deploy the environment when you need to use it and, when done, tear it down. It doesn’t have to sit there consuming resources for any longer than the time you actually need it, because when you do need it again you can faithfully recreate it. You can timebox the period that your environment is available and, if this period is not extended, your environment is removed.
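The timebox idea above could be sketched like this (a toy Python model of a deploy-on-demand, auto-expiring lease – the class and method names are mine for illustration, nothing to do with CloudBurst’s actual internals):

```python
from datetime import datetime, timedelta

class EnvironmentLease:
    """Toy model of a timeboxed environment: deployed on demand, expiring
    automatically unless extended. Illustrative only, not CloudBurst's API."""

    def __init__(self, pattern, hours=8, now=None):
        self.pattern = pattern
        now = now or datetime.now()
        self.expires = now + timedelta(hours=hours)

    def extend(self, hours):
        # Extending the timebox pushes the teardown deadline out.
        self.expires += timedelta(hours=hours)

    def is_expired(self, now=None):
        return (now or datetime.now()) >= self.expires

# A small-cluster environment leased for a working day:
lease = EnvironmentLease("small-cluster", hours=8,
                         now=datetime(2009, 7, 1, 9, 0))
print(lease.is_expired(datetime(2009, 7, 1, 16, 0)))  # False: still in window
lease.extend(4)
print(lease.is_expired(datetime(2009, 7, 1, 18, 0)))  # False: extension applied
```

If the lease is never extended, the environment is simply reaped at the deadline – and since the pattern can faithfully recreate it, nothing is lost.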

That’s what you get with this, and it’s so simple you can empower anyone who genuinely requires this level of access and automation, potentially removing a period of (let’s be honest) faffing around to acquire the environment, which could save huge amounts of time and money. There’s no elasticity: you pick your pattern, and if you have 2 nodes, that’s your lot. Cloudburst does do some monitoring, but it’s not going to give you any more nodes when things are maxing out. It really is simple though.

It’s going to cost you, but nowhere near what you would spend scripting this level of automation and control, and IBM are funding the ongoing development and maintenance costs, with version 1.1 of the firmware coming so soon you can smell it and 2.0 well on the way.

Does it need to be a hardware appliance? I’m not sure, but it is, and it’s available today, in purple. That must have upset the DataPower guys, who already have a purple box in the form of the XB60 B2B appliance. So if you have both, don’t rack them up too close together, or you just know one day someone is going to unplug the wrong one.

Is it possible to be brainwashed in a day? Possibly, but I believe that if you have an investment in WAS you should have a look at this appliance and what it brings to the table. Give us a call and we can put you in touch with a man who has one in a flight case with a pair of servers attached (he brings his own cloud!). He’ll come and see you, let you have a go and you can decide for yourself.

Following on from the previous post, here are some more random notes about CloudBurst – pros, cons, features, limitations etc.

One thing that occurred to me was “Why is CloudBurst a hardware appliance?” – it could just be a software appliance. Well, the key reason, it seems to me, is that it holds lots of OS/WAS admin passwords etc, and so the ‘hardening’ of a hardware appliance, with tamper-resistance etc, is a key feature.

Deployment patterns and deploying

WAS Hypervisor Edition is actually an OVA image with 4 virtual disks, with multiple WAS profiles already set up – so the Hypervisor Edition is a VM image rather than a customised WAS code base, it seems.

There are patterns for a single server deployment, small cluster (3 VMs), large cluster (15 VMs) etc. You can modify a standard pre-packaged VM (e.g. add a fixpack etc) and then ‘capture’ back into CloudBurst as a standard catalogue VM for use in new patterns.

Control is available over whether certain pattern properties (passwords, memory size etc) can be overridden for each instance of that pattern or not.
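To make the locked-vs-overridable property idea concrete, here’s a toy Python sketch (the class, property names and behaviour are all mine for illustration – this is not CloudBurst’s actual API):

```python
class Pattern:
    """Illustrative sketch of a deployment pattern whose properties can be
    marked overridable (per instance) or locked down by the admin."""

    def __init__(self, name, properties, overridable):
        self.name = name
        self.properties = dict(properties)
        self.overridable = set(overridable)

    def deploy(self, **overrides):
        # Reject attempts to override a locked property.
        for key in overrides:
            if key not in self.overridable:
                raise ValueError(f"property '{key}' is locked for {self.name}")
        instance = dict(self.properties)
        instance.update(overrides)
        return instance

small_cluster = Pattern(
    "small-cluster",  # e.g. 3 VMs: deployment manager plus two nodes
    {"vm_count": 3, "memory_mb": 2048, "was_admin_password": "changeit"},
    overridable={"memory_mb"},  # admin allows only memory tweaks per instance
)
print(small_cluster.deploy(memory_mb=4096)["memory_mb"])  # 4096
```

Trying to override the admin password here would raise an error, which is the point: the pattern author decides what each consumer of the pattern is allowed to vary.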

A key point – keep track of any changes made to your VM patterns (e.g. any tuning done) and then ‘bake’ them into the pattern in CloudBurst so that any future deployments get the changes – otherwise they’ll be lost when you redeploy the pattern.

The first image transfer to each hypervisor can take up to 1 hour (obviously this is environment dependent)!

IP addresses are allocated on the fly when deploying, i.e. it pulls them from a pool of available IP addresses that the admin user sets up.
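The pool-of-addresses idea is simple enough to sketch in a few lines of Python (a minimal illustration of the concept, not CloudBurst’s implementation):

```python
class IPPool:
    """Minimal sketch of an admin-defined address pool that deployments
    draw from on the fly and return addresses to on teardown."""

    def __init__(self, addresses):
        self.free = list(addresses)
        self.in_use = set()

    def allocate(self):
        if not self.free:
            raise RuntimeError("address pool exhausted")
        addr = self.free.pop(0)
        self.in_use.add(addr)
        return addr

    def release(self, addr):
        # Teardown hands the address back for future deployments.
        self.in_use.discard(addr)
        self.free.append(addr)

pool = IPPool([f"10.0.1.{n}" for n in range(10, 13)])
vm_ips = [pool.allocate() for _ in range(3)]  # one per VM in a small cluster
print(vm_ips)  # ['10.0.1.10', '10.0.1.11', '10.0.1.12']
```

One practical consequence: the pool’s size caps how many VMs you can have deployed at once, so the admin sizing it is effectively sizing the cloud.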

Script packages

Post deployment, CloudBurst can run any script, not just wsadmin scripts – essentially it SSHes over to the VM and uses a zip file and an executable name (e.g. <some_path>/) with some arguments (e.g. which JACL file to run). wsadmin scripts can be used against the deployment manager VM to install an application (EAR file) into the cloud cluster. Some wsadmin scripts are provided out of the box for common tasks – setting up global security etc.
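As a rough picture of the ship-a-zip-and-run-it flow, here’s how such tooling might assemble the remote command (illustrative only – the paths, unzip location and exact mechanics are my assumptions, not the appliance’s documented behaviour):

```python
import shlex

def script_package_command(zip_name, executable, args):
    """Build the shell command a script-package runner might execute on the
    target VM after copying the zip over: unpack it, then invoke the named
    executable with its arguments. Paths here are hypothetical."""
    unpack = f"unzip -o {shlex.quote(zip_name)} -d /tmp/scripts"
    run = " ".join([f"/tmp/scripts/{executable}"]
                   + [shlex.quote(a) for a in args])
    return f"{unpack} && {run}"

cmd = script_package_command(
    "install_app.zip", "wsadmin.sh",
    ["-lang", "jacl", "-f", "installEar.jacl", "myapp.ear"],
)
print(cmd)
```

The executable needn’t be wsadmin at all – any script in the zip will do, which is what makes the mechanism general-purpose.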

Management facilities
CloudBurst provides a centralised access point to the consoles for each VM/WAS instance.

You can control separate permissions for each user (with LDAP integration), so you can have multiple users using a single CloudBurst box at one time, creating their own ‘private’ patterns etc.

You can use it to control the hypervisors to create snapshots of all the VMs in a particular deployment, so for example you can run some tests and then quickly recover the entire virtual deployment (i.e. all the VMs).

License management/metering etc seems a pretty limited offering – it relies on admin REST APIs exposed by CloudBurst being called by something like Tivoli.

CloudBurst admin console interface seems v..e..r..y.. slow to respond sometimes.

We’ve got a bit more under the skin of CloudBurst now and I wanted to post some info that’s not been written by anyone in Marketing…about the realities of the product (good or bad) rather than the salesman’s spin.

So what does it do? Well, in a nutshell it holds VMware ESX and ESXi VM images of WebSphere Application Server (WAS) on disk and can install them to IP addresses of your choice at your command, following them up with some automated WAS admin scripting of your choosing. Some pre-created WAS Hypervisor Edition VM images exist (based upon Novell SLES 10 SP2), or you can create your own and package them up using OVF (Open Virtualisation Format). There’s no runtime component to the product other than its VM management/deployment role, i.e. it relies on WAS XD if you want load balancing etc. There’s more to it than that, but those are the bare bones of it.

So what are the key use cases for CloudBurst – why would someone want one when they can install VM images themselves? Well, the key reason for use is to take the deployment cost out of an IT organisation. The creation of the OVF VM images is still going to be just as easy/traumatic as it was before, but once you’ve got a “pre-canned” environment setup in CloudBurst then you can roll that out repeatedly and with confidence with very little manpower required.

Who would use it? Well, if you get benefit from being able to ‘can a topology’ rather than just putting a single machine image ‘in a can’, then there could be real cost savings and agility/reproducibility benefits from being able to roll out a defined topology to your private cloud very quickly and repeatedly. So if your organisation has many projects running throughout the year that need multiple dev, test, pre-prod, prod etc environments created and ripped down all the time, then you’d very quickly get a payback, I suspect. It would also make you more likely to kill off an unused environment if you knew you could painlessly recreate it, reducing your overall capacity needs.

The immaturity of the Open Virtualisation Format (OVF v1.0 was only released in March 2009) is a key constraint at the moment and this is an industry-wide issue – it’s early days. A key impact relating to CloudBurst is that each VM image is a bit of a beast at 20GB minimum (not entirely clear why this is – maybe due to a WAS install being big anyway, due to the way virtual disks are managed in the OVF standard, and the images being uncompressed?). This directly impacts deployment times just due to the sheer volume of data to be shunted around, but it’s not immediately clear to me if this is an OVF issue (it does have some compression support) or an issue with the implementation/use of the standard. If deployed more than once to the same hypervisor then deployment times can be accelerated, as all this data doesn’t need to be provided a second time. It can take something like 7-15 minutes to deploy a VM image.
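Back-of-envelope arithmetic shows why the image size dominates those timings. On gigabit Ethernet a 20GB image takes a handful of minutes even before hypervisor work starts, and on a slower or contended link it climbs towards the hour quoted for first transfers (the efficiency figure below is my assumption, standing in for protocol overhead and contention):

```python
def transfer_minutes(image_gb, link_gbps, efficiency=0.6):
    """Rough time to push an image of image_gb gigabytes over a link of
    link_gbps gigabits/sec, at an assumed effective efficiency."""
    image_bits = image_gb * 8 * 1e9
    return image_bits / (link_gbps * 1e9 * efficiency) / 60

# A 20GB image over gigabit Ethernet at ~60% efficiency:
print(round(transfer_minutes(20, 1.0), 1))  # ~4.4 minutes
# The same image over 100Mbps:
print(round(transfer_minutes(20, 0.1), 1))  # ~44.4 minutes
```

So the 7-15 minute deployments and the up-to-an-hour first transfers are both plausible purely as data-movement costs, which is why caching the image on each hypervisor after the first deployment helps so much.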

There are two key design approaches when creating your VM images (the best approach is probably a mixture of the two):

  • use a ‘stock’ WAS image and then customise it with your config/settings and your EAR files installed etc, and create a new OVF image for deployment
  • use a ‘stock’ WAS image and then do the customisation post-deployment using WAS admin scripting

So where’s it going from here…? Well support for Xen-based VM images must be likely as this is crucial for anyone who is going to ultimately deploy into Amazon EC2. Portal Server is already available on EC2 on a pay-as-you-go basis and WAS is coming. Also, it’ll be interesting to see if IBM support non-WAS images in later versions of the product.

After a journey characterised by a conspiracy between parking meters and failed tube signals, I made it yesterday to IBM’s Hursley Park for the WebSphere Integration User Group meeting. Here’s the photo at a sunny Winchester station as proof…ok…so Jamie and I forgot to take any photos at the event.

The keynote presentation was from Kevin Turner – ESB and Messaging Technical Strategy Manager. He covered IBM’s Smarter Planet initiative and then moved on to future architectural trends – the noteworthy points were:

  • The 2nd wave of SOA is coming – there’s a significant number of ‘1st wave’ adopters out there now who have been through the joy and the pain, and have now fully understood the upfront investment required and the potential benefits. This 2nd wave is likely to consist of organisations trying to join up their islands of SOA (probably built along departmental lines due to a sensible pragmatic strategy of ‘baby steps’ SOA), and so federation of multiple ESBs will be a key theme. Governance will be crucial here if these islands are ever to be consolidated into a virtual enterprise bus that solves the problems of visibility of services across ESBs, end-to-end security and manageability etc.
  • Patterns – IBM are working on building some integration patterns support into their tooling (presumably WebSphere Business Modeller, WebSphere Integration Developer etc) to allow an expert from an organisation’s ICC/ICoE to define the standard ‘pattern’ and therefore accelerate the development of instances of that integration pattern. The integration developer might just have to supply the pattern parameters, for example, with many of the decisions such as how to manage errors etc already consistently solved for them.

There were a couple of presentations about specific MQ-related SupportPacs (MO71 and IH03) which I expected to be dull due to their nature, but the presenters managed to bring them alive – especially Paul Clarke. It was clear that he’d written and refined quite an impressive beast of an application over many years and was rightly proud of it.

Kevin mentioned CloudBurst during his keynote, and I managed to get some time with an IBMer later in a hastily arranged session to discuss it some more – I’ll post details in another blog post later as there’s quite a lot to report.

That means I missed the start of the next session about the developments in the WebSphere Process Server product to better support the human interaction aspects of BPEL processes (which have been sorely needed in our experience!). Paul Smith demoed the Business Space web app from WPS v6.2 which goes some of the way to addressing these shortcomings, with better human ‘override’ control of processes (skipping a step, repeating a step etc) and better visualisation tools for business users to use to understand where a particular process instance has got to, etc. This is clearly still a developing area of the product set though.

The last session I attended was a demo of rules from the recent iLog acquisition by IBM. An ex-iLogger, Lenny Bromberg, gave a very engaging demo which involved my colleague Jamie playing the role of “business user” to dynamically change rules that influenced the behaviour of a mock motor insurance quotation app. An interesting aspect of Lenny’s “pitch” was that rules engines are essentially ten a penny – there are several good open source ones out there if you want one – but what iLog provides is a Business Rules Management System (BRMS), i.e. all the significant extras wrapped around the actual runtime rules execution environment that you need to make it really workable, manageable and governable. This includes rule development/editing environments (developer IDE and business facing), rule versioning, rule comparison tools, audit, simulation environments etc. Some other observations:

  • Lenny’s experience from previous projects where they have integrated with BPM solutions (like WPS etc) is that they often find that the business process definition/BPEL has become ‘spaghetti’ as the business rules are not clearly separated from the business process – and so the use of an external rules engine enforces a good business process automation design practice, and leads to more maintainable BPEL etc.
  • This is related to BRMSs in general and not specifically iLog rules, but a weakness that I could see is that the rules rely on a good, stable business object model, and we know from experience with numerous customers how difficult it is to get enterprise data models together and agreed. This is the potentially shifting sand that the rules are all built upon.
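The separation Lenny was arguing for – rules held as editable data, outside the process flow – can be shown with a toy motor-quote example in Python (a stand-in for the idea only; this is nothing like iLog’s actual rule language or tooling):

```python
# Rules live as data, separate from the process logic, so a business-facing
# tool could in principle change them without touching the code that calls
# them. A toy stand-in for a BRMS, purely illustrative.
rules = [
    {"name": "young driver",  "when": lambda d: d["driver_age"] < 25,
     "then": ("surcharge", 300)},
    {"name": "no claims",     "when": lambda d: d["claims"] == 0,
     "then": ("discount", 100)},
]

def quote(base_price, driver):
    """Apply every matching rule to the base price."""
    price = base_price
    for rule in rules:
        if rule["when"](driver):
            kind, amount = rule["then"]
            price += amount if kind == "surcharge" else -amount
    return price

print(quote(500, {"driver_age": 22, "claims": 0}))  # 700 (+300, -100)
print(quote(500, {"driver_age": 40, "claims": 2}))  # 500 (no rules fire)
```

The shifting-sand caveat above bites here too: every rule condition reaches into the driver object model, so if that model changes shape, every rule built on it has to move with it.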

Many thanks to Mike Wyvill and Keith Guttridge and others for organising the event. Well worth the £28… :o)

We first heard about this appliance at the UK WebSphere User Group meeting back in March, although it was commercially sensitive at the time so I couldn’t blog about it, and we’ve been waiting for it to be launched. IBM are leveraging the appliance expertise from their DataPower acquisition to bring other appliance products to market, of which this is the first. It’s an interesting product, aimed both at those looking to create a private cloud and at the public cloud vendors.

It provides the ability to dynamically deploy and manage virtual machines including the new hypervisor version of WebSphere Application Server (which supports the OVF standard) including security and chargeback facilities.

I’m looking forward to our internal WebSphere practice finding out more and getting to grips/playing with it.

See for more details. What’s the next functional area that is ripe for ‘appliancisation’ (if that’s a word…)?

