Having recently spent time working on the IBM Worklight platform, I thought it would only be fair if I documented some of my findings. No disrespect to the IBM’ers, but it’s reasonably fair to say that the documentation is a little sparse in places, so let’s give a little back to the community by discussing some of the hurdles. Let’s not dwell on what Worklight is, as Andy has already covered this well in a previous post; let’s just dive right into some of the technical aspects.

General Thoughts

Development on the whole is a relatively straightforward process, even for someone like me who often steers well clear of anything involving web presentation technologies (it reminds me of dark nights in the university labs spending hours trying to get a button to align correctly, the night before coursework submission *shudder*).

The Worklight Eclipse plugin provides a good drag & drop GUI builder, but with support only for Dojo. I opted to drop Dojo and go for jQuery. jQuery is very well documented, and it’s easy to get help should you require it. One of the main things I like about jQuery is its showcase and examples, which are documented very well, so the learning curve is generally quite small. There’s also the ThemeRoller, which makes it incredibly easy to customise the default colour scheme and drop the generated CSS into your app. It always amazes me how excited the marketing guys get when you can add the corporate colour scheme to your app (thanks Joseph!).

Continuous Integration

We’re big fans of CI here, so I was quite keen to understand how easy it would be to have our Worklight apps built from the command line, and ultimately on a Jenkins CI box. The chaps over at IBM have done a fantastic job of exposing an array of Ant tasks that help with building and deploying apps; you’ll almost certainly want to read through module 42 on the getting started page, which covers these tasks:

  • adapter-builder – Use this task to build your adapter and create the .adapter file
  • adapter-deployer – Use this to deploy a .adapter file to a Worklight server (very useful for deploying to a remote AWS instance)
  • war-builder – Use this to build the server .war file that you will deploy to the application server (some manual tweaks are required)
  • app-builder – Use this to build the .wlapp files that you will deploy into your Worklight container
  • app-deployer – Use this to deploy your .wlapp files onto a Worklight server (useful again for remote deployments)

Let’s have a closer look at each of those targets, and how we’re using them here at Smart421:

Getting the party started, with init

Firstly, grab the Worklight Ant JAR (you’ll need to have purchased the WL Enterprise edition for this) and add it into your Ant context like so:

<target name="init">
    <echo message="Loading ANT Tool"/>
    <taskdef resource="com/worklight/ant/defaults.properties">
        <classpath>
            <pathelement location="./build-config/worklight-ant.jar"/>
        </classpath>
    </taskdef>
    <property environment="env"/>
</target>

Now you’re free to use the Ant tasks anywhere in your build script.

Building & Deploying WL Adapters

You need to build each adapter individually, and then deploy each one. You can create the following Ant targets to do that for you:

<target name="buildAdapters" depends="init">
    <echo message="Building all adapters"/>
    <adapter-builder
        folder="./adapters/TwitterAdapter"
        destinationfolder="./bin"/>
    <!-- Build your other adapters here, same as above -->
</target>

<target name="deployAdapters" depends="init">
    <property name="WLSERVERHOST" value="http://my_aws_ip_here:8080/SmartConf"/>
    <echo message="Deploying all adapters"/>
    <adapter-deployer
        worklightServerHost="${WLSERVERHOST}"
        deployable="./bin/TwitterAdapter.adapter"/>
    <!-- Deploy your other adapters here, same as above -->
</target>

Building the Server WAR

You can build the server WAR file using the war-builder task, as shown below. It is important to note, however, that I needed to do some tweaking to the WAR file to avoid any post-installation configuration tasks. According to the Worklight forums, there doesn’t appear to be a way to include files in WEB-INF when the WAR is created, which means that once you’ve expanded the WAR on the application server you’d need to manually replace the default web.xml and context.xml files (to set your datasources). This can be quite frustrating, so in true Blue Peter fashion, I’m updating the WAR file with files I created earlier.

<target name="warBuilder" depends="init">
    <echo message="Building the war file"/>
    <war-builder
        projectfolder="./"
        destinationfolder="./bin"
        warfile="./bin/SmartConf.war"
        classesFolder="./bin/classes"/>
</target>

<target name="updateWar">
    <echo message="Updating the war file"/>
    <war destfile="./bin/SmartConf.war" update="true" webxml="./build-config/web.xml">
        <metainf dir="./build-config" includes="context.xml"/>
    </war>
</target>
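For completeness, the context.xml we merge into the WAR is just a standard container-style datasource definition. A minimal sketch follows; the datasource name, driver class, URL and credentials here are assumptions for illustration, so substitute your own:

```xml
<!-- build-config/context.xml: illustrative only; the datasource name,
     driver and credentials below are placeholders -->
<Context>
    <Resource name="jdbc/WorklightDS"
              type="javax.sql.DataSource"
              auth="Container"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://my_db_host:3306/smartconf"
              username="worklight"
              password="changeme"/>
</Context>
```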

Building & Deploying the WL Apps

You’ll also want to automate the building and deployment of the wlapp files; you can do this with the following:

<target name="buildApps">
    <echo message="Building all WL Apps"/>
    <app-builder
        applicationFolder="./apps/Smartconf"
        nativeProjectPrefix="SmartConf"
        outputfolder="./bin"/>
</target>

<target name="deployApps">
    <property name="WLSERVERHOST" value="http://my_aws_ip_here:8080/SmartConf"/>
    <echo message="Deploying all WL Apps"/>
    <app-deployer
        worklightServerHost="${WLSERVERHOST}"
        deployable="./bin/SmartConf-all.wlapp"/>
</target>
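To hang all of this off a single Jenkins job, we chain the targets above into one aggregate target. Nothing clever here; the target name “ci” is just my own choice, and the dependencies are the targets defined earlier:

```xml
<!-- Single entry point for the Jenkins build; runs the targets above in order -->
<target name="ci"
        depends="buildAdapters,deployAdapters,warBuilder,updateWar,buildApps,deployApps"
        description="Full Worklight build and deploy, as run from Jenkins"/>
```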

Building the Native Application Distributable Binaries

You’ve survived this far, and I’m thankful to you for that; however, we’re not quite finished yet. Worklight will generate the native projects for you, but it’s your own responsibility to take those project directories and build the Android APK, the iOS IPA, etc. IBM draw the line at this point, so you need to build them yourself. You can do this for all of the environments quite easily using additional Ant tasks; Android is the easiest:

<target name="client-android" depends="buildAndroid">
    <!-- Run the android native build, in its own directory -->
    <ant antfile="./apps/SmartConf/android/native/build.xml" target="release" useNativeBasedir="true"/>
    <!-- Copy up the apk into the bin area, for consistency -->
    <copy file="./apps/SmartConf/android/native/bin/SmartConf-release-unsigned.apk" tofile="./bin/SmartConfSmartConfAndroid.apk" overwrite="true"/>
</target>
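As the filename suggests, the APK that drops out of the native build is unsigned, so before it goes anywhere near a device it needs signing. A minimal sketch using Ant’s built-in signjar task (an APK is a JAR under the hood) might look like the following; the keystore path, alias and password property are placeholders for illustration:

```xml
<target name="sign-android" depends="client-android">
    <!-- Sign the release APK; keystore location, alias and password are placeholders -->
    <signjar jar="./bin/SmartConfSmartConfAndroid.apk"
             keystore="./build-config/release.keystore"
             alias="smartconf"
             storepass="${env.KEYSTORE_PASS}"/>
</target>
```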

Building BlackBerry and iOS apps from the command line is slightly more involved, and I feel they warrant their own blog post; alternatively, get in touch and we’d be glad to offer some assistance. Bear in mind you will need an Apple Mac to build iOS, for which we’ve installed a shared box in our build environment.

Other Gotchas

As with taking on board any emerging technology, there will always be plenty of head-scratching moments where the documentation is thin and Uncle Google doesn’t provide much help. Fortunately for you, we’re a nice bunch of guys here at Smart421, so we’ll share some of the things that had us pondering over a coffee:

  • The trailing “/” in the Worklight server host URL is required; don’t ask why, it just is.
  • The versioning conventions for Worklight are a little strange: 5.0.0.270 = v5.0 GA, but the developer edition 5.0.2.407-developer-edition = 5.0.0.3.
  • If you have an existing 5.0.0.2 WL server installation, don’t upgrade it to 5.0.0.3; it fails to upgrade all components and leaves you with some obscure error messages that are hard to trace. The best plan of action is to uninstall and install again, but make sure you check for updates at install time, via the wizard.
  • App crashes with unreachable host? When you build and deploy the app to your device, it has the WL server IP hardcoded into it. The next day when you arrive at the office and hop onto the WiFi, DHCP gives you a different IP address… It’s a classic schoolboy error, but it catches us out from time to time. A simple solution, if you don’t have a spare box lying around, is to install the Worklight server on AWS and deploy to the cloud, bearing in mind that it needs to be open to your mobile devices over the Internet in a real-life installation anyway.
  • Results is undefined on an adapter call? A subtle difference here: HTTP adapters use invocationResult.results, whereas SQL adapters use invocationResult.result. That last character makes all the difference.
  • “Response cannot be parsed, please contact support” is an annoying error that you often see in the developer preview; just make sure you set the body onload to WL.Client.init() as mentioned here.
  • Unable to use geolocation services on Android? You’re probably seeing “Caught security exception registering for location updates from the system, this should only happen in DumpRenderTree”. Make sure you have the geolocation permission in your Android manifest as detailed here.
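For that last gotcha, the fix is a one-liner (or two) in AndroidManifest.xml; depending on the accuracy your app needs, you declare one or both of the standard location permissions:

```xml
<!-- Inside the <manifest> element, alongside any other permissions -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
```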

Conclusion

On the whole, I was very impressed with Worklight; it offers a lot of functionality over and above the standard Cordova project. Some of the errors I’ve encountered have been a little frustrating, as often my only source of help was the forums, but I can accept that it is a product in the early stages of adoption, and it will probably go very far. I’m looking forward to working with it in the future.

If you’d like to have a look at some of the apps we’re creating, or generally just want a chat about Worklight and some of its capabilities, or Mobility in general, we’d love to hear from you.

Flooded
Photo by East Coast Main Line

I had the luck to attend the WUG at the Royal Society of Edinburgh on George Street again this year. This is a twice-yearly event, hosted in Edinburgh in the autumn and, from this year, at IBM’s facilities on the South Bank in London in the spring.

The good luck was in attending at all, when maybe a third of people failed to make it because of the bad weather; the bad luck was that the East Coast Main Line was flooded on the way down, causing a very late return home.

There were a few interesting sessions, including one on the Worklight acquisition for developing mobile applications for a variety of target devices, including iOS and Android. Possibly more on that later. There was also a good session by Alan Chambers on sample use-cases for WebSphere eXtreme Scale, a distributed in-memory caching technology. This is an interesting area which merits further attention. The slide decks for the various sessions, including ones I could not get to, are on the WUG site.

David Sayers of MidVision also gave a talk about DevOps, the set of disciplines for bringing development and operations closer together. Although MidVision supply a tool in this space, David was keen to steer away from naming specific tools, saying that there is no magic bullet, and that it’s about process and people too.

A phrase which struck a chord with me went something like: “many firms don’t want to make a change in a production system because ‘Steve’ is on holiday and he’s the only person who understands this”.

It’s a spooky coincidence, as we have just published a development policy, as part of our policy refresh, stating that all environments, and deployments to those environments, should be 100% automated.

The presentation I want to elaborate on a bit this time is “How Lightweight is the [WebSphere] Liberty Profile”, covering the Liberty profile that is part of WebSphere Application Server (WAS) 8.5.

Simon Maple (Twitter @sjmaple), one of IBM’s technical evangelists on WAS, explained that this profile is an OSGi-based application server kernel which loads libraries and subsystems only as you need them. The end result is a *very* lightweight application server.

So much so that the session involved showing the application server running on a Raspberry Pi (a £20-odd computer, the size of the palm of your hand, delivered as a circuit board).

To follow this up, Simon started up a WAS instance on his older Android phone, which then served up a blogging application via the phone’s wireless hotspot. I connected to it with my phone and posted “Amazing!” (yes, Oscar Wilde won’t be looking over his shoulder), which then showed up on his monitor, along with some more imaginative posts.

I have the tooling, which was provided on a memory key in our “info” shared area for any Smarties to download.

The Liberty profile tooling (Eclipse plugins) even runs on a Mac, along with the dev runtime. Even though this runtime is not supported in production on the Mac, this is a pretty major step for IBM; I would not have imagined it five years ago.
In terms of production use though, the Liberty profile WAS is still a standard WAS install from the perspective of licensing… though I’m not sure how many PVUs a Raspberry Pi has.

IBM also have a new Head of WebSphere Software, Stuart Hemsley, who was keen to get feedback from the delegates, both by announcement at the keynote, and by walking around during the breaks.

Our feedback was that the application server just costs too much compared to the likes of Tomcat and JBoss, and includes technologies which are becoming less relevant (e.g. session clustering) as application architectures pursue session-statelessness. Yes, you would expect to pay a premium for a big-league vendor-supported product, but not 10x as much.

It would be a shame for IBM to lose out on market share because of pricing, when they provide excellent tooling and support, as shown by a session on performance tuning the JVM… but that (as they say) is another story.

I also had the opportunity to attend the WebSphere User Group (WUG) meeting on 23rd March 2011 at Bedfont Lakes. The WUG is a very popular topic amongst colleagues at Smart421, as it’s a great community.

As someone who doesn’t have much direct WebSphere experience on a day-to-day basis, I wondered if I would struggle to follow the content. However, I can say that I was pleasantly surprised, as the tracks were designed for a variety of skill levels, and there were several talks that any Java developer would find interesting. There were actually 13 streams running over the course of the day. Many focused on specific IBM products (as you’d expect), but some focused on more general topics such as Java and OSGi.

The first session I attended was the WebSphere Foundation Update and Technical Direction in the WAS 1 stream. This session, by Ian Robinson, gave an overview of forthcoming features in WAS v8. While this was a very WAS-specific session, it also provided useful updates on several areas in the J2EE space. To download the slides, click here.

The second session I attended was in the WAS 2 stream, on JAX-WS 2.2 and JAX-RS 1.1 support in WebSphere Application Server Version 8.0 Beta. The presenter, Katherine Sanders, a software engineer at IBM Hursley, gave a very good introduction to these two technologies without being tempted to delve into a lot of overly heavy WAS-specific details. To download the slides, click here.

The third session I attended was given by Simon Cashmore, a Lead Engineer within the Global Middleware team at Barclays Bank. This talk, Changing the way Java Application Hosting is delivered at Barclays Bank, stood out by a mile as it was the only session in the Customer stream (c’mon WUG Committee, more like this please). It was informative because it focused on Barclays’ new approach to hosting Java applications. Barclays have essentially built their own collection of virtualised WAS instances that can be made available in days rather than weeks or months. Previously, projects would buy brand new hardware that was not shared or reused, so costs and timescales were sky high. Now they have a shared resource that can be used and reused much more efficiently – and more cost-effectively. I’m sure Barclays shareholders will be very pleased to hear that ;o)

The fourth and final session I attended was a talk in the Java stream on Generational Garbage Collection: Theory and Best Practices. This was focused on how the IBM JVM works, but Chris Bailey, a technical architect in the Java Technology Center (JTC) team at IBM Hursley, gave a very detailed description of it which applies to any JVM that implements Generational Garbage Collection. To get a copy of Chris’ slides, click here.

So if you’re in doubt about whether you should attend a WUG meeting because you feel you don’t have enough WebSphere experience, then let me reassure you that any Java developer will find something of interest. There were also suggestions of adding a more business-focused stream to future meetings to widen the potential audience even more.

Details of all WUG activities and events can be found here.

WUG 10th Birthday Celebrations, IBM Bedfont 23 March 2011

Members of the WUG Board, past and present, cut the birthday cake. From left to right: Nigel Gale (founding Chairman), Simon Maple (IBM Representative), Alan Chambers (WUG founder and Board member), Chris Mason (Treasurer throughout the WUG's 10 years), and Jonathan Marshall (IBM Representative). Photo by kind permission of Alan Chambers.

On 23 March, over 200 members of the WebSphere User Group UK (WUG) and members of the WebSphere Integration User Group UK descended on IBM Bedfont Lakes, Feltham, UK for the WUG’s spring-time gathering (two annual meetings: March at Bedfont, September at Edinburgh). Smart421 was there with one or two of our big guns. More on that in a moment.

As longstanding members of the WUG, we get a lot out of these meetings; perhaps ’cos we also put a lot in. A significant number of our customer engagements require deep Java skills, and several depend on WebSphere technologies in some way or another. Most speakers are IBM-ers, many out of Hursley, or sometimes further afield. Delegates from IBM, end-users of WebSphere and IBM business partners make up the remainder of the rich ‘ecosystem’ that is today’s WUG.

Smart421 Lead Consultant Stu Smith had his proposal selected by the Committee; it carried the catchy little title ‘Software Development Life-cycle with Message Broker in end-to-end SOA’ [Download the slides]. Despite that, Stu pulled a bigger crowd than usual, and people seemed to appreciate his content and the very good Q&A session he triggered; for the last session of the day, it was a lively interactive exchange among attendees, who by then probably had their minds on the drinks reception or on catching the early train home.

Alan Mangroo, one of our elite tekkies, attended for the educational tracks and was last seen diving in and out of the sessions he had pre-selected. Knowing him, he’ll have made copious notes, so try to make a point of reading his separate blog [posted 08 April, click here].

The WUG has been running for ten years in the UK (yeah… I know!) and the Committee didn’t pass up the opportunity to celebrate, with drinks and two rather impressive cakes to mark the occasion. I’ve included a photo, courtesy of Alan Chambers, so you can share the moment with us. Proof, if ever you needed it, that even tekkies have soul, so long as you bring the candles ;o) Actually, I mainly remember the cute miniature marzipan figures: developers with laptops.

As is often the case, Smart421 ran an on-stand prize draw for a bottle of Bollinger, and appropriately Nigel Gale, the WUG’s first chairman (pictured, far left), was the one who scooped first prize. Good timing, I’d say. Hope you enjoy that, Nigel.

This year’s autumn JAX conference was actually two conferences in one: the OSGi DevCon shared two days with the main JAX event. Conference delegates were free to attend sessions from either conference, which meant that developers with little experience of OSGi really benefited from gaining insight across sessions. This year’s event was very well attended; the huge conference room in the London Novotel was packed, with people standing. The sessions were only 50 minutes each, which kept them concise and to the point, so there was no chance of getting bored. The smaller rooms were intimate and made asking questions much less intimidating. You could feel the enthusiasm of everyone during the breaks; people were chatting and networking rather than hiding away behind their laptops. Let’s not forget the lunches: the food was excellent, with a great choice of dishes to cater for all tastes.

The first session I attended was Ted Neward’s keynote speech, entitled ‘Rethinking Enterprise’. Last year Ted spoke about developers needing to find their unique selling point in a world of cheap developers, and his speech this year started off along the same lines. He asked the question that all developers would like answered… “What do I do to ensure I don’t get replaced by a cheaper developer?” What does this have to do with rethinking the enterprise, I wondered?

We are all programmed to jump to the most obvious solution to a problem. At school we learn a single subject and then get tested on that subject; we learn another subject and get tested on that. Even if the questions in our test are different from what we are used to, it’s a good bet they relate to the subject we have just learnt. As adults we attend training courses. After attending a course we return to work with our newly gained knowledge, and our boss assigns us a new project. We automatically assume that the new technology we have just learnt about is what we should use to implement this new project; why else would we have been sent on the course? The same goes for enterprise applications. When someone asks us to design or implement an enterprise application we automatically think J2EE and, more often than not, EJB. Admittedly this is a highly simplified and contrived example, but there is no doubt that we have pre-programmed solutions to the problems we are asked to solve. We may think we are choosing the ‘best practice’ solution; however, ‘best practice’ suggests that we have finished and there is no room for improvement. Ted challenged us to rethink this and not settle on the first solution that comes to mind. We should really look for ‘good practice’, as this leaves us room for improvement. This kind of thinking allows us to design solutions that others may not think of, and therefore gives us that advantage and unique selling point that we need. Don’t just settle on the obvious solution; explore the less obvious, and you never know what you may find.

Another interesting and more technical session was on the subject of RESTful services, run by Neal Ford from ThoughtWorks. SOAP-based web services have been around for a while now and are widely used, so this session started off with a brief introduction to REST and its benefits over SOAP-style web services. SOAP-based web services are often orchestrated using a heavyweight BPEL engine, which tends to couple together the application providing the SOAP service and the BPEL engine. The BPEL engine must maintain the state of the process, and the applications involved in the process may also maintain knowledge of the process state. This is where HATEOAS provides a mechanism to manage the state in the applications that are involved in the process. At its most basic level, a REST service request is simply an HTTP request using one of the four HTTP verbs: GET, POST, DELETE, and PUT. These requests are always resource-based, so you GET an order, or DELETE an order, etc., and the parameter string of the request URL contains all the relevant input data. REST and HATEOAS give the application providing the service the ability to return the set of potential next links, i.e. the state transitions. The state transitions of the process are stored in the links rather than in a heavyweight BPEL engine. A simple example of this idea that we have all seen is a search engine results page. You submit a query to a search engine and are then presented with an HTML page of results. If this were a BPEL process, the BPEL engine would know which page of the results you are viewing; if you click ‘next page’, the BPEL engine would know what page is coming next and return that for display. With HATEOAS, the links on the page contain the actual next page number in the URL, so if we are viewing page 2 the next-page link requests page 3 and the previous-page link requests page 1. There is no need for the search engine to know what page you are viewing. The state of your search process is implied by the possible transitions stored in the links, thereby encouraging a stateless approach. HATEOAS may well be the future of SOA, providing an alternative to the usual SOAP-based web services.
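To make the search example concrete, a hypothetical HATEOAS-style response might look like the following. The element and attribute names here are made up for illustration; the point is simply that the prev/next state transitions travel inside the response itself rather than living in an orchestration engine:

```xml
<!-- Hypothetical response for page 2 of a search; names are illustrative -->
<results query="worklight" page="2">
    <item>...</item>
    <link rel="prev" href="/search?q=worklight&amp;page=1"/>
    <link rel="next" href="/search?q=worklight&amp;page=3"/>
</results>
```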

The final area that I’m going to write about was a session entitled “Software Quality – you know it when you see it” by Erik Dörnenburg. We all want to write quality software, but how do you spot it? We often have to work with code that was written by someone else, or code that has been around for years, evolving as requirements change. Whether we are working on a new project or on legacy code, we need to quickly determine the problem areas and issues that exist. We need a view of the application. Our introduction to an application will probably be a diagram of some boxes with lines joining them together: the 30,000-feet view. If we are lucky, a detailed design document may tell us what the application should look like, but these are often out of date and lacking in real detail, so of little use when looking at code quality. Lastly we have the code itself, the 0-feet view: far too much detail when it comes to understanding an application. What we need is the 1,000-feet view, generated from the code itself. Erik took us through several visualisation techniques that provide a very fast way of understanding an application. The “Code City” was probably the most interesting visualisation.

This is essentially a 3D map of the application that allows us to quickly understand its structure. In this view, classes are represented as buildings, while packages are represented as districts. Problem areas are easily spotted; for example, a huge building may highlight a huge, hard-to-maintain class. Dependencies between packages can be drawn onto the city, allowing us to spot problem areas very quickly. These kinds of visualisations give us that 1,000-feet view where we can see potential issues at a glance. As with all new techniques, it is the initial setting up of the visualisation that is the time-consuming part, but as the tools mature this could be the future of metrics and tools such as Checkstyle.

Smart421 normally attend the WebSphere User Group; however, that is a very specialised event that may not interest many Java developers. I would certainly recommend the JAX conference to any Java developer, no matter what level of experience you have. In our day jobs we often concentrate on just a few technologies for several months while working on a project. We don’t get the time to explore the technologies and frameworks that give us the inspiration to ‘Rethink the Enterprise’, or just a head start on our next project. This kind of conference is a good way to find out about cutting-edge Java technology at a useful level of detail.

I took a break from the wet weather in the South of England in order to enjoy the wet weather in Edinburgh, and attend the WebSphere User Group. For one thing, our Alan Philpott was going up there to give a presentation on applying best practice, in terms of the SDLC, to DataPower devices. Bharat Bhushan would also be presenting on trouble-shooting and performance tuning of the devices. I have the latter’s slides for any of our DP rangers who want a walk-through; I will post a PDF of them on SharePoint.

The WUG had a bit of a wobble last year, when some of the committee (who do their work for ‘free’) found that they no longer had the spare capacity to put in. The committee are still looking for volunteers, and some people questioned whether the WUG would continue to be viable. From the attendance levels of this recent meeting, the degree of organisation, and the quality of the presentations, you would not notice any issues.

There was an interesting set of presentations (full details here), but the stuff that caught my eye was the OSGi stream. The OSGi framework (the dynamic module system for Java) has been around for a few years, not least as the underpinning technology behind Eclipse since version 3. There is a good definition of it on Wikipedia. I have seen it before in a container used to host the Apache ServiceMix ESB (a.k.a. FUSE 4). Adrian Trenaman gives a good run-through here.

What has been interesting is seeing this technology hitting the mainstream (IBM) and being taken seriously there. I attended two presentations, one covering the tooling support in Rational Application Developer (RAD) 8, and another covering the support for OSGi in WAS 8 (previously available as a feature pack for WAS 7.0). The interesting part was seeing that the technology is being incorporated for simple pragmatic reasons (the ability to load different versions of libraries into the same JVM for use by different applications, and lowering the memory footprint by sharing libraries amongst several applications) rather than as some new-age magic.

I have grossly over-simplified in the above, but it may be a prompter for anyone who is interested to follow it up.

The other major new thing for me was WebSphere eXtreme Scale (WXS). On first seeing the title of the presentation, I thought it would be a re-branding of either WAS ND or WAS XD. Wrong! “It’s just a cache”, as Jonathan Marshall announced at the beginning of his presentation, hugely underselling the use of caching technologies and the product itself, before going on to show those benefits in his talk. Having recently been involved in two use-cases for caching technologies (dynamic content management, and web-session management in hugely scaling environments), both situations where databases struggle and application-server session-sharing mechanisms struggle (the old n^2 communication overhead as the number of app servers increases), I could appreciate the problems that the WXS product is trying to solve.

WXS is not the only player in this space. As well as other commercial offerings (about which I had previously heard concerns), there are several open source alternatives (e.g. memcached), but since the general area is very new, not all of the products implement all of the features (e.g. active standby); indeed, there is probably no common consensus on what all the features are. I will distribute the slides on the internal SharePoint site; they should become available on the WUG site in due course. The take-away news for me is that IBM now have a seemingly very capable product in this area, which merits investigation in some of my existing scenarios, and also in new scenarios such as using the cache as a system of record.

So even after a few years of going along to the WUG, it still has the ability to make me sit up and take notice! Long live the…

This week I attended a CloudBurst ‘Proof of Technology’ day at IBM’s Hursley labs, and I wanted to share my initial thoughts following this chance to play with the appliance. It was a scripted ‘do this, then that’ set of labs along with some presentations, so not necessarily a real tyre-kicking, but enough for things to click into place and for some ideas around its value to form.

Colleagues have already blogged about this new appliance, so I’ll keep it simple and liken it to a vending machine for virtualised WAS environments: you pick the one you want, press the button, wait, and ta-da, out it drops into your private cloud, all ready to consume.

Its on-demand nature makes this compelling. Having spent significant periods of time creating virtual environments to play with various products, this appeals to my fast-food consumer tendencies, and although today it’s only the WAS HV edition, DB2 HV is days away, with Process Server and Portal Server HV editions following later.

Let’s imagine you are running WAS, have a number of servers whose investment you are trying to maximise with a little virtualised sharing of hardware, and you want clean, controlled dev and test environments that can be simply and repeatably rolled out on demand and torn down when not required, with little fuss. You deploy the environment when you need to use it, and when you are done you tear it down. It doesn’t have to sit there consuming resources for any longer than the time you actually need it, because when you do need it again you can faithfully recreate it. You can even timebox the period that your environment is available: if this period is not extended, your environment is removed.

That’s what you get with this, and it’s so simple you can empower anyone who genuinely requires this level of access and automation – potentially removing a period of, let’s be honest, faffing around to acquire the environment, which could save huge amounts of time and money. There’s no elasticity: you pick your pattern, and if you have 2 nodes, that’s your lot. CloudBurst does do some monitoring, but it’s not going to give you any more nodes when things are maxing out. It really is simple though.

It’s going to cost you, but nowhere near what you would spend scripting this level of automation and control yourself, and IBM are funding the ongoing development and maintenance costs – with version 1.1 of the firmware coming so soon you can smell it, and 2.0 well on the way.

Does it need to be a hardware appliance? I’m not sure, but it is, and it’s available today – in purple. Which must have upset the DataPower guys, who already have a purple box in the form of the XB60 B2B appliance. So if you have both, don’t rack them up too close together, or you just know one day someone is going to unplug the wrong one.

Is it possible to be brainwashed in a day? Possibly, but I believe that if you have an investment in WAS you should have a look at this appliance and what it brings to the table. Give us a call and we can put you in touch with a man who has one in a flight case with a pair of servers attached (he brings his own cloud!). He’ll come and see you, let you have a go, and you can decide for yourself.

I had a demo of Lombardi’s business process management (BPM) tooling the other day – Lombardi TeamWorks.

The first thing to note is that the tooling is Eclipse-based and uses BPMN notation – in fact, at first glance I thought it was IBM WebSphere Business Modeller, as I’m more familiar with this! It looks very similar, and all the concepts are equivalent. I guess BPM is BPM, so the “problem space” is what it is and the tools will all tend to look the same; the difference comes down to the quality of each vendor’s implementation. It certainly looked easier to use than WebSphere Integration Developer (WID) – the Java code was all hidden from the designer until you get to wanting to put custom controls in JSPs or invoke external services. Debugging a process flow fired up quickly, with no need to boot up a test instance of WAS. Also, there is no real mention of BPEL, unlike in WPS (WebSphere Process Server).

There is also a SaaS modelling environment for business users called Lombardi BluePrint – this uses a simpler subset of BPMN, and so is essentially a clever online Visio-style tool. You can pull models from BluePrint into the Eclipse dev tool and flesh out the process with more technical/implementation detail, as you might using WID for Modeller models.

There’s a business portal web app for business users to launch business processes, see what’s “in their inbox” etc – very similar to the ‘Business Space’ portal that WPS v6.2 now has, so I think IBM have caught up a bit here as the ‘human task management’ part of WPS has up until now been a weak area.

Inbuilt rules engine support is apparently weak (like in WPS, IMHO – hence IBM bought ILOG and got JRules, amongst other things), so you’d need to integrate with external “decision services” if a strong capability were needed.

The other vendor in this space that keeps coming up in our customer engagements is Pega (strong in the business rules area), and in general my view is that they are all pretty much capable of doing the job. One word of caution: as an almost “single product” vendor, Lombardi must be at risk of being acquired at some point, which could lead to consolidation into the acquirer’s existing product lines.

My colleagues Paul Russell and Murali Anantharaman will be attending the WebSphere User Group conference being held at the Royal Society of Edinburgh on Thursday this week. As I am unable to attend this time, I’ve prep’d them with my preferred list of sessions to attend, so hopefully I will attend “by proxy” :o

Following on from the previous post, here are some more random notes about CloudBurst – pros, cons, features, limitations etc.

One thing that occurred to me was “Why is CloudBurst a hardware appliance?” – it could just be a software appliance. Well, the key reason, it seems to me, is that it holds lots of OS/WAS admin passwords, and so the ‘hardening’ of a hardware appliance, with tamper-resistance etc., is a key feature.

Deployment patterns and deploying

WAS hypervisor edition is actually an OVA image with 4 virtual disks, with multiple WAS profiles set up already – so the hypervisor edition is actually a VM image, rather than a customised WAS code base it seems.

There are patterns for a single server deployment, small cluster (3 VMs), large cluster (15 VMs) etc. You can modify a standard pre-packaged VM (e.g. add a fixpack etc) and then ‘capture’ back into CloudBurst as a standard catalogue VM for use in new patterns.

Control is available over whether certain pattern properties (passwords, memory size etc) can be overridden for each instance of that pattern or not.

A key point – keep track of any changes made to your VM patterns (e.g. any tuning done) and then ‘bake’ them into the pattern in CloudBurst so that any future deployments get the changes – otherwise they’ll be lost when you redeploy the pattern.

The first image transfer to each hypervisor can take up to an hour (obviously this is environment-dependent)!

IP addresses are allocated on the fly when deploying, i.e. it pulls them from a pool of available IP addresses that the admin user sets up.

Script packages

Post deployment, CloudBurst can run any script, not just wsadmin scripts – essentially it ssh’s over to the VM with a zip file and runs a named executable (e.g. <some_path>/wsadmin.sh) with some arguments (e.g. which JACL file to run). wsadmin scripts can be run against the deployment manager VM to install an application (EAR file) into the cloud cluster, and some wsadmin scripts are provided out of the box for common tasks – setting up global security etc.
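For a flavour of what such a script package might contain, here is a minimal wsadmin sketch in Jython syntax – AdminApp and AdminConfig are the standard objects the wsadmin environment provides, but the paths, application name, and cluster name here are hypothetical:

```jython
# Hypothetical script-package payload, invoked by CloudBurst as e.g.:
#   <some_path>/wsadmin.sh -lang jython -f installApp.py
# (AdminApp and AdminConfig only exist inside the wsadmin shell.)

# Install the EAR onto the cluster created by the deployment pattern
AdminApp.install('/tmp/scriptpkg/myApp.ear',
                 ['-appname', 'myApp', '-cluster', 'MyCloudCluster'])

# Persist the configuration change to the master repository
AdminConfig.save()
```

The zip file in the script package would then simply bundle this script alongside the EAR, with wsadmin.sh as the named executable and the script as its argument, as described above.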

Management facilities
CloudBurst provides a centralised access point to the consoles for each VM/WAS instance.

You can control separate permissions for each user (with LDAP integration), so you can have multiple users using a single CloudBurst box at one time, creating their own ‘private’ patterns etc.

You can use it to control the hypervisors to create snapshots of all the VMs in a particular deployment, so for example you can run some tests and then quickly recover the entire virtual deployment (i.e. all the VMs).

License management/metering seems a pretty limited offering; it relies on admin REST APIs exposed by CloudBurst being called by something like Tivoli.

CloudBurst admin console interface seems v..e..r..y.. slow to respond sometimes.
