IBM WebSphere


Today four other Smarties and I attended Norfolk’s first Mobile Development Conference at the Hethel Engineering Centre, which is right next to where they make Lotus cars.

Conference Room

The tie-up between Hethel and Lotus was obvious: the main presentations were held in the Colin Chapman room (named after the founder of Lotus Cars), where one of Ayrton Senna’s “99T” F1 cars was mounted on the wall!

Mobile development is one of the most exciting and diverse areas in IT at the moment, and this conference did very well to cover such a wide range, from games developers like MonoGame through to Tim Ferguson, Head of Digital at one of our customers, AVIVA, sharing the mobile app lessons learnt from their various innovations and experiments.

The keynote by Neil Garner of @Proxama resonated with me very much, from his memories of tech from past years (the Nokia 7110, the first WAP phone) to his honest assessment of NFC and rebuttal of the doubters who don’t see NFC taking off now. Neil highlighted ARM TrustZone as a key element in providing security for NFC applications. There are contactless terminals everywhere now and 9 of the top 10 device manufacturers are signed up to support NFC – Apple is the odd one out, but aren’t they always?

Our own @JamesElsey1986 later showed, using Android, that NFC is more flexible and powerful than you might think. James later tweeted:

Source code/slides from my #NFC workshop http://ow.ly/mDz7A  Feel free to ask questions / give feedback. Thanks for attending! #MobDevCon
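James’s slides and source are linked in the tweet above. Just to give a flavour of what reading an NFC tag on Android involves, here is a minimal sketch of my own (not James’s code) using the standard android.nfc APIs – the activity name and the crude text-record decoding are purely illustrative:

    import android.app.Activity;
    import android.content.Intent;
    import android.nfc.NdefMessage;
    import android.nfc.NdefRecord;
    import android.nfc.NfcAdapter;
    import android.os.Bundle;
    import android.os.Parcelable;
    import android.widget.Toast;

    import java.nio.charset.Charset;

    // Minimal sketch: an Activity that reads NDEF text records from a scanned tag.
    // It would be registered in the manifest with an ACTION_NDEF_DISCOVERED intent filter.
    public class NfcReadActivity extends Activity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            handleIntent(getIntent());
        }

        @Override
        protected void onNewIntent(Intent intent) {
            super.onNewIntent(intent);
            handleIntent(intent);
        }

        private void handleIntent(Intent intent) {
            if (!NfcAdapter.ACTION_NDEF_DISCOVERED.equals(intent.getAction())) {
                return;
            }
            Parcelable[] rawMessages =
                    intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
            if (rawMessages == null) {
                return;
            }
            for (Parcelable raw : rawMessages) {
                for (NdefRecord record : ((NdefMessage) raw).getRecords()) {
                    // Crude decode of an NDEF text record: byte 0 holds the
                    // language-code length, the text follows it.
                    byte[] payload = record.getPayload();
                    int langLength = payload[0] & 0x3F;
                    String text = new String(payload, 1 + langLength,
                            payload.length - 1 - langLength, Charset.forName("UTF-8"));
                    Toast.makeText(this, "Tag says: " + text, Toast.LENGTH_LONG).show();
                }
            }
        }
    }

The manifest would also need the NFC permission and an NDEF_DISCOVERED intent filter for the tag’s content type.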

Matt Lacey presented two sessions. His first, on tips for developing for Windows 8, included some real gems which will help us tailor our cross-platform Apps to work well on the new Windows platforms. I agree with Matt, who worked on PhoneGap’s Windows integration code, that you have to be knowledgeable and experienced in developing native Apps to be able to build successful cross-platform Apps. Luckily Smart421 have a whole Microsoft practice to help us Java-oriented types out with that. Read Matt’s blog for more info and his slides from his second presentation on monetising Apps.

I was first on to present after lunch and talked about our work delivering cross-platform mobile experiences with Worklight – my slides are now up on SlideShare. There was a general theme at the conference that cross-platform tools are coming of age, and that the compromise in user experience and performance compared to native development is far outweighed by the much faster and cheaper overall development and maintenance of the Apps. I just about managed to demo the new Worklight 6 Studio IDE and Console. I am really liking the improved jQuery Mobile integration and want to check out the new App Centre tools and automated testing when I get the chance.

Ruth John (@rumyra) of O2’s “The Lab” gave a kitty- and puppy-tastic presentation on Firefox OS and why Telefonica have taken it up, especially in the emerging South American markets – it’s free and works well on low-end handsets, with the operating system built on top of Mozilla’s Gecko layer. It will be really interesting to see if this will catch on in the UK and European markets in these times of austerity, where people are perhaps not quite ready to splash a few hundred every year on the latest iOS gadgets.

There was also a really enlightening “sponsor presentation” by Basho on reclaiming the terms web scale, big data and DevOps, and on how the NHS is using their open source Riak technology.

Massive thanks to Naked Element (Paul and Marie) and everyone involved in setting up the event, thanks to Hethel for such a great venue, the sponsors for the delicious lunch and the attendees for their support and kind comments.

P.S. Welcome to twitter @CharlesBSimms :-)

IBM Forum London by HOK

IBM Forum, London, designed by HOK (www.hok.com)
Photo courtesy of HOK

Integration architects and solutions designers interested in IBM’s WebSphere technology stack converged once again for the spring (!) 2013 WUG gathering at the IBM Forum Centre, South Bank, London, on 21 March. I was one of them.

Trends in the WebSphere Ecosystem – Innovation/Evolution

From my perspective, WebSphere seems to retain good ground mainly by taking innovation and the adoption of upcoming technologies in its stride – catering for the Mobile and Cloud areas as well as fulfilling existing customers’ and partners’ needs by adding features. Participants showed good interest in the applicability of WebSphere technologies in the Mobile world.

To be honest, I’m neither optimistic nor pessimistic about this: the promise shown by the WebSphere brand still needs to be realised to its full potential by end-users and customers.

Without innovation and evolution into these emerging technology areas, the WebSphere brand would probably sustain its existing customer base for some years but would then start going downhill. Customers will always have changing needs and requirements as they adapt to and harness the potential of emerging technologies; if those aren’t catered for, customers will resort to alternative solutions – maybe slowly, but surely!

I’d like to share some highlights based on sessions I attended:

WebSphere MQ in the Mobile world (notably: WMQ support for WebSockets)

WebSockets, as we know, is a two-way (full-duplex) messaging protocol, part of HTML5, which notably provides the facility to push notifications from server to browser (not the pseudo-push that results from polling etc.). Remember MQTT (MQ Telemetry Transport), a simple and lightweight messaging protocol? It has been around for quite a number of years, associated with real-world applications (sensors, machine-to-machine usage etc.), and has been associated with WebSphere MQ since V7.1.

As MQTT.org puts it succinctly: it is a publish/subscribe messaging transport, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimise network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery.
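For anyone who hasn’t met MQTT at the code level, the programming model is deliberately tiny. Below is a minimal publish/subscribe sketch using the open source Eclipse Paho Java client; the broker URL, client id and topic are placeholders, and a WMQ queue manager would need its telemetry channel enabled to accept MQTT connections like this.

    import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
    import org.eclipse.paho.client.mqttv3.MqttCallback;
    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class TelemetrySketch {
        public static void main(String[] args) throws Exception {
            // Placeholder broker address: a WMQ telemetry channel or any MQTT broker.
            MqttClient client = new MqttClient("tcp://mq.example.com:1883", "sensor-42");

            // Receive: print anything published to the monitored topic.
            client.setCallback(new MqttCallback() {
                public void connectionLost(Throwable cause) { /* reconnect logic here */ }
                public void messageArrived(String topic, MqttMessage message) {
                    System.out.println(topic + ": " + new String(message.getPayload()));
                }
                public void deliveryComplete(IMqttDeliveryToken token) { }
            });

            MqttConnectOptions options = new MqttConnectOptions();
            options.setCleanSession(true);
            client.connect(options);
            client.subscribe("building/floor1/temperature");

            // Send: a small reading with "at least once" (QoS 1) delivery.
            MqttMessage reading = new MqttMessage("21.5".getBytes());
            reading.setQos(1);
            client.publish("building/floor1/temperature", reading);
        }
    }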

So, combining the two (MQTT for its reliable and lightweight nature, WebSockets for its full-duplex nature) usually involves doing extra work on both sides (at the least on the client side, if using a messaging server that supports MQTT).

From WebSphere MQ Version 7.5.0.1 the support for WebSocket is built-in!

  • Highly Scalable
  • Secure Push Notifications

This eliminates the need for a client library, so a web browser (or any web app based on HTML5/WebSockets) can now send and receive messages and publish/subscribe directly with WMQ. This effectively gives mobile web apps (based on HTML5/WebSockets) a wider range of options too, for reliable messaging, subscriptions etc., with seamless integration through the messaging server to the whole enterprise and beyond (ESB, Cloud, third-party systems etc.).

Gotchas

  • The browser must support WebSockets – some do and some don’t
  • Some work (scripting etc.) still needs doing on the client side (e.g. in JavaScript) to manage the WebSockets connection. Apparently this JavaScript client library and other client libraries (C, .NET etc.) will be provided by IBM as part of WMQ (so the presenter told me) – a generic sketch of the programming model follows this list.
  • SSL/TLS adds significant network overhead – so use of wss needs careful consideration, particularly in constrained/low-bandwidth scenarios.
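I don’t have the exact wire details of the WMQ WebSocket channel to hand, so the following is only a generic JSR 356 (javax.websocket) client sketch to show the shape of the programming model; the endpoint URI and JSON-ish payload are placeholders rather than the real WMQ interface.

    import java.net.URI;
    import javax.websocket.ClientEndpoint;
    import javax.websocket.ContainerProvider;
    import javax.websocket.OnMessage;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.WebSocketContainer;

    // Generic JSR 356 client sketch; the endpoint URI and payload format below
    // are placeholders, not the actual WMQ WebSocket interface.
    @ClientEndpoint
    public class WsClientSketch {

        @OnOpen
        public void onOpen(Session session) throws Exception {
            // Send something as soon as the full-duplex connection is up.
            session.getBasicRemote().sendText("{\"put\": \"hello from the client\"}");
        }

        @OnMessage
        public void onMessage(String message) {
            // Pushed from the server, with no polling involved.
            System.out.println("Pushed to client: " + message);
        }

        public static void main(String[] args) throws Exception {
            WebSocketContainer container = ContainerProvider.getWebSocketContainer();
            container.connectToServer(WsClientSketch.class,
                    URI.create("wss://mq.example.com:8443/some/queue/endpoint"));
            Thread.sleep(60000); // keep the JVM alive long enough to receive pushes
        }
    }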

Extra Bits

  • A browser-based dashboard with close-to-real-time reporting (a dynamic/active speed-dial display, using the JavaScript WebSockets client library) connecting to a WMQ server and subscribing to a topic (for a selective data feed of system performance characteristics – CPU, memory etc.)
  • The other (more interesting) demo: an Excel spreadsheet with specific cells populated by macros (using the .NET WebSockets client library behind the scenes, I think) providing a similar dynamic/active speed-dial display. An MQ server push ending up in an Excel spreadsheet!
  • Facebook Messenger uses MQTT as its underlying messaging protocol.

WebSphere Message Broker 8 & DFDL Support

A summary of the features, some of which were really interesting (from a value-addition/diversified-usage point of view):

Built-in WebSphere ExtremeScale caching

  • ExtremeScale caching: a grid-based (“infinite hash map”) caching technology that has apparently proved to provide a very high degree of scaling/elasticity for business-class applications.
  • Use of recommended patterns such as the Application State Store pattern (scale-up/scale-down: more suited to a cloud-based solution, improving app server elasticity) and the Side-Caching pattern (for a gateway-style implementation) provides a robust caching facility, eliminating the need for custom/non-standard caching solutions.
  • Caching is available at broker, execution group, flow and node levels – a minimal sketch of how this surfaces to flow developers follows this list.
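As a flavour of how the global cache surfaces to flow developers, the sketch below shows the sort of thing a JavaCompute node can do with the MbGlobalMap API that arrived alongside the embedded ExtremeScale cache; the map name, key and backend call are my own examples and the method names are written from memory, so treat the details as an assumption to verify against the WMB documentation.

    import com.ibm.broker.javacompute.MbJavaComputeNode;
    import com.ibm.broker.plugin.MbException;
    import com.ibm.broker.plugin.MbGlobalMap;
    import com.ibm.broker.plugin.MbMessageAssembly;

    // Side-cache sketch for a JavaCompute node: look in the embedded global cache
    // first, and only fall back to the (placeholder) backend lookup on a miss.
    public class CustomerLookupNode extends MbJavaComputeNode {

        @Override
        public void evaluate(MbMessageAssembly assembly) throws MbException {
            MbGlobalMap cache = MbGlobalMap.getGlobalMap("customerCache");

            String customerId = "C0042"; // in real life, extracted from the inbound message
            Object details = cache.get(customerId);

            if (details == null) {
                details = lookUpInBackendSystem(customerId); // placeholder for the slow path
                cache.put(customerId, details);
            }

            // ... build the outbound message from 'details' ...
            getOutputTerminal("out").propagate(assembly);
        }

        private Object lookUpInBackendSystem(String customerId) {
            return "name=Example Customer"; // stand-in for a database or service call
        }
    }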

Worklight integration, provision of patterns for mobile services and Toolkit support for converting a WMB service to a mobile service, e.g. a mobile service created using a specific pattern can receive data in JSON and convert it to native broker format.

.NET Integration

  • Ability to directly invoke .NET programs through the CLR
  • C#, VB.NET natively added to WMB
  • .NET Assemblies into BAR files

Patterns-based development – provides a set of pre-built templates (based on best practice) and the ability to create user-defined patterns for reuse. Also, a web-based pattern generation facility allows end users (with not much knowledge of WMB development) to configure and deploy these patterns directly onto the broker.

DFDL Parser & Modelling Support: DFDL (Data Format Description Language) is a modelling language for describing text and binary data in a standard way (as data formats). It is quite powerful, as a variety of data formats (including industry-standard formats) are supported, and it can be used for text, binary and bit-level data; fixed-length, delimited, patterned and bi-directional data; nil and out-of-range values etc.

As it is a standard, a DFDL format is portable, editable and can be generated using tools – similar to XSD for XML data. DFDL builds upon W3C XML Schema 1.0 and uses a subset of XML Schema elements to model non-XML (text/binary) data.

MRM Message Sets were used in earlier WMB days to model text/binary data. DFDL Support in WMB provides a powerful & standardised way to model & process data.

  • DFDL Parser (On-demand & streaming) is available for both ESQL & Java nodes.
  • Graphical Editor with Guided Wizards for DFDL modelling in Message Broker Toolkit.
  • The DFDL schema is deployed as part of the BAR file onto the broker, so there are no separate dictionary files to manage.

Other WMB features worth noting:

  • Data lineage analysis and cross-tool data analysis using InfoSphere Metadata Workbench
  • Improved Web Administration Console to send control commands (If my guess is right this should be using MQTT behind the scenes) with Role based access. Also, provides a public REST based management API.
  • Built-in Audit of Messages/data flowing through – persistence to DB2 and Oracle supported – MS SQL Server not supported.
  • Built-in Record, Capture & Replay Messages/Data facility using single or Multiple Brokers.
  • Simplified Cloud Provisioning through IBM PureSystems.

…and many more!

PS  I’ve also posted this on my personal blog

Please Rate and Like this blog.  If you can, please leave a Comment.

Flooded
Photo by East Coast Main Line

I have had the luck to attend the WUG at the Royal Society of Edinburgh on George Street again this year. This is a bi-annual event hosted in Edinburgh in the Autumn and, from this year, at IBM’s facilities on the Southbank in London in the Spring.

The good luck was in attending at all, when maybe a third of people stayed away because of the bad weather; the bad luck was that the East Coast main line flooded on the way down, causing a very late return home.

There were a few interesting sessions, including one on the Worklight acquisition for developing mobile applications onto a variety of target devices, including iOS and Android. Possibly more on that later. There was also a good session by Alan Chambers on sample use-cases for WebSphere eXtreme Scale, which is a distributed in-memory caching technology. This is an interesting area which merits further attention. The slide decks for the various sessions, including ones I could not get to, are on the WUG site.

David Sayers of MidVision also gave a talk about DevOps, the set of disciplines for bringing development and operations closer to each other. Although MidVision supply a tool in this space, David was keen to stay away from specific tools, saying that there is no magic bullet and that it’s about process and people too.

A phrase which struck a chord with me went something like: “many firms don’t want to make a change in a production system because ‘Steve’ is on holiday and he’s the only person who understands this”.

It’s a spooky coincidence, as we have just published a development policy, as part of our policy refresh, stating that all environments, and deployments to those environments, should be 100% automated.

The presentation I want to elaborate on a bit this time is “How Lightweight is the [WebSphere] Liberty Profile?”, the Liberty profile being part of WebSphere Application Server (WAS) 8.5.

Simon Maple (Twitter @sjmaple) – one of IBM’s technical evangelists on WAS – explained that this profile is an OSGi-based application server kernel which only loads up libraries and subsystems as you need them. The end result is a *very* lightweight application server.

So much so, that the session involved showing the application server running on a Raspberry Pi (£20-odd computer, the size of the palm of your hand, delivered as a circuit board).

To follow this up Simon then started up a WAS instance on his older Android phone, which then served up a blogging application via the phone’s wireless hotspot. I connected to it with my phone and posted “Amazing!” (yes, Oscar Wilde won’t be looking over his shoulder), which then showed up on his monitor, along with some more imaginative posts.

I have the tooling, which was provided on a memory key; it is in our “info” shared area for any Smarties to download.

The Liberty profile tooling (Eclipse plugins) even runs on a Mac, along with the dev runtime. Even though this runtime is not supported in production on the Mac, it is a pretty major step for IBM. I would not have imagined it five years ago.
In terms of production use though, the Liberty profile WAS is still a standard WAS install from the perspective of licensing… though I’m not sure how many PVUs a Raspberry Pi has.

IBM also have a new Head of WebSphere Software, Stuart Hemsley, who was keen to get feedback from the delegates, both by announcement at the keynote, and by walking around during the breaks.

Our feedback was that the application server just costs too much compared to the likes of Tomcat and JBoss, and includes technologies which are starting to be less relevant (e.g. session clustering) as application architectures move towards session-statelessness. Yes, you would expect to pay a premium for a big-league vendor-supported product, but not 10x as much.

It would be a shame for IBM to lose out on market share because of pricing, when they provide excellent tooling and support, as shown by a session on performance tuning the JVM… but that (as they say) is another story.

I also had the opportunity to attend the WebSphere User Group (WUG) meeting on 23rd March 2011 at Bedfont Lakes. The WUG is a very popular topic amongst colleagues at Smart421 as it’s a great community.

As someone who doesn’t have much direct WebSphere experience on a day-to-day basis, I was wondering if I would struggle to follow the content. However, I can say that I was pleasantly surprised, as the tracks were designed for a variety of skill levels. There were several talks that any Java developer would find interesting. There were actually 13 streams running over the course of the day. Many focused on specific IBM products (as you’d expect) but some also covered more general topics such as Java and OSGi.

The first session I attended was the WebSphere Foundation Update and Technical Direction in the WAS 1 stream. This session by Ian Robinson gave an overview of forthcoming features in WAS v8. While this was a very WAS-specific session, it also provided useful updates on several areas in the J2EE space. To download the slides, click here.

The second session I attended was in the WAS 2 stream, on JAX-WS 2.2 and JAX-RS 1.1 support in WebSphere Application Server Version 8.0 Beta. The presenter, Katherine Sanders, a software engineer at IBM Hursley, gave a very good introduction to these two technologies without being tempted to delve into a lot of overly-heavy WAS-specific details. To download the slides, click here.
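To give a flavour of why JAX-RS attracted so much interest, here is a bare-bones JAX-RS 1.1 resource of the kind WAS v8 can host natively; the path and JSON representation are invented purely for illustration.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // A minimal JAX-RS 1.1 resource: annotations replace the servlet plumbing.
    // GET /accounts/12345 would return a small JSON representation.
    @Path("/accounts/{accountId}")
    public class AccountResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public String getAccount(@PathParam("accountId") String accountId) {
            // Hand-rolled JSON keeps the example dependency-free; a real service
            // would bind an object via JAXB or a JSON provider.
            return "{\"accountId\": \"" + accountId + "\", \"status\": \"OPEN\"}";
        }
    }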

The third session I attended was given by Simon Cashmore, a Lead Engineer within the Global Middleware team at Barclays Bank. This talk, Changing the way Java Application Hosting is delivered at Barclays Bank, stood out by a mile as it was the only session in the Customer stream (c’mon WUG Committee, more like this please). It was informative because it focused on Barclays’ new approach to hosting Java applications. Barclays have essentially built their own collection of virtualised WAS instances that can be made available in days rather than weeks or months. Previously, projects would buy brand new hardware that was not shared or reused, so costs and timescales were sky high. Now they have a shared resource that can be used and reused much more efficiently – and more cost effectively. I’m sure Barclays shareholders will be very pleased to hear that ;o)

The fourth and final session I attended was a talk in the Java stream on Generational Garbage Collection: Theory and Best Practices. This was focused on how the IBM JVM works, but Chris Bailey, a technical architect in the Java Technology Center (JTC) team at IBM Hursley, gave a very detailed description of it which applies to any JVM that implements Generational Garbage Collection. To get a copy of Chris’ slides, click here.
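For anyone wanting to experiment, the generational (“gencon”) collector is selected on the IBM JVM with command-line options along these lines; the heap and nursery sizes below are purely illustrative and would need tuning against real verbose GC output rather than being copied blindly.

    # Illustrative only: generational GC with a 4 GB heap, a 1 GB nursery,
    # and verbose GC logging written to a file for later analysis.
    java -Xgcpolicy:gencon -Xms4g -Xmx4g -Xmn1g \
         -verbose:gc -Xverbosegclog:gc.log \
         -jar myApplication.jar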

So if you’re in doubt about whether you should attend a WUG meeting because you feel you don’t have enough WebSphere experience, then let me reassure you that any Java developer will find something of interest. There were also suggestions of adding a more business-focused stream to future meetings to widen the potential audience even more.

Details of all WUG activities and events can be found here.

WUG 10th Birthday Celebrations, IBM Bedfont 23 March 2011

Members of the WUG Board, past and present, cut the birthday cake. From left to right: Nigel Gale (founding Chairman), Simon Maple (IBM Representative), Alan Chambers (WUG founder and Board member), Chris Mason (Treasurer throughout the WUG's 10 years), and Jonathan Marshall (IBM Representative). Photo by kind permission of Alan Chambers.

On 23 March, over 200 members of the WebSphere User Group UK (WUG) and members of the WebSphere Integration User Group UK descended on IBM Bedfont Lakes, Feltham, UK for the WUG’s spring-time gathering (there are two annual meetings: March at Bedfont, September at Edinburgh). Smart421 was there with one or two of our big guns. More on that in a moment.

As longstanding members of the WUG, we get a lot out of these meetings – perhaps ‘cos we also put a lot in. A significant number of our customer engagements require deep Java skills and several depend on WebSphere technologies in some way or another. Most speakers are IBM-ers, many out of Hursley, or sometimes further afield. Delegates from IBM, end-users of WebSphere and IBM business partners make up the remainder of the rich ‘ecosystem’ that is today’s WUG.

Smart421 Lead Consultant Stu Smith had his proposal selected by the Committee; it carried the catchy little title ‘Software Development Life-cycle with Message Broker in end-to-end SOA’ [Download the slides]. Nevertheless, Stu pulled a bigger crowd than usual, and people seemed to appreciate his content and the very good Q&A session he triggered; for the last session of the day, it was a lively interactive exchange among attendees, who by then probably had their minds on the drinks reception or what they had to do to catch the early train home.

Alan Mangroo, one of our elite tekkies, attended for the educational tracks and was last seen diving in and out of the sessions he had pre-selected. Knowing him, he’ll have made copious notes, so try to make a point of reading his separate blog [posted 08 April, click here].

The WUG has been running for ten years in the UK (yeah… I know!) and the Committee didn’t pass up the opportunity to celebrate with drinks and two rather impressive cakes to mark the occasion. I’ve included a photo, courtesy of Alan Chambers, so you can share the moment with us. Proof – if ever you needed it – that even tekkies have soul, so long as you bring the candles ;o) Actually, I only remember the cute miniature marzipan figures: developers with laptops.

As is often the case, Smart421 ran an on-stand prize draw for a bottle of Bollinger, and appropriately Nigel Gale, the WUG’s first chairman (pictured, far left), was the one who scooped the 1st prize. Good timing, I’d say. Hope you enjoy that, Nigel.

I took a break from the wet weather in the south of England in order to enjoy the wet weather in Edinburgh and attend the WebSphere User Group. For one thing, our Alan Philpott was also going up there to give a presentation on applying SDLC best practice to DataPower devices, and Bharat Bhushan would be presenting on troubleshooting and performance tuning of the devices. I have the latter’s slides for any of our DP rangers who want a walk-through; I will post a PDF of them up on SharePoint.

The WUG had a bit of a wobble last year, when some of the committee (who do their work for ‘free’) found that they no longer had the spare capacity to put in. The committee are still looking for volunteers, and some people questioned whether the WUG would continue to be viable. From the attendance levels of this recent meeting, the degree of organisation, and the quality of the presentations, you would not notice any issues.

There was an interesting set of presentations (full details here), but the stuff that caught my eye was the OSGi stream. The OSGi framework (the dynamic module system for Java) has been around for a few years, not least as the underpinning technology behind Eclipse since version 3. There is a good definition of it on Wikipedia. I have seen it before in a container used to host the Apache ServiceMix ESB (a.k.a. FUSE 4). Adrian Trenaman gives a good run-through here.

What has been interesting is seeing this technology hitting the mainstream (IBM) and being taken seriously there. I attended two presentations, one covering the tooling support in Rational Application Developer (RAD) 8, and the other covering the support for OSGi in WAS 8 (previously available as a feature pack for WAS 7.0). The interesting part was seeing that the technology is being incorporated for simple pragmatic reasons (the ability to load different versions of libraries into the same JVM to be used by different applications, and lowering memory footprint by sharing libraries amongst several applications) rather than as some new-age magic.
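For anyone who hasn’t met OSGi at the code level, a bundle is just a jar with some extra manifest metadata and, optionally, an activator class. The sketch below (all names invented) shows how small the programming model is, and the versioned Import-Package header hints at how two versions of a library can coexist in one JVM.

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // A trivial bundle activator. The bundle's MANIFEST.MF would declare, e.g.:
    //   Bundle-SymbolicName: com.example.greeter
    //   Bundle-Version: 1.0.0
    //   Bundle-Activator: com.example.greeter.Activator
    //   Import-Package: org.osgi.framework;version="1.5.0"
    // Versioned Import-Package/Export-Package headers are what let two versions
    // of a library coexist in the same JVM for different applications.
    public class Activator implements BundleActivator {

        @Override
        public void start(BundleContext context) {
            System.out.println("Greeter bundle started in framework " +
                    context.getProperty("org.osgi.framework.version"));
        }

        @Override
        public void stop(BundleContext context) {
            System.out.println("Greeter bundle stopped");
        }
    }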

I have grossly over-simplified in the above, but it may be a prompter for anyone who is interested to follow it up.

The other major new thing for me was WebSphere eXtreme Scale (WXS). On first seeing the title of the presentation, I thought that it would be a re-branding of either WAS ND or WAS XD. Wrong! “It’s just a cache”, as Jonathan Marshall announced at the beginning of his presentation – hugely underselling the use of caching technologies, and the product itself – before going on to show their benefits in his talk. Having recently been involved in two use-cases for caching technologies (dynamic content management and web-session management) in hugely scaling environments – both situations where databases struggle and application server session-sharing mechanisms struggle (the old n^2 communication overhead as the number of app servers increases) – I could appreciate the problems that the WXS product is trying to solve.

WXS is not the only player in this space. As well as other commercial offerings (about which I had previously heard of issues) there are several open source alternatives (e.g. memcached), but since the general area is very new, not all of the products implement all of the features (e.g. active standby); indeed there is probably not yet a common consensus on what all the features are. I will distribute the slides on the internal SharePoint site; they should become available on the WUG site in due course. The take-away news for me is that IBM now have a seemingly very capable product in this area, which merits investigation in some of my existing scenarios, and also in new scenarios such as using the cache as a system of record.
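To make the “it’s just a cache” point concrete, the client-side programming model looks roughly like the sketch below. I have written this from memory of the WXS ObjectGrid API, so the catalog endpoint, grid and map names are placeholders and the exact method signatures should be checked against the product documentation.

    import com.ibm.websphere.objectgrid.ClientClusterContext;
    import com.ibm.websphere.objectgrid.ObjectGrid;
    import com.ibm.websphere.objectgrid.ObjectGridManager;
    import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
    import com.ibm.websphere.objectgrid.ObjectMap;
    import com.ibm.websphere.objectgrid.Session;

    // Rough sketch, written from memory of the WXS client API: connect to a
    // catalog server, get a named grid and map, and use it like a distributed
    // hash map. Endpoint, grid name and map name are placeholders.
    public class SessionCacheSketch {
        public static void main(String[] args) throws Exception {
            ObjectGridManager manager = ObjectGridManagerFactory.getObjectGridManager();
            ClientClusterContext ccc = manager.connect("catalog.example.com:2809", null, null);
            ObjectGrid grid = manager.getObjectGrid(ccc, "SessionGrid");

            Session session = grid.getSession();
            ObjectMap sessions = session.getMap("httpSessions");

            // Store and retrieve web-session state without n^2 app-server chatter.
            sessions.upsert("JSESSIONID-abc123", "serialised session state");
            Object state = sessions.get("JSESSIONID-abc123");
            System.out.println("Cached state: " + state);
        }
    }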

So even after a few years of going along to the WUG, it still has the ability to make me sit up and take notice! Long live the…

If you are interested in finding out about best practices and what is involved in managing an IBM WebSphere DataPower deployment, day in, day out, then my colleague Alan Philpott will be presenting on this subject at the WebSphere User Group meeting in Edinburgh on Tuesday 28th September. For details of the venue etc., see here.

Alan has spent the last 18 months industrialising a huge business-critical DataPower estate, and will be using Subversion etc. to walk through some examples and lessons learned, what kind of support processes you need in place, and so on.

The head of our WebSphere practice, David Taylor, will also be in attendance – manning the stand. Quiz him about his experiences using Amazon Web Services (AWS) – he’ll love it.

It’s been a while since Oracle’s spending spree – including Sun and BEA – something that I’m still in shock about. To get some clarity on the SOA offerings from Oracle, a colleague and I recently attended an Oracle partner day in Reading. As the day was run by two presales guys, we got an interesting insight into what they were picking out as the differentiators against the other main player, IBM – some of their opinions are reflected below, all of which I take with a pinch of salt.

Fusion Middleware

Oracle Fusion Overview

After everybody nodded to the question “does everybody know what SOA is?” the instructors continued on to tell us about SOA for a couple of hours. Nevertheless there was an interesting slide in this period with the average project cost and staff size for SOA projects; basically, the message was that ‘too big’ is doomed and ‘too small’ will just fade away. They also noted that the audience and the questions asked were testament to how the market has matured: their presentations have moved from ‘how it works’ to ‘how to roll it out and support it’.

One contradiction I noted was how often they talked about the importance of interoperability and the use of industry standards while at the same time really playing up how well integrated the Oracle SOA Suite 11g is. Perhaps this itself isn’t a contradiction – it’s more the incongruity of vendors on the one hand selling you standardisation while at the same time tut-tutting and teeth-sucking at the idea of using open source in some instances.

From a product perspective the day was a really useful insight into how the suite has evolved and into what’s new and exciting about Oracle SOA Suite 11g (R1 patchset 2 – the third big release in a year). Here are some specific notes I made, which may or may not make sense:

  • No prizes for guessing that the winner of the application server shoot-out between Oracle App Server and WebLogic was… WLS. The current version of SOA Suite won’t support running on WebSphere or JBoss AS, but this is on the roadmap.
  • One important piece of consolidation with the release is that SOA Suite and Oracle Service Bus can now run on the same application server platform (previously in 10G you needed different versions/instances).
  • AquaLogic ESB is now called Oracle Service Bus (OSB) and fulfils the ESB function. The product formerly known as Oracle ESB is actually still part of the product set – but is called Mediator. There is clearly some overlap between the products, but Mediator should be used only for basic routing and transformation tasks.
  • BPM Suite for 11g is the completely re-written AquaLogic BPM, which is for business modelling, supplied as part of the Dev Suite; it includes some coding ability and allows closed-loop development (BPM is not as powerful or feature-rich as the BPA Suite and not intended as a competitive product). The BPA Suite is pure modelling.
  • BPA Suite is targeted at analysts and contains 3 licensed modules of the ARIS product from IDS Scheer. The message was clearly made that Oracle don’t feel it is their business to create business process design software. I think this was a poke at IBM, who recently bought Lombardi. BPA is much cheaper than the pure-plays (the example he gave was Pega PRPC) but delivers on the promise of being able to integrate it rather than run it as a homogeneous blob.
  • List of what’s included in SOA Suite: OSB, Adapters, B2B, BPEL, BAM, EM, WSM, Service Reg, JDeveloper, CEP – maybe I missed a couple – all running on top of WebLogic with AQ/JMS and JRockit.
  • BPM, BPA Suite and Enterprise Repository (ER) are licensed separately, the latter under the ‘governance suite’. The ER assists governance from an assets perspective and provides the repository to store all artefacts.
  • The process server now supports BPMN 2.0 as well as the existing BPEL. There is an aim to provide BPMN round-tripping.
  • Oracle have invested a lot in the adapters – making them all JCA compliant again (BEA had focussed on making them fast rather than standardised). There are lots of them, and there are lots of partners providing even more (Attunity and iWay were mentioned).
  • Event Driven Architecture, aka the Event Delivery Network, is JMS based of course – thankfully on WebLogic JMS and not Oracle AQ. Apparently you can use another JMS provider if you want, as there is a JMS bridge (though I’m not sure that is really an accurate answer to my question).
  • The B2B product is really just an endpoint and not B2B network provider software
  • Oracle Business Rules (OBR) is a basic product – again the distinction was drawn between IBM buying ILOG JRules and OBR. Oracle have no desire to create BPMS software, which they say can hide implicit process within complex business rule structures. However, they have added handling for the basics of decision tables. OBR is integrated into JDeveloper.
  • Oracle Policy Automation (OPA) provides a sophisticated tool for managing your internal policy-related data, e.g. VAT.
  • Application Integration Architecture (AIA) is all about making the Oracle Applications harmonious with the SOA Suite and providing reference content, while Process Integration Packs (PIPs) are about providing standard processes across these applications, rather than you developing your own ‘order to pay’. Another distinction was drawn with IBM – who “don’t do applications”. Although predominantly for Oracle Applications currently, the architecture is intended to support any cross-application integration (e.g. connectivity to SAP http://www.oracle.com/applications/aia-plm-to-sap-erp-data-sheet.pdf). A list of the currently released PIPs is at http://www.oracle.com/us/products/applications/application-integration-architecture/054232.html
  • JDeveloper is the development tool and is very well integrated and includes Unit Test Framework in 10.1.3.

As I said before – if you noticed the Oracle instructors’ references to IBM above – the competition between the two application integration stacks is clearly fierce. A couple of other points were made which I regurgitate for the reader to make up their own minds: IBM products aren’t well integrated like the Oracle suite now is, and Rational are holding back BPMN development within the System Architect tool.

It’s been a while since I have attended one of these so I went with an element of excited anticipation…honest :-)

In truth it was good to have a day away from the office and client site to refresh my view on as many WebSphere-related topics as I could cram into one day. Another important facet of attending the event was to ‘man’ the Smart421 marketing stand in between the various sessions with one of our Lead Consultant colleagues, as part of our long-standing commitment and relationship with the WUG. This is always a bit of a daunting prospect but I saw it as a way of ‘dip-sticking’ the current IT temperature gauge – I was kind of expecting some stumbling conversations around the Cloud, but there were more conversations around SOA… a debate for another time, but perhaps SOA is still more real for many organisations at the moment as they continue to experience the pain of their SOA journeys.

The breadth of the sessions available is a real attraction of these events and, to me, adequately justifies the time and cost of attending. We weren’t disappointed by the topics up for discussion. I went to 4 sessions consisting of:

  • Business Process Management: Collaborate, Iterate, Refine, Validate – by Waverney Croson, Consulting IT Specialist, IBM
  • Java 6 Unleashed: Tuning the IBM JVM – presented by Chris Bailey, IBM Java Technology Center, Hursley Lab
  • Learning from Other People’s SOA Experiences – presented by John Moe, Head of Integration Services, Tori Global
  • WebSphere Message Broker V7 Introduction and New Features – presented by Dave Page, IBM Consulting IT Specialist

All the presentations I attended were well prepared and well presented, but the one that stood out for me in the end was ‘Learning from Other People’s SOA Experiences’. There has been much debate on the subject of SOA generally, and much discussion amongst our consultants in Smart421: where it is today, where it is heading and how it relates to the emergence of the Cloud. Anyway, back to the presentation… the slides rolled through and there were a few gems that I thought were worth repeating. The presenter clearly had lived and breathed this stuff and had also been burnt by it; his opening gambit was ‘it’s not easy’! He stated that many large organisations had good messaging architectures (hub and spoke) and that moving to a SOA architecture (n-layer) wasn’t necessarily a natural next step; “just because you can doesn’t mean you should”.

Some reasonable advice was to begin your SOA evolution in small chunks, and to stay with the core capability of the product you are using to help avoid vendor lock-in around some obscure functionality only available in that product. He did say that the products in this area were far more mature than a few years ago, but his strong advice was always to use vendor support to help you through the evolution process. He mentioned that the original implementations of most of the products in this area were embarrassingly poor, and this reminded me of some work with a client some time ago (circa 2006) on an early enterprise version of one of these products – it would have been overly generous to even suggest it was ready for ‘beta’ trialling. Some of the other challenges he mentioned were around the adoption of service-based development practices: organisations entering their SOA evolution should initially look to use agile methodologies to deliver services where tight control could be maintained; when the processes were more mature this could move to a waterfall methodology and be offshored. The definition he used for determining SOA maturity was one around reuse; a rough figure of 30-40% reuse of services would represent a mature SOA architecture.

An interesting debate was around governance and it was made clear that this was one of the key barriers to success. This developed into an interesting discussion about versioning of services; this has been raised before on a client site and I was expecting a concise and polished answer but there wasn’t one. The answer given was that there wasn’t a tool out there to help with effective versioning of production services and typically the service would be split into 2 separate production services. He justified this by saying that as soon as a service goes live it effectively becomes legacy and there should be an acceptance that it will be difficult to change due to the potential impacts on consuming platforms. This is certainly something I have seen in practice but I wasn’t expecting that to be the norm!

The session that I was most eager to attend, based on the original order of the day’s events, was the Business Process Management (BPM) one. I see a lot of need within organisations to manage their processes more efficiently and realise the cost savings that just a few key improvements can often bring. The presentation didn’t disappoint, but I was left trying to work out how an organisation could effectively adopt such an array of tools in this area. It may be a reflection of some of the clients I have worked with, but it seems to me to require such a large amount of transformation within an organisation that it is perhaps too big a shift to bite off in one go. Maybe the answer is in the question: concentrate on implementing the products/tooling (let’s ignore the cost and vendor lock-in issues for the purposes of this discussion), target a particular problematic or high-value process and just deliver it. If the delivery is successful (and of course it should be, because you will have heavily weighted the first implementation in your favour) then get the recognition and buy-in from other areas and roll out the adoption of the tool and the other requisite skills, training etc. across more and more of your organisation. Perhaps to convince me, I need to attend a ‘Learning from Other People’s BPM Experiences’ next time :-)

IaaS, PaaS, SaaS and now introducing SCaaS…. Public Cloud Computing gets Real

… for me at least. And it turns out that it has been real for some firms for several years.

I attended a session held by IBM and Amazon Web Services (AWS) today at Hursley labs in Hampshire. The title of the session was: “Software as a Service: Getting started with IBM software on Amazon Web Services featuring WebSphere sMash & DB2”.

The name kind of disguised what a mind-expanding day it was going to be. There were two major aspects to the day:

  • that the commercial model for cloud computing is alive and well, and that we are at the beginning of a brand new ecosystem. This is as big as the first phase of the world wide web was, and bigger than web 2.0 by far
  • the hands on proof that all the parts worked.

AWS is a provider of IaaS and, as I found out by reading the Ts and Cs when signing up for my AWS account, Amazon also has a subsidiary, Alexa, which provides Service Components as a Service (call it SCaaS or CaaS for short) – notably search and indexing.

I won’t dwell on those here; instead we will look at two of the major offerings of AWS, namely the Simple Storage Service (S3) and the Elastic Compute Cloud (EC2). There are quite a few other offerings from AWS, which are interesting if you are using IaaS, but the core offerings illustrate why IaaS is more than a flash in the pan.

S3 is simply the provision of secure storage. Secure in the sense that it is private, and in the sense that it is persisted across multiple data-centres.

EC2 is the ability to provision (i.e. not just turn on, but create) virtual machines – specifically various flavours of Intel machines, running various OSs – within minutes, and then just as quickly to be able to release the machines. An AWS virtual machine is called an Amazon Machine Instance (AMI).

The story of how Amazon came to be providing infrastructure on demand, originally for its internal consumption, arises out of their desire to remove ‘undifferentiated heavy lifting’, e.g. how do we provide secure storage? (now answered by S3). They wanted to save each team from having to solve similar heavy-lifting problems, and let them concentrate on adding value. The end result is that the one IaaS provider I now know something about is already in a position to provide cheaper infrastructure to most blue chips than any internal IT department – including those which are run by outsourcing outfits. AWS certainly will not be the only game in town; we can expect more players to join the market, which will make cloud IaaS offerings even more competitive:

  • AWS have a large bank of hypervisors (hardware on which virtual machines can run), and have sorted out the provisioning of Virtual Machines to those hypervisors such that:
    • The provisioning is controlled by the user, either through a web gui, or via web-services invocations
    • The provisioning can be billed by the hour
    • The provisioning can be automated to provide scaling on demand
  • The hypervisors (i.e. the actual hardware) tend to run at about 80% utilisation. That is about sixteen times the roughly 5% utilisation of most in-house IT shops, which means that AWS has one sixteenth of the cost of hardware, power, space, and cooling for any given set of software running.

Before I went to the session, I was aware that one of the benefits of cloud IaaS was that it was on-demand. I had no idea that the costs for running the machines 24×7 would be so much cheaper than most blue-chip’s IT shops.

The lab sessions were set up around the use of WebSphere sMash and IBM DB2 9.2 CE all running on virtual machines called Amazon Machine Instances (AMIs) in the EC2.

This is very much like the session I had a couple of weeks ago with the WebSphere CloudBurst appliance. CloudBurst makes it easy to deploy multiple instances of WebSphere Application Server (WAS) to a set of hypervisors (machines which run as hosts to one or more virtual machines). The CloudBurst appliance was very slick in its operation, and a big step towards virtualisation and re-use within a datacentre.

Setting up all of this virtualisation (i.e. the hypervisor estate on which to install and un-install the various virtual machines) within an IT shop requires a fair amount of capital expenditure, not to mention time.

The big difference between the CloudBurst session and this session is that the hypervisors are already there, ready to be used in the Amazon EC2 cloud: the capacity already exists. No capital investment required. No lead time. The labs allowed us to set up and instantiate the AMIs via a web GUI with little fuss. A web-services API is also provided for this, with the relevant (X.509 and other) security mechanisms, and there is a vibrant developer community writing libraries in various languages to invoke these web-services.
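As an illustration of what provisioning by web-services invocation looks like from code, here is a sketch using the (much later) AWS SDK for Java rather than the raw API we used on the day; the AMI id and instance type are placeholders, and credentials/region come from the SDK’s default configuration.

    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
    import com.amazonaws.services.ec2.model.RunInstancesRequest;
    import com.amazonaws.services.ec2.model.RunInstancesResult;
    import com.amazonaws.services.ec2.model.TerminateInstancesRequest;

    // Sketch using the AWS SDK for Java (v1): start one instance from an AMI,
    // then release it again when done. AMI id and instance type are placeholders.
    public class Ec2ProvisioningSketch {
        public static void main(String[] args) {
            AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

            RunInstancesRequest request = new RunInstancesRequest()
                    .withImageId("ami-0123456789abcdef0")
                    .withInstanceType("t2.micro")
                    .withMinCount(1)
                    .withMaxCount(1);

            RunInstancesResult result = ec2.runInstances(request);
            String instanceId = result.getReservation().getInstances().get(0).getInstanceId();
            System.out.println("Provisioned instance " + instanceId);

            // Just as quickly, hand the capacity back again when finished.
            ec2.terminateInstances(new TerminateInstancesRequest().withInstanceIds(instanceId));
        }
    }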

The AMIs come in a variety of flavours in terms of base operating system: Windows, various Linux distros, OpenSolaris. The second innovation is that commercial software is also available by the hour, just by paying a bit more on the hourly rate. So an AMI with a ‘free’ OS is cheapest ($0.0095 /hr at the time of the course, and now $0.0085 /hr), while Windows costs a bit more (currently $0.120 /hr). All of the AMIs come with the necessary software to work with the AWS infrastructure – for instance to be able to capture snapshots of themselves (after configuration) for further cloning, or the capability to connect to S3 storage.

IBM is also taking part in that it is offering AMIs with various of its software products. Often the developer version is ‘free’ in that it costs the same as the minimum AMI cost with a bare OS. Then when moving to production the paid-for version is available as a higher cost AMI (say $0.36 /hr for express edition DB2, or $1.25 /hr for the workgroup edition). Similar terms exist for other IBM products (WAS, WebSphere Portal Server, sMash). This is a major step forward, as it allows the complete stack to be paid for on demand. IBM also allows you to bring your existing licenses to the infrastructure running on AWS.

It is worth noting that although IBM is a frontrunner, it is not alone in working with AWS in this way. Microsoft and Oracle also have database instances available, for example.

In summary, by looking at AWS and IBM (so we can expect the market to be at least as good as this) we can see that:

  • Cloud IaaS offerings are already here making the infrastructure offerings of in-house IT look expensive.
  • The management of that infrastructure by the hour, and the ability to pay by the hour (no capital expenditure) is compelling, especially for those organisations which have highly variable load
  • AWS have a good story around data protection, Sarbanes-Oxley and other FUD-inducing regulatory measures
  • Value added ‘heavy lifting’ like storage, and load-balancing is greatly simplifying development and deployment (users do not have to be Cisco and EMC black-belts), thus reducing development costs.
  • There will be a growing number of value add components, e.g. card merchant services, search engine in the cloud – such that it will soon be the case that many large solutions have more components ‘in the cloud’ than they would have ‘in our data-centre’.

So now I am going to stick my neck out and say that within three years there will be no large organisation that does not investigate cloud IaaS for new infrastructure installs (including refresh operations); and within ten years 50% of all large organisations (£100m turnover +) will have at least 50% of their data-centres ‘in the cloud’. The proportion will be higher (say 80%) for smaller organisations.
