ESB



IBM Forum, London, designed by HOK (www.hok.com)
Photo courtesy of HOK

Integration architects and solution designers interested in IBM's WebSphere technology stack converged once again, this time for the Spring (!) 2013 WUG gathering at the IBM Forum Centre, South Bank, London, on 21 March. I was one of them.

Trends in the WebSphere Ecosystem – Innovation/Evolution

From my perspective, WebSphere seems to be holding its ground well, mainly by taking innovation and emerging technologies in its stride: catering for the mobile and cloud areas as well as fulfilling existing customers' and partners' needs by adding features. Participants showed good interest in the applicability of WebSphere technologies in the mobile world.

To be honest, I'm neither optimistic nor pessimistic about this: the promise shown by the WebSphere brand still needs to be realised to its full potential by end users and customers.

Without innovation and evolution into these emerging technology areas, the WebSphere brand would probably sustain its existing customer base for some years, but would then start to go downhill. Customers will always have changing needs and requirements as they adapt to and harness the potential of emerging technologies; if those needs aren't catered for, customers will resort to alternative solutions – maybe slowly, but surely!

I’d like to share some highlights based on sessions I attended:

WebSphere MQ in the Mobile World (notably: WMQ support for WebSockets)

WebSockets, as we know, is a two-way (full-duplex) messaging protocol that is part of HTML5 and notably provides the facility to push notifications from server to browser (not the pseudo-pushes that result from polling). Remember MQTT (MQ Telemetry Transport), a simple and lightweight messaging protocol? It has been around for quite a number of years, associated with real-world applications (sensors, machine-to-machine usage etc.), and has been part of WebSphere MQ since V7.1.

As MQTT.org puts it succinctly: it is a publish/subscribe messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimise network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery.

So, combining the two (MQTT for its reliable, lightweight nature and WebSockets for its full-duplex nature) usually involves doing extra work on both sides – at the least on the client side, if you are using a messaging server that supports MQTT.

From WebSphere MQ Version 7.5.0.1, WebSocket support is built in:

  • Highly Scalable
  • Secure Push Notifications

This eliminates the need for a client library, so a web browser (or any web app based on HTML5/WebSockets) can now send and receive messages and publish/subscribe directly against WMQ. It also gives mobile web apps (based on HTML5/WebSockets) a wider range of options for reliable messaging, subscriptions and so on, with seamless integration through the messaging server to the whole enterprise and beyond (ESB, cloud, third-party systems etc.).
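
To make the publish/subscribe flow concrete, here is a minimal sketch using the Eclipse Paho MQTT client for Java rather than the JavaScript/WebSockets library discussed above – the connect/subscribe/callback shape is the same either way. The host name, port, topic and client id are invented for illustration, and it assumes an MQTT (telemetry) channel has been configured on the queue manager; a browser client would use the JavaScript library over a ws:// or wss:// connection instead.

```java
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class CpuDashboardSubscriber {

    public static void main(String[] args) throws Exception {
        // Hypothetical host, port and topic -- substitute the MQTT/telemetry
        // channel details of your own queue manager.
        String brokerUri = "tcp://mqhost.example.com:1883";
        String topic = "SYSTEM/PERF/CPU";

        MqttClient client = new MqttClient(brokerUri, "dashboard-01", new MemoryPersistence());

        client.setCallback(new MqttCallback() {
            @Override
            public void connectionLost(Throwable cause) {
                System.err.println("Connection lost: " + cause);
            }

            @Override
            public void messageArrived(String topic, MqttMessage message) {
                // Each publication pushed by the queue manager lands here -- this is
                // the server-push behaviour the browser dashboard demos rely on.
                System.out.println(topic + " -> " + new String(message.getPayload()));
            }

            @Override
            public void deliveryComplete(IMqttDeliveryToken token) {
                // Not used by a subscribe-only client.
            }
        });

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);

        client.connect(options);
        client.subscribe(topic, 1); // QoS 1: at-least-once delivery
    }
}
```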

Gotchas

  • The browser must support WebSockets – some do and some don't.
  • Some work (scripting) still needs doing on the client side (e.g. JavaScript) to manage the WebSockets connection. Apparently this JavaScript client library, and other client libraries (C, .NET etc.), will be provided by IBM as part of WMQ (so I was told by the presenter).
  • SSL/TLS adds significant network overhead – so the use of wss needs careful consideration, particularly in constrained/low-bandwidth scenarios.

Extra Bits

  • A browser-based dashboard with close-to-real-time reporting (a dynamic speed-dial display, using the JavaScript WebSockets client library), connecting to a WMQ server and subscribing to a topic (for a selective data feed of system performance characteristics such as CPU and memory).
  • The other (more interesting) demo: an Excel spreadsheet with specific cells populated by macros (using the .NET WebSockets client library behind the scenes, I think), providing a similar dynamic speed-dial display. An MQ server push ending up in an Excel spreadsheet!
  • Facebook Messenger uses MQTT as its underlying messaging protocol.

WebSphere Message Broker 8 & DFDL Support

A gist of the features, some of which were really interesting (from a value-addition / diversified-usage point of view):

Built-in WebSphere eXtreme Scale caching

  • eXtreme Scale caching: a grid-based ("infinite hash map") caching technology that has apparently proved to provide a very high degree of scaling/elasticity for business-class applications.
  • Use of recommended patterns such as the Application State Store pattern (scale-up/scale-down: better suited to a cloud-based solution, improving app server elasticity) and the Side-Cache pattern (for a gateway-style implementation) provides a robust caching facility, eliminating the need for custom/non-standard caching solutions (a minimal sketch of side-caching follows this list).
  • Caching is available at broker, execution group, flow and node levels.
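
As referenced in the list above, here is a minimal, vendor-neutral sketch of the side-cache pattern: check the cache first, and only call the backing system of record on a miss. All class and method names are invented for illustration; in a WebSphere eXtreme Scale deployment the map would be a remote ObjectMap obtained from a grid session rather than the in-process ConcurrentHashMap used here to keep the sketch self-contained.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Side-cache sketch: the mediation/gateway checks the cache first and only calls
 * the backing system of record on a miss, then populates the cache for next time.
 */
public class CustomerSideCache {

    /** Stand-in for the real backend call (database, service, packaged app, etc.). */
    public interface CustomerLookup {
        String findProfile(String customerId);
    }

    // In WebSphere eXtreme Scale this would be a remote ObjectMap from a grid
    // session; a local ConcurrentHashMap keeps the sketch self-contained.
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final CustomerLookup backend;

    public CustomerSideCache(CustomerLookup backend) {
        this.backend = backend;
    }

    public String getProfile(String customerId) {
        String cached = cache.get(customerId);
        if (cached != null) {
            return cached;                               // cache hit
        }
        String loaded = backend.findProfile(customerId); // cache miss: load from source
        cache.put(customerId, loaded);                   // populate for subsequent requests
        return loaded;
    }
}
```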

Worklight integration, provision of patterns for mobile services, and Toolkit support for converting a WMB service into a mobile service – e.g. a mobile service created using a specific pattern can receive data in JSON and convert it to the native broker format.

.NET Integration

  • Ability to directly invoke .NET programs through the CLR
  • C#, VB.NET natively added to WMB
  • .NET Assemblies into BAR files

Patterns-based development – provides a set of pre-built templates (based on best practice) and the ability to create user-defined patterns for reuse. Also, a web-based pattern generation facility allows end users (with not much knowledge of WMB development) to configure and deploy these patterns directly onto the broker.

DFDL Parser & Modelling Support: DFDL (Data Format Description Language) is a modelling language for describing text and binary data formats in a standard way. It is quite powerful: a wide variety of data formats (including industry-standard formats) are supported, and it can be used for text, binary and bit-level data; fixed-length, delimited and patterned data; bi-directional data; nil and out-of-range values; and so on.

As it is a standard, a DFDL model is portable, editable and can be generated using tools – similar to XSD for XML data. DFDL builds upon W3C XML Schema 1.0 and uses a subset of XML Schema elements to model non-XML (text/binary) data.

MRM message sets were used in earlier WMB days to model text/binary data; DFDL support in WMB provides a powerful and standardised way to model and process such data.

  • DFDL Parser (On-demand & streaming) is available for both ESQL & Java nodes.
  • Graphical Editor with Guided Wizards for DFDL modelling in Message Broker Toolkit.
  • DFDL schemas are deployed as part of the BAR file onto the broker, so there are no separate dictionary files to manage.

Other WMB features worth noting:

  • Data lineage analysis and cross-tool data analysis using InfoSphere Metadata Workbench.
  • Improved web administration console for sending control commands (if my guess is right, this uses MQTT behind the scenes), with role-based access. It also provides a public REST-based management API.
  • Built-in auditing of messages/data flowing through – persistence to DB2 and Oracle is supported; MS SQL Server is not.
  • Built-in record, capture and replay facility for messages/data, using single or multiple brokers.
  • Simplified Cloud Provisioning through IBM PureSystems.

…and many more!

PS: I've also posted this on my personal blog.

Please Rate and Like this blog.  If you can, please leave a Comment.

I have been asked many times to help our customers select a product to act as their ESB and have constantly found myself reverting to type, looking at the usual suspects: IBM, Oracle, Red Hat, MuleSoft etc. Recently, however, I have been looking in greater detail at the ESB offering from WSO2, especially given the coverage and ratings from Gartner and Forrester, so when I was asked by Smart421 to attend the WSO2 ESB and Enterprise Integration workshop last Tuesday (6th September) I jumped at the chance.

Arriving at the IET at Savoy Place, London, I was greeted by that fantastic smell of fresh coffee first thing in the morning, which gave me the opportunity to speak to some of the other attendees: representatives from SpecSavers, Aspen Re and SportingBet, there to find out more about WSO2 and the products they offer, as well as other systems integrators like Asteria. Paul Fremantle (CTO of WSO2) was leading the workshop, and I enjoyed his openness and easy presenting style, which I am sure is driven by the passion he has for WSO2 and the products they are building.

What struck me most about the WSO2 ESB during the workshop and demonstrations was its simplicity. WSO2 ESB is a lean, lightweight mediation platform that is based on and enhances the Apache Synapse ESB, and it has been designed for the demands of high-volume SOA implementations. Built on the OSGi specification, WSO2 ESB can be easily customised to specific IT project needs by adding other WSO2 components, providing greater flexibility and agility to meet changing and challenging enterprise demands. WSO2 ESB is also available as ESB-as-a-Service, as part of the WSO2 Stratos cloud middleware platform and the WSO2 StratosLive platform-as-a-service (PaaS) hosted by WSO2, which makes an even stronger case for adopting it.

Paul also talked through some examples of clients using their products, such as British Airways, the US Navy, Microsoft and eBay – with the ESB processing over a billion transactions per day! Couple this with what had been demonstrated during the day and the other products that WSO2 offer, such as the Governance Registry, Application Server and Business Process Server, and you can see the capability available to you immediately. You can see why Gartner and Forrester rate this product so highly, and why I think it is a serious option when considering an ESB – and one I will be pushing more…

I took a break from the wet weather in the south of England in order to enjoy the wet weather in Edinburgh and attend the WebSphere User Group. For one thing, our Alan Philpott was also going up there to give a presentation on applying SDLC best practice to DataPower devices. Bharat Bhushan would also be presenting on troubleshooting and performance tuning of the devices. I have the latter's slides for any of our DP rangers who want a walk-through; I will post a PDF of them on SharePoint.

The WUG had a bit of a wobble last year, when some of the committee (who do their work for ‘free’) found that they no longer had the spare capacity to put in. The committee are still looking for volunteers. Some people questioned whether the WUG would continue to be viable. From the attendance levels of this recent meeting, the degree of organisation, and the quality of the  presentations, you would not notice any issues.

There was an interesting set of presentations (full details here), but the stuff that caught my eye was the OSGi stream. The OSGi framework (the dynamic module system for Java) has been around for a few years, not least as the underpinning technology behind Eclipse since version 3. There is a good definition of it on Wikipedia. I have seen it before in a container used to host the Apache ServiceMix ESB (a.k.a. FUSE 4). Adrian Trenaman gives a good run-through here.

What has been interesting is seeing this technology hit the mainstream (IBM) and be taken seriously there. I attended two presentations: one covered the tooling support in Rational Application Developer (RAD) 8, and the other covered the support for OSGi in WAS 8 (previously available as a feature pack for WAS 7.0). The interesting part was seeing that the technology is being incorporated for simple, pragmatic reasons – the ability to load different versions of libraries into the same JVM for use by different applications, and a lower memory footprint by sharing libraries amongst several applications – rather than as some new-age magic.
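
For anyone who wants to see what that looks like in practice, here is an illustrative OSGi sketch (the bundle, package names and versions are all invented, not anything shown at the WUG): a trivial bundle activator, with the manifest headers that express the version isolation shown in the comment.

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

/*
 * The version isolation is declared in the bundle's MANIFEST.MF rather than in
 * code -- for example (package names and versions are invented):
 *
 *   Bundle-SymbolicName: com.example.orders
 *   Bundle-Version: 1.0.0
 *   Import-Package: com.example.pricing;version="[1.0,2.0)"
 *
 * Another application bundle in the same JVM can import com.example.pricing with
 * a different version range, while genuinely shared bundles are loaded only once.
 */
public class OrdersActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        System.out.println("Started bundle " + context.getBundle().getSymbolicName());
    }

    @Override
    public void stop(BundleContext context) {
        System.out.println("Stopped bundle " + context.getBundle().getSymbolicName());
    }
}
```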

I have grossly over-simplified in the above, but it may be a prompter for anyone who is interested to follow it up.

The other major new thing for me was WebSphere eXtreme Scale (WXS). On first seeing the title of the presentation, I thought it would be a re-branding of either WAS ND or WAS XD. Wrong! "It's just a cache", as Jonathan Marshall announced at the beginning of his presentation – hugely underselling the use of caching technologies, and the product itself – before going on to show those benefits in his talk. Having recently been involved in two use cases for caching technologies – dynamic content management, and web-session management in hugely scaling environments, both situations where databases struggle and application server session-sharing mechanisms struggle (the old n^2 communication overhead as the number of app servers increases) – I could appreciate the problems that the WXS product is trying to solve.

WXS is not the only player in this space. As well as other commercial offerings (about which I had previously heard of issues), there are several open-source alternatives (e.g. memcached), but since the general area is very new, not all of the products implement all of the features (e.g. active standby). Indeed, there is probably no common consensus on what all the features are. I will distribute the slides on the internal SharePoint site; they should become available on the WUG site in due course. The take-away for me is that IBM now have a seemingly very capable product in this area, which merits investigation in some of my existing scenarios, and also in new scenarios such as using the cache as a system of record.

So even after a few years of going along to the WUG, it still has the ability to make me sit up and take notice! Long live the…

It's been a while since Oracle's spending spree – including Sun and BEA – something that I'm still in shock about. To get some clarity on the SOA offerings from Oracle, a colleague and I recently attended an Oracle partner day in Reading. As the day was run by two presales guys, we got an interesting insight into what they were picking out as the differentiators against the other main player, IBM – some of their opinions are reflected below, all of which I take with a pinch of salt.

Fusion Middleware

Oracle Fusion Overview

After everybody nodded to the question "does everybody know what SOA is?", the instructors continued to tell us about SOA for a couple of hours. Nevertheless, there was an interesting slide in this period showing the average project cost and staff size for SOA projects; basically, the message was: 'too big' and it's doomed, 'too small' and it'll just fade away. They also noted that the audience and the questions asked were testament to how the market has matured: their presentations have moved from 'how it works' to 'how to roll it out and support it'.

One contradiction I noted was how often they talked about the importance of interoperability and the use of industry standards while at the same time really playing up how well integrated the Oracle SOA Suite 11g is. Perhaps this itself isn't a contradiction – it's more the incongruity of vendors who on the one hand sell you standardisation, but on the other tut-tut and suck their teeth at the idea of using open source in some instances.

From a product perspective the day was a really useful insight into how the suite has evolved and into what's new and exciting about Oracle SOA Suite 11g (R1 patchset 2 – the third big release in a year). Here are some specific notes I made, which may or may not make sense:

  • No prizes for guessing that the winner of the application server shoot-out between Oracle App Server and WebLogic was… WLS. The current version of SOA Suite doesn't support running on WebSphere or JBoss AS, but this is on the roadmap.
  • One important piece of consolidation with the release is that SOA Suite and Oracle Service Bus can now run on the same application server platform (previously in 10G you needed different versions/instances).
  • AquaLogic Service Bus is now called Oracle Service Bus (OSB) and fulfils the ESB function. The product formerly known as Oracle ESB is actually still part of the product set – but is called Mediator. There is clearly some overlap between the products, but Mediator should be used only for basic routing and transformation tasks.
  • BPM Suite for 11g is the completely rewritten AquaLogic BPM, which is for business modelling, is supplied as part of the Dev Suite, includes some coding ability and allows closed-loop development (BPM is not as powerful or feature-rich as the BPA Suite and is not intended as a competing product). The BPA Suite is pure modelling.
  • The BPA Suite is targeted at analysts and contains three licensed modules of the ARIS product from IDS Scheer. The message was clearly made that Oracle don't feel it is their business to create business process design software – I think this was a poke at IBM, who recently bought Lombardi. BPA is much cheaper than the pure-plays (the example he gave was Pega PRPC) but delivers on the promise of being able to integrate it, rather than run it as a homogeneous blob.
  • What's included in SOA Suite: OSB, Adapters, B2B, BPEL, BAM, EM, WSM, Service Registry, JDeveloper, CEP – maybe I missed a couple – all running on top of WebLogic with AQ/JMS and JRockit.
  • BPM, BPA Suite and Enterprise Repository (ER) are licensed separately, the latter under the 'governance suite'. The ER assists governance from an assets perspective and provides the repository to store all artefacts.
  • The process server now supports BPMN 2.0 as well as the existing BPEL. There is an aim to provide BPMN round-tripping.
  • Oracle have invested a lot in the adapters – making them all JCA-compliant again (BEA had focused on making them fast rather than standardised). There are lots of them, and lots of partners provide even more (Attunity and iWay were mentioned).
  • Event-Driven Architecture, a.k.a. the Event Delivery Network, is JMS-based of course – thankfully on WebLogic JMS and not Oracle AQ. Apparently you can use another JMS provider if you want, via a JMS bridge (though I'm not sure that is really an accurate answer to my question).
  • The B2B product is really just an endpoint and not B2B network provider software
  • Oracle Business Rules (OBR) is a basic product – again, a distinction was drawn between IBM buying ILOG JRules and OBR. Oracle have no desire to create BPMS software, which they say can hide implicit process within complex business rule structures. However, they have added handling for the basics of decision tables. OBR is integrated into JDeveloper.
  • Oracle Policy Automation (OPA) provides a sophisticated tool for managing your internal policy-related data, e.g. VAT.
  • Application Integration Architecture (AIA) is all about making the Oracle Applications work harmoniously with the SOA Suite, providing reference architecture and Process Integration Packs (PIPs) – the PIPs provide standard processes across these applications rather than you developing your own 'order to pay'. Another distinction was drawn with IBM – who "don't do applications". Although predominantly for Oracle Applications currently, the architecture is intended to support any cross-application integration (e.g. connectivity to SAP: http://www.oracle.com/applications/aia-plm-to-sap-erp-data-sheet.pdf). A list of the currently released PIPs is at http://www.oracle.com/us/products/applications/application-integration-architecture/054232.html
  • JDeveloper is the development tool; it is very well integrated and includes a unit test framework as of 10.1.3.

As I said before – and as you may have noticed from the Oracle instructors' references to IBM above – the competition between the two application integration stacks is clearly fierce. A couple of other points were made, which I regurgitate for the reader to make up their own mind: that IBM products aren't well integrated in the way the Oracle suite now is, and that Rational are holding back BPMN development within the System Architect tool.

Q. How have organisations approached setting up an Integration Competency Centre (ICC)/Centre of Excellence (CoE), and what are the challenges?

A. An ICC/Integration CoE (insert your organisation's favourite name here) includes the following functions, which organisations sometimes have in separate teams and sometimes combine…

  • Standards creation and ownership
  • Integration project governance, including the admin aspects
  • Service definition and canonical data modelling, owning data representations ‘on the wire’ etc
  • Doing real work for real projects, including the project/resource management aspects of this

Different organisations approach this in different ways. The key questions to consider include:

  • Funding model – centralised or decentralised (i.e. project based)?
  • Organisational model – if your organisation is heavily de-centralised (typically it’s grown through acquisition and the business strategy has been to leave acquired units to behave autonomously) then the ICC structure needs to reflect this in order to succeed.
  • Change PMs – are they rewarded/incentivised for supporting reuse? for minimising operational (IT and business) as well as change costs? If this is not well set up then you’re in for a continual battle.
  • Use of external product vendors – if they do not insist on the SOA standards/architectural guidelines role being fulfilled, this is an indicator that they don’t know what they are doing!
  • Are you expecting to use the supplier’s data model for integration?
  • Are you using significant offshore/remote resources? – this strongly affects how best to structure your teams

So what's the best model? Well – of course – it depends. In general we recommend a decentralised team with some limited centralised standards support/expert help and project management/admin capability, as this reflects the reality we see with our customers – i.e. that a large non-project cost centre is not sustainable. The key is the sustainability of the relationship between the ICC/ICoE and the individual project teams/federated parts of the organisation – if you get that functioning well, with the projects feeding 'value' back into the ICC and the projects seeing the value that the ICC provides (by removing the barriers to integration), then you've cracked it. This is all a subset of our 'Sustainable Enterprise Architecture' vision really.

Other challenges to consider

Here’s a hit list of other things to think about/look out for that we might have seen out in the field…

  • Holy wars – over technology etc, partially due to poor comms/selling of pros/cons/vision typically
  • Keeping up the energy/commitment in the long run – it’s a 10k run, not 100m sprint. It’s always difficult for any long-term initiative to survive business cycles and staffing changes.
  • Business involvement/input – depends on whether it was a “technology sell” in the first place – but fundamentally the business guys should know best about where various services are likely to head, and therefore maximise agility down the road
  • You can only go so fast: each step on the road takes a while to bed down, and you can't leap-frog a maturity step, so allow for bedding-in phases. A common issue we see is that initial versions of internal standards are "over-egged", making it hard for service providers to comply with them – maybe just intellectually difficult if not technically so. There may be a bit of "showing off" going on there from new staff in the ICC who are keen to impress etc. Just stop it, right? I'm not interested in your use of the emerging esoteric WS-yada-yada standard when you don't even have a service catalogue under control :o)
  • How do you manage a tier 1 supplier – SAP, Oracle etc – to ensure you buy the right products for you, even though their consultants are all over you like a rash? A common and tempting pattern for organisations is to go for a "one stop shop" approach from a single vendor for HR, ERP etc so that they can minimise integration problems. I'm not saying that this doesn't have merit, but the key point is that your integration standards still need to be yours if you want to retain any control over the selection and integration of other components in the future. Ensure your suppliers have the necessary motivators/vested interests to deliver 'good' services that integrate well with the wider solution (i.e. they don't have a cost driver to minimise the customisation of their product, leading to non-ideal/non-conformant services). Also, if you have multiple partners/vendors involved, ensure that they have no incentive to rubbish and impede each other. As a customer, that's no fun.
  • Be clear about the deliverables from your integration partner – exactly what they will produce and how it will add value – and don't just trust them to come up with the rigour/'magic of the ESB', as there's a lot of BS out there around integration/SOA.
  • Service versioning – we tend to favour an "n+1" model, but often we see that organisations don't sort this out early enough, as they don't see it as a "day 1" problem, and then falter later on.
  • Service ‘hygiene’ in general – invest in it.
  • Metrics to measure and demonstrate success and business benefit, and also the effectiveness of governance processes (Robin’s rule – if your governance process never “stops” a project, it’s a sick process)
  • Have a communication plan – your heavily federated organisation isn't going to guess what your ICC is up to and magically fall into line with it, is it? And probably the key point here – keep it alive. One-time comms happens – well – once. Once you've read this blog you'll forget it in 15 minutes, if I'm lucky. Your 'virtual ICC' colleagues will do the same unless you keep repeating the messages. It's the old adage – tell them what you're going to tell them, then tell them… etc.
  • Plan for the necessary admin/management support for your ICC – especially important for the ongoing governance process/project touch points as this task won’t happen for ‘free’.
  • Define a concessions process – this is the get-out-of-jail card: grant concessions to deviate from standards in special cases, document the concession and its mitigation/consequences, and set a review or mitigating-action deadline.
  • Don't let the ICC get bogged down in delivery, such that not enough time is spent on vision/standards/governance. This is a tough one to avoid and somewhere lots of organisations struggle. It all comes back to good project planning/resourcing really.

That’s all there is to it!

After a journey characterised by a conspiracy between parking meters and failed tube signals, I made it yesterday to IBM's Hursley Park for the WebSphere Integration User Group meeting. Here's a photo at a sunny Winchester station as proof… OK, so Jamie and I forgot to take any photos at the event.

The keynote presentation was from Kevin Turner – ESB and Messaging Technical Strategy Manager. He covered IBM's Smarter Planet initiative and then moved on to future architectural trends – the noteworthy points were:

  • The 2nd wave of SOA is coming – there’s a significant number of ’1st wave’ adopters out there now who have been through the joy and the pain, and have now fully understood the upfront investment required and the potential benefits. This 2nd wave is likely to consist of organisations trying to join up their islands of SOA (probably built along departmental lines due to a sensible pragmatic strategy of ‘baby steps’ SOA), and so federation of multiple ESBs will be a key theme. Governance will be crucial here if these islands are ever to be consolidated into a virtual enterprise bus that solves the problems of visibility of services across ESBs, end-to-end security and manageability etc.
  • Patterns – IBM are working on building some integration patterns support into their tooling (presumably WebSphere Business Modeller, WebSphere Integration Developer etc.) to allow an expert from an organisation's ICC/ICoE to define the standard 'pattern' and therefore accelerate the development of instances of that integration pattern. The integration developer might just have to supply the pattern parameters, for example, with many of the decisions, such as how to manage errors, already consistently solved for them.

There were a couple of presentations about specific MQ-related Supportpacs (MO71 and IH03) which I expected to be dull due to their nature, but the presenters managed to bring them alive – especially Paul Clarke. It was clear that he’d written and refined quite an impressive beast of an application over many years and was rightly proud of it.

Kevin mentioned CloudBurst during his keynote, and I managed to get some time with an IBMer later in a hastily arranged session to discuss it some more – I’ll post details in another blog post later as there’s quite a lot to report.

That means I missed the start of the next session about the developments in the WebSphere Process Server product to better support the human interaction aspects of BPEL processes (which have been sorely needed in our experience!). Paul Smith demoed the Business Space web app from WPS v6.2 which goes some of the way to addressing these shortcomings, with better human ‘override’ control of processes (skipping a step, repeating a step etc) and better visualisation tools for business users to use to understand where a particular process instance has got to, etc. This is clearly still a developing area of the product set though.

The last session I attended was a demo of the rules products from IBM's recent ILOG acquisition. An ex-ILOGer, Lenny Bromberg, gave a very engaging demo which involved my colleague Jamie playing the role of 'business user', dynamically changing rules that influenced the behaviour of a mock motor insurance quotation app. An interesting aspect of Lenny's 'pitch' was that rules engines themselves are essentially ten a penny – there are several good open-source ones out there if you want one – but what the ILOG products provide is a Business Rules Management System (BRMS), i.e. all the significant extras wrapped around the actual runtime rules execution environment that you need to make it really workable, manageable and governable. This includes rule development/editing environments (developer IDE and business-facing), rule versioning, rule comparison tools, audit, simulation environments etc. Some other observations:

  • Lenny's experience from previous projects that integrated with BPM solutions (like WPS etc.) is that the business process definition/BPEL has often become 'spaghetti' because the business rules are not clearly separated from the business process – so using an external rules engine enforces good business process automation design practice and leads to more maintainable BPEL etc. (a small sketch of this separation follows this list).
  • This relates to BRMSs in general and not specifically to the ILOG rules products, but a weakness I could see is that the rules rely on a good, stable business object model, and we know from experience with numerous customers how difficult it is to get enterprise data models together and agreed. This is the potentially shifting sand that the rules are all built upon.
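
To illustrate the separation point from the first bullet, here is a minimal, vendor-neutral Java sketch – it has nothing to do with the actual ILOG/JRules APIs, and all class, rule and field names are invented. The quotation flow stays a thin orchestration, while the business decisions live in rule objects that can be replaced without touching the flow; in a real BRMS those rules would be authored, versioned and governed outside the codebase entirely.

```java
import java.util.Arrays;
import java.util.List;

/**
 * Illustrative only: the "process" is a thin flow, while the business decisions
 * live in rule objects that can be versioned and swapped independently.
 */
public class MotorQuoteFlow {

    /** A single externalised business rule: adjusts the premium for a driver. */
    public interface PricingRule {
        double apply(Driver driver, double premiumSoFar);
    }

    /** Minimal fact model -- invented for the sketch. */
    public record Driver(int age, int claimsInLastFiveYears) {}

    private final List<PricingRule> rules;

    public MotorQuoteFlow(List<PricingRule> rules) {
        this.rules = rules;
    }

    /** The process: just orchestration, no embedded if/else business logic. */
    public double quote(Driver driver, double basePremium) {
        double premium = basePremium;
        for (PricingRule rule : rules) {
            premium = rule.apply(driver, premium);
        }
        return premium;
    }

    public static void main(String[] args) {
        // Rules are defined separately from the flow; swapping this list changes
        // behaviour without touching the process definition above.
        List<PricingRule> rules = Arrays.asList(
            (d, p) -> d.age() < 25 ? p * 1.5 : p,            // young-driver loading
            (d, p) -> p + 100.0 * d.claimsInLastFiveYears()  // per-claim surcharge
        );
        MotorQuoteFlow flow = new MotorQuoteFlow(rules);
        System.out.println(flow.quote(new Driver(23, 1), 400.0)); // prints 700.0
    }
}
```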

Many thanks to Mike Wyvill and Keith Guttridge and others for organising the event. Well worth the £28… :o)

Together with a colleague, Jamie Milne, I'll be attending the next WebSphere Integration User Group (UK) meeting tomorrow at Hursley Park. Whilst these events tend to have rather an MQ bias, I'm looking forward to hearing the messages about the future of ESBs and process integration technologies, and I've signed up for the "Business Process Integration" breakout sessions – the promise of a "process integration demo" was too much to resist :)

Will post more afterwards about what I heard…

BizTalk Server 2009 was released at the end of April 2009. This tends to lead to a natural review of our internal handbooks and the utilities that complement our processes.

When approaching any BizTalk engagement, we initially consider the SOA roadmap and the development methodologies and test frameworks that support these approaches. The majority of the utilities/extensions mentioned here are available as open source or as free trial downloads, but rarely do I see them combined. It is assumed that these are applied to the traditional base build of Windows, SQL Server, optional WSS and BizTalk.

WCF LOB Adapter SDK SP2 (http://www.microsoft.com/downloads/details.aspx?FamilyID=47AB6F21-0D8B-4C90-A8B9-E8647281B164&displaylang=en )

The LOB adapter pack has now been extended to support SQL Server and is worth considering when looking at using WCF bindings.

BizTalk Adapter Pack 2.0 (http://www.microsoft.com/downloads/details.aspx?FamilyID=76736ba7-3c05-4436-9353-1c33f9005194&displaylang=en ).

This is a 120 day evaluation extending the WCF LOB Adapter SDK, to enable the auto generation of schemas and ports for the additional new bindings. There is an additional cost for this pack, depending on the licence model of your client.

Nunit 2.5  (http://www.nunit.org/index.php?p=download ).

Still popular with clients and used to drive BizUnit test cases. The more traditional option is MSTest, but NUnit is useful to include for completeness.

BizUnit 2.3  (http://bizunit.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=9001#DownloadId=23581 )  or 3.0 (http://bizunit.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=20013)

BizUnit is an excellent declarative test framework for managing test cases and for generating automated regression test packs. The newer 3.0.x.x release has additional feature support. Some clients like to make use of the excellent BizUnit Designer, which provides a UI for test case generation rather than editing raw XML. This is useful for early adopters, helping them understand which features are initially available in the framework.

Microsoft BizTalk LoadGen 2007 (http://www.microsoft.com/downloads/details.aspx?FamilyID=c8af583f-7044-48db-b7b9-969072df1689&DisplayLang=en)

An additional tool to coordinate the execution of performance and stress tests.

BizUnit Designer 1.4 (http://bud.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=12968#DownloadId=33409) used with BizUnit 2.3

A useful tool providing a UI to assist with the initial creation of test cases.

BizTalk Deployment Framework  (http://biztalkdeployment.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=17826#DownloadId=54972 )

This is a must for any development team. Having used the older framework extensively, I can say that the new version's support for MSBuild projects over NAnt is a real time saver when managing complex builds and deployments.

HTMLHelp.exe ( http://www.microsoft.com/downloads/details.aspx?FamilyID=00535334-c8a6-452f-9aa0-d597d16580cc&displaylang=en )

A prerequisite for the HTML output of the orchestration profiler/documenter below.

BizTalk Orchestration Profiler 1.1.1 ( http://biztalkorcprofiler.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=6375#DownloadId=17235 )

An excellent tool for verifying the code coverage of orchestrations in BizTalk. It generates a help file analysing orchestration performance, and is used during test cycles to verify regression test pack coverage.

BizTalk Documenter v3.2 (http://biztalkdocumenter.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=8689)

A useful tool for documenting the configuration of a BizTalk implementation. Worth including in the deployment and release cycles for configuration management, and for sharing with support teams to verify configurations.

BizTalk Server Best Practices Analyzer v1.2 ( http://www.microsoft.com/downloads/details.aspx?FamilyID=93d432fe-1370-4b6d-aaa8-a0c43c30f5ab&displaylang=en)

An essential tool for all client deployments to generate and understand compliance reports. The latest version supports BTS2006, 2006R2 and 2009.

MessageBoxViewer ( http://blogs.technet.com/jpierauc/pages/msgboxviewer.aspx)

An invaluable tool for querying and analysing system configurations, especially for warnings of potential performance issues. It provides more of a dynamic analysis than the Best Practices Analyzer.

Each of the downloads provides excellent detail on how to use the individual utilities. These are the utilities we use to support our SOA development methodology when implementing and supporting BizTalk Server; they provide the basis for our development, test and deployment frameworks.

As promised, here’s some notes from yesterday’s UK WebSphere User Group meeting (combined with the UK WebSphere Integration User Group) – held at IBM’s offices at Bedfont Lakes.

First of all – let’s start with a picture of the happy crew at our stand. Rohima did an excellent job hunting down victims to force business cards upon, with me and David Taylor in support.

[Photo: the happy crew at our stand]

The keynote presentation was by Rob High, the IBM Chief SOA Architect, concerning the 2009 technical strategy and directions for the WebSphere portfolio. Unfortunately (for you, the reader), much of what he said came with a "not for the public domain" health warning. Interestingly, he was still banging the SOA drum, and when I asked him about it later in the Q&A session he was quite dismissive of all the recent 'SOA is dead' discussions on the web. He gave the message I wanted to hear, which was essentially this: there's nothing new under the sun, good integration practices are still good, and so the current hype-cycle status of SOA should not stop us from understanding the fundamental business services of an organisation and supporting them with technology in an agile way. There is perhaps some marketing difficulty with the term SOA now, but there was no hint of IBM moving away from it.

Rick Robinson presented on Web 2.0, going through quite a bit of background material and then mapping that onto IBM products and their support for REST, OpenAjax and Dojo. Whenever I attend this kind of presentation I always pick up a few Web 2.0-ish sites/things that I hadn't seen before – I guess this is down to the relatively viral nature of the subject itself. My favourite was http://www.twitscoop.com/, which gives a real-time view of the Twitter topics being discussed, as a tag cloud. Another one mentioned was http://brightkite.com/, a social networking site that was location-aware before Google Latitude came along.

I then attended another session from Rob High about EA (enterprise architecture) and its relationship with BPM (business process management). For me this was more of an EA revision session (an IBM view on TOGAF, to some extent) and I didn't get so much from it, except for the IBM roadmap for their recently acquired Telelogic System Architect product and how that fits into the roadmap for WebSphere Business Modeller. The vision is that they will remain separate tools, but eventually with a shared repository. Interestingly, none of the Rational modelling tooling was mentioned, apart from saying that it was focused on software rather than business modelling, so there was no vision to merge System Architect into Rational Software Architect in any way.

One other thing Rob mentioned was an expansion of that massively overused phrase "IT-business alignment" (doesn't every IT initiative that comes along promise this?!?) into several more defined levels of alignment – this rang a bell for me and it's something I'll look into a bit more, I think…

Whilst my colleague was presenting a DataPower case study from a customer project in another room, the final presentation I went to was a full-on techie session from David Currie of IBM about the new features in WebSphere Process Server (WPS) and WebSphere Enterprise Service Bus (WESB) v6.2. I wanted to keep on top of where these products are going at a detailed level. The degree of change in each release is pretty amazing, but it also leaves a slightly sour taste in your mouth as you realise that IBM are plugging feature gaps in the product that sometimes you knew about (e.g. code assistance to help deal with the SDO model) and sometimes you didn't (the ability to act as a service gateway – which seems a pretty fundamental thing for an ESB to offer). Apart from the fairly extensive changes to support service gateways (a number of new mediation primitives etc.), the developments in the human tasks aspect are the most interesting to me. You can now attach documents to business processes, and users can override a business process flow (sounds dangerous! – but it is intended for those processes where business exceptions/interruptions can occur at any time). Sorry to finish on a negative, but one obvious gap is that SOAP 1.2 is now supported, but not for SOAP over JMS – purely because they ran out of time to get it into the release, AFAICT.

So – all in all, a good user group meeting, especially as it finished with beers. Many thanks to Rick Smith and co for organising it.

I’m currently working to find the best Enterprise Service Bus for a project. Nothing unusual there. Something I’ve done a few times before. Except this time the requirements are a little more unusual than which kinds of transformation the tool supports.

Functional requirements:

  1. The ESB must fit with Smart421's SOA patterns.
  2. The ESB need not be a one-box solution. We’re happy to mix and match tools around the outside. BPEL in particular is not necessarily a mandatory part of the core product.

On the face of it, not too tricky. In practice, maybe a little harder: in our view, the ESB isn't a piece of software in the first place; it's a collection of (continuously changing) standards and policies that govern the interactions that take place across the essentially empty void between two services. So we're looking for something that's compatible with that view, rather than something that wants to sit at the centre of the SOA universe (more on this later, perhaps). Let's look at the non-functionals:

  1. The ESB must be Open Source, be based on an Open Source product, or there must be an Open Source version available.
  2. The ESB must have an established presence in the market – the latest and greatest features aren’t enough. We’re looking for something that has some industry buy-in.
  3. It must be possible to buy in support for the product from a third party, should we need it.
  4. The ESB must support lightweight development. We must be convinced that easy things are easy to achieve, and hard things are proportionally (not disproportionately) harder.
  5. The ESB must offer a non-functional envelope that allows it to support a large scale enterprise application, preferably without restricting us to vertical scaling.

I’m intending to follow a fairly standard product procurement process to help select the technology, so the next step is to set some more formal selection criteria and identify some candidate products to form the ESB core.

At the moment, the obvious candidates for me are:

I’ll keep you posted as I progress… Comments/suggestions/vitriol welcome!
