Ipswich from the air

Ipswich from the air. Photo by kind permission of Stu Smith. More at http://bit.ly/o4OMrc

 One of our more intrepid colleagues, Smart421 lead consultant Stu Smith, has just published some outstanding aerial photos of Ipswich.

When he’s not architecting IT systems, configuring IBM DataPower appliances behind closed doors in a customer’s data centre or speaking at industry events (e.g. WUG), Stu is often flying the skies of urban or suburban areas with his paramotor or paragliding over more inhospitable terrains somewhere in the world.

His latest views of Ipswich, taken in August, can be found at http://bit.ly/o4OMrc

Those familiar with the area will easily spot the Smart421 technical centre at Felaw Maltings, as well as Portman Road (home of Ipswich Town Football Club).

Leave a comment to tell us what other landmarks you can see.

In December I had the pleasure of attending a seminar called “Service Management in Cloud and Virtual Environments”, organised by the IT Service Management Forum (itSMF) at a very cold but, thankfully, snow-free Manchester Museum. The aim of the seminar was to “highlight the specific challenges, and suggest practical ways in which you can modify your approach to service design, service transition, and service operations”.

There were speakers from the following organisations: HP, Global Knowledge, Atos Origin and Fujitsu, and each presentation was followed by a short question and answer session.

First up was the presenter from HP, who gave a very interesting address entitled “Why is Infrastructure Converging? – Creating platforms for the Cloud”. This presentation outlined the technology challenges ahead, driven by Population Growth, Urbanisation and Globalisation and resulting in an Information Explosion. Combined with an ageing, complex and inefficient IT infrastructure, this will lead to an environment where businesses demand technology based upon rigorous standards, high volume, low cost and rapid innovation. In the presenter’s opinion, this environment will be dominated by the “top table players”, namely Cisco, IBM, Oracle and, unsurprisingly, HP! There will always be small companies leading the innovation cycle and “looking to be acquired”, but mid-market players such as SAP will be squeezed. New convergence technologies such as intelligent energy management, virtual I/O networks and virtual resource pools will form the infrastructure which underpins Cloud solutions, but unlike the mainframe era the only lock-in will be via quality, as businesses look to metrics such as Time to Innovation, Cost to Innovation and Longevity of Supply.

This was an extremely thought-provoking presentation, backed up by some incredibly large numbers and forecasts, and it made me consider the following:

  • The need to address the regulatory environment when looking at Cloud solutions. For example, the requirements for retaining data are different in the USA compared to the UK and Europe, so what does this mean if the customer is based in the UK but the Cloud services (and the data stored) are provided elsewhere – or anywhere?
  • What is the impact on Service Level Management and Service Operations where the network becomes the boundary of the organisation, because devices are too diverse and widespread to be supplied or supported by IT?
  • The challenges associated with organisations grappling with initiatives to reduce their carbon footprint. How do you measure the number of transactions per unit of power, and what are the operational challenges of moving workload from the hot part of the data centre to the cooler areas?
  • Service Transition policies and plans will need a greater focus on how to move a service from one cloud provider to another.

Next up was the presenter from Global Knowledge, who gave a presentation called “Service Level Management and the cloud revolution. A survival guide”. This presentation started by reviewing the definition of the Cloud given by Forrester Research (“Any computing service that is provided outside the customer premise and that is provided on a ‘pay-as-you-go’ basis”) and by looking at some of the “… as a Service” offerings such as Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). An interesting slide outlined the speaker’s findings from speaking to various CIOs about “the Cloud”: a lack of control, accountability, visual representation and transparency, including security and resolution processes. The presenter then looked at Service Level Management in more detail, in particular at “Agree”, “Monitor” and “Improve”. Under “Agree” the key themes were that the Cloud is just another third-party service with underpinning contracts, that performance and availability will be outside your control, that security is NEVER guaranteed and that Service Catalogues will be vital. “Monitor” basically asked one question and offered one answer: traditional measures of components are irrelevant, and synthetic monitoring will be the key. Finally, under “Improve”, it is worth considering that Cloud providers are unlikely to be interested in your Service Improvement Programme (SIP), and therefore the SIP may have to be achieved by switching vendors.

This presentation was much more closely related to Service Management than the first of the day, was just as interesting, and again left delegates with some interesting questions to consider.

  • During and after this presentation I found myself in full agreement with the assertion that a Cloud service is just another underpinning contract which needs to be managed, although in practice this may be an easier “sell” to the customer where a service is fully outsourced; where other sourcing models, such as co-sourcing or multi-sourcing, are in place this may not be the case.
  • Additionally I was in full agreement with the speaker regarding synthetic monitoring, i.e. monitoring from the end-user perspective, as for far too long organisations have hidden behind complex availability metrics whilst end users feel the service is less than satisfactory (a minimal sketch of what this means in practice follows this list). However, it is easier to talk about synthetic monitoring than to achieve it, and when designing a service this sort of monitoring must be able to be lifted and shifted if the service is to be moved between Cloud service providers.
  • As part of the Service Design process it seems to me that organisations may spend less time on technical resources but more time and effort ensuring people understand how to manage procurement and the supplier management processes.
  • Demand Management, understanding User Profiles and Patterns of Business Activity are likely to be a key part of the Service Design process, to ensure the ability to leverage benefits of the cloud such as having enough capacity to meet spikes in demand.
  • Consideration will need to be given to how network availability is designed, as the network is the key to accessing Cloud-based services. Questions will need to be asked about how network providers are geared up for this change, given that changes in this area are typically slow to provision. Additionally, how does this affect IT Service Continuity if network providers utilise a shared infrastructure?
  • Most of the above questions and issues appear to me to have a significant bearing on how Service Level Agreements are negotiated with clients. It may be that fundamentally not a great deal changes in how the service is delivered when moving to the cloud, but the very act of moving a service into the cloud will mean that many questions that have not been addressed until now suddenly become all-important.

The third presentation was given by a representative of Atos Origin and began by restating some defining principles of Cloud Computing. The presenter then outlined some considerations for organisations looking at moving into the Cloud: migration considerations – build an “Internal Cloud”; determine external vs. internal Cloud ROI and monitor external costs; don’t abdicate responsibility to the Cloud; use the Cloud for short-term scale and offload capacity; safeguard your data; and gain access to well-trained “Cloud expertise”. In my opinion these all seemed very sensible observations.

Personally I found this slide show less stimulating but one or two of the slides made for interesting discussion.

  • Firstly, that organisations should consider moving non-strategic services into the Cloud. Personally I can see the merit in that approach, but it may be a bit idealistic, as the appetite for and speed of migration of services into the Cloud will be driven by many different factors. For example, if an organisation no longer has capacity in its own data centre then all future services may be moved into the Cloud regardless of their strategic importance to the organisation.
  • Here at Smart421 we are not building an Internal Cloud, but we are undertaking an exercise to “eat our own dog food” by moving some of our internal services into the Cloud.
  • I particularly like the suggestion to engage with an organisation with Cloud Expertise :-)

The final presentation was from Fujitsu and was titled “A Private Cloud – The HMRC ‘S4’ Service”, described as “A private cloud service for hosting Windows, Linux, and AIX based applications”. Charging was based on a Capacity Unit model, where a Capacity Unit was composed of n CPU cores plus memory.
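The presentation did not spell out exactly how many cores and how much memory make up one Capacity Unit, so the figures below are invented, but they illustrate the shape of the charging model: a workload is billed for however many whole units it takes to cover whichever dimension (CPU or memory) it exhausts first.

// Illustrative only: the real S4 Capacity Unit composition was not disclosed,
// so the cores and memory per unit here are assumptions.
public class CapacityUnits {

    static final int CORES_PER_UNIT = 2;    // hypothetical
    static final int MEM_GB_PER_UNIT = 8;   // hypothetical

    // A workload pays for whichever dimension it exhausts first.
    static int unitsRequired(int cores, int memGb) {
        int byCores = (cores + CORES_PER_UNIT - 1) / CORES_PER_UNIT;     // ceiling division
        int byMem = (memGb + MEM_GB_PER_UNIT - 1) / MEM_GB_PER_UNIT;
        return Math.max(byCores, byMem);
    }

    public static void main(String[] args) {
        // e.g. 6 cores and 20 GB of memory -> 3 units by cores, 3 by memory -> 3 units
        System.out.println(unitsRequired(6, 20));
    }
}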

When discussing the challenges associated with this service it was clear that many of the traditional challenges remain, namely: maintenance slots, how much to invest against the foreseeable customer demand, forecasting workload and optimising utilisation.

Finally, looking at the benefits of this shared infrastructure solution, there are similar benefits to those identified for cloud solutions, that is: reduction in costs, faster deployment of business solutions and reduced sizing risk. The final benefit, according to the presenters, was “simple decommissioning at end-of-life”, which I assume was meant from an infrastructure point of view, in terms of not having to retire and dispose of hardware.

  • Although this was presented as a Private Cloud, in my opinion this was more of a shared infrastructure solution which had some of the features and benefits of cloud but also some limitations. For example, the HMRC Service Owners could purchase the service using a standard service catalogue, and costs were below what would typically be charged to host the solutions on dedicated hardware; however, there didn’t appear to be any way to “burst” beyond what was purchased in terms of Capacity Units. Whilst I can see the attraction of this from a technical point of view, this charging model does not directly relate to business outcomes or metrics, which in my opinion is the way the IT Service Management community is heading.
  • In my opinion a true cloud solution would offset some of those shared infrastructure concerns. For example, the negotiation of maintenance slots could be mitigated by moving the workload to another instance, and investment would be less of an issue as customers only pay for what they use.

In conclusion, this was an interesting rather than startling seminar; however, it was still clear to me that specific real-world examples of organisations moving services into the Cloud are pretty thin on the ground. Additionally, addressing the security concerns of organisations is fundamental to driving the growth in take-up of Cloud services.

Looking at the Service Lifecycle and some of the processes contained within it, I believe that in future greater emphasis is going to be placed upon Service Strategy (Demand Management), Service Design (Capacity Management) and Service Transition.

When assessing the impact on Service Operations, I think there is a fundamental requirement to assess existing processes for appropriateness and effectiveness when moving services into the Cloud. Indeed, there could be opportunities to streamline processes and, for example, potentially reduce the impact on end users associated with Change.

Continual Service Improvement will continue to be a challenge, especially with those Infrastructure as a Service (IaaS) providers which operate a true utility model.

Finally, in my opinion the demands placed upon IT Service Management will still be as great as ever, but the emphasis may be different to what we have seen in the past, which for me is an exciting part of the challenge.

I took a break from the wet weather in the south of England in order to enjoy the wet weather in Edinburgh and attend the WebSphere User Group. For one thing, our Alan Philpott was going up there to give a presentation on applying SDLC best practice to DataPower devices. Bharat Bhushan would also be presenting on troubleshooting and performance tuning of the devices. I have the latter’s slides for any of our DP rangers who want a walk-through; I will post a PDF of them on SharePoint.

The WUG had a bit of a wobble last year, when some of the committee (who do their work for ‘free’) found that they no longer had the spare capacity to put in. The committee are still looking for volunteers. Some people questioned whether the WUG would continue to be viable, but from the attendance levels at this recent meeting, the degree of organisation and the quality of the presentations, you would not notice any issues.

There was an interesting set of presentations (full details here), but the stuff that caught my eye was the OSGi stream. The OSGi framework (the dynamic module system for Java) has been around for a few years, not least as the underpinning technology behind Eclipse since version 3. There is a good definition of it on Wikipedia. I have seen it before in a container used to host the Apache ServiceMix ESB (a.k.a. FUSE 4). Adrian Treneman gives a good run-through here.

What has been interesting is seeing this technology hitting the mainstream (IBM) and being taken seriously there. I attended two presentations, one of which covered the tooling support in Rational Application Developer (RAD) 8, and another covering the support for OSGi in WAS 8 (it was previously available as a feature pack for WAS 7.0). The interesting part was seeing that the technology is being incorporated for simple, pragmatic reasons – the ability to load different versions of libraries into the same JVM, to be used by different applications, and lowering the memory footprint by sharing libraries amongst several applications – rather than as some new-age magic.
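For anyone who has not looked at OSGi before, the versioning behaviour comes from the bundle manifest: each bundle declares the packages it imports with a version range, and the framework wires each application to a compatible provider, which is what allows two versions of the same library to coexist in one JVM. Below is a minimal sketch (bundle and package names are invented) of a bundle activator, with the corresponding manifest shown as a comment.

package com.example.greeter;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

/*
 * The bundle's MANIFEST.MF (shown here as a comment) is where the versioning
 * is declared, for example:
 *
 *   Bundle-SymbolicName: com.example.greeter          <- hypothetical bundle
 *   Bundle-Version: 1.0.0
 *   Bundle-Activator: com.example.greeter.Activator
 *   Import-Package: org.apache.commons.logging;version="[1.1,2.0)"
 *
 * The version range on Import-Package is what lets the container wire different
 * applications to different versions of the same library within one JVM.
 */
public class Activator implements BundleActivator {

    public void start(BundleContext context) {
        System.out.println("Greeter bundle started");
    }

    public void stop(BundleContext context) {
        System.out.println("Greeter bundle stopped");
    }
}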

I have grossly over-simplified in the above, but it may be a prompter for anyone who is interested to follow it up.

The other major new thing for me was WebSphere eXtreme Scale (WXS). On first seeing the title of the presentation, I thought that it would be a re-branding of either WAS ND or WAS XD. Wrong! “It’s just a cache”, as Jonathan Marshall announced at the beginning of his presentation – hugely underselling the use of caching technologies, and the product itself – before going on to show those benefits in his talk. Having recently been involved in two use cases for caching technologies – dynamic content management, and web-session management in hugely scaling environments – both situations where databases struggle and application server session-sharing mechanisms struggle (the old n^2 communication overhead as the number of app servers increases), I could appreciate the problems that the WXS product is trying to solve.
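For a flavour of the programming model, here is a minimal local-grid sketch using the WXS ObjectGrid API (map and key names are invented; a real deployment would normally connect to a remote, partitioned grid shared by all the application servers rather than an in-process one).

import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

// Minimal sketch of the WXS programming model using a local, in-process grid.
// Grid, map and key names are illustrative only.
public class SessionCacheSketch {

    public static void main(String[] args) throws Exception {
        ObjectGrid grid = ObjectGridManagerFactory.getObjectGridManager()
                                                  .createObjectGrid("sessionGrid");
        grid.defineMap("httpSessions");

        Session session = grid.getSession();
        ObjectMap sessions = session.getMap("httpSessions");

        // Insert and read back an entry - in production this data would live in
        // a partitioned, replicated grid rather than the local JVM.
        sessions.insert("JSESSIONID-1234", "serialised session state");
        System.out.println(sessions.get("JSESSIONID-1234"));
    }
}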

WXS is not the only player in this space. As well as other commercial offerings (which I had previously heard about some issues with), there are several open source alternatives (e.g. memcached), but since the general area is very new, not all of the products implement all of the features (e.g. active standby); indeed, there is probably not a common consensus on what all the features are. I will distribute the slides on the internal SharePoint site; they should become available on the WUG site in due course. The take-away news for me is that IBM now have a seemingly very capable product in this area, which merits investigation in some of my existing scenarios, and also in new scenarios such as using the cache as a system of record.

So even after a few years of going along to the WUG, it still has the ability to make me sit up and take notice! Long live the…

If you are interested in finding out about best practices and what is involved in managing an IBM WebSphere DataPower deployment, day in, day out, then my colleague Alan Philpott will be presenting on this subject at the WebSphere User Group meeting in Edinburgh on Tuesday 28th September. For details of the venue etc., see here.

Alan has spent the last 18 months industrialising a huge business-critical DataPower estate, and will be using Subversion etc. to walk through some examples and lessons learned, what kind of support processes you need in place, and so on.

The head of our WebSphere practice, David Taylor, will also be in attendance – manning the stand. Quiz him about his experiences using Amazon Web Services (AWS) – he’ll love it.

I’ve been hearing more about web traffic management products recently, and they’ve been of increasing interest to me as they seem to fit nicely with a cloud deployment model – e.g. if you’ve got your SaaS app deployed to your favourite cloud provider so that you can scale out quickly without large upfront capital costs, then you are probably going to need something in front of that deployment architecture to manage the incoming traffic. Hopefully (and this will become more common) your cloud deployment might auto-scale to increase its own capacity on demand, so your environment will always be able to cope with demand and you don’t need to manage the incoming traffic – right? Well, I’m not so sure – I can see some barriers to this:

  1. Cloud auto-scaling implies unpredictable costs, and whilst maybe extra traffic = extra cost = extra revenue to offset it, I’d expect most SaaS providers would want to set a cap on costs and so any traffic exceeding that ‘cap’ still needs to be managed gracefully.
  2. Traffic management allows finer-grained control over where you deploy your valuable £/$, e.g. favouring response times for customers on the ‘buy’ pages of your app over those looking at ‘about us’. So in times of high traffic you don’t just keep throwing virtual servers (and therefore cash) at the problem – you get more selective instead (a rough sketch of this idea follows this list).
  3. Finally, web traffic management products give some comfort as a ‘last resort’ in case auto-scaling goes wrong or is incorrectly set up – or your cloud vendor doesn’t meet their SLA :)
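As a very rough sketch of the selective behaviour described in point 2 (entirely illustrative, and not tied to any of the products below, which do this far better at the network edge), the idea is to keep serving high-value requests under load while shedding lower-value ones, instead of paying for yet more servers:

import java.io.IOException;
import java.util.concurrent.Semaphore;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Crude request-prioritisation sketch: 'buy' pages are always served, while
// lower-value pages are shed with a 503 once a concurrency cap is reached.
// The path and the cap are hypothetical.
public class PrioritisingFilter implements Filter {

    private final Semaphore lowPriorityPermits = new Semaphore(50); // hypothetical cap

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String path = ((HttpServletRequest) req).getRequestURI();
        boolean highValue = path.startsWith("/buy");                // hypothetical 'buy' pages

        if (highValue) {
            chain.doFilter(req, res);
        } else if (lowPriorityPermits.tryAcquire()) {
            try {
                chain.doFilter(req, res);
            } finally {
                lowPriorityPermits.release();
            }
        } else {
            ((HttpServletResponse) res).sendError(503, "Please try again shortly");
        }
    }

    public void init(FilterConfig config) { }

    public void destroy() { }
}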

We’ve come across the following products in this space:

  • Zeus ZXTM software appliance – I intend to delve into Zeus ZXTM in a little more detail in another blog post at some point… As it is a software appliance (as opposed to the next two products), it fits a public cloud model better.
  • F5 BIG-IP
  • IBM WebSphere DataPower – this is not in exactly the same market as the other two products, but is capable of fulfilling this role (plus other things) and we have worked with customers to deploy it for this kind of ‘use case’

Maybe one day auto-scaling will be mainstream enough and incremental costs will be low enough that traffic management won’t be required – but we’re not there yet I suspect.

We first heard about this appliance at the UK WebSphere User Group meeting back in March, although it was commercially sensitive at the time so I couldn’t blog about it, and we’ve been waiting for it to be launched since then. IBM are leveraging the appliance format they acquired with DataPower to bring other appliance products to market, of which this is the first. It’s an interesting product, aimed both at those looking to create a private cloud and at public cloud vendors.

It provides the ability to dynamically deploy and manage virtual machines, including the new hypervisor edition of WebSphere Application Server (which supports the OVF standard), along with security and chargeback facilities.

I’m looking forward to our internal WebSphere practice finding out more and getting to grips/playing with it.

See http://www-01.ibm.com/software/webservers/cloudburst/ for more details. What’s the next functional area that is ripe for ‘appliancisation’ (if that’s a word…)?

As promised, here’s some notes from yesterday’s UK WebSphere User Group meeting (combined with the UK WebSphere Integration User Group) – held at IBM’s offices at Bedfont Lakes.

First of all – let’s start with a picture of the happy crew at our stand. Rohima did an excellent job hunting down victims to force business cards upon, with me and David Taylor in support.


The keynote presentation was by Rob High, the IBM Chief SOA Architect, concerning the 2009 technical strategy and directions for the WebSphere portfolio. Unfortunately (for you – the reader), much of what he said came with a “not for the public domain” health warning. Interestingly, he was still banging the SOA drum, and when I asked him about it later in the Q&A session he was quite dismissive of all the recent ‘SOA is dead’ discussions on the web. He gave the message that I wanted to hear, which was essentially this: there’s nothing new under the sun, good integration practices are still good, and so the current hype-cycle status of SOA should not stop us from understanding the fundamental business services of an organisation and supporting them with technology in an agile way. There is maybe some marketing difficulty with the term SOA now, but there was no hint of IBM moving away from it.

Rick Robinson presented on Web 2.0, going through quite a bit of background material and then mapping it on to IBM products and their support for REST, OpenAjax and Dojo. Whenever I attend this kind of presentation I always pick up a few Web 2.0-ish sites/things that I hadn’t seen before – I guess this reflects the relatively viral nature of the subject itself. My favourite was http://www.twitscoop.com/, which gives a real-time view of Twitter topics being discussed as a tag cloud. Another one mentioned was http://brightkite.com/, a social networking site that was location-aware before Google Latitude came along.

I then attended another session from Rob High about EA (enterprise architecture) and its relationship with BPM (business process management). For me this was more of an EA revision session (an IBM view on TOGAF to some extent) and I didn’t get so much from it, except the IBM roadmap for their recently acquired Telelogic System Architect product and how that fits into the roadmap for WebSphere Business Modeler. The vision is that they will remain separate tools, but eventually with a shared repository. Interestingly, none of the Rational modelling tooling was mentioned apart from saying that it is focused on software rather than business modelling, so there was no vision to merge System Architect into Rational Software Architect in any way.

One other thing Rob mentioned was an expansion of that massively overused phrase “IT-business alignment” (doesn’t every IT initiative that comes along promise this?!?) into several more defined levels of alignment – this rang a bell for me and it’s something I’ll look into a bit more, I think…

Whilst my colleague was presenting a DataPower case study from a customer project in another room, the final presentation I went to was a full-on techy session from David Currie of IBM about the new features in WebSphere Process Server (WPS) and WebSphere Enterprise Service Bus (WESB) v6.2. I wanted to keep on top of where these products are going at a detailed level. The degree of change in each release is pretty amazing, but it also leaves a slightly sour taste in your mouth as you realise that IBM are plugging feature gaps in the product that sometimes you knew about (e.g. code assistance to help deal with the SDO model) and sometimes you didn’t (the ability to act as a service gateway – which seems a pretty fundamental thing for an ESB to offer). Apart from the fairly extensive changes to support service gateways (a number of new mediation primitives etc.), the developments in the human tasks side are the most interesting to me. You can now attach documents to business processes, and users can override a business process flow (sounds dangerous! – but it is intended for those processes where business exceptions/interruptions can occur at any time). Sorry to finish on a negative, but one obvious gap is that SOAP 1.2 is now supported, but not for SOAP over JMS – purely due to them running out of time to get it into the release, AFAICT.

So – all in all, a good user group meeting, especially as it finished with beers. Many thanks to Rick Smith and co for organising it.

Today the UK WebSphere User Group meeting at IBM’s offices at Bedfont Lakes took place, and I attended some interesting sessions – I’ll post more about this later on.

I picked up three interesting quotes today…mainly from Rob High (IBM Chief SOA Architect)…

  • “67% of business processes fail” (due to insufficient business resources)
  • “It costs five times less to reuse some functionality (from a legacy system) than it does to recode it”
  • “IT is a fashion industry”

One of my colleagues, Stuart Smith, flew in from Edinburgh today to give a presentation on a novel use of DataPower from a customer project, and then flew back again. This was particularly dedicated as he’s just had a knee operation, which he was nice enough to share with us…including the piece of string that still seems to be connected to something inside his knee…yuck.


A quick blatant plug – my colleague Stuart Smith is going to give a presentation at the forthcoming UK WebSphere User Group meeting on the following theme:

DataPower XI50 has a broad set of features. By using them in innovative ways, Smart421 built an electronic form submission solution for a national revenue collection service which consisted of only one component: the XI50.

Utilising existing features of MS Excel, backing spreadsheets with XML schemas, and then using DataPower to service and accept submissions of the document, a ‘one box solution’ was created for the client allowing extremely quick development and delivery of a live solution (less than 1 week).

This presentation will talk through the design of the solution, concentrating on separation of concerns, reusability and maintainability. A design pattern based around two-step validation (XSD schema level, and business level) will also be discussed.
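As a rough illustration of that two-step validation pattern (the file names and the business rule below are invented, and on the appliance itself the equivalent logic would typically be expressed in XSLT rather than Java), structurally invalid documents are rejected cheaply at the schema level first, and only well-formed submissions reach the business-level checks:

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.w3c.dom.Document;

// Two-step validation sketch: step 1 is structural (XSD), step 2 is a business
// rule applied to the already well-formed document. All names are illustrative.
public class TwoStepValidation {

    public static void main(String[] args) throws Exception {
        File submission = new File("submission.xml");                        // hypothetical

        // Step 1: XSD schema validation - throws an exception if the structure is wrong.
        Schema schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                                     .newSchema(new File("submission.xsd")); // hypothetical
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(submission));

        // Step 2: business-level validation on the structurally valid document.
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(submission);
        String amount = doc.getDocumentElement().getAttribute("totalAmount"); // hypothetical rule
        if (amount.isEmpty() || Double.parseDouble(amount) < 0) {
            throw new IllegalArgumentException("Business validation failed: bad totalAmount");
        }
        System.out.println("Submission accepted");
    }
}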

Should be interesting – this will definitely be about capability rather than a tedious sales pitch…or else I’ll be walking out :)

I’ve been involved in a project for a client recently that uses WebSphere Process Server and a DataPower XI50 to service enable a legacy system. Maybe I’ll post something about the fun and games I’ve had with Process Server v6.1 some other time… – for now I want to talk about DataPower.

For those that don’t know, it’s a 1U hardware integration appliance that performs functions that may traditionally have been done by an app server, e.g. secure service exposure, XSLT transformations etc. It does lots more than this which I won’t go into now – it is really quite a powerful beast.

Anyway, when Smart421 first got into DataPower and got a number of staff certified in its use, I must admit to being rather sceptical. Having now used it on a project, though, I have seen the light – it looks like my colleagues were right all along! Easy and quick to set up, lower TCO, great performance, and good support from the sales organisation too.

Don’t you just hate it when that happens :o)
