Internet Explorer running through workspaces on an iPad.

Unusually, I’m writing this blog post in a browser. Specifically, I’m writing it in Microsoft Internet Explorer, on Windows. Not particularly odd, you might think, except that I’m not sat in front of my laptop. I’m using my iPad.

It’s ok, you haven’t gone mad. Hell might have already frozen over when Microsoft released Office for iPad last week, but rest assured it hasn’t happened twice: MS have not released Windows iPad Edition.

Instead, I’m trying out Amazon’s new Workspaces product. This product went GA last week, after a private beta announcement which we covered back in November.

Workspaces is a virtual desktop product that allows you to run managed Windows 7 desktops in AWS with very little effort. Signing up takes minutes, and you can provision either a mostly vanilla workspace with just a few basic utilities installed, or a ‘Plus’ workspace which adds Microsoft Office Pro and Trend Micro AV. In either case, licences for the installed software are included in the price, making it a great way to stand up a desktop machine with all the essentials fully licensed in no time.

There are two performance tiers: ‘Standard’, with 1 vCPU and 3.75GB of RAM (which sounds suspiciously similar to an m3.medium instance), or ‘Performance’, which packs 2 vCPUs and 7.5GB of RAM (m3.large, anyone?). As is commonly considered best practice, each machine has a couple of disk volumes attached: one that holds the operating system and applications (C:), and one for the user’s data (D:). Data on the user’s D: is automatically backed up every 12 hours.

Depending on the bundle you choose, prices range from $35 to $75 per month.

You access your workspace using an Amazon-provided client application that runs on Windows, Mac, iPad, Kindle or Android tablets.

So, that’s the basics covered. How is it to use? Honestly, from the UK it’s currently a little painful. This is to be expected, as Workspaces is currently only available in the US, so every pixel of my display is being shot across the Atlantic before I get to see it. I’m seeing latencies of just over 200ms, and Amazon recommend sub-100ms latency for a good user experience. I can confirm that both the iPad and Mac clients work well enough (spot the Apple fanboy), although in common with any iPad-based remote desktop product, the touch-your-screen-to-point-your-mouse impedance mismatch is disorientating at times. Swapping between devices works much as you’d expect: if you’re logged on from your iPad and then sign in from a desktop, your session transfers seamlessly to the desktop.

From an infrastructure/desktop manager’s perspective, it’s early days at the moment, I think. AD integration is possible, allowing users to log in with their normal credentials, as well as giving them access to local printers and (I assume) file shares. While deploying your own software is certainly possible, you’re pretty much on your own there: there is no concept of an AMI here, nor is there any support for packaging and deploying applications within the service itself. This is probably not a disaster in some senses, since most enterprises have their own deployment tools, but the lack of a custom AMI capability makes bootstrapping a workspace into the deployment tool harder than it would otherwise be.

What about use cases? We can already see a couple of things we do for customers where workspaces could replace or supplement what we currently provide:

  • Cloud DR solutions (for an example see our Haven Power case study). As things stand, the key issue preventing us from doing this is the fact that you pay for Workspaces per month, regardless of how much use you make of the workspace. Unusually for AWS, there isn’t an API allowing you to automatically provision/deprovision workspaces, making it hard to optimise the cost here.
  • Remote desktops for third-party users. We deployed a Windows Terminal Services farm in AWS for another of our customers, who use it to allow third parties to work on their applications. Both the applications and the terminal services farm are managed by us in AWS, and are accessed globally. In theory it would be relatively straightforward to replace the terminal services farm with Workspaces, although we’d have to be confident that the performance is adequate.

Workspaces is a promising technology, but until it’s available in EU-WEST-1, we’re unlikely to be able to adopt it except perhaps in very niche circumstances.

That’s the thing about Amazon though: Like Apple, when Amazon first release a new feature, it’s tempting to be a little underwhelmed. But then, like Apple, a few months or years later we look back at a now mature technology, and we can’t quite remember when it grew up from a metaphorical spotty teenager with potential, to an essential member of the team.

It’s this ability to start ‘simple’, but then improve and polish their products day in, day out, over and over again that has made both companies the unstoppable juggernauts they now are.

 


After the polished video intro (screenshot below; catch the full keynote here), Brad Anderson (Corporate VP at Microsoft) kicked off TechEd Europe here in Madrid by highlighting the opportunities available to us in IT at the moment. He talked about innovation, helping customers differentiate themselves and how Microsoft aim to make others great, so whilst I definitely agree with him, it was the resonance with so much of what we do at Smart that I really liked.

[Image: keynote opening sequence]

Four areas of innovation were discussed: People Centric IT, Building better Apps for people, Data, and the Cloud Platform. Aligned to these were new product release announcements including SQL Server 2014 CTP, VS 2013 and TFS 2013 CTP, Windows Server 2012 R2 and System Center 2012 R2. Better still, a partnership with Oracle was announced, which means Oracle is fully supported on Azure and Hyper-V (and, from what I gathered, Oracle images will be ready to spin up in Azure).

One of the best parts (and I tweeted at the time – @wanty1975, btw) was a video of one of our customers (Aviva) referenced as an example of mobile and Azure development combining to provide an innovative solution for their customers. Why was this so good? Well, partly because my old boss was in the video…but more so because seeing real-world examples, and ones close to home too, helps inspire me and makes me realise the opportunities we have available to us.

[Image: TechEd keynote presentation]

So, a brief list of new features mentioned in the keynote that you should be aware of:

  • BYOD – two-factor authentication making it easy to add devices (an MS Surface in the demo) to AD domains using Windows 8.1 (not sure what the experience would be like for other devices, though). The simple process means files are synced, permissions are granted to corporate data and Line of Business (LoB) applications are downloaded onto the device, accessible via a company portal-like homepage.
  • VS and TFS 2013 – ALM benefits throughout the development lifecycle, but I really liked InRelease (following a recent acquisition by Microsoft), which provides workflow and approval mechanisms for moving builds through environments. I could see this really helping to take the pain away from tracking builds through environments and ensuring everyone knows what versions are where. From a dev perspective I also thought the new Code Lens and Code Map features looked really helpful. These come under the banner of Situational Awareness and are designed to reduce the time and effort needed by developers when context switching. Code Lens integrates with TFS 2013 to provide details within the code base, such as tests passing/failing, who last checked a file in and any recent check-ins. It helps developers understand who has done what, where and why, much more simply than the previous version/compare routine that we’ve all experienced. Code Map provides a diagrammatic view of code (similar to architecture modelling tools), but this can be viewed at runtime in debug mode, like a call stack, helping developers familiarise themselves with code much more quickly and easily.
  • SQL 2014 – some huge performance gains achieved through in-memory OLTP and row-based rather than page-based access, reducing locking issues and increasing performance. SQL will suggest which tables are ‘hot’ and recommend moving them into memory (I’m assuming they’re still persisted to disk fairly frequently!). I also picked up on the ability to replicate an existing on-prem DB to Azure, to provide HA and DR via a right-click menu option…didn’t see or hear any more evidence of this, but it sounds like a great feature. Also HDInsight and Hadoop linkage to Excel to provide visualisations and insight into Big Data.
  • Windows Server and System Center 2012 R2 – providing the tools to make Hybrid cloud simple and easy, with tooling to help sync data, create replicas for DR purposes and significant performance benefits for virtualised machines.

So there are obviously things that I’ve missed and lots more to talk about but hopefully this represents a fair summary. I’ll also be tweeting any further nuggets throughout the week.

[Image: “Turning disc” by Toshiyuki IMAI, used under a Creative Commons licence]

With most enterprise arrays now providing the ability to allocate more storage than is physically installed and with data moving transparently between tiers of low cost and high performance with no human intervention – what could possibly go wrong?

For a long time now, SAN infrastructure has been at the core of the data centre, satisfying a need to hoard as much data as could possibly be collected. With the required capacity growing at an exponential rate, it has become common to predict massive data growth when designing services, often calculating growth for two or more years. This scenario leads to large amounts of storage being purchased in anticipation of things to come. By the time two years have passed, some of the attached systems will have become legacy as they are superseded by shiny new solutions, and some will simply not have grown as expected. Even those systems that have behaved as designed are probably using no more than 70% of the storage, as likely as not due to a “fudge factor” arbitrarily thrown into the mix by the architect.

Virtualised storage arrays (each in its own way, it would seem) allow us, within reason, to allocate as many devices of whatever size we like to as many hosts as we see fit, with only a small pool of physical disk. This allows us to satisfy our storage-hungry analysts whilst buying the disk in a phased manner and delaying spend until it is necessary. Add in the reducing cost per gigabyte over time and the benefits mount up.

At the same time, array architecture has developed in such a way that it is harder to facilitate small amounts of physical growth. With larger RAID groups becoming more common and larger disk sizes compounding the issue, traditional SAN allocations have become inflexible and, for a small requirement, potentially expensive. The smallest building block becomes a RAID group, and a one-terabyte requirement on a traditional storage array might require the business to grow the array by thirty terabytes or more. With virtualisation comes the ability to create more devices for allocation without having to have matching physical storage installed. There is no longer a need to grow a system until the written data exceeds thresholds, and the larger building blocks are far more palatable when they are shared between all attached systems.
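To make the building-block arithmetic concrete, here is a rough, illustrative calculation; the disk size and RAID layout are assumptions chosen for the sake of the example, not figures from any particular array.

```python
# Illustrative only: assumed disk size and RAID group layout.
disk_size_tb = 2                     # assumed physical disk size
data_disks, parity_disks = 14, 2     # assumed RAID 6 group (14 data + 2 parity)

raid_group_usable_tb = data_disks * disk_size_tb   # smallest traditional growth step
requirement_tb = 1                                  # the modest requirement from the text

print(f"Smallest growth increment: ~{raid_group_usable_tb} TB")
print(f"Requirement: {requirement_tb} TB, i.e. "
      f"{requirement_tb / raid_group_usable_tb:.0%} of the increment purchased")
```

On those assumed numbers the business buys roughly 28 TB of usable capacity to satisfy a 1 TB need, the same order of magnitude as the thirty-terabyte example above.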

As with everything in life, there is always a flip side. The balance in this case is increased risk: what if the physical pool fills up? At the very least, writes will have to stop, but in some cases I/O stops altogether; neither scenario is one that a business will want to contemplate. This is where careful planning and monitoring are essential.

In a simple virtualised environment, physical storage will be configured into Storage Pools and a number of virtual devices created and attached to these pools. As data is written to the logical device it will be written across the physical disks making up the storage pool.

Considerations when designing virtual solutions:

  • The number of servers that will be connected – To provision for sufficient connectivity and I/O capability. It is often not available but any additional I/O profiling for attached systems will also be useful.
  • The average server allocation size and utilisation – To calculate required space for allocation and physical writes.
  • The maximum allocation size –  Large allocations could result in a single server having the ability to fill all available pool space.
  • Maturity of the service – Mature systems will require larger initial space with slower growth; new systems may start small and grow into the allocated space over an extended period.
  • Performance – Is there sufficient throughput for the attached servers?
  • Criticality – There is an associated risk with over provisioning. There are ways to mitigate this and in some cases systems will be too critical to the business to accept any additional risk.
  • Mixed Production, Test and development environments – Will the different environments share the same pools or should they be ring fenced?
  • Alerting  – What level of alerting is configured? At what level of utilisation do warnings start? Can unexpected write bursts from a single application be identified and highlighted?
  • Capacity for growth – Can the pool be quickly expanded?
  • Time to deploy – How quickly can the vendor react to increase the pool size?
  • Plan B – Can space be redeployed to production systems?
  • Cost reduction per GB – Flexibility may be cited as a benefit, but the main driver for virtualised storage is to drive the cost per GB down; for the most part this is achieved by deferred purchase and tiered storage.

Monitoring:

With thin provisioning, monitoring is essential to maintaining a low risk environment.

As a minimum, capacities must be monitored on a regular basis, with the frequency depending on the activity in the environment, the attitude to risk and the subscription and utilisation figures. In most large environments daily monitoring should be sufficient, or even weekly.

We have found the following capacity metrics to be the most useful, and we collect them on a weekday basis so that we can produce trend analysis to support future forecasts.

For each pool on each array we collect:

  • Installed storage
  • Used
  • Virtual TB
  • Allocated
  • Subscription
  • Nett subscription

Experience shows that the larger environments have very linear growth in both allocated and written storage, and the trending figures provide quite accurate forecasts.
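As a minimal sketch of the kind of trending we mean (the sample figures and field names below are illustrative assumptions, not real pool data), a simple linear fit over the weekday samples gives a days-to-full estimate per pool:

```python
# Minimal sketch: linear-trend forecast of pool exhaustion from weekday capacity samples.
# All figures below are illustrative assumptions, not real pool data.
import numpy as np

installed_tb = 100.0                                           # physical storage installed
allocated_tb = 180.0                                           # virtual capacity allocated
written_tb = np.array([61.0, 61.4, 61.9, 62.3, 62.8, 63.1, 63.7, 64.2])  # daily "used" samples

days = np.arange(len(written_tb))
growth_per_day, _ = np.polyfit(days, written_tb, 1)            # TB written per day (trend)

subscription_ratio = allocated_tb / installed_tb
utilisation = written_tb[-1] / installed_tb
days_to_full = (installed_tb - written_tb[-1]) / growth_per_day

print(f"Subscription ratio : {subscription_ratio:.1f}:1")
print(f"Pool utilisation   : {utilisation:.0%}")
print(f"Growth trend       : {growth_per_day:.2f} TB/day")
print(f"Estimated days until the physical pool fills: {days_to_full:.0f}")
```

In practice the same calculation would be run per pool on each array and fed into the alerting thresholds discussed above.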

Performance figures should also be reviewed but a less regular basis is sufficient – weekly or even monthly.

With virtualisation improving overall utilisation and ever-larger physical disks being installed, the I/O profile per disk is also increasing. These performance trends should be monitored and reviewed to anticipate thresholds being reached.

As a result of the increased utilisation and larger physical disks, we are also seeing the number of host connections to front-end ports (fan-out ratio) and the amount of storage allocated to front-end ports both increasing. The host ports should be monitored in the same way as the disks to anticipate performance thresholds being reached.

There is no point increasing the size of the pools to accommodate further allocations if the path to the data is already fully loaded.

This performance monitoring is specific to the capacity planning and does not replace the normal daily performance monitoring and alerting on the storage arrays.

What can be done to prepare for emergencies?

Most virtualised storage solutions have the functionality to preallocate storage. For systems that cannot afford any outage the storage can be preallocated in advance.

Understand the vendor’s process for increasing the installed physical capacity.

  • How long does it take the vendor to install new storage from the point of request?
  • Can elements (such as procurement) be prepared in advance of the need or post deployment to avoid delaying the process?
  • Can hardware be left on site to reduce hardware deployment timescales?
  • Are sufficient resources available to accept additional disk?
    • Free drive bays
    • Cache
    • Power
    • Floor space

Understand the environment.

  • What systems are attached that could be sacrificed to maintain critical services?
  • Are there device replicas in the pool for non-production purposes that could be released?

These observations are based on simple virtualised pools, most hold true for tiered storage environments but these also bring their own sets of concerns and even more metrics to follow.


Organised by the UK Windows Azure User Group, this free all day conference provided a great opportunity to catch up on the latest developments, particularly given the Microsoft announcement a couple of weeks back.

Core to this announcement was Microsoft’s move into Infrastructure-as-a-Service (IaaS), and the keynote by Scott Guthrie positioned IaaS (described as Virtual Machines) alongside Microsoft’s current Cloud offerings, which to date have focused on Platform-as-a-Service (PaaS – now labelled Cloud Services by Microsoft) and Software-as-a-Service (SaaS – Office 365, for example).

[Image: MS Cloud Day]

Despite the lack of internet connectivity for a large part of the presentation (what is it with Cloud demos and loss of connectivity?!?) Scott did a great job talking through the slides, clearly describing the alignment of each of the deployment options: On-premise vs Virtual Machines vs Cloud Services vs SaaS.

In addition to Virtual Machines, the new Web Sites service was also discussed, which gives Azure customers up to 10 web sites and 1GB of storage for free (whilst in the preview period; see here for further details). The demonstration showed how easy it is if you simply want to re-host an existing web site on Azure, whether it be ASP.NET, Node.js, PHP or even classic ASP. So the new Web Sites and Virtual Machines services provide a simple route to hosting applications on the Azure platform, with the added benefit of the Azure management aids, real-time statistics and, in the case of Web Sites, incremental deployments and continuous integration (through TFS or Git) too.

So where does this fit with PaaS? Well, Steve Plank from Microsoft provided some answers with another demonstration. With Cloud Services you get a host of services to call upon, including Storage, Database, Identity, Caching and Service Bus, and the demo showed that if you design your application from the ground up utilising these services, you benefit from an end-to-end application architecture that can be deployed and running in minutes at the click of a button. It is this architecture that really gives you the elasticity and flexibility in the places you need it.

A good day, and exciting times with the options and landscape constantly changing. Nicely summed up by another Smartie (Andy Carter), who said ‘I guess there’s a load more stuff I need to learn about’ when, a couple of days after he passed the Azure certification, MS announced the new services… (Well done, btw!)

In December I had the pleasure of attending a seminar called “Service Management in Cloud and Virtual Environments” which was organised by IT Service Management Forum (ITSMF) at a very cold but, thankfully, snow free Manchester Museum. The aim of the seminar was to “highlight the specific challenges, and suggest practical ways to how you can modify your approach to service design, service transition, and service operations”.

There were speakers from the following organisations: HP, Global Knowledge, Atos Origin and Fujitsu, and each presentation was followed by a short question and answer session.

First up was the presenter from HP, who gave a very interesting address entitled “Why is Infrastructure Converging? – Creating platforms for the Cloud”. This presentation outlined the technology challenges ahead, driven by Population Growth, Urbanisation and Globalisation, resulting in an Information Explosion. Together with an ageing, complex and inefficient IT infrastructure, this will lead to an environment where businesses demand technology based upon rigorous standards, high volume, low cost and rapid innovation. In the presenter’s opinion, this environment will be dominated by the “top table players”, namely Cisco, IBM, Oracle and, unsurprisingly, HP! There will always be small companies leading the innovation cycle and “looking to be acquired”, but mid-market players such as SAP will be squeezed. New convergence technologies such as intelligent energy management, virtual I/O networks and virtual resource pools will be the infrastructure which underpins Cloud solutions, but unlike the mainframe era the only lock-in will be via quality, as businesses look to metrics such as Time to Innovation, Cost to Innovation and Longevity of Supply.

This was an extremely thought-provoking presentation backed up by some incredibly large numbers and forecasts and it made me consider the following;

  • The need to address the regulatory environment when looking at Cloud solutions. For example, the requirements for retaining data are different in the USA compared to the UK and Europe, so what does this mean if the customer is based in the UK but the Cloud services (and the data stored) are provided elsewhere, or anywhere?
  • What is the impact to Service Level Management and Service Operations where the network becomes the boundary of the organisation as devices are too diverse and widespread to be supplied or supported by IT?
  • The challenges associated with organisations grappling with initiatives associated with reducing their carbon footprint. How do you measure the number of transactions per unit of power and what are the operational challenges moving workload from the hot part of the data centre to the cooler areas?
  • Service Transition policies and plans are required to have a greater focus on how to move a service from one cloud provider to another.

Next up was the presenter from Global Knowledge, who gave a presentation called “Service Level Management and the cloud revolution. A survival guide”. This presentation started by reviewing the definition of the Cloud given by Forrester Research (“Any computing service that is provided outside the customer premise and that is provided on a ‘pay-as-you-go’ basis”) and looking at some of the “…as a Service” offerings, such as Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). An interesting slide outlined the speaker’s findings about “the Cloud” when speaking to various CIOs, and these were: lack of control, accountability, visual representation and transparency, including security and resolution processes. The presenter then looked at Service Level Management in more detail, and in particular at some details around “Agree”, “Monitor” and “Improve”. When looking at “Agree”, the following key themes were discussed: Cloud is just another Third Party Service with Underpinning Contracts, Performance and Availability will be outside your control, Security is NEVER guaranteed, and Service Catalogues will be vital. “Monitor” basically asked one question and provided one possible outcome: traditional measures of components are irrelevant and Synthetic Monitoring will be the key. Finally, when looking at “Improve”, it is worth considering that Cloud providers are unlikely to be interested in your Service Improvement Programme (SIP), and therefore SIP may have to be achieved by switching vendors.

This presentation was much more related to Service Management than the first of the day and was still as interesting and again made delegates consider some interesting questions.

  • During and after this presentation I found myself in full agreement with the assertion that a Cloud service is just another underpinning contract which needs to be managed, although in practice this may be an easier “sell” to the customer where a service is fully outsourced; where other sourcing models, such as co-sourcing or multi-sourcing, are in place, this may not be the case.
  • Additionally, I was in full agreement with the speaker regarding synthetic monitoring, i.e. monitoring from the end-user perspective, as for far too long organisations have hidden behind complex availability metrics whilst end users feel the service is less than satisfactory. However, it’s easier to talk about synthetic monitoring than to achieve it, and when designing a service this sort of monitoring must be able to be lifted and shifted if the service is to be moved between Cloud service providers.
  • As part of the Service Design process it seems to me that organisations may spend less time on technical resources but actually spend more time and effort ensuring people understand how to manage procurement and the supplier management processes.
  • Demand Management, understanding User Profiles and Patterns of Business Activity are likely to be a key part of Service Design process to ensure the ability to leverage benefits of the cloud such as ensuring there is enough capacity to meet spikes in demand.
  • Consideration will need to be given to how network availability is designed, as the network is the key to accessing Cloud-based services. Questions will need to be asked about how network providers are geared up for this change, when changes in this area are typically slow to provision. Additionally, how does this affect IT Service Continuity if network providers utilise a shared infrastructure?
  • Most of the above questions and issues appear to me to have a significant bearing on how Service Level Agreements are negotiated with clients. It may be that fundamentally not a great deal changes in delivering the service when moving to the cloud, but the very act of moving a service into the cloud will mean that many questions that have not been addressed until now suddenly become all-important.

The 3rd presentation was given by a representative of Atos Origin and began by restating some defining principles of Cloud Computing. The presenter then outlined some considerations for organisations looking at moving into the Cloud: consider migration by building an “Internal Cloud” first; determine external vs. internal Cloud ROI and monitor external costs; don’t abdicate responsibility to the Cloud; use the Cloud for short-term scale and offload capacity; safeguard your data; and gain access to well-trained “Cloud expertise”. In my opinion these all seemed very sensible observations.

Personally I found this slide show less stimulating but one or two of the slides made for interesting discussion.

  • Firstly, that organisations should consider moving non-strategic services into the Cloud. Personally I can see the merit in that approach, but it may be a bit idealistic, as the appetite for and speed of migration of services into the Cloud will be driven by many different factors. For example, if an organisation no longer has capacity in its own data centre, then all future services may be moved into the Cloud in spite of their strategic importance to the organisation.
  • Here at Smart421 we are not building an Internal Cloud but undertaking an exercise to “eat our own dog food” by moving some of our internal services into the Cloud.
  • I particularly like the suggestion to engage with an organisation with Cloud Expertise :-)

The final presentation, from Fujitsu, was titled “A Private Cloud – The HMRC “S4” Service” and was described as “A private cloud service for hosting Windows, Linux, and AIX based applications”. A Capacity Unit was the model used for charging, composed of n CPU cores plus memory.

When discussing the challenges associated with this service it was clear that many of the traditional challenges remain, namely; maintenance slots, how much to invest against the foreseeable customer demand, forecasting workload and optimising utilisation.

Finally, looking at the benefits of this shared infrastructure solution it can be seen that there are similar benefits to be gained as identified from cloud solutions, that is; reduction in costs, faster deployment of business solutions and reduced sizing risk. The final benefit, according to the presenters, was “Simple decommissioning at end-of-life” which I assume was meant from an infrastructure point of view in terms of not having to retire and dispose of hardware etc.

  • Although this was presented as a Private Cloud, in my opinion it was more of a shared infrastructure solution which had some features and benefits of cloud but also some limitations. For example, the HMRC Service Owners could purchase the service using a standard service catalogue, and costs were below what would typically be charged to host the solutions on dedicated hardware; however, there didn’t appear to be any way to “burst” beyond what was purchased in terms of “Capacity Units”. Whilst I can see the attraction of this from a technical point of view, this charging model does not directly relate to business outcomes or metrics, which in my opinion is the way the IT Service Management community is heading.
  • In my opinion a true cloud solution would offset some of those shared infrastructure concerns. For example, maintenance slot negotiation could be mitigated by moving the workload to another instance in a true cloud solution. Again, in a true cloud solution investment would be less of an issue, as customers only pay for what they use.

In conclusion, this was an interesting rather than startling seminar; however, what was still clear to me is that specific real-world examples of organisations moving services into the Cloud are still pretty thin on the ground. Additionally, addressing the security concerns of organisations is fundamental to driving the growth in take-up of Cloud Services.

Looking at the Service Lifecycle and some of the processes contained within, I believe that greater emphasis in the future is going to be placed upon Service Strategy (Demand Management), Service Design (Capacity Management) and Service Transition.

When assessing the impact on Service Operations I think there is a fundamental requirement to assess existing processes for appropriateness and effectiveness when moving services in to the Cloud. Indeed there could be opportunities to streamline processes and for example, potentially reduce the impact on end users associated with Change.

Continual Service Improvement will continue to be a challenge, especially with those Infrastructure as a Service (IaaS) providers which operate a true utility model.

Finally in my opinion, the demand placed upon IT Service Management will still be as great as ever but the emphasis may be different to what we have seen in the past which for me is an exciting part of the challenge.

Following on from the previous post, here are some more random notes about CloudBurst – pros, cons, features, limitations etc.

One thing that occurred to me was “Why is CloudBurst a hardware appliance?” – it could just be a software appliance… Well, the key reason, it seems to me, is that it holds lots of OS/WAS admin passwords and so on, and so the ‘hardening’ of a hardware appliance, with tamper resistance etc., is a key feature.

Deployment patterns and deploying

WAS Hypervisor Edition is actually an OVA image with four virtual disks and multiple WAS profiles set up already – so the Hypervisor Edition is a VM image rather than a customised WAS code base, it seems.

There are patterns for a single server deployment, small cluster (3 VMs), large cluster (15 VMs) etc. You can modify a standard pre-packaged VM (e.g. add a fixpack etc) and then ‘capture’ back into CloudBurst as a standard catalogue VM for use in new patterns.

Control is available over whether certain pattern properties (passwords, memory size etc) can be overridden for each instance of that pattern or not.

A key point – keep track of any changes made to your VM patterns (e.g. any tuning done) and then ‘bake’ them into the pattern in CloudBurst so that any future deployments get the changes – otherwise they’ll be lost when you redeploy the pattern.

The first image transfer to each hypervisor can take up to an hour (obviously this is environment-dependent)!

IP addresses are allocated on the fly when deploying, i.e. it pulls them from a pool of available IP addresses that the admin user sets up.
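As a purely illustrative mental model of that allocation step (this is not CloudBurst code or its API; the class, names and subnet are made up), the mechanism amounts to handing out addresses from an admin-defined free list:

```python
# Illustrative model only (not CloudBurst code): hand out addresses from an
# admin-defined pool at deployment time and return them on tear-down.
import ipaddress

class IpPool:
    def __init__(self, cidr):
        # The admin defines the pool; here it is simply every host address in a subnet.
        self._free = list(ipaddress.ip_network(cidr).hosts())
        self._in_use = {}

    def allocate(self, vm_name):
        if not self._free:
            raise RuntimeError("IP pool exhausted")
        ip = self._free.pop(0)
        self._in_use[vm_name] = ip
        return ip

    def release(self, vm_name):
        self._free.append(self._in_use.pop(vm_name))

pool = IpPool("10.0.42.0/28")
print(pool.allocate("was-dmgr-01"))   # e.g. 10.0.42.1
```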

Script packages

Post deployment, CloudBurst can run any script, not just wsadmin scripts – essentially it ssh’s over to the VM and uses a zip file and an executable name (e.g. <some_path>/wsadmin.sh) with some arguments (e.g. what JACL file to run). ‘wsadmin’ scripts can be used against the deployment manager VM to install an application (EAR file) into the cloud cluster. Some “wsadmin” scripts are provided out of the box for common tasks – setting up global security etc.
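For context, the wsadmin part of such a script package typically boils down to a handful of standard calls. A minimal Jython sketch (the EAR path, application name and cluster name below are made-up examples, not values from the product) run against the deployment manager might look like this:

```python
# Minimal wsadmin Jython sketch, e.g. run as:
#   wsadmin.sh -lang jython -f installApp.py
# against the deployment manager. The EAR path, application name and
# cluster name are illustrative assumptions.
earFile = '/tmp/scripts/MyApp.ear'

AdminApp.install(earFile, ['-appname', 'MyApp', '-cluster', 'CloudBurstCluster'])
AdminConfig.save()
# A node synchronisation and an application start would normally follow.
```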

Management facilities
CloudBurst provides a centralised access point to the consoles for each VM/WAS instance.

You can control separate permissions for each user (with LDAP integration), so you can have multiple users using a single CloudBurst box at one time, creating their own ‘private’ patterns etc.

You can use it to control the hypervisors to create snapshots of all the VMs in a particular deployment, so for example you can run some tests and then quickly recover the entire virtual deployment (i.e. all the VMs).

License management/metering etc. seems a pretty limited offering; it relies on admin REST APIs exposed by CloudBurst that are called by something like Tivoli.

CloudBurst admin console interface seems v..e..r..y.. slow to respond sometimes.

We’ve got a bit more under the skin of CloudBurst now and I wanted to post some info that’s not been written by anyone in Marketing…about the realities of the product (good or bad) rather than the salesman’s spin.

So what does it do? Well, in a nutshell, it holds VMware ESX and ESXi VM images of WebSphere Application Server (WAS) on disk and can install them to some IP addresses of your choice at your command, following them up with some automated WAS admin scripting of your choosing. Some pre-created WAS Hypervisor Edition VM images exist (based upon Novell SLES 10 SP2), or you can create your own and package them up using OVF (Open Virtualisation Format). There’s no runtime component to the product other than its VM management/deployment role, i.e. it relies on WAS XD if you want load balancing etc. There’s more to it than that, but those are the bare bones of it.

So what are the key use cases for CloudBurst – why would someone want one when they can install VM images themselves? Well, the key reason is to take the deployment cost out of an IT organisation. The creation of the OVF VM images is still going to be just as easy/traumatic as it was before, but once you’ve got a “pre-canned” environment set up in CloudBurst you can roll it out repeatedly, and with confidence, with very little manpower required.

Who would use it? Well, if you get benefit from being able to ‘can a topology’ rather than just putting a single machine image ‘in a can’, then there could be real cost savings and agility/reproducibility benefits from being able to roll out a defined topology to your private cloud very quickly and repeatedly. So if your organisation has many projects running throughout the year that need multiple dev, test, pre-prod and prod environments created and ripped down all the time, then you’d very quickly get a payback, I suspect. It would also make you more likely to kill off an unused environment if you knew you could painlessly recreate it, reducing your overall capacity needs.

The immaturity of the Open Virtualisation Format (OVF v1.0 was only released in March 2009) is a key constraint at the moment, and this is an industry-wide issue – it’s early days. A key impact for CloudBurst is that each VM image is a bit of a beast at 20GB minimum (not entirely clear why this is – maybe due to a WAS install being big anyway, due to the way virtual disks are managed in the OVF standard, and the images being uncompressed?). This directly impacts deployment times just due to the sheer volume of data to be shunted around, but it’s not immediately clear to me if this is an OVF issue (it does have some compression support) or an issue with the implementation/use of the standard. If deployed more than once to the same hypervisor, deployment times can be accelerated as all this data doesn’t need to be provided a second time. It can take something like 7-15 minutes to deploy a VM image.
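Some back-of-the-envelope arithmetic (the link speeds are assumptions, purely for illustration) shows why the image size alone dominates those timings:

```python
# Rough arithmetic: minimum time to move a 20GB image at assumed link speeds,
# ignoring protocol overhead and hypervisor-side processing.
image_gb = 20
image_gbit = image_gb * 8

for label, gbps in [("10 Gbit/s", 10), ("1 Gbit/s", 1), ("100 Mbit/s", 0.1)]:
    minutes = image_gbit / gbps / 60
    print(f"{label:>10}: {minutes:5.1f} minutes minimum")
# Roughly 0.3, 2.7 and 27 minutes respectively, which is consistent with multi-minute
# deployments and an up-to-an-hour first transfer on a slower or busy network.
```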

There are two key design approaches when creating your VM images (the best approach is probably a mixture of the two):

  • use a ‘stock’ WAS image, customise it with your config/settings and your EAR files installed etc., and create a new OVF image for deployment
  • use a ‘stock’ WAS image and then do the customisation post-deployment using WAS admin scripting

So where’s it going from here…? Well, support for Xen-based VM images must be likely, as this is crucial for anyone who is ultimately going to deploy into Amazon EC2. Portal Server is already available on EC2 on a pay-as-you-go basis, and WAS is coming. Also, it’ll be interesting to see if IBM support non-WAS images in later versions of the product.

With all the excitement surrounding cloud computing, I’ve been chewing over the realities of deploying into a private cloud environment in an enterprise. Whilst there are obviously great cloud ‘use cases’, like the ability to scale up rapidly to create a performance test environment, the rather more mundane fact of corporate life is that many deployments are relatively small scale – maybe a few servers at most to support specific and specialist business operations. But we still want to have them hosted inside our corporate private cloud – we want them deployed to a virtualised environment, with fail-over support and higher resource utilisation, managed in the same way as the rest of our IT estate, and so on.

But some relatively specialised, industry-specific software packages still require the use of hardware dongles plugged into a USB port in order to enforce license agreements. So then what do you do? Well, you can ‘virtualise away’ the dongle to the extent that there is technology out there to allow VMs to connect over TCP/IP to USB ports that are actually hosted elsewhere, but that still leaves you with a hardware affinity in the data centre, e.g. you couldn’t transparently move a dongle-dependent VM from one data centre to another in the cloud. It also gives you an additional single point of failure in the infrastructure design that needs to be addressed. Obviously you would hope that vendors are moving away from using dongles – but some still will for the foreseeable future.

It’s a silly but real example, and the point is that the vision of everything moving to private clouds is subject to these kinds of practical considerations, and the application vendors have a role to play in enabling their products for these kinds of deployment models…
