Infrastructure Architecture is dead, long live Infrastructure Architecture.

Cloud infrastructure has changed the role of the infrastructure architect into that of a virtual architect. The tin-and-wires approach, the HA and DR, the test and release and the follow-the-sun support have all been replaced by cloud infrastructure. Careful sizing, location, rack allocation etc. are unnecessary for most applications.

The desktop environment is changing radically: desktop management, massive rollout projects and the associated investment decisions are becoming obsolete. The use of virtual terminal software returns the desktop to the server, and BYOD removes both the need and the ability to dictate the desktop and mobile landscape. Network architecture is increasingly virtualised, both within the virtual datacentre and between datacentres and client devices.

It is hard to imagine that bastion of physicality, the Infrastructure Architect, dealing with purely virtual servers communicating on a virtual network, all of it assembled from their own virtual device. There is a serious point to this: it depends on the Application Architect designing in such a way that the full power of the cloud can be used.

Not that it is anything new, just more pressure on the Application Architect.

 

 


I attended the Big Data Analytics 2012 event at the Victoria Park Plaza, London (organised by Whitehall Media) yesterday along with our CTO, Robin Meehan. We wanted to attend to keep in touch with what some of the big players are saying about “Big Data” and their views on analysis, with Cloudera, Oracle, SAP, Informatica, SAS UK, SGI, MapR Technologies, GigaSpaces, MicroStrategy, Pentaho etc. all there (and furnished with masses of pens, notepads, pen drives, stands etc.).

The event itself was good; the usual mix of CIOs, CTOs and techies in attendance. A number of the keynote speakers (in and amongst the sales pitches) had some interesting stories and facts, such as John O’Donovan, Director of Technical Architecture & Development at the Press Association, talking about how they analysed the “masses of data” captured during the London 2012 Olympics to deliver “Content as a service” to consumers around the globe (including the translation of the content en route), supporting up to 50K TPS. This was followed by a great clip explaining the maths behind Roberto Carlos’s improbable goal – it’s worth a look – click here to watch on YouTube.

Where do I start?

Bob Jones, Head of the CERN Openlab project, gave us an insight into some of the “Big Data” challenges they are facing at CERN, and with the generation of 1 petabyte of data per second it is clear to see why! Even after throwing away most of that data they still permanently store between 4 and 6 GB per second, and they are on course to record 30 PB of data for 2012 – and they aren’t even running at full steam yet!

Many other companies spoke at the event, but the talk that resonated with me most was the one by David Stephenson, Head of Business Analytics at eBay. It wasn’t the impressive stats, such as 70-80 billion database calls per day, 50+ TB of new data stored per day or the 100+ PB of data processed daily.  It was what he called “the prize”:

“using behavioural data to understand our customers’ intent, preferences and decision making processes”

The reason this resonated so much with me is that this is exactly “the prize” that I have been working towards with one of our (Smart421’s) customers – tapping into the rich vein of information available and utilising this to ensure that they engage with the customer in a more relevant and timely manner.

It really does come down to the four V’s (i.e. Doug Laney’s “3Vs” construct of Big Data mentioned in a previous blog here, plus one further crucial point):

  • VOLUME
  • VARIETY
  • VELOCITY
  • VALUE

And actually the one we all really want to focus on is the fourth: VALUE!  Otherwise, why are you doing the first three anyway – right?!

It is the VALUE of the data that we seek: seeing what else we have available to us to allow us to progress, build better applications, and communicate more effectively and more relevantly. Whatever your business is, it is the VALUE that you derive from your data that really counts.

One of the points that David Stephenson made was that 85% of the eBay analytical workload is new or unknown – you don’t need to know all the questions you need answers for when you start a “Big Data” programme, just look at what you already have, see how this can be supplemented, what is relevant to your market or other areas of business and take it from there!

You’ll be amazed at what you find and the impact that it can have on your business! It is not all about unstructured data, or installing and using Hadoop; it is about using your data, and this will most likely fall into all three of the structured, semi-structured and unstructured camps. No one tool is going to give you a solution – it is about realising that there is an untapped resource that can give you so much. So remember, it’s all about the four V’s (well… really… it is the last V we are all wanting to get from our “Big Data”).

Microsoft announce Windows 8 at //Build

You may have heard recently that Microsoft announced an early release of Windows 8 at the //Build conference in California – although Microsoft are calling this release Windows Developer Preview (otherwise known as a CTP).

You might have downloaded this build and attempted an install on release day last week. If you were lucky, the install will have succeeded. If you were unlucky, like I was, and tried to use virtualization for the install, you might have received an error similar to:

DRIVER_IRQL_NOT_LESS_OR_EQUAL

I have written a blog post on this error on my other blog that you might find helpful in resolving this problem, specifically for the Windows 8 pre-beta build, here: http://www.simonrhart.com/2011/09/windows-8-pre-beta-driverirqlnotlessore.html

NOTE: This is *only* for the pre-release of Windows 8; the error above is a very generic Microsoft Operating System error.

I had my first experience of the WebSphere Integration User Group at IBM Hursley on the 29th June 2010 (despite the best efforts of the traffic on the M25 trying to thwart my plans), along with another colleague from Smart421. The event was held in the original house at Hursley, which is an amazing venue and very impressive, regardless of the number of times I have visited the site before.

After loading up on coffee after the early start (and to calm the nerves from the drive in), and a brief chance to network with the many other people attending the event from a variety of companies, such as Visa, BSkyB, UBS, RBS, Lloyds Banking Group and JP Morgan to name but a few, it was straight down to business with the keynote speech “WebSphere Technologies in a Smarter Planet”, delivered by Kevin Turner (who was quite a captivating speaker). The subject itself touched on how IBM are trying to align the WebSphere product suite with what they see as the future for a Smarter Planet, where we are more interconnected, instrumented and intelligent. This was linked to the developments that have been made to the WebSphere products, such as enhancements to Low Latency Messaging (LLM – targeting trading platforms, using multicast as the underlying technology) and MQ Telemetry Transport (MQTT), plus the strategic acquisitions that IBM have made, such as Cast Iron, CoreMetrics, Lombardi and iLog.

This was then quickly followed by “Scenarios for Message and Event Monitoring from an Architectural Perspective” and “Message and Event Monitoring Implementation and Demonstration”. Richard White was the presenter, someone who worked with some of us “Smarties” at Marsh a few years back. The presentation was very good, and gave some real insight into reaping more rewards from Business Activity Monitoring (BAM). In this day and age most of the applications we deal with are integrated with many different platforms, but most BAM is focused on specific applications (so it’s not really BAM, more …er… AM ☺), whereas looking at a greater scope can improve the usefulness of the monitoring, as monitoring at too low a level can hide the real issues. He also touched on thinking about the event models and what information we want, in order to get a better return on investment from our BAM. Key points to consider are (see the sketch after the list):

  • Summarisation – reduce the BAM traffic
  • Correlation – construct high value information from disparate events
  • Temporal – Present the events in an expected order
  • Interpretation – conversion of domain knowledge to a canonical representation
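
To make those four points a little more concrete, here is a minimal sketch of a canonical monitoring event – the class and field names are hypothetical, not taken from the presentation or from any IBM product – showing how a correlation identifier and a timestamp allow disparate low-level events to be interpreted into one representation, re-ordered, correlated and summarised before they reach the BAM layer:

```java
import java.time.Instant;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical canonical event: "Interpretation" means every platform-specific event
// is converted into this single representation before it reaches the BAM tooling.
record BusinessEvent(String correlationId,  // "Correlation": ties events from different systems to one business transaction
                     Instant occurredAt,    // "Temporal": lets us re-order events that arrive out of sequence
                     String step) { }       // e.g. "OrderReceived", "PaymentAuthorised"

class BamFeed {
    // "Summarisation" + "Temporal": rather than forwarding every raw event, publish one
    // ordered trace per business transaction, which cuts the BAM traffic considerably.
    static Map<String, List<String>> orderedStepsByTransaction(List<BusinessEvent> events) {
        return events.stream()
                .sorted(Comparator.comparing(BusinessEvent::occurredAt))  // present events in the expected order
                .collect(Collectors.groupingBy(BusinessEvent::correlationId,
                        Collectors.mapping(BusinessEvent::step, Collectors.toList())));
    }
}
```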

To be fair, a lot of this is what we already know, but it is something that often takes a back seat and can be a last-minute consideration, even though it adds real benefit to businesses in supporting their applications, maximising their potential and achieving what they are capable of. BAM is definitely becoming an increasingly high priority for business, especially in these times of austerity: by watching our processes and systems activity, small tweaks can bring huge returns in efficiency and even profit!

The afternoon sessions were equally as good with two sessions covering, “WebSphere Lombardi Edition” and “Solutions to Enabling business to manage Business Rules within your ESB” which I will write a separate blog about shortly…

I also attended the IBM Impact 2010 event, but did not go to quite the same talks as Paul Russel. I had not got around to making any notes until now, which for once is not a bad thing, as it has given me the opportunity to see how IBM have followed up since the event. For one thing, they have made a quite comprehensive set of slide decks available for download here. I shall no doubt bone up on a couple of the sessions that I was interested in but could not make, as I could not be in two places at one time.

Although the event centred on the WebSphere product set, it was different in flavour to the WebSphere User Group and WebSphere Integration User Group events, in that it was explicitly targeting IT decision makers (architects and upwards), whereas the user groups – apart from being independent – tend to target the people involved in the implementation a bit more.

As a quick aside, I notice that the WebSphere User Group website has had a bit of a refresh recently, to include at least some downloads from its March event, if not yet the information about its September event in Edinburgh this year – encouraging news after the hiatus resulting from leadership changes for the user group.

Back at the Impact event, there were separate streams for business areas (smarter work in Financial Services and Retail respectively), as well as the usual streams that are more centred on technology.

The financial services stream kicked off with a very interesting presentation by Nigel Solkhon, which provided a really insightful view of the business environment around banking, especially post credit crunch. For example, if you look at the fourth slide of his presentation, there is a graph of various banks’ loan/deposit ratios, and what happened next.

Richard Brown did an interesting presentation on the use of WebSphere Business Monitor to instrument an existing technical process that had been working inconsistently – and still derive value, in the form of both reduced errors/inconsistencies and then using the monitoring to document the ‘real’ business process, instead of what people thought was happening.

I did also attend a session on ‘solving the business case challenge’ by Andy Martin. I never knew that there were experts within IBM whose main job is helping clients come up with a business case; especially for horizontal investments (which are often a challenge for IT shops in large organisations), this could provide real value. I think that it is a resource that we could/should follow up on.

According to the blurb on the Impact microsite: ‘The first [my emphasis] UK Impact 2010 was a resounding success’, which leads me to think that there will be more. I shall be recommending my practice members to attend, as it is a useful and informative event.

My current client has posed an interesting challenge: they have an SOA architecture and want to leverage it in some of their batch activities. Now, while SOA and batch may not jump out as obvious bedfellows, it doesn’t strike me as unreasonable to want to take advantage of an existing SOA investment. How do you process hundreds of thousands of records whilst reusing those lovely enterprise services you’ve spent so much cash creating – and do it in a performant way?

As you might expect, some reasonable concerns have begun to surface. But my client is currently taking a mature approach to the problem: while some of the team think there may be a problem, we don’t have numbers to prove it. And so, we’re going to get the numbers and let that inform our response.

So the point of my post is not to say what we found out, but rather to explore the intellectual space. After all, while SOA isn’t perhaps a natural candidate for batch, I must admit to being a little disappointed by the alternatives. How can you do SOA batch without dropping SOA? Well, I think there are some options (which I’ll cover in just a second, below).

Having been through some of them, I feel a mixture of happiness with a twinge of disappointment. On the positive side, there are a few options that offer benefit, with varying degrees of cost. On the negative side, there’s no switch that’s going to make SOA performant without investment. In some ways, perhaps I’m asking SOA to address a use case it really isn’t intended for, but I’m not so defeatist – clearly SOA batch is not out of the question. It’ll be interesting to see how this area develops – if enterprises are to extract their maximum value from SOA, then batch is a unique use case that cannot be avoided.

  1. Pare down the per-record process to a bare minimum. Rather than calling a single heavyweight service to do everything, perhaps part of the work can be carried out per-record, leaving the rest to be dealt with in the background. One could perhaps even take this to the extent of only performing validation in the per-record loop – a read-only SOA validation service isn’t completely out of the question. It is a necessary part of the implementation in any case.
  2. Make the services themselves a bit more batch oriented. Make the services accept 1..* records to work on, and supply them perhaps 100 at a time. This really cuts down the round-trip time, at the expense of necessitating a bit of forethought in service design. But it’s an easy pattern to understand, and potentially one that could be retrofitted to an existing service layer if the ESB can be moved close enough to reduce the round-trip overhead, or the implementation and interfaces changed slightly (see the sketch after this list).
  3. Have a two stage process that validates the input, prior to processing the content, and decoupling them. The idea would be to perform a quick first pass (perhaps even not leveraging SOA at all), and then loading the known valid data into SOA in the background. Ideally, the validation step catches enough problems to make the remainder that fail at run time a not inconceivable problem to deal with operationally.
  4. Stick with SOA, but go for less heavyweight components. For example, in our case we are using a BPEL engine to do the load and orchestration, but that could be switched out for an ESB-only orchestration. A bit more fiddly, but doable.
  5. Sometimes, things can be done in different places. (In our case they actually can’t, but I’ve seen this enough times to mention it.) For example, if part of the job is aimed at ensuring data wasn’t corrupted or truncated in transit, there are approaches to dealing with this at the network or transport layer that mean the service layer can be freed from such a menial task to do the heavier lifting.
  6. Process things in parallel, and leverage the spare capacity in your system. So, this only applies if there is capacity that can be used. But if you have it, then perhaps more of it can be dedicated to the batch processing at certain times (overnight or in quiet times). This can require some deep reconfiguration of the platform, perhaps to leverage multiple queues with differently performant configurations, but it is only configuration.
  7. Partition your environment, so that no matter how much you throw at batch, the rest of the system remains responsive and available. This is more of an environmental deployment approach, but if you can do it, it’s another option that doesn’t require re-development.
  8. Make your services batch oriented, but also take advantage of SOAP with Attachments and stream your data. Not something that can be done without effort. But if your payload has a few hundred thousand records in it, and you can avoid the overhead of a request/reply for each record, the saving could be significant. However, I don’t know of many tools that could take advantage of this without some clever implementation.
  9. In some situations, it might be possible to redeploy components so that they are co-located. It is clearly not always going to be possible. But if it is, and if the overhead associated with the across-the-network trip is a significant contributor to the problem, then this could really help.
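
As a concrete illustration of option 2 above, here is a minimal sketch – all of the class and method names are hypothetical, not my client’s actual service contract – of a batch-oriented service that accepts 1..* records per call, with the caller chunking a large file into groups of 100 to cut the number of round trips:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical batch-oriented service contract: one invocation carries 1..* records,
// so a 300,000-record file costs roughly 3,000 round trips instead of 300,000.
interface CustomerUpdateService {
    BatchResult applyUpdates(List<CustomerRecord> records);
}

class CustomerRecord {
    final String id;
    final String payload;
    CustomerRecord(String id, String payload) { this.id = id; this.payload = payload; }
}

class BatchResult {
    final int succeeded;
    final List<String> failedIds;
    BatchResult(int succeeded, List<String> failedIds) {
        this.succeeded = succeeded;
        this.failedIds = failedIds;
    }
}

class BatchLoader {
    private static final int CHUNK_SIZE = 100;  // tune against the ESB's payload limits

    static void load(List<CustomerRecord> allRecords, CustomerUpdateService service) {
        List<String> failures = new ArrayList<>();
        for (int i = 0; i < allRecords.size(); i += CHUNK_SIZE) {
            List<CustomerRecord> chunk =
                    allRecords.subList(i, Math.min(i + CHUNK_SIZE, allRecords.size()));
            BatchResult result = service.applyUpdates(chunk);  // one round trip per 100 records
            failures.addAll(result.failedIds);                 // park failures for operational follow-up
        }
        System.out.println("Records needing follow-up: " + failures.size());
    }
}
```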

And if I find out what the answer is, I’ll come back and let you know!

IaaS, PaaS, SaaS and now introducing SCaaS…. Public Cloud Computing gets Real

… for me at least. And it turns out that it has been real for some firms for several years.

I attended a session held by IBM and Amazon Web Services (AWS) today at Hursley labs in Hampshire. The title of the session was: “Software as a Service: Getting started with IBM software on Amazon Web Services featuring WebSphere sMash & DB2”

The name kind of disguised what a mind-expanding day it was going to be. There were two major aspects to the day:

  • that the commercial model for cloud computing is alive and well, and that we are at the beginning of a brand new ecosystem. This is as big as the first phase of the world wide web was, and bigger than web 2.0 by far
  • the hands on proof that all the parts worked.

AWS is a provider of IaaS, and as I found out when signing up for my AWS account (by reading the Ts and Cs), Amazon also has a subsidiary, Alexa, which provides Service Components as a Service (call it SCaaS or CaaS for short) – notably search and indexing.

We won’t dwell on that here; instead we will look at two of the major offerings of AWS, namely the Simple Storage Service (S3) and the Elastic Compute Cloud (EC2). There are quite a few other offerings from AWS, which are interesting if you are using IaaS, but the core offerings illustrate why IaaS is more than a flash in the pan.

S3 is simply the provision of secure storage. Secure in the sense that it is private, and in the sense that it is persisted across multiple data-centres.
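
As an aside (not part of the lab itself), here is a minimal sketch of what storing and retrieving an object looks like from code, using the AWS SDK for Java – the SDK post-dates this post, and at the time the same operations were performed via the raw web-services API or community-written libraries; the bucket name, key and credentials below are placeholders, and the bucket is assumed to exist already:

```java
import java.io.File;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.S3Object;

public class S3Example {
    public static void main(String[] args) {
        // Placeholder credentials -- in practice these come from your AWS account.
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Store a local file durably; S3 persists it across multiple facilities.
        s3.putObject("my-example-bucket", "reports/monthly.csv", new File("monthly.csv"));

        // Read it back later, from anywhere with the right credentials.
        S3Object object = s3.getObject("my-example-bucket", "reports/monthly.csv");
        System.out.println("Retrieved " + object.getObjectMetadata().getContentLength() + " bytes");
    }
}
```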

EC2 is the ability to provision (i.e. not just turn on, but to create) virtual machines – specifically various flavours of Intel machines, running various OSs – within minutes. And then just as quickly to be able to release the machines. A virtual machine image in AWS is called an Amazon Machine Image (AMI), and a running copy of one is an instance.

The story of how Amazon came to be providing infrastructure on demand, originally for its internal consumption, arises out of their desire to remove ‘undifferentiated heavy lifting’, e.g. how do we provide secure storage? (now answered by S3). They wanted to save each team having to solve similar heavy-lifting problems, so they could concentrate on adding value. The end result is that the one IaaS provider I do now know something about is already in a position to provide cheaper infrastructure to most blue chips than any internal IT department – including those which are run by outsourcing outfits. AWS certainly will not be the only game in town; we can expect more players to join the market, which will make the cloud IaaS offering even more competitive:

  • AWS have a large bank of hypervisors (hardware on which virtual machines can run), and have sorted out the provisioning of Virtual Machines to those hypervisors such that:
    • The provisioning is controlled by the user, either through a web gui, or via web-services invocations
    • The provisioning can be billed by the hour
    • The provisioning can be automated to provide scaling on demand
  • The hypervisors (i.e. the actual hardware) tend to run at about 80% utilisation. That is about sixteen times as much as most in-house IT shops (which typically sit at around 5%). This means that AWS has one sixteenth of the cost of hardware, power, space, and cooling for any given set of software running.

Before I went to the session, I was aware that one of the benefits of cloud IaaS was that it was on-demand. I had no idea that the costs for running the machines 24×7 would be so much cheaper than most blue-chip’s IT shops.

The lab sessions were set up around the use of WebSphere sMash and IBM DB2 9.2 CE, all running on EC2 virtual machines launched from Amazon Machine Images (AMIs).

This is very much like the session I had a couple of weeks ago with the WebSphere CloudBurst appliance. CloudBurst makes it easy to deploy multiple instances of WebSphere Application Server (WAS) to a set of hypervisors (machines which are running as hosts to one or more virtual machines). The CloudBurst appliance was very slick in its operation, and a big step towards virtualisation and re-use within a datacentre.

Setting up all of this virtualisation (i.e. the hypervisor estate on which to install and un-install the various virtual machines) within an IT shop requires a fair amount of capital expenditure, not to mention time.

The big difference between the CloudBurst session and this session is that the hypervisors are already there, ready to be used in the Amazon EC2 cloud: i.e. the capacity is there already. No capital investment required. No lead time. The labs allowed us to set up and instantiate the AMIs via a web GUI with little fuss. A web-services API is also provided for this, with the relevant (X.509 and other) security mechanisms, and there is a vibrant developer community writing libraries in various languages to invoke these web-services.
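
To give a feel for what that web-services API looks like from code, here is a minimal sketch using the AWS SDK for Java – the SDK itself post-dates this post, and at the time the same calls were made via the raw Query API or community-written libraries; the AMI id, instance type and credentials below are placeholders:

```java
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class ProvisionInstance {
    public static void main(String[] args) {
        // Placeholder credentials -- in practice these come from your AWS account.
        AmazonEC2Client ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Ask EC2 to launch one small instance from a given machine image (AMI).
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-12345678")   // placeholder AMI id
                .withInstanceType("m1.small")
                .withMinCount(1)
                .withMaxCount(1);

        RunInstancesResult result = ec2.runInstances(request);
        for (Instance instance : result.getReservation().getInstances()) {
            System.out.println("Launched instance " + instance.getInstanceId());
        }
        // Billing is by the hour from this point until the instance is terminated.
    }
}
```

Exactly the same request can be scripted to scale a fleet up and down on demand, which is where the pay-by-the-hour model really starts to pay off.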

The AMIs come in a variety of flavours in terms of base operating system – Windows, various Linux distros, OpenSolaris – but the second innovation is that commercial software is also available by the hour, just by paying a bit more on the hourly rate. So an AMI with a ‘free’ OS is cheapest ($0.095/hr at the time of the course, and now $0.085/hr). Windows costs a bit more (currently $0.120/hr). All of the AMIs come with the necessary software to work with the AWS infrastructure – for instance to be able to capture snapshots of themselves (after configuration) for further cloning, or the capability to connect to S3 storage.

IBM is also taking part in that it is offering AMIs with various of its software products. Often the developer version is ‘free’ in that it costs the same as the minimum AMI cost with a bare OS. Then when moving to production the paid-for version is available as a higher cost AMI (say $0.36 /hr for express edition DB2, or $1.25 /hr for the workgroup edition). Similar terms exist for other IBM products (WAS, WebSphere Portal Server, sMash). This is a major step forward, as it allows the complete stack to be paid for on demand. IBM also allows you to bring your existing licenses to the infrastructure running on AWS.

It is worth noting that although IBM is a frontrunner, it is not alone in working with AWS in this way. Microsoft and Oracle also have database instances available, for example.

In summary, by looking at AWS and IBM (so we expect the market to be at least as good as this) we can see that:

  • Cloud IaaS offerings are already here, making the infrastructure offerings of in-house IT look expensive.
  • The management of that infrastructure by the hour, and the ability to pay by the hour (no capital expenditure) is compelling, especially for those organisations which have highly variable load
  • AWS have a good story around data protection, Sarbanes-Oxley and other FUD-inducing regulatory measures
  • Value-added ‘heavy lifting’ like storage and load-balancing is greatly simplifying development and deployment (users do not have to be Cisco and EMC black-belts), thus reducing development costs.
  • There will be a growing number of value add components, e.g. card merchant services, search engine in the cloud – such that it will soon be the case that many large solutions have more components ‘in the cloud’ than they would have ‘in our data-centre’.

So now I am going to stick my neck out and say that within three years there will be no large organisation that does not investigate cloud IaaS for new infrastructure installs (including refresh operations); and within ten years 50% of all large organisations (£100m turnover +) will have at least 50% of their data-centres ‘in the cloud’. The proportion will be higher (say 80%) for smaller organisations.

Just thought I would give you a quick update from my week at TechEd Europe 2009. The main focus of TechEd was “The New Efficiency” – both in terms of power usage and productivity.  Areas covered in the keynote speech were:

  • Windows Server 2008 R2 (virtualisation is also included in this area) -> uses less power
  • Windows System Center and Forefront all offering the ability to save time and increase productivity with control from their control panels
  • Windows 7 -> offering an optimised desktop experience, now faster, with the Ultimate edition offering built-in BitLocker functionality for improved security.  (Touch features are available in Windows 7 – there was a good sprinkling of talk about gestures in several of the sessions that I attended)
  • Exchange Server 2010 -> Forrester Research say the ROI is 6 months. Combined with Outlook 2010 to integrate a voicemail-to-text feature and a way to view your emails in conversation threads.
  • Office tools such as Word, Excel and Outlook available from the browser
  • Cloud computing (Windows Azure Platform) – good for economies of scale, though it will not save everyone money (this was not highlighted in the keynote speech, but it was discussed in the other sessions). Dublin data centre opening on Wednesday 18th November.

Other areas that caught my interest during the week:

  • SharePoint 2010: more integrated with existing tools; you can add documents from Word, Excel etc.  It’s built on version 3.5 of the .NET Framework.  The good: the migration process is easier than 2003 to MOSS 2007 was, as the underlying framework has not changed as much, and there is tight integration with the Office toolset.  The not so good: it requires an upgrade to x64
  • .NET 4.0:  The trend seems to be moving away from SOAP towards REST.  Microsoft do not plan to extend LINQ (though this plays a key part in SP2010, StreamInsight (see SQL Server 2008 R2) etc.) any further and see the Entity Framework as the way forward
  • SQL Server 2008 R2: StreamInsight, the new event-driven application to be shipped with SQL Server, for use with multiple strands of real-time data – it has adapters for input and output and a StreamInsight engine for processing and capturing data. Early adopter McLaren Electronic Systems (official ECU supplier to the FIA Formula 1 World Championship) presented the benefits for their telemetry software (release date interlinked with SP2010)
  • Visual Studio 2010:  There has been a lot of work in the Architecture and Testing tools space.  You can extend the Architecture features yourself if you want, and the Team Edition offers automated testing options for the UI as well.  Silverlight can be used with SharePoint 2010 and mobile devices.
  • Team Foundation Server:  Visual SourceSafe is at the end of its life; time to move to TFS.
  • Mobile -> There were a few sessions on the mobile space, but it was generally quiet on this front (see things to look out for below).  Touch screen and Silverlight integration seemed to be topics that were on everyone’s mind.

 And finally, a few things to look out for in the next week or two:

  • BizTalk ->  announcement expected at the Professional Developers Conference (PDC) 2009 in Los Angeles, which takes place this week (November 16th)
  • Windows Azure Platform -> European pricing to be released this week
  • SharePoint 2010 -> Release date to be announced soon – current expectation is March/April 2010
  • Mobile -> Announcement expected this week on future developments.   

The other night I attended the BCS Roger Needham lecture held at the Royal Society in London – this is an annual event that recognises outstanding research work in the IT world and provides a platform for the author to present. This year it was the turn of Dr Byron Cook from Microsoft Research, speaking on the subject of “Proving that programs eventually do something good”. Byron reminded me of a cross between Jarvis Cocker from Pulp and Smart421’s WebSphere practice manager, and there is a bit of the IT research pop star about him (if such a thing exists!).

My attendance was part of my strategy to follow Guy Kawasaki’s advice – “eat like a bird, poop like an elephant” – although I must admit that beforehand I was pretty sceptical of how useful I would find the event.

So I have to say straight off – the presentation was utterly, utterly fascinating. He was a great speaker – interesting, amusing, great anecdotes and just eccentric enough to be a convincing academic. To be honest, he is actually quite a practical academic – having produced real code solutions that solve real problems and are in use today. I found myself reeled in and attracted to the idea of the purity of academic research – at least until he got started on the maths and the set theory :). The basic message of the research papers he and his colleagues have worked on is that static analysis of code can prove that some programs will eventually terminate (i.e. not hang), and that some other programs will hang under certain conditions, but that this cannot be proven for all programs. The nature of some programs means that it just cannot be proven either way. Ever. Turing worked that out years ago. But his vision is that it will be possible to increase the proportion of programs in the “we definitely know if we have a problem or not” category to a useful level, giving you a kind of automatable 100% test coverage for some programs.

So in the future, as well as just using the static code analysis tools like CheckStyle, lint, FxCop etc that we use today, we could also perform static analysis that will prove (I say again – for some programs) that they will not hang. Impressively he has already demonstrated this by finding some bugs in Windows device driver code (the size of which was up to 30k lines) which had previously gone undiscovered despite being in use in the field. Of course there are many barriers to this becoming mainstream – two of which were that it sounded like you needed pretty huge processing power (so could only be done overnight typically and maybe using a high degree of parallel computing capacity, although fortunately the algorithm does fit parallelism quite nicely) and also that the range of ‘checkable’ programs and data structures is currently quite limited, e.g. today we can handle linked lists but not tree structures.
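
To give a flavour of the kind of loop this reasoning applies to, here is a small hypothetical example (my own, not one from Byron’s talk). No single counter decreases on every path – y can jump back up – yet the loop provably terminates, because the pair (x, y) decreases in lexicographic order on every iteration; constructing that kind of ranking argument automatically is exactly what these tools do:

```java
import java.util.Random;

public class TerminationPuzzle {
    public static void main(String[] args) {
        Random random = new Random();
        int x = 1000, y = 1000;
        while (x > 0 && y > 0) {
            if (random.nextBoolean()) {
                x = x - 1;                  // first component of (x, y) shrinks...
                y = random.nextInt(1000);   // ...even though y may jump back up
            } else {
                y = y - 1;                  // or x stays put and the second component shrinks
            }
        }
        System.out.println("Terminated with x=" + x + ", y=" + y);
    }
}
```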

But Byron gave me hope that static code analysis is not a ‘dead’ area and that advances are being made that can lead us to produce better software in the future. If you want to read more, have a look on the Microsoft research site.

I’ve just been through the interesting process of defining and negotiating my role and responsibilities as CTO in a services company. As part of this process, I naturally turned to the primary research tool at my disposal (a.k.a. Google) and found that there wasn’t a huge amount out there – so I thought I’d do my bit for the greater good and post a precis of the results.

Smart421 is primarily a service company offering integration and managed services, and so the CTO role is necessarily different to what it would be in a product-biased organisation, or in the non-IT sector. Therefore this role definition is heavily skewed towards the important challenges that we face in an ITIL, SOA, SaaS, cloud, storage, enterprise architecture etc. world…

The CTO role is primarily outward facing, being responsible for defining the role that specific technologies and related methods will play in Smart421’s future growth and where technology and products and services related to it can contribute to providing a competitive advantage. Rather than being the senior technologist in the company, the CTO is the management team member with the responsibility for providing sound advice on business decisions relating to technology.

The function of the Chief Technology Officer is to:

  • Monitor and assess new technologies, tools and methodologies relevant to our current and future propositions
  • Instigate and oversee research activities
  • Using this input, maintain our technology strategy and ensure its alignment to business strategy
  • Implement this strategy by agreeing actions to be taken within other areas of the company
  • Lead the technology practices
  • Support sales and marketing activities
  • Provide reliable technical assessment of potential mergers and acquisitions
  • Promote Smart421 within the external technology community

PS Thanks are due to Roger Smith, whose article from his time at Titan Corporation I did find on the web…
