The other day I was hearing all about the Phoenix open source project, which has a great strapline of “we put the SQL back in NoSQL”. It’s a SQL skin over HBase, provided as a JDBC driver. It came out of Salesforce.com and has been proposed as an Apache Incubator project, with the vote started yesterday (Thursday 5th December 2013).

What I find ironic about the whole SQL/NoSQL thing is how much energy is being put into “SQL-ising” NoSQL datastores. Hive does it, as does Impala from Cloudera, and now Phoenix on top of HBase. Whilst being really impressive – and I mean that – Phoenix currently has some limitations that bring home the maturity of the SQL/relational database world, such as the need to define your tables in the right order in your joins to optimise performance – something SQL query optimisers have laughed in the face of for years.

One really nice feature of Phoenix is its support for secondary indexes: under the covers it creates and maintains a separate HBase table, but transparently uses it to avoid table scans when it can – something HBase developers have been laboriously hand-cranking for a while.

Also, it provides query plans so you can understand what’s going on. In the relational world the query optimisers are so good these days that SQL developers can often be pretty slapdash in writing queries and still get good performance characteristics, at least up to a certain level of scale – you are abstracted away from a lot of the underlying complexity, and so can be more productive. Of course there is no substitute for understanding what is really going on under the hood, but in the “SQL on NoSQL” world you really do need to understand the gory tuning nuts and bolts of the underlying NoSQL datastore, or else you’re going to be in trouble.
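The secondary index and query plan ideas are easy to see in miniature. This is only an illustrative sketch – it uses SQLite standing in for Phoenix/HBase, since the principle is the same: create a secondary index and the plan flips from a full scan to an index lookup.

```python
import sqlite3

# Illustrative only: SQLite standing in for Phoenix/HBase. Phoenix's own
# CREATE INDEX syntax is similar, but it maintains the index as a separate
# HBase table under the covers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether the engine will scan or use an index;
    # the last column of each row is the human-readable detail string
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer = 'acme'"
before = plan(query)   # no index on customer yet, so this reports a full scan

conn.execute("CREATE INDEX idx_customer ON orders (customer)")
after = plan(query)    # now reports a search using idx_customer

print(before)
print(after)
```

Phoenix’s EXPLAIN output looks different (it talks in terms of HBase range scans), but the before/after shape of the exercise is the same.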

The origins of Phoenix are compelling – Salesforce.com needed to store potentially millions of data items across many thousands of customers, and adopted HBase to deal with that scale. But HBase is fundamentally batch-oriented, and they needed to support low-latency web applications per customer. The key driver for SQL-like interfaces that you hear repeated across all these NoSQL datastores, though, is that well… everyone just knows SQL.

It is the lingua franca of data queries, and for most use cases, broad adoption by your developer community (even inside a very tech-savvy company like Facebook) is worth a heck of a lot more than the last 1% of NoSQL tuning that you might be able to squeeze out using a guru and the HBase API. SQL has proved to be very flexible across a wide range of data models – and although the NoSQL community’s use of it has introduced lots of extensions, it’s not as if the relational database vendors didn’t do exactly the same, is it?

Long live SQL!

ForgeRock OpenAM version 11.0 highlights: PowerPoint presentation slide, 15 October 2013, by John Barco, VP Product Management at ForgeRock.
Photo: Kirsten Hurley

Last week I had the pleasure of attending the ForgeRock Open Identity Stack Summit: Europe, in a chateau on the outskirts of Paris. The event was sold out but, keen not to turn away delegates, the ForgeRock team valiantly gave up their own rooms to the extra attendees and stayed in a hotel half an hour away!

There was a real mix of attendees, from the technical community to procurement executives, and of course ForgeRock’s partner community. All were eager to hear product updates and to listen to industry experts such as Eve Maler (Principal Analyst at Forrester) on the trends they are witnessing across the world. Eve’s keynote on modern IAM trends certainly caused a stir with her statement that XACML (eXtensible Access Control Markup Language) is now in clear decline, and that authorisation standards need to get finer-grained still. UMA (User-Managed Access) is the one to watch, apparently…

A lot of the messaging revolved around how IAM (Identity and Access Management) is moving to ‘IRM’ (Identity Relationship Management). This is largely driven by the shift from enterprise scale to internet scale – an inescapable requirement for a user base that is no longer restricted to employees, but includes partners and customers too, accessing not just from on premise but from the internet and the cloud. And that’s without even mentioning the number of devices each individual expects to be able to access your systems with!

It was also apparent why ForgeRock had taken the radical step to rebrand the latest upgrade from v10.2 to v11 when the new features were revealed (see photo below). ForgeRock has so rapidly developed an already market-leading product to become even simpler to integrate and deploy that the changes certainly justified the leap in nomenclature.

Finally, I cannot sign off without mentioning CEO Mike Ellis’s big announcement of the event – Salesforce and ForgeRock announced a partnership which will see the SaaS vendor integrate the open source software into its Identity Connect product.

If there is anyone out there who still wonders whether open source technology really has a place in the enterprise, surely the news that one of the world’s largest technology vendors sees fit to partner with ForgeRock confirms its position in the IAM (or IRM!) market?!


Marc Benioff, CEO of Salesforce, with Michael Dell, CEO of Dell Computer

This week (14 Sept) saw an influx of over 6,000 people to London for the Cloudforce conference at the Royal Festival Hall on the South Bank. It was the annual shindig for fans of Salesforce.com, the game-changing customer relationship management system hosted in the cloud.

Unlike at previous Cloudforce events, it was clear on arrival that the Salesforce ecosystem had grown substantially. Elizabeth Decker, Director of Partner Marketing, had obviously been working overtime to assemble a noticeably bigger raft of partners, some specialising in apps development, some in implementation (among them some ex-Smarties). One observation was the very prominent role taken by Accenture (diamond sponsor of the event – which must have set them back a few quid); they featured a great case study on Schneider Electric in their 1.00pm breakout session, “The Cloud Enabled Business”.

But of course, everyone had come to see Salesforce CEO Marc Benioff take to the stage in one of his now famous, charismatic keynotes. Although I did overhear some grumbles that the content was a clone of his keynote at the recent Dreamforce event in San Francisco (itself something of a record-breaker, with over 45,000 attendees), he didn’t seem to disappoint his London audience.

The Social Enterprise

Marc Benioff's vision of the Social Enterprise

Benioff’s new mantra was the “social enterprise”. Having launched “Chatter” just over a year ago, Benioff said he no longer speaks of Cloud 2.0. It seems he had a eureka moment, somewhere after visiting people like Mark Zuckerberg (founder of Facebook), that the successful enterprise is the connected enterprise. From an article in Forbes magazine (Barrett, 2011), it seems that Benioff instructed half his development team to stop what they were doing and inject social media functionality into Salesforce.

He asserted that a platform for the social enterprise is truly transformational; Benioff was quick to remind the audience of the role of social media in changing the political landscape – the “power to bring down dictators in countries and dictators in companies”, he squawked.

“It’s not about soft[ware] power or hard[ware] power but social power”. Strong stuff indeed. Maybe you disagree?

At Cloudforce, Benioff shared the limelight with Michael Dell, CEO of Dell Computer (morning keynote and afternoon keynote interview, 14 Sept). They seemed very at ease and spontaneous. See my video clip here.

Marc Benioff, CEO of Salesforce.com, at 2011 Cloudforce

Another guest was Angela Ahrendts, CEO of Burberry, the British fashion house, who was very vocal in her support for how Salesforce had helped her revolutionise her business whilst not sacrificing brand values.

The growth of Salesforce has been nothing short of staggering. And they’re not done yet: acquisitions such as Heroku and Radian6 are definitely technologies to watch.

Don’t forget to post a comment and I’ll let Marc know what you think…

If you weren’t there, or were there and want to relive the experience, then go online to the recorded video on the Salesforce website.

References

Barrett, V. (2011) ‘Mister Disrupter’, Forbes, 8 August, p. 84.


Amazon Web Services’ announcement yesterday of their AWS Elastic Beanstalk offering was a significant step in their inexorable movement up the stack from IaaS towards PaaS (platform as a service), offering a Java PaaS based upon an Apache/Tomcat/Linux stack.

At the same time, the SaaS players are moving in the opposite direction, as demonstrated by Salesforce.com’s recent acquisition of Heroku, the Ruby PaaS provider.

Up until now, I’d have argued that PaaS has been the least mature of all the “aaS” siblings, with significant risks of lock-in and vendor failure, but this is rapidly changing. PaaS will become the main battleground between all the vendors, as the SaaS players make their offerings more and more configurable and flexible, and the IaaS vendors try to simplify their typically quite techy offerings for a wider audience.

There’s good further material on this topic on Krishnan Subramanian’s blog.

Salesforce.com are generally always quoted as the definitive success story in the SaaS space. They’ve grown their offering over the years, starting with sales force automation, moving into broader CRM, contact centre management, PaaS with Force.com, collaboration with features such as Chatter and now Java application hosting via VMForce.

Their revenue growth is quite amazing, as shown by the following graph. In terms of pure revenue, they are massive when compared with some of the other leading SaaS vendors.

SaaS Vendor Revenues

* Note: Some of the 2010 data is based on trailing twelve months

What is even more interesting though is to look at the company revenue growth with respect to a common date in the past, e.g. 2006.

Normalised SaaS Vendor Revenues

Here you can see that SuccessFactors has actually grown faster than Salesforce.com over this period, and NetSuite has tailed off somewhat (although it is still growing at circa 10% per annum – a very healthy rate that many companies would kill for; it is just the growth rates of the other two vendors that make it look a bit ordinary!).
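The normalisation behind the second graph is simple: index each vendor’s revenue to a common base year so growth rates can be compared directly, regardless of absolute size. A minimal sketch – the vendor names and figures below are made-up illustrative numbers, not the actual data from the charts:

```python
def normalise(revenues, base_year=2006):
    """Rebase a {year: revenue} series so the base year equals 1.0."""
    base = revenues[base_year]
    return {year: rev / base for year, rev in sorted(revenues.items())}

# Hypothetical figures purely to show the effect, not real vendor data
vendors = {
    "BigSaaSCo":   {2006: 100.0, 2008: 250.0, 2010: 500.0},
    "SmallSaaSCo": {2006: 10.0,  2008: 40.0,  2010: 80.0},
}

for name, series in vendors.items():
    # The smaller vendor shows 8x growth vs 5x despite far lower absolute
    # revenue - exactly the effect the normalised chart exposes
    print(name, normalise(series))
```

This is why a vendor can look ordinary on the raw revenue chart and impressive on the normalised one, or vice versa.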

Of course this data needs to be reviewed with some caution. Salesforce.com are much bigger and therefore have a diminishing market left to capture, whereas the other players are smaller fish in their respective ponds. It certainly looks like Salesforce.com got off to a cracking head start though! Also, it’s not all about revenue – I could have looked at number of subscribers, or profit, etc. Vendors have been in a “land grab” phase and so will have been sacrificing revenue to some extent (and profit for sure) in order to gain market share, and to maximise longer-term revenue opportunities and market capitalisation.

So why is Salesforce.com adoption faster than that for other SaaS vendors? Well, for sure their execution over the years has been excellent – they’ve obviously got the pricing right and kept it right, and carried on adding functionality despite starting from a very low base. Whilst SaaS vendors are generally thought of as young upstarts, Salesforce.com have been around since March 1999, and so they’ve had a long time to continually add to and refine their product offering.

But I would suggest another reason beyond excellent execution of their proposition in the market – I think they picked one of the best business processes to address first: sales. Why? Well…

  • Sales staff just don’t like keying in data about their opportunities, prospects etc – they’d rather be out on the road selling than doing “admin”. This was recently described to me as “attention deficit disorder”… :)
  • The basics of the sales business process are relatively straightforward and the same across different vertical markets, i.e. capture prospects, generate opportunities, convert to business.
  • The sales process has relatively few immediate integration needs – it makes sense “on its own” without integrating it with other processes/systems, at least in the early days of usage

SuccessFactors chose Human Capital Management (HCM) as the business processes to which they would apply the SaaS model. Again, I think HCM is a good process to select for a SaaS startup, as the business process can function in a relatively standalone manner, e.g. you could live without integration with payroll, etc in the early days of a “trial” deployment.

But what about NetSuite? They selected Enterprise Resource Planning (ERP)/Financials as their target SaaS-ified business process. ERP processes are typically quite likely to require tailoring to specific vertical markets, and both ERP and Finance processes are also likely to need integration with existing customer systems as a “day one” deployment requirement.

So, the lesson to learn from this is that whilst execution is all important in the delivery of a SaaS offering, the fundamental business processes that a vendor decides to target are a significant deciding factor that influences the size of market you can address, how successful you can be in it, and how quickly.

Continuing on from my previous post… the second session I attended at Cloudforce 2010 covered Force.com. Being familiar with the PaaS offering already, I was more interested in drilling into some of the uses to which it was being put, its pros and cons, etc.

The sweet spot for Force.com was described as relational data “heavy”, process centric applications. This includes the classic Lotus Notes app replacement problem that many organisations suffer from, and also replacement for (typically business-built) Excel and Access applications. Afterwards I had a chat with one of the presenters and asked the obvious counter question – for what would it not be a good fit? The sour spot (!?) for Force.com would be an application that has very few users, performs heavy computational processing and manipulates large unstructured data (PDFs, images etc). For these latter requirements you are better going down the AWS EC2/S3-style IaaS route, although potentially Force.com could still be used for any relational data management aspects and combined with AWS services (for example). I find the idea of this ‘architectural mashup’ (my words, not theirs) quite appealing – it feels like the next logical evolutionary step after “integration at the glass”, but at the same time it also feels like the increase in moving parts and dependencies in your architecture is heading in the wrong direction, i.e. more complex.

From a competitive positioning point of view, what was interesting was that Salesforce.com focused their arguments on tackling Java and .NET development, rather than discussing how they were or were not better than other PaaS providers – so to a large extent they feel they are still in the “educating the market” phase and of course as market leaders I guess they don’t feel they need to defend their offering against more direct competitors. Quite a bit of time was spent discussing industry analysis reports comparing development speed and cost between Force.com and Java/.NET – which really reminded me of the same debates between Java and .NET in years gone by. The claim made was that in general Force.com is 5x faster, and half the cost. The flip side of that argument was not really discussed, i.e. as Force.com abstracts you away more from the underlying code (80% of the work is supposed to be “point and click”) then there must be some inherent restrictions that you could avoid in Java/.NET – it’s the classic 3GL vs 4GL debate all over again but with a cloud deployment model thrown in. My conclusion is use the right tool for the right job, it’s not either/or, and Salesforce.com’s answer would be to mix and match.

Finally, I discussed lock-in and governance concerns with the presenter afterwards. Salesforce.com are evolving and enhancing their product offerings at an amazing rate and clearly expanding out to target more and more of the enterprise IT footprint. As the post-Internet world continues to centre around fewer and fewer mega-brands, there has to be a concern with committing more and more of your IT estate to any one vendor. In addition, I have real concerns about the governance of internal Force.com implementations. Whilst there is no reason that good development lifecycle processes cannot be applied to a Force.com development, the whole point is that more of the “point and click” can be performed by users closer to the business, and so organisations will need to be very vigilant to prevent utter change management chaos, e.g. changes to apps on the fly, redundant/duplicate data being created and never tidied up, etc. Are we at the dawn of a fresh wave of data quality and integration nightmares? I regularly see how difficult our customers find it to control change in “classic” software developments on mainframes etc, with the stronger, more vocal change agents causing the introduction of architecturally suspect changes due to insufficient or ineffective governance controls. The lowered barriers to entry of a PaaS offering such as Force.com are like pouring petrol on this fire, I suspect. We’ll see if I’m right… although I accept that predicting train wrecks in the software development world is not exactly a genius insight!

Yesterday I attended Cloudforce 2010: London – which had been moved from Tuesday to Wednesday to avoid the tube strike. This meant I could only make two of the afternoon sessions due to a prior customer engagement, and I must admit that when I got there at about 2pm I was a little bit grumpy about it all. I’d missed the Marc Benioff keynote earlier in the day, but all was not lost as I caught up with him later on as he walked around the exhibition – see the photo attached. At least I think it was him…

I’ve been a lazy blogger recently due to some client pressures, so I just wanted to capture some of the key interesting points from the two sessions I attended about Chatter and Force.com – not so much about the technologies themselves but more about what they mean for the industry.

Having heard about Chatter at a previous industry cloud event when it was in beta, I must admit to seeing it as a bit of “fluff” at the time – as it was positioned as “Facebook for the enterprise”. Having now had some demos, a few things struck me. First of all, this is really an attempt by Salesforce.com to break out of existing limited user communities in a single customer. If an organisation with 10,000 employees only has 1,000 Salesforce.com seats (e.g. access is given to Sales, Marketing and Customer Services only), then Chatter is the thin end of the wedge to grow the number of licensed seats – because if the power is that everyone can collaborate, then only having 10% of your organisation able to contribute doesn’t make much sense. They do offer a Chatter-only license model also to allow greater organisational coverage, and it’ll be interesting to see what the take-up of this model is.

Secondly, for existing Salesforce.com seats, there is no additional license cost, which is a bit of genius to support my first point. In fact, it is heading towards being just part of the base functionality of the platform and enabled by default in the future it seems, so in some ways if it had been part of the platform all along there would have been less fanfare about it.

Also, it struck me that Chatter is a potential competitor to SharePoint (and similar products). I felt there was a significant overlap in their functional footprints – with either approach you can log personal info/activity and also track changes on objects – be they people, documents, discussion groups etc – and receive email alerts or see a summary on a web page. But with SharePoint this requires some effort to configure – putting an alerts summary widget on your home page, for example. With Salesforce.com this is all “out of the box”, which may be great or annoying depending on your point of view. And as Chatter is tightly integrated with the business process that the user is following, e.g. working on a sales opportunity, it is natural to want to track its progress – whereas in SharePoint this is all possible but again would require some configuration effort, e.g. to build a custom sales workflow process. It is a bit of an unfair comparison in some ways as they are different products, but it made me wonder how Chatter is viewed in Redmond.

And finally, whilst both the customers who spoke on stage said that this was not a problem for their deployments, I can see that in some organisations the “noise” generated by Chatter would be really annoying and distracting. I do not agree with the presenter’s view that the “chattiest” person is therefore the most productive and should be recognised as such in an organisation. We live in an increasingly interrupt-driven world, it is very easy to suffer from productivity sinks, and this can be another one. I guess it is just a tool and, like all tools, it needs to be used appropriately.

Today I attended this Mimecast event at Olympia with Smart421’s Managing Director. I must confess that I’ve been to enough events like this now to have become a little jaded by the repetition and the vendor sales pitches – I guess it wouldn’t be much different if I was presenting :), but you know what I mean… However, I am also constantly surprised by the fact that even the apparently most dull presentation usually contains a few gems that get me reaching for my notebook. Here are some of those notes…

Following the intro, the keynote was by Phillip Dunne from Salesforce.com – which was essentially a sales pitch. What struck me about this material was how Phillip referred several times to the “success” of cloud computing, whereas I was thinking that what Salesforce.com have done so well is provided a great product with the right features at an attractive price, and kept maturing and enhancing that product in a very agile manner. Now of course this has been enabled by the underlying IaaS platform, but it’s the appropriateness of their SaaS offering that’s given them the edge – if the product hadn’t made the grade then would cloud computing be a failure? Of course not. The pace of their functionality growth is so breathtaking it is almost alarming, with Chatter (a kind of enterprise social networking capability) in beta and inroads being made into contact centres. I’m guessing that there are some very nervous software vendors out there, because you certainly don’t want these guys chasing your market.

Amongst other things, Andy Steven from VMware described Project Redwood, an interesting initiative to agree open standards across the industry for VMs – the OVF standard from the DMTF, but also API standards around moving VMs between CSPs and manipulating them. I guess this is a defensive move to counter the de facto AMI standard from Amazon Web Services. He also explained VMware’s strategy around the acquisition of SpringSource; his basic message was that they wanted to have some skin in the game at all the layers – IaaS (VMware), PaaS (Spring) and SaaS (I forget what they’ve acquired in this space). I thought this was an interesting message – he was saying that they had no ambition to become massive SaaS leaders, for example, but they wanted to have real exposure to it, presumably in order to really understand where to take VMware ESX etc. He also implied that they would add some VMware-specific cleverness in the Spring world so it is VMware-aware and able to influence dynamic movement of workloads via vMotion etc.

There was a stream of workshops running in parallel with the main conference sessions, and the “interesting workshop of the day” award has to go to ex-ethical hacker Jason Hart of Cryptocard for his explanation and demo of the insecurity of username/password-based authentication strategies, leading to the conclusion that two-factor authentication is the only way to go. He showed some sites that had just been hacked on the web, usually due to political motivations, and also showed some hacking tools and approaches (technical and human-engineered) that can easily be used to capture usernames and passwords. Fascinating. And scary – but I guess that was the idea…

Some other observations that I picked out…

  • It’s a generalisation, but I detected that the main driver for cloud computing adoption (and most people meant IaaS when they said this) was agility and speed to market rather than cost savings. Cost savings were a consideration but not the primary one.
  • Chris Rae from CA talked about the likely expectations of Gen Z users when they hit the workplace in a few years’ time, and reinforced my view that the world is going “interruption-mad”, with a real desire from users to have several concurrent interruption-based interactions going on, e.g. Facebook + Twitter + texting + IM at the same time. This reminds me a little of the challenge from mainframe-based colleagues in years gone by as to “why do I need to use a mouse?”. You just do – right…? In the same way, for good or bad, the world is changing and this interruption-based, reactive working pattern is coming and needs to be embraced in the workplace.
  • It is not surprising but interesting to see how dominated the IaaS space is by Amazon Web Services/Rackspace. Other market entrants benchmark themselves against AWS and this completely drives how they position themselves in the market.
  • Mimecast’s take on cloud-based email services is interesting, in that rather than offer the typical “hosted Exchange” model (though they do offer this via partnerships) they offer a hybrid model where Exchange stays on premise and they provide services integrated with it for archiving, eDiscovery, DR etc.
  • Private vs public cloud – the classic debate. I’ll blog about this some other time as there’s quite a lot to say – but for now, suffice to say that at the event the broad agreement was that private clouds are the next step in the enterprise journey to cloud adoption, regardless of whether that appears to be sensible or not – and there are a number of drivers causing this.

Top analogies of the day

You can’t talk about cloud without some analogies – it appears to be an unwritten rule. Apart from the usual electricity supply one that always gets a look in, my favourites were (both from Phillip of Salesforce.com):

  • Analogy #1 – Comparing multi-tenancy with an office block, where each company has their own “private space” but shares the lifts, fire prevention systems, front desk reception etc.
  • Analogy #2 – Comparing TCO for infrastructure to an iceberg, with the obvious costs above the waterline being things like software licenses etc, when in fact most of the costs are much less visible such as staffing, data centre space, insurance, power, cooling etc.

Top PowerPoint images of the day

You’ve got to love PowerPoint presentations haven’t you? One presenter (somehow) managed to weave in a photo of fish and chips into his presentation, and I’m still not sure how. But my favourite image was from Magic Software who used a piggy bank squeezed in the jaws of a vice to represent the pressure on infrastructure costs – genius!
