Vegas baby!

I’ve survived my first full day in Vegas at AWS re:Invent, the annual Amazon Web Services shindig, although I must admit to being jet-lagged to hell. Handily, nothing ever shuts down here, so waking up at 2am is not a problem :)

The first day was dedicated to the AWS Partner Network (APN) Summit, and the #1 highlight had to be the announcement that Smart421 have been awarded Premier Consulting Partner status – one of only three partners in the EMEA region to be recognised in this way. This is the highest partner level there is globally, and it makes me really proud of what our internal AWS Practice have achieved over our four-year journey with AWS – this is not something that AWS give to any old partner! It’s recognition of the great customer case studies and deployments that we’ve jointly undertaken with AWS, and of the investment in deep skills that we’ve made.

The sheer scale of re:Invent is pretty staggering. The venue (The Venetian) is enormous, the rooms are massive, and there’s a huge number of attendees with a very wide variety of interests – enterprise, gaming, HPC, start-ups etc. As I was at the APN Summit all day, which was on its own floor, this didn’t really hit me until I went to the Expo part of the event at the end of the day – where 180+ different vendors were touting their wares. It was a struggle even to walk through the room as it was so busy – although copious amounts of food and alcohol probably helped drive attendance :).

Here are a couple of other takeaways from the APN Summit yesterday:

  • AWS have just updated the quote that they use to demonstrate the rate at which they are scaling their infrastructure. Anyone familiar with AWS will probably have heard before that one way of quantifying their rate of infrastructure growth is to compare it with the number of servers etc. needed to run the Amazon.com retail business at some point in the past. Well, AWS have carried on growing, and so this comparison metric has had to be updated: they are now adding enough server capacity every day to power Amazon.com when it was a $7bn business – which is quite an incredible statement really. Cloud computing is indeed a scale game…
  • One of the big push areas from AWS is driving use of AWS to host specific packages such as Microsoft Exchange, various Oracle business products (i.e. not just the technology components such as database, middleware etc.), SAP, Microsoft SharePoint and so on. Hence yesterday they announced some new partner competencies for some of these products. Personally I don’t quite get this – in my view, the cloud model is not so compelling for these kinds of IT workloads, as they tend to be very “steady state” in nature rather than particularly peaky, and if they are somewhat peaky then you’ve usually got to have a resilient pair running all the time anyway, so the options for scaling down are limited. There’s a myriad of companies out there offering very price-competitive hosted Exchange and SharePoint models (like our sister company, in fact), and they can exploit multi-tenancy across customers to drive a really low price point. Office365 (which wraps in Exchange and SharePoint with other stuff) is also the direction of travel for many enterprise customers. Having said all that, AWS are obviously seeing traction for these more enterprise-level software deployments, otherwise they wouldn’t be aligning their partner model to it – and they are clearly not dummies, given that they are giving the rest of the IaaS/PaaS market a bit of a hiding.
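To make the “limited scaling down” argument concrete, here is some back-of-envelope utilisation maths – all rates, instance counts and burst percentages are hypothetical, purely for illustration:

```java
// Hypothetical numbers illustrating why elasticity pays off for peaky
// workloads but buys little for steady-state ones like Exchange or SharePoint.
public class WorkloadEconomics {
    static final double HOURLY_RATE = 0.50;   // made-up $/hour per instance
    static final int HOURS_PER_MONTH = 720;

    // Peaky workload, provisioned for peak all month: 10 instances, 24/7.
    static double peakyFixedCost() {
        return 10 * HOURLY_RATE * HOURS_PER_MONTH;
    }

    // Peaky workload, elastic: 2 baseline instances, bursting to 10
    // (i.e. 8 extra) for 5% of the month.
    static double peakyElasticCost() {
        double baseline = 2 * HOURLY_RATE * HOURS_PER_MONTH;
        double burst = 8 * HOURLY_RATE * (HOURS_PER_MONTH * 0.05);
        return baseline + burst;
    }

    // Steady-state workload: a resilient pair running 24/7 regardless,
    // so elastic and fixed provisioning cost the same.
    static double steadyCost() {
        return 2 * HOURLY_RATE * HOURS_PER_MONTH;
    }

    public static void main(String[] args) {
        System.out.printf("peaky, fixed:   $%.2f%n", peakyFixedCost());
        System.out.printf("peaky, elastic: $%.2f%n", peakyElasticCost());
        System.out.printf("steady:         $%.2f%n", steadyCost());
    }
}
```

With these made-up numbers the elastic peaky workload costs $864 against $3,600 for peak provisioning, a saving of around 76%, whereas the steady resilient pair costs $720 either way – which is the nub of the argument above.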

Today we have the opening keynote from Andy Jassy, and then we get into the more nitty-gritty technical sessions…

After the polished video intro (screen shot below and catch the full keynote here) Brad Anderson (Corporate VP at Microsoft) kicked off TechEd Europe here in Madrid by highlighting the opportunities that we have available to us in IT at the moment. He talked about innovation, helping customers differentiate themselves and how Microsoft aim to make others great, so whilst I definitely agree with him, it was the resonance with so much of what we do at Smart that I really liked.

Four areas of innovation were discussed: People-Centric IT, Building Better Apps for People, Data, and the Cloud Platform. Aligning to these were new product release announcements including SQL Server 2014 CTP, VS 2013 and TFS 2013 CTP, Windows Server 2012 R2 and System Center 2012 R2. Better still, a partnership with Oracle was announced, which means Oracle is fully supported on Azure and Hyper-V (and, from what I gathered, Oracle images will be ready to spin up in Azure).

One of the best parts (and I tweeted at the time – @wanty1975 btw) was a video of one of our customers (Aviva) referenced as an example of mobile and Azure development combining to provide an innovative solution for their customers. Why was this so good? Well, partly because my old boss was in the video… but more so because seeing real-world examples, and ones close to home too, helps inspire me and makes me realise the opportunities we have available to us.

So, a brief list of new features mentioned in the keynote that you should be aware of:

  • BYOD – two-factor authentication making it easy to add devices (a MS Surface in the demo) to AD domains using Windows 8.1 (not sure what the experience would be like for other devices though). The simple process means files are sync’d, permissions granted to corporate data and Line of Business (LoB) applications downloaded onto the device, accessible via a company portal-like homepage.
  • VS and TFS 2013 – ALM benefits throughout the development lifecycle, but I really liked InRelease (following a recent acquisition by Microsoft) which provides workflow and approval mechanisms for moving builds through environments. I could see this really helping to take the pain away from tracking builds through environments and ensuring everyone knows what versions are where. From a dev perspective I also thought the new CodeLens and Code Map features looked really helpful. These come under the banner of Situational Awareness and are designed to reduce the time and effort needed by developers when context switching. CodeLens integrates with TFS 2013 to provide details within the code base, such as tests passing/failing, who last checked the code in and any recent check-ins. It helps developers understand who has done what, where and why, much more simply than the previous version/compare routine that we’ve all experienced. Code Map provides a diagrammatic view of code (similar to architecture modelling tools), but this can be viewed at runtime in debug mode, like a call stack, helping developers familiarise themselves with code much more quickly and easily.
  • SQL Server 2014 – some huge performance gains achieved through in-memory OLTP and row-based access rather than page-based, thereby reducing locking issues and increasing performance. SQL will suggest which tables are ‘hot’ and recommend moving them into memory (I’m assuming they’re persisted to disk fairly frequently!). I also picked up on the ability to replicate an existing on-prem DB to Azure, to provide HA and DR via a right-click menu option… didn’t see or hear any more evidence of this but it sounds like a great feature. Also HDInsight and Hadoop linkage to Excel to provide visualisations and insight into Big Data.
  • Windows Server and System Center 2012 R2 – providing the tools to make Hybrid cloud simple and easy, with tooling to help sync data, create replicas for DR purposes and significant performance benefits for virtualised machines.

So there are obviously things that I’ve missed and lots more to talk about but hopefully this represents a fair summary. I’ll also be tweeting any further nuggets throughout the week.

Being relatively new to the IT profession, I wasn’t really too sure what to expect from JAX.  Over the years, I’ve been to plenty of other non-IT conferences; some of them great, some of them awful.
However, JAX 2012 did not disappoint.
Located in the depths of Park Plaza, Victoria, the venue was pretty ideal.  Spacious conference rooms that easily accommodated any interested attendees (and then some) with plenty of communal seating areas to mingle with the other conference-goers.  Plus, it’s directly opposite a Pret and, as any regular London commuter will tell you, that is no bad thing.  The event staff were very efficient and helpful and there wasn’t a single hiccup in organization over the entire two days, which is a huge indication of the amount of effort that went into the planning of such a big event.
After a brief welcome by one of the conference’s organizers, the first session was a keynote by Brian Goetz (@BrianGoetz), a language architect from Oracle, centered around Java 8 and, in particular, lambda expressions.  It was a really eye-opening insight into the evolutionary forces in play that drive Java forward as a language and a technology, adapting to the fast-moving technological landscape whilst, at the same time, maintaining backwards compatibility with legacy code.  The new features of the language, due for release in Summer ’13, have definitely captured my attention.  There are definitely a lot of features ‘borrowed’ from Scala, in particular the use of parallelism in iterating over Collections, but these can only work to Java’s advantage.  Overall, a fantastic talk that has genuinely excited me over the future of Java.
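To make the lambda point concrete, here is a small sketch of the style of code the talk previewed (written against the Java 8 stream API as it eventually shipped; the example itself is mine, not from the keynote):

```java
import java.util.Arrays;
import java.util.List;

public class LambdaSketch {
    // Sum of the squares of the even numbers, using lambdas and streams.
    static int sumOfEvenSquares(List<Integer> numbers) {
        return numbers.stream()
                .filter(n -> n % 2 == 0)   // lambda instead of an anonymous inner class
                .mapToInt(n -> n * n)
                .sum();
    }

    // The same pipeline evaluated in parallel – the Scala-inspired
    // parallel-collections style the talk highlighted.
    static int sumOfEvenSquaresParallel(List<Integer> numbers) {
        return numbers.parallelStream()
                .filter(n -> n % 2 == 0)
                .mapToInt(n -> n * n)
                .sum();
    }

    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5, 6);
        System.out.println(sumOfEvenSquares(nums));         // 56
        System.out.println(sumOfEvenSquaresParallel(nums)); // 56
    }
}
```

Switching from `stream()` to `parallelStream()` is the whole cost of parallelising the iteration, which is exactly the kind of borrowing from Scala the keynote was pointing at.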
Next up, a live coding session with Sandro Mancuso (@sandromancuso) as he showed us his philosophy on testing and refactoring legacy code.  Whilst endorsing the traditional test-driven development (TDD) approach and the mantra of always leaving the code in a better state than you found it, quite a few of the hints and tips that he gave, such as starting testing from the shortest branch of code and refactoring from the deepest branch, are small changes that can make working with legacy code that much easier.  A useful session overall.
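A tiny illustration of the shortest-branch tip (the method and all the names here are made up for illustration, not from the session): pin down the guard clause with a characterisation test first, then work towards the deeper logic.

```java
public class LegacyPricer {
    // An imaginary "legacy" method with branches of different depths.
    static int discountedPence(int pencePerUnit, int quantity) {
        if (quantity <= 0) {
            return 0;                          // shortest branch: test this first
        }
        int total = pencePerUnit * quantity;   // deeper logic: refactor from here
        if (quantity >= 10) {
            total = total * 9 / 10;            // 10% bulk discount
        }
        return total;
    }

    public static void main(String[] args) {
        // Characterisation tests, shortest branch first
        System.out.println(discountedPence(500, 0));  // 0
        System.out.println(discountedPence(500, 2));  // 1000
        System.out.println(discountedPence(500, 10)); // 4500
    }
}
```

The guard clause is trivially cheap to test, so it gives you a safety net quickly before you start reshaping the deepest (and riskiest) branch.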
I’d never really had much exposure to the Big Data world before so I jumped at the opportunity to attend a few of the Big Data lectures available.  First up was a lecture on Apache Cassandra by Tom Wilkie (@tom_wilkie).  Amongst the frequent plugging of Acunu, a reporting tool that sits on top of Cassandra, it gave me a decent understanding of the way that Cassandra works, and of the advantages, such as providing real-time analytics, and the disadvantages, such as having to declare your queries up-front, that the technology has to offer.
The second Big Data lecture I attended was regarding Hadoop and its evolution.  Similarly to the previous talk, this gave me a good basic understanding of the workings of Hadoop and how it has progressed over the years.   Very interesting indeed.
Having worked on mainly Agile Scrum projects at Smart421, I was quite keen to attend Roman Pichler’s (@romanpichler) lecture, entitled ‘Choosing the Right Agile Innovation Practices: Scrum vs. Kanban vs. Lean Startup’.  It was really interesting to see Roman’s take on when each of these agile processes is relevant depending on the level of risk and uncertainty in the product.  Having never had any real exposure to Kanban and Lean Startup, this was a great session to get more of a feel for these paradigms.
Next up was Heiko Seeberger’s (@hseeberger) ‘Scala In Action’ talk.  I was very excited about this talk because I’ve heard great things about Scala and had been meaning to experiment with it for a while.  I was not disappointed.  This live coding session showed the basics of the Scala programming language, as well as its interoperability with Java projects.  Whilst the syntax may take a bit of getting used to for existing Java developers, the readability of the code drastically increases, condensing what may take several lines of Java code into a single line of Scala.  As well as this, there is a plethora of features, such as parallel collections and lazy values, that makes this language very appealing.  A fantastic presentation and I’ll definitely be coding some Scala of my own very soon.
The second of Roman Pichler’s lectures I attended was about his concept of the Product Canvas.  The Product Canvas, in a nutshell, is a document for a new piece of product functionality that captures the stakeholder’s needs, user journeys and UX design, constraints and developer stories all in one document.  It was suggested that, once created, this document is displayed for anyone in the business to see.  This is perhaps quite handy for ensuring that all parts of the business are, quite literally in this case, on the same page, but in reality I’m not sure if this would really be feasible in a fast-moving Scrum team where the requirements are constantly changing.  Definitely an interesting concept though, and it provides much food for thought on how development teams can work more closely with the rest of the business.
‘Rock Arts and Rockets’ was the title of the next keynote, from IBM’s Steve Poole.  This talk was a captivating insight into the Java community as a whole and was a cry for help to encourage us, as Java developers, to get more involved to help drive the future of the programming language.  An eye-opening talk that was very thought-provoking.
At conferences, I’ve always found that there is usually one talk where someone has a good whinge.  Russel Winder’s (@russel_winder) talk on Closures didn’t let that hypothesis down.  A lot of the content in this talk was material that had been covered by Brian Goetz’s keynote, and the rest seemed to be a long rant about how Java isn’t object-oriented(!) plus slides of squirrels.  There wasn’t really much to take away from this talk, which was a pity, as it sounded quite interesting.
Next was a high-energy and entertaining talk by Holly Cummins (@holly_cummins) and Martijn Verburg (@karianna / @diabolicaldev), which seems odd for a lecture about Software Complexity.  Covering a range of areas concerning software complexity, such as dependency management, issues with classloading and modularity principles, it offered plenty to take away and apply to my day-to-day development with Java, which was fantastic.
Another keynote followed, given by Patrick Debois of Atlassian, regarding DevOps, in particular the importance of sharing in such a development methodology.  Patrick took us through the six phases of implementing the DevOps way of thinking in a pre-existing workspace.  The key theme throughout was that improved communication can do wonders for getting development and operational teams working in harmony.  I see this methodology getting a lot more attention over the coming years.
Having recently been involved in helping out with the performance testing for a client, I thought it would be good to attend a couple of talks relating to that area that could perhaps give me some useful skills to apply.  First up was an FAQs-about-performance talk by Trisha Gee (@trisha_gee).  Aimed at the beginner performance tester, Trisha covered a wide range of different types of performance tests that can take place, as well as considerations and common pitfalls.  A very down-to-earth session that gave me some very useful advice to apply day-to-day.
The second performance-related talk I attended was by Kirk Pepperdine on the topic of garbage collection logs.  I must admit, I’ve never really thought of analyzing garbage collection logs, purely because I wouldn’t really have a clue where to start.  However, this session showed that it’s not quite as scary as it first appears.  Kirk talked us through a few different garbage collection algorithms and how these can be applied to the heap, complete with practical examples.  Whilst one would need to do a lot more research to become fully comfortable delving into the logs themselves, this lecture gave a great overview of how to tweak the garbage collection mechanisms to make your code more high-performance.  Highly enjoyable!
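If you want some GC logs of your own to practise analysing, a throwaway allocation-churn program is enough. This is my own sketch, not from the talk; the flags in the comment are the JDK 6/7-era ones (newer JDKs have since moved to the unified `-Xlog:gc` option):

```java
import java.util.ArrayList;
import java.util.List;

// Run with e.g.: java -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log GcChurn
// to produce a log file you can practise reading.
public class GcChurn {
    // Allocates lots of short-lived garbage, keeping 1 in every 100 chunks
    // alive so that some objects survive long enough to be promoted out of
    // the young generation.
    static int churn(int iterations) {
        List<byte[]> survivors = new ArrayList<>();
        for (int i = 0; i < iterations; i++) {
            byte[] chunk = new byte[1024];  // short-lived allocation
            if (i % 100 == 0) {
                survivors.add(chunk);       // long-lived survivor
            }
        }
        return survivors.size();
    }

    public static void main(String[] args) {
        System.out.println("retained " + churn(100_000) + " chunks");
    }
}
```

The mix of mostly-dead young objects and a trickle of survivors is what makes the resulting log interesting: you can see minor collections reclaiming the churn and the survivor space/old generation slowly filling.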
The final lecture of the conference that I attended was entitled ‘Multi-tenancy in a Platform-as-a-Service’ and was given by Paul Fremantle (@pzfreo).  This talk seemed like a ‘shameless plug’ for a product called Stratos, but it also highlighted the issues that the Stratos team came across when building their PaaS solution, which made for an interesting final session.  It’s also made me keen to get more exposure to cloud-based solutions.
Overall, this was a fantastic conference, both in organization and in content.  It’s a great opportunity for any developer to keep up to date with the latest goings-on in the Java ecosystem.  It’s given me plenty of mind-food to munch on for the coming year.  The JAX 2012 team deserve a big pat on the back.

Nearly a month behind the original indications from the AWS/Oracle guys, but Oracle RDS is here (released 23rd May).

http://aws.amazon.com/about-aws/whats-new/2011/05/23/amazon-rds-for-oracle-database/

Below are some quick calculations for a month’s continuous usage (excluding storage) in EU (Ireland) for the On Demand model. You will see the cost doubles each time you (in theory) double the power :-)

On Demand (License Included):
  • High-Mem Extra Large DB Instance – $699.36 per month
  • High-Mem Double Extra Large DB Instance – $1,398.72 per month
  • High-Mem Quadruple Extra Large DB Instance – $2,797.44 per month

On Demand (BYOL):
  • High-Mem Extra Large DB Instance – $550.56 per month
  • High-Mem Double Extra Large DB Instance – $1,101.12 per month
  • High-Mem Quadruple Extra Large DB Instance – $2,202.24 per month

The Reserved model still remains significantly cheaper over a one-year period – $305.05 per month as opposed to $699.36, but with an upfront year-1 cost of $1,850 – so year 1 costs $5,510.60 for reserved as opposed to $8,392.32 for on demand, for the example of the High-Mem Extra Large instance.
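As a sanity check, here is the year-one arithmetic behind those figures (using the per-month prices quoted above, and simply assuming twelve identical months with storage and I/O ignored):

```java
// Year-one cost comparison for the High-Mem Extra Large (License Included)
// RDS instance, using the per-month figures quoted in the post.
public class RdsCostSketch {
    static double onDemandYearOne(double perMonth) {
        return perMonth * 12;
    }

    static double reservedYearOne(double perMonth, double upfront) {
        return perMonth * 12 + upfront;
    }

    public static void main(String[] args) {
        double onDemand = onDemandYearOne(699.36);          // $8,392.32
        double reserved = reservedYearOne(305.05, 1850.0);  // $5,510.60
        System.out.printf("on demand: $%.2f%n", onDemand);
        System.out.printf("reserved:  $%.2f (saving $%.2f)%n",
                reserved, onDemand - reserved);
    }
}
```

Even with the $1,850 upfront payment folded in, the reserved instance comes out roughly $2,880 cheaper over the first year.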

Some of our Customers have been keen to see progress on the AWS-Oracle roadmap, and whilst any progress will always be welcome news, one of the key observations on ‘first glance’ at the Oracle RDS service is that there is a multi-AZ option but the only currently available selection is ‘No’ – this is not the case for the MySQL RDS service. This will be a limiting factor, particularly for our enterprise Customers, when considering a highly resilient/highly available database solution. Some digging is required into this, but it is likely to be a consequence of the underlying AWS architecture and the ‘new ways’ of thinking in terms of having a ‘shared nothing’ architecture – this currently prevents multi-node Oracle RAC implementations on AWS.

Based on the recent AWS outage (see previous blog posting here) and the reinforced message to implement your solution across AZs to ensure high resilience/availability, this Oracle release may not be enough to impress enterprise-level Customers – it may be a case of continuing to ‘watch this space’ on progress along the AWS-Oracle roadmap…

Jeff Barr has posted some info here on the AWS blog which provides links to other AWS pages describing how some core Oracle Applications products are now certified and supported on Amazon EC2, including eBusiness Suite, PeopleSoft, Siebel and Fusion Middleware. It feels like Oracle have joined the party at last, and not before time. The key barriers to sticking Oracle Applications on AWS before were that:

  • You had to create/customise/maintain your own AMIs (a hassle but doable)
  • More importantly, when you ring Oracle with a support issue, they might not play ball

The removal of the second barrier is the key thing. Some barriers still exist – customers still have to be happy with the data security aspects of public IaaS hosting, though that’s maybe more an emotional barrier than a real one. Also, Oracle Real Application Clusters (RAC) deployments are not currently supported – so that’ll put most enterprises off for now, I’d have thought.

It’s been almost exactly a year since Larry Ellison’s infamous rant against cloud computing. And clearly a year is a long time in the world of cloud computing…

It’s been a while since Oracle’s spending spree – including Sun and BEA – something that I’m still in shock about.  To get some clarity on the SOA offerings from Oracle, a colleague and I recently attended an Oracle partner day in Reading. As the day was run by two presales guys, we got an interesting insight into what they were picking out as the differentiators from the other main player, IBM – some of their opinions are reflected below, all of which I take with a pinch of salt.

Fusion Middleware

After everybody nodded to the question “does everybody know what SOA is?”, the instructors continued to tell us about SOA for a couple of hours. Nevertheless, there was an interesting slide in this period with the average project cost and staff size for SOA projects; basically, the message was that ‘too big’ projects are doomed and ‘too small’ ones just fade away. They also noted that the audience and the questions asked were testament to how the market has matured: their presentations have moved from ‘how it works’ to ‘how to roll it out and support it’.

One contradiction I noted was how often they talked about the importance of interoperability and the use of industry standards while at the same time really playing up how well integrated the Oracle SOA Suite 11g is. Perhaps this itself isn’t a contradiction – it’s more the incongruity of vendors on the one hand selling you standardisation while tut-tutting and teeth-sucking at the idea of using open source in some instances.

From a product perspective the day was a really useful insight into how the suite has evolved and into what’s new and exciting about Oracle SOA Suite 11g (R1 patchset 2 – the third big release in a year). Here are some specific notes I made, which may or may not make sense:

  • No prizes for guessing that the winner in the application server shoot-out between Oracle App Server and WebLogic was… WLS. The current version of SOA Suite won’t support running on WebSphere or JBoss AS, but this is on the roadmap.
  • One important piece of consolidation with this release is that SOA Suite and Oracle Service Bus can now run on the same application server platform (previously, in 10g, you needed different versions/instances).
  • AquaLogic ESB is now called Oracle Service Bus (OSB) and fulfils the ESB function. The product formerly known as Oracle ESB is actually still part of the product set – but is called Mediator. There is clearly some overlap between the products, but Mediator should be used only for basic routing and transformation tasks.
  • BPM Suite for 11g is the completely re-written AquaLogic BPM, which is for business modelling; it is supplied as part of the Dev Suite and includes some coding ability, allowing closed-loop development (BPM is not as powerful or feature-rich as the BPA Suite and is not intended as a competitive product). The BPA Suite is pure modelling.
  • BPA Suite is targeted at analysts and contains three licensed modules of the ARIS product from IDS Scheer. The message was clearly made that Oracle don’t feel it is their business to create business process design software. I think this was a poke at IBM, who recently bought Lombardi. BPA is much cheaper than the pureplays – the example he gave was Pega PRPC – but delivers on the promise of being able to integrate it rather than run it as a homogeneous blob.
  • List of what’s included in SOA Suite: OSB, Adapters, B2B, BPEL, BAM, EM, WSM, Service Registry, JDeveloper, CEP – maybe I missed a couple – all running on top of WebLogic with AQ/JMS and JRockit.
  • BPM, BPA Suite and Enterprise Repository (ER) are licensed separately, the latter under the ‘governance suite’. The ER assists governance from an assets perspective and provides the repository to store all artefacts.
  • The process server now supports BPMN 2.0 as well as the existing BPEL. There is an aim to provide BPMN round-tripping.
  • Oracle have invested a lot in the adapters – making them all JCA-compliant again (BEA had focussed on making them fast rather than standardised). There are lots of them, and there are lots of partners providing even more (Attunity and iWay were mentioned).
  • Event Driven Architecture, aka the Event Delivery Network, is JMS-based of course – thankfully on WebLogic JMS and not Oracle AQ. Apparently you can use another JMS provider if you want, as there is a JMS bridge (though I’m not sure that is really the accurate answer to my question).
  • The B2B product is really just an endpoint and not B2B network provider software
  • Oracle Business Rules (OBR) is a basic product – again the distinction was drawn between IBM buying ILOG JRules and the OBR. Oracle have no desire to create BPMS software, which they say can hide implicit process within complex business-rule structures. However, they have added handling for the basics of decision tables. OBR is integrated into JDeveloper.
  • Oracle Policy Automation (OPA) provides a sophisticated tool for managing your internal policy-related data, e.g. VAT.
  • Application Integration Architecture (AIA) is all about making the Oracle Applications harmonious with the SOA suite and providing reference architectures, while the process integration packs (PIPs) are about providing standard processes across these applications… rather than developing your own ‘order to pay’. Another distinction was drawn to IBM – who “don’t do applications”. Although it is predominantly Oracle Applications currently, the architecture is intended to support any cross-application integration (e.g. connectivity to SAP http://www.oracle.com/applications/aia-plm-to-sap-erp-data-sheet.pdf). A list of the currently released PIPs is at http://www.oracle.com/us/products/applications/application-integration-architecture/054232.html
  • JDeveloper is the development tool and is very well integrated and includes Unit Test Framework in 10.1.3.

As I said before – if you have noticed the Oracle instructors’ references to IBM above – the competition is clearly fierce between the two application integration stacks. A couple of other points were made which I regurgitate for the reader to make up their own minds: IBM’s products aren’t well integrated like the Oracle suite now is, and Rational are holding back BPMN development within the System Architect tool.

Following the recent acquisition of Sun by Oracle, an interesting discussion kicked off via email within our WebSphere practice. The general consensus was that it probably wouldn’t change things that much, but that it would be interesting to see how they handled their new toys while the BEA ones were still pretty shiny.

Today I came across this article http://www.infoq.com/news/2009/06/g1-paid. Whilst consisting of nothing more than speculation it raises some questions and elevates this a little higher up my list of things to keep an eye on.

Is there anything in this? Have Oracle got a masterplan to make money out of Java where Sun failed?
