In yesterday’s blog post I summarised the cloud services broker role with some definitions, and concluded that I do indeed appear to work for one of them – and I suspect many vendors at this week’s Cloud Expo Europe (#cee14) might lead on this from a marketing point of view.

We’re delivering service intermediation/customisation and service aggregation/integration, but one thing we are not really doing (or seeing any demand for) at the moment is dynamic or semi-dynamic workload migration. It’s not just dev & test any more – these days we are migrating complex production environments, customer after customer, onto AWS. But we are not providing customers with the means to dynamically move or spread those IT workloads across different cloud providers. It’s certainly something we could do from a technology perspective, and most of our deployments have some hybrid aspect to them.

The ability to migrate IT workloads dynamically (i.e. at run-time, not at deployment time) is something I sometimes see touted as a capability under the “cloud broker” banner, but in my view it just doesn’t make sense – at least not at the moment.

The rate of innovation in the IaaS/PaaS/DaaS market is such that most of the other vendors are playing catch-up with AWS, as AWS continue to differentiate themselves from the following pack. This shows no sign of slowing down over the next couple of years – so the only way a migrated workload is going to work across multiple cloud vendors is if it relies only on the lowest common denominator functionality across those vendors, which is typically basic storage, virtualised compute and connectivity. Alternatively, you have to architect your solution to take into account deployment differences across the cloud providers you intend to use – and be able to effectively monitor and support each of them. That’s twice the work and complexity, and not something you really want to debug. Did your load balancing just stop working as you expected? It worked last week…hmm…I wonder if our load balancing configuration behaves exactly the same across all our cloud service providers? And so on…

Even storage – the most commoditised of the building blocks of IaaS, you would have thought – contains some interesting differentiation. It’s no longer on price, as Google/Microsoft/AWS are effectively price-matching these days, but on features like access control, archiving to cheaper storage, automated data life-cycle policies and so on.
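To make that last point concrete, here’s a minimal boto3 sketch of one such feature – an S3 lifecycle policy that automatically archives objects to cheaper storage and later expires them. The bucket name and day thresholds are hypothetical:

```python
# A minimal sketch, assuming AWS credentials are already configured.
# The bucket name and day thresholds are hypothetical.
import boto3

s3 = boto3.client("s3")

# Archive objects to Glacier after 90 days and expire them after ~7 years -
# exactly the kind of automated data life-cycle feature that has no direct
# equivalent across every cloud provider.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```

There’s no drop-in equivalent of that one call across every provider – which is exactly the point that follows.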

The bottom line is that if you architect your applications so they can run on any cloud service provider, then you can’t easily use any of the good bits, and hence the value of migrating to a cloud solution is diminished. Not ruined, just reduced.

There are now a bunch of brokerage tools out there from vendors claiming to give you this workload migration capability, but what I’ve seen so far is disappointing. For example, one recent tool I looked at required a custom template to be created for each cloud provider – so whilst the end user might get a menu choice of “deploy to AWS” or “deploy to Azure”, under the covers you still need a bunch of experts in each cloud service provider’s technology, and those experts need to keep abreast of new functionality continually. You can create an impression of homogeneity, but it’s just a veneer.

In our experience, even in very large enterprise estates (e.g. where we’ve deployed and managed up to 750 AWS instances made up of numerous discrete environments), whilst the IT workloads might be relatively consistent in nature (e.g. there might be a corporate standard for this OS and that application server etc), there is always sufficient variance in each project’s requirements that a completely cookie-cutter approach to environment deployment and self-service just does not work. Each project needs slightly different software, or software versions, or server specifications, or connectivity requirements – the list goes on. And if they didn’t – well – you’d hope the projects would be merged into a single project if they were so similar in their needs, right?

So – given that it’s taken the IT industry this long to get to the point that “as a service” is really possible, for the next couple of years at least let’s focus less on hiding away the good bits of cloud and the really useful differentiating features of the leading cloud providers, and focus more on actually exploiting them please!

PS If you happen to be attending Cloud Expo catch me on the Smart421 stand or let me know you are there via @SmartCTO

Vegas baby!

I’ve survived my first full day in Vegas at AWS re:Invent, the annual Amazon Web Services shindig, although I must admit to being jet-lagged to hell. Handily nothing ever shuts down here so waking up at 2am is not a problem :)

The first day was dedicated to the AWS Partner Network (APN) Summit, and the #1 highlight had to be the announcement that Smart421 have been awarded Premier Consulting Partner status – one of only 3 partners in the EMEA region to be recognised in this way. This is the highest level that there is globally, and it makes me really proud of what our internal AWS Practice have achieved over our four year journey with AWS – this is not something that AWS give to any old partner! It’s recognition of the great customer case studies and deployments that we’ve jointly undertaken with AWS, and the investment in deep skills that we’ve made.

The sheer scale of re:Invent is pretty staggering. The venue (The Venetian) is enormous, the rooms are massive, and there’s a huge number of attendees with a very wide variety of interests – enterprise-level, gaming, HPC, start-ups etc. As I was at the APN Summit all day, which was on its own floor, this didn’t really hit me until I went to the Expo part of the event at the end of the day – where 180+ different vendors are touting their wares. It was a struggle even to walk through the room as it was so busy – although copious amounts of food and alcohol probably helped drive attendance :).

Here’s a couple of other takeaways from the APN Summit yesterday:

  • AWS have just updated the quote they use to demonstrate the rate at which they are scaling their infrastructure. Anyone familiar with AWS will probably have heard that one way they quantify their rate of infrastructure growth is by comparison with the number of servers needed to run the Amazon.com retail business at some point in the past. Well – AWS have carried on growing, and so this comparison metric has had to be updated. They are now adding enough server capacity every day to power Amazon.com when it was a $7bn business – which is quite an incredible statement really. Cloud computing is indeed a scale game…
  • One of the big push areas from AWS is driving the use of AWS to host specific packages such as Microsoft Exchange, various Oracle business products (i.e. not just the technology components such as database, middleware etc), SAP, Microsoft SharePoint etc. Hence yesterday they announced some new partner competencies for some of these products. Personally I don’t quite get this – in my view, the cloud model is not so compelling for these kinds of IT workloads, as they tend to be very “steady state” in nature rather than particularly peaky, and if they are somewhat peaky, you’ve usually got to have a resilient pair running all the time anyway, so the options for scaling down are limited. There’s a myriad of companies out there offering very price-competitive hosted Exchange and SharePoint models (like our sister company, in fact) and they can exploit multi-tenancy across customers to drive a really low price point. Office365 (which wraps in Exchange and SharePoint with other stuff) is also the direction of travel for many enterprise customers. Having said all that, AWS are obviously seeing traction for these more enterprise-level software deployments, otherwise they wouldn’t be aligning their partner model to it – and they are clearly not dummies, given that they are giving the rest of the IaaS/PaaS market a bit of a hiding.

Today we have the opening keynote from Andy Jassy, and then we get into the more nitty-gritty technical sessions…

In the 1980s, I was a typical PITA user, developing applications behind the back of the IT department, even bringing my own PC and software into work. Eventually the IT department ‘took me under their wing’ and I became the one fighting off guerrilla developments from the user community – but by providing users with better, faster and more flexible technology, we won the day.

Now I find myself on the other side of the fence again.

I don’t develop any more, but I’m watching the world of Cloud encourage self-service in the technical user community and leave IT departments behind. It’s a theme I have returned to before – the “democratisation of compute power” – served up brilliantly through the AWS IaaS model. We’ll see more examples of this at the AWS Enterprise Summit in mid-September that Smart421 is sponsoring (#AWSsummit).

However, it’s not just the Cloud that is challenging IT departments.

Mobile too seems to be spawning a new generation of Garagistes* – either bright individuals buried in large companies, or small one- or two-man bands creating mobile applications, building on core components (hosting/logon/mapping/location) provided by Apple, Google etc. and adding layers of creativity.

So what’s the problem – the real point here?

The issue is security. When I was hacking out applications and getting sneaky access to CRM databases and pricing algorithms, everything was safe inside the corporate firewall. Nowadays that development is mobile and cloud based.

Both of these technologies I wholeheartedly support, but like everything they have to be done in the right way. So if it were up to me again, I’d develop a Cloud strategy and Mobile architectural guidelines ASAP – before the Horse has bolted, the Cat is out of the bag and the Gorilla (sic) is in the mist.

* “The word Garagiste refers to the great Enzo Ferrari’s hatred of the multitude of talented, but small, Formula 1 teams that were emerging out of Britain in the late ’50s and early ’60s … were basically garage workers (grease monkeys in less formal parlance) compared to the engineering might of his Scuderia Ferrari. These teams didn’t produce their own engines or other ancillaries (aside from BRM), specialising mostly in light, nimble chassis”. http://dancleggf1.wordpress.com/2011/12/12/italys-garagistes/


Positively Noctilucent:  not all clouds were created equal

With just 3 weeks to go until its Enterprise Summit in London (17 Sept), Amazon Web Services (AWS) is on a bit of a roll (#AWSsummit).

Research reports by analysts at Forrester and at Gartner have again shown AWS’s leadership position in a significant way. For those still looking for evidence that Cloud Computing is a robust alternative to on-premise compute and storage, these reports are as close as you are likely to get to an objective viewpoint.

In June, Forrester Research published its ‘Wave’ report on Enterprise Public Cloud platforms, Q2 (get the reprint here or via short URL http://bit.ly/1dUsPC3). In it, the authors Rymer and Staten took all the mainstream IaaS/PaaS/SaaS vendors (with a few absences, e.g. Savvis) and analysed them from a number of viewpoints: rapid developers, coders, DevOps pros, and application development & delivery (AD&D) professionals.

Incidentally, they also registered a blurring of the lines between the widely accepted stratified offerings of software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS). But I digress. You should read the report.

“Amazon Web Services (AWS) came out the strongest of all vendors across three of our four developer segments.”

In August, Forrester’s arch rivals Gartner published its Magic Quadrant report on Cloud IaaS (get the reprint here or via short URL http://gtnr.it/17Lu848). Analyst team Leong, Toombs, Gill, Petri and Haynes put AWS way out in front. Other mainstream vendors (including Savvis) looked like a tightly bunched peloton on their now-famous 2 x 2 matrix.

Research clients of Forrester and Gartner place a lot of stock in these types of reports; substantial IT deals are decided on the strength of vendors’ positioning and analyst opinions. Of course, analysts don’t always get it right, but when it comes to Cloud they have for the most part been prescient.

Regular readers of the Smart421 blog will know only too well that we have been uncharacteristically vocal in our support for the AWS Cloud technology stack. This is simply because a number of UK enterprises have already engaged Smart421 to architect, deliver and support solutions on the AWS Cloud.

If you’ve registered to attend on 17 September, you’re in for a treat with a great agenda. If you’d like to have a more pointed conversation with our Cloud Architects or Senior Management team, please feel free to visit our special microsite to book an on-site meeting.


It was great to see National Rail Enquiries (NRE) win an award at the European Outsourcing Association Awards in Amsterdam last Friday (26 April).

In recognition of their SIAM (Service Integration and Management) outsourcing strategy, NRE won the award for Best Multi-sourcing Project of the Year, beating strong category finalists 60k and Centrica (Centrica won this category in 2012).

Smart421 is pleased to be a large part of that initiative, performing the Managed Services element on top of an AWS Cloud platform for several key NRE applications.

As customers struggle with the chains of traditional SI relationships, Smart421 is providing agile delivery and innovation methods in the IaaS world.

Many analysts see this as “third generation outsourcing” and a change for good – and so do I.

 


I was doing some Hadoop demo work for a customer last week and, mainly just because I could, I used spot instances to host my Hadoop/Pig cluster using AWS’s Elastic MapReduce (EMR) offering. I thought I’d have a quick look at what the resulting costs were over the few hours I was using it. I used a combination of small and large instances in the US-East region – m1.small for the master node and m1.large for the core nodes. Note – these costs exclude the PaaS-cost uplift for using EMR (another 6 cents per hour for a large instance).
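For anyone who fancies trying the same thing, this is roughly what the cluster request looks like in boto3 – a sketch only, assuming the default EMR IAM setup, and with the log bucket, AMI version and bid prices as placeholders (m1-generation instances only exist on the older, Pig-era EMR AMIs anyway):

```python
# A minimal sketch of requesting an EMR cluster on spot instances, assuming
# boto3 credentials are configured. The log bucket, AMI version and bid
# prices are placeholders, not real figures.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="pig-demo-spot-cluster",
    LogUri="s3://example-log-bucket/emr-logs/",
    AmiVersion="2.4.2",  # Pig-era AMI; modern clusters use ReleaseLabel instead
    Instances={
        "InstanceGroups": [
            {
                "Name": "master",
                "InstanceRole": "MASTER",
                "InstanceType": "m1.small",
                "InstanceCount": 1,
                "Market": "SPOT",
                "BidPrice": "0.06",  # bid at roughly the on-demand price
            },
            {
                "Name": "core",
                "InstanceRole": "CORE",
                "InstanceType": "m1.large",
                "InstanceCount": 2,
                "Market": "SPOT",
                "BidPrice": "0.24",  # likewise, on-demand as the ceiling
            },
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
        "TerminationProtected": False,
    },
)
print(response["JobFlowId"])
```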

In summary – it’s dirt cheap…

[Chart: AWS spot price analysis]

What is more revealing is to look at this in terms of the % of the on-demand price that this represents…

[Chart: AWS spot price analysis – saving as a % of the on-demand price]

So, in summary, an average saving of around 90% on the on-demand price! This is probably influenced by the fact that I was running the cluster mainly during the time when the US is offline. We tend to get a bit fixated on the headline EC2 cost reductions that have occurred frequently over the last few years, and the general “race to the bottom” of on-demand instance pricing between AWS, Google, Microsoft etc. Obviously not all workloads are suitable for spot pricing, but what I did here was deliberately bid high (at the on-demand price for each instance type, in fact), knowing that this meant I was very unlikely to get booted off the instances if someone bid higher when capacity got short. As EC2 instance costs are so low anyway, we tend not to worry too much about optimising costs by using spot pricing for many non-business-critical uses – which is a bit lazy really, and we could all exploit this more. Let’s do that!
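If you want to sanity-check the saving for yourself, the spot price history is directly queryable – a minimal sketch, with the on-demand price hard-coded as an assumption (it varies by region and changes over time):

```python
# A sketch of reproducing the saving calculation with boto3, assuming AWS
# credentials are configured. The hard-coded on-demand price is an assumed
# figure, not an official one.
from datetime import datetime, timedelta

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ON_DEMAND_PRICE = 0.24  # assumed $/hour for m1.large Linux on-demand

history = ec2.describe_spot_price_history(
    InstanceTypes=["m1.large"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)

prices = [float(p["SpotPrice"]) for p in history["SpotPriceHistory"]]
if prices:  # older instance types may return no data in newer regions
    average = sum(prices) / len(prices)
    print(f"average spot: ${average:.4f}/hr "
          f"({average / ON_DEMAND_PRICE:.0%} of on-demand)")
```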

The subcategory called Big Data is emerging out of the shadows and into the mainstream.

From left: Matt Wood, Chief Data Scientist at Amazon Web Services (AWS) with Robin Meehan, CTO at Smart421
Photo by Jim Templeton-Cross

What it is.

Definitions abound (who would have thought it? – quite usual in the technology market). For Big Data, we quite like the definition that originated with Doug Laney (@doug_laney), formerly of META Group and now a Gartner analyst. It goes something like this:

“… increasing volume (amount of data), velocity (speed of data in and out), and variety (range of data types and sources)”

Gartner continue to use this “3Vs” model for describing Big Data.

Unsurprisingly, others are claiming Gartner’s construct for Big Data as their own (see Doug’s blog post, 14 Jan 2012).

Still confused?

Put another way, Big Data is commonly understood to be:

“… a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools. The challenges include capture, curation, storage, search, sharing, analysis, and visualization. The trend to larger data sets is due to the additional information derivable from analysis of a single large set of related data, as compared to separate smaller sets with the same total amount of data, allowing correlations to be found to “spot business trends, determine quality of research, prevent diseases, link legal citations, combat crime, and determine real-time roadway traffic conditions.”” – read more on Wikipedia.

Big Data could be executed on-premise if you have sufficient compute and storage in your corporate data centre. And some do run it that way, especially some large banks, and with good success. Several solutions are already out there on the market; Oracle’s Big Data Appliance is just one example. But it does also beg the question: “why would you”?

If you don’t want the CapEx of purchasing more tin, or don’t want to gobble up capacity in your own data centre, then there are alternatives. For example, a cost model now exists with cloud-based compute and cloud-based storage (think of Amazon’s announcement of 25 percent reductions in the price of Amazon S3, its storage solution) that puts Big Data in the Cloud well within the reach of all UK enterprises. A cost model like that is likely to win friends in procurement and in corporate governance as well as in IT.

Hinging on technologies including Apache Hadoop clusters, Amazon Elastic MapReduce (Amazon EMR) and others, Big Data is delivering a degree of analytics and visualisation not previously possible at affordable levels.

Don’t just take our word for it – ask around. We could point you to other experts in Big Data, such as Matt Wood (@mza), Chief Data Scientist at AWS.

What it isn’t.

Big Data isn’t business intelligence (BI). What I mean is that Big Data isn’t BI in any traditional sense of the term. It is altogether another level on from that. Granted, some tooling that enterprises may already own could be recycled for use in Big Data analytics. But it isn’t just another race – it’s another species.

Big Data isn’t a lame attempt at reviving a management information system (MIS); those should be left to rest in peace.

What it means for you.

By now, if you’ve read this far, something should be niggling away at you that you could be missing a trick. I trust it won’t be those voices in your head again. But it might be your instincts telling you how Big Data could answer those tough business questions – y’know, those “I can’t be asked” questions that existing systems just cannot deliver.

Now, you would not necessarily get our CTO to come right out and say that Big Data is the next big thing. But the evidence we are assembling so far does seem to point to a new capability to deliver. For those with an appetite to understand their business in new ways, Big Data is delivering tangible intelligence that lets them see new dimensions, new possibilities and new revenue streams.

I did get a full radar lock on something our CTO said in the summer. It was a throwaway line at the time, but it stuck with me and with others. So, when the time came to consider an appropriate go-to-market message for our quarter three (Q3) focus, we decided to wheel out his one-liner as part of our messaging.

“It’s not about survival of the fittest -
it’s about survival of the best informed”
Robin Meehan, CTO, Smart421 Ltd.

Making no apologies to Charles Darwin or evolutionists, the statement is resonating with decision makers in the enterprise space, not least those in the Insurance sector. Why?  Well, we think it is because a lot of the big insurers operate under many names in their brand portfolios.

The capability to see and understand the impact of brand activities, such as Insurance Quotes, delivered using Big Data analytics in the AWS Cloud, is illuminating new gains that would otherwise have remained out of reach.

Don’t forget – brand analysis is only one use case for Big Data in the Cloud.

If the world is going Big Data crazy then you need to know what it is, what it isn’t and what it means to your enterprise.

Agree?  Disagree?

UPDATE 05 Dec 2012 – Our economist friend Tim Harford  (@TimHarford) sent this hilarious tweet: The data do not lie. OR DO THEY? Muah huah huah! http://dlvr.it/2b2NS1

UPDATE 06 Dec 2012 – Robin and colleague Ben Baumguertel (@bbaumguertel) are attending the Big Data Analytics event in London today (organised by @WhitehallMedia ).


A colleague pointed me at an article in Computing the other day that starts off with “Retail giant Marks & Spencer is ditching Amazon as its online platform host“. As we are a leading AWS Solution Provider, I thought this was interesting, so I looked into it. The article itself is not misleading, but as usual the comments and interpretation it has generated have confused Amazon the retail store with Amazon Web Services (AWS) the IaaS/PaaS provider. For example, someone made the comment “One is left wondering how many of AWS’s customers will move to Rackspace or other companies of that ilk” – which is nonsense IMHO, and I’ll explain why.

M&S’s eCommerce offering is built using a white-labelled version of Amazon’s retail store – see this article in ComputerWeekly from 2007, when it was announced. I’m sure that some or all of M&S’s eCommerce site runs on AWS infrastructure, but that’s not really the point. I totally understand a retail organisation’s reluctance to use a retail competitor’s platform, due to the reduced control over new functionality releases and the level of business data insight it could give to a competitor. But those arguments do not apply to running a retail operation (with your own eCommerce software/platform) on AWS – provided you follow good cloud architectural practices, e.g. encrypt data in transit and at rest, and keep your encryption keys private and away from the cloud service provider.
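To illustrate the keys point, one way to keep the provider entirely out of the loop is client-side encryption – a sketch only, assuming boto3 and the cryptography package, with the bucket name and key handling as placeholders:

```python
# A minimal client-side encryption sketch, assuming boto3 and the
# `cryptography` package are installed. The bucket and key handling are
# illustrative only - in practice the key lives in a key store you control,
# never alongside the data. boto3 uses HTTPS by default, covering transit.
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in reality: load from your own key store
fernet = Fernet(key)

s3 = boto3.client("s3")

plaintext = b"commercially sensitive retail data"
s3.put_object(
    Bucket="example-bucket",
    Key="orders/2014-02-25.bin",
    Body=fernet.encrypt(plaintext),  # AWS only ever stores ciphertext
)

obj = s3.get_object(Bucket="example-bucket", Key="orders/2014-02-25.bin")
assert fernet.decrypt(obj["Body"].read()) == plaintext
```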

In discussions with customers and vendors I see this confusion of Amazon.com and AWS all the time (sometimes deliberately, to spread fear, and sometimes accidentally, through lack of understanding). To be balanced here, the AWS offering has historically leaned on the Amazon.com brand and scale as a selling point, so I guess AWS need to keep making this distinction clear (which I’ve seen them do in every presentation). Either way, it’s a red herring…and getting a bit tedious.

It is another exciting day for Smart421, with the news that we have become the first UK enterprise to be confirmed on the AWS Partner Network (APN) as an Advanced Consulting Partner by Amazon Web Services (AWS).


What does it really mean for Smart421 and our Customers?

In some ways, partner ‘labels’ are often seen as just that – labels – and can be given out like confetti. However, for Advanced Consulting Partner status we had to put up some substantive evidence of various AWS capabilities, including Customer references, a minimum of $10,000 a month in AWS billings, a minimum of Business-level AWS support (previously called Gold) and at least 10 trained AWS staff.

So before the cynics have a pop either at the program or at us, or both, I can reveal it does require proven AWS capabilities. That will sort the ‘wheat from the chaff’ so to speak :-)

Does it change anything for us? Well, yes and no…

In terms of Operations, we already have a great relationship with AWS (since 2010 as an AWS Solution Provider and since 2012 also as an AWS Direct Connect Solution Provider) with access to a range of valuable contact points, from technical contacts through to Sales and up into Senior Management in the UK and USA. From my perspective as AWS Practice Manager, the existing relationship means regular face-to-face monthly meetings with our technical contacts in AWS, access to the product teams including ‘gurus’ based in Seattle, involvement in beta trials for new features etc. so the APN will just help reinforce those good relationships.

Perhaps the biggest change will be the impact of APN membership for our Customers. We are now able to leverage a wealth of AWS resources on our Customers’ behalf. This translates into concrete deliverables, from material as straightforward as documentation right through to support from technical architects during delivery engagements. It adds up to an improved level of confidence for our Customers that our proven AWS capabilities are fully backed by AWS and its rapidly growing global eco-system.

It means far more than a partner label to us and our Customers….

As Smart421’s partnership with Amazon Web Services (AWS) is slowly moving from pure infrastructure work into full cloud-based solution architecture design, I’ve recently spent some time analysing their IaaS* platform from that angle.

My immediate goal was to better understand how the infrastructure side of my solutions is going to be affected by this paradigm shift. In other words, I was preparing myself to have a conversation with my infrastructure architect – let’s call him Jimmy – and not be caught showing my complete ignorance on the matter… again. :D

To my surprise, I found that AWS, and IaaS in general, has a much more profound effect on my end-to-end solution design than just infrastructure – the key being how the “servicelisation of infrastructure” (sorry) provides a great mechanism to finally address NFRs comprehensively at the application view and close what I used to call “the leap of faith into the iron”… let me explain that.

We all know by now that our solution design is always going to start with a business architecture exercise that feeds the solution requirements – functional and data requirements, but also those dreaded non-functional requirements (NFRs)** – to our data and application architects, who in turn produce the data model and component view. That is followed by our infrastructure architect, who comes up with the technical platform all those application components will run on. Fine and dandy, but…

Dreaded NFRs? Why? Well, while it is quite straightforward for our data architect to create a logical data model out of the data requirements, and it is just BAU for our application architect to derive the system capabilities to cover the functional requirements, it is not easy for either of them to cope with those NFRs… How does our data architect react to an NFR such as “the system shall process 2 million transactions per day”? Or what does our application architect think of the NFR “the site will provide the same response times independently of the location of the user”? Well, in most cases the reaction will be: “Well, that’s for infrastructure to answer, isn’t it? Let’s pass those ugly things to Jimmy, he’ll know what to do with them…”.

So that leaves us with that situation we’ve all been in so many times, in which we provide a very detailed application architecture to Jimmy alongside a very long list of performance-related requirements, hoping – and here’s the leap of faith I was referring to earlier – that our infrastructure hero will know how to put a lot of heavy equipment together that somehow, magically, will achieve those very ambitious performance goals… Please raise your hand if that has never happened to you, or if it hasn’t inevitably ended up in all sorts of performance issues detected just too late down the line, maybe a couple of weeks prior to go-live? Anyone?

Well, let’s see how Infrastructure as a Service may come to our rescue. Just do this simple exercise: go to the Amazon Web Services site and read all the documentation of the offering – all of it – trying to ignore the fact that it describes an infrastructure platform, and instead set your mind as if you were looking at the functionality of just another software system that needs to be part of your component architecture… Interestingly enough, this made a “magic click” in my head, and suddenly I was thinking about my solution and my application architecture*** in terms of capabilities, functionalities and features that elegantly addressed all those long-hated NFRs!

I’ve put this idea to the test on a CMS solution I’ve been playing around with recently; I would never typically have defined capabilities such as a “Fast Access Data Access Layer” or “Low Latency Distribution of Content” in my capabilities inventory, but suddenly my understanding of AWS services such as ElastiCache or CloudFront made it dead simple to think about the NFRs and translate them into discrete solution components.
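To show what that translation looks like, here’s a toy sketch of the idea – the NFRs, capability names and candidate services are all illustrative, not a definitive catalogue:

```python
# A toy illustration of treating IaaS features as solution components:
# each NFR maps to a named capability, which in turn has candidate
# realisations to be settled later by the fit-gap exercise with your
# infrastructure architect. All entries are illustrative examples.
NFR_TO_CAPABILITY = {
    "2 million transactions per day": "Fast Access Data Access Layer",
    "Same response times worldwide": "Low Latency Distribution of Content",
    "Survive loss of a data centre": "Multi-Site Resilience",
}

CAPABILITY_OPTIONS = {
    "Fast Access Data Access Layer": ["AWS ElastiCache", "on-premise memcached"],
    "Low Latency Distribution of Content": ["AWS CloudFront", "third-party CDN pull-zone"],
    "Multi-Site Resilience": ["multiple AWS Availability Zones", "dual corporate data centres"],
}

for nfr, capability in NFR_TO_CAPABILITY.items():
    print(f"NFR: {nfr}")
    print(f"  capability: {capability}")
    print(f"  candidates: {', '.join(CAPABILITY_OPTIONS[capability])}")
```

The point is that the NFR now lands on a named component in the logical design, rather than being thrown over the wall to Jimmy.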

And what’s even more interesting is that my design is not immediately coupled to the given IaaS platform as a result – not at all. As with the rest of the solution, these components and capabilities allow for a fit-gap exercise against the available options, to be answered by my infrastructure architect: do we achieve global performance by deploying the servers in our corporate data centres, or by deploying them in AWS regions in the cloud? Or do I just keep my platform in a single location on my premises and use a CDN pull-zone for low-latency delivery of static content? Quite a different proposition for Jimmy than the old “make it quick, boy”!

Now the problem is addressed where it should be in the design process – at the logical level – and decomposed into a set of features that achieve full traceability from the business into the application, then into the infrastructure layer, and then back up again… The work of our infrastructure architect is now so much easier, as is the predictability of our design exercise! Life is good! :D

Well, just a thought in any case… I guess most of you have already been through this learning, but just in case you had your own Jimmys suffering, this is a nice mental approach for bridging the gap.

* Infrastructure as a Service; I’ve focused this analysis on this type of cloud offering as it is probably where the biggest gap between architecture practices exists. In big organisations with dedicated software, middleware or platform architecture functions, a similar situation will probably exist, in which we could follow a similar approach with SaaS (e.g. Microsoft Dynamics CRM Online) or PaaS (e.g. Microsoft Azure)…

** It’s worth mentioning the usual problem of the business architecture exercise not really producing NFRs other than maybe a couple of fuzzy ones like “it must be really really fast” or “the site needs to look gorgeous”… This article works on the bold assumption that our business analysts have been able to get blood out of stones and have coaxed the business into expressing real, tangible NFRs to come along with the rest of the requirements.

*** After all, aren’t we solution architects just application architects with a good working knowledge of the other disciplines? At least I’ll confess that’s my case…
