

So at Microsoft TechEd Europe this year the strapline from the keynote was: "It's Time". It's time for us to utilise the Cloud on our terms. I also thought there'd be time at the airport last Friday to write this blog, but no chance, no time, just waiting in queues…if only they'd used Azure Topics to route us effectively…

So what sticks in my head regarding The Cloud OS: It's Time theme? Hybrid Solutions. It was a theme repeated throughout the week, with many of the breakout sessions highlighting the progression in Microsoft products, whether it be SQL, BizTalk, Office or .NET, and the new features to support on-premises and cloud solutions.

With integration at the heart of most things we do at Smart421, I was interested to see Microsoft’s vision for integration particularly given the recent announcement of Windows Azure BizTalk Services (WABS) and I thought I’d briefly describe how the new features may be utilised.


Connect to the Cloud. BizTalk Server 2013 now includes Azure Service Bus adapters for simplified Azure connectivity. Why would you use this? The integration pattern discussed was Store and Forward, where an organisation may wish to insulate an on-premises version of BizTalk from peaks and spikes in load, or from destination systems being offline. By utilising Azure Service Bus queues to store messages, the on-prem BizTalk server is free to process messages at its own capacity, or when destination systems become available.
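To make the Store and Forward idea a little more concrete, here is a minimal sketch using the Azure Service Bus .NET library (Microsoft.ServiceBus.Messaging): a sender parks messages on a queue and a receiver drains it later. The connection string, queue name and message handling below are placeholder assumptions of mine; in a BizTalk solution the SB-Messaging adapter would normally do this work for you.

using System;
using Microsoft.ServiceBus.Messaging;

class StoreAndForwardSketch
{
    // Placeholder values - substitute your own namespace and queue.
    const string ConnectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=...";
    const string QueueName = "orders";

    // Sender: park the message on the queue even if the destination system is offline.
    static void Send(string payload)
    {
        var client = QueueClient.CreateFromConnectionString(ConnectionString, QueueName);
        client.Send(new BrokeredMessage(payload));
        client.Close();
    }

    // Receiver: the on-premises BizTalk host (or any worker) drains the queue
    // at its own capacity once the destination systems are available again.
    static void Receive()
    {
        var client = QueueClient.CreateFromConnectionString(ConnectionString, QueueName);
        BrokeredMessage message;
        while ((message = client.Receive(TimeSpan.FromSeconds(5))) != null)
        {
            // process the message here, then remove it from the queue
            message.Complete();
        }
        client.Close();
    }
}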

Run in the Cloud. Due to the large footprint of the BizTalk Server product, you may be able to save a considerable amount of time with Azure IaaS. Creating Dev and Test environments could be reduced to minutes, with ready-made BizTalk images available to spin up at any time.

Build for the Cloud. WABS provides an easy way to expose service endpoints in the cloud, making B2B message exchange potentially simpler without having to expose on-prem services through a DMZ. EDI message exchange was given as an example, with ready-made EDI adapters making EDI processing simpler and, again, potentially saving time.

With regard to the roadmap for WABS, it will come as no surprise to hear that many of the BizTalk Server features will be finding their way into WABS, but rest assured the investment in the on-prem version will continue for some time yet. If you're an existing Microsoft customer and have already bought into their stack, there is undoubted flexibility provided by the combination of on-premises, cloud infrastructure and platform services. As always, understanding how best to utilise what, and when, will be the challenge.

In my next blog I hope to talk in a bit more detail about some of the new features in SQL2014.



After the polished video intro (screenshot below; catch the full keynote here), Brad Anderson (Corporate VP at Microsoft) kicked off TechEd Europe here in Madrid by highlighting the opportunities available to us in IT at the moment. He talked about innovation, helping customers differentiate themselves, and how Microsoft aims to make others great, and whilst I definitely agree with him, it was the resonance with so much of what we do at Smart that I really liked.


Four areas of innovation were discussed: People-Centric IT, Building better Apps for people, Data, and the Cloud Platform. Aligning to these were new product release announcements including SQL 2014 CTP, VS2013 and TFS2013 CTP, Windows Server 2012 R2 and System Center 2012 R2. Better still, a partnership with Oracle was announced, which means Oracle is fully supported on Azure and Hyper-V (and, from what I gathered, there will be Oracle images ready to spin up in Azure).

One of the best parts (and I tweeted at the time – @wanty1975 btw) was a video of one of our customers (Aviva) referenced as an example of mobile and Azure development combining to provide an innovative solution for their customers. Why was this so good? Well, partly because my old boss was in the video…but more so because seeing real-world examples, and ones close to home too, helps inspire me and makes me realise the opportunities we have available to us.


So here's a brief list of new features mentioned in the keynote that you should be aware of:

  • BYOD – two-factor authentication making it easy to add devices (an MS Surface in the demo) to AD domains using Windows 8.1 (not sure what the experience would be like for other devices though?). The simple process means files are synced, permissions are granted to corporate data, and Line of Business (LoB) applications are downloaded onto the device, accessible via a company portal-like homepage.
  • VS and TFS 2013 – ALM benefits throughout the development lifecycle, but I really liked InRelease (following a recent acquisition by Microsoft), which provides workflow and approval mechanisms for moving builds through environments. I could see this really helping to take the pain away from tracking builds through environments and ensuring everyone knows what versions are where. From a dev perspective I also thought the new Code Lens and Code Map features looked really helpful. These come under the banner of Situational Awareness and are designed to reduce the time and effort needed by developers when context switching. Code Lens integrates with TFS 2013 to provide details within the code base, such as tests passing/failing, who last checked a file in and any recent check-ins. It helps developers understand who has done what, where and why, much more simply than the previous version/compare routine that we've all experienced. Code Map provides a diagrammatic view of code (similar to architecture modelling tools), but this can be viewed at runtime in debug mode, like a call stack, helping developers familiarise themselves with code much more quickly and easily.
  • SQL 2014 – some huge performance gains achieved through in-memory OLTP and row-based access rather than page-based, reducing locking issues and increasing performance. SQL will suggest which tables are 'hot' and recommend moving them into memory (I'm assuming they're still persisted to disk fairly frequently!). I also picked up on the ability to replicate an existing on-prem DB to Azure, to provide HA and DR via a right-click menu option…I didn't see or hear any more evidence of this, but it sounds like a great feature. Also HDInsight and Hadoop linkage to Excel to provide visualisations and insight into Big Data.
  • Windows Server and System Center 2012 R2 – providing the tools to make hybrid cloud simple and easy, helping to sync data, create replicas for DR purposes, and delivering significant performance benefits for virtualised machines.

So there are obviously things that I’ve missed and lots more to talk about but hopefully this represents a fair summary. I’ll also be tweeting any further nuggets throughout the week.

Wow! SyncIpswich's second meetup, and around 80 people crammed into the Eastern Enterprise Hub in the James Hehir Building at University Campus Suffolk. Many of the attendees were working for local behemoths like BT, but there was also a good mix of bootstrappers, start-ups and tech entrepreneurs with all kinds of backgrounds (I even spotted a Chartered Accountant).

Organiser Carl Farmer (@CarlFarmer), supported by Anders Fisher (@atleastimtrying) and others, has done a great job with SyncIpswich, which we are proud to sponsor. The focus of this meetup was on building software quickly with good practices, as well as a nice introduction to the Windows Azure Cloud.

Talk no 1. Continuous Delivery

The first presentation, by Chris O'Dell from 7digital (@ChrisAnnOdell), described how Agile practices (CI, Kanban, etc.) combined with their architectural evolution to SOA have reduced code-to-deploy times at 7digital to half a day. And, by the sound of it, this makes their developers more productive by getting away from the "DLL Hell" that used to be the bane of any Microsoft Windows developer's life, towards a loosely-coupled set of services and a public API.


Chris raised some really interesting points around developing small, fine-grained service components – not being that familiar with .NET myself, this seemed similar to what we are doing in the Java world with OSGi and Service Component Architecture. I do like the policy of developing new features on the trunk (no feature branches) while making good use of feature flags rather than old-fashioned branch & merge.
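As a rough illustration of the feature-flag idea (the flag name and the config-based toggle below are made up; real implementations range from app settings to dedicated toggle frameworks), unfinished work can sit on the trunk behind a switch:

using System;
using System.Configuration;

static class FeatureFlags
{
    // e.g. <appSettings><add key="Feature.NewCheckout" value="false"/></appSettings>
    public static bool IsEnabled(string feature)
    {
        return string.Equals(ConfigurationManager.AppSettings["Feature." + feature],
                             "true", StringComparison.OrdinalIgnoreCase);
    }
}

class CheckoutService
{
    public void Checkout()
    {
        if (FeatureFlags.IsEnabled("NewCheckout"))
        {
            // new code path: committed to trunk but switched off until it's ready
        }
        else
        {
            // existing behaviour
        }
    }
}

The same mechanism is what makes the testing idea mentioned a little later attractive: flipping a flag (switching off authentication, say) for a functional test run, rather than maintaining a separate branch or build.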

They are also using Git for code version control, and Chris showed the inversion of the classic Unit Test, Acceptance Test, QA triangle. Some in our own organisation are raising question marks about the usefulness of very granular unit tests, so the approach taken by 7digital of increasing the number of unit tests is interesting.

There were a lot of questions from the floor; I was particularly interested in how the small kanban teams (about 6 or 7 members in 5 or 6 teams, I think) interact when there are common services. This is a key problem that we SOA architects need to get right to get the best value from services. Feature flags are something that we've also thought about in the context of simplifying application testing by, for example, switching off authentication for functional testing.

It’s great to see a company like 7digital competing successfully with iTunes and Amazon in the digital music space. I’ll be checking out their API (and their JLS back catalogue !) in more detail this weekend.

Talk no 2. Starting out in Azure

The second talk of the night was by Richard Astbury (@richorama) of Two10 Degrees (@two10degrees). Richard gave a nice introduction to Cloud computing, and in particular to using Microsoft Windows Azure, showing a picture of an MS data centre under construction, which was something I hadn't ever seen before. I think it really brought home the sheer scale and commodity nature of the Cloud, with these facilities being full of containers of kit that are simply thrown away or recycled when they stop working.

Building a website on Windows Azure from scratch can follow a few main pre-canned routes: the obvious "Website", "Virtual Machine" and "Cloud Service".

And it now includes a "Mobile Service", which is of particular interest to me. Sadly, I didn't have time to chat to Richard about this, but it's on my "To Do" list to get a Hello Smartie mobile service up and running. In fairness, Richard did do two masterful demos for Website, including a Node.js-based site which he even launched from his home computer (a Raspberry Pi, no less). As Carl tweeted:

Deploying to Azure from a remote RaspberryPi at home… Impressive stuff from @richorama !

— SyncIpswich (@SyncIpswich) April 25, 2013

Well done to the people of Ipswich for turning out and drinking all the sponsored free beer!
SyncIpswich will run and run.



There has been a bit of a stir today about the patent Apple have taken out relating to mobile cash. Some Twitter users thought it was a joke at first, but actually Apple is only taking a method that has been proven in developing markets and applying it to the US market.

The basic premise is that registered “cash providers” can provide the folding green stuff to someone in need of it using their iPhones and iTunes plus location (maps) as the enabling technology.

At first this seems bizarre: why would I, as a total stranger, trust you to come and take cash off me? The proposed 1% commission doesn't seem much incentive on the average cashpoint withdrawal, but I guess on larger transactions it might be attractive.

On closer inspection, this is actually very similar to hugely successful mobile payments systems like the M-Pesa “unbanked” system that relies on networks of agents around rural communities in Kenya and other countries. Bill Gates recently championed M-Pesa as a great example of ideas that have “digitally empowered” subscribers but commented:

Without omnipresent cash points, M-Pesa would be no more convenient than traditional ways of moving money around. At the same time, it was impossible to persuade retail stores to sign on as cash points unless there were enough M-Pesa subscribers to make it profitable for them.

And is it really that much different to the ad-hoc person-to-person (P2P) mobile payments being promoted by Barclays Pingit and others?

This is just yet another example of an attempt to get people using their phones for financial transactions. The "closed system" nature of M-Pesa wasn't a barrier in a country with very little alternative and only a minority of people with bank accounts. The concept of using agents for so-called "Cash-In" and "Cash-Out" services made perfect sense. However, in countries with thousands of ATMs, and where most places are now happier to take card payments than cash due to the downsides of handling it, I really can't see this latest Apple patent catching on.

Organised by the UK Windows Azure User Group, this free all day conference provided a great opportunity to catch up on the latest developments, particularly given the Microsoft announcement a couple of weeks back.

Core to this announcement was Microsoft's move into Infrastructure-as-a-Service (IaaS), and the keynote by Scott Guthrie positioned IaaS (described as Virtual Machines) alongside Microsoft's current Cloud offerings, which to date have focused on Platform-as-a-Service (PaaS – now labelled Cloud Services by Microsoft) and Software-as-a-Service (SaaS – Office 365, for example).


Despite the lack of internet connectivity for a large part of the presentation (what is it with Cloud demos and loss of connectivity?!), Scott did a great job talking through the slides, clearly describing the alignment of each of the deployment options: on-premises vs Virtual Machines vs Cloud Services vs SaaS.

In addition to Virtual Machines, the new Web Sites service was also discussed, which gives Azure customers up to 10 websites and 1GB of storage for free (whilst in the preview period; see here for further details). The demonstration showed how easy it is if you simply want to re-host an existing website on Azure, whether it be ASP.NET, Node.js, PHP or even classic ASP. So the new Web Sites and Virtual Machines services provide a simple route to hosting applications on the Azure platform, with the added benefit of the Azure management aids, real-time statistics and, in the case of Web Sites, incremental deployments and continuous integration (through TFS or Git) too.

So where does this fit with PaaS? Well, Steve Plank from Microsoft provided some answers with another demonstration. With Cloud Services you get a host of services to call upon, including Storage, Database, Identity, Caching and Service Bus, and the demo showed that if you design your application from the ground up utilising these services, you benefit from an end-to-end application architecture that can be deployed and running in minutes at the click of a button. It is this architecture that really gives you the elasticity and flexibility in the places you need it.

A good day, and exciting times, with the options and landscape constantly changing. It was nicely summed up by another Smartie (Andy Carter): 'I guess there's a load more stuff I need to learn about', when MS announced the new services a couple of days after he passed the Azure certification… (well done, btw!)

Last Tuesday was another instalment from the UK Windows Azure User Group (formerly the London Windows Azure UG), which this time included a presentation from our very own Simon Hart; as well as myself, two other Smarties also attended the event to heckle support Simon.

The session kicked off with an apology from Microsoft's Mark Quirk (Azure Product Manager) for last week's Azure downtime. Mark described this as the first incident in two and a half years, and although any incident like this is never a good thing, it was great to see Mark there and it helped reinforce to me just how important this stuff is to Microsoft.

The first agenda item was from Yossi Dahan (MS Technical Specialist), and despite not feeling well he gave a great presentation on Azure and how it can support mobile application development. Yossi demonstrated a mobile solution called BabelCam (source available here), which goes something like this (a rough sketch of the storage step follows the list):

  1. Someone takes a picture from a mobile device of a menu which is in a foreign language.
  2. The solution then authenticates using the Azure Access Control Service (ACS).
  3. Once authenticated, stores the image on Azure as a BLOB.
  4. Passes the image to an OCR service to capture the text.
  5. The text is then passed to Bing where it is translated into English.
  6. Translated message is passed back to the mobile device.
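As a rough idea of what step 3 might look like, here is my own sketch using the Azure Storage client library (Microsoft.WindowsAzure.Storage); it is not BabelCam's actual code, and the connection string and container name are placeholders. See the source linked above for the real thing.

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class ImageStore
{
    // Placeholder connection string - substitute your own storage account.
    const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...";

    public static void UploadImage(string blobName, Stream imageStream)
    {
        var account = CloudStorageAccount.Parse(ConnectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("menu-images");
        container.CreateIfNotExists();

        // Store the photo as a block blob, ready for the OCR step to pick up.
        var blob = container.GetBlockBlobReference(blobName);
        blob.UploadFromStream(imageStream);
    }
}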

A couple of things stuck out for me:

  1. Due to the constraints of processing power on mobile devices, Azure enables developers to move as much logic and data as possible from the device up onto the cloud.
  2. This helps deliver more powerful applications but also means that it is easier to support multiple device OS’s (Windows, iOS and Android) using the .NET SDKs.

The main realisation I had was how influential Cloud paired with Mobile will be, and the combined potential that these technologies bring. Consumer-focused mobile applications (even more so than Enterprise apps) require the type of scalability that only the cloud can provide. Couple this with the ease with which things like ACS can be configured to authenticate users via Live ID, Yahoo!, Google Mail and Facebook, and MS are providing the building blocks for vastly scalable, quick-to-market solutions.


Following the beer and pizza (thanks, sponsors), Simon Hart talked us through his soon-to-be-published Azure reference architecture…in 45 minutes!

The solution enables you to notify your home-based media centre, from any device with a browser, of programmes you wish to record. The effort and dedication shown by Simon in developing the reference architecture was obvious, and the slides at the start helped frame the presentation; what initially looks to be a simple undertaking turns out to be quite a challenge. What Simon demonstrated was how all the facets of Azure can be used to overcome these challenges, creating a solution that is:

  • Secure (using ACS)
  • Scalable (using the Azure Service Bus)
  • Resilient (using an active-passive DR capability across regions)

It was great to see an example pulling all these different parts together in one solution…the difficult part was grasping it all in the time! I think Simon is pencilled in for the next meeting (3rd April) to talk in more detail about the Service Bus, so it'll be another late one for me, but well worth it.

Planky getting an error!

On Tuesday night last week I attended my first London Windows Azure user group meeting – it's the second time this new group has met, but the first one I've managed to make it to. My colleague Simon Hart blogged about the inaugural event here.

There were about 35 attendees or so and it felt like a good crowd, asking intelligent questions. I had some interesting chats during the breaks with other user group members and also caught up with Yossi Dahan (a Microsoft technical architect I've met before). It really feels like this young user group has some momentum – so hats off to the organisers for getting it off the ground! The good pizza, chips and beer always help :) – this must be one of the best-catered user group meetings I've ever been to – there was even someone opening my beer bottle for me…

Planky (aka Steve Plank from Microsoft) presented on two topics relating to different strategies for identity federation and application access control – Azure’s Access Control Service (ACS) and Azure Connect.

Most of the presentation time was allocated to ACS – which is pretty intricate to use. Well, it's probably fairer to say that there are plenty of moving parts and technologies to get to grips with if you want to federate identities from on-premises Active Directory using ADFS 2.0, via ACS in Azure, to a set of applications hosted in Azure (which will typically use Windows Identity Foundation – WIF – to process the security token issued by ACS). None of it is particularly tricky in itself, but the great man himself hit some issues along the way (which always makes for a better presentation anyway :)) and I was left thinking that it was a bit of a nightmare to troubleshoot exactly why user access to the end application (the "relying party") was being denied (see the image above) – it's just the joys of debugging a distributed architecture, I guess.

Azure Connect is essentially a VPN and IPsec tunnel offering that I guess is very roughly equivalent to the Virtual Private Cloud (VPC) offering from AWS, but with some significant differences – it's trying to address the same key requirement: seamless but secure network connectivity between on-premises and cloud-based networks. It's still in beta (at least until Summer 2012) and has some inherent limitations, such as requiring a separate installation of agent software on every on-premises server that will talk to/from Azure, but it looks like an interesting technology. My main concern was whether our customers' security teams could live with this model, though – in addition to the installation requirement, it essentially bypasses the corporate firewall by creating an outbound SSL (port 443) connection to the Relay Service on Azure, effectively creating a client-to-site VPN from each individual on-premises server to the Relay Service.

So overall, a very useful and interesting evening. I'm glad I attended and I'd recommend my Smart421 colleagues make the effort to attend future events (which are planned to be monthly) – the next event (register here) is on the 7th Feb and relates to "Parallel Processing with Azure and HPC Server", so I'm personally very interested to hear how this compares to AWS's offerings in this area.

One of the Smarties in our Microsoft practice, David Tuppen, has published an article on the SQLServerPro web site (what was called SQL Mag) about how to work around the limitations of the Business Intelligence Wizard in SQL Server Analysis Services (SSAS).

It’s very clear and detailed. Have a read!

Anyone doing modern enterprise software engineering today will be practicing or partially practicing Continuous Integration, otherwise known as CI.

If you have never heard of CI before, Martin Fowler has a really good post on the subject here: http://martinfowler.com/articles/continuousIntegration.html

Just quickly, CI really comes down to a few main key points, at a minimum (some of which are lifted from Fowler's post above):

  1. Automate the build
  2. Make your build self-testing
  3. Everyone commits/checks in to trunk every day
  4. Every commit/check-in should build trunk on an integration machine
  5. Support for buddy builds
  6. Keep the build fast
  7. Must be able to run a local command-line single build that does everything that represents the build machine's process

As far as I am concerned those 7 items are mandatory. There are of course many more that make up a really good CI strategy, which I'll go into in a future post, but for now let's concentrate on those 7 items.

I’m not going to go into detail into each item, but will pick out the ones I think are the most important.

2. Make your build self-testing

The thing is, everyone seems to know about CI now, which is good. But many are still getting it wrong.

Number 2 on the list needs explaining in order to get it right. I've seen so many large companies get this wrong, so let me explain.

There are normally two types of automated testing:

  1. Unit testing (by no means should the component being tested go down a rabbit hole and test a whole bunch of other components; these dependencies should be isolated out)
  2. Integration testing or acceptance testing

So if unit testing is done right, i.e. the tests are *not* integration tests and they test the smallest testable unit, then unit testing honours point 6 (keep the build fast) on the above list. Who cares whether the build is fast? Well, you normally only have a finite number of build servers, and on those servers a finite number of CPUs and a finite amount of RAM, which means a limited number of agents. When all your agents are busy serving multiple simultaneous check-ins, builds get queued, and the more queued builds there are, the longer it takes to see feedback from a check-in/commit.

If you don't have a buddy build system (point 5 on the list – I'll explain what a buddy build is in a minute), imagine what can happen when you get lots of queued builds: if one fails, then it is very likely all the following builds will fail. This means someone has to sort out the mess, and that mess then breaks down the benefits of CI.

So going back to my original point, keeping the build fast is very important and often gets missed or not really thought about. I often see unit tests doing too much, meaning they take too long to execute, or being a hybrid mix of integration tests (like BDD/acceptance tests) and unit tests. Unit tests should only test a single class where possible; this keeps the build very fast.
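To illustrate what I mean by a small unit test, here is a sketch (the class names are invented and NUnit is assumed purely for illustration): the class under test depends only on an interface, the test supplies a trivial stub, and nothing touches a database or the network, so hundreds of these can run in seconds.

using NUnit.Framework;

public interface IExchangeRateProvider
{
    decimal GetRate(string currencyCode);
}

public class PriceCalculator
{
    private readonly IExchangeRateProvider _rates;
    public PriceCalculator(IExchangeRateProvider rates) { _rates = rates; }

    public decimal ToSterling(decimal amount, string currencyCode)
    {
        return amount * _rates.GetRate(currencyCode);
    }
}

// The real provider might call a web service; the test isolates that dependency.
class FixedRateStub : IExchangeRateProvider
{
    public decimal GetRate(string currencyCode) { return 0.5m; }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void ToSterling_AppliesTheExchangeRate()
    {
        var calculator = new PriceCalculator(new FixedRateStub());
        Assert.AreEqual(5m, calculator.ToSterling(10m, "USD"));
    }
}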

But then what about acceptance/integration/BDD tests? Normally such tests execute end-to-end, i.e. from a UI all the way to the database or downstream system and back. If you practise BDD you'll have these tests for all your use cases' main and alternate flows, so you will often have hundreds of them for a typical large enterprise system. These tests take much longer to run because they test the full stack; sometimes the downstream system is on another network, perhaps reached via an IPsec VPN tunnel to the other side of the world, or over the internet. Whatever the case, mixing these tests in with the unit tests is not a good idea.

So when should you run these long-running integration tests, and when should you run unit tests? Integration tests should normally run nightly, e.g. at 02:00 when the build servers are quiet and don't have much activity. Unit tests should run on every check-in/commit during the day for constant feedback.

My CI tool of choice is Microsoft Team Build (it comes with Team Foundation Server). Team Build 2010 allows you to easily set up controllers, so I often use one controller for integration and one for CI. These controllers can delegate to any number of build servers. With this kind of configuration, I normally have quite a meaty set of CI build servers (for running unit tests on each check-in) and a less powerful build server for deploying code and running integration tests.

The power of this is that the whole set of daily build activities is fast, thus satisfying the fast-build and constant-feedback requirement. Developers can also start an integration build any time they wish during the day, which will not affect the day-to-day running of a software development team.

5. Support for Buddy Builds

What the heck is a buddy build, I hear you say! I'm not sure if this is just a Microsoft term that has come from Redmond, but a buddy build is a way of ensuring your changes will integrate with the version on the server before they are actually committed to the branch. So how do you do that? In the old days, before software products did this for you, you would send your changes to a buddy who would merge them with theirs and then attempt to compile and run the tests before you committed your changes.

In TFS 2010, Microsoft introduced the concept of a gated check-in. When you check in, your changes actually go in as a shelveset (not to the main branch); Team Build then executes the build scripts against your shelveset first, and only if the build succeeds are your changes committed to source control – hence it is impossible to break the build!

I'm not sure about other build server technologies like Jenkins, TeamCity, Hudson etc., but if you want to read more about TFS 2010 gated check-in, see here: http://blogs.msdn.com/b/patcarna/archive/2009/06/29/an-introduction-to-gated-check-in.aspx

7. Must be able to run a local command-line single build that does everything that represents the build machine's process

Now this is very important. I see all too often that there are no command-line builds at all! And if there are some, you sometimes have to run more than one to get a complete picture of whether your check-in will succeed on the server or not.

The benefits of having a local command-line build are huge; here are a few reasons (a bare-bones example follows the list):

  1. It allows you to get a local copy of the solution running, i.e. setting up a web server, adding users, setting security permissions, deploying websites into a web server
  2. Have one single command-line build script (batch file or PowerShell script) that does everything your check-in will do, i.e. compile, run code analysis, run code style rules, run tests, etc.
  3. Enables faster development – geeks love command-line tooling!
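As a bare-bones example of what I mean (the solution name, test runner path and steps below are placeholders; your own script will run whatever your build server runs), a batch file in the root of the branch might look like this:

@echo off
rem build.cmd - one local command that mirrors the build server's process.
rem Compile, then run the unit tests; add code analysis, style rules, deployment etc. as needed.
msbuild MySolution.sln /t:Rebuild /p:Configuration=Release
if errorlevel 1 exit /b 1
packages\NUnit.Runners\tools\nunit-console.exe MyProject.Tests\bin\Release\MyProject.Tests.dll
if errorlevel 1 exit /b 1
echo Local build and tests passed.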

If you have read this far, congratulations! Until next time…

Happy building…

Sometimes you have a requirement to generate your C# class contracts from an existing WSDL document so they can be used by consumers. It might be that the customer simply wants to keep the existing service contract and in this case only the implementation needs to be developed.

The way to do this in .NET is to use the SVCUTIL.exe command-line tool that you get when you install Visual Studio. Most people will know this tool for generating C# client proxy classes to handle the service invocations and the marshalling of simple and complex data contracts on the client side. It can also handle generation of the C# service contracts – although the output is not pretty and will need to be cleaned up afterwards.

Just to note: when I say C# above, the tool also supports VB.NET; you just need to pass in the /l:VB switch. It defaults to C#.

So to generate your concrete service contract from an existing WSDL you can use the following syntax:

svcutil myservice.wsdl

If you're lucky you'll get a single myservice.cs source code file that contains everything, i.e. all the data contracts and the service contracts you need in order to implement your service. You'll need to clean this up, though, as the tool makes a mess of creating these contracts. You'll also get an XML configuration file that can be added to your web.config or app.config file, as it specifies the address, bindings and contracts for your service. It takes the address from the WSDL document and applies it to the config file.

If you’re not so lucky, you might get an error similar to the following:

D:\Workspaces\Simon\Project_Foo\Trunk\Foo\Source\Artefacts\WSDL>svcutil myservice.wsdl
Microsoft (R) Service Model Metadata Tool
[Microsoft (R) Windows (R) Communication Foundation, Version 4.0.30319.1]
Copyright (c) Microsoft Corporation. All rights reserved.

Error: Cannot import wsdl:portType
Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.DataContractSerializerMessageContractImporter
Error: Schema with target namespace 'http://schemas.foo/Services/Foo/2009' could not be found.
XPath to Error Source: //wsdl:definitions[@targetNamespace='urn:com:foo:services:foo:function:2011']/wsdl:portType[@name='PortType']

Error: Cannot import wsdl:binding
Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='urn:com:foo:services:function:feature:2011']/wsdl:portType[@name='PortType']
XPath to Error Source: //wsdl:definitions[@targetNamespace='urn:com:foo:services:function:feature:2011']/wsdl:binding[@name='EndpointBinding']

Error: Cannot import wsdl:port
Detail: There was an error importing a wsdl:binding that the wsdl:port is dependent on.
XPath to wsdl:binding: //wsdl:definitions[@targetNamespace='urn:com:foo:services:function:feature:2011']/wsdl:binding[@name='EndpointBinding']
XPath to Error Source: //wsdl:definitions[@targetNamespace='urn:com:foo:services:function:feature:2011']/wsdl:service[@name='Service']/wsdl:port[@name='EndpointPort']

Generating files…
Warning: No code was generated.
If you were trying to generate a client, this could be because the metadata docu
ments did not contain any valid contracts or services
or because all contracts/services were discovered to exist in /reference assembl
ies. Verify that you passed all the metadata documents to the tool.

Warning: If you would like to generate data contracts from schemas make sure to
use the /dataContractOnly option.

Looking through the errors, the first thing that comes to mind is that the separate namespace being imported could not be found, as per the following error message:

Schema with target namespace 'http://schemas.foo/Services/Foo/2009' could not be found

This might be the case if your WSDL document looks something like the following:

<wsdl:types>
<xs:schema>
<xs:import namespace="http://schemas.foo.com/Services/Foo/2009" schemaLocation="Context.xsd"/>
</xs:schema>
</wsdl:types>

You look at the above WSDL snippet and think: well, Context.xsd is present in the same folder as the WSDL document from which I'm trying to generate a code contract, so why can't SVCUTIL find it?

It can't find it because it's not looking for it. You have to supply SVCUTIL with all the dependent XSD files on the command line, along with the name and location of the WSDL document, like the following:

SVCUTIL myservice.wsdl context.xsd

Once you execute that command, you'll get a single C# file with the same name as the WSDL document (in this case myservice.cs) containing all the code required to implement the WSDL. This includes the correctly decorated service contract (interface) and the correctly decorated data contracts for use with WCF (Windows Communication Foundation). I admit the tool makes a mess of this, but it's a starting point and saves you a great deal of time, especially if your data contracts are large and complex, as is often the case in SOA environments.
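For anyone who hasn't seen it, the generated file (once tidied up) ends up looking roughly like the snippet below. The type and operation names here are invented for illustration; your file will use the names and namespaces from your own WSDL and schemas.

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract(Namespace = "http://schemas.foo.com/Services/Foo/2009")]
public class OrderDetails
{
    [DataMember]
    public string OrderId { get; set; }

    [DataMember]
    public decimal Total { get; set; }
}

[ServiceContract(Namespace = "urn:com:foo:services:foo:function:2011")]
public interface IOrderService
{
    // You implement this interface in your service class; WCF handles the marshalling.
    [OperationContract]
    OrderDetails GetOrder(string orderId);
}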
