The other night I attended the BCS Roger Needham lecture held at the Royal Society in London – this is an annual event that recognises outstanding research work in the IT world and provides a platform for the author to present. This year it was the turn of Dr Byron Cook from Microsoft Research, speaking on the subject of “Proving that programs eventually do something good”. Byron reminded me of a cross between Jarvis Cocker from Pulp and Smart421's WebSphere practice manager, and there is a bit of the IT research pop star about him (if such a thing exists!).
My attendance was part of my strategy to follow Guy Kawasaki's advice – “eat like a bird, poop like an elephant” – although I must admit that beforehand I was pretty sceptical of how useful I would find the event.
So I have to say straight off – the presentation was utterly, utterly fascinating. He was a great speaker – interesting, amusing, full of great anecdotes and just eccentric enough to be a convincing academic. To be honest, he is actually quite a practical academic, having produced real code solutions that solve real problems and are in use today. I found myself reeled in and attracted to the idea of the purity of academic research – at least until he got started on the maths and the set theory :). The basic message of the research papers he and his colleagues have worked on is that static analysis of code can prove that some programs will eventually terminate (i.e. not hang), and that some other programs will hang under certain conditions, but that this cannot be proven for all programs. The nature of some programs means that it just cannot be proven either way. Ever. Turing worked that out years ago with the halting problem. But his vision is that it will be possible to increase the proportion of programs in the “we definitely know if we have a problem or not” category to a useful level, giving you a kind of automatable 100% test coverage for some programs.
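To make that distinction concrete, here is a little Python sketch of my own (not from the lecture): one loop whose termination is trivially provable, because its counter strictly decreases towards zero – exactly the kind of “ranking function” argument a termination prover finds automatically – and the famous Collatz iteration, whose termination for all inputs remains an open mathematical question that no analyser can currently settle.

```python
def countdown(n):
    # Provably terminates for every integer n: the value strictly
    # decreases each iteration and is bounded below by zero, so a
    # termination prover can find a ranking function for this loop.
    while n > 0:
        n -= 1
    return n

def collatz_steps(n):
    # Terminates for every positive n anyone has ever tried, but whether
    # it terminates for *all* positive integers is the open Collatz
    # conjecture - this loop sits in the "cannot prove either way" bucket.
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(countdown(10))      # 0
print(collatz_steps(27))  # 111 - a famously long Collatz trajectory
```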
So in the future, as well as just using static code analysis tools like CheckStyle, lint and FxCop as we do today, we could also perform static analysis that proves (I say again – for some programs) that they will not hang. Impressively, he has already demonstrated this by finding bugs in Windows device driver code (up to 30k lines in size) that had previously gone undiscovered despite the drivers being in use in the field. Of course there are many barriers to this becoming mainstream. Two that struck me were that it sounds like you need pretty huge processing power (so the analysis could typically only be run overnight, perhaps using a high degree of parallel computing capacity – although fortunately the algorithm parallelises quite nicely), and that the range of ‘checkable’ programs and data structures is currently quite limited – e.g. today we can handle linked lists but not tree structures.
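The linked-list case is worth a sketch, because it is exactly the shape of loop that hangs drivers. Below is my own illustrative Python version (the real bugs were in C): the traversal terminates on any finite acyclic list precisely because the cursor advances on every path through the loop body – and the classic driver bug is a branch that forgets that advance, which is what a termination prover flags.

```python
class Node:
    """A minimal singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def find(head, target):
    # The cursor moves one link forward on every iteration, so on any
    # finite, acyclic list the loop provably terminates - the property
    # a termination prover establishes. The bug such tools caught in
    # driver code is a branch that returns to the loop top *without*
    # advancing the cursor, hanging forever when the target is absent.
    node = head
    while node is not None:
        if node.value == target:
            return node
        node = node.next  # omit this on some path and the loop may never end
    return None

lst = Node(1, Node(2, Node(3)))
print(find(lst, 2).value)  # 2
print(find(lst, 9))        # None
```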
But Byron gave me hope that static code analysis is not a ‘dead’ area and that advances are being made that can lead us to produce better software in the future. If you want to read more, have a look at the Microsoft Research site.