I was in Zurich last week (Nov 21-25) for the Future of Software Engineering (FOSE) symposium, held at the ETH Zurich campus on the occasion of the 60th birthday of Bertrand Meyer, the inventor of the Eiffel programming language, which itself celebrated its 25th birthday on a third day held after the two FOSE symposium days. The speakers attracted to FOSE included many luminaries from the history of computing, as shown below in the panel at the end of day 1.
Here are some of the highlights from the speakers’ presentations.
Day 1
Michael Jackson (Open University UK) – Engineering and Software Engineering
Jackson is known for pioneering work on program design and particularly software requirements and specification. He spoke here on how the ‘engineering’ in ‘Software Engineering’ is still largely an aspiration, if we take the intended meaning of this term to be ‘reliable development of dependable computer-based systems’. Jackson said that there is a lack of artefact specialisation in software engineering, of the kind that occurs in other engineering disciplines. In other disciplines, e.g. civil engineering, building systems involves putting together components which are ‘named’ (i.e. they are a well-understood type of thing, like an ‘I-beam’) and whose characteristics and interfaces are well understood. Because software engineering lacks this componentisation, each job looks like a new one from scratch. This means that we can never progress from the ‘radical design’ mode (necessary when you invent something new, like Benz’s first motor car – completely custom from top to bottom) to the ‘normal design’ mode used in industries that build new versions of old concepts, like cars, skyscrapers, etc.
My reaction: Jackson is a great speaker, and presents very clear ideas. My 22 or so years in software engineering tell me that he is somewhere close to the mark with the ideas I quoted above.
David L Parnas (Middle Road Software Inc) – Precise Documentation
Parnas is great. He manages to rub everyone up the wrong way, in a friendly yet fierce manner, and like all proper intellectuals, can back up his points of view with very solid, long-thought-out data, information and knowledge. He divides the world between ‘real engineers’ and ‘the rest’ (i.e. hackers), and sees only the former as really having a clue about ‘systems’ and ‘design’, and therefore having any hope of being able to deliver proper software. I will admit (as an engineer who also happens to have a BSc in Comp Sci) to exactly the same prejudice! Parnas says the sorry state of software engineering is due to the failure to create proper documentation – not because we need paper, but because ‘proper’ documents record crystallised and agreed-to design decisions, at various levels of abstraction. My own observation is simply that most people don’t want to actually do ‘design’ because it entails thinking, which hurts, so they just start programming instead. Now, I do understand that some people do in fact do good design like that (and indeed, with a language like Eiffel, it is partly possible). But the majority don’t, and Parnas is right, even if his observation is couched in terms of software’s least popular activity, ‘documentation’.
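To make the idea of crystallised design decisions a little more concrete, here is a minimal sketch of my own (the class and feature names are mine, not Parnas’s – he favours precise tabular documents rather than code): a deferred Eiffel class whose contracts record design decisions before any implementation exists.

```eiffel
deferred class
	STACK [G]
		-- Illustrative sketch of 'design decisions recorded precisely':
		-- the contracts fix the agreed behaviour at an abstract level,
		-- independently of any implementation.

feature -- Access

	count: INTEGER
			-- Number of items held.
		deferred
		end

	item: G
			-- Item most recently pushed.
		require
			not_empty: count > 0
		deferred
		end

feature -- Element change

	put (v: G)
			-- Push `v' onto the stack.
		deferred
		ensure
			one_more: count = old count + 1
			on_top: item = v
		end

end
```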
Barry Boehm (USC) – Future Opportunities and Challenges
Boehm has been one of my heroes since reading his 1981 book ‘Software Engineering Economics’, in which he developed most of the economic measurement models still in use in the field today in some form. He gave a very enjoyable talk that touched on a modernised and somewhat formalised version of the classic Spiral model of development, now styled an ‘incremental commitment model’.
Niklaus Wirth
Wirth is one of the true pioneers of computing, and the inventor of Algol W, Pascal, Modula-2 and Oberon. He is now in retirement and (according to him) playing with programmable logic circuits via Verilog. Wirth was professor of computer science at ETH for nearly 30 years. He gave an amusing presentation, starting with some reminiscences, as one in his position surely must… He continued by talking about the need for co-design of software and hardware, and the possible end of the general-purpose von Neumann architecture. Lastly he bemoaned the current state of the industry, focussed as it is on ever more complex applications and frameworks, yet built on outdated programming languages (by which he meant principally C, and the system programming built on it, such as Linux). During the panel, Wirth’s detestation of C was used to comic effect, and when truly pressed by an audience member, he admitted that he thought that system programming should be done in Oberon.
My take: it is hard not to smile when the opportunity comes to hear one of the original masters of computing talk about the topic he co-invented, and which now underlies nearly the entirety of the modern world. I personally don’t agree that C is such an awful language. From the point of view of abstraction it is poor, but it should not be thought of as an abstract language; rather, it is a structured machine language in which the physical aspects of the machine are directly exposed. I programmed in C for around 7 years when doing real-time systems, and when discipline is applied, the result is not so bad. If I could choose a language for system programming today, it would probably be Eiffel, since I would rather have an OO language that can still talk to C, but in which I can write clearer abstract models and, particularly, use contracts.
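A minimal sketch of what I mean, assuming a purely hypothetical C ring buffer (the names rb_create, rb_put and ring_buffer.h are invented for illustration): the C layer does the low-level work, while the Eiffel wrapper carries the abstract model and the contracts.

```eiffel
class
	RING_BUFFER
		-- Hypothetical Eiffel wrapper over a C ring buffer: the contracts
		-- express what the C layer cannot say for itself.

create
	make

feature {NONE} -- Initialisation

	make (a_capacity: INTEGER)
			-- Create a buffer with room for `a_capacity' items.
		require
			positive: a_capacity > 0
		do
			capacity := a_capacity
			handle := c_create (a_capacity)
		ensure
			capacity_set: capacity = a_capacity
		end

feature -- Access

	capacity: INTEGER
			-- Maximum number of items.

	count: INTEGER
			-- Current number of items.

feature -- Element change

	put (v: INTEGER)
			-- Append `v', delegating the low-level work to C.
		require
			not_full: count < capacity
		do
			c_put (handle, v)
			count := count + 1
		ensure
			one_more: count = old count + 1
		end

feature {NONE} -- C interface (names invented for illustration)

	handle: POINTER
			-- Opaque pointer to the C-side structure.

	c_create (n: INTEGER): POINTER
		external
			"C inline use %"ring_buffer.h%""
		alias
			"return rb_create((int)$n);"
		end

	c_put (buf: POINTER; v: INTEGER)
		external
			"C inline use %"ring_buffer.h%""
		alias
			"rb_put($buf, (int)$v);"
		end

end
```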
Day 2
Day 2 included talks by Yuri Gurevich (Microsoft Research), Andreas Zeller (Saarland University), Rustan Leino (Microsoft Research), Joseph Sifakis (Verimag, France), and of course Bertrand Meyer. Gurevich’s work on ‘Evidential Authorisation’ (for distributed systems) is well worth looking up, as it applies to sharing information in a trusted manner by agents who don’t necessarily trust each other. Similarly, Sifakis’ BIP (Behaviour, Interaction, Priority) formalism for defining real-time systems in terms of components looks interesting.
Otherwise most of what was presented was to do with code verification (i.e. proof methods and tools) and validation (by automated testing). These presentations really did start to feel like the future of software engineering, since they would lead to tools that would not only prove and test a programmer’s code, but make suggestions for fixes to both contracts (if used) and the code itself, based on failing and succeeding test runs. One of the unsettling things about this kind of approach is that it might even work for people who don’t bother to do design and just code by trial and error.
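To make the idea concrete, here is a small sketch of my own (not an example from the talks): a routine whose contracts give an automated tester something to check against and a prover something to verify. A tool like the ETH group’s AutoTest can generate inputs satisfying the precondition and report any run that violates the postcondition.

```eiffel
class
	MATH_SKETCH
		-- Illustrative only: a routine whose contracts serve both as a
		-- test oracle (validation) and as proof obligations (verification).

feature

	isqrt (n: INTEGER): INTEGER
			-- Largest integer whose square does not exceed `n'.
		require
			non_negative: n >= 0
		do
			from
				Result := 0
			until
				(Result + 1) * (Result + 1) > n
			loop
				Result := Result + 1
			end
		ensure
			not_too_big: Result * Result <= n
			maximal: (Result + 1) * (Result + 1) > n
		end

end
```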
Meyer’s presentation focussed mainly on SCOOP (Simple Concurrent Object-oriented Programming), which is in my view a real advance. After various false starts over the last 15 years, I think he has finally got it right. Only one change to the language itself is required: the ‘separate’ keyword before a type, which indicates that objects of that type used in the declaration context may be created on a different processor and executed in parallel with the calling object. The other change is semantic, to the understanding of pre-conditions: they must now be understood as wait conditions. This is one of those deep realisations that Meyer admits took him a long time to come to terms with. But in hindsight it seems obvious: with concurrent computing, it appears unavoidable that correctness statements (the contracts) will have a temporal aspect – we now need to talk about when they are true. All I can say is that Eiffel looks more attractive than ever, and as someone who has used it for over 20 years, there is no hope of my stopping now.
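A minimal sketch of the mechanism as I understand it (the buffer class and feature names are my own, not from Meyer’s talk): the only syntactic addition is ‘separate’, and the precondition on the separate argument no longer raises an exception when false – the call simply waits until it becomes true.

```eiffel
class
	PRODUCER
		-- SCOOP sketch: `buffer' may live on another processor because its
		-- type is marked `separate'; the precondition below acts as a wait
		-- condition rather than a correctness check that can fail.

feature

	store (buffer: separate BOUNDED_BUFFER; value: INTEGER)
			-- Insert `value' into `buffer', waiting until there is room.
		require
			not_full: not buffer.is_full
		do
			buffer.put (value)
		end

end

class
	BOUNDED_BUFFER
		-- A hypothetical bounded buffer of integers, shared between processors.

create
	make

feature

	make (n: INTEGER)
			-- Create a buffer with room for `n' items.
		require
			positive: n > 0
		do
			capacity := n
			create items.make_filled (0, 1, n)
		end

	capacity: INTEGER
	count: INTEGER

	is_full: BOOLEAN
			-- Is there no room left?
		do
			Result := count = capacity
		end

	put (v: INTEGER)
			-- Store `v'.
		require
			not_full: not is_full
		do
			count := count + 1
			items [count] := v
		ensure
			one_more: count = old count + 1
		end

feature {NONE} -- Implementation

	items: ARRAY [INTEGER]

end
```

Calling store on a full buffer does not fail; under SCOOP the call is simply scheduled once another processor has consumed an item and made the wait condition true.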
Day 3 – Eiffel Day
A third day was held for Eiffel people and involved presentations from the many excellent projects being done at ETH Zurich under Meyer’s guidance. The research group there is very bright, and I would say at risk of having far more fun than I ever had in Comp Sci… computing is supposed to be serious! I am pretty sure that this speaks to Bertrand’s qualities as both an intellectual lead and an educator. Paul-Georges Crismer gave an excellent presentation on his experiences over 15 years of using Eiffel in industry. I myself, as (I think) the longest-time user of Eiffel in the room (20 years) apart from Bertrand himself, gave a presentation on Eiffel in industry (including in finance and openEHR), with some thoughts about where things could go to make Eiffel more mainstream. My presentation: PDF (2Mb).
Conclusion
For me, it is clear that ‘software engineering’ is still more of an aspiration than a reality. It does exist and is done well in those companies with CMM level 3 and above ratings, typically in real-time control, air traffic control and military applications. Otherwise, most of what happens is something else.
The modern notions of agile programming, extreme programming and the general modus operandi of the open source movement are in my view partly recapitulations of things real software engineers knew 20 years earlier (e.g. test-driven design: we were doing it in 1988, based on IEEE specifications and fully version-controlled code), and partly real innovation, in the sense of being able to organise armies of people who have never met to create large frameworks quickly. I suspect that the coding efficiency of these methods is not that high (i.e. a low output-to-work ratio), but the quality and reliability of things like Apache and Linux cannot be argued with. On the other hand, the use of a language like Java in routine software development felt like a backward step when it came out 15 years ago, and it feels even worse today, compared with what is available (in my own company, developers continually trip up over the appalling implementation of generic types, the poor type system and the lack of proper contracts). Offerings like Heroku show that today’s developers don’t need great languages to think (very) big.
My main observation is that a majority of developers still don’t want to do much ‘design’ because that involves thinking, may even involve mathematics, and worse, time away from the computer! This is not specific to people working in computing – it is a universal failing. Many engineers would get away with such laziness if they could, but they can’t, for a simple reason: the delivered system will be made of poured concrete and steel, it will cost millions, and getting it wrong will kill the company. Saving software engineering, in my view, cannot be done by trying endlessly to convince developers about the ‘right way to do things’ and how it will save time in the long run. That has failed for the last 40 years – the only ones who really do it properly are those for whom failure has consequences similar to those in civil or mechanical engineering (i.e. producers of real-time software that controls trains, power stations, etc.). The only answer is to start making software vendors legally liable for their products (and their failures) in the same way as companies who produce cars and mobile phones are: we need warranties on software.
Proceedings of the symposium are published by Springer, edited by Sebastian Nanz (ETH), who also very ably organised the conference.
Re: Jackson’s comments about componentisation.
I agree in principle, and come across the problem of having to re-invent solutions quite frequently. In large part I believe that our tools are not yet sophisticated enough. Mathematical proofs are a bit like software, but are well componentised (free-standing, in a sense). Software is not yet sufficiently abstract that the implementation specifics can be easily tuned to suit the application. I still have to deal with bare-metal and OS-level code, where “simple” issues like whether or not dynamic (heap) memory allocation can be used matter (typically it can, but software must be designed as though it cannot). A colleague recently had to re-write some perfectly useful code because it had to run in device-driver context, where STL containers weren’t allowed (but which were liberally used in the original source). It’s hard enough making and meeting nominal functional specifications, but when you also have to import the specification soup that defines the minutiae of the operating environment, re-use even at the cut-and-paste level becomes impossible.
An I-beam is a useful engineering component because it is not a thing that depends on any other thing for its definition or existence. It can be understood completely in and of itself. Software isn’t like that, because software mixes algorithms and implementation details in too-intricate a fashion. Software is (outside of some very rarefied niche languages) over-specified. (I don’t want to start a language war, but I started to glimpse the possibilities of separating algorithm from implementation when I used Scheme to implement a couple of projects, a year or two ago. Very exciting. Pity it doesn’t help me with the dynamic memory issue…)
Andrew, I think the observation in your second paragraph is well made. I don’t think it necessarily applies to all of computing – for example, there are some reasonably well-defined components available (web servers, Unix utilities) – but the ‘too-intricate’ mixing is also well known, particularly closer to the metal.