Lessons From the Fifty-Year Quest to Turn Programmers into Software Engineers

Title: Lessons From the Fifty-Year Quest to Turn Programmers into Software Engineers
Date: May 11, 2021
Duration: 1 HR

SPEAKER
Adam Barr, Software Consultant, Crosslake Technologies

MODERATOR
Will Tracz, Former Chair of ACM SIGSOFT; Member, ACM Professional Development Committee


Defensive C++: Software Engineering Principles & Types of Errors (Free course for ACM Members)
Performance Engineering: Potential Performance Issues in Software Development (Free course for ACM Members)
Software Engineering in the Era of Cloud Computing (Free book for ACM Members)
Interdisciplinary Approaches to Information Systems and Software Engineering (Free book for ACM Members)
Human Factors in Global Software Engineering (Free book for ACM Members)
The Problem with Software: Why Smart Engineers Write Bad Code (Free book for ACM Members)


One of the aspects of “engineering” is the practice of learning from mistakes. Many engineering disciplines in the United States have government bodies that seek root cause answers and publish findings/recommendations. The Biden administration recently created the Cyber Safety Review Board (charter, website) in the Department of Homeland Security that is modeled after the NTSB for learning from cyber-security incidents. What impact will this have on the evolution of software development as an engineering discipline?


Question: When I was at college (CS at UMich 1980), I compared the qualifications for “software engineer” versus, e.g., “civil engineer,” which requires passing examinations. What about that? Should we have qualifying exams for software engineering? After all that education, I have to laugh that the next generation goes to a “boot camp” for 3 months and then describes itself as software engineers. It’s still the wild west.
Should we have a national and international licensing requirement for software engineering that reinforces a national or international SWEBOK?
One thing I’ve heard for years is about the need for a software engineering certification, à la the PE certifications for pretty much any other engineering discipline. Thoughts?
Answer: I think we should absolutely have qualifying exams and certification for software engineers—but we are not ready for it, because we don’t know what those qualifications should be. There is also the question of whether we need specialized qualifications (e.g. security, database, UI). So I would love to see an industry group set a goal of having qualifying exams in N years, and then work towards that.
I’m not a fan of SWEBOK specifically—it reads like a brain dump of thoughts and opinions about software development, with no guidance (backed by research or not) on what is useful and what is not. Someone who memorized all of SWEBOK wouldn’t automatically be a competent developer.

Question: What about FORTRAN and COBOL, two early and, obviously, very influential languages? FORTRAN was created by IBM, and COBOL was a government effort consolidating various commercial languages.
Answer: Yes, I simplified the list of languages I talked about. The difference from now is that back then some languages that came out of academia were useful (and used) in industry, because the scope of many problems in industry was similar to what academia worked on.

Question: So, are the only important and persistent programs/software written at the organizational level, i.e., bottom right? Are you saying that there are no important top left programs that persist over years and are widely used?
Answer: I’m sure there are some important top left programs. The TeX typesetting system, at least the core, is a top left program. But most of what industry does is bottom-right, or at least bottom-left for some smaller companies where the work fits in one team (but they still have turnover).

Question: Do we need a resurgence of the pattern movement?
Answer: Patterns were useful as clarification on how to use object-oriented languages effectively, particularly interfaces. They lead to code that is easier to unit-test and therefore refactor, which is a good thing. But they aren’t a silver bullet. Also, the initial Gang of Four presentation was too dense and memorization-based, and required the “Design Patterns Explained” book to actually synthesize out the real wisdom.
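
A minimal sketch of that idea (illustrative Python, not from the talk; `Notifier`, `OrderService`, and the other names are made up): programming against an interface lets a test substitute a fake implementation, which is what makes the code easy to unit-test and therefore to refactor.

```python
# Sketch of "program to an interface" for testability (all names invented).
from abc import ABC, abstractmethod


class Notifier(ABC):
    """Interface the business logic depends on."""
    @abstractmethod
    def send(self, message: str) -> None: ...


class EmailNotifier(Notifier):
    """Production implementation (details omitted)."""
    def send(self, message: str) -> None:
        print(f"emailing: {message}")  # stand-in for real email code


class OrderService:
    """Depends on the interface, not a concrete class (dependency injection)."""
    def __init__(self, notifier: Notifier) -> None:
        self.notifier = notifier

    def place_order(self, item: str) -> None:
        self.notifier.send(f"order placed: {item}")


class FakeNotifier(Notifier):
    """Test double: records messages instead of sending them."""
    def __init__(self) -> None:
        self.messages = []

    def send(self, message: str) -> None:
        self.messages.append(message)


# A unit test can now exercise OrderService without any email infrastructure.
fake = FakeNotifier()
OrderService(fake).place_order("widget")
assert fake.messages == ["order placed: widget"]
```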

Question: Do you think coding camps actually fill the gap, or is it just that they focus on a subset of CS that is hyper-relevant to a particular sub-domain (e.g. full-stack development) at a particular moment in time? For me, I think if academia is to remain relevant, we need a method for continuously keeping up to date with the needs of industry - whatever that means. Would it be a good idea for universities to include boot-camp-like activities? Why/why not? Thanks.
Answer: I do think they fill a gap, not only in being up to date on languages and tools, but also in being up to date on techniques such as unit testing (which isn’t really “up to date”; it has been established for decades). I don’t think academia should exist only to train for industry (written as the son of a math professor) or chase after the latest languages and tools (e.g., whatever JavaScript framework is hot this year), but there is a middle ground of areas like unit testing that they should cover, rather than just stick with the “timeless” concepts like algorithms, or languages like C++ that are way past their expiration date. I had a few of my classes in college taught by visiting professors from Bell Labs and that was quite useful for getting some information from the “front lines” (although Bell Labs wasn’t quite the front lines, they told some good stories).

Question: My question is about XP/Agile. The quote from [name withheld] perhaps takes it too far, indeed. But what’s your take on the process of requirements discovery? How did Microsoft ensure that the software they produced didn’t end up unused, solving problems that didn’t exist for users that didn’t exist either? Should, in your opinion, devs pay any attention at all to the problem space, or should specs be the ultimate source of truth for them?
Answer: Requirements discovery is very important, and some people trying to “fix” software, like the “Essence of Software Engineering” people (the ones working on SEMAT), are focused on that as the key problem to tackle. And certainly, one of the big wins from Agile is shortening the feedback loop with customers rather than trying to predict what they wanted. Microsoft has several layers between the customer and the development team, and I think in retrospect they do a bad job with what many companies call the “Product” team (meaning Product Management, the ones who interface between customer and the development team), because they hire computer science majors for the role rather than people who understand customers and/or UI design, and they consider them part of engineering rather than peers of engineering. Microsoft has at times delivered software that people did not want, or at least features they did not want. But Microsoft is giant and the lessons from there don’t necessarily apply to most companies. I think being Agile, meaning delivering small updates frequently, is enough for most companies to be customer-connected without requiring particularly formal requirements discovery. In any case I am more interested in the path from requirements to code, not customers to requirements. If you are a developer and it is someone else’s job to figure out what the requirements are, I would let those people handle it and not try the “I’m a dev so I know more than you do about your job” approach. Same thing with UI/UX design, it is important, but not what I am concerned about.

Question: Regarding the old adage “the most reliable code is the code you never have to write”: has there been any research on the distribution of bugs in third-party vs. first-party code? In my own experience it’s my own code that has the most problems. Could we one day get to a point where the lower-level libraries are so stable and cover so many use cases that we need minimal first-party code?
Answer: What I hope we get to is a situation where when considering adopting a low-level library, there are known artifacts you can trust to help you, e.g. the unit test coverage results, the static analysis results, the pen testing results; as well as some sense of language suitability to the task. In theory code that is used more broadly (meaning third-party code in this case) is more likely to have its bugs found, but you have situations like the log4j exploit where a critical bug was still missed by everyone.

Question: It appears in the future energy will be more expensive. As we deal with global warming and shift to renewables, efficiency appears to be rising in importance. Are we coming to the end of resource waste caused by dynamic languages and poor engineering choices that trade efficiency for power?
Answer: I would still prefer to focus on clean, maintainable code even if it is less efficient. If a dynamic language gives you that, I’ll take the tradeoff of less efficiency. I do think microservices architectures, particularly serverless ones, will in general be more resource-efficient.

Question: How does the quality of software dev in open source systems compare to that of proprietary companies - if it’s possible to compare!? Do you believe that having free/libre software will improve the security of that software?
Answer: It should be possible to compare although I don’t have any studies handy. Certainly the “many eyeballs” theory, that open source would have fewer bugs because more people could see the code, didn’t pan out, since Linux also has exploits, open-source SQL databases still have bugs, etc. Anyway, for a product like Windows there are thousands of developers who could look at the code, so any “many eyeballs” effect could happen in the cathedral as well as the bazaar. I do think that an OSS project that is worked on by a bunch of programmers who don’t work for the same company (and may have never met in person) leads to some better engineering, for example cleanly defining APIs to connect two components if neither component is the “boss” (e.g. inside Microsoft people working with Windows APIs often had to work with whatever Windows provided, because Windows was so important inside Microsoft that it had the upper hand in any discussions—this is less likely to happen in an OSS project).

Question: Modularization of software has always been the key to good “engineering”, but has always been hard for most “programmers” to do well. Aren’t microservices just modularization on a more complicated platform? If “not done well” (which it often isn’t), doesn’t that just make things worse?
Answer: Yes, microservices is modularization, but with the ability to formally define/test interfaces, provide automated failover/restart, and scale independently in response to actual need, while having the flexibility to choose the language and API format (REST, GraphQL) on a per-microservice basis. It is hard, but it is one of the things programmers will have to get better at to become “real” engineers.
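
As a rough illustration of what “formally define/test interfaces” can mean in practice (hedged Python sketch with made-up service and field names; real projects would typically express the contract in OpenAPI or JSON Schema, but the idea is the same):

```python
# Toy contract check for an imagined "users" microservice (all names invented).
def get_user(user_id: int) -> dict:
    """Handler standing in for GET /users/{id}."""
    return {"id": user_id, "name": "example", "active": True}


# The "contract": required response fields and their types.
CONTRACT = {"id": int, "name": str, "active": bool}


def test_get_user_matches_contract() -> None:
    response = get_user(42)
    for field, expected_type in CONTRACT.items():
        assert field in response, f"missing field: {field}"
        assert isinstance(response[field], expected_type), f"wrong type: {field}"


test_get_user_matches_contract()
print("contract check passed")
```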

Question: Considering more companies are no longer requiring a 4-year degree (e.g. Apple, Google, Elon Musk, etc.) how do we get to the quality of skill required to be a software engineer if a 4 year degree is no longer required?
Answer: Microsoft has never explicitly required a 4-year degree. I hope eventually we do get to a stage where a 4-year degree is required, because there is agreement on what needs to be taught to software engineers, and it takes 4 years (or 2 upper-class years) to teach it. But we’re not there yet; a lot of a computer science degree is just practicing writing code, which you do on your own, or intensively in a coding camp.

Question: What impact on engineering do you anticipate as technologies like GitHub Copilot and other AI-assisted development become more mainstream? What about AI creating the software? Any hope here?
Answer: As with anything involving AI, I think it can be very useful in constrained situations. Things like auto-complete and ensuring code-comment consistency can be helped by AI. I don’t think AI will be able to make judgments such as “this code meets the spec” or “this code is bug-free” however. The current crop of “Big Code” studies, using machine learning to detect unusual patterns in your code compared to a large corpus of code, is a useful way to find certain bugs. But it will just be another tool, like rules-based static analysis. No silver bullet!

Question: What sense of “problem-space” ought we define from the quote on slide 14? Is this a psychological term? – “… engineering is predicated … Object thinking focuses our attention on the problem space rather than the solution space” – [name withheld]
Answer: I don’t really know what that is supposed to mean; I was showing this quote as an example of how silver bullets are hyped up.

Question: What’s the best resource that summarizes all the scientifically found best practices based on empirical studies?
Answer: The book “Making Software” edited by Andy Oram and Greg Wilson is very good, although it is from 2010.

Question: With regard to empirical studies papers, is there still research on measuring software productivity, or have we given up? What are the trends in “capability per labor hour”?
Answer: Yes, people do study it; a contact at Microsoft Research pointed me to this paper that presents a nice model of 5 different dimensions of productivity: https://dl.acm.org/doi/10.1145/3454122.3454124.

Question: So what IS software engineering? Are you saying we should just keep up on good development practices? Is there anything fundamentally different about “engineering” software rather than just “developing” it? I am surprised that you talked a lot about programming, but not much about software engineering methodologies. Are you equally critical of those?
Answer: I’m not concerned about the term used. Whatever you want to call the process of someone wanting to write software to do something, and having it actually do that thing, is what I am talking about. Probably this should be separated from “computer science”, which is more theoretical; and some schools do in fact offer these as separate majors.

Question: Given that you’ve seen that knowledge from industry research labs like Microsoft Research isn’t adequately filtering down to the product teams, what changes do you think should be made to better encourage the practical application of research findings?
Answer: In a corporate setting it usually comes down to risk/reward and how people are incented. Unfortunately, empirical studies come with (appropriate) caveats about their limited applicability and how further study is needed and all that, and Agile training comes all neatly packaged with extravagant, hard-to-disprove claims, so it is less risk for a manager to follow Agile. Ideally someone would package up the most useful claims in a book that offered a complete approach, but I don’t know of anyone who has done this.

Question: I guess I have a broader question around how can academia remain relevant going forward? Especially if we’re not producing impactful research or industry ready graduates - in the UK, the early years curriculum has kids able to program in two languages by the time they are eleven (typically Scratch and Python). So by the time they get to University, top-left should in theory be pretty much covered.
In your opinion, what is the best way to give students a “taste” during college about the environment in the “bottom right world”?
Answer: I think having the universities focus more on studying code and modifying code, specifically large open-source projects, would be very helpful. A lot of universities are moving to student projects in groups of 3-4, and maybe trading off to test another team’s code, which is better than not doing that, but it’s still “top left” work. For better or worse, university CS programs will remain the primary source of programmers in the short term, so they may not have much incentive to change. It may require some “up and coming” schools to show they do a better job with a more industry-focused approach, but you would need industry to recognize this in order to change how other schools approach it. Status in academia is often based on publications and if it is easier to publish work on a new algorithm than on an empirical study, then that will continue. It may be that academia needs a bit of a “generational reset” as well.

Question: So, to what extent do you think the issues noted at the NATO conferences in 1968-70 have been solved (well enough)?
Answer: They have not. The problems identified at the 1968 conference still exist, and the communications gap identified at the 1969 conference still exists. Unfortunately, that is why all the quotes I used in my presentation could be 25+ years old and still apply.

Question: Related to what you said about younger generations using string manipulation in C# compared to older generations who may understand more about what’s going on under the hood: were the exploits you mentioned related to the security of that code, and to how it could easily turn into someone taking over a system via that program?
Answer: The exploits that hit Microsoft in the 2000s (Code Red/Slammer/Blaster/Sasser/etc) were all because a) programmers wrote network-facing code in C because it was fast b) programmers refused to use any sort of standardized string-manipulation library in C because even that wasn’t fast enough. So they hand-wrote each bit of string manipulation and they sometimes made mistakes (particularly when using Unicode strings where the byte count and letter count don’t match). The details of how a buffer overflow allows arbitrary code to execute relates to the specific way local string variables are allocated on the stack, which is too complicated to get into here, but the reason the local string variables were allocated on the stack instead of the heap, which would have been much less exploitable, is because (you guessed it) it was faster (and also required fewer lines of code).
You also had some other exploits, particularly in Office opening files in the old binary format, that were because of integer overflow, which once again was exacerbated by over-focusing on performance.
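
The C exploits themselves are not reproduced here, but the byte-count vs. letter-count mismatch mentioned above is easy to see even in a memory-safe language (illustrative Python; the string is arbitrary). A buffer sized by counting characters but filled by copying bytes, or vice versa, is exactly the kind of hand-rolled mistake that turned into an overflow.

```python
# Character count and byte count diverge once the text isn't plain ASCII.
s = "héllo\u4e16\u754c"           # mixed Latin and CJK characters

print(len(s))                      # 7 characters
print(len(s.encode("utf-8")))      # 12 bytes in UTF-8
print(len(s.encode("utf-16-le")))  # 14 bytes in UTF-16 (Windows wide strings)
```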

Question: Experimental empirical studies are often fairly small scale, so it’s sometimes hard to draw conclusions about their applicability to bottom-right style development. What are your thoughts about effective understanding of empirical studies?
Answer: That is true, and it’s why the Mining Software Repositories work is so interesting—we now have large codebases to study! But even small codebase studies can yield interesting results, just as learning “programming in the small” (aka the top left quadrant) is still useful for “programming in the large”: necessary but not sufficient.

Question: Where do you draw the line between “optimization” and avoiding naive and known-bad approaches? I think it’s common to see ‘top-left’ programmers make very poor decisions when tackling systems problems. How best might we strike that balance?
Answer: I think “top left” can teach you to avoid some obvious mistakes, or at least give you a sense of O(n) vs. O(n^2), etc. (even if industry rarely requires O() analysis). For the actual runtime performance—I say write the code cleanly and surround it with good monitoring and then see what happens. I round Knuth’s 97% of optimization being premature up to 100%, because you just never know what the actual bottleneck will be until you run it, so don’t pollute your code with performance optimizations ahead of time.
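
As a concrete (and purely illustrative, not from the talk) example of the kind of obvious O(n) vs. O(n^2) difference that “top left” training helps you notice:

```python
# Accidental O(n^2): membership tests against a list inside a loop,
# versus the O(n) version using a set.
import time

n = 10000
needles = list(range(n))

seen_list = list(range(n))
start = time.perf_counter()
hits = sum(1 for x in needles if x in seen_list)   # each 'in' scans the list
print("list:", round(time.perf_counter() - start, 3), "seconds,", hits, "hits")

seen_set = set(range(n))
start = time.perf_counter()
hits = sum(1 for x in needles if x in seen_set)    # hash lookup per 'in'
print("set: ", round(time.perf_counter() - start, 3), "seconds,", hits, "hits")
```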

Question: Would you recommend “Code Complete”, by Steve McConnell, for college/university professors/students?
Answer: I do like “Code Complete”, McConnell is a good writer and he actually references empirical studies! Except then in “Code Complete 2” he realized there hadn’t been a lot of studies in the intervening decade, and almost 20 years later there still haven’t been a lot more.

Question: Did you read Coders at Work book?
Answer: I’ve read “Coders at Work” and “Programmers at Work”; they are interesting for the stories. The problem is that a single person who has been successful doesn’t necessarily have insight into which parts of their work habits and processes are important vs. which parts are unimportant or possibly even negative. This is the “history is written by the winners” bias. E.g. you could look back at Microsoft in the 1990s and think “Wow, the processes they used must have been great given their success”, but they certainly had flaws in their processes; for example it was a heavily “devs throw it over the wall to test and then blame test for any bugs that weren’t found” culture even though this was already known not to work well.

Question: What makes ‘software engineering’ unique compared to other engineering disciplines? What is your personal definition of “Software Engineering”?
Answer: I think the completely wide-open nature makes it different. You can write anything in any language using any process. I think true “Software Engineering” would mean being able to write code and make accurate predictions about how reliable, scalable, and secure it would be, and also how long it would take to achieve a desired level of reliability, scalability, and security.

Question: On the 3 books shown, how are they to be rated on the “empirical study” scale? I understand that they are more from direct experience. Is that anecdotal?
Answer: They are anecdotal, certainly not based on empirical A/B studies, but Brooks and Mills managed large software projects, and Weinberg consulted on a lot of software projects, so they had a broad view of software engineering which they synthesized into their books.

Question: Any suggestions on great code to walk through for students? – especially in the popular languages today (python, go etc.)
Answer: No specific suggestions, but I would look for something that is widely used and has a lot of contributors, since that tends to mean it has arrived at some process standards that are useful to understand.

Question: You didn’t mention debugging much; are there efforts between industry and academia to reconcile what’s needed and what’s taught?
Answer: This is another area that is completely ignored by academia. Even on “top left” projects in school, students are left to their own devices to figure out how to debug them, and then debugging “bottom right” projects (large bodies of code that you likely didn’t write, perhaps running as a service that you can’t interrupt, with bugs that don’t always reproduce) is a whole other art. FWIW my earlier book “Find the Bug” talks about this a bit, and there are some other decent books on debugging out there. “Effective Debugging” by Spinellis looks good, although it came out after my book so I didn’t cover it when I was surveying writing on debugging.
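
For what a first exposure might look like, here is a minimal, hypothetical example of the kind of thing students are rarely shown: using Python’s built-in debugger to inspect state at the point of failure rather than sprinkling print statements.

```python
# Tiny buggy function; breakpoint() drops into pdb when it is reached,
# where "p values", "p total", "n" (next), and "c" (continue) are enough
# to see why the call below fails.
def average(values):
    total = sum(values)
    breakpoint()
    return total / len(values)   # ZeroDivisionError for an empty list

average([])
```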

Question: What changes would you make in teaching of introductory programming that you believe might help? The code has to be small and simple at that stage, but is there anything we should be focusing on, in your opinion?
Answer: For introductory programming I think the biggest thing is making sure that the course is accessible to all, that is, an “Introduction to Programming” class in college should not favor people who programmed in high school (and therefore discourage people who didn’t). If you read “Unlocking the Clubhouse” it talks about this. A lot of colleges are now aware of this problem and are focusing on it, especially as it relates to attracting under-represented groups, while not being boring for those who did program in high school. One thing that seems to work well is focusing on results that are visible easily, e.g., teach the class to make a quick website or game vs. writing different sort routines.
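
For instance (an illustrative sketch, not a specific curriculum recommendation), Python’s standard library is enough to give a beginner a visible result in about a dozen lines:

```python
# A "visible result" for a first class: serve a page at http://localhost:8000
from http.server import HTTPServer, BaseHTTPRequestHandler

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Hello from my first web server!</h1>")

HTTPServer(("localhost", 8000), Hello).serve_forever()
```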

Question: In his acceptance speech for the Turing award, Alan Kay said »So, little progress was being made, but I think that whenever we say ›computer science‹ or ›software engineering‹, and especially whenever we think we’re teaching it, the worst thing we could ever do is to pretend to the students that we know what it is«, and he maintained that we still have little idea how to create large systems right. Did we make significant progress since then, and what is it?
Answer: I looked it up, that was 2003. We have made some progress. The cloud allows (and in some cases requires) better development processes. The availability of open-source components avoids some basic reinvention of the wheel. Automated testing is much more understood and expected (although not universal by any means). But we still don’t have the ability to make accurate statements about code, which would be expected of an engineering discipline.

Question: How important is it for students to learn to write (rigorous? formal?) software specifications?
Answer: If you mean documentation on the architecture, coding standards, test approach, etc, this is very important as you move from the top-left quadrant down, where the original development team is no longer on the project and having good documentation is critical to avoid accidental mistakes going forward. If you mean specification of the UI, I think it is better if people other than developers specialize in this, although arguably it is part of the broad “software engineering” umbrella.

Question: How do you view how the software development process (e.g. Agile, Waterfall, etc.) impacts Software Engineering? Is it complementary? Does it hinder Software Engineering? etc.
Answer: Agile basically punts the question of how to actually engineer the software, except possibly for Test-Driven Development. Certainly, I like including all the people involved in the software in the development process, so as I said in the talk, DevSecOps is better than having Dev, Test, Ops, and Security arranged in a one-way downhill flow.

Question: I ask this question because you gave the impression that there are numerous techniques and tools available now that were not available then and that do (help) improve the situation. What is the nature of issues remaining? Education, training, management, …?
Answer: There are still a lot of issues remaining, you can read the table of contents from the 1968 Garmisch conference report and see all the problems facing software now. We are writing much more complicated code, so the tools and processes have advanced, but we still seem to be producing code that is at the limit of what the human brain can handle given the current state-of-the-art for processes and tools. And then there is disagreement on what state-of-the-art is, and people taking shortcuts to get to market, etc.

Question: Your former colleague Nathan Myhrvold once said that software is a gas, that will occupy all the available resources. Do you think this is still valid? And do you regard it as a desirable property or not?
Answer: I think we are past that. There was a period going from a single BASIC program running on a PC with 16K of memory, to a pile of C# applications running on a PC with 1 GB of memory (or some number like that) where it felt the software was always consuming all the available space. But at this point, even if your 64 GB computer shows most of the memory in use, a lot of that is cached data and disk pre-reads and other stuff which is just performance improvement, you could run fine with a lot less memory. I think it is good that people can write in C# and have the memory available for that. And now with the cloud, since your machine could die at any time anyway, you can genuinely write code that doesn’t even have to worry about memory allocations failing, because a) they won’t and b) that’s just another catastrophic error you have to recover from, so no need to special-case it. And code that doesn’t have to worry about memory allocations failing is easier to read.

Question: It seems to me that if someone is a problem solver and good programmer, they can acquire skills such as using code repositories and unit testing in a week. Can you elaborate on why academia should focus on these rather basic skills rather than on problem solving?
Answer: The mechanics of using a code repository you could learn quickly, the “how to deal with other people working on the same code” takes longer. But that is a simple one which academia could skip if they wanted to (although why not have students use a source code repository?). Unit testing, on the other hand, is much more complicated and should be taught in school. Partly to teach the skills and partly to set the expectation that this is something you always do as a developer.
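
A minimal example of the expectation being described (illustrative Python; `word_count` is a made-up function): every piece of behavior gets an automated test that anyone on the team can run.

```python
# Smallest useful unit-test habit: one test file per module, run automatically.
import unittest


def word_count(text: str) -> int:
    return len(text.split())


class WordCountTests(unittest.TestCase):
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_multiple_words(self):
        self.assertEqual(word_count("two  words"), 2)


if __name__ == "__main__":
    unittest.main()
```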
