This is an HTML rendering of a working paper draft that led to a publication; the published version should always be cited in preference to this draft.
Although only a few may originate a policy, we are all able to judge it.
— Pericles of Athens
Popular folklore has our profession’s use of the word bug originating from a real insect found in an early electromechanical computer. Indeed, on September 9, 1947, the Harvard Mark II operators did find a moth obstructing a relay’s contacts. They removed it and dutifully taped it into the machine’s logbook. However, engineers were using the term “bug” many decades before that incident. For example, in an 1878 letter Edison used the term to refer to the faults and difficulties he faced while moving from an invention’s intuition to a commercializable product.
One approach for dealing with bugs is to avoid them entirely. For example, we can hire only the best software engineers and meticulously review every specification, design, or code element before committing it to a computer. However, following such an approach would be wasteful, because we would be underutilizing the many tools and techniques that can catch bugs for us. As Pericles recognized, creating a bug-free artifact is a lot more difficult than locating errors in it. Consequently, although humans and program generators are seldom able to cast large-scale bug-free code from scratch, bug-finding tools are both abundant and successful.
In our field one important paradigm for eliminating bugs is the tightening of the specifications of what we build; in a similar context an industrial engineer would seek to reduce variability by manufacturing to tighter tolerances. At the level of the program code, we can tighten up the specifications of our operations on different data types, of our program’s behavior, and of our code’s style. Furthermore, we can use many different approaches to verify that our code follows the specifications: the programming language, its compiler, specialized tools, libraries, and embedded tests are our most obvious friends here.
Modern programming languages do a great job of restricting many risky code constructs and expressions. First of all, structured languages (anything better than assembly language and old-style Fortran) prohibit, or at least impede, many programming tricks that can easily lead to unmaintainable spaghetti code. Even C, which provides us ample self-hanging rope with its support for goto and longjmp, doesn’t allow arbitrary jumps across different functions. Once we properly indent our code, we are also forced to split it into separate functions or methods: one would be mad to try to write code with more than a handful of indentation levels. This splitting eliminates bugs by promoting attributes like encapsulation and testability.
In addition, languages can often enforce correct behavior on our code. In Java, if a method can throw an exception, methods that call it must either catch it or declare that they, too, may throw that exception; in C# we can ensure that resources we acquire will be properly disposed by means of the using construct.
More importantly, languages with strong typing rules can detect many problems at compile time as data-type errors (adding apples to oranges). Obviously, errors we catch at compile time won’t appear when the program runs: this is an effective way to eliminate many bugs. For example, the introduction of generics in Java 1.5 allows us to specify that a list container will only house strings; our program won’t compile if we attempt to store a value of a different type in it. In earlier versions of Java, where the list would contain values of type Object—the least common denominator of all Java types—the error would manifest itself at runtime as a bug, when we attempted to cast an element retrieved from the list into a string.
Even when the programming language allows us to write unsafe code, we can often ask the compiler to verify it for us. Many compilers will generate warnings when encountering questionable code constructs; we can save ourselves from embarrassing bugs by actually paying attention to them. However, many of us, when we’re working under a pressing deadline, tend to ignore compiler warnings. We can deal with this problem by using another commonly supported compiler option that treats all warnings as errors: the code won’t compile until we deal with all warnings.
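As a small illustration of this discipline, consider the following sketch (the function name and flags shown in the comments are illustrative; the exact warning options vary between compilers). With GCC, the constructs commented below pass a plain compile silently, are flagged by -Wall and -Wextra, and with -Werror added they stop the build outright:

```c
#include <stdio.h>

/* Two constructs that "gcc -Wall -Wextra" flags; adding -Werror turns
 * the warnings into hard errors, so the file won't compile until they
 * are fixed.  Without those options the compiler accepts it silently. */
int count_up(unsigned int limit)
{
	int unused;			/* -Wall: unused variable */
	int printed = 0;

	for (int i = 0; i < limit; i++) {  /* -Wextra: signed/unsigned comparison */
		printf("%d\n", i);
		printed++;
	}
	return printed;
}
```

Making the strict flags part of the standard build means nobody has to remember to turn them on.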
We can also often help the compiler generate better warnings for us. Consider for example C’s notoriously error-prone printf- and scanf-like functions. These functions require us to match the types specified in a format string with the supplied arguments. If we get this correspondence wrong our program may crash, print garbage, or, worse, open itself to a stack-smashing attack. Some compilers will verify format arguments for the C library functions, but we often add our own functions with similar behavior, which the compiler can’t check. For these cases, the GNU C compiler provides the __attribute__((format())) extension. We tag our own function declarations with the appropriate attribute, and the compiler will check the arguments for us.
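A minimal sketch of how such a declaration might look (the log_error function and the PRINTF_LIKE macro name are illustrative, not part of any standard library; the attribute itself is the GNU extension the text describes, so the macro expands to nothing on other compilers):

```c
#include <stdarg.h>
#include <stdio.h>

/* Tell gcc to check the arguments of our own printf-like function
 * against its format string, exactly as it checks printf itself.
 * Argument 1 is the format string; the variable arguments start at 2. */
#if defined(__GNUC__)
#define PRINTF_LIKE(fmt, args) __attribute__((format(printf, fmt, args)))
#else
#define PRINTF_LIKE(fmt, args)	/* other compilers: no-op */
#endif

int log_error(const char *fmt, ...) PRINTF_LIKE(1, 2);

int log_error(const char *fmt, ...)
{
	va_list ap;
	int n;

	va_start(ap, fmt);
	n = vfprintf(stderr, fmt, ap);
	va_end(ap);
	return n;	/* number of characters written */
}
```

With -Wformat in effect (it is part of -Wall), a mismatched call such as log_error("%s", 42) now draws the same warning printf("%s", 42) would.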
Another way to eliminate bugs from our code is to pass it through one or more tools that explicitly look for problems in it. The progenitor of this tool family is lint, a tool Stephen Johnson wrote in the 1970s to check C programs for non-portable, error-prone, or wasteful constructs. For example, lint will flag the construct if (b = 0) as an error, complaining of an assignment in conditional context; we probably intended to write if (b == 0). Nowadays we can find commercial and open-source lint-like tools for many commonly used languages. Some examples include CheckStyle, ESC/Java2, FindBugs, JLint, Lint4J, and PMD (covering Java), FxCop and devAdvantage (covering C#), and PC-lint (covering C and C++). Other tools specialize in locating security vulnerabilities—a class of bugs that stand out for their potentially devastating consequences. Tools in this category include Flawfinder, ITS4, Splint, and RATS.
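The classic slip above is worth seeing in context; here is a small sketch (the function names are illustrative) in which the compiler happily accepts the buggy version, while lint-like tools flag it:

```c
/* The assignment-in-condition slip that lint catches.  In the buggy
 * version (b = 0) assigns zero to b, the condition is therefore always
 * false, and the function always returns "nonzero" -- even for 0. */
const char *classify_buggy(int b)
{
	if (b = 0)		/* lint: assignment in conditional context */
		return "zero";
	return "nonzero";
}

const char *classify_fixed(int b)
{
	if (b == 0)		/* the comparison we meant to write */
		return "zero";
	return "nonzero";
}
```

Both versions compile cleanly by default, which is precisely why a separate checking pass earns its keep.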
Specialized tools can cover a lot more than what we could realistically expect a compiler to warn us about. For example, many tools will report violations of coding style guidelines, such as indentation and naming conventions. Furthermore, some tools are extensible: we can add rules particular to our own project (calls to launchMissile must be preceded by a call to openHatch), and we can precisely specify the rules that our project will follow. Integrating a code-checking tool into our build process, configuring its verification envelope, and extending it for our project becomes an important part of our development process. In some projects, a clean pass from the code-checking tools is a (sometimes enforced) prerequisite for checking code into the version control system.
Finally, we can delegate bug busting to code. Many libraries contain hooks or specialized builds that can catch questionable argument values, resource leaks, and wrong ordering of function calls. As a prime example consider the C language dynamic memory allocation functions—a potent source both of bugs and of research papers describing versions of the library that can catch them. You can catch many of these bugs by using the valgrind tool, by loading the watchmalloc.so library (under Solaris), or by setting the MALLOC_CHECK_ or MALLOC_OPTIONS environment variables (under GNU/Linux distributions and FreeBSD, respectively).
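A sketch of the kind of bug these facilities catch (the function name is illustrative): the code below compiles without complaint and computes the right answer, yet leaks its buffer on every call, which valgrind reports as memory "definitely lost":

```c
#include <stdlib.h>
#include <string.h>

/* Returns the length of a private copy of s.  The copy is never
 * freed -- a leak the compiler cannot see, but one that running the
 * program under "valgrind --leak-check=full" (or with a checking
 * malloc library loaded) reports, pinpointing this allocation site. */
size_t leaky_length(const char *s)
{
	char *copy = malloc(strlen(s) + 1);

	if (copy == NULL)
		return 0;	/* allocation failed */
	strcpy(copy, s);
	return strlen(copy);	/* bug: free(copy) is missing */
}
```

Because the checking happens in the allocator itself, no change to the program's source is needed; we only alter how it is run.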
In our own code we have even more options at our disposal. We can sprinkle our code with assertions, expressing preconditions, postconditions, and invariants. Any violation of them will trigger a runtime error, and help us pin down a possibly difficult-to-locate bug. At a higher level, we can instrument our classes with unit tests, using the JUnit testing framework or the equivalent for our environment. When churning out code, unit tests will identify many early bugs in it; later on, when we focus on maintenance activities, unit tests will ring a bell when we introduce new bugs.
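In C, the standard assert macro gives us this machinery for free; the following sketch (the copy_string function is illustrative) shows preconditions checked on entry and a postcondition checked before returning, so a violation aborts the program at the faulty call rather than corrupting data silently:

```c
#include <assert.h>
#include <string.h>

/* Copy src into dst, which can hold dst_size bytes; return the length
 * copied.  The assertions document and enforce the function's contract:
 * defining NDEBUG at compile time removes them from production builds. */
size_t copy_string(char *dst, size_t dst_size, const char *src)
{
	/* preconditions */
	assert(dst != NULL && src != NULL);
	assert(dst_size > strlen(src));		/* room for the terminator */

	strcpy(dst, src);

	/* postcondition: the copy really is identical to the source */
	assert(strcmp(dst, src) == 0);
	return strlen(dst);
}
```

When an assertion fires, the diagnostic names the file, line, and failed condition, which is often all we need to pin the bug down.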
Diomidis Spinellis is an associate professor in the Department of Management Science and Technology at the Athens University of Economics and Business and the author of the recently published book Code Quality: The Open Source Perspective (Addison-Wesley, 2006). Contact him at firstname.lastname@example.org.