One question we are commonly asked is whether Java's security woes are due to simple bugs or reflect deeper design problems. The answer is a bit complicated, as we shall see.
Software that is properly engineered goes through a standard process from requirements and design, through detailed specification, to actual implementation. In the world of consumerware (software created for the mass consumer market, like browsers and JDKs), pressure to be first to market and retain what is known as "mind share" compresses the development process so much that software engineering methods are often thrown out the window. This is especially true of testing, which regularly ends up with no scheduled time and few resources. An all too common approach is to leave rigorous testing to users in the field (sometimes even paying users when they find bugs!). We think this is just awful.

The Internet time phenomenon has exacerbated the software engineering problem. These days, Internet years rival dog years in shortness of duration (the standard ratio is seven dog years to one regular year). So three months of regular time are currently equivalent to a complete Internet "year." Given the compressed development schedules that go along with this accelerated kind of calendar, the fact that specifications are often very poorly written (if they exist at all) is not surprising. The authors commonly encounter popular consumer-oriented systems that have no specifications. Java suffered from this problem in its early years as well. Fortunately, Java does have an informal specification today. That's always a good start.

One of the most common misconceptions about Java security holes is that they are all simple implementation errors and that the specification has been sound and complete since day one. Threads in the newsgroup comp.lang.java.security and other newsgroups often repeat this fallacy as people attempt to trivialize Java's security holes.
The truth is that many of the holes described in this chapter are simple implementation bugs (the code-signing hole from April 1997 comes to mind; see The Magic Coat later in the chapter), but others, like problems discovered in Java class loaders, are not. Sometimes the specification is just plain wrong and must be changed. As an example, consider how the Java specification for class loading has evolved.

Often it is hard to determine whether a security hole is an implementation problem or a specification problem. Specifications are notoriously vague. Given a vague specification, who is to blame when a poor implementation decision is made? Specifications are also very often silent; that is, when a hole is discovered and the specification is consulted, there is nothing said about the specific problem area. These sorts of omissions certainly lead to security problems, but are the resulting problems specification problems or implementation problems?

In the end, the holes are fixed, regardless of whether they are implementation bugs or design-level problems. This leads to a more robust system. If Java stood still long enough, you would think all the holes would be discovered and fixed. But Java is far from still. With every major JDK release, the Java source code has doubled in size. Much new functionality has been added to the language, some of which has important security implications. The addition of flexible access control in Java 2 is a case in point. Implementing a code-signing and access-control system is nontrivial, and the code is certainly security-critical. Other examples are serialization and remote method invocation (RMI). Subtle security problems are likely to be discovered in these and other new Java subsystems.
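To see why class loading is so security-critical, it helps to look at the parent-delegation behavior that class loaders are expected to follow. The short sketch below is illustrative only (the class names are our own, not from any JDK discussed here): a loader that cannot find any classes itself still resolves core classes like java.lang.String, because lookups are delegated up to the bootstrap loader first. A hostile loader that skipped this delegation and supplied its own version of a core class would undermine the type system, which is exactly the kind of flaw class-loader attacks exploit.

```java
// Sketch of the parent-delegation model in Java class loading.
// EmptyLoader is a hypothetical loader that defines no classes of its own.
public class LoaderDemo {

    static class EmptyLoader extends ClassLoader {
        EmptyLoader() {
            // Delegate to the loader that loaded this demo class.
            super(LoaderDemo.class.getClassLoader());
        }

        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            // This loader never defines classes itself. The inherited
            // loadClass() asks the parent chain FIRST, so core classes
            // are resolved by the bootstrap loader before this method
            // is ever consulted for them.
            throw new ClassNotFoundException(name);
        }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader loader = new EmptyLoader();

        // Even though EmptyLoader finds nothing, delegation means core
        // classes still load, and they load as the one true version:
        Class<?> stringClass = loader.loadClass("java.lang.String");

        // Core classes report a null ClassLoader, meaning the bootstrap
        // loader (not our EmptyLoader) defined them.
        System.out.println(stringClass == String.class);          // true
        System.out.println(stringClass.getClassLoader() == null); // true
    }
}
```

The security of the whole type system rests on this discipline: if two loaders could each define their own java.lang.String, a cast between the two versions would confuse the VM's type checks.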
Why is it that all the known attack applets covered in this chapter were discovered by good guys and not bad guys? The quick but unsettling answer is: pure luck. The Princeton team and other Java security researchers are not the smartest people in the world (sorry guys), and the holes uncovered in Java so far do not require years of advanced training to find. There is no reason that malicious crackers could not discover such holes for themselves. The Java industry has been fortunate that the people who usually discover Java security problems are honest and want to see Java improved so that it is safer to use. Also fortunate is that vendors have typically responded to these discoveries promptly and accurately.

So how are holes usually discovered? Most often, the scenario goes something like this. Researchers discuss where potential flaws may lie by thinking about what is difficult to implement properly. Occasionally, researchers notice peculiar or surprising behavior in their work with Java and get an idea about what to investigate. The next step is to take a close look at the Java source code (for the VM and API classes) or the binary code if no source code is available. Sometimes, errors are obvious and exploits are easy. Other times, experimentation is required to turn a potential flaw into a real exploit. All of the holes described in this chapter can be exploited using attack applets. That means the holes covered here are not esoteric flaws that are impossible to exploit. They are sometimes-subtle flaws that have been turned into full-fledged attacks.
Every Java hole described in this chapter has an accompanying exploit. Another way of putting this is that there is an attack applet (the Java form of an exploit script) for each hole discussed here. However, the one-to-one correlation found in this chapter does not imply that every security hole must have its own exploit. Holes are just vulnerabilities. Sometimes a hole will be recognized as a hole but cannot be exploited by itself. In these cases, multiple holes together create an exploit.

Think of attacking a system as climbing up a cliff. When you reach the top, you have successfully completed an attack. A security hole can be likened to a piton in the cliff with a piece of rope attached. Sometimes one piton is enough to help a climber make it to the top (especially if the climber is an experienced veteran). Other times, more than one piton may be needed. The holes discussed in this chapter have exploits of both kinds. A majority of the attack applets require only one hole, but sometimes an attacker must leverage other weaknesses to exploit a hole. (A perfect example of the latter category is the Beat the System hole of July 1998.)
Copyright ©1999 Gary McGraw and Edward Felten.