Securing Java

The Future of Java Security: Challenges Facing Mobile Code

Section 2 -- Challenges for Secure Mobile Code


Java has risen to meet many important challenges of mobile code security. That means Java is by far the most security-aware of the many mobile code platforms. If you want to use mobile code and you are concerned with security, use Java. Of course, things are not perfect and there are still some open problems. Here are a number of remaining challenges that secure mobile code systems, including Java, must face.


Denial of Service

We spent some time discussing denial of service in Chapter 4, "Malicious Applets: Avoiding a Common Nuisance." As we said there, denial of service is a difficult problem for computer security that has yet to be adequately addressed, not just in Java, but all over the network infrastructure. Successful denial-of-service attacks have been carried out against ISPs by exploiting weaknesses in TCP/IP, the protocol suite that is the lifeblood of the Internet. Java is not immune to the denial-of-service problem, either.

Denial of service can be more or less serious depending on where the problem manifests itself. On one hand, if a hostile applet crashes one user's browser by popping thousands of large windows, not much real harm is done. On the other hand, if a servlet crashes an enterprise Web server, real harm occurs. Our Chapter 4 discussion focused primarily on the client side; however, Java is making inroads in places where denial of service takes on more urgency as a problem. Server-side Java is one example. Another example can be found in systems with built-in Java VMs, such as Oracle8 or HP printers. The implications of denial of service for these systems are much greater than in the client case.

New forms of denial-of-service attack will become possible in complex client/server systems built with technologies like RMI, which make extensive use of networking, synchronization, and locks. Denial of service in such a system becomes as easy as holding a lock. Distributed applications will need to apply timeouts and other mechanisms to mitigate the risks of an uncooperative process.
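
As a minimal illustration of the timeout idea (the class and its methods below are hypothetical, not part of RMI or any standard API), a shared resource can refuse to let callers park on it forever:

    // Hypothetical sketch: a shared resource that bounds how long any caller
    // will wait for it, so a peer that holds it forever cannot hang everyone.
    public class BoundedResource {
        private boolean inUse = false;

        // Try to acquire the resource, waiting at most timeoutMillis.
        public synchronized boolean acquire(long timeoutMillis)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (inUse) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    return false;  // give up rather than block indefinitely
                }
                wait(remaining);
            }
            inUse = true;
            return true;
        }

        public synchronized void release() {
            inUse = false;
            notifyAll();
        }
    }

A caller that gets back false can log the failure, retry later, or report the uncooperative peer, rather than tying up a server thread indefinitely.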

Dealing with denial of service is not an easy task. Limiting resource allocation is one solution. Perhaps future versions of Java will include policy elements that can specify resource limits on the basis of identity. That way, constraints can be managed according to where code is coming from and how much it is trusted. These sorts of hooks do not yet exist in the Java 2 model.
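
To make the idea concrete, identity-based resource limits might one day be expressed in a policy file much the way permissions are granted today. The limit entries below are purely hypothetical; nothing like them exists in the Java 2 policy syntax:

    grant signedBy "acme-payroll" {
        // Hypothetical entries: cap resources for code signed by this identity.
        limit java.lang.Thread "50";    // at most 50 live threads
        limit java.awt.Window  "10";    // at most 10 top-level windows
        limit memory           "8m";    // at most 8 megabytes of heap
    };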


Understanding Code Signing

There are a number of myths about code signing. Here are some of the most egregious:

Myth: Signatures denote authorship. The only thing a signature really tells you is who signed the code. From this piece of information, people infer a sense of vouching; that is, if a piece of code is signed, the implication is that the signer somehow vouches for the code. Unless you trust the person or organization who signed a piece of code, code signing gives you nothing. In the end, code-signing schemes simply amount to technological wrappings around a human trust decision.

Myth: If a signer is honest, the code is secure. Clearly, since all that a signature tells you is who signed the code, the signature says absolutely nothing about the code's security. Even the best-intentioned signer can only give you an honest opinion about the code, and that opinion might not be worth much if the signer isn't a technical expert. Certification schemes may begin to change the way this works if well-known, competent organizations choose to vouch for particular code properties; such an organization's signature could then count as a validation stamp.

Myth: Signatures imply accountability/liability. The legal ramifications of digital signatures and what they denote have yet to be tested in the courts. Given the state of software liability in the industry, it is unlikely that a signature will carry much legal weight in terms of liability.
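
To see how little a signature actually conveys, consider the first myth in code. The sketch below (a hypothetical utility built on the standard java.util.jar API) does nothing more than report which certificates signed each entry of a verified JAR file; everything beyond that identity is a human trust decision:

    import java.io.InputStream;
    import java.security.cert.Certificate;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    // Hypothetical utility: list who signed each entry in a JAR.
    // All a verified signature yields is a certificate chain (an identity),
    // not a statement about what the code does.
    public class ListSigners {
        public static void main(String[] args) throws Exception {
            JarFile jar = new JarFile(args[0], true);  // true => verify signatures
            Enumeration entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = (JarEntry) entries.nextElement();
                // Certificates are available only after the entry is fully read.
                InputStream in = jar.getInputStream(entry);
                byte[] buffer = new byte[8192];
                while (in.read(buffer) != -1) { /* drain to trigger verification */ }
                in.close();
                Certificate[] certs = entry.getCertificates();
                System.out.println(entry.getName() + " signed by "
                        + (certs == null ? "nobody" : certs[0].toString()));
            }
            jar.close();
        }
    }

Running it over a signed archive yields a list of signer certificates and nothing more, which is exactly the point.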

Assuming these myths are properly debunked, there are still some real barriers to trust models based on digital signatures. One of the main problems, and one that will deeply affect the adoption of signing-based approaches, is the lack of a public key infrastructure (PKI). Without some way of quickly and easily validating a signature, the market is unlikely to embrace code signing. Compounding the poor state of the PKI is the equally poor state of tools for managing digital identities and policies (see Chapter 6, "Securing Java: Improvements, Solutions, and Snake Oil"). In particular, issues of certificate revocation and storage loom large.


Secure Distributed Programming

Distributed computing is still in its infancy. Complex systems like CORBA reflect this. Managing trust, identity, and policy in a distributed system is much more difficult than doing so on a VM-by-VM basis using, for example, Java 2. Standards are emerging slowly, and there is much confusion in the market regarding competing systems. Choices include Java's RMI, CORBA (encompassing both IIOP and IDL), and DCOM (or one of its many marketing identities).

Common to all of these approaches is the problem of complex identity, which is not well understood. Figure 9.1 shows why the problem is difficult. In real distributed systems not only is code mobile, but other functionality is, too. Interprocess communication across different machines can get hairy fast. RMI may not be equipped to handle some of the challenges that trust models entail.


Figure 9.1 The problem of complex identity.

In this example, Bob's applet, running on Alice's VM, is communicating (possibly using RMI) with Donna's applet, running on Charlie's VM. How to create usable policies for situations like these is not well understood.
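
A minimal sketch of what Donna's applet might export makes the gap visible. The interface below is hypothetical; the point is that nothing in the remote call itself carries the chain of identities (Bob's code, running under Alice's policy) across to Charlie's VM:

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical remote interface exported by Donna's applet on Charlie's VM.
    // A call arriving here says nothing about which code (Bob's) or which
    // policy (Alice's) originated the request; that identity is simply lost.
    public interface Bulletin extends Remote {
        void post(String message) throws RemoteException;
    }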


Being a True Multiuser System

The Java VM is not currently a replacement for a multiuser operating system. Neither is JavaOS a real multiuser operating system. JavaOS in its current instantiations is meant only to run a single Java VM. Object sharing and process firewalling rely on this fact to work. There are many problems to solve before the VM can serve as a true multiuser environment, and researchers are just starting to address them.


Persistence, Linking, and Versioning

Systems in which objects can be serialized (think of it as freeze-drying a process) and reconstituted elsewhere (thawed out) are susceptible to the "environment problem," and it is likely that security holes will be discovered in these systems. The problem is that there is no guarantee that the environment in which an object is thawed will be remotely similar to the one in which it was frozen. That is a problem if the object assumes (as almost all code does) that its environment will not suddenly undergo drastic changes. This kind of mismatch can easily lead to type-safety violations and security risks.
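
Defensive deserialization addresses at least part of the problem. The sketch below (a hypothetical class, not drawn from any particular system) revalidates its own invariants when it is thawed, because it cannot assume the stream was produced by, or in, a friendly environment:

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.Serializable;

    // Hypothetical sketch: an object that re-checks its invariants on thaw.
    public class Account implements Serializable {
        private int balance;

        public Account(int balance) {
            if (balance < 0) throw new IllegalArgumentException("negative balance");
            this.balance = balance;
        }

        // Called by the serialization machinery when the object is deserialized.
        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            // Re-check the constructor's invariant: the stream may have been
            // produced in (or tampered with by) an entirely different environment.
            if (balance < 0) {
                throw new IOException("corrupt or hostile serialized state");
            }
        }
    }

Validation like this catches corrupted or hostile state, but it cannot tell the object whether the surrounding class definitions, security parameters, or policy match the ones it was frozen with; that larger mismatch is the heart of the environment problem.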

The problem can be related to the brain-in-a-vat thought experiment, a modern retelling of a puzzle posed by seventeenth-century philosopher René Descartes (of "I think, therefore I am" fame): how do we know for certain that our perceived environment is really there, and that we are not simply a brain in a vat being fed all the right data by a malicious demon? The unfortunate answer is that we can't ever be certain.

If you substitute serialized code for the brain and the environment for the vat (and the controlling demon), you get the idea. Deserialized software will never be in a position to probe its environment in order to discover where it really is or whether its environment is telling it the truth. This is problematic when it comes to security parameters and types. In fact, the analogy works well for mobile code in general.


Design for Security

Java offers a number of tools with which secure systems can be built. Obviously, this does not imply that all systems written in Java that make use of its security features will be secure. Designing a secure system takes much foresight and demands rigorous assurance at all levels of the process. Risk-based security testing can help.

The best security-assurance approach begins with a system specification. Given a detailed-enough specification, a thorough risk analysis can identify potential vulnerabilities and point out the areas of the system at greatest risk. Security risk analysis includes characterizing threats and working out attack scenarios based on them. For example, a specification may prove vulnerable to playback attacks (a common problem among systems originally designed for use on proprietary networks), decompilation (in which mobile code secrets can be divulged), or cryptanalytic attacks (in which things like weak data integrity hashes can lead to complete system compromise).
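
The weak-integrity-hash example is worth making concrete. The sketch below (hypothetical method names, using the standard java.security API and the javax.crypto API from the Java Cryptography Extension) contrasts an unkeyed digest, which any attacker able to modify a message can simply recompute, with a keyed MAC that requires a shared secret:

    import java.security.MessageDigest;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // Hypothetical sketch of the difference a risk analysis should flag.
    public class IntegrityCheck {

        // Weak: anyone who alters the message can recompute this value.
        public static byte[] weakChecksum(byte[] message) throws Exception {
            return MessageDigest.getInstance("SHA-1").digest(message);
        }

        // Better: only holders of the shared key can produce a valid tag.
        public static byte[] keyedTag(byte[] message, byte[] key) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(key, "HmacSHA1"));
            return mac.doFinal(message);
        }
    }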

Given a thorough risk analysis of a system specification, the next step is to create a plan for probing system security through testing. Note that the kind of testing we are talking about here is not functional testing; it is risk-based testing related directly to the risk analysis. Functional testing tells you only if your system meets design specifications and produces the right output. Risk-based security testing tells you whether your system can be compromised by security attack. Security testing is at its heart a creative endeavor that is only as powerful as the risk analysis on which it is based. As such, security testing is no guarantee of security, but it certainly beats not testing for security at all. By following a proper test plan, actual testing can be carried out on a live system.

External analysis for security is a good idea. Note that the definition of external can vary. At the very least, a security review should be performed by a different team (within the same organization) from the design team. Designers tend to be too close to the system and tend to overlook security problems, even if they understand security well. In any case, it is essential that external reviewers have a strong body of security expertise and knowledge on which to draw. Analysis by external security experts may be warranted as well, although only for truly security-critical systems.

Systems that are designed expressly with security in mind usually turn out better than those that are not. One of the worst approaches to take is to try to bolt security on the side of an existing system. This is especially true of systems that have been fielded successfully on proprietary networks (or no network at all) and that are migrating to the Internet.


Copyright ©1999 Gary McGraw and Edward Felten.
All rights reserved.
Published by John Wiley & Sons, Inc.