- Special Edition Using Java, 2nd Edition -

Chapter 47

Java and VRML


by Bernie Roehl

Exciting things happen when two complementary technologies start to converge. It's precisely this sort of convergence that's about to take place between Java and the Virtual Reality Modeling Language, or VRML. In this chapter, you'll find out what VRML is all about and how it relates to what you've already learned about programming in Java.

VRML is the standard file format for creating 3D graphics on the World Wide Web. Just as HTML is used for text, JPEG and GIF for images, WAV for sounds, and MPEG for moving pictures, VRML is used to store information about 3D scenes. VRML files are stored on ordinary Web servers and are transferred using HTTP.

VRML files have a MIME type of x-world/x-vrml, although this is expected to change to model/vrml in the near future. These files have an extension of WRL. A three-character extension is used in order to avoid the confusion that might be caused by PC-based servers which truncate extensions at three characters (as happened with HTM versus HTML).

When a user retrieves a VRML file (by clicking on a link in an HTML document, for example), the file is transferred onto the user’s machine and a VRML browser is invoked. In most cases, the VRML browser is implemented as a plug-in. Once the scene is loaded, the VRML browser allows the user to travel through it at will, with no further data being transferred from the server.

Starting with Version 3.0 of Netscape, VRML support is included as part of the standard distribution. This will put VRML onto a lot of desktops in a very short time.

A Brief History of VRML

The basic idea for VRML originated with Mark Pesce back in 1993. He saw the potential for 3D graphics on the Web, and realized that a standard file format would be needed. He got together with Tony Parisi, and together they created Labyrinth, the first crude 3D Web browser. They demonstrated it at the very first conference on the World Wide Web, and they received an enthusiastic response from everyone they showed it to.

The next step was the creation of an electronic mailing list, which Wired magazine offered to host. After several months of discussion, it was decided to base the first version of the VRML file format on an existing language. Several proposals were put forward, and OpenInventor from Silicon Graphics Incorporated (SGI) was selected.

OpenInventor was extremely large and complex, so a subset of the language was used and extensions were added to make it suitable for use on the Web. Gavin Bell of SGI joined Pesce and Parisi in writing the specifications for VRML 1.0, and people all over the world set about creating VRML browsers.

As time went by, problems began to emerge. By using OpenInventor as its foundation, VRML 1.0 inherited some of that language’s weaknesses. The state-accumulation approach that is part and parcel of OpenInventor turned out to be difficult to implement on many platforms. It was also difficult to implement the full lighting model that the spec required. No two VRML browsers would produce exactly the same results for a given scene.

More importantly, VRML 1.0 lacked a lot of features. There was no sound, no interactivity, and no movement of any kind. VRML quickly earned the nickname “Virtual Reality Museum Language”, because it was well-suited for building museums and not much else. Clearly, something needed to be done.

The decision to avoid specifying a programming language in VRML 1.0 was a deliberate one. Even in hindsight, it was probably the right choice. Selecting a language would have been a nightmare, because everybody has different ideas about what features such a language should have. Bear in mind that VRML 1.0 was developed before Java had made its presence felt. If it had been available at the time that VRML 1.0 was being formalized, Java would doubtless have been chosen as the behavior language.
 

The limitations and problems of VRML 1.0 were clear enough that work began immediately on the creation of VRML 2.0. It was generally agreed that trying to "fix" VRML 1.0 would be a difficult chore and that a major redesign was required. Half a dozen proposals came in, including some from Microsoft, IBM, Apple, SGI, and Sun. After much discussion and debate, the "Moving Worlds" proposal was selected as the basis for VRML 2.0.

VRML 2.0 resembles VRML 1.0 in syntax, although the semantics are very different.

This chapter only deals with VRML 2.0. I would strongly advise against creating any more VRML 1.0 content at this point, because most VRML browsers will just have to spend time converting your world to 2.0 format before displaying it.
 

An Introduction to VRML 2.0

VRML is a large and powerful scene description language, so it isn't possible to cover it in any detail in this one chapter. However, in order to understand how it interacts with Java, it's necessary to have at least a basic understanding of the language. If you need to create more sophisticated VRML worlds (and you will!), you should refer to the upcoming second edition of Que's Special Edition Using VRML.

Basic Scene Structure

A VRML file describes a three-dimensional scene. The basic VRML data structure is an inverted tree that is composed of nodes, as shown in figure 47.1.


FIG. 47.1

This diagram shows the basic VRML scene structure.

Notice that there are two basic types of nodes: leaf and grouping. If you're familiar with the DOS or UNIX file systems, the concept will be familiar: leaf nodes are like files, and grouping nodes are like directories (or folders, if you're on a Macintosh). Each grouping node can contain leaf nodes and additional grouping nodes. The result looks like an inverted tree.

Leaf nodes generally correspond to the sorts of things you'd expect to find in a 3D world: shapes, sounds, lights, and so forth. They have a direct effect on your experience of the virtual world, either by being visible or audible. A table or chair might be represented by a Shape node, the ticking of a clock would be created using a Sound node, and the scene would be made visible because of one or more lighting nodes.

Grouping nodes, on the other hand, are completely invisible. You can't see a grouping node when you view a VRML world, but it's there and it has an effect on the positioning and visibility of the leaf nodes below it in the tree. The most common type of grouping node is a Transform, which is used to position shapes, sounds, and lights in the virtual world.

Nodes that are attached to another node are referred to as the children of that node, and that node is the parent of each of the children. Occasionally, nodes that share a common parent are referred to as siblings. Note that in VRML 2.0, the order of children is generally irrelevant, because sibling nodes don’t affect each other the way they did in VRML 1.0. However, the ordering of children is still important in certain types of grouping nodes, such as Switch or LOD, which are beyond the scope of this chapter.

There are also nodes that are not really "in" the tree structure, although they're stored there for the sake of convenience. Among these nodes is the Script node, which is examined in detail in the second half of this chapter.

There are a number of different types of nodes in VRML 2.0 (54 at the last count!), and it’s possible to define new nodes using the “prototype” mechanism. Each of these nodes does something specific; fortunately, you don’t have to learn very many of them in order to start building simple VRML worlds.

Each type of node has a set of fields that contain values. For example, a lighting node would have a field that specifies the intensity of the light. If you change the value of that field, the light changes brightness. That’s the essence of what “behavior” in VRML is all about—changing the values of fields in nodes.

VRML Syntax

VRML files are human-readable text, and they use the Unicode character set (described elsewhere in this book). Because these files are text, you can print them out and read them, modify them with a text editor, and so forth. IBM, Apple and a company called Paragraph International have announced that they’re working on a binary format for VRML 2.0, which will make VRML files much smaller and faster to download. However, this format will still be semantically equivalent to the text format, so world-builders won’t have to worry about it.

Everything after a # on any line of a VRML file is treated as a comment and ignored. The only exception is when a # appears inside a quoted string. The # works just like // in a Java program.
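For example, here's a quick sketch of both rules (the nodes and values are arbitrary; both node types are described later in this chapter):

# This entire line is a comment
PointLight { intensity 0.5 }  # everything from here to the end of the line is ignored
WorldInfo { title "Gate #7" } # the # inside the quoted string is part of the string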

The first line of every VRML 2.0 file looks like this:

#VRML V2.0 utf8

Notice that this first line begins with a #, so it’s a comment. The V2.0 means, “This file conforms to Version 2.0 of the VRML specification.” The utf8 refers to the character set encoding.

The rest of the file consists mostly of nodes, as described previously. Each node contains a number of fields that store the node’s data, and each field has a specific type. For example, listing 47.1 shows a typical PointLight node.

Listing 47.1 A PointLight node.

PointLight {
  on TRUE
  intensity 0.75
  location 10 -12 7.5
  color 0.5 0.5 0
}

This node contains four fields. The fact that they’re on separate lines is irrelevant; VRML is completely free-format, and anywhere you can have a space, you can also have a tab or a newline. You could just as easily have said

PointLight { on TRUE intensity 0.75 location 10 -12 7.5 color 0.5 0.5 0 }

but it would have been harder to read.

The word PointLight indicates what type of node this is. The words on, intensity, location, and color are field names, and each is followed by a value. Notice that the values are different for each field; the on field is a boolean value (called an SFBool in VRML), and in this case, it has the value TRUE. The intensity field is a floating-point number (an SFFloat in VRML terminology). The location is a vector—a set of X, Y, and Z values (called an SFVec3f in VRML), and the color is an SFColor containing the red, green, and blue components of the light.

In other words, the point light source is turned on at 75 percent of its maximum intensity. It's located at 10 meters along the positive X axis (right), 12 meters along the negative Y axis (down), and 7.5 meters along the positive Z axis (toward us). It's a dim yellow color, because mixing red and green light produces yellow: the red and green values are each at 50 percent of their maximum value, and the blue value is set to zero.

Note that any fields which aren’t given values have default values assigned to them, as described in the VRML specification. For example, you could have left out the on TRUE because the on field has TRUE as its default value.

You can assign a name to a node using the DEF (for “define”) syntax. For example,

DEF Fizzbin PointLight { intensity 0.5 }

would create a PointLight and assign it the name Fizzbin. You see later in this chapter how these names get used, when I discuss “Instancing.”

Types of Fields

VRML supports a number of different types of fields, many of which correspond to data types in Java. Table 47.1 shows the correspondence between Java types and VRML types.

Table 47.1 The correspondence between Java and VRML types.

Java Type VRML Type
boolean SFBool
float SFFloat
int SFInt32
String SFString

As mentioned previously, there are also special data types for 3D vectors (SFVec3f), colors (SFColor), and rotations (SFRotation). There are also 2D vectors (SFVec2f). A special data type is used for time (SFTime) and another for bitmapped images (SFImage).

In addition to these single-valued fields (which is what the SF prefix stands for), there are multiple-valued versions of most of the fields (which begin with MF). These multiple-valued fields are arrays of values; for example, an array of vectors would be an MFVec3f. If more than one value is specified for a particular field, the values are surrounded by square brackets, like this:

point [ 0 0 0, 1.3 2.57 -14, 12 17 4.2 ]

One other field type that turns out to be very useful is SFNode, which allows fields to have a node as their value. There's also an MFNode, for a field whose value is an array of nodes.

The complete list of VRML 2.0 field types is shown in table 47.2.

Table 47.2 VRML 2.0 Field Types

VRML Type Description
SFBool TRUE or FALSE value
SFInt32 32-bit integer value
SFFloat Floating-point number
SFString Character string in double quotes
SFTime Floating-point number giving the time in seconds
SFVec2f Two-element vector (used for texture map coordinates)
SFVec3f Three-element vector (locations, vertices, and more)
SFRotation Four numbers: a three-element vector plus an angle
SFColor Three numbers: the red, green, and blue components
SFImage Bitmapped image
SFNode A VRML node
MFInt32 Array of 32-bit integers
MFFloat Array of floating-point numbers
MFString Array of double-quoted strings
MFVec2f Array of two-element vectors
MFVec3f Array of three-element vectors
MFRotation Array of four-element rotations
MFColor Array of colors
MFNode Array of nodes

Coordinate Systems and Transformations

Because VRML describes scenes in three dimensions, you need to understand how 3D coordinate systems work in order to use VRML effectively. Figure 47.2 illustrates the coordinate system used by VRML.


FIG. 47.2

The coordinate system used by VRML is based on X, Y and Z axes.

Anyone who’s ever looked at an X-Y graph will find this coordinate system familiar. The X axis goes from left to right, and the Y axis goes from bottom to top. What’s new is the Z axis, which extends from the X-Y plane toward the viewer. The place where all three axes intersect is called the origin.

Translation

Every point in 3D space can be specified using three numbers: the coordinates along the X, Y, and Z axes. In VRML, distances are always represented in meters (a meter is about three feet). If a particular point in a VRML world is at (15.3 27.2 -4.2), then it’s 15.3 meters along the X axis, 27.2 meters along the Y axis, and 4.2 meters backwards along the Z axis. This is illustrated in figure 47.3.


FIG. 47.3

The point (15.3 27.2 -4.2) is shown in the VRML coordinate system.

Moving a point in space is referred to as translation. This is one of the three basic operations you can perform with a Transform node; the other two are scaling and rotation.

Scaling

Scaling means changing the size of an object. Just as you can translate objects along the X, Y, and Z axes, you can also scale them along each of those axes. Figure 47.4 shows a sphere as it might appear in a VRML browser.


FIG. 47.4

A sphere in VRML looks like this.

Figure 47.5 shows the same sphere scaled by a factor of 2 in the Y direction and a factor of 0.5 in the X direction.


FIG. 47.5

A sphere scaled by (0.5 2 1) is narrower and taller.

Scaling is always represented by three numbers, which are the amount to stretch the object along the X, Y, and Z axes, respectively. A value greater than 1.0 makes the object larger along that axis, and a value less than 1.0 makes it smaller. If you don’t want to stretch or shrink an object along a particular axis at all, use a factor of 1.0 (as was done for the Z axis in the sphere example earlier).

Rotation

Rotation is more complex than scaling or translation. Rotation always takes place around an axis, but the axis doesn’t have to be aligned with one of the axes of the coordinate system. Any arbitrary vector pointing in any direction can be the axis of rotation, and the angle is the amount to rotate the object around that axis. The angle is measured in radians. Because there are 3.14159 radians in 180 degrees, you convert degrees to radians by multiplying by 3.14159/180, or about 0.01745.
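For example, here's what a 45-degree rotation around the Y axis looks like as an SFRotation value (45 degrees times 0.01745 is roughly 0.7854 radians):

rotation 0 1 0 0.7854   # axis (0 1 0) is the Y axis; the angle is 45 degrees, in radians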

Transformations

Translation, rotation, and scaling are all transformations. VRML stores these transformations in the Transform node that was discussed earlier. A single Transform can store a translation, a rotation, a scaling operation, or any combination of them. That is, a Transform node can either scale the nodes below it in the tree, rotate them, translate them, or any combination of the above. The sequence of operations is always the same: the objects in the subtree are first scaled, then rotated, and then translated to their final location. For example, listing 47.2 shows what a typical Transform node might look like.

Listing 47.2 A typical Transform node.

Transform {
  scale 1 2 3
  rotation 0 1 0 0.7854
  translation 10 0.5 -72.1
  children [
    PointLight { }
    Shape { geometry Sphere { } }
  ]
}

This particular Transform node has four fields: scale, rotation, translation and children. The scale and translation fields are vectors (SFVec3f), and the rotation is an SFRotation (consisting of a three-element vector and a floating-point rotation in radians).

Because Transform is a grouping node, it has children that are stored in its children field. The children are themselves nodes—in this case, a point light source and a shape whose geometry is a sphere (you find out more about these later in this chapter). Both the light and the shape have their location, orientation, and scale set by the fields of the Transform. For example, the sphere is scaled by (1 2 3), then rotated by 0.7854 radians around the Y axis (0 1 0). Finally, it’s translated 10 meters along X, half a meter along Y, and negative 72.1 meters along Z.

The full Transform node is actually more complex than this because it can specify a center of rotation and an axis for scaling. Those features are beyond the scope of this chapter. There’s also a version of Transform called Group, which simply groups nodes together without performing any transformations on them.

Transformation Hierarchies

Each Transform node defines a new coordinate system, or frame of reference. The scaling, rotation, and translation are all relative to the parent coordinate system. For example, consider figure 47.6.


FIG. 47.6

Transformations and coordinate systems are key ideas in VRML.

A typical VRML world has a number of different coordinate systems within it. There’s the world coordinate system, of course, but a coordinate system also exists for each Transform node in the world. To understand how all this works, take a look at figure 47.7.


FIG. 47.7

The transformation hierarchy for a pool table.

The top-level Transform node is used to position the pool table itself in the world coordinate system; this positioning might involve scaling the table, rotating it to a different orientation, and translating it to a suitable location. Each of the balls on the table has its own Transform node for positioning the ball on the table. Each ball, therefore, has its own little coordinate system that is embedded within the coordinate system of the pool table. As the balls move, they move relative to the table’s frame of reference. Similarly, the table’s coordinate system is embedded within the coordinate system of the room.

Each of these coordinate systems has its own origin. The coordinate system for each ball might have its origin at the geometric center of the ball itself. The coordinate system of the table might have its origin at the geometric center of the table. The coordinate system of the room might have its origin in the corner near the door. The Transform nodes define the relationships between these coordinate systems. Listing 47.3 shows this transformation hierarchy as it would appear in a VRML file.

Listing 47.3 A pool table and balls.

#VRML V2.0 utf8
DirectionalLight { direction -1 -1 -1 }
DirectionalLight { direction 1 1 1 }
Transform {
  translation 5 1 2   # location of pool table in room
  children [
    Shape {   # Pool table
      appearance Appearance { material Material { diffuseColor 0 1 0 } }
      geometry Box { size 6 0.1 4 }
    }
    Transform {
      translation 0 0.35 0.75
      children [
        Shape {
          appearance Appearance { material Material { diffuseColor 1 0 0 } }
          geometry Sphere { radius 0.3 }
        }
      ]
    }
    Transform {
      translation 1.5 0.35 0
      children [
        Shape {
          appearance Appearance { material Material { diffuseColor 0 0 1 } }
          geometry Sphere { radius 0.3 }
        }
      ]
    }
    Transform {
      translation -0.9 0.35 0.45
      children [
        Shape {
          appearance Appearance { material Material { diffuseColor 1 0 1 } }
          geometry Sphere { radius 0.3 }
        }
      ]
    }
  ]
}

Notice that there are Transform nodes in the children field of another Transform node; this is how the transformation hierarchy is represented.

Understanding how coordinate systems work in VRML is very, very important. When you start animating your VRML world using Java, you’ll often be moving and rotating objects by altering the fields of their Transform nodes.

Shapes

Among the most common of the leaf nodes is Shape. The Shape node is used to create visible objects. Everything you see in a VRML scene is created with a Shape node.

The Shape node has only two fields: geometry and appearance. The geometry field specifies the geometric description of the object, while the appearance field gives its surface properties. Listing 47.4 shows a typical Shape node.

Listing 47.4 An example of a Shape node.

Shape {
  geometry Sphere { radius 2 }
  appearance Appearance { material Material { diffuseColor 1 0 0 } }
}

This example creates a red sphere with a radius of two meters. The geometry field has a type of SFNode, and in this case it has a Sphere node as its value. The sphere has a radius field with a value of 2.0 meters.

The appearance field can only take one type of node as its value: an Appearance node. The Appearance node has several fields, one of which is illustrated here: the material field. The material field can only take a Material node as its value. At first, these “appearance Appearance” and “material Material” sequences may seem odd and redundant, but as you see later, they actually turn out to be useful. The other fields of the Appearance node let you specify a texture map for the shape, along with information about how the texture map should be scaled, rotated, and translated. You learn more about the Appearance node in the section “Appearance” later in this chapter.

The Material node specifies only one field in this example: the diffuseColor of the sphere. In this case, it has a red component of 1.0 and a value of 0.0 for each of the green and blue components. As you see later in this chapter, the Material node can also specify the shininess, transparency, and other surface properties for the shape.

Geometry

There are 10 geometric nodes in VRML. Four of them are straightforward: Sphere, Cone, Cylinder and Box. There’s also a Text node that creates large text in a variety of fonts and styles, an ElevationGrid node that’s handy for terrain, and an Extrusion node that allows surfaces of extrusion or revolution to be created. Finally, the PointSet, IndexedLineSet, and IndexedFaceSet nodes let you get right down to the point, line, and polygon level.

Sphere, Cone, Cylinder, and Box

The Sphere node has a radius field that gives the size of the sphere in meters. Remember that this is a radius, not a diameter; the default value of 1.0 produces a sphere that's two meters across.

A Cone has a bottomRadius field that gives the radius of the base of the cone. It also has a height and a pair of flags (side and bottom) that indicate whether the sides and/or bottom should be visible.

Like the Cone, the Cylinder node has fields that indicate which parts are visible: bottom, side, and top. This node also has a height and a radius.

The Box node is simple: it only has a size field, which is a three-element vector (an SFVec3f) that gives the X, Y, and Z dimensions of the box. In VRML 1.0, Box was called “Cube”. That name was misleading, though, because the sides are not necessarily all the same length.
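As a quick sketch of the fields just described (the values are arbitrary), here's what the four primitives look like as the geometry of Shape nodes:

Shape { geometry Sphere { radius 0.5 } }
Shape { geometry Cone { bottomRadius 0.5 height 1 bottom FALSE } }
Shape { geometry Cylinder { radius 0.3 height 1 top FALSE } }
Shape { geometry Box { size 1 2 0.5 } }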

Figure 47.8 shows these four basic geometric primitives.


FIG. 47.8

The Sphere, Cone, Cylinder, and Box nodes are the simplest geometric primitives in VRML.

ElevationGrid, Extrusion, and Text

The ElevationGrid node is useful for creating terrain; it stores an array of heights (Y values) that are used to generate a polygonal representation of the landscape. This data is sometimes referred to as a heightfield.

The Extrusion node takes a 2D cross-section and extrudes it along a path (open or closed) to form a three-dimensional shape.

The Text node creates flat, 2D text that can be positioned and oriented in the three-dimensional world.
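A minimal Text shape might look like this (the string is arbitrary):

Shape {
  geometry Text { string [ "Hello, VRML" ] }
}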

Figure 47.9 shows the Text node in action.


FIG. 47.9

The Extrusion, ElevationGrid, and Text nodes are very useful.

Points, Lines, and Faces

The PointSet node is useful for creating a cloud of individual points, and the IndexedLineSet node is handy for creating geometry that consists entirely of line segments.

However, the most important and widely used geometric node is the IndexedFaceSet. This node allows you to specify any arbitrary shape by listing the vertices of which it’s composed and the faces that join the vertices together. Most of the objects you find in a VRML world are IndexedFaceSets, and a large part of any VRML file is made up of long lists of X, Y, and Z coordinates. Figure 47.10 shows an object made from an IndexedFaceSet.


FIG. 47.10

An IndexedFaceSet can create arbitrarily complex shapes.
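As a minimal sketch of the idea, the following IndexedFaceSet builds a single square face; its Coordinate node lists the vertices, and coordIndex joins them by index, with -1 marking the end of a face:

Shape {
  geometry IndexedFaceSet {
    coord Coordinate {
      point [ 0 0 0, 1 0 0, 1 1 0, 0 1 0 ]   # four vertices
    }
    coordIndex [ 0, 1, 2, 3, -1 ]            # one face joining vertices 0 through 3
  }
}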

Appearance

The Appearance node (which is only found in the appearance field of a Shape node) has three fields. One is used to specify a material for the shape, the second provides a texture map, and the third gives texture transform information.

The example shown in listing 47.5 will make this clearer.

Listing 47.5 Using the Appearance node.

#VRML V2.0 utf8
DirectionalLight { direction -1 -1 -1 }
DirectionalLight { direction 1 -1 -1 }
DirectionalLight { direction 0 0 -1 }
Shape {
  geometry Sphere { }
  appearance Appearance {
    material Material {
      diffuseColor 0 0 0.9
      shininess 0.8
      transparency 0.6
    }
    texture ImageTexture {
      url "brick.bmp"
    }
    textureTransform TextureTransform { scale 5 3 }
  }
}

This example creates a blue sphere that is shiny and partially transparent. It applies a brick texture, loaded from a BMP file out on the Web, to the surface of the sphere. The texture coordinates are scaled up, which makes the texture itself smaller. This causes it to get repeated, or tiled, across the surface as needed. Figure 47.11 shows the finished sphere.


FIG. 47.11

Texture-mapping makes objects look more detailed than they actually are.

In addition to the diffuseColor, shininess, and transparency, a Material node can specify the emissiveColor (for objects that appear to glow), the specularColor (for objects that have a metallic highlight), and an ambientIntensity factor (which indicates what fraction of the scene’s ambient light should be reflected).

The previous example shows an ImageTexture, which loads the texture from an image map (in this case, a Windows BMP file). Another alternative would be to use a MovieTexture node, which would specify an MPEG file that would produce an animated texture on the surface. You could also use a PixelTexture node, in which case you would probably generate the texture map using Java. Generating texture maps is beyond the scope of this chapter.

The TextureTransform node allows you to scale the texture coordinates, shift them, and rotate them. This node is like a two-dimensional version of the Transform node.

Instancing

VRML files can be pretty big. That means they take a long time to download, and the nodes can take up a lot of memory. Is there some way to reduce this bloat? It turns out that there is. You can re-use parts of the scene by creating additional instances of nodes or complete subtrees.

Earlier on, you saw how it’s possible to assign a name to a node using DEF. Once you’ve done that, you can create another instance of the node by using USE. Listing 47.6 shows an example.

Listing 47.6 An example of instancing.

#VRML V2.0 utf8
DirectionalLight { direction -1 -1 -1 }
DirectionalLight { direction 1 -1 -1 }
DEF Ball Shape {
  appearance Appearance { material Material { diffuseColor 1 0 0 } }
  geometry Sphere { }
}
Transform {
  translation -8 0 0
  children [ USE Ball ]
}
Transform {
  translation 8 0 0
  children [ USE Ball ]
}

The sphere is created once and then “instanced” twice: once inside a Transform that shifts it eight meters to the left, and once inside a Transform that shifts it eight meters to the right.

Note that USE does not create a copy of a node; it simply re-uses the node in memory. As you see later, this distinction makes a difference: if a behavior came along and altered the color of the ball, it would affect all three instances. Figure 47.12 shows this relationship.


FIG. 47.12

Instancing of nodes saves memory.

Lights

VRML supports three different types of light sources: PointLight, SpotLight, and DirectionalLight. One important point to keep in mind is that the more light sources you add to a scene, the more work the computer has to do in order to compute the lighting on each object. You should avoid having more than a few lights turned on at once.

All of the lights have the same basic set of fields: intensity, color, and on (which, not surprisingly, indicates that the light is on). Lights also have an ambientIntensity, which indicates how much light the source contributes to the ambient illumination of the scene, as well as some attenuation factors (which are beyond the scope of this chapter).

PointLight

A PointLight has a location field that indicates where the light is placed within its parent’s coordinate system. PointLights radiate equally in all directions.

SpotLight

SpotLights are similar to PointLights, except they also have a direction field that indicates which way they’re pointing (again, relative to their parent’s coordinate system). SpotLights also have some additional information (beamWidth and cutOffAngle) that describes the cone of light that they produce.
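A SpotLight aimed straight down might look something like this (the values are illustrative):

SpotLight {
  location 0 5 0
  direction 0 -1 0   # aim straight down
  beamWidth 0.3      # inner cone of full intensity, in radians
  cutOffAngle 0.6    # no light outside this angle
}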

DirectionalLight

Unlike PointLight and SpotLight, a DirectionalLight has no location. It appears to come from infinitely far away, and the light it emits travels in parallel rays along a single direction. A DirectionalLight puts less of a burden on the rendering engine, which can result in improved performance.

Sound

One of the most important additions to VRML 2.0 is support for sound. Two nodes are used for this purpose: Sound and AudioClip.

A Sound node is a lot like a SpotLight, except that it emits sound instead of light. It has a location, a direction vector, and an intensity. It also contains an AudioClip node to act as a source for the sound.

An AudioClip node gives the URL of the sound source (a WAV file or MIDI data), a human-readable description of the sound (for users with no sound capabilities), a pitch adjustment, and a flag that indicates whether the sound should loop.

Viewpoint

The Viewpoint node allows the author of a world to specify a location and orientation from which the scene can be viewed. The Viewpoint is part of the transformation hierarchy, and the user is “attached” to it. In other words, you can move the user around the environment at will by altering the values in the Transform nodes above the Viewpoint.
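For example, a Viewpoint placed under a Transform travels with that Transform (the values here are arbitrary):

Transform {
  translation 0 1.6 10   # roughly eye height, ten meters back
  children Viewpoint { description "Entrance" }
}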

Other VRML Nodes

There are a number of other nodes in VRML that are beyond the scope of this chapter, including grouping nodes for automatically switching the level of detail (LOD) and for selecting any of several different subtrees (Switch). For details about these and other nodes, check out the full VRML specification online. See the reference at the end of this chapter.

The Sensor Nodes

Interactivity is a key element of the VRML 2.0 specification; therefore, a number of nodes are dedicated to detecting various types of events that take place in the virtual environment. These nodes are referred to as sensors.

At the moment, there are seven such sensors: TimeSensor, TouchSensor, PlaneSensor, SphereSensor, CylinderSensor, ProximitySensor, and VisibilitySensor.

Sensors are able to generate events, which should be familiar to anyone who’s programmed for Windows, the Macintosh, X-Windows, or other windowing environments. An event contains a timestamp (indicating the time at which the event occurred), an indication of the type of event, and event-specific data. All sensors generate events, and they can generate more than one type of event from a single interaction.

A complete description of all the sensors and how they work is beyond the scope of this chapter. However, two sensors in particular are worth a closer look: TouchSensor and TimeSensor.

TouchSensor

A TouchSensor is a node that detects when the user has touched some geometry in the scene. The definition of touch is quite open, in order to support immersive environments with 3D pointing devices as well as more conventional desktop metaphors that use a 2D mouse. Touching in a desktop environment is usually done by clicking the object on-screen.

The TouchSensor node enables touch detection for all its siblings. In other words, if the TouchSensor is a child of a Transform, it detects touches on any shapes under that same Transform.

Listing 47.7 shows how a TouchSensor would be used.

Listing 47.7 A TouchSensor example.

#VRML V2.0 utf8
Transform {
  children [
    TouchSensor { }
    Shape { geometry Sphere { } }
    Shape { geometry Box { } }
  ]
}

A TouchSensor generates several events, but the two most important ones are isActive and touchTime. The isActive event is an SFBool that is sent with a value of TRUE when contact is first made (and FALSE when contact ends); touchTime is an SFTime value that indicates the time at which contact was made.

A TouchSensor can be used for operating a light switch or door knob, or for triggering any event that is based on user input.

Clicking either the sphere or the box in the example shown previously would cause the TouchSensor to send both an isActive event and a touchTime event, as well as several other events that are beyond the scope of this chapter.

TimeSensor

A TimeSensor is unusual, in that it’s the only sensor that doesn’t deal with user input. Instead, it generates events based on the passage of time.

Time is very, very important when doing simulations—especially when it comes to synchronizing events. In VRML, the TimeSensor is the basis for all timing; it’s a very flexible and powerful node, but a bit difficult to understand.

The best way to visualize a TimeSensor is to think of it as a kind of clock. It has a startTime and a stopTime. When the current time reaches the startTime, the TimeSensor starts generating events. It continues until it reaches the stopTime (assuming the stopTime is greater than the startTime). You can enable or disable a TimeSensor by using its enabled field.
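In VRML terms, a bare-bones clock might be set up like this (the field values are illustrative; loop and cycleInterval are discussed below):

DEF Clock TimeSensor {
  cycleInterval 10   # each cycle lasts ten seconds
  loop TRUE          # keep cycling instead of stopping after one interval
}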

Sometimes you want to generate continuous time values. Other times you want to generate discrete events, say once every five seconds. Still other times, you want to know what fraction of the total time has elapsed. A TimeSensor is able to do all three of these things simultaneously. It does this by generating four different kinds of events, one for each of these three situations and one that indicates when the TimeSensor goes from active to inactive.

The first type of event is simply called time. It gives the system time at which the TimeSensor generated an event.

Bear in mind that although time flows continuously in VRML, TimeSensor nodes only generate events sporadically. Most VRML browsers will cause TimeSensors to send events once per rendered frame, but there’s no guarantee that this will always be the case. The time value output by a TimeSensor is always correct, but there’s no way to be sure you’re going to get values at any particular time.
 

The second type of event is called cycleTime. The TimeSensor has a cycleInterval field, and whenever a cycleInterval has elapsed, the TimeSensor generates a cycleTime event. Again, there are no guarantees that the cycleTime event will be generated at any particular time, only that it will be generated after the cycle has elapsed. The cycleTime is useful for events that have to happen periodically. With loop set to TRUE, the timer will run until it reaches the stopTime, and multiple cycleTime events will be generated. If the stopTime is less than the startTime (it defaults to zero) and loop is TRUE, the timer will run continuously forever and generate a cycleTime event after every cycleInterval.

The third type of event is called fraction_changed. It’s a floating-point number between 0.0 and 1.0 that indicates what fraction of the cycleInterval has elapsed. It’s generated at the same time that time events are.

The final type of event is isActive, which is an SFBool that gets set to TRUE when the TimeSensor starts generating events (such as when the startTime is reached). isActive is set to FALSE when the TimeSensor stops generating events.

Figure 47.13 shows how to conceptualize a TimeSensor node.


FIG. 47.13

The TimeSensor node provides a time base.

The TimeSensor is probably the most complex and potentially confusing node in VRML 2.0, and the details of its operation are extremely subtle. Before making extensive use of it, you should read the description in the VRML 2.0 specification. If you still have problems with it, post a message to the comp.lang.vrml newsgroup, and someone should be able to help.
 

Routes

Now that you’re able to generate events from sensors, you need to be able to do something with those events. This is where the ROUTE statement comes in.

A ROUTE is not a node. It’s a special statement that tells the VRML browser to connect a field in one node to a field in another node. For example, you could connect a TimeSensor’s fraction_changed event output to a light’s intensity field as shown in listing 47.8.

Listing 47.8 Using a ROUTE.

#VRML V2.0 utf8
Viewpoint { position 0 -1 5 }
DEF Fizzbin TimeSensor { loop TRUE cycleInterval 5 }
DEF Bulb PointLight { location 2 2 2 }
Shape { geometry Sphere { } }
ROUTE Fizzbin.fraction_changed TO Bulb.intensity

This example would cause the light intensity to vary continuously, increasing from 0.0 to 1.0 and then jumping back down to zero again.

Note what’s happening in this example. The default value for the enabled field of the TimeSensor is TRUE, so the timer is ready to run. Because the default value for startTime is zero and the current time is greater than that, the TimeSensor will be generating events. Because loop is TRUE and the default value for stopTime is zero (which is less than or equal to the startTime), the timer will run continuously. The cycleInterval is five seconds, so the fraction_changed value will ramp up from 0.0 to 1.0 over that interval.

The ROUTE statement is what connects the fraction_changed value in the TimeSensor named Fizzbin to the intensity field in the PointLight named Bulb. Note that both ROUTE and TO should be all-uppercase.

Not all fields can be routed to or routed from; for example, the radius field of a Sphere node can’t be the source or destination of a ROUTE. However, you can change the size of a sphere by altering the scale field of the surrounding Transform node. Check the VRML specification for details.
 

The types of the fields referenced in a ROUTE must match. In other words, it's possible to route the TimeSensor's fraction_changed value (an SFFloat) to the PointLight's intensity field (also an SFFloat). However, routing an SFBool (like a TimeSensor's isActive field) to the PointLight's intensity field would be an error.

Interpolators

There are many times when you want to compute a series of values for some field. For example, you may want to have a flying saucer follow a particular path through space. This is easily accomplished using an interpolator.

Every interpolator node in VRML has two arrays: key and keyValue. Each interpolator also has an input, called set_fraction, and an output, called value_changed. If you imagine a 2D graph with the keys along the X axis and the key values along the Y axis, you’ll have an idea of how an interpolator works (see fig. 47.14).


FIG. 47.14

Linear interpolation computes intermediate values.

The keys and the key values have a one-to-one relationship. For every key, there’s a corresponding keyValue. When an interpolator receives a set_fraction event, the incoming fraction is compared to all of the keys. The two keys on either side of the incoming fraction are found, along with the corresponding key values, and a value is computed that’s the same percentage of the way between the key values as the incoming fraction is between the keys. For example, if the incoming fraction value were two-thirds of the way between the 15th and 16th keys, then the output would be two-thirds of the way between the 15th and 16th key values.
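In Java terms, the computation looks something like this minimal sketch (a scalar version of the lookup; VRML's interpolators do the same thing for vectors, colors, and rotations):

// A sketch of the linear interpolation an interpolator performs.
// key[] must increase steadily; keyValue[] has one entry per key.
static float interpolate(float[] key, float[] keyValue, float fraction) {
    if (fraction <= key[0]) return keyValue[0];
    for (int i = 1; i < key.length; i++) {
        if (fraction <= key[i]) {
            float t = (fraction - key[i - 1]) / (key[i] - key[i - 1]);
            return keyValue[i - 1] + t * (keyValue[i] - keyValue[i - 1]);
        }
    }
    return keyValue[keyValue.length - 1];   // past the last key
}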

There are half a dozen different interpolators in VRML: ColorInterpolator, CoordinateInterpolator, NormalInterpolator, OrientationInterpolator, PositionInterpolator, and ScalarInterpolator.

Each serves a purpose of some kind, but this chapter only uses one: the PositionInterpolator.

In a PositionInterpolator, the key values (and value_changed) are of type SFVec3f—that is, they’re 3D vectors. Listing 47.9 shows an example of a PositionInterpolator at work.

Listing 47.9 A PositionInterpolator at work.

#VRML V2.0 utf8
DEF Saucer-Transform Transform {
  scale 1 0.25 1
  children [
    Shape { geometry Sphere { } }
  ]
}
DEF Saucer-Timebase TimeSensor { loop TRUE cycleInterval 5 }
DEF Saucer-Mover PositionInterpolator {
  key [ 0.0, 0.2, 0.4, 0.6, 0.8, 1.0 ]
  keyValue [ 0 0 0, 0 2 7, -2 2 0, 5 10 -15, 5 5 5, 0 0 0 ]
}
ROUTE Saucer-Timebase.fraction_changed TO Saucer-Mover.set_fraction
ROUTE Saucer-Mover.value_changed TO Saucer-Transform.set_translation

The saucer is just a sphere that’s been squashed along the Y axis using a scale in the surrounding Transform node. The translation field for the Transform isn’t given, so it defaults to (0 0 0). The TimeSensor is just like the one you looked at earlier.

The Saucer-Mover is a PositionInterpolator. It has six keys, going from 0.0 to 1.0 in steps of 0.2. There's no reason why it had to go in fixed-size steps; any set of values can be used, as long as the values steadily increase.

There are six values that correspond to the six keys. Each one is a three-element vector, giving a particular position value for the saucer.

Once the nodes are defined, you can create the routes. The first ROUTE connects the TimeSensor’s fractional output to the PositionInterpolator’s fractional input. As the TimeSensor runs, the input to the PositionInterpolator increases steadily from 0.0 to 1.0, which it reaches after five seconds (the cycleInterval). The second ROUTE connects the value_changed output of the PositionInterpolator to the translation field of the saucer’s Transform node; this ROUTE is what lets the interpolator move the saucer. Figure 47.15 shows the relationship between these nodes.


FIG. 47.15

The routes between nodes for the flying saucer example.

Note that the saucer doesn’t “jump” from one value to another; its location is linearly interpolated between entries in the PositionInterpolator’s keyValue field.

Scripts and the Interface to Java

So far, you’ve seen how to create sensors to detect user input or the passage of time, as well as how to create interpolators to compute intermediate values for various quantities. You’ve also seen how to connect nodes together using ROUTEs. This gives us quite a bit of power, and there are a number of fun things you can do using nothing more than those basic building blocks.

However, you’re a Java programmer. You want to be able to use the power of the Java language in building your VRML worlds, and the way you do this is through the Script node.

The Script Node

The Script node is a kind of nexus. Events flow in and out of the node, just as they do for interpolators or other types of nodes. However, the Script node is special: it allows an actual program written in Java to process the incoming events and generate the outgoing events. Figure 47.16 shows the relationship between the Script node in VRML and the Java code that implements it.


FIG. 47.16

Java accesses VRML through a Script node.

The Script node has only one built-in field that you need to worry about at this stage—url, which gives the URL of a Java bytecode file somewhere on the Internet. There are a couple of other fields, but you don’t need to worry about them here.

The Script node can also have a number of declarations for incoming and outgoing events, as well as fields that are accessible only by the script. For example, listing 47.10 shows a Script node that can receive two incoming events (an SFBool and an SFVec3f), and can send three outgoing events. It also has two local fields.

Listing 47.10 A Script node.

#VRML V2.0 utf8
Script {
  url "bigbrain.class"
  eventIn SFBool recomputeEverything
  eventIn SFVec3f spotToBegin
  eventOut SFBool scriptRan
  eventOut MFVec3f computedPositions
  eventOut SFTime lastRanAt
  field SFFloat rateToRunAt 2.5
  field SFInt32 numberOfTimesRun 0
}

The eventIn, eventOut, and field designators are used to identify incoming events, outgoing events, and fields that are private to the Script node.

The Java bytecode file bigbrain.class is loaded in, and the constructor for the class is called. The class should contain a method called initialize(), which is called before any events are sent to the class. As events arrive at the Script node, they’re passed to the processEvent() method of the class. That method looks like this:

public void processEvent(Event ev)

where ev is an incoming event. An event is defined as follows:

class Event {
  public String getName();
  public ConstField getValue();
  public double getTimeStamp();
}

The getName() method returns the name of the incoming event, which is the name the event was given in the Script node in the VRML file. The getTimeStamp() method returns the time at which the event was received at the Script node. The getValue() method returns a ConstField which should then be cast to the actual field type (such as ConstSFBool or ConstMFVec3f).

There are Java classes for each type of VRML field. Each of these classes defines methods for reading (and possibly writing) their values.

A Simple Example

Let’s say you wanted to have a light change to a random intensity whenever the user touches a sphere. VRML itself doesn’t have any way to generate random numbers, but Java does (the java.util.Random class). Listing 47.11 shows how you would construct your VRML world.

Listing 47.11 A simple random light.

#VRML V2.0 utf8
Viewpoint { position 0 -1 5 }
NavigationInfo { headlight FALSE }
DEF RandomBulb DirectionalLight { direction -1 -1 -1 }
Transform {
  children [
    DEF Touch-me TouchSensor { }
    Shape {
      geometry Sphere { }   # something for the light to shine on
    }
  ]
}
DEF Randomizer Script {
  url "RandLight.class"
  eventIn SFBool click
  eventOut SFFloat brightness
}
ROUTE Touch-me.isActive TO Randomizer.click
ROUTE Randomizer.brightness TO RandomBulb.intensity

Most of this example should be familiar territory by now. The DirectionalLight is given the name “RandomBulb” using a DEF. A Sphere shape and a TouchSensor are grouped as children of a Transform, which means that touching the Sphere will trigger the TouchSensor.

The Script node is given the name Randomizer, and it has one input (an SFBool called click) and one output (an SFFloat called brightness).

When the RandLight class is first loaded, its constructor is invoked. Next, its initialize() method is called. The initialize() method can do whatever it likes, including send initial events.

Whenever you touch the sphere, the TouchSensor's isActive field is set to TRUE and routed to the script's click eventIn; this in turn causes an event to be sent to the processEvent() method of the RandLight class. The event would have a name of “click”, and a value that would be cast to a ConstSFBool. That ConstSFBool would have a value of TRUE, which would be returned by its getValue() method. When you release the button, another event is sent that's identical to the first, but this time with a value of FALSE in the ConstSFBool.

When any of the methods in the RandLight class sets the brightness value (as described later in this chapter), that event gets routed to the intensity field of the DirectionalLight called RandomBulb.

The View from Java Land

Now that you’ve seen how the VRML end of things works, let’s look at it from the Java perspective. You return to our random-light project shortly, but first let’s take a little detour through the VRML package.

The VRML package is imported as you would expect:

import vrml.*;

This package defines a number of useful classes. There’s a class called Field (derived from Object) that corresponds to a VRML field. From Field there are a number of derived classes, one for each of the basic VRML data types, such as SFBool and SFColor. There are also “read-only” versions of all those classes; they have a Const prefix, as in ConstSFBool.

The read-only versions of the fields provide a getValue() method that returns a Java data type corresponding to the VRML type. For example, the ConstSFBool class looks like this:

public class ConstSFBool extends Field {
  public boolean getValue();
}

The read-write versions of the fields also provide the getValue() method, but in addition they have a setValue() method that takes a parameter (such as a boolean) and sets it as the value of the field. Doing this causes an event to be sent from the Script node.

There are, of course, classes that correspond to multiple-valued VRML types such as MFFloat. These classes have the getValue() and setValue() methods, but they also have a method for setting a single element of the array: set1Value(). Listing 47.12 shows what the MFVec3f class looks like.

Listing 47.12 The MFVec3f class from the vrml.field package.

public class MFVec3f extends MField {
  public MFVec3f(float vecs[][]);
  public MFVec3f(float vecs[]);
  public MFVec3f(int size, float vecs[]);
  public void getValue(float vecs[][]);
  public void getValue(float vecs[]);
  public void setValue(float vecs[][]);
  public void setValue(int size, float vecs[]);
  public void setValue(ConstMFVec3f vecs);
  public void get1Value(int index, float vec[]);
  public void get1Value(int index, SFVec3f vec);
  public void set1Value(int index, float x, float y, float z);
  public void set1Value(int index, ConstSFVec3f vec);
  public void set1Value(int index, SFVec3f vec);
  public void addValue(float x, float y, float z);
  public void addValue(ConstSFVec3f vec);
  public void addValue(SFVec3f vec);
  public void insertValue(int index, float x, float y, float z);
  public void insertValue(int index, ConstSFVec3f vec);
  public void insertValue(int index, SFVec3f vec);
}

An MFVec3f is an array of three-element vectors (the three elements being the X, Y, and Z components, as you saw earlier). A single entry is a float[], and an MFVec3f is a float[][] type in Java.

Notice that there are three versions of setValue(): one that takes a two-dimensional array of floats, one that takes a flat array of floats along with a count, and one that takes a ConstMFVec3f.
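For example, a script might build and modify an MFVec3f like this (a hypothetical fragment, not part of the chapter's example):

float[][] pts = { {0f, 0f, 0f}, {1f, 2f, 3f} };   // two 3D points
MFVec3f positions = new MFVec3f(pts);             // construct from a float[][]
positions.set1Value(1, 5f, 5f, 5f);               // replace the second vector
positions.addValue(0f, 1f, 0f);                   // append a third vector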

Not only is there a class in the VRML package corresponding to a field in a VRML node, but there’s also a class for VRML nodes themselves. The Node class provides methods for accessing exposedFields, eventIns, and eventOuts by name. For example, the name of a field in the node is passed to getExposedField(), and it returns a reference to the field. The return value needs to be cast to be of the appropriate type.

There’s also a Script class, which is related to Node. When you write Java code to support a Script node, you create a class that’s derived from the Script class. The Script class provides a getField() method for accessing a field given its name, and a similar getEventOut() method. It also has an initialize() method as described earlier, and of course the processEvent() method. There’s also a shutdown() method that gets called just before the Script node is discarded, in order to allow the class to clean up after itself.

The Script class also defines two other methods: processEvents() (not to be confused with processEvent()), which is given an array of events and a count so that they can be processed more efficiently than by individual processEvent() calls, and an eventsProcessed() method, which is called after a number of events have been delivered.

And finally, there’s a Browser class which provides methods for finding such things as the name and version of the VRML browser that’s running, the current frame rate, the URL of the currently loaded world, and so on. You can also add and delete ROUTEs and even load additional VRML code into the world either from a URL or directly from a String.

Back to RandLight

Now let’s look at some Java code. Listing 47.13 shows the Java source for the RandLight class, which would be stored in a file called RandLight.java.

Listing 47.13 The RandLight class.

// Code for a VRML Script node to set a light to a random intensity
import vrml.*;
import vrml.field.*;   // field classes such as SFFloat and ConstSFBool
import vrml.node.*;    // the Script base class
import java.util.*;

public class RandLight extends Script {
  Random generator = new Random();
  SFFloat brightness;   // the Script node's "brightness" eventOut

  public void initialize() {
    // Look up the eventOut here, after the browser has associated
    // this class with its Script node
    brightness = (SFFloat) getEventOut("brightness");
    brightness.setValue(0.0f);
  }

  public void processEvent(Event ev) {
    if (ev.getName().equals("click")) {
      ConstSFBool value = (ConstSFBool) ev.getValue();
      if (value.getValue() == false) {   // touch complete
        brightness.setValue(generator.nextFloat());
      }
    }
  }
}

The RandLight.java file defines a single class, called RandLight, that extends the Script class defined in the VRML package as described earlier.

The RandLight class contains a random number generator, and it also has an SFFloat called brightness. As described earlier, the Script class has a method called getEventOut(), which retrieves a reference to an eventOut in the Script node in the VRML file using the name of the field (in this case, brightness). Because the type of eventOut (SFBool, SFVec3f, and so on) is unknown, the getEventOut() method simply returns a Field that is then cast to be a field of the appropriate type using (SFFloat). This is then assigned to the variable called brightness, which is of type SFFloat. The variable didn’t have to be called brightness, but it’s a good idea to keep the field name in the Script node consistent with its corresponding variable in the class that supports that Script node.

Like all read-write classes that correspond to VRML fields, the SFFloat class has a method called setValue(). This method takes a float parameter and stores it as the value of that field. This in turn causes the Script node in VRML to generate an outgoing event, which may be routed somewhere.

The rest of the code is straightforward. The initialize() method looks up the brightness eventOut and sets it to zero. The processEvent() method, which gets called when an event arrives at the Script node in VRML, checks for “click” events and sets the brightness to a random value on FALSE clicks (that is, releases of the mouse button). That's all there is to it.

The Towers of Hanoi

Now that you have learned how all of the pieces work, it's time to put them together. The remainder of this chapter takes one of the oldest puzzles in recorded history and implements it using the latest in cutting-edge technologies.

The Towers of Hanoi is a very simple puzzle, yet it's intriguing and fun to watch. There are three vertical posts standing side by side. On one of the posts is a stack of disks. Each disk has a different diameter. The disks are stacked so that the largest disk is on the bottom, the next-largest is on top of it, and so on until the smallest disk is on top. Figure 47.17 is a sketch showing the arrangement.


FIG. 47.17

The initial configuration for the Towers of Hanoi puzzle.

The goal is to move the entire stack to another post. You can only move one disk at a time, and you are not allowed to place a larger disk on top of a smaller one. Those are the only rules.

If you were moving the stacks by hand, you would start by taking the top-most (smallest) disk from the first post and placing it on the second post. You would then take the next-largest disk and place it on the third post. Then you’d take the disk from the second post and place it on the third one. This process would continue until you’d moved all of the disks.
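The sequence of moves follows a classic recursive pattern. Here's the idea as a plain Java sketch (just the algorithm, not the Script code developed later in this chapter):

// Move n disks from post "from" to post "to", using "spare" as a holding post.
static void moveStack(int n, char from, char to, char spare) {
    if (n == 0) return;                  // nothing left to move
    moveStack(n - 1, from, spare, to);   // clear the smaller disks out of the way
    System.out.println("Move disk " + n + " from " + from + " to " + to);
    moveStack(n - 1, spare, to, from);   // restack them on top of the moved disk
}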

Even though it’s fun to watch the stacks being moved, it’s a lot less fun to actually do it. (I could watch people work all day!)

Building a VRML/Java application to move the stacks is a multi-stage process. The first step is to build the posts and base, along with some lighting and a nice viewpoint. The disks are added next and, finally, the script that animates them. The process of building this simple world will make use of everything you've learned about in this chapter, including TouchSensors, TimeSensors, PositionInterpolators, Scripts, ROUTE statements, and basic VRML nodes.

The Posts and the Base

The three posts are created using Cylinder nodes, and the base is a Box. The base is positioned first, as shown in listing 47.14.

Listing 47.14 The base of the Towers of Hanoi.

#VRML V2.0 utf8
# Base
Transform {
  translation 0 0.0625 0
  children [
    Shape {
      appearance Appearance { material Material { diffuseColor 0.50 0.50 0 } }
      geometry Box { size 1.5 0.125 0.5 }
    }
  ]
}

The box is 1.5 meters wide (X axis), 0.125 meters high (Y axis), and 0.5 meters deep (Z axis). Because you want it resting on the “ground” (the X-Z plane), you need to position its lowest point at Y=0. Because the origin of the box is at its geometric center, you need to shift it vertically by half of its height: half of 0.125 is 0.0625, which is why you have a translation of (0 0.0625 0): no translation in X or Z, and a 0.0625 meter translation in Y.

The next step is to add the first post, as shown in listing 47.15.

Listing 47.15 The base and one post.

# Posts
Transform {
  translation 0 0.375 0
  children DEF Cyl Shape { geometry Cylinder { height 0.5 radius 0.035 } }
}

The first post is a Cylinder that is half a meter high with a radius of 0.035 meters. This shape is assigned the name Cyl, because you will be making “USE” of it later. You want the bottom of the post to rest on top of the box. Because the origin of the Cylinder is at its geometric center, you need to shift it vertically by half of its height (0.25 meters) plus the height of the base (0.125 meters). Because 0.25 + 0.125 is 0.375, this shape has a translation of (0 0.375 0). Because there’s no X or Z translation, the post will be centered over the middle of the box. Figure 47.18 shows what the world looks like so far.


FIG. 47.18

The base and the first (middle) post, looking good.
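Every placement in this world follows the same center-origin arithmetic: lift the object by half its own height, plus the height of whatever is beneath it. A throwaway Java helper (hypothetical, purely for illustration) makes this explicit:

class StackingSketch {
    // Hypothetical helper: the Y translation that rests an object of the given
    // height on top of whatever is stacked beneath it. VRML primitives have
    // their origin at their geometric center, so lift by half the height.
    static float stackY(float objectHeight, float heightBelow) {
        return heightBelow + objectHeight / 2f;
    }
    // stackY(0.125f, 0f)   == 0.0625f -- the base resting on the ground (listing 47.14)
    // stackY(0.5f, 0.125f) == 0.375f  -- a post resting on the base (listing 47.15)
}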

Rather than create two more cylinders, let’s make use of instancing. Listing 47.16 shows how this works.

Listing 47.16 Two more posts, instances of the first.

Transform {
  translation -0.5 0.375 0
  children USE Cyl
}
Transform {
  translation 0.5 0.375 0
  children USE Cyl
}

USE Cyl creates another instance of the post shape that was created earlier. The first Transform moves its post to the left (X = -0.5 meters), the second moves its post to the right (X = 0.5 meters), and both place their posts at the same Y = 0.375 height as the first post.

A WorldInfo node is added to store author information and a title for the world, as well as a NavigationInfo node to put the user’s VRML browser in FLY mode and turn off the headlight. A TouchSensor is added to the base to give the user a way to start and stop the movement of the disks. Finally, some lights are thrown in. Listing 47.17 shows our world so far, and figure 47.19 shows what it looks like in a VRML browser.

Listing 47.17 The complete base and posts.

#VRML V2.0 utf8
WorldInfo {
  title "Towers of Hanoi"
  info "Created by Bernie Roehl (broehl@ece.uwaterloo.ca), July 1996"
}
NavigationInfo { type "FLY" headlight FALSE }
PointLight { location 0.5 0.25 0.5 intensity 6.0 }
PointLight { location -0.5 0.25 0.5 intensity 6.0 }
DirectionalLight { direction -1 -1 -1 intensity 6.0 }
Viewpoint { position 0 0.5 2 }
# Base
Transform {
  translation 0 0.0625 0
  children [
    DEF TOUCH_SENSOR TouchSensor { }
    Shape {
      appearance Appearance { material Material { diffuseColor 0.50 0.50 0 } }
      geometry Box { size 1.5 0.125 0.5 }
    }
  ]
}
# Posts
Transform {
  translation 0 0.375 0
  children DEF Cyl Shape { geometry Cylinder { height 0.5 radius 0.035 } }
}
Transform {
  translation -0.5 0.375 0
  children USE Cyl
}
Transform {
  translation 0.5 0.375 0
  children USE Cyl
}

FIG. 47.19

This is how our world-in-progress looks.

The static part of our world is complete. Now it’s time to add the moving parts—the disks themselves.

The Disks

For this example, I use five disks. The definition of each disk is pretty simple, and is shown in listing 47.18.

Listing 47.18 A disk.

DEF Disk1 Transform {
  translation -0.5 0.305 0
  children [
    Shape {
      appearance Appearance { material Material { diffuseColor 0.5 0 0.5 } }
      geometry Cylinder { radius 0.12 height 0.04 }
    }
  ]
}

The disks are just cylinders. All of the disks are the same, except for the value of the translation (they’re stacked vertically, so the Y component will be different), the value of the radius (each disk is smaller than the one below it), and the diffuseColor of the disk.

If you’re already familiar with VRML 2.0, you’re probably wondering why a PROTO wasn’t used for the disks. That is in fact the way it would normally be done.
 
Unfortunately, this book is being written at a very early stage of VRML 2.0, and no fully compliant browsers are available. In fact, there are only two VRML 2.0 browsers: Sony’s CyberPassage and SGI’s CosmoPlayer. Because CosmoPlayer doesn’t have Java support yet, CyberPassage was used for these examples. CyberPassage has some bugs that are related to the use of prototypes, so it’s necessary to actually replicate the code for each disk. Such is life at the bleeding edge.
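Since the five disk definitions differ only in a few numbers, one pragmatic way to live with that restriction is to generate the replicated VRML from Java. The generator below is mine, not the book's: only Disk1's numbers come from listing 47.18, the remaining radii are illustrative guesses, and the Y values follow from the 0.04-meter disk height.

import java.util.Locale;

public class MakeDisks {
    public static void main(String[] args) {
        // Disk 1 (radius 0.12, Y 0.305) matches listing 47.18; the other radii
        // are assumed, growing 0.02 m per disk, each one disk-height lower.
        for (int i = 0; i < 5; i++) {
            double radius = 0.12 + 0.02 * i;
            double y = 0.305 - 0.04 * i;
            System.out.println("DEF Disk" + (i + 1) + " Transform {");
            System.out.println("  translation -0.5 " + String.format(Locale.US, "%.3f", y) + " 0");
            System.out.println("  children [");
            System.out.println("    Shape {");
            // one color is used here; vary diffuseColor per disk as desired
            System.out.println("      appearance Appearance { material Material { diffuseColor 0.5 0 0.5 } }");
            System.out.println("      geometry Cylinder { radius " + String.format(Locale.US, "%.2f", radius) + " height 0.04 }");
            System.out.println("    }");
            System.out.println("  ]");
            System.out.println("}");
        }
    }
}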
 

There’ll be some additional nodes for each disk, but for now let’s just stop at the geometry. Figure 47.20 shows the posts with the disks stacked in their starting position.


FIG. 47.20

The posts and the disks are ready for action.

Now that all of the geometry is in place, it’s time to start dealing with behavior.

Adding the Interpolators and TimeSensors

There’s going to be a PositionInterpolator for each disk to handle its movement, and it’ll be driven by a TimeSensor node. Let’s look at the interpolator first. The interpolator for the first disk is shown in listing 47.19.

Listing 47.19 The PositionInterpolator for a disk.

DEF Disk1Inter PositionInterpolator {
  key [ 0, 0.3, 0.6, 1 ]
}

There are four keys: 0, 0.3, 0.6, and 1. Each disk is going to move from its current location to a point immediately above the post it’s on. The disk then moves to a point immediately above the post it’s moving to, and finally moves down into position. Four locations, four keys. Notice that no keyValue field is specified; it will be filled in later by our Java code.
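To see how those keys drive the motion, here is roughly what a browser's PositionInterpolator computes (a conceptual sketch in Java, not actual browser code): given a fraction between 0 and 1, it finds the two keys that bracket it and linearly blends the corresponding keyValue entries.

class InterpolatorSketch {
    // Conceptual sketch of linear key interpolation; key and keyValue
    // correspond to the node's fields.
    static float[] interpolate(float fraction, float[] key, float[][] keyValue) {
        int i = 0;
        while (i < key.length - 2 && fraction > key[i + 1])
            i++; // find the bracketing pair of keys
        float t = (fraction - key[i]) / (key[i + 1] - key[i]); // 0 at key[i], 1 at key[i+1]
        float[] result = new float[3];
        for (int c = 0; c < 3; c++)
            result[c] = keyValue[i][c] + t * (keyValue[i + 1][c] - keyValue[i][c]);
        return result;
    }
}

With keys of 0, 0.3, 0.6, and 1, a disk spends the first 30 percent of the cycle rising, the middle 30 percent crossing, and the final 40 percent descending into place.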

The timer associated with each disk is a TimeSensor, as shown in listing 47.20.

Listing 47.20 The TimeSensor for a disk.

DEF Disk1Timer TimeSensor {
  loop FALSE
  enabled TRUE
  stopTime 1
}

The timer is designed to run once each time it’s started (which is why its loop field is FALSE). It starts off being enabled. The startTime is not specified; again, this is because it will be filled in from our Java code.

The next step is to connect the TimeSensor to the PositionInterpolator and the PositionInterpolator to the Transform node for the disk. A pair of ROUTE statements does the trick:

ROUTE Disk1Timer.fraction_changed TO Disk1Inter.set_fraction
ROUTE Disk1Inter.value_changed TO Disk1.set_translation

Our next step is going to be to add a Script node. It will need to be able to update the keyValue field of the PositionInterpolator and the startTime field of the TimeSensor, so let’s add a couple of additional ROUTEs:

ROUTE SCRIPT.disk1Start TO Disk1Timer.startTime
ROUTE SCRIPT.disk1Locations TO Disk1Inter.keyValue

The Script node called SCRIPT will have a disk1Start eventOut into which it will write the start time for the interpolation. This node will also have a disk1Locations eventOut into which it will write the four locations that this disk should move through (current location, above the current post, above the destination post, and final location).

The complete VRML source for a single disk, therefore, looks like listing 47.21.

Listing 47.21 The complete VRML code for a single disk.

DEF Disk1 Transform {
  translation -0.5 0.305 0
  children [
    Shape {
      appearance Appearance { material Material { diffuseColor 0.5 0 0.5 } }
      geometry Cylinder { radius 0.12 height 0.04 }
    }
  ]
}
DEF Disk1Inter PositionInterpolator { key [ 0, 0.3, 0.6, 1 ] }
DEF Disk1Timer TimeSensor { loop FALSE enabled TRUE stopTime 1 }
ROUTE SCRIPT.disk1Start TO Disk1Timer.startTime
ROUTE Disk1Timer.fraction_changed TO Disk1Inter.set_fraction
ROUTE Disk1Inter.value_changed TO Disk1.set_translation
ROUTE SCRIPT.disk1Locations TO Disk1Inter.keyValue

This complete sequence is replicated for each of the five disks. Of course, Disk1 is replaced with Disk2, Disk3, and so on.

Adding the Script Node

To keep things simple, there’s going to be a single Script node to drive the entire simulation. This node has a large number of inputs and outputs, as shown in listing 47.22.

Listing 47.22 The Script node for the Towers of Hanoi.

DEF SCRIPT Script {
  url "Hanoi.class"
  eventIn SFBool clicked
  eventIn SFTime tick
  eventOut MFVec3f disk1Locations
  eventOut SFTime disk1Start
  eventOut MFVec3f disk2Locations
  eventOut SFTime disk2Start
  eventOut MFVec3f disk3Locations
  eventOut SFTime disk3Start
  eventOut MFVec3f disk4Locations
  eventOut SFTime disk4Start
  eventOut MFVec3f disk5Locations
  eventOut SFTime disk5Start
}

The script is loaded from a file called Hanoi.class, which is the result of compiling Hanoi.java. It’s described in excruciating detail later. The clicked eventIn is used to let the Script node know when the user has clicked on the base of the posts (to start or stop the simulation). The tick eventIn is used to advance the simulation.

For each disk, there’s the set of locations that get routed to the PositionInterpolator’s keyValue field as described earlier. There is also a start time that gets routed to the disk’s TimeSensor’s startTime value.

There’s also a ROUTE to connect the TouchSensor on the base to the clicked field of the Script:

ROUTE TOUCH_SENSOR.isActive TO SCRIPT.clicked

A TimeSensor drives the simulation, as shown in listing 47.23.

Listing 47.23 The TimeSensor which drives the simulation.

DEF TIMEBASE TimeSensor {
  cycleInterval 1.5
  enabled TRUE
  loop TRUE
}

This TimeSensor sends a cycleTime event every 1.5 seconds, forever. Each of these cycleTime events triggers the moving of one disk.

And, finally, there’s a ROUTE to connect this timer to the Script node’s tick field:

ROUTE TIMEBASE.cycleTime TO SCRIPT.tick

That’s it for the VRML end of things. Figure 47.21 shows an overall diagram of how the nodes are connected to each other.


FIG. 47.21

The routing relationships for the Towers of Hanoi example.

The complete source for HANOI.WRL is found on the CD-ROM that accompanies this book.

Now it’s time to create our script in Java.

Hanoi.java

The Towers of Hanoi problem is usually given as an example of the power of recursion. An explanation of recursive algorithms is beyond the scope of this chapter, but the basic idea is that a function is able to partition a problem and then call itself to handle each of the two (or more) pieces that result.

The initialize() method of our Hanoi class will be used to generate the complete sequence of moves and store them in a set of arrays. Whenever a message arrives from the TimeSensor, the next step in the sequence will be carried out. The clicked message allows the user to turn the simulation on (or off).

The moves themselves will be stored in three arrays: disks[], startposts[], and endposts[]. The disks[] array stores the number of the disk (0–4, because there are five disks) that’s supposed to be moved. The startposts[] and endposts[] arrays store the starting and ending post numbers (0 through 2, because there are three posts).
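For five disks, the recursive generator shown later in listing 47.32 records the minimum of 2^5 - 1 = 31 moves, well within the 120-entry arrays declared in listing 47.24. As a worked example (mine, not one of the book's listings), here is what the three arrays would contain for a three-disk stack moving from post 0 to post 2:

disks[]      : 0  1  0  2  0  1  0
startposts[] : 0  0  2  0  1  1  0
endposts[]   : 2  1  1  2  0  2  2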

There’s also a postdisks[] array, which keeps track of the number of disks on each post. It’ll be used to compute the height of the top-most disk on each post in order to make the moves.

Begin with the standard header and declarations for our data, shown in listing 47.24.

Listing 47.24 The beginning of the Hanoi Class.

import vrml.*;

public class Hanoi extends Script {

    // the following three arrays record the moves to be made
    int disks[] = new int[120];      // which disk to move
    int startposts[] = new int[120]; // post to move it from
    int endposts[] = new int[120];   // post to move it to
    int nmoves = 0;                  // number of entries used in those three arrays
    int current_move = 0;            // which move you're on now
    boolean forwards = true;         // initially, move from post 0 to post 2
    int postdisks[] = new int[3];    // number of disks on each of the posts

Next comes the initialize() method. It sets up the number of disks on each post, then calls a recursive routine called hanoi_r() to do the actual work. Because all of the disks are on the first post to begin with, and the entries in postdisks[] are all zero initially, the setup is a single assignment. Listing 47.25 shows how all this works.

Listing 47.25 The initialize() method.

/***** initialize() builds table of moves *****/
public void initialize() {
    int number_of_disks = 5;
    postdisks[0] = number_of_disks;  // first post has all the disks
    hanoi_r(number_of_disks, 0, 2);  // generate the sequence of moves
}

Next, a flag is defined that indicates whether the routine is running. There’s also a processEvent() method to handle events coming into the script. These are shown in listing 47.26.

Listing 47.26 The processEvent() method.

boolean running = false; // true if we're running

/***** clicking on the base starts and stops the action *****/
public void processEvent(Event ev) {
    if (ev.getName().equals("clicked")) {
        ConstSFBool value = (ConstSFBool) ev.getValue();
        if (value.getValue() == false)
            running = !running; // toggle each time the mouse button is released
    }
    else if (ev.getName().equals("tick"))
        tick(ev.getTime());
}

This code fragment is similar to that from the earlier example, with one fix: the event name tested here is “clicked”, matching the eventIn declared in listing 47.22. Recall that all readable fields have a getValue() method, which returns a standard Java value. In the case of a ConstSFBool field, getValue() returns a boolean. If that value is true, the user touched the object (by pressing the mouse button over it); if it is false, the user “un-touched” the object (by releasing the mouse button), and the running flag is toggled. If the incoming event is a “tick” rather than a “clicked” event, the next move in the sequence is executed.

When you reach the end of the list of moves, all the disks have been moved to their destination post. At that point, you replay the sequence backwards to return to the original configuration. You then play the sequence forwards again, and so on. This is shown in listing 47.27.

Listing 47.27 The tick() method.

/***** at each tick (cycleTime), make the next move in the sequence *****/
void tick(double time) {
    if (running == false)
        return; // do nothing if we're not running
    if (forwards) { // moving from source to destination
        make_move(disks[current_move], startposts[current_move],
                  endposts[current_move], time);
        if (++current_move >= nmoves) {
            current_move = nmoves - 1;
            forwards = false;
        }
    }
    else { // moving in the other direction
        make_move(disks[current_move], endposts[current_move],
                  startposts[current_move], time);
        if (--current_move < 0) {
            current_move = 0;
            forwards = true;
        }
    }
}

The tick() method does nothing if running is false. If the sequence is running forward, the tick() method makes the move and increments the current_move counter. When the counter runs past the last move, it is set back to the last move and the direction is reversed.

If the sequence is running backward, the opposite move is made: from the endposts[current_move] post to the startposts[current_move] post. The current move is decremented. When the counter runs past the first move, it is set back to the first move and the direction is again reversed.

The make_move() method is where most of the talking to VRML is done. To start with, some constants are defined for use in array indexing:

static final int X = 0, Y = 1, Z = 2; // elements of an SFVec3f

Doing this lets you say (for example) vector[Y] to refer to the Y component of the three-element vector, instead of having to say vector[1].

To make a move, it’s necessary to fill in the four-element array of locations, each of which is itself an array of three elements (X, Y, and Z). Listing 47.28 shows how the first position for the disk is computed.

Listing 47.28 Finding the starting location.

/***** Routine to make an actual move *****/
void make_move(int disk, int from, int to, double now) {
    float four_steps[][] = { { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 } };
    // compute starting location for disk
    // center post is at x=0, left post is at x=-0.5, and right post is at x=0.5
    four_steps[0][X] = (from - 1) * 0.5f;
    // vertical position is 0.145 (top of base plus half a disk) plus the height
    // of the disks underneath the one being moved; since the moving disk is the
    // top one, there are postdisks[from] - 1 disks beneath it
    four_steps[0][Y] = 0.04f * (postdisks[from] - 1) + 0.145f;
    // disk is centered on post in Z axis
    four_steps[0][Z] = 0f;

Since the center post is at X = 0, the left post is at X = -0.5, and the right post is at X = 0.5, the expression (from - 1) * 0.5f gives the X coordinate of the “from” post. The constant 0.145 is the Y coordinate of the center of a disk resting directly on the base: the base is 0.125 meters tall, and a disk's origin sits half of its 0.04-meter height above its bottom. The disk being moved is the top one, so it rests on postdisks[from] - 1 other disks, each 0.04 meters high. As a check, for the very first move there are five disks on the post, giving 0.04 * 4 + 0.145 = 0.305, exactly the starting Y given to Disk1 in listing 47.18. The Z component is easy: it’s zero, because the disk is centered on the post along that axis.

Computing the destination location is almost exactly the same, as shown in listing 47.29.

Listing 47.29 Finding the ending location.

// compute ending location for disk
// center post is at x=0, left post is at x=-0.5, and right post is at x=0.5
four_steps[3][X] = (to - 1) * 0.5f;
// vertical position is height of disk (0.04) times number of disks already
// on the destination post, plus the 0.145 base-plus-half-a-disk offset
four_steps[3][Y] = 0.04f * postdisks[to] + 0.145f;
// disk is centered on post in Z axis
four_steps[3][Z] = 0f;

The intermediate locations share the same X values, but their Y coordinate is set to one meter, well above the tops of the posts, as shown in listing 47.30.

Listing 47.30 Finding the intermediate locations.

// now fill in the missing steps
// directly above the source post, at a height of one meter
four_steps[1][X] = four_steps[0][X];
four_steps[1][Y] = 1f;
four_steps[1][Z] = 0f;
// directly above the destination post, at a height of one meter
four_steps[2][X] = four_steps[3][X];
four_steps[2][Y] = 1f;
four_steps[2][Z] = 0f;

The next step is to adjust the count of the number of disks on each post:

--postdisks[from]; // one less disk on source post
++postdisks[to]; // one more disk on destination post

Finally, the move is made by updating the eventOuts in the Script (which are routed to the disk’s PositionInterpolator and TimeSensor). The code to do this is shown in listing 47.31.

Listing 47.31 Moving the disk.

// now move the disk
MFVec3f locations = (MFVec3f) getEventOut("disk" + (disk + 1) + "Locations");
locations.setValue(four_steps);
SFTime timerStart = (SFTime) getEventOut("disk" + (disk + 1) + "Start");
timerStart.setValue(now); // now is a double, which is exactly what SFTime stores
}

The name of the eventOut is based on the disk number. Notice that 1 is added to the disk number; this is because in the VRML file, the disks were numbered starting from 1 instead of 0. The eventOut that is found using getEventOut() is routed to the keyValue field of the PositionInterpolator for the disk in question.

The timer is found in a similar fashion. The value now, which is the timestamp of the event that caused this routine to run, is set as the start time for the timer. This starts the timer going, which drives the interpolator, which moves the disk.

So far so good. All that’s needed now is the actual recursive routine for generating the moves. This is shown in listing 47.32.

Listing 47.32 The recursive Move-Generator.

/***** hanoi_r() is a recursive routine for generating the moves *****/
// freeposts[starting_post][ending_post] gives which post is unused
static final int[][] freeposts = { { 0, 2, 1 }, { 2, 0, 0 }, { 1, 0, 0 } };

void hanoi_r(int number_of_disks, int starting_post, int goal_post) {
    if (number_of_disks > 0) { // check for end of recursion
        int free_post = freeposts[starting_post][goal_post];
        hanoi_r(number_of_disks - 1, starting_post, free_post);
        // add this move to the arrays
        disks[nmoves] = number_of_disks - 1;
        startposts[nmoves] = starting_post;
        endposts[nmoves] = goal_post;
        ++nmoves;
        hanoi_r(number_of_disks - 1, free_post, goal_post);
    }
}

The freeposts[] array is used to determine which post to use to make the move. If the move is from post 0 to post 2, then post 1 is free. This is represented by freeposts[0][2] having the value 1. Note that the main diagonal of this little matrix (the [0][0], [1][1], and [2][2] elements) will never be used because the starting_post and goal_post will never be the same.
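Because the three post numbers are 0, 1, and 2 and always sum to 3, the lookup table could equally be replaced by one line of arithmetic (an alternative sketch, not the book's code):

int free_post = 3 - starting_post - goal_post; // the post that is neither source nor goal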

And that’s it: the complete Towers of Hanoi puzzle, solved using Java and VRML. The complete Hanoi.java source code is included on the CD-ROM that comes with this book.

The Bleeding Edge

All of the examples listed in the text of this chapter should work with any final release (not beta) VRML 2.0 browser that supports scripting in Java.

Just to be on the safe side, I’ll be maintaining an “errata” sheet for this chapter, just off of my Web page (http://ece.uwaterloo.ca/~broehl/bernie.html).

This chapter has barely scratched the surface of VRML. There’s lots more to learn about, such as PROTO and EXTERNPROTO, and there are lots of other nodes that have only been mentioned in passing. VRML promises to be as revolutionary as Java itself, and the combination of the two is very powerful indeed.

Be sure to check the VRML Repository (http://sdsc.edu/vrml) for a complete listing of VRML resources, including links to the complete specification and lots of examples and tools.

See you in Cyberspace!

