
5.2 MIPmap Generation

Having explored the possibility of tiling low resolution textures to achieve the effect of high resolution textures, we can next examine methods for generating better texturing results without resorting to tiling. Again, OpenGL supports a modest collection of filtering algorithms, the highest quality of the minification algorithms being GL_LINEAR_MIPMAP_LINEAR. OpenGL does not specify a method for generating the individual mipmap levels (LODs). Each level can be loaded individually, so it is possible, but probably not desirable, to use a different filtering algorithm to generate each mipmap level.
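
To make the per-level loading concrete, the following sketch uploads a pre-filtered pyramid one level at a time with glTexImage2D. The levels array, the function name, and the square power-of-two base dimension are assumptions made for the example, not part of any particular implementation.

  #include <GL/gl.h>

  /* Load each mipmap level explicitly; levels[lod] is assumed to hold a
   * pre-filtered RGBA image whose dimension is (base_size >> lod). */
  void load_mipmap_levels(const GLubyte *levels[], GLsizei base_size)
  {
      GLint lod = 0;
      GLsizei size = base_size;

      while (size >= 1) {
          glTexImage2D(GL_TEXTURE_2D, lod, GL_RGBA, size, size, 0,
                       GL_RGBA, GL_UNSIGNED_BYTE, levels[lod]);
          size /= 2;
          lod++;
      }
  }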

The GLU library provides a very simple interface (gluBuild2DMipmaps) for generating all of the 2D levels required. The algorithm currently employed by most implementations is a box filter. There are a number of advantages to using the box filter; it is simple, efficient, and can be repeatedly applied to the current level to generate the next level without introducing filtering errors. However, the box filter has a number of limitations that can be quite noticeable with certain textures. For example, if a texture contains very narrow features (e.g., lines), then aliasing artifacts may be very pronounced.
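
For comparison, a minimal use of the GLU interface might look like the following sketch; the 256x256 RGBA source image is an assumed input, and GL_LINEAR_MIPMAP_LINEAR is set so that the generated levels are actually used during minification.

  #include <GL/gl.h>
  #include <GL/glu.h>

  /* Build the entire pyramid with GLU's (typically box-filtered) reduction. */
  void build_mipmaps_with_glu(const GLubyte *image)
  {
      gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, 256, 256,
                        GL_RGBA, GL_UNSIGNED_BYTE, image);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                      GL_LINEAR_MIPMAP_LINEAR);
  }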

The best choice of filter function for generating mipmap levels depends on how the texture will be used, and is also somewhat subjective. Some possibilities include a linear filter (a sum of 4 pixels with weights [1/8,3/8,3/8,1/8]) or a cubic filter (a weighted sum of 8 pixels). Mitchell and Netravali [30] propose a family of cubic filters for general image reconstruction which can be used for mipmap generation. The advantage of the cubic filter over the box filter is that it can have negative side lobes (weights) which help maintain sharpness while reducing the image. This can reduce some of the blurring introduced by mipmap filtering.
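
As a sketch of what host-side filtering with the linear [1/8,3/8,3/8,1/8] kernel looks like, the routine below reduces one row of a single-channel floating point image by a factor of two. The function name and the edge clamping policy are choices made for the example; a full 2D reduction would apply the same filter separably in both directions.

  /* Downsample one row by 2 using the [1/8, 3/8, 3/8, 1/8] filter. */
  static void downsample_row(const float *src, int src_width, float *dst)
  {
      static const float w[4] = { 1.0f/8, 3.0f/8, 3.0f/8, 1.0f/8 };
      int i, k;

      for (i = 0; i < src_width / 2; i++) {
          float sum = 0.0f;
          for (k = 0; k < 4; k++) {
              int s = 2 * i - 1 + k;        /* source sample index       */
              if (s < 0)                    /* clamp samples that fall   */
                  s = 0;                    /* outside the image to the  */
              if (s > src_width - 1)        /* nearest edge pixel        */
                  s = src_width - 1;
              sum += w[k] * src[s];
          }
          dst[i] = sum;
      }
  }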

When using a filtering algorithm other than the one supplied by the GLU library, it is important to keep a couple of things in mind. The highest resolution image of the mipmap (LOD 0) should always be used as the input image source for each level to be generated. The box filter produces the correct result when the preceding level is used as the input image for generating the next level, but this is not true for other filter functions. Each time a new level is generated, the filter must be scaled to twice the width it had for the previous level. A second consideration is that in order to maintain a strict factor of two reduction, filters wider than 2 need to sample outside the boundaries of the image. This is commonly handled by using the value of the nearest edge pixel when sampling outside the image. However, a more correct policy can be chosen depending on whether the texture will be used with a repeat or a clamp wrap mode. In the repeat case, requests for pixels outside the image should wrap around to the appropriate pixel counted from the opposite edge, effectively repeating the image.
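
A small helper along the following lines captures the two edge policies just described; it is only a sketch, with GL_REPEAT wrapping indices around to the opposite edge and anything else clamping to the nearest edge pixel.

  #include <GL/gl.h>

  /* Resolve an out-of-range source index according to the wrap mode the
   * texture will eventually be used with. */
  static int resolve_index(int i, int size, GLenum wrap_mode)
  {
      if (wrap_mode == GL_REPEAT) {
          i %= size;
          if (i < 0)
              i += size;        /* C's % may yield a negative result */
      } else {                  /* clamp: use the nearest edge pixel */
          if (i < 0)
              i = 0;
          if (i > size - 1)
              i = size - 1;
      }
      return i;
  }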

MIPmaps may be generated on the host processor or by using the OpenGL pipeline to perform some of the filtering operations. For example, the GL_LINEAR minification filter can be used to produce an image of exactly half the width and height of an image that has been loaded into texture memory, by drawing a quad with the appropriate transformation (i.e., the quad projects to a rectangle one fourth the area of the original image). This effectively filters the image with a box filter. The resulting image can then be read from the color buffer back to host memory for later use as LOD 1. The process can be repeated, using the newly generated mipmap level to produce the next level, and so on until the coarsest level has been generated.
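
A sketch of that readback-based approach follows. It assumes the current level (size x size texels) is already loaded as the active GL_TEXTURE_2D image and that the color buffer is at least half that size; for brevity it ignores details such as draw/read buffer selection and does not restore the state it disturbs.

  #include <GL/gl.h>

  /* Render the current texture at half size; GL_LINEAR then averages a
   * 2x2 footprint of source texels per fragment, i.e., a box filter. */
  void filter_level_with_gl(GLsizei size, GLubyte *dst)
  {
      GLsizei half = size / 2;

      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      glEnable(GL_TEXTURE_2D);

      /* Map one screen pixel to one destination texel. */
      glViewport(0, 0, half, half);
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
      glMatrixMode(GL_MODELVIEW);
      glLoadIdentity();

      /* Draw a quad covering the half-size viewport. */
      glBegin(GL_QUADS);
      glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
      glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
      glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
      glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
      glEnd();

      /* Read the filtered result back for use as the next LOD. */
      glReadPixels(0, 0, half, half, GL_RGBA, GL_UNSIGNED_BYTE, dst);
  }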

The above scheme seems a little cumbersome since each generated mipmap level needs to be read back to the host and then loaded into texture memory before it can be used to create the next level. The glCopyTexImage capability, added in OpenGL 1.1, allows an image in the color buffer to be copied directly to texture memory.
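
With OpenGL 1.1 the readback step can therefore be replaced by a single copy. In the fragment below, lod and half are assumed to be the destination level and its dimension from the loop described above.

  /* Copy the half-size image in the color buffer directly into the
   * next mipmap level (OpenGL 1.1). */
  glCopyTexImage2D(GL_TEXTURE_2D, lod, GL_RGBA, 0, 0, half, half, 0);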

This process is still somewhat awkward in OpenGL 1.0, which allows only a single texture of a given dimension (1D, 2D) to exist at any one time, making it hard to build up the mipmap texture while using the non-mipmapped texture for drawing. OpenGL 1.1 solves this problem with texture objects, which allow multiple texture definitions to coexist. However, it would be simpler still if we could use the level most recently loaded into the mipmap as the current texture for drawing. OpenGL 1.1 only allows complete textures to be used for texturing, meaning that all mipmap levels must be defined.

Some vendors have added yet another extension that can deal with this problem (though that was not the original intent behind the extension). This third extension, the texture LOD extension, limits the selection of mipmap image arrays to a subset of the arrays that would normally be considered; that is, it allows an application to specify a contiguous subset of the mipmap levels to be used for texturing. If that subset is complete, the texture can be used for drawing. We can therefore use the extension to restrict the mipmap to the level most recently created and use that level to create the next, smaller level. The other capability of the LOD extension is the ability to clamp the LOD to a specified floating point range so that the entire filtering operation can be restricted. This extension is discussed in more detail later on.
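
A hedged sketch of how the LOD extension might be applied during this incremental build follows. The SGIS-suffixed tokens are those defined by the SGIS_texture_lod extension (the same parameters appear without the suffix in later core versions), and lod is assumed to be the level most recently loaded.

  /* Expose only the most recently loaded level so the partially built
   * mipmap can be used for drawing while the next level is generated. */
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL_SGIS, lod);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL_SGIS,  lod);

  /* The extension can also clamp the computed LOD to a floating point
   * range, restricting the filtering operation to those levels. */
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD_SGIS, 0.0f);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD_SGIS, 4.0f);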

The above method outlines an algorithm for generating mipmap levels using the existing texture filters. There are other mechanisms within the OpenGL pipeline that can be combined to do the filtering. Convolution can be implemented using the accumulation buffer (this will be discussed in more detail in the section on the accumulation buffer). A texture image can be drawn using a point sampling filter (GL_NEAREST) and the result added to the accumulation buffer with the appropriate weighting. Different pixels (texels) from an NxN pattern can be selected from the texture by drawing a quad that projects to a region 1/N x 1/N of the original texture width and height, with a slight offset in the s and t coordinates to control which texel the nearest-neighbor sampling selects. Each time a textured quad is rendered to the color buffer, it is accumulated with the appropriate weight in the accumulation buffer. Combining point sampled texturing with the accumulation buffer allows the implementation of nearly arbitrary filter kernels. Sampling outside the image, however, remains a difficulty for wide filter kernels. If the outside samples are generated by wrapping to the opposite edge, then the GL_REPEAT wrap mode can be used.
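
The following sketch outlines that accumulation buffer approach for a hypothetical n x n kernel weight[j][i]; draw_offset_quad() stands in for application code that draws the point-sampled quad with the given s and t offsets, and is not an OpenGL call.

  /* Point sample the texture once per kernel tap and accumulate each
   * pass with the corresponding weight. */
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glClear(GL_ACCUM_BUFFER_BIT);

  for (j = 0; j < n; j++) {
      for (i = 0; i < n; i++) {
          float ds = (float)i / (float)tex_width;   /* shift the sample */
          float dt = (float)j / (float)tex_height;  /* point by (i, j)  */

          glClear(GL_COLOR_BUFFER_BIT);
          draw_offset_quad(ds, dt);                 /* hypothetical helper */
          glAccum(GL_ACCUM, weight[j][i]);
      }
  }
  glAccum(GL_RETURN, 1.0f);   /* write the filtered image to the color buffer */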


