Computers & Graphics

Volume 22, Issue 1, 25 February 1998, Pages 55-69

Scene Simplification
Walkthroughs of complex environments using image-based simplification

https://doi.org/10.1016/S0097-8493(97)00083-6

Abstract

We present an image-based technique to accelerate navigation in complex static environments. We perform an image-space simplification of each sample of the scene taken at a particular viewpoint and dynamically combine these simplified samples to produce images for arbitrary viewpoints. Since the scene is converted into a bounded-complexity representation in image space, with the base images rendered beforehand, the rendering speed is relatively insensitive to the complexity of the scene. The proposed method correctly simulates the kinetic depth effect (parallax) and occlusion, and can resolve missing visibility information. This paper describes a suitable representation for the samples, a specific technique for simplifying them, and different morphing methods for combining the sample information to reconstruct the scene. We use hardware texture mapping to implement the image-space warping and hardware affine transformations to compute the view-dependent warping function.

Introduction

In contrast to a conventional geometry-based rendering pipeline, image-based renderers use pre-rendered images of the scene as the basic primitives for rendering, in place of or in conjunction with the usual 3D models. This makes it possible to achieve much higher levels of realism, not only when modeling with real-world photographs, but also when using synthetically generated imagery, since much more complex rendering algorithms can be used in the pre-navigation step. Image-based rendering also decouples the navigation frame rate from the complexity of the models, since only fixed-resolution images of the models are used, making this approach potentially faster than traditional ones as model complexity continues to increase.

The trade-off of geometry complexity for images is not new. Texture mapping was introduced in 1974 by Catmull [8] and has been used extensively since then. Blinn and Newell [7] used pre-rendered images to map the surrounding environment on reflective surfaces.

Image-based rendering has been used with two different but closely related aims: to enable the navigation of environments modeled by real-world photographs and to accelerate the navigation of synthetic environments. In the first case the scene is not modeled on the computer and parameters such as depth information are not easily available. In the second case, all the information necessary for an image-based renderer can be retrieved from the original model.

Traditional approaches to graphics acceleration for the navigation of a three-dimensional environment have involved:

— reducing the rendering complexity by using texture mapping [6, 7] and various levels of complexity in shading and illumination models [4];

— reducing the geometric complexity of the scene by using level-of-detail hierarchies [11, 17, 23, 24, 36, 37, 46] and visibility-based culling [2, 20, 22, 29, 44];

— exploiting frame-to-frame coherence in combination with one of the above [5, 49].

However, as the complexity of the three-dimensional object space has increased beyond the bounded image-space resolution, image-based rendering has begun to emerge as a viable alternative to conventional three-dimensional geometric modeling and rendering in specific application domains. Image-based rendering has been used to navigate (although with limited freedom of movement) in environments modeled from real-world digitized images [9, 33, 43]. More recent approaches [19, 26] generalize the idea of plenoptic modeling by characterizing the complete flow of light in a given region of space. This can be done for densely sampled arrays of images (digitized or synthetic) without relying on depth information, but it results in a large amount of data that limits its applicability to current environments without further research. Promising results towards making these approaches feasible in real time appear in [42].

Combining simple geometric building blocks with view-dependent textures derived from image-based rendering [3, 16, 30] has resulted in viable techniques for navigation in environments that can be described by those simple blocks. Also, Horry et al. [25] derived simple 3D scene models from photographs that allow navigation within a single image. They use a spidery-mesh graphical user interface that enables the user to easily specify a vanishing point and the background and foreground objects in an existing picture. An interesting use of image-based rendering for distributed virtual environments has been presented in [31]. In this approach only the subset of the scene information that cannot be extrapolated from the previous frame is transmitted in compressed form from the server to the client, thereby dramatically reducing the required network bandwidth. The potential of image-based rendering specifically for navigation of generic synthetic environments on single graphics machines, however, has been investigated in fewer instances [10, 32, 40].

We have presented a conceptual discussion and an implemented system for the problem of image-based rendering using image-space simplification and morphing [14]. In this paper, we discuss the technique in more detail, present the image-space simplification algorithm used, and compare the results of different blending techniques. Given a collection of z-buffered images representing an environment from fixed viewpoints and view directions, our approach first constructs an image-space simplification of the scene as a pre-process, and then reconstructs a view of this scene for arbitrary viewpoints and directions in real time. We achieve speed through the use of commonly available texture-mapping hardware, and partially rectify the visibility gaps (“tears”) pointed out in previous work on image-based rendering [9, 10] through morphing.

In Section 2, we present an overview of the image-based rendering area; in Section 3, we discuss the relation between the morphing problem and image-based rendering. Section 4 describes the image-space simplification technique in detail, and Sections 5 (Single node navigation) and 6 (Multiple node navigation) discuss the navigation problem, comparing various forms of node combination. Some results are presented in Section 7.

Section snippets

Image-based navigation

Image-based rendering uses images as the basic primitive for generating other images, as opposed to the more traditional approach that renders directly from geometric models. Image-based rendering can be described as a process consisting of three general steps (a code sketch of this pipeline follows the list):

— Sampling: samples from the scene model are obtained at discrete viewpoints and viewing directions;

— Reconstruction: samples are organized into data structures that allow evaluation through some kind of interpolation;

— Resampling: sampled
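The three steps above can be made concrete with a small amount of scaffolding. The following is a minimal sketch, assuming a render(viewpoint) callback that returns a color image and a z-buffer; the class and function names are illustrative and do not come from the paper.

from dataclasses import dataclass
import numpy as np

@dataclass
class Sample:
    color: np.ndarray      # H x W x 3 image rendered at a fixed viewpoint
    depth: np.ndarray      # H x W z-buffer for the same view
    viewpoint: np.ndarray  # 3D position the sample was taken from

def sample_scene(render, viewpoints):
    # Sampling: render the scene model at discrete viewpoints.
    return [Sample(*render(v), viewpoint=v) for v in viewpoints]

def reconstruct(samples):
    # Reconstruction: organize the samples into a structure that supports
    # interpolation. The paper builds a texture-mapped triangulation of each
    # environment map; this sketch simply keeps the list of samples.
    return list(samples)

def resample(structure, new_viewpoint, warp, blend):
    # Resampling: evaluate the structure at a new viewpoint by warping the
    # nearest samples into that view and blending the results.
    nearest = sorted(structure,
                     key=lambda s: np.linalg.norm(s.viewpoint - new_viewpoint))[:2]
    return blend([warp(s, new_viewpoint) for s in nearest])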

Environment mapping and morphing

Changes of visibility that occur as an observer moves freely in an environment can be simulated by using precomputed views of the scene at selected viewpoints. Our technique samples the original scene from a set of fixed viewpoints, associating with every selected position a node that consists of an extended environment map, holding depth and color information for every direction, together with the camera parameters. This is essentially a sampling of the plenoptic function, which associates depth
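A possible in-memory layout for such a node, and a lookup of the color and depth stored for a given world-space direction, is sketched below. The field names, the cube-map face labels, and the face orientation conventions are illustrative choices, not the paper's.

from dataclasses import dataclass
import numpy as np

@dataclass
class CubeFace:
    color: np.ndarray   # H x W x 3
    depth: np.ndarray   # H x W, depth along the sampling ray

@dataclass
class Node:
    position: np.ndarray   # viewpoint the environment map was rendered from
    faces: dict            # "+x", "-x", "+y", "-y", "+z", "-z" -> CubeFace

    def sample(self, direction):
        # Return the (color, depth) stored along a world-space direction.
        d = np.asarray(direction, dtype=float)
        d = d / np.linalg.norm(d)
        axis = int(np.argmax(np.abs(d)))   # dominant axis selects the face
        face = self.faces[("+" if d[axis] >= 0 else "-") + "xyz"[axis]]
        # Coordinates of the remaining two axes, projected onto the face plane.
        u, v = [c / abs(d[axis]) for i, c in enumerate(d) if i != axis]
        h, w = face.depth.shape
        col = int(round((u * 0.5 + 0.5) * (w - 1)))
        row = int(round((v * 0.5 + 0.5) * (h - 1)))
        return face.color[row, col], face.depth[row, col]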

Image-space simplification

Given an environment map with depth and color information at a viewpoint, we have seen that it is possible to create views from new positions and directions by appropriately warping the environment map. To generate environment maps for viewpoints intermediate to the previously selected nodes, we morph neighboring environment maps into an intermediate one.

Our solution to the image-space-based rendering problem simplifies the environment, as seen from a given viewpoint, by linear polygons. This
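The warp that generates a view from a new position amounts to unprojecting each vertex of the image-space triangulation using its stored depth and reprojecting it into the new camera; the triangle connectivity and texture coordinates stay fixed, which is what makes hardware texture mapping applicable. The sketch below spells this out with numpy under a simple pinhole model; in the actual system the equivalent transform runs on the graphics hardware, and the function names and conventions here (depth as camera-space z, focal length in pixels) are assumptions.

import numpy as np

def unproject(uv, depth, cam_pos, cam_rot, focal):
    # Pixel (u, v) with camera-space depth -> world-space point.
    # cam_rot is a 3x3 world-from-camera rotation, focal is in pixels.
    u, v = uv
    ray_cam = np.array([u / focal, v / focal, 1.0])
    return cam_pos + cam_rot @ (ray_cam * depth)

def project(point, cam_pos, cam_rot, focal):
    # World-space point -> pixel (u, v) in the destination camera.
    p_cam = cam_rot.T @ (point - cam_pos)
    return focal * p_cam[0] / p_cam[2], focal * p_cam[1] / p_cam[2]

def warp_vertices(vertices_uv, depths, src_cam, dst_cam, focal=256.0):
    # Warp the triangulation vertices of a source node into a new view.
    src_pos, src_rot = src_cam
    dst_pos, dst_rot = dst_cam
    warped = []
    for uv, d in zip(vertices_uv, depths):
        world = unproject(uv, d, src_pos, src_rot, focal)
        warped.append(project(world, dst_pos, dst_rot, focal))
    return np.array(warped)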

Single node navigation

A node consists of a cubical environment map and its triangulation as discussed in Section 4. An example of this is shown in Fig. 13. Navigation inside a node involves projecting this triangulation for a given viewpoint and viewing direction. The projection and the subsequent z-buffering correctly handle the visibility and the perspective for regions where adequate information is available.

The six sides of a node are not normally all visible at once. A cube divides the space into six
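A simple consequence of this observation is that most of the six faces can be culled before projection. The test below is a conservative sketch, not necessarily the paper's: each face's 90-degree frustum is bounded by a cone of half-angle arctan(sqrt(2)) around its axis, and the view frustum is approximated by a cone of half the field of view around the viewing direction.

import numpy as np

FACE_AXES = {
    "+x": np.array([1.0, 0.0, 0.0]), "-x": np.array([-1.0, 0.0, 0.0]),
    "+y": np.array([0.0, 1.0, 0.0]), "-y": np.array([0.0, -1.0, 0.0]),
    "+z": np.array([0.0, 0.0, 1.0]), "-z": np.array([0.0, 0.0, -1.0]),
}

def visible_faces(view_dir, fov_degrees):
    # Keep a face when the cone bounding its frustum can overlap the view cone.
    view_dir = view_dir / np.linalg.norm(view_dir)
    face_half = np.degrees(np.arctan(np.sqrt(2.0)))   # ~54.7 deg, to the face corners
    limit = np.radians(face_half + fov_degrees / 2.0)
    return [name for name, axis in FACE_AXES.items()
            if np.arccos(np.clip(np.dot(view_dir, axis), -1.0, 1.0)) <= limit]

# For a 60 degree field of view, typically only two or three faces survive:
# visible_faces(np.array([1.0, 0.2, 0.1]), fov_degrees=60)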

Multiple node navigation

Given a set of nodes, solving the visibility problem for a certain viewpoint and direction involves two subproblems: selecting the appropriate nodes and combining the information from these nodes. If the nodes are uniformly distributed, selecting the nodes closest to the observer is a simple solution that yields acceptable results. This is the approach that we have implemented. The remainder of this section discusses the combination of information from two nodes. Combination of
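The node selection and blending step described above can be sketched as follows, assuming each node exposes the position it was sampled from (as in the earlier Node sketch). The inverse-distance weighting shown is one plausible blending choice; the paper compares several ways of combining the two warped images.

import numpy as np

def select_nodes(nodes, viewpoint, k=2):
    # Pick the k nodes whose sampling positions are closest to the observer.
    return sorted(nodes, key=lambda n: np.linalg.norm(n.position - viewpoint))[:k]

def blend_images(warped_images, node_positions, viewpoint, eps=1e-6):
    # Blend the warped images with weights inversely proportional to the
    # distance from the observer to each node's original viewpoint.
    weights = np.array([1.0 / (np.linalg.norm(p - viewpoint) + eps)
                        for p in node_positions])
    weights /= weights.sum()
    out = np.zeros_like(warped_images[0], dtype=np.float64)
    for w, img in zip(weights, warped_images):
        out += w * img
    return out.astype(warped_images[0].dtype)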

Results

We have tested our implementation on a model generated by us. The initial model was ray-traced and two cubical environment maps, each consisting of six 512×512 images (with depth), were generated. From these 3M data points, we obtained a simplified representation consisting of a total of 30K texture-mapped triangles, using the top-down approach described earlier to generate a Delaunay triangulation.
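(The two maps together contain 2 × 6 × 512 × 512 = 3,145,728 pixels with color and depth, which is where the roughly 3M figure comes from.)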

We have measured the performance in two reference systems: a single SGI Challenge R10000 processor

Conclusion

We have described an image-based rendering technique for navigation of 3D environments by using view-dependent warping and morphing. The method relies on a set of cubical environment maps that are pre-rendered from a collection of fixed (and preferably uniformly distributed) viewpoints within the virtual model. Every given viewpoint is represented by a node that consists of a cubical environment map and an associated 3D triangulation of the six faces of the map. We have described the

Acknowledgements

We would like to thank CNPq (Brazilian Council of Scientific and Technological Development) for the financial support of the first two authors. This work has been supported in part by the National Science Foundation CAREER award CCR-9502239. This research was performed at the Visualization Laboratory at SUNY Stony Brook.

References (49)

  • J.K. Aggarwal et al., On the computation of motion from sequences of images–a review. Proceedings of the IEEE (August 1988)
  • Airey, J. M., Increasing Update Rates in the Building Walkthrough System with Automatic Model-Space Subdivision and...
  • Aliaga, D. G., Visualization of complex models using dynamic texture-based simplification. In IEEE Visualization ’96...
  • Bergman, L. D., Fuchs, H., Grant, E. and Spach, S., Image rendering by adaptive refinement. Computer Graphics (SIGGRAPH...
  • Bishop, G., Fuchs, H., McMillan, L. and Zagier, E., Frameless rendering: Double buffering considered harmful. In...
  • Blinn, J. F., Simulation of wrinkled surfaces. In SIGGRAPH ’78. ACM, 1978, pp....
  • J.F. Blinn et al., Texture and reflection in computer generated images. CACM (October 1976)
  • Catmull, E., A Subdivision Algorithm for Computer Display of Curved Surfaces. PhD thesis. University of Utah,...
  • Chen, S. E., Quicktime VR–an image-based approach to virtual environment navigation. In Computer Graphics Annual...
  • Chen, S. E. and Williams, L., View interpolation for image synthesis. Computer Graphics (SIGGRAPH ’93 Proceedings), 27,...
  • Cohen, J., Varshney, A., Manocha, D., Turk, G., Weber, H., Agarwal, P., Brooks, Jr F. P. and Wright, W. V.,...
  • Cyan, Myst: The Surrealistic Adventure That Will Become Your World, Broderbund Software,...
  • Darsa, L. and Costa, B., Multi-resolution representation and reconstruction of adaptively sampled images. In...
  • Darsa, L., Costa, B. and Varshney, A., Navigating static environments using image–space simplification and morphing. In...
  • de Figueiredo, L. H., Adaptive sampling of parametric curves. In Graphics Gems V, ed. A. Paeth. AP Professional, San...
  • Debevec, P. E., Taylor, C. J. and Malik, J., Modeling and rendering architecture from photographs: A hybrid geometry-...
  • DeRose, T. D., Lounsbery, M. and Warren, J., Multiresolution analysis for surface of arbitrary topological type. Report...
  • Gomes, J., Costa, B., Darsa, L., Velho, L., Wolberg. G., and Berton, J., Warping and Morphing of Graphical Objects,...
  • Gortler, S. J., Grzeszczuk, R., Szelinski, R. and Cohen, M. F., The lumigraph. In Proceedings of SIGGRAPH ’96 (New...
  • Greene, N., Hierarchical polygon tiling with coverage masks. In Proceedings of SIGGRAPH ’96 (New Orleans, LA, August...
  • N. Greene, Environment mapping and other applications of world projections. IEEE CG&A (1986)
  • Greene, N. and Kass, M., Hierarchical Z-buffer visibility. In Computer Graphics Proceedings, Annual Conference Series,...
  • T. He et al., Controlled topology simplification. IEEE Transactions on Visualization and Computer Graphics (1996)
  • Hoppe, H., Progressive meshes. In Proceedings of SIGGRAPH ’96 (New Orleans, LA, August 4–9, 1996). Computer Graphics...