Dynamic view-dependent multiresolution on a client–server architecture
Introduction
Solid objects in CAD are often represented by boundary models. A surface is given as a collection of patches, each patch being the instantiation of some parametric surface from a rather small family (e.g. NURBS). However, boundary models are often converted into raw triangle meshes for technical, pragmatic and commercial reasons. This is accomplished by discretizing each patch of the CAD model into a set of triangles. Discretization is necessary, e.g. for display, simulation and analysis purposes. In many application domains, such as large-scale design reviews, architectural demos, virtual environments, object reconstruction from 3D range devices, and terrains in geographic information systems, the result of this conversion is a very large mesh.
The resolution, or level of detail (LOD), of a mesh is proportional to the density of its triangles, thus to their number. Meshes at very high resolution are widely available, but their huge size requires storage and computational power that is available only on powerful and expensive workstations. Even though powerful machines are becoming rather cheap and available, the bandwidth necessary to transmit such meshes is still hard to find. Therefore, Web 3D applications still deliver very simple, sketchy virtual environments, leaving the problem of supporting CAD teamwork in a Web environment unsolved.
However, for most operations we do not need the full detail in each part of the scene, but only in some interesting areas. Selectively refined meshes are meshes whose level of detail is variable in space according to some criterion defined by the user. They reduce memory and computational costs and can be obtained by a process of on-line simplification that removes details from a large dense mesh. This is the basic motivation behind the development of many algorithms for triangle mesh simplification. However, mesh simplification is too time consuming to be performed efficiently on-line.
A more recent approach consists of decoupling the simplification phase from the selective refinement phase. This has led to the development of multiresolution models: they are essentially a compact way of encoding the simplification steps as a partial order, from which a virtually continuous set of meshes at different LODs can be extracted, based on efficient graph traversal strategies. The criteria determining the desired LOD for a selectively refined mesh typically vary with time; thus, dynamic algorithms are required that can efficiently update a previously extracted mesh without performing a complete traversal of the multiresolution data structure.
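As a concrete illustration of the partial-order idea, the following sketch models each simplification step as a DAG node and extracts a mesh by applying every selected update together with all of its ancestors. All names (`Update`, `extract_mesh`, `need_detail`) are ours, not the paper's interface:

```python
class Update:
    """One node of the partial order: a local modification of the mesh."""
    def __init__(self, removed, created):
        self.removed = set(removed)   # triangles this update deletes
        self.created = set(created)   # triangles this update introduces
        self.parents = []             # updates that must be applied first

def apply_update(u, applied, mesh):
    # Apply u after recursively applying all its ancestors,
    # so the applied set stays closed under the partial order.
    if u in applied:
        return
    for p in u.parents:
        apply_update(p, applied, mesh)
    applied.add(u)
    mesh -= u.removed
    mesh |= u.created

def extract_mesh(root, updates, need_detail):
    # Start from the base mesh and apply every update the criterion selects.
    applied, mesh = {root}, set(root.created)
    for u in updates:
        if need_detail(u):
            apply_update(u, applied, mesh)
    return mesh
```

Any closed set of nodes yields a consistent mesh, which is why a virtually continuous range of LODs can be extracted from one structure.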
Many authors have developed multiresolution structures for regular or irregular triangle meshes (see [11], [14], [20] for recent surveys). However, all existing proposals for multiresolution data structures assume that the process handling the multiresolution model and the one that uses the extracted meshes reside on the same machine.
In this paper, we consider, instead, a client–server architecture in which the server stores a multiresolution data structure and contains some mechanism for extracting selectively refined meshes from it. A client sends requests to the server by specifying the required LOD (which may vary in different parts of the mesh) and the maximum size of a mesh that can be managed locally. The server selectively refines a mesh according to the client requests, and transmits it progressively to the client. The client processes such meshes locally, and possibly generates another request when the LOD of the current mesh is no longer satisfactory. Our assumption is based on the consideration that clients have limited on-board memory and processing power, so they can neither store a multiresolution data structure, nor receive a mesh at full resolution and run a simplification algorithm on it.
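The request/answer cycle just described might look as follows on the client side; `RefinementRequest`, `Client`, and the instruction format are illustrative assumptions, not the paper's actual protocol:

```python
from dataclasses import dataclass

@dataclass
class RefinementRequest:
    target_error: float    # required accuracy, possibly varying in space
    max_triangles: int     # largest mesh the client can manage locally

class Client:
    def __init__(self, channel):
        self.channel = channel
        self.mesh = []                 # current selectively refined mesh

    def request_lod(self, target_error, max_triangles):
        req = RefinementRequest(target_error, max_triangles)
        # The server answers with a progressive stream of small updates,
        # never exceeding the declared size bound.
        for instruction in self.channel.send(req):
            self.apply(instruction)

    def apply(self, instruction):
        kind, data = instruction
        if kind == "REFINE":
            self.mesh.extend(data)     # add the triangles of the update
        elif kind == "COARSEN":
            self.mesh = [t for t in self.mesh if t not in set(data)]
```

The client never needs the full-resolution mesh: it only stores the current selectively refined mesh and applies small updates to it.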
The main issue here is reducing communication time and, thus, the amount of information sent from the server to the client to describe the extracted mesh. Also, we need a compact data structure for a multiresolution model to handle huge meshes on the server, and an efficient selective refinement algorithm capable of dynamically modifying previously extracted meshes and operating in a client–server environment.
We develop here an instance of the Multi-Triangulation as defined in [8], [21], which we call a Vertex-based Multi-Triangulation (MT), since it is built through a vertex decimation strategy. Vertex decimation is a very popular simplification technique, which preserves the topology of the mesh and the shape of the triangles (important when simplifying, for instance, meshes generated according to the Delaunay criterion). A Vertex-based MT is represented as a DAG in which the root provides a base mesh, while every other node describes a refinement update in the form of a vertex insertion.
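Under this scheme, each non-root node can record the triangles its vertex insertion replaces and the fan of triangles it creates, so that refining and coarsening are exact inverses. A minimal sketch, with hypothetical names:

```python
class VertexNode:
    """One node of a Vertex-based MT (illustrative structure)."""
    def __init__(self, vertex, old_triangles, new_triangles):
        self.vertex = vertex             # the inserted vertex
        self.removed = old_triangles     # triangles covering its region
        self.created = new_triangles     # fan of triangles around the vertex

def refine(mesh, node):
    """Apply a vertex insertion to the current mesh."""
    return (mesh - set(node.removed)) | set(node.created)

def coarsen(mesh, node):
    """Undo it: delete the vertex and restore the coarser triangles."""
    return (mesh - set(node.created)) | set(node.removed)
```

Because both operations are local and invertible, a previously extracted mesh can be moved toward any other LOD by a short sequence of these updates.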
The major contributions of the paper are:
- An algorithm for selective refinement on the Vertex-based MT, which is dynamic, thus allowing incremental modifications of a previously extracted mesh, and generates only meshes of controlled size.
- An efficient implementation of this algorithm in a client–server environment.
- A compressed representation for mesh updates.
- A compact representation for the MT, based on the above compressed description for its nodes.
- A collection of mechanisms for handling communication between the data structures on the server and on the client.
- A client-caching mechanism that avoids retransmitting all details at each update.
The remainder of the paper is organized as follows. In Section 2, we review previous related literature. In Section 3, we briefly describe the Multi-Triangulation, and give a formulation for the problem of selective refinement. In Section 4, we propose a dynamic algorithm for selective refinement on a Vertex MT. In Section 5, we discuss the realization of the algorithm in a client–server environment. In Section 6, we describe the data structures necessary to support the algorithm both on the server and on the client, and we provide the details of the implementation. In Section 7, we evaluate the proposed method through an experimental simulation. Finally, in Section 8, we provide some concluding remarks.
Section snippets
Related work
A lot of work has been carried out in the last few years on simplification of triangle meshes, and a number of multiresolution data structures have been proposed. The interested reader can refer, e.g. to [2], [11], [14], [20] for recent surveys. In this section, we review just the papers that consider a client–server environment.
In such contexts, there are two basic issues: reducing the amount of information sent from server to client to describe the extracted mesh, and supporting the
Vertex-based Multi-Triangulation
The MT is a general multiresolution model based on local updates that we introduced in [8], [21]. In this section, we briefly review the model and we describe a special version of it, which is based on vertex insertion and deletion.
A dynamic algorithm for selective refinement
In [5], [6], we have proposed and analyzed selective refinement algorithms for a general MT. Such algorithms solve a selective refinement query with a different formulation, which does not take into account the constraint on the size of the output mesh, and optimizes the number of triangles in the output.
In this section, we present a new dynamic algorithm which represents an effective heuristic for answering the selective refinement query defined in Section 3 on a Vertex-based
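One plausible way to respect a bound on the output size is a greedy strategy: refine the regions with the largest error first, and stop before the triangle budget is exceeded. The sketch below illustrates this idea under our own assumptions; it is not the paper's actual algorithm:

```python
import heapq

def refine_with_budget(base_triangles, candidates, budget):
    """candidates: (error, cost, gain) per update, where cost/gain are the
    triangles it removes/adds; returns the size of the resulting mesh."""
    size = base_triangles
    # Max-heap on error, obtained by negating the key for heapq's min-heap.
    heap = [(-err, cost, gain) for err, cost, gain in candidates]
    heapq.heapify(heap)
    while heap:
        _neg_err, cost, gain = heapq.heappop(heap)
        if size - cost + gain > budget:
            break          # applying this update would exceed the bound
        size += gain - cost
    return size
```

Processing updates in decreasing order of error concentrates the available triangles where the accuracy requirement is hardest to meet.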
Selective refinement in a client–server architecture
In a client–server version of selective refinement, two concurrent processes are running on different machines and communicating through a bi-directional transmission channel.
The server executes the same extraction procedure described in Section 4 but it does not store the current mesh, which, in this case, is maintained by the client. Each time an update must be performed, the server only modifies its extraction status, while operations REFINE and COARSEN consist of sending instructions to the
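The division of labour described above can be sketched as follows: the server keeps only its extraction status and queues one instruction per change, leaving the mesh itself to the client (all names are ours):

```python
class ServerExtractionStatus:
    """Server side: tracks applied nodes, emits instructions, stores no mesh."""
    def __init__(self):
        self.applied = set()     # nodes currently in the extracted mesh
        self.outbox = []         # instructions queued for transmission

    def refine(self, node):
        if node not in self.applied:
            self.applied.add(node)
            self.outbox.append(("REFINE", node))

    def coarsen(self, node):
        if node in self.applied:
            self.applied.remove(node)
            self.outbox.append(("COARSEN", node))

    def flush(self):
        # Hand the queued instructions to the channel, emptying the queue.
        batch, self.outbox = self.outbox, []
        return batch
```

Only the instructions travel over the channel, which is what makes the compressed update representation of Section 6 pay off.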
Implementation issues and data structures
The algorithm for selective refinement requires the following data structures (see Fig. 9):
- The MT is a data structure maintained by the server to manage the Multi-Triangulation. Update codes, maintained at the nodes of the MT, are used during traversal to generate Transmitted Updates.
- The extraction status is also maintained by the server.
- The current mesh is maintained by the client and is updated through REFINE and COARSEN operations.
- The translation tables are two symmetric data
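Reading the "two symmetric data structures" as a pair of mutually inverse ID maps between the server's global vertex names and the client's local indices, a sketch might look as follows (this interpretation and all names are our assumption, not the paper's implementation):

```python
class TranslationTables:
    def __init__(self):
        self.server_to_client = {}   # server-side vertex id -> client index
        self.client_to_server = {}   # client index -> server-side vertex id
        self._next_local = 0

    def register(self, server_id):
        # Assign the next free local index to a newly transmitted vertex.
        if server_id not in self.server_to_client:
            local = self._next_local
            self._next_local += 1
            self.server_to_client[server_id] = local
            self.client_to_server[local] = server_id
        return self.server_to_client[server_id]

    def unregister(self, server_id):
        # Forget a vertex removed by a COARSEN operation.
        local = self.server_to_client.pop(server_id)
        del self.client_to_server[local]
```

Such maps let instructions reference vertices by short local indices rather than global identifiers, reducing the traffic on the channel.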
Experimental evaluation
The client–server system is still under implementation. In this section, we report some results about the traffic generated on the channel, obtained through simulations performed on our standalone implementation.
For our experiments, we have used the following assumptions, which are summarized in Fig. 14: the maximum degree d of a vertex is ≤11, thus it is represented on four bits; the length of code C for a TU is ≤2(d−1)=20 bits; the bound b on the size of the extracted mesh is 4000 triangles,
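The quoted figures can be checked with a few lines of arithmetic (the variable names are ours):

```python
d_max = 11                        # maximum vertex degree assumed above
degree_bits = d_max.bit_length()  # smallest number of bits that holds 11
code_bits = 2 * (d_max - 1)       # bound on the length of code C for a TU
print(degree_bits, code_bits)     # 4 bits for the degree, 20 bits for C
```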
Concluding remarks
The client–server algorithm for selective refinement proposed in this paper has several innovative aspects. Simulations show that it can be used effectively and efficiently on reasonably large geometric models, not only in the context of a (relatively fast) local network, but even on a (relatively slow) geographic network, like the Web.
While the standalone version is already implemented, we are now proceeding with the implementation of the complete client–server version. We foresee that further
References (25)
- et al. A comparison of mesh simplification algorithms. Computers and Graphics (1998)
- Efficient implementation of progressive meshes. Computers & Graphics (1998)
- et al. Multiresolution decimation based on global error. The Visual Computer (1997)
- De Floriani L, Magillo P, Morando F, Puppo E. Handling multiresolution meshes on a client–server architecture....
- De Floriani L, Magillo P, Morando F, Puppo E. A reversible automaton for compressed coding of triangulated polygons....
- De Floriani L, Magillo P, Puppo E. Building and traversing a surface at variable resolution. In Proceedings IEEE...
- De Floriani L, Magillo P, Puppo E. Efficient implementation of Multi-Triangulations. In Proceedings IEEE...
- De Floriani L, Magillo P, Puppo E. Compressing triangulated irregular networks. Geoinformatica. Submitted for...
- et al. A formal approach to multiresolution modeling
- Progressive forest split compression (1998)
- Generalized view-dependent simplification. Computer Graphics Forum
Leila De Floriani is professor of Computer Science at the University of Genova, Italy. She received an advanced degree in Mathematics from the University of Genova in 1977. From 1977 to 1981 she was a research associate at the Institute for Applied Mathematics of the Italian National Research Council in Genova, and from 1981 to 1982 an Assistant Professor at the Department of Mathematics of the University of Genova. From 1982 to 1990 she has been a senior scientist at the Institute of Applied Mathematics of the Italian National Research Council. She is leading several national and EEC projects on algorithms and data structures for representing and manipulating geometric data. Leila De Floriani has written over 90 technical publications on the subjects of computational geometry, geometric modeling, algorithms and data structures for spatial data handling and graph theory. She is a member of the Editorial Board of the journals “International Journal of Geographic Information Systems” and “Geoinformatica”. Her present research interests include geometric modeling, computational geometry, spatial data handling for geographic information systems. Leila De Floriani is a member of ACM, IEEE Computer Society, and International Association for Pattern Recognition (IAPR).
Paola Magillo received an advanced degree in Computer Science from the University of Genova, Genova (Italy), in 1992, and a PhD in Computer Science from the same university in 1999. In 1993, she was a research associate at the “Institut National de Recherche en Informatique et Automatique” (INRIA), Sophia Antipolis (France), working with the research group of J.D. Boissonnat. Since December 1993, she has been working as a researcher at the Department of Computer and Information Sciences (DISI) of the University of Genova, where she obtained a permanent position in 1996. Her research interests include computational geometry, geometric modeling, geographic information systems and computer graphics. Since November 1995, she has been a member of the International Association for Pattern Recognition (IAPR).
Franco Morando was born in Genova in 1960. He received a Laurea in Electronic Engineering from the University of Genova, Italy, in July 1986. From 1987 to 1991 he was at the Elsag R&D Department, working on Artificial Intelligence. As a part-time consultant over the last decade, he was also involved in research on Formal Methods in Software Engineering at the University of Genova. From 1997, as a PhD student, he joined the Geometric Modeling and Computer Graphics Group at the Computer Science Department of the University of Genova. His current research interests include computational topology, geometric modeling, and multiresolution representation with applications to non-manifold modeling and Web3D.
Enrico Puppo is associate professor of Computer Science at the Department of Computer and Information Sciences of the University of Genova, where he is a member of the Geometric Modeling and Computer Graphics Group. He received a Laurea in Mathematics from the University of Genova, Italy, in March 1986. From April 1986 to October 1998 he was a research assistant (until November 1988) and then a research scientist (from December 1988) at the Institute for Applied Mathematics of the National Research Council of Italy. In different periods between 1989 and 1992, he was a visiting researcher at the Center for Automation Research of the University of Maryland. Enrico Puppo has written over 60 scientific publications on the subjects of algorithms and data structures for spatial data handling, geometric modeling, computational geometry, and image processing. His current research interests are in multiresolution modeling, geometric algorithms and data structures, geometric compression and object reconstruction, with applications to computer graphics, Geographical Information Systems, scientific visualization, and computer vision. Enrico Puppo is a member of ACM, the IEEE Computer Society, and the International Association for Pattern Recognition (IAPR).