Open Access. Published by De Gruyter, December 18, 2017. Licensed under CC BY 4.0.

An Efficient Technique for Three-Dimensional Image Visualization Through Two-Dimensional Images for Medical Data

  • Ganesan Gunasekaran and Meenakshisundaram Venkatesan

Abstract

The main idea behind this work is to present three-dimensional (3D) image visualization built from a set of two-dimensional (2D) images. 3D image visualization is one of the essential methods for extracting information from the given slices. The main goal of this work is to determine the outlines of the given 3D geometric primitives in each slice, and then to integrate these outlines or frames to reconstruct the 3D geometric primitives. The proposed technique is very useful and can be applied to many kinds of images. The experimental results show a very good outcome of the reconstruction process from 2D images.

1 Introduction

The growth of three-dimensional (3D) image or object visualization is especially evident in medical science, the manufacturing industry, defense systems, topography, the digital world, and remote monitoring. This growth is steering the direction of research studies and funding agencies toward applications in these areas. Medical images are commonly displayed in multidimensional views, such as the axial, sagittal, and coronal views. Hence, an image is obtained from the projections of more than one focal device at various positions, and under these conditions 3D visualization of the same parts of the human body can be carried out. The advent of image visualization methods for medical data sets has been made possible by the introduction of modern technologies for acquisition and analysis. These methods depend on computer reconstruction of data acquired with hardware devices such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound. Three-dimensional visualization of images, particularly medical data, provides important information about objects and their behavior, which is very significant in diagnosis and treatment. The potential of 3D visualization techniques remains largely unexplored; hence, it is a very significant field of research for offering new tools, hardware, methods, analyses, treatments, etc. Constructing an image entails mapping some properties of a geometric primitive onto the image space, and these mappings are used to visualize an object and its behavior in order to illustrate its structure. In medical science, an understanding of these mapping and rendering techniques is essential for judging their performance and guiding their development. The main goal of 3D image visualization is to provide an effective framework for turning raw image data into useful information. The demand for new, efficient techniques for 3D image visualization and analysis will therefore grow progressively. In this article, the basic methods of 3D image visualization are described in detail, and effective techniques are identified for performing 3D image visualization on medical data sets.

2 Existing Works

Visualization is the process of transforming data into a visual form. The application of 3D image visualization is a significant aspect of digital image processing, with large collections of applications in many fields of day-to-day life. Several existing works are available on image visualization; various insights from these works are presented in this article.

Yoo and Chen [21] reported a medical image visualization technique that provides intuitive navigation and investigation of 3D medical images, demonstrated on DICOM images.

Meißner et al. [12] identified and presented a new technique that enables direct volume rendering based on texture mapping. The drawbacks addressed by this method are a lack of lighting, issues related to classification of transparent materials, and incompatibility with some modalities. The work demonstrated fast re-rendering of volume and surface graphics by determining object depth, color, and opacity values.

Alberola et al. [1] illustrated an effective application for image visualization. The application uses a graphical tool for combined image analysis and visualization of a prototype created from parts of the volumetric information. The application lets multiple users concurrently and cooperatively investigate medical images. Furthermore, for clinical purposes, direct measurement of tissues and parts of the body in 3D space is compulsory.

Bhalerao et al. [3] proposed an approach for fast re-rendering of volume data consisting of several distinct materials. They also facilitated accelerated rendering of fixed views to reduce storage overhead without rescanning the entire volume of clinical data.

Dey et al. [5] presented a technique for mapping intraoperative endoscopic video onto 3D surfaces derived from preoperative scans for improved visualization during surgery. This technique provides panoramic and stereoscopic visualization from arbitrary viewpoints and navigation of the merged data after mapping. The validation analyses were performed on phantoms, where the preoperative image data closely replicate the intraoperative state.

Meißner et al. [13] discussed various issues about the applications of volume rendering processes, such as interpolation methods, illumination concepts, classification problems, and others.

Shum and Kang [19] reviewed image-based rendering techniques, demonstrating their fundamental concepts, differences, and limitations. The characteristics of various rendering categories and their representative techniques were presented, along with the advantages and disadvantages of modern rendering methods over traditional approaches.

Pizer et al. [14] concentrated on multiscale processing for 3D modeling and rendering using solid (medial) representations. Their paper focused on 3D image segmentation techniques that use a medial model representation both to capture prior information about geometric primitives and to estimate model-to-image similarity. A model-based visualization process for objects in 3D and their relation to the 3D medical image was also presented.

Rosset et al. [17] presented software for multidimensional image manipulation, enabling the display and interpretation of large collections of multidimensional and multimodality images. The method is exceptionally fast, as it is optimized using the 3D graphics standards commonly employed in computer games and takes advantage of any high-end hardware graphics accelerator boards available. X-ray radiology imaging modalities range from traditional sets of 2D tomographic slices to 3D volumetric acquisitions, and extend to higher-dimensional images with temporal and functional information that can be captured with ultrafast CT, magnetic resonance scanners, and combined PET/CT scanners.

Sainz and Pajarola [18] reported a comparative study of recently developed point-based rendering methods, exploring their implementations within a common testbed. They also compared point-based rendering techniques with a basic triangle-mesh approach.

Gu and Peters [7] presented a fast multistage hybrid technique for 3D segmentation of medical images. A fully 3D, accurate, and consistent segmentation method for visualization, accelerated by the multistage hybrid approach, was validated. Morphological transformations were applied to extract high-intensity areas of sizes similar to the objects to be segmented.

Kim et al. [10] presented interactive segmentation integrated into real-time volume rendering of PET and CT images. Their method addresses real-time 3D medical image fusion with the ability to interactively segment either the PET or the CT volume. It also performs multilevel fusion of PET/CT/MRI image volumes.

Swedlow [20] reported the development of both an Open Microscopy Environment server and an Open Microscopy Environment remote object server, which offer a flexible set of tools for handling digital image data. The study presented an information model that describes an image data acquisition arrangement together with an analysis of the results. This work aimed to increase the use of its tools and applications as well as to extend the capabilities delivered with Open Microscopy.

Zhang et al. [22] illustrated new techniques, such as segment-based post-color-attenuated classification, to address the issues of interactive 3D medical image visualization. An efficient composition method exploits the symmetric storage format of the color look-up table when creating the matrix. This algorithm provides interactive visualization of the medical image database in both investigative and routine use.

Aliroteh and McInerney [2] proposed a fast and accurate interactive segmentation process for extracting and visualizing large collections of geometric primitives from 3D medical images. The technique is very easy to use and is initialized with a few quick outlines drawn across the width of the target structures on a few key slices of the volumetric images.

Ferre et al. [6] demonstrated stereoscopic image visualization for teleoperated robots, assessing the use of stereoscopic images for robot teleoperation tasks. Stereoscopic and monoscopic presentations were compared, with stereoscopic images reported to provide more reliable depth estimation.

Herlambang et al. [8] discussed an autostereoscopic visualization system based on integral videography. The system was used to visualize four-dimensional (4D) magnetic resonance images generated by registering 3D magnetic resonance with 4D ultrasound images. An assessment of computational speed showed that the graphics processing unit is much faster than the central processing unit for the core videography volume-rendering operations.

Krechetova et al. [11] described the construction of 3D images from various medical data obtained using CT, MRI, scintigraphy, etc. Their paper presented a method for 3D model reconstruction from medical images utilizing the initial data obtained from DICOM files.

Bhalerao et al. [3] described an information-theoretic approach to threshold-based segmentation that uses excess entropy to estimate the spatial structure of 2D or 3D images and to locate the optimal threshold. The core idea of this technique is the use of excess entropy as a measure of the structural behavior of an image. The results and discussion illustrated the good performance of the proposed method.

Depeursinge et al. [4] presented a model for mobile access to peer-reviewed medical data sets based on textual search, content-based retrieval, and image visualization. They also described a web-based interface designed for small screens, which queries a medical data retrieval engine via web services and optimizes the amount of data transferred over the wireless connection.

Ramasubramanian et al. [15] focused on the widely accepted notion that 3D visualization of medical images supports patient diagnosis. Improved streaming conditions were evaluated for visualizing and manipulating high-quality 3D medical images on a web-based system. Such 3D visualization systems are very useful in various biomedical fields, such as interactive guided surgery, computer-aided investigation, and telemedicine.

Rodt et al. [16] proposed a framework for evaluating 3D computed tomography of facial fractures. The main purpose of this framework is to appraise and compare surface rendering and volume rendering of facial fractures in 3D CT.

Jung et al. [9] explained efficient medical image visualization methods based on adaptive binned visibility histogram techniques that help improve the visibility achieved by volume rendering methods.

Jung et al. [9] also proposed a new visualization algorithm that selects a slice of interest from PET images and augments it with the corresponding volumetric information from a direct volume rendering (DVR). The proposed method was described as an efficient technique that provides real-time enhancement to the reliability of medical analysis and investigation.

3 Three-Dimensional Volumetric Data

Three-dimensional volumetric data can be reconstructed with the help of computer graphics packages, in which images are handled in digital form. The digital form of an image is stored in a mathematical arrangement in computer systems. Commonly, a 3D volumetric image is built from a collection of parallel slices that carry the user data. Furthermore, 3D images are often sampled anisotropically, with the distance between sequential slices greater than the in-plane pixel size. Hence, the spatial resolution, which is the size of the image voxels, has a base equal to the pixel dimension determined by the in-plane resolution of the scanner and a height equal to the distance between sequential slices. Three different sizes of a voxel are shown in Figure 1.

Figure 1: Representations of Voxels with Different Resolutions.
(A) Low resolution; (B) medium-level resolution; (C) high resolution.

The first one represents an isotropically sampled image at low resolution, as shown in Figure 1A; the second is an anisotropically sampled image whose in-plane resolution is higher than the resolution between sequential slices, as shown in Figure 1B; and the third is an isotropically sampled image at high resolution, as shown in Figure 1C. When the voxels are isotropic, a 3D rendering has the correct proportions in all three dimensions. This allows resampling in any plane and reconstructing a slice image along any arbitrary axis in 3D space.
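When the voxel spacing is anisotropic, a common preliminary step is to resample the volume to isotropic voxels before rendering or reslicing. The sketch below is a minimal illustration of this idea, assuming NumPy and SciPy are available and a hypothetical spacing of 3 mm between slices and 1 mm in-plane; it is not part of the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(volume, spacing, target=1.0):
    """Resample a volume (slices stacked along axis 0) so that every
    voxel edge measures `target` millimetres."""
    # spacing = (slice distance, row pixel size, column pixel size) in mm
    factors = [s / target for s in spacing]
    # Linear interpolation (order=1) is usually sufficient for visualization.
    return zoom(volume, factors, order=1)

# Hypothetical example: 40 slices of 256x256 pixels, 3 mm apart, 1 mm in-plane.
volume = np.random.rand(40, 256, 256)
iso = resample_to_isotropic(volume, spacing=(3.0, 1.0, 1.0))
print(iso.shape)  # approximately (120, 256, 256)
```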

4 Three-Dimensional Image Visualization

Illustrations in 2D depend on the orientation of the image plane with respect to the structures of interest. Furthermore, imaging systems have limited ability to construct informative 2D views directly. Several methods are used to create 2D images from 3D data, and these methods depend on extracting the significant slices. Many techniques and systems have been established for 3D image representation and display, and they are typically categorized into three groups: surface rendering, volume rendering, and image-based rendering. Together these techniques enable the visualization of 3D volume images, and each has its own merits and demerits. The choice among them depends on the user requirements and on the analysis to be performed on the visualization. Surface rendering methods depend on the extraction of contours that define the surface of the structure of interest. Surface patches are then placed at each contour point, and hidden-surface removal and shading methods are utilized to render the visible surface. The merit of the surface rendering technique is reduced computation, as only a small amount of contour data is needed. Moreover, this technique can take advantage of specialized graphics hardware to accelerate the geometric transformation and rendering steps. Its drawback is that, because only the data needed for constructing the contours are kept, some information is lost in this operation. Similarly, this technique precludes any interactive selection of the surface to be rendered.
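As a concrete illustration of the surface rendering path (contour/surface extraction followed by mesh rendering), the sketch below extracts an iso-surface mesh from a synthetic binary volume using the marching cubes algorithm. It assumes NumPy and scikit-image are available and is only an example of the general idea, not the paper's implementation.

```python
import numpy as np
from skimage import measure

# Synthetic volume: a solid sphere embedded in a 64x64x64 grid.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = (np.sqrt(x**2 + y**2 + z**2) < 0.6).astype(np.float32)

# Extract the bounding surface as a triangle mesh at the 0.5 iso-level.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```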

Volume rendering techniques based on ray casting have developed into powerful tools for image visualization. They provide 3D visualization of volume images without requiring prior surface extraction or segmentation. In these techniques, the entire image volume is retained, so the original volume data are preserved; this makes the approach more powerful for rendering the original image data and suited to high-resolution visualization of the displayed structures. Conversely, these techniques demand high processing speed to deliver interactive 3D image visualization. The third technique is image-based rendering, which relies on image-space methods and offers a more flexible route for medical data visualization.

Conventionally, an image is represented as a 2D matrix (M×N). A picture can also be represented as a 3D matrix (M×N×L), which encodes 3D volumetric information for 3D visualization. The methods for constructing 3D computer graphics can be divided into three common stages: 3D modeling, which defines the shape of an object; layout and animation, which define the motion and position of objects within a scene; and 3D rendering, which creates the final images.
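A minimal sketch of the M×N×L representation, assuming NumPy, is shown below; the slice values are placeholders for slices imported from any modality.

```python
import numpy as np

# Three hypothetical 256x256 slices (placeholders for imported CT/MRI slices).
slices = [np.random.rand(256, 256) for _ in range(3)]

# Stacking along a new last axis yields an M x N x L volume, here 256 x 256 x 3.
volume = np.stack(slices, axis=-1)
print(volume.shape)   # (256, 256, 3)

# Any original slice can be recovered by indexing the third axis.
first_slice = volume[:, :, 0]
```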

5 Volume Estimation

Volume estimation is an important issue in the field of image visualization, including medical as well as geographical images. Volume estimation refers to the 3D reconstruction of an object from its slices. The reconstruction procedure depends on the following aspects, as shown in Figure 2.

Figure 2: Representations of Volume Reconstruction from Slices.

The steps for the construction of volume estimation from slices are given below.

Step 1: Find the contour of the area of interest in each slice.

Step 2: Measure the indicated area in each slice. The indicated area in each slice is approximated, for example, by the area of a rectangle, giving

(1) $Sl_1, Sl_2, \text{ and } Sl_3$,

where $Sl_n$ denotes the indicated area in slice number $n$.

Step 3: Determine the volume between two slices, $Sl_1$ and $Sl_2$:

(2) $v_n = \frac{1}{2}\, d \left( Sl_1 + \sqrt{Sl_1\, Sl_2} + Sl_2 \right)$,

where $d$ is the distance between the two slices.

Step 4: Reconstruct the volume of indicated areas for all slices as

(3) $V = \sum_{n=0}^{N-1} v_n$,

where $N$ is the number of slices, $V$ is the total volume over the given slices, and $v_n$ is the volume between consecutive slices.
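The steps above translate directly into a short routine. The sketch below, a plain-Python illustration with hypothetical slice areas, computes Eq. (2) for each pair of consecutive slices and sums the results as in Eq. (3).

```python
import math

def slice_volume(area_a, area_b, d):
    """Volume between two consecutive slices with areas area_a and area_b
    separated by distance d, following Eq. (2)."""
    return 0.5 * d * (area_a + math.sqrt(area_a * area_b) + area_b)

def total_volume(areas, d):
    """Sum the inter-slice volumes over all consecutive slice pairs, as in Eq. (3)."""
    return sum(slice_volume(areas[n], areas[n + 1], d)
               for n in range(len(areas) - 1))

# Hypothetical contour areas (in mm^2) measured on four slices 2 mm apart.
areas = [120.0, 150.0, 160.0, 140.0]
print(total_volume(areas, d=2.0))
```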

6 Proposed Image Visualization Technique Pipeline Process

The proposed image visualization technique produces 3D pictures based on the extraction of significant features. The proposed image visualization pipeline can be applied in six stages, as shown in Figure 3. In the first stage, the different medical images, such as MRI, CT scan, and X-ray, are imported as input to the proposed method, and the quality of the image, noise level, etc., are checked. The input images are subject to preprocessing as they may contain noise. In the second stage, various filtering techniques, such as de-noising, decimation, multiresolution, and mesh generation filters, are employed. There are several reasons why a digital image can be degraded, including improper opening of files, atmospheric disturbance, and motion between the capturing devices and the captured objects. To obtain a better-quality image, the noise present in the image must be removed first and foremost. Various noise-reducing filters are applied to the input medical images to obtain noise-free images that are forwarded to the next stages; a minimal sketch of such a filtering step is shown after Figure 3.

Figure 3: Flow Diagram for the Proposed Image Visualization Pipeline Process.
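As an example of the noise-reduction stage, the sketch below applies either a median or a Gaussian filter to a single slice. The choice of filters and parameters is an assumption for illustration (SciPy assumed available); the paper does not prescribe specific filters beyond de-noising, decimation, multiresolution, and mesh generation.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def denoise_slice(image, method="median"):
    """Apply a simple noise-reducing filter to a single 2D slice."""
    if method == "median":
        # Median filtering suppresses salt-and-pepper noise while preserving edges.
        return median_filter(image, size=3)
    # Gaussian smoothing attenuates high-frequency sensor noise.
    return gaussian_filter(image, sigma=1.0)

noisy = np.random.rand(256, 256)   # placeholder for an imported CT/MRI/X-ray slice
clean = denoise_slice(noisy, method="median")
```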

The third stage deals with edge enhancement of the noise-removed input images. Edge enhancement is a digital image processing filter that improves the edge contrast of the given medical images in an effort to enhance their sharpness. The edge filter recognizes sharp edge boundaries in the image, such as the edge between an object and a background of contrasting color, and increases the image quality in the region near the edge. Edge enhancement alone is unlikely to make an image perfect, as it does not improve fine details that appear in the more uniform regions of the image, such as textures or particles in flat or smooth areas. The advantage of the edge enhancement stage is a reduction of imperfections in the image reproduction, such as particles, noise, or other deficiencies; the stage therefore improves both the noise level and the quality of the images.

The fourth stage focuses on the rendering process. Volume rendering methods were developed to overcome the limitations of iso-surface methods in depicting surfaces faithfully: iso-surface approaches must decide, for every volume element, whether or not the surface passes through it, which can produce false positives or false negatives, particularly for small or poorly defined structures. In contrast to surface rendering methods, volume rendering does not use intermediate geometric representations and offers the possibility of displaying weak or fuzzy surfaces, whereas surface rendering forces a decision on whether a surface is present or not.
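Returning to the edge enhancement stage, one common realization is unsharp masking, which sharpens edges by adding back the difference between the image and a blurred copy. The sketch below is an assumed example (NumPy and SciPy); the paper does not name a specific edge filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=1.0, amount=1.5):
    """Sharpen edges by amplifying the difference between the image
    and its Gaussian-blurred copy (classic unsharp masking)."""
    blurred = gaussian_filter(image, sigma=sigma)
    sharpened = image + amount * (image - blurred)
    # Keep the result inside the original intensity range.
    return np.clip(sharpened, image.min(), image.max())

slice_2d = np.random.rand(256, 256)    # stands in for a denoised slice
enhanced = unsharp_mask(slice_2d, sigma=1.0, amount=1.5)
```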

The volume rendering method encompasses the following steps:

Step 1: Creating an RGBA volume from the data.

Step 2: Reconstructing a continuous function from these non-continuous data sets.

Step 3: Projecting the volume onto the 2D viewing plane from the desired point of view.

Step 4: An RGBA volume is a 4D vector data set, where the first three components are the primary colors R, G, and B, and the last component A denotes the opacity value. For example, an opacity value of 0 means completely transparent and a value of 1 means completely opaque.

Step 5: Behind the RGBA volume, an opaque background is placed. The mapping of the data to opacity values acts as a classification of the data involved in the processing.
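A minimal sketch of Step 1, building an RGBA volume from a normalized scalar volume with a simple grey ramp for color and a linear opacity window, is shown below. The window limits and the grey ramp are assumptions for illustration (NumPy assumed), not the paper's transfer function.

```python
import numpy as np

def scalar_to_rgba(volume, window=(0.2, 0.8)):
    """Map a normalized scalar volume to an RGBA volume.
    Intensity drives a grey color ramp; opacity (A) rises linearly inside
    the chosen window and is 0 (fully transparent) outside it."""
    lo, hi = window
    grey = np.clip(volume, 0.0, 1.0)
    alpha = np.clip((volume - lo) / (hi - lo), 0.0, 1.0)
    return np.stack([grey, grey, grey, alpha], axis=-1)   # shape (..., 4)

volume = np.random.rand(64, 64, 64)    # placeholder scalar volume in [0, 1]
rgba = scalar_to_rgba(volume)
print(rgba.shape)                      # (64, 64, 64, 4)
```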

Therefore, the appearance of surfaces can be improved by utilizing shading methods when forming the RGB mapping. Opacity can likewise be used to reveal the interior of the data volume. The main advantage of the volume rendering method is that interior data are not discarded; it therefore allows the 3D data set to be examined as a whole. The drawbacks are the more difficult interpretation of the cloudy interiors and the longer rendering time compared with surface rendering. In the proposed system, the volume rendering stage applies two important approaches, namely the ray casting and splatting methods. These methods are used to determine the four visualization elements of the RGBA volume: the color value, the opacity value, and the completely transparent and completely opaque limits. The proposed system differs in the way the RGBA volume values are projected onto the 2D viewing plane. Volume rendering using the ray casting method composites the color and transparency of the samples along each ray, given as

(4) $C_{out} = C_{in}\,\big(1 - \alpha(x_i)\big) + c(x_i)\,\alpha(x_i)$,

where $C_{out}$ represents the color of the ray as it leaves each sample location, $C_{in}$ is the color of the ray as it enters, and $c(x_i)$ and $\alpha(x_i)$ are the color and opacity at sample position $x_i$. The formula is applied in back-to-front order, beginning at the background and moving toward the image plane, and yields the pixel color. It follows from the formula that the opacity acts as a data selector: sample points with opacity values near 1 hide practically all data along the ray between the background and that sample, whereas opacity values near 0 pass the accumulated information along practically unaltered. This method of composition is equivalent to the density-emitter model, where the color indicates the instantaneous emission rate and the opacity indicates the instantaneous absorption rate.

The final stage produces the 3D model. Three-dimensional modeling is the process of developing a mathematical representation of a 3D surface from 2D images using 3D modeling tools. Three-dimensional models are widely used these days in many fields; in the medical sciences, 3D models of human anatomy are heavily utilized for diagnosis and surgical procedures. The 3D models can be synthesized from multiple 2D image slices drawn from different medical image modalities, or reconstructed from the three standard slice orientations, namely axial, sagittal, and coronal. The reconstructed 3D models can be categorized into solid models and contour models. Solid modeling is usually employed in engineering and medical simulations, whereas contour modeling is used for surface modeling. The two categories differ in how the models are created and manipulated, in the conventions of usage in different fields, and in the approximations applied to each model.
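Equation (4) can be applied sample by sample along each cast ray. The sketch below composites a single grey-channel ray in back-to-front order; the sample colors and opacities are made-up values for illustration only.

```python
import numpy as np

def composite_ray(colors, alphas, background=0.0):
    """Back-to-front compositing of one ray following Eq. (4):
    C_out = C_in * (1 - alpha(x_i)) + c(x_i) * alpha(x_i)."""
    c_out = background
    # Samples are ordered from the background towards the image plane.
    for c, a in zip(colors, alphas):
        c_out = c_out * (1.0 - a) + c * a
    return c_out

# Three samples along a hypothetical ray (single grey channel).
colors = np.array([0.9, 0.5, 0.2])
alphas = np.array([0.1, 0.6, 0.3])
print(composite_ray(colors, alphas))   # composited pixel value
```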

7 Results and Discussion

Various types of medical images (MRI and CT) with a size of 256×256 were used to demonstrate the volume rendering of medical images in the proposed method. A data set of more than 35 medical images was considered and experimented on here, and several image quality metrics were measured to evaluate the proposed method. Table 1 shows representative original images, noise-removed images, rendered images, and 3D reconstructed output images obtained from the proposed system. These experimental images illustrate the achieved results. Input images were taken from original medical images from various data sources, passed through all pipeline processing stages, and correctly transformed into the final output images. The achieved output will help physicians who perform real-time surgical procedures. With the help of the reconstructed 3D model, each part of the human brain can be visualized clearly by rotating the model. Minute parts of the brain model can also be inspected by zooming in to obtain a clear view of the constructed model's organization. This facility will help the domain expert to investigate and perform surgical procedures without any problems.

Table 1: Experimental Results Obtained by Various Medical Image Data Sets.

Figure 4 illustrates a graphical representation of the quality metrics obtained for the input data set. The maximum structural content obtained from the developed system is 0.92. The peak signal-to-noise ratio ranges from 7.76 to 17.25. The minimum mean square error obtained is 1.05, and the maximum is 2.16. The normalized cross-correlation ranges from 0.45 to 0.78. The minimum normalized average error obtained is 0.46, and the maximum is 0.92. From this, it is noted that the error produced by the developed system is relatively low and that it yields a better-visualized 3D model.

Figure 4: Comparative Graphical Illustration of Various Quality Metrics with Existing Methods.
(A) Graphical depiction of structural content. (B) Graphical depiction of mean square error. (C) Graphical depiction of normalized cross correlation. (D) Graphical depiction of normalized average error. (E) Graphical depiction of peak signal to noise ratio. (F) Graphical depiction of various image quality metrics.
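The quality metrics reported in Figure 4 correspond to standard full-reference definitions. The sketch below computes mean square error, peak signal-to-noise ratio, normalized cross-correlation, structural content, and normalized average error using the usual textbook formulas; whether these exactly match the paper's computation is an assumption, and the reference/test images here are synthetic placeholders.

```python
import numpy as np

def quality_metrics(reference, test):
    """Common full-reference image quality metrics."""
    ref = reference.astype(np.float64)
    tst = test.astype(np.float64)
    mse = np.mean((ref - tst) ** 2)
    psnr = 10 * np.log10((ref.max() ** 2) / mse) if mse > 0 else float("inf")
    ncc = np.sum(ref * tst) / np.sum(ref ** 2)              # normalized cross-correlation
    sc = np.sum(ref ** 2) / np.sum(tst ** 2)                # structural content
    nae = np.sum(np.abs(ref - tst)) / np.sum(np.abs(ref))   # normalized average error
    return {"MSE": mse, "PSNR": psnr, "NCC": ncc, "SC": sc, "NAE": nae}

ref = np.random.rand(256, 256)                  # stands in for the original slice
out = ref + 0.05 * np.random.randn(256, 256)    # stands in for the processed slice
print(quality_metrics(ref, out))
```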

8 Conclusions

In this study, the proposed system provides an efficient technique for 3D image visualization through 2D images. Such images pose many issues, such as low-resolution or high-resolution output display, very low contrast, and distortions of geometric primitives. The integration of image slices to create a 3D graphical illustration, which is central to our proposed system, solves these issues and enables many applications in the medical sciences. Therefore, the proposed system provides an efficient technique for the 3D image visualization process through the reconstruction of higher-dimensional pictures from 2D images. Finally, our proposed system helps in processing medical images and supports several types of image file formats.

Bibliography

[1] C. Alberola, R. Cárdenes, M. Martín, M. A. Martín, M. A. Rodríguez-Florido and J. Ruiz-Alzola, diSNei: a collaborative environment for medical images analysis and visualization, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 814–823, Springer, Berlin, 2000.

[2] M. Aliroteh and T. McInerney, SketchSurfaces: sketch-line initialized deformable surfaces for efficient and controllable interactive 3D medical image segmentation, in: International Symposium on Visual Computing, pp. 542–553, Springer, Berlin, 2007. doi: 10.1007/978-3-540-76858-6_53.

[3] A. Bhalerao, I. Boada, M. Feixas and M. Sbert, Image segmentation using excess entropy, J. Signal Process. Syst. 54 (2009), 205–214. doi: 10.1007/s11265-008-0194-6.

[4] A. Depeursinge, S. Duc, I. Eggel and H. Muller, Mobile medical visual information retrieval, IEEE Trans. Inf. Technol. Biomed. 16 (2012), 53–61. doi: 10.1109/TITB.2011.2173585.

[5] D. Dey, P. J. Slomka, D. G. Gobbi and T. M. Peters, Mixed reality merging endoscopic images and 3D surfaces, in: Medical Image Computing and Computer-Assisted Intervention (MICCAI 2000), Third International Conference, pp. 796–803, Pittsburgh, PA, USA, 2000. doi: 10.1007/978-3-540-40899-4_82.

[6] M. Ferre, S. Cobos, R. Aracil and M. Sánchez Urán, 3D-image visualization and its performance in teleoperation, in: Virtual Reality, pp. 22–31, Springer, 2007. doi: 10.1007/978-3-540-73335-5_3.

[7] L. Gu and T. Peters, 3D segmentation of medical images using a fast multistage hybrid algorithm, Int. J. Comput. Assist. Radiol. Surg. 1 (2006), 23–31. doi: 10.1007/s11548-006-0001-4.

[8] N. Herlambang, H. Liao, K. Matsumiya, K. Masamune and T. Dohi, Real-time autostereoscopic visualization of registration-generated 4D MR image of beating heart, in: International Workshop on Medical Imaging and Virtual Reality, pp. 349–358, Springer, Berlin, 2008. doi: 10.1007/978-3-540-79982-5_38.

[9] Y. H. Jung, J. Kim, D. Feng and M. Fulham, Occlusion and slice-based volume rendering augmentation for PET-CT, IEEE J. Biomed. Health Inform. 21 (2016), 1005–1014. doi: 10.1109/JBHI.2016.2565502.

[10] J. Kim, W. Cai, S. Eberl and D. Feng, Real-time volume rendering visualization of dual-modality PET/CT images with interactive fuzzy thresholding segmentation, IEEE Trans. Inf. Technol. Biomed. 11 (2007), 161–169. doi: 10.1109/TITB.2006.875669.

[11] K. Krechetova, A. Glaz and A. Platkajis, 3D medical image visualization and volume estimation of pathology zones, in: 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, pp. 532–535, Springer, Berlin, 2008. doi: 10.1007/978-3-540-69367-3_142.

[12] M. Meißner, U. Hoffmann and W. Straßer, Enabling classification and shading for 3D texture mapping based volume rendering, in: Proceedings of the 10th IEEE Visualization 1999 Conference (VIS'99), IEEE Computer Society, California, 1999.

[13] M. Meißner, H. Pfister, R. Westermann and C. M. Wittenbrink, Volume visualization and volume rendering techniques, in: Eurographics Tutorial, Eurographics, 2000.

[14] S. M. Pizer, P. T. Fletcher, S. Joshi, A. Thall, J. Z. Chen, Y. Fridman, D. S. Fritsch, G. Gash, J. M. Glotzer, M. R. Jiroutek, C. Lu, K. E. Muller, G. Tracton, P. Yushkevich and E. L. Chaney, Deformable m-reps for 3D medical image segmentation, Int. J. Comput. Vis. 55 (2003), 85–106. doi: 10.1023/A:1026313132218.

[15] M. Ramasubramanian, P. Shankar and D. M. D. Rangaswamy, 3D object conversion via 2D images: a survey report, in: The National Conference on Architecture, Software Systems and Green Computing (NCASG-2013), Department of Computer Science and Engineering, Aarupadai Veedu Institute of Technology, Chennai, 2013.

[16] T. Rodt, S. O. Bartling, J. E. Zajaczek, M. A. Vafa, T. Kapapa, O. Majdani, J. K. Krauss, M. Zumkeller, H. Matthies, H. Becker and J. Kaminsky, Evaluation of surface and volume rendering in 3D-CT of facial fractures, Dentomaxillofac. Radiol. 35 (2006), 227–231. doi: 10.1259/dmfr/22989395.

[17] A. Rosset, L. Spadola and O. Ratib, OsiriX: an open-source software for navigating in multidimensional DICOM images, J. Digit. Imaging 17 (2004), 205–216. doi: 10.1007/s10278-004-1014-6.

[18] M. Sainz and R. Pajarola, Point-based rendering techniques, Comput. Graph. 28 (2004), 869–879. doi: 10.1016/j.cag.2004.08.014.

[19] H. Shum and S. B. Kang, Review of image-based rendering techniques, in: Visual Communications and Image Processing 2000, pp. 2–13, International Society for Optics and Photonics, Bellingham, WA, 2000. doi: 10.1117/12.386541.

[20] J. Swedlow, The open microscopy environment: a collaborative data modeling and software development project for biological image informatics, in: Imaging Cellular and Molecular Biological Functions, pp. 71–92, Springer, 2007. doi: 10.1007/978-3-540-71331-9_3.

[21] T. S. Yoo and D. T. Chen, Interactive 3D medical visualizations: a parallel approach to surface rendering 3D medical data, Proc. S/CAR 94 (1994), 100–105.

[22] Q. Zhang, R. Eagleson and T. Peters, Rapid voxel classification methodology for interactive 3D medical image visualization, in: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2007, pp. 86–93, Springer, 2007. doi: 10.1007/978-3-540-75759-7_11.

Received: 2017-06-30
Published Online: 2017-12-18

©2020 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 Public License.
