Displaying discrete scalar data fields
Volume rendering describes a range of techniques for displaying discrete scalar data fields such as volumetric data sets. These are generated in various areas including medical imaging, technical simulations, gaming and measurements from the geoscientific domain. The data can be acquired in different ways, for example by computed tomography, magnetic resonance imaging, ultrasound or seismic techniques. In these domains volume rendering has become an essential tool for analyzing three-dimensional datasets. In addition to the field of scientific visualization, volume rendering also plays an important role in the generation of "special effects": certain structures that do not contain an implicit surface are very difficult to render using traditional surface-based rendering strategies. Typical examples of such structures are fire, fractals, smoke or fog.
Basic Concepts of Volume Rendering
Techniques for displaying volumetric objects can be classified as indirect or direct volume rendering techniques, each of which has its own limitations and benefits regarding, for example, runtime performance or image quality.
Indirect Volume Rendering
The basic principle of indirect volume rendering is to segment and extract certain areas of interest from the volume and transform them into polygonal models. One example of an algorithm for this purpose is the marching cubes algorithm, which approximates a polygonal model from a voxel-based dataset. This surface approximation can be rendered with a standard rendering pipeline, including all available illumination techniques. Feature extraction means that only a subset of the dataset is taken into consideration. This reduces the amount of data that needs to be processed during the rendering stage, which in turn increases rendering speed. Visualizing a volume by extracting and rendering isosurfaces can be a reasonable approach for particular data sets such as CT scans.
Direct Volume Rendering
The second category is Direct Volume Rendering (DVR). DVR techniques do not require extracting a polygonal isosurface from the dataset. Instead, the scalar values of the voxels are mapped to rendering attributes such as color and opacity. DVR thus covers all techniques for rendering volumetric objects without an intermediate conversion to surface geometry. During rendering all voxels are shaded according to a transfer function that associates distinct intensity ranges with distinct rendering properties. Depending on the transfer function, different parts of the volume can be displayed or completely hidden in the visualization. Designing a suitable transfer function can be very time-consuming and requires a lot of a priori knowledge about the data sets. It is therefore crucial to offer interactive tools that let the user dynamically update the relevant parameters and see the classification results immediately.
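The mapping from intensity ranges to rendering properties can be sketched as a simple one-dimensional transfer function. The intensity ranges and colors below are purely hypothetical placeholders, not taken from any particular dataset:

```python
def transfer_function(intensity):
    """Map a scalar voxel intensity in [0, 255] to (r, g, b, a) in [0, 1].

    Intensities outside the listed ranges are mapped to zero opacity,
    i.e. those parts of the volume are completely hidden.
    """
    if 40 <= intensity < 80:      # hypothetical soft-tissue range: semi-transparent red
        return (0.8, 0.3, 0.3, 0.1)
    if 120 <= intensity <= 255:   # hypothetical bone range: nearly opaque white
        return (1.0, 1.0, 1.0, 0.9)
    return (0.0, 0.0, 0.0, 0.0)   # everything else is hidden

print(transfer_function(60))   # semi-transparent red
print(transfer_function(200))  # nearly opaque white
```

In an interactive tool the range boundaries and colors would be the parameters the user edits dynamically; the renderer then re-applies the function to every sample.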
There are two different definitions of the term voxel. The one used in this thesis assumes that voxels are points in 3D space, each consisting of a coordinate and a corresponding scalar value. The scalar value of points located between neighboring voxels is determined by trilinear interpolation. The second definition describes a voxel as a cubic region of 3D space whose scalar value covers the whole interior of that cell. Following the first definition, a voxel can be understood as a grid point within a uniform grid structure. The scalar value of each of these grid points can be retrieved by accessing the grid element at the respective coordinate of the voxel.
A single voxel can therefore be described as a tuple (x, y, z, v). The first three elements encode the spatial coordinate of the voxel; the corresponding scalar value is contained in the fourth element v. How this value is interpreted depends on the type of the dataset: its intended meaning can be, for example, tissue density, temperature or seismic amplitude. In the above example voxels serve as grid points of a uniform grid, where the distance between adjacent voxels is always the same. There are, however, also non-uniform grids in which this spacing varies.
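The trilinear interpolation used in the first voxel definition can be sketched as three successive linear interpolations along the axes of the cell surrounding the sample point. This is a minimal illustration assuming a uniform grid stored as nested lists; boundary handling is omitted for brevity:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b with weight t in [0, 1]."""
    return a + t * (b - a)

def trilinear(grid, x, y, z):
    """Interpolate the scalar value at (x, y, z) inside a uniform grid.

    grid[i][j][k] holds the scalar value of the voxel at integer
    coordinate (i, j, k); the sample point may lie between grid points.
    """
    i, j, k = int(x), int(y), int(z)       # lower corner of the cell
    tx, ty, tz = x - i, y - j, z - k       # fractional offsets in the cell
    # Interpolate along x on the four x-parallel edges of the cell ...
    c00 = lerp(grid[i][j][k],     grid[i+1][j][k],     tx)
    c10 = lerp(grid[i][j+1][k],   grid[i+1][j+1][k],   tx)
    c01 = lerp(grid[i][j][k+1],   grid[i+1][j][k+1],   tx)
    c11 = lerp(grid[i][j+1][k+1], grid[i+1][j+1][k+1], tx)
    # ... then along y ...
    c0 = lerp(c00, c10, ty)
    c1 = lerp(c01, c11, ty)
    # ... and finally along z.
    return lerp(c0, c1, tz)

# 2x2x2 cell whose corner values equal their x-coordinate:
grid = [[[0, 0], [0, 0]], [[1, 1], [1, 1]]]
print(trilinear(grid, 0.25, 0.5, 0.5))  # 0.25
```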
Techniques for Volume Rendering
Volume rendering techniques can be classified as either image-order or object-order techniques. One example from each group will be introduced here, namely texture slicing (object-order) and volume raycasting (image-order).
The concept of texture slicing is a very natural way of displaying volumetric datasets. For example, a CT scanner takes multiple stacked images of a human body. In discrete intervals two-dimensional sections are scanned and the measured density values are saved as pixel elements. In this way the continuous object is discretized and stored as a stack of images.
A reconstruction of the scanned object can be achieved by placing multiple quad primitives closely next to each other. This group of quads is also called proxy geometry. In a basic implementation the positions of the quad vertices are used to calculate the texture coordinates at which the volume is sampled.
During rasterization each quad is then textured with a section through the volume corresponding to its spatial coordinates. There are different ways of positioning the proxy geometry. The most common are object-aligned and view-aligned slices.
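Generating object-aligned proxy geometry can be sketched as follows. The example assumes the volume fills the unit cube [0, 1]^3 and is sliced perpendicular to the z-axis, so vertex positions and texture coordinates coincide; in a real renderer the two would differ whenever the volume is scaled or transformed:

```python
def object_aligned_slices(num_slices):
    """Generate quads for object-aligned proxy geometry along the z-axis.

    Each quad is a list of four vertices; every vertex is a
    (position, texture_coordinate) pair of 3D tuples.
    """
    quads = []
    for s in range(num_slices):
        z = s / (num_slices - 1)  # slice depth in [0, 1]
        corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
        # Position equals texture coordinate because the volume
        # is assumed to fill the unit cube.
        quad = [((x, y, z), (x, y, z)) for x, y in corners]
        quads.append(quad)
    return quads

slices = object_aligned_slices(64)
print(len(slices))        # 64
print(slices[0][0])       # first vertex of the front slice, at z = 0
```

The rasterizer would then interpolate the texture coordinates across each quad and sample the volume texture per fragment, which is exactly where the bilinear-versus-trilinear interpolation distinction discussed below comes into play.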
The advantages of basic texture slicing are the simplicity of its implementation and its runtime performance. Limitations exist regarding image quality, however, which often suffers from rendering artifacts at the edges of the slice polygons. If texture slicing is implemented using two-dimensional textures, only bilinear interpolation within each slice is available.
On modern hardware this can be overcome by directly using three-dimensional textures, which offer trilinear interpolation for sampling points located between grid points.
In order to accurately display high frequencies in the data, the proxy slices must be placed very close to each other, which results in a large amount of geometry that needs to be processed. Advanced rendering techniques such as adaptive sampling distances cannot easily be integrated into texture slicing algorithms.
The basic idea of raycasting is to generate an imaginary ray for each pixel of the final image, as illustrated in the following figure. This ray is cast from the viewpoint into the volume and sampled at regular intervals.
These samples are used to accumulate the color of the ray as it traverses through the volume. Color and transparency are applied at this point depending on the intensity of the voxel and the transfer function.
Moreover, shading can be performed at this stage based on local illumination models and available light sources. Once the ray leaves the volume on the opposite side, the accumulated color is used as the final pixel color.
Raycasting differs from raytracing in that rays are not reflected at surface interfaces but traverse the whole extent of the volume. The pixel color is therefore determined by compositing the color and opacity values of all samples taken along the ray.
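The accumulation along one ray can be sketched with standard front-to-back compositing. This is a minimal illustration for a single ray; sample generation, the transfer function and shading are assumed to have happened already:

```python
def composite_ray(samples):
    """Front-to-back compositing of (color, opacity) samples along one ray.

    samples: list of ((r, g, b), alpha) pairs, ordered from the viewer
    into the volume. Returns the accumulated pixel color and opacity.
    """
    acc_color = [0.0, 0.0, 0.0]
    acc_alpha = 0.0
    for (r, g, b), alpha in samples:
        # Each new sample contributes only through the remaining
        # transparency (1 - accumulated opacity) of the ray so far.
        weight = (1.0 - acc_alpha) * alpha
        acc_color[0] += weight * r
        acc_color[1] += weight * g
        acc_color[2] += weight * b
        acc_alpha += weight
        if acc_alpha >= 0.99:  # early-ray termination: ray is nearly opaque
            break
    return tuple(acc_color), acc_alpha

# A semi-transparent red sample in front of an opaque white one:
color, alpha = composite_ray([((1, 0, 0), 0.5), ((1, 1, 1), 1.0)])
print(color, alpha)  # (1.0, 0.5, 0.5) 1.0
```

Front-to-back order allows the loop to stop as soon as the accumulated opacity saturates, which is the basis of the early-ray-termination optimization and one reason adaptive techniques fit raycasting more naturally than texture slicing.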