Display of Surfaces from Volume Data

Marc Levoy

June, 1987
(revised February, 1988)

Computer Science Department


University of North Carolina
Chapel Hill, NC 27514

Abstract

The application of volume rendering techniques to the display of surfaces from sampled scalar func-
tions of three spatial dimensions is explored. Fitting of geometric primitives to the sampled data is not
required. Images are formed by directly shading each sample and projecting it onto the picture plane.
Surface shading calculations are performed at every voxel with local gradient vectors serving as sur-
face normals. In a separate step, surface classification operators are applied to obtain a partial opacity
for every voxel. Operators that detect isovalue contour surfaces and region boundary surfaces are
presented. Independence of shading and classification calculations ensures an undistorted visualization
of 3-D shape. Non-binary classification operators ensure that small or poorly defined features are not
lost. The resulting colors and opacities are composited from back to front along viewing rays to form
an image. The technique is simple and fast, yet displays surfaces exhibiting smooth silhouettes and
few other aliasing artifacts. The use of selective blurring and super-sampling to further improve image
quality is also described. Examples from two applications are given: molecular graphics and medical
imaging.

1. Introduction

Visualization of scientific computations is a rapidly growing field within computer graphics. A
large subset of these applications involves sampled functions of three spatial dimensions, also known as
volume data. Surfaces are commonly used to visualize volume data because they succinctly present the
3-D configuration of complicated objects. In this paper, we explore the use of isovalue contour sur-
faces to visualize electron density maps for molecular graphics, and the use of region boundary sur-
faces to visualize computed tomography (CT) data for medical imaging.
The currently dominant techniques for displaying surfaces from volume data consist of applying
a surface detector to the sample array, fitting geometric primitives to the detected surfaces, then
rendering these primitives using conventional surface rendering algorithms. The techniques differ
from one another mainly in the choice of primitives and the scale at which they are defined. In the
medical imaging field, a common approach is to apply thresholding to the volume data. The resulting
binary representation can be rendered by treating 1-voxels as opaque cubes having six polygonal faces
[1]. If this binary representation is augmented with the local grayscale gradient at each voxel, substan-
tial improvements in surface shading can be obtained [2-5]. Alternatively, edge tracking can be
applied on each slice to yield a set of contours defining features of interest, then a mesh of polygons
can be constructed connecting the contours on adjacent slices [6]. As the scale of voxels approaches
that of display pixels, it becomes feasible to apply a local surface detector at each sample location.
This yields a very large collection of voxel-sized polygons, which can be rendered using standard
algorithms [7]. In the molecular graphics field, methods for visualizing electron density maps include
stacks of isovalue contour lines, ridge lines arranged in 3-space so as to connect local maxima [8], and
basket meshes representing isovalue contour surfaces [9].


All of these techniques suffer from the common problem of having to make a binary
classification decision: either a surface passes through the current voxel or it does not. As a result,
these methods often exhibit false positives (spurious surfaces) or false negatives (erroneous holes in
surfaces), particularly in the presence of small or poorly defined features.
To avoid these problems, researchers have begun exploring the notion of volume rendering
wherein the intermediate geometric representation is omitted. Images are formed by shading all data
samples and projecting them onto the picture plane. The lack of explicit geometry does not preclude
the display of surfaces, as will be demonstrated in this paper. The key improvement offered by
volume rendering is that it provides a mechanism for displaying weak or fuzzy surfaces. This capabil-
ity allows us to relax the requirement, inherent when using geometric representations, that a surface be
either present or absent at a given location. This in turn frees us from the necessity of making binary
classification decisions. Another advantage of volume rendering is that it allows us to separate shading
and classification operations. This separation implies that the accuracy of surface shading, hence the
apparent orientation of surfaces, does not depend on the success or failure of classification. This
robustness can be contrasted with rendering techniques in which only voxels lying on detected sur-
faces are shaded. In such systems, any errors in classification result in incorrectly oriented surfaces.
Smith has written an excellent introduction to volume rendering [10]. Its application to CT data
has been demonstrated by PIXAR [11], but no details of their approach have been published. The
technique described in this paper grew out of the author’s earlier work on the use of points as a render-
ing primitive [12]. Its application to CT data was first reported in June, 1987 [13], and was presented
at the SPIE Medical Imaging II conference in January, 1988 [14].

2. Rendering pipeline

The volume rendering pipeline used in this paper is summarized in figure 1. We begin with an array of acquired values $f_0(\mathbf{x}_i)$ at voxel locations $\mathbf{x}_i = (x_i, y_j, z_k)$. The first step is data preparation, which may include correction for non-orthogonal sampling grids in electron density maps, correction for patient motion in CT data, contrast enhancement, and interpolation of additional samples. The output of this step is an array of prepared values $f_1(\mathbf{x}_i)$. This array is used as input to the shading model described in section 3, yielding an array of voxel colors $c_\lambda(\mathbf{x}_i)$, $\lambda = r, g, b$. In a separate step, the array of prepared values is used as input to one of the classification procedures described in section 4, yielding an array of voxel opacities $\alpha(\mathbf{x}_i)$. Rays are then cast into these two arrays from the observer eyepoint. For each ray, a vector of sample colors $c_\lambda(\tilde{\mathbf{x}}_i)$ and opacities $\alpha(\tilde{\mathbf{x}}_i)$ is computed by re-sampling the voxel database at $K$ evenly spaced locations $\tilde{\mathbf{x}}_i = (\tilde{x}_i, \tilde{y}_j, \tilde{z}_k)$ along the ray and tri-linearly interpolating from the colors and opacities in the eight voxels closest to each sample location, as shown in figure 2. Finally, a fully opaque background of color $c_{\text{bkg},\lambda}$ is draped behind the dataset, and the re-sampled colors and opacities are merged with each other and with the background by compositing in back-to-front order to yield a single color $C_\lambda(\tilde{\mathbf{u}}_i)$ for the ray and, since only one ray is cast per image pixel, for the pixel location $\tilde{\mathbf{u}}_i = (\tilde{u}_i, \tilde{v}_j)$ as well.
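To make the re-sampling step concrete, here is a minimal sketch (Python with NumPy; an illustration added in editing, not the author's implementation) of tri-linear interpolation at one fractional sample location. The array layout and the omission of bounds checking are assumptions.

```python
import numpy as np

def trilinear_sample(values, p):
    """Re-sample a 3-D voxel array at fractional location p = (x, y, z)
    by tri-linearly interpolating the eight voxels closest to p, as in
    figure 2. `values` may hold prepared values, colors, or opacities."""
    x, y, z = p
    i, j, k = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - i, y - j, z - k          # fractional offsets in the cell
    # Interpolate along x, then y, then z.
    c00 = values[i, j, k] * (1 - dx) + values[i + 1, j, k] * dx
    c10 = values[i, j + 1, k] * (1 - dx) + values[i + 1, j + 1, k] * dx
    c01 = values[i, j, k + 1] * (1 - dx) + values[i + 1, j, k + 1] * dx
    c11 = values[i, j + 1, k + 1] * (1 - dx) + values[i + 1, j + 1, k + 1] * dx
    c0 = c00 * (1 - dy) + c10 * dy
    c1 = c01 * (1 - dy) + c11 * dy
    return c0 * (1 - dz) + c1 * dz
```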
The compositing calculations referred to above are simply linear interpolations. Specifically, the color $C_{\text{out},\lambda}(\tilde{\mathbf{u}}_i)$ of the ray as it leaves each sample location is related to the color $C_{\text{in},\lambda}(\tilde{\mathbf{u}}_i)$ of the ray as it enters and the color $c_\lambda(\tilde{\mathbf{x}}_i)$ and opacity $\alpha(\tilde{\mathbf{x}}_i)$ at that sample location by the transparency formula

$$C_{\text{out},\lambda}(\tilde{\mathbf{u}}_i) = C_{\text{in},\lambda}(\tilde{\mathbf{u}}_i)\,\big(1 - \alpha(\tilde{\mathbf{x}}_i)\big) + c_\lambda(\tilde{\mathbf{x}}_i)\,\alpha(\tilde{\mathbf{x}}_i).$$

Solving for pixel color $C_\lambda(\tilde{\mathbf{u}}_i)$ in terms of the vector of sample colors $c_\lambda(\tilde{\mathbf{x}}_i)$ and opacities $\alpha(\tilde{\mathbf{x}}_i)$ along the associated viewing ray gives

$$C_\lambda(\tilde{\mathbf{u}}_i) = C_\lambda(\tilde{u}_i, \tilde{v}_j) = \sum_{\tilde{k}=0}^{K} \left[ c_\lambda(\tilde{x}_i, \tilde{y}_j, \tilde{z}_{\tilde{k}})\,\alpha(\tilde{x}_i, \tilde{y}_j, \tilde{z}_{\tilde{k}}) \prod_{\tilde{m}=\tilde{k}+1}^{K} \big(1 - \alpha(\tilde{x}_i, \tilde{y}_j, \tilde{z}_{\tilde{m}})\big) \right] \tag{1}$$

where $c_\lambda(\tilde{x}_i, \tilde{y}_j, \tilde{z}_0) = c_{\text{bkg},\lambda}$ and $\alpha(\tilde{x}_i, \tilde{y}_j, \tilde{z}_0) = 1$.
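The loop below (Python with NumPy; a sketch added in editing, not the author's code) evaluates equation (1) directly as back-to-front compositing. The convention that index 0 is the sample nearest the eye is an assumption.

```python
import numpy as np

def composite_ray(colors, alphas, c_bkg):
    """Composite one ray's samples per equation (1).  `colors` has shape
    (K, 3) and `alphas` shape (K,), ordered front (index 0) to back;
    `c_bkg` is the fully opaque background color draped behind the data."""
    C = np.asarray(c_bkg, dtype=float)
    # Walk from the farthest sample toward the eye, applying the
    # transparency formula C_out = C_in * (1 - alpha) + c * alpha.
    for k in range(len(alphas) - 1, -1, -1):
        C = C * (1.0 - alphas[k]) + colors[k] * alphas[k]
    return C
```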

3. Shading

Using the rendering pipeline presented above, the mapping from acquired data to color provides
3-D shape cues, but does not participate in the classification operation. Accordingly, a shading model
was selected that provides a satisfactory illusion of smooth surfaces at a reasonable cost. It is not the
main point of the paper and is presented mainly for completeness. The model chosen is due to Phong
[15]:
$$c_\lambda(\mathbf{x}_i) = c_{p,\lambda}\,k_{a,\lambda} + \frac{c_{p,\lambda}}{k_1 + k_2\,d(\mathbf{x}_i)} \left[ k_{d,\lambda}\,\big(\mathbf{N}(\mathbf{x}_i) \cdot \mathbf{L}\big) + k_{s,\lambda}\,\big(\mathbf{N}(\mathbf{x}_i) \cdot \mathbf{H}\big)^{n} \right] \tag{2}$$
where

$c_\lambda(\mathbf{x}_i)$ = $\lambda$'th component of color at voxel location $\mathbf{x}_i$, $\lambda = r, g, b$,

$c_{p,\lambda}$ = $\lambda$'th component of color of parallel light source,

$k_{a,\lambda}$ = ambient reflection coefficient for $\lambda$'th color component,

$k_{d,\lambda}$ = diffuse reflection coefficient for $\lambda$'th color component,

$k_{s,\lambda}$ = specular reflection coefficient for $\lambda$'th color component,

$n$ = exponent used to approximate highlight,

$k_1$, $k_2$ = constants used in linear approximation of depth-cueing,

$d(\mathbf{x}_i)$ = perpendicular distance from picture plane to voxel location $\mathbf{x}_i$,

$\mathbf{N}(\mathbf{x}_i)$ = surface normal at voxel location $\mathbf{x}_i$,

$\mathbf{L}$ = normalized vector in direction of light source,

$\mathbf{H}$ = normalized vector in direction of maximum highlight.

Since a parallel light source is used, $\mathbf{L}$ is a constant. Furthermore,

$$\mathbf{H} = \frac{\mathbf{V} + \mathbf{L}}{|\mathbf{V} + \mathbf{L}|}$$

where

$\mathbf{V}$ = normalized vector in direction of observer.

Since an orthographic projection is used, $\mathbf{V}$ and hence $\mathbf{H}$ are also constants. Finally, the surface normal is given by

$$\mathbf{N}(\mathbf{x}_i) = \frac{\nabla f(\mathbf{x}_i)}{|\nabla f(\mathbf{x}_i)|}$$
where the gradient vector $\nabla f(\mathbf{x}_i)$ is approximated using the operator

$$\nabla f(\mathbf{x}_i) = \nabla f(x_i, y_j, z_k) \approx \left( \frac{1}{2}\big[ f(x_{i+1}, y_j, z_k) - f(x_{i-1}, y_j, z_k) \big],\ \frac{1}{2}\big[ f(x_i, y_{j+1}, z_k) - f(x_i, y_{j-1}, z_k) \big],\ \frac{1}{2}\big[ f(x_i, y_j, z_{k+1}) - f(x_i, y_j, z_{k-1}) \big] \right).$$
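The gradient operator and equation (2) can be sketched as follows (Python with NumPy; illustrative only, not the author's code). Clamping negative dot products to zero is an assumption the paper does not state.

```python
import numpy as np

def central_gradient(f, i, j, k):
    """Central-difference approximation of the gradient of f at an
    interior voxel (i, j, k), per the operator above."""
    return 0.5 * np.array([f[i + 1, j, k] - f[i - 1, j, k],
                           f[i, j + 1, k] - f[i, j - 1, k],
                           f[i, j, k + 1] - f[i, j, k - 1]])

def phong_voxel(f, i, j, k, c_p, k_a, k_d, k_s, n, k1, k2, d, L, H):
    """Shade one voxel with equation (2).  c_p, k_a, k_d, k_s are
    per-channel (r, g, b) arrays; L and H are the constant normalized
    light and highlight vectors; d is the voxel's perpendicular
    distance from the picture plane, used for depth-cueing."""
    g = central_gradient(f, i, j, k)
    mag = np.linalg.norm(g)
    N = g / mag if mag > 0 else np.zeros(3)       # surface normal
    diffuse = k_d * max(np.dot(N, L), 0.0)        # clamping is an assumption
    specular = k_s * max(np.dot(N, H), 0.0) ** n
    return c_p * (k_a + (diffuse + specular) / (k1 + k2 * d))
```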

4. Classification

The mapping from acquired data to opacity performs the essential task of surface classification.
We will first consider the rendering of isovalue contour surfaces in electron density maps, i.e. surfaces
defined by points of equal electron density. We will then consider the rendering of region boundary
surfaces in computed tomography (CT) data, i.e. surfaces bounding tissues of constant CT number.

4.1. Isovalue contour surfaces

Determining the structure of large molecules is a difficult problem. The method most commonly
used is ab initio interpretation of electron density maps, which represent the averaged density of a
molecule’s electrons as a function of position in 3-space. These maps are obtained from X-ray diffrac-
tion studies of crystallized samples of the molecule.
One obvious way to display isovalue surfaces is to opaquely render all voxels having values
greater than some threshold. This produces 3-D regions of opaque voxels, the outermost layer of which
is the desired isovalue surface. Unfortunately, this solution prevents display of multiple concentric
semi-transparent surfaces, a very useful capability. Using a window in place of a threshold does not
solve the problem. If the window is too narrow, holes appear. If it is too wide, display of multiple sur-
faces is constrained. In addition, the use of thresholds and windows introduces artifacts into the image
that are not present in the data.
The classification procedure employed in this study begins by assigning an opacity $\alpha_v$ to voxels having selected value $f_v$, and assigning an opacity of zero to all other voxels. In order to avoid aliasing artifacts, we would also like voxels having values close to $f_v$ to be assigned opacities close to $\alpha_v$. The most pleasing image is obtained if the thickness of this transition region stays constant throughout the volume. We approximate this effect by having the opacity fall off as we move away from the selected value at a rate inversely proportional to the magnitude of the local gradient vector.
This mapping is implemented using the expression

$$\alpha(\mathbf{x}_i) = \alpha_v \begin{cases} 1 & \text{if } |\nabla f(\mathbf{x}_i)| = 0 \text{ and } f(\mathbf{x}_i) = f_v \\[6pt] 1 - \dfrac{1}{r} \left| \dfrac{f_v - f(\mathbf{x}_i)}{|\nabla f(\mathbf{x}_i)|} \right| & \text{if } |\nabla f(\mathbf{x}_i)| > 0 \text{ and } f(\mathbf{x}_i) - r\,|\nabla f(\mathbf{x}_i)| \le f_v \le f(\mathbf{x}_i) + r\,|\nabla f(\mathbf{x}_i)| \\[6pt] 0 & \text{otherwise} \end{cases} \tag{3}$$

where $r$ is the desired thickness in voxels of the transition region, and the gradient vector is approximated using the operator given in section 3. A graph of $\alpha(\mathbf{x}_i)$ as a function of $f(\mathbf{x}_i)$ and $|\nabla f(\mathbf{x}_i)|$ for typical values of $f_v$, $\alpha_v$, and $r$ is shown in figure 3.
If more than one isovalue surface is to be displayed in a single image, they can be classified separately and their opacities combined. Specifically, given selected values $f_{v_n}$, $n = 1, \ldots, N$, $N \ge 1$, opacities $\alpha_{v_n}$, and transition region thicknesses $r_n$, we can use equation (3) to compute $\alpha_n(\mathbf{x}_i)$, then apply the relation

$$\alpha_{\text{tot}}(\mathbf{x}_i) = 1 - \prod_{n=1}^{N} \big(1 - \alpha_n(\mathbf{x}_i)\big).$$
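A direct transcription of equation (3) and of the combination rule, offered as a sketch (Python; not the author's implementation):

```python
def isovalue_opacity(f_val, grad_mag, f_v, alpha_v, r):
    """Opacity of one voxel for an isovalue contour surface, equation (3).
    f_val is the voxel value, grad_mag the local gradient magnitude,
    f_v the selected value, alpha_v the peak opacity, and r the
    transition-region thickness in voxels."""
    if grad_mag == 0.0:
        return alpha_v if f_val == f_v else 0.0
    if abs(f_v - f_val) <= r * grad_mag:
        return alpha_v * (1.0 - abs(f_v - f_val) / (r * grad_mag))
    return 0.0

def combined_opacity(f_val, grad_mag, surfaces):
    """Combine separately classified surfaces:
    alpha_tot = 1 - prod(1 - alpha_n).
    `surfaces` is a list of (f_v, alpha_v, r) triples, one per surface."""
    transparency = 1.0
    for f_v, alpha_v, r in surfaces:
        transparency *= 1.0 - isovalue_opacity(f_val, grad_mag, f_v, alpha_v, r)
    return 1.0 - transparency
```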

4.2. Region boundary surfaces

From a densitometric point of view, the human body is a complex arrangement of biological tis-
sues each of which is fairly homogeneous and of predictable density. Clinicians are mostly interested
in the boundaries between tissues, from which the sizes and spatial relationships of anatomical features
can be inferred.
Although many researchers use isovalue surfaces for the display of medical data, it is not clear that they are well suited for that purpose. The reason can be explained briefly as follows. Given an anatomical scene containing two tissue types A and B having values $f_{v_A}$ and $f_{v_B}$ where $f_{v_A} < f_{v_B}$, data acquisition will produce voxels having values $f(\mathbf{x}_i)$ such that $f_{v_A} \le f(\mathbf{x}_i) \le f_{v_B}$. Thin features of tissue type B may be represented by regions in which all voxels bear values less than $f_{v_B}$. Indeed, there is no threshold value greater than $f_{v_A}$ guaranteed to detect arbitrarily thin regions of type B, and thresholds close to $f_{v_A}$ are as likely to detect noise as signal.
The procedure employed in this study is based on the following simplified model of anatomical
scenes and the CT scanning process. We assume that scenes contain an arbitrary number of tissue
types bearing CT numbers falling within a small neighborhood of some known value. We further
assume that tissues of each type touch tissues of at most two other types in a given scene. Finally, we
assume that, if we order the types by CT number, then each type touches only types adjacent to it in
the ordering. Formally, given $N$ tissue types bearing CT numbers $f_{v_n}$, $n = 1, \ldots, N$, $N \ge 1$, such that $f_{v_m} < f_{v_{m+1}}$, $m = 1, \ldots, N-1$, then no tissue of CT number $f_{v_{n_1}}$ touches any tissue of CT number $f_{v_{n_2}}$ if $|n_1 - n_2| > 1$.
If these criteria are met, each tissue type can be assigned an opacity and a piecewise linear mapping can be constructed that converts voxel value $f_{v_n}$ to opacity $\alpha_{v_n}$, voxel value $f_{v_{n+1}}$ to opacity $\alpha_{v_{n+1}}$, and intermediate voxel values to intermediate opacities. Note that all voxels are typically mapped to some non-zero opacity and will thus contribute to the final image. This scheme ensures that thin regions of tissue will still appear in the image, even if only as faint wisps. Note also that violation of the adjacency criteria leads to voxels that cannot be unambiguously classified as belonging to one region boundary or another and hence cannot be rendered correctly using this method.
The superimposition of multiple semi-transparent surfaces such as skin and bone can substan-
tially enhance the comprehension of CT data. In order to obtain such effects using volume rendering,
we would like to suppress the opacity of tissue interiors while enhancing the opacity of their bounding
surfaces. We implement this by scaling the opacities computed above by the magnitude of the local
gradient vector.
Combining these two operations, we obtain a set of expressions

$$\alpha(\mathbf{x}_i) = |\nabla f(\mathbf{x}_i)| \begin{cases} \alpha_{v_{n+1}} \dfrac{f(\mathbf{x}_i) - f_{v_n}}{f_{v_{n+1}} - f_{v_n}} + \alpha_{v_n} \dfrac{f_{v_{n+1}} - f(\mathbf{x}_i)}{f_{v_{n+1}} - f_{v_n}} & \text{if } f_{v_n} \le f(\mathbf{x}_i) \le f_{v_{n+1}} \\[8pt] 0 & \text{otherwise} \end{cases} \tag{4}$$

for $n = 1, \ldots, N-1$, $N \ge 1$. The gradient vector is approximated using the operator given in section 3. A graph of $\alpha(\mathbf{x}_i)$ as a function of $f(\mathbf{x}_i)$ and $|\nabla f(\mathbf{x}_i)|$ for three tissue types A, B, and C, having typical values of $f_{v_A}$, $f_{v_B}$, $f_{v_C}$, $\alpha_{v_A}$, $\alpha_{v_B}$, and $\alpha_{v_C}$ is shown in figure 4.
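Equation (4) admits a similarly short transcription (Python; a sketch under the adjacency assumptions above). Note that the gradient-magnitude scaling can push the result past 1; whether and how it is clamped is an assumption the paper leaves open.

```python
def boundary_opacity(f_val, grad_mag, f_vs, alpha_vs):
    """Opacity of one voxel for region boundary surfaces, equation (4).
    `f_vs` holds the tissue CT numbers f_v1 < ... < f_vN and `alpha_vs`
    their assigned opacities.  Scaling by grad_mag suppresses tissue
    interiors and enhances bounding surfaces."""
    for n in range(len(f_vs) - 1):
        if f_vs[n] <= f_val <= f_vs[n + 1]:
            t = (f_val - f_vs[n]) / (f_vs[n + 1] - f_vs[n])
            # Piecewise linear interpolation between adjacent opacities.
            return grad_mag * (alpha_vs[n + 1] * t + alpha_vs[n] * (1.0 - t))
    return 0.0
```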

5. Discussion

5.1. Computational complexity

One of the strengths of the rendering method presented in this paper is its modularity. By stor-
ing intermediate results at various places in the pipeline, the cost of generating a new image after a
change in shading or classification parameters is minimized. Let us consider some typical cases.
Given acquired value $f(\mathbf{x}_i)$ and gradient magnitude $|\nabla f(\mathbf{x}_i)|$, the mapping to opacity $\alpha(\mathbf{x}_i)$ can be implemented with one lookup table reference. This implies that if we store gradient magnitudes for all voxels, computation of new opacities following a change in classification parameters entails only generation of a new lookup table followed by one table reference per voxel.
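One way such a table might be built (Python with NumPy; the bin resolutions are assumptions, as the paper does not specify them) is to quantize voxel value and gradient magnitude and evaluate any classification function, such as the isovalue_opacity sketch above, once per bin:

```python
import numpy as np

def build_opacity_table(classify, f_bins, g_bins):
    """Precompute opacity over quantized (value, gradient-magnitude) pairs
    so a classification change costs one table rebuild plus one table
    reference per voxel.  `classify(f, g)` is any opacity function."""
    return np.array([[classify(f, g) for g in g_bins] for f in f_bins])

# Hypothetical usage: bind classification parameters, then index the
# table by each voxel's quantized value and gradient magnitude.
# table = build_opacity_table(
#     lambda f, g: isovalue_opacity(f, g, f_v=120.0, alpha_v=0.8, r=2.0),
#     f_bins=np.arange(256), g_bins=np.linspace(0.0, 64.0, 128))
# alpha = table[f_index, g_index]
```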
The cost of computing new colors $c_\lambda(\mathbf{x}_i)$ following a change in observer direction $\mathbf{V}$, light source direction $\mathbf{L}$, or other shading parameter is more substantial. Effective rotation sequences can be generated, however, using a single set of colors. The visual manifestation of fixing the shading is that light sources appear to travel around with the data as it rotates and highlights are incorrect. Since we are visualizing imaginary or invisible phenomena anyway, observers are seldom troubled by this effect.

The most efficient way to produce a rotation sequence is then to hold both colors and opacities constant and alter only the direction in which rays are cast. If we assume a square image $n$ pixels wide and use orthographic projection, in which case sample coordinates can be efficiently calculated using differencing, the combined cost of ray tracing, re-sampling, and compositing to compute $n^2$ pixels is $3Kn^2$ additions, $2Kn^2$ tri-linear interpolations, and $Kn^2$ linear interpolations, where $K$ is the number of sample locations along each ray.

5.2. Image quality

Although the notation used in equation (1) has been borrowed from the literature of image com-
positing [16], the analogy is not exact, and the differences are fundamental. Volume data consists of
samples taken from a bandlimited 3-D scene, whereas the data acquired from an image digitizer con-
sists of samples taken from a bandlimited 2-D projection of a 3-D scene. Unless we reconstruct the 3-
D scene that gave rise to our volume data, we cannot compute an accurate projection of it. Volume
rendering performs no such reconstruction. Image quality is therefore limited by the number of
viewing rays. In the current implementation, we cast one ray per pixel. Such point sampling would
normally produce strong aliasing, but, by using non-binary classification decisions, we carry much of
the bandlimiting inherent in the acquired data over into the image, substantially reducing aliasing
artifacts. Stated another way, we are depending on 3-D bandlimiting to avoid aliasing in 2-D projec-
tions.
Within these limitations, there are two ways to improve image quality: blurring and super-sampling. If the array of acquired values is blurred slightly during data preparation, the oversharp surface silhouettes occasionally exhibited by volume renderings are softened. Alternatively, we can apply blurring to the opacities generated by the classification procedure but leave the shading untouched. This has the effect of softening silhouettes without adversely affecting the crispness of surface detail.
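As a sketch of this second alternative (Python with SciPy; the Gaussian filter and its width are assumptions, since the paper does not name a particular blurring kernel):

```python
from scipy.ndimage import gaussian_filter

def soften_silhouettes(alphas, sigma=0.7):
    """Blur only the classified opacity array, leaving the shaded colors
    untouched, so silhouettes soften without degrading surface detail."""
    return gaussian_filter(alphas, sigma=sigma)
```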
The decision to reduce aliasing at the expense of resolution arises from two conflicting goals:
generating artifact-free images and keeping rendering costs low. In practice, the slight loss in image
sharpness might not be disadvantageous. Indeed, it is not clear that the accuracy afforded by more
expensive visibility calculations is useful, at least for the types of data considered in this study. Blurry
silhouettes have less visual impact, but they reflect the true imprecision in our knowledge of surface
locations.
An alternative means for improving image quality is super-sampling. The basic idea is to inter-
polate additional samples between the acquired ones prior to compositing. If the interpolation method
is a good one, the accuracy of the visibility calculations is improved, reducing some kinds of aliasing.
Another option is to apply this interpolation during data preparation. Although this alternative sub-
stantially increases computational expense throughout the remainder of the pipeline, it improves the
accuracy of our shading and classification calculations as well as our visibility.

6. Implementation and results

The dataset used in the molecular graphics study is a 113 x 113 x 113 voxel portion of an elec-
tron density map for the protein Cytochrome B5. Figure 5 shows four slices spaced 10 voxels apart in
this dataset. Each whitish cloud represents a single atom. Using the shading and classification calcula-
tions described in sections 3 and 4.1, colors and opacities were computed for each voxel in the
expanded dataset. These calculations required 5 minutes on a SUN 4/280 having 32MB of main
memory. Ray tracing and compositing were performed as described in section 2 and took 40 minutes,
yielding the image in figure 6.
The dataset used in the medical imaging study is a CT study of a cadaver and was acquired as
113 slices of 256 x 256 samples each. Using the shading and classification calculations described in
sections 3 and 4.2, two sets of colors and opacities were computed, one showing the air-skin interface
and a second showing the tissue-bone interface. The computation of each set required 20 minutes.
Using the compositing algorithm described in section 2, two views were then computed from each set
of colors and opacities, producing four images in all, as shown in figure 7. The computation of each
view required an additional 20 minutes. The horizontal bands through the patient’s teeth in all of these
images are artifacts due to scattering of X-rays from dental fillings and are present in the acquired data.
The bands across her forehead and under her chin in the air-skin images are gauze bandages used to
immobilize her head during scanning. Her skin and nose cartilage are rendered semi-transparently
over the bone surface in the tissue-bone images.
Figure 8 was generated by combining halves from each of the two sets of colors and opacities
already computed for figure 7. Heightened transparency of the temporal bone and the bones surround-
ing the maxillary sinuses - more evident in moving sequences than in a static view - is due to general-
ized osteoporosis. It is worth noting that rendering techniques employing binary classification deci-
sions would likely display holes here instead of thin, wispy surfaces.
The dataset used in figures 9 and 10 is of the same cadaver, but was acquired as 113 slices of
512 x 512 samples each. Figure 9 was generated using the same procedure as for figure 7, but casting
four rays per slice in the vertical direction in order to correct for the aspect ratio of the dataset. Figure
10 was generated by expanding the dataset to 452 slices using a cubic B-spline in the vertical direc-
tion, then generating an image from the larger dataset by casting one ray per slice. As expected, more
detail is apparent in figure 10 than figure 9.

7. Conclusions

Volume rendering has been shown to be an effective modality for the display of surfaces from
sampled scalar functions of three spatial dimensions. As demonstrated by the figures, it can generate
images exhibiting resolution approximately equivalent to, yet fewer interpretation errors than, techniques
relying on geometric primitives.
Despite its advantages, volume rendering has several problems. The omission of an intermedi-
ate geometric representation makes selection of appropriate shading parameters critical to the effec-
tiveness of the visualization. Slight changes in opacity ramps or interpolation methods radically alter
the features that are seen as well as the overall quality of the image. For example, the thickness of the
transition region surrounding the isovalue contour surfaces described in section 4.1 stays constant only
if the local gradient magnitude stays constant within a radius of r voxels around each point on the sur-
face. The time and ensemble averaging inherent in X-ray crystallography usually yields suitable data,
but there are considerable variations among datasets. Algorithms are needed that automatically select
an optimum value for r based on the characteristics of a particular dataset.
Volume rendering is also very sensitive to artifacts in the acquisition process. For example, CT
scanners generally have anisotropic spatial sensitivity. This problem manifests itself as striping in
images. With live subjects, patient motion is also a serious problem. Since shading calculations are
strongly dependent on the orientation of the local gradient, slight mis-alignments between adjacent
slices produce strong striping.
An issue related specifically to region boundary surfaces is the rendering of datasets not meeting
the adjacency criteria described in section 4.2. This includes most internal soft tissue organs in CT
data. One simple solution would be for users to interactively clip or carve away unwanted tissues, thus
isolating subsets of the acquired data that meet the adjacency criteria. Since the user or algorithm is
not called upon to define geometry, but merely to isolate regions of interest, this approach promises to
be easier and to produce better images than techniques involving surface fitting. A more sophisticated
approach would be to combine volume rendering with high-level object definition methods in an
interactive setting. Initial visualizations, made without the benefit of object definition, would be used
to guide scene analysis and segmentation algorithms, which would in turn be used to isolate regions of
interest, producing a better visualization. If the output of such segmentation algorithms included
confidence levels or probabilities, they could be mapped to opacity and thus modulate the appearance
of the image.
Although this paper focuses on display of surfaces, the same pipeline can be easily modified to
render interstitial volumes as semi-transparent gels. Color and texture can be added to represent such
variables as gradient magnitude or anatomical tissue type. Visualizations combining sampled and
geometric data also hold much promise. For example, it might be useful to superimpose ball-and-stick
molecular models onto electron density maps or medical prostheses onto CT scans. To obtain correct
visibility, a true 3-D merge of the sampled and geometric data must be performed. One possible solu-
tion is to scan-convert the geometry directly into the sampled array and render the ensemble. Alterna-
tively, classical ray tracing of the geometry can be incorporated directly into the volume rendering
pipeline. Another useful tool would be the ability to perform a true 3-D merge of two or more visuali-
zations, allowing, for example, the superimposition of radiation treatment planning isodose surfaces
over CT data.
The prospects for rapid interactive manipulation of volume data are encouraging. By pre-
computing colors and opacities and storing them in intermediate 3-D datasets, we simplify the image
generation problem to one of geometrically transforming two values per voxel and compositing the
results. One promising technique for speeding up these transformations is to combine a 3-pass version
of 2-pass texture mapping [17] with compositing. By re-sampling separately in each of three orthogo-
nal directions, computational expense is reduced. Given a suitably interpolated sample array, it might
be possible to omit the third pass entirely, compositing transformed 2-D slices together to form an
image. This further suggests that hardware implementations might be feasible. A recent survey of
architectures for rendering voxel data is given by Kaufman [18]. A suitably interconnected 2-D array
of processors with sufficient backing storage might be capable of producing visualizations of volume
datasets in real-time or near real-time.

Acknowledgements

The author wishes to thank Profs. Frederick P. Brooks Jr., Henry Fuchs, Steven M. Pizer, and
Turner Whitted for their encouragement on this paper, and John M. Gauch, Andrew S. Glassner, Mark
Harris, Lawrence M. Lifshitz, Andrew Skinner and Lee Westover for enlightening discussions and
advice. The comments of Nelson Max on an early version of this paper were also helpful. The elec-
tron density map used in this study was obtained from Jane and Dave Richardson of Duke University,
and the CT scan was provided by the Radiation Oncology Department at North Carolina Memorial
Hospital. This work was supported by NIH grant RR02170 and ONR grant N00014-86-K-0680.

References

[1] Herman, G.T. and Liu, H.K., ‘‘Three-Dimensional Display of Human Organs from Computer
Tomograms,’’ Computer Graphics and Image Processing, Vol. 9, No. 1, January, 1979, pp. 1-
21.

[2] Hoehne, K.H. and Bernstein, R., ‘‘Shading 3D-Images from CT Using Gray-Level Gradients,’’
IEEE Transactions on Medical Imaging, Vol. MI-5, No. 1, March, 1986, pp. 45-47.

[3] Schlusselberg, Daniel S. and Smith, Wade K., ‘‘Three-Dimensional Display of Medical Image
Volumes,’’ Proceedings of the 7th Annual Conference of the NCGA, Anaheim, CA, May, 1986,
Vol. III, pp. 114-123.

[4] Goldwasser, Samuel, ‘‘Rapid Techniques for the Display and Manipulation of 3-D Biomedical
Data,’’ Tutorial presented at 7th Annual Conference of the NCGA, Anaheim, CA, May, 1986.

[5] Trousset, Yves and Schmitt, Francis, ‘‘Active-Ray Tracing for 3D Medical Imaging,’’ EURO-
GRAPHICS ’87 conference proceedings, pp. 139-149.

[6] Pizer, S.M., Fuchs, H., Mosher, C., Lifshitz, L., Abram, G.D., Ramanathan, S., Whitney, B.T.,
Rosenman, J.G., Staab, E.V., Chaney, E.L. and Sherouse, G., ‘‘3-D Shaded Graphics in Radioth-
erapy and Diagnostic Imaging,’’ NCGA ’86 conference proceedings, Anaheim, CA, May, 1986,
pp. 107-113.

[7] Lorensen, William E. and Cline, Harvey E., ‘‘Marching Cubes: A High Resolution 3D Surface
Construction Algorithm,’’ Computer Graphics, Vol. 21, No. 4, July, 1987, pp. 163-169.

[8] Williams, Thomas Victor, A Man-Machine Interface for Interpreting Electron Density Maps,
PhD thesis, University of North Carolina, Chapel Hill, NC, 1982.

[9] Purvis, George D. and Culberson, Chris, ‘‘On the Graphical Display of Molecular Electrostatic
Force-Fields and Gradients of the Electron Density,’’ Journal of Molecular Graphics, Vol. 4,
No. 2, June, 1986, pp. 89-92.

[10] Smith, Alvy Ray, ‘‘Volume Graphics and Volume Visualization: A Tutorial,’’ Technical Memo
176, PIXAR Inc., San Rafael, California, May, 1987.

[11] Drebin, Robert A., Verbal presentation in Computer Graphics and the Sciences tutorial at SIG-
GRAPH ’87 conference.

[12] Levoy, Marc and Whitted, Turner, ‘‘The Use of Points as a Display Primitive,’’ Technical Report
85-022, Computer Science Department, University of North Carolina at Chapel Hill, January,
1985.

[13] Levoy, Marc, ‘‘Rendering of Surfaces from Volumetric Data,’’ Technical Report 87-016, Com-
puter Science Department, University of North Carolina at Chapel Hill, June, 1987.

[14] Levoy, Marc, ‘‘Direct Visualization of Surfaces from Computed Tomography Data,’’ SPIE Medi-
cal Imaging II conference proceedings, Newport Beach, CA, February, 1988 (to appear).

[15] Bui-Tuong, Phong, ‘‘Illumination for Computer-Generated Pictures,’’ Communications of the
ACM, Vol. 18, No. 6, June, 1975, pp. 311-317.

[16] Porter, Thomas and Duff, Tom, ‘‘Compositing Digital Images,’’ Computer Graphics, Vol. 18,
No. 3, July, 1984, pp. 253-259.

[17] Catmull, Edwin and Smith, Alvy Ray, ‘‘3-D Transformations of Images in Scanline Order,’’
Computer Graphics, Vol. 14, No. 3, July, 1980, pp. 279-285.

[18] Kaufman, Arie, ‘‘Voxel-Based Architectures for Three-Dimensional Graphics,’’ Proceedings of
the IFIP 10th World Computer Congress, Dublin, Ireland, September, 1986, pp. 361-366.
