
Retrospective Evaluation of PET-MRI Registration Algorithms

Zuyao Y. Shan,1 Sara J. Mateja,2 Wilburn E. Reddick,1 John O. Glass,1 and Barry L. Shulkin3

The purpose of this study is to evaluate the accuracy of registration of positron emission tomography (PET) head images to an MRI-based brain atlas. The [18F]fluoro-2-deoxyglucose PET images were normalized to the MRI-based brain atlas using nine registration algorithms, including objective functions of ratio image uniformity (RIU), normalized mutual information (NMI), and normalized cross correlation (CC) and transformation models of rigid-body, linear, affine, and nonlinear transformations. The accuracy of normalization was evaluated by visual inspection and quantified by the gray matter (GM) concordance between normalized PET images and the brain atlas. The linear and affine registrations based on the RIU provided the best GM concordance (average similarity index of 0.71 for both). We also observed that the GM concordances of linear and affine registration were higher than those of the rigid and nonlinear registrations among the methods evaluated.

KEY WORDS: Normalization, PET, MR, brain, tissue concordance

1 From the Division of Translational Imaging Research, Department of Radiological Sciences, MS 212, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN 38105, USA.
2 From the Eckerd College, St. Petersburg, FL, USA.
3 From the Division of Nuclear Medicine, Department of Radiological Sciences, St. Jude Children's Research Hospital, Memphis, TN, USA.
Part of the study has been presented at the ISMRM 17th annual meeting.
Correspondence to: Zuyao Y. Shan, Division of Translational Imaging Research, Department of Radiological Sciences, MS 212, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN 38105, USA; tel: +1-901-4952673; fax: +1-901-4955706; e-mail: [email protected]
Copyright © 2010 by Society for Imaging Informatics in Medicine
Online publication 1 May 2010
doi: 10.1007/s10278-010-9300-y
Journal of Digital Imaging, Vol 24, No 3 (June), 2011: pp 485–493

BACKGROUND

Positron emission tomography (PET) has matured in recent years as a functional imaging modality that provides insight into cell metabolism in health and disease.1,2 [18F]Fluoro-2-deoxyglucose (FDG) PET images deliver quantitative data on human brain metabolism.3 Unfortunately, FDG PET data contain little anatomic information. In contrast, magnetic resonance (MR) images provide details of anatomic structure. Therefore, combining PET and MR provides important information on the structure–function relationship and permits precise anatomically based definition of a region of interest.1 The fusion of PET and MR images can be achieved by using hardware or software, that is, a dedicated system acquiring PET and MR images simultaneously or a computational algorithm fusing PET and MR images that are collected separately. Although the hardware solution provides near-perfect image registration, very few such systems are available in current clinical settings due to cost and technology limitations. In addition, with the recent development of multimodality and population-based atlases,4 it will be of great benefit to normalize PET information with other information such as cytoarchitectonic probability maps and molecular architectonic maps. The population-averaged standard space based on MR images is generated to allow spatial normalization of multidimensional data; for example, ICBM452 is an averaged brain atlas based on MR images of 452 healthy subjects, and cytoarchitectonic probability maps are currently being added to it (www.loni.ucla.edu/ICBM/). Therefore, registration plays an important role in PET studies.
A computational registration algorithm is typically made up of four components: an objective function, a transformation model, an optimization process, and an interpolation method. The objective function defines the quantitative measure of the spatial agreement between two images. Objective functions establish correspondence between images either extrinsically (e.g., with fiducial markers, subject to fiducial localization errors) or intrinsically. The intrinsic correspondences include feature-based methods and intensity-based methods. The representative feature-based objective functions are "head-and-hat",5 the iterative closest point algorithm,6 and, recently, wavelet-based attribute vectors.7 The representative intensity-based objective functions include cross correlation (CC),8 squared intensity difference,9 ratio image uniformity (RIU),10,11 mutual information,12 and Kullback–Leibler distance.13 The transformation model defines the degrees of freedom (DOF) of the moving image, including rigid-body registration with 6 DOF (3 translations and 3 rotations), linear registration with 9 DOF (3 scalings plus the 6 DOF of rigid-body registration), affine registration with 12 DOF (3 shearings plus the 9 DOF of linear registration), and nonlinear registration with more than 12 DOF (dependent on the transformation model used). Many nonlinear transformation models have been developed. A detailed review of registration objective functions and transformation models can be found in a previous report.14 The optimization process is a computer search for the extremum of the objective function. The interpolation method is used to resample the source images to the desired image resolution. In this study, we evaluated image fusion algorithms using five well-known registration toolkits. These toolkits use different objective functions and transformation models, and several objective functions and transformation models are implemented in each toolkit. We believe that objective functions and transformation models are the key factors that affect the accuracy of registration. Therefore, we evaluated registration algorithms in terms of objective function/transformation model rather than registration toolkit.
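To make the degrees-of-freedom hierarchy described above concrete, the sketch below (a Python/NumPy illustration written for this review, not code from any of the toolkits evaluated here; the function and parameter names are hypothetical) composes a 4 × 4 homogeneous transformation matrix from translation, rotation, scaling, and shearing parameters, so that the rigid (6 DOF), linear (9 DOF), and affine (12 DOF) models differ only in which parameters are allowed to vary.

    import numpy as np

    def rotation_matrix(rx, ry, rz):
        # Rotations about the x, y, and z axes (radians), composed as Rz @ Ry @ Rx.
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def affine_matrix(translation, rotation, scale=(1, 1, 1), shear=(0, 0, 0)):
        # 6 DOF (rigid) = translation + rotation; 9 DOF (linear) adds scale;
        # 12 DOF (affine) adds shear. Returns a 4x4 homogeneous transform.
        S = np.diag(scale)
        H = np.array([[1.0, shear[0], shear[1]],
                      [0.0, 1.0,      shear[2]],
                      [0.0, 0.0,      1.0]])
        A = rotation_matrix(*rotation) @ S @ H
        T = np.eye(4)
        T[:3, :3] = A
        T[:3, 3] = translation
        return T

    # Rigid-body example (6 DOF): scale and shear stay at identity.
    rigid = affine_matrix(translation=(5.0, -2.0, 1.0), rotation=(0.0, 0.02, 0.1))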
Although almost every registration algorithm is developed with experimental validation results, each validation experiment unfortunately has a unique data set and design. Therefore, independent quantitative and qualitative assessments of registration fidelity are essential for multimodality image fusion.14 Previous validation studies can be divided into four groups based on the methodology of selection/generation of the ground truth: fiducial markers, phantoms, simulated images, or features extracted from images. A brief introduction is provided below with examples from evaluations of PET-MR registration; registration evaluations that did not involve PET-MR registration are not included because they are beyond the scope of this study. The first approach to generating the ground truth is to introduce extrinsic markers, either by neurosurgery or by attaching them to the skin surface. For example, the "Retrospective Registration Evaluation Project" provided a common evaluation framework based on gold standard PET, computed tomography (CT), and MR images of nine patients undergoing neurosurgery (four fiducial markers on each patient).15 The disadvantage of validation based on fiducial markers is that markers are spatially sparse and far from the interior brain structures and, thus, do not provide local resolution and accuracy sufficient for validation.14 An alternative ground truth method is to use a physical phantom with fiducial markers. For example, eight registration algorithms were evaluated using a Hoffman brain phantom filled with 99mTc (single-photon emission tomography [SPET]) and a phantom filled with water doped with Gd-DTPA.16 The physical phantom images share the same limitation of spatially sparse markers (four markers were used in a previous study16). The third approach to generating ground truth is construction of simulated images with known transformations. For example, two registration algorithms were compared using 192 PET images that were simulated from six MR images.17 Validations using features extracted from the images are often used for inter-subject registration of MR images. For example, manually segmented brain structures have been used to evaluate different transformation models in registration of MR images in previous reports.18–20 A selective-wavelet reconstruction technique using a frequency-adaptive wavelet space threshold was also used to compare transformation models for inter-subject MR PET registrations.21

The purpose of this study was to evaluate the agreement of spatially normalized PET images with the MR-based brain atlas using well-accepted registration algorithms. The rationale for this study was based on the following considerations. First, normalization of PET images to the brain atlas will play an important role in interpreting PET results with the development of population-based multimodal brain atlases. However, previous validations were focused on multi-modality registration of images from the same patient or simulated images from the same subject. Secondly, most of the previous comparison studies were focused on rigid-body registration, although it is reasonable to believe that nonlinear registration may not improve the agreement of normalization, given the spatial resolution of PET images. We do expect affine and linear registration to provide better agreement than the previously compared rigid-body registration. In this study, nine registration algorithms with RIU,10,11,22 normalized mutual information (NMI),12 and normalized CC objective functions and rigid-body, linear, affine, and nonlinear transformation models were used to normalize head sections of whole-body PET images of 25 patients to the population-based brain atlas ICBM452. These registration algorithms are implemented in the registration toolkits Automated Image Registration (AIR 5.0);10,11,22,23 Medical Image Processing, Analysis and Visualization (MIPAV) (mipav.cit.nih.gov); HERMES software (Hermes Medical Solutions, Sweden); and VTK CISG. The gray matter (GM) on normalized PET images was segmented using the fuzzy C-means algorithm24 implemented in MIPAV. The tissue concordance between the GM on the normalized PET images and that provided with the atlas was calculated to evaluate the agreement of the spatial normalization.

METHODS

Data

The head sections of whole-body FDG PET images of 25 patients treated for malignant lymphoma or Hodgkin's disease were used for the evaluation. The patient group was arbitrarily selected and consisted of 11 girls and 14 boys with a median age of 6.92 years (range, 5.08–9.0 years). The whole-body PET images were acquired for diagnostic purposes on a GE Discovery LS PET-CT scanner with a spatial resolution of 3.9 × 3.9 × 4.25 mm (Fig. 1).

The brain atlas, ICBM452 (www.loni.ucla.edu/Atlases/), is a population-based brain atlas averaged from T1-weighted MR images of 452 healthy young adult brains. The space of the atlas is not based on any single subject but is an average space constructed from the average position, orientation, scale, and shear of all the individual subjects (Fig. 1).

Retrospective use of the above data for this study was approved by the institutional review board at our institution.

Fig 1. An illustration of a PET image and the brain atlas used for registration evaluation. Transverse slices near the level of the basal ganglia and generally showing the putamen and lateral ventricle were selected for illustration. The left image and the right image are slices from the PET and atlas data, respectively.

Registration Algorithms

Nine registration methods with RIU, NMI, or normalized CC objective functions were evaluated using rigid, linear, affine, and nonlinear transformation models. All registration algorithms were implemented as true 3D registrations. The registration methods were selected on the basis of the authors' knowledge of and access to them. This study focused on objective functions and transformation models. Therefore, we used the optimization search method provided by each registration toolkit, and trilinear interpolation was used for all the registration tests.
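As a reference point for the resampling step mentioned above, the following is a minimal NumPy sketch of trilinear interpolation at a single continuous voxel coordinate (an illustration only; each toolkit uses its own optimized resampler, and boundary handling is omitted here).

    import numpy as np

    def trilinear_sample(volume, x, y, z):
        # Interpolate a 3D array (indexed [x, y, z]) at a continuous coordinate
        # by blending the eight surrounding voxels with trilinear weights.
        x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
        x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
        fx, fy, fz = x - x0, y - y0, z - z0
        value = 0.0
        for ix, wx in ((x0, 1 - fx), (x1, fx)):
            for iy, wy in ((y0, 1 - fy), (y1, fy)):
                for iz, wz in ((z0, 1 - fz), (z1, fz)):
                    value += wx * wy * wz * volume[ix, iy, iz]
        return value

    # Example: sample a random volume between grid points.
    vol = np.random.rand(64, 64, 32)
    sampled = trilinear_sample(vol, 10.3, 20.7, 5.5)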
Four registration algorithms based on the RIU objective function with rigid, linear, affine, or nonlinear transformation models were evaluated using AIR 5.0.10,11,22,23 The RIU objective function is a mean-normalized standard deviation of the ratio of the source image intensity to the target image intensity.22,23 The rigid, linear, and affine models are the standard transformation models as described in "Background". A third-order polynomial with 60 parameters was used for nonlinear registration, as recommended by the AIR developer (bishopw.loni.ucla.edu/AIR3/howtosubjects.html). The optimization procedure used in AIR is an iterative, univariate Newton–Raphson search.22,23
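A simplified sketch of the RIU cost as described above is given below: the voxelwise ratio of the two images is formed over an overlap mask, and its standard deviation is normalized by its mean. This is only an illustration of the idea, not the AIR implementation, which additionally partitions intensities and includes other refinements; the mask rule is an assumption.

    import numpy as np

    def ratio_image_uniformity(source, target, mask=None, eps=1e-6):
        # RIU cost: mean-normalized standard deviation of the voxelwise ratio
        # between the resampled source image and the target image.
        # Lower values indicate a more uniform ratio image, i.e. better alignment.
        if mask is None:
            mask = (source > 0) & (target > 0)
        ratio = source[mask] / (target[mask] + eps)
        return ratio.std() / (ratio.mean() + eps)

    # Example with random volumes standing in for resampled PET and atlas data.
    pet = np.random.rand(64, 64, 32) + 0.5
    atlas = np.random.rand(64, 64, 32) + 0.5
    cost = ratio_image_uniformity(pet, atlas)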
Four registration algorithms based on the NMI objective function with rigid, linear, affine, or nonlinear transformation models were evaluated. The NMI objective function is a measure of how well one image explains the other and is calculated as the sum of the entropies of the source and target intensity distributions divided by their joint entropy.12,25 We used the NMI-based rigid-body registration implemented in MIPAV (mipav.cit.nih.gov) with the Powell optimization search. The NMI-based linear registration used was implemented in the commercially available HERMES software (Hermes Medical Solutions) with the Powell optimization search. The affine registration used was implemented in SPM2 (www.fil.ion.ucl.ac.uk/spm/software/spm2/) with the Powell optimization search. The free-form deformation (FFD) based on NMI implemented in the VTK CISG registration toolkit26 was used as the nonlinear registration.
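A minimal joint-histogram sketch of the NMI measure described above (the Studholme form, NMI = (H(source) + H(target)) / H(source, target)) is shown below; the bin count and array names are illustrative assumptions, not the settings used by MIPAV, HERMES, SPM2, or VTK CISG.

    import numpy as np

    def normalized_mutual_information(source, target, bins=64):
        # Estimate marginal and joint entropies from a joint intensity histogram:
        # NMI = (H(source) + H(target)) / H(source, target).
        joint, _, _ = np.histogram2d(source.ravel(), target.ravel(), bins=bins)
        p_joint = joint / joint.sum()
        p_src = p_joint.sum(axis=1)
        p_tgt = p_joint.sum(axis=0)

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        return (entropy(p_src) + entropy(p_tgt)) / entropy(p_joint)

    # Higher NMI indicates better statistical correspondence between the images.
    pet = np.random.rand(64, 64, 32)
    atlas = np.random.rand(64, 64, 32)
    score = normalized_mutual_information(pet, atlas)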
One registration algorithm with the normalized CC objective function and the affine transformation model implemented in MIPAV was tested. The normalized CC is a measure of the correlation between the source and target images under the assumption of a linear intensity relationship. It is calculated as the sum of the mean-subtracted intensity products of the target and source images divided by the product of their standard deviations.27 The Powell optimization search was used for this registration algorithm.
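The sketch below illustrates the normalized CC measure just described (mean-subtracted intensity products normalized by the standard deviations); it is a generic formulation for illustration, not MIPAV's implementation.

    import numpy as np

    def normalized_cross_correlation(source, target):
        # Subtract the means, then divide the covariance term by both standard deviations.
        s = source.ravel() - source.mean()
        t = target.ravel() - target.mean()
        return np.sum(s * t) / (np.sqrt(np.sum(s * s)) * np.sqrt(np.sum(t * t)))

    # Values close to 1 indicate a strong linear intensity relationship.
    pet = np.random.rand(64, 64, 32)
    atlas = np.random.rand(64, 64, 32)
    cc = normalized_cross_correlation(pet, atlas)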

Evaluation

The agreement of spatial normalization was evaluated qualitatively by visual inspection of normalized PET images overlaid on the brain atlas and quantitatively by calculation of the GM tissue concordance between the normalized PET images and the brain atlas. The visual inspection included assessment of the agreement of brain surfaces, cerebella, brain stems, and boundaries of the corpus callosum and lateral ventricles between the normalized PET images and the brain atlas in orthogonal views.

The normalized PET images were segmented using fuzzy C-means24 implemented in MIPAV. The fuzzy C-means algorithm is an unsupervised segmentation method based on fuzzy set theory and generalized K-means algorithms. It iteratively minimizes the fuzzy-membership-weighted difference between voxel intensities and cluster centers,

J_m = \sum_{i=1}^{N} \sum_{j=1}^{C} m_{ij}^{m} (I(i) - c(j))^2    (1)

in which m_ij is the fuzzy membership function and m can be any number greater than 1 (m = 2 in MIPAV). The iteration stops when the difference between the membership functions of all voxels in two consecutive iterations is smaller than the tolerance. The detailed implementation of the algorithm is described in the MIPAV documentation (mipav.cit.nih.gov). The validity of the fuzzy C-means segmentation algorithm on PET was reported in previous studies.24,28
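A compact sketch of a fuzzy C-means iteration that minimizes Eq. 1 is given below (generic textbook update rules for the memberships and cluster centers with fuzzifier m = 2, as in MIPAV; this is not the MIPAV source code, and it operates on a flattened intensity array for simplicity).

    import numpy as np

    def fuzzy_c_means(intensities, n_classes=3, m=2.0, tol=1e-4, max_iter=100):
        # intensities: 1D array of voxel intensities; returns memberships and centers.
        x = intensities.astype(float)
        rng = np.random.default_rng(0)
        u = rng.random((x.size, n_classes))
        u /= u.sum(axis=1, keepdims=True)              # initial fuzzy memberships
        for _ in range(max_iter):
            um = u ** m
            c = (um.T @ x) / um.sum(axis=0)             # cluster-center update
            d = np.abs(x[:, None] - c[None, :]) + 1e-12
            u_new = 1.0 / (d ** (2.0 / (m - 1.0)))
            u_new /= u_new.sum(axis=1, keepdims=True)   # membership update
            if np.max(np.abs(u_new - u)) < tol:         # stop when memberships stabilize
                u = u_new
                break
            u = u_new
        return u, c

    # Example on synthetic intensities standing in for a normalized PET head image.
    memberships, centers = fuzzy_c_means(np.random.rand(5000), n_classes=3)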
The normalized PET images were segmented into three classes with a tolerance of 0.0001 for this study. The GM regions on the atlas were defined as voxels for which the GM probability (provided with the atlas) was greater than that of white matter (WM) and cerebrospinal fluid (CSF) on the tissue probability map and greater than 0.5. The tissue concordances of GM between the normalized PET images and the atlas were calculated as kappa indices:

\kappa(I_P, I_A) = 2 |I_P \cap I_A| / (|I_P| + |I_A|)    (2)

where I_P and I_A are the GM regions on the normalized PET images and the brain atlas, respectively.
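Eq. 2 is the Dice-style overlap of two binary masks. The sketch below computes it between an illustrative GM mask from the segmented PET image and an atlas GM mask built with the rule stated above (GM probability greater than the WM and CSF probabilities and greater than 0.5); all array names and the placeholder segmentation are assumptions for illustration only.

    import numpy as np

    def kappa_index(mask_a, mask_b):
        # Eq. 2: kappa = 2|A ∩ B| / (|A| + |B|), i.e. the Dice overlap of two masks.
        intersection = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

    # GM mask on the atlas: GM probability exceeds WM and CSF and is above 0.5.
    gm_prob = np.random.rand(64, 64, 32)    # stand-ins for the atlas probability maps
    wm_prob = np.random.rand(64, 64, 32)
    csf_prob = np.random.rand(64, 64, 32)
    atlas_gm = (gm_prob > wm_prob) & (gm_prob > csf_prob) & (gm_prob > 0.5)

    # GM mask on the normalized PET image, e.g. the class with the highest
    # fuzzy C-means membership among the three segmented classes.
    pet_gm = np.random.rand(64, 64, 32) > 0.5    # placeholder segmentation result

    concordance = kappa_index(pet_gm, atlas_gm)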
RESULTS

Figure 2 shows a typical normalized PET image overlaid on the atlas using RIU with the four transformation models.

Fig 2. An example of spatial normalization of a PET image to the MR brain atlas using RIU with various transformation models. The normalized PET image was overlaid onto the brain atlas with blue, green, yellow, and red representing increasing FDG uptake (intensities of PET images). Orthogonal views of the same image are displayed in each column. From left to right, the normalized PET images show rigid-body, linear, affine, and nonlinear transformation models.

Visual inspection of RIU-based transformations showed that the affine registrations provided better agreement of the brain surface and matched the low FDG uptake with the WM and ventricles better than the rigid-body registration did. Visual inspection also showed that the nonlinearly normalized PET images from 20 of 25 patients had distorted transformations; thus, we considered that these registrations failed (Fig. 3).

Fig 3. A PET image that was considered a failed normalization with distortion. From left to right, orthogonal (sagittal, coronal, and transverse) views are shown.

An example of the spatial normalization of PET images to the MR brain atlas using NMI with the four transformation models is shown in Figure 4. Visual inspection showed a similar degree of mismatch among all the transformation models (better brain surface agreement with less agreement of FDG uptake, WM, and CSF, or vice versa). However, no distorted normalization was observed in the nonlinear registrations.
Fig 4. Spatial normalization of PET to the brain atlas using NMI with various transformation models. The normalized PET image was overlaid onto the brain atlas with blue, green, yellow, and red representing increasing FDG uptake (intensities of PET images). Orthogonal views of the same image are displayed in each column. From left to right, the normalized PET images show rigid-body, linear, affine, and nonlinear transformation models.

A comparison of spatial normalization using various objective functions with the affine registration model is shown in Figure 5. Visual inspection showed that spatial normalizations using CC and RIU had better agreement of the brain surface than those using NMI and that normalization using RIU matched the low FDG uptake with CSF and WM better than that using NMI.

The GM concordances of the normalized PET images and the brain atlas are summarized in Table 1. The spatial normalization using RIU had the best agreement (averaged kappa index of 0.71) among the objective functions evaluated. Using RIU as the objective function, the linear and affine registrations had better GM agreement than rigid-body registration (most of the nonlinear registrations failed).

Table 1. GM Concordance and Robustness of Registration

Registration     Averaged GM concordance   Standard deviation of the concordance   Robustness (%)
RIU/rigid        0.60                      0.06                                    100
RIU/linear       0.71                      0.03                                    100
RIU/affine       0.71                      0.04                                    100
RIU/nonlinear    –                         –                                        20
NMI/rigid        0.59                      0.07                                    100
NMI/linear       0.59                      0.06                                    100
NMI/affine       0.52                      0.04                                     96
NMI/nonlinear    0.56                      0.11                                     92
CC/affine        0.64                      0.04                                    100

GM concordance was calculated as the kappa index between the normalized PET images and the brain atlas. Robustness was calculated as the percentage of registrations without obvious distortion over the total number of registrations. RIU, NMI, and CC represent the objective functions of RIU, NMI, and CC. "–" designates that the GM concordance was not evaluated.

DISCUSSION

The linear and affine registration methods using RIU had the best agreement among the registrations we evaluated. It is well known that registration performance is affected by the level of noise and the resolution of the image data. Therefore, we used PET data acquired during actual clinical care rather than data generated for research purposes or simulated from MR images. The whole-body PET scans of these patients were acquired for lymphoma or Hodgkin's disease without any sign of central nervous system insult. We used GM concordance as the quantitative measure of spatial normalization because the data were acquired without landmarks. Although previous evaluation studies focused on rigid-body registration, the results of the co-registration portion of our study were consistent with those of previous studies.15–17,29
Fig 5. Comparison of the spatial normalization of PET data to the brain atlas using various objective functions with affine registration. The normalized PET image was overlaid onto the brain atlas with blue, green, yellow, and red representing increasing FDG uptake (intensities of PET images). Orthogonal views of the same image are displayed in each column. From left to right, the images show the CC, RIU, and NMI objective functions.

The visual inspection was consistent with the measure of GM concordance, although visual inspection cannot produce a conclusive judgment in some cases; for example, there was no visually discernible difference in the comparison of spatial normalization using various objective functions with the affine registration model (Fig. 5). Visual inspection showed that the affine registration model had the best agreement; hence, we used it to evaluate the objective functions. The other objective functions, including count difference, shape difference, sign changes, variance, square root, 2D gradient, and 3D gradient, were compared with RIU in a previous study,16 and RIU yielded the smallest errors for SPET-MRI registration. Therefore, we did not repeat those comparisons in this study. We used GM concordance as the quantitative metric for evaluation because we believe a "perfect" registration will align the brain tissues. However, GM concordance could be affected by segmentation, inherent resolutions, and other factors. We believe that the effect of segmentation accuracy on the evaluation process is negligible because there are significant intensity differences (FDG uptakes) between the GM and the WM/CSF on PET images. The GM segmentation of the brain atlas is provided with the atlas, averaged from the individual brain images. Therefore, GM concordance can be used as an index for comparison between registration methods. We did not use the WM or CSF concordances because the intensities of WM and CSF are similar on PET images. Although we cannot ensure that the evaluated algorithms represent the state-of-the-art registration methodology, to the best of our knowledge, the software and algorithms evaluated in this study are widely used and well accepted in clinical and research studies.
We found that spatial normalization using RIU had the best GM concordance among the objective functions tested. Although both RIU and NMI measure the intensity correspondence between two images, RIU segments one image into partitions and maps the other image's voxel intensities into each partition, whereas NMI maps the two images' intensity distributions with the assumption that the GM intensity on the MR image can be mapped to the GM intensity on the PET image with a similar intensity distribution (not necessarily equivalence of voxel intensity). There is a great difference between GM and WM on FDG PET images, and variations in voxel intensity for each tissue are greater than those of other image modalities such as MR and CT. This feature enables RIU-based registration to more reliably find the global minimum because of the greater weighted sum of standard deviations, although the low resolution and partial volume effect make boundaries on PET images less sharp. We found that linear and affine registrations had better agreement than the rigid-body registration. This was expected for inter-subject registration because of inter-subject size variations. However, we were surprised to find that registrations with nonlinear transformation models did not yield agreement better than or equal to that of affine registration, considering that nonlinear registration was performed on the affinely normalized images. We think that the possible reason for this is that the limited information resulting from the low resolution of PET images cannot produce a "correct" transformation; that is, the distorted registration has a smaller objective function value although it does not demonstrate better "alignment" than the affine normalization.

There is no consensus on how good is good enough for registration. That determination depends on the purpose of the registration and the availability of other approaches. The best agreement we obtained in this study was a GM concordance of 0.71 by linear/affine registration based on RIU. We think that this is satisfactory for spatial normalization of PET data to a brain atlas, considering that the GM concordance is calculated on resampled PET and brain atlas images with a spatial resolution of 1 × 1 × 1 mm, whereas the spatial resolution of the original PET images was 3.9 × 3.9 × 4.25 mm. This is in agreement with the conclusions of previous validation studies.16,17,29,30 Furthermore, there are different brain templates available for various purposes of registration. For example, we believe that the MNI PET template is more appropriate if the purpose of registration is to normalize individual PET images into a common space. We used the ICBM452 brain atlas based on MR images because we would like to explore the relationship between FDG uptake and anatomic and cytoarchitectonic structures in the future. For the same reason, we did not normalize the PET data to a pediatric brain template.

There were several limitations to this study. First, whole-body PET scans were used rather than brain PET scans. It is to be expected that the whole-body PET images had higher noise levels and lower resolution than brain-specific images would have had. The whole-body PET data were used based on the availability of normal PET data (with the assumption that the head images of patients with malignant lymphoma or Hodgkin's disease would not be affected by their disease). Second, we tested objective functions and transformation models but not the other key factors, the optimization process and the interpolation method. Third, the time required for computation of spatial normalization was not evaluated. The second and third limitations arose because we used publicly available image-processing toolkits. We did not code algorithms to test the optimization process and interpolation method because these two factors are more standardized than the other two factors.

CONCLUSION

In summary, we found that either linear or affine registration using RIU as the objective function provides the best GM concordance among the registration methods we tested. The linear and affine registration models generally yield higher GM concordance than rigid and nonlinear models.
REFERENCES

1. Carson RE, Daube-Witherspoon ME, Herscovitch P: Quantitative Functional Brain Imaging With Positron Emission Tomography. Academic Press, San Diego, CA, 1998
2. Phelps ME: PET: Molecular Imaging and Its Biological Applications. Springer, New York, NY, 2004
3. Phelps ME, Huang SC, Hoffman EJ, Selin C, Sokoloff L, Kuhl DE: Tomographic measurement of local cerebral glucose metabolic rate in humans with (F-18)2-fluoro-2-deoxy-D-glucose: validation of method. Ann Neurol 6:371–388, 1979
4. Toga AW, Thompson PM, Mori S, Amunts K, Zilles K: Towards multimodal atlases of the human brain. Nat Rev Neurosci 7:952–966, 2006
5. Pelizzari CA, Chen GTY, Spelbring DR, Weichselbaum RR, Chen CT: Accurate 3-dimensional registration of CT, PET, and/or MR images of the brain. J Comput Assist Tomogr 13:20–26, 1989
6. Besl PJ, McKay ND: A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell 14:239–256, 1992
7. Xue Z, Shen DG, Davatzikos C: Correspondence detection using wavelet-based attribute vectors. Med Image Comput Comput Assist Interv—MICCAI 2003, Pt 2, 2879:762–770, 2003
8. Lemieux L, Kitchen ND, Hughes SW, Thomas DGT: Voxel-based localization in frame-based and frameless stereotaxy and its accuracy. Med Phys 21:1301–1310, 1994
9. Hajnal JV, Saeed N, Oatridge A, Williams EJ, Young IR, Bydder GM: Detection of subtle brain changes using subvoxel registration and subtraction of serial MR images. J Comput Assist Tomogr 19:677–691, 1995
10. Woods RP, Grafton ST, Watson JDG, Sicotte NL, Mazziotta JC: Automated image registration: II. Intersubject validation of linear and nonlinear models. J Comput Assist Tomogr 22:153–165, 1998
11. Woods RP, Grafton ST, Holmes CJ, Cherry SR, Mazziotta JC: Automated image registration: I. General methods and intrasubject, intramodality validation. J Comput Assist Tomogr 22:139–152, 1998
12. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P: Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging 16:187–198, 1997
13. Gan R, Wu J, Chung ACS, Yu SCH, Wells WM: Multiresolution image registration based on Kullback–Leibler distance. Med Image Comput Comput Assist Interv—MICCAI 2004, Pt 1, 3216:599–606, 2004
14. Gholipour A, Kehtarnavaz N, Briggs R, Devous M, Gopinath K: Brain functional localization: A survey of image registration techniques. IEEE Trans Med Imaging 26:427–451, 2007
15. West J, Fitzpatrick JM, Wang MY, Dawant BM, Maurer CR, Kessler RM, Maciunas RJ, Barillot C, Lemoine D, Collignon A, Maes F, Suetens P, Vandermeulen D, van den Elsen PA, Napel S, Sumanaweera TS, Harkness B, Hemler PF, Hill DLG, Hawkes DJ, Studholme C, Maintz JBA, Viergever MA, Malandain G, Pennec X, Noz ME, Maguire GQ, Pollack M, Pelizzari CA, Robb RA, Hanson D, Woods RP: Comparison and evaluation of retrospective intermodality brain image registration techniques. J Comput Assist Tomogr 21:554–566, 1997
16. Koole M, D'Asseler Y, Van Laere K, Van de Walle R, Van de Wiele C, Lemahieu I, Dierckx RA: MRI-SPET and SPET-SPET brain co-registration: Evaluation of the performance of eight different algorithms. Nucl Med Commun 20:659–669, 1999
17. Kiebel SJ, Ashburner J, Poline JB, Friston KJ: MRI and PET coregistration—A cross validation of statistical parametric mapping and automated image registration. Neuroimage 5:271–279, 1997
18. Noblet V, Heinrich C, Heitz F, Armspach JP: Retrospective evaluation of a topology preserving non-rigid registration method. Med Image Anal 10:366–384, 2006
19. Crum WR, Rueckert D, Jenkinson M, Kennedy D, Smith SM: A framework for detailed objective comparison of non-rigid registration algorithms in neuroimaging. Med Image Comput Comput Assist Interv—MICCAI 2004, Pt 1, 3216:679–686, 2004
20. Grachev ID, Berdichevsky D, Rauch SL, Heckers S, Kennedy DN, Caviness VS, Alpert NM: A method for assessing the accuracy of intersubject registration of the human brain using anatomic landmarks. Neuroimage 9:250–268, 1999
21. Dinov ID, Mega MS, Thompson PM, Woods RP, Sumners DL, Sowell EL, Toga AW: Quantitative comparison and analysis of brain image registration using frequency-adaptive wavelet shrinkage. IEEE Trans Inf Technol Biomed 6:73–85, 2002
22. Woods RP, Mazziotta JC, Cherry SR: MRI-PET registration with automated algorithm. J Comput Assist Tomogr 17:536–546, 1993
23. Woods RP, Cherry SR, Mazziotta JC: Rapid automated algorithm for aligning and reslicing PET images. J Comput Assist Tomogr 16:620–633, 1992
24. Pham DL, Prince JL: An adaptive fuzzy C-means algorithm for image segmentation in the presence of intensity inhomogeneities. Pattern Recogn Lett 20:57–68, 1999
25. Studholme C, Hill DLG, Hawkes DJ: An overlap invariant entropy measure of 3D medical image alignment. Pattern Recogn 32:71–86, 1999
26. Crum WR, Hartkens T, Hill DLG: Non-rigid image registration: Theory and practice. Br J Radiol 77:S140–S153, 2004
27. Hajnal JV, Hill DLG, Hawkes DJ: Medical Image Registration. CRC Press, New York, 2001
28. Hatt M, le Rest CC, Turzo A, Roux C, Visvikis D: A fuzzy locally adaptive Bayesian segmentation approach for volume determination in PET. IEEE Trans Med Imaging 28:881–893, 2009
29. Strother SC, Anderson JR, Xu XL, Liow JS, Bonar DC, Rottenberg DA: Quantitative comparisons of image registration techniques based on high-resolution MRI of the brain. J Comput Assist Tomogr 18:954–962, 1994
30. West J, Fitzpatrick JM, Wang MY, Dawant BM, Maurer CR, Kessler RM, Maciunas RJ: Retrospective intermodality registration techniques for images of the head: Surface-based versus volume-based. IEEE Trans Med Imaging 18:144–150, 1999
