NeuroConstruct: 3D Reconstruction
and Visualization of Neurites in Optical
Microscopy Brain Images
Parmida Ghahremani, Saeed Boorboor, Pooya Mirhosseini, Chetan Gudisagar, Mala Ananth,
David Talmage, Lorna W. Role, and Arie E. Kaufman, Fellow, IEEE
Abstract—We introduce NeuroConstruct, a novel end-to-end application for the segmentation, registration, and visualization of brain volumes imaged using wide-field microscopy (WFM). NeuroConstruct offers a Segmentation Toolbox with various annotation helper functions that aid experts in effectively and precisely annotating micrometer-resolution neurites. It also offers automatic neurite segmentation using convolutional neural networks (CNNs) trained on the Toolbox annotations, and soma segmentation using thresholding. To visualize neurites in a given volume, NeuroConstruct offers a hybrid rendering that combines iso-surface rendering of high-confidence classified neurites with real-time rendering of the raw volume using a 2D transfer function over voxel classification score versus voxel intensity value. For a complete reconstruction of the 3D neurites, we introduce a Registration Toolbox that provides automatic coarse-to-fine alignment of serially sectioned samples. Quantitative and qualitative analyses show that NeuroConstruct outperforms the state-of-the-art in all design aspects. NeuroConstruct was developed as a collaboration between computer scientists and neuroscientists, with an application to the study of cholinergic neurons, which are severely affected in Alzheimer's disease.
Index Terms—Wide-field microscopy, neuron morphology, segmentation, registration, hybrid volume rendering, CNN
Moreover, segmentation and tracing techniques are limited to classifying features captured within the designed algorithm or trained model. Thus, a complementary approach is required.

The primary goal of NeuroConstruct is the reconstruction of neuronal structures in WFM whole-brain specimens that enables the exploration of the nervous system. To achieve this goal, we address all of these challenges and present NeuroConstruct, a novel end-to-end application that reconstructs neuronal structures by segmenting, registering, and visualizing them in brain volumes.

To overcome the segmentation and visualization problem, NeuroConstruct offers a novel Segmentation Toolbox. It provides simultaneous 2D cross-sectional views and 3D volume rendering of image stacks, along with real-time user-drawn annotations. It also provides novel annotation functions to help experts annotate neurons in 3D brain images efficiently. We further implemented automatic neuron segmentation using a nested CNN that uses skip pathways to connect the encoder and decoder, computing feature maps and segmenting neurites from the extracted maps combined with image-processing techniques. CNNs have achieved breakthrough performance in various segmentation tasks. Their primary issue is requiring a vast amount of labeled data for training. Due to the high density of neurons in brain images, their manual annotation in 3D image stacks requires tremendous time and effort. We introduce a workflow to speed up ground-truth generation.

The robustness of deep-learning models dramatically depends on the accuracy and availability of sufficient training data. Biologists' workflows are subject to experimental variations, and their data has immense biological variability. Thus, the infeasibility of capturing sufficient training data covering all neuronal variations can result in the model failing to segment a neurite for which it was not trained. Therefore, we devise a hybrid approach to visualize the extracted neurites along with possible unsegmented neurites. Specifically, our model generates a per-voxel confidence score of the classification as a neurite. In our hybrid visualization, we first render the iso-surface of high-confidence neurites using a user-adjusted confidence threshold, along with a real-time rendering of the raw volume.

Our contributions are:
- The first end-to-end application for reconstructing and visualizing neurites in densely-structured WFM images.
- A novel 3D Segmentation Toolbox for streamlining segmentation of neurites, with features including brushing, erasing, optical flow, snap, gamma correction, and skeletonizing.
- A novel CNN model for segmenting neurites in low-resolution, densely structured WFM images.
- A novel algorithm for registering depth-adjacent brain sections using a coarse-to-fine sequential process.
- The first hybrid visualization technique that combines segmentation results with the raw input volume.

2 RELATED WORK AND BACKGROUND

2.1 Biological Background
The human brain has 80–100 billion neurons, and the nervous system groups neurons into different neurite morphologies. Studies have shown that in the mouse brain, the axonal arbor of a single cholinergic neuron, including its terminal branches, is as long as 30 cm [3]. Given the extensive branching of cholinergic projections, conventional specimen preparation and imaging techniques make it difficult to analyze their full expanse and intricate features. Beyond the genetic labeling novelty, a 3D reconstruction of the circuitry is required for understanding the cholinergic connectome.

2.2 Segmentation
Based on the motivation behind the scientific investigation, visualizing neuronal structures is more significant than rendering voxel intensity values of raw volumes. Our previous work [2] presents a preprocessing method for meaningful rendering of neurons. However, a more robust solution (e.g., neuron segmentation) is required for extracting neurites for visualization and registration purposes.

Neuron segmentation is a challenging task in neurobiology, due to the low quality of images and the high complexity of neuron morphology. To tackle this challenge, a number of manual or semi-automatic segmentation tools have been developed, such as Neurolucida [4], V3D [5], ManSegTool [6], and
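The hybrid scheme described above can be sketched in NumPy as follows. This is our own minimal illustration, not NeuroConstruct's actual renderer; the function name, the 0.9 default threshold, and the lookup-table shape are assumptions for the sketch:

```python
import numpy as np

def hybrid_voxel_classes(intensity, confidence, tf2d, conf_threshold=0.9):
    """Split voxels into iso-surface candidates and TF-rendered remainder.

    intensity, confidence: 3D arrays in [0, 1]; confidence is the per-voxel
    neurite classification score produced by a segmentation model.
    tf2d: 2D opacity lookup table indexed by (intensity bin, confidence bin).
    Returns a boolean mask of high-confidence voxels (to be rendered as an
    iso-surface) and per-voxel opacities for the remaining raw volume.
    """
    surface_mask = confidence >= conf_threshold
    n_i, n_c = tf2d.shape
    # Map continuous values to lookup-table bins, clamping the top edge.
    i_bin = np.minimum((intensity * n_i).astype(int), n_i - 1)
    c_bin = np.minimum((confidence * n_c).astype(int), n_c - 1)
    opacity = tf2d[i_bin, c_bin]
    opacity[surface_mask] = 0.0  # surface voxels are not volume-rendered
    return surface_mask, opacity
```

The key design point, as in the hybrid approach above, is that the two render paths partition the volume: voxels above the user-adjusted threshold go to surface rendering, all others keep a 2D-transfer-function opacity.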
GHAHREMANI ET AL.: NEUROCONSTRUCT: 3D RECONSTRUCTION AND VISUALIZATION OF NEURITES IN OPTICAL MICROSCOPY BRAIN... 4953
SIGEN [7]. These tools rely on tedious manual operations, making the segmentation of complex neurites in large volumes nearly impossible.

Since neurons have a branching tree structure, many methods have been employed for tracing dendritic branches and axonal trees, such as optimal seed-point pathfinding [8], [9], [10], model fitting [11], [12], fast marching [13], [14], and distance-tree hierarchical pruning [15]. Most of these methods require an ideal manually- or automatically-generated set of seeds. Manual marker placement is tedious, while automatic seed generation is greatly affected by low image quality, noisy patterns, and broken structures.

Recently, a few learning models have been developed to automatically trace neurons in OM images. Chen et al. [16] trained a self-learning method using user-provided neuron reconstructions. Li et al. [17] used 3D CNNs for segmenting neurites, which suffer from relatively long computation times. Zhou et al. [18] developed an open-source toolbox, DeepNeuron, with a 2D CNN followed by 3D mapping. Many of these methods have been shown to perform well in segmenting a single neuron in high-resolution images. However, they cannot faithfully reconstruct complex neuron morphology in images of medium to low quality.

There is also a vast amount of research on segmenting neuronal membranes in EM images. Deep learning models have shown outstanding performance in automatic neurite segmentation [19], [20], [21]. However, due to the limited availability of ground-truth data, they suffer from over- and under-segmentation. Haehn et al. [22] developed desktop applications for proofreading the automatic algorithm segmentations. Unfortunately, these methods are inapplicable to WFM images due to the differences in neuron visual representation, level of detail, and image quality.

2.3 Registration and Alignment
Volume reconstruction for OM images of brain specimens utilizes intensity- or feature-based methods. Intensity-based approaches select a pair of representative images from adjacent sub-volumes and compute a correlation measure to estimate their relative spatial registration [23], [24], [25]. These methods do not enhance registration accuracy at a finer morphological scale. Also, imaging artifacts, uneven contrast, and large datasets are potential bottlenecks for these methods. Feature-based methods use knowledge of specific structures, which requires preprocessing to produce geometrical features as registration landmarks. Landmark registration methods [26], [27] are fast and scale up easily with higher-order transformation models. Tsai et al. [28] presented microscopy slice montage synthesis by utilizing generic alignment cues from multiple fluorescence channels without landmark segmentation. Yigitsoy and Navab [29] proposed tensor-voting-based structure propagation for multi-modal medical image mosaicing.

Lee and Bajcsy [30] registered and reconstructed depth-adjacent sub-volumes acquired by a confocal microscope by connecting the 3D trajectories of salient cylindrical structures at the sub-volume boundaries. This method is ineffective on sparsely labeled samples due to the lack of continuous structures to be segmented for the proposed trajectory fusion. Dercksen et al. [31] proposed the alignment of filamentous structures by tracing the filaments and matching the traced endpoints to find an optimal transform. NeuronStitcher [32] aligns neurite tips at stack boundaries using triangulation. However, these tips are identified from neuron-tracing reconstructions, relying on tip selection.

2.4 Visualization
Recently introduced tools for the reconstruction, visualization, and analysis of complex neural connection maps enable neuroscientists to gain insights into the underlying brain morphology and function. We refer the reader to a survey [33] of techniques for macro-, meso-, and micro-scale connectivity visualization for connectomics. Volume rendering has been developed for 3D reconstruction and visualization of brain microscopy images. Mosaliganti et al. [34] developed axial artifact correction and 3D reconstruction of cellular structures from OM images. Nakao et al. [35] discussed interactive visualization and proposed a TF design for two-photon microscopy volumes based on feature spaces. Wan et al. [36] described an interactive rendering tool for confocal microscopy data, combining the rendering of multi-channel volume and polygon mesh data.

Beyer et al. [37] presented ConnectomeExplorer for interactive 3D visualization and query-guided visual analysis of large volumetric EM datasets. Hadwiger et al. [38] designed a scalable multi-resolution virtual memory architecture for visualizing petascale volumes imaged as a continuous stream of high-resolution EM images. Haehn et al. [39] developed a scalable platform for visualizing registration parameters and steps for fine-tuning the alignment computation, visualizing segmentation of 2D nano-scale images with overlaid layers, and interactive visualization for proofreading EM images. Neuroglancer [40] is a WebGL-based visualization framework for volumetric EM data. These methods are designed specifically for confocal, two-photon, or EM data. When applied to WFM, they do not yield qualitatively accurate neuron projections. Our previous work [2] discussed the challenges related to WFM volume visualization and introduced a workflow for its meaningful rendering.

3 NEUROCONSTRUCT OVERVIEW
NeuroConstruct is an end-to-end application for neuron reconstruction and visualization for WFM images. It consists of four main components: segmentation of neuronal structures, proofreading of reconstructed structures, registration of brain sections, and visualization of the reconstructed neurons and raw data using hybrid volume rendering. We present a fast and efficient ground-truth data generation pipeline. NeuroConstruct provides an interactive Segmentation Toolbox for automatically segmenting neurons and proofreading the segmentations. It also renders the reconstructed volume by combining the segmentation results with the raw input volume. NeuroConstruct also presents a Registration Toolbox for automatic coarse-to-fine registration of depth-adjacent 3D brain sections.

NeuroConstruct enables neuroscientists to study brain sections of interest thoroughly. After the whole brain section is acquired, the user can follow a 4-step pipeline (Fig. 2): (1) automatically and manually coarse-aligning whole brain sections using the Registration Toolbox, (2) automatically and manually fine-aligning an ROI from the coarse-aligned
4954 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 28, NO. 12, DECEMBER 2022
Fig. 3. Architecture of the 3D CNN segmentation model. It consists of contracting and expansion paths. The network input is a batch of grayscale
images and its output is a probability map of the same size. Each output pixel represents the probability of the input pixel being part of a neurite.
the segmented results to create an accurate ground-truth set using the Segmentation Toolbox (explained in Sec. 6).

5 SEGMENTATION
The neuron segmentation process consists of two steps: neurite segmentation and soma segmentation. We segment neurites using our designed model. For segmenting somas, we use a thresholding technique combined with the segmented neurites as a guide. This section describes the neuron segmentation process, including the dataset, the proposed model, and the soma segmentation technique.

5.1 Data
We created a ground-truth dataset of WFM image stacks for training and testing purposes. We cropped six regions of size 25×512×512 randomly from the brain medial septum region, which has a size of 25×48000×33000, and annotated them using the pipeline explained in Sec. 4.2, to be used as ground-truth. This dataset covers many variations in neuron morphology due to the large size of the cropped regions and the high density of neurons. The segmentation model was trained, tested, and validated on 3, 2, and 1 image stacks, respectively. We also conducted a qualitative analysis, performed by domain experts, to evaluate the accuracy of reconstructed neurites. Using our trained model, we segmented neurons for eight image stacks (never before seen by the model) with sizes varying between 25×512×512 and 21×1024×1024, randomly cropped from the medial septum and cortical sections of five different brains.

5.2 Proposed Network for Neurite Segmentation
We propose a nested encoder-decoder network with re-designed skip pathways connecting the encoder and decoder sub-networks (similar to U-Net++ [42]) for precise semantic segmentation, and a stacked U-structure (similar to U2-Net [43]) for salient object detection. Fig. 3 shows an overview of the network architecture. Our network is a two-level nested U-structure consisting of 21 stages. Each stage is configured with a ReSidual U-block (RSU) introduced by [43]. In our network, the feature maps follow a path through a dense stage block to fuse the output from the previous stage layer of the same dense block with the corresponding up-sampled output of the lower dense block. The re-designed skip pathways enable more similar semantic-level feature maps between the encoder and decoder, making the optimization problem easier, and the use of RSU blocks enables a more efficient extraction and aggregation of intra-stage multi-scale contextual features.

Architecture of the Network. Our network consists of 21 stages. Each stage is formed from an RSU block with a specific height. We represent each stage as RSU_L^{i,j}, where i is the index of the down-sampling layer along the encoder of the big U-structure, j is the index of the up-sampling layer along the decoder of the big U-structure, and L is the number of encoder layers in the U-Net-like structure of the RSU block, except in the stage with L = 2. Since the resolution of feature maps in the last encoding stage is relatively low, further down-sampling results in the loss of useful information. Therefore, in the last encoder stage (L = 2), we only use dilated convolutions, keeping the same resolution as the input feature maps. As shown in Fig. 3, our network consists of a sub-network of encoder stages, which is the backbone of the network, a sub-network of decoder stages, skip pathways, and a saliency map fusion module. The fusion module is responsible for generating the probability map. The network generates five side-output saliency probability maps S_side^5, S_side^4, S_side^3, S_side^2, S_side^1 from stages RSU_2^{5,0}, RSU_3^{4,1}, RSU_4^{3,2}, RSU_5^{2,3}, RSU_6^{1,4} by a 3×3 convolution layer, an up-sampling layer, and a sigmoid function, and five top-output saliency probability maps S_top^1, S_top^2, S_top^3, S_top^4, S_top^5 from stages RSU_7^{0,1}, RSU_7^{0,2}, RSU_7^{0,3}, RSU_7^{0,4}, RSU_7^{0,5} by a 1×1 convolution layer and a sigmoid function. Then, the final saliency map S_fuse is generated by concatenating all side- and top-output saliency maps, followed by a 1×1 convolution layer and a sigmoid function.

RSU Block. Each block consists of 3 main components: (1) an input convolution layer, transforming the input feature map x to an intermediate map F(x) for local feature extraction; (2) a U-Net based encoder-decoder structure that takes the intermediate feature map F(x) as input and extracts the multi-scale contextual information U(F(x)), where U represents the U-Net-like structure of the block.
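As a concrete illustration of the fusion step, the following NumPy sketch (our own simplification, not the trained network) concatenates five side and five top saliency maps and applies a 1×1 convolution followed by a sigmoid; over a stack of single-channel maps, a 1×1 convolution reduces to a per-pixel weighted sum:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_saliency_maps(side_maps, top_maps, weights, bias=0.0):
    """Fuse side- and top-output saliency maps into a final map S_fuse.

    side_maps, top_maps: lists of 2D maps, all of shape H x W.
    weights: one scalar per input map (the 1x1 convolution kernel).
    Returns an H x W map with values in (0, 1).
    """
    stack = np.stack(side_maps + top_maps, axis=0)  # (10, H, W)
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return sigmoid((stack * w).sum(axis=0) + bias)
```

In the real network the weights and bias are learned jointly with the rest of the model; here they are free parameters of the sketch.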
Fig. 7. Steps of our neurite alignment (fine registration) in two depth-adjacent ROI sections. (a) We determine a subset of neurites whose trajectory
suggests continuity beyond the interface of its section. We then estimate a direction of its propagated trajectory into the depth-adjacent section and
define an ellipsoidal region around its end-point, representing a 3D space of possible continuing locations, shown in the top of (b). Then, we use ICP
on the ellipsoids point cloud representation to estimate a rigid-body transformation, as shown in the bottom of (b). Finally, we present the user with
an interface to verify the results and correct for any misaligned neurites (c).
Finally, a rigid-body transformation is estimated using Mattes mutual information (MI) [49] with a regular-step gradient descent optimizer. MI for intensity-based registration uses the joint probability distribution of pixel samples between images and defines a mapping for a set of pixels based on value similarity by reducing the entropy between the distributions, signaling that the images are likely better aligned. This method is well-suited for our coarse alignment, as the brain tissue outer boundary and internal regions vary in perimeter across the anterior-to-posterior axis, and geometry-based registration will not yield effective results. In applying MI, we adopted a multi-resolution approach to avoid reaching a local minimum due to noise and tiling artifacts in a sparsely labeled sample. We doubled the step size for each subsequent higher resolution to keep the optimizer from focusing on progressively smaller regions.

7.2 Fine Alignment
Following coarse registration, we utilize the coherency between neurite geometric structures across depth-adjacent sections. Our fine alignment method has 3 major steps: (1) ROI selection, (2) automatic neurite trajectory propagation, and (3) automatic trajectory alignment. The main idea is to adopt a feature-based registration that maximizes the neurites' morphological continuity in neighboring brain sections.

ROI Selection. A critical limitation of micrometer-resolution microscopy images of brain samples is their large spatial extent. For a computationally faster and memory-efficient implementation of the fine alignment method, we ask the user to mark an ROI.

Neurite Trajectory Propagation. Common methods for registering microscopic images introduce fiduciary landmarks during image acquisition, which are then registered to reconstruct the complete volume. However, this adds complexity to neuroscientists' workflow. In our fine alignment approach, we have developed a novel method that uses linearly extrapolated neurite trajectories to infer their corresponding continuity beyond the section slicing interface. This correspondence between neurites is used to estimate the necessary transformation parameters that spatially align depth-adjacent sections. To formally define our approach:

- Let V1 and V2 be two adjacent sections, where the z-slice order from V1 to V2 is in the anterior-to-posterior direction.
- Let V1^post and V2^ant be the z-slices corresponding to the posterior sub-section of V1 and the anterior sub-section of V2, respectively. In our implementation, the extracted thickness of V1^post and V2^ant is set to 5 μm.
- Let Vo be the region defined by V1^post ∪ V2^ant. This is essentially the interfacing region of V1 and V2.

Broadly, we determine a subset of neurites from V1 and V2 whose trajectory within the section suggests continuity beyond the section interface and establish a correspondence between the candidates with a similar trajectory in the depth-adjacent sections. Using this information, we solve the alignment problem in Vo by maximizing overlaps between linearly extrapolated trajectories of the neurites. Fig. 7 provides an illustration of our fine alignment algorithm.

First, for each pair of serial sections, we locate neurites with trajectories propagating into the other section and estimate a propagated trajectory direction. We first use the segmentation mask volume (see Sec. 5) to find a 3D line segment that passes through the connected-component voxels of each extracted propagating neurite from V1^post or V2^ant. Next, using the line segments, we determine the propagation direction as a vector v_{i,j} for each propagating neurite. Since the goal is to extrapolate the propagating trajectory, we do not consider the entire neurite in the section (V1 or V2) but rather limit it to the z-thickness defined for V1^post and V2^ant.

Finally, each neurite trajectory propagation in the overlapping 3D space Vo is defined by an ellipsoidal region around its end-point close to the section interfacing slice, using the estimated vector v_{i,j} as the major axis (see Fig. 7). The choice of an ellipsoid to represent the trajectory propagation is to accommodate possible neurite signal loss and non-rigid deformation from physically slicing the brain. Because of hydration and dehydration during sample preparation, in addition to uniform brain growth and shrinkage, tissue distortion may occur. Therefore, we center the ellipsoid at the end-point of a propagating neurite to maximize the search space for estimating a transformation that aligns depth-adjacent sections. The ellipsoid major axis is aligned parallel to the estimated trajectory propagation direction.
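The per-neurite propagation direction can be estimated, for example, by a principal-axis fit over the connected-component voxels of the segmentation mask. This NumPy sketch is our own illustration of that step (the paper fits a 3D line segment; a principal-axis fit is one standard way to do so), and the orientation convention toward increasing z is an assumption:

```python
import numpy as np

def propagation_direction(voxels):
    """Estimate a neurite's propagation direction from its voxel coordinates.

    voxels: (N, 3) array of (z, y, x) coordinates of one connected component
    inside the posterior/anterior sub-section. Returns a unit vector along
    the principal axis of the point cloud (the fitted 3D line segment).
    """
    pts = np.asarray(voxels, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Principal axis = right singular vector with the largest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # Orient consistently, e.g., toward increasing z (the section interface).
    if direction[0] < 0:
        direction = -direction
    return direction / np.linalg.norm(direction)
```

The resulting unit vector plays the role of v_{i,j} above: it becomes the major axis of the ellipsoidal search region placed at the neurite's end-point.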
affected by Alzheimer's disease." They also evaluated each component and feature of the toolboxes separately (see Secs. 10.2 and 10.3).

10.2 Segmentation Toolbox Evaluation
Our neuroscientists used the Segmentation Toolbox to annotate and trace projections within WFM sections. They evaluated the Toolbox, including the hybrid volume rendering and annotation features, and compared the Toolbox with available tools. They believe that the segmentation and volume rendering offer incomparable tools for neurite segmentation and subsequent visualization. Several features are unique to our Toolbox that our experts have not found in other programs they have used (e.g., ImageJ [53], FluoRender [36]).

Hybrid Volume Rendering Evaluation. Using the hybrid volume rendering, users visualize and have ultimate control over the auto-segmentation result. The Toolbox then provides a user-friendly interface for distinguishing signals from the background and improving the segmentation. Our experts believe that "both of these features allow users to visualize a full resolution image, which is critical during segmentation." Overall, they found this visualization ideal, as it overlaid the segmentation on the raw data, allowing users to adjust and visualize the segmentation based on what they desire to visualize in the raw data. They also remarked "this allowed for segmenting different neuron types, initially strongest, and the option to include weaker ones."

Annotation Features Evaluation. The Toolbox offers an option to control the contrast of all views. Our experts found this feature helpful, saying that "the gamma slider allowed us to incorporate all fibers weak and strong, or just to annotate and understand morphology or trajectory in the strongest fibers." The sliders allow the user to choose an ROI to focus on and visualize it across X, Y, and Z individually. To improve upon or erode the annotation, the Toolbox provides a simple, user-friendly solution, a paintbrush-like feature. Our experts said "this intuitive tool makes it exceptionally easy for users to improve the segmentation on their own in fluid strokes, following a projection along its path. The ability to draw, trace, and erase on a slice-by-slice and pixel-by-pixel basis provided us complete control over how refined or simple we wanted the annotations to be, and the brush size option allows us to carefully follow and mimic the projection morphology and path." Our experts expressed their interest in snap by saying that "the toolbox offers a smart adjustment to the simple paintbrush concept by updating the view as you draw to the point where the signal is sharpest for every click. This is especially helpful as projection paths in real data are hardly ever straight or within the same optical section but instead come in and out of view, making tracing them challenging!" Using the updated segmentation, a skeletonized image can be generated and overlaid onto the 3D rendering for simplified, clean visualization. Our experts remarked that "while the annotation feature allowed for visualization of neuron morphology, the skeleton view allowed for a simplified way to visualize neurite trajectory. These two features worked in complement to enhance 3D visualization."

10.3 Registration Toolbox Evaluation
Our experts registered and visualized brain sections using the Registration Toolbox and believe that it provides a unique solution for serial rendering of WFM images. The base and moving images are loaded in two separate frames for initial coarse alignment. Our experts said: "the loaded images offer a good deal of flexibility and can be loaded as a maximum intensity projection of a 3D image (image contains X, Y, and Z information) or a sub-stack of the XYZ image." The Toolbox then computes the coarse alignment and offers to move and rotate the image and to visualize the base and moving images individually. Our experts believe that "this allows users control over the ultimate coarse alignment solution." Upon completion of coarse alignment, users can save it and, ultimately, the new volume (two registered images). This is in our experts' interest, as it provides an essential intermediate point at which further registration can be done with the next image in the series, or the new volume can be separately inspected. At this point, fine alignment of the images can be conducted upon selection of a small relevant ROI. Similar to coarse alignment, the fine alignment is visualized within the Toolbox and can be moved/rotated for ultimate control over the fine-alignment solution. The final aligned volume (coarse + fine) can be saved for further analysis.

Overall, our experts found several features of this Toolbox to be a unique solution for neuroscientists. "First, the toolbox completes segmentation of the images as a step before registration, which provides a complementary set of information to use in visualization and analysis. Second, the down-sampling conducted by the toolbox is a temporary solution for ease of working with large images. Upon saving, the solutions for both coarse- and fine-alignment are applied to each Z step within the stack, thereby providing the end volume with full image resolution. Finally, the registration toolbox offers a great deal of flexibility to incorporate the user's preference into the registration with the use of arrow toggles to nudge the image as well as toggles to rotate the image. These components of the registration toolbox make it a powerful tool for registration of serial WFM images."

10.4 Segmentation Method Evaluation
Quantitative Analysis. Evaluation Metrics. We evaluated the trained model using several metrics. We define TP as pixels correctly segmented as neurite, TN as pixels correctly segmented as background, FP as background pixels segmented as neurite, and FN as neurite pixels segmented as background. We then compute precision: TP/(TP+FP), recall: TP/(TP+FN), f1-score: (2·recall·precision)/(recall+precision), accuracy: (TP+TN)/(TP+TN+FP+FN), IoU: TP/(TP+FP+FN), and Dice: 2TP/(2TP+FP+FN).

Testing Set. Our testing set contains 50 images of size 512×512, cropped from the medial septum, and manually segmented using the pipeline in Sec. 4. The manual segmentations are binary masks, where 0 represents background and 1 represents neurite. The network output is a probability map within the [0,1] range, where neurite pixels have a higher probability than background pixels.

Speed. The trained model segments a 512×512 image in 0.28 seconds. Running time scales linearly with the volume depth and with every 512-pixel increment in width and length, as each input 3D image stack is broken into 2D images of size 512×512 with step sizes of 1, 512, and 512 over the z-, y-, and x-axes, respectively.
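The metrics above can be computed directly from a probability map and a binary ground-truth mask; a minimal NumPy version (the 0.5 binarization threshold is our assumption for the sketch):

```python
import numpy as np

def segmentation_metrics(prob_map, gt_mask, threshold=0.5):
    """Compute pixel-wise metrics from a probability map and a binary mask."""
    pred = prob_map >= threshold
    gt = gt_mask.astype(bool)
    tp = np.sum(pred & gt)    # pixels correctly segmented as neurite
    tn = np.sum(~pred & ~gt)  # pixels correctly segmented as background
    fp = np.sum(pred & ~gt)   # background pixels segmented as neurite
    fn = np.sum(~pred & gt)   # neurite pixels segmented as background
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "precision": precision,
        "recall": recall,
        "f1": 2 * recall * precision / (recall + precision),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "iou": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }
```

Note that for binary masks the Dice score and f1-score coincide, which is a useful sanity check on an implementation.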
TABLE 1. A Comparative Analysis of our Model Against State-of-the-Art Models
Fig. 9. The neuron segmentation of a densely structured WFM image of a brain medial septum section. Green, yellow, and blue represent reconstructed neurites, reconstructed somas, and ground-truth segmentation, respectively. (a) Raw image. (b) Reconstructed neurons overlaid on the raw image. (c) Reconstructed neurites and the ground-truth segmentation overlaid on the raw image.

Comparison. We applied several automatic neuron tracing techniques to our testing set, including APP2 [15], SmartTracing [16], Rivulet [54], TReMAP [55], and NeuroGPS-Tree [10], but they failed at reconstructing neuronal structures in the absence of a soma in densely-structured low-resolution images. We also compared the performance of our model against three state-of-the-art models, U²-Net [43], UNet++ [42], and UNet 3+ [56], summarized in Table 1. Precision-recall curves for these models are provided in Fig. S2, available online in the Supplementary Material. As shown there, our model achieves the best overall performance in segmenting neurites. As shown in Fig. 9, our model detects both strong and weak neurites and misses only extremely weak ones.

We also evaluated our model against these three models on various tasks, including nuclei segmentation and liver-tumor segmentation, with larger datasets consisting of thousands of images. According to all metrics, our network performed better and was more robust across these segmentation tasks than the other networks.

Expert Analysis. Our model reconstructs the complete neuronal structure, including weak neurites blended into the background due to low image contrast in densely structured image stacks. Our experts were provided with crops from different brain regions to evaluate how our segmentation improved neuron visualization compared to the raw data. Regardless of the brain region, our experts reported significant recovery of fibers in the images compared to the raw data. Their typical workflow involves creating MIPs of images and visualizing them in ImageJ [53]. Using ImageJ, even after increasing the gamma value of the raw data, they were unable to visualize some of the fibers that were appropriately segmented by our model. To evaluate the segmentation results generated by our model, they created MIPs of both the raw and segmented data and overlaid the images in ImageJ. Our experts were excited to see the dramatic improvement in recovery between raw and segmented images that could now be visualized using their typical workflow (ImageJ). See Fig. S3, available online in the Supplementary Material, for two examples.

10.5 Registration Method Evaluation
NeuroConstruct was used to register six individual serial sections from the mouse brain specimen (see Sec. 4). Each section was 16-bit, 21937×35616 pixels, 47 TIFF images, totaling approximately 55 GB per stack. The coarse registration took 25 minutes. We cropped a 5000×5000 ROI of size 6 GB. The automatic fine alignment process for the six ROI stacks took 33 minutes. In total, coarse-to-fine registration of a 300 μm sample took 68 minutes. We also tested a use-case for re-registration after correcting a misaligned stack, which took an additional 25 minutes. In Fig. S4, available online in the Supplementary Material, we show two focused neurons, studied by our experts for evaluation. Following are their discussions:

Utility of NeuroConstruct registration in domain science. Advances in genetic labeling have allowed us to selectively label specific neuronal subsets. The next frontier is to follow these neurons through 3D to allow complete reconstruction of single neurons and their entire arbors (dendritic branches, shown in Fig. 4). Without a proper reconstruction method, quantitative analysis of WFM data typically makes claims not about individual neurons and their trajectories, but about neuron populations. The normal workflow includes subsampling stacks across a region to get a density of labeled neurons and processes from anterior to posterior. These processes would not come from a single soma, but instead give a population estimate of labeled fibers. Using NeuroConstruct, we register serial slices, then reconstruct and isolate single neurons and their labeled arbors. Our registration now allows a number of applications and analyses that were previously impossible with WFM data.

Quantitative analysis of arbor length. Prior to registration, we were only able to visualize somas and extensions from single stacks and were unable to determine arbor morphology along the anterior-posterior axis. Registration allows us to visualize the complexity of single-neuron arbors through serial sections and to compute extension lengths. Registration helped recover 5255 μm of process length from the selected sub-ROI in Fig. S4, available online in the Supplementary Material.

Quantitative analysis of arbor branch points. Using the information gathered from the registered dataset, another analysis of interest is the number and location of branch points relative to the soma. The morphological diversity of these branches could tune different aspects of cell communication. Prior to registration, analysis of a single stack would have shown one or two branches. Following NeuroConstruct's reconstruction process, it is evident that 24 branch nodes are visible. The analysis determines the distance between branch points and the soma, and the arbor complexity.

Analysis of fiber integrity. A key feature in neuroanatomy studies is the health, and the dramatic deterioration and fragmentation, of cholinergic fibers in specific brain regions during brain diseases such as Alzheimer's. Fiber integrity is critical to maintaining neural connectivity and function. Using the registered dataset and the ability to visualize weak fibers from the segmentation, experts can now visualize alterations to fiber morphology, including swelling and fragmentation. For example, we can determine how far from the soma fragmentation began in affected neuron processes, and how much of each process remains intact and healthy.
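As a sketch of how branch points and arbor length might be extracted from a skeletonized reconstruction: the paper builds skeletons with a 3D thinning algorithm [47], but the neighbor-count heuristic and the helper names below are our own illustration, not the paper's implementation.

```python
import numpy as np

def branch_points(skeleton):
    """Find branch points in a binary 3D skeleton (z, y, x).

    Heuristic: a skeleton voxel is a branch point if it has 3 or more
    set neighbors in its 26-connected neighborhood; endpoints have 1.
    """
    skel = skeleton.astype(bool)
    padded = np.pad(skel, 1)
    counts = np.zeros(skel.shape, dtype=np.int32)
    # Accumulate the 26 shifted copies of the padded skeleton.
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dz == dy == dx == 0:
                    continue
                counts += padded[1 + dz:padded.shape[0] - 1 + dz,
                                 1 + dy:padded.shape[1] - 1 + dy,
                                 1 + dx:padded.shape[2] - 1 + dx]
    return np.argwhere(skel & (counts >= 3))

def arbor_length_um(skeleton, voxel_um=(1.0, 1.0, 1.0)):
    """Crude arbor length: skeleton voxel count times mean voxel size.

    A real pipeline would instead sum per-segment Euclidean distances
    along the traced centerline, respecting anisotropic voxel spacing.
    """
    return skeleton.astype(bool).sum() * float(np.mean(voxel_um))
```

Given a registered, skeletonized arbor, distances from the soma to each branch point can then be measured along the skeleton graph, which is the quantity the experts' branch-point analysis reports.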
11 CONCLUSION AND FUTURE WORK
We presented NeuroConstruct, a novel end-to-end application for the segmentation, registration, and visualization of brain volumes imaged using WFM. NeuroConstruct combines deep learning 3D segmentation methods, a novel algorithm for coarse-to-fine registration of brain sections, and a hybrid approach to visualize recovered neurites. To generate a ground-truth set to train and test the model, we developed the Segmentation Toolbox, facilitating annotation of WFM image stacks. We evaluated NeuroConstruct quantitatively and qualitatively, along with experts' analysis. Our results show that NeuroConstruct outperforms the state-of-the-art in all aspects of the design, including segmentation, registration, and visualization of neurons in WFM images.

Our application is designed to help neuroscientists study cholinergic neurons in WFM images, and it performs well in reconstructing and visualizing WFM neurons. We also tested our segmentation model on confocal and two-photon microscopy images: although the network was trained using WFM image stacks, it recovered most of the neurites in these stacks well. In the future, we will extend NeuroConstruct into a universal application that reconstructs and visualizes images acquired by other imaging modalities. We will also incorporate more features into our Segmentation Toolbox to increase the efficiency and simplicity of annotating WFM neurons. Note that the reconstruction quality can be further improved by training the model with a larger variety of neurites.

ACKNOWLEDGMENTS
This work was supported in part by the NSF under Grants CNS1650499, OAC1919752, ICER1940302, and IIS2107224 and in part by the Intramural Research Program of NIH, NINDS, and NIMH.

REFERENCES
[1] T. H. Ferreira-Vieira, I. M. Guimaraes, F. R. Silva, and F. M. Ribeiro, “Alzheimer’s disease: Targeting the cholinergic system,” Curr. Neuropharmacol., vol. 14, pp. 101–115, 2016.
[2] S. Boorboor, S. Jadhav, M. Ananth, D. Talmage, L. Role, and A. Kaufman, “Visualization of neuronal structures in wide-field microscopy brain images,” IEEE Trans. Vis. Comput. Graph., vol. 25, no. 1, pp. 1018–1028, Aug. 2018.
[3] E. C. Ballinger, M. Ananth, D. A. Talmage, and L. W. Role, “Basal forebrain cholinergic circuits and signaling in cognition and cognitive decline,” Neuron, vol. 91, pp. 1199–1218, 2016.
[4] J. R. Glaser and E. M. Glaser, “Neuron imaging with Neurolucida – a PC-based system for image combining microscopy,” Comput. Med. Imag. Graph., vol. 14, pp. 307–317, 1990.
[5] H. Peng, Z. Ruan, F. Long, J. H. Simpson, and E. W. Myers, “V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets,” Nat. Biotechnol., vol. 28, no. 4, pp. 348–353, 2010.
[6] C. Magliaro, A. L. Callara, N. Vanello, and A. Ahluwalia, “A manual segmentation tool for three-dimensional neuron datasets,” Front. Neuroinform., vol. 11, 2017, Art. no. 36.
[7] H. Ikeno, A. Kumaraswamy, K. Kai, T. Wachtler, and H. Ai, “A segmentation scheme for complex neuronal arbors and application to vibration sensitive neurons in the honeybee brain,” Front. Neuroinform., vol. 12, 2018, Art. no. 61.
[8] H. Peng, F. Long, and G. Myers, “Automatic 3D neuron tracing using all-path pruning,” Bioinformatics, vol. 27, pp. 239–247, 2011.
[9] M. H. Longair, D. A. Baker, and J. D. Armstrong, “Simple neurite tracer: Open source software for reconstruction, visualization and analysis of neuronal processes,” Bioinformatics, vol. 27, pp. 2453–2454, 2011.
[10] T. Quan et al., “NeuroGPS-tree: Automatic reconstruction of large-scale neuronal populations with dense neurites,” Nat. Methods, vol. 13, pp. 51–54, 2016.
[11] Y. Wang, A. Narayanaswamy, C.-L. Tsai, and B. Roysam, “A broadly applicable 3D neuron tracing method based on open-curve snake,” Neuroinformatics, vol. 9, pp. 193–217, 2011.
[12] S. Li et al., “Optimization of traced neuron skeleton using lasso-based model,” Front. Neuroanat., vol. 13, 2019, Art. no. 18.
[13] S. Basu and D. Racoceanu, “Reconstructing neuronal morphology from microscopy stacks using fast marching,” in Proc. IEEE Int. Conf. Image Process., 2014, pp. 3597–3601.
[14] J. Yang, M. Hao, X. Liu, Z. Wan, N. Zhong, and H. Peng, “FMST: An automatic neuron tracing method based on fast marching and minimum spanning tree,” Neuroinformatics, vol. 17, pp. 185–196, 2019.
[15] H. Xiao and H. Peng, “APP2: Automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree,” Bioinformatics, vol. 29, pp. 1448–1454, 2013.
[16] H. Chen, H. Xiao, T. Liu, and H. Peng, “SmartTracing: Self-learning-based neuron reconstruction,” Brain Inform., vol. 2, pp. 135–144, 2015.
[17] R. Li, T. Zeng, H. Peng, and S. Ji, “Deep learning segmentation of optical microscopy images improves 3D neuron reconstruction,” IEEE Trans. Med. Imag., vol. 36, no. 7, pp. 1533–1541, Jul. 2017.
[18] Z. Zhou, H.-C. Kuo, H. Peng, and F. Long, “DeepNeuron: An open deep learning toolbox for neuron tracing,” Brain Inform., vol. 5, pp. 1–9, 2018.
[19] A. Fakhry, T. Zeng, and S. Ji, “Residual deconvolutional networks for brain electron microscopy image segmentation,” IEEE Trans. Med. Imag., vol. 36, no. 2, pp. 447–456, Feb. 2016.
[20] M. Januszewski et al., “High-precision automated reconstruction of neurons with flood-filling networks,” Nat. Methods, vol. 15, pp. 605–610, 2018.
[21] K. Luther and H. S. Seung, “Learning metric graphs for neuron segmentation in electron microscopy images,” in Proc. IEEE Int. Symp. Biomed. Imag., 2019, pp. 244–248.
[22] D. Haehn et al., “Design and evaluation of interactive proofreading tools for connectomics,” IEEE Trans. Vis. Comput. Graph., vol. 20, no. 12, pp. 2466–2475, Dec. 2014.
[23] P. Bajcsy, S.-C. Lee, A. Lin, and R. Folberg, “Three-dimensional volume reconstruction of extracellular matrix proteins in uveal melanoma from fluorescent confocal laser scanning microscope images,” J. Microsc., vol. 221, pp. 30–45, 2006.
[24] T. Ju et al., “3D volume reconstruction of a mouse brain from histological sections using warp filtering,” J. Neurosci. Methods, vol. 156, pp. 84–100, 2006.
[25] H. Liang, N. Dabrowska, J. Kapur, and D. S. Weller, “Whole brain reconstruction from multilayered sections of a mouse model of status epilepticus,” in Proc. Asilomar Conf. Signals, Syst., Comput., 2017, pp. 1260–1263.
[26] K. S. Arun, T. S. Huang, and S. D. Blostein, “Least-squares fitting of two 3D point sets,” IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-9, no. 5, pp. 698–700, Sep. 1987.
[27] J. Luck, C. Little, and W. Hoff, “Registration of range data using a hybrid simulated annealing and iterative closest point algorithm,” in Proc. IEEE Int. Conf. Robot. Automat., 2000, pp. 3739–3744.
[28] C.-L. Tsai et al., “Robust, globally consistent and fully automatic multi-image registration and montage synthesis for 3D multi-channel images,” J. Microsc., vol. 243, pp. 154–171, 2011.
[29] M. Yigitsoy and N. Navab, “Structure propagation for image registration,” IEEE Trans. Med. Imag., vol. 32, no. 9, pp. 1657–1670, Sep. 2013.
[30] S.-C. Lee and P. Bajcsy, “Trajectory fusion for 3D volume reconstruction,” Comput. Vis. Image Understanding, vol. 110, pp. 19–31, 2008.
[31] V. J. Dercksen, H.-C. Hege, and M. Oberlaender, “The Filament Editor: An interactive software environment for visualization, proof-editing and analysis of 3D neuron morphology,” Neuroinformatics, vol. 12, pp. 325–339, 2014.
[32] H. Chen, D. M. Iascone, N. M. da Costa, E. S. Lein, T. Liu, and H. Peng, “Fast assembling of neuron fragments in serial 3D sections,” Brain Inform., vol. 4, 2017, Art. no. 183.
[33] H. Pfister et al., “Visualization in connectomics,” in Scientific Visualization. Berlin, Germany: Springer, 2014, pp. 221–245.
[34] K. Mosaliganti et al., “Reconstruction of cellular biological structures from optical microscopy data,” IEEE Trans. Vis. Comput. Graph., vol. 14, no. 4, pp. 863–876, Jul./Aug. 2008.
[35] M. Nakao et al., “Visualizing in vivo brain neural structures using volume rendered feature spaces,” Comput. Biol. Med., vol. 53, pp. 85–93, 2014.
[36] Y. Wan, H. Otsuna, C.-B. Chien, and C. Hansen, “FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research,” in Proc. IEEE Pacific Vis. Symp., 2012, pp. 201–208.
[37] J. Beyer, A. Al-Awami, N. Kasthuri, J. W. Lichtman, H. Pfister, and M. Hadwiger, “ConnectomeExplorer: Query-guided visual analysis of large volumetric neuroscience data,” IEEE Trans. Vis. Comput. Graph., vol. 19, no. 12, pp. 2868–2877, Dec. 2013.
[38] M. Hadwiger, J. Beyer, W. Jeong, and H. Pfister, “Interactive volume exploration of petascale microscopy data streams using a visualization-driven virtual memory approach,” IEEE Trans. Vis. Comput. Graph., vol. 18, no. 12, pp. 2285–2294, Dec. 2012.
[39] D. Haehn et al., “Scalable interactive visualization for connectomics,” Informatics, vol. 4, 2017, Art. no. 29.
[40] WebGL-based viewer for volumetric data. Accessed: Oct. 12, 2020. [Online]. Available: https://github.com/google/neuroglancer
[41] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Interv., 2015, pp. 234–241.
[42] Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, and J. Liang, “UNet++: A nested U-Net architecture for medical image segmentation,” in Proc. Deep Learn. Med. Image Anal. Multimodal Learn. Clin. Decis. Support, 2018, pp. 3–11.
[43] X. Qin, Z. Zhang, C. Huang, M. Dehghan, O. R. Zaiane, and M. Jagersand, “U²-Net: Going deeper with nested U-structure for salient object detection,” Pattern Recognit., vol. 106, 2020, Art. no. 107404.
[44] P.-S. Liao et al., “A fast algorithm for multilevel thresholding,” J. Inf. Sci. Eng., vol. 17, no. 5, pp. 713–727, 2001.
[45] B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proc. 7th Int. Joint Conf. Artif. Intell., 1981, vol. 2, pp. 674–679.
[46] M. Roberts et al., “Neural process reconstruction from sparse user scribbles,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Interv., 2011, pp. 621–628.
[47] T.-C. Lee, R. L. Kashyap, and C.-N. Chu, “Building skeleton models via 3D medial surface axis thinning algorithms,” Graphical Models Image Process., vol. 56, pp. 462–478, 1994.
[48] U. Bagci and L. Bai, “Automatic best reference slice selection for smooth volume reconstruction of a mouse brain from histological images,” IEEE Trans. Med. Imag., vol. 29, no. 9, pp. 1688–1696, Sep. 2010.
[49] D. Mattes, D. R. Haynor, H. Vesselle, T. K. Lewellen, and W. Eubank, “PET-CT image registration in the chest using free-form deformations,” IEEE Trans. Med. Imag., vol. 22, no. 1, pp. 120–128, Jan. 2003.
[50] H. W. Kuhn, “The Hungarian method for the assignment problem,” Nav. Res. Logist. Quart., vol. 2, no. 1/2, pp. 83–97, 1955.
[51] W. J. Schroeder, B. Lorensen, and K. Martin, The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics. New York, NY, USA: Kitware, 2004.
[52] L. Ibanez, W. Schroeder, L. Ng, and J. Cates, The ITK Software Guide, 1st ed. New York, NY, USA: Kitware, 2003.
[53] W. S. Rasband, “ImageJ: Image processing and analysis in Java,” Astrophysics Source Code Library, Jun. 2012, record ascl:1206.013.
[54] S. Liu, D. Zhang, S. Liu, D. Feng, H. Peng, and W. Cai, “Rivulet: 3D neuron morphology tracing with iterative back-tracking,” Neuroinformatics, vol. 14, pp. 387–401, 2016.
[55] Z. Zhou, X. Liu, B. Long, and H. Peng, “TReMAP: Automatic 3D neuron reconstruction based on tracing, reverse mapping and assembling of 2D projections,” Neuroinformatics, vol. 14, pp. 41–50, 2016.
[56] H. Huang et al., “UNET 3+: A full-scale connected UNet for medical image segmentation,” in Proc. IEEE Int. Conf. Acoust., Speech Signal Process., 2020, pp. 1055–1059.

Parmida Ghahremani received the BS degree in computer engineering from the Sharif University of Technology, Iran. She is currently working toward the PhD degree in computer science at Stony Brook University. Her research interests include computer vision, biomedical imaging, deep learning, and computer graphics.

Saeed Boorboor received the BSc (Hons.) degree in computer science from the School of Science and Engineering, Lahore University of Management Sciences, Pakistan. He is currently working toward the PhD degree in computer science at Stony Brook University. His research interests include scientific visualization, biomedical imaging, and computer graphics.

Pooya Mirhosseini received the BSc degree in software engineering from the Sharif University of Technology and the MSc degree in computer science from Stony Brook University, where he was a research assistant with the Visualization Lab. He is currently a software engineer with Apple in the Bay Area. His research interests include low-level GPU optimization, GPGPU, visualization, and virtual/augmented reality.

Chetan Gudisagar received the BTech degree from the National Institute of Technology Karnataka, Surathkal, India, and the MSc degree in computer science from Stony Brook University, New York. He is currently working on CorfuDB, an open-source consistency platform, with VMware. His research interests include distributed systems, backend development, and data science.

Mala Ananth received the BS degree in biology and the PhD degree in neuroscience from Stony Brook University in 2011 and 2019, respectively. From 2013, she was a research assistant with Brookhaven National Laboratory. She is currently a postdoctoral research fellow with the National Institute of Neurological Disorders and Stroke. Her research interests include the heterogeneity of cell types in age-related cognitive decline.
David Talmage received the BA degree in biology from the University of Virginia and the PhD degree in genetics from the University of Minnesota. He was a postdoctoral fellow at Rockefeller University and Harvard Medical School. He is currently a senior scientist with the National Institute of Mental Health, NIH. He has authored or co-authored more than 70 peer-reviewed articles that have been cited nearly 4000 times.

Lorna W. Role received the AB degree in applied mathematics and the PhD degree in physiology from Harvard University. She was a postdoctoral fellow in pharmacology at Harvard Medical School and the Washington University School of Medicine. She became an assistant professor at Columbia University in 1985 and was later promoted to professor. In 2008, she became a SUNY distinguished professor and chair of the Department of Neurobiology and Behavior, and co-director of the Neurosciences Institute, Stony Brook University. She is currently the scientific director and a senior investigator with NINDS, NIH. She is the recipient of many awards and honors, including fellow of the American Association for the Advancement of Science in 2011 and fellow of the American College of Neuropsychopharmacology in 2009.

Arie E. Kaufman (Fellow, IEEE) received the PhD degree in computer science from Ben-Gurion University, Israel, in 1977. From 1999 to 2017, he was the chair of the Computer Science Department. He is currently a distinguished professor of computer science, the director of the Center of Visual Computing, and the chief scientist of the Center of Excellence in Wireless and Information Technology, Stony Brook University. He has conducted research for more than 40 years in visualization, graphics, and imaging, and has authored or coauthored more than 350 refereed papers. He was the founding editor-in-chief of the IEEE Transactions on Visualization and Computer Graphics, 1995–1998. He is an ACM fellow and a National Academy of Inventors fellow, received the IEEE Visualization Career Award in 2005, and was inducted into the IEEE Visualization Academy in 2019 and the Long Island Technology Hall of Fame in 2013.

For more information on this or any other computing topic, please visit our Digital Library at www.computer.org/csdl.