0% found this document useful (0 votes)
24 views15 pages

NeuroConstruct 3D Reconstruction and Visualization of Neurites in Optical Microscopy Brain Images

Uploaded by

iqra
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
24 views15 pages

NeuroConstruct 3D Reconstruction and Visualization of Neurites in Optical Microscopy Brain Images

Uploaded by

iqra
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 15

IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 28, NO.

12, DECEMBER 2022 4951

NeuroConstruct: 3D Reconstruction
and Visualization of Neurites in Optical
Microscopy Brain Images
Parmida Ghahremani , Saeed Boorboor , Pooya Mirhosseini, Chetan Gudisagar, Mala Ananth,
David Talmage , Lorna W. Role, and Arie E. Kaufman , Fellow, IEEE

Abstract—We introduce NeuroConstruct, a novel end-to-end application for the segmentation, registration, and visualization of brain
volumes imaged using wide-field microscopy. NeuroConstruct offers a Segmentation Toolbox with various annotation helper functions that
aid experts to effectively and precisely annotate micrometer resolution neurites. It also offers an automatic neurites segmentation using
convolutional neuronal networks (CNN) trained by the Toolbox annotations and somas segmentation using thresholding. To visualize
neurites in a given volume, NeuroConstruct offers a hybrid rendering by combining iso-surface rendering of high-confidence classified
neurites, along with real-time rendering of raw volume using a 2D transfer function for voxel classification score versus voxel intensity value.
For a complete reconstruction of the 3D neurites, we introduce a Registration Toolbox that provides automatic coarse-to-fine alignment of
serially sectioned samples. The quantitative and qualitative analysis show that NeuroConstruct outperforms the state-of-the-art in all design
aspects. NeuroConstruct was developed as a collaboration between computer scientists and neuroscientists, with an application to the
study of cholinergic neurons, which are severely affected in Alzheimer’s disease.

Index Terms—Wide-field microscopy, neuron morphology, segmentation, registration, hybrid volume rendering, CNN

1 INTRODUCTION a trade-off. Due to its optics, images acquired using WFM


suffer from a degraded contrast between foreground and
DVANCES in optical microscopy (OM) have driven the
A field of neuroanatomy and the acquisition of high-reso-
lution 3D images of the brain across multiple spatial scales.
background, low signal-to-noise ratio (SNR), and poor axial
resolution (Fig. 1). The microscope limited chamber size
and the adverse affect of increased light scattering in thicker
Using new techniques and tools for reconstruction, visuali-
samples compel neuroscientists to physically slice thin sec-
zation, and analysis of these 3D images, neuroscientists can
tions of specimens. Consequently, a brain study is con-
now study in detail the structural and functional connectiv-
strained to analyzing individual sections of a specimen.
ity underlying the brain. This technology is widely used to
To study and diagnose brain diseases, neuroscientists
diagnose diseases caused by neuron degeneration, such as
explore the structure and function of the nervous system. In
cholinergic neurons in Alzheimer’s disease [1].
brain studies using WFM images, due to its optics and lim-
There are various techniques for imaging brain sections,
ited chamber size, neuroscientists face three challenges:
such as electron (EM), confocal, two-photon, and light
wide-field microscopy (WFM). Amongst neuroscientists,
 Segmentation of neuronal structures: The primary infor-
WFM is preferred for experimental studies due to its large
mation neuroscientists expect from WFM brain vol-
field-of-view and fast image acquisition. Imaging a 40 slice
umes are neuronal structures. The inherent WFM
of a sample using confocal microscopy would take 15 hours,
limitations, such as out-of-focus blurring and the
but only 1.5 hours on WFM. Moreover, WFM automatically
absence of distinctive set of intensity values differen-
moves the sample stage, resulting in sequential image
tiating foreground (neurons) from background (blur-
acquisition without manually readjusting the sample orien-
ring artifacts and brain tissue), leads to failure of
tation for every field-of-view. These advantages come with
current neuron tracing methods.
 Registration of neuronal structures: Following the seg-
 Parmida Ghahremani, Saeed Boorboor, Pooya Mirhosseini, Chetan Gudisa- mentation, the reconstruction of the entire brain
gar, and Arie E. Kaufman are with the Department of Computer Science, specimen as a full volume enables a complete under-
Stony Brook University, Stony Brook, NY 11794-2424 USA. standing of neuron morphologies. However, regis-
E-mail: {pghahremani, sboorboor, semirhossein, fchetan, ari}@cs.stonybrook.edu.
 Mala Ananth, David Talmage, and Lorna W. Role are with the National tration of brain sections is complicated, as the
Institutes of Health, Bethesda, MD 20892 USA. physical slicing results in non-rigid deformations on
E-mail: {mala.ananth, david.talmage, lorna.role}@nih.gov. captured images.
Manuscript received 5 Jan. 2021; revised 10 Aug. 2021; accepted 19 Aug. 2021.  Visualization of neuronal structures: The WFM limita-
Date of publication 3 Sept. 2021; date of current version 27 Oct. 2022. tions make visualization parameters adjustment
(Corresponding author: Parmida Ghahremani.)
Recommended for acceptance by C. Wang.
complex and time-consuming, and directly applying
Digital Object Identifier no. 10.1109/TVCG.2021.3109460 rendering techniques do not yield effective results.
1077-2626 © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See ht_tps://www.ieee.org/publications/rights/index.html for more information.
Authorized licensed use limited to: University of Sargodha. Downloaded on November 06,2023 at 10:01:35 UTC from IEEE Xplore. Restrictions apply.
4952 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 28, NO. 12, DECEMBER 2022

calling it the structural representation mode. Next, we intro-


duce a 2D transfer function (TF) of voxel confidence versus
raw data voxel intensity, calling it the fusion mode. By inter-
actively manipulating the TF, experts visualize possible
neurites in the raw volume below the confidence threshold.
To overcome the registration challenge, NeuroConstruct
provides a Registration Toolbox for coarse-to-fine registra-
tion of depth-adjacent brain sections along with visualiza-
tion of brain sections to be registered and the aligned
sections. Using the brain overall structural anatomy, it first
estimates a global rigid-body transformation that coarsely
registers adjacent sections. Then, we introduce a novel
method that maximizes sparsely labeled neurites morpho-
logical continuity in a user-selected region-of-interest (ROI).
Fig. 1. WFM images are volumes obtained by focusing at different
depths of thinly sliced specimen [2]. (a) Volume rendering of unpro- We estimate the trajectories of severed neurites at interfaces
cessed WFM brain image. (b) Top-left: 2D cross-sectional view of the between slices, using an ellipsoid as the approximate loca-
volume in x-y plane. Top-right: 2D cross-section in y-z plane at the verti- tion where these neurites continue in the adjacent section.
cal dashed line. Bottom: 2D cross-section in x-z plane at the horizontal
The contributions (to the best of our knowledge) of this
dashed line. The cross-sections show how out-of-focus light occludes
low-intensity features, making it difficult to analyze 3D structures. paper are as follows:

Moreover, segmentation and tracing techniques are  First end-to-end application for reconstructing and
limited to classifying features captured within the visualizing neurites in densely-structured WFM
designed algorithm or trained model. Thus, a com- images.
plimentary approach is required.  Novel 3D Segmentation Toolbox for streamlining
The primary goal of NeuroConstruct is reconstruction of segmentation of neurites with features including
neuronal structures in WFM whole brain specimens that ena- brushing, erasing, optical flow, snap, gamma correc-
bles the exploration of the nervous system. To achieve this tion, skeletonizing.
goal, we address all challenges and present NeuroConstruct,  Novel CNN model for segmenting the neurites in
a novel end-to-end application to reconstruct neuronal struc- low-resolution densely structured WFM images.
tures by performing tasks of segmenting, registering, and  Novel algorithm for registering depth-adjacent brain
visualizing neuronal structures in brain volumes. sections using a coarse-to-fine sequential process.
To overcome the segmentation and visualization prob-  First hybrid visualization technique that combines
lem, NeuroConstruct offers a novel Segmentation Toolbox. segmentation results with the raw input volume.
It provides simultaneous 2D cross-sectional views and 3D
volume rendering of image stacks along with real-time 2 RELATED WORK AND BACKGROUND
user-drawn annotations. It also provides novel annotation
2.1 Biological Background
functions to help experts annotate neurons in 3D brain
The human brain has 80–100 billion neurons, and the nervous
images efficiently. We further implemented automatic neu-
system groups neurons into different neurite morphology.
ron segmentation using a nested CNN that hires skip path-
Studies have shown that in mice brain, an axonal arbor of a
ways for connecting the encoder and decoder to compute
single cholinergic neuron, including its terminal branches, is
feature maps and segment neurites using the extracted
as long as 30 cm [3]. Given the extensive branching of cholin-
maps combined with image processing techniques. CNNs
ergic projections, conventional specimen preparation and
have achieved breakthrough performance in various seg-
imaging techniques make it difficult to analyze their full
mentation tasks. Their primary issue is requiring a vast
expanse and intricate features. Beyond the genetic labeling
amount of labeled data for training. Due to the high density
novelty, a 3D reconstruction of the circuity is required for
of neurons in brain images, their manual annotation in 3D
understanding the cholinergic connectome.
image stacks requires tremendous time and effort. We intro-
duce a workflow to speed up ground-truth generation.
The robustness of deep-learning models dramatically 2.2 Segmentation
depends on the accuracy and availability of sufficient train- Based on the motivation behind the scientific investigation,
ing data. Biologist’s workflow is subjected to experimental visualizing neuronal structures is more significant than ren-
variations, and their data has immense biological variabil- dering voxel intensity values of raw volumes. Our previous
ity. Thus, the infeasibility of capturing sufficient training work [2] presents a preprocessing method for meaningful
data covering all neuronal variations can result in the model rendering of neurons. However, a more robust solution
failing to segment a neurite for which it was not trained. (e.g., neuron segmentation) is required for extracting neu-
Therefore, we devise a hybrid approach to visualize the rites for visualization and registration purposes.
extracted neurites along with possible unsegmented neu- Neuron segmentation is a challenging task in neurobiol-
rites. Specifically, our model generates a per-voxel confi- ogy, due to the low quality of images and high complexity of
dence score of the classification as a neurite. In our hybrid neuron morphology. To tackle this challenge, a number of
visualization, we first render the iso-surface of high-confi- manual or semi-automatic segmentation tools have been
dence neurites using a user-adjusted confidence threshold, developed, such as Neurolucida [4], V3D [5], ManSegTool [6],
Authorized licensed use limited to: University of Sargodha. Downloaded on November 06,2023 at 10:01:35 UTC from IEEE Xplore. Restrictions apply.
GHAHREMANI ET AL.: NEUROCONSTRUCT: 3D RECONSTRUCTION AND VISUALIZATION OF NEURITES IN OPTICAL MICROSCOPY BRAIN... 4953

SIGEN [7]. These tools rely on tedious manual operations, endpoints to find an optimal transform. NeuronSticher [32]
making the segmentation of complex neurites in large vol- aligns neurite tips at stack boundaries using triangulation.
umes nearly impossible. However, these tips are identified from neuron-tracing
Since neurons have a branching tree structure, many reconstructions, relying on tip selection.
methods have been hired for tracing dendritic branches and
axonal trees, such as optimal seed-points pathfinding [8], 2.4 Visualization
[9], [10], model fitting [11], [12], fast marching [13], [14], and Recently introduced tools for the reconstruction, visualiza-
distance-tree hierarchical pruning [15]. Most of these meth- tion, and analysis of complex neural connection maps enable
ods require an ideal manually- or automatically-generated neuroscientists to gain insights into the underlying brain
set of seeds. Manual marker placement is tedious, while morphology and function. We refer the reader to a survey
automatic seed generation is greatly affected by image low [33] of techniques for macro-, meso-, and micro-scale connec-
quality, noisy patterns, and broken structures. tivity visualization for connectomics. Volume rendering has
Recently, a few learning models have been developed to been developed for 3D reconstruction and visualization of
automatically trace neurons in OM images. Chen et al. [16] brain microscopy images. Mosaliganti et al. [34] developed
trained a self-learning method using user-provided neuron axial artifacts correction and 3D reconstruction of cellular
reconstructions. Li et al. [17] hired 3D CNNs for segmenting structures from OM images. Nakao et al. [35] discussed inter-
neurites, which suffer from relatively long computation. active visualization and proposed a TF design for two-pho-
Zhou et al. [18] developed an open-source toolbox, Deep- ton microscopy volumes based on feature spaces. Wan et al.
Neuron, with a 2D CNN followed by 3D mapping. Many of [36] described an interactive rendering tool for confocal
these methods have shown to perform well in segmenting a microscopy data, combining the rendering of multi-channel
single neuron on high-resolution images. However, they volume and polygon mesh data.
cannot faithfully reconstruct complex neuron morphology Beyer et al. [37] presented ConnectomeExplorer for inter-
in images with medium to low quality. active 3D visualization and query-guided visual analysis of
There is also a vast amount of research on segmenting large volumetric EM datasets. Hadwiger et al. [38] designed
neuronal membranes in EM images. Deep learning models scalable multi-resolution virtual memory architecture for
have shown an outstanding performance in automatic neu- visualizing petascale volumes imaged as a continuous
rites segmentation [19], [20], [21]. However, due to the lim- stream of high-resolution EM images. Haehn et al. [39] devel-
ited availability of ground-truth data, they suffer from over- oped a scalable platform for visualizing registration parame-
and under-segmentation. Haehn et al. [22] developed desk- ters and steps for fine-tuning the alignment computation,
top applications for proofreading the automatic algorithm visualizing segmentation of 2D nano-scale images with over-
segmentations. Unfortunately, these methods are unappli- layed layers, and interactive visualization for proofreading
cable to WFM images due to the differences in neuron EM images. Neuroglancer [40] is a WebGL-based visualiza-
visual representation, details level, and images quality. tion framework for volumetric EM data. These methods are
designed specifically for confocal, two-photon, or EM data.
2.3 Registration and Alignment When applied to WFM, they do not yield qualitatively accu-
Volume reconstruction for OM images of brain specimens rate neuron projections. Our previous work [2] discussed the
utilizes intensity- or feature-based methods. Intensity-based challenges related to WFM volume visualization and intro-
approaches select a pair of representative images from adja- duced a workflow for its meaningful rendering.
cent sub-volumes and compute a correlation measure to
estimate their relative spatial registration [23], [24], [25].
These methods do not enhance registration accuracy at a 3 NEUROCONSTRUCT OVERVIEW
finer morphological scale. Also, imaging artifacts, uneven NeuroConstruct is an end-to-end application for neuron
contrast, and large datasets are potential bottlenecks for reconstruction and visualization for WFM images. It con-
these methods. Feature-based methods use specific struc- sists of four main components: Segmentation of neuronal
tures knowledge, which needs preprocessing for producing structures, proofreading reconstructed structures, registra-
geometrical features as registration landmarks. Landmarks tion of brain sections, and visualization of the reconstructed
registration methods [26], [27] are fast and scale up easily neurons and raw data using hybrid volume rendering. We
with higher-order transformation models. Tsai et al. [28] present a fast and efficient ground-truth data generation
presented microscopy slices montage synthesis by utilizing pipeline. NeuroConstruct provides an interactive Segmen-
generic alignment cues from multiple fluorescence channels tation Toolbox for automatically segmenting neurons and
without landmarks segmentation. Yigitsoy and Navab [29] proofreading the segmentations. It also renders the recon-
proposed tensor voting based structure propagation for structed volume by combining the segmentation results
multi-modal medical images mosaicing. with the raw input volume. NeuroConstruct also presents a
Lee and Bajcsy [30] registered and reconstructed depth Registration Toolbox for automatic coarse-to-fine registra-
adjacent sub-volumes acquired by a confocal microscope by tion of depth-adjacent 3D brain sections.
connecting the 3D trajectories of salient cylindrical structures NeuroConstruct enables neuroscientists to study brain
at the sub-volume boundaries. This method is ineffective on sections of interest thoroughly. After the whole brain section
sparsely labeled samples due to the lack of continuous struc- is acquired, the user can follow a 4-step pipeline (Fig. 2): (1)
tures to be segmented for the proposed trajectory fusion. automatically and manually coarse-aligning whole brain sec-
Dercksen et al. [31] proposed the alignment of filamentous tions using the Registration Toolbox, (2) automatically and
structures by tracing the filaments and matching the traced manually fine-aligning an ROI from the coarse-aligned
Authorized licensed use limited to: University of Sargodha. Downloaded on November 06,2023 at 10:01:35 UTC from IEEE Xplore. Restrictions apply.
4954 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 28, NO. 12, DECEMBER 2022

serially sectioned at 20 mm thickness and serially imaged on


a WFM scanner (Olympus VS-120). Imaging was conducted
with a 40 objective with z-step of 1 mm. WFM images are
volumes obtained by focusing at different depths of a thinly
sliced specimen, which means that each section is an image
stack of 2D slices.

4.2 Ground Truth Generation


To generate ground-truth of reconstructed neurites in 3D
WFM images, we annotated neurites in several brain section
regions using automatic segmentation and manual refine-
Fig. 2. Overview of the application. After acquiring a whole-brain section, ment. Due to a large number of neurites in brain samples,
experts can use NeuroConstruct to perform a thorough analysis of the generating a ground-truth dataset is nearly impossible. To
neuronal structures. They can follow the pipeline shown in the figure. In
the registration toolbox, given depth-adjacent whole brain sections, Neu-
solve this, we have developed a novel approach for efficient
roConstruct automatically coarse-aligns the sections. Then, users have neuron segmentation in image stacks. We used a small set
to select an ROI to start the fine-alignment process. NeuroConstruct pro- of 2D strips to train a 2D CNN model for generating prelim-
vides features for manual refinement of coarse and fine registration and inary neurite segmentations to be proofread using the Seg-
allows users to save the coarse-aligned brain sections and finely regis-
tered region for further analysis. Next, users can load the registered ROI mentation Toolbox. This technique speeds up the ground-
in the segmentation toolbox to visualize and segment the neuronal struc- truth generation process significantly. Our data generation
tures manually and automatically using the manual annotation features pipeline consists of 3 main steps:
and automatic segmentation methods.
Step 1 (2D-Strip Annotation). In WFM images, neurites in
x-y plane have a tree-shaped structure. In y-z/x-z planes,
stacks using the Registration Toolbox, (3) visualizing the 3D neurites are bright blobs with light projections going out-
and 2D views of the fine-aligned stacks using the Segmenta- wards (Fig. 1). Due to neuron high density and image low
tion Toolbox (4) automatically and manually segmenting the SNR, distinguishing the weak neurons from the background
neuronal structures using the provided functions in the Seg- in x-y plane is hard and neuron annotation in this plane is
mentation Toolbox. These result in a complete and precise time-consuming. Therefore, we used x-z and y-z views for
reconstruction and visualization of neuronal data. neuron annotation. Two neuroscientists spent about 40
Users can also choose to use each component, including hours each to annotate one hundred of 2D strips of y-z/x-z
coarse-alignment, and fine-alignment in the Registration planes with approximately 30 neurons in each strip result-
Toolbox, and visualization, segmentation, and manual anno- ing in a set of 3000 individual neuron representations.
tation in the Segmentation Toolbox independently without Step 2 (Preliminary Segmentation). We used the annotated
the need to go through the whole pipeline. The components strips to train a U-Net [41] based network consisting of con-
individual usability helps experts to perform the desired tracting and extracting paths. The contraction path consists
analysis, including visualization, segmentation, and registra- of five downsampling components with two 33 convolu-
tion on any WFM of interest independently or go through tions, each followed by a parameter rectified linear unit
certain steps in the pipeline in the desired manner. (PReLU) and a 22 max pooling with stride 2. We used
Both segmentation and registration can be done on the dropout layers with a rate of 0.5 after the last two compo-
whole brain sections. However, because of the high density nents to reduce overfitting. The expansive path consists of
of neurons and the large size of the brain sections, it takes a feature map upsampling followed by a 22 convolution,
tremendous amount of time to segment the neurons in a merged with the correspondingly cropped feature map
brain section, and neuroscientists might not infer useful from the contracting path, and two 33 convolutions, each
information by analyzing a large segmented brain section. followed by a PReLU. Last, a pixel-wise softmax applied on
the resultant image followed by a Poisson loss function to
classify pixels into neurite or background.
4 DATA PREPARATION Each 2D strip size is 25512, which is the number of adja-
4.1 Biological Prep cent stacks in z-direction (depth) by volume width or
To train and test our application, we used two types of sam- length. We train the network with input images (512512)
ples generated by our neuroscientists (Institutional Animal tiled with side by side strips with spacing. It makes the seg-
Care and Use Committee approval# 1618). For segmentation mentation process approximately 15 times faster than feed-
components, we used densely labeled samples obtained ing the network, a single strip per image. Our dataset with
from a transgenic mouse line where a fusion protein of tau 3000 neuron representations introduces to the network a
and a green fluorescent protein was under the control of the large variety of shape, size, and intensity of neurons. To
ChAT promotor (ChAT-tauGFP mouse). This allowed for infuse more spatial neuronal information, we augmented
labeling all cholinergic fibers and cell bodies throughout all the strips by applying flip and zoom transformations. We
the slices within the sample. For registration components, trained our network with 300 images, each containing 15
we used sparsely labeled samples obtained from a knock-in individual augmented 25512 strips. Under-segmentation
mouse where cre-recombinase expressed exclusively in cho- is a major concern, which is the artifact of lack of neurons
linergic neurons (ChAT-IRES-Cre mice). Mice were transcar- projection traveling in z-direction in x-z and y-z planes.
dially perfused with 4% PFA, and brain tissue was harvested Step 3 (Refinement). We generated a 3D neuronal segmenta-
and sucrose equilibrated for cryosectioning. Samples were tion dataset using the trained model in Step 2 and proofread
Authorized licensed use limited to: University of Sargodha. Downloaded on November 06,2023 at 10:01:35 UTC from IEEE Xplore. Restrictions apply.
GHAHREMANI ET AL.: NEUROCONSTRUCT: 3D RECONSTRUCTION AND VISUALIZATION OF NEURITES IN OPTICAL MICROSCOPY BRAIN... 4955

Fig. 3. Architecture of the 3D CNN segmentation model. It consists of contracting and expansion paths. The network input is a batch of grayscale
images and its output is a probability map of the same size. Each output pixel represents the probability of the input pixel being part of a neurite.

the segmented results to create an accurate ground-truth set path through a dense stage block to fuse the output from
using the Segmentation Toolbox (explained in Sec. 6). the previous stage layer of the same dense block with the
corresponding up-sampled output of the lower dense block.
The re-designed skip pathways enable more similar seman-
5 SEGMENTATION
tic level feature maps between the encoder and decoder,
The neuron segmentation process consists of two steps: Neu- making the optimization problem easier, and the use of
rite segmentation and soma segmentation. We segment neu- RSU blocks enables a more efficient extraction and aggrega-
rites using our designed model. For segmenting somas, we tion of intra-stage multi-scale contextual features.
use a thresholding technique combined with the segmented Architecture of the Network. Our network consists of 21
neurites as a guide. This section describes the neuron seg- stages. Each stage is formed from an RSU block with a spe-
mentation process, including the dataset and the proposed cific height. We represent each stage as RSUi;j L , where i is
model, and soma segmentation technique. the index of the down-sampling layer along the encoder of
the big U-structure, j is the index of the up-sampling layer
5.1 Data along the decoder of the big U-structure, and L is the num-
We created a ground-truth dataset of WFM image stacks for ber of encoder layers in the U-Net-like structure of the RSU
training and testing purposes. We cropped six regions of size block, except in the stage with L ¼ 2. Since the resolution of
25512512 randomly from the brain medial septum region feature maps in the last encoding stage is relatively low, fur-
with the size of 254800033000 and annotated using the ther down-sampling results in the loss of useful informa-
pipeline explained in Sec. 4.2, to be used as ground-truth. tion. Therefore, in the last encoder stage (L ¼ 2), we only
This dataset covers many variations in neuron morphology use dilated convolutions, having the same resolution as its
due to the large size of the cropped regions and the high den- input feature maps. As shown in Fig. 3, our network con-
sity of neurons. The segmentation model was trained, tested, sists of a sub-network of encoder stages which is the back-
and validated on 3, 2, and 1 image stacks, respectively. We bone of the network, a sub-network of decoder stages, skip
also conducted a qualitative analysis to evaluate the accuracy pathways, and a saliency map fusion module. The fusion
of reconstructed neurites, performed by domain experts. module is responsible for generating the probability map.
Using our trained model, we segmented neurons for eight The network generates five side output saliency probability
image stacks (never before seen by the model) with sizes 5
maps Sside 4
, Sside 3
, Sside 2
, Sside 1
, Sside from stages RSU25;0 , RSU34;1 ,
varying between 25512512 and 2110241024 randomly RSU43;2 , RSU52;3 , RSU61;4 by a 33 convolution layer, up-sam-
cropped from the medial septum and cortical sections of five pling layer and a sigmoid function, and five top output
different brains. 1 2 3 4 5
saliency probability maps Stop , Stop , Stop , Stop , Stop from
stages RSU70;1 , RSU70;2 , RSU70;3 , RSU70;4 , RSU70;5 by a 11 con-
5.2 Proposed Network for Neurite Segmentation volution layer and a sigmoid function. Then, the final
We propose a nested encoder-decoder network with re- saliency map Sfuse is generated by concatenating all side
designed skip pathways for connecting the encoder and and top output saliency maps, followed by a 11 convolu-
decoder sub-networks (similar to U-Net++ [42]) for a pre- tion layer and sigmoid function.
cise semantic segmentation, and stacking U-structure (simi- RSU Block. Each block consists of 3 main components: (1)
lar to U2 -Net [43]) for salient object detection. Fig. 3 shows Input convolution layer, transforming the input feature map
an overview of the network architecture. Our network is a x to an intermediate map F ðxÞ for local feature extraction. (2)
two-level nested U-structure consisting of 21 stages. Each U-Net based encoder-decoder structure with input of the
stage is configured with a ReSidual U-block (RSU) intro- intermediate feature map F ðxÞ that extracts the multi-scale
duced by [43]. In our network, the feature maps follow a contextual information UðF ðxÞÞ, where U represents the
Authorized licensed use limited to: University of Sargodha. Downloaded on November 06,2023 at 10:01:35 UTC from IEEE Xplore. Restrictions apply.
4956 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 28, NO. 12, DECEMBER 2022

U-Net-like structure. The structure with larger height results


in richer local and global features. (3) Residual connection fus-
ing the output from the local features F ðxÞ and multi-scale
features UðF ðxÞÞ using the formula: F ðxÞ + UðF ðxÞÞ. RSU
blocks enables capturing of the fine details that might be lost
by direct up-sampling with large scales.
We define RSU blocks as GL ¼ RSU-LðCin ; M; Cout Þ,
Fig. 4. (a) Neuron structure. (b) Region cropped from the brain medial
where Cin ; Cout ; M are respectively the sizes of input chan- septum. A soma is a bright green blob.
nel, output channel, and RSU internal layers channel. L is
the number of layers in the encoder of the RSU block,
except for L ¼ 2. As mentioned earlier, due to the low res- entropy loss to calculate loss of each output (‘),
olution of feature maps in the last encoding stage, we use defined as:
dilated convolutions instead of down-sampling. So, in this
stage, the defined L do not denote the number of layers in X
M
‘¼ yi logðy0i Þ þ ð1  yi Þlogð1  y0i Þ (4)
the encoding stage. We formulate the skip pathways as i¼0
follows:
( where M is the number of samples, yi is the label for
i1;j
GL ðRSULþ1 Þ j ¼ 0;
RSUL ¼i;j
i;m j1 iþ1;j1 i, and y0i is the predicted label for sample i. It takes
GL ð½½RSUL m¼0 ; UPðRSUL1 ÞÞ j > 0; 186 min. to train the network with defined parame-
(1) ters using the training set of 540 images with a size
where function UPð:Þ denotes an up-sampling layer, and ½  of 512512.
denotes the concatenation layer. Stages at level j ¼ 0 receive  Output Data. The model generates images of the
one input from the previous layer. Stages at level j > 0 same size as the input (512512). Each output pixel
receive j þ 1 inputs, where j input(s) are the output(s) of the represents the probability of the input pixel being
previous j stage(s) in the same pathway and the last input is part of a neurite, showing the network confidential-
the up-sampled output from the lower pathway. ity in classifying the pixel as foreground (with close
Training. The training regiment is as follows: to zero for background pixels). Weak neurites have
lower probability than strong ones. This probability
 Input Data. We trained the network with 512512 map enables experts to study neuron morphology in
images. To add more variation in neuron morphology, detail, as they can study weak and strong neurites in
we applied elastic deformations (e.g., random rotation one whole structure or independently.
and scaling), resulting in 540 images of size 512512.
To generate a universal model for segmenting neu- 5.3 Soma Detection
ron morphology that encompasses images captured Soma is the spherical neuron part containing the nucleus.
from different sections of the brain using any imaging Fig. 4a shows the neuron structure. As shown in Fig. 4b,
techniques, we adjust the pixel intensities through soma has higher contrast than neurites. This feature allows
Eq. 2: us to segment somas using thresholding techniques without
 g the need to create a ground-truth dataset, training networks,
Ik  minIk
Adjustedk ¼ (2) and spending time and resources. Using this feature, we
maxIk  minIk
extract the somas from the image stacks by thresholding the
voxel intensities using multi-level otsu’s method [44]. We
where k is the image index in the image stack, Ik is
consider two threshold levels (t1 , t2 ) to classify the image
the intensity array of the kth image in the stack,
into three classes; the background voxels are in the range 0
minIk and maxIk are respectively the minimum and
to t1  1, the neurite voxels most probably are in the range
maximum intensities in the corresponding image,
t1 to t2  1, and the soma voxel (which are the brightest vox-
and g is the intensity adjustment parameter. Using
els) are in the range t2 to 256. Some strong neurites might
Eq. 2, we map the intensity range to [0, 1], leading to
have high contrast similar to somas resulting in misannota-
a universal WFM model.
tion. We use the generated segmentation mask by the model
 Parameters. We trained the network using Adam
consisting of dendrites and axons to remove the segmented
optimizer with a batch size of 4 for 2D training for
neurites that are mistakenly segmented as somas.
100 epochs. The initial learning rate was set to 0.001.
We define the loss function as:
6 SEGMENTATION TOOLBOX
X
N
NeuroConstruct provides a Segmentation Toolbox for seg-
L¼ wntop ‘ntop þ wnside ‘nside þ wfuse ‘fuse (3)
menting and visualizing neurons in small crops of WFM
i¼1
image stacks, as our neuroscientists required a toolbox that
where N ¼ 5, ‘nside and ‘ntop denote the loss of the side visualizes neurons in fine detail with the highest possible
n
output saliency map Sside and the top output saliency resolution enabling them to study complex brain morphol-
n n n
map Stop , wside and wtop denote their weights, and ‘fuse ogy. As shown in Fig. 5 and the accompanying video, the
is the final fusion output saliency map Sfuse with its Toolbox offers three options and various features for efficient
corresponding weight wfuse . We use binary cross- and accurate neurite segmentation: (1) annotating neurites
Authorized licensed use limited to: University of Sargodha. Downloaded on November 06,2023 at 10:01:35 UTC from IEEE Xplore. Restrictions apply.
GHAHREMANI ET AL.: NEUROCONSTRUCT: 3D RECONSTRUCTION AND VISUALIZATION OF NEURITES IN OPTICAL MICROSCOPY BRAIN... 4957

surrounding the selected neurite and applies the Pyramidal


Lucas-Kanade optical flow algorithm [45]. Then, it finds the
most similar regions (based on pixels intensities) to the cor-
responding neurite in the following image slices. Roberts
et al. [46] hired a similar approach for estimating optimal
volumetric pathway through the image stack connecting
the computed 2D segmentations using dense optical flow.
Their approach requires 2D segmentation on the first and
last image slices to compute the minimum distance from
pixels in these slices, while we compute the neuron path
using the user-specified segmentation on the first slice and
find the most similar regions in the following slices.
Snap. A neurite might be visible in several image slices
with different levels of contrast and brightness. However, a
pixel is considered a neurite if it is unblurry and has higher
contrast than pixels in adjacent slices. In some cases, due to
the low contrast between neurites and the background and
Fig. 5. Segmentation Toolbox screenshot. The main menu has four image blurriness, it is challenging to select the correct slices
options: File, Analyze, Process, Help. File option provides loading image
stacks and/or annotations, and saving annotations (shown in red). Ana- containing the neurite. The Toolbox helps the user to find
lyze option includes the Skeletonize feature. Process options are 2D and those slices. The user specifies the ROI containing the corre-
3D segmentation and registration of the image stacks. Help menu has sponding neurite in x-y plane and clicks on the neurite in
instructions on Toolbox use. (a) 2D cross-sectional view of the volume in
x-y plane. (b) 2D cross-sectional view in x-z plane. (c) 2D cross-sectional
one of the slices. The Toolbox searches for the sharpest point
view of the volume in y-z plane. (d) 3D volume of a WFM image stack. with the highest contrast in the specified slices and selects
(e) Area selection frame to specify an ROI to focus on by adjusting corre- the 2D region with the highest intensity in a virtual cylinder
sponding sliders. (f) Brush options frame including size, color, and eraser. perpendicular to the view plane from the corresponding
(g) Display options frame containing gamma value selection sliders for
2D and 3D views and annotation display options. (h) Annotation functions point with a brush size diameter.
frame, containing the helper functions for neurite annotation, including 3D Skeleton. The Toolbox employs user’s annotation to
snap and optical flow features. generate a 3D skeleton (cf. [47]). The algorithm defines an
octree over the annotations and examines 333 neighbor-
from scratch, (2) loading an annotation file and refining the hood of voxels. It iteratively proceeds until only strips of
annotations, (3) selecting NeuroConstruct Segmentation voxels are left. This method outperformed various methods
method and proofreading the results. we tested for 3D skeleton generation, as they failed to create
the skeleton of weak neurons with thin regions.
6.1 Visualization of Image Stacks Gamma Correction. The user can change the contrast of 2D
views using the corresponding slider. It transforms linear
The Toolbox provides four simultaneous views of the image
color mapping to a non-linear space and adjusts the contrast
stacks to assist neuroscientists in effectively and confidently
using the user-specified gamma values.
mark neurites in the raw WFM volume: The first three are
TF.The user can change the contrast and color of the ren-
2D cross-sectional views in the x-y, x-z, and y-z planes. The
dered volume using a slider. To adjust the rendered volume
fourth view is rendering of the 3D volume and iso-surface
contrast, the Toolbox uses the following novel TF:
of annotated neurites. These four views provide a compre-
hensive exploration of the image stacks and no limitation in  g ¼ 0: Defining n bins with equal length for the TF,
visualizing and annotating any structure. where n is the number of bins.
 0 < g  1: Breaking the TF into bins with length
6.2 Annotation Features defined as ai ¼ 2g ai1 , where ai is the ith bin, and
P n
The user can annotate the neurites and somas with two dif- i¼0 ai ¼ 1.
ferent colors, red and black. When the user draws over any We empirically found that n ¼ 20 works well on our data.
2D cross-sectional slice, all views are updated simulta- We provided our experts with our TF and gamma correction
neously to follow the user’s drawing. The Toolbox allows methods applied to the same image stacks. After a thorough
the user to zoom in and out in all views. The user can spec- comparison between these techniques, they remarked that
ify an ROI to focus on. In the 2D views, the Toolbox bounds our TF outperforms gamma correction in terms of neurite
the selected ROI with a blue rectangle, and in the 3D view, visual representation in blurry images with low contrast.
it only renders the specified ROI. If a user prefers to manu-
ally annotate an image stack from scratch or manually refine 6.3 Visualization of Automatic Segmentation
an automatically segmented stack, the Toolbox offers novel The primary information neuroscientists expect to visualize
features to ease the manual annotation process. We describe in brain WFM volumes are neuronal structures – namely,
below some of these functions: tubular-shaped neurites and blob-like somas. The inherent
Optical Flow. This feature helps the user to annotate neu- challenges of blurred WFM data and the absence of distinc-
rites more efficiently. As shown in Fig. 1, a neurite in x-z tive set of voxel intensity values that can differentiate fore-
and y-z planes is a circular bright region. When the user ground (neuronal data) from background (blurring artifacts
clicks on a neurite, the Toolbox finds the smallest 2D region and brain tissue) makes the task of adjusting visualization
Authorized licensed use limited to: University of Sargodha. Downloaded on November 06,2023 at 10:01:35 UTC from IEEE Xplore. Restrictions apply.
4958 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 28, NO. 12, DECEMBER 2022

To visualize possible neurite structures in the raw volume


with scores below the threshold confidence, we offer the
hybrid fusion mode that simultaneously renders iso-surfaces
extracted in the structural mode along with selected voxels
from raw volume, using a 2D TF that represents the raw vol-
ume scalar intensity versus segmentation confidence score.
Users can draw a rectangular region on the 2D TF for direct
volume rendering (DVR) of the selected voxels, as in Fig. 6,
and select points on the 2D TF to render iso-surface of the
selected voxels. By observing domain experts, we deter-
mined to use structural rendering along with DVR with the
2D TF as a default preset, as it is more effective to select an
entire voxel range. One limitation of such a correlation is that
Fig. 6. Neuronal data hybrid visualization. Top row shows the cortical
region and bottom row shows the medial septum region. Green repre- we assume that the model outputs a level of reasonable confi-
sents iso-surface rendering of the automatic segmentation result, and dence for neuron voxels, compared to background and noise
blue represents direct volume rendering of raw data using 2D TF map- voxels. A future work to overcome this limitation can be
ping. For both datasets: (a) Structural mode: iso-surface rendering of the model re-training using a refined set of annotations gener-
high-confidence voxels. (b)-(c) Hybrid Fusion mode: the iso-surfaces in
(a) combined with direct volume rendering of the raw data using two dif- ated by this hybrid visualization mode.
ferent ranges selected on the 2D TF mappings shown on the left.
7 REGISTRATION
parameters or directly applying surface rendering techni-
ques to the raw volume tedious and disfavored by neuro- In an ideal scenario, undisturbed imaging of biological sam-
scientists. Finding optimal visualization parameters for ples would allow for the best reconstruction of cholinergic
rendering both high- and low-intensity neurites while sup- axonal projections. However, current experimental methods
pressing similar intensity-valued blurring artifacts becomes for imaging using a WFM require the physical sectioning of
more complicated in densely populated samples. Thus, we brain specimens. To this end, NeuroConstruct presents a
use the output of our segmentation pipeline as an integral coarse-to-fine pipeline for the registration and alignment of
component for visualizing neurons. neuronal fibers across the sliced sections.
The robustness of deep learning models greatly depends
on the accuracy and availability of sufficient ground-truth 7.1 Coarse Registration
training data. For training, we attempted to accurately anno- In neuroscientists’ workflow, physically sectioned brain sli-
tate neurites of varying intensity, trajectory, morphology, ces specimen are placed on a glass-slide at arbitrary posi-
and biological labeling. However, using real data, there is tions and orientations and imaged separately using WFM.
large biological diversity and within biologists’ workflow Therefore, an intuitive initial step is to coarsely register the
there are many experimental variations from data prepara- individual brain section volumes. For coarsely registering
tion methods to imaging modality input parameters and the image stacks, we tried several methods (e.g., intensity-
conditions. Thus, in a practical scenario, preparing a training based [23], [24], [25]) that perform well in registering slices
set covering all variations is an ambitious goal to achieve. at a coarse level, but they do not align neurites within slices
In NeuroConstruct, we combine the ability to use visuali- at a finer, morphological scale. We also attempted to adapt
zation methods to render essential features within a dataset the montage/mosaic stitching alignment (primarily used to
and the binary mask from our segmentation model as two align microscopy tiles in x-y direction) [28], [29]. Our
complimentary inputs to what we term as, hybrid visualiza- method draws insights from the tensor voting method [29].
tion. Using hybrid visualization, users can recover missing We take as input a series of depth-adjacent sections and
neurites by exploring the correlation between the raw vol- use the tissue structure to estimate a global rigid-body
ume scalar values and the segmentation confidence score transformation between adjacent sections. The series of 3D
for each voxel, determined by our model, which we repre- sections have to be transformed into a single reference coor-
sent as a 2D TF. Given a 3D image stack, NeuroConstruct dinate system. So, as an initialization step, users need to
first segments each image stack, and then combines them to specify a reference brain section. Using the referenced sec-
create the final 3D segmentation mask corresponding to the tion, we determine the slice representing the tissue largest
output. The computed segmentation mask is a volume, spatial extents and consider that slice image space as the ref-
where each voxel represents a confidence score between 0 erence coordinate system for subsequent transformations.
and 1, where 0 is background and 1 represents a neurite For estimating rigid transformations between sections,
with full confidence. Using this score, we provide two visu- the interfacing z-slices from each depth-adjacent section pair
alization modes, structural and fusion (see Fig. 6 and are used as representatives for the coarse registration. Some
Fig. S1, which can be found on the Computer Society Digital approaches suggest using slices with the highest contrast
Library at http://doi.ieeecomputersociety.org/10.1109/ within the 3D sections [30], [48], as the end slices usually
TVCG.2021.3109460. in Supplementary Material). The struc- have low signal values. However, since NeuroConstruct
tural mode renders the segmentation mask as an iso-sur- registers several sections, we expect changes in the brain
face. Using a slider, users interactively choose a minimum outer boundary across the sequence. Thus, to avoid cascad-
confidence score as the minimum threshold value for recon- ing errors, we choose interfacing slices with the smallest vari-
structing an iso-surface using marching cubes. ation in the outer boundary between adjacent sections.
Authorized licensed use limited to: University of Sargodha. Downloaded on November 06,2023 at 10:01:35 UTC from IEEE Xplore. Restrictions apply.
GHAHREMANI ET AL.: NEUROCONSTRUCT: 3D RECONSTRUCTION AND VISUALIZATION OF NEURITES IN OPTICAL MICROSCOPY BRAIN... 4959

Fig. 7. Steps of our neurite alignment (fine registration) in two depth-adjacent ROI sections. (a) We determine a subset of neurites whose trajectory
suggests continuity beyond the interface of its section. We then estimate a direction of its propagated trajectory into the depth-adjacent section and
define an ellipsoidal region around its end-point, representing a 3D space of possible continuing locations, shown in the top of (b). Then, we use ICP
on the ellipsoids point cloud representation to estimate a rigid-body transformation, as shown in the bottom of (b). Finally, we present the user with
an interface to verify the results and correct for any misaligned neurites (c).

Finally, a rigid-body transformation is estimated using  Let V1 and V2 be two adjacent sections, where the
Mattes mutual information (MI) [49] with a regular-step z-slices order from V1 to V2 are in anterior to poste-
gradient descent optimizer. MI for intensity-based registra- rior direction.
tion uses joint probability distribution of pixel samples  Let Vpost 1 and Vant
2 be the z-slices corresponding to the
between images and define a mapping for a set of pixels posterior sub-section of V1 and the anterior sub-sec-
based on value similarity by reducing the entropy between tion of V2 , respectively. In our implementation, the
the distributions, signaling the images are likely better extracted thickness of Vpost
1 and Vant2 are set to be
aligned. This method is well-suited for our coarse alignment 5 mm.
as the brain tissue outer-boundary or internal regions vary  Let Vo be the region defined by Vpost 1 [ Vant
2 . This is
in perimeter across the anterior to posterior, and geometry- essentially the interfacing region V1 and V2 .
based registration will not yield effective results. In apply- Broadly, we determine a subset of neurites from V1 and
ing MI, we adopted a multi-resolution approach to avoid V2 whose trajectory within the section suggests continuity
reaching a local minimum due to noise and tiling artifacts beyond the section interface and establish a correspondence
in a sparsely labeled sample. We doubled the step size for between the candidates with a similar trajectory in the
each subsequent higher resolution to avoid the optimizer depth-adjacent sections. Using this information, we solve the
from focusing on progressively smaller regions. alignment problem in Vo , by maximizing overlaps between
linearly extrapolated trajectories of the neurites. Fig. 7 pro-
7.2 Fine Alignment vides an illustration of our fine alignment algorithm.
Following coarse registration, we utilize the coherency First, for each two serial sections, we locate neurites with
between neurites geometric structures across depth-adja- trajectories propagating into the other section and estimate
cent sections. Our fine alignment method has 3 major a propagated trajectory direction. We first use the segmenta-
steps: (1) ROI selection, (2) automatic neurite trajectory tion mask volume (see Sec. 5) to find a 3D line segment that
propagation, and (3) automatic trajectory alignment. The passes through the connected-component voxels of each
main idea is to adopt a feature-based registration to maxi- extracted propagating neurite from Vpost 1 or Vant
2 . Next,
mize the neurites morphological continuity in neighboring using the line segments, we determine propagation direc-
brain sections. tion as a vector vi;j , for each propagating neurite. Since the
ROI Selection. A critical limitation of micrometer resolu- goal is to extrapolate the propagating trajectory, we did not
tion microscopy images of brain samples is its large spatial consider the entire neurite in the section (V1 or V2 ) but
extent. For a computationally faster and memory-efficient rather limited to the z-thickness defined for Vpost1 and Vant
2 .
implementation of the fine alignment method, we ask the Finally, each neurite trajectory propagation in the over-
user to mark an ROI. lapping 3D space Vo is defined by an ellipsoidal region
Neurite Trajectory Propagation. Common methods for around its end-point close to the section interfacing slice,
registering microscopic images introduce fiduciary land- using the estimated vector vi;j as the major axis (see Fig. 7).
marks during image acquisition, which are then registered The choice of an ellipsoid to represent the trajectory propa-
to reconstruct the complete volume. However, this adds gation is to accommodate possible neurite signal-loss and
complexity to neuroscientists’ workflow. In our fine align- non-rigid deformation from physically slicing the brain.
ment approach, we have developed a novel method that Because of hydration and dehydration of sample prepara-
uses linearly extrapolated neurites trajectories to infer tion, in addition to the uniform brain growing and shrink-
their corresponding continuity, beyond the section slicing ing, tissue distortion may occur. Therefore, we center the
interface. This correspondence between neurites is used to ellipsoid at the end-point of a propagating neurite to maxi-
estimate the necessary transformation parameters that mize the search space for estimating a transformation that
spatially align depth-adjacent sections. To formally define aligns depth-adjacent sections. The ellipsoid major axis is
our approach: aligned parallel to the estimated trajectory propagation
Authorized licensed use limited to: University of Sargodha. Downloaded on November 06,2023 at 10:01:35 UTC from IEEE Xplore. Restrictions apply.
4960 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 28, NO. 12, DECEMBER 2022

vector vi;j , and the other two axes perpendicular to it. In


our implementation, the ellipsoid major axis length is
twice the thickness defined for Vpost 1 and Vant
2 and the other
two axes length equals to the neurite thickness. This length
ensures that the ellipsoid propagates to the depth-adjacent
section.
Trajectory Alignment. After ellipsoid generation, the final step is to maximize the alignment of the propagating neurites. We translate this problem into a simpler point-cloud registration, where each point on the ellipsoid surface represents a potential location for the continued neurite trajectory. By discretizing the ellipsoidal regions as 3D point clouds and estimating a rigid-body transformation using ICP, we apply the transformation to the entire section and concatenate it with its depth-adjacent section to create an aligned sub-volume.

Each potential correspondence measure $\gamma$ in the ICP estimation is assigned using $\gamma_{m,n} = \omega_1 D_{m,n} + \omega_2 |\theta_{1,m} - \theta_{2,n}|$, where $m$ and $n$ are the indices of the trajectory vectors in the two sets, $D$ is the Euclidean distance between the start points of the two vectors, and $\omega_{1,2}$ are weights. $\theta_{i,j}$ is determined as the projection angle of the vector $\vec{v}_{i,j}$ on the x-y plane (top in Fig. 7b).

To establish a potential one-to-one correspondence between the trajectories, we apply a weighted bipartite matching approach using the Hungarian algorithm [50] that minimizes the cost across all of the potential assignments. Using these correspondences, a transformation is estimated based on the method of Arun et al. [26] that maps the propagated trajectories from one section to the other using $p_{fixed}^{(i)} = R\,p_{moving}^{(i)} + T$ for $i \in \{1, \ldots, N\}$ points, where $p_{fixed}^{(i)}$ is the $i$th fixed point, $p_{moving}^{(i)}$ is the $i$th moving point, $R$ is a $3 \times 3$ rotation matrix, and $T$ is a $3 \times 1$ translation vector. In our implementation, we consider $V_1$ to be the fixed section if its position is anterior to the reference slice (determined in the coarse registration step, Sec. 7.1) and the moving section otherwise.
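The following sketch illustrates this matching-then-fitting step under stated assumptions (the helper name, the weight defaults, and the precomputed start points and x-y projection angles are all illustrative): it builds the weighted cost matrix, solves the assignment with the Hungarian algorithm, and recovers R and T with the SVD-based least-squares fit of Arun et al. [26].

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def match_and_fit(P1, theta1, P2, theta2, w1=1.0, w2=0.5):
    """P1 (N,3), P2 (M,3): trajectory start points of the fixed and moving
    sets; theta1, theta2: their x-y projection angles. w1, w2 are
    illustrative weight values."""
    # Cost gamma_{m,n} = w1 * Euclidean distance + w2 * angular difference.
    D = np.linalg.norm(P1[:, None, :] - P2[None, :, :], axis=2)
    A = np.abs(theta1[:, None] - theta2[None, :])
    rows, cols = linear_sum_assignment(w1 * D + w2 * A)

    # Arun et al.: least-squares rigid fit of the matched pairs via SVD.
    X, Y = P2[cols], P1[rows]          # moving -> fixed correspondences
    cx, cy = X.mean(axis=0), Y.mean(axis=0)
    H = (X - cx).T @ (Y - cy)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cy - R @ cx                    # so that p_fixed = R @ p_moving + T
    return R, T
```

In the full method this estimate would be refined iteratively within the ICP loop over the discretized ellipsoid point clouds before being applied to the entire section.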
8 REGISTRATION TOOLBOX

NeuroConstruct offers a Registration Toolbox which provides coarse and fine alignment registration and visualization of brain sections. We conducted a study to find the proper way of visualizing large image stacks with GBs of data. We provided our neuroscientists with two visualization options: (1) visualization of the image stack's maximum intensity projection (MIP), and (2) visualization of the raw image stack combined with a slider for moving between the slices. Our experts preferred the MIP visualization as they could verify the registration result faster and more easily.

The Registration Toolbox offers coarse alignment of sections and fine alignment of user-selected ROIs, and it visualizes the base, moving, and transformed moving images in the corresponding viewers (see Fig. 8). The user can then manually refine the registration parameters using the provided translation and rotation options. Rendering the image stack MIP speeds up the manual alignment process, as the transformation is only applied on the MIP images, and enables an interactive user experience. To accelerate the computation, the final registration parameters are applied on the whole image stack in two stages: (1) before starting fine alignment, and (2) while saving the user-requested registered images.

Fig. 8. Screenshot of the Registration Toolbox. (a) Base image viewer. (b) Moving image viewer. (c) Coarse alignment frame, showing the result. Users can adjust image contrast using the provided sliders and refine the rotation or translation parameters to reach the desired registration. Using the provided buttons at the bottom, users can hide/show the base, moving, and transformed moving images to verify the registration result more precisely. (d) Fine alignment frame, showing the result on the user-selected ROI. It provides the same features as the coarse-alignment frame, including manual alignment parameters and display options.

9 IMPLEMENTATION

NeuroConstruct consists of several components: the Segmentation and Registration Toolboxes, segmentation training models, coarse and fine alignment registration, and structural and hybrid visualization modes. The Segmentation and Registration Toolboxes, the fine alignment algorithm, and the structural and hybrid visualization modes were implemented using Python, VTK [51], and Qt libraries. The coarse alignment module was implemented in C++ using ITK [52] and VTK [51]. The CNN model was implemented using Python and Keras, and trained and tested on a desktop with an NVIDIA Quadro RTX 6000 GPU, which was also used for all NeuroConstruct implementations.

10 EVALUATIONS AND RESULTS

10.1 NeuroConstruct Evaluation

We were in close contact with our neuroscientists to design our system, add features, and refine functionalities based on their needs. These domain experts (co-authors) thoroughly evaluated the system using various scenarios. As the first use case, they followed the 4-step pipeline: coarse-aligning and fine-aligning the brain sections through the Registration Toolbox, then visualizing, and automatically and manually segmenting the neurons through the Segmentation Toolbox, to reconstruct whole WFM brain sections acquired from different regions of the brain. In other use cases, they used each component individually. After a careful review of NeuroConstruct, our neuroscientists found it "exclusive" and "specialized". They said "the framework provides an end-to-end solution for segmentation, registration, and visualization of serially collected WFM images. Upon image acquisition, data can be registered using the Registration Toolbox, and then each individual image can be segmented for a clean representation of the signal to the background. At each of these steps, the critical component is an automated solution that can be entirely modified by user input. These user-updated segmentations on the registered volumes essentially enable tracing individual projections across a volume, facilitating the study of cholinergic neurons
affected by Alzheimer's disease." They also evaluated each component and feature of the toolboxes separately (see Secs. 10.2 and 10.3).

10.2 Segmentation Toolbox Evaluation

Our neuroscientists used the Segmentation Toolbox to annotate and trace projections within WFM sections. They evaluated the Toolbox, including the hybrid volume rendering and annotation features, and compared the Toolbox with available tools. They believe that the segmentation and volume rendering offer incomparable tools for neurite segmentation and subsequent visualization. Several features are unique to our Toolbox that our experts have not found in other programs they have used (e.g., ImageJ [53], FluoRender [36]).

Hybrid Volume Rendering Evaluation. Using the hybrid volume rendering, users visualize and have ultimate control over the auto-segmentation result. The Toolbox then provides a user-friendly interface for identifying signals from the background and improving the segmentation. Our experts believe that "both of these features allow users to visualize a full resolution image, which is critical during segmentation." Overall, they found this visualization ideal, as it overlaid the segmentation on the raw data, allowing users to adjust and visualize the segmentation based on what they desire to visualize in the raw data. They also remarked that "this allowed for segmenting different neuron types, initially strongest, and the option to include weaker ones."

Annotation Features Evaluation. The Toolbox offers an option to control the contrast of all views. Our experts found this feature helpful, saying that "the gamma slider allowed us to incorporate all fibers weak and strong, or just to annotate and understand morphology or trajectory in the strongest fibers." The sliders allow the user to choose an ROI to focus on and visualize it across X, Y, and Z individually. To improve upon or erode the annotation, the Toolbox provides a simple, user-friendly solution, a paintbrush-like feature. Our experts said "this intuitive tool makes it exceptionally easy for users to improve the segmentation on their own in fluid strokes, following a projection along its path. The ability to draw, trace, and erase on a slice by slice and pixel by pixel basis provided us complete control over how refined or simple we wanted the annotations to be, and the brush size option allows us to carefully follow and mimic the projection morphology and path." Our experts expressed their interest in snap by saying that "the toolbox offers a smart-adjustment to the simple paintbrush concept by updating the view as you draw to the point where the signal is sharpest for every click. This is especially helpful as projection paths in real data are hardly ever straight or within the same optical section but instead come in and out of view, making tracing them challenging!" Using the updated segmentation, a skeletonized image can be generated and overlaid onto the 3D rendering for a simplified, clean visualization. Our experts remarked that "while the annotation feature allowed for visualization of neuron morphology, the skeleton view allowed for a simplified way to visualize neurite trajectory. These two features worked in complement to enhance 3D visualization."

10.3 Registration Toolbox Evaluation

Our experts registered and visualized brain sections using the Registration Toolbox and believe that it provides a unique solution for serial rendering of WFM images. The base and moving images are loaded in two separate frames for initial coarse alignment. Our experts said: "the loaded images offer a good deal of flexibility and can be loaded as a maximum intensity projection of a 3D image (image contains X, Y, and Z information) or a sub-stack of the XYZ image." The Toolbox then computes the coarse alignment and offers to move and rotate the image and to visualize the base and moving images individually. Our experts believe that "this allows users control over the ultimate coarse alignment solution." Upon completion of coarse alignment, users can save it and, ultimately, the new volume (two registered images). This is in our experts' interest as it provides an essential intermediate point at which further registration can be done with the next image in the series, or the new volume can be separately inspected. At this point, fine alignment of the images can be conducted upon selection of a small relevant ROI. Similar to coarse alignment, the fine alignment is visualized within the Toolbox and can be moved/rotated for ultimate control over the fine-alignment solution. The final aligned volume (coarse + fine) can be saved for further analysis.

Overall, our experts found a few features of this Toolbox to be a unique solution for neuroscientists. "First, the toolbox completes segmentation of the images as a step before registration, which provides a complementary set of information to use in visualization and analysis. Second, the down-sampling conducted by the toolbox is a temporary solution for ease of working with large images. Upon saving, the solutions for both coarse- and fine-alignment are applied to each Z step within the stack, thereby providing the end volume with full image resolution. Finally, the registration toolbox offers a great deal of flexibility to incorporate the user's preference into the registration with the use of arrow toggles to nudge the image as well as toggles to rotate the image. These components of the registration toolbox make it a powerful tool for registration of serial WFM images."

10.4 Segmentation Method Evaluation

Quantitative Analysis. Evaluation Metrics. We evaluated the trained model using several metrics. We define TP as pixels correctly segmented as neurite, TN as pixels correctly segmented as background, FP as background pixels segmented as neurite, and FN as neurite pixels segmented as background. We then compute precision: $TP/(TP+FP)$, recall: $TP/(TP+FN)$, f1-score: $2 \cdot (recall \times precision)/(recall+precision)$, accuracy: $(TP+TN)/(TP+TN+FP+FN)$, IoU: $TP/(TP+FP+FN)$, and Dice: $2TP/(2TP+FP+FN)$.
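As a minimal sketch (not the paper's evaluation code), these metrics can be computed directly from a thresholded probability map and a binary ground-truth mask; the 0.5 threshold is an illustrative choice, and the sketch assumes both classes are present so no denominator is zero.

```python
import numpy as np

def segmentation_metrics(pred, gt, threshold=0.5):
    """pred: probability map in [0, 1]; gt: binary ground-truth mask."""
    p = pred >= threshold
    g = gt.astype(bool)
    tp = np.sum(p & g);  tn = np.sum(~p & ~g)   # true positives/negatives
    fp = np.sum(p & ~g); fn = np.sum(~p & g)    # false positives/negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "precision": precision,
        "recall": recall,
        "f1": 2 * recall * precision / (recall + precision),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "iou": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }
```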
Testing Set. Our testing set contains 50 images of size 512×512, cropped from the medial septum and manually segmented using the pipeline in Sec. 4. The manual segmentations are binary masks, where 0 represents background and 1 represents neurite. The network output is a probability map within the [0,1] range, where neurite pixels have a higher probability than background pixels.

Speed. The trained model segments a 512×512 image in 0.28 seconds. The running time scales up linearly with the volume depth and with every 512-pixel increment in width and length, as each input 3D image stack is broken into 2D image stacks of size 512×512 with step sizes of 1, 512, and 512 over the z-, y-, and x-axis, respectively.
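A minimal sketch of this tiling scheme follows (the function name is illustrative, and it assumes the stack's y and x extents are multiples of 512; border padding is not shown):

```python
import numpy as np

def tile_stack(volume, tile=512):
    """Break a 3D stack (Z, Y, X) into 2D tiles of size tile x tile,
    stepping 1 slice over z and tile pixels over y and x."""
    tiles, origins = [], []
    Z, Y, X = volume.shape
    for z in range(Z):                               # step size 1 over z
        for y in range(0, Y - tile + 1, tile):       # step size 512 over y
            for x in range(0, X - tile + 1, tile):   # step size 512 over x
                tiles.append(volume[z, y:y + tile, x:x + tile])
                origins.append((z, y, x))            # kept for re-assembly
    return np.stack(tiles), origins
```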
TABLE 1. A Comparative Analysis of our Model Against State-of-the-Art Models.
Comparison. We applied several automatic neuron tracing techniques on our testing set, including APP2 [15], SmartTracing [16], Rivulet [54], TReMAP [55], and NeuroGPS-Tree [10], but they failed at reconstructing neuronal structures in the absence of a soma in densely-structured, low-resolution images. We compared the performance of our model against three state-of-the-art models, including U2-Net [43], UNet++ [42], and UNet 3+ [56], summarized in Table 1. The precision-recall curves for these models are given in Fig. S2, available online in the Supplementary Material. As seen, our model achieves the best overall performance in segmenting neurites. As shown in Fig. 9, our model detects strong and weak neurites and only misses extremely weak ones.

Fig. 9. The neuron segmentation of a densely structured WFM image of a brain medial septum section. Green, yellow, and blue colors represent reconstructed neurites, reconstructed somas, and ground-truth segmentation, respectively. (a) Raw image. (b) Reconstructed neurons overlaid on the raw image. (c) Reconstructed neurites and the ground-truth segmentation overlaid on the raw image.

We also evaluated our model against these three models on various tasks, including nuclei segmentation and liver-tumor segmentation, with larger datasets (consisting of thousands of images). According to all metrics, our network had better performance and more robustness on these various segmentation tasks, compared to the other networks.

Expert Analysis. Our model reconstructs the complete neuronal structure, including weak neurites blended into the background due to the low image contrast in densely structured image stacks. Our experts were provided with crops from different brain regions to evaluate how our segmentation improved neuron visualization as compared to raw data. Regardless of the brain region, our experts reported that there was significant recovery of fibers in the images as compared to raw data. Their typical workflow would involve creating MIPs of images and visualizing them in ImageJ [53]. Using ImageJ, despite increasing the gamma value in the raw data, they were unable to visualize some of the fibers that were appropriately segmented by our model. For evaluating the segmentation results generated by our model, they created MIPs of both the raw and segmented data and overlaid the images in ImageJ. Our experts were excited to see the dramatic improvement in recovery between raw and segmented images that could now be visualized using their typical workflow (ImageJ). See Fig. S3, available online in the Supplementary Material, for two examples.

10.5 Registration Method Evaluation

NeuroConstruct was used to register six individual serial sections from the mouse brain specimen (see Sec. 4). Each section was 16-bit, 21937 × 35616 pixels × 47 TIFF images, totaling approximately 55GB per stack. The coarse registration took 25 minutes. We cropped a 5000×5000 ROI of size 6GB. The automatic fine alignment process for the six ROI stacks took 33 minutes. In total, coarse-to-fine registration of a 300 μm sample took 68 minutes. We also tested a use-case for re-registration after correcting a misaligned stack, taking an additional 25 minutes. In Fig. S4, available online in the Supplementary Material, we show two focused neurons, studied by our experts for evaluation. Following are their discussions:

Utility of NeuroConstruct registration in domain science. Advances in genetic labeling have allowed us to selectively label specific neuronal subsets. The next frontier is to follow these neurons through 3D to allow complete reconstruction of single neurons and their entire arbors (dendritic branches, shown in Fig. 4). Without a proper reconstruction method, quantitative analysis of WFM data typically does not make claims about individual neurons and their trajectories, but instead about neuron populations. The normal workflow includes subsampling stacks across a region to get a density of labeled neurons and processes from anterior to posterior. These processes would not come from a single soma, but instead give a population estimate of labeled fibers. Using NeuroConstruct, we register serial slices, and reconstruct and isolate single neurons and their labeled arbors. Our registration now allows for a number of applications and analyses that were previously impossible with WFM data.

Quantitative analysis of arbor length. Prior to registration, we were only able to visualize somas and extensions from single stacks, and unable to determine the arbors' morphology through the anterior-to-posterior axis. Registration allows us to visualize the complexity of single neuron arbors through serial sections, and to compute the length of extensions. Registration helped to recover 5255 μm of process length from the selected sub-ROI in Fig. S4, available online in the Supplementary Material.

Quantitative analysis of arbor branch points. Using the information gathered from the registered dataset, another analysis of interest is the number and location of branch points from the soma. The morphological diversity of these branches could tune different aspects of cell communication. Prior to registration, analysis on a single stack would have shown one or two branches. Following the NeuroConstruct reconstruction process, it is evident that there are 24 branch nodes visible. The analysis determines the distance between branch points and the soma, and the arbor complexity.

Analysis of fiber integrity. A key feature in neuroanatomy study is the health and the dramatic deterioration and fragmentation of cholinergic fibers in specific brain regions during brain diseases, such as Alzheimer's. The fibers' integrity is critical to maintaining neural connectivity and function. Using the registered dataset and the ability to visualize weak fibers from the segmentation, experts can now visualize alterations to fiber morphology, including swelling and fragmentation. For example, we can determine how far from the soma the fragmentation began in affected neuron processes, and how much of the process remains intact and healthy.
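As a rough sketch of how such arbor measurements could be derived from a registered, skeletonized reconstruction (this is an illustrative analysis, not the authors' released code; the 26-connectivity neighbor count and the crude voxel-count length estimate are assumptions):

```python
import numpy as np
from scipy import ndimage

def arbor_stats(skeleton, voxel_size=(1.0, 1.0, 1.0)):
    """skeleton: binary 3D array of a traced neurite skeleton.
    Returns a crude total-length estimate and branch-point coordinates."""
    sk = skeleton.astype(bool)
    # Crude length estimate: number of skeleton voxels x mean voxel spacing.
    length = sk.sum() * float(np.mean(voxel_size))
    # Count the 26-connected neighbors of every skeleton voxel.
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0
    neighbors = ndimage.convolve(sk.astype(int), kernel, mode="constant")
    # Branch points: skeleton voxels with more than two skeleton neighbors.
    branch_points = np.argwhere(sk & (neighbors > 2))
    return length, branch_points
```

Distances from each branch point to the soma would then be taken along the skeleton (e.g., as a graph shortest path) rather than as a Euclidean distance.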
11 CONCLUSION AND FUTURE WORK

We presented NeuroConstruct, a novel end-to-end application for the segmentation, registration, and visualization of brain volumes imaged using WFM. NeuroConstruct combines deep learning 3D segmentation methods, a novel algorithm for coarse-to-fine registration of brain sections, and a hybrid approach to visualize recovered neurites. To generate a ground-truth set to train and test the model, we developed the Segmentation Toolbox, facilitating annotation of WFM image stacks. We evaluated NeuroConstruct quantitatively and qualitatively, along with experts' analysis. Our results show that NeuroConstruct outperforms the state-of-the-art in all aspects of the design, including segmentation, registration, and visualization of neurons in WFM images.

Our application is designed with the goal of helping neuroscientists study cholinergic neurons in WFM images, and it performs well in the reconstruction and visualization of WFM neurons. We also tested our segmentation model on confocal and two-photon microscopic images. Although our network was trained using WFM image stacks, it recovered most of the neurites in these stacks perfectly well. In the future, we will revise NeuroConstruct to create a universal application that reconstructs and visualizes images acquired by other imaging modalities. We will incorporate more features into our Segmentation Toolbox to increase the efficiency and simplicity of annotating WFM neurons. Note that the reconstruction quality can be further improved by training the model with a larger variety of neurites.

ACKNOWLEDGMENTS

This work was supported in part by the NSF under Grants CNS1650499, OAC1919752, ICER1940302, and IIS2107224, and in part by the Intramural Research Program of the NIH, NINDS, and NIMH.

REFERENCES

[1] T. H. Ferreira-Vieira, I. M. Guimaraes, F. R. Silva, and F. M. Ribeiro, "Alzheimer's disease: Targeting the cholinergic system," Curr. Neuropharmacol., vol. 14, pp. 101–115, 2016.
[2] S. Boorboor, S. Jadhav, M. Ananth, D. Talmage, L. Role, and A. Kaufman, "Visualization of neuronal structures in wide-field microscopy brain images," IEEE Trans. Vis. Comput. Graph., vol. 25, no. 1, pp. 1018–1028, Aug. 2018.
[3] E. C. Ballinger, M. Ananth, D. A. Talmage, and L. W. Role, "Basal forebrain cholinergic circuits and signaling in cognition and cognitive decline," Neuron, vol. 91, pp. 1199–1218, 2016.
[4] J. R. Glaser and E. M. Glaser, "Neuron imaging with Neurolucida: A PC-based system for image combining microscopy," Comput. Med. Imag. Graph., vol. 14, pp. 307–317, 1990.
[5] H. Peng, Z. Ruan, F. Long, J. H. Simpson, and E. W. Myers, "V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets," Nat. Biotechnol., vol. 28, no. 4, pp. 348–353, 2010.
[6] C. Magliaro, A. L. Callara, N. Vanello, and A. Ahluwalia, "A manual segmentation tool for three-dimensional neuron datasets," Front. Neuroinform., vol. 11, 2017, Art. no. 36.
[7] H. Ikeno, A. Kumaraswamy, K. Kai, T. Wachtler, and H. Ai, "A segmentation scheme for complex neuronal arbors and application to vibration sensitive neurons in the honeybee brain," Front. Neuroinform., vol. 12, 2018, Art. no. 61.
[8] H. Peng, F. Long, and G. Myers, "Automatic 3D neuron tracing using all-path pruning," Bioinformatics, vol. 27, pp. 239–247, 2011.
[9] M. H. Longair, D. A. Baker, and J. D. Armstrong, "Simple neurite tracer: Open source software for reconstruction, visualization and analysis of neuronal processes," Bioinformatics, vol. 27, pp. 2453–2454, 2011.
[10] T. Quan et al., "NeuroGPS-Tree: Automatic reconstruction of large-scale neuronal populations with dense neurites," Nat. Methods, vol. 13, pp. 51–54, 2016.
[11] Y. Wang, A. Narayanaswamy, C.-L. Tsai, and B. Roysam, "A broadly applicable 3D neuron tracing method based on open-curve snake," Neuroinformatics, vol. 9, pp. 193–217, 2011.
[12] S. Li et al., "Optimization of traced neuron skeleton using lasso-based model," Front. Neuroanat., vol. 13, 2019, Art. no. 18.
[13] S. Basu and D. Racoceanu, "Reconstructing neuronal morphology from microscopy stacks using fast marching," in Proc. IEEE Int. Conf. Image Process., 2014, pp. 3597–3601.
[14] J. Yang, M. Hao, X. Liu, Z. Wan, N. Zhong, and H. Peng, "FMST: An automatic neuron tracing method based on fast marching and minimum spanning tree," Neuroinformatics, vol. 17, pp. 185–196, 2019.
[15] H. Xiao and H. Peng, "APP2: Automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree," Bioinformatics, vol. 29, pp. 1448–1454, 2013.
[16] H. Chen, H. Xiao, T. Liu, and H. Peng, "SmartTracing: Self-learning-based neuron reconstruction," Brain Inform., vol. 2, pp. 135–144, 2015.
[17] R. Li, T. Zeng, H. Peng, and S. Ji, "Deep learning segmentation of optical microscopy images improves 3D neuron reconstruction," IEEE Trans. Med. Imag., vol. 36, no. 7, pp. 1533–1541, Jul. 2017.
[18] Z. Zhou, H.-C. Kuo, H. Peng, and F. Long, "DeepNeuron: An open deep learning toolbox for neuron tracing," Brain Inform., vol. 5, pp. 1–9, 2018.
[19] A. Fakhry, T. Zeng, and S. Ji, "Residual deconvolutional networks for brain electron microscopy image segmentation," IEEE Trans. Med. Imag., vol. 36, no. 2, pp. 447–456, Feb. 2016.
[20] M. Januszewski et al., "High-precision automated reconstruction of neurons with flood-filling networks," Nat. Methods, vol. 15, pp. 605–610, 2018.
[21] K. Luther and H. S. Seung, "Learning metric graphs for neuron segmentation in electron microscopy images," in Proc. IEEE Int. Symp. Biomed. Imag., 2019, pp. 244–248.
[22] D. Haehn et al., "Design and evaluation of interactive proofreading tools for connectomics," IEEE Trans. Vis. Comput. Graph., vol. 20, no. 12, pp. 2466–2475, Dec. 2014.
[23] P. Bajcsy, S.-C. Lee, A. Lin, and R. Folberg, "Three-dimensional volume reconstruction of extracellular matrix proteins in uveal melanoma from fluorescent confocal laser scanning microscope images," J. Microsc., vol. 221, pp. 30–45, 2006.
[24] T. Ju et al., "3D volume reconstruction of a mouse brain from histological sections using warp filtering," J. Neurosci. Methods, vol. 156, pp. 84–100, 2006.
[25] H. Liang, N. Dabrowska, J. Kapur, and D. S. Weller, "Whole brain reconstruction from multilayered sections of a mouse model of status epilepticus," in Proc. Asilomar Conf. Signals, Syst., Comput., 2017, pp. 1260–1263.
[26] K. S. Arun, T. S. Huang, and S. D. Blostein, "Least-squares fitting of two 3D point sets," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-9, no. 5, pp. 698–700, Sep. 1987.
[27] J. Luck, C. Little, and W. Hoff, "Registration of range data using a hybrid simulated annealing and iterative closest point algorithm," in Proc. IEEE Int. Conf. Robot. Automat., 2000, pp. 3739–3744.
[28] C.-L. Tsai et al., "Robust, globally consistent and fully automatic multi-image registration and montage synthesis for 3D multi-channel images," J. Microsc., vol. 243, pp. 154–171, 2011.
[29] M. Yigitsoy and N. Navab, "Structure propagation for image registration," IEEE Trans. Med. Imag., vol. 32, no. 9, pp. 1657–1670, Sep. 2013.
[30] S.-C. Lee and P. Bajcsy, "Trajectory fusion for 3D volume reconstruction," Comput. Vis. Image Understanding, vol. 110, pp. 19–31, 2008.
[31] V. J. Dercksen, H.-C. Hege, and M. Oberlaender, "The filament editor: An interactive software environment for visualization, proof-editing and analysis of 3D neuron morphology," Neuroinformatics, vol. 12, pp. 325–339, 2014.
[32] H. Chen, D. M. Iascone, N. M. da Costa, E. S. Lein, T. Liu, and H. Peng, "Fast assembling of neuron fragments in serial 3D sections," Brain Inform., vol. 4, 2017, Art. no. 183.
[33] H. Pfister et al., "Visualization in connectomics," in Scientific Visualization. Berlin, Germany: Springer, 2014, pp. 221–245.
[34] K. Mosaliganti et al., "Reconstruction of cellular biological structures from optical microscopy data," IEEE Trans. Vis. Comput. Graph., vol. 14, no. 4, pp. 863–876, Jul./Aug. 2008.
[35] M. Nakao et al., "Visualizing in vivo brain neural structures using volume rendered feature spaces," Comput. Biol. Med., vol. 53, pp. 85–93, 2014.
[36] Y. Wan, H. Otsuna, C.-B. Chien, and C. Hansen, "FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research," in Proc. IEEE Pacific Vis. Symp., 2012, pp. 201–208.
[37] J. Beyer, A. Al-Awami, N. Kasthuri, J. W. Lichtman, H. Pfister, and M. Hadwiger, "ConnectomeExplorer: Query-guided visual analysis of large volumetric neuroscience data," IEEE Trans. Vis. Comput. Graph., vol. 19, no. 12, pp. 2868–2877, Dec. 2013.
[38] M. Hadwiger, J. Beyer, W. Jeong, and H. Pfister, "Interactive volume exploration of petascale microscopy data streams using a visualization-driven virtual memory approach," IEEE Trans. Vis. Comput. Graph., vol. 18, no. 12, pp. 2285–2294, Dec. 2012.
[39] D. Haehn et al., "Scalable interactive visualization for connectomics," Informatics, vol. 4, 2017, Art. no. 29.
[40] Neuroglancer: WebGL-based viewer for volumetric data. Accessed: Oct. 12, 2020. [Online]. Available: https://github.com/google/neuroglancer
[41] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Interv., 2015, pp. 234–241.
[42] Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, and J. Liang, "UNet++: A nested U-Net architecture for medical image segmentation," in Proc. Deep Learn. Med. Image Anal. Multimodal Learn. Clin. Decis. Support, 2018, pp. 3–11.
[43] X. Qin, Z. Zhang, C. Huang, M. Dehghan, O. R. Zaiane, and M. Jagersand, "U2-Net: Going deeper with nested U-structure for salient object detection," Pattern Recognit., vol. 106, 2020, Art. no. 107404.
[44] P.-S. Liao et al., "A fast algorithm for multilevel thresholding," J. Inf. Sci. Eng., vol. 17, no. 5, pp. 713–727, 2001.
[45] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. 7th Int. Joint Conf. Artif. Intell., vol. 2, 1981, pp. 674–679.
[46] M. Roberts et al., "Neural process reconstruction from sparse user scribbles," in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Interv., 2011, pp. 621–628.
[47] T.-C. Lee, R. L. Kashyap, and C.-N. Chu, "Building skeleton models via 3D medial surface axis thinning algorithms," Graphical Models Image Process., vol. 56, pp. 462–478, 1994.
[48] U. Bagci and L. Bai, "Automatic best reference slice selection for smooth volume reconstruction of a mouse brain from histological images," IEEE Trans. Med. Imag., vol. 29, no. 9, pp. 1688–1696, Sep. 2010.
[49] D. Mattes, D. R. Haynor, H. Vesselle, T. K. Lewellen, and W. Eubank, "PET-CT image registration in the chest using free-form deformations," IEEE Trans. Med. Imag., vol. 22, no. 1, pp. 120–128, Jan. 2003.
[50] H. W. Kuhn, "The Hungarian method for the assignment problem," Nav. Res. Logist. Quart., vol. 2, no. 1/2, pp. 83–97, 1955.
[51] W. J. Schroeder, B. Lorensen, and K. Martin, The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics. New York, NY, USA: Kitware, 2004.
[52] L. Ibanez, W. Schroeder, L. Ng, and J. Cates, The ITK Software Guide, 1st ed. New York, NY, USA: Kitware, 2003.
[53] W. S. Rasband, "ImageJ: Image processing and analysis in Java," Astrophysics Source Code Library, record ascl:1206.013, Jun. 2012.
[54] S. Liu, D. Zhang, S. Liu, D. Feng, H. Peng, and W. Cai, "Rivulet: 3D neuron morphology tracing with iterative back-tracking," Neuroinformatics, vol. 14, pp. 387–401, 2016.
[55] Z. Zhou, X. Liu, B. Long, and H. Peng, "TReMAP: Automatic 3D neuron reconstruction based on tracing, reverse mapping and assembling of 2D projections," Neuroinformatics, vol. 14, pp. 41–50, 2016.
[56] H. Huang et al., "UNet 3+: A full-scale connected UNet for medical image segmentation," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process., 2020, pp. 1055–1059.

Parmida Ghahremani received the BS degree in computer engineering from the Sharif University of Technology, Iran. She is currently working toward the PhD degree in computer science at Stony Brook University. Her research interests include computer vision, biomedical imaging, deep learning, and computer graphics.

Saeed Boorboor received the BSc (Hons.) degree in computer science from the School of Science and Engineering, Lahore University of Management Sciences, Pakistan. He is currently working toward the PhD degree in computer science at Stony Brook University. His research interests include scientific visualization, biomedical imaging, and computer graphics.

Pooya Mirhosseini received the BSc degree in software engineering from the Sharif University of Technology and the MSc degree in computer science from Stony Brook University, where he was a research assistant with the Visualization Lab. He is currently a software engineer with Apple in the Bay Area. His research interests include low-level GPU optimization, GPGPU, visualization, and virtual/augmented reality.

Chetan Gudisagar received the BTech degree from the National Institute of Technology Karnataka, Surathkal, India, and the MSc degree in computer science from Stony Brook University, New York. He is currently working on an open source project on a consistency platform called CorfuDB with VMware. His research interests include distributed systems, backend development, and data science.

Mala Ananth received the BS degree in biology and the PhD degree in neuroscience from Stony Brook University, in 2011 and 2019, respectively. From 2013, she was a research assistant with Brookhaven National Laboratory. She is currently a postdoctoral research fellow with the National Institute of Neurological Disorders and Stroke. Her research interests include the heterogeneity of cell types in age-related cognitive decline.
David Talmage received the BA degree in biology from the University of Virginia and the PhD degree in genetics from the University of Minnesota. He was a postdoctoral fellow with the Rockefeller University and Harvard Medical School. He is currently a senior scientist with the National Institute of Mental Health, NIH. He has authored or co-authored more than 70 peer-reviewed articles that have been cited nearly 4000 times.

Lorna W. Role received the AB degree in applied mathematics and the PhD degree in physiology from Harvard University. She was a postdoctoral fellow in pharmacology with Harvard Medical School and the Washington University School of Medicine. She became an assistant professor with Columbia University in 1985 and later a professor. In 2008, she became a SUNY distinguished professor and chair of the Department of Neurobiology and Behavior, and co-director of the Neurosciences Institute, Stony Brook University. She is currently the scientific director and a senior investigator with NINDS, NIH. She is the recipient of many awards and honors, including fellow of the American Association for the Advancement of Science in 2011 and fellow of the American College of Neuropsychopharmacology in 2009.

Arie E. Kaufman (Fellow, IEEE) received the PhD degree in computer science from Ben-Gurion University, Israel, in 1977. From 1999 to 2017, he was the chair of the Computer Science Department. He is currently a distinguished professor of computer science, the director of the Center of Visual Computing, and the chief scientist of the Center of Excellence in Wireless and Information Technology, Stony Brook University. He has conducted research for more than 40 years in visualization, graphics, and imaging, and has authored or coauthored more than 350 refereed papers. He was the founding editor-in-chief of the IEEE Transactions on Visualization and Computer Graphics, 1995–1998. He is an ACM fellow and a National Academy of Inventors fellow, received the IEEE Visualization Career Award in 2005, and was inducted into the IEEE Visualization Academy in 2019 and the Long Island Technology Hall of Fame in 2013.
