
BRAIN DISEASE DIAGNOSIS USING MACHINE LEARNING

ABSTRACT
Purpose: Detection and segmentation of a brain tumor such as glioblastoma multiforme (GBM) in magnetic resonance (MR) images are often challenging due to the tumor's intrinsically heterogeneous signal characteristics. A robust segmentation method for brain tumor MRI scans was developed and tested.

Methods: Simple thresholds and statistical methods are unable to adequately segment the various elements of the GBM, such as local contrast enhancement, necrosis, and edema. Most voxel-based methods cannot achieve satisfactory results on larger data sets, and methods based on generative or discriminative models have intrinsic limitations in application, such as small-sample learning and transfer. Large neuroimaging projects promise to model the complex interaction of brain and behavior and to understand and diagnose brain diseases by collecting and analyzing large quantities of data, but archiving, analyzing, and sharing the growing neuroimaging datasets pose major challenges. New computational methods and technologies have emerged in the domain of Big Data but have not been fully adapted for use in neuroimaging. In this work, we introduce the current challenges of neuroimaging in a big-data context, review our efforts toward creating a data management system to organize large-scale fMRI datasets, and present our novel algorithms and methods. A new method was developed to overcome these challenges. Multimodal MR images are segmented into super pixels to alleviate the sampling issue and to improve sample representativeness. Features are then extracted from the super pixels using multi-level Gabor wavelet filters. Based on these features, a conditional random field Grey Level Co-occurrence Matrix (GLCM) model and an affinity metric model for tumors are trained to overcome the limitations of previous generative models. Based on the output of the GLCM and spatial affinity models, conditional random field theory is applied to segment the tumor in a maximum a posteriori fashion, given the smoothness prior defined by our affinity model. Finally, labeling noise is removed using "structural knowledge" such as the symmetrical and continuous characteristics of the tumor in the spatial domain. The Bat Algorithm models were trained and tested on augmented images, and validation was performed.

CHAPTER 1
INTRODUCTION

BRAIN AND TUMOR SEGMENTATION

Combining image segmentation based on statistical classification with a geometric
prior has been shown to significantly increase robustness and reproducibility. Using a
probabilistic geometric model of sought structures and image registration serves both
initialization of probability density functions and definition of spatial constraints. A strong
spatial prior, however, prevents segmentation of structures that are not part of the model. In
practical applications, we encounter either the presentation of new objects that cannot be
modeled with a spatial prior or regional intensity changes of existing structures not explained
by the model. Our driving application is the segmentation of brain tissue and tumors from
three-dimensional magnetic resonance imaging (MRI). Our goal is a high-quality
segmentation of healthy tissue and a precise delineation of tumor boundaries. We present an
extension to an existing expectation maximization (EM) segmentation algorithm that
modifies a probabilistic brain atlas with an individual subject's information about tumor
location obtained from subtraction of post- and pre-contrast MRI. The new method handles
various types of pathology: space-occupying mass tumors and infiltrating changes like
edema. Preliminary results on five cases presenting tumor types with very different
characteristics demonstrate the potential of the new technique for routine clinical use for
planning and monitoring in neurosurgery, radiation oncology, and radiology. A geometric
prior can be used by atlas-based segmentation, which regards segmentation as a registration
problem in which a fully labeled, template MR volume is registered to an unknown dataset.
High dimensional warping results in a one-to-one correspondence between the template and
subject images, resulting in a new, automatic segmentation. These methods require elastic
registration of images to account for geometrical distortions produced by pathological
processes. Such registration remains challenging and is not yet solved for the general case.
Warfield et al. [12], [13] combined elastic atlas registration with statistical classification.
Elastic registration of a brain atlas helped to mask the brain from surrounding structures. A
further step uses “distance from brain boundary" as an additional feature to improve
separation of clusters in multi-dimensional feature space. Initialization of probability density
functions still requires a supervised selection of training regions. The core idea, namely to
augment statistical classification with spatial information to account for the overlap of
distributions in intensity feature space, is part of the new method.
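
To make the role of the spatial prior concrete, a minimal sketch of the voxel-wise posterior used in such atlas-guided statistical classification can be written as follows (the notation is illustrative, not taken verbatim from the works cited above):

p\big(k \mid I(x)\big) \;=\; \frac{\pi_k(x)\,\mathcal{N}\big(I(x);\,\mu_k,\Sigma_k\big)}{\sum_{k'} \pi_{k'}(x)\,\mathcal{N}\big(I(x);\,\mu_{k'},\Sigma_{k'}\big)}

where I(x) is the (multichannel) intensity at voxel x, \pi_k(x) is the spatial prior probability of tissue class k obtained from the registered atlas, and (\mu_k, \Sigma_k) are the class-conditional Gaussian parameters re-estimated in the EM iterations. A strong prior \pi_k(x) is exactly what prevents structures absent from the atlas, such as tumors, from being segmented, which motivates the extensions described below.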

Automatic segmentation of MR images of normal brains has been demonstrated by statistical
classification, using an atlas prior both for initialization and for geometric constraints. A more recent
extension detects brain lesions as outliers and was successfully applied for detection of
multiple sclerosis lesions. Brain tumors, however, cannot simply be modeled as intensity
outliers due to overlapping intensities with normal tissue and/or significant size. We propose
a fully automatic method for segmenting MR images presenting tumor and edema, both
mass-effect and infiltrating structures. Additionally, tumor and edema classes are added to the
segmentation. The spatial atlas that is used as a prior in the classification is modified to
include prior probabilities for tumor and edema. As with the work done by other groups, we
focus on a subset of tumors to make the problem tractable. Our method provides a full
classification of brain tissue into white matter, grey matter, tumor, and edema. Because the
method is fully automatic, its reliability is optimal in the sense that its results are fully
reproducible. We have applied our tumor segmentation framework to five different datasets
covering a wide range of tumor types and sizes; Fig. 5 shows results for two datasets. Because
the tumor class has a strong spatial prior, many small structures, mainly blood vessels, are
classified as tumor because they enhance with contrast. Post-processing using level set
evolution is necessary to obtain a final segmentation for the tumor. The final spatial priors
used for classification of the dataset, with the additional tumor and edema channels, are also
shown. We have developed a model-based segmentation method for segmenting head MR
image datasets with tumors and infiltrating edema. This is achieved by
extending the spatial prior of a statistical normal human brain atlas with individual
information derived from the patient's dataset. Thus, we combine the statistical geometric
prior with image-specific information for both geometry of newly appearing objects, and
probability density functions for healthy tissue and pathology. Applications to five tumor
patients with variable tumor appearance demonstrated that the procedure can handle large
variation of tumor size, interior texture, and locality. The method provides a good quality of
healthy tissue structures and of the pathology, a requirement for surgical planning or image
guided surgery. Thus, it goes beyond previous work that focuses on tumor segmentation only.
Currently, we are testing the validity of the segmentation system in a validation study that
compares resulting tumor structures with repeated manual experts' segmentations, both within
and between multiple experts. A preliminary machine versus human rater validation showed
an average overlap ratio of > 90% and an average MAD (mean average surface distance) of
0.8 mm, which is smaller than the original voxel resolution. In our future work, we will study
the issue of deformation of normal anatomy in the presence of space-occupying tumors.
Within the range of tumors studied so far, the soft boundaries of the statistical atlas could
handle spatial deformation. However, we will develop a scheme for high dimensional
warping of multichannel probability data to get an improved match between atlas and patient
images.

TUMOR CLASS

In addition to the three tissue classes assumed in the EMS segmentation (white matter, grey matter, csf), we add a new class for tumor tissue. Whereas the (spatial) prior probabilities for the normal tissue classes are defined by the atlas, the spatial tumor prior is calculated from the T1 pre- and post-contrast difference image. We assume that the (multiplicative) bias field is the same in both the pre- and post-contrast images. Using the log transform of the T1 pre- and post-contrast image intensities then gives a bias-free difference image, since the bias fields (now additive) in the two images cancel out.

Difference Image Histogram: The histogram of the difference image shows a peak around 0, corresponding to noise and subtle misregistration, and a positive response corresponding to contrast enhancement. We would like to determine a weighting function, essentially a soft threshold, that corresponds to our belief that a voxel is contrast enhanced. We calculate a mixture model fit to the histogram. Two Gaussian distributions are used to model the normal difference-image noise, and a gamma distribution is used to model the enhanced tissue. The means of the Gaussian distributions and the location parameter of the gamma distribution are constrained to be equal.

Tumor Class Spatial Prior: The posterior probability of the gamma distribution representing contrast enhancement is used to map the difference image to a spatial prior probability image for tumor. This choice of spatial prior for tumor causes tissue that enhances with contrast to be included in the tumor class, and prevents enhancing tissue from cluttering the normal tissue classes. We also maintain a low base probability for the tumor class across the whole brain region. In many of the cases we have examined, the tumor voxel intensities are fairly well separated from normal tissue in the T1 pre-contrast and T2 channels. Even when the contrast agent causes only partial enhancement in the post-contrast image, the tumor voxels often have similar intensity values in the other two images (see Fig. 2, left). Including a small base probability for the tumor class allows non-enhancing tumor to still be classified as tumor, as long as it is similar to enhancing tumor in the T1 and T2 channels. The normal tissue priors are scaled appropriately to allow for this new tumor prior, so that the probabilities still sum to 1.

EDEMA CLASS

We also add a separate class for edema. Unlike tumor structures, there is no spatial prior for the edema. As a consequence, the probability density function for edema cannot be initialized automatically. We approach this problem as follows. First, we have found that edema, when present, is most evident in white matter. Also, we noticed from tests with supervised classification that the edema probability density appears to lie roughly between csf and white matter in the T1/T2 intensity space. We create an edema class prior that is a fraction of the white matter spatial prior. The other atlas priors are scaled to allow for the edema prior, just as for the tumor prior. The edema and the white matter classes share the same region spatially, but are modeled as a bimodal probability density composed of white matter and edema. During initialization of the class parameters in a subject image, we calculate the estimates for grey matter, white matter, tumor, and edema using the modified atlas prior. Thus, white matter and edema initially have similar probability density functions. The bimodal distribution is then initialized by modifying the mean value for edema to lie between white matter and csf, using prior knowledge about properties of edema.
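
The construction of the tumor prior can be sketched as follows; the formulas are illustrative, since the document describes the procedure only in words. With a shared multiplicative bias field b(x), the log-transformed difference image is bias-free:

d(x) = \log I_{\text{post}}(x) - \log I_{\text{pre}}(x)
     = \big[\log b(x) + \log J_{\text{post}}(x)\big] - \big[\log b(x) + \log J_{\text{pre}}(x)\big]
     = \log J_{\text{post}}(x) - \log J_{\text{pre}}(x)

where J denotes the bias-free image content. The histogram of d is then fitted with a three-component mixture, two Gaussians for noise and misregistration and a gamma component for enhancement,

p(d) \;\approx\; \pi_1\,\mathcal{N}(d;\,\mu,\sigma_1^2) + \pi_2\,\mathcal{N}(d;\,\mu,\sigma_2^2) + \pi_3\,\Gamma(d-\mu;\,k,\theta)

and the posterior responsibility of the gamma component, evaluated at each voxel's difference value, is used as the spatial tumor prior.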

GREY LEVEL CO-OCCURRENCE MATRIX (GLCM)

A co-occurrence matrix, also referred to as a co-occurrence distribution, is defined over an image to be the distribution of co-occurring values at a given offset; it represents the distance and angular spatial relationship over an image sub-region of specific size.

The GLCM is created from a gray-scale image. The GLCM calculates how often a pixel with gray-level (grayscale intensity or tone) value i occurs horizontally, vertically, or diagonally adjacent to a pixel with the value j.
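
As an illustrative sketch (not the project's original code), the following Java method computes a normalized GLCM for a single offset (dx, dy) from a grayscale image stored as a 2-D int array; the parameter names and the quantization to a given number of gray levels are assumptions made for the example.

/** Minimal GLCM sketch: counts co-occurrences of gray levels i and j at offset (dx, dy). */
public final class Glcm {

    /**
     * @param img    grayscale image, values assumed to lie in [0, levels - 1]
     * @param levels number of gray levels (e.g. 256, or 8 after quantization)
     * @param dx     horizontal offset of the neighbor pixel
     * @param dy     vertical offset of the neighbor pixel
     * @return       normalized co-occurrence matrix p[i][j]
     */
    public static double[][] compute(int[][] img, int levels, int dx, int dy) {
        double[][] p = new double[levels][levels];
        int rows = img.length, cols = img[0].length, total = 0;
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                int ny = y + dy, nx = x + dx;
                if (ny < 0 || ny >= rows || nx < 0 || nx >= cols) continue;
                p[img[y][x]][img[ny][nx]]++;   // count the (i, j) pair
                total++;
            }
        }
        for (int i = 0; i < levels; i++)       // normalize counts to probabilities
            for (int j = 0; j < levels; j++)
                p[i][j] /= total;
        return p;
    }
}

Offsets (1, 0), (1, 1), (0, 1) and (-1, 1) correspond to the 0, 45, 90 and 135 degree directions used later in this document; a symmetric GLCM can be obtained by also counting the reversed pair.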

GRAPHICAL MODELING

Graphical modeling is a powerful framework for representation and inference in
multivariate probability distributions. It has proven useful in diverse areas of stochastic
modeling, including coding theory computer vision ,knowledge representation, Bayesian
statistics and natural-language processing. This factorization turns out to have a close
connection to certain conditional independence relationships among the variables — both
types of information being easily summarized by a graph. Indeed, this relationship between
factorization, conditional independence, and graph structure comprises much of the power of
the graphical modeling framework: the conditional independence viewpoint is most useful
for designing models, and the factorization viewpoint is most useful for designing inference
algorithms. In the rest of this section, we introduce graphical models from both the
factorization and conditional independence viewpoints, focusing on those models which are
based on undirected graphs.
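
A compact way to state this factorization (a generic sketch in standard notation, not an equation taken from this document) is:

p(y) = \frac{1}{Z} \prod_{c \in \mathcal{C}} \psi_c(y_c), \qquad Z = \sum_{y'} \prod_{c \in \mathcal{C}} \psi_c(y'_c)

where the factors \psi_c are defined over the cliques c of an undirected graph. The same graph encodes the conditional independences (through graph separation) and the factorization (through its cliques), which is exactly the dual viewpoint described above.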

All of the methods described in this survey assume that the structure of the model has
been decided in advance. It is natural to ask if we can learn the structure of the model as well.
As in graphical models more generally, this is a difficult problem. In fact, Bradley and
Guestrin point out an interesting complication that is specific to conditional models. For a
generative model p(x), maximum likelihood structure learning can be performed efficiently if
the model is restricted to be tree-structured, using the well-known Chow-Liu algorithm. In
the conditional case, when we wish to estimate the structure of p(y|x), the analogous
algorithm is more difficult, because it requires estimating marginal distributions of the form
p(yu,yv|x), that is, we need to estimate the effects of the entire input vector on every pair of
output variables. It is difficult to estimate these distributions without knowing the structure of
the model to begin with.

Classification methods provide established, powerful methods for predicting discrete
outcomes. But in the applications that we have been considering in this survey, we wish to
predict more complex objects, such as parse trees of natural language sentences, alignments
between sentences in different languages, and route plans in mobile robotics. Each of these
complex objects has internal structure, such as the tree structure of a parse, and we should
be able to use this structure in order to represent our predictor more efficiently. This general
problem is called structured prediction. Just as the GREY LEVEL CO-OCCURRENCE
MATRIX (GLCM) likelihood generalizes logistic regression to predict arbitrary structures,
the field of structured prediction generalizes the classification problem to the problem of
predicting structured objects. Structured prediction methods are essentially a combination of
classification and graphical modeling, combining the ability to compactly model multivariate
data with the ability to perform prediction using large sets of input features. GREY LEVEL
CO-OCCURRENCE MATRIX (GLCM)s provide one way to do this, generalizing logistic
regression, but other standard classification methods can be generalized to the structured
prediction case as well. Detailed information about structured prediction methods is available
in a recent collection of research papers. In this section, we give an outline and pointers to
some of these methods.

SUPER PIXEL SEGMENTATION

In recent years, super pixel algorithms have become a standard tool in computer
vision and many approaches have been proposed. However, different evaluation
methodologies make direct comparison difficult. We address this shortcoming with a
thorough and fair comparison of thirteen state-of-the-art super pixel algorithms. To include
algorithms utilizing depth information, we present results on both the Berkeley Segmentation
Dataset and the NYU Depth Dataset. Based on qualitative and quantitative aspects, our work
helps guide algorithm selection by identifying important quality characteristics. The concept
of super pixels is motivated by two important aspects. To the best of our knowledge, only a
few publications are devoted to comparing existing super pixel algorithms in a consistent
framework, and those publications do not cover several recent algorithms. Meanwhile,
authors tend to include a brief evaluation intended to show
superiority of their proposed super pixel algorithm over selected existing approaches.
However, these results are not comparable across publications. We categorize the algorithms
according to criteria we find important for evaluation and algorithm selection. Roughly, the
algorithms can be categorized as either graph- based approaches or gradient ascent
approaches. Furthermore, we distinguish algorithms offering direct control over the number
of super pixels as well as algorithms providing a compactness parameter. Overall, we
evaluated thirteen state-of-the-art super pixel algorithms including three algorithms utilizing
depth information. Several algorithms provide both excellent performance and low runtime.
Furthermore, including additional information such as depth may not necessarily improve
performance. Therefore, additional criteria are necessary to assess super pixel algorithms. In
particular, we find that visual quality, runtime, and the provided parameters are among these
criteria. Clearly, visual appearance is difficult to measure appropriately; however, it may have
serious impact on possible applications. Furthermore, low runtime is desirable when using
super pixel algorithms as preprocessing step, especially in real-time settings. Finally,
parameters should be interpretable and easy to tune and algorithms providing a compactness
parameter are preferable. In addition, as the number of super pixels can be understood as a
lower bound on performance, we prefer algorithms offering direct control over the number of
super pixels. In conclusion, while many algorithms provide excellent performance with
respect to Under segmentation Error and Boundary Recall, they lack control over the number
of super pixels or a compactness parameter. Furthermore, these impressive results with
respect to Boundary Recall and Under segmentation Error do not necessarily respect the
perceived visual quality of the generated super pixel segmentations. Our comparison is split
into a qualitative part, examining the visual quality of the generated super pixels, and a
quantitative part based on Boundary Recall, Under segmentation Error and runtime. To
ensure a fair comparison, the parameters have been chosen to jointly optimize Boundary
Recall and Under segmentation Error using discrete grid search. Parameter optimization was
performed on the validation sets while comparison is performed on the test sets.
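
As an illustrative sketch of one of the quantitative measures used here, the following Java method computes Boundary Recall under a common definition (the fraction of ground-truth boundary pixels that have a super pixel boundary pixel within a tolerance radius r); the exact definition and tolerance used by the surveyed benchmark are assumptions of this sketch.

/** Boundary Recall sketch: fraction of ground-truth boundary pixels matched by a
 *  superpixel boundary pixel within a tolerance radius r (assumed definition). */
public final class BoundaryRecall {

    public static double compute(boolean[][] gtBoundary, boolean[][] spBoundary, int r) {
        int rows = gtBoundary.length, cols = gtBoundary[0].length;
        int matched = 0, total = 0;
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                if (!gtBoundary[y][x]) continue;          // only ground-truth boundary pixels count
                total++;
                if (hasNearbyBoundary(spBoundary, y, x, r)) matched++;
            }
        }
        return total == 0 ? 1.0 : (double) matched / total;
    }

    private static boolean hasNearbyBoundary(boolean[][] b, int y, int x, int r) {
        for (int dy = -r; dy <= r; dy++)
            for (int dx = -r; dx <= r; dx++) {
                int ny = y + dy, nx = x + dx;
                if (ny >= 0 && ny < b.length && nx >= 0 && nx < b[0].length && b[ny][nx])
                    return true;
            }
        return false;
    }
}

Undersegmentation Error is computed analogously from the overlap between each super pixel and the ground-truth segments it straddles.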

1.1 SYSTEM SPECIFICATION
1.1.1 HARDWARE CONFIGURATION:
SYSTEM : PENTIUM CORE 2 DUO

HARD DISK : 250 GB

RAM : 1 GB DDR 1

KEY BOARD : STANDARD 104 KEYS

MOUSE : OPTICAL MOUSE

MONITOR : 15 INCH VGA COLOR MONITOR

1.1.2 SOFTWARE CONFIGURATION:

OPERATING SYSTEM : WINDOWS 11


FRONT END : JAVA – jdk 1.8

1.1.3 SOFTWARE SPECIFICATION

ABOUT JAVA

Java is a high-level, versatile programming language initially developed by Sun
Microsystems in 1995. It's renowned for its platform independence, making it executable on
any device that supports Java, thanks to its "write once, run anywhere" (WORA) philosophy.
This is achieved through the Java Virtual Machine (JVM), which acts as an intermediary
between the Java code and the underlying hardware.

FEATURES OF JAVA
Inheritance:
 Java supports inheritance, allowing classes to inherit attributes and behaviors from
other classes.
 Inheritance promotes code reuse and hierarchical organization of classes.
 Features of inheritance include single inheritance, multilevel inheritance, and
hierarchical inheritance.
 Subclasses can override methods defined in their superclass using method overriding.
 Access modifiers control access to inherited members.
 The super keyword is used to refer to the superclass's members within a subclass.
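
A minimal, self-contained sketch of the inheritance features listed above (single inheritance, method overriding, use of an inherited member, and the super keyword); the class names are purely illustrative.

class ImageFilter {
    protected String name = "generic filter";          // inherited member, visible to subclasses

    void apply() {
        System.out.println("Applying " + name);
    }
}

class SobelFilter extends ImageFilter {                 // single inheritance
    SobelFilter() {
        this.name = "Sobel filter";                     // reuse of the inherited field
    }

    @Override
    void apply() {                                      // method overriding
        super.apply();                                  // call the superclass version via super
        System.out.println("Computing gradient magnitude");
    }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        ImageFilter f = new SobelFilter();              // polymorphic reference
        f.apply();
    }
}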

Implementation (Interfaces):

 Java interfaces define a contract specifying a set of methods that implementing classes
must provide.
 Unlike class inheritance, Java allows a class to implement multiple interfaces.
 Interfaces enable a class to inherit behavior from multiple sources.
 Java 8 introduced default methods in interfaces, providing backward compatibility
when extending interfaces with new methods.
 Interfaces can also contain static methods, which are associated with the interface
itself.
 Marker interfaces, such as Serializable and Cloneable, signify special behaviors or
capabilities without defining any methods.
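
A short sketch of the interface features listed above (implementing multiple interfaces, a default method, a static method on the interface, and a JDK marker interface); the names are illustrative only.

import java.io.Serializable;                            // marker interface from the JDK

interface Segmenter {
    int[][] segment(int[][] image);

    default String describe() {                         // default method (Java 8+)
        return "segmenter: " + getClass().getSimpleName();
    }

    static Segmenter threshold(int t) {                 // static method on the interface
        return img -> {
            int[][] out = new int[img.length][img[0].length];
            for (int y = 0; y < img.length; y++)
                for (int x = 0; x < img[0].length; x++)
                    out[y][x] = img[y][x] >= t ? 1 : 0; // simple binary threshold
            return out;
        };
    }
}

// A class may implement several interfaces at once; Serializable adds no methods.
class ThresholdSegmenter implements Segmenter, Serializable {
    private final int t;
    ThresholdSegmenter(int t) { this.t = t; }
    @Override public int[][] segment(int[][] image) {
        return Segmenter.threshold(t).segment(image);
    }
}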

SPECIFICATION OF JAVA

Image Processing Toolbox provides a comprehensive set of reference-standard
algorithms and graphical tools for image processing, analysis, visualization, and algorithm
development. You can perform image enhancement, image deblurring, feature detection,
noise reduction, image segmentation, spatial transformations, and image registration. Many
functions in the toolbox are multithreaded to take advantage of multicore and
multiprocessor computers.

Image Processing Toolbox supports a diverse set of image types, including high
dynamic range, gigapixel resolution, ICC-compliant color, and tomographic images.
Graphical tools let you explore an image, examine a region of pixels, adjust the contrast,
create contours or histograms, and manipulate regions of interest (ROIs). With the toolbox
algorithms you can restore degraded images, detect and measure features, analyze shapes and
textures, and adjust the color balance of images.

DIGITAL IMAGE PROCESSING

Digital image processing deals with manipulation of digital images through a digital
computer. It is a subfield of signals and systems but focuses particularly on images. DIP focuses
on developing a computer system that is able to perform processing on an image. The input
of that system is a digital image; the system processes that image using efficient algorithms,
and gives an image as an output. The most common example is Adobe Photoshop. It is one of
the widely used applications for processing digital images.

INTRODUCTION

Signal processing is a discipline in electrical engineering and in mathematics that deals with
the analysis and processing of analog and digital signals, covering the storing, filtering, and
other operations on signals. These signals include transmission signals, sound or voice
signals, image signals, and others.

Out of all these signals, the field that deals with signals for which the input is an image and
the output is also an image is image processing. As its name suggests, it deals with the
processing of images.

It can be further divided into analog image processing and digital image processing.

What is an Image

An image is nothing more than a two dimensional signal. It is defined by the
mathematical function f(x, y) where x and y are the two co-ordinates horizontally and
vertically.

The value of f(x, y) at any point gives the pixel value at that point of the image.

Any digital image that you view on your computer screen is, in fact, nothing but a two
dimensional array of numbers, typically ranging between 0 and 255. For example, a small
region of such an image might look like this:

128  30 123
232 123 231
123  77  89

Each number represents the value of the function f(x, y) at that point; here the values 128, 30,
and 123 in the first row each represent an individual pixel value. The dimensions of the
picture are actually the dimensions of this two dimensional array.
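
A small Java sketch of reading such pixel values f(x, y) from an image file; the file name and the use of javax.imageio here are illustrative assumptions.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class PixelDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("brain_slice.png")); // hypothetical input file
        int x = 10, y = 20;
        int rgb = img.getRGB(x, y);                 // packed ARGB value at (x, y)
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        int gray = (r + g + b) / 3;                 // simple grayscale value f(x, y) in [0, 255]
        System.out.println("f(" + x + ", " + y + ") = " + gray);
    }
}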

Analyzing Images

Image Processing Toolbox provides a comprehensive suite of reference-standard
algorithms and graphical tools for image analysis tasks such as statistical analysis, feature
extraction, and property measurement.

Statistical functions let you analyze the general characteristics of an image by:

 Computing the mean or standard deviation


 Determining the intensity values along a line segment
 Displaying an image histogram
 Plotting a profile of intensity values

Edge-detection algorithms let you identify object boundaries in an image. These algorithms
include the Sobel, Prewitt, Roberts, Canny, and Laplacian of Gaussian methods. The
powerful Canny method can detect true weak edges without being "fooled" by noise.

Image segmentation algorithms determine region boundaries in an image. You can explore
many different approaches to image segmentation, including automatic thresholding, edge-
based methods, and morphology-based methods such as the watershed transform, often used
to segment touching objects.
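
As a sketch of the first of these approaches, automatic thresholding in the style of Otsu's method applied to a 256-bin grayscale histogram; this is a textbook formulation, not code from the toolbox or from this project.

/** Otsu-style automatic threshold selection from a 256-bin grayscale histogram. */
public final class OtsuThreshold {

    public static int select(int[] hist) {               // hist is assumed to have 256 bins
        long total = 0, sumAll = 0;
        for (int i = 0; i < 256; i++) { total += hist[i]; sumAll += (long) i * hist[i]; }

        long wB = 0, sumB = 0;
        double bestVar = -1.0;
        int bestT = 0;
        for (int t = 0; t < 256; t++) {
            wB += hist[t];                                // background weight
            if (wB == 0) continue;
            long wF = total - wB;                         // foreground weight
            if (wF == 0) break;
            sumB += (long) t * hist[t];
            double mB = (double) sumB / wB;               // background mean
            double mF = (double) (sumAll - sumB) / wF;    // foreground mean
            double between = (double) wB * wF * (mB - mF) * (mB - mF);
            if (between > bestVar) { bestVar = between; bestT = t; }
        }
        return bestT;                                     // gray level maximizing between-class variance
    }
}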

Figure: Detection and outlining of an aircraft using segmentation and morphology.

Morphological operators enable you to detect edges, enhance contrast, remove noise,
segment an image into regions, thin regions, or perform skeletonization on regions.
Morphological functions in Image Processing Toolbox include the following (a short sketch
of erosion and dilation appears after this list):

 Erosion and dilation


 Opening and closing
 Labeling of connected components
 Watershed segmentation
 Reconstruction
 Distance transform
 Image Processing Toolbox also contains advanced image analysis functions that let
you:
 Measure the properties of a specified image region, such as the area, center of mass,
and bounding box
 Detect lines and extract line segments from an image using the Hough transform
 Measure properties, such as surface roughness or color variation, using texture
analysis functions
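
A compact sketch of the first two operators in the list above, binary erosion and dilation with a 3x3 square structuring element; this is a textbook formulation rather than toolbox code, and border pixels are simply treated as background.

/** Binary erosion and dilation with a 3x3 square structuring element (textbook sketch). */
public final class Morphology {

    public static boolean[][] erode(boolean[][] img)  { return apply(img, true);  }
    public static boolean[][] dilate(boolean[][] img) { return apply(img, false); }

    // erosion keeps a pixel only if ALL 3x3 neighbors are set; dilation sets it if ANY is set
    private static boolean[][] apply(boolean[][] img, boolean requireAll) {
        int rows = img.length, cols = img[0].length;
        boolean[][] out = new boolean[rows][cols];
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                boolean all = true, any = false;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = y + dy, nx = x + dx;
                        boolean v = ny >= 0 && ny < rows && nx >= 0 && nx < cols && img[ny][nx];
                        all &= v;                         // out-of-bounds counts as background
                        any |= v;
                    }
                }
                out[y][x] = requireAll ? all : any;
            }
        }
        return out;
    }
}

Opening is erosion followed by dilation; closing is dilation followed by erosion.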

CHAPTER 2
2. SYSTEM STUDY
2.1 EXISTING SYSTEM
The existing system presents a comprehensive survey of tumor enhancement and
segmentation techniques. Each method is classified, analyzed, and compared against other
approaches. To examine the accuracy of the tumor enhancement and segmentation
techniques, the sensitivity and specificity of the approaches are presented and compared where
applicable. Finally, this research provides a taxonomy for the available approaches and
highlights the best available enhancement and segmentation methods. It only categorizes
tumor segmentation techniques into mass detection using a single view and mass detection
using multiple views. The single-view mass detection in turn is divided
into four categories: model-based methods, region-based methods, contour-based methods,
and clustering methods.

2.1.1 DRAWBACKS OF EXISTING SYSTEM:


 The techniques that were surveyed included: histogram based techniques, gradient
based techniques, polynomial modeling based techniques, active contour based
techniques, and classifiers based techniques.
 It only reviews the algorithms that have been proposed in the literature to enhance and
segment tumor images that contain both masses and micro-calcification.
 There are no clear edge results.
 Dilating and sharpening the image to find the tumor object is not possible.
 Less accuracy.

METHODOLOGY

BAT ALGORITHM
 BAT algorithm, well-known for its optimization ability offers a quicker convergence
rate when compared to other contemporary optimization techniques, and it is quite
good for performing medical image segmentation.
 The BAT algorithm was introduced by Xin-She Yang, and it is built on a unique
principle called echolocation, an innate ability possessed by bats. In general, bats
(mammals) detect prey and avoid obstacles using echolocation, which relies on the
ultrasound signal produced by the bat, around 16 kHz, that gets reflected on striking an
obstacle or prey.
 Echolocation enables a bat to maneuver at speed. The BAT technique has been applied
to a variety of problems such as large-scale optimization, fuzzy-based clustering,
estimation of parameters involved in the structuring of dynamic biological systems,
multi-objective optimization, image matching, economic load and emission dispatch,
data mining, scheduling, neural networks, and detection of phishing websites. A
compact sketch of the core update rules is given below.
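
The sketch below gives the core update rules of the bat algorithm for minimizing an objective function over a continuous search space. The parameter values (frequency range, loudness decay alpha, pulse-rate growth gamma, population size) and the random initialization in [0, 1] are illustrative defaults, not the settings used in this project.

import java.util.Random;
import java.util.function.ToDoubleFunction;

/** Compact sketch of the bat algorithm (echolocation-inspired optimization). */
public final class BatAlgorithm {

    public static double[] optimize(ToDoubleFunction<double[]> objective,
                                    int dim, int bats, int iterations, long seed) {
        Random rnd = new Random(seed);
        double fMin = 0.0, fMax = 2.0, alpha = 0.9, gamma = 0.9;   // illustrative constants
        double[][] x = new double[bats][dim];    // positions
        double[][] v = new double[bats][dim];    // velocities
        double[] loud = new double[bats];        // loudness A_i
        double[] rate = new double[bats];        // pulse emission rate r_i
        double[] fit = new double[bats];
        double[] best = null;
        double bestFit = Double.POSITIVE_INFINITY;

        for (int i = 0; i < bats; i++) {
            for (int d = 0; d < dim; d++) x[i][d] = rnd.nextDouble();
            loud[i] = 1.0;
            rate[i] = 0.5;
            fit[i] = objective.applyAsDouble(x[i]);
            if (fit[i] < bestFit) { bestFit = fit[i]; best = x[i].clone(); }
        }
        for (int t = 1; t <= iterations; t++) {
            for (int i = 0; i < bats; i++) {
                double freq = fMin + (fMax - fMin) * rnd.nextDouble();     // echolocation frequency
                double[] cand = new double[dim];
                for (int d = 0; d < dim; d++) {
                    v[i][d] += (x[i][d] - best[d]) * freq;                 // move toward the best bat
                    cand[d] = x[i][d] + v[i][d];
                }
                if (rnd.nextDouble() > rate[i])                            // local random walk near best
                    for (int d = 0; d < dim; d++) cand[d] = best[d] + 0.01 * rnd.nextGaussian();
                double candFit = objective.applyAsDouble(cand);
                if (candFit <= fit[i] && rnd.nextDouble() < loud[i]) {     // accept with loudness
                    x[i] = cand;
                    fit[i] = candFit;
                    loud[i] *= alpha;                                      // loudness decreases
                    rate[i] = 0.5 * (1 - Math.exp(-gamma * t));            // pulse rate increases
                }
                if (candFit < bestFit) { bestFit = candFit; best = cand.clone(); }
            }
        }
        return best;
    }
}

In a segmentation setting, the objective could, for example, score a vector of candidate thresholds or region parameters; that mapping is application-specific and is not shown here.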

2.2 PROPOSED SYSTEM:


In the proposed system, a Grey Level Co-Occurrence Matrix (GLCM) homomorphic function
is chosen in order to distinguish the interior area from other organs in the MR image dataset.
A modified gradient magnitude region growing algorithm is then applied, in which the
gradient magnitude is computed by the Sobel operator and employed as the homogeneity
criterion. This implementation allows stable boundary detection even when the gradient
suffers from intensity variations and gaps. Analysis of the gradient magnitude shows that
sufficient contrast is present in the boundary region, which increases the accuracy of
segmentation. To calculate the size of the segmented tumor, a relabeling method remaps the
labels associated with objects in a segmented image so that the label numbers are consecutive
with no gaps. Any object can then be extracted from the relabeled output using a binary
threshold. Here, the BAT algorithm is adapted to extract and relabel the tumor and then find
its size in pixels. The algorithm works in two stages.

The first stage determines the input image labels and the number of pixels in each label. The
second stage determines the requested output region to get the total number of pixels
accessed. Segmented areas are calculated automatically to obtain the desired tumor area per
slice.
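
A minimal sketch of the Sobel gradient magnitude computation that serves as the homogeneity criterion; the region-growing and relabeling logic around it is omitted, and an array-based grayscale input is assumed.

/** Sobel gradient magnitude of a grayscale image (border pixels are left at zero). */
public final class SobelGradient {

    public static double[][] magnitude(int[][] img) {
        int rows = img.length, cols = img[0].length;
        double[][] mag = new double[rows][cols];
        int[][] kx = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};   // horizontal Sobel kernel
        int[][] ky = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};   // vertical Sobel kernel
        for (int y = 1; y < rows - 1; y++) {
            for (int x = 1; x < cols - 1; x++) {
                double gx = 0, gy = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        gx += kx[dy + 1][dx + 1] * img[y + dy][x + dx];
                        gy += ky[dy + 1][dx + 1] * img[y + dy][x + dx];
                    }
                }
                mag[y][x] = Math.sqrt(gx * gx + gy * gy);    // gradient magnitude
            }
        }
        return mag;
    }
}

Region growing would then admit a neighboring pixel into the current region while its gradient magnitude stays below a homogeneity threshold; relabeling and per-label pixel counting follow as described above.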

2.2.1 ADVANTAGES OF PROPOSED SYSTEM:


 Tumor images are difficult to interpret, and a preprocessing phase is necessary to
improve the quality of the images and make the feature extraction phase more
reliable.
 High accuracy.
 Clear edge results.
 Brain contour detection confines further analysis to the brain region alone, which
otherwise could bias the detection procedures in subsequent stages.
 Effective Edge detection is a well-developed field on its own within Medical image
processing.
 Multi Phase Segmentation supported.

2.3 FEASIBILITY STUDY


The preliminary investigation examines project feasibility, i.e., the likelihood that the system
will be useful to the organization. The main objective of the feasibility study is to test the
technical, operational, and economical feasibility of adding new modules and debugging the
old running system. Every system would be feasible given unlimited resources and infinite
time. The following aspects are considered in the feasibility study portion of the preliminary
investigation:

 Technical Feasibility
 Operation Feasibility
 Economical Feasibility

TECHNICAL FEASIBILITY
The technical issue usually raised during the feasibility stage of the investigation includes
the following:

 Does the necessary technology exist to do what is suggested?


 Does the proposed equipment have the technical capacity to hold the data required to
use the new system?
 Will the proposed system provide adequate response to inquiries, regardless of the
number or location of users?
 Can the system be upgraded if developed?

 Are there technical guarantees of accuracy, reliability, ease of access and data
security?

Earlier no system existed to cater to the needs of ‘Secure Infrastructure Implementation
System’. The current system developed is technically feasible. It is a web based user interface
for audit workflow on a DB2 database. Thus it provides easy access to the users. The
database’s purpose is to create, establish and maintain a workflow among various entities in
order to facilitate all concerned users in their various capacities or roles. Permission to the
users would be granted based on the roles specified.

Therefore, it provides the technical guarantee of accuracy, reliability and security. The
software and hardware requirements for the development of this project are not many and are
already available in-house at NIC or are available free as open source. The work for the
project is done with the current equipment and existing software technology. Necessary
bandwidth exists for providing a fast feedback to the users irrespective of the number of users
using the system.

OPERATIONAL FEASIBILITY

Proposed projects are beneficial only if they can be turned into information systems that will
meet the organization’s operating requirements. Operational feasibility aspects of the project
are to be taken as an important part of the project implementation. Some of the important
issues raised to test the operational feasibility of a project include the following:

 Is there sufficient support for the management from the users?


 Will the system be used and work properly if it is being developed and implemented?
 Will there be any resistance from the user that will undermine the possible application
benefits?

This system is targeted to be in accordance with the above-mentioned issues. Beforehand,
the management issues and user requirements have been taken into consideration. So there is
no question of resistance from the users that can undermine the possible application benefits.

The well-planned design would ensure the optimal utilization of the computer resources
and would help in the improvement of performance status.

ECONOMIC FEASIBILITY

A system that can be developed technically and that will be used if installed must still be a
good investment for the organization. In the economical feasibility, the development cost in
creating the system is evaluated against the ultimate benefit derived from the new systems.
Financial benefits must equal or exceed the costs.

The system is economically feasible. It does not require any additional hardware or software.
Since the interface for this system is developed using the existing resources and technologies
available at NIC, there is only nominal expenditure, and economical feasibility is certain.

CHAPTER 3

3.SYSTEM DESIGN AND DEVELOPMENT

3.1 SYSTEM DESIGN

3.2 SYSTEM DEVELOPMENT
3.2.1 MODULE DESCRIPTION
MRI PREPROCESSING:
Preprocessing images commonly involves removing low-frequency background noise,
normalizing the intensity of the individual images, removing reflections, and masking
portions of the images. Image preprocessing is the technique of enhancing data images prior
to computational processing. The preprocessing steps involve realignment and unwarping of
slices within a volume, separately for every modality; the overall flow diagram is shown in
Fig. 2.

Fig. 2: (a) original MRI, (b) sub-blocks of MRI, (c) segmented tumor using GREY LEVEL
CO-OCCURRENCE MATRIX (GLCM).

Following standard preprocessing steps for brain MRI, the corresponding fractal and
intensity features are extracted. In the next step, different combinations of feature sets are
exploited for tumor segmentation and classification. Feature values are then directly fed to
the AdaBoost classifier for classification of tumor and non-tumor regions. Manual labeling of
tumor regions is performed for supervised classifier training. The trained classifiers are then
used to detect the tumor or nontumor segments in unknown brain MRI.

BIAS FEATURE EXTRACTION:

Feature extraction is a special form of dimensionality reduction. When the input data to an
algorithm is too large to be processed and is suspected to be highly redundant (e.g., the same
measurement in both feet and meters), the input data will be transformed
into a reduced representation set of features (also named features vector). Transforming the
input data into the set of features is called feature extraction. If the features extracted are
carefully chosen it is expected that the features set will extract the relevant information from
the input data in order to perform the desired task using this reduced representation instead of
the full size input.

BAT BRAIN TUMOR SEGMENTATION AND CLASSIFICATION FROM NON-TUMOR TISSUE:
A support vector machine searches for an optimal separating hyperplane between members
and non-members of a given class in a high-dimensional feature space. The inputs to the bat
algorithm are the feature subset selected during the data preprocessing and extraction steps.
In the GREY LEVEL CO-OCCURRENCE MATRIX (GLCM) approach, kernel functions
such as the graph kernel, polynomial kernel, and RBF kernel are used. Among these kernel
functions, the Radial Basis Function (RBF) proves to be useful, because the vectors are
nonlinearly mapped to a very high-dimensional feature space. For tumor/non-tumor tissue
segmentation and classification, MRI pixels are considered as samples. These samples are
represented by a set of feature values extracted from different MRI modalities. Features from
all modalities are fused for tumor segmentation and classification. A modified supervised
GREY LEVEL CO-OCCURRENCE MATRIX (GLCM) ensemble of classifiers is trained to
differentiate tumor from non-tumor tissue.
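
The Radial Basis Function kernel referred to here has the standard form (a textbook definition, not something specific to this project):

K(x_i, x_j) = \exp\big( -\gamma \, \lVert x_i - x_j \rVert^2 \big), \qquad \gamma > 0

which implicitly maps the feature vectors into a very high-dimensional space, so that a linear separating hyperplane there corresponds to a nonlinear decision boundary in the original feature space.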

GREY LEVEL CO-OCCURRENCE MATRIX (GLCM) HOMOMORPHIC ALGORITHM FOR SEGMENTATION IS AS FOLLOWS:

 Obtain the sub-image blocks, starting from the top left corner.
 Decompose sub-image blocks using two level 2-D GREY LEVEL CO-
OCCURRENCE MATRIX (GLCM).
 Derive Spatial Gray Level Dependence Matrices (SGLDM) or Gray Level Co-
occurrence matrices.
 For each of the two-level high-frequency sub-bands of the decomposed sub-image
blocks, compute co-occurrence matrices with a distance of 1 and angles θ of 0, 45, 90,
and 135 degrees, and average them.
 From these co-occurrence matrices, the following nine Haralick second-order
statistical texture features, called wavelet Co-occurrence Texture (WCT) features, are
extracted.
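
As a sketch of the last step, a few representative Haralick-style second-order texture features computed from a normalized, square co-occurrence matrix; the exact set of nine WCT features used in the project is not reproduced here.

/** A few Haralick-style texture features from a normalized GLCM p[i][j]. */
public final class HaralickFeatures {

    public static double contrast(double[][] p) {
        double c = 0;
        for (int i = 0; i < p.length; i++)
            for (int j = 0; j < p.length; j++)
                c += (i - j) * (i - j) * p[i][j];
        return c;
    }

    public static double energy(double[][] p) {           // a.k.a. angular second moment
        double e = 0;
        for (double[] row : p) for (double v : row) e += v * v;
        return e;
    }

    public static double homogeneity(double[][] p) {      // inverse difference moment
        double h = 0;
        for (int i = 0; i < p.length; i++)
            for (int j = 0; j < p.length; j++)
                h += p[i][j] / (1.0 + Math.abs(i - j));
        return h;
    }

    public static double entropy(double[][] p) {
        double s = 0;
        for (double[] row : p) for (double v : row) if (v > 0) s -= v * Math.log(v);
        return s;
    }
}

These features are evaluated on the co-occurrence matrices computed at the four orientations listed above and then averaged.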

BAT BRAIN TUMOR SEGMENTATION USING STRUCTURE PREDICTION:
In this section, the method proposed for segmentation of particular structures of the
brain tumor, i.e. whole tumor, tumor core, and active tumor, is evaluated. This method is
based on an approach whose novelty lies in the principled combination of a deep approach
with local structure prediction in the medical image segmentation task.
PARAMETER ANALYSIS
 A GLCM Homomorphism classifier, which does not consider interactions in the
labels of adjacent data points.
 Conversely, DRFs and MRFs consider these interactions, but do not have the same
appealing generalization properties as Radial Basis Function.
 Observation-matching
 Local-consistency
 Learning: parameter estimation
 Brain tumor segmentation using structure prediction
 In this work, we introduce the current challenges of neuroimaging in a big data
context.
 We review our efforts toward creating a data management system to organize the
large-scale fMRI datasets, and present our novel algorithms/methods
 A new method was developed to overcome these challenges.
 Multimodal MR images are segmented into super pixels using algorithms to alleviate
the sampling issue and to improve the sample representativeness.
 The parameters A and B are estimated from training data represented as pairs
\langle f(\gamma_i(x)), t_i \rangle, where f(\gamma_i(x)) is the real-valued bat algorithm
response (here, the distance to the separator) and t_i denotes a related probability that
y_i = 1, represented as the relaxed probabilities t_i = (N_+ + 1)/(N_+ + 2) if y_i = 1 and
t_i = 1/(N_- + 2) if y_i = -1, where N_+ and N_- are the numbers of positive and
negative class instances.
 Using these training instances, we can solve the following optimization problem to
estimate the parameters A and B:

\min_{A,B} \; -\sum_i \big[\, t_i \log p_i + (1 - t_i) \log(1 - p_i) \,\big], \qquad p_i = O\big(y_i = 1 \mid \gamma_i(x)\big)

DATA COLLECTION
 Dataset collection: training dataset and test dataset
 Input: brain MRI images for brain tumor detection
 The dataset used in the task has only a limited number of images, which is far from
enough for the model to train on, and hence accuracy is lower.

 Increasing the size of the dataset can increase the model performance and thus solve
the problem.

CHAPTER 4

4. SYSTEM TESTING AND IMPLEMENTATION

4.1 SYSTEM TESTING

System testing is the stage of implementation, which is aimed at ensuring that the
system works accurately and efficiently before live operation commences. Testing is vital to
the success of the system. System testing makes a logical assumption that if all the parts of
the system are correct, the goal will be successfully achieved. The candidate system is subject
to a variety of tests.

A series of tests are performed for the proposed system before the system is ready for
user acceptance testing.

The testing steps are:

 Unit testing
 Integration testing
 Validation testing
 Output testing
 User acceptance testing

UNIT TESTING
Unit testing focuses verification efforts on the smallest unit of software design, the
module. This is also known as “module testing”. The modules are tested separately. This
testing is carried out during the programming stage itself. In this testing step, each module is
found to be working satisfactorily as regards the expected output from the module.

INTEGRATION TESTING
Data can be lost across an interface; one module can have an adverse effect on others; and
sub-functions, when combined, may not produce the desired major functions. Integration
testing is a systematic technique for constructing the program structure while at the same
time conducting tests to uncover errors associated with the interface. The objective is to take
unit-tested modules, combine them, and test them as a whole. Here, correction is difficult
because the vast expanse of the entire program complicates the isolation of causes. In this
integration-testing step, all the errors encountered are corrected before the next testing step.

VALIDATION TESTING
Verification testing runs the system in a simulated environment using simulated data. This
simulated test is sometimes called alpha testing. It primarily looks for errors and omissions
regarding end-user and design specifications that were specified in the earlier phases but not
fulfilled during construction.

Validation refers to the process of using software in a live environment in order to
find errors. The feedback from the validation phase generally produces changes in the
software to deal with errors and failures that are uncovered. Then a set of user sites is selected
that puts the system into use on a live basis. These are called beta tests.

The beta test sites use the system in day-to-day activities. They process live
transactions and produce normal system output. The system is live in every sense of the
word; except that the users are aware they are using a system that can fail. But the
transactions that are entered and persons using the system are real. Validation may continue
for several months. During the course of validating the system, failure may occur and the
software will be changed. Continued use may produce additional failures and need for still
more changes.

OUTPUT TESTING
After performing the validation, the next step is output testing of the proposed system,
since no system could be useful if it does not produce the required output in the specified
format. Asking the users about the format required by them tests the output generated or
displayed by the system under consideration. Hence the output format is considered in two
ways-one is on screen and another in printed format.

USER ACCEPTANCE TESTING


User acceptance of a system is the key factor for the success of any system. The
system under consideration is tested for the user acceptance by constantly keeping in touch
with the prospective system users at the time of developing and making changes whenever
required. This is done in regard to the following point:

An acceptance test has the objective of selling the user on the validity and reliability
of the system. It verifies that the system’s procedures operate to system specifications and
that the integrity of important data is maintained. Performance of an acceptance test is
actually the user’s show. User motivation is very important for the successful performance of
the system. After that a comprehensive test report is prepared. This report shows the system’s
tolerance, performance range, error rate, and accuracy.

4.2 SYSTEM MAINTENANCE


The objective of this maintenance work is to make sure that the system keeps working at all
times without any bugs. Provision must be made for environmental changes which may
affect the computer or software system. This is called the maintenance of the system.
Nowadays there is the rapid change in the software world. Due to this rapid change, the
system should be capable of adapting to these changes. In this project, processes can be added
without affecting other parts of the system.

Maintenance plays a vital role. The system is liable to accept any modification after
its implementation. This system has been designed to favor all new changes. Doing this will
not affect the system’s performance or its accuracy.

Maintenance is necessary to eliminate errors in the system during its working life and
to tune the system to any variations in its working environment. It has been seen that there are
always some errors found in the system that must be noted and corrected. It also means the
review of the system from time to time.

The review of the system is done for:

 Knowing the full capabilities of the system.


 Knowing the required changes or the additional requirements.
 Studying the performance.

TYPES OF MAINTENANCE:
 Corrective maintenance
 Adaptive maintenance
 Perfective maintenance
 Preventive maintenance

CORRECTIVE MAINTENANCE

Corrective maintenance covers changes made to a system to repair flaws in its design, coding,
or implementation. The design of the software will be changed. Corrective maintenance is
applied to correct errors that occur during operation. For example, if the user enters an invalid
file type while submitting information in a particular field, corrective maintenance will
display an error message to the user so that the error can be rectified.

Maintenance is a major income source. Nevertheless, even today many organizations
assign maintenance to unsupervised beginners, and less competent programmers.

The user’s problems are often caused by the individuals who developed the product, not the
maintainer, and the code itself may be badly written. Maintenance is despised by many
software developers. Unless good maintenance service is provided, the client will take future
development business elsewhere. Maintenance is the most important phase of software
production, and also the most difficult and most thankless.

ADAPTIVE MAINTENANCE:
Adaptive maintenance means changes made to a system to evolve its functionality to match
changing business needs or technologies. If there is any modification in the modules, the
software will adopt those modifications. If the user changes the server, then the project will
adapt to those changes, and work on the modified server is performed as on the existing one.

PERFECTIVE MAINTENANCE:
Perfective maintenance means changes made to a system to add new features or improve
performance. The perfective maintenance is done to take some perfect measures to maintain
the special features. It means enhancing the performance or modifying the programs to
respond to the users need or changing needs. This proposed system could be added with
additional functionalities easily. In this project, if the user wants to improve the performance
further then this software can be easily upgraded.

PREVENTIVE MAINTENANCE:
Preventive maintenance involves changes made to a system to reduce the chance of future
system failure. Possible occurrences of errors are forecast and prevented with suitable
preventive measures. If the user wants to improve the performance of any process, then new
features can be added to the system for this project.

4.3 EXPERIMENTAL SETUP


A GREY LEVEL CO-OCCURRENCE MATRIX (GLCM) Homomorphism classifier does not
consider interactions in the labels of adjacent data points. Conversely, DRFs and MRFs
consider these interactions, but do not have the same appealing generalization properties as
the Radial Basis Function.

This section will review our GREY LEVEL CO-OCCURRENCE MATRIX (GLCM),
an extension of RBF that uses a brain tumor framework to model interactions in the labels of
adjacent data points:

p(y \mid x) = \frac{1}{Z} \exp\Big\{ \sum_{i \in S} \log O\big(y_i, \gamma_i(x)\big) + \sum_{i \in S} \sum_{j \in N_i} V\big(y_i, y_j, X\big) \Big\}

where \gamma_i(x) computes features from the observations x for location i, O(y_i, \gamma_i(x))
is an SVM-based Observation-Matching potential, and V(y_i, y_j, X) is the Local-Consistency
potential over a pair-wise neighborhood structure, where N_i are the 8 neighbors around
location i.

OBSERVATION-MATCHING
The Observation-Matching function maps from the observations (features) to class labels. We
would like to use SVMs for this potential. However, the decision function in SVMs produces
a distance value, not a posterior probability suitable for the DRF framework. To convert the
output of the decision function to a posterior probability, a parametric sigmoid is fitted to the
SVM outputs. This efficient method minimizes the risk of overfitting and is formulated as
follows:

O(y_i = 1 \mid \gamma_i(x)) = \frac{1}{1 + \exp\big(A \cdot f(\gamma_i(x)) + B\big)}    (5)

The parameters A and B are estimated from training data represented as pairs
\langle f(\gamma_i(x)), t_i \rangle, where f(\gamma_i(x)) is the real-valued SVM response (here,
the distance to the separator) and t_i denotes a related probability that y_i = 1, represented as
the relaxed probabilities t_i = (N_+ + 1)/(N_+ + 2) if y_i = 1 and t_i = 1/(N_- + 2) if
y_i = -1, where N_+ and N_- are the numbers of positive and negative class instances. Using
these training instances, we can solve the following optimization problem to estimate the
parameters A and B:

\min_{A,B} \; -\sum_i \big[\, t_i \log p_i + (1 - t_i) \log(1 - p_i) \,\big], \qquad p_i = O\big(y_i = 1 \mid \gamma_i(x)\big)    (6)

Platt [15] used a Levenberg-Marquardt approach that tried to set B to guarantee that the
Hessian approximation was invertible. However, dealing with the constant directly can cause
problems, especially for unconstrained optimization problems [13]. Hence, we employed
Newton’s method with backtracking line search for simple and robust estimation. To avoid
overflows and underflows of exp and log, we reformulated (6) as

\min_{A,B} \; \sum_i \big[\, t_i \big(A \cdot f(\gamma_i(x)) + B\big) + \log\big(1 + \exp(-A \cdot f(\gamma_i(x)) - B)\big) \big]    (7)
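
A small Java sketch of this sigmoid fit using Newton's method with backtracking line search, as described above; the initialization, step-halving rule, and stopping tolerance are illustrative choices, not values taken from the project.

/** Fits A and B of the sigmoid O = 1 / (1 + exp(A*f + B)) by minimizing
 *  F(A, B) = sum_i [ t_i*z_i + log(1 + exp(-z_i)) ], z_i = A*f_i + B  (equation (7)). */
public final class PlattScaling {

    /** f[i]: raw classifier outputs; t[i]: relaxed target probabilities in (0, 1). */
    public static double[] fit(double[] f, double[] t) {
        double A = 0.0, B = 0.0;
        for (int iter = 0; iter < 100; iter++) {
            double gA = 0, gB = 0, hAA = 1e-12, hAB = 0, hBB = 1e-12;   // gradient, ridged Hessian
            for (int i = 0; i < f.length; i++) {
                double z = A * f[i] + B;
                double p = 1.0 / (1.0 + Math.exp(z));   // model probability that y_i = 1
                double d = t[i] - p;                    // dF/dz_i
                gA += f[i] * d;
                gB += d;
                double w = p * (1.0 - p);               // d^2F/dz_i^2
                hAA += f[i] * f[i] * w;
                hAB += f[i] * w;
                hBB += w;
            }
            double det = hAA * hBB - hAB * hAB;         // Newton direction: solve H * delta = -g
            double dA = -( hBB * gA - hAB * gB) / det;
            double dB = -(-hAB * gA + hAA * gB) / det;
            double step = 1.0, f0 = objective(f, t, A, B);
            while (step > 1e-10 && objective(f, t, A + step * dA, B + step * dB) > f0)
                step *= 0.5;                            // backtracking line search
            A += step * dA;
            B += step * dB;
            if (Math.abs(step * dA) + Math.abs(step * dB) < 1e-9) break;   // converged
        }
        return new double[] { A, B };
    }

    private static double objective(double[] f, double[] t, double A, double B) {
        double s = 0;
        for (int i = 0; i < f.length; i++) {
            double z = A * f[i] + B;
            // numerically safe evaluation of t*z + log(1 + exp(-z))
            s += t[i] * z + (z > 0 ? Math.log1p(Math.exp(-z)) : -z + Math.log1p(Math.exp(z)));
        }
        return s;
    }
}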

LOCAL-CONSISTENCY
We use a DRF model for Local-Consistency, since we do not want to make the
(traditional MRF) assumption that the label interactions are independent of the features. We
adopted the following pairwise Local-Consistency potential

V(Y_I ,Y_J,X) = y_i y_j (v.∅_ij (X)) (8)

where v is the vector of Local-Consistency parameters to be learned, while \Phi_{ij}(x)
calculates features for sites i and j. DRFs use a \Phi_{ij} that penalizes high absolute
differences in the features. As we are additionally interested in encouraging label continuity,
we used the following function, which encourages continuity while discouraging
discontinuity (max(\gamma(x)) denotes the vector of maximum values of the features):

\Phi_{ij}(x) = \frac{\max(\gamma(x)) - \lvert \gamma_i(x) - \gamma_j(x) \rvert}{\max(\gamma(x))}

Observe that this function is large when neighboring elements have very similar
features, and small when there is a wide gap between their values.

LEARNING: PARAMETER ESTIMATION


GREY LEVEL CO-OCCURRENCE MATRIX (GLCM)s use a sequential learning
approach to parameter estimation. This involves first solving the SVM Quadratic
Programming problem (3). The resulting decision function is then converted to a posterior
probability using the training data and estimated relaxed probabilities. The Local-Consistency
parameters are then estimated from the m training pixels from each of the K training images
using pseudo likelihood [12]:

\hat{v} = \arg\max_v \prod_{k=1}^{K} \prod_{i=1}^{m} p\big( y_i^k \mid y_{N_i}^k, X^k, v \big)    (10)

We ensure that the log-likelihood is convex by assuming a Gaussian prior over v; that is,
p(v \mid T) is a Gaussian distribution with zero mean and variance T^2 I (see [9]). Thus, the
local-consistency parameters are estimated using the log-likelihood:

\hat{v} = \arg\max_v \sum_{k=1}^{K} \sum_{i=1}^{m} \Big\{ O_i^k + \sum_{j \in N_i} V\big(y_i^k, y_j^k, X^k\big) - \log(z_i^k) \Big\} - \frac{1}{2T} v^{\mathsf{T}} v

where z_i^k is a partition function for each site i in image k, and T is a regularizing constant
that ensures the Hessian is not singular. Keeping the Observation-Matching term
O_i^k = O(y_i^k, \gamma_i^k(x)) constant, the optimal Local-Consistency parameters can be
found by gradient descent.

We close by noting that the M^3 N [10] framework resembles GREY LEVEL CO-
OCCURRENCE MATRIX (GLCM)s, as it also incorporates label dependencies and uses a
max-margin approach. However, the M3N approach uses a margin that magnifies the
difference between the target labels and the best runner-up, while we use the ‘traditional’ 2-
class SVM approach of maximizing the distance from the classes to a separating hyperplane.
An efficient approach for training and inference in a special case of M3Ns was presented in
[16]. However, the simultaneous learning and the inference strategy used still make
computations with this model expensive compared to GREY LEVEL CO-OCCURRENCE
MATRIX (GLCM)s.

Segmenting brain tumors is an important medical imaging problem, currently done
manually by expert radiation oncologists for radiation therapy target planning. Markov
Random Fields [5–7] and SVMs [1–3, 17] have been used in systems to perform this task. We
have recently evaluated DRFs and GREY LEVEL CO-OCCURRENCE MATRIX (GLCM)s
for the relatively easy case of segmenting “enhancing tumor areas” [11]. We extend this by
providing improved results for this easy case (due to using better preprocessing and features),
and results for two much harder segmentation cases. This section will present (i) our
experimental data and design, (ii) a summary of the MR preprocessing pipeline and the multi-
scale image-based and ‘alignment-based’ features that afford a significant improvement over
those previous results and allow us to address more challenging tasks, and (iii) experimental
results comparing SVMs, MRFs, DRFs, and GREY LEVEL CO-OCCURRENCE MATRIX
(GLCM)s within this context for three different segmentation tasks.

Figure (left to right): T1 image, T1 image with contrast agent, T2 image, enhancing area
label, edema label, gross tumor label, full brain segmentation.

Our experimental data set consisted of T1, T1c (T1 after
injecting contrast agent), and T2 images (each 258 by 258 pixels) from 7 patients (Fig. 3),
each having either a grade 2 astrocytoma, an anaplastic astrocytoma, or a
glioblastoma multiforme. The data were preprocessed with an extensive MR preprocessing
pipeline (described in [3], and making use of [18, 19]) to reduce the effects of noise, inter-
slice intensity variations, and intensity inhomogeneity. In addition, this pipeline robustly
aligns the different modalities with each other, and with a template image in a standard
coordinate system (allowing the use of alignment-based features, mentioned below). We used
the most effective feature set from the comparative study in [17]. This multi-scale feature set
contains traditional image-based features in addition to three types of ‘alignment-based’
features: spatial probabilities for the 3 normal tissue types (white matter, gray matter and
cerebrospinal fluid), spatial expected intensity maps, and a characterization of left-to-right
symmetry (all measured at multiple scales). As with many of the related works on brain
tumor segmentation (such as [1, 2, 6, 20]), we employed a patient-specific training scenario,
where training data for the classifier is obtained from the patient to be segmented. In order to
be fair, all classifiers received the same training and testing pixels, and the testing pixels
came from a different area of the volume than the training pixels — here, distant MR slices
(this prevents the Random Field models from achieving high scores due to over-fitting.) In
our experiment, we applied six classifiers to 13 different volumes, based on various time points from 7 patients: a Maximum Likelihood classifier (degenerate MRF), a Logistic Regression model (degenerate DRF), an SVM (degenerate GLCM), an MRF, a DRF, and a GLCM. For each of the Random Field methods, we initialized inference with the corresponding degenerate classifier (i.e. Maximum Likelihood, Logistic Regression, or SVM), and used the computationally efficient Iterated Conditional Modes (ICM) algorithm to find a locally optimal label configuration [12]; a simplified sketch of ICM appears at the end of this paragraph. The six classifiers were evaluated over the 13 time points for the following three tasks, where the ground truth was defined by an expert radiologist.
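As noted above, the following is a minimal sketch of ICM for a pixel-labelling problem, written in Java to match the sample code in the appendix. The unary and pairwise score functions are abstracted behind a hypothetical LabelScore interface, and the 4-neighbourhood and convergence test are simplifying assumptions rather than the exact configuration used in our system.

// Minimal ICM sketch (assumed interfaces, not the project's actual code).
// Each pixel is repeatedly reassigned the label that maximizes its local score
// (unary term plus pairwise terms with the current labels of its 4-neighbours)
// until no label changes.
public final class IcmSketch {

    public interface LabelScore {
        double unary(int pixel, int label);                        // e.g. log posterior from the SVM
        double pairwise(int pixel, int label, int neighbourLabel); // e.g. Local-Consistency term
    }

    public static int[] run(int width, int height, int numLabels,
                            int[] initialLabels, LabelScore score) {
        int[] labels = initialLabels.clone();
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int p = y * width + x;
                    int best = labels[p];
                    double bestScore = Double.NEGATIVE_INFINITY;
                    for (int label = 0; label < numLabels; label++) {
                        double s = score.unary(p, label);
                        if (x > 0)          s += score.pairwise(p, label, labels[p - 1]);
                        if (x < width - 1)  s += score.pairwise(p, label, labels[p + 1]);
                        if (y > 0)          s += score.pairwise(p, label, labels[p - width]);
                        if (y < height - 1) s += score.pairwise(p, label, labels[p + width]);
                        if (s > bestScore) {
                            bestScore = s;
                            best = label;
                        }
                    }
                    if (best != labels[p]) {
                        labels[p] = best;
                        changed = true;
                    }
                }
            }
        }
        return labels;
    }
}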
The first task was the relatively easy one of segmenting the 'enhancing' tumor area, i.e. the region that appears hyper-intense after injecting the contrast agent (including the non-enhancing or necrotic areas contained within the enhancing contour). The second task was the segmentation of the entire edema area associated with the tumor, which is significantly more challenging due to the high degree of similarity between the intensities of edema areas and normal cerebrospinal fluid. The final task was segmenting the Gross Tumor area as defined by the radiologist. This can be a subset of the edema but a superset of the enhancing area, and is inherently a very challenging task, even for human experts, given the modalities examined. We used the Jaccard similarity measure to assess the classifications in terms of true positives (tp), false positives (fp), and false negatives (fn):

J = tp / (tp + fp + fn)
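As a small illustration, the helper below computes this Jaccard score from a predicted and a ground-truth binary mask; the class and method names are illustrative and not taken from our code.

// Illustrative helper: Jaccard score J = tp / (tp + fp + fn) for two binary
// masks of equal length (true = tumor, false = background).
public final class JaccardScore {
    public static double jaccard(boolean[] predicted, boolean[] groundTruth) {
        int tp = 0, fp = 0, fn = 0;
        for (int i = 0; i < predicted.length; i++) {
            if (predicted[i] && groundTruth[i]) tp++;         // true positive
            else if (predicted[i] && !groundTruth[i]) fp++;   // false positive
            else if (!predicted[i] && groundTruth[i]) fn++;   // false negative
        }
        return (tp + fp + fn) == 0 ? 1.0 : (double) tp / (tp + fp + fn);
    }
}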

5. CONCLUSION
Our system brings together two recent trends in the brain tumor segmentation literature: model-aware similarity and affinity calculations, and GLCM-based evidence terms within random field models. In doing so, we make three main contributions. First, we use super pixel-based appearance models to reduce computational cost, improve spatial smoothness, and solve the data sampling problem for training GLCM classifiers on brain tumor segmentation.

Also, we develop an affinity model that penalizes spatial discontinuity based on model-level constraints learned from the training data. Finally, our structural denoising based on the symmetry axis and continuity characteristics is shown to remove the false positive regions effectively.

Our full system has been thoroughly evaluated on a challenging 20-case GBM data set and on the BraTS challenge data set, and shown to perform systematically on par with the state of the art. The combination of the two tracks of ideas yields better performance, on average, than either alone. In the future, we plan to explore alternative feature and classifier methods, such as classification forests, to improve overall performance.

FUTURE DIRECTIONS FOR PROPOSED WORK


The proposed work will continue to build on the three contributions above: the super pixel-based appearance models that reduce computational cost, improve spatial smoothness, and address the data sampling problem when training GLCM classifiers; the affinity model that penalizes spatial discontinuity using model-level constraints learned from the training data; and the structural denoising based on the symmetry axis and continuity characteristics, which removes false positive regions effectively. Training and validation were performed on a high-resolution MR image dataset with augmentations, and the result is compared with a bat-algorithm-based deep learning model (AlexNet). The performance of all bat algorithm models is evaluated using the metrics recall, precision, F-score, specificity, and overall accuracy.
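Because this comparison relies on standard confusion-matrix metrics, a small illustrative helper for a binary (tumor versus background) task is sketched below; it assumes equal-length prediction and ground-truth masks and is not taken from the project code.

// Illustrative confusion-matrix metrics for a binary (tumor / background) task.
public final class EvaluationMetrics {
    public static void report(boolean[] predicted, boolean[] groundTruth) {
        int tp = 0, tn = 0, fp = 0, fn = 0;
        for (int i = 0; i < predicted.length; i++) {
            if (predicted[i] && groundTruth[i]) tp++;
            else if (!predicted[i] && !groundTruth[i]) tn++;
            else if (predicted[i]) fp++;   // predicted tumor, actually background
            else fn++;                     // predicted background, actually tumor
        }
        double recall      = tp / (double) (tp + fn);  // sensitivity
        double precision   = tp / (double) (tp + fp);
        double fScore      = 2 * precision * recall / (precision + recall);
        double specificity = tn / (double) (tn + fp);
        double accuracy    = (tp + tn) / (double) predicted.length;
        System.out.printf("recall=%.3f precision=%.3f F=%.3f specificity=%.3f accuracy=%.3f%n",
                recall, precision, fScore, specificity, accuracy);
    }
}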

BIBLIOGRAPHY

• Liu J, Udupa JK, Odhner D, Hackney D, Moonis G (2005) A system for brain tumor volume estimation via MR imaging and fuzzy connectedness. Comput Med Imaging Graphics 29(1):21–34
• Sled JG, Zijdenbos AP, Evans AC (1998) A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Trans Med Imaging 17(1):87–97
• Belaroussi B, Milles J, Carme S, Zhu YM, Benoit-Cattin H (2006) Intensity non-uniformity correction in MRI: existing methods and their validation. Med Image Anal 10(2):234
• Madabhushi A, Udupa JK (2005) Interplay between intensity standardization and inhomogeneity correction in MR image processing. IEEE Trans Med Imaging 24(5):561–576
• Prastawa M, Bullitt E, Ho S, Gerig G (2004) A brain tumor segmentation framework based on outlier detection. Med Image Anal 8(3):275–283
• Phillips W, Velthuizen R, Phuphanich S, Hall L, Clarke L, Silbiger M (1995) Application of fuzzy c-means segmentation technique for tissue differentiation in MR images of a hemorrhagic glioblastoma multiforme. Magn Reson Imaging 13(2):277–290
• Clark MC, Hall LO, Goldgof DB, Velthuizen R, Murtagh FR, Silbiger MS (1998) Automatic tumor segmentation using knowledge based techniques. IEEE Trans Med Imaging 17(2):187–201
• Fletcher-Heath LM, Hall LO, Goldgof DB, Murtagh FR (2001) Automatic segmentation of non-enhancing brain tumors in magnetic resonance images. Artif Intell Med 21(1–3):43–63
• Warfield SK, Kaus M, Jolesz FA, Kikinis R (2000) Adaptive, template moderated, spatially varying statistical classification. Med Image Anal 4(1):43–55
• Kaus MR, Warfield SK, Nabavi A, Black PM, Jolesz FA, Kikinis R (2001) Automated segmentation of MR images of brain tumors. Radiology 218(2):586–591
• Guillemaud R, Brady M (1997) Estimating the bias field of MR images. IEEE Trans Med Imaging 16(3):238–251
• Corso JJ, Sharon E, Dube S, El-Saden S, Sinha U, Yuille A (2008) Efficient multilevel brain tumor segmentation with integrated Bayesian model classification. IEEE Trans Med Imaging 27(5):629–640
• Zhou J, Chan K, Chong V, Krishnan S (2005) Extraction of brain tumor from MRI images using one-class support vector machine. In: 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE-EMBS 2005), pp 6411–6414
• Corso J, Yuille A, Sicotte N, Toga A (2007) Detection and segmentation of pathological structures by the extended graph-shifts algorithm. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp 985–993
• Schapire RE, Freund Y, Bartlett P, Lee WS (1998) Boosting the margin: a new explanation for the effectiveness of voting methods. Ann Stat 26(5):1651–1686

7. APPENDICES

A. SYSTEM FLOW DIAGRAM

B. SAMPLE CODING

JAVA

Bat.java

/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package brain;

import java.util.ArrayList;

/*
 * @author admin
 */
public class Bat {

    private ArrayList<FeatureMap> feature_maps; // FeatureMap is defined elsewhere in the project
    private int kernel_size;
    private int stride;
    private int padding;
    public int countFeatureMaps;
    private int input_size;
    public int outputVol;

    // Unpacks the layer hyper-parameters from a single packed integer:
    // bits 28-31 = padding, bits 16-23 = stride, bits 8-15 = kernel size,
    // bits 0-7 = number of feature maps.
    public void setHyperParameters(int hyperParameter) {
        padding = (hyperParameter >> 28) & 0xF;
        stride = (hyperParameter >> 16) & 0xFF;
        kernel_size = (hyperParameter >> 8) & 0xFF;
        countFeatureMaps = hyperParameter & 0xFF;

        if (kernel_size <= 0 || countFeatureMaps <= 0) {
            System.out.println("Invalid parameter passed to Convolution layer constructor");
        }
    }

    // Fitness is defined elsewhere in the project; outputVolume() and
    // addFeatureMap() are declared in the remainder of this class (not shown).
    public Bat(Fitness poolLayer, int hyperparameters, boolean debugSwitch) {
        setHyperParameters(hyperparameters);
        this.feature_maps = new ArrayList<FeatureMap>();
        input_size = poolLayer.outputVolume();
        outputVol = outputVolume();

        for (int i = 0; i < countFeatureMaps; i++) {
            FeatureMap featureMap = new FeatureMap(input_size, kernel_size, outputVol, debugSwitch);
            addFeatureMap(featureMap);
            featureMap.initKernel();
        }
    }

    public ArrayList<FeatureMap> get_fMaps() {
        return feature_maps;
    }

    public int getKernelSize() {
        return kernel_size;
    }

    // ... remainder of the class (outputVolume(), addFeatureMap(), etc.) omitted in this sample
}
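As a usage note, the Bat constructor expects its four hyper-parameters packed into a single int, matching the bit layout unpacked in setHyperParameters. A hypothetical packing helper (not part of the original source) and an example call could look like this:

// Hypothetical helper: packs padding (bits 28-31), stride (bits 16-23),
// kernel size (bits 8-15), and feature-map count (bits 0-7) into one int.
public final class HyperParameterPacking {
    public static int pack(int padding, int stride, int kernelSize, int featureMaps) {
        return ((padding & 0xF) << 28) | ((stride & 0xFF) << 16)
                | ((kernelSize & 0xFF) << 8) | (featureMaps & 0xFF);
    }
}

// Example: 3x3 kernel, stride 1, no padding, 8 feature maps.
// int hp = HyperParameterPacking.pack(0, 1, 3, 8);
// Bat convolutionLayer = new Bat(previousPoolLayer, hp, false);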

C. SAMPLE INPUT

D. SAMPLE OUTPUT

