DWDM 3-1 Unit 2
UNIT II
Data Preprocessing
Data Pre-processing: An Overview, Data Cleaning, Data Integration, Data Reduction, Data
Transformation and Data Discretization.
Data Preprocessing
Data preprocessing describes any type of processing performed on raw data to prepare it for another
processing procedure. Commonly used as a preliminary data mining practice, data preprocessing
transforms the data into a format that will be more easily and effectively processed for the purpose of the
user.
Data in the real world is dirty: it can be incomplete, noisy, and inconsistent. Such data needs to be
preprocessed in order to improve the quality of the data and the quality of the mining results.
❖ If there is no quality data, there are no quality mining results; quality decisions must be based
on quality data.
❖ If there is much irrelevant or redundant information, or noisy and unreliable data, then
knowledge discovery during the training phase becomes more difficult.
Incomplete data: lacking attribute values, lacking certain attributes of interest, or containing only
aggregate data. e.g., occupation=“ ”.
❖ Data cleaning
➢ Fill in missing values, smooth noisy data, identify or remove outliers, and resolve
inconsistencies
❖ Data integration
➢ Integration of multiple databases, data cubes, or files
❖ Data transformation
➢ Normalization and aggregation
❖ Data reduction
➢ Obtains reduced representation in volume but produces the same or similar
analytical results
❖ Data discretization
➢ Part of data reduction with particular importance, especially for numerical data
Forms of Data Preprocessing
Measures of Central Tendency
In many real-life situations, it is helpful to describe data by a single number that is
most representative of the entire collection of numbers. Such a number is called a measure of
central tendency. The most commonly used measures are the mean, median, and mode.
Mean: the mean, or average, of n numbers is the sum of the numbers divided by n. That is:
mean = x̄ = (x1 + x2 + ... + xn) / n
Example 1
Find the mean of the marks of seven students in a mathematics test with a maximum possible mark of 20,
given below:
15 13 18 16 14 17 12
Solution:
mean = (15 + 13 + 18 + 16 + 14 + 17 + 12) / 7 = 105 / 7 = 15. The mean mark is 15.
Midrange
The midrange of a data set is the average of the minimum and maximum values.
Median: the median of a set of numbers is the middle number when the numbers are written in order. If n
(the number of values) is even, the median is the average of the two middle numbers.
Example 2
The marks of nine students in a geography test that had a maximum possible mark of 50 are
given below:
47 35 37 32 38 39 36 34 35
Solution:
Arrange the data values in order from the lowest value to the highest value:
32 34 35 35 36 37 38 39 47
The fifth data value, 36, is the middle value in this arrangement, so the median is 36.
Note:
In general:
If the number of values in the data set is even, then the median is the average of the two
middle values.
Example 3
Find the median of the following data values: 10, 12, 13, 16, 17, 18, 19, 21
Solution:
Arrange the data values in order from the lowest value to the highest value:
10 12 13 16 17 18 19 21
The number of values in the data set is 8, which is even. So, the median is the average of the
two middle values: median = (16 + 17) / 2 = 16.5.
Trimmed mean
A trimmed mean eliminates extreme observations by removing observations from each end
of the ordered sample. It is calculated by discarding a certain percentage of the lowest and the
highest scores and then computing the mean of the remaining scores.
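As a small illustration, the Python sketch below (a minimal sketch, assuming the marks from Example 1 and an arbitrary trim fraction) drops one value from each end of the ordered sample before averaging:

# Minimal sketch of a trimmed mean: drop a fixed fraction of the
# lowest and highest values, then average what remains.
def trimmed_mean(values, trim_fraction=0.1):
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)        # how many values to drop at each end
    kept = ordered[k:len(ordered) - k] if k > 0 else ordered
    return sum(kept) / len(kept)

marks = [15, 13, 18, 16, 14, 17, 12]             # marks from Example 1
print(trimmed_mean(marks, trim_fraction=0.15))   # drops 12 and 18, then averages the rest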
Mode: the mode of a set of numbers is the number that occurs most frequently. If two numbers tie for most
frequent occurrence, the collection has two modes and is called bimodal.
The mode has applications in printing. For example, it is important to print more of the most
popular books, because printing different books in equal numbers would cause a shortage of
some books and an oversupply of others.
Example 4
Find the mode of the following data values: 48 44 48 45 42 49 48
Solution:
The value 48 occurs three times, more often than any other value, so the mode is 48.
• It is possible for a set of data values to have more than one mode.
• If there are two data values that occur most frequently, we say that the set of data values
is bimodal.
• If there are three data values that occur most frequently, we say that the set of data values
is trimodal.
• In general, if two or more data values occur most frequently, we say that the set of data
values is multimodal.
• If no data value occurs more frequently than the others, we say that the set of
data values has no mode.
The mean, median and mode of a data set are collectively known as measures of central
tendency as these three measures focus on where the data is centered or clustered. To analyze
data using the mean, median and mode, we need to use the most appropriate measure of
central tendency. The following points should be remembered:
• The mean is useful for predicting future results when there are no extreme values in the
data set. However, the impact of extreme values on the mean may be important and
should be considered. E.g. the impact of a stock market crash on average investment
returns.
• The median may be more useful than the mean when there are extreme values in the
data set as it is not affected by the extreme values.
• The mode is useful when the most common item, characteristic or value of a data set is
required.
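A short Python sketch using the standard-library statistics module illustrates these measures on a small, made-up data set (the values below are assumptions for the example):

import statistics

data = [15, 13, 18, 16, 14, 17, 12, 15]        # example data (hypothetical)

print("mean   =", statistics.mean(data))       # sum of the values divided by the count
print("median =", statistics.median(data))     # middle value(s) of the sorted data
print("mode   =", statistics.multimode(data))  # most frequent value(s); handles multimodal data
midrange = (min(data) + max(data)) / 2         # average of the minimum and maximum
print("midrange =", midrange)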
Measures of Dispersion
Measures of dispersion measure how spread out a set of data is. The two most commonly used
measures of dispersion are the variance and the standard deviation. Rather than showing how
data values are similar, they show how the data values differ, i.e., their variation, spread, or dispersion.
Other measures of dispersion that may be encountered include the quartiles, the interquartile range
(IQR), the five-number summary, the range, and box plots.
Very different sets of numbers can have the same mean. You will now study two measures of
dispersion, which give you an idea of how much the numbers in a set differ from the mean of
the set. These two measures are called the variance of the set and the standard deviation of the
set. For n values x1, x2, ..., xn with mean x̄, the variance is σ² = (1/n) Σ (xi − x̄)², and the
standard deviation σ is the square root of the variance.
Percentile
Percentiles are values that divide a sample of data into one hundred groups containing (as far as
possible) equal numbers of observations.
The pth percentile of a distribution is the value such that p percent of the observations fall at or
below it.
The most commonly used percentiles other than the median are the 25th percentile and the
75th percentile.
The 25th percentile demarcates the first quartile, the median or 50th percentile demarcates the
second quartile, the 75th percentile demarcates the third quartile, and the 100th percentile
demarcates the fourth quartile.
Quartiles
Quartiles are numbers that divide an ordered data set into four portions, each containing
approximately one-fourth of the data. Twenty-five percent of the data values come before the first
quartile (Q1). The median is the second quartile (Q2); 50% of the data values come before the
median. Seventy-five percent of the data values come before the third quartile (Q3).
Q1 = 25th percentile = the (n × 25/100)th value, where n is the total number of data values in the given data set
Q2 = median = 50th percentile = the (n × 50/100)th value
Q3 = 75th percentile = the (n × 75/100)th value
(If the computed position is not a whole number, it is rounded up to the next whole number.)
The interquartile range (IQR) is the length of the interval between the lower quartile (Q1) and the
upper quartile (Q3). This interval contains the central, or middle, 50% of the data set.
IQR = Q3 − Q1
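The sketch below computes the quartiles and the IQR with NumPy; note that numpy.percentile interpolates between values by default, so its results can differ slightly from the simple positional rule given above.

import numpy as np

data = np.array([71, 74, 75, 76, 76, 79, 79, 81, 82, 82, 85])  # ordered example data

q1, q2, q3 = np.percentile(data, [25, 50, 75])   # lower quartile, median, upper quartile
iqr = q3 - q1                                    # spread of the middle 50% of the data
print(q1, q2, q3, iqr)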
Range
The range of a set of data is the difference between its largest (maximum) and smallest
(minimum) values. In the statistical world, the range is reported as a single number, the difference
between maximum and minimum. Sometimes, the range is often reported as “from (the
minimum) to (the maximum),” i.e., two numbers.
Example 1:
If the smallest value in a data set is 3 and the largest is 8, the range is reported either as the single
number 8 − 3 = 5 or as "from 3 to 8" (3–8). The range gives only minimal information about the spread of the
data, by defining the two extremes. It says nothing about how the data are distributed between
those two endpoints.
Example2:
In this example we find the minimum value, maximum value, and range of the following data:
29, 31, 24, 29, 30, 25
The minimum value is 24 and the maximum value is 31, so the range is 31 − 24 = 7.
Five-Number Summary
The Five-Number Summary of a data set is a five-item list comprising the minimum value, first
quartile, median, third quartile, and maximum value of the set.
Box plots
A box plot is a graph used to represent the range, median, quartiles and inter quartile range of a set
of data values.
(i) Draw a box to represent the middle 50% of the observations of the data set.
(ii) Show the median by drawing a vertical line within the box.
(iii) Draw the lines (called whiskers) from the lower and upper ends of the box to the
minimum and maximum values of the data set respectively, as shown in the following
diagram.
Example: Draw a box plot for the following 11 data values:
76 79 76 74 75 71 85 82 82 79 81
Step 1: Arrange the data values in order from lowest to highest:
71 74 75 76 76 79 79 81 82 82 85
Step 2: Q1 = the (11 × 25/100)th position = 2.75, rounded up to the 3rd value = 75
Step 3: Median (Q2) = the 6th value = 79
Step 4: Q3 = the (11 × 75/100)th position = 8.25, rounded up to the 9th value = 82
Step 5: Minimum = 71, Maximum = 85
The box extends from Q1 = 75 to Q3 = 82 with a line at the median 79, and the whiskers extend to 71 and 85.
Since the quartiles mark the quarter points of the ordered data, they split the data into four approximately
equal parts.
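If matplotlib is available, the same five-number summary can be drawn directly as a box plot; a minimal sketch for the data values above:

import matplotlib.pyplot as plt

values = [76, 79, 76, 74, 75, 71, 85, 82, 82, 79, 81]

# The box spans Q1..Q3, the line inside the box is the median,
# and the whiskers reach out toward the minimum and maximum values.
plt.boxplot(values, vert=False)
plt.xlabel("data value")
plt.title("Box plot of the example data")
plt.show()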
Outliers
An outlier is a data value that falls well outside the overall pattern of the data. Using the quartiles,
outliers are any points below Q1 − 1.5×IQR or above Q3 + 1.5×IQR.
Example:
10.2, 14.1, 14.4, 14.4, 14.4, 14.5, 14.5, 14.6, 14.7, 14.7, 14.7, 14.9, 15.1, 15.9, 16.4
To find out if there are any outliers, we first have to find the IQR. There are fifteen data points, so
the median is the 8th value in the ordered list, 14.6. That is, Q2 = 14.6.
Q1 is the fourth value in the list and Q3 is the twelfth: Q1 = 14.4 and Q3 = 14.9. Then
IQR = Q3 − Q1 = 14.9 − 14.4 = 0.5, and 1.5×IQR = 0.75.
The values Q1 − 1.5×IQR = 13.65 and Q3 + 1.5×IQR = 15.65 are the "fences" that mark off the "reasonable"
values from the outlier values. Outliers lie outside the fences; here, 10.2, 15.9, and 16.4 are outliers.
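A sketch of the same fence computation in Python (quartiles taken positionally here, matching the worked example rather than NumPy's interpolated percentiles):

data = sorted([10.2, 14.1, 14.4, 14.4, 14.4, 14.5, 14.5, 14.6,
               14.7, 14.7, 14.7, 14.9, 15.1, 15.9, 16.4])

q1 = data[3]    # 4th value = 14.4
q3 = data[11]   # 12th value = 14.9
iqr = q3 - q1   # 0.5

lower_fence = q1 - 1.5 * iqr   # 13.65
upper_fence = q3 + 1.5 * iqr   # 15.65

outliers = [x for x in data if x < lower_fence or x > upper_fence]
print(outliers)   # [10.2, 15.9, 16.4]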
1 Histogram
A histogram is a way of summarizing data that are measured on an interval scale (either discrete
or continuous). It is often used in exploratory data analysis to illustrate the major features of the
distribution of the data in a convenient form. It divides up the range of possible values in a data
set into classes or groups. For each group, a rectangle is constructed with a base length equal to
the range of values in that specific group, and an area proportional to the number of observations
falling into that group. This means that the rectangles might be drawn of non-uniform height.
The histogram is only appropriate for variables whose values are numerical and measured on an
interval scale. It is generally used when dealing with large data sets (>100 observations)
A histogram can also help detect any unusual observations (outliers), or any gaps in the data set.
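A minimal matplotlib sketch of a histogram, assuming some made-up measurements and an arbitrary number of equal-width classes:

import matplotlib.pyplot as plt

# Example measurements (hypothetical); in practice this would be a large sample.
values = [2, 3, 3, 4, 5, 5, 5, 6, 7, 7, 8, 9, 9, 10, 12, 12, 13, 15, 18, 21]

plt.hist(values, bins=5, edgecolor="black")  # 5 equal-width classes
plt.xlabel("value")
plt.ylabel("number of observations")
plt.title("Histogram of the example data")
plt.show()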
2 Scatter Plot
A scatter plot is a useful summary of a set of bivariate data (two variables), usually drawn before
working out a linear correlation coefficient or fitting a regression line. It gives a good visual
picture of the relationship between the two variables, and aids the interpretation of the
correlation coefficient or regression model.
Each unit contributes one point to the scatter plot, on which points are plotted but not joined.
The resulting pattern indicates the type and strength of the relationship between the two
variables.
A scatter plot will also show up a non-linear relationship between the two variables and whether
or not there exist any outliers in the data.
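A small sketch of a scatter plot for made-up bivariate data, with the linear correlation coefficient printed alongside it (both variables are assumptions for the example):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical bivariate data: advertising spend vs. sales.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1, 7.8, 9.2])

print("correlation coefficient:", np.corrcoef(x, y)[0, 1])

plt.scatter(x, y)                     # one point per (x, y) pair; points are not joined
plt.xlabel("x (e.g. advertising spend)")
plt.ylabel("y (e.g. sales)")
plt.show()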
3 Loess curve
It is another important exploratory graphic aid that adds a smooth curve to a scatter plot in order to
provide better perception of the pattern of dependence. The word loess is short for “local
regression.”
4 Box plot
The picture produced consists of the most extreme values in the data set (maximum and
minimum values), the lower and upper quartiles, and the median.
5 Quantile plot
◼ Displays all of the data (allowing the user to assess both the overall behavior and unusual
occurrences)
◼ Plots quantile information
◼ For data xi sorted in increasing order, fi indicates that approximately 100·fi% of the data
are below or equal to the value xi
The f quantile is the data value below which approximately a decimal fraction f of the data is
found. That data value is denoted q(f). Each data point can be assigned an f-value. Let a data
set x of length n be sorted from smallest to largest, so that the sorted values have ranks
i = 1, 2, ..., n. The f-value for the observation of rank i is computed as fi = (i − 0.5) / n.
Comparing the quantiles of one distribution against the corresponding quantiles of another (a quantile-quantile, or q-q, plot) gives a comparison that is much more detailed than a simple comparison of means or medians.
A normal distribution is often a reasonable model for the data. Without inspecting the data,
however, it is risky to assume a normal distribution. There are a number of graphs that can be
used to check the deviations of the data from the normal distribution. The most useful tool for
assessing normality is a quantile or QQ plot. This is a scatter plot with the quantiles of the scores
on the horizontal axis and the expected normal scores on the vertical axis.
In other words, it is a graph that shows the quantiles of one univariate distribution against the
corresponding quantiles of another. It is a powerful visualization tool in that it allows the user to
view whether there is a shift in going from one distribution to another.
First, we sort the data from smallest to largest. A plot of these scores against the expected normal
scores should reveal a straight line.
The expected normal scores are calculated by taking the z-scores of (I - ½)/n where I is the rank in
increasing order.
Curvature of the points indicates departures from normality. This plot is also useful for detecting
outliers, which appear as points that are far away from the overall pattern of points.
A quantile plot is a graphical method used to show the approximate percentage of values below
or equal to the independent variable in a univariate distribution. Thus, it displays quantile
information for all the data, where the values measured for the independent variable are plotted
against their corresponding quantile.
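The sketch below builds a normal quantile (q-q) plot by hand, using the (i − 1/2)/n rule described above for the expected normal scores; scipy's norm.ppf supplies the z-scores, and the sample values are an assumption for the example.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Hypothetical sample whose normality we want to inspect.
sample = np.array([4.9, 5.1, 5.3, 5.6, 5.8, 6.0, 6.1, 6.4, 6.7, 7.2])

ordered = np.sort(sample)                    # sort from smallest to largest
n = len(ordered)
f = (np.arange(1, n + 1) - 0.5) / n          # f-values (i - 1/2)/n for rank i
expected = norm.ppf(f)                       # expected normal scores (z-scores)

plt.scatter(ordered, expected)               # roughly a straight line if the data are normal
plt.xlabel("observed value (quantiles of the scores)")
plt.ylabel("expected normal score")
plt.show()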
Data Cleaning
Data cleaning routines attempt to fill in missing values, smooth out noise while identifying
outliers, and correct inconsistencies in the data.
Missing Values
The various methods for handling the problem of missing values in data tuples include:
(a) Ignoring the tuple: This is usually done when the class label is missing (assuming the
mining task involves classification or description). This method is not very effective unless the
tuple contains several attributes with missing values. It is especially poor when the percentage of
missing values per attribute varies considerably.
(b) Manually filling in the missing value: In general, this approach is time-consuming and may
not be a reasonable task for large data sets with many missing values, especially when the value
to be filled in is not easily determined.
(c) Using a global constant to fill in the missing value: Replace all missing attribute values by
the same constant, such as a label like “Unknown,” or −∞. If missing values are replaced by, say,
“Unknown,” then the mining program may mistakenly think that they form an interesting
concept, since they all have a value in common — that of “Unknown.” Hence, although this
method is simple, it is not recommended.
(d) Using the attribute mean for quantitative (numeric) values or attribute mode for
categorical (nominal) values, for all samples belonging to the same class as the given tuple:
For example, if classifying customers according to credit risk, replace the missing value with the
average income value for customers in the same credit risk category as that of the given tuple.
(e) Using the most probable value to fill in the missing value: This may be determined with
regression, inference-based tools using Bayesian formalism, or decision tree induction. For
example, using the other customer attributes in your data set, you may construct a decision tree to
predict the missing values for income.
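A brief pandas sketch of strategies (c) and (d) on a hypothetical customer table; the column names and values are assumptions for the example.

import pandas as pd
import numpy as np

df = pd.DataFrame({
    "credit_risk": ["low", "low", "high", "high", "low"],
    "income":      [42000, np.nan, 28000, np.nan, 51000],
    "occupation":  ["clerk", None, "driver", "clerk", None],
})

# (c) global constant for a categorical attribute
df["occupation"] = df["occupation"].fillna("Unknown")

# (d) attribute mean per class: fill income with the mean income of the
#     tuples that belong to the same credit-risk category
df["income"] = df["income"].fillna(
    df.groupby("credit_risk")["income"].transform("mean")
)

print(df)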
Noisy data:
Noise is a random error or variance in a measured variable. Data smoothing techniques are used to
remove such noise.
1 Binning methods: Binning methods smooth a sorted data value by consulting its "neighborhood",
that is, the values around it. The sorted values are distributed into a number of "buckets",
or bins. Because binning methods consult the neighborhood of values, they perform local
smoothing.
In this technique, the data values are first sorted and partitioned into equi-depth bins; the values in
each bin are then smoothed by the bin means, bin medians, or bin boundaries, etc.
a. Smoothing by bin means: Each value in the bin is replaced by the mean
value of the bin.
b. Smoothing by bin medians: Each value in the bin is replaced by the bin
median.
c. Smoothing by bin boundaries: The minimum and maximum values of a bin are identified as the
bin boundaries. Each bin value is replaced by the closest boundary value.
• Example: Binning Methods for Data Smoothing
o Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
o Partition into (equi-depth) bins (depth of 4, since each bin contains four values):
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
o Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
o Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
In smoothing by bin means, each value in a bin is replaced by the mean value of the bin. For
example, the mean of the values 4, 8, 9, and 15 in Bin 1 is 9; therefore, each original value in this
bin is replaced by the value 9. Similarly, smoothing by bin medians can be employed, in which
each bin value is replaced by the bin median. In smoothing by bin boundaries, the minimum and
maximum values in a given bin are identified as the bin boundaries; each bin value is then
replaced by the closest boundary value.
Suppose that the data for analysis include the attribute age. The age values for the data tuples are
(in increasing order): 13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25, 30, 33, 33, 35, 35, 35,
35, 36, 40, 45, 46, 52, 70.
(a) Use smoothing by bin means to smooth the above data, using a bin depth of 3. Illustrate your
steps.
The following steps are required to smooth the above data using smoothing by bin means with a
bin depth of 3.
• Step 1: Sort the data. (This step is not required here, as the data are already sorted.)
• Step 2: Partition the data into equi-depth bins of depth 3:
Bin 1: 13, 15, 16 Bin 2: 16, 19, 20 Bin 3: 20, 21, 22
Bin 4: 22, 25, 25 Bin 5: 25, 25, 30 Bin 6: 33, 33, 35
Bin 7: 35, 35, 35 Bin 8: 36, 40, 45 Bin 9: 46, 52, 70
• Step 3: Calculate the arithmetic mean of each bin.
• Step 4: Replace each of the values in each bin by the arithmetic mean calculated for the bin.
Bin 1: 14.67, 14.67, 14.67 Bin 2: 18.33, 18.33, 18.33 Bin 3: 21, 21, 21
Bin 4: 24, 24, 24 Bin 5: 26.67, 26.67, 26.67 Bin 6: 33.67, 33.67, 33.67
Bin 7: 35, 35, 35 Bin 8: 40.33, 40.33, 40.33 Bin 9: 56, 56, 56
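A sketch of equi-depth binning with smoothing by bin means in Python (bin depth of 3, matching the age example above); it prints the exact bin means.

def smooth_by_bin_means(sorted_values, depth):
    """Partition already-sorted values into equi-depth bins and
    replace every value in a bin by that bin's mean."""
    smoothed = []
    for start in range(0, len(sorted_values), depth):
        bin_values = sorted_values[start:start + depth]
        bin_mean = sum(bin_values) / len(bin_values)
        smoothed.extend([bin_mean] * len(bin_values))
    return smoothed

ages = [13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25,
        30, 33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70]
print(smooth_by_bin_means(ages, depth=3))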
2 Clustering: Outliers in the data may be detected by clustering, where similar values are
organized into groups, or ‘clusters’. Values that fall outside of the set of clusters may be
considered outliers.
3 Regression: Data can be smoothed by fitting the data to a function, such as with regression.
• Linear regression involves finding the "best" line to fit two variables, so that one
variable can be used to predict the other.
• Multiple linear regression is an extension of linear regression, where more than two
variables are involved and the data are fit to a multidimensional surface.
Using regression to find a mathematical equation to fit the data helps smooth out the noise.
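A small sketch of smoothing by linear regression with numpy.polyfit on made-up (x, y) data: the fitted line's predictions replace the noisy y values.

import numpy as np

# Hypothetical noisy observations of y against x.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.3, 3.9, 6.2, 8.1, 9.7, 12.4, 13.8, 16.1])

slope, intercept = np.polyfit(x, y, deg=1)   # best-fit line y = slope*x + intercept
smoothed_y = slope * x + intercept           # noise-free values predicted by the line

print(slope, intercept)
print(smoothed_y)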
Field overloading: a source of errors that typically occurs when developers compress new
attribute definitions into unused portions of already defined attributes.
Unique rule: a rule that says each value of the given attribute must be different from all other
values of that attribute.
Consecutive rule: a rule that says there can be no missing values between the lowest and highest
values of the attribute, and that all values must also be unique.
Null rule: specifies the use of blanks, question marks, special characters, or other strings that
may indicate the null condition, and how such values should be handled.
Data Integration
Data integration combines data from multiple sources into a coherent data store. Issues to consider
include schema integration, redundancy, and the detection and resolution of data value conflicts.
1. Correlation analysis
Some redundancy can be identified by correlation analysis. The correlation between two numeric
attributes A and B can be measured by the correlation coefficient
r(A,B) = Σ (ai − Ā)(bi − B̄) / (n σA σB)
where n is the number of tuples, Ā and B̄ are the respective mean values of A and B, and σA and
σB are their respective standard deviations.
• If the result of the equation is > 0, then A and B are positively correlated, which means
the values of A increase as the values of B increase. The higher the value, the stronger the
correlation; a high value may indicate redundancy, so one of the attributes can be removed.
• If the result of the equation is = 0, then A and B are independent and there is no
correlation between them.
• If the resulting value is < 0, then A and B are negatively correlated: the values of
one attribute increase as the values of the other attribute decrease.
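As a quick check in Python (NumPy's corrcoef computes the same Pearson correlation coefficient), on two hypothetical numeric attributes:

import numpy as np

A = np.array([2, 4, 6, 8, 10], dtype=float)
B = np.array([1, 2, 2, 4, 5], dtype=float)

r = np.corrcoef(A, B)[0, 1]   # Pearson correlation coefficient of A and B
print(r)                      # close to +1 here, so A and B are strongly positively correlated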
For categorical (discrete) data, a correlation relationship between two attributes can be discovered
by a chi-square (χ²) test.
Data Transformation
Normalization
In normalization, data are scaled to fall within a small, specified range. This is useful for classification
algorithms involving neural networks and for distance-based methods such as nearest-neighbor
classification and clustering. There are three methods for data normalization. They are:
1) min-max normalization
2) z-score normalization
3) normalization by decimal scaling
Min-max normalization: performs a linear transformation on the original data values. It can
be defined as:
v' = ((v − minA) / (maxA − minA)) × (new_maxA − new_minA) + new_minA
where v is the value to be normalized, minA and maxA are the minimum and maximum values of an
attribute A, and [new_minA, new_maxA] is the new normalization range.
Problem: The minimum and maximum values of the attribute income are Rs 12,000/- and Rs 98,000/-
respectively. Map income to the range [0.0, 1.0]. Find the transformed value of Rs 73,600/-.
Solution: v' = ((73,600 − 12,000) / (98,000 − 12,000)) × (1.0 − 0.0) + 0.0 = 61,600 / 86,000 ≈ 0.716.
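A sketch of the three normalization methods in Python; the z-score and decimal-scaling formulas follow their usual textbook definitions (v' = (v − mean)/std_dev, and v' = v / 10^j with j the smallest integer such that max(|v'|) < 1), and the income values are assumptions for the example.

import numpy as np

income = np.array([12000, 35000, 73600, 98000], dtype=float)

# 1) min-max normalization to the range [0, 1]
new_min, new_max = 0.0, 1.0
minmax = (income - income.min()) / (income.max() - income.min()) \
         * (new_max - new_min) + new_min
print(minmax)          # 73600 maps to about 0.716

# 2) z-score normalization
zscore = (income - income.mean()) / income.std()

# 3) normalization by decimal scaling
j = int(np.floor(np.log10(np.abs(income).max()))) + 1   # here j = 5
decimal_scaled = income / 10 ** j                        # all values now below 1 in magnitude
print(zscore, decimal_scaled)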
Data Reduction
Data reduction techniques can be applied to obtain a reduced representation of the data set that is much
smaller in volume, yet closely maintains the integrity of the original data. Data reduction strategies
include:
1. Data cube aggregation, where aggregation operations are applied to the data in the
construction of a data cube.
2. Attribute subset selection, where irrelevant, weakly relevant or redundant
attributes or dimensions may be detected and removed.
3. Dimensionality reduction, where encoding mechanisms are used to reduce the data set
size. Examples: wavelet transforms, principal components analysis.
4. Numerosity reduction, where the data are replaced or estimated by alternative, smaller
data representations such as parametric models (which need store only the model
parameters instead of the actual data) or nonparametric methods such as clustering,
sampling, and the use of histograms.
5. Discretization and concept hierarchy generation, where raw data values for
attributes are replaced by ranges or higher conceptual levels. Data Discretization is a form of
numerosity reduction that is very useful for the automatic generation of concept
hierarchies.
Data cube aggregation: Reduce the data to the concept level needed in the analysis. Queries
regarding aggregated information should be answered using data cube when possible. Data
cubes store multidimensional aggregated information. The following figure shows a data cube
for multidimensional analysis of sales data with respect to annual sales per item type for each
branch.
Each cell holds an aggregate data value, corresponding to a data point in
multidimensional space.
Data cubes provide fast access to pre computed, summarized data, thereby benefiting on- line
analytical processing as well as data mining.
For example, consider sales data recorded per quarter for the years 1997 to 1999.
If the analyst is interested in the annual sales rather than the sales per quarter, the data
can be aggregated so that the resulting data summarize the total sales per year instead of per
quarter. The resulting data set is smaller in volume, without loss of the information necessary for the
analysis task.
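On a relational view of the same idea, aggregation from quarterly to annual sales can be expressed with a pandas group-by; the table below is a made-up example.

import pandas as pd

quarterly = pd.DataFrame({
    "year":    [1997, 1997, 1997, 1997, 1998, 1998, 1998, 1998],
    "quarter": ["Q1", "Q2", "Q3", "Q4", "Q1", "Q2", "Q3", "Q4"],
    "sales":   [224, 408, 350, 586, 300, 416, 390, 620],
})

# Aggregate away the quarter dimension: total sales per year.
annual = quarterly.groupby("year", as_index=False)["sales"].sum()
print(annual)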
Dimensionality Reduction
Dimensionality reduction reduces the data set size by removing irrelevant attributes; methods of
attribute subset selection are applied. Heuristic methods of attribute subset selection are explained here.
Feature selection is a must for any data mining product, because when you build a data
mining model, the dataset frequently contains more information than is needed to build the
model. For example, a dataset may contain 500 columns that describe characteristics of
customers, but perhaps only 50 of those columns are used to build a particular model. If you
keep the unneeded columns while building the model, more CPU and memory are required
during the training process, and more storage space is required for the completed model.
The goal of attribute subset selection is to select a minimum set of features such that the probability
distribution of the different classes given the values for those features is as close as possible to the
original distribution given the values of all features.
1. Step-wise forward selection: The procedure starts with an empty set of attributes. The best of
the original attributes is determined and added to the set. At each subsequent iteration or step, the
best of the remaining original attributes is added to the set.
2. Step-wise backward elimination: The procedure starts with the full set of attributes. At
each step, it removes the worst attribute remaining in the set.
3. Combination of forward selection and backward elimination: The two methods can be
combined so that, at each step, the procedure selects the best attribute and removes the worst
from among the remaining attributes.
4. Decision tree induction: Decision tree induction constructs a flow-chart-like structure where
each internal (non-leaf) node denotes a test on an attribute, each branch corresponds to an
outcome of the test, and each external (leaf) node denotes a class prediction. At each node, the
algorithm chooses the “best" attribute to partition the data into individual classes. When decision
tree induction is used for attribute subset selection, a tree is constructed from the given data. All
attributes that do not appear in the tree are assumed to be irrelevant. The set of attributes
appearing in the tree form the reduced subset of attributes.
If the mining algorithm itself is used to evaluate and determine the attribute subset, the method is
called a wrapper approach; if the selection is done independently of the mining algorithm, it is called
a filter approach. The wrapper approach generally leads to greater accuracy, since it optimizes the
evaluation measure of the algorithm while removing attributes, but it is usually more expensive.
A sketch of step-wise forward selection is shown below.
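A generic sketch of step-wise forward selection in the wrapper style: evaluate stands for any scoring function (for example, cross-validated accuracy of the mining algorithm on the candidate attribute subset) and is an assumption of this example, not a fixed API.

def forward_selection(all_attributes, evaluate, max_attributes):
    """Greedy step-wise forward selection.

    all_attributes : list of attribute names
    evaluate       : callable taking a list of attributes and returning a score
                     (higher is better), e.g. cross-validated model accuracy (assumed)
    """
    selected = []                                   # start with an empty attribute set
    while len(selected) < max_attributes:
        candidates = [a for a in all_attributes if a not in selected]
        if not candidates:
            break
        # pick the attribute whose addition gives the best score
        best = max(candidates, key=lambda a: evaluate(selected + [a]))
        current_score = evaluate(selected) if selected else float("-inf")
        if evaluate(selected + [best]) <= current_score:
            break                                   # adding more attributes no longer helps
        selected.append(best)
    return selected

Step-wise backward elimination is the mirror image: start from the full attribute set and repeatedly drop the attribute whose removal hurts the score least.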
Data compression
• Wavelet transforms
• Principal components analysis.
Wavelet compression is a form of data compression well suited for image compression. The
discrete wavelet transform (DWT) is a linear signal processing technique that, when applied to a
data vector D, transforms it to a numerically different vector, D′, of wavelet coefficients. The
general procedure applies a hierarchical pyramid algorithm:
1. The length, L, of the input data vector must be an integer power of two. This condition can be
met by padding the data vector with zeros, as necessary.
2. Each transform involves applying two functions:
• data smoothing, such as a sum or weighted average
• a weighted difference, which brings out the detailed features of the data
3. The two functions are applied to pairs of the input data, resulting in two sets of data of length
L/2.
4. The two functions are recursively applied to the sets of data obtained in the previous loop,
until the resulting data sets obtained are of desired length.
5. A selection of values from the data sets obtained in the above iterations are designated the
wavelet coefficients of the transformed data.
Wavelet coefficients larger than some user-specified threshold are retained; the
remaining coefficients are set to 0.
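A toy illustration of the idea using the simple Haar transform (pairwise averages as the smoothing function and pairwise half-differences as the detail function); this is a minimal sketch of one level, not the full pyramid algorithm with thresholding.

import numpy as np

def haar_step(x):
    """One level of a Haar-style transform: pairwise averages (smoothing)
    and pairwise half-differences (detail coefficients)."""
    x = np.asarray(x, dtype=float)
    smooth = (x[0::2] + x[1::2]) / 2
    detail = (x[0::2] - x[1::2]) / 2
    return smooth, detail

data = np.array([2, 2, 0, 2, 3, 5, 4, 4], dtype=float)   # length is a power of two
smooth, detail = haar_step(data)
print(smooth)   # [2. 1. 4. 4.]   -> transformed again recursively in a full DWT
print(detail)   # [0. -1. -1. 0.] -> small coefficients can be thresholded to 0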
Principal components analysis (PCA) searches for k orthogonal vectors (the principal components)
that can best be used to represent the data, where k is no larger than the number of original
attributes; the data are then projected onto these new axes. The principal components give important
information about variance, and using the strongest components one can reconstruct a good
approximation of the original signal.
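A compact NumPy sketch of principal components analysis via the singular value decomposition of the mean-centered data; the toy matrix is an assumption of the example.

import numpy as np

# Toy data: 6 tuples described by 3 numeric attributes.
X = np.array([[2.5, 2.4, 0.5],
              [0.5, 0.7, 2.1],
              [2.2, 2.9, 0.4],
              [1.9, 2.2, 0.9],
              [3.1, 3.0, 0.2],
              [2.3, 2.7, 0.6]])

X_centered = X - X.mean(axis=0)                     # center each attribute on its mean
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 2                                               # keep the 2 strongest components
components = Vt[:k]                                 # new orthogonal axes
explained_variance = S**2 / (len(X) - 1)            # variance captured by each axis
X_reduced = X_centered @ components.T               # data projected onto the new axes

print(explained_variance)
print(X_reduced)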
Numerosity Reduction
Data volume can be reduced by choosing alternative, smaller forms of data representation. These
techniques may be:
• Parametric methods
• Non-parametric methods
Parametric: Assume the data fits some model, then estimate model parameters, and store only
the parameters, instead of actual data.
Non-parametric: histograms, clustering, and sampling are used to store a reduced form of the
data.
2 Histogram
⎯ Divide data into buckets and store average (sum) for each bucket
⎯ A bucket represents an attribute-value/frequency pair
⎯ It can be constructed optimally in one dimension using dynamic programming
⎯ It divides up the range of possible values in a data set into classes or groups. For each
group, a rectangle (bucket) is constructed with a base length equal to the range of values
in that specific group, and an area proportional to the number of observations
falling into that group.
⎯ The buckets are displayed in a horizontal axis while height of a bucket represents the
average frequency of the values.
Example:
The following data are a list of prices of commonly sold items. The numbers have been sorted.
1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18,
18, 18, 18, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30.
Draw a histogram for price where each bucket has an equal width of 10.
The buckets can be determined based on partitioning rules, including the following:
• Equi-width: the width of each bucket range is uniform.
• Equi-depth (equi-frequency): each bucket contains roughly the same number of contiguous data samples.
• V-Optimal: of all possible histograms for a given number of buckets, the V-Optimal histogram is the one
with the least variance, where the variance measures how much the values in each bucket differ from the
bucket average.
• MaxDiff: bucket boundaries are placed between the pairs of adjacent values having the largest
differences, for a user-specified number of buckets.
V-Optimal and MaxDiff histograms tend to be the most accurate and practical. Histograms are
highly effective at approximating both sparse and dense data, as well as highly skewed and
uniform data.
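A sketch of the equi-width bucket summary for the price list above, using numpy.histogram with a bucket width of 10; only the bucket boundaries and counts are kept, which is the reduced representation.

import numpy as np

prices = [1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15,
          15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20,
          20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30]

# Equi-width buckets of width 10: roughly 1-10, 11-20, 21-30
counts, edges = np.histogram(prices, bins=[1, 11, 21, 31])
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"bucket {lo}-{hi}: {c} values")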
Clustering techniques consider data tuples as objects. They partition the objects into groups or
clusters, so that objects within a cluster are “similar" to one another and “dissimilar" to objects in
other clusters. Similarity is commonly defined in terms of how “close" the objects are in space,
based on a distance function.
The quality of clusters may be measured by their diameter (the maximum distance between any two objects
in the cluster) or by the centroid distance (the average distance of each cluster object from its centroid).
Sampling
Sampling can be used as a data reduction technique since it allows a large data set to be
represented by a much smaller random sample (or subset) of the data. Suppose that a large data set,
D, contains N tuples. Let's have a look at some possible samples for D.
1. Simple random sample without replacement (SRSWOR) of size n: This is created by
drawing n of the N tuples from D (n < N), where the probability of drawing any tuple in D is 1/N,
i.e., all tuples are equally likely to be drawn.
2. Simple random sample with replacement (SRSWR) of size n: This is similar to
SRSWOR, except that each time a tuple is drawn from D, it is recorded and then replaced. That
is, after a tuple is drawn, it is placed back in D so that it may be drawn again.
3. Cluster sample: If the tuples in D are grouped into M mutually disjoint “clusters", then a SRS
of m clusters can be obtained, where m < M. For example, tuples in a database are usually
retrieved a page at a time, so that each page can be considered a cluster. A reduced data
representation can be obtained by applying, say, SRSWOR to the pages, resulting in a cluster
sample of the tuples.
4. Stratified sample: If D is divided into mutually disjoint parts called “strata", a stratified
sample of D is generated by obtaining a SRS at each stratum. This helps to ensure a
representative sample, especially when the data are skewed. For example, a stratified sample
may be obtained from customer data, where a stratum is created for each customer age group. In
this way, the age group having the smallest number of customers will be sure to be represented.
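A pandas/NumPy sketch of the four sampling schemes on a hypothetical table D with an age_group column; the column names, "page" blocks, and sample sizes are assumptions for the example.

import pandas as pd
import numpy as np

rng = np.random.default_rng(42)
D = pd.DataFrame({
    "customer_id": range(1, 101),
    "age_group": rng.choice(["young", "middle", "senior"], size=100, p=[0.6, 0.3, 0.1]),
})

srswor = D.sample(n=10, replace=False)          # simple random sample without replacement
srswr  = D.sample(n=10, replace=True)           # with replacement: a tuple may be drawn again

# Cluster sample: treat blocks of 20 consecutive tuples as "pages" and sample whole pages.
D["page"] = D.index // 20
chosen_pages = rng.choice(D["page"].unique(), size=2, replace=False)
cluster_sample = D[D["page"].isin(chosen_pages)]

# Stratified sample: an SRS inside every age group, so small groups are represented.
stratified = D.groupby("age_group").sample(frac=0.1)

print(len(srswor), len(srswr), len(cluster_sample), len(stratified))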
Advantages of sampling
1. An advantage of sampling for data reduction is that the cost of obtaining a sample is
proportional to the size of the sample, n, as opposed to N, the data set size. Hence,
sampling complexity is potentially sub-linear to the size of the data.
2. When applied to data reduction, sampling is most commonly used to estimate the
answer to an aggregate query.
Concept Hierarchy
A concept hierarchy for a given numeric attribute defines a Discretization of the attribute.
Concept hierarchies can be used to reduce the data by collecting and replacing low level
concepts (such as numeric values for the attribute age) by higher level concepts (such as young,
middle-aged, or senior).
Discretization and Concept hierarchy for numerical data:
There are five methods for numeric concept hierarchy generation. These include:
1. binning,
2. histogram analysis,
3. clustering analysis,
4. entropy-based Discretization, and
5. data segmentation by “natural partitioning".
An information-based measure called “entropy" can be used to recursively partition the values of
a numeric attribute A, resulting in a hierarchical Discretization.
Procedure: each candidate split value of the attribute is evaluated by the entropy (expected information)
of the partition it induces with respect to the class labels; the value that minimizes this entropy is
chosen as the split point, and the procedure is applied recursively to each resulting interval until a
stopping criterion, such as a minimum information gain, is met.
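A minimal Python sketch of choosing one entropy-based split point for a numeric attribute, assuming hypothetical attribute values and class labels; a full discretization would call this recursively on each resulting interval.

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    """Return the split point of `values` that minimizes the weighted
    entropy (expected information) of the two resulting partitions."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_point, best_info = None, float("inf")
    for i in range(1, n):
        if pairs[i][0] == pairs[i - 1][0]:
            continue                              # no boundary between equal values
        point = (pairs[i][0] + pairs[i - 1][0]) / 2
        left = [lab for v, lab in pairs[:i]]
        right = [lab for v, lab in pairs[i:]]
        info = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
        if info < best_info:
            best_point, best_info = point, info
    return best_point

ages   = [23, 25, 27, 30, 35, 38, 42, 47, 52, 60]          # hypothetical attribute values
labels = ["no", "no", "no", "no", "yes", "yes", "yes", "yes", "yes", "yes"]
print(best_split(ages, labels))   # 32.5: the boundary between the two classes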
Data segmentation by natural partitioning (the 3-4-5 rule): the rule can be used to segment numeric data
into relatively uniform, natural-seeming intervals. It partitions a given range into 3, 4, or 5 relatively
equal-width intervals, recursively and level by level, based on the value range at the most significant
digit: if the range covers 3, 6, 7, or 9 distinct values at the most significant digit, partition it into
3 intervals; if it covers 2, 4, or 8 distinct values, partition it into 4 equal-width intervals; and if it
covers 1, 5, or 10 distinct values, partition it into 5 equal-width intervals.
Example:
Suppose that profits at different branches of a company for the year 1997 cover a wide range,
from -$351,976.00 to $4,700,896.50. A user wishes to have a concept hierarchy for profit
automatically generated
Suppose that the data within the 5%-tile and 95%-tile are between -$159,876 and
$1,838,761. The results of applying the 3-4-5 rule are shown in following figure
Step 1: Based on the above information, the minimum and maximum values are MIN = -$351,976.00
and MAX = $4,700,896.50. The low (5th percentile) and high (95th percentile) values to be
considered for the top or first level of segmentation are LOW = -$159,876 and HIGH = $1,838,761.
Step 2: Given LOW and HIGH, the most significant digit is at the million-dollar digit position
(i.e., msd = 1,000,000). Rounding LOW down to the million-dollar digit, we get LOW' = -$1,000,000;
rounding HIGH up to the million-dollar digit, we get HIGH' = +$2,000,000.
Step 3: Since this interval ranges over 3 distinct values at the most significant digit, i.e.,
($2,000,000 - (-$1,000,000)) / $1,000,000 = 3, the segment is partitioned into 3 equi-width sub-segments
according to the 3-4-5 rule: (-$1,000,000 - $0], ($0 - $1,000,000], and ($1,000,000 - $2,000,000].
This represents the top tier of the hierarchy.
Step 4: We now examine the MIN and MAX values to see how they "fit" into the first-level
partitions. Since the first interval, (-$1,000,000 - $0], covers the MIN value, i.e., LOW' < MIN,
we can adjust the left boundary of this interval to make the interval smaller. The most
significant digit of MIN is at the hundred-thousand-dollar digit position. Rounding MIN down to this
position, we get MIN' = -$400,000.
Therefore, the first interval is redefined as (-$400,000 - $0]. Since the last interval, ($1,000,000 -
$2,000,000], does not cover the MAX value, i.e., MAX > HIGH', we need to create a new interval
to cover it. Rounding MAX up at its most significant digit position, the new interval is
($2,000,000 - $5,000,000]. Hence, the topmost level of the hierarchy contains four partitions:
(-$400,000 - $0], ($0 - $1,000,000], ($1,000,000 - $2,000,000], and ($2,000,000 - $5,000,000].
Step 5: Recursively, each interval can be further partitioned according to the 3-4-5 rule to form
the next lower level of the hierarchy:
− The first interval (-$400,000 - $0] is partitioned into 4 sub-interval s: (-$400,000 - -