
UNIT-4 Data Preprocessing

Data Preprocessing
Data preprocessing is the process of transforming raw data into an
understandable format. It is an important step in data mining, because raw
data usually cannot be used directly: the quality of the data should be
checked and improved before machine learning or data mining algorithms are applied.

Why is Data preprocessing important?


Preprocessing is mainly concerned with checking and improving the quality of
the data. Data quality can be assessed along the following dimensions:

 Accuracy: whether the recorded values are correct.
 Completeness: whether all required data is available and has been recorded.
 Consistency: whether copies of the same data stored in different places match.
 Timeliness: whether the data is kept up to date.
 Believability: whether the data can be trusted.
 Interpretability: how easily the data can be understood.

Major Tasks in Data Preprocessing:

1. Data cleaning
2. Data integration
3. Data reduction
4. Data transformation

Data cleaning:
Data cleaning is the process of removing or correcting incorrect, incomplete,
and inaccurate data in a dataset, and of filling in missing values. Some
common data cleaning techniques are described below.

1.Handling missing values:

 Standard values like “Not Available” or “NA” can be used to fill in
missing values.
 Missing values can also be filled in manually, but this is not practical
when the dataset is large.
 The attribute’s mean can be used to replace a missing value when the data
is approximately normally distributed; when the distribution is not normal,
the attribute’s median is the better choice (see the sketch after this list).
 With regression or decision tree algorithms, a missing value can be
replaced by the most probable value predicted from the other attributes.
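A minimal sketch of mean and median imputation with pandas (the column names
and values below are invented for illustration; the library choice is an assumption):

import pandas as pd
import numpy as np

# Hypothetical data with missing values (NaN)
df = pd.DataFrame({
    "age":    [23, 25, np.nan, 30, 28],               # roughly symmetric -> use the mean
    "income": [20000, 22000, 250000, np.nan, 24000],  # skewed -> use the median
})

# Replace each missing value with the attribute's mean or median
df["age"] = df["age"].fillna(df["age"].mean())
df["income"] = df["income"].fillna(df["income"].median())

print(df)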

2.Handling noisy data:
Noise is random error or variance in the data, including unnecessary or
meaningless data points. Here are some of the methods used to handle noisy data.

 Binning: This method smooths sorted data by distributing it into bins
(buckets). First the data is sorted, then the sorted values are partitioned
into bins. There are three ways of smoothing the values within a bin
(a short sketch follows this list).
 Smoothing by bin means: each value in a bin is replaced by the mean value
of that bin.
 Smoothing by bin medians: each value in a bin is replaced by the median
value of that bin.
 Smoothing by bin boundaries: the minimum and maximum values of a bin are
taken as its boundaries, and each value in the bin is replaced by the closest
boundary value.
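A minimal sketch of equal-frequency binning with smoothing by bin means (the
data values and bin size are just an example):

import numpy as np

# Example data, sorted first as the method requires
data = np.sort(np.array([4, 8, 15, 21, 21, 24, 25, 28, 34]))
bin_size = 3  # equal-frequency bins of 3 values each

smoothed = []
for i in range(0, len(data), bin_size):
    bin_vals = data[i:i + bin_size]
    # Smoothing by bin means: every value in the bin becomes the bin's mean
    smoothed.extend([bin_vals.mean()] * len(bin_vals))

print(smoothed)  # [9.0, 9.0, 9.0, 22.0, 22.0, 22.0, 29.0, 29.0, 29.0]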

3.Regression: Regression smooths the data by fitting it to a function, which
helps when unnecessary variation (noise) is present. For analysis purposes,
regression also helps to decide which variables are suitable for the analysis.
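A rough sketch of regression-based smoothing, assuming scikit-learn is
available: a linear model is fitted to noisy synthetic data and its
predictions are used as the smoothed values.

import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic noisy data: y roughly follows a straight line plus random error
rng = np.random.default_rng(0)
x = np.arange(20).reshape(-1, 1)
y = 3 * x.ravel() + 5 + rng.normal(0, 4, size=20)

# Fit a linear model and use its predictions as the smoothed values
model = LinearRegression().fit(x, y)
y_smoothed = model.predict(x)

print(y_smoothed[:5])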

4. Clustering: Clustering groups similar data points together and can also be
used to find outliers, since values that do not fall into any cluster are
likely to be noise. Clustering is generally used in unsupervised learning.
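As one possible illustration, the sketch below uses DBSCAN (a density-based
clustering algorithm from scikit-learn; the choice of algorithm, data, and
parameters is an assumption) to group dense regions and flag the point that
belongs to no cluster as an outlier.

import numpy as np
from sklearn.cluster import DBSCAN

# Two dense groups of points plus one isolated point
X = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.2],
              [12.0, 0.0]])  # isolated point

# Points that belong to no dense region are labelled -1 (noise/outliers)
labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(X)

print(labels)           # e.g. [0 0 0 1 1 1 -1]
print(X[labels == -1])  # the isolated point, flagged as an outlier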

Data integration:
Data integration is the process of combining data from multiple sources into
a single dataset. It is one of the main components of data management.
Several problems have to be considered during data integration.

1.Schema integration: Integrating metadata (data that describes other data)
from different sources.

2.Entity identification problem: Matching entities across multiple databases.
For example, the system or the user should be able to recognise that
student_id in one database and student_name in another database refer to the
same entity.

3.Detecting and resolving data value conflicts: Values taken from different
databases may differ when they are merged. For example, the attribute values
in one database may differ from those in another, such as a date stored as
“MM/DD/YYYY” in one source and “DD/MM/YYYY” in another.
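A small sketch of resolving one such conflict: dates from two hypothetical
sources, stored in different formats, are parsed into a single common
representation with pandas (the column and variable names are invented).

import pandas as pd

# Hypothetical records from two databases that store dates differently
source_a = pd.DataFrame({"student_id": [1, 2], "dob": ["03/25/2004", "12/01/2003"]})  # MM/DD/YYYY
source_b = pd.DataFrame({"student_id": [3, 4], "dob": ["25/03/2004", "01/12/2003"]})  # DD/MM/YYYY

# Parse each source with its own format, then merge into one dataset
source_a["dob"] = pd.to_datetime(source_a["dob"], format="%m/%d/%Y")
source_b["dob"] = pd.to_datetime(source_b["dob"], format="%d/%m/%Y")

merged = pd.concat([source_a, source_b], ignore_index=True)
print(merged)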

Data reduction:
This process reduces the volume of the data, which makes analysis easier
while producing the same (or almost the same) results. It also helps to
reduce storage space. Common data reduction techniques are dimensionality
reduction, numerosity reduction, and data compression.

1.Dimensionality reduction: This process is necessary for real-world
applications where the data is large. It reduces the number of random
variables or attributes under consideration, so the dimensionality of the
dataset decreases; attributes are combined or merged without losing the
essential characteristics of the original data. This also reduces storage
space and computation time. When the data has very many dimensions, the
problem known as the “curse of dimensionality” occurs.
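A minimal sketch of dimensionality reduction with principal component
analysis (PCA), assuming scikit-learn is available; the 4-dimensional data
below is synthetic.

import numpy as np
from sklearn.decomposition import PCA

# Synthetic data: 100 samples with 4 strongly correlated attributes
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
X = np.hstack([base, base + rng.normal(0, 0.05, size=(100, 2))])

# Project onto the 2 directions that capture most of the variance
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)        # (100, 4) -> (100, 2)
print(pca.explained_variance_ratio_.sum())   # close to 1.0 for this data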

2.Numerosity Reduction: In this method the data is replaced by a smaller
representation of itself (for example a model, a histogram, or a sample),
which reduces the volume while preserving the essential information.
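One simple non-parametric form of numerosity reduction is a histogram:
instead of keeping every value, only the bin boundaries and counts are
stored. A sketch with NumPy (the data and bin count are arbitrary):

import numpy as np

# 10,000 raw values summarised by 10 bin counts and 11 bin edges
values = np.random.default_rng(0).normal(loc=50, scale=10, size=10_000)
counts, edges = np.histogram(values, bins=10)

print(counts)  # how many values fall into each bin
print(edges)   # the bin boundaries that stand in for the raw data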

3.Data compression: Data compression encodes the data in a reduced
(compressed) form. Compression can be lossless or lossy: when no information
is lost and the original data can be reconstructed exactly, the compression
is called lossless, whereas lossy compression discards some information,
ideally only information that is not needed.
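A tiny sketch of lossless compression using Python's built-in zlib module;
the original bytes are recovered exactly.

import zlib

data = b"data preprocessing " * 100       # repetitive data compresses well
compressed = zlib.compress(data)
restored = zlib.decompress(compressed)

print(len(data), "->", len(compressed))   # far fewer bytes
print(restored == data)                   # True: lossless, nothing was lost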

Data Transformation:
Data transformation is a change made to the format or structure of the data.
This step can be simple or complex depending on the requirements. Some common
data transformation methods are described below.

1.Smoothing: With the help of algorithms we can remove noise from the
dataset, which makes the important features of the dataset easier to
recognise. After smoothing, even small but genuine changes in the data become
easier to detect, which helps prediction.
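One common smoothing technique is a moving average; the sketch below applies
a 5-point rolling mean to a noisy series with pandas (window size chosen arbitrarily).

import numpy as np
import pandas as pd

# A noisy upward trend
rng = np.random.default_rng(0)
series = pd.Series(np.arange(30) + rng.normal(0, 3, size=30))

# 5-point moving average: noise is damped, the underlying trend remains
smoothed = series.rolling(window=5, center=True).mean()

print(smoothed.head(10))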

2.Aggregation: In this method the data is stored and presented in the form of
a summary; data collected from multiple sources is integrated and described
together for analysis. This is an important step, since the relevance of the
results depends on the quantity and quality of the data: when both are good,
the results are more relevant.
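A small sketch of aggregation with pandas: detailed daily records (invented
here) are summarised into monthly totals.

import pandas as pd

# Invented detailed sales records
sales = pd.DataFrame({
    "date":   pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-03", "2024-02-17"]),
    "amount": [120, 80, 200, 50],
})

# Aggregate: present the data as a monthly summary instead of raw rows
monthly = sales.groupby(sales["date"].dt.to_period("M"))["amount"].sum()
print(monthly)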
3.Discretization: Continuous data is split into intervals, which reduces the
data size. For example, rather than specifying the exact class time, we can
use an interval such as 3 pm-5 pm or 6 pm-8 pm.
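A minimal sketch of discretization with pandas.cut, splitting continuous ages
(made-up values) into labelled intervals:

import pandas as pd

ages = pd.Series([5, 13, 22, 37, 45, 61, 70])

# Replace exact ages with interval labels
groups = pd.cut(ages, bins=[0, 18, 40, 65, 100],
                labels=["child", "young adult", "middle-aged", "senior"])
print(groups)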

4.Normalization: Normalization scales the data so that it can be represented
in a smaller, common range, for example from -1.0 to 1.0.
Z-Score Normalization
Z-score normalization rescales each value according to how many standard
deviations it lies from the mean. Normalizing the data into this simpler,
unit-free form makes it much easier to understand and compare.

Z-Score Formula

z = (x – mean) / (standard deviation)

How to calculate Z-Score of the following data?


marks
8
10
15
20

Mean = 13.25
Standard deviation ≈ 4.66

marks    marks after z-score normalization
8        -1.13
10       -0.70
15       0.38
20       1.45
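The same calculation as a short Python sketch (using the population standard
deviation, as in the table above):

import numpy as np

marks = np.array([8, 10, 15, 20])

mean = marks.mean()           # 13.25
std = marks.std()             # population standard deviation, about 4.66
z_scores = (marks - mean) / std

print(np.round(z_scores, 2))  # [-1.13 -0.7   0.38  1.45]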

Min-Max Normalization

To normalize the values in a dataset to be between 0 and 1, you can use the
following formula:

zi = (xi – min(x)) / (max(x) – min(x))

where:

 zi: The ith normalized value in the dataset
 xi: The ith value in the dataset
 min(x): The minimum value in the dataset
 max(x): The maximum value in the dataset

For example, suppose we have a dataset whose minimum value is 13 and whose
maximum value is 71.

To normalize the first value of 13, we would apply the formula shared
earlier:

 zi = (xi – min(x)) / (max(x) – min(x)) = (13 – 13) / (71 – 13) = 0

To normalize the second value of 16, we would use the same formula:

 zi = (xi – min(x)) / (max(x) – min(x)) = (16 – 13) / (71 – 13) = 0.0517

To normalize the third value of 19, we would use the same formula:

 zi = (xi – min(x)) / (max(x) – min(x)) = (19 – 13) / (71 – 13) = 0.1034

We can use this exact same formula to normalize each value in the original
dataset to be between 0 and 1.
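A minimal sketch of the same min-max formula in Python; the array below is
illustrative (only the values 13, 16, 19 and the maximum 71 come from the
example above, the rest are made up).

import numpy as np

# Illustrative data containing the example values (minimum 13, maximum 71)
x = np.array([13, 16, 19, 25, 34, 48, 56, 71])

z = (x - x.min()) / (x.max() - x.min())
print(np.round(z, 4))  # starts with 0.0, 0.0517, 0.1034, ...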
