Data Cleaning and Data Preprocessing
Based on: Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques, http://www.cs.sfu.ca
Gregory Piatetsky-Shapiro, KDnuggets Data Mining Course, http://www.kdnuggets.com/data_mining_course/
Outline
Introduction
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Number of targets
Rule of thumb: >100 examples for each class; if very unbalanced, use stratified sampling
Broad categories of data quality:
intrinsic, contextual, representational, and accessibility
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data transformation
Normalization and aggregation
Data reduction
Obtains reduced representation in volume but produces the same or similar analytical results
Data discretization
Part of data reduction but with particular importance, especially for numerical data
Data Cleaning
Data cleaning tasks
Data acquisition and metadata
Fill in missing values
Unified date format
Converting nominal to numeric
Identify outliers and smooth out noisy data
Correct inconsistent data
Clean data
0000000001,199706,1979.833,8014,5722 , ,#000310 . ,111,03,000101,0,04,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0300, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0300,0300.00
Field roles:
input: inputs for modeling
target: output
id/auxiliary: keep, but do not use for modeling
ignore: do not use for modeling
weight: instance weight
Field descriptions
Missing Data
Data is not always available
E.g., many tuples have no recorded value for several attributes, such as customer income in sales data
Problem:
values are non-obvious
do not help intuition and knowledge discovery
harder to verify, easier to make an error
Unified numeric date: date = YYYY + (day_of_year - 0.5) / (365 + 1_if_leap_year)
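A minimal Python sketch of this conversion; the unified_date helper is illustrative, not from the original course code:

    from datetime import date

    def unified_date(d: date) -> float:
        # day_of_year runs from 1 (Jan 1) to 365/366 (Dec 31)
        day_of_year = d.timetuple().tm_yday
        leap = d.year % 4 == 0 and (d.year % 100 != 0 or d.year % 400 == 0)
        return d.year + (day_of_year - 0.5) / (365 + (1 if leap else 0))

    print(unified_date(date(1999, 1, 1)))    # ~1999.001
    print(unified_date(date(1999, 12, 31)))  # ~1999.999

The half-day offset keeps every date strictly inside its year, so the numeric values sort exactly like the calendar dates.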
Q: Why is it important to preserve natural order?
A: To allow meaningful comparisons, e.g. Grade > 3.5
ID     C_red   C_orange   C_yellow
371    1       0          0
433    0       0          1
Q: How to deal with such fields?
A: Ignore ID-like fields whose values are unique for each record
For other fields, group values naturally:
e.g. 50 US States → 3 or 5 regions
Profession → select the most frequent ones, group the rest
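A small sketch of grouping infrequent values; the group_rare helper and example data are our own design:

    from collections import Counter

    def group_rare(values, top_n=10, other="Other"):
        # Keep the top_n most frequent values; lump everything else together
        keep = {v for v, _ in Counter(values).most_common(top_n)}
        return [v if v in keep else other for v in values]

    professions = ["teacher", "nurse", "teacher", "falconer", "nurse", "teacher"]
    print(group_rare(professions, top_n=2))
    # ['teacher', 'nurse', 'teacher', 'Other', 'nurse', 'teacher']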
Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may be due to:
faulty data collection instruments
data entry problems
data transmission problems
technology limitation
inconsistency in naming convention
Clustering
detect and remove outliers
Regression
smooth by fitting the data into regression functions
Cluster Analysis
(figure: data points grouped into clusters; values falling outside every cluster are outliers)
Regression
(figure: points (X1, Y1) plotted with the fitted regression line y = x + 1, used to smooth noisy values)
Data Integration
Data integration:
combines data from multiple sources into a coherent store
Schema integration
integrate metadata from different sources
Entity identification problem: identify real-world entities from multiple data sources, e.g., A.cust-id ≡ B.cust-#
Redundant data may be detected by correlation analysis
Careful integration of data from multiple sources can reduce or avoid redundancies and inconsistencies, and improve mining speed and quality
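A rough numpy illustration of correlation analysis for spotting redundancy; the age/birth_year/income fields are invented:

    import numpy as np

    rng = np.random.default_rng(0)
    age = rng.uniform(20, 60, 1000)
    birth_year = 2024 - age                      # fully determined by age
    income = rng.normal(50_000, 10_000, 1000)    # unrelated to age here

    # Rows are variables; |r| near 1 flags a likely redundant pair
    print(np.round(np.corrcoef([age, birth_year, income]), 2))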
Data Transformation
Smoothing: remove noise from data
Aggregation: summarization, data cube construction
Generalization: concept hierarchy climbing
Normalization: scaled to fall within a small, specified range (see the sketch after this list)
min-max normalization
z-score normalization
normalization by decimal scaling
Attribute/feature construction
New attributes constructed from the given ones
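A short numpy sketch of the three normalization methods listed above; the sample vector v is invented:

    import numpy as np

    v = np.array([200.0, 300.0, 400.0, 600.0, 986.0])

    # min-max normalization to [0, 1]
    minmax = (v - v.min()) / (v.max() - v.min())

    # z-score normalization
    zscore = (v - v.mean()) / v.std()

    # decimal scaling: smallest j such that max(|v|) / 10**j < 1
    j = int(np.floor(np.log10(np.abs(v).max()))) + 1
    decimal = v / 10**j

    print(minmax, zscore, decimal, sep="\n")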
Unbalanced target distribution
A similar situation arises with multiple classes: a majority-class classifier can be 97% correct, but useless
Select the remaining positive targets (e.g. 70% of all targets) from the raw train set
Join them with an equal number of negative targets from the raw train set, and randomly sort the result
Separate the randomized balanced set into balanced train and balanced test
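A minimal numpy sketch of this procedure, assuming a 0/1 target where positives are the minority; balanced_split and its defaults are illustrative:

    import numpy as np

    def balanced_split(X, y, train_frac=0.7, seed=0):
        # Downsample negatives to match positives, shuffle, then split
        rng = np.random.default_rng(seed)
        pos = np.flatnonzero(y == 1)
        neg = rng.choice(np.flatnonzero(y == 0), size=len(pos), replace=False)
        idx = rng.permutation(np.concatenate([pos, neg]))   # randomly sort
        cut = int(train_frac * len(idx))
        return (X[idx[:cut]], y[idx[:cut]]), (X[idx[cut:]], y[idx[cut:]])

    X = np.arange(1000).reshape(-1, 1)
    y = (np.arange(1000) < 30).astype(int)      # 3% positives
    (train_X, train_y), (test_X, test_y) = balanced_split(X, y)
    print(len(train_y), round(train_y.mean(), 2), len(test_y))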
(diagram: the raw data is split into a balanced set and a raw held-out set)
Data Reduction
Data Cube Aggregation
Queries regarding aggregated information should be answered using the data cube when possible
Dimensionality Reduction
Feature selection (i.e., attribute subset selection):
Select a minimum set of features such that the probability distribution of the classes given those features is as close as possible to the original distribution given all features
This reduces the number of patterns found, and the patterns are easier to understand
(figure: Class 1 vs. Class 2 separation compared before and after attribute selection)
Attribute Selection
There are 2^d possible feature subsets of d features
First: remove attributes with little or no variability
Examine the number of distinct field values
Rule of thumb: remove a field where almost all values are the same (e.g. null), except possibly in minp % or less of all records (see the sketch below)
minp could be 0.5% or, more generally, less than 5% of the number of targets of the smallest class
What is a good N?
Rule of thumb: keep the top 50 fields
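One possible pandas sketch of the low-variability rule of thumb above; low_variability_fields and the default minp are illustrative:

    import pandas as pd

    def low_variability_fields(df, minp=0.005):
        # Flag fields where one value covers at least (1 - minp) of all records
        drop = []
        for col in df.columns:
            top_share = df[col].value_counts(dropna=False, normalize=True).iloc[0]
            if top_share >= 1.0 - minp:
                drop.append(col)
        return drop

    df = pd.DataFrame({"constant": [0] * 999 + [1], "useful": range(1000)})
    print(low_variability_fields(df))   # ['constant']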
Data Compression
String compression
There are extensive theories and well-tuned algorithms
Typically lossless
But only limited manipulation is possible without expansion
Audio/video compression
Typically lossy compression, with progressive refinement
Sometimes small fragments of signal can be reconstructed without reconstructing the whole
Data Compression
(figure: original data mapped to a compressed representation; lossless compression restores the original exactly, lossy compression only approximately)
Wavelet Transforms
(figure: Haar-2 and Daubechies-4 wavelet functions)
Discrete wavelet transform (DWT): linear signal processing
Compressed approximation: store only a small fraction of the strongest wavelet coefficients
Similar to the discrete Fourier transform (DFT), but better lossy compression, localized in space
Method:
Length L must be an integer power of 2 (pad with 0s when necessary)
Each transform has 2 functions: smoothing, difference
Apply them to pairs of data, yielding two sets of data of length L/2
Apply the two functions recursively until the desired length is reached
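A small numpy sketch of this recursion, using the unnormalized Haar averaging/differencing pair; haar_dwt is illustrative, and real applications would use a wavelet library:

    import numpy as np

    def haar_dwt(x):
        # len(x) must be a power of 2 (pad with 0s otherwise)
        x = np.asarray(x, dtype=float)
        details = []
        while len(x) > 1:
            smooth = (x[0::2] + x[1::2]) / 2    # smoothing function
            detail = (x[0::2] - x[1::2]) / 2    # difference function
            details.append(detail)
            x = smooth                          # recurse on the smoothed half
        return x[0], details[::-1]              # overall average + coarse-to-fine details

    avg, details = haar_dwt([2, 2, 0, 2, 3, 5, 4, 4])
    print(avg)       # 2.75
    print(details)   # [array([-1.25]), array([0.5, 0. ]), array([ 0., -1., -1.,  0.])]

Storing only the few largest detail coefficients (and zeroing the rest) gives the compressed approximation described above.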
Principal Component Analysis
Each data vector is a linear combination of the c principal component vectors
Works for numeric data only
Used when the number of dimensions is large
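A compact numpy sketch of PCA via singular value decomposition of the centered data; the pca helper and random test data are illustrative:

    import numpy as np

    def pca(X, c):
        # Center features, then take the top-c right singular vectors of X
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        components = Vt[:c]                 # c principal component vectors
        return Xc @ components.T, components

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 5))
    scores, comps = pca(X, c=2)
    print(scores.shape, comps.shape)        # (100, 2) (2, 5)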
Numerosity Reduction
Parametric methods
Assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
Log-linear models: obtain the value at a point in m-D space as a product over the appropriate marginal subspaces
Non-parametric methods
Do not assume models Major families: histograms, clustering, sampling
Multiple regression: allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
Log-linear model: approximates discrete multidimensional probability distributions
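For concreteness, a numpy sketch of multiple regression fitted by least squares; the synthetic data and coefficients are invented:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 3))                            # feature vectors
    y = 4.0 + X @ np.array([1.5, -2.0, 0.5]) + rng.normal(0, 0.1, 200)

    # Least-squares fit of Y = b0 + b1*x1 + b2*x2 + b3*x3
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(np.round(coef, 2))                                 # ~ [4.0, 1.5, -2.0, 0.5]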
Log-linear models:
The multi-way table of joint probabilities is approximated by a product of lower-order tables
Probability: p(a, b, c, d) = α_ab · β_ac · γ_ad · δ_bcd
Histograms
A popular data reduction technique
Divide data into buckets and store the average (sum) for each bucket
Can be constructed optimally in one dimension using dynamic programming
Related to quantization problems
(figure: histogram of prices from 10,000 to 90,000, with bucket counts up to 40 on the vertical axis)
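A numpy sketch of the equi-width bucketing just described, keeping only edges, counts, and per-bucket averages; the synthetic prices echo the figure's range:

    import numpy as np

    prices = np.random.default_rng(3).uniform(10_000, 90_000, 1_000)

    # Equi-width buckets: keep only bucket edges, counts, and averages
    counts, edges = np.histogram(prices, bins=8)
    bucket = np.clip(np.digitize(prices, edges) - 1, 0, len(counts) - 1)
    averages = np.array([prices[bucket == b].mean() for b in range(len(counts))])

    print(counts)
    print(np.round(averages))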
Clustering
Partition the data set into clusters, and store only the cluster representation
Can be very effective if the data is clustered, but not if the data is smeared
Can use hierarchical clustering, stored in multidimensional index tree structures
There are many choices of clustering definitions and clustering algorithms (further detailed in Chapter 8)
Sampling
Allows a mining algorithm to run in complexity that is potentially sub-linear in the size of the data
Choose a representative subset of the data
Simple random sampling may perform very poorly in the presence of skew
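A numpy sketch of both schemes; the data array and strata labels are invented, and the stratified version draws the same fraction from each stratum:

    import numpy as np

    rng = np.random.default_rng(4)
    data = np.arange(10_000)
    strata = data % 3                  # stand-in for a class/stratum label

    # Simple random sample without replacement (SRSWOR)
    srs = rng.choice(data, size=100, replace=False)

    # Stratified sample: the same fraction from every stratum, robust to skew
    frac = 0.01
    parts = [rng.choice(data[strata == s], size=int(frac * (strata == s).sum()),
                        replace=False)
             for s in np.unique(strata)]
    stratified = np.concatenate(parts)
    print(len(srs), len(stratified))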
Sampling
(figure: simple random sampling from the raw data, with and without replacement)
Sampling
(figure: a cluster/stratified sample drawn from the raw data)
Hierarchical Reduction
Use a multi-resolution structure with different degrees of reduction
Hierarchical clustering is often performed but tends to define partitions of data sets rather than clusters
Parametric methods are usually not amenable to hierarchical representation
Hierarchical aggregation:
An index tree hierarchically divides a data set into partitions by the value range of some attributes
Each partition can be considered a bucket
Thus an index tree with aggregates stored at each node is a hierarchical histogram
Discretization
Three types of attributes:
Nominal: values from an unordered set
Ordinal: values from an ordered set
Continuous: real numbers
Discretization:
divide the range of a continuous attribute into intervals
Some classification algorithms only accept categorical attributes
Reduce data size by discretization
Prepare for further analysis
Concept hierarchies
reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior)
Entropy-Based Discretization
Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
E(S, T) = (|S1| / |S|) · Ent(S1) + (|S2| / |S|) · Ent(S2)
The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization
The process is applied recursively to the partitions obtained, continuing while the information gain Ent(S) - E(T, S) > δ for some threshold δ
Experiments show that it may reduce data size and improve classification accuracy
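A minimal numpy sketch of choosing one boundary T; best_boundary is illustrative, and the recursive application with the δ stopping test is omitted for brevity:

    import numpy as np

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    def best_boundary(values, labels):
        # Try every midpoint T and keep the one minimizing E(S, T)
        order = np.argsort(values)
        v, y = np.asarray(values)[order], np.asarray(labels)[order]
        best_t, best_e = None, np.inf
        for i in range(1, len(v)):
            if v[i] == v[i - 1]:
                continue
            t = (v[i - 1] + v[i]) / 2
            e = i / len(v) * entropy(y[:i]) + (len(v) - i) / len(v) * entropy(y[i:])
            if e < best_e:
                best_t, best_e = t, e
        return best_t, best_e

    t, e = best_boundary([1, 2, 3, 10, 11, 12], list("aaabbb"))
    print(t, e)   # 6.5 0.0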
Step 4: segment the range (-$400 - $5,000) into sub-intervals:
(-$400 - 0), further split into (-$400 - -$300), (-$300 - -$200), (-$200 - -$100), (-$100 - 0)
(0 - $200), ($200 - $400), ($400 - $600), ($600 - $800)
Summary
Data preparation is a big issue for both warehousing and mining
Data preparation includes:
Data cleaning and data integration
Data reduction and feature selection
Discretization
A lot of methods have been developed, but this is still an active area of research