
Transportation Data Mining

Chapter 2. Getting to Know Your Data


Fengxiang Qiao, Ph.D., Southeast University
Chapter 2. Getting to Know Your Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary
2
Types of Data Sets: (1) Record Data
 Relational records
 Relational tables, highly structured
 Data matrix, e.g., numerical matrix, crosstabs

 Transaction data

   TID  Items
   1    Bread, Coke, Milk
   2    Beer, Bread
   3    Beer, Coke, Diaper, Milk
   4    Beer, Bread, Diaper, Milk
   5    Coke, Diaper, Milk

 Document data: term-frequency vector (matrix) of text documents

                team  coach  play  ball  score  game  win  lost  timeout  season
   Document 1     3     0     5     0      2     6     0     2      0        2
   Document 2     0     7     0     2      1     0     0     3      0        0
   Document 3     0     1     0     0      1     2     2     0      3        0

3
Types of Data Sets: (2) Graphs and Networks
 Transportation network

 World Wide Web

 Molecular Structures

 Social or information networks


4
Types of Data Sets: (3) Ordered Data
 Video data: sequence of images

 Temporal data: time-series

 Sequential Data: transaction sequences

 Genetic sequence data


5
Types of Data Sets: (4) Spatial, Image, and Multimedia Data

 Spatial data: maps

 Image data:

 Video data:

6
Important Characteristics of Structured Data
 Dimensionality
 Curse of dimensionality
 Sparsity
 Only presence counts
 Resolution
 Patterns depend on the scale
 Distribution
 Centrality and dispersion

7
Data Objects
 Data sets are made up of data objects
 A data object represents an entity
 Examples:
 sales database: customers, store items, sales
 medical database: patients, treatments
 university database: students, professors, courses
 Also called samples , examples, instances, data points, objects, tuples
 Data objects are described by attributes
 Database rows → data objects; columns → attributes

8
Attributes
 Attribute (or dimensions, features, variables)
 A data field, representing a characteristic or feature of a data object.
 E.g., customer_ID, name, address
 Types:
 Nominal (e.g., red, blue)
 Binary (e.g., {true, false})
 Ordinal (e.g., {freshman, sophomore, junior, senior})
 Numeric: quantitative
 Interval-scaled: e.g., 100°C is interval-scaled
 Ratio-scaled: e.g., 100 K is ratio-scaled since it is twice as high as 50 K
 Q1: Is student ID a nominal, ordinal, or interval-scaled attribute?
 Q2: What about eye color? Or color in the color spectrum of physics?
9
Attribute Types
 Nominal: categories, states, or “names of things”
 Hair_color = {auburn, black, blond, brown, grey, red, white}
 marital status, occupation, ID numbers, zip codes
 Binary
 Nominal attribute with only 2 states (0 and 1)
 Symmetric binary: both outcomes equally important
 e.g., gender
 Asymmetric binary: outcomes not equally important.
 e.g., medical test (positive vs. negative)
 Convention: assign 1 to most important outcome (e.g., HIV positive)
 Ordinal
 Values have a meaningful order (ranking) but magnitude between successive
values is not known
 Size = {small, medium, large}, grades, army rankings
10
Numeric Attribute Types
 Quantity (integer or real-valued)
 Interval
 Measured on a scale of equal-sized units
 Values have order
 E.g., temperature in °C or °F, calendar dates
 No true zero-point
 Ratio
 Inherent zero-point
 We can speak of values as being an order of magnitude larger than the unit
of measurement (10 K is twice as high as 5 K).
 e.g., temperature in Kelvin, length, counts, monetary quantities
11
Discrete vs. Continuous Attributes
 Discrete Attribute
 Has only a finite or countably infinite set of values
 E.g., zip codes, profession, or the set of words in a collection of documents
 Sometimes, represented as integer variables
 Note: Binary attributes are a special case of discrete attributes
 Continuous Attribute
 Has real numbers as attribute values
 E.g., temperature, height, or weight
 Practically, real values can only be measured and represented using a finite
number of digits
 Continuous attributes are typically represented as floating-point variables
12
Chapter 2. Getting to Know Your Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary
13
Basic Statistical Descriptions of Data
 Motivation
 To better understand the data: central tendency, variation and spread

 Data dispersion characteristics


 Median, max, min, quantiles, outliers, variance, ...
 Numerical dimensions correspond to sorted intervals
 Data dispersion:
 Analyzed with multiple granularities of precision
 Boxplot or quantile analysis on sorted intervals
 Dispersion analysis on computed measures
 Folding measures into numerical dimensions
 Boxplot or quantile analysis on the transformed cube

14
Measuring the Central Tendency: (1) Mean
 Mean (algebraic measure) (sample vs. population):
 Note: n is sample size and N is population size.

   Sample:   $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$        Population:   $\mu = \frac{\sum x}{N}$

 Weighted arithmetic mean:

   $\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$

 Trimmed mean:
 Chopping extreme values (e.g., Olympics gymnastics score computation)
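A minimal Python sketch of the three estimators (the score and weight values are hypothetical; `scipy.stats.trim_mean` chops a fixed fraction from each tail):

```python
import numpy as np
from scipy import stats

scores = np.array([6.0, 7.5, 8.0, 8.2, 8.3, 8.5, 9.9])  # hypothetical judge scores
weights = np.array([1, 1, 2, 2, 2, 1, 1])                # hypothetical weights

mean = scores.mean()                            # arithmetic mean
weighted = np.average(scores, weights=weights)  # weighted arithmetic mean
trimmed = stats.trim_mean(scores, 0.2)          # chop 20% from each tail (Olympics-style)

print(mean, weighted, trimmed)
```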

15
Measuring the Central Tendency: (2) Median
 Median:
 Middle value if odd number of values, or average of the middle two values otherwise
 Estimated by interpolation (for grouped data):

   $\text{median} \approx L_1 + \left( \frac{n/2 - (\sum \text{freq})_l}{\text{freq}_{\text{median}}} \right) \times \text{width}$

   where $L_1$ is the low limit of the median interval, $(\sum \text{freq})_l$ is the sum of the
   frequencies of the intervals below the median interval, $\text{freq}_{\text{median}}$ is the
   frequency of the median interval, and width is the interval width ($L_2 - L_1$)
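A short sketch of both medians in Python; the grouped-frequency bins for the interpolated case are hypothetical:

```python
import numpy as np

values = np.array([52, 70, 70, 110, 120, 160, 210])
print(np.median(values))  # exact median: middle value (or average of the middle two)

# Interpolated median for grouped data (hypothetical bins)
bins = [(0, 50), (50, 100), (100, 150), (150, 200)]  # (L1, L2) per interval
freq = [20, 35, 30, 15]                              # frequency per interval

n = sum(freq)
cum = 0
for (L1, L2), f in zip(bins, freq):
    if cum + f >= n / 2:                       # first interval containing the n/2-th value
        width = L2 - L1
        print(L1 + (n / 2 - cum) / f * width)  # interpolated median
        break
    cum += f
```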
16
Measuring the Central Tendency: (3) Mode
 Mode: Value that occurs most frequently in the data

Unimodal
 Empirical formula: $\text{mean} - \text{mode} \approx 3 \times (\text{mean} - \text{median})$

Multi-modal
 Bimodal

 Trimodal

17
Symmetric vs. Skewed Data
symmetric
 Median, mean and mode of symmetric,
positively and negatively skewed data

positively skewed
negatively skewed

18
Properties of Normal Distribution Curve
← — ————Represent data dispersion, spread — ————→

Represent central tendency


19
Measures Data Distribution: Variance and Standard Deviation
 Variance and standard deviation (sample: s, population: σ)
 Variance: (algebraic, scalable computation)
 Q: Can you compute it incrementally and efficiently?

   Sample:     $s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2 = \frac{1}{n-1} \left[ \sum_{i=1}^{n} x_i^2 - \frac{1}{n} \left( \sum_{i=1}^{n} x_i \right)^2 \right]$

   Population: $\sigma^2 = \frac{1}{N} \sum_{i=1}^{n} (x_i - \mu)^2 = \frac{1}{N} \sum_{i=1}^{n} x_i^2 - \mu^2$

 Standard deviation s (or σ) is the square root of variance s² (or σ²)

20
Graphic Displays of Basic Statistical Descriptions
 Boxplot: graphic display of five-number summary
 Histogram: x-axis are values, y-axis repres. frequencies
 Quantile plot: each value xi is paired with fi indicating that approximately 100·fi%
of the data are ≤ xi
 Quantile-quantile (q-q) plot: graphs the quantiles of one univariant distribution
against the corresponding quantiles of another
 Scatter plot: each pair of values is a pair of coordinates and plotted as points in the
plane

21
Measuring the Dispersion of Data: Quartiles & Boxplots
 Quartiles: Q1 (25th percentile), Q3 (75th percentile)
 Inter-quartile range: IQR = Q3 – Q1
 Five number summary: min, Q1, median, Q3, max
 Boxplot: Data is represented with a box
 Q1, Q3, IQR: The ends of the box are at the first and
third quartiles, i.e., the height of the box is IQR
 Median (Q2) is marked by a line within the box
 Whiskers: two lines outside the box extended to
Minimum and Maximum
 Outliers: points beyond a specified outlier threshold, plotted individually
 Outlier: usually, a value more than 1.5 × IQR below Q1 or above Q3
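A quick sketch of the five-number summary and the 1.5 × IQR fences with NumPy (the sample values are hypothetical):

```python
import numpy as np

x = np.array([3, 5, 7, 8, 9, 11, 13, 15, 40.0])  # hypothetical sample with one outlier

q1, med, q3 = np.percentile(x, [25, 50, 75])
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr          # whisker / outlier fences

print("five-number summary:", x.min(), q1, med, q3, x.max())
print("outliers:", x[(x < lo) | (x > hi)])        # -> [40.]
```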

22
Visualization of Data Dispersion: 3-D Boxplots

23
Histogram Analysis
 Histogram: graph display of tabulated frequencies, shown as bars
 Differences between histograms and bar charts
 Histograms are used to show distributions of variables, while bar charts are used
   to compare variables
 Histograms plot binned quantitative data, while bar charts plot categorical data
 Bars can be reordered in bar charts but not in histograms
 A histogram differs from a bar chart in that it is the area of the bar that denotes
   the value, not the height as in bar charts; a crucial distinction when the categories
   are not of uniform width

(Figures: a histogram of binned values from 10,000 to 90,000, and a bar chart)
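A minimal matplotlib sketch of a histogram over binned quantitative data, roughly matching the figure's 10,000-90,000 bins (the generated values are hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
values = rng.normal(50_000, 15_000, size=1_000)  # hypothetical quantitative attribute

plt.hist(values, bins=np.arange(10_000, 100_001, 10_000), edgecolor="black")
plt.xlabel("value")       # binned quantitative axis: bar order is fixed
plt.ylabel("frequency")
plt.show()
```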
24
Histograms Often Tell More than Boxplots

 The two histograms shown in the left


may have the same boxplot
representation
 The same values for: min, Q1,
median, Q3, max
 But they have rather different data
distributions

25
Quantile Plot
 Displays all of the data (allowing the user to assess both the overall behavior and
unusual occurrences)
 Plots quantile information
 For data sorted in increasing order, each value xi is paired with fi indicating that
approximately 100·fi% of the data are below or equal to the value xi

26


Quantile-Quantile (Q-Q) Plot
 Graphs the quantiles of one univariate distribution against the corresponding
quantiles of another
 View: Is there a shift in going from one distribution to another?
 Example shows unit price of items sold at Branch 1 vs. Branch 2 for each quantile.
Unit prices of items sold at Branch 1 tend to be lower than those at Branch 2

27
Scatter plot
 Provides a first look at bivariate data to see clusters of points, outliers, etc.
 Each pair of values is treated as a pair of coordinates and plotted as points in the
plane

28
Positively and Negatively Correlated Data

 The left half fragment is positively correlated
 The right half is negatively correlated
29
Uncorrelated Data

30
Chapter 2. Getting to Know Your Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary
31
Data Visualization
 Why data visualization?
 Gain insight into an information space by mapping data onto graphical primitives
 Provide qualitative overview of large data sets
 Search for patterns, trends, structure, irregularities, relationships among data
 Help find interesting regions and suitable parameters for further quantitative
analysis
 Provide a visual proof of computer representations derived
 Categorization of visualization methods:
 Pixel-oriented visualization techniques
 Geometric projection visualization techniques
 Icon-based visualization techniques
 Hierarchical visualization techniques
 Visualizing complex data and relations
32
Pixel-Oriented Visualization Techniques
 For a data set of m dimensions, create m windows on the screen, one for each dimension
 The m dimension values of a record are mapped to m pixels at the corresponding positions
in the windows
 The colors of the pixels reflect the corresponding values

(a) Income  (b) Credit limit  (c) Transaction volume  (d) Age
33
Laying Out Pixels in Circle Segments
 To save space and show the connections among multiple dimensions, space filling is often
done in a circle segment

(a) Representing a data record in circle segments
(b) Laying out pixels in circle segments: representing about 265,000 50-dimensional
    data items with the 'Circle Segments' technique
34
Geometric Projection Visualization Techniques
 Visualization of geometric transformations and projections of the data
 Methods
 Direct visualization
 Scatterplot and scatterplot matrices
 Landscapes
 Projection pursuit technique: Help users find meaningful projections of
multidimensional data
 Prosection views
 Hyperslice
 Parallel coordinates

35
Direct Data Visualization

Ribbons with Twists Based on Vorticity

36


Scatterplot Matrices
Used by permission of M. Ward, Worcester Polytechnic Institute

 Matrix of scatterplots (x-y diagrams) of the k-dim. data [total of (k² − k)/2 distinct
   pairwise scatterplots]

37
Landscapes

 Visualization of the data as a perspective landscape
 The data needs to be transformed into a (possibly artificial) 2D spatial
   representation which preserves the characteristics of the data

Used by permission of B. Wright, Visible Decisions Inc.
(Figure: news articles visualized as a landscape)


38
Parallel Coordinates
 n equidistant axes which are parallel to
one of the screen axes and correspond
to the attributes
 The axes are scaled to the [minimum,
maximum]: range of the corresponding
attribute
 Every data item corresponds to a
polygonal line which intersects each of
the axes at the point which
corresponds to the value for the
attribute
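
A small sketch using pandas' built-in `parallel_coordinates` helper; the attributes and records are hypothetical, and each numeric axis is min-max scaled to [0, 1] as described above:

```python
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import parallel_coordinates

# Hypothetical traffic records: one polygonal line per row, one axis per attribute
df = pd.DataFrame({
    "speed":   [62.0, 55.0, 70.0, 45.0],
    "volume":  [1200.0, 900.0, 1500.0, 700.0],
    "density": [19.0, 16.0, 21.0, 15.0],
    "period":  ["peak", "offpeak", "peak", "offpeak"],
})

# Scale each numeric axis to its [minimum, maximum] range, i.e., onto [0, 1]
num = df.columns[:-1]
df[num] = (df[num] - df[num].min()) / (df[num].max() - df[num].min())

parallel_coordinates(df, "period")  # each record becomes a line crossing every axis
plt.show()
```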

39
Parallel Coordinates of a Data Set

40
Announcements: Homework #1 and 4th Credit Project
 CS412: The First Homework
 Assignment #1 is ready and is distributed today!
 Please check lecture page linking to the assignment #1
 Information About the Project for the 4th Credit
 This project is part of WSDM 2017 Cup (http://www.wsdm-cup-2017.org/triple-scoring.html)
 Please choose one from the following two competition tasks.
 Choice #1: Triple Scoring: Computing relevance scores for triples from type-like
relations
 Choice #2: Vandalism Detection for Wikipages
 Submission: You can team up and each team will submit one program to the WSDM
2017 Cup evaluation system—Grading based on WSDM 2017 Cup evaluation results
 The information about groups and registrations will be given later
41
Project #1: Triple Scoring: Relevance Scores for Triples
 Triple Scoring: Computing relevance scores for triples from type-like relations
 Example:
 The triple “Johnny_Depp profession Actor” should get a high score, because
acting is Depp’s main profession, whereas “Quentin_Tarantino profession
Actor” should get a low score, because Tarantino is more of a director than an
actor. Such scores are a basic ingredient for ranking results in entity search.
 Training data (given by cup organizers)
 A training set consisting of triples and their relevance scores (in the range of
[0, 1]), as obtained from human judges
 Additional information that can be used for distant supervision learning, such
as text corpus
 The objective is to predict the relevance scores for the given triples: The
prediction accuracy will be evaluated against ground truth from human judges
42
Project #2: Vandalism Detection for Wikipages
 Background: Wikidata is the new, large-scale knowledge base of the Wikimedia Foundation
which can be edited by anyone. Its knowledge is increasingly used within Wikipedia as well
as in all kinds of information systems, which imposes high demands on its integrity.
Nevertheless, Wikidata frequently gets vandalized, exposing all its users to the risk of
spreading vandalized and falsified information.
 Task: Given a Wikidata revision, compute a vandalism score denoting the likelihood of this
revision being vandalism (or similarly damaging).
 Data
 Training: We will be provided with a training corpus, consisting of Wikidata revisions and
whether they are considered vandalism  
 Testing: There will be a test data which is not published during the contest, but to be used
in final evaluation
 Submission: You may team up to work on this project.  If there are multiple teams working
on this project, we may ensemble different teams' results to generate one model and
submit to WSDM Cup's competition, based on your agreement.  Grading will be based on
43
your performance and final report.
Icon-Based Visualization Techniques
 Visualization of the data values as features of icons
 Typical visualization methods
 Chernoff Faces
 Stick Figures
 General techniques
 Shape coding: Use shape to represent certain information encoding
 Color icons: Use color icons to encode more information
 Tile bars: Use small icons to represent the relevant feature vectors in document
retrieval

44
Chernoff Faces
 A way to display variables on a two-dimensional surface, e.g., let x be eyebrow slant, y be
eye size, z be nose length, etc.
 The figure shows faces produced using 10 characteristics (head eccentricity, eye size, eye
spacing, eye eccentricity, pupil size, eyebrow slant, nose size, mouth shape, mouth size,
and mouth opening): each assigned one of 10 possible values, generated using
Mathematica (S. Dickson)

 REFERENCE: Gonick, L. and Smith, W.


The Cartoon Guide to Statistics. New York:
Harper Perennial, p. 212, 1993
 Weisstein, Eric W. "Chernoff Face." From
MathWorld--A Wolfram Web Resource.
mathworld.wolfram.com/ChernoffFace.html
45
Stick Figure

 A census data figure showing age, income, gender, education, etc.
 A 5-piece stick figure (1 body and 4 limbs w. different angle/length)

Used by permission of G. Grinstein, University of Massachusetts at Lowell

46
Hierarchical Visualization Techniques
 Visualization of the data using a hierarchical partitioning into subspaces
 Methods
 Dimensional Stacking
 Worlds-within-Worlds
 Tree-Map
 Cone Trees
 InfoCube

47
Dimensional Stacking

 Partitioning of the n-dimensional attribute space in 2-D subspaces, which are


‘stacked’ into each other
 Partitioning of the attribute value ranges into classes. The important attributes
should be used on the outer levels.
 Adequate for data with ordinal attributes of low cardinality
 But, difficult to display more than nine dimensions
 Important to map dimensions appropriately
48
Dimensional Stacking
Used by permission of M. Ward, Worcester Polytechnic Institute

Visualization of oil mining data with longitude and latitude mapped to the
outer x-, y-axes and ore grade and depth mapped to the inner x-, y-axes
49
Worlds-within-Worlds
 Assign the function and two most important parameters to innermost world
 Fix all other parameters at constant values; draw other (1-, 2-, or 3-dimensional)
worlds choosing these as the axes
 Software that uses this paradigm
 N–vision: Dynamic
interaction through data
glove and stereo displays,
including rotation, scaling
(inner) and translation
(inner/outer)
 Auto Visual: Static
interaction by means of
queries
50
Tree-Map
 Screen-filling method which uses a hierarchical partitioning of the screen into regions
depending on the attribute values
 The x- and y-dimension of the screen are partitioned alternately according to the
attribute values (classes)

(Figures) Shneiderman@UMD: Tree-Map of a file system; Tree-Map to support
large data sets of a million items
51
InfoCube
 A 3-D visualization technique where hierarchical information is displayed as nested
semi-transparent cubes
 The outermost cubes correspond to the top level data, while the subnodes or the lower
level data are represented as smaller cubes inside the outermost cubes, etc.

52
Three-D Cone Trees
 3D cone tree visualization technique works well for
up to a thousand nodes or so
 First build a 2D circle tree that arranges its nodes in
concentric circles centered on the root node
 Cannot avoid overlaps when projected to 2D
 G. Robertson, J. Mackinlay, S. Card. “Cone Trees:
Animated 3D Visualizations of Hierarchical
Information”, ACM SIGCHI'91
 Graph from Nadeau Software Consulting website:
Visualize a social network data set that models the
way an infection spreads from one person to the
next
53
Visualizing Complex Data and Relations: Tag Cloud
Tag cloud: Visualizing user-generated
tags
 The importance of a tag is represented by font size/color
 Popularly used to visualize
word/phrase distributions

KDD 2013 Research Paper Title Tag Cloud


Newsmap: Google News Stories in 2005

54
Visualizing Complex Data and Relations: Social Networks
 Visualizing non-numerical data: social and information networks

organizing
information networks

A typical network structure

A social network

55
Chapter 2. Getting to Know Your Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary
56
Similarity, Dissimilarity, and Proximity
 Similarity measure or similarity function
 A real-valued function that quantifies the similarity between two objects
 Measure how two data objects are alike: The higher value, the more alike
 Often falls in the range [0,1]: 0: no similarity; 1: completely similar
 Dissimilarity (or distance) measure
 Numerical measure of how different two data objects are
 In some sense, the inverse of similarity: The lower, the more alike
 Minimum dissimilarity is often 0 (i.e., completely similar)
 Range [0, 1] or [0, ∞) , depending on the definition
 Proximity usually refers to either similarity or dissimilarity
57
Data Matrix and Dissimilarity Matrix
 Data matrix
 A data matrix of n data points with l dimensions:

   $D = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1l} \\ x_{21} & x_{22} & \cdots & x_{2l} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nl} \end{pmatrix}$

 Dissimilarity (distance) matrix
 n data points, but registers only the distance d(i, j) (typically metric):

   $\begin{pmatrix} 0 & & & \\ d(2,1) & 0 & & \\ \vdots & \vdots & \ddots & \\ d(n,1) & d(n,2) & \cdots & 0 \end{pmatrix}$

 Usually symmetric, thus a triangular matrix
 Distance functions are usually different for real, boolean, categorical,
   ordinal, ratio, and vector variables
 Weights can be associated with different variables based
on applications and data semantics

58
Standardizing Numeric Data
 Z-score: $z = \frac{x - \mu}{\sigma}$
 X: raw score to be standardized, μ: mean of the population, σ: standard deviation
 The distance between the raw score and the population mean in units of the
   standard deviation
 Negative when the raw score is below the mean, "+" when above
 An alternative way: calculate the mean absolute deviation

   $s_f = \frac{1}{n} \left( |x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f| \right)$,  where  $m_f = \frac{1}{n} (x_{1f} + x_{2f} + \cdots + x_{nf})$

 Standardized measure (z-score): $z_{if} = \frac{x_{if} - m_f}{s_f}$
 Using mean absolute deviation is more robust than using standard deviation
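A short sketch of both standardizations for one hypothetical attribute f; the outlier is squared in the classic variant but not in the mean-absolute-deviation one, which is why the latter is more robust:

```python
import numpy as np

x = np.array([12.0, 15.0, 14.0, 10.0, 48.0])  # hypothetical column f with one outlier

# Classic z-score (population standard deviation)
z_std = (x - x.mean()) / x.std()

# Mean-absolute-deviation variant: deviations are not squared
m_f = x.mean()
s_f = np.mean(np.abs(x - m_f))
z_mad = (x - m_f) / s_f

print(z_std)
print(z_mad)
```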
59
Example: Data Matrix and Dissimilarity Matrix
 Data matrix

   point  attribute1  attribute2
   x1         1           2
   x2         3           5
   x3         2           0
   x4         4           5

 Dissimilarity matrix (by Euclidean distance)

         x1     x2     x3     x4
   x1    0
   x2    3.61   0
   x3    2.24   5.1    0
   x4    4.24   1      5.39   0

60
Distance on Numeric Data: Minkowski Distance
 Minkowski distance: a popular distance measure

   $d(i, j) = \left( |x_{i1} - x_{j1}|^p + |x_{i2} - x_{j2}|^p + \cdots + |x_{il} - x_{jl}|^p \right)^{1/p}$

   where i = (xi1, xi2, …, xil) and j = (xj1, xj2, …, xjl) are two l-dimensional data
   objects, and p is the order (the distance so defined is also called the L-p norm)
 Properties
 d(i, j) > 0 if i ≠ j, and d(i, i) = 0 (positivity)
 d(i, j) = d(j, i) (symmetry)
 d(i, j) ≤ d(i, k) + d(k, j) (triangle inequality)
 A distance that satisfies these properties is a metric
 Note: there are nonmetric dissimilarities, e.g., set differences

61
Special Cases of Minkowski Distance
 p = 1: (L1 norm) Manhattan (or city block) distance
 E.g., the Hamming distance: the number of bits that are different between
   two binary vectors

   $d(i, j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{il} - x_{jl}|$

 p = 2: (L2 norm) Euclidean distance

   $d(i, j) = \sqrt{ |x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{il} - x_{jl}|^2 }$

 p → ∞: (Lmax norm, L∞ norm) "supremum" distance
 The maximum difference between any component (attribute) of the vectors

   $d(i, j) = \lim_{p \to \infty} \left( \sum_{f=1}^{l} |x_{if} - x_{jf}|^p \right)^{1/p} = \max_{f=1}^{l} |x_{if} - x_{jf}|$
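A minimal implementation sketch covering all three special cases; the printed values reproduce the x1-x2 entries of the matrices on the next slide:

```python
import numpy as np

def minkowski(a, b, p):
    """L-p distance; p=float('inf') gives the supremum (L-max) distance."""
    d = np.abs(np.asarray(a, float) - np.asarray(b, float))
    return d.max() if np.isinf(p) else (d ** p).sum() ** (1 / p)

x1, x2 = (1, 2), (3, 5)
print(minkowski(x1, x2, 1))             # Manhattan: 5
print(round(minkowski(x1, x2, 2), 2))   # Euclidean: 3.61
print(minkowski(x1, x2, float("inf")))  # Supremum: 3
```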

62
Example: Minkowski Distance at Special Cases

   point  attribute1  attribute2
   x1         1           2
   x2         3           5
   x3         2           0
   x4         4           5

 Manhattan (L1)
   L1    x1   x2   x3   x4
   x1    0
   x2    5    0
   x3    3    6    0
   x4    6    1    7    0

 Euclidean (L2)
   L2    x1     x2     x3     x4
   x1    0
   x2    3.61   0
   x3    2.24   5.1    0
   x4    4.24   1      5.39   0

 Supremum (L∞)
   L∞    x1   x2   x3   x4
   x1    0
   x2    3    0
   x3    2    5    0
   x4    3    1    5    0
63
Proximity Measure for Binary Attributes
 A contingency table for binary data (q: both 1; r: i = 1, j = 0; s: i = 0, j = 1; t: both 0):

               Object j
                 1     0     sum
   Object i 1    q     r     q+r
            0    s     t     s+t
   sum         q+s   r+t     p

 Distance measure for symmetric binary variables:

   $d(i, j) = \frac{r + s}{q + r + s + t}$

 Distance measure for asymmetric binary variables:

   $d(i, j) = \frac{r + s}{q + r + s}$

 Jaccard coefficient (similarity measure for asymmetric binary variables):

   $sim_{Jaccard}(i, j) = \frac{q}{q + r + s}$

 Note: the Jaccard coefficient is the same as "coherence" (a concept discussed in Pattern Discovery)

64
Example: Dissimilarity between Asymmetric Binary Variables

   Name  Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
   Jack    M       Y      N      P       N       N       N
   Mary    F       Y      N      P       N       P       N
   Jim     M       Y      P      N       N       N       N

 Gender is a symmetric attribute (not counted in)
 The remaining attributes are asymmetric binary
 Let the values Y and P be 1, and the value N be 0

 Contingency tables (rows: first object; columns: second object):

   Jack vs. Mary           Jack vs. Jim            Jim vs. Mary
        1   0  ∑row             1   0  ∑row             1   0  ∑row
   1    2   0   2          1    1   1   2          1    1   1   2
   0    1   3   4          0    1   3   4          0    2   2   4
   ∑col 3   3   6          ∑col 2   4   6          ∑col 3   3   6

 Distances:

   $d(\text{jack}, \text{mary}) = \frac{0 + 1}{2 + 0 + 1} = 0.33$

   $d(\text{jack}, \text{jim}) = \frac{1 + 1}{1 + 1 + 1} = 0.67$

   $d(\text{jim}, \text{mary}) = \frac{1 + 2}{1 + 1 + 2} = 0.75$
65
Proximity Measure for Categorical Attributes
 Categorical data, also called nominal attributes
 Example: Color (red, yellow, blue, green), profession, etc.
 Method 1: Simple matching
 m: # of matches, p: total # of variables

   $d(i, j) = \frac{p - m}{p}$

 Method 2: Use a large number of binary attributes
 Creating a new binary attribute for each of the M nominal states

66
Ordinal Variables
 An ordinal variable can be discrete or continuous
 Order is important, e.g., rank (e.g., freshman, sophomore, junior, senior)
 Can be treated like interval-scaled
 Replace an ordinal variable value by its rank: $r_{if} \in \{1, ..., M_f\}$
 Map the range of each variable onto [0, 1] by replacing the i-th object in
   the f-th variable by

   $z_{if} = \frac{r_{if} - 1}{M_f - 1}$

 Example: freshman: 0; sophomore: 1/3; junior: 2/3; senior: 1
 Then distance: d(freshman, senior) = 1, d(junior, senior) = 1/3
 Compute the dissimilarity using methods for interval-scaled variables

67
Attributes of Mixed Type
 A dataset may contain all attribute types
 Nominal, symmetric binary, asymmetric binary, numeric, and ordinal
 One may use a weighted formula to combine their effects:

   $d(i, j) = \frac{\sum_{f=1}^{p} w_{ij}^{(f)} d_{ij}^{(f)}}{\sum_{f=1}^{p} w_{ij}^{(f)}}$

 If f is numeric: use the normalized distance
 If f is binary or nominal: $d_{ij}^{(f)} = 0$ if $x_{if} = x_{jf}$; or $d_{ij}^{(f)} = 1$ otherwise
 If f is ordinal
 Compute ranks and $z_{if} = \frac{r_{if} - 1}{M_f - 1}$
 Treat $z_{if}$ as interval-scaled
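A hedged sketch of the weighted formula with $w_{ij}^{(f)} = 1$ for every field; the attribute names, value ranges, and records are hypothetical:

```python
def mixed_dissimilarity(i, j, types, ranges=None, n_ranks=None):
    """d(i,j) = sum_f w_f * d_f / sum_f w_f, with w_f = 1 for all fields here."""
    total = count = 0.0
    for f, (a, b) in enumerate(zip(i, j)):
        if types[f] == "num":            # normalized distance
            d = abs(a - b) / ranges[f]
        elif types[f] == "nom":          # 0 if equal, 1 otherwise
            d = 0.0 if a == b else 1.0
        else:                            # "ord": rank difference mapped onto [0, 1]
            d = abs(a - b) / (n_ranks[f] - 1)
        total += d
        count += 1.0
    return total / count

# Hypothetical objects: (speed in km/h [num], vehicle type [nom], congestion rank 1..4 [ord])
types = ["num", "nom", "ord"]
print(mixed_dissimilarity((60.0, "car", 2), (80.0, "truck", 4),
                          types, ranges={0: 100.0}, n_ranks={2: 4}))  # ≈ 0.62
```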
68
Cosine Similarity of Two Vectors
 A document can be represented by a bag of terms or a long vector, with each
attribute recording the frequency of a particular term (such as word, keyword, or
phrase) in the document

 Other vector objects: Gene features in micro-arrays


 Applications: Information retrieval, biologic taxonomy, gene feature mapping, etc.
 Cosine measure: if d1 and d2 are two vectors (e.g., term-frequency vectors), then

   $\cos(d_1, d_2) = \frac{d_1 \cdot d_2}{\|d_1\| \, \|d_2\|}$

   where · indicates the vector dot product and ||d|| is the length of vector d
69
Example: Calculating Cosine Similarity
 Calculating Cosine Similarity: d1  d 2
cos (d1 , d 2 ) 
|| d1 ||  || d 2 ||
where  indicates vector dot product, ||d||: the length of vector d
 Ex: Find the similarity between documents 1 and 2.
d1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0) d2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)
 First, calculate vector dot product
d1d2 = 5 X 3 + 0 X 0 + 3 X 2 + 0 X 0 + 2 X 1 + 0 X 1 + 0 X 1 + 2 X 1 + 0 X 0 + 0 X 1 = 25
 Then, calculate ||d1|| and ||d2||

|| d1 || 5  5  0  0  3  3  0  0  2  2  0  0  0  0  2  2  0  0  0  0  6.481
|| d 2 || 3  3  0  0  2  2  0  0  11  11  0  0  1 1  0  0  1 1  4.12
 Calculate cosine similarity: cos(d1, d2 ) = 26/ (6.481 X 4.12) = 0.94
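The same computation in a few lines of NumPy:

```python
import numpy as np

d1 = np.array([5, 0, 3, 0, 2, 0, 0, 2, 0, 0])
d2 = np.array([3, 0, 2, 0, 1, 1, 0, 1, 0, 1])

cos = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))  # dot product / lengths
print(round(cos, 2))  # 0.94
```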
70
Announcements: Meeting of the 4th Credit Project
 CS412: Assignment #1 was distributed last Tuesday!
 The due date is Sept. 15. No late homework will be accepted!!
 Waitlist is cleared: We took 50 additional students into the video only session
 Please find your status with Holly. You are either in or out (wait for Spring 2017)
 Meeting for Project for the 4th Credit
 You can change from 4 to 3 credits or from 3 to 4 credits by sending me e-mails
 Meeting time and location: 10-11am Friday (tomorrow!) at 0216 SC
 This project is part of WSDM 2017 Cup
 Choice #1: Triple Scoring: Computing relevance scores for triples from type-like
relations
 Choice #2: Vandalism Detection for Wikipages
 TAs/PhD students/postdocs will give you the details in the Friday meeting! You must
attend if you want to do the 4th credit project!!!
71
KL Divergence: Comparing Two Probability Distributions
 The Kullback-Leibler (KL) divergence:
Measure the difference between two
probability distributions over the same
variable x
 From information theory, closely
related to relative entropy,
information divergence, and
information for discrimination
 DKL(p(x) || q(x)): divergence of q(x) from
p(x), measuring the information lost
when q(x) is used to approximate p(x)

 Discrete form: $D_{KL}(p(x) \| q(x)) = \sum_{x} p(x) \log \frac{p(x)}{q(x)}$

 Continuous form: $D_{KL}(p(x) \| q(x)) = \int_{-\infty}^{\infty} p(x) \log \frac{p(x)}{q(x)} \, dx$

 Ack.: Wikipedia entry: The Kullback-Leibler (KL) divergence
72
More on KL Divergence
 The KL divergence measures the expected number of extra bits required to code
samples from p(x) (“true” distribution) when using a code based on q(x), which
represents a theory, model, description, or approximation of p(x)
 The KL divergence is not a distance measure, not a metric: it is asymmetric and does not
satisfy the triangle inequality (DKL(P‖Q) does not equal DKL(Q‖P))
 In applications, P typically represents the "true" distribution of data, observations, or
a precisely calculated theoretical distribution, while Q typically represents a theory,
model, description, or approximation of P.
 The Kullback–Leibler divergence from Q to P, denoted DKL(P‖Q), is a measure of the
information gained when one revises one's beliefs from the prior probability
distribution Q to the posterior probability distribution P. In other words, it is the
amount of information lost when Q is used to approximate P.
 The KL divergence is sometimes also called the information gain achieved if P is used
instead of Q. It is also called the relative entropy of P with respect to Q.
73
Subtlety at Computing the KL Divergence
 Based on the formula, DKL(P‖Q) ≥ 0, and DKL(P‖Q) = 0 if and only if P = Q
 How about when p = 0 or q = 0?
 limp→0 p log p = 0
 When p ≠ 0 but q = 0, DKL(p‖q) is defined as ∞, i.e., if one event e is possible (i.e.,
p(e) > 0), and the other predicts it is absolutely impossible (i.e., q(e) = 0), then the
two distributions are absolutely different
 However, in practice, P and Q are derived from frequency distributions, not counting
the possibility of unseen events. Thus smoothing is needed
 Example: P : (a : 3/5, b : 1/5, c : 1/5). Q : (a : 5/9, b : 3/9, d : 1/9)
 need to introduce a small constant ϵ, e.g., ϵ = 10−3
 The sample set observed in P, SP = {a, b, c}, SQ = {a, b, d}, SU = {a, b, c, d}
 Smoothing, add missing symbols to each distribution, with probability ϵ
 P′ : (a : 3/5 − ϵ/3, b : 1/5 − ϵ/3, c : 1/5 − ϵ/3, d : ϵ)
 Q′ : (a : 5/9 − ϵ/3, b : 3/9 − ϵ/3, c : ϵ, d : 1/9 − ϵ/3)
 DKL(P’ || Q’) can then be computed easily
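A small sketch computing DKL(P′ ‖ Q′) for the smoothed example above (base-2 log, matching the "extra bits" interpretation; ϵ = 10⁻³):

```python
import math

def kl_divergence(p, q):
    """Discrete D_KL(P || Q) = sum_x p(x) * log2(p(x) / q(x)); terms with p(x) = 0 contribute 0."""
    return sum(px * math.log2(px / q[x]) for x, px in p.items() if px > 0)

eps = 1e-3
# Smoothed distributions from the slide: each donates eps/3 from its
# three observed symbols to the one symbol it never saw
P = {"a": 3/5 - eps/3, "b": 1/5 - eps/3, "c": 1/5 - eps/3, "d": eps}
Q = {"a": 5/9 - eps/3, "b": 3/9 - eps/3, "c": eps, "d": 1/9 - eps/3}

print(kl_divergence(P, Q))
```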
74
Chapter 2. Getting to Know Your Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary
75
Summary
 Data attribute types: nominal, binary, ordinal, interval-scaled, ratio-scaled
 Many types of data sets, e.g., numerical, text, graph, Web, image.
 Gain insight into the data by:
 Basic statistical data description: central tendency, dispersion, graphical displays
 Data visualization: map data onto graphical primitives
 Measure data similarity
 Above steps are the beginning of data preprocessing
 Many methods have been developed but still an active area of research

76
References
 W. Cleveland, Visualizing Data, Hobart Press, 1993
 T. Dasu and T. Johnson. Exploratory Data Mining and Data Cleaning. John Wiley, 2003
 U. Fayyad, G. Grinstein, and A. Wierse. Information Visualization in Data Mining and Knowledge
Discovery, Morgan Kaufmann, 2001
 L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis. John
Wiley & Sons, 1990.
 H. V. Jagadish et al., Special Issue on Data Reduction Techniques. Bulletin of the Tech. Committee on
Data Eng., 20(4), Dec. 1997
 D. A. Keim. Information visualization and visual data mining, IEEE Trans. on Visualization and Computer
Graphics, 8(1), 2002
 D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999
 S. Santini and R. Jain, "Similarity measures", IEEE Trans. on Pattern Analysis and Machine Intelligence,
21(9), 1999
 E. R. Tufte. The Visual Display of Quantitative Information, 2nd ed., Graphics Press, 2001
 C. Yu, et al., Visual data mining of multimedia data for social and behavioral studies, Information
Visualization, 8(1), 2009
77
