Transportation Data Mining: Chapter 2. Getting To Know Your Data
Data Objects and Attribute Types
Basic Statistical Descriptions of Data
Data Visualization
Measuring Data Similarity and Dissimilarity
Summary
2
Types of Data Sets: (1) Record Data
Relational records
Relational tables, highly structured
Data matrix, e.g., numerical matrix, crosstabs
Transaction data
Example transaction data:
  TID   Items
  1     Bread, Coke, Milk
  2     Beer, Bread
  3     Beer, Coke, Diaper, Milk
  4     Beer, Bread, Diaper, Milk
  5     Coke, Diaper, Milk
Example data matrix (document-term frequencies; terms: team, coach, play, ball, score, game, win, lost, timeout, season):
  Document 1:  3  0  5  0  2  6  0  2  0  2
  Document 2:  0  7  0  2  1  0  0  3  0  0
  Document 3:  0  1  0  0  1  2  2  0  3  0
Other types of data sets include graphs and networks (e.g., molecular structures), image data, and video data
6
Important Characteristics of Structured Data
Dimensionality
Curse of dimensionality
Sparsity
Only presence counts
Resolution
Patterns depend on the scale
Distribution
Centrality and dispersion
7
Data Objects
Data sets are made up of data objects
A data object represents an entity
Examples:
sales database: customers, store items, sales
medical database: patients, treatments
university database: students, professors, courses
Also called samples, examples, instances, data points, objects, or tuples
Data objects are described by attributes
Database rows → data objects; columns → attributes
8
Attributes
Attribute (or dimension, feature, variable)
A data field, representing a characteristic or feature of a data object.
E.g., customer_ID, name, address
Types:
Nominal (e.g., red, blue)
Binary (e.g., {true, false})
Ordinal (e.g., {freshman, sophomore, junior, senior})
Numeric: quantitative
Interval-scaled: e.g., 100°C is on an interval scale
Ratio-scaled: e.g., 100 K is on a ratio scale, since it is twice as high as 50 K
Q1: Is student ID a nominal, ordinal, or interval-scaled attribute?
Q2: What about eye color? Or color in the color spectrum of physics?
9
Attribute Types
Nominal: categories, states, or “names of things”
Hair_color = {auburn, black, blond, brown, grey, red, white}
marital status, occupation, ID numbers, zip codes
Binary
Nominal attribute with only 2 states (0 and 1)
Symmetric binary: both outcomes equally important
e.g., gender
Asymmetric binary: outcomes not equally important.
e.g., medical test (positive vs. negative)
Convention: assign 1 to most important outcome (e.g., HIV positive)
Ordinal
Values have a meaningful order (ranking) but magnitude between successive
values is not known
Size = {small, medium, large}, grades, army rankings
10
Numeric Attribute Types
Quantity (integer or real-valued)
Interval
Measured on a scale of equal-sized units
Values have order
E.g., temperature in °C or °F, calendar dates
No true zero-point
Ratio
Inherent zero-point
We can speak of values as being an order of magnitude larger than the unit
of measurement (10 K is twice as high as 5 K).
e.g., temperature in Kelvin, length, counts, monetary quantities
11
Discrete vs. Continuous Attributes
Discrete Attribute
Has only a finite or countably infinite set of values
E.g., zip codes, profession, or the set of words in a collection of documents
Sometimes, represented as integer variables
Note: Binary attributes are a special case of discrete attributes
Continuous Attribute
Has real numbers as attribute values
E.g., temperature, height, or weight
Practically, real values can only be measured and represented using a finite
number of digits
Continuous attributes are typically represented as floating-point variables
12
Chapter 2. Getting to Know Your Data
Data Visualization
Summary
13
Basic Statistical Descriptions of Data
Motivation
To better understand the data: central tendency, variation and spread
14
Measuring the Central Tendency: (1) Mean
Mean (algebraic measure) (sample vs. population):
Note: n is sample size and N is population size.
$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ (sample)    $\mu = \frac{1}{N}\sum_{i=1}^{N} x_i$ (population)
Weighted arithmetic mean:
$\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$
Trimmed mean:
Chopping extreme values (e.g., Olympics gymnastics score computation)
15
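As a quick check of these definitions, here is a minimal Python sketch using NumPy and SciPy (the data values and weights are made up for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical sample of judges' scores (illustrative values only)
x = np.array([7.5, 8.0, 8.2, 8.4, 8.6, 9.9])
w = np.array([1, 2, 2, 2, 2, 1])           # illustrative weights

mean = x.mean()                            # arithmetic mean
weighted_mean = np.average(x, weights=w)   # weighted arithmetic mean
trimmed_mean = stats.trim_mean(x, 0.2)     # chop 20% off each end (Olympics-style)

print(mean, weighted_mean, trimmed_mean)
```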
Measuring the Central Tendency: (2) Median
Median:
Middle value if odd number of values, or average of the middle two values otherwise
Estimated by interpolation (for grouped data): $\text{median} \approx L_1 + \frac{N/2 - (\sum \text{freq})_l}{\text{freq}_{\text{median}}} \times \text{width}$, where $L_1$ is the lower boundary of the median class, $(\sum \text{freq})_l$ is the cumulative frequency of the classes below it, $\text{freq}_{\text{median}}$ is its frequency, and width is its interval width
Measuring the Central Tendency: (3) Mode
Mode: the value that occurs most frequently in the data
Unimodal, bimodal, trimodal: one, two, or three modes (multi-modal in general)
Empirical formula (for unimodal, moderately skewed data): mean − mode ≈ 3 × (mean − median)
17
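A small Python sketch of median and mode, with a check of the empirical formula above (the data values are illustrative and chosen so the relation holds closely):

```python
import statistics as st

x = [2, 2, 2, 3, 4, 5, 6.5]        # illustrative, unimodal and positively skewed

mean = st.mean(x)
median = st.median(x)              # middle value (or average of middle two)
mode = st.mode(x)                  # most frequent value (2 here)

# Empirical relation for unimodal, moderately skewed data:
# mean - mode ≈ 3 * (mean - median)
print(mean - mode, 3 * (mean - median))
```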
Symmetric vs. Skewed Data
[Figure: median, mean, and mode of symmetric, positively skewed, and negatively skewed data]
18
Properties of Normal Distribution Curve
[Figure: normal distribution curve, annotated to show how the spread around the mean represents data dispersion]
Measuring the Dispersion of Data: Variance and Standard Deviation
Sample variance: $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{1}{n-1}\left[\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\Big(\sum_{i=1}^{n} x_i\Big)^2\right]$
Standard deviation s (or σ) is the square root of variance s² (or σ²)
Population variance: $\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2 = \frac{1}{N}\sum_{i=1}^{N} x_i^2 - \mu^2$
20
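A short NumPy check of the variance formulas above (illustrative values; note the n − 1 vs. N denominators):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # illustrative values

# Sample variance / std (divide by n - 1)
s2 = np.var(x, ddof=1)
s = np.std(x, ddof=1)

# Population variance / std (divide by N)
sigma2 = np.var(x, ddof=0)
sigma = np.std(x, ddof=0)

# Equivalent "computational" form of the sample variance
n = len(x)
s2_alt = (np.sum(x**2) - (np.sum(x)**2) / n) / (n - 1)

print(s2, s2_alt, sigma2)   # s2 == s2_alt
```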
Graphic Displays of Basic Statistical Descriptions
Boxplot: graphic display of the five-number summary
Histogram: x-axis represents the values, y-axis represents the frequencies
Quantile plot: each value x_i is paired with f_i, indicating that approximately 100·f_i% of the data are ≤ x_i
Quantile-quantile (q-q) plot: graphs the quantiles of one univariate distribution against the corresponding quantiles of another
Scatter plot: each pair of values is a pair of coordinates, plotted as a point in the plane
21
Measuring the Dispersion of Data: Quartiles & Boxplots
Quartiles: Q1 (25th percentile), Q3 (75th percentile)
Inter-quartile range: IQR = Q3 – Q1
Five number summary: min, Q1, median, Q3, max
Boxplot: Data is represented with a box
Q1, Q3, IQR: The ends of the box are at the first and
third quartiles, i.e., the height of the box is IQR
Median (Q2) is marked by a line within the box
Whiskers: two lines outside the box extended to
Minimum and Maximum
Outliers: points beyond a specified outlier threshold, plotted individually
Outlier: usually, a value more than 1.5 × IQR above Q3 or below Q1
22
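A minimal NumPy sketch of the five-number summary, IQR, and the usual 1.5 × IQR outlier rule (illustrative values):

```python
import numpy as np

x = np.array([6, 7, 15, 36, 39, 40, 41, 42, 43, 47, 49, 120])  # illustrative

q1, median, q3 = np.percentile(x, [25, 50, 75])
iqr = q3 - q1
five_num = (x.min(), q1, median, q3, x.max())

# Usual outlier rule: beyond 1.5 * IQR outside the quartiles
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = x[(x < lower) | (x > upper)]

print(five_num, iqr, outliers)
```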
Visualization of Data Dispersion: 3-D Boxplots
23
Histogram Analysis
Histogram: graph display of tabulated frequencies, shown as bars
Differences between histograms and bar charts
[Figure: example histogram]
Quantile Plot
Displays all of the data (allowing the user to assess both the overall behavior and
unusual occurrences)
Plots quantile information
For data x_i sorted in increasing order, f_i indicates that approximately 100·f_i% of the data are below or equal to the value x_i
27
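A minimal matplotlib sketch of a quantile plot, assuming the common convention f_i = (i − 0.5)/n for data sorted in increasing order (illustrative values):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.sort(np.array([30, 36, 47, 49, 50, 52, 56, 60, 63, 70, 70, 110]))  # illustrative
n = len(x)
f = (np.arange(1, n + 1) - 0.5) / n   # f_i: approx. fraction of data <= x_i

plt.plot(f, x, marker="o")
plt.xlabel("f-value (fraction of data)")
plt.ylabel("x (data value)")
plt.title("Quantile plot")
plt.show()
```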
Scatter plot
Provides a first look at bivariate data to see clusters of points, outliers, etc.
Each pair of values is treated as a pair of coordinates and plotted as points in the
plane
28
Positively and Negatively Correlated Data
30
Chapter 2. Getting to Know Your Data
Data Visualization
Summary
31
Data Visualization
Why data visualization?
Gain insight into an information space by mapping data onto graphical primitives
Provide qualitative overview of large data sets
Search for patterns, trends, structure, irregularities, relationships among data
Help find interesting regions and suitable parameters for further quantitative
analysis
Provide a visual proof of computer representations derived
Categorization of visualization methods:
Pixel-oriented visualization techniques
Geometric projection visualization techniques
Icon-based visualization techniques
Hierarchical visualization techniques
Visualizing complex data and relations
32
Pixel-Oriented Visualization Techniques
For a data set of m dimensions, create m windows on the screen, one for each dimension
The m dimension values of a record are mapped to m pixels at the corresponding positions
in the windows
The colors of the pixels reflect the corresponding values
[Figure: pixel-oriented visualization windows for (a) income, (b) credit limit, (c) transaction volume, (d) age]
33
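A rough sketch of the pixel-oriented idea in Python with matplotlib, on synthetic data (the attribute names mirror the figure; the data, window size, and ordering by income are assumptions for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 2500                                    # number of records (50 x 50 pixel window)
income = rng.gamma(2.0, 2.0, n)             # illustrative customer attributes
credit = income * 0.5 + rng.normal(0, 1, n)
volume = income * 0.3 + rng.normal(0, 1, n)
age = rng.uniform(20, 70, n)

data = {"income": income, "credit limit": credit,
        "transaction volume": volume, "age": age}

order = np.argsort(income)                  # order records by the query dimension
fig, axes = plt.subplots(1, len(data), figsize=(12, 3))
for ax, (name, values) in zip(axes, data.items()):
    window = values[order].reshape(50, 50)  # one window per dimension, same ordering
    ax.imshow(window, cmap="viridis")       # pixel color reflects the value
    ax.set_title(name)
    ax.axis("off")
plt.show()
```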
Laying Out Pixels in Circle Segments
To save space and show the connections among multiple dimensions, space filling is often
done in a circle segment
35
Direct Data Visualization
Matrix of scatterplots (x-y diagrams) of the k-dimensional data [a total of (k² − k)/2 distinct scatterplots]
37
Landscapes
perspective landscape
39
Parallel Coordinates of a Data Set
40
Announcements: Homework #1 and 4th Credit Project
CS412: The First Homework
Assignment #1 is ready and is distributed today!
Please check lecture page linking to the assignment #1
Information About the Project for the 4th Credit
This project is part of the WSDM 2017 Cup (http://www.wsdm-cup-2017.org/triple-scoring.html)
Please choose one from the following two competition tasks.
Choice #1: Triple Scoring: Computing relevance scores for triples from type-like
relations
Choice #2: Vandalism Detection for Wikipages
Submission: You can team up and each team will submit one program to the WSDM
2017 Cup evaluation system—Grading based on WSDM 2017 Cup evaluation results
The information about groups and registrations will be given later
41
Project #1: Triple Scoring: Relevance Scores for Triples
Triple Scoring: Computing relevance scores for triples from type-like relations
Example:
The triple “Johnny_Depp profession Actor” should get a high score, because
acting is Depp’s main profession, whereas “Quentin_Tarantino profession
Actor” should get a low score, because Tarantino is more of a director than an
actor. Such scores are a basic ingredient for ranking results in entity search.
Training data (given by cup organizers)
A training set consisting of triples and their relevance scores (in the range of
[0, 1]), as obtained from human judges
Additional information that can be used for distant supervision learning, such
as text corpus
The objective is to predict the relevance scores for the given triples: The
prediction accuracy will be evaluated against ground truth from human judges
42
Project #2: Vandalism Detection for Wikipages
Background: Wikidata is the new, large-scale knowledge base of the Wikimedia Foundation
which can be edited by anyone. Its knowledge is increasingly used within Wikipedia as well
as in all kinds of information systems, which imposes high demands on its integrity.
Nevertheless, Wikidata frequently gets vandalized, exposing all its users to the risk of
spreading vandalized and falsified information.
Task: Given a Wikidata revision, compute a vandalism score denoting the likelihood of this
revision being vandalism (or similarly damaging).
Data
Training: We will be provided with a training corpus, consisting of Wikidata revisions and
whether they are considered vandalism
Testing: There will be a test data which is not published during the contest, but to be used
in final evaluation
Submission: You may team up to work on this project. If there are multiple teams working
on this project, we may ensemble different teams' results to generate one model and
submit to WSDM Cup's competition, based on your agreement. Grading will be based on
your performance and final report.
43
Icon-Based Visualization Techniques
Visualization of the data values as features of icons
Typical visualization methods
Chernoff Faces
Stick Figures
General techniques
Shape coding: Use shape to represent certain information encoding
Color icons: Use color icons to encode more information
Tile bars: Use small icons to represent the relevant feature vectors in document
retrieval
44
Chernoff Faces
A way to display variables on a two-dimensional surface, e.g., let x be eyebrow slant, y be
eye size, z be nose length, etc.
The figure shows faces produced using 10 characteristics (head eccentricity, eye size, eye spacing, eye eccentricity, pupil size, eyebrow slant, nose size, mouth shape, mouth size, and mouth opening); each is assigned one of 10 possible values. Generated using Mathematica (S. Dickson)
46
Hierarchical Visualization Techniques
Visualization of the data using a hierarchical partitioning into subspaces
Methods
Dimensional Stacking
Worlds-within-Worlds
Tree-Map
Cone Trees
InfoCube
47
Dimensional Stacking
Visualization of oil mining data with longitude and latitude mapped to the
outer x-, y-axes and ore grade and depth mapped to the inner x-, y-axes
49
Worlds-within-Worlds
Assign the function and two most important parameters to innermost world
Fix all other parameters at constant values and draw other (1-, 2-, or 3-dimensional) worlds, choosing these as the axes
Software that uses this paradigm
N–vision: Dynamic
interaction through data
glove and stereo displays,
including rotation, scaling
(inner) and translation
(inner/outer)
Auto Visual: Static
interaction by means of
queries
50
Tree-Map
Screen-filling method which uses a hierarchical partitioning of the screen into regions
depending on the attribute values
The x- and y-dimension of the screen are partitioned alternately according to the
attribute values (classes)
52
Three-D Cone Trees
3D cone tree visualization technique works well for
up to a thousand nodes or so
First build a 2D circle tree that arranges its nodes in
concentric circles centered on the root node
Cannot avoid overlaps when projected to 2D
G. Robertson, J. Mackinlay, S. Card. “Cone Trees:
Animated 3D Visualizations of Hierarchical
Information”, ACM SIGCHI'91
Graph from Nadeau Software Consulting website:
Visualize a social network data set that models the
way an infection spreads from one person to the
next
53
Visualizing Complex Data and Relations: Tag Cloud
Tag cloud: Visualizing user-generated
tags
The importance of a tag is represented by its font size/color
Popularly used to visualize
word/phrase distributions
54
Visualizing Complex Data and Relations: Social Networks
Visualizing non-numerical data: social and information networks
[Figures: an information network and a social network]
55
Chapter 2. Getting to Know Your Data
Data Visualization
Summary
56
Similarity, Dissimilarity, and Proximity
Similarity measure or similarity function
A real-valued function that quantifies the similarity between two objects
Measures how alike two data objects are: the higher the value, the more alike
Often falls in the range [0, 1]: 0 means no similarity; 1 means completely similar
Dissimilarity (or distance) measure
Numerical measure of how different two data objects are
In some sense, the inverse of similarity: The lower, the more alike
Minimum dissimilarity is often 0 (i.e., completely similar)
Range [0, 1] or [0, ∞) , depending on the definition
Proximity usually refers to either similarity or dissimilarity
57
Data Matrix and Dissimilarity Matrix
Data matrix
  A data matrix of n data points with l dimensions:
  $D = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1l} \\ x_{21} & x_{22} & \cdots & x_{2l} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nl} \end{pmatrix}$
Dissimilarity (distance) matrix
  n data points, but registers only the distance d(i, j) (typically a metric)
  Usually symmetric, thus stored as a triangular matrix:
  $\begin{pmatrix} 0 & & & \\ d(2,1) & 0 & & \\ \vdots & \vdots & \ddots & \\ d(n,1) & d(n,2) & \cdots & 0 \end{pmatrix}$
  Distance functions are usually different for real, boolean, categorical, ordinal, ratio, and vector variables
  Weights can be associated with different variables based on applications and data semantics
58
Standardizing Numeric Data
Z-score: $z = \frac{x - \mu}{\sigma}$
  X: raw score to be standardized, μ: mean of the population, σ: standard deviation
  The distance between the raw score and the population mean in units of the standard deviation
  Negative when the raw score is below the mean, positive when above
An alternative way: calculate the mean absolute deviation
  $s_f = \frac{1}{n}\big(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\big)$, where $m_f = \frac{1}{n}(x_{1f} + x_{2f} + \cdots + x_{nf})$
  Standardized measure (z-score): $z_{if} = \frac{x_{if} - m_f}{s_f}$
Using the mean absolute deviation is more robust than using the standard deviation
59
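A small NumPy sketch of both standardization variants (illustrative values; the 60 is deliberately an outlier to show the robustness difference):

```python
import numpy as np

x = np.array([12.0, 15.0, 18.0, 21.0, 60.0])   # illustrative; 60 is an outlier

# Classical z-score (mean / standard deviation)
z_std = (x - x.mean()) / x.std()

# Mean absolute deviation variant (more robust to outliers)
m = x.mean()
s_mad = np.mean(np.abs(x - m))
z_mad = (x - m) / s_mad

print(z_std)
print(z_mad)
```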
Example: Data Matrix and Dissimilarity Matrix
Data Matrix
  point   attribute1   attribute2
  x1      1            2
  x2      3            5
  x3      2            0
  x4      4            5
[Figure: the four points plotted in the plane]
Dissimilarity Matrix (by Euclidean Distance)
        x1     x2     x3     x4
  x1    0
  x2    3.61   0
  x3    2.24   5.1    0
  x4    4.24   1      5.39   0
60
Distance on Numeric Data: Minkowski Distance
Minkowski distance: A popular distance measure
$d(i, j) = \sqrt[p]{\,|x_{i1} - x_{j1}|^p + |x_{i2} - x_{j2}|^p + \cdots + |x_{il} - x_{jl}|^p\,}$
where i = (xi1, xi2, …, xil) and j = (xj1, xj2, …, xjl) are two l-dimensional data
objects, and p is the order (the distance so defined is also called L-p norm)
Properties
d(i, j) > 0 if i ≠ j, and d(i, i) = 0 (Positivity)
d(i, j) = d(j, i) (Symmetry)
d(i, j) ≤ d(i, k) + d(k, j) (Triangle Inequality)
A distance that satisfies these properties is a metric
Note: There are nonmetric dissimilarities, e.g., set differences
61
Special Cases of Minkowski Distance
p = 1: (L1 norm) Manhattan (or city block) distance
  E.g., the Hamming distance: the number of bits that are different between two binary vectors
  $d(i, j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{il} - x_{jl}|$
p = 2: (L2 norm) Euclidean distance
  $d(i, j) = \sqrt{|x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{il} - x_{jl}|^2}$
p → ∞: (L∞ norm, supremum distance) the maximum difference between any attribute of the two objects
62
Example: Minkowski Distance at Special Cases
  point   attribute 1   attribute 2
  x1      1             2
  x2      3             5
  x3      2             0
  x4      4             5
[Figure: the four points plotted in the plane]
Manhattan (L1)
  L1    x1   x2   x3   x4
  x1    0
  x2    5    0
  x3    3    6    0
  x4    6    1    7    0
Euclidean (L2)
  L2    x1     x2     x3     x4
  x1    0
  x2    3.61   0
  x3    2.24   5.1    0
  x4    4.24   1      5.39   0
Supremum (L∞)
  L∞    x1   x2   x3   x4
  x1    0
  x2    3    0
  x3    2    5    0
  x4    3    1    5    0
63
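The three matrices above can be reproduced with SciPy, using the point coordinates from the example:

```python
import numpy as np
from scipy.spatial.distance import cdist

X = np.array([[1, 2],    # x1
              [3, 5],    # x2
              [2, 0],    # x3
              [4, 5]])   # x4

L1 = cdist(X, X, metric="cityblock")    # Manhattan, p = 1
L2 = cdist(X, X, metric="euclidean")    # Euclidean, p = 2
Linf = cdist(X, X, metric="chebyshev")  # supremum, p -> infinity

print(np.round(L1, 2))
print(np.round(L2, 2))
print(np.round(Linf, 2))
```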
Proximity Measure for Binary Attributes
A contingency table for binary data (object i in rows, object j in columns):
              j = 1     j = 0     sum
  i = 1         q         r       q + r
  i = 0         s         t       s + t
  sum         q + s     r + t       p
Distance for symmetric binary variables: $d(i, j) = \frac{r + s}{q + r + s + t}$
Distance for asymmetric binary variables: $d(i, j) = \frac{r + s}{q + r + s}$
Jaccard coefficient (similarity measure for asymmetric binary variables): $sim_{Jaccard}(i, j) = \frac{q}{q + r + s}$
64
Example: Dissimilarity between Asymmetric Binary Variables
  Name  Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
  Jack  M       Y      N      P       N       N       N
  Mary  F       Y      N      P       N       P       N
  Jim   M       Y      P      N       N       N       N
Gender is a symmetric attribute (not counted in)
The remaining attributes are asymmetric binary
Let the values Y and P be 1, and the value N be 0
Pairwise contingency counts (rows: first object, columns: second object):
  Jack vs. Mary: q = 2, r = 0, s = 1, t = 3
  Jack vs. Jim:  q = 1, r = 1, s = 1, t = 3
  Jim vs. Mary:  q = 1, r = 1, s = 2, t = 2
Distances:
  $d(\text{Jack}, \text{Mary}) = \frac{0 + 1}{2 + 0 + 1} = 0.33$
  $d(\text{Jack}, \text{Jim}) = \frac{1 + 1}{1 + 1 + 1} = 0.67$
  $d(\text{Jim}, \text{Mary}) = \frac{1 + 2}{1 + 1 + 2} = 0.75$
65
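A small Python sketch that reproduces these distances from the 0/1-coded asymmetric attributes (Y and P mapped to 1, N to 0):

```python
def asym_binary_dissim(a, b):
    """d = (r + s) / (q + r + s): 0-0 matches (t) are ignored."""
    q = sum(x == 1 and y == 1 for x, y in zip(a, b))
    r = sum(x == 1 and y == 0 for x, y in zip(a, b))
    s = sum(x == 0 and y == 1 for x, y in zip(a, b))
    return (r + s) / (q + r + s)

# Fever, Cough, Test-1..Test-4 with Y/P -> 1, N -> 0
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]

print(round(asym_binary_dissim(jack, mary), 2))  # 0.33
print(round(asym_binary_dissim(jack, jim), 2))   # 0.67
print(round(asym_binary_dissim(jim, mary), 2))   # 0.75
```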
Proximity Measure for Categorical Attributes
Categorical data, also called nominal attributes
Example: Color (red, yellow, blue, green), profession, etc.
Method 1: Simple matching
m: # of matches, p: total # of variables
$d(i, j) = \frac{p - m}{p}$
66
Ordinal Variables
An ordinal variable can be discrete or continuous
Order is important, e.g., rank (e.g., freshman, sophomore, junior, senior)
Can be treated like interval-scaled
Replace an ordinal variable value by its rank: $r_{if} \in \{1, \ldots, M_f\}$
Map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by $z_{if} = \frac{r_{if} - 1}{M_f - 1}$
Example: freshman: 0; sophomore: 1/3; junior: 2/3; senior: 1
Then distance: d(freshman, senior) = 1, d(junior, senior) = 1/3
Compute the dissimilarity using methods for interval-scaled variables
67
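A minimal Python sketch of the rank-to-[0, 1] mapping for the class-standing example:

```python
levels = ["freshman", "sophomore", "junior", "senior"]   # ordered values
M = len(levels)

# z = (rank - 1) / (M - 1), with ranks 1..M
z = {v: (r - 1) / (M - 1) for r, v in enumerate(levels, start=1)}

def ordinal_dist(a, b):
    return abs(z[a] - z[b])

print(z)                                           # freshman 0.0 ... senior 1.0
print(ordinal_dist("freshman", "senior"))          # 1.0
print(round(ordinal_dist("junior", "senior"), 3))  # 0.333
```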
Attributes of Mixed Type
A dataset may contain all attribute types
Nominal, symmetric binary, asymmetric binary, numeric, and ordinal
One may use a weighted formula to combine their effects:
$d(i, j) = \frac{\sum_{f=1}^{p} w_{ij}^{(f)} d_{ij}^{(f)}}{\sum_{f=1}^{p} w_{ij}^{(f)}}$
If f is numeric: Use the normalized distance
If f is binary or nominal: dij(f) = 0 if xif = xjf; or dij(f) = 1 otherwise
If f is ordinal
Compute ranks $r_{if}$ and $z_{if} = \frac{r_{if} - 1}{M_f - 1}$
Treat zif as interval-scaled
68
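A hedged Python sketch of the weighted mixed-type formula for two objects with one nominal, one numeric, and one ordinal attribute (the attribute names, values, ranges, and unit weights are assumptions for illustration):

```python
def mixed_dissim(i, j, ranges, ordinal_levels):
    """Weighted combination over attributes of different types.
    i, j: dicts of {attribute: (type, value)}; all weights w_ij^(f) = 1 here."""
    num, den = 0.0, 0.0
    for f, (kind, xi) in i.items():
        xj = j[f][1]
        if kind == "nominal":
            d = 0.0 if xi == xj else 1.0
        elif kind == "numeric":
            d = abs(xi - xj) / ranges[f]             # normalized distance
        else:                                        # ordinal
            M = len(ordinal_levels[f])
            zi = ordinal_levels[f].index(xi) / (M - 1)   # (rank - 1) / (M - 1)
            zj = ordinal_levels[f].index(xj) / (M - 1)
            d = abs(zi - zj)
        num += d
        den += 1.0
    return num / den

ranges = {"income": 100.0}
levels = {"size": ["small", "medium", "large"]}
obj1 = {"color": ("nominal", "red"), "income": ("numeric", 30.0), "size": ("ordinal", "small")}
obj2 = {"color": ("nominal", "blue"), "income": ("numeric", 70.0), "size": ("ordinal", "large")}

print(mixed_dissim(obj1, obj2, ranges, levels))   # (1 + 0.4 + 1) / 3 = 0.8
```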
Cosine Similarity of Two Vectors
A document can be represented by a bag of terms or a long vector, with each
attribute recording the frequency of a particular term (such as word, keyword, or
phrase) in the document
$\cos(d_1, d_2) = \frac{d_1 \cdot d_2}{\|d_1\|\,\|d_2\|}$
Example: $d_1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)$, $d_2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)$
$d_1 \cdot d_2 = 5 \times 3 + 3 \times 2 + 2 \times 1 + 2 \times 1 = 25$
$\|d_1\| = \sqrt{5^2 + 3^2 + 2^2 + 2^2} = \sqrt{42} = 6.481$
$\|d_2\| = \sqrt{3^2 + 2^2 + 1^2 + 1^2 + 1^2} = \sqrt{17} = 4.12$
Cosine similarity: $\cos(d_1, d_2) = 25 / (6.481 \times 4.12) \approx 0.94$
70
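The document example can be checked with a few lines of NumPy:

```python
import numpy as np

d1 = np.array([5, 0, 3, 0, 2, 0, 0, 2, 0, 0])   # term-frequency vectors from the example
d2 = np.array([3, 0, 2, 0, 1, 1, 0, 1, 0, 1])

cos = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(round(cos, 2))   # 0.94
```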
Announcements: Meeting of the 4th Credit Project
CS412: Assignment #1 was distributed last Tuesday!
The due date is Sept. 15. No late homework will be accepted!!
Waitlist is cleared: We took 50 additional students into the video only session
Please find your status with Holly. You are either in or out (wait for Spring 2017)
Meeting for Project for the 4th Credit
You can change from 4 to 3 credits or from 3 to 4 credits by sending me an e-mail
Meeting time and location: 10-11am Friday (tomorrow!) at 0216 SC
This project is part of WSDM 2017 Cup
Choice #1: Triple Scoring: Computing relevance scores for triples from type-like
relations
Choice #2: Vandalism Detection for Wikipages
TAs/PhD students/postdocs will give you the details in the Friday meeting! You must attend if you want to do the 4th credit project!!!
71
KL Divergence: Comparing Two Probability Distributions
The Kullback-Leibler (KL) divergence: measures the difference between two probability distributions over the same variable x
For discrete distributions: $D_{KL}(p(x) \,\|\, q(x)) = \sum_{x} p(x) \log \frac{p(x)}{q(x)}$
From information theory, closely
related to relative entropy,
information divergence, and
information for discrimination
DKL(p(x) || q(x)): divergence of q(x) from
p(x), measuring the information lost
when q(x) is used to approximate p(x)
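A minimal Python sketch of the discrete KL divergence (the two distributions are illustrative; note the measure is not symmetric):

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) = sum_x p(x) * log(p(x) / q(x)); assumes q(x) > 0 wherever p(x) > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                       # terms with p(x) = 0 contribute 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.4, 0.4, 0.2]    # illustrative distributions over the same variable x
q = [0.5, 0.3, 0.2]

print(kl_divergence(p, q))   # information lost when q is used to approximate p
print(kl_divergence(q, p))   # generally different: KL is not symmetric
```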
Data Visualization
Summary
75
Summary
Data attribute types: nominal, binary, ordinal, interval-scaled, ratio-scaled
Many types of data sets, e.g., numerical, text, graph, Web, image.
Gain insight into the data by:
Basic statistical data description: central tendency, dispersion, graphical displays
Data visualization: map data onto graphical primitives
Measure data similarity
The above steps are the beginning of data preprocessing
Many methods have been developed, but this is still an active area of research
76
References
W. Cleveland, Visualizing Data, Hobart Press, 1993
T. Dasu and T. Johnson. Exploratory Data Mining and Data Cleaning. John Wiley, 2003
U. Fayyad, G. Grinstein, and A. Wierse. Information Visualization in Data Mining and Knowledge
Discovery, Morgan Kaufmann, 2001
L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis. John
Wiley & Sons, 1990.
H. V. Jagadish et al., Special Issue on Data Reduction Techniques. Bulletin of the Tech. Committee on
Data Eng., 20(4), Dec. 1997
D. A. Keim. Information visualization and visual data mining, IEEE trans. on Visualization and Computer
Graphics, 8(1), 2002
D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999
S. Santini and R. Jain, "Similarity measures", IEEE Trans. on Pattern Analysis and Machine Intelligence, 21(9), 1999
E. R. Tufte. The Visual Display of Quantitative Information, 2nd ed., Graphics Press, 2001
C. Yu, et al., Visual data mining of multimedia data for social and behavioral studies, Information
Visualization, 8(1), 2009
77