
Data Mining

T Praveen Kumar
Asst. Prof., CSE
[email protected]
Poll Question: www.pollev.com/thumukuntapr500
UNIT 1: Introduction
 Why Data Mining?
 What Is Data Mining?
 A Multi-Dimensional View of Data Mining
 What Kind of Data Can Be Mined?
 What Kinds of Patterns Can Be Mined?
 What Technologies Are Used?
 What Kinds of Applications Are Targeted?
 Major Issues in Data Mining
 A Brief History of Data Mining and Data Mining Society
 Summary
Why Data Mining?
 The explosive growth of data: from terabytes to petabytes
 Data collection and data availability
 Automated data collection tools, database systems, Web, computerized society
 Major sources of abundant data
 Business: Web, e-commerce, transactions, stocks, …
 Science: remote sensing, bioinformatics, scientific simulation, …
 Society and everyone: news, digital cameras, YouTube, …
 We are drowning in data, but starving for knowledge!
 “Necessity is the mother of invention”: data mining, the automated analysis of massive data sets
Evolution of Sciences
 Before 1600, empirical science
 1600-1950s, theoretical science
 Each discipline has grown a theoretical component. Theoretical models
often motivate experiments and generalize our understanding.
 1950s-1990s, computational science
 Over the last 50 years, most disciplines have grown a third, computational
branch (e.g. empirical, theoretical, and computational ecology, or physics,
or linguistics.)
 Computational Science traditionally meant simulation. It grew out of our
inability to find closed-form solutions for complex mathematical models.
 1990-now, data science
 The flood of data from new scientific instruments and simulations
 The ability to economically store and manage petabytes of data online
 The Internet and computing Grid that makes all these archives universally
accessible
 Scientific info. management, acquisition, organization, query, and
visualization tasks scale almost linearly with data volumes. Data mining
is a major new challenge!
 Jim Gray and Alex Szalay, “The World Wide Telescope: An Archetype for Online Science”
Evolution of Database
Technology
 1960s:
 Data collection, database creation, IMS and network DBMS
 1970s:
 Relational data model, relational DBMS implementation
 1980s:
 RDBMS, advanced data models (extended-relational, OO,
deductive, etc.)
 Application-oriented DBMS (spatial, scientific, engineering, etc.)
 1990s:
 Data mining, data warehousing, multimedia databases, and Web
databases
 2000s
 Stream data management and mining
 Data mining and its applications
 Web technology (XML, data integration) and global information systems
What Is Data Mining?
 Data mining (knowledge discovery from data)
 Extraction of interesting (non-trivial, implicit, previously unknown and potentially useful) patterns or knowledge from huge amounts of data
 Data mining: a misnomer?
 Alternative names
 Knowledge discovery (mining) in databases (KDD), knowledge extraction, data/pattern analysis, data archeology, data dredging, information harvesting, business intelligence, etc.
 Watch out: Is everything “data mining”?
 Simple search and query processing
 (Deductive) expert systems
Knowledge Discovery (KDD) Process
 This is a view from typical database systems and data warehousing communities
 Data mining plays an essential role in the knowledge discovery process
 The process flows as follows:
 Databases → Data Cleaning → Data Integration → Data Warehouse → Selection → Task-relevant Data → Data Mining → Pattern Evaluation
Example: A Web Mining Framework
 Web mining usually involves
 Data cleaning
 Data integration from multiple sources
 Warehousing the data
 Data cube construction
 Data selection for data mining
 Data mining
 Presentation of the mining results
 Patterns and knowledge to be used or stored into a knowledge base
Data Mining in Business Intelligence
 Increasing potential to support business decisions, from the bottom layer to the top, with the typical user of each layer:
 Decision Making (End User)
 Data Presentation: visualization techniques (Business Analyst)
 Data Mining: information discovery (Data Analyst)
 Data Exploration: statistical summary, querying, and reporting
 Data Preprocessing/Integration, Data Warehouses (DBA)
 Data Sources: paper, files, Web documents, scientific experiments, database systems
Example: Mining vs. Data
Exploration
 Business intelligence view
 Warehouse, data cube, reporting but not much
mining
 Business objects vs. data mining tools
 Supply chain example: tools
 Data presentation
 Exploration

11
KDD Process: A Typical View from ML and Statistics
 Input Data → Data Pre-Processing → Data Mining → Post-Processing
 Data Pre-Processing: data integration, normalization, feature selection, dimension reduction
 Data Mining: pattern discovery, association & correlation, classification, clustering, outlier analysis, …
 Post-Processing: pattern evaluation, pattern selection, pattern interpretation, pattern visualization
 This is a view from typical machine learning and statistics communities
Example: Medical Data Mining
 Health care & medical data mining often adopt such a view from statistics and machine learning:
 Preprocessing of the data (including feature extraction and dimension reduction)
 Classification and/or clustering processes
 Post-processing for presentation
Multi-Dimensional View of Data Mining
 Data to be mined
 Database data (extended-relational, object-oriented, heterogeneous, legacy), data warehouse, transactional data, stream, spatiotemporal, time-series, sequence, text and web, multimedia, graphs & social and information networks
 Knowledge to be mined (or: data mining functions)
 Characterization, discrimination, association, classification, clustering, trend/deviation, outlier analysis, etc.
 Descriptive vs. predictive data mining
 Multiple/integrated functions and mining at multiple levels
 Techniques utilized
 Data-intensive, data warehouse (OLAP), machine learning, statistics, pattern recognition, visualization, high-performance computing, etc.
 Applications adapted
 Retail, telecommunication, banking, fraud analysis, bio-data mining, etc.
Data Mining: On What Kinds of
Data?
 Database-oriented data sets and applications
 Relational database, data warehouse, transactional database
 Advanced data sets and advanced applications
 Data streams and sensor data
 Time-series data, temporal data, sequence data (incl. bio-
sequences)
 Structured data, graphs, social networks and multi-linked data
 Object-relational databases
 Heterogeneous databases and legacy databases
 Spatial data and spatiotemporal data
 Multimedia database
 Text databases
 The World-Wide Web
17
Data Mining Function: (1)
Generalization
 Information integration and data warehouse
construction
 Data cleaning, transformation, integration, and
multidimensional data model
 Data cube technology
 Scalable methods for computing (i.e.,
materializing) multidimensional aggregates
 OLAP (online analytical processing)
 Multidimensional concept description:
Characterization and discrimination
 Generalize, summarize, and contrast data
characteristics, e.g., dry vs. wet region
19
Data Mining Function: (2) Association and Correlation Analysis
 Frequent patterns (or frequent itemsets)
 What items are frequently purchased together in your Walmart?
 Association, correlation vs. causality
 A typical association rule
 Diaper → Beer [0.5%, 75%] (support, confidence)
 Are strongly associated items also strongly correlated?
 How to mine such patterns and rules efficiently in large datasets?
 How to use such patterns for classification, clustering, and other applications?
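Support and confidence can be computed directly from transaction data. A minimal Python sketch, using hypothetical basket data (the function name and items are illustrative):

```python
def support_confidence(transactions, antecedent, consequent):
    """Compute support and confidence for the rule antecedent -> consequent."""
    n = len(transactions)
    a = frozenset(antecedent)
    both = a | frozenset(consequent)
    count_a = sum(1 for t in transactions if a <= t)
    count_both = sum(1 for t in transactions if both <= t)
    support = count_both / n           # fraction of all transactions containing both
    confidence = count_both / count_a  # fraction of antecedent transactions containing consequent
    return support, confidence

# Hypothetical market-basket data
baskets = [
    frozenset({"diaper", "beer", "milk"}),
    frozenset({"diaper", "beer"}),
    frozenset({"diaper", "bread"}),
    frozenset({"beer", "bread"}),
]
print(support_confidence(baskets, {"diaper"}, {"beer"}))  # (0.5, 0.666...)
```

Here Diaper → Beer holds in 2 of 4 baskets (support 50%) and in 2 of the 3 baskets that contain diapers (confidence about 67%).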
Data Mining Function: (3)
Classification
 Classification and label prediction
 Construct models (functions) based on some training
examples
 Describe and distinguish classes or concepts for future
prediction

E.g., classify countries based on (climate), or classify
cars based on (gas mileage)
 Predict some unknown class labels
 Typical methods
 Decision trees, naïve Bayesian classification, support
vector machines, neural networks, rule-based
classification, pattern-based classification, logistic
regression, …
 Typical applications
 Credit card fraud detection, direct marketing, classifying stars, diseases, web pages, …
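As a toy illustration of classification from training examples, here is a one-nearest-neighbor sketch in plain Python (not one of the methods listed above, chosen only for brevity; the labels and feature values are made up):

```python
def predict_1nn(train, query):
    """Classify query by the label of its nearest training example (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(train, key=lambda ex: dist2(ex[0], query))
    return nearest[1]

# Hypothetical labeled examples: (feature vector, class label)
train = [((1.0, 1.0), "low-risk"), ((1.2, 0.8), "low-risk"),
         ((5.0, 5.0), "high-risk"), ((4.8, 5.3), "high-risk")]
print(predict_1nn(train, (1.1, 0.9)))  # low-risk
print(predict_1nn(train, (5.1, 4.9)))  # high-risk
```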
Data Mining Function: (4) Cluster
Analysis
 Unsupervised learning (i.e., Class label is unknown)
 Group data to form new categories (i.e., clusters),
e.g., cluster houses to find distribution patterns
 Principle: Maximizing intra-class similarity &
minimizing interclass similarity
 Many methods and applications

22
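The principle "maximize intra-class similarity, minimize inter-class similarity" is what Lloyd's k-means iteration optimizes locally. A minimal sketch on 2-D points (toy data; a real implementation would handle empty clusters and convergence checks more carefully):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means on 2-D points; returns the final centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2 + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(sorted(kmeans(pts, 2)))  # two centroids, one near each group
```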
Data Mining Function: (5) Outlier
Analysis
 Outlier analysis
 Outlier: A data object that does not comply with the
general behavior of the data
 Noise or exception? ― One person’s garbage could be
another person’s treasure
 Methods: byproduct of clustering or regression analysis, …
 Useful in fraud detection, rare events analysis

23
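One simple statistical way to flag objects that "do not comply with the general behavior of the data" is a z-score test (one of many possible methods; the data and threshold here are illustrative):

```python
def zscore_outliers(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n  # population variance
    std = var ** 0.5
    return [v for v in values if abs(v - mean) > threshold * std]

data = [10, 11, 9, 10, 12, 10, 11, 95]  # 95 is the oddball
print(zscore_outliers(data, threshold=2.0))  # [95]
```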
Time and Ordering: Sequential
Pattern, Trend and Evolution Analysis
 Sequence, trend and evolution analysis
 Trend, time-series, and deviation analysis: e.g.,

regression and value prediction


 Sequential pattern mining


e.g., first buy digital camera, then buy large
SD memory cards
 Periodicity analysis

 Motifs and biological sequence analysis


Approximate and consecutive motifs
 Similarity-based analysis

 Mining data streams


 Ordered, time-varying, potentially infinite, data

streams 24
Structure and Network Analysis
 Graph mining
 Finding frequent subgraphs (e.g., chemical compounds),

trees (XML), substructures (web fragments)


 Information network analysis
 Social networks: actors (objects, nodes) and relationships

(edges)

e.g., author networks in CS, terrorist networks
 Multiple heterogeneous networks


A person could be in multiple information networks: friends, family, classmates, …
 Links carry a lot of semantic information: Link mining

 Web mining
 Web is a big information network: from PageRank to

Google
 Analysis of Web information networks
 Web community discovery, opinion mining, usage mining, …
Evaluation of Knowledge
 Are all mined knowledge interesting?
 One can mine a tremendous amount of “patterns” and knowledge
 Some may fit only certain dimension space (time, location,
…)
 Some may not be representative, may be transient, …
 Evaluation of mined knowledge → directly mine only
interesting knowledge?
 Descriptive vs. predictive
 Coverage
 Typicality vs. novelty
 Accuracy
 Timeliness 26
Data Mining: Confluence of Multiple Disciplines
 Data mining sits at the confluence of: machine learning, pattern recognition, statistics, visualization, database technology, algorithms, high-performance computing, and applications
Why Confluence of Multiple
Disciplines?
 Tremendous amount of data
 Algorithms must be highly scalable to handle terabytes of data
 High-dimensionality of data
 Micro-array may have tens of thousands of dimensions
 High complexity of data
 Data streams and sensor data
 Time-series data, temporal data, sequence data
 Structured data, graphs, social networks and multi-linked data
 Heterogeneous databases and legacy databases
 Spatial, spatiotemporal, multimedia, text and Web data
 Software programs, scientific simulations
 New and sophisticated applications
29
Applications of Data Mining
 Web page analysis: from web page classification, clustering
to PageRank & HITS algorithms
 Collaborative analysis & recommender systems
 Basket data analysis to targeted marketing
 Biological and medical data analysis: classification, cluster
analysis (microarray data analysis), biological sequence
analysis, biological network analysis
 Data mining and software engineering (e.g., IEEE Computer,
Aug. 2009 issue)
 From major dedicated data mining systems/tools (e.g., SAS,
MS SQL-Server Analysis Manager, Oracle Data Mining Tools)
to invisible data mining

31
Major Issues in Data Mining
(1)
 Mining Methodology
 Mining various and new kinds of knowledge
 Mining knowledge in multi-dimensional space
 Data mining: An interdisciplinary effort
 Boosting the power of discovery in a networked
environment
 Handling noise, uncertainty, and incompleteness of data
 Pattern evaluation and pattern- or constraint-guided
mining
 User Interaction
 Interactive mining
 Incorporation of background knowledge
 Presentation and visualization of data mining results
Major Issues in Data Mining
(2)

 Efficiency and Scalability


 Efficiency and scalability of data mining algorithms
 Parallel, distributed, stream, and incremental mining
methods
 Diversity of data types
 Handling complex types of data
 Mining dynamic, networked, and global data repositories
 Data mining and society
 Social impacts of data mining
 Privacy-preserving data mining
 Invisible data mining

34
UNIT 1: Getting to Know Your
Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

35
Types of Data Sets
 Record
 Relational records
 Data matrix, e.g., numerical matrix, crosstabs
 Document data: text documents represented as term-frequency vectors, e.g.:

              team  coach  play  ball  score  game  win  lost  timeout  season
 Document 1    3     0      5     0     2      6     0    2     0        2
 Document 2    0     7      0     2     1      0     0    3     0        0
 Document 3    0     1      0     0     1      2     2    0     3        0

 Transaction data, e.g.:

 TID  Items
 1    Bread, Coke, Milk
 2    Beer, Bread
 3    Beer, Coke, Diaper, Milk
 4    Beer, Bread, Diaper, Milk
 5    Coke, Diaper, Milk

 Graph and network
 World Wide Web
 Social or information networks
 Molecular structures
 Ordered
 Video data: sequence of images
 Temporal data: time-series
 Sequential data: transaction sequences
 Genetic sequence data
 Spatial, image and multimedia
 Spatial data: maps
 Image data
 Video data
Important Characteristics of
Structured Data

 Dimensionality
 Curse of dimensionality
 Sparsity
 Only presence counts
 Resolution

Patterns depend on the scale
 Distribution
 Centrality and dispersion

37
Data Objects

 Data sets are made up of data objects.


 A data object represents an entity.
 Examples:
 sales database: customers, store items, sales
 medical database: patients, treatments
 university database: students, professors,
courses
 Also called samples , examples, instances, data
points, objects, tuples.
 Data objects are described by attributes
 Database rows → data objects; columns → attributes
Attributes
 Attribute (or dimension, feature, variable): a data field representing a characteristic or feature of a data object
 E.g., customer_ID, name, address
 Types:
 Nominal
 Binary
 Ordinal
 Numeric: quantitative
 Interval-scaled
 Ratio-scaled
Attribute Types
 Nominal: categories, states, or “names of things”
 Hair_color = {auburn, black, blond, brown, grey, red,
white}
 marital status, occupation, ID numbers, zip codes
 Binary
 Nominal attribute with only 2 states (0 and 1)
 Symmetric binary: both outcomes equally important

e.g., gender
 Asymmetric binary: outcomes not equally important.

e.g., medical test (positive vs. negative)

Convention: assign 1 to most important outcome (e.g.,
HIV positive)
 Ordinal
 Values have a meaningful order (ranking) but magnitude
between successive values is not known.
 Size = {small, medium, large}, grades, army rankings
40
Numeric Attribute Types
 Quantity (integer or real-valued)
 Interval
 Measured on a scale of equal-sized units
 Values have order
 E.g., temperature in °C or °F, calendar dates
 No true zero-point
 Ratio
 Inherent zero-point
 We can speak of values as being an order of magnitude larger than the unit of measurement (10 K is twice as high as 5 K)
 E.g., temperature in Kelvin, length, counts, monetary quantities
Discrete vs. Continuous
Attributes
 Discrete Attribute
 Has only a finite or countably infinite set of

values

E.g., zip codes, profession, or the set of words
in a collection of documents
 Sometimes, represented as integer variables

 Note: Binary attributes are a special case of

discrete attributes
 Continuous Attribute
 Has real numbers as attribute values


E.g., temperature, height, or weight
 Practically, real values can only be measured and

represented using a finite number of digits


 Continuous attributes are typically represented as floating-point variables
Basic Statistical Descriptions of
Data
 Motivation
 To better understand the data: central
tendency, variation and spread
 Data dispersion characteristics
 median, max, min, quantiles, outliers, variance,
etc.
 Numerical dimensions correspond to sorted
intervals
 Data dispersion: analyzed with multiple
granularities of precision
 Boxplot or quantile analysis on sorted intervals
 Dispersion analysis on computed measures
 Folding measures into numerical dimensions
 Boxplot or quantile analysis on the transformed
cube
44
Measuring the Central Tendency
 Mean (algebraic measure) (sample vs. population), where n is the sample size and N the population size:
 $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$, $\mu = \frac{\sum x}{N}$
 Weighted arithmetic mean: $\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$
 Trimmed mean: chopping extreme values
 Median
 Middle value if odd number of values; average of the middle two values otherwise
 Estimated by interpolation (for grouped data): $median = L_1 + \left(\frac{n/2 - (\sum freq)_l}{freq_{median}}\right) \times width$
 Mode
 Value that occurs most frequently in the data
 Unimodal, bimodal, trimodal
 Empirical formula: $mean - mode \approx 3 \times (mean - median)$
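The three central-tendency measures above can be computed in a few lines of plain Python (the sample data is made up; `mode` returns a list because a data set may be multimodal):

```python
from collections import Counter

def mean(xs):
    """Arithmetic mean: sum of values over their count."""
    return sum(xs) / len(xs)

def median(xs):
    """Middle value of the sorted data; average of the middle two if n is even."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def mode(xs):
    """All values that occur with maximum frequency (may be more than one)."""
    counts = Counter(xs)
    best = max(counts.values())
    return [v for v, c in counts.items() if c == best]

data = [1, 2, 2, 3, 4, 7, 9]
print(mean(data), median(data), mode(data))  # 4.0 3 [2]
```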
Symmetric vs. Skewed Data
 Median, mean and mode of symmetric, positively skewed, and negatively skewed data
 [Figure: three distribution curves: symmetric, positively skewed, negatively skewed]
 Data Mining: Concepts and Techniques
Measuring the Dispersion of Data
 Quartiles, outliers and boxplots
 Quartiles: Q1 (25th percentile), Q3 (75th percentile)
 Inter-quartile range: IQR = Q3 − Q1
 Five-number summary: min, Q1, median, Q3, max
 Boxplot: ends of the box are the quartiles; median is marked; add whiskers, and plot outliers individually
 Outlier: usually, a value higher/lower than 1.5 × IQR beyond the quartiles
 Variance and standard deviation (sample: s, population: σ)
 Variance (algebraic, scalable computation):
 $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{1}{n-1}\left[\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2\right]$
 $\sigma^2 = \frac{1}{N}\sum_{i=1}^{n}(x_i - \mu)^2 = \frac{1}{N}\sum_{i=1}^{n} x_i^2 - \mu^2$
 Standard deviation s (or σ) is the square root of variance s² (or σ²)
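The two sample-variance forms are algebraically equivalent; the second is "scalable" because it needs only running sums. A small check in Python (toy data):

```python
def sample_variance_two_pass(xs):
    """Definitional form: s^2 = (1/(n-1)) * sum((x_i - mean)^2)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def sample_variance_one_pass(xs):
    """Equivalent scalable form using only sum(x) and sum(x^2)."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return (sxx - sx * sx / n) / (n - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sample_variance_two_pass(data), sample_variance_one_pass(data))
```

(Numerically, the one-pass form can lose precision on data with a large mean and small spread; the two-pass form is safer when two passes are affordable.)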
Boxplot Analysis
 Five-number summary of a distribution
 Minimum, Q1, Median, Q3, Maximum
 Boxplot
 Data is represented with a box
 The ends of the box are at the first and
third quartiles, i.e., the height of the
box is IQR
 The median is marked by a line within
the box
 Whiskers: two lines outside the box
extended to Minimum and Maximum
 Outliers: points beyond a specified
outlier threshold, plotted individually
48
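The five-number summary and the 1.5 × IQR outlier rule above can be sketched as follows (the quartile uses one common linear-interpolation convention; several others exist, and the data is made up):

```python
def quartile(sorted_xs, q):
    """Quantile by linear interpolation between order statistics."""
    pos = q * (len(sorted_xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(sorted_xs) - 1)
    return sorted_xs[lo] + (pos - lo) * (sorted_xs[hi] - sorted_xs[lo])

def five_number_summary(xs):
    """(min, Q1, median, Q3, max) of the data."""
    s = sorted(xs)
    return s[0], quartile(s, 0.25), quartile(s, 0.5), quartile(s, 0.75), s[-1]

def iqr_outliers(xs):
    """Values beyond 1.5 * IQR below Q1 or above Q3."""
    _, q1, _, q3, _ = five_number_summary(xs)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in xs if x < lo or x > hi]

data = [7, 15, 36, 39, 40, 41, 42, 43, 47, 49]
print(five_number_summary(data))
print(iqr_outliers(data))  # [7, 15]
```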
Visualization of Data Dispersion: 3-D
Boxplots

Data Mining: Concepts and


49November 23, 2024 Techniques
Properties of Normal Distribution
Curve
 The normal (distribution) curve

From μ–σ to μ+σ: contains about 68% of the
measurements (μ: mean, σ: standard deviation)

From μ–2σ to μ+2σ: contains about 95% of it

From μ–3σ to μ+3σ: contains about 99.7% of it

50
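The 68–95–99.7 coverage figures can be checked empirically by sampling from a standard normal (a quick simulation sketch; sample size and seed are arbitrary):

```python
import random

def coverage(n=100_000, seed=42):
    """Fraction of N(0,1) samples within 1, 2, and 3 standard deviations."""
    rng = random.Random(seed)
    xs = [rng.gauss(0, 1) for _ in range(n)]
    return tuple(sum(abs(x) <= k for x in xs) / n for k in (1, 2, 3))

print(coverage())  # roughly (0.68, 0.95, 0.997)
```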
Graphic Displays of Basic Statistical
Descriptions

 Boxplot: graphic display of five-number summary


 Histogram: x-axis are values, y-axis repres.
frequencies
 Quantile plot: each value x_i is paired with f_i indicating that approximately 100·f_i % of the data are ≤ x_i
 Quantile-quantile (q-q) plot: graphs the
quantiles of one univariant distribution against the
corresponding quantiles of another
51
Histogram Analysis
 Histogram: graph display of tabulated frequencies, shown as bars
 It shows what proportion of cases fall into each of several categories
 Differs from a bar chart in that it is the area of the bar that denotes the value, not the height as in bar charts; a crucial distinction when the categories are not of uniform width
 The categories are usually specified as non-overlapping intervals of some variable; the categories (bars) must be adjacent
 [Figure: histogram of unit prices with bins from 10000 to 90000 and frequencies up to 40]
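The tabulation behind a histogram is simple binning into the non-overlapping intervals. A pure-Python sketch (bin edges and prices are illustrative):

```python
def histogram(values, edges):
    """Count values into bins [edges[i], edges[i+1]); the last bin is closed on the right."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(counts)):
            last = i == len(counts) - 1
            if edges[i] <= v < edges[i + 1] or (last and v == edges[-1]):
                counts[i] += 1
                break
    return counts

prices = [12, 25, 25, 33, 47, 58, 71, 71, 72, 95]
print(histogram(prices, [0, 25, 50, 75, 100]))  # [1, 4, 4, 1]
```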
Histograms Often Tell More than Boxplots
 The two histograms shown on the left may have the same boxplot representation
 The same values for: min, Q1, median, Q3, max
Quantile Plot
 Displays all of the data (allowing the user to assess both the overall behavior and unusual occurrences)
 Plots quantile information
 For data x_i sorted in increasing order, f_i indicates that approximately 100·f_i % of the data are below or equal to the value x_i
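The (f_i, x_i) pairs of a quantile plot can be generated with the common convention f_i = (i − 0.5)/n (other conventions exist; the data is made up):

```python
def quantile_points(xs):
    """Pairs (f_i, x_(i)) with f_i = (i - 0.5)/n over the sorted data."""
    s = sorted(xs)
    n = len(s)
    return [((i - 0.5) / n, x) for i, x in enumerate(s, start=1)]

data = [40, 56, 38, 64, 47]
for f, x in quantile_points(data):
    print(f"{f:.1f} -> {x}")
```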
Quantile-Quantile (Q-Q) Plot
 Graphs the quantiles of one univariate distribution against
the corresponding quantiles of another
 View: Is there a shift in going from one distribution to another?
 Example shows unit price of items sold at Branch 1 vs.
Branch 2 for each quantile. Unit prices of items sold at
Branch 1 tend to be lower than those at Branch 2.

55
Scatter plot
 Provides a first look at bivariate data to see
clusters of points, outliers, etc
 Each pair of values is treated as a pair of
coordinates and plotted as points in the plane

56
Positively and Negatively Correlated Data
 The left half fragment is positively correlated
 The right half is negatively correlated
Uncorrelated Data

58
Data Visualization
 Why data visualization?
 Gain insight into an information space by mapping data onto
graphical primitives
 Provide qualitative overview of large data sets
 Search for patterns, trends, structure, irregularities, relationships
among data
 Help find interesting regions and suitable parameters for further
quantitative analysis
 Provide a visual proof of computer representations derived
 Categorization of visualization methods:
 Pixel-oriented visualization techniques
 Geometric projection visualization techniques
 Icon-based visualization techniques
 Hierarchical visualization techniques
 Visualizing complex data and relations

60
Pixel-Oriented Visualization
Techniques
 For a data set of m dimensions, create m windows on the
screen, one for each dimension
 The m dimension values of a record are mapped to m pixels at
the corresponding positions in the windows
 The colors of the pixels reflect the corresponding values

(a) Income (b) Credit (c) transaction (d) age


Limit volume 61
Laying Out Pixels in Circle
Segments
 To save space and show the connections among multiple
dimensions, space filling is often done in a circle segment

 [Figure: (a) representing a data record in a circle segment; (b) laying out pixels in circle segments]
Geometric Projection
Visualization Techniques
 Visualization of geometric transformations and
projections of the data
 Methods
 Direct visualization
 Scatterplot and scatterplot matrices
 Landscapes
 Projection pursuit technique: Help users find
meaningful projections of multidimensional data
 Prosection views
 Hyperslice
 Parallel coordinates
63
Direct Data Visualization
 [Figure: ribbons with twists based on vorticity]
Scatterplot Matrices
 Matrix of scatterplots (x-y diagrams) of the k-dim. data [total of (k² − k)/2 distinct scatterplots]
 (Used by permission of M. Ward, Worcester Polytechnic Institute)
Landscapes
 [Figure: news articles visualized as a landscape; used by permission of B. Wright, Visible Decisions Inc.]
 Visualization of the data as a perspective landscape
 The data needs to be transformed into a (possibly artificial) 2D spatial representation which preserves the characteristics of the data
Parallel Coordinates
 n equidistant axes which are parallel to one of the screen
axes and correspond to the attributes
 The axes are scaled to the [minimum, maximum]: range of
the corresponding attribute
 Every data item corresponds to a polygonal line which
intersects each of the axes at the point which corresponds to
the value for the attribute

 [Figure: k parallel axes labeled Attr. 1, Attr. 2, Attr. 3, …, Attr. k; each record is a polygonal line crossing every axis]
Parallel Coordinates of a Data Set

68
Icon-Based Visualization
Techniques
 Visualization of the data values as features of icons
 Typical visualization methods
 Chernoff Faces
 Stick Figures
 General techniques
 Shape coding: Use shape to represent certain
information encoding
 Color icons: Use color icons to encode more
information
 Tile bars: Use small icons to represent the
relevant feature vectors in document retrieval
69
Chernoff Faces
 A way to display variables on a two-dimensional surface, e.g.,
let x be eyebrow slant, y be eye size, z be nose length, etc.
 The figure shows faces produced using 10 characteristics (head eccentricity, eye size, eye spacing, eye eccentricity, pupil size, eyebrow slant, nose size, mouth shape, mouth size, and mouth opening): each assigned one of 10 possible values, generated using Mathematica (S. Dickson)
 REFERENCE: Gonick, L. and Smith, W.
The Cartoon Guide to Statistics. New
York: Harper Perennial, p. 212, 1993
 Weisstein, Eric W. "Chernoff Face."
From MathWorld--A Wolfram Web
Resource.
mathworld.wolfram.com/ChernoffFace.
70
Stick Figure
 [Figure: a census data set visualized with stick figures, showing age, income, gender, education, etc.; used by permission of G. Grinstein, University of Massachusetts at Lowell]
 A 5-piece stick figure (1 body and 4 limbs with different angle/length)
 Two attributes are mapped to the display axes; the remaining attributes are mapped to the angle or length of the limbs
Hierarchical Visualization
Techniques

 Visualization of the data using a hierarchical


partitioning into subspaces
 Methods
 Dimensional Stacking
 Worlds-within-Worlds
 Tree-Map
 Cone Trees
 InfoCube

72
Dimensional Stacking
 [Figure: a 2-D grid where attributes 1 and 2 index the outer grid and attributes 3 and 4 index the inner grids]
 Partitioning of the n-dimensional attribute space in
2-D subspaces, which are ‘stacked’ into each other
 Partitioning of the attribute value ranges into
classes. The important attributes should be used on
the outer levels.
 Adequate for data with ordinal attributes of low
cardinality
 But, difficult to display more than nine dimensions
 Important to map dimensions appropriately
73
Dimensional Stacking
Used by permission of M. Ward, Worcester Polytechnic Institute

Visualization of oil mining data with longitude and latitude mapped to the
outer x-, y-axes and ore grade and depth mapped to the inner x-, y-axes
74
Worlds-within-Worlds
 Assign the function and two most important parameters to
innermost world
 Fix all other parameters at constant values - draw other (1 or 2
or 3 dimensional worlds choosing these as the axes)
 Software that uses this paradigm

N–vision: Dynamic
interaction through
data glove and stereo
displays, including
rotation, scaling
(inner) and
translation
(inner/outer)

Auto Visual: Static
interaction by means
of queries

75
Tree-Map
 Screen-filling method which uses a hierarchical
partitioning of the screen into regions depending on
the attribute values
 The x- and y-dimension of the screen are partitioned
alternately according to the attribute values
(classes)

 [Figure: tree-map example; Ack.: MSR Netscan image]
Tree-Map of a File System
(Schneiderman)

77
InfoCube
 A 3-D visualization technique where
hierarchical information is displayed as
nested semi-transparent cubes
 The outermost cubes correspond to the top
level data, while the subnodes or the lower
level data are represented as smaller cubes
inside the outermost cubes, and so on

78
Three-D Cone Trees
 3D cone tree visualization technique
works well for up to a thousand nodes
or so
 First build a 2D circle tree that
arranges its nodes in concentric
circles centered on the root node
 Cannot avoid overlaps when
projected to 2D
 G. Robertson, J. Mackinlay, S. Card.
“Cone Trees: Animated 3D
Visualizations of Hierarchical
Information”, ACM SIGCHI'91
 Graph from Nadeau Software Consulting website: visualize a social network data set that models the way an infection spreads from one person to another
 Ack.: http://nadeausoftware.com/articles/visualization
Visualizing Complex Data and Relations
 Visualizing non-numerical data: text and social networks
 Tag cloud: visualizing user-generated tags; the importance of a tag is represented by font size/color
 Besides text data, there are also methods to visualize other relations (e.g., Newsmap: Google News stories)
Similarity and Dissimilarity
 Similarity

Numerical measure of how alike two data objects
are

Value is higher when objects are more alike

Often falls in the range [0,1]
 Dissimilarity (e.g., distance)

Numerical measure of how different two data
objects are

Lower when objects are more alike

Minimum dissimilarity is often 0

Upper limit varies
 Proximity refers to a similarity or dissimilarity

82
Data Matrix and Dissimilarity Matrix
 Data matrix

n data points with p dimensions

Two modes

        | x11  ...  x1f  ...  x1p |
        | ...  ...  ...  ...  ... |
        | xi1  ...  xif  ...  xip |
        | ...  ...  ...  ...  ... |
        | xn1  ...  xnf  ...  xnp |

 Dissimilarity matrix

n data points, but registers only the distance

A triangular matrix

Single mode

        |   0                          |
        | d(2,1)   0                   |
        | d(3,1)  d(3,2)   0           |
        |   :       :      :           |
        | d(n,1)  d(n,2)  ...  ...  0  |

83
83
Proximity Measure for Nominal Attributes

 Can take 2 or more states, e.g., red, yellow, blue, green (generalization of a binary attribute)
 Method 1: Simple matching

        d(i, j) = (p − m) / p

 m: # of matches, p: total # of variables
 Method 2: Use a large number of binary attributes
 creating a new binary attribute for each of the M nominal states

84
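The simple-matching dissimilarity above can be sketched in a few lines of Python (a minimal illustration; the attribute vectors are made-up examples):

```python
def nominal_dissimilarity(i, j):
    """Simple matching: d(i, j) = (p - m) / p, where m is the number
    of attributes on which objects i and j match and p is the total
    number of attributes."""
    p = len(i)
    m = sum(1 for a, b in zip(i, j) if a == b)
    return (p - m) / p

# Two objects described by three nominal attributes; they match on 2 of 3
d = nominal_dissimilarity(("red", "small", "round"),
                          ("red", "large", "round"))
# d = (3 - 2) / 3 = 0.333...
```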
Proximity Measure for Binary Attributes

 A contingency table for binary data:

                     Object j
                     1      0     sum
  Object i    1      q      r     q+r
              0      s      t     s+t
            sum     q+s    r+t     p

 Distance measure for symmetric binary variables:

        d(i, j) = (r + s) / (q + r + s + t)

 Distance measure for asymmetric binary variables:

        d(i, j) = (r + s) / (q + r + s)

 Jaccard coefficient (similarity measure for asymmetric binary variables):

        sim_Jaccard(i, j) = q / (q + r + s)

Note: Jaccard coefficient is the same as “coherence”

85
Dissimilarity between Binary Variables
 Example

Name   Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
Jack     M       Y      N      P       N       N       N
Mary     F       Y      N      P       N       P       N
Jim      M       Y      P      N       N       N       N

 Gender is a symmetric attribute

The remaining attributes are asymmetric binary

Let the values Y and P be 1, and the value N be 0

        d(jack, mary) = (0 + 1) / (2 + 0 + 1) = 0.33
        d(jack, jim)  = (1 + 1) / (1 + 1 + 1) = 0.67
        d(jim, mary)  = (1 + 2) / (1 + 1 + 2) = 0.75

86
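The worked example can be checked with a short Python sketch; the 0/1 vectors below encode the asymmetric attributes (Fever, Cough, Test-1 … Test-4) with Y/P as 1 and N as 0, exactly as in the table:

```python
def asymmetric_binary_d(i, j):
    """Asymmetric binary dissimilarity d = (r + s) / (q + r + s);
    negative matches (both values 0) are ignored."""
    q = sum(1 for a, b in zip(i, j) if a == 1 and b == 1)  # both 1
    r = sum(1 for a, b in zip(i, j) if a == 1 and b == 0)  # i only
    s = sum(1 for a, b in zip(i, j) if a == 0 and b == 1)  # j only
    return (r + s) / (q + r + s)

jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]

print(round(asymmetric_binary_d(jack, mary), 2))  # 0.33
print(round(asymmetric_binary_d(jack, jim), 2))   # 0.67
print(round(asymmetric_binary_d(jim, mary), 2))   # 0.75
```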
Standardizing Numeric Data

 Z-score:

        z = (x − μ) / σ

 X: raw score to be standardized, μ: mean of the population, σ: standard deviation
 the distance between the raw score and the population mean in units of the standard deviation
 negative when the raw score is below the mean, “+” when above
 An alternative way: calculate the mean absolute deviation

        s_f = (1/n) (|x1f − mf| + |x2f − mf| + ... + |xnf − mf|)

where

        mf = (1/n) (x1f + x2f + ... + xnf)

 standardized measure (z-score):

        z_if = (x_if − mf) / s_f

 Using mean absolute deviation is more robust than using standard deviation

87
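The alternative standardization can be sketched as follows (a minimal illustration; the sample values are made up):

```python
def standardize(values):
    """Standardize one numeric attribute using the mean absolute
    deviation s_f in place of the standard deviation:
    z_if = (x_if - m_f) / s_f."""
    n = len(values)
    m = sum(values) / n                      # mean m_f
    s = sum(abs(x - m) for x in values) / n  # mean absolute deviation s_f
    return [(x - m) / s for x in values]

z = standardize([1.0, 2.0, 3.0, 4.0])
# m = 2.5, s = (1.5 + 0.5 + 0.5 + 1.5) / 4 = 1.0
# z = [-1.5, -0.5, 0.5, 1.5]
```

Because the mean absolute deviation divides by |x − m| rather than (x − m)², a single outlier inflates s_f less than it inflates σ, which is why this variant is called more robust.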
Example:
Data Matrix and Dissimilarity Matrix

Data Matrix
point   attribute1   attribute2
x1          1            2
x2          3            5
x3          2            0
x4          4            5

Dissimilarity Matrix (with Euclidean Distance)
        x1     x2     x3     x4
x1      0
x2     3.61    0
x3     2.24   5.1     0
x4     4.24    1     5.39    0

88
Distance on Numeric Data: Minkowski Distance
 Minkowski distance: A popular distance measure

        d(i, j) = (|xi1 − xj1|^h + |xi2 − xj2|^h + ... + |xip − xjp|^h)^(1/h)

where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-dimensional data objects, and h is the order (the distance so defined is also called the L-h norm)
 Properties

d(i, j) > 0 if i ≠ j, and d(i, i) = 0 (Positive definiteness)

d(i, j) = d(j, i) (Symmetry)

d(i, j) ≤ d(i, k) + d(k, j) (Triangle Inequality)
 A distance that satisfies these properties is a metric
89
 h = 1: Manhattan (city block, L1 norm) distance

E.g., the Hamming distance: the number of bits that are different between two binary vectors

        d(i, j) = |xi1 − xj1| + |xi2 − xj2| + ... + |xip − xjp|

 h = 2: (L2 norm) Euclidean distance

        d(i, j) = sqrt(|xi1 − xj1|² + |xi2 − xj2|² + ... + |xip − xjp|²)

 h → ∞: “supremum” (Lmax norm, L∞ norm) distance

This is the maximum difference between any component (attribute) of the vectors

        d(i, j) = max over f of |xif − xjf|

90
Example: Minkowski Distance
Dissimilarity Matrices

point   attribute 1   attribute 2
x1          1             2
x2          3             5
x3          2             0
x4          4             5

Manhattan (L1)
L1      x1     x2     x3     x4
x1      0
x2      5      0
x3      3      6      0
x4      6      1      7      0

Euclidean (L2)
L2      x1     x2     x3     x4
x1      0
x2     3.61    0
x3     2.24   5.1     0
x4     4.24    1     5.39    0

Supremum (L∞)
L∞      x1     x2     x3     x4
x1      0
x2      3      0
x3      2      5      0
x4      3      1      5      0

91
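All three matrices can be reproduced with one small Minkowski helper, using h = 1, h = 2, and math.inf for the supremum case (a sketch; the points are those from the example):

```python
import math

def minkowski(i, j, h):
    """Minkowski (L-h norm) distance between two numeric vectors.
    h = 1 gives Manhattan, h = 2 Euclidean, h = math.inf supremum."""
    diffs = [abs(a - b) for a, b in zip(i, j)]
    if math.isinf(h):
        return max(diffs)                     # limit h -> infinity
    return sum(d ** h for d in diffs) ** (1 / h)

x1, x2 = (1, 2), (3, 5)
print(minkowski(x1, x2, 1))            # 5.0  (Manhattan)
print(round(minkowski(x1, x2, 2), 2))  # 3.61 (Euclidean)
print(minkowski(x1, x2, math.inf))     # 3    (supremum)
```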
Ordinal Variables

 An ordinal variable can be discrete or continuous
 Order is important, e.g., rank
 Can be treated like interval-scaled
 replace x_if by its rank r_if ∈ {1, …, M_f}
 map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by

        z_if = (r_if − 1) / (M_f − 1)

 compute the dissimilarity using methods for interval-scaled variables

92
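The rank-then-normalize step can be sketched as follows (the grade scale is a hypothetical example with M = 3 ordered states):

```python
def normalize_ordinal(values, order):
    """Map ordinal values onto [0, 1] via z = (r - 1) / (M - 1),
    where r is the rank of the value and M the number of states."""
    rank = {state: r for r, state in enumerate(order, start=1)}
    M = len(order)
    return [(rank[v] - 1) / (M - 1) for v in values]

# Ranks: fair -> 1, good -> 2, excellent -> 3
z = normalize_ordinal(["fair", "good", "excellent"],
                      order=["fair", "good", "excellent"])
# z = [0.0, 0.5, 1.0]; these can now be compared with Euclidean distance
```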
Attributes of Mixed Type
 A database may contain all attribute types
 Nominal, symmetric binary, asymmetric binary, numeric, ordinal
 One may use a weighted formula to combine their effects

        d(i, j) = Σ_{f=1..p} δ_ij^(f) d_ij^(f)  /  Σ_{f=1..p} δ_ij^(f)

 f is binary or nominal:
d_ij^(f) = 0 if x_if = x_jf, or d_ij^(f) = 1 otherwise
 f is numeric: use the normalized distance
 f is ordinal

Compute ranks r_if and z_if = (r_if − 1) / (M_f − 1)

Treat z_if as interval-scaled
93
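A simplified sketch of the weighted combination, handling only nominal and numeric attributes (an ordinal attribute would first be rank-normalized as above and then treated as numeric); the two objects and their ranges are made-up examples, and every indicator δ is taken to be 1, i.e., no missing values:

```python
def mixed_dissimilarity(i, j, types, value_range):
    """d(i, j) = sum(delta_f * d_f) / sum(delta_f).
    types[f] is 'nominal' or 'numeric'; value_range[f] is the
    max - min of numeric attribute f, used to normalize it."""
    num, den = 0.0, 0.0
    for f, (a, b) in enumerate(zip(i, j)):
        delta = 1  # assumption: no missing values, every attribute counts
        if types[f] == "nominal":
            d_f = 0.0 if a == b else 1.0
        else:  # numeric: normalized absolute difference
            d_f = abs(a - b) / value_range[f]
        num += delta * d_f
        den += delta
    return num / den

# One nominal attribute (differs -> 1.0) and one numeric (10/40 = 0.25)
d = mixed_dissimilarity(("red", 10.0), ("blue", 20.0),
                        types=["nominal", "numeric"],
                        value_range=[None, 40.0])
# d = (1.0 + 0.25) / 2 = 0.625
```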
Cosine Similarity
 A document can be represented by thousands of attributes, each recording the frequency of a particular word (such as a keyword) or phrase in the document.
 Other vector objects: gene features in micro-arrays, …
 Applications: information retrieval, biologic taxonomy, gene feature mapping, ...
 Cosine measure: If d1 and d2 are two vectors (e.g., term-frequency vectors), then

        cos(d1, d2) = (d1 · d2) / (||d1|| ||d2||),

where · indicates the vector dot product and ||d|| is the length of vector d
94
Example: Cosine Similarity
 cos(d1, d2) = (d1 · d2) / (||d1|| ||d2||),
where · indicates the vector dot product and ||d|| is the length of vector d

 Ex: Find the similarity between documents 1 and 2.

d1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)
d2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)

d1 · d2 = 5*3 + 0*0 + 3*2 + 0*0 + 2*1 + 0*1 + 0*0 + 2*1 + 0*0 + 0*1 = 25
||d1|| = (5*5 + 0*0 + 3*3 + 0*0 + 2*2 + 0*0 + 0*0 + 2*2 + 0*0 + 0*0)^0.5 = (42)^0.5 = 6.481
||d2|| = (3*3 + 0*0 + 2*2 + 0*0 + 1*1 + 1*1 + 0*0 + 1*1 + 0*0 + 1*1)^0.5 = (17)^0.5 = 4.12
cos(d1, d2) = 25 / (6.481 * 4.12) = 0.94
95
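The document example can be verified with a few lines of Python:

```python
import math

def cosine_similarity(a, b):
    """cos(a, b) = (a . b) / (||a|| * ||b||) for term-frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

d1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)
d2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)
print(round(cosine_similarity(d1, d2), 2))  # 0.94
```

Note that cosine similarity depends only on the angle between the vectors, not their lengths, which is why it suits documents of very different sizes.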