Chapter 1
The rule indicates that of the AllElectronics customers under study, 2% are 20 to
29 years old with an income of $40,000 to $49,000 and have purchased a laptop
(computer) at AllElectronics. There is a 60% probability that a customer in this age
and income group will purchase a laptop.
This is an association involving more than one attribute or predicate (i.e., age,
income, and buys). Such a rule is referred to as a multidimensional association rule.
Correlation Mining
Additional analysis can be performed to uncover interesting statistical
correlations between associated attribute–value pairs. Correlation analysis examines
the relationship between two items. A strong association rule is not necessarily
interesting to the user, so correlation analysis is used as a complementary technique.
Example: Out of 10,000 transactions in total, 6,000 include computer games and
7,500 include videos. A strong association rule between the two items can still be
misleading: the overall probability of purchasing videos is already 75%, which the
rule's confidence may not exceed. In such cases we choose correlation analysis.
Measures used in correlation analysis (a small sketch of the lift computation
follows this list):
Lift
χ2 (chi-square)
Other measures:
All-confidence
Cosine
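To make the lift measure concrete, here is a minimal sketch using the transaction counts from the example above. The count of transactions containing both items (4,000) is an assumed value for illustration; the text gives only the individual counts.

```python
# Lift of buys(games) -> buys(videos):
# lift(A, B) = P(A and B) / (P(A) * P(B))
n_total = 10_000   # total transactions
n_games = 6_000    # transactions containing computer games
n_videos = 7_500   # transactions containing videos
n_both = 4_000     # ASSUMED for illustration: transactions containing both items

p_games = n_games / n_total
p_videos = n_videos / n_total
p_both = n_both / n_total

# lift > 1: positively correlated; lift = 1: independent; lift < 1: negatively correlated
lift = p_both / (p_games * p_videos)
print(f"lift = {lift:.2f}")  # 0.89 < 1, so the two items are negatively correlated
```

Under these assumed counts, the lift below 1 shows why a rule that looks strong by support and confidence alone can still be uninteresting.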
Classification and Regression for Predictive Analysis
Classification is the process of finding a model (or function) that describes
and distinguishes data classes or concepts. The model is derived based on the
analysis of a set of training data (i.e., data objects for which the class labels are
known). The model is then used to predict the class label of objects for which the
class label is unknown.
What is classification?
Examples of cases where the data analysis task is Classification −
A bank loan officer wants to analyze the data in order to know which
customers (loan applicants) are risky and which are safe.
What is prediction?
Prediction is used to predict missing or unavailable numerical data values
rather than class labels.
Note − Regression analysis is a statistical methodology that is most often used for
numeric prediction.
“How is the derived model presented?” The derived model may be represented in
various forms, such as classification rules (i.e., IF-THEN rules), decision trees,
mathematical formulae, or neural networks.
A decision tree is a flowchart-like tree structure, where each internal node denotes a
test on an attribute value, each branch represents an outcome of the test, and tree leaves
represent classes or class distributions. Decision trees can easily be converted to
classification rules, as the sketch below illustrates.
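The following is a minimal sketch (not the chapter's own example) that fits a decision tree on a tiny, invented loan data set and prints it in IF-THEN style; the feature values and labels are made up for illustration.

```python
# Fit a small decision tree on invented loan-applicant data, then print the
# tree as nested IF-THEN tests on attribute values.
from sklearn.tree import DecisionTreeClassifier, export_text

# columns: [income (in $1000s), years_employed] -- invented values
X = [[25, 1], [40, 5], [60, 10], [30, 2], [80, 15], [20, 0]]
y = ["risky", "safe", "safe", "risky", "safe", "risky"]  # class labels

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Each root-to-leaf path reads as one classification rule.
print(export_text(tree, feature_names=["income", "years_employed"]))
```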
Regression analysis is a statistical methodology that is most often used for numeric
prediction, although other methods exist as well. Regression also encompasses the
identification of distribution trends based on the available data.
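As a small numeric-prediction sketch, the code below fits a straight line by least squares; the salary figures are invented for illustration.

```python
# Linear regression for numeric prediction: fit salary = slope*years + intercept.
import numpy as np

years = np.array([1, 3, 5, 7, 9])         # years of experience (invented)
salary = np.array([30, 42, 55, 68, 80])   # salary in $1000s (invented)

slope, intercept = np.polyfit(years, salary, deg=1)  # least-squares line
print(f"predicted salary at 6 years: {slope * 6 + intercept:.1f}K")
```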
Cluster Analysis
Unlike classification and regression, which analyze class-labeled data sets,
clustering analyzes data objects without consulting class labels. The objects are
clustered based on the principle of maximizing intraclass similarity and minimizing
interclass similarity.
Outlier Analysis
A data set may contain objects that do not comply with the general behavior or
model of the data. These data objects are outliers. Many data mining methods discard
outliers as noise or exceptions. However, in some applications (e.g., fraud detection)
the rare events can be more interesting than the more regularly occurring ones. The
analysis of outlier data is referred to as outlier analysis or anomaly mining.
Outliers may be detected using statistical tests that assume a distribution or
probability model for the data, or using distance measures where objects that are
remote from any of the clusters are considered outliers. Rather than using statistical or
distance measures, density-based methods may identify outliers that are anomalous in a
local region even though they look normal from a global statistical distribution view.
Example: Outlier analysis may uncover fraudulent usage of credit cards by
detecting purchases of unusually large amounts for a given account number in
comparison to regular charges incurred by the same account. Outlier values may also
be detected with respect to the locations and types of purchase, or the purchase
frequency.
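The following is a minimal statistical outlier-detection sketch in the spirit of this example: charges far from an account's typical amount are flagged with a z-score test. The amounts and the 2-sigma threshold are illustrative assumptions.

```python
# Flag purchases whose amount lies far from the account's mean charge.
import numpy as np

charges = np.array([35, 42, 28, 50, 31, 45, 38, 2500])  # one unusually large purchase

mean, std = charges.mean(), charges.std()
z_scores = (charges - mean) / std

outliers = charges[np.abs(z_scores) > 2]  # more than 2 standard deviations away
print(outliers)  # [2500]
```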
Technologies Used in Data Mining
Data mining adopts techniques from many domains, including:
Statistics
Machine learning:
Supervised learning
Unsupervised learning
Semi-supervised learning
Active learning
Database and data warehouse systems
Information retrieval
Visualization
Statistics
Statistics studies the collection, analysis, interpretation or explanation,
and presentation of data. Data mining has an inherent connection with
statistics.
A statistical model is a set of mathematical functions that describe the
behavior of the objects in a target class in terms of random variables and
their associated probability distributions. Statistical models are widely used
to model data and data classes.
For example, in data mining tasks like data characterization and
classification, statistical models of target classes can be built. In other words,
such statistical models can be the outcome of a data mining task.
Statistics research develops tools for prediction and forecasting using
data and statistical models. Statistical methods can be used to summarize or
describe a collection of data.
Statistical methods can also be used to verify data mining results. For
example, after a classification or prediction model is mined, the model
should be verified by statistical hypothesis testing.
A statistical hypothesis test (sometimes called confirmatory data
analysis) makes statistical decisions using experimental data.
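As an illustration of verifying a mined model with hypothesis testing, here is a minimal sketch using SciPy's paired t-test to compare two classifiers' cross-validation accuracies; the accuracy values are invented for illustration.

```python
# Paired t-test: is the accuracy difference between two models statistically
# significant, or plausibly due to chance?
from scipy import stats

acc_model_a = [0.91, 0.89, 0.93, 0.90, 0.92]  # per-fold accuracies, model A (invented)
acc_model_b = [0.85, 0.86, 0.88, 0.84, 0.87]  # per-fold accuracies, model B (invented)

t_stat, p_value = stats.ttest_rel(acc_model_a, acc_model_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g., < 0.05) suggests the difference is unlikely to be chance alone.
```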
Machine Learning
Machine learning investigates how computers can learn (or improve
their performance) based on data.
For example, a typical machine learning problem is to program a
computer so that it can automatically recognize handwritten postal codes on
mail after learning from a set of examples.
Supervised learning
Supervised learning is basically a synonym for classification. The
supervision in the learning comes from the labeled examples in the training
data set. For example, in the postal code recognition problem, a set of
handwritten postal code images and their corresponding machine-readable
translations are used as the training examples, which supervise the learning
of the classification model.
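Below is a minimal supervised-learning sketch using scikit-learn's built-in digits data set as a small stand-in for the postal code example; the choice of logistic regression is an assumption for illustration.

```python
# Labeled digit images supervise the learning of a classifier.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)  # 8x8 digit images with known labels 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```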
Unsupervised learning
Unsupervised learning is essentially a synonym for clustering. The
learning process is unsupervised since the input examples are not class
labeled. Typically, we may use clustering to discover classes within the data.
For example, an unsupervised learning method can take, as input, a set
of images of handwritten digits. Suppose that it finds 10 clusters of data.
These clusters may correspond to the 10 distinct digits of 0 to 9, respectively.
However, since the training data are not labeled, the learned model cannot
tell us the semantic meaning of the clusters found.
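As a matching unsupervised sketch, the code below clusters the same digit images without their labels; k-means is an assumed choice of clustering method for illustration.

```python
# Cluster digit images WITHOUT labels. KMeans finds 10 clusters, but it cannot
# tell us which digit each cluster represents, since no labels were used.
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans

X, _ = load_digits(return_X_y=True)  # labels are deliberately ignored

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:20])  # cluster ids (0-9), not digit meanings
```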
Semi-supervised learning
Semi-supervised learning is a class of machine learning techniques that
make use of both labeled and unlabeled examples when learning a model.
For a two-class problem, we can think of the set of examples belonging to
one class as the positive examples and those belonging to the other class as
the negative examples. The unlabeled examples can then be used to help refine
the boundary between the two classes.
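As one possible illustration (LabelPropagation is an assumed choice, not necessarily the technique meant here), the sketch below learns from a mostly unlabeled version of the digits data.

```python
# Semi-supervised learning: only the first 50 examples keep their labels;
# the rest are marked -1 (unlabeled) but still inform the model.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelPropagation

X, y = load_digits(return_X_y=True)
y_partial = np.copy(y)
y_partial[50:] = -1  # hide all but the first 50 labels

model = LabelPropagation().fit(X, y_partial)  # uses labeled + unlabeled examples
print(f"accuracy on the unlabeled portion: {model.score(X[50:], y[50:]):.2f}")
```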
Active learning
Active learning is a machine learning approach that lets users play an
active role in the learning process. An active learning approach can ask a
user (e.g., a domain expert) to label an example, which may be from a set of
unlabeled examples or synthesized by the learning program.
The goal is to optimize the model quality by actively acquiring
knowledge from human users, given a constraint on how many examples
they can be asked to label.
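Here is a minimal pool-based sketch, assuming uncertainty sampling as the query strategy and simulating the human labeler with held-back true labels.

```python
# Active learning: in each round, query the label of the pool example the
# model is least certain about, within a fixed label budget.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
labeled = list(range(20))           # start with 20 labeled examples
pool = list(range(20, len(X)))      # the rest form the unlabeled pool

model = LogisticRegression(max_iter=5000)
for _ in range(30):                 # label budget: 30 queries
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    most_uncertain = pool[int(np.argmin(probs.max(axis=1)))]  # lowest top-class prob
    pool.remove(most_uncertain)
    labeled.append(most_uncertain)  # the "oracle" supplies the true label

model.fit(X[labeled], y[labeled])
print(f"accuracy after 30 queries: {model.score(X[pool], y[pool]):.2f}")
```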
You can see there are many similarities between data mining and machine
learning. For classification and clustering tasks, machine learning research
often focuses on the accuracy of the model. In addition to accuracy, data
mining research places strong emphasis on the efficiency and scalability of
mining methods on large data sets, as well as on ways to handle complex
types of data and explore new, alternative methods.
Database Systems and Data Warehouses
Database systems research focuses on the creation, maintenance, and use of
databases for organizations and end-users. Particularly, database systems
researchers have established highly recognized principles in data models, query
languages, query processing and optimization methods, data storage, and
indexing and accessing methods. Database systems are often well known for
their high scalability in processing very large, relatively structured data sets.
Many data mining tasks need to handle large data sets or even real-time, fast
streaming data. Therefore, data mining can make good use of scalable database
technologies to achieve high efficiency and scalability on large data sets.
Moreover, data mining tasks can be used to extend the capability of existing
database systems to satisfy advanced users’ sophisticated data analysis
requirements.
A data warehouse integrates data originating from multiple sources
and various timeframes. It consolidates data in multidimensional space to form
partially materialized data cubes. The data cube model not only facilitates OLAP
in multidimensional databases but also promotes multidimensional data mining, as
the sketch below illustrates.
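As a small illustration of multidimensional consolidation, the sketch below aggregates invented sales records along two dimensions with pandas; a data cube generalizes this to many dimensions with partial materialization.

```python
# Consolidate sales by (region, quarter); margins give the roll-up totals,
# as in an OLAP cuboid.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["north", "north", "south", "south", "north", "south"],
    "quarter": ["Q1", "Q2", "Q1", "Q2", "Q1", "Q2"],
    "amount":  [100, 150, 80, 120, 90, 110],   # invented values
})

cube = sales.pivot_table(values="amount", index="region", columns="quarter",
                         aggfunc="sum", margins=True)
print(cube)
```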
Information Retrieval
Information retrieval (IR) is the science of searching for documents or
information in documents. Documents can be text or multimedia, and may reside
on the Web. The differences between traditional information retrieval and
database systems are twofold: Information retrieval assumes that (1) the data
under search are unstructured; and (2) the queries are formed mainly by
keywords, which do not have complex structures.
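To make keyword-based retrieval concrete, here is a minimal sketch that ranks a few invented documents against a keyword query using TF-IDF vectors and cosine similarity.

```python
# Rank unstructured documents by similarity to a keyword query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "data mining discovers patterns in large data sets",
    "database systems store and index structured data",
    "information retrieval searches documents by keywords",
]
query = ["mining patterns in data"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)
query_vector = vectorizer.transform(query)

scores = cosine_similarity(query_vector, doc_vectors)[0]
for doc, score in sorted(zip(docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```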
Major Issues in Data Mining
i. Mining methodology
ii. User interaction
iii. Efficiency and Scalability
iv. Diversity of data types
v. Data mining and society
Many of these issues have been addressed in recent data mining research and
development to a certain extent and are now considered data mining requirements; others
are still at the research stage. The issues continue to stimulate further investigation and
improvement in data mining.
i. Mining Methodology
This involves the investigation of new kinds of knowledge, mining in
multidimensional space, integrating methods from other disciplines, and the consideration
of semantic ties among data objects.
Mining methodologies should consider issues such as data uncertainty, noise,
and incompleteness. Some mining methods explore how user specified measures can be
used to assess the interestingness of discovered patterns as well as guide the discovery
process.
The various aspects of mining methodology are described below:
Mining various and new kinds of knowledge
Mining knowledge in multidimensional space
Data mining—an interdisciplinary effort
Boosting the power of discovery in a networked environment
Handling uncertainty, noise, or incompleteness of data
Pattern evaluation and pattern- or constraint-guided mining
Mining various and new kinds of knowledge:
Data mining covers a wide spectrum of data analysis and knowledge discovery
tasks. These tasks may use the same database in different ways and require the
development of numerous data mining techniques. Due to the diversity of applications,
new mining tasks continue to emerge, making data mining a dynamic and fast-growing
field.
For example, for effective knowledge discovery in information networks,
integrated clustering and ranking may lead to the discovery of high-quality clusters and
object ranks in large networks.
Mining knowledge in multidimensional space:
When searching for knowledge in large data sets, we can explore the data in
multidimensional space. That is, we can search for interesting patterns among
combinations of dimensions (attributes) at varying levels of abstraction. Such mining is
known as (exploratory) multidimensional data mining. In many cases, data can be
aggregated or viewed as a multidimensional data cube. Mining knowledge in cube space
can substantially enhance the power and flexibility of data mining.
Data mining—an interdisciplinary effort:
The power of data mining can be substantially enhanced by integrating new
methods from multiple disciplines. For example, to mine data with natural language text,
it makes sense to fuse data mining methods with methods of information retrieval and
natural language processing.
Boosting the power of discovery in a networked environment:
Most data objects reside in a linked or interconnected environment, whether it
be the Web, database relations, files, or documents. Knowledge derived in one set of
objects can be used to boost the discovery of knowledge in a “related” or semantically
linked set of objects.
Handling uncertainty, noise, or incompleteness of data:
Data often contain noise, errors, exceptions, or uncertainty, or are incomplete.
Errors and noise may confuse the data mining process, leading to the derivation of
erroneous patterns. Data cleaning, data preprocessing, outlier detection and removal, and
uncertainty reasoning are examples of techniques that need to be integrated with the data
mining process.
Pattern evaluation and pattern- or constraint-guided mining:
Not all the patterns generated by data mining processes are interesting. What makes
a pattern interesting may vary from user to user. Therefore, techniques are needed to
assess the interestingness of discovered patterns based on subjective measures. These
estimate the value of patterns with respect to a given user class, based on user beliefs or
expectations.
ii. User Interaction
User interaction concerns how to interact with a data mining system, how to
incorporate a user’s background knowledge in mining, and how to visualize and
comprehend data mining results. We introduce each of these here:
Interactive mining
Incorporation of background knowledge
Ad hoc data mining and data mining query languages
Presentation and visualization of data mining results
Interactive mining:
The data mining process should be highly interactive. It is important to build
flexible user interfaces and an exploratory mining environment, facilitating the user’s
interaction with the system.
A user may like to first sample a set of data, explore general characteristics of
the data, and estimate potential mining results.
Interactive mining should allow users to dynamically change the focus of a
search, to refine mining requests based on returned results, and to drill, dice, and pivot
through the data and knowledge space interactively, dynamically exploring “cube space”
while mining.
Incorporation of background knowledge:
Background knowledge, constraints, rules, and other information regarding the
domain under study should be incorporated into the knowledge discovery process. Such
knowledge can be used for pattern evaluation as well as to guide the search toward
interesting patterns.
Ad hoc data mining and data mining query languages:
Query languages (e.g., SQL) have played an important role in flexible
searching because they allow users to pose ad hoc queries.
Similarly, high-level data mining query languages or other high-level flexible
user interfaces will give users the freedom to define ad hoc data mining tasks.
This should facilitate specification of the relevant sets of data for analysis, the
domain knowledge, the kinds of knowledge to be mined, and the conditions and
constraints to be enforced on the discovered patterns.
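As a small illustration of ad hoc querying, the sketch below poses an unplanned SQL question against an invented table using Python's built-in sqlite3 module; the schema and values are assumptions for illustration.

```python
# An ad hoc SQL query: customers aged 20-29 with income $40,000-$49,000,
# echoing the association rule example earlier in the chapter.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, age INTEGER, income INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [("Ann", 25, 45000), ("Bob", 52, 80000), ("Eve", 28, 41000)])

for row in conn.execute("SELECT name FROM customers "
                        "WHERE age BETWEEN 20 AND 29 "
                        "AND income BETWEEN 40000 AND 49000"):
    print(row[0])
```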
Presentation and visualization of data mining results:
How can a data mining system present data mining results, vividly and flexibly,
so that the discovered knowledge can be easily understood and directly usable by humans?
This is especially crucial if the data mining process is interactive. It requires the system to
adopt expressive knowledge representations, user-friendly interfaces, and visualization
techniques.
iii. Efficiency and Scalability
Efficiency and scalability are always considered when comparing data mining
algorithms. As data amounts continue to multiply, these two factors are especially
critical.
Efficiency and scalability of data mining algorithms
Parallel, distributed, and incremental mining algorithms
Efficiency and scalability of data mining algorithms:
Data mining algorithms must be efficient and scalable in order to effectively
extract information from huge amounts of data in many data repositories or in dynamic
data streams. In other words, the running time of a data mining algorithm must be
predictable, short, and acceptable by applications. Efficiency, scalability, performance,
optimization, and the ability to execute in real time are key criteria that drive the
development of many new data mining algorithms.
Parallel, distributed, and incremental mining algorithms:
The humongous size of many data sets, the wide distribution of data, and the
computational complexity of some data mining methods are factors that motivate the
development of parallel and distributed data-intensive mining algorithms.
The high cost of some data mining processes and the incremental nature of
input promote incremental data mining, which incorporates new data updates without
having to mine the entire data “from scratch.”
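A minimal sketch of the incremental idea, assuming a toy market-basket setting: item support counts are folded in batch by batch, so new data never forces a full rescan.

```python
# Incremental mining: update item support counts with each new batch of
# transactions instead of re-mining the entire database "from scratch".
from collections import Counter

support = Counter()

def add_transactions(batch):
    """Fold a new batch of transactions into the running support counts."""
    for transaction in batch:
        support.update(transaction)

add_transactions([{"milk", "bread"}, {"milk", "eggs"}])   # initial data
add_transactions([{"bread", "eggs"}, {"milk", "bread"}])  # later update: no rescan
print(support.most_common(3))
```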
iv. Diversity of Database Types
The wide diversity of database types brings about challenges to data mining.
These include:
Handling complex types of data
Mining dynamic, networked, and global data repositories
Handling complex types of data
Diverse applications generate a wide spectrum of new data types, from
structured data such as relational and data warehouse data to semi-structured and
unstructured data; from stable data repositories to dynamic data streams; from simple
data objects to temporal data, biological sequences, sensor data, spatial data,
hypertext data, multimedia data, software program code, Web data, and social
network data.
It is unrealistic to expect one data mining system to mine all kinds of data, given
the diversity of data types and the different goals of data mining. Domain- or
application-dedicated data mining systems are being constructed for in-depth mining
of specific kinds of data.
Mining dynamic, networked, and global data repositories
Multiple sources of data are connected by the Internet and various kinds of
networks, forming gigantic, distributed, and heterogeneous global information
systems and networks.
The discovery of knowledge from different sources of structured,
semi-structured, or unstructured yet interconnected data with diverse data semantics
poses great challenges to data mining.
v. Data Mining and Society
How does data mining impact society? What steps can data mining take to
preserve the privacy of individuals? Do we use data mining in our daily lives without
even knowing that we do? These questions raise the following issues: