The document outlines the course structure for 'Data Mining and Data Warehousing' at Sathyabama Institute of Science and Technology, detailing course objectives, units of study, and expected outcomes. It covers key topics such as data mining processes, data warehousing components, and various data mining techniques including classification, clustering, and association rule mining. The course aims to equip students with the skills to analyze data and apply appropriate data mining algorithms for real-world applications.


SATHYABAMA INSTITUTE OF SCIENCE AND TECHNOLOGY SCHOOL OF COMPUTING

SCSA3001 DATA MINING AND DATA WAREHOUSING
L: 3  T: 0  P: 0  Credits: 3  Total Marks: 100

COURSE OBJECTIVES
➢ To identify the scope and necessity of data mining and warehousing for society.
➢ To describe the various data models and design methodologies of data warehousing used to solve root problems.
➢ To understand various data mining tools and techniques for solving real-time problems.
➢ To learn how to analyze data, identify problems, and choose the relevant algorithms to apply.
➢ To assess the pros and cons of various algorithms and analyze their behaviour on real datasets.

UNIT 1 DATA MINING 9 Hrs.


Introduction - Steps in KDD - System Architecture – Types of data -Data mining functionalities - Classification of data mining
systems - Integration of a data mining system with a data warehouse - Issues - Data Preprocessing - Data Mining
Application

UNIT 2 DATA WAREHOUSING 9 Hrs.


Data warehousing components - Building a data warehouse - Multi-Dimensional Data Model - OLAP Operations in the Multi-Dimensional Model - Three-Tier Data Warehouse Architecture - Schemas for the Multi-Dimensional Data Model - Online Analytical Processing (OLAP) - OLAP vs. OLTP - Integrated OLAM and OLAP Architecture

UNIT 3 ASSOCIATION RULE MINING 9 Hrs.


Mining frequent patterns - Associations and correlations - Mining methods - Finding Frequent Itemsets using Candidate Generation - Generating Association Rules from Frequent Itemsets - Mining Frequent Itemsets without Candidate Generation - Mining various kinds of association rules - Mining Multi-Level Association Rules - Mining Multi-Dimensional Association Rules - Correlation Analysis - Constraint-based Association Mining.

UNIT 4 CLASSIFICATION AND PREDICTION 9 Hrs.


Classification and prediction - Issues Regarding Classification and Prediction - Classification by Decision Tree Induction - Bayesian Classification - Bayes' Theorem - Naïve Bayesian Classification - Bayesian Belief Networks - Rule-based Classification - Classification by Backpropagation - Support Vector Machines - Prediction - Linear Regression

UNIT 5 CLUSTERING, APPLICATIONS AND TRENDS IN DATA MINING 9 Hrs.


Cluster analysis - Types of data in Cluster Analysis - Categorization of major clustering methods -Partitioning methods -
Hierarchical methods - Density-based methods - Grid-based methods - Model based clustering methods -Constraint Based
cluster analysis - Outlier analysis - Social Impacts of Data Mining- Case Studies: Mining WWW- Mining Text Database-
Mining Spatial Databases
Max.45 Hrs.
COURSE OUTCOMES
On completion of the course the student will be able to
CO1: Assess raw input data and process it to provide suitable input for a range of data mining algorithms.
CO2: Design and model a data warehouse.
CO3: Discover interesting patterns from large amounts of data.
CO4: Design and deploy appropriate classification techniques.
CO5: Cluster high-dimensional data.
CO6: Apply suitable data mining techniques to various real-time applications.

B.E /B.TECH REGULAR REGULATION 2019
SCSA3001 Data Mining And Data Warehousing

SCHOOL OF COMPUTING

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

UNIT – I - DATA MINING - SCSA3001


DATA MINING
Introduction - Steps in KDD - System Architecture - Types of data - Data mining functionalities - Classification of data mining systems - Integration of a data mining system with a data warehouse - Issues - Data Preprocessing - Data Mining Applications.
INTRODUCTION

What is Data?
• A collection of data objects and their attributes.
• An attribute is a property or characteristic of an object.
  – Examples: eye color of a person, temperature, etc.
  – An attribute is also known as a variable, field, characteristic, or feature.
• A collection of attributes describes an object.
  – An object is also known as a record, point, case, sample, entity, or instance.
Data sets are made up of data objects. A data object represents an entity—in a sales database, the
objects may be customers, store items, and sales; in a medical database, the objects may be
patients; in a university database, the objects may be students, professors, and courses. Data
objects are typically described by attributes. Data objects can also be referred to as samples,
examples, instances, data points, or objects. If the data objects are stored in a database, they
are data tuples. That is, the rows of a database correspond to the data objects, and the columns
correspond to the attributes.
Attribute:
An attribute is a data field that represents a characteristic or feature of a data object. For a customer object, attributes can be customer ID, address, etc.
A set of attributes used to describe a given object is known as an attribute vector (or feature vector).
Type of attributes:
Identifying attribute types is the first step of data preprocessing: we distinguish between the different types of attributes and then preprocess the data accordingly. Attribute types fall into two groups:
1. Qualitative (Nominal (N), Ordinal (O), Binary (B))
2. Quantitative (Discrete, Continuous)


Figure 1.1 Type of attributes


Qualitative Attributes
1. Nominal Attributes - related to names:
The values of a nominal attribute are names of things or other symbols. The values represent some category or state, which is why nominal attributes are also referred to as categorical attributes; there is no order among the values of a nominal attribute.
Example

Table 1.1 Nominal Attributes


2. Binary Attributes: Binary data has only two values/states, for example yes or no, affected or unaffected, true or false.
i) Symmetric: both values are equally important (e.g., gender).
ii) Asymmetric: both values are not equally important (e.g., a test result, where the positive outcome carries more weight).

Table 1.2 binary Attributes


Ordinal Attributes: The values of an ordinal attribute have a meaningful sequence or ranking (order) between them, but the magnitude between values is not known. The order shows what is important, but does not indicate how important it is.

Table 1.3 Ordinal Attributes


Quantitative Attributes
1. Numeric: A numeric attribute is quantitative: it is a measurable quantity, represented in integer or real values. Numeric attributes are of two types, interval and ratio.
i) An interval-scaled attribute has values whose differences are interpretable, but the attribute has no true reference point (zero point). Interval-scaled data can be added and subtracted, but cannot be meaningfully multiplied or divided. Consider temperature in degrees Centigrade: if one day's temperature is twice that of another in °C, we cannot say that one day is twice as hot as the other.
ii) A ratio-scaled attribute is a numeric attribute with a fixed zero point. If a measurement is ratio-scaled, we can speak of a value as being a multiple (or ratio) of another value. The values are ordered, we can compute the difference between values, and the mean, median, mode, quantile range, and five-number summary can all be given.
2. Discrete: A discrete attribute has a finite or countably infinite set of values, which may be numeric or categorical.
Example

Table 1.4 Discrete Attributes


3. Continuous: A continuous attribute has an infinite number of possible states and is typically represented as real (floating-point) values; there can be many values between 2 and 3.
Example:


Table 1.5 Continuous Attributes
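The attribute types above can be made concrete with a small, self-contained Python sketch. The customer record and all of its values are purely illustrative, not drawn from any real schema; the Kelvin conversion at the end demonstrates the interval-vs-ratio distinction from the temperature example.

```python
# Hypothetical attributes of one customer object, one per attribute type.
customer = {
    "customer_id": "C-101",   # nominal: a label; only equality is meaningful
    "owns_card": True,        # binary (symmetric): two equally important states
    "shirt_size": "medium",   # ordinal: ordered, but distances are unknown
    "temp_c": 20.0,           # numeric, interval-scaled: no true zero point
    "income": 45000,          # numeric, ratio-scaled: true zero, ratios valid
    "num_visits": 7,          # discrete: countable values
}

# An ordinal attribute needs an explicit ranking before it can be compared.
SIZE_RANK = {"small": 0, "medium": 1, "large": 2}
assert SIZE_RANK[customer["shirt_size"]] < SIZE_RANK["large"]

# Interval scale: a Celsius ratio is misleading (20 °C -> 40 °C is not
# "twice as hot"); converting to Kelvin (a ratio scale) shows why.
hot_c = 40.0
print(hot_c / customer["temp_c"])                        # 2.0 -- meaningless
print((hot_c + 273.15) / (customer["temp_c"] + 273.15))  # ~1.07
```

Ratios computed on the ratio-scaled `income` attribute, by contrast, are directly meaningful: an income of 90,000 really is twice 45,000.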


STEPS INVOLVED IN KDD PROCESS

Data mining, also known as Knowledge Discovery in Databases (KDD), refers to the nontrivial extraction of implicit, previously unknown, and potentially useful information from data stored in databases.

Figure 1.2 KDD Process


1. Data Cleaning: Data cleaning is defined as the removal of noisy and irrelevant data from the collection.
 Cleaning in case of Missing values.
 Cleaning noisy data, where noise is a random or variance error.
 Cleaning with Data discrepancy detection and Data transformation tools.
2. Data Integration: Data integration is defined as combining heterogeneous data from multiple sources into a common source (data warehouse).
 Data integration using Data Migration tools.
 Data integration using Data Synchronization tools.
 Data integration using the ETL (Extract, Transform, Load) process.


3. Data Selection: Data selection is defined as the process where data relevant to the analysis is
decided and retrieved from the data collection.
 Data selection using Neural network.
 Data selection using Decision Trees.
 Data selection using Naive bayes.
 Data selection using Clustering, Regression, etc.
4. Data Transformation: Data transformation is defined as the process of transforming data into the appropriate form required by the mining procedure. It is a two-step process:
 Data Mapping: assigning elements from the source base to the destination to capture transformations.
 Code Generation: creation of the actual transformation program.
5. Data Mining: Data mining is defined as the application of clever techniques to extract potentially useful patterns.
 Transforms task relevant data into patterns.
 Decides purpose of model using classification or characterization.
6. Pattern Evaluation: Pattern evaluation is defined as identifying interesting patterns representing knowledge, based on given interestingness measures.
 Find interestingness score of each pattern.
 Uses summarization and Visualization to make data understandable by user.
7. Knowledge representation: Knowledge representation is defined as technique which utilizes
visualization tools to represent data mining results.
 Generate reports.
 Generate tables.
 Generate discriminant rules, classification rules, characterization rules, etc.
Note:
 KDD is an iterative process where evaluation measures can be enhanced, mining can be
refined, new data can be integrated and transformed in order to get different and more
appropriate results.
 Preprocessing of databases consists of Data cleaning and Data Integration.
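The cleaning and transformation steps above can be sketched on a toy in-memory dataset. Everything here is illustrative: the noise thresholds, the mean imputation, and the min-max normalization are example choices, not part of any fixed KDD specification.

```python
# Toy "database" with the three classic defects: a missing value,
# a duplicate record, and a noisy (implausible) record.
raw = [
    {"age": 25, "income": 30000},
    {"age": None, "income": 60000},   # missing value
    {"age": 25, "income": 30000},     # duplicate
    {"age": 120, "income": -5},       # noisy record
]

# 1. Data cleaning: drop duplicates and noise, then impute missing ages.
seen, cleaned = set(), []
for row in raw:
    key = (row["age"], row["income"])
    if key in seen:
        continue                       # duplicate
    seen.add(key)
    if row["income"] < 0 or (row["age"] or 0) > 100:
        continue                       # noise, by our illustrative rules
    cleaned.append(dict(row))

ages = [r["age"] for r in cleaned if r["age"] is not None]
mean_age = sum(ages) / len(ages)
for r in cleaned:
    if r["age"] is None:
        r["age"] = mean_age            # fill missing value with the mean

# 4. Data transformation: min-max normalize income into [0, 1].
incomes = [r["income"] for r in cleaned]
lo, hi = min(incomes), max(incomes)
for r in cleaned:
    r["income_norm"] = (r["income"] - lo) / (hi - lo)

print(cleaned)   # two clean, imputed, normalized records
```

A real pipeline would of course pull from a database or warehouse and feed the result to a mining algorithm, but the shape of the steps is the same.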


SYSTEM ARCHITECTURE
Data mining is a very important process where potentially useful and previously unknown
information is extracted from large volumes of data. There are a number of components involved
in the data mining process. These components constitute the architecture of a data mining system.
Data Mining Architecture

The major components of any data mining system are data source, data warehouse server, data
mining engine, pattern evaluation module, graphical user interface and knowledge base.

Figure 1.3 System Architecture


a) Data Sources
Database, data warehouse, World Wide Web (WWW), text files and other documents are the
actual sources of data. You need large volumes of historical data for data mining to be successful.
Organizations usually store data in databases or data warehouses. Data warehouses may contain
one or more databases, text files, spreadsheets or other kinds of information repositories.
Sometimes, data may reside even in plain text files or spreadsheets. World Wide Web or the
Internet is another big source of data.
Different Processes
The data needs to be cleaned, integrated and selected before passing it to the database or data
warehouse server. As the data is from different sources and in different formats, it cannot be used
directly for the data mining process because the data might not be complete and reliable. So, first


data needs to be cleaned and integrated. Again, more data than required will be collected from
different data sources and only the data of interest needs to be selected and passed to the server.
These processes are not as simple as we think. A number of techniques may be performed on the
data as part of cleaning, integration and selection.
b) Database or Data Warehouse Server
The database or data warehouse server contains the actual data that is ready to be processed.
Hence, the server is responsible for retrieving the relevant data based on the data mining request
of the user.
c) Data Mining Engine
The data mining engine is the core component of any data mining system. It consists of a number
of modules for performing data mining tasks including association, classification,
characterization, clustering, prediction, time-series analysis etc.
d) Pattern Evaluation Module
The pattern evaluation module is mainly responsible for the measure of interestingness of the
pattern by using a threshold value. It interacts with the data mining engine to focus the search
towards interesting patterns.
e) Graphical User Interface
The graphical user interface module communicates between the user and the data mining system.
This module helps the user use the system easily and efficiently without knowing the real
complexity behind the process. When the user specifies a query or a task, this module interacts
with the data mining system and displays the result in an easily understandable manner.
f) Knowledge Base
The knowledge base is helpful in the whole data mining process. It might be useful for guiding the
search or evaluating the interestingness of the result patterns. The knowledge base might even
contain user beliefs and data from user experiences that can be useful in the process of data
mining. The data mining engine might get inputs from the knowledge base to make the result
more accurate and reliable. The pattern evaluation module interacts with the knowledge base on a
regular basis to get inputs and also to update it.
Summary
Each and every component of data mining system has its own role and importance in completing
data mining efficiently.


DATA MINING FUNCTIONALITIES


Data mining functionalities are used to specify the kind of patterns to be found in data mining
tasks. Data mining tasks can be classified into two categories: descriptive and predictive.
Descriptive mining tasks characterize the general properties of the data in the database.
Predictive mining tasks perform inference on the current data in order to make predictions.
Concept/Class Description: Characterization and Discrimination
Data can be associated with classes or concepts. For example, in an electronics store, classes of items for sale include computers and printers, and concepts of customers include big spenders and budget spenders.
Data characterization
Data characterization is a summarization of the general characteristics or features of a target class
of data.
Data discrimination
Data discrimination is a comparison of the general features of target class data objects with the
general features of objects from one or a set of contrasting classes.
Mining Frequent Patterns, Associations, and Correlations
Frequent patterns are patterns that occur frequently in data. There are many kinds of frequent patterns, including itemsets, subsequences, and substructures.
Association analysis
Suppose, as a marketing manager, you would like to determine which items are frequently
purchased together within the same transactions.
buys(X, “computer”) ⇒ buys(X, “software”) [support = 1%, confidence = 50%]
where X is a variable representing a customer. Confidence = 50% means that if a customer buys a computer, there is a 50% chance that they will buy software as well. Support = 1% means that 1% of all the transactions under analysis showed that computer and software were purchased together.
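The support and confidence measures above can be checked on a toy transaction set. The transactions below are made up for illustration; on real data they would come from a sales database.

```python
# Each transaction is the set of items bought together.
transactions = [
    {"computer", "software"},
    {"computer"},
    {"printer", "scanner"},
    {"computer", "software", "printer"},
]

n = len(transactions)
both = sum(1 for t in transactions if {"computer", "software"} <= t)
ante = sum(1 for t in transactions if "computer" in t)

support = both / n        # fraction of ALL transactions containing both items
confidence = both / ante  # of computer buyers, the fraction who also buy software

print(f"support = {support:.0%}, confidence = {confidence:.1%}")
```

Here the rule buys(computer) ⇒ buys(software) has support 50% (2 of 4 transactions) and confidence about 67% (2 of the 3 computer purchases also included software).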
Classification:
There is a large variety of data mining systems available. Data mining systems may integrate
techniques from the following −
 Spatial Data Analysis
 Information Retrieval


 Pattern Recognition
 Image Analysis
 Signal Processing
 Computer Graphics
 Web Technology
 Business
 Bioinformatics
DATA MINING SYSTEM CLASSIFICATION
A data mining system can be classified according to the following criteria −
 Database Technology
 Statistics
 Machine Learning
 Information Science
 Visualization
 Other Disciplines

Figure 1.4 System Architecture

Apart from these, a data mining system can also be classified based on the kind of (a) databases
mined, (b) knowledge mined, (c) techniques utilized, and (d) applications adapted.
Classification Based on the Databases Mined
We can classify a data mining system according to the kind of databases mined. Database system
can be classified according to different criteria such as data models, types of data, etc. And the
data mining system can be classified accordingly.
For example, if we classify a database according to the data model, then we may have a
relational, transactional, object-relational, or data warehouse mining system.


Classification Based on the kind of Knowledge Mined


We can classify a data mining system according to the kind of knowledge mined. It means the
data mining system is classified on the basis of functionalities such as −
 Characterization
 Discrimination
 Association and Correlation Analysis
 Classification
 Prediction
 Outlier Analysis
 Evolution Analysis
Classification Based on the Techniques Utilized
We can classify a data mining system according to the kind of techniques used. We can describe
these techniques according to the degree of user interaction involved or the methods of analysis
employed.
Classification Based on the Applications Adapted
We can classify a data mining system according to the applications adapted. These applications
are as follows −
 Finance
 Telecommunications
 DNA
 Stock Markets
 E-mail
Data Mining Task Primitives
Each user will have a data mining task in mind, that is, some form of data analysis that he or she would like to have performed. A data mining task can be specified in the form of a data mining query, which is input to the data mining system. A data mining query is defined in terms of data mining task primitives. These primitives allow the user to interactively communicate with the data mining system during discovery in order to direct the mining process, or examine the findings from different angles or depths:
 The set of task-relevant data to be mined: This specifies the portions of the database or the set of data in which the user is interested. This includes the database attributes or data warehouse dimensions of interest (referred to as the relevant attributes or dimensions).
 The kind of knowledge to be mined: This specifies the data mining functions to be performed, such as characterization, discrimination, association or correlation analysis, classification, prediction, clustering, outlier analysis, or evolution analysis.
 The background knowledge to be used in the discovery process: This knowledge about the domain to be mined is useful for guiding the knowledge discovery process and for evaluating the patterns found. Concept hierarchies are a popular form of background knowledge, which allow data to be mined at multiple levels of abstraction. User beliefs regarding relationships in the data are another form of background knowledge.
 The interestingness measures and thresholds for pattern evaluation: These may be used to guide the mining process or, after discovery, to evaluate the discovered patterns. Different kinds of knowledge may have different interestingness measures. For example, interestingness measures for association rules include support and confidence. Rules whose support and confidence values are below user-specified thresholds are considered uninteresting.
 The expected representation for visualizing the discovered patterns: This refers to the form in which discovered patterns are to be displayed, which may include rules, tables, charts, graphs, decision trees, and cubes.
A data mining query language can be designed to incorporate these primitives, allowing users to flexibly interact with data mining systems. Having a data mining query language provides a foundation on which user-friendly graphical interfaces can be built.

Figure 1.5 Data mining tasks


INTEGRATING A DATA MINING SYSTEM WITH A DB/DW SYSTEM


If a data mining system is not integrated with a database or a data warehouse system, then there
will be no system to communicate with. This scheme is known as the non-coupling scheme. In
this scheme, the main focus is on data mining design and on developing efficient and effective
algorithms for mining the available data sets.
The list of Integration Schemes is as follows −
 No Coupling − In this scheme, the data mining system does not utilize any of the database or
data warehouse functions. It fetches the data from a particular source and processes that data
using some data mining algorithms. The data mining result is stored in another file.
 Loose Coupling − In this scheme, the data mining system may use some of the functions of the database and data warehouse system. It fetches the data from the data repository managed by these systems and performs data mining on that data. It then stores the mining result either in a file or in a designated place in a database or data warehouse.
 Semi−tight Coupling − In this scheme, the data mining system is linked with a database or a
data warehouse system and in addition to that, efficient implementations of a few data mining
primitives can be provided in the database.
 Tight coupling − In this coupling scheme, the data mining system is smoothly integrated into
the database or data warehouse system. The data mining subsystem is treated as one functional
component of an information system.
MAJOR ISSUES IN DATA WAREHOUSING AND MINING
• Mining methodology and user interaction
– Mining different kinds of knowledge in databases
– Interactive mining of knowledge at multiple levels of abstraction – Incorporation of background
knowledge
– Data mining query languages and ad-hoc data mining
– Expression and visualization of data mining results
– Handling noise and incomplete data
– Pattern evaluation: the interestingness problem
• Performance and scalability
– Efficiency and scalability of data mining algorithms
– Parallel, distributed and incremental mining methods


• Issues relating to the diversity of data types


– Handling relational and complex types of data
– Mining information from heterogeneous databases and global information systems (WWW)
• Issues related to applications and social impacts
– Application of discovered knowledge
• Domain-specific data mining tools
Issues:
Data mining is not an easy task, as the algorithms used can get very complex and data is not always available in one place; it needs to be integrated from various heterogeneous data sources. These factors also create some issues. We will discuss the major issues regarding −
 Mining Methodology and User Interaction
 Performance Issues
 Diverse Data Types Issues
The following diagram describes the major issues.

Figure 1.6 Data Mining Issues


Mining Methodology and User Interaction Issues:

It refers to the following kinds of issues −


 Mining different kinds of knowledge in databases − Different users may be interested in different kinds of knowledge. Therefore it is necessary for data mining to cover a broad range of knowledge discovery tasks.
 Interactive mining of knowledge at multiple levels of abstraction − The data mining process
needs to be interactive because it allows users to focus the search for patterns, providing and
refining data mining requests based on the returned results.
 Incorporation of background knowledge − To guide discovery process and to express the
discovered patterns, the background knowledge can be used. Background knowledge may be
used to express the discovered patterns not only in concise terms but at multiple levels of
abstraction.
 Data mining query languages and ad hoc data mining − Data Mining Query language that
allows the user to describe ad hoc mining tasks, should be integrated with a data warehouse
query language and optimized for efficient and flexible data mining.
 Presentation and visualization of data mining results − Once patterns are discovered, they need to be expressed in high-level languages and visual representations. These representations should be easily understandable.
 Handling noisy or incomplete data − Data cleaning methods are required to handle noise and incomplete objects while mining the data regularities. Without such methods, the accuracy of the discovered patterns will be poor.
 Pattern evaluation − The patterns discovered should be interesting; a pattern is uninteresting if it represents common knowledge or lacks novelty.
Performance Issues:
There can be performance-related issues such as follows −
 Efficiency and scalability of data mining algorithms − In order to effectively extract information from the huge amounts of data in databases, data mining algorithms must be efficient and scalable.
 Parallel, distributed, and incremental mining algorithms − Factors such as the huge size of databases, wide distribution of data, and the complexity of data mining methods motivate the development of parallel and distributed data mining algorithms. These algorithms divide the data into partitions, which are processed in parallel; the results from the partitions are then merged. Incremental algorithms update the mining results without mining the data again from scratch.
Diverse Data Types Issues:
 Handling of relational and complex types of data − The database may contain complex data objects, multimedia data objects, spatial data, temporal data, etc. It is not possible for one system to mine all these kinds of data.
 Mining information from heterogeneous databases and global information systems − The data is available at different data sources on a LAN or WAN. These data sources may be structured, semi-structured, or unstructured. Therefore, mining knowledge from them adds challenges to data mining.
DATA PREPROCESSING
Data preprocessing is a data mining technique that involves transforming raw data into an understandable format. Real-world data is often incomplete, inconsistent, and/or lacking in certain behaviors or trends, and is likely to contain many errors. Data preprocessing is a proven method of resolving such issues and prepares raw data for further processing.
Data preprocessing is used in database-driven applications such as customer relationship management and in model-based applications such as neural networks.
Data goes through a series of steps during preprocessing:
 Data Cleaning: Data is cleansed through processes such as filling in missing values,
smoothing the noisy data, or resolving the inconsistencies in the data.
 Data Integration: Data with different representations are put together and conflicts within
the data are resolved.
 Data Transformation: Data is normalized, aggregated and generalized.
 Data Reduction: This step aims to present a reduced representation of the data in a data
warehouse.
 Data Discretization: Involves reducing the number of values of a continuous attribute by dividing the range of the attribute into intervals.
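Two of these steps, transformation (here, min-max normalization) and discretization (here, equal-width binning), can be sketched in a few lines of Python. The values and the choice of three bins are illustrative.

```python
# A toy continuous attribute.
values = [2, 5, 9, 14, 21, 28, 35]
lo, hi = min(values), max(values)

# Data transformation: min-max normalize into [0, 1].
normalized = [(v - lo) / (hi - lo) for v in values]

# Data discretization: split the range [lo, hi] into 3 equal-width bins
# and replace each value with its bin label (the max value is clamped
# into the last bin).
bins = 3
width = (hi - lo) / bins
labels = [min(int((v - lo) // width), bins - 1) for v in values]

print(normalized[0], normalized[-1])  # 0.0 1.0
print(labels)                         # [0, 0, 0, 1, 1, 2, 2]
```

Equal-frequency binning, smoothing by bin means, and concept-hierarchy generation are common alternatives for the discretization step.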
Integration of a data mining system with a data warehouse:
For DB and DW systems, the possible integration schemes include no coupling, loose coupling, semi-tight coupling, and tight coupling. We examine each of these schemes as follows:


1. No coupling: No coupling means that a DM system will not utilize any function of a DB or
DW system. It may fetch data from a particular source (such as a file system), process data using
some data mining algorithms, and then store the mining results in another file.
2. Loose coupling: Loose coupling means that a DM system will use some facilities of a DB or
DW system, fetching data from a data repository managed by these systems, performing data
mining, and then storing the mining results either in a file or in a designated place in a database or
data Warehouse. Loose coupling is better than no coupling because it can fetch any portion of data
stored in databases or data warehouses by using query processing, indexing, and other system
facilities.
However, many loosely coupled mining systems are main memory-based. Because mining does
not explore data structures and query optimization methods provided by DB or DW systems, it is
difficult for loose coupling to achieve high scalability and good performance with large data sets.
3. Semi-tight coupling: Semi-tight coupling means that besides linking a DM system to a DB/DW system, efficient implementations of a few essential data mining primitives (identified by the analysis of frequently encountered data mining functions) can be provided in the DB/DW system. These primitives can include sorting, indexing, aggregation, histogram analysis, multiway join, and precomputation of some essential statistical measures, such as sum, count, max, min, and standard deviation.
4. Tight coupling: Tight coupling means that a DM system is smoothly integrated into the DB/DW system. The data mining subsystem is treated as one functional component of the information system. Data mining queries and functions are optimized based on mining query analysis, data structures, indexing schemes, and query processing methods of the DB or DW system.


Figure 1.7 Integration of a Data Mining System with a Data Warehouse


DATA MINING APPLICATIONS

Here is the list of areas where data mining is widely used −


 Financial Data Analysis
 Retail Industry
 Telecommunication Industry
 Biological Data Analysis
 Other Scientific Applications
 Intrusion Detection
Financial Data Analysis
The financial data in the banking and financial industry is generally reliable and of high quality, which facilitates systematic data analysis and data mining. Some of the typical cases are as follows −
 Design and construction of data warehouses for multidimensional data analysis and data
mining.
 Loan payment prediction and customer credit policy analysis.
 Classification and clustering of customers for targeted marketing.
 Detection of money laundering and other financial crimes.


Retail Industry
Data mining has great application in the retail industry because it collects large amounts of data
on sales, customer purchasing history, goods transportation, consumption and services. It is
natural that the quantity of data collected will continue to expand rapidly because of the increasing
ease, availability and popularity of the web.
Data mining in retail industry helps in identifying customer buying patterns and trends that lead to
improved quality of customer service and good customer retention and satisfaction. Here is the list
of examples of data mining in the retail industry −
 Design and Construction of data warehouses based on the benefits of data mining.
 Multidimensional analysis of sales, customers, products, time and region.
 Analysis of effectiveness of sales campaigns.
 Customer Retention.
 Product recommendation and cross-referencing of items.
Telecommunication Industry
Today the telecommunication industry is one of the fastest-emerging industries, providing various
services such as fax, pager, cellular phone, internet messenger, images, e-mail, web data
transmission, etc. Due to the development of new computer and communication technologies, the
telecommunication industry is rapidly expanding. This is why data mining has become
very important in helping to understand the business.
Data mining in the telecommunication industry helps in identifying telecommunication patterns,
catching fraudulent activities, making better use of resources, and improving quality of service. Here is the
list of examples for which data mining improves telecommunication services −
 Multidimensional Analysis of Telecommunication data.
 Fraudulent pattern analysis.
 Identification of unusual patterns.
 Multidimensional association and sequential patterns analysis.
 Mobile Telecommunication services.
 Use of visualization tools in telecommunication data analysis.
Biological Data Analysis
In recent times, we have seen a tremendous growth in the field of biology such as genomics,
proteomics, functional Genomics and biomedical research. Biological data mining is a very
important part of Bioinformatics. Following are the aspects in which data mining contributes for
biological data analysis −
 Semantic integration of heterogeneous, distributed genomic and proteomic databases.
 Alignment, indexing, similarity search and comparative analysis of multiple nucleotide
sequences.
 Discovery of structural patterns and analysis of genetic networks and protein pathways.
 Association and path analysis.
 Visualization tools in genetic data analysis.
Other Scientific Applications
The applications discussed above tend to handle relatively small and homogeneous data sets for
which the statistical techniques are appropriate. Huge amounts of data have been collected from
scientific domains such as geosciences, astronomy, etc. Large data sets are being
generated because of fast numerical simulations in various fields such as climate and
ecosystem modelling, chemical engineering, fluid dynamics, etc. Following are the applications of
data mining in the field of Scientific Applications −
 Data Warehouses and data preprocessing.
 Graph-based mining.
 Visualization and domain specific knowledge.
Intrusion Detection
Intrusion refers to any kind of action that threatens integrity, confidentiality, or the availability of
network resources. In this world of connectivity, security has become a major issue. Increased
usage of the internet and the availability of tools and tricks for intruding into and attacking
networks have prompted intrusion detection to become a critical component of network administration.
Here is the list of areas in which data mining technology may be applied for intrusion detection −
 Development of data mining algorithm for intrusion detection.
 Association and correlation analysis, aggregation to help select and build discriminating
attributes.
 Analysis of Stream data.
 Distributed data mining.
 Visualization and query tools.


Data Mining System Products

There are many data mining system products and domain-specific data mining applications. New
data mining systems and applications are being added to the previous systems. Also, efforts
are being made to standardize data mining languages.
Choosing a Data Mining System
The selection of a data mining system depends on the following features −
 Data Types − The data mining system may handle formatted text, record-based data, and
relational data. The data could also be in ASCII text, relational database data or data warehouse
data. Therefore, we should check what exact format the data mining system can handle.
 System Issues − We must consider the compatibility of a data mining system with different
operating systems. One data mining system may run on only one operating system or on several.
There are also data mining systems that provide web-based user interfaces and allow XML data as
input.
 Data Sources − Data sources refer to the data formats on which the data mining system will
operate. Some data mining systems may work only on ASCII text files, while others work on multiple
relational sources. The data mining system should also support ODBC or OLE DB
connections.
 Data Mining functions and methodologies − There are some data mining systems that provide
only one data mining function such as classification while some provide multiple data mining
functions such as concept description, discovery-driven OLAP analysis, association mining,
linkage analysis, statistical analysis, classification, prediction, clustering, outlier analysis,
similarity search, etc.
 Coupling data mining with databases or data warehouse systems − Data mining systems need
to be coupled with a database or a data warehouse system. The coupled components are integrated
into a uniform information processing environment. Here are the types of coupling listed below −
o No coupling
o Loose Coupling
o Semi tight Coupling
o Tight Coupling
 Scalability − There are two scalability issues in data mining −


o Row (Database size) Scalability − A data mining system is considered row scalable if, when
the number of rows is enlarged 10 times, it takes no more than 10 times as long to execute a query.
o Column (Dimension) Scalability − A data mining system is considered column scalable if
the mining query execution time increases linearly with the number of columns.
 Visualization Tools − Visualization in data mining can be categorized as follows −
o Data Visualization
o Mining Results Visualization
o Mining process visualization
o Visual data mining
 Data Mining query language and graphical user interface − An easy-to-use graphical user
interface is important to promote user-guided, interactive data mining. Unlike relational database
systems, data mining systems do not share an underlying data mining query language.
Trends in Data Mining
Data mining concepts are still evolving, and here are the latest trends that we get to see in this field −
 Application Exploration.
 Scalable and interactive data mining methods.
 Integration of data mining with database systems, data warehouse systems and web database
systems.
 Standardization of data mining query language.
 Visual data mining.
 New methods for mining complex types of data.
 Biological data mining.
 Data mining and software engineering.
 Web mining.
 Distributed data mining.
 Real time data mining.
 Multi database data mining.
 Privacy protection and information security in data mining


PART-A

Q. No Questions Competence BT Level

1. Define Data mining. List out the steps in data mining. Remember BTL-1

2. Compare Discrete versus Continuous Attributes. Analyze BTL-4

3. Give the applications of Data Mining. Understand BTL-2

4. Analyze the issues in Data Mining Techniques. Apply BTL-3

5. Generalize in detail about Numeric Attributes. Create BTL-6

6. Evaluate the major tasks of data preprocessing. Evaluate BTL-5

7. Define an efficient procedure for cleaning the noisy data. Remember BTL-1

8. Distinguish between data similarity and dissimilarity. Understand BTL-2


9. Show the Displays of Basic Statistical Descriptions of Data. Analyze BTL-4
10. Formulate what is data discretization. Create BTL-6

PART-B

Q. No Questions Competence BT Level


1. i) Describe the issues of data mining. (7)
   ii) Describe in detail about the applications of data mining. (6)    Remember    BTL-1
2. i) State and explain the various classifications of data mining systems with example. (7)
   ii) Explain the various data mining functionalities in detail. (6)    Analyze    BTL-4
3. i) Describe the steps involved in Knowledge discovery in databases (KDD). (7)
   ii) Draw the diagram and describe the architecture of data mining system. (6)    Remember    BTL-1


4. Suppose that the data for analysis include the attribute age. The age values for the data tuples are 13, 15, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25, 30, 33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70.
   i) Use smoothing by bin depth of 3. Illustrate your steps. (6)
   ii) Classify the various methods for data smoothing. (7)    Create    BTL-6
5. i) Discuss whether or not each of the following activities is a data mining task. (5)
      1. Credit card fraud detection using transaction records.
      2. Dividing the customers of a company according to their gender.
      3. Computing the total sales of a company.
      4. Predicting the future stock price of a company using historical records.
      5. Monitoring seismic waves for earthquake activities.
   ii) Discuss on descriptive and predictive data mining tasks with illustrations. (8)    Understand    BTL-2
6. i) Generalize why we need the data preprocessing step in data mining. (8)
   ii) Explain the various methods of data cleaning and data reduction techniques. (7)    Evaluate    BTL-5
7. i) Compose in detail the various data transformation techniques. (7)
   ii) Develop a short note on discretization techniques. (6)    Create    BTL-6

TEXT / REFERENCE BOOKS


1. Jiawei Han and Micheline Kamber, “Data Mining Concepts and Techniques”, 2nd Edition, Elsevier, 2007.
2. Alex Berson and Stephen J. Smith, “Data Warehousing, Data Mining & OLAP”, Tata McGraw Hill, 2007.
3. Pang-Ning Tan, Michael Steinbach and Vipin Kumar, “Introduction to Data Mining”, Pearson Education, 2007.
4. K.P. Soman, Shyam Diwakar and V. Ajay, “Insight into Data Mining Theory and Practice”, Eastern Economy Edition, Prentice Hall of India, 2006.
5. G. K. Gupta, “Introduction to Data Mining with Case Studies”, Eastern Economy Edition, Prentice Hall of India, 2006.
6. Daniel T. Larose, “Data Mining Methods and Models”, Wiley-Interscience, 2006.


UNIT – II - DATA WAREHOUSING- SCSA3001


DATA WAREHOUSING

Data warehousing components - Building a data warehouse - Multi Dimensional


Data Model - OLAP Operation in the Multi-Dimensional Model - Three Tier Data
Warehouse Architecture - Schemas for Multi-dimensional data Model - Online
Analytical Processing (OLAP) - OLAP Vs OLTP Integrated OLAM and OLAP
Architecture
DATA WAREHOUSING COMPONENTS
What is Data warehouse?
Data warehouse is an information system that contains historical and cumulative data from
single or multiple sources. It simplifies the reporting and analysis process of the organization. It is
also a single version of truth for any company for decision making and forecasting.
Characteristics of Data warehouse
 Subject-Oriented
 Integrated
 Time-variant
 Non-volatile
Subject-Oriented
A data warehouse is subject oriented as it offers information regarding a theme instead of
companies’ on-going operations. These subjects can be sales, marketing, distributions, etc.
A data warehouse never focuses on the on-going operations. Instead, it puts emphasis on modelling
and analysis of data for decision making. It also provides a simple and concise view around the
specific subject by excluding data which is not helpful to support the decision process.
Integrated
In Data Warehouse, integration means the establishment of a common unit of measure for all
similar data from dissimilar databases. The data also needs to be stored in the data warehouse
in a common and universally acceptable manner.
A data warehouse is developed by integrating data from varied sources like a mainframe,
relational databases, flat files, etc. Moreover, it must keep consistent naming conventions, format,
and coding.


This integration helps in effective analysis of data. Consistency in naming conventions, attribute
measures, encoding structure etc. has to be ensured.
Time-Variant
The time horizon for data warehouse is quite extensive compared with operational systems. The
data collected in a data warehouse is recognized with a particular period and offers information
from the historical point of view. It contains an element of time, explicitly or implicitly. One such
place where data warehouse data display time variance is in the structure of the record key.
Every primary key contained within the DW should have, either implicitly or explicitly, an element
of time, such as the day, week, or month. Another aspect of time variance is that once data is
inserted in the warehouse, it can't be updated or changed.
Non-volatile
Data warehouse is also non-volatile, meaning the previous data is not erased when new data is
entered in it. Data is read-only and periodically refreshed. This also helps to analyze historical
data and understand what happened and when. It does not require transaction processing, recovery, and
concurrency control mechanisms.
Activities like delete, update, and insert which are performed in an operational application
environment are omitted in Data warehouse environment. Only two types of data operations
performed in the Data Warehousing are
1. Data loading
2. Data access
Data Warehouse Architectures
Single-tier architecture
The objective of a single layer is to minimize the amount of data stored; this is achieved by
removing data redundancy. This architecture is not frequently used in practice.
Two-tier architecture
Two-layer architecture separates physically available sources and data warehouse. This
architecture is not expandable and also not supporting a large number of end-users. It also has
connectivity problems because of network limitations.
Three-tier architecture
This is the most widely used architecture.
It consists of the Top, Middle and Bottom Tier.


1. Bottom Tier: The database of the data warehouse serves as the bottom tier. It is usually a
relational database system. Data is cleansed, transformed, and loaded into this layer using
back-end tools.
2. Middle-Tier: The middle tier in Data warehouse is an OLAP server which is implemented
using either ROLAP or MOLAP model. For a user, this application tier presents an abstracted
view of the database. This layer also acts as a mediator between the end-user and the database.
3. Top-Tier: The top tier is a front-end client layer. It holds the tools and APIs that you connect
to in order to get data out of the data warehouse. These could be query tools, reporting tools, managed
query tools, analysis tools and data mining tools.
DATA WAREHOUSE COMPONENTS

Figure 2.1 Data warehouse Components


The data warehouse is based on an RDBMS server which is a central information repository that
is surrounded by some key components to make the entire environment functional, manageable
and accessible
There are mainly five components of Data Warehouse:
Data Warehouse Database:
The central database is the foundation of the data warehousing environment. This database is
implemented on the RDBMS technology. Although, this kind of implementation is constrained by
the fact that traditional RDBMS systems are optimized for transactional database processing and not
for data warehousing. For instance, ad-hoc queries, multi-table joins, and aggregates are resource
intensive and slow down performance.
Hence, alternative approaches to Database are used as listed below-
 In a data warehouse, relational databases are deployed in parallel to allow for scalability.
Parallel relational databases also allow shared memory or shared nothing model on various
multiprocessor configurations or massively parallel processors.
 New index structures are used to bypass relational table scan and improve speed.
 Use of multidimensional database (MDDBs) to overcome any limitations which are placed
because of the relational data model. Example: Essbase from Oracle.
Sourcing, Acquisition, Clean-up and Transformation Tools (ETL)
The data sourcing, transformation, and migration tools are used for performing all the
conversions, summarizations, and all the changes needed to transform data into a unified format in
the data warehouse. They are also called Extract, Transform and Load (ETL) Tools.
Their functionality includes:
 Anonymizing data as per regulatory stipulations.
 Eliminating unwanted data in operational databases from loading into the data warehouse.
 Searching for and replacing common names and definitions for data arriving from different sources.
 Calculating summaries and derived data.
 In case of missing data, populating them with defaults.
 De-duplicating repeated data arriving from multiple data sources.
These Extract, Transform, and Load tools may generate cron jobs, background jobs, Cobol
programs, shell scripts, etc. that regularly update data in data warehouse. These tools are also
helpful to maintain the Metadata.
These ETL Tools have to deal with challenges of Database & Data heterogeneity.
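A few of the cleanup functions listed above (searching for and replacing common names, defaulting missing data, de-duplication) can be sketched in plain Python. This is only an illustrative sketch, not any particular ETL product; the record layout, field names, and default values are assumptions made for the example.

```python
# Toy records arriving from two source systems with inconsistent field names;
# the layout is invented for this illustration.
raw = [
    {"cust_name": "ACME Corp", "revenue": 1000},
    {"customer": "ACME Corp", "revenue": 1000},   # same record, other naming
    {"customer": "Widgets Ltd", "revenue": None}, # missing value
]

def standardize(rec):
    """Search-and-replace common names: map source-specific fields to one schema."""
    out = dict(rec)
    if "cust_name" in out:
        out["customer"] = out.pop("cust_name")
    return out

def fill_defaults(rec):
    """In case of missing data, populate it with a default."""
    if rec.get("revenue") is None:
        rec["revenue"] = 0
    return rec

def deduplicate(records):
    """Drop repeated records arriving from multiple data sources."""
    seen, out = set(), []
    for rec in records:
        key = (rec["customer"], rec["revenue"])
        if key not in seen:
            seen.add(key)
            out.append(rec)
    return out

clean = deduplicate([fill_defaults(standardize(r)) for r in raw])
```

After standardizing the field names, the two ACME records collapse into one, and the missing revenue value is defaulted, leaving two clean records ready for loading.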
Metadata
The name Meta Data suggests some high-level technological concept. However, it is quite
simple. Metadata is data about data which defines the data warehouse. It is used for building,
maintaining and managing the data warehouse.
In the Data Warehouse Architecture, meta-data plays an important role as it specifies the source,
usage, values, and features of data warehouse data. It also defines how data can be changed and
processed. It is closely connected to the data warehouse.


Metadata helps to answer the following questions


 What tables, attributes, and keys does the Data Warehouse contain?
 Where did the data come from?
 How many times do data get reloaded?
 What transformations were applied with cleansing?
Metadata can be classified into following categories:
1. Technical Meta Data: This kind of Metadata contains information about warehouse
which is used by Data warehouse designers and administrators.
2. Business Meta Data: This kind of Metadata contains detail that gives end-users an easy way
to understand information stored in the data warehouse.
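As a small illustration of what a technical-metadata entry might record (the field names below are invented for the example, not a standard metadata schema), one entry can answer the questions metadata is meant to answer:

```python
# Illustrative technical-metadata entry for one warehouse table; every field
# name here is an assumption made for the example.
tech_metadata = {
    "table": "fact_sales",
    "source": "orders.csv (mainframe extract)",
    "attributes": ["date_key", "store_key", "amount"],
    "keys": ["date_key", "store_key"],
    "reload_count": 42,
    "transformations": ["currency converted to USD", "nulls defaulted to 0"],
}

def answer(question):
    """Map the typical metadata questions onto the entry above."""
    lookup = {
        "where did the data come from?": tech_metadata["source"],
        "what attributes does the table contain?": tech_metadata["attributes"],
        "how many times was it reloaded?": tech_metadata["reload_count"],
    }
    return lookup[question.lower()]
```

A real metadata repository would hold one such entry per table, column, and load job, which is how it supports building, maintaining, and managing the warehouse.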
Query Tools
One of the primary objects of data warehousing is to provide information to businesses to make
strategic decisions. Query tools allow users to interact with the data warehouse system.
These tools fall into four different categories:
1. Query and reporting tools
2. Application Development tools
3. Data mining tools
4. OLAP tools
1. Query and reporting tools:
Query and reporting tools can be further divided into
 Reporting tools
 Managed query tools
Reporting tools: Reporting tools can be further divided into production reporting tools and
desktop report writer.
1. Report writers: These are reporting tools designed for end-users for their own analysis.
2. Production reporting: This kind of tool allows organizations to generate regular operational
reports. It also supports high-volume batch jobs like printing and calculating. Some popular
reporting tools are Brio, Business Objects, Oracle, Power Soft, SAS Institute.
Managed query tools:
This kind of access tool helps end users resolve snags in SQL and database
structure by inserting a meta-layer between users and the database.


2. Application development tools:


Sometimes built-in graphical and analytical tools do not satisfy the analytical needs of an
organization. In such cases, custom reports are developed using Application development tools.
3. Data mining tools:
Data mining is a process of discovering meaningful new correlations, patterns, and trends by mining
large amounts of data. Data mining tools are used to make this process automatic.
4. OLAP tools:
These tools are based on concepts of a multidimensional database. It allows users to analyse the
data using elaborate and complex multidimensional views.
Data warehouse Bus Architecture
Data warehouse Bus determines the flow of data in your warehouse. The data flow in a data
warehouse can be categorized as Inflow, Upflow, Downflow, Outflow and Meta flow.
While designing a Data Bus, one needs to consider the shared dimensions and facts across data marts.
Data Marts
A data mart is an access layer which is used to get data out to the users. It is presented as an
option for a large-size data warehouse as it takes less time and money to build. However, there is no
standard definition of a data mart; it differs from person to person.
In simple words, a data mart is a subsidiary of a data warehouse. The data mart is used for a partition
of data which is created for a specific group of users.
Data marts could be created in the same database as the Data warehouse or a physically separate
Database.
Data warehouse Architecture Best Practices
To design Data Warehouse Architecture, you need to follow below given best practices:
 Use a data model which is optimized for information retrieval, which can be the dimensional
model, a denormalized model, or a hybrid approach.
 Need to assure that Data is processed quickly and accurately. At the same time, you should
take an approach which consolidates data into a single version of the truth.
 Carefully design the data acquisition and cleansing process for Data warehouse.
 Design a Meta Data architecture which allows sharing of metadata between components of
Data Warehouse


 Consider implementing an ODS model when information retrieval need is near the bottom of
the data abstraction pyramid or when there are multiple operational sources required to be
accessed.
 One should make sure that the data model is integrated and not just consolidated. In that case,
you should consider a 3NF data model. It is also ideal for acquiring ETL and data cleansing
tools.
Summary:
 Data warehouse is an information system that contains historical and cumulative data from
single or multiple sources.
 A data warehouse is subject oriented as it offers information regarding subject instead of
organization's ongoing operations.
 In Data Warehouse, integration means the establishment of a common unit of measure for all
similar data from the different databases
 Data warehouse is also non-volatile means the previous data is not erased when new data is
entered in it.
 A Data warehouse is Time-variant as the data in a DW has high shelf life.
 There are 5 main components of a Data warehouse. 1) Database 2) ETL Tools 3) Meta Data 4)
Query Tools 5) Data Marts
 These are four main categories of query tools 1. Query and reporting, tools 2. Application
Development tools, 3. Data mining tools 4. OLAP tools
 The data sourcing, transformation, and migration tools are used for performing all the
conversions and summarizations.
 In the Data Warehouse Architecture, meta-data plays an important role as it specifies the
source, usage, values, and features of data warehouse data.
BUILDING A DATA WAREHOUSE
In general, building any data warehouse consists of the following steps:
1. Extracting the transactional data from the data sources into a staging area
2. Transforming the transactional data
3. Loading the transformed data into a dimensional database
4. Building pre-calculated summary values to speed up report generation
5. Building (or purchasing) a front-end reporting tool
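The five steps above can be sketched as one small pipeline. Each function below is only a placeholder working on in-memory lists, standing in for real source systems, a staging database, a dimensional store, and a reporting tool; the data and field names are assumptions for the example.

```python
def extract(sources):
    """Step 1: pull transactional rows from every source into a staging list."""
    staging = []
    for rows in sources:
        staging.extend(rows)
    return staging

def transform(staging):
    """Step 2: give the staged rows a common shape (here, uppercase region codes)."""
    return [{"region": r["region"].upper(), "amount": r["amount"]} for r in staging]

def load(rows):
    """Step 3: load into the dimensional store (a dict keyed by region)."""
    store = {}
    for r in rows:
        store.setdefault(r["region"], []).append(r["amount"])
    return store

def summarize(store):
    """Step 4: pre-calculate summary values to speed up report generation."""
    return {region: sum(amounts) for region, amounts in store.items()}

# Two sources that disagree on capitalization, reconciled by the transform step.
sources = [[{"region": "north", "amount": 10}], [{"region": "North", "amount": 5}]]
report = summarize(load(transform(extract(sources))))  # step 5 would present this
```

The point of the sketch is the ordering: transformation happens after extraction into staging, and summaries are built only once the dimensional store is loaded.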


Figure 2.2 Diagram for building a data warehouse


Extracting Transactional Data:
A large part of building a DW is pulling data from various data sources and placing it in a
central storage area. In fact, this can be the most difficult step to accomplish due to the reasons
mentioned earlier: Most people who worked on the systems in place have moved on to other jobs.
Even if they haven't left the company, you still have a lot of work to do: You need to figure out
which database system to use for your staging area and how to pull data from various sources into
that area.
Fortunately for many small to mid-size companies, Microsoft has come up with an excellent tool
for data extraction. Data Transformation Services (DTS), which is part of Microsoft SQL Server
7.0 and 2000, allows you to import and export data from any OLE DB or ODBC-compliant
database as long as you have an appropriate provider. This tool is available at no extra cost when
you purchase Microsoft SQL Server. The sad reality is that you won't always have an OLE DB or
ODBC-compliant data source to work with, however. If not, you're bound to make a considerable
investment of time and effort in writing a custom program that transfers data from the original
source into the staging database.


Transforming Transactional Data:


An equally important and challenging step after extracting is transforming and relating the
data extracted from multiple sources. As I said earlier, your source systems were most likely built
by many different IT professionals. Let's face it. Each person sees the world through their own
eyes, so each solution is at least a bit different from the others. The data model of your mainframe
system might be very different from the model of the client-server system.
Most companies have their data spread out in a number of various database management systems:
MS Access, MS SQL Server, Oracle, Sybase, and so on. Many companies will also have much of
their data in flat files, spread sheets, mail systems and other types of data stores. When building a
data warehouse, you need to relate data from all of these sources and build some type of a staging
area that can handle data extracted from any of these source systems. After all the data is in the
staging area, you have to massage it and give it a common shape. Prior to massaging data, you
need to figure out a way to relate tables and columns of one system to the tables and columns
coming from the other systems.
Creating a Dimensional Model:
The third step in building a data warehouse is coming up with a dimensional model. Most
modern transactional systems are built using the relational model. The relational database is
highly normalized; when designing such a system, you try to get rid of repeating columns and
make all columns dependent on the primary key of each table. The relational systems perform
well in the On-Line Transaction Processing (OLTP) environment. On the other hand, they
perform rather poorly in the reporting (and especially DW) environment, in which joining
multiple huge tables just is not the best idea.
The relational format is not very efficient when it comes to building reports with summary and
aggregate values. The dimensional approach, on the other hand, provides a way to improve query
performance without affecting data integrity. However, the query performance improvement
comes with a storage space penalty; a dimensional database will generally take up much more
space than its relational counterpart. These days, storage space is fairly inexpensive, and most
companies can afford large hard disks with a minimal effort.
The dimensional model consists of the fact and dimension tables. The fact tables consist of
foreign keys to each dimension table, as well as measures. The measures are a factual
representation of how well (or how poorly) your business is doing (for instance, the number of
parts produced per hour or the number of cars rented per day). Dimensions, on the other hand, are
what your business users expect in the reports—the details about the measures. For example, the
time dimension tells the user that 2000 parts were produced between 7 a.m. and 7 p.m. on the
specific day; the plant dimension specifies that these parts were produced by the Northern plant.
Just like any modeling exercise the dimensional modeling is not to be taken lightly. Figuring out
the needed dimensions is a matter of discussing the business requirements with your users over
and over again. When you first talk to the users they have very minimal requirements: "Just give
me those reports that show me how each portion of the company performs." Figuring out what
"each portion of the company" means is your job as a DW architect. The company may consist of
regions, each of which report to a different vice president of operations. Each region, on the other
hand, might consist of areas, which in turn might consist of individual stores. Each store could
have several departments. When the DW is complete, splitting the revenue among the regions
won't be enough. That's when your users will demand more features and additional drill-down
capabilities. Instead of waiting for that to happen, an architect should take proactive measures to
get all the necessary requirements ahead of time.
It's also important to realize that not every field you import from each data source may fit into the
dimensional model. Indeed, if you have a sequential key on a mainframe system, it won't have
much meaning to your business users. Other columns might have had significance eons ago when
the system was built. Since then, the management might have changed its mind about the
relevance of such columns. So don't worry if all of the columns you imported are not part of your
dimensional model.
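A minimal star schema along these lines (a fact table holding foreign keys to time and plant dimensions plus a measure) can be sketched with Python's standard sqlite3 module. The table names and rows below are invented to mirror the parts-per-hour example, not taken from any real system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_time  (time_key INTEGER PRIMARY KEY, day TEXT, hour INTEGER);
CREATE TABLE dim_plant (plant_key INTEGER PRIMARY KEY, plant_name TEXT);
-- Fact table: a foreign key to each dimension plus the measure.
CREATE TABLE fact_production (
    time_key  INTEGER REFERENCES dim_time(time_key),
    plant_key INTEGER REFERENCES dim_plant(plant_key),
    parts_produced INTEGER
);
""")
conn.execute("INSERT INTO dim_time VALUES (1, '2024-01-01', 7)")
conn.execute("INSERT INTO dim_plant VALUES (1, 'Northern plant')")
conn.execute("INSERT INTO fact_production VALUES (1, 1, 2000)")

# A report joins the fact to its dimensions to label the measure.
row = conn.execute("""
    SELECT p.plant_name, t.day, f.parts_produced
    FROM fact_production f
    JOIN dim_time  t ON f.time_key  = t.time_key
    JOIN dim_plant p ON f.plant_key = p.plant_key
""").fetchone()
```

The dimensions carry the descriptive detail users ask for in reports, while the fact table stays narrow: keys and measures only.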
Loading the Data:
After you've built a dimensional model, it's time to populate it with the data in the staging
database. This step only sounds trivial. It might involve combining several columns together or
splitting one field into several columns. You might have to perform several lookups before
calculating certain values for your dimensional model.
Keep in mind that such data transformations can be performed at either of the two stages: while
extracting the data from their origins or while loading data into the dimensional model. I wouldn't
recommend one way over the other—make a decision depending on the project. If your users need
to be sure that they can extract all the data first, wait until all data is extracted prior to
transforming it. If the dimensions are known prior to extraction, go on and transform the data
while extracting it.
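Combining columns together and splitting one field into several, as described above, might look like the following sketch; the column names and the separator character are assumptions made for the illustration.

```python
# A staging row with source-specific columns; the layout is invented here.
staging_row = {"first_name": "Ada", "last_name": "Lovelace", "city_state": "Austin|TX"}

def to_dimension_row(row):
    """Combine two columns into one, and split one field into two."""
    city, state = row["city_state"].split("|")
    return {
        "customer_name": row["first_name"] + " " + row["last_name"],  # combine
        "city": city,                                                 # split
        "state": state,
    }

dim_row = to_dimension_row(staging_row)
```

Whether this runs during extraction or during the load into the dimensional model is the project-level decision discussed above.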
Generating Precalculated Summary Values:
The next step is generating the precalculated summary values which are commonly referred to
as aggregations. This step has been tremendously simplified by SQL Server Analysis Services (or
OLAP Services, as it is referred to in SQL Server 7.0). After you have populated your
dimensional database, SQL Server Analysis Services does all the aggregate generation work for
you. However, remember that depending on the number of dimensions you have in your DW,
building aggregations can take a long time. As a rule of thumb, the more dimensions you have, the
more time it'll take to build aggregations. However, the size of each dimension also plays a
significant role.
Prior to generating aggregations, you need to make an important choice about which storage
mode to use: ROLAP (Relational OLAP), MOLAP (Multidimensional OLAP), or HOLAP
(Hybrid OLAP). The ROLAP mode builds additional relational tables for storing the aggregates,
which can take much more storage space, so be careful! The MOLAP model
stores the aggregations as well as the data in multidimensional format, which is far more efficient
than ROLAP. The HOLAP approach keeps the data in the relational format, but builds
aggregations in multidimensional format, so it's a combination of ROLAP and MOLAP.
Regardless of which dimensional model you choose, ensure that SQL Server has as much memory
as possible. Building aggregations is a memory-intensive operation, and the more memory you
provide, the less time it will take to build aggregate values.
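Conceptually, aggregation generation just precomputes grouped totals so that queries need not scan every fact row. Analysis Services does this for you in practice; the toy sketch below, with invented fact rows, only illustrates the idea:

```python
# A toy illustration of aggregation generation: precompute sums at a
# coarser level (per quarter, across all items). Fact rows are invented.
from collections import defaultdict

facts = [
    ("Q1", "Mobile", 605), ("Q1", "Modem", 825),
    ("Q2", "Mobile", 680), ("Q2", "Modem", 952),
]

agg_by_quarter = defaultdict(int)
for quarter, item, units in facts:
    agg_by_quarter[quarter] += units

print(dict(agg_by_quarter))   # {'Q1': 1430, 'Q2': 1632}
```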
Building (or Purchasing) a Front-End Reporting Tool
After you've built the dimensional database and the aggregations you can decide how
sophisticated your reporting tools need to be. If you just need the drill-down capabilities, and
your users have Microsoft Office 2000 on their desktops, the Pivot Table Service of Microsoft
Excel 2000 will do the job. If the reporting needs are more than what Excel can offer, you'll have
to investigate the alternative of building or purchasing a reporting tool. The cost of building a
custom reporting (and OLAP) tool will usually outweigh the purchase price of a third-party tool.
That is not to say that OLAP tools are cheap.
There are several major vendors on the market that have top-notch analytical tools. In addition to
the third-party tools, Microsoft has just released its own tool, Data Analyzer, which can be a
cost-effective alternative. Consider purchasing one of these suites before delving into the process of
developing your own software because reinventing the wheel is not always beneficial or
affordable. Building OLAP tools is not a trivial exercise by any means.
MULTIDIMENSIONAL DATA MODEL
The multidimensional data model stores data in the form of a data cube. Data warehouse cubes
are easiest to visualize in two or three dimensions, but may have many more.
A data cube allows data to be viewed in multiple dimensions. Dimensions are entities with respect
to which an organization wants to keep records. For example, in a store's sales records, dimensions
allow the store to keep track of things like monthly sales of items across branches and locations.
A multidimensional database helps to provide data-related answers to complex business queries
quickly and accurately. Data warehouses and Online Analytical Processing (OLAP) tools are
based on a multidimensional data model. OLAP in data warehousing enables users to view data
from different angles and dimensions.

Figure 2.3 Multidimensional Data Representation

The Multi-Dimensional Data Model is a method used for organizing data in the database, with a
good arrangement and assembly of the contents of the database.
The Multi-Dimensional Data Model allows customers to ask analytical questions associated
with market or business trends, unlike relational databases, which allow customers to access
data only in the form of queries. It lets users receive answers to their requests rapidly, by
creating and examining the data comparatively fast.
OLAP (online analytical processing) and data warehousing use multi-dimensional databases,
which are used to show multiple dimensions of the data to users.
Working on a Multidimensional Data Model
The following stages should be followed by every project for building a Multi-Dimensional Data
Model:
Stage 1: Assembling data from the client: In the first stage, a Multi-Dimensional Data Model
collects correct data from the client. Software professionals typically make clear to the client the
range of data that can be obtained with the selected technology, and collect the complete data in
detail.
Stage 2: Grouping different segments of the system: In the second stage, the Multi-Dimensional
Data Model recognizes and classifies all the data into the respective sections they belong to,
which also makes the model problem-free to apply step by step.
Stage 3: Noticing the different proportions: The third stage forms the basis of the system's
design. In this stage, the main factors are recognized according to the user's point of view. These
factors are also known as "Dimensions".
Stage 4: Preparing the actual-time factors and their respective qualities: In the fourth stage, the
factors recognized in the previous step are used to identify their related qualities. These qualities
are also known as "attributes" in the database.
Stage 5: Finding the actuality of the factors listed previously and their qualities: In the fifth
stage, a Multi-Dimensional Data Model separates the facts from the factors collected so far.
These facts play a significant role in the arrangement of a Multi-Dimensional Data Model.
Stage 6: Building the schema to place the data, with respect to the information collected from
the steps above: In the sixth stage, a schema is built on the basis of the data collected
previously.
For Example:
1. Let us take the example of a firm. The revenue of a firm can be analyzed on the basis of
different factors such as the geographical location of the firm's workplace, the products of the
firm, the advertisements done, the time utilized to flourish a product, etc.


Figure 2.4 Multidimensional Data Model


2. Let us take the example of the data of a factory which sells products per quarter in Bangalore.
The data is represented in the table given below:

Table 2.1 2D Factory Data


In the presentation given above, the factory's sales for Bangalore are shown with respect to the
time dimension, which is organized into quarters, and the item dimension, which is sorted
according to the kind of item sold. The facts here are represented in rupees (in thousands).
Now, suppose we desire to view the sales data with a third dimension, location (like Kolkata,
Delhi, and Mumbai). The three-dimensional data can still be represented as a series of
two-dimensional tables, one per location, as shown below:


Figure 2.2 3D Data Representation as 2D


This data can be represented in the form of three dimensions conceptually, which is shown in the
image below:

Figure 2.5 3D Data Representation
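One simple way to hold such a three-dimensional cube in memory is a mapping from an (item, quarter, city) coordinate to the measure. The cell values below are invented examples; each 2-D table in the figures is just the set of cells that share one coordinate:

```python
# A 3-D cube as a dict keyed by (item, quarter, city); values are invented.
cube = {
    ("Mobile", "Q1", "Bangalore"): 605,
    ("Mobile", "Q1", "Delhi"):     530,
    ("Modem",  "Q1", "Bangalore"): 825,
    ("Mobile", "Q2", "Bangalore"): 680,
}

# The 2-D table for one city: all cells that share city == "Bangalore".
bangalore_slice = {(i, q): v for (i, q, c), v in cube.items() if c == "Bangalore"}
print(bangalore_slice[("Mobile", "Q1")])   # 605
```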


Advantages of Multi-Dimensional Data Model
The following are the advantages of a multi-dimensional data model:
 A multi-dimensional data model is easy to handle.
 It is easy to maintain.
 Its performance is better than that of normal databases (e.g. relational databases).
 The representation of data is better than in traditional databases, because multi-dimensional
databases are multi-viewed and carry different types of factors.
 It is workable on complex systems and applications, contrary to simple one-dimensional
database systems.


Disadvantages of Multi-Dimensional Data Model


The following are the disadvantages of a Multi-Dimensional Data Model:
 The multi-dimensional data model is slightly complicated in nature, and it requires
professionals to recognize and examine the data in the database.
 While a Multi-Dimensional Data Model is working, caching problems in the system have a
great effect on the working of the system.
 It is complicated in nature, due to which the databases are generally dynamic in design.
OLAP OPERATIONS
Online Analytical Processing (OLAP) servers are based on the multidimensional data model.
They allow managers and analysts to get an insight into the information through fast, consistent,
and interactive access to it. This chapter covers the types of OLAP, the operations on OLAP, and
the differences between OLAP, statistical databases, and OLTP.

Since OLAP servers are based on multidimensional view of data, we will discuss OLAP
operations in multidimensional data.
Here is the list of OLAP operations −
 Roll-up
 Drill-down
 Slice and dice
 Pivot (rotate)
Roll-up
Roll-up performs aggregation on a data cube in any of the following ways −
 By climbing up a concept hierarchy for a dimension
 By dimension reduction
The following diagram illustrates how roll-up works.
 Roll-up is performed by climbing up a concept hierarchy for the dimension location.
 Initially the concept hierarchy was "street < city < province < country".
 On rolling up, the data is aggregated by ascending the location hierarchy from the level of
city to the level of country.
 The data is grouped into countries rather than cities.
 When roll-up is performed, one or more dimensions from the data cube are removed.
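The climb up the location hierarchy can be sketched in a few lines of Python. The city-to-country mapping and the sales figures below are invented examples:

```python
# A sketch of roll-up by climbing the location hierarchy (city -> country).
from collections import defaultdict

city_to_country = {"Chicago": "USA", "New York": "USA", "Toronto": "Canada"}
sales_by_city = {"Chicago": 440, "New York": 1560, "Toronto": 395}

sales_by_country = defaultdict(int)
for city, amount in sales_by_city.items():
    # Climb one level of the concept hierarchy and aggregate.
    sales_by_country[city_to_country[city]] += amount

print(dict(sales_by_country))   # {'USA': 2000, 'Canada': 395}
```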


Figure 2.6 Roll up Operation


Drill-down
Drill-down is the reverse operation of roll-up. It is performed by either of the following ways −
 By stepping down a concept hierarchy for a dimension
 By introducing a new dimension.
The following diagram illustrates how drill-down works –
 Drill-down is performed by stepping down a concept hierarchy for the dimension time.
 Initially the concept hierarchy was "day < month < quarter < year."
 On drilling down, the time dimension is descended from the level of quarter to the level of
month.
 When drill-down is performed, one or more dimensions are added to the data cube.
 It navigates the data from less detailed data to highly detailed data.
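Drill-down is easiest to see when the facts are kept at the most detailed level: the quarter view is an aggregate, and drilling down simply surfaces the finer-grained month rows behind it. The monthly figures below are invented examples:

```python
# A sketch of drill-down along the time hierarchy (quarter -> month).
from collections import defaultdict

monthly_sales = [("Jan", 150), ("Feb", 100), ("Mar", 150),
                 ("Apr", 200), ("May", 120), ("Jun", 80)]
month_to_quarter = {"Jan": "Q1", "Feb": "Q1", "Mar": "Q1",
                    "Apr": "Q2", "May": "Q2", "Jun": "Q2"}

# The coarse (quarter-level) view the user starts from.
by_quarter = defaultdict(int)
for month, amount in monthly_sales:
    by_quarter[month_to_quarter[month]] += amount
print(dict(by_quarter))   # {'Q1': 400, 'Q2': 400}

# Drilling into Q1 descends to the more detailed month level.
q1_detail = [(m, a) for m, a in monthly_sales if month_to_quarter[m] == "Q1"]
print(q1_detail)          # [('Jan', 150), ('Feb', 100), ('Mar', 150)]
```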


Figure 2.7 Drill-down Operation


Slice
The slice operation performs a selection on one particular dimension of a given cube, producing
a new sub-cube. Consider the following diagram that shows how slice works.
 Here slice is performed for the dimension "time" using the criterion time = "Q1".
 It forms a new sub-cube by selecting along one dimension.
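With the cube held as a dict of coordinates, a slice is a filter that fixes one dimension and keeps the others. The cube cells below are invented examples:

```python
# A sketch of slice: fix time = "Q1" and keep the remaining dimensions.
cube = {
    ("Mobile", "Q1", "Toronto"):   605,
    ("Modem",  "Q1", "Vancouver"): 825,
    ("Mobile", "Q2", "Toronto"):   680,
}

# The resulting sub-cube has one dimension fewer: (item, location) only.
q1_slice = {(item, loc): v for (item, q, loc), v in cube.items() if q == "Q1"}
print(q1_slice)
# {('Mobile', 'Toronto'): 605, ('Modem', 'Vancouver'): 825}
```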


Figure 2.8 Slice Operation


Dice
Dice selects two or more dimensions from a given cube and provides a new sub-cube. Consider
the following diagram that shows the dice operation.
The dice operation on the cube based on the following selection criteria involves three
dimensions.
 (location = "Toronto" or "Vancouver")
 (time = "Q1" or "Q2")
 (item =" Mobile" or "Modem")
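The same dict-of-coordinates sketch extends to dice: the selection now constrains several dimensions at once, using the criteria listed above. The cube cells are invented examples:

```python
# A sketch of dice: select on location, time, and item simultaneously.
cube = {
    ("Mobile", "Q1", "Toronto"):   605,
    ("Modem",  "Q2", "Vancouver"): 952,
    ("Phone",  "Q1", "Toronto"):   820,   # excluded: item not selected
    ("Mobile", "Q3", "New York"):  700,   # excluded: time and location
}

diced = {
    (item, q, loc): v for (item, q, loc), v in cube.items()
    if loc in ("Toronto", "Vancouver")
    and q in ("Q1", "Q2")
    and item in ("Mobile", "Modem")
}
print(sorted(diced))
# [('Mobile', 'Q1', 'Toronto'), ('Modem', 'Q2', 'Vancouver')]
```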


Figure 2.9 Dice Operation


Pivot
The pivot operation is also known as rotation. It rotates the data axes in view in order to provide
an alternative presentation of data. Consider the following diagram that shows the pivot
operation.
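Pivot does not change the data, only its presentation: for a 2-D slice it amounts to swapping the two axes. The values below are invented examples:

```python
# A sketch of pivot (rotate): swap the item and location axes of a 2-D slice.
slice_2d = {
    ("Mobile", "Toronto"):   605,
    ("Mobile", "Vancouver"): 680,
    ("Modem",  "Toronto"):   825,
}

# Rotating the axes turns (item, location) keys into (location, item) keys.
pivoted = {(loc, item): v for (item, loc), v in slice_2d.items()}
print(pivoted[("Toronto", "Mobile")])   # 605
```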

Figure 2.10 Pivot Operation


THREE-TIER DATA WAREHOUSE ARCHITECTURE

Generally, a data warehouse adopts a three-tier architecture. Following are the three tiers of the
data warehouse architecture.
These 3 tiers are:
1. Bottom Tier (Data warehouse server)
2. Middle Tier (OLAP server)
3. Top Tier (Front end tools)

Figure 2.11 Three Tier Data Warehouse Architecture


 Bottom Tier − The bottom tier of the architecture is the data warehouse database server. It is
the relational database system. We use the back end tools and utilities to feed data into the
bottom tier. These back end tools and utilities perform the Extract, Clean, Load, and refresh
functions.


 Middle Tier − In the middle tier, we have the OLAP server, which can be implemented in either
of the following ways.
 By Relational OLAP (ROLAP), which is an extended relational database management
system. ROLAP maps the operations on multidimensional data to standard relational
operations.
 By the Multidimensional OLAP (MOLAP) model, which directly implements the
multidimensional data and operations.
 Top-Tier − This tier is the front-end client layer. This layer holds the query tools and reporting
tools, analysis tools and data mining tools.
Data Warehouse Models
From the perspective of data warehouse architecture, we have the following data warehouse
models
 Virtual Warehouse
 Data mart
 Enterprise Warehouse
Virtual Warehouse
The view over an operational data warehouse is known as a virtual warehouse. It is easy to build
a virtual warehouse. Building a virtual warehouse requires excess capacity on operational
database servers.
Data Mart
Data mart contains a subset of organization-wide data. This subset of data is valuable to specific
groups of an organization.
In other words, we can claim that data marts contain data specific to a particular group. For
example, the marketing data mart may contain data related to items, customers, and sales. Data
marts are confined to subjects.
Points to remember about data marts −
 Windows-based or Unix/Linux-based servers are used to implement data marts. They are
implemented on low-cost servers.
 The implementation cycle of a data mart is measured in short periods of time, i.e., in weeks
rather than months or years.


 The life cycle of a data mart may be complex in the long run, if its planning and design are
not organization-wide.
 Data marts are small in size.
 Data marts are customized by department.
 The source of a data mart is a departmentally structured data warehouse.
 Data marts are flexible.
Enterprise Warehouse
 An enterprise warehouse collects all the information and the subjects spanning an entire
organization
 It provides us enterprise-wide data integration.
 The data is integrated from operational systems and external information providers.
 This information can vary from a few gigabytes to hundreds of gigabytes, terabytes or
beyond.
SCHEMAS FOR MULTI-DIMENSIONAL DATA MODEL
Schema is a logical description of the entire database. It includes the name and description of
records of all record types, including all associated data items and aggregates. Much like a
database, a data warehouse also requires a schema to be maintained. A database uses the
relational model, while a data warehouse uses the Star, Snowflake, or Fact Constellation schema.
In this chapter, we will discuss the schemas used in a data warehouse.
Star Schema

Figure 2.12 Star Schema


 Each dimension in a star schema is represented with only one-dimension table.


 This dimension table contains the set of attributes.
 The following diagram shows the sales data of a company with respect to the four
dimensions, namely time, item, branch, and location.
 There is a fact table at the center. It contains the keys to each of four dimensions.
 The fact table also contains the attributes, namely dollars sold and units sold.
Note − Each dimension has only one dimension table and each table holds a set of attributes. For
example, the location dimension table contains the attribute set {location_key, street, city,
province_or_state, country}. This constraint may cause data redundancy. For example,
"Vancouver" and "Victoria" are both cities in the Canadian province of British Columbia. The
entries for such cities may cause data redundancy along the attributes province_or_state and
country.
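The star schema above can be sketched with SQLite from the Python standard library. The fact table holds foreign keys into each dimension; the table and column names follow the text, while the inserted rows are invented examples:

```python
# A minimal star-schema sketch: one fact table referencing dimension tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_location (
    location_key INTEGER PRIMARY KEY,
    street TEXT, city TEXT, province_or_state TEXT, country TEXT
);
CREATE TABLE dim_time (time_key INTEGER PRIMARY KEY, quarter TEXT, year INTEGER);
CREATE TABLE fact_sales (
    time_key INTEGER REFERENCES dim_time(time_key),
    location_key INTEGER REFERENCES dim_location(location_key),
    dollars_sold REAL, units_sold INTEGER
);
""")
conn.execute("INSERT INTO dim_location VALUES "
             "(1, 'Main St', 'Vancouver', 'British Columbia', 'Canada')")
conn.execute("INSERT INTO dim_time VALUES (1, 'Q1', 2024)")
conn.execute("INSERT INTO fact_sales VALUES (1, 1, 1605.0, 42)")

# A typical star join: the fact table joined to its dimensions.
row = conn.execute("""
    SELECT l.city, t.quarter, f.units_sold
    FROM fact_sales f
    JOIN dim_location l ON f.location_key = l.location_key
    JOIN dim_time t ON f.time_key = t.time_key
""").fetchone()
print(row)   # ('Vancouver', 'Q1', 42)
```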
Snowflake Schema
 Some dimension tables in the Snowflake schema are normalized.
 The normalization splits up the data into additional tables.
 Unlike the Star schema, the dimension tables in a Snowflake schema are normalized. For
example, the item dimension table of the star schema is normalized and split into two
dimension tables, namely the item and supplier tables.

Figure 2.13 Snowflake Schema


 Now the item dimension table contains the attributes item_key, item_name, type, brand,
and supplier-key.
 The supplier key is linked to the supplier dimension table. The supplier dimension table
contains the attributes supplier_key and supplier_type.
Note − Due to normalization in the Snowflake schema, the redundancy is reduced and therefore,
it becomes easier to maintain and saves storage space.
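The normalization just described can be sketched in SQLite: the item dimension keeps only a supplier_key, and the supplier attributes move to their own table. Column names follow the text; the rows are invented examples:

```python
# A sketch of the snowflaked item dimension: supplier details are split
# into a separate table, reached through one extra join.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_supplier (supplier_key INTEGER PRIMARY KEY, supplier_type TEXT);
CREATE TABLE dim_item (
    item_key INTEGER PRIMARY KEY,
    item_name TEXT, type TEXT, brand TEXT,
    supplier_key INTEGER REFERENCES dim_supplier(supplier_key)
);
""")
conn.execute("INSERT INTO dim_supplier VALUES (10, 'wholesale')")
conn.execute("INSERT INTO dim_item VALUES (1, 'Modem', 'electronics', 'Acme', 10)")

# Resolving an item's supplier now requires joining the two tables.
row = conn.execute("""
    SELECT i.item_name, s.supplier_type
    FROM dim_item i JOIN dim_supplier s ON i.supplier_key = s.supplier_key
""").fetchone()
print(row)   # ('Modem', 'wholesale')
```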
Fact Constellation Schema
 A fact constellation has multiple fact tables. It is also known as galaxy schema.

 The following diagram shows two fact tables, namely sales and shipping.

Figure 2.14 Fact Constellation Schema


 The sales fact table is the same as that in the star schema.
 The shipping fact table has five dimensions, namely item_key, time_key, shipper_key,
from_location, and to_location.
 The shipping fact table also contains two measures, namely dollars sold and units sold.
 It is also possible to share dimension tables between fact tables. For example, time, item,
and location dimension tables are shared between the sales and shipping fact table.


OLAP (ONLINE ANALYTICAL PROCESSING)


The most popular data model for data warehouses is the multidimensional model, which can
exist in the form of a star schema, a snowflake schema, or a fact constellation schema. The
typical OLAP operations on this model are summarized below.

Figure 2.15 Multidimensional Data


1. Roll-up: The roll-up operation performs aggregation on a data cube, either by climbing-up a
concept hierarchy for a dimension or by dimension reduction. Figure shows the result of a roll-
up operation performed on the central cube by climbing up the concept hierarchy for location.
This hierarchy was defined as the total order street < city < province or state < country.
2. Drill-down: Drill-down is the reverse of roll-up. It navigates from less detailed data to more
detailed data. Drill-down can be realized by either stepping-down a concept hierarchy for a
dimension or introducing additional dimensions. Figure shows the result of a drill-down
operation performed on the central cube by stepping down a concept hierarchy for time defined
as day < month < quarter < year. Drill-down occurs by descending the time hierarchy from the
level of quarter to the more detailed level of month.
3. Slice and dice: The slice operation performs a selection on one dimension of the given cube,
resulting in a subcube. Figure shows a slice operation where the sales data are selected from the
central cube for the dimension time using the criterion time = "Q2". The dice operation defines a
subcube by performing a selection on two or more dimensions.
4. Pivot (rotate): Pivot is a visualization operation which rotates the data axes in view in order
to provide an alternative presentation of the data. Figure shows a pivot operation where the item
and location axes in a 2-D slice are rotated.


Figure 2.16 Examples of typical OLAP operations on multidimensional data


Types of OLAP Servers
We have four types of OLAP servers −
 Relational OLAP (ROLAP)
 Multidimensional OLAP (MOLAP)
 Hybrid OLAP (HOLAP)
 Specialized SQL Servers


Relational OLAP
ROLAP servers are placed between relational back-end server and client front-end tools. To store
and manage warehouse data, ROLAP uses relational or extended-relational DBMS.
ROLAP includes the following −
 Implementation of aggregation navigation logic.
 Optimization for each DBMS back end.
 Additional tools and services.
Multidimensional OLAP
MOLAP uses array-based multidimensional storage engines for multidimensional views of data.
With multidimensional data stores, the storage utilization may be low if the data set is sparse.
Therefore, many MOLAP servers use two levels of data storage representation to handle dense
and sparse data sets.
Hybrid OLAP
Hybrid OLAP is a combination of both ROLAP and MOLAP. It offers the higher scalability of
ROLAP and the faster computation of MOLAP. HOLAP servers allow large volumes of detailed
data to be stored. The aggregations are stored separately in a MOLAP store.
Specialized SQL Servers
Specialized SQL servers provide advanced query language and query processing support for
SQL queries over star and snowflake schemas in a read-only environment.
INTEGRATED OLAP AND OLAM ARCHITECTURE

Online Analytical Mining (OLAM) integrates Online Analytical Processing (OLAP) with data
mining and mining knowledge in multidimensional databases. Here is the diagram that shows the
integration of both OLAP and OLAM.
OLAM is important for the following reasons −
High quality of data in data warehouses − The data mining tools are required to work on
integrated, consistent, and cleaned data. These steps are very costly in the preprocessing of data.
The data warehouses constructed by such preprocessing are valuable sources of high quality data
for OLAP and data mining as well.
Available information processing infrastructure surrounding data warehouses − Information
processing infrastructure refers to accessing, integration, consolidation, and transformation of


multiple heterogeneous databases, web-accessing and service facilities, and reporting and OLAP
analysis tools.

Figure 2.17 OLAM Architecture


OLAP−based exploratory data analysis − Exploratory data analysis is required for effective data
mining. OLAM provides facility for data mining on various subsets of data and at different levels
of abstraction.

Online selection of data mining functions − Integrating OLAP with multiple data mining
functions and online analytical mining provide users with the flexibility to select desired data
mining functions and swap data mining tasks dynamically.
Features of OLTP and OLAP:
The major distinguishing features between OLTP and OLAP are summarized as follows.
1. Users and system orientation: An OLTP system is customer-oriented and is used for
transaction and query processing by clerks, clients, and information technology professionals. An
OLAP system is market-oriented and is used for data analysis by knowledge workers, including
managers, executives, and analysts.
2. Data contents: An OLTP system manages current data that, typically, are too detailed to be
easily used for decision making. An OLAP system manages large amounts of historical data,
provides facilities for summarization and aggregation, and stores and manages information at
different levels of granularity. These features make the data easier for use in informed decision
making.
3. Database design: An OLTP system usually adopts an entity-relationship (ER) data model and
an application oriented database design. An OLAP system typically adopts either a star or
snowflake model and a subject-oriented database design.
4. View: An OLTP system focuses mainly on the current data within an enterprise or department,
without referring to historical data or data in different organizations. In contrast, an OLAP system
often spans multiple versions of a database schema. OLAP systems also deal with information that
originates from different organizations, integrating information from many data stores. Because of
their huge volume, OLAP data are stored on multiple storage media.
5. Access patterns: The access patterns of an OLTP system consist mainly of short, atomic
transactions. Such a system requires concurrency control and recovery mechanisms. However,
accesses to OLAP systems are mostly read-only operations although many could be complex
queries.


PART-A

1. How is a data warehouse different from a database? Also identify the similarity. (Remember, BTL-1)
2. Compare OLTP and OLAP systems. (Analyze, BTL-4)
3. Differentiate metadata and data mart. (Understand, BTL-2)
4. How would you show your understanding of the multi-dimensional data model? (Apply, BTL-3)
5. Generalize the function of OLAP tools in the internet. (Create, BTL-6)
6. How would you evaluate the goals of data mining? (Evaluate, BTL-5)
7. Can you list the categories of tools in business analysis? (Remember, BTL-1)
8. Give the need for OLAP. (Understand, BTL-2)
9. Compare drill-down with roll-up approach. (Analyze, BTL-4)
10. Design the data warehouse architecture. (Create, BTL-6)

PART-B

1. (i) Diagrammatically illustrate and describe the architecture of MOLAP and ROLAP. (7)
   (ii) Identify the major differences between MOLAP and ROLAP. (6) (Remember, BTL-1)
2. (i) Draw the data warehouse architecture and explain its components. (7)
   (ii) Explain the different types of OLAP tools. (6) (Analyze, BTL-4)
3. (i) Discuss in detail about the components of data warehousing. (7)
   (ii) Describe the overall architecture of a data warehouse. (6) (Understand, BTL-2)
