Data Mining Questions
Points to Note
This partitioning is complex to manage.
It requires metadata to identify what data is stored in each partition.
Partitioning Dimensions
If a dimension contains a large number of entries, then that dimension needs to be partitioned. Here we have to check the size of the dimension.
Consider a large design that changes over time. If we need to store all the variations in order to apply comparisons, that
dimension may be very large. This would definitely affect the response time.
Round Robin Partitions
In the round robin technique, when a new partition is needed, the old one is archived. It uses metadata to allow the user access tool to refer to the correct table partition.
This technique makes it easy to automate table management facilities within the data warehouse.
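As a rough illustration, assuming a hypothetical range-partitioned sales table, rotating partitions might look like this in SQL (all names are illustrative):

    -- Archive the oldest partition's data, then drop it and open a new one.
    ALTER TABLE sales DROP PARTITION sales_oldest;
    ALTER TABLE sales ADD PARTITION sales_newest
        VALUES LESS THAN (DATE '2024-02-01');

The metadata layer is then updated so that user access tools point at the current set of partitions.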
Vertical Partition
Vertical partitioning splits the data vertically. It can be done in the following two ways:
Normalization
Row Splitting
Normalization
Normalization is the standard relational method of database organization. In this method, redundant rows are collapsed into a single row, which reduces space.
Row Splitting
Row splitting tends to leave a one-to-one map between partitions. The motive of row splitting is to speed up access to a large table by reducing its size.
Note − While using vertical partitioning, make sure that there is no requirement to perform a major join operation
between two partitions.
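A minimal sketch of row splitting, assuming a hypothetical wide customer table: the frequently accessed columns stay in one table, and the rarely used ones move to a second table joined one-to-one on the primary key (all names are illustrative).

    CREATE TABLE customer_core AS
        SELECT customer_id, name, region FROM customer;

    CREATE TABLE customer_detail AS
        SELECT customer_id, notes, preferences FROM customer;

    -- Queries on the core columns now scan a much smaller table; the
    -- one-to-one join on customer_id recovers the full row when needed.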
Identify Key to Partition
It is crucial to choose the right partition key. Choosing the wrong partition key will lead to reorganizing the fact table. In principle, we can choose to partition on any key.
Suppose the business is organized into 30 geographical regions and each region has a different number of branches. That gives us 30 partitions, which is reasonable. This partitioning is good enough because our requirements capture has shown that the vast majority of queries are restricted to the user's own business region.
If we partition by transaction_date instead of region, then the latest transactions from every region will sit in one partition. Now a user who wants to look at data within his own region has to query across multiple partitions.
Hence it is worth determining the right partitioning key.
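A hedged sketch of partitioning the fact table by region, in line with the example above (table and partition names are illustrative, following Oracle's list-partitioning syntax):

    CREATE TABLE sales_fact (
        transaction_date DATE,
        region           VARCHAR2(20),
        amount           NUMBER
    )
    PARTITION BY LIST (region) (
        PARTITION p_region01 VALUES ('REGION01'),
        PARTITION p_region02 VALUES ('REGION02')
        -- ... one partition per region, 30 in this example
    );

    -- A query restricted to one region is pruned to a single partition.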
Q.2 Mention the guidelines given by E.F. Codd for an OLAP system.
Ans. OLAP Guidelines (Codd's Rules)
On-line Analytical Processing (OLAP) is a category of software technology that enables analysts, managers and
executives to gain insight into data through fast, consistent, interactive access to a wide variety of information that has
been transformed from raw data to reflect the real dimensionality of the enterprise as understood by the user.
OLAP was introduced by Dr. E.F. Codd in 1993, and he presented 12 rules for OLAP:
Multidimensional Conceptual View:
Users should be able to view the enterprise data in a multidimensional conceptual model.
Transparency:
It makes the technology, underlying data repository, computing architecture and the diverse nature of source data
totally transparent to users.
Accessibility:
Access should be provided only to the data that is actually needed to perform the specific analysis, presenting a single, coherent and consistent view to the users.
Consistent Reporting Performance:
Users should not experience any significant degradation in reporting performance as the number of dimensions or the size of the database increases. Users should also perceive consistent run time, response time and machine utilization every time a given query is run.
Client/Server Architecture:
The system should conform to the principles of client/server architecture for optimum performance, flexibility, adaptability and interoperability.
Generic Dimensionality:
Every data dimension should be equivalent in both structure and operational capabilities; there should be one logical structure for all dimensions.
Dynamic Sparse Matrix Handling:
The physical schema should adapt to the specific analytical model being created and loaded, in a way that optimizes sparse matrix handling.
Multi-user Support:
Support should be provided for end users to work concurrently, either with the same analytical model or creating different models from the same data.
Unrestricted Cross-dimensional Operations:
The system should be able to recognize dimensional hierarchies and automatically perform roll-up and drill-down operations within a dimension or across dimensions.
Intuitive Data Manipulation:
Consolidation path reorientation, drill-down, roll-up and other manipulations should be accomplished intuitively and directly via point-and-click actions.
Flexible Reporting:
The business user should be provided with capabilities to arrange columns, rows and cells in a manner that facilitates easy manipulation, analysis and synthesis of information.
Unlimited Dimensions and Aggregation Levels:
There should be at least fifteen or twenty data dimensions within a common analytical model.
Tablespaces
Tables and Partitioned Tables
Views
Integrity Constraints
Dimensions
Some of these structures require disk space. Others exist only in the data dictionary. Additionally, the following structures
may be created for performance improvement:
Indexes and Partitioned Indexes
Materialized Views
Tablespaces
A tablespace consists of one or more datafiles, which are physical structures within the operating system you are using. A
datafile is associated with only one tablespace. From a design perspective, tablespaces are containers for physical design
structures.
Tablespaces should be separated according to their differences. For example, tables should be separated from their indexes, and small tables should be separated from large tables. Tablespaces should also represent logical business units if possible. Because
a tablespace is the coarsest granularity for backup and recovery or the transportable tablespaces mechanism, the logical
business design affects availability and maintenance operations.
You can also use ultralarge (bigfile) datafiles, a significant improvement for very large databases.
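A minimal sketch of creating a tablespace with its own datafile, plus a bigfile tablespace using an ultralarge datafile; syntax follows Oracle conventions, and the paths and sizes are illustrative:

    CREATE TABLESPACE dw_data
        DATAFILE '/u01/oradata/dw/dw_data01.dbf' SIZE 10G;

    -- Bigfile tablespace: a single, very large datafile.
    CREATE BIGFILE TABLESPACE dw_big
        DATAFILE '/u01/oradata/dw/dw_big01.dbf' SIZE 1T;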
Tables and Partitioned Tables
Tables are the basic unit of data storage. They are the container for the expected amount of raw data in your data
warehouse.
Using partitioned tables instead of nonpartitioned ones addresses the key problem of supporting very large data volumes
by allowing you to divide them into smaller and more manageable pieces. The main design criterion for partitioning is
manageability, though you will also see performance benefits in most cases because of partition pruning or intelligent
parallel processing. For example, you might choose a partitioning strategy based on a sales transaction date and a monthly
granularity. If you have four years' worth of data, you can delete a month's data as it becomes older than four years with a
single, fast DDL statement and load new data while only affecting 1/48th of the complete table. Business questions
regarding the last quarter will only affect three months, which is equivalent to three partitions, or 3/48ths of the total
volume.
Partitioning large tables improves performance because each partitioned piece is more manageable. Typically, you
partition based on transaction dates in a data warehouse. For example, each month, one month's worth of data can be
assigned its own partition.
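For example, a monthly range-partitioned sales table, and the single DDL statement that rolls the four-year window, might look like this (names are illustrative, following Oracle syntax):

    CREATE TABLE sales (
        sale_date DATE,
        amount    NUMBER
    )
    PARTITION BY RANGE (sale_date) (
        PARTITION sales_2021_01 VALUES LESS THAN (DATE '2021-02-01'),
        PARTITION sales_2021_02 VALUES LESS THAN (DATE '2021-03-01')
        -- ... one partition per month, 48 for four years of data
    );

    -- Age out the oldest month with one fast DDL statement.
    ALTER TABLE sales DROP PARTITION sales_2021_01;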
Table Compression
You can save disk space by compressing heap-organized tables. Partitioned tables are a typical kind of heap-organized table to consider for compression.
To reduce disk use and memory use (specifically, the buffer cache), you can store tables and partitioned tables in a
compressed format inside the database. This often leads to a better scaleup for read-only operations. Table compression
can also speed up query execution. There is, however, a cost in CPU overhead.
Table compression should be used with highly redundant data, such as tables with many foreign keys. You should avoid
compressing tables with much update or other DML activity. Although compressed tables or partitions are updatable,
there is some overhead in updating these tables, and high update activity may work against compression by causing some
space to be wasted.
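A minimal example, assuming Oracle's basic table compression and an illustrative table name:

    -- Basic compression suits bulk-loaded, rarely updated warehouse data.
    CREATE TABLE sales_history (
        sale_date DATE,
        region_id NUMBER,
        amount    NUMBER
    ) COMPRESS;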
Views
A view is a tailored presentation of the data contained in one or more tables or other views. A view takes the output of a
query and treats it as a table. Views do not require any space in the database.
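For instance, a simple view over a hypothetical sales table (names are illustrative):

    CREATE VIEW regional_sales AS
        SELECT region_id, SUM(amount) AS total_amount
        FROM sales
        GROUP BY region_id;

    -- The view stores only the query definition, not the result rows.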
Indexes and Partitioned Indexes
Indexes are optional structures associated with tables or clusters. In addition to the classical B-tree indexes, bitmap
indexes are very common in data warehousing environments. Bitmap indexes are optimized index structures for set-
oriented operations. Additionally, they are necessary for some optimized data access methods such as star
transformations.
Indexes are just like tables in that you can partition them, although the partitioning strategy is not dependent upon the
table structure. Partitioning indexes makes it easier to manage the data warehouse during refresh and improves query
performance.
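A short sketch with illustrative names: a bitmap index on a low-cardinality column, and a local B-tree index that is partitioned along with its table.

    CREATE BITMAP INDEX sales_region_bix ON sales (region_id);

    -- LOCAL equipartitions the index with the table's partitions.
    CREATE INDEX sales_date_ix ON sales (sale_date) LOCAL;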
Materialized Views
Materialized views are query results that have been stored in advance so long-running calculations are not necessary when
you actually execute your SQL statements. From a physical design point of view, materialized views resemble tables or
partitioned tables and behave like indexes in that they are used transparently and improve performance.
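A minimal sketch of a materialized view that precomputes monthly totals; ENABLE QUERY REWRITE lets the optimizer use it transparently, as the text describes (names are illustrative):

    CREATE MATERIALIZED VIEW monthly_sales_mv
        ENABLE QUERY REWRITE AS
        SELECT TRUNC(sale_date, 'MM') AS month,
               SUM(amount)            AS total_amount
        FROM sales
        GROUP BY TRUNC(sale_date, 'MM');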
Dimensions
A dimension is a schema object that defines hierarchical relationships between columns or column sets. A hierarchical
relationship is a functional dependency from one level of a hierarchy to the next one. A dimension is a container of logical
relationships and does not require any space in the database. A typical dimension is city, state (or province), region, and
country.
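A hedged sketch of such a geographic dimension in Oracle's CREATE DIMENSION syntax, assuming a hypothetical customers table that carries all three columns:

    CREATE DIMENSION geography_dim
        LEVEL city    IS (customers.city)
        LEVEL state   IS (customers.state)
        LEVEL country IS (customers.country)
        -- Each level functionally determines the next one up.
        HIERARCHY geo_rollup (
            city CHILD OF
            state CHILD OF
            country
        );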
Q.5 Describe 4 main activities of data warehouse deployment.
Ans. Project Scoping and Planning
Determine the scope of the project – what would you like to accomplish? This can be defined by the questions to be answered, the number of logical stars and the number of OLTP sources.
Time – what is the target date for the system to be available to the users?
Resource – what is the budget? What are the role and profile requirements of the resources needed to make this happen?
1. Requirements
What are the business questions? How can the answers to these questions change business decisions or trigger actions?
What are the roles of the users? How often do they use the system? Do they do any interactive reporting, or do they just view the defined reports in guided navigation?
How do you measure? What are the metrics?
2. Front-End Design
The front-end design needs to address both interactive analysis and the designed analytics workflow.
How does the user interact with the system?
What is their analysis process?
3. Data Warehouse Design
Dimensional modeling – define the dimensions and facts, and define the grain of each star schema.
Define the physical schema – this depends on the technology decision. If you use relational technology, design the database tables.
4. OLTP to Data Warehouse Mapping
Logical mapping – table-to-table and column-to-column mapping; also define the transformation rules.
You may need to perform OLTP data profiling: how often does the data change? What is the data distribution?
ETL design – includes data staging and the detailed ETL process flow.
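As a rough illustration of a logical column-to-column mapping with a transformation rule applied in flight (all names are hypothetical):

    INSERT INTO dw.sales_fact (sale_date, region_id, amount)
        SELECT o.order_date,
               r.region_id,
               o.qty * o.unit_price      -- transformation rule: derive amount
        FROM oltp.orders o
        JOIN dw.region_dim r
          ON r.region_code = o.region_code;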
5. Implementation
6. Deployment
1. Ongoing support of the end-users, including security, training, and enhancing the system.
2. You need to monitor the growth of the data.