Programmed and Non-Programmed Decisions
A DSS derives summary information, exceptions, patterns, and trends from data using analytical models. A decision support system aids decision-making but does not necessarily produce a decision itself. Decision makers compile useful information from raw data, documents, personal knowledge, and/or business models to identify and solve problems and make decisions.
These decisions are then based on the manager's discretion, instinct, perception, and judgment.
Characteristics of a DSS
Ease of use
Ease of development
Extendibility
Support for decision-makers in semi-structured and unstructured problems.
Support for managers at various managerial levels, ranging from top executives to line managers.
Support for individuals and groups. Less structured problems often require the involvement of several individuals from different departments and organizational levels.
Benefits of DSS
Improves efficiency and speed of decision-making activities.
Components of a DSS
Following are the components of the Decision Support System:
Support Tools: Support tools such as online help, pull-down menus, user interfaces, graphical analysis, and error-correction mechanisms facilitate the user's interaction with the system.
Classification of DSS
There are several ways to classify DSS. Holsapple and Whinston classify DSS into the following categories: text-oriented, database-oriented, spreadsheet-oriented, solver-oriented, rule-oriented, and compound DSS.
Rule-Oriented DSS: Procedures are adopted in a rule-oriented DSS. An expert system is an example.
Types of DSS
Following are some typical DSSs:
Expert System
An expert system is a computer program that uses artificial
intelligence (AI) technologies to simulate the judgment and behavior of a
human or an organization that has expert knowledge and experience in a
particular field.
The concept of expert systems was first developed in the 1970s by Edward
Feigenbaum, professor and founder of the Knowledge Systems Laboratory at
Stanford University. Feigenbaum explained that the world was moving from
data processing to "knowledge processing," a transition which was being
enabled by new processor technology and computer architectures.
Software architecture
An expert system is typically divided into two subsystems: a knowledge base, which represents facts and rules about the domain, and an inference engine, which applies the rules to the known facts to deduce new facts.
Advantages
Ease of maintenance is the most obvious benefit. This was achieved in two
ways. First, by removing the need to write conventional code, expert systems
avoided many of the normal problems that can be caused by even small
changes to a system. Essentially, the logical flow of the program (at least at
the highest level) was simply a given for the system: invoke the inference
engine. This was also a reason for the second benefit: rapid prototyping. With
an expert system shell it was possible to enter a few rules and have a
prototype developed in days, rather than the months or year typically
associated with complex IT projects.
A claim often made for expert system shells was that they removed the need
for trained programmers and that experts could develop systems themselves.
In reality, this was seldom if ever true. While the rules for an expert system
were more comprehensible than typical computer code, they still had a formal
syntax where a misplaced comma or other character could cause havoc, as
with any other computer language. Also, as expert systems moved from
prototypes in the lab to deployment in the business world, issues of
integration and maintenance became far more critical. Inevitably, demands
arose to integrate with, and take advantage of, large legacy databases and
systems. Accomplishing this integration required the same skills as any other
type of system.
Disadvantages
The most common disadvantage cited for expert systems in the academic
literature is the knowledge acquisition problem. Obtaining the time of domain
experts for any software application is always difficult, but for expert systems it
was especially difficult because the experts were by definition highly valued
and in constant demand by the organization. As a result of this problem, a
great deal of research in the later years of expert systems was focused on
tools for knowledge acquisition, to help automate the process of designing,
debugging, and maintaining rules defined by experts. However, when looking
at the life-cycle of expert systems in actual use, other problems – essentially
the same problems as those of any other large system – seem at least as
critical as knowledge acquisition: integration, access to large databases, and
performance.
Database Management System (DBMS)
Organizations employ Database Management Systems (or DBMS) to help them effectively manage their
data and derive relevant information out of it. A DBMS is a technology tool that directly supports data
management. It is a package designed to define, manipulate, and manage data in a database.
Designed to allow the definition, creation, querying, updating, and administration of databases
Defines rules to validate the data, relieving users of writing programs for data maintenance
Some well-known DBMSs are Microsoft SQL Server, Microsoft Access, Oracle, SAP, and others.
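As a brief sketch of these capabilities, the following SQL (Oracle-flavored, to match the SQL*Plus examples later in this section; the table and column names are illustrative) defines and then queries a table:
CREATE TABLE employee (
    empno   NUMBER PRIMARY KEY,  -- unique employee number
    name    VARCHAR2(50),
    dept_no NUMBER
);
SELECT name FROM employee WHERE dept_no = 10;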
Components of DBMS
A DBMS has several components, each performing a significant task in the database management
environment. Below is a list of components within the database and its environment.
Software
This is the set of programs used to control and manage the overall database. This includes the DBMS
software itself, the Operating System, the network software being used to share the data among users,
and the application programs used to access data in the DBMS.
Hardware
This consists of the physical electronic devices, such as computers, I/O devices, and storage devices,
that provide the interface between the computers and the real-world systems.
Data
Data is the most important component: the DBMS exists to collect, store, process, and provide access
to it. The database contains both the actual (operational) data and the metadata.
Procedures
These are the documented instructions and rules for using the DBMS and for designing and running
the database; they guide the users who operate and manage it.
Query Processor
This transforms user queries into a series of low-level instructions. It reads the online user's query
and translates it into an efficient series of operations in a form capable of being sent to the run-time
data manager for execution.
Data Manager
Also called the cache manager, this is responsible for handling data in the database and for providing
recovery facilities that allow the system to recover the data after a failure.
Database Engine
The core service for storing, processing, and securing data, this provides controlled access and rapid
transaction processing to address the requirements of the most demanding data-consuming
applications. It is often used to create relational databases for online transaction processing or online
analytical processing.
Data Dictionary
This is a reserved space within a database used to store information about the database itself. A data
dictionary is a set of read-only tables and views containing information about the data used in the
enterprise, ensuring that the database representation of the data follows one standard as defined in
the dictionary.
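For instance, in Oracle (the platform assumed by the SQL*Plus examples in this section), the data dictionary is queried like any other set of read-only views:
-- list the tables owned by the current user
SELECT table_name FROM user_tables;
-- list the columns of one of them
SELECT column_name, data_type
FROM user_tab_columns
WHERE table_name = 'EMPLOYEE';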
Report Writer
Also referred to as the report generator, this is a program that extracts information from one or more
files and presents the information in a specified format. Most report writers allow the user to select
records that meet certain conditions, to display selected fields in rows and columns, or to format the
data into different charts.
A company’s performance is greatly affected by how it manages its data, and one of the most basic
tasks of data management is the effective management of its database. Understanding the different
components of the DBMS and how they work and relate to each other is the first step toward
employing an effective DBMS.
Constraints
NOT NULL Constraint − Ensures that a column cannot have a NULL value.
CHECK Constraint − The CHECK constraint ensures that all the values in a
column satisfy certain conditions.
INDEX − Used to create and retrieve data from the database very quickly.
Constraints can be specified when a table is created with the CREATE TABLE
statement or you can use the ALTER TABLE statement to create constraints
even after the table is created.
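As a sketch, using an illustrative variant of the employee table, constraints can be declared at creation time or added afterwards:
CREATE TABLE employee (
    empno   NUMBER PRIMARY KEY,
    name    VARCHAR2(50) NOT NULL,
    salary  NUMBER CHECK (salary > 0),
    dept_no NUMBER
);
-- add a constraint after the table exists (constraint name is illustrative)
ALTER TABLE employee ADD CONSTRAINT emp_dept_chk CHECK (dept_no > 0);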
Dropping Constraints
Any constraint that you have defined can be dropped using the ALTER
TABLE command with the DROP CONSTRAINT option.
For example, to drop the primary key constraint in the EMPLOYEES table,
you can use the following command (the constraint name is illustrative).
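-- assuming the primary key constraint was named EMPLOYEES_PK
ALTER TABLE EMPLOYEES DROP CONSTRAINT EMPLOYEES_PK;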
Integrity Constraints
Integrity constraints are used to ensure accuracy and consistency of the
data in a relational database. Data integrity is handled in a relational
database through the concept of referential integrity.
Independent Data Mart – This data mart does not depend on the enterprise data warehouse and is built in
a bottom-up manner.
Hybrid Data Marts
A hybrid data mart allows you to combine input from sources other than a data
warehouse. This can be useful in many situations, especially when you need ad hoc
integration, such as after a new group or product is added to the organization.
(Figure: Hybrid Data Mart)
The following table summarizes the major differences between OLTP and OLAP system
design.
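Feature           OLTP System                        OLAP System
Users             Clerks, IT professionals           Knowledge workers, analysts
Function          Day-to-day operations              Decision support
Data              Current, detailed, up-to-date      Historical, summarized, consolidated
Usage             Repetitive, routine                Ad hoc
Unit of work      Short, simple transactions         Complex queries
Access            Read/write                         Mostly read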
Data Warehousing
The term "Data Warehouse" was first coined by Bill Inmon in 1990.
According to Inmon, a data warehouse is a subject-oriented, integrated,
time-variant, and non-volatile collection of data. This data helps analysts
make informed decisions in an organization.
A data warehouse helps executives organize, understand, and use their data
to make strategic decisions.
Non-volatile − Non-volatile means the previous data is not erased when new
data is added. A data warehouse is kept separate from the operational
database, and therefore frequent changes in the operational database are not
reflected in the data warehouse.
Banking services
Consumer goods
Retail sectors
Controlled manufacturing
This section discusses the business analysis framework for data warehouse
design and the architecture of a data warehouse.
Since a data warehouse can gather information quickly and efficiently, it can
enhance business productivity.
The top-down view − This view allows the selection of relevant information
needed for a data warehouse.
The data source view − This view presents the information being captured,
stored, and managed by the operational system.
The data warehouse view − This view includes the fact tables and dimension
tables. It represents the information stored inside the data warehouse.
The business query view − It is the view of the data from the viewpoint of
the end-user.
Bottom Tier − The bottom tier of the architecture is the data warehouse
database server. It is the relational database system. We use the back end tools
and utilities to feed data into the bottom tier. These back end tools and utilities
perform the Extract, Clean, Load, and refresh functions.
Middle Tier − In the middle tier, we have the OLAP server, which can be
implemented either as a Relational OLAP (ROLAP) server that maps operations
on multidimensional data to standard relational operations, or as a
Multidimensional OLAP (MOLAP) server that directly implements
multidimensional data and operations.
Top-Tier − This tier is the front-end client layer. This layer holds the query
tools and reporting tools, analysis tools and data mining tools.
Data Warehouse Models
From the architectural point of view, there are three data warehouse models:
Virtual Warehouse
Data Mart
Enterprise Warehouse
Virtual Warehouse
The view over an operational data warehouse is known as a virtual
warehouse. It is easy to build a virtual warehouse. Building a virtual
warehouse requires excess capacity on operational database servers.
Data Mart
A data mart contains a subset of organization-wide data. This subset of data
is valuable to specific groups within an organization.
In other words, we can claim that data marts contain data specific to a
particular group. For example, the marketing data mart may contain data
related to items, customers, and sales. Data marts are confined to subjects.
The implementation cycle of a data mart is measured in short periods of time,
i.e., in weeks rather than months or years.
The life cycle of a data mart may be complex in the long run if its planning and
design are not organization-wide.
Enterprise Warehouse
An enterprise warehouse collects all of the information and subjects spanning
an entire organization.
Load Manager
This component performs the operations required to extract data and load it
into the warehouse.
The size and complexity of the load manager varies between specific
solutions, from one data warehouse to another.
The load manager performs simple transformations of the data into a structure
similar to the one in the data warehouse.
Fast Load
In order to minimize the total load window, the data needs to be loaded into the
warehouse in the fastest possible time.
Transformations affect the speed of data processing.
It is more effective to load the data into a relational database prior to applying
transformations and checks.
Simple Transformations
While loading, it may be required to perform simple transformations. After
this has been completed, we are in a position to do the complex checks.
Suppose we are loading the EPOS sales transactions; we need to perform the
following checks:
Strip out all the columns that are not required within the warehouse.
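A minimal sketch of such a load step, assuming a staging table epos_staging and a warehouse table wh_sales (both names illustrative):
-- only the columns the warehouse needs are selected;
-- every other staging column is stripped out by omission
INSERT INTO wh_sales (txn_id, item_id, qty, amount, txn_date)
SELECT txn_id, item_id, qty, amount, txn_date
FROM epos_staging;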
Warehouse Manager
A warehouse manager is responsible for the warehouse management
process. It consists of third-party system software, C programs, and shell
scripts, and includes components such as:
A backup/recovery tool
SQL scripts
Operations Performed by Warehouse Manager
A warehouse manager analyzes the data to perform consistency and referential
integrity checks.
Creates indexes, business views, partition views against the base data.
Transforms and merges the source data into the published data warehouse.
Archives the data that has reached the end of its captured life.
Query Manager
Query manager is responsible for directing the queries to the suitable tables.
Query manager is responsible for scheduling the execution of the queries posed
by the user.
Query Manager Architecture
The architecture of a query manager includes components such as the following:
Stored procedures
Query redirection facilities
Query management and scheduling tools
Detailed Information
Detailed information is not kept online; rather, it is aggregated to the next
level of detail and then archived to tape. The detailed information part of the
data warehouse keeps the detailed information in the starflake schema.
Detailed information is loaded into the data warehouse to supplement the
aggregated data.
Summary Information
Summary Information is a part of data warehouse that stores predefined
aggregations. These aggregations are generated by the warehouse
manager. Summary Information must be treated as transient. It changes
on-the-go in order to respond to the changing query profiles.
It needs to be updated whenever new data is loaded into the data warehouse.
It may not have been backed up, since it can be generated fresh from the
detailed information.
Data Manipulation Language
Data manipulation is the process of changing data in an effort to make it easier to read or
more organized. For example, a log of data could be organized in alphabetical order, making
individual entries easier to locate.
When we have to insert records into a table, get a specific record from the table, change
some record, delete some record, or perform any other action on records in the database, we
need some means to perform it. DML handles such user requests: it helps to insert, delete,
update, and retrieve the data from the database. Let us see some of them.
Querying Tables (see the example after this list)
Inserting Data
Updating Data
Deleting Data
Committing/Rolling Back Changes
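Querying tables is done with the SELECT statement. As a brief illustration against the sample employee table used throughout this section:
-- retrieve the employees of department 10
SELECT empno, name
FROM employee
WHERE dept_no = 10;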
Inserting Data
Inserting data uses the syntax: INSERT INTO <table name> [(<column>, ...)]
VALUES (<value>, ...);
Only one row is inserted into a table at a time using this syntax.
For example, assuming an employee table with columns empno, name, and dept_no (the values shown are illustrative):
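INSERT INTO employee (empno, name, dept_no)
VALUES (1001, 'SMITH', 10);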
Assuming the employee table consists of only three fields, empno, name, and dept_no, the
above statement can be rewritten in a simplified format (by omitting the column list)
as:
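-- the values must now be listed in the table's column order
INSERT INTO employee
VALUES (1001, 'SMITH', 10);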
Now let's assume the fifth column of the employee table is a DATE field called hiredate
(for illustration, assume the fourth column is salary); the SQL statement can then be
rewritten to include the SYSDATE function call:
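INSERT INTO employee (empno, name, dept_no, salary, hiredate)
VALUES (1001, 'SMITH', 10, 3000, SYSDATE);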
To interactively prompt users to enter values for the fields at the SQL*Plus terminal, use
the following syntax:
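-- & introduces SQL*Plus substitution variables; users are prompted for each
INSERT INTO employee (empno, name, salary)
VALUES (&employee_id, '&name', &salary);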
SQL*Plus will then prompt the user to enter values for the three variables
employee_id, name, and salary. For example, a session might look like this (the
entered values are illustrative):
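SQL> INSERT INTO employee (empno, name, salary) VALUES (&employee_id, '&name', &salary);
Enter value for employee_id: 1002
Enter value for name: JONES
Enter value for salary: 2500
1 row created.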
Note that inserting a large amount of data stored in external files is typically handled
with a bulk-loading utility such as SQL*Loader, which is covered in a separate guide.
Updating Data
To change data, use the following syntax:
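UPDATE <table name> SET <column> = <value> [, ...] [WHERE <condition>];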
The WHERE condition specifies which rows are to be updated. Without a WHERE condition,
all table rows will be updated.
For example, to give all employees in department 10 a 10% raise (an illustrative statement):
UPDATE employee
SET salary = salary * 1.10
WHERE dept_no = 10;
Deleting Data
To delete a row from a table, use the syntax
DELETE FROM <table name> WHERE <condition>;
For example, to remove a single employee row (illustrative):
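DELETE FROM employee WHERE empno = 1001;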
It is illegal to delete a row whose primary key value is referenced as a foreign key in another
table. Oracle will throw an execution error on such an attempt.
For example, an attempt to delete a row in the department table with dept_no = 10 is not
allowed if the employee table has a foreign key referential constraint on the dept_no column
referencing the department table, and some employees in the employee table have
dept_no 10.
Committing/Rolling Back Changes
Database transactions end with one of the following events:
An explicit COMMIT or ROLLBACK statement is issued.
A DDL statement is executed, which triggers an automatic COMMIT.
SQL*Plus exits normally (an automatic COMMIT) or terminates abnormally (an automatic ROLLBACK).
With COMMIT/ROLLBACK, the server is able to achieve data consistency and allow
users to preview data changes before making them permanent.
The COMMIT statement ends the current transaction and makes permanent any changes
made during that transaction. Until you commit the changes, other users cannot access
the changed data; they see the data as it was before you made the changes. An
automatic COMMIT is performed when a DDL statement is issued, or upon a normal exit
from SQL*Plus without an explicit COMMIT or ROLLBACK.
Consider a simple transaction that transfers money from one bank account to another.
The transaction requires two updates because it debits the first account, then credits
the second. In the example below, which uses an illustrative accounts table, after
crediting the second account you issue a commit, which makes the changes permanent.
Only then do other users see the changes.
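-- illustrative schema: accounts(account_id, balance)
UPDATE accounts SET balance = balance - 500 WHERE account_id = 7715;  -- debit first account
UPDATE accounts SET balance = balance + 500 WHERE account_id = 7720;  -- credit second account
COMMIT;  -- make both changes permanent and visible to other users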
The ROLLBACK statement ends the current transaction and undoes any changes made
during that transaction. Rolling back is useful for two reasons. First, if you make a
mistake like deleting the wrong row from a table, a rollback restores the original data.
Second, if you start a transaction that you cannot finish because an exception is raised
or a SQL statement fails, a rollback lets you return to the starting point to take
corrective action and perhaps try again. An automatic ROLLBACK is performed
upon abnormal termination of SQL*Plus or a system failure.
SAVEPOINT names and marks the current point in the processing of a transaction. Used
with the ROLLBACK TO statement, savepoints let you undo parts of a transaction instead
of the whole transaction. In the example below, a PL/SQL sketch using the illustrative
employee table, you mark a savepoint before doing an insert. If the INSERT statement
tries to store a duplicate value in the empno column, the predefined exception
DUP_VAL_ON_INDEX is raised. In that case, you roll back to the savepoint, undoing just
the insert.
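-- a PL/SQL sketch; the table and values are illustrative
BEGIN
    SAVEPOINT before_insert;
    INSERT INTO employee (empno, name, dept_no)
    VALUES (1001, 'SMITH', 10);        -- may duplicate an existing empno
EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
        ROLLBACK TO before_insert;     -- undo just the insert
END;
/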
The following commands show a sample use of the COMMIT, SAVEPOINT, and
ROLLBACK commands (the SQL statements shown are illustrative; the feedback
lines are as SQL*Plus reports them):
SQL> UPDATE employee SET salary = 3000 WHERE empno = 1001;
1 row updated.
SQL> COMMIT;
Commit complete.
SQL> DELETE FROM employee;
49 rows deleted.
SQL> ROLLBACK;
Rollback complete.
SQL> SAVEPOINT sp1;
Savepoint created.
SQL> SAVEPOINT sp2;
Savepoint created.
SQL> ROLLBACK TO sp1;
Rollback complete.