Decision Support System
A Decision Support System (DSS) is a collection of integrated software applications and hardware that form the backbone of an organization's decision-making process. Companies across all industries rely on decision support tools, techniques, and models to help them assess and resolve everyday business questions. The decision support system is data-driven, as the entire process feeds off the collection and availability of data to analyze. Business Intelligence (BI) reporting tools, processes, and methodologies are key components of any decision support system and provide end users with rich reporting, monitoring, and data analysis.

High-level Decision Support System requirements:
•Data collection from multiple sources (sales data, inventory data, supplier data, market research data, etc.)
•Data formatting and collation
•A suitable database location and format built for decision support-based reporting and analysis
•Robust tools and applications to report, monitor, and analyze the data

Decision support systems have become critical and ubiquitous across all types of business. In today's global marketplace, it is imperative that companies respond quickly to market changes. Companies with comprehensive decision support systems have a significant competitive advantage.

Decision Support Systems delivered by MicroStrategy
MicroStrategy provides companies with a unified reporting, analytical, and monitoring platform that forms the core of any Decision Support System.
The software exemplifies all of the important characteristics of an ideal Decision Support System:
•Supports individual and group decision making: MicroStrategy provides a single platform that allows all users to access the same information and the same version of the truth, while providing autonomy to individual users and development groups to design reporting content locally.
•Easy to develop and deploy: MicroStrategy delivers an interactive, scalable platform for rapidly developing and deploying projects. Multiple projects can be created within a single shared metadata. Within each project, development teams create a wide variety of reusable metadata objects. As decision support system deployment expands within an organization, the MicroStrategy platform effortlessly supports an increasing concurrent user base.
•Comprehensive data access: MicroStrategy software allows users to access data from different sources concurrently, giving organizations the freedom to choose the data warehouse that best suits their unique requirements and preferences.
•Integrated software: MicroStrategy's integrated platform enables administrators and IT professionals to develop data models, perform sophisticated analysis, generate analytical reports, and deliver these reports to end users via different channels (Web, email, file, print, and mobile devices). This eliminates the need for companies to spend countless hours purchasing and integrating disparate software products in an attempt to deliver a consistent user experience.
•Flexibility: The MicroStrategy SDK (Software Development Kit) exposes the platform's vast functionality through an extensive library of APIs. MicroStrategy customers can choose to leverage these flexible APIs to design and deploy solutions tailored to their unique business needs.

Simon's Model
Simon's Model is based on the premise that decision making is a process consisting of distinct phases rather than a single act.
Decision making in Simon's Model is characterized by limited information processing and the use of rules. Simon's decision-making model has four phases:
1) Intelligence phase
2) Design phase
3) Choice phase
4) Implementation phase

Initially a problem arises and we are in the intelligence phase, identifying and understanding the problem; we then try to determine what a solution would look like before moving to the design phase. In the design phase, the ways and methods of solving the problem are considered: we analyze the problem and identify the algorithms that could solve it, for example a genetic algorithm. After identifying the candidate methods we move to the choice phase, where the actual work of selecting the best algorithm takes place. Here we choose from the available set of algorithms, such as the Ant Colony Optimization (ACO) algorithm, or Simulated Annealing (SA), a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted; a mutation that lowers fitness is accepted probabilistically based on the difference in fitness and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA by starting with a relatively high mutation rate and decreasing it over time along a given schedule.
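The simulated annealing procedure described above can be sketched as follows. This is a minimal illustration, not a production implementation; the geometric cooling schedule, the starting/ending temperatures, and the toy bit-string problem are all assumptions made for the example.

```python
import math
import random

def simulated_annealing(fitness, solution, mutate,
                        t_start=10.0, t_end=0.01, cooling=0.95):
    """Maximize `fitness` by testing random mutations on one solution.

    An improving mutation is always accepted; a worsening one is
    accepted with probability exp(delta / T), where the temperature T
    decreases along a geometric cooling schedule (an illustrative choice).
    """
    current, best = solution, solution
    t = t_start
    while t > t_end:
        candidate = mutate(current)
        delta = fitness(candidate) - fitness(current)
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = candidate
            if fitness(current) > fitness(best):
                best = current
        t *= cooling  # decreasing temperature parameter
    return best

# Toy example: maximize the number of 1-bits in a 20-bit string.
def flip_one_bit(bits):
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

random.seed(1)
start = [0] * 20
result = simulated_annealing(sum, start, flip_one_bit)
```

Note that `best` only ever moves to higher-fitness solutions, while `current` may temporarily accept worse ones; this is what lets SA escape local optima.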
After deciding that the genetic algorithm is the most suitable algorithm, we move to the next step, the implementation phase, where the solution to the given problem is actually implemented using the genetic algorithm. In the given problem a list of 26 items is provided, each with a different price, weight, and volume. The task is to select the items to be fitted into the given container such that the total weight and total volume of the selected items do not exceed the container's allowed weight and volume.

Structured Query Language
To communicate with the database system itself we need a language. SQL is an international standard language for manipulating relational databases; it is based on an IBM product. SQL is short for Structured Query Language. SQL can create schemas, delete them, and change them. It can also put data into schemas and remove data. It is a data handling language, but it is not a general-purpose programming language. SQL is a DSL (Data Sub Language), which is really a combination of two languages: the Data Definition Language (DDL) and the Data Manipulation Language (DML). Schema changes are part of the DDL, while data changes are part of the DML. We will consider both parts of the DSL in this discussion of SQL.

Database Models
A data model comprises:
•a data structure
•a set of integrity constraints
•operations associated with the data structure

Examples of data models include:
•hierarchic
•network
•relational

Models other than the relational model used to be quite popular. Each model type is appropriate to particular types of problem.
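The DDL/DML split described above can be illustrated with SQLite through Python's standard sqlite3 module; the `item` table and its rows are invented purely for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

# DDL: schema changes -- create (and later drop) a table.
cur.execute("CREATE TABLE item (name TEXT, price REAL)")

# DML: data changes -- insert and update rows.
cur.execute("INSERT INTO item VALUES (?, ?)", ("beach ball", 4.99))
cur.execute("UPDATE item SET price = 3.99 WHERE name = 'beach ball'")

# Query the data back.
cur.execute("SELECT name, price FROM item")
rows = cur.fetchall()

# DDL again: remove the schema object entirely.
cur.execute("DROP TABLE item")
conn.close()
```

`CREATE TABLE` and `DROP TABLE` change the schema (DDL), while `INSERT`, `UPDATE`, and `SELECT` work on the data within it (DML), matching the two halves of the Data Sub Language described above.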
The relational model type is the most popular in use today, and the other types are not discussed further.

Relational Databases
The relational data model comprises:
•relational data structure
•relational integrity constraints
•relational algebra or equivalent (SQL)

SQL is an ISO standard language based on relational algebra; relational algebra is a mathematical formulation.

Relational Data Structure
A relational data structure is a collection of tables or relations.
•A relation is a collection of rows or tuples
•A tuple is a collection of columns or attributes
•A domain is a pool of values from which the actual attribute values are taken

Online Transaction Processing
Online Transaction Processing (OLTP) database applications are optimal for managing changing data, and usually have a large number of users simultaneously performing transactions that change real-time data. Although individual requests by users for data tend to reference few records, many of these requests are made at the same time. Common examples of these types of databases are airline ticketing systems and banking transaction systems. The primary concerns in this type of application are concurrency and atomicity. Concurrency controls in a database system ensure that two users cannot change the same data, or that one user cannot change a piece of data before another user is done with it. For example, if you are talking to an airline ticket agent to reserve the last available seat on a flight and the agent begins the process of reserving the seat in your name, another agent should not be able to tell another passenger that the seat is available. Atomicity ensures that all of the steps involved in a transaction complete successfully as a group. If any step fails, no other steps should be completed. For example, a banking transaction may involve two steps: taking funds out of your checking account and placing them into your savings account.
If the step that removes the funds from your checking account succeeds, you want to make sure that the funds are placed into your savings account or put back into your checking account.

Online Transaction Processing Design Considerations
Transaction processing system databases should be designed to promote:
1) Good data placement. I/O bottlenecks are a big concern for OLTP systems due to the number of users modifying data all over the database. Determine the likely access patterns of the data and place frequently accessed data together. Use filegroups and RAID (redundant array of independent disks) systems to assist in this.
2) Short transactions to minimize long-term locks and improve concurrency. Avoid user interaction during transactions. Whenever possible, execute a single stored procedure to process the entire transaction. The order in which you reference tables within your transactions can affect concurrency. Place references to frequently accessed tables at the end of the transaction to minimize the duration that locks are held.
3) Online backup. OLTP systems are often characterized by continuous operations (24 hours a day, 7 days a week) for which downtime is kept to an absolute minimum. Although Microsoft® SQL Server™ 2000 can back up a database while it is being used, schedule the backup process to occur during times of low activity to minimize effects on users.
4) High normalization of the database. Reduce redundant information as much as possible to increase the speed of updates and hence improve concurrency. Reducing data also improves the speed of backups because less data needs to be backed up.
5) Little or no historical or aggregated data. Data that is rarely referenced can be archived into separate databases, or moved out of the heavily updated tables into tables containing only historical data. This keeps tables as small as possible, improving backup times and query performance.
6) Careful use of indexes.
Indexes must be updated each time a row is added or modified. To avoid over-indexing heavily updated tables, keep indexes narrow. Use the Index Tuning Wizard to design your indexes.
7) Optimum hardware configuration to handle the large numbers of concurrent users and the quick response times required by an OLTP system.

Group Decision Support System
The tools we provide are collectively known as Group Decision Support Software (GDSS), and they are designed to enable a group of participants to work interactively in an electronic environment. GDSS systems help users to solve complex problems, prepare detailed plans and proposals, resolve conflicts, and analyze and prioritize issues effectively. They are excellent in situations involving visioning, planning, conflict resolution, team building, and evaluation. Each participant has a computer terminal from which they interact with the rest of the group. The computers are networked so that each individual's screen is private, but the information they enter is displayed anonymously on a public screen.

The Process
A typical GDSS session includes four phases:
•Idea Generation
•Idea Consolidation
•Idea Evaluation
•Implementation Planning

GDSS does not replace human interaction, but rather supports and enhances the group's decision-making process; typically 30% of interactions take place on the computers.

The Benefits
•More information in less time: Since GDSS allows group members to contribute in parallel, significantly more information can be gathered in a shorter period.
•Greater participation: The anonymity provided by GDSS enables group members to express themselves freely, reducing the risk of 'group think' and conformance pressure. The loudest voice need not dominate the discussion.
•More structure: More focused and concentrated discussions result with GDSS than would be possible in traditional meetings. Irrelevant digressions are minimized.
•Automated documentation: Comments are never forgotten, results are available immediately, and excellent graphics make it easy to see (and therefore discuss) areas of dispute.

Applications
•Strategic Planning - Analyze the environment, develop a vision, identify objectives, and build action plans.
•Project Evaluation - Assess achievement of objectives, impacts, relevance, cost effectiveness, and future directions.
•Focus Groups and Expert Panels - Elicit opinions and understand needs.
•Conflict Resolution - Compare points of view, understand differences, and seek common ground.
•Problem Solving - Identify causes, suggest alternatives, choose solutions, and develop implementation plans.

Online Analytical Processing
OLAP (Online Analytical Processing) is a methodology for providing end users with access to large amounts of data in an intuitive and rapid manner to assist with deductions based on investigative reasoning. OLAP enables a user to easily and selectively extract and view data from different points of view. For example, a user can request that data be analyzed to display a spreadsheet showing all of a company's beach ball products sold in Florida in the month of July, compare revenue figures with those for the same products in September, and then see a comparison of other product sales in Florida in the same time period. To facilitate this kind of analysis, OLAP data is stored in a multidimensional database. Whereas a relational database can be thought of as two-dimensional, a multidimensional database considers each data attribute (such as product, geographic sales region, and time period) as a separate "dimension." OLAP software can locate the intersection of dimensions (all products sold in the Eastern region above a certain price during a certain time period) and display them. Attributes such as time periods can be broken down into subattributes.
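Locating the intersection of dimensions can be sketched with a tiny in-memory "cube" in plain Python. The fact table, dimension names, and figures below are invented to echo the beach ball example; a real OLAP engine stores and indexes this multidimensionally rather than filtering a flat dictionary.

```python
# A tiny in-memory "cube": facts keyed by (product, region, month).
# The sales figures are invented for illustration.
sales = {
    ("beach ball", "Florida", "Jul"): 1200,
    ("beach ball", "Florida", "Sep"): 800,
    ("sunscreen",  "Florida", "Jul"): 950,
    ("beach ball", "Texas",   "Jul"): 400,
}

def slice_cube(cube, product=None, region=None, month=None):
    """Locate the intersection of the requested dimension values;
    a dimension left as None is not constrained."""
    return {
        key: value for key, value in cube.items()
        if (product is None or key[0] == product)
        and (region is None or key[1] == region)
        and (month is None or key[2] == month)
    }

# All beach ball sales in Florida, across months:
florida_beach_balls = slice_cube(sales, product="beach ball", region="Florida")

# Total July revenue for beach balls, summed across regions:
july = sum(slice_cube(sales, product="beach ball", month="Jul").values())
```

Fixing some dimensions and leaving others free is exactly the "slice and dice" view of a cube: the July/September comparison in the text is two such slices of the same fact data.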
OLAP can be used for data mining, the discovery of previously undiscerned relationships between data items. An OLAP database does not need to be as large as a data warehouse, since not all transactional data is needed for trend analysis. Using Open Database Connectivity (ODBC), data can be imported from existing relational databases to create a multidimensional database for OLAP. Two leading OLAP products are Hyperion Solutions' Essbase and Oracle's Express Server. OLAP products are typically designed for multiple-user environments, with the cost of the software based on the number of users.

Online Analytical Processing (OLAP) Systems for Decision Support
IT organizations are faced with the challenge of delivering systems that allow knowledge workers to make strategic and tactical decisions based on corporate information. These decision support systems are referred to as Online Analytical Processing (OLAP) systems, and they allow knowledge workers to intuitively, quickly, and flexibly manipulate operational data using familiar business terms in order to provide analytical insight. OLAP systems need to:
1. Support the complex analysis requirements of decision-makers,
2. Analyze the data from a number of different perspectives (business dimensions), and
3. Support complex analyses against large input (atomic-level) data sets.

Expert Systems
An expert system is a computer program that simulates the thought process of a human expert to solve complex decision problems in a specific domain. This chapter addresses the characteristics of expert systems that make them different from conventional programming and traditional decision support tools. The growth of expert systems is expected to continue for several years, and with that growth many new and exciting applications will emerge. An expert system operates as an interactive system that responds to questions, asks for clarification, makes recommendations, and generally aids the decision-making process.
Expert systems provide expert advice and guidance in a wide variety of activities, from computer diagnosis to delicate medical surgery. Various definitions of expert systems have been offered by several authors. A general definition that is representative of their intended functions is: an expert system is an interactive computer-based decision tool that uses both facts and heuristics to solve difficult decision problems based on knowledge acquired from an expert. An expert system may be viewed as a computer simulation of a human expert. Expert systems are an emerging technology with many areas of potential application. Past applications range from MYCIN, used in the medical field to diagnose infectious blood diseases, to XCON, used to configure computer systems. These expert systems have proven to be quite successful. Most applications of expert systems fall into one of the following categories:
• Interpreting and identifying
• Predicting
• Diagnosing
• Designing
• Planning
• Monitoring
• Debugging and testing
• Instructing and training
• Controlling

Applications that are computational or deterministic in nature are not good candidates for expert systems. Traditional decision support systems such as spreadsheets are very mechanistic in the way they solve problems: they operate under mathematical and Boolean operators and arrive at one and only one static solution for a given set of data. Calculation-intensive applications with very exacting requirements are better handled by traditional decision support tools or conventional programming. The best application candidates for expert systems are those dealing with expert heuristics for solving problems. Conventional computer programs are based on factual knowledge, an indisputable strength of computers. Humans, by contrast, solve problems on the basis of a mixture of factual and heuristic knowledge.
Heuristic knowledge, composed of intuition, judgment, and logical inferences, is an indisputable strength of humans. Successful expert systems will be those that combine facts and heuristics and thus merge human knowledge with computer power in solving problems.
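The facts-plus-heuristics combination can be sketched as a tiny forward-chaining rule engine: known facts are repeatedly matched against if-then rules until no new conclusions can be derived. The diagnostic rules and fact names below are invented for illustration and bear no relation to MYCIN's actual knowledge base.

```python
# Minimal forward-chaining sketch: facts plus heuristic if-then rules.
# The rules and facts below are invented for illustration.

facts = {"fever", "cough"}

# Each rule: (set of required facts, fact to conclude).
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
    ({"possible_flu"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are all satisfied,
    until no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

conclusions = forward_chain(facts, rules) - facts
```

Unlike a spreadsheet formula that computes one static answer, the engine's conclusions depend on which heuristic rules fire against the facts at hand: adding the fact "short_of_breath" would derive "see_doctor" as well, without changing any code.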