
CHAPTER ONE

GENERAL INTRODUCTION

1.0 INTRODUCTION

Databases allow an organization to achieve improved consistency in the manner in which data is
collected, manipulated and distributed throughout its operational structure, without incurring
excessive costs or risks. The use of database management solutions has grown rapidly in recent
years as organizations have recognized the potential that such innovative technology yields
(Stephanie, 2016).

Database management systems allow for the targeted improvement of company operations. These
systems provide the means necessary for closely monitoring a business and its operations at as
fine-grained a level as desired. With the right software employed proficiently, a company
struggling to improve its efficiency can gain the much-needed insight required to take action
and reverse a downward spiral of diminishing standards in procedure (Stephanie, 2016).

Simply monitoring the ongoing processes of your business will not yield significant results without
the use of a platform that can combine inclusive monitoring with powerful and feature-rich
processing. With the types of detailed reports that a good quality database management system
provides, you can take action through well-calculated steps in order to improve your company’s
efficiency and maximize your profits.

An online database is a database accessible from a local network or the Internet, as opposed to one
that is stored locally on an individual computer or its attached storage (such as a CD). Online
databases are hosted on websites, made available as software-as-a-service products accessible via
a web browser. They may be free or require payment, such as by a monthly subscription. Some
have enhanced features such as collaborative editing and email notification.

1.1 Historical background of the company

ProHealth was established in 2008 by Nigeria Social Insurance Trust Fund as a special purpose
vehicle (SPV) to meet its strategic objectives and plans in realizing its statutory mandate of
providing social insurance as it relates to health. It is the intention of the Nigeria Social
Insurance Trust Fund that ProHealth would solve the problem of financing health care delivery in
Nigeria by providing health insurance in an affordable, acceptable and equitable manner. One of
the comparative advantages of ProHealth is that it is a public-private driven enterprise.

ProHealth has been accredited by the National Health Insurance Scheme (NHIS), the regulator of
health insurance in Nigeria, to operate as a National health maintenance organisation mandated to
establish offices in the six geopolitical zones & 36 States of Nigeria. ProHealth presently runs
zonal offices in Lagos, Ibadan, Kaduna, Enugu, Maiduguri, Port Harcourt and other parts of
Nigeria with its Headquarters in Abuja.

ProHealth HMO Limited (ProHealth) was incorporated with a mission of becoming a world-class,
leading Health Maintenance Organization, providing efficient and affordable health care support
services to all Nigerians and to neighbouring countries in Africa.

1.2 STATEMENT OF THE PROBLEM

This project examines the effectiveness of an online database and its benefits over a manual
system of managing organizational files.

1.3 JUSTIFICATION OF THE STUDY

The effectiveness, and the parallel security implications, of data on the internet cannot be
overemphasized. Many organizational databases now reside on internet-hosted database servers.
To this end, there is a need to continually examine their effectiveness, or otherwise, in order
to secure both the reliability and safety of our data.

1.4 AIM AND OBJECTIVES OF THE STUDY

The aim of this research work is to examine the effectiveness of the online database used in
ProHealth HMO Ltd.

This project is pursued under the following objectives:

1. Relating the effectiveness of the online database;
2. Suggesting the best ways to enhance the effectiveness of the online database;
3. Showing how the data are collected and analyzed.

1.5 SCOPE OF THE WORK

This project examines the online database effectiveness based on the online database design of
ProHealth HMO Ltd.

1.6 DEFINITION OF TERMS

In the most general sense of the word, cement is a binder, a substance that sets and hardens
independently, and can bind other materials together. The word “cement“ traces to the Romans,
who used the term opus caementicium to describe masonry resembling modern concrete that was
made from crushed rock with burnt lime as binder. The volcanic ash and pulverized brick
additives that were added to the burnt lime to obtain a hydraulic binder were later referred to as
cementum, cimentum, cament and cement. Cement used in construction is characterized as
hydraulic or non-hydraulic. Hydraulic cements (e.g., Portland cement) harden because of hydration,
chemical reactions that occur independently of the mixture’s water content; they can harden even
underwater or when constantly exposed to wet weather. The chemical reaction that results when
the anhydrous cement powder is mixed with water produces hydrates that are not water-soluble.
Non-hydraulic cements (e.g., lime and gypsum plaster) must be kept dry in order to retain their
strength.

The most important use of cement is in the production of mortar and concrete: the bonding of
natural or artificial aggregates to form a strong building material that is durable in the face
of normal environmental effects. Concrete should not be confused with cement, because the term
cement refers to the material used to bind the aggregate materials of concrete; concrete is a
combination of cement and aggregate.

1.7 RESEARCH QUESTIONS

The questionnaire is limited to how the study defines effectiveness; the factors to be
considered are:

1. Is the online database reliable for the organization?
2. Does the online database generate any faults for the organization?
3. Does the effect bring any fault to the system?
4. On a normal basis, do you believe the speed of operation affects the system with the
database?
5. Does it have an efficient strategy, or one that will bring down the image of the organization?
6. Do you believe the current strategy will eradicate any issues?
7. Does it affect the manual system of the database?

1.8 RESEARCH HYPOTHESIS

The hypothesis would determine possible ways of improving the effectiveness of online
searching.

8. Do you believe improving your hypothesis in the online database is effective for the
organization?
9. Do you believe you can improve the argumentation of the online database?
10. Do you believe the use of systematic analysis is time-consuming?
11. Do you believe metacognition will improve the online database?

1.9 DATABASE SECURITY

Database security concerns the use of a broad range of information security controls to protect
databases (potentially including the data, the database applications or stored functions, the
database systems, the database servers and the associated network links) against compromises of
their confidentiality, integrity and availability. It involves various types or categories of
controls, such as technical, procedural/administrative and physical. Database security is a
special topic within the broader realms of computer security, information security and risk
management (Jane F. Kink).

Security risks to database systems include, for example:

Unauthorized or unintended activity or misuse by authorized database users, database
administrators, network/systems managers, or by unauthorized users or hackers.

Malware infections causing incidents such as unauthorized access, leakage of proprietary data,
or deletion of or damage to the data or programs.

Overloads, performance constraints and capacity issues resulting in the inability of authorized
users to use the database as intended.

Physical damage to database servers caused by computer room fires, floods or overheating.

Data corruption caused by the entry of invalid data or commands, or by mistakes in database or
system administration processes.

Many layers and types of information security controls are appropriate for databases, including:

 Access control
 Auditing
 Authentication
 Encryption
 Integrity controls
 Backups
 Application security

A program of continual monitoring for compliance with database security standards is another
important task for mission-critical database environments. Two critical aspects of database
security compliance include patch management and the review and management of permissions
(especially public permissions) granted to objects within the database. Database objects may
include tables or other objects listed in the table link. The compliance program should take
into consideration any dependencies at the application software level, as changes at the
database level may have effects on the application software or the application server.

CHAPTER TWO

2.0 LITERATURE REVIEW

2.1 DATABASE MANAGEMENT SYSTEM

A database management system (DBMS) is a software package with computer programs that
control the creation, maintenance, and use of a database. It allows organizations to conveniently
develop databases for various applications through database administrators (DBAs) and other
specialists. A database is an integrated collection of data records, files, and other database objects.
A DBMS allows different user application programs to concurrently access the same database.
DBMSs may use a variety of database models, such as the relational model or object model, to
conveniently describe and support applications. A DBMS typically supports query languages, which
are in fact high-level programming languages: dedicated database languages that considerably
simplify writing database application programs. Database languages also simplify the organization
of the database as well as retrieving and presenting information from it. A DBMS provides facilities
for controlling data access, enforcing data integrity, managing concurrency control, recovering the
database after failures and restoring it from backup files, as well as maintaining database security.
Databases have been in use since the earliest days of electronic computing. Unlike modern systems,
which can be applied to widely different databases and needs, the vast majority of older systems
were tightly linked to custom databases in order to gain speed at the expense of flexibility.
Originally DBMSs were found only in large organizations with the computer hardware needed to
support large data sets.

As computers grew in speed and capability, a number of general-purpose database systems
emerged; by the mid-1960s there were a number of such systems in commercial use. Interest in a
standard began to grow, and Charles Bachman, author of one such product, the Integrated Data
Store (IDS), founded the “Database Task Group” within CODASYL, the group responsible for the
creation and standardization of COBOL. The Codasyl approach was based on the “manual”
navigation of a linked data set which was formed into a large network.
When the database was first opened, the program was handed back a link to the first record in the
database, which also contained pointers to other pieces of data. To find any particular record the
programmer had to step through these pointers one at a time until the required record was returned.
Simple queries like “find all the people in India” required the program to walk the entire data set
and collect the matching results one by one. There was, essentially, no concept of “find” or
“search”. This may sound like a serious limitation today, but in an era when most data was stored
on magnetic tape such operations were too expensive to contemplate anyway.

IMS was a development of software written for the Apollo program on the System/360. IMS was
generally similar in concept to Codasyl, but used a strict hierarchy for its model of data
navigation instead of Codasyl’s network model (IBM 1998). Both concepts later became known as
navigational databases due to the way data was accessed, and Bachman’s 1973 Turing Award
presentation was The Programmer as Navigator. IMS is classified as a hierarchical database.
Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was
primarily involved in the development of hard disk systems. He was unhappy with the navigational
model of the Codasyl approach, notably the lack of a “search” facility. In 1970, he wrote a
paper outlining a new approach to database construction that eventually culminated in the
groundbreaking A Relational Model of Data for Large Shared Data Banks (Codd, 1970). In this
paper, he described a new system for storing and working with large databases. Instead of
records being stored in some sort of linked list of free-form records as in Codasyl, Codd’s
idea was to use a “table” of fixed-length records. A linked-list
system would be very inefficient when storing “sparse” databases where some of the data for any
one record could be left empty. The relational model solved this by splitting the data into a series
of normalized tables, with optional elements being moved out of the main table to where they
would take up room only if needed.

user table

login   first    last
Mark    Samuel   Clemens
Lion    Lion     Kimbro
Kitty   Amber    Straub

phone (“related table”)

login   number
Mark    555.555.5555
In the relational model, related records are linked together with a “key”.

For instance, a common use of a database system is to track information about users, their name,
login information, various addresses and phone numbers. In the navigational approach all of these
data would be placed in a single record, and unused items would simply not be placed in the
database. In the relational approach, the data would be normalized into a user table, an address
table and a phone number table (for instance). Records would be created in these optional tables
only if the address or phone numbers were actually provided. Linking the information back
together is the key to this system. In the relational model, some bit of information was used as a
“key”, uniquely defining a particular record. When information was being collected about a user,
information stored in the optional (or related) tables would be found by searching for this key. For
instance, if the login name of a user is unique, addresses and phone numbers for that user would
be recorded with the login name as its key. This “re-linking” of related data back into a single
collection is something that traditional computer languages are not designed for.
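The normalization and key-based re-linking described above can be sketched with Python's
built-in sqlite3 module; the user/phone tables mirror the illustrative example in the text, not
any real system:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user  (login TEXT PRIMARY KEY, first TEXT, last TEXT);
CREATE TABLE phone (login TEXT REFERENCES user(login), number TEXT);
INSERT INTO user  VALUES ('Mark', 'Samuel', 'Clemens'),
                         ('Lion', 'Lion',   'Kimbro'),
                         ('Kitty','Amber',  'Straub');
-- Only Mark provided a phone number, so only Mark gets a row here.
INSERT INTO phone VALUES ('Mark', '555.555.5555');
""")

# Re-linking via the key: join the optional phone table back to the user table.
row = conn.execute("""
    SELECT u.first, u.last, p.number
    FROM user u JOIN phone p ON p.login = u.login
""").fetchone()
print(row)   # ('Samuel', 'Clemens', '555.555.5555')
```

Users without phone numbers simply have no row in the related table, which is exactly how the
relational model avoids the wasted space of sparse free-form records.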

Just as the navigational approach would require programs to loop in order to collect records, the
relational approach would require loops to collect information about any one record. Codd’s
solution to the necessary looping was a set-oriented language, a suggestion that would later spawn
the ubiquitous SQL. Using a branch of mathematics known as tuple calculus, he demonstrated that
such a system could support all the operations of normal databases (inserting, updating etc.) as
well as providing a simple system for finding and returning sets of data in a single operation.
Codd’s paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They
started a project known as INGRES using funding that had already been allocated for a
geographical database project, using student programmers to produce code. Beginning in 1973,
INGRES delivered its first test products, which were generally ready for widespread use in 1979.
During this time, a number of people had moved “through” the group; perhaps as many as 30
people worked on the project, about five at a time. INGRES was similar to System R in a number
of ways, including the use of a “language” for data access, known as QUEL. QUEL was in fact
relational, having been based on Codd’s own Alpha language, but it has since been corrupted to
follow SQL, thus violating much the same concepts of the relational model as SQL itself. IBM
itself did one test implementation of the relational model, PRTV, and a production one, Business
System 12, both now discontinued. Honeywell did MRDS for Multics, and now there are two new
implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called
relational are actually SQL DBMSs. In 1970, the University of Michigan began development of the
Micro DBMS. It was used to manage very large data sets by the US Department of Labor, the
Environmental Protection Agency and researchers from the University of Alberta, the University
of Michigan and Wayne State University. It ran on mainframe computers using the Michigan
Terminal System. The system remained in production until 1998.

IBM started working on a prototype system loosely based on Codd’s concepts, as System R, in the
early 1970s. The first version was ready in the mid-1970s, and work then started on multi-table
systems in which the data could be split so that all of the data for a record (some of which is
optional) did not have to be stored in a single large “chunk”. Subsequent multi-user versions
were tested by customers in 1978 and 1979, by which time a standardized query language, SQL, had
been added. Codd’s ideas were establishing themselves as both workable and superior to Codasyl,
pushing IBM to develop a true production version of System R, known as SQL/DS, and, later,
Database 2 (DB2). Many of the people involved with INGRES became convinced of the future
commercial success of such systems, and formed their own companies to commercialize the work
but with an SQL interface. Sybase, Informix, NonStop SQL and eventually Ingres itself were all
being sold as offshoots of the original INGRES product in the 1980s. Even Microsoft SQL Server
is actually a re-built version of Sybase, and thus of INGRES. Only Larry Ellison’s Oracle
started from a different chain, based on IBM’s papers on System R, and beat IBM to market when
the first version was released in 1978.

Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which
is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications
(the .org and .info domain name registries use it as their primary data store, as do many large
companies and financial institutions). In Sweden, Codd’s paper was also read, and Mimer SQL was
developed from the mid-1970s at Uppsala University. In 1984, this project was consolidated into
an independent enterprise. In the early 1980s, Mimer introduced transaction handling for high
robustness in applications, an idea that was subsequently implemented in most other DBMSs. The
1990s, along with a rise in object-oriented programming, saw a growth in how data in various
databases were handled. Programmers and designers began to treat the data in their databases as
objects. That is to say that if a person’s data were in a database, the person’s attributes,
such as their address, phone number, and age, were now considered to belong to that person
instead of being extraneous data. This allows relations between data to be relations between
objects and their attributes, and not to individual fields. Another big game changer for
databases in the 1990s was the focus on increasing reliability and access speeds. Two professors
from the University of Wisconsin at Madison published an article at an ACM-associated conference
outlining their methods for increasing database performance. The idea was to replicate specific
important, and often queried, information and store it in a smaller temporary database that
linked these key features back to the main database. This meant that a query could search the
smaller database much more quickly, rather than searching the entire dataset. This eventually
led to the practice of indexing, which is used by almost every operating system from Windows to
the system that operates Apple iPod devices.

In the 21st century a new trend of NoSQL databases was started. Those non-relational databases
are significantly different from the classic relational databases. They often do not require fixed
table schemas, avoid join operations by storing denormalized data, and are designed to scale
horizontally. Most of them can be classified as either key-value stores or document-oriented
databases. In recent years there has been a high demand for massively distributed databases with
high partition tolerance, but according to the CAP theorem it is impossible for a distributed
system to simultaneously provide consistency, availability and partition tolerance guarantees. A
distributed system can satisfy any two of these guarantees at the same time, but not all three.
For that reason many NoSQL databases use what is called eventual consistency to provide both
availability and partition tolerance guarantees with a maximum level of data consistency. The
most popular software in that category includes memcached, Redis, MongoDB, Apache Cassandra and
HBase.
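A toy key-value store makes the NoSQL trade-off described above concrete: values are looked up
by key alone, with no fixed table schema and no join operations. This is a minimal illustrative
sketch, not the design of any of the products named above:

```python
class KeyValueStore:
    """A dictionary-backed key-value store: no schema, no joins."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # Denormalized: the whole document is stored under one key.
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

store = KeyValueStore()
store.put("user:muhd", {"name": "Muhd", "plan": "basic", "phones": ["555-0100"]})
print(store.get("user:muhd")["plan"])   # basic
```

Because each value is self-contained, such a store shards easily across machines by key, which
is exactly the horizontal scaling the text describes.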

In 2008, database management was in need of a new style of database to solve current database
management problems. Researchers realized that the old trends of database management were
becoming too complex, and there was a need for automated configuration and management. Surajit
Chaudhuri, Gerhard Weikum and Michael Stonebraker were pioneers who dramatically affected
thinking about database management systems. They believed that database management needed a more
modular approach and that there were too many specifications needed for users. Since this new
development process of database management there are more possibilities. Database
management is no longer limited to “monolithic entities”; many solutions have been developed to
satisfy the individual needs of users. The development of numerous database options has created
flexibility in database management. There are several ways database management has affected the
field of technology. Because organizations’ demand for directory services has grown as they
expand in size, businesses use directory services that provide prompted searches for company
information. Mobile devices are able to store more than just the contact information of users,
and can cache and display a large amount of information on smaller displays. Search engine
queries are able to locate data within the World Wide Web. Retailers have also benefited from
the development with data warehousing.

A good online database must do more than simply host your data on its site. It should also have an
easy interface for designing a data structure; make it simple to upload, download and edit existing
data; offer robust ways of viewing and interacting with the information; and provide granular
administrative control over who can view and alter information.

To test each online database offering, then, I created a simple table; designed more complex, inter-
related tables; uploaded existing data; and embedded results in an existing Web site. For the multi-
table effort, I included many-to-many relationships, where, say, a category includes many
products, while a product belongs to many categories. For example, the iPhone can be listed under
both “mobile devices” and “personal technology”, and there are many more entries in “mobile
devices” than just the iPhone (despite what some iPhonatics may think). Such many-to-many
relationships are precisely where structuring data can prove most useful, but they’re also often the
most difficult to implement.

I also imported existing data from Computerworld. I started with a table of all our product
categories – browsers, desktop apps, desktop systems, laptops and so on. Next, I imported a table
of products reviewed: Asus EeePC, BlackBerry Curve, iPhone, etc. The product table already
included categories for each product.

A key issue: how to make the database “know” that the products in one table map to the product
names in another table – especially when the field contains more than one entry. This was a
challenging feature for several of these services.
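The standard relational answer to that challenge is a junction table holding one row per
(product, category) pair. A minimal sketch in sqlite3, reusing the iPhone example from the text
(the table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product  (name TEXT PRIMARY KEY);
CREATE TABLE category (name TEXT PRIMARY KEY);
-- The junction table expresses the many-to-many relationship:
-- one row per (product, category) pair.
CREATE TABLE product_category (
    product  TEXT REFERENCES product(name),
    category TEXT REFERENCES category(name),
    PRIMARY KEY (product, category)
);
INSERT INTO product  VALUES ('iPhone'), ('BlackBerry Curve');
INSERT INTO category VALUES ('mobile devices'), ('personal technology');
INSERT INTO product_category VALUES
    ('iPhone', 'mobile devices'),
    ('iPhone', 'personal technology'),
    ('BlackBerry Curve', 'mobile devices');
""")

cats = [r[0] for r in conn.execute(
    "SELECT category FROM product_category WHERE product = ? ORDER BY category",
    ("iPhone",))]
print(cats)   # ['mobile devices', 'personal technology']
```

Neither base table ever stores a multi-valued field; the relationship lives entirely in the
junction table, which is why a single field containing several entries is so awkward to map.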

Writing in 2012, Craig S. Mullins observed that change is a fact of life, and each of the major
DBMS products changes quite rapidly. A typical release cycle for DBMS software is 18 to 24
months for major releases, with constant bug fixes and maintenance updates delivered between
major releases. Keeping DBMS software up to date can be a full-time job, and the DBMS software
must conform to the organization’s need to minimize business disruptions due to outages and
database unavailability. A DBMS makes it possible for end users to create, read, update and
delete data in a database; the DBMS essentially serves as an interface between the database and
end users or application programs, ensuring that the data is consistently organized and remains
easily accessible. Using a DBMS to store and manage data lets end users and application
programmers access and use the same data while managing data integrity; it provides a central
store of data that can be accessed by multiple users in a controlled manner. The storage and
management of data within the DBMS provides:

 Data abstraction and independence
 Data security
 A locking mechanism for concurrent access
 Uniform administration procedures for data
 The ability to swiftly recover from crashes and errors
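The last point above, recovery from errors, rests on transactions: a group of changes either
applies completely or not at all. A minimal sketch in sqlite3 (the account table is a
hypothetical example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('A', 100), ('B', 0)")
conn.commit()

# If any statement in the transaction fails, rollback restores the state
# from before the transaction began - no half-applied transfer survives.
try:
    conn.execute("UPDATE account SET balance = balance - 30 WHERE name = 'A'")
    conn.execute("INSERT INTO account VALUES ('A', 999)")  # violates the key -> fails
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()

balance = conn.execute("SELECT balance FROM account WHERE name = 'A'").fetchone()[0]
print(balance)   # 100 - the partial update was undone
```

A production DBMS extends the same guarantee across crashes by writing a log before applying
changes, so recovery can redo committed work and undo uncommitted work.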

The degree to which the administration of a database is automated dictates the skills and personnel
required to manage the database. Database administration work is complex, repetitive, time-
consuming and required significant training, as automation increases, the personnel needs of the
organization splits into highly skilled workers to create and manage the automation and a group of
lower skilled line.

CHAPTER THREE

3.0 RESEARCH METHODOLOGY

3.1 INTRODUCTION

The new online database system has made it faster and easier to add, edit, and delete any
unwanted data from the database. Data can easily be recorded and retrieved at any time by the
use of a computer system. The new online database system also makes it possible to grant access
to any user of the database and provides security for the database. This new online database
system saves time and energy.

This part presents the procedure or steps and method adopted for the study:
A study of an organized private-sector company, specifically the Dangote Group of Companies,
was carried out. In this company, only the personnel in the relevant departments were selected
to fill the questionnaires, coupled with secondary information obtained through Entrepreneur
magazine.

This chapter will discuss the following:-


1) Research design.
2) The Population.
3) Sample and sampling techniques.
4) Procedure for data collection.
5) Method of data Analysis.

3.2 RESEARCH DESIGN

The research design is the design used in this study of the effectiveness of the online
database. The design of the questionnaire will depend on whether the researcher wishes to
collect exploratory information, i.e. qualitative information for the purpose of better
understanding. The design will ensure that each respondent receives the same stimuli, with
prescribed definitions or explanations for each question, so that the questionnaire handles the
questions consistently and answers from the respondents can be clarified where necessary.

The research design will go as follows:-

1) What kind of response does the researcher want from the respondents, i.e. Agreed or
Disagreed?
2) How consistently will the respondents answer the questions asked by the researcher?
3) What is the real quality of the information collected for the database by the researcher?

3.3 THE POPULATION

Although it is the Dangote Group of Companies that the research is mainly concerned with, the
researcher also interviewed other competitors for comparative analysis. The problem encountered,
however, was how to determine the sample size of the whole federation. The online database has
shown the population figures for staff and customers respectively.

The Dangote companies have 22,000 employees across subsidiaries including cement, sugar, flour,
etc. I looked into two of their departments, Information and Communication Technology (ICT) and
the Record-keeping section, as the sample size is limited to those employees at the Lagos State
headquarters.

The total number of Information and Communication Technology (ICT) and Record-keeping staff is
fifty (50), to whom the questionnaire was distributed and whose responses were analyzed.

3.4 SAMPLE AND SAMPLING TECHNIQUES

For easy accessibility and effective coverage, the researcher has chosen two of the departments,
Information and Communication Technology (ICT) and Record-keeping, in Lagos State, selected
randomly for easy accessibility; these are:

Across the selected companies:
Number of ICT staff: 9; Record-keeping staff: 16.
Number of ICT staff: 7; Record-keeping staff: 18.
Total number of ICT staff: 16.
Total number of Record-keeping staff: 34.
The sample was randomly selected from the above as the research sample size.

In this method, non-probability or convenience sampling was applied. Non-probability sampling is
a method of sampling in which members of the population do not have equal chances of being
selected or sampled. In the case of simple random samples, for instance, there are only a few
documents in the database for some majors; these are therefore less likely to be sampled with
this method, though it would be desirable to have some samples of every major if possible.

Sampling is one approach which can be adopted when the data is voluminous. Note that sampling
does not mean that we are not equally interested in all the items in the population.
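Simple random sampling, one of the techniques mentioned above, can be sketched with Python's
standard random module. The staff lists here are hypothetical stand-ins for the ICT and
Record-keeping employees, not real personnel data:

```python
import random

# Hypothetical staff lists standing in for the two departments.
ict     = [f"ICT-{i}"    for i in range(16)]
records = [f"Record-{i}" for i in range(34)]
population = ict + records           # the 50 staff who received questionnaires

random.seed(1)                        # fixed seed so the sketch is repeatable
sample = random.sample(population, 25)  # simple random sample, no replacement
print(len(sample), len(set(sample)))    # 25 25 - no one is selected twice
```

random.sample draws without replacement, so every member of the population has an equal chance
of selection, which is exactly the property that distinguishes probability sampling from the
convenience sampling also discussed in this section.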

3.5 PROCEDURES FOR DATA COLLECTION

There are various methods of data collection and different procedures to follow in collecting
data. This study uses the questionnaire technique.
The procedure used in gathering data for this project is the questionnaire, which is designed in
such a way that the respondents are expected to respond based on the questions. The
questionnaire comprises two (2) sections, A and B. Section A covers the personal information of
the respondents, while Section B contains the items the respondent is required to fill in after
reading the questions.
These procedures are intended to help improve the usefulness, timeliness, accuracy, and
comparability of the data that inform key policy decisions; to minimize the time, cost, and
effort required of data providers; and to schedule the data collection, to the extent possible,
at the convenience of the data providers and with adequate time to respond.

3.5 METHOD OF DATA ANALYSIS

The method includes the use of a questionnaire addressed to the company's officers to obtain
relevant information.

For the analysis of the collected data, the simple percentage technique is adopted: the
frequency of responses to each item on the questionnaire is counted, and from it the
percentage of respondents is calculated for that item.

The frequencies used are based on the responses received in relation to the questionnaires
returned by the company's officers, and percentages are computed from the observed
frequencies. Information obtained from the Dangote Group is used to support the questionnaire
analysis and interpretation, and information obtained from verbal interviews is also used.
Before considering the various ways of analyzing and discussing data, one needs to review the
differences between qualitative and quantitative research and between qualitative and
quantitative data.
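The simple percentage technique described above can be sketched in a few lines of Python. This is an illustrative sketch added for clarity, not part of the original analysis; the counts shown (45 Yes and 5 No responses out of 50 returned questionnaires) are example figures of the kind reported in the tables of Chapter Four.

```python
def simple_percentage(frequencies):
    """Return the percentage of respondents choosing each option, rounded to 1 d.p."""
    total = sum(frequencies.values())
    return {option: round(100 * count / total, 1)
            for option, count in frequencies.items()}

# Example: 45 "Yes" and 5 "No" responses to one questionnaire item.
print(simple_percentage({"Yes": 45, "No": 5}))  # {'Yes': 90.0, 'No': 10.0}
```

Each percentage is simply the item's response frequency divided by the total responses for that item, multiplied by 100 — the same computation applied to every table in Chapter Four.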
The purpose of analyzing data is to obtain usable and useful information. The analysis, regardless
of whether the data is qualitative or quantitative, may:

1) Describe and summarize the data.

2) Identify relationships between variables.

3) Compare variables.

4) Identify differences between variables.

5) Forecast outcomes.

Hence, the literature survey that was carried out guides the researcher through the various
data analysis methods that have been used in similar studies. The research paradigm, the
methodology, and the type of data collected also assist in choosing the analysis, and a
number of software packages are available to facilitate it. Multivariate Statistics:
Concepts and Applications, by David W. Stockburger (2003), is one such reference on data
analysis.

CHAPTER FOUR

4.0 DATA PRESENTATION AND ANALYSIS:

INTRODUCTION

In order to attain the aims and objectives of this research, this chapter of the project
analyzes the results from the information collected through questionnaires. The
questionnaires were distributed to the respondents, the information collected was
statistically analyzed, and the results are presented and discussed accordingly.

The questionnaire is divided into two main parts. The first part, Section A, requests four
items of personal information from the respondent, while the second part, Section B, contains
the research questions. Each Section B item offers two options, Yes or No, from which the
respondent selects one.

Fifty (50) copies of the questionnaire were distributed according to the random sampling
technique, as follows:

1. First company: Information Communication Technology staff, 9; Record Keeping staff, 16.

2. Second company: Information Communication Technology staff, 7; Record Keeping staff, 18.

All fifty copies of the questionnaire were successfully returned, and all fifty (50) returned
copies contained sufficient and relevant information to answer the research questions.

The interpretation of the data gives meaning to the analysis, enabling those who read the
research to understand what the analysis is about.

The data analysis consists of tables in line with the research questions on the effectiveness
of online database (A Case Study of Dangote Group of Company).

The collected data are presented in tabular form: the responses are recorded beside their
frequency distribution, after which the percentages of these frequencies are computed in the
course of interpretation.

The tables are presented below.

To find out whether the online database is reliable to the organization, the respondents were
asked whether the online database is reliable to the organization or not.

The respondents' views are tabulated in Table 4:0.

TABLE 4:0

RESPONSE FREQUENCY PERCENTAGE


Yes 45 90%
No 5 10%
Total 50 100%

From the table above, 90% were of the opinion that the online database is reliable to the
organization, while 10% were of the opinion that it is not.

Based on the table, it is clear that the majority believe the online database is reliable to
the organization, while the minority believe it is not.

This clearly indicates that the online database is reliable to the organization.

The respondents were also asked: does the online database generate any fault?

The respondents' views are tabulated in Table 4:1.

TABLE 4:1

RESPONSE FREQUENCY PERCENTAGE


Yes 48 96%
No 2 4%
Total 50 100%

From the above table, 96% agreed that the online database generates faults for the
organization, while 4% were of the opinion that it does not.

This shows that fault generation by the online database is lowering the standard of the
organization.

The respondents were also asked: do these effects bring any fault to the system?

The responses are tabulated in Table 4:2.

TABLE 4:2

RESPONSE FREQUENCY PERCENTAGE


Yes 44 88%
No 6 12%
Total 50 100%

From the above table, 88% of respondents believed that these effects bring faults to their
system, while 12% were of the opinion that they do not.

For this reason, the table above shows that faults in the system carry serious consequences
for the organization.

The respondents were also asked: on a normal basis, do you believe the speed of operation
affects the system with the database?

The responses are tabulated in Table 4:3.
The responses were tabulated in table 4:3

TABLE 4:3

REPONSE FREQUENCY PERCENTAGE


Yes 47 94%
No 3 6%
Total 50 100%

From the table above, 94% were of the opinion that the speed of operation of the system with
the database is adequate most of the time, while only 6% opined that the speed of operation
is sometimes slow and not adequate, and that this can generate a lot of problems for the
organization.

The respondents were also asked: does the organization have an efficient strategy that could
bring down its image?

The responses are tabulated in Table 4:4.

TABLE 4:4

RESPONSE FREQUENCY PERCENTAGE


Yes 46 92%
No 4 8%
Total 50 100%

From the above table, 92% were of the opinion that the strategy in use could bring down the
image of the organization, while 8% were of the opinion that it could not.

This shows that the strategy in use could bring down the image of the organization.

The respondents were also asked: do you believe the current strategy will eradicate any
issues?

The responses are tabulated in Table 4:5.

TABLE 4:5

RESPONSE FREQUENCY PERCENTAGE


Yes 20 40%
No 30 60%
Total 50 100%

The above table clearly indicates that 40% were of the opinion that the current strategy will
eradicate the issues of the organization, while 60% were of the opinion that it will not.

This means that the majority do not believe the current strategy will eradicate the issues of
the organization.

The respondents were also asked: does it affect the manual system of the database?

The responses are tabulated in Table 4:6.

TABLE 4:6

RESPONSE FREQUENCY PERCENTAGE


Yes 10 20%
No 40 80%
Total 50 100%

From the above table, 20% were of the opinion that it affects the manual system of the
database, while 80% were of the opinion that it does not, owing to the ease of accessing the
few records kept manually in the organization. Online technology has made everything easier
for companies and large organizations.

This clearly indicates that keeping manual records does not affect the database of the
organization.

The respondents were also asked: do you believe that improving hypothesis testing in the
online database is effective for the organization?

The responses are tabulated in Table 4:7.

TABLE 4:7

REPONSE FREQUENCY PERCENTAGE


Yes 47 94%
No 3 6%
Total 50 100%

In the table above, 94% were of the opinion that improving hypothesis testing in the online
database will enable customers to access the organization's profile and learn what the
organization and its companies are about, while the minority, 6%, were of the opinion that
some of the documents and records of the organization are classified and should not be
exposed.

From the statement and table above, it is clear that letting customers know some of the
organization's profile is not a problem, particularly for following its latest updates and
job recruitments/vacancies.

The respondents were also asked: do you believe you can improve the argumentation of the
online database?

The responses are tabulated in Table 4:8.

TABLE 4:8

RESPOSE FREQUENCY PERCENTAGE


Yes 15 30%
No 35 70%
Total 50 100%

Table 4:8 shows that the minority, 30%, viewed that improving the argumentation of online
searching does not bring any impact or achievement to the organization's online searching,
while the majority, 70%, believed that argumentation of online searching adds knowledge of
what the organization is about.

This means that argumentation of online searching gives an idea of the organization's history
and background and adds knowledge through the argument.

The respondents were also asked about the use of systematic analysis: do you believe it is
time-consuming?

The responses are tabulated in Table 4:9.

TABLE 4:9

RESPONSE FREQUENCY PERCENTAGE


Yes 48 98%
No 1 2%
Total 49 100%

From the above table, 98% were of the opinion that the use of systematic analysis for
comprehensiveness (though time-consuming) has improved the way the organization analyzes its
records and products, while 2% believed that it has had no effect on the organization.

The respondents were also asked: do you believe metacognition will improve the online
database?

The responses are tabulated in Table 4:10.

TABLE 4:10

RESPONSE FREQUENCY PERCENTAGE


Yes 48 96%
No 2 4%
Total 50 100%

From the above table, 96% were of the opinion that improving the metacognition of the online
database will help the higher-order thinking that enables understanding, analysis, and
control of the organization's cognitive processes, while 4% believed that there is no need
for metacognition in the organization.

4.1 HYPOTHESIS TESTING

In order to test the stated hypotheses, the expected frequencies were calculated from the
responses to the most relevant questionnaire items, compared with the observed frequencies,
and the resulting calculated value was compared with the table (critical) value.

Decision rule:-

Reject the null hypothesis 𝐻0 if the calculated value is greater than the table value;
otherwise, the null hypothesis 𝐻0 is retained. This decision rule applies to all the
hypotheses below.
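The comparison of observed and expected frequencies against a table value corresponds to a chi-square goodness-of-fit test. The sketch below is illustrative only: the observed counts, the 25/25 expected split under the null hypothesis, and the 5% critical value for one degree of freedom are assumptions made for the example, not figures taken from the study.

```python
def chi_square_statistic(observed, expected):
    """Sum of (O - E)^2 / E over all response categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Illustrative example: 45 Yes / 5 No observed; 25/25 expected under H0.
observed = [45, 5]
expected = [25, 25]
stat = chi_square_statistic(observed, expected)  # (20**2 + 20**2) / 25 = 32.0

CRITICAL_1DF_5PCT = 3.841  # chi-square table value, 1 degree of freedom, alpha = 0.05
decision = "reject H0" if stat > CRITICAL_1DF_5PCT else "retain H0"
print(stat, decision)  # 32.0 reject H0
```

With these illustrative numbers the calculated value exceeds the table value, so the null hypothesis would be rejected — the same decision rule applied verbally in the hypotheses below.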

Hypothesis 4:0

𝐻0 : The online database is not reliable to the organization.

𝐻1 : The online database is reliable to the organization.

Since the calculated value is greater than the table value, we reject the null hypothesis
(𝐻0 ) and accept the alternative hypothesis (𝐻1 ), which states that the online database is
reliable to the organization.


Hypothesis 4:1

𝐻0 : The online database does not generate any fault.

𝐻1 : The online database generates faults.

Since the calculated value is greater than the table value, we reject the null hypothesis
(𝐻0 ) and accept the alternative hypothesis (𝐻1 ), which states that the online database
generates faults.


Hypothesis 4:2

𝐻0 : The online database does not bring faults to the system.

𝐻1 : The online database brings faults to the system.

Since the calculated value is greater than the table value, we reject the null hypothesis
(𝐻0 ) and accept the alternative hypothesis (𝐻1 ), which states that these effects bring
faults to the system.


Hypothesis 4:3

𝐻0 : The speed of operation with the database is slow and not adequate.

𝐻1 : The speed of operation with the database is adequate and working most of the time.

Since the calculated value is greater than the table value, we reject the null hypothesis
(𝐻0 ) and accept the alternative hypothesis (𝐻1 ), which states that the speed of operation
is working most of the time.


Hypothesis 4:4

𝐻0 : The strategy in use cannot bring down the image of the organization.

𝐻1 : The strategy in use can bring down the image of the organization.

Since the calculated value is greater than the table value, we reject the null hypothesis
(𝐻0 ) and accept the alternative hypothesis (𝐻1 ), which states that the strategy in use can
bring down the image of the organization.


Hypothesis 4:5

𝐻0 : The current strategy will not eradicate the issues of the organization.

𝐻1 : The current strategy will eradicate the issues of the organization.

Since the calculated value is less than the table value, we fail to reject the null
hypothesis (𝐻0 ), which states that the current strategy will not eradicate the issues of
the organization.


Hypothesis 4:6

𝐻0 : The manual system of record keeping is not affecting the organization's database.

𝐻1 : The manual system of record keeping is affecting the organization's database.

Since the calculated value is less than the table value, we fail to reject the null
hypothesis (𝐻0 ), which states that the manual system is not affecting the organization.


Hypothesis 4:7

𝐻0 : Improving hypothesis testing in the online database is not effective for the organization.

𝐻1 : Improving hypothesis testing in the online database is effective for the organization.

Since the calculated value is greater than the table value, we reject the null hypothesis
(𝐻0 ) and accept the alternative hypothesis (𝐻1 ), which states that improving hypothesis
testing is effective for the organization.


Hypothesis 4:8

𝐻0 : The improvement of argumentation of the online database adds knowledge to the organization.

𝐻1 : The improvement of argumentation of the online database does not bring any impact.

Since the calculated value is less than the table value, we fail to reject the null
hypothesis (𝐻0 ), which states that argumentation adds knowledge to the organization.


Hypothesis 4:9

𝐻0 : The systematic analysis, though time-consuming, has not improved the organization.

𝐻1 : The systematic analysis, though time-consuming, has improved the organization.

Since the calculated value is greater than the table value, we reject the null hypothesis
(𝐻0 ) and accept the alternative hypothesis (𝐻1 ), which states that the systematic analysis
has improved the organization.


Hypothesis 4:10

𝐻0 : There is no need for metacognition in the organization.

𝐻1 : The metacognition of the online database improves higher-order thinking and
understanding in the organization.

Since the calculated value is greater than the table value, we reject the null hypothesis
(𝐻0 ) and accept the alternative hypothesis (𝐻1 ), which states that metacognition will
improve the organization.

4.2 RESULT

Following the analysis employed and the responses obtained from the questionnaire, it was
observed that all 50 (100%) copies of the questionnaire were successfully retrieved from the
respondents. The research results were as follows:

1. The results indicate that the online database is reliable to the organization.

2. The results clearly show that fault generation by the online database is lowering the
standard of the organization.
3. The results also indicate that these effects bring faults to the system, with serious
consequences for the organization.
4. The respondents indicate that the speed of operation is adequate most of the time.
5. The results show that the strategy in use could bring down the image of the organization.
6. The results show that the current strategy is not expected to eradicate the issues of the
organization.
7. The results indicate that keeping manual records does not affect the database of the
organization.
8. The results show that allowing customers to know some of the organization's profile is not
a problem, in terms of its latest updates and job recruitments/vacancies.
9. The results show that argumentation of the online database gives an idea of the
organization and adds knowledge through the argument.
10. The results indicate that the systematic analysis, though time-consuming, has improved
the organization.
11. The results clearly indicate that metacognition will improve thinking in the organization.

4.3 DISCUSSION OF THE PROBLEMS

The Dangote Group's system is at risk of decline unless apparent measures are taken to enable
the organization to restore its standard. Any effort to uplift the standard of the Dangote
Group should include plans and measures to eradicate, or at least control, misuse of the
organization's resources.

For this reason, this research project was purposely designed to find out the effectiveness
of online database (A Case Study of Dangote Group of Company).

Reliance on the online database is increasing rapidly in the company. However, the future of
the company faces threats, especially concerning the security of the organization's standard,
because of questions about how reliable the company's online database is and about the
strategies for handling any fault in its system. The company is where the leaders of tomorrow
are trained, yet it is undermined today by misguided job recruitment.

Many staff feel a loss of confidence in the company's systems, especially in the Dangote
Group, because of concerns about what the company will produce and how it will develop.

To remove all the lapses of the company, the issues must be addressed pragmatically and
things done the right way in order to move the company forward, setting aside selfishness
and materialism.

CHAPTER FIVE

5.0 SUMMARY

The project was purposely designed to carry out research on the topic: the effectiveness of
online database (A Case Study of Dangote Group of Company).

The effectiveness of the database shows how the databases are operated, protected from
issues, and secured. This research was presented and analyzed with information obtained using
the questionnaire; the view of management is very relevant to the crux of the effectiveness
activities initiated and practiced in the company. In seeking to relate the effectiveness of
the online database and to suggest the best way to enhance it, company policy formulation is
necessary to guide specific actions for the maintenance and continuity of some of the policy
areas used by the company.

5.1 CONCLUSION

The objective of this work is to establish the effectiveness of the online database. From the
tables, the results show that the tables have been structured in a way that avoids data
redundancy. To resolve the issues of the company, the company must overcome selfishness and
materialism in its system. Changing course to achieve its goals, based on the laid-down
policy of the company, requires innovation; becoming a successful company is a different ball
game altogether.

5.2 SUMMARY OF THE FINDINGS

This research project was carried out on the effectiveness of online database (A Case Study
of Dangote Group of Company).

The study of an organized private-sector body, specifically the Dangote Group of Companies,
was carried out. In this company, only the personnel in the relevant departments were
selected to fill in the questionnaires, coupled with secondary information obtained. The
research was undertaken with sampling, the sample being drawn by questionnaire from two
randomly selected departments of the company. The findings cover the method of data
collection, the method of analysis, and the population of the company.

5.3 RECOMMENDATIONS
This research work was carried out based on the lapses observed at the Dangote Group of
Company and shows how effective the online database is for the organization, using all the
data gathered from the respondents.

In order to utilize the database effectively, the following points are imperative for future
work.

To maintain:

Data consistency and integrity

Reduced data redundancy

Increased system performance

Maximum user flexibility

A usable system

The database should prevent:

Unnecessary or forgotten data

Inflexibility in database re-sizing or modification

Poor data element specification

Poor integration between the parts of the database

Unsupported applications

High database update costs
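As an illustrative sketch of the first two points (consistency/integrity and reduced redundancy), the example below uses Python's built-in sqlite3 module with a hypothetical staff/department schema — not the company's actual database. Storing each department name once and referencing it by key prevents inconsistent duplicate copies.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

# Each department name is stored exactly once; staff rows reference it by id,
# so renaming a department cannot leave stale duplicate copies behind.
conn.execute("CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL)")
conn.execute("""
    CREATE TABLE staff (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        dept_id INTEGER NOT NULL REFERENCES department(id)
    )
""")

conn.execute("INSERT INTO department (id, name) VALUES (1, 'ICT'), (2, 'Record Keeping')")
conn.execute("INSERT INTO staff (name, dept_id) VALUES ('A. Bello', 1), ('C. Obi', 2)")

# Integrity in action: a staff row pointing at a non-existent department is rejected.
try:
    conn.execute("INSERT INTO staff (name, dept_id) VALUES ('X. Doe', 99)")
except sqlite3.IntegrityError:
    print("rejected: unknown department")
```

The staff names here are placeholders; the point is the schema shape, which any relational database management system supports in the same way.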

REFERENCES

Bachman, C. (2002). The Database Task Group within CODASYL. San Jose.
Bruce, S. (2009). Design and control of Apache Cassandra and NoSQL.
Chaudhuri, S. (2008). Automated configuration and management of database report.
Codd, E. F. (1999). Work at IBM, San Jose, California, on hierarchical databases.
Codd, E. F. (2001). A relational model of data for large shared data banks.
Hill, D. (1999). A History of Engineering in Classical and Medieval Times.
Majj, R. (2010). United States Geological Survey: the problems of database.
McCaffery, E. P. (2010). Global Cement Directorate. Epsom, UK.
Mullins, C. S. (2012). Database system upgrades.
Stockburger, D. W. (2003). Multivariate Statistics: Concepts and Applications; data analysis.
Stonebraker, M. (2008). Design and control of database. Washington, DC.

APPENDIX

PUBLIC QUESTIONNAIRE FOR RESEARCH PROJECTS:

Dear Respondents,

This questionnaire is meant to gather information for a research project on the effectiveness
of online database (A Case Study of Dangote Group of Company).

SECTION “A”

Personal Information: -

Sex Male ( ) Female ( )

Age: 20-30 ( ) 31-40 ( ) 41 and above ( )

Status: PhD ( ) Doctor ( ) Civil Servant ( ) Masters ( ).

SECTION “B”

1. Is the online database reliable to the organization?


Yes ( ) No ( )

2. Does the online database generate any fault?

Yes ( ) No ( )

3. Does the effect bring any fault to the system?

Yes ( ) No ( )

4. On a normal basis, do you believe the speed of operation affects the system with the
database?
Yes ( ) No ( )

5. Does the organization have an efficient strategy that could bring down its image?
Yes ( ) No ( )

6. Do you believe the current strategy will eradicate any issues?


Yes ( ) No ( )

7. Does it affect the manual system of the database?


Yes ( ) No ( )

8. Do you believe improving your hypothesis testing in online database is effective to the
organization?
Yes ( ) No ( )

9. Do you believe that you can improve the argumentation of the online database?


Yes ( ) No ( )

10. Regarding the use of systematic analysis for comprehensiveness: do you believe it is
time-consuming?
Yes ( ) No ( )

11. Do you believe that metacognition will improve online database?
Yes ( ) No ( )
