INTRODUCTION
SYSTEM ANALYSIS
Analysis is a detailed study of the various operations performed by the system and
their relationships within and outside the system. System analysis may be considered an
interface between the actual problem and the computer. It is a management technique used in
designing a new system.
EXISTING SYSTEM
The current method of tracking employee attendance relies on manual processes, such
as using physical attendance registers or basic spreadsheets. In this system, employees are
required to sign in and out, and their attendance information is recorded manually.
DISADVANTAGES:
Inaccuracy
Time-Consuming
Limited Accessibility
Scalability Issues
Security Concerns
Difficulty in Leave Management
Lack of Modern Features
PROPOSED SYSTEM
The proposed system replaces the manual registers with an Android application backed
by Firebase. Employees check in and out, apply for leave and regularize missed punches from
the app, while attendance records are stored centrally and are available to the admin in real
time.
ADVANTAGES:
Accuracy Improvement
Time Efficiency
Real-time Accessibility
Scalability
Enhanced Security
Streamlined Leave Management
Modern Features
Regularize Module
CHAPTER III
DEVELOPMENT ENVIRONMENT
The development environment comprises hardware requirements and software
requirements. The hardware requirements consist of the processor, hard disk, mouse, RAM
and keyboard. The software requirements consist of the operating system, front end tool, back
end tool and coding language.
HARDWARE REQUIREMENTS
The processor and RAM play a vital role in the hardware configuration. For the
development of this project, the following hardware requirements have been considered.
TABLE: 3.1 Hardware Requirements
RAM : 8 GB (minimum)
SOFTWARE REQUIREMENTS
The operating system is the major part of the software requirements. The front end
tool and back end tool are used for storing and retrieving information. The coding language is
the most important element in developing the application. For the development of this project,
the following software requirements have been considered.
TABLE: 3.2 Software Requirements
Front End : Android Studio, XML
Back End : Firebase
Coding Language : Java
SOFTWARE DESCRIPTION
FRONT END:
ANDROID STUDIO:
Android Studio was announced on May 16, 2013, at the Google I/O conference. It was in
an early access preview stage starting from version 0.1 in May 2013, then entered the beta
stage starting from version 0.8, which was released in June 2014. The first stable build was released
in December 2014, starting from version 1.0. At the end of 2015, Google dropped support for
Eclipse ADT, making Android Studio the only officially supported IDE for Android
development. On May 7, 2019, Kotlin replaced Java as Google's preferred language for
Android app development. Java is still supported, as is C++.
Features:
● Gradle-based build support
● Android-specific refactoring and quick fixes
● Lint tools to catch performance, usability and version compatibility problems
● A rich layout editor that allows users to drag and drop UI components
● Built-in support for Google Cloud Platform and Firebase integration
● An Android Virtual Device (emulator) to run and debug applications
JAVA:
Java was originally developed by James Gosling at Sun Microsystems. It was released
in May 1995 as a core component of Sun Microsystems' Java platform. The original
and reference implementation of Java compilers, virtual machines, and class libraries were
originally released by Sun under proprietary licenses. As of May 2007, in compliance with
the specifications of the Java Community Process, Sun had relicensed most of its Java
technologies under the GPL-2.0-only license. Oracle offers its own HotSpot Java Virtual
Machine; however, the official reference implementation is the OpenJDK JVM, which is free
and open-source software, is used by most developers, and is the default JVM for almost all
Linux distributions.
Principles
There were five primary goals in the creation of the Java language:
● It must be simple, object-oriented, and familiar.
● It must be robust and secure.
● It must be architecture-neutral and portable.
● It must execute with high performance.
● It must be interpreted, threaded, and dynamic.
XML:
Extensible Markup Language (XML) is a markup language and file format for storing,
transmitting, and reconstructing arbitrary data. It defines a set of rules for
encoding documents in a format that is both human-readable and machine-readable.
The World Wide Web Consortium's XML 1.0 Specification of 1998 and several other related
specifications—all of them free open standards—define XML. The design goals of XML
emphasize simplicity, generality, and usability across the Internet. It is a textual data format
with strong support via Unicode for different human languages. Although the design of XML
focuses on documents, the language is widely used for the representation of arbitrary data
structures such as those used in web services.
The main purpose of XML is serialization, i.e. storing, transmitting, and reconstructing
arbitrary data. For two disparate systems to exchange information, they need to agree upon a
file format. XML standardizes this process. XML is analogous to a lingua franca for
representing information. As a mark-up language, XML labels, categorizes, and structurally
organizes information. XML tags represent the data structure and contain metadata. What's
within the tags is data, encoded in the way the XML standard specifies. An additional XML
schema (XSD) defines the necessary metadata for interpreting and validating XML. (This is
also referred to as the canonical schema.) An XML document that adheres to basic XML
rules is "well-formed"; one that adheres to its schema is "valid."
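For illustration, the following is a minimal, hypothetical well-formed XML fragment for an
employee record; the element names are chosen for this example only and are not taken from
any schema used in the project.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical example: element names are illustrative only -->
    <employee>
        <empId>E101</empId>
        <name>Sample Name</name>
        <doj>2023-01-05</doj>
    </employee>

Every opening tag has a matching closing tag and there is a single root element, which is what
makes the fragment well-formed; validity would additionally require an XML schema (XSD)
describing these elements.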
FIREBASE:
Firebase is a set of hosting services for many types of applications (Android, iOS,
JavaScript, Node.js, Java, Unity, PHP, C++ and others). It offers NoSQL and real-time hosting
of databases, content and social authentication (Google, Facebook, Twitter and GitHub), as
well as services such as notifications and a real-time communication server. Firebase is a
backend platform for building web, Android and iOS applications. It offers a real-time
database, different APIs, multiple authentication types and a hosting platform. The following
paragraphs cover the basics of the Firebase platform and its main components and
sub-components.
Firebase Features: real-time database, cloud storage, authentication, hosting, analytics and
cloud messaging.
Firebase Advantages: it is simple and user-friendly, requires no server-side infrastructure,
and keeps data synchronized across clients in real time.
Firebase Limitations: limited query capabilities compared to SQL databases, vendor
lock-in, and data restricted to its NoSQL format.
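As a brief, hedged sketch of how an Android client might use the Firebase Realtime
Database, the Java snippet below writes a check-in entry; the database path and field names
are assumptions made for this example, not the project's actual schema.

    // Sketch only: assumes the Firebase Realtime Database Android SDK
    // (com.google.firebase:firebase-database) and an initialized app.
    // The "attendance" node and field names are hypothetical.
    import com.google.firebase.database.DatabaseReference;
    import com.google.firebase.database.FirebaseDatabase;

    import java.util.HashMap;
    import java.util.Map;

    public class CheckInWriter {

        public void recordCheckIn(String empId) {
            DatabaseReference ref = FirebaseDatabase.getInstance()
                    .getReference("attendance")   // hypothetical top-level node
                    .child(empId)
                    .push();                      // generates a unique key per entry

            Map<String, Object> entry = new HashMap<>();
            entry.put("type", "CHECK_IN");
            entry.put("timestamp", System.currentTimeMillis());

            // setValue() is asynchronous; listeners attached to the same
            // path receive the update in real time.
            ref.setValue(entry);
        }
    }

Because writes propagate to all connected clients, the admin's view can reflect a check-in as
soon as it is recorded.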
NoSQL (originally referring to "non-SQL" or "non-relational") is a database that provides a
mechanism for the storage and retrieval of data that is modelled in means other than the
tabular relations used in relational databases. Such databases came into existence in the late
1960s, but did not obtain the NoSQL moniker until a surge of popularity in the early
twenty-first century.
NoSQL databases are used in real-time web applications and big data, and their use is
increasing over time. NoSQL systems are also sometimes called "Not only SQL" to emphasize
the fact that they may support SQL-like query languages. A NoSQL database offers
simplicity of design, simpler horizontal scaling to clusters of machines and finer control over
availability. The data structures used by NoSQL databases are different from those used by
default in relational databases, which makes some operations faster in NoSQL. The suitability
of a given NoSQL database depends on the problem it should solve. Data structures used by
NoSQL databases are sometimes also viewed as more flexible than relational database tables.
Many NoSQL stores compromise consistency in favor of availability, speed and
partition tolerance. Barriers to the greater adoption of NoSQL stores include the use of
low-level query languages, lack of standardized interfaces, and huge previous investments in
existing relational databases. Most NoSQL stores lack true ACID (Atomicity, Consistency,
Isolation, Durability) transactions, but a few databases, such as MarkLogic, Aerospike,
FairCom c-treeACE, Google Spanner (though technically a NewSQL database), Symas
LMDB, and OrientDB, have made them central to their designs. Most NoSQL databases offer
a concept of eventual consistency, in which database changes are propagated to all nodes
eventually, so queries for data might not return updated data immediately or might result in
reading data that is not accurate, a problem known as stale reads. Some NoSQL systems may
also exhibit lost writes and other forms of data loss; some provide concepts such as
write-ahead logging to avoid this. For distributed transaction processing across multiple
databases, data consistency is an even bigger challenge, difficult for both NoSQL and
relational databases. Even current relational databases do not allow referential integrity
constraints to span databases. There are few systems that maintain both X/Open XA standards
and ACID transactions for distributed transaction processing.
Advantages of NoSQL:
High scalability – NoSQL databases use sharding for horizontal scaling. Sharding is the
partitioning of data and placing of it on multiple machines in such a way that the order of the
data is preserved. Vertical scaling means adding more resources to the existing machine,
whereas horizontal scaling means adding more machines to handle the data. Vertical scaling is
not easy to implement, but horizontal scaling is. Examples of horizontally scaling databases
are MongoDB, Cassandra, etc. Because of this scalability, NoSQL can handle huge amounts
of data; as the data grows, NoSQL scales itself to handle that data efficiently.
Disadvantages of NoSQL
Narrow focus – NoSQL databases have a very narrow focus, as they are mainly designed for
storage and provide very little functionality beyond it. Relational databases are a better choice
in the field of transaction management than NoSQL.
Management challenge – The purpose of big data tools is to make the management of a
large amount of data as simple as possible, but it is not so easy. Data management in NoSQL
is much more complex than in a relational database. NoSQL, in particular, has a reputation
for being challenging to install and even more hectic to manage on a daily basis.
Backup – Backup is a great weak point for some NoSQL databases like MongoDB, which
has no approach for backing up data in a consistent manner.
Large document size – Some database systems like MongoDB and CouchDB store data
in JSON format. This means that documents are quite large (costing data volume, network
bandwidth and speed), and having descriptive key names actually hurts, since they increase
the document size.
CHAPTER IV
SYSTEM DESIGN
DATA MODEL
A data model is a set of concepts to describe the structure of the database and certain
constraints that the database should obey. The main aim of a data model is to support the
development of information systems by providing the definition and format of data. A data
model can be a diagram or flowchart that illustrates the relationships between data. Usually
data models are specified in a data modelling language. Although capturing all the possible
relationships in a data model can be very time-intensive, it is an important step and should not
be rushed. Well-documented models allow stakeholders to identify errors and make changes
before any programming code has been written. Data modellers often use multiple models to
view the same data and ensure that all processes, entities, relationships, and data flows have
been identified. Data models have evolved through several stages, including the Hierarchical
Model, Network Model, Relational Model, Entity Relationship Model and Object-Oriented
Model.
The structural part of a data model theory refers to the collection of data structures
which make up a database when it is being created. These data structures represent entities
and objects in a database model. The manipulation part of a data model refers to the collection
of operators which can be applied to the data structures.
The main aim of a data model is to support the development of information systems by
providing the definition and format of data. If this is done consistently across systems, then
compatibility of data can be achieved. If the same data structures are used to store and access
data, then different applications can share data. A data model is based on data, data
relationships, data semantics and data constraints. A data model provides the details of the
information to be stored, and is of primary use when the final product is the generation of
computer software code for an application or the preparation of a functional specification to
aid a computer software make-or-buy decision.
CATEGORIES OF DATA MODEL:
Conceptual Data Model: This data model provides concepts that are close to the way
many users perceive data.
Physical Data Model: This data model provides concepts that describe the details of
how data is stored in the computer.
Implementation Data Model: This data model provides concepts that fall between the
above two, balancing user views with some computer storage details.
A Firebase data structure diagram is a visual representation that illustrates the organization
and relationships of data within a Firebase database. Firebase is a cloud-based platform that
offers various services, including a NoSQL database. Unlike traditional relational databases
with fixed schemas, Firebase's Realtime Database and Firestore are NoSQL databases that
store data in a JSON-like format, allowing for more flexibility in data structures.
In a Firebase data structure diagram, you would typically represent collections, documents,
and the fields within those documents. Here's an example of how you might structure such a
diagram:
1. Collections:
Collections are the top-level containers that group related documents; for example, all
employee records can live in a single collection.
2. Documents:
Documents are individual records within a collection. Each document contains a set of fields,
and these fields can hold various types of data. Represent documents within the
corresponding collection.
Document: <employee 1 email id>
Document: <employee 2 email id>
3. Fields:
Fields represent the individual pieces of data within a document. They can be strings,
numbers, arrays, or nested objects. Include the fields within each document to illustrate the
data they contain.
Document: <employee email id>
DOB: “ ”
DOJ: “ ”
Email Id: “ ”
Emp id: “ ”
Phone: “ ”
Qualification: “ ”
Reporting Manager Email Id: “ ”
Reporting Manager Id: “ ”
Reporting Manager Name: “ ”
User Name: “ ”
User Type: “ ”
Weekly Off: “ ”
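A hedged Java sketch of how such a document could be created with the Cloud Firestore
Android SDK is shown below; the collection name "employees" is an assumption, while the
field names follow the diagram above.

    // Sketch only: assumes the Cloud Firestore Android SDK
    // (com.google.firebase:firebase-firestore). The "employees"
    // collection name is hypothetical; field names mirror the
    // structure illustrated above.
    import com.google.firebase.firestore.FirebaseFirestore;

    import java.util.HashMap;
    import java.util.Map;

    public class EmployeeDocumentWriter {

        public void saveEmployee(String emailId) {
            Map<String, Object> fields = new HashMap<>();
            fields.put("DOB", "");
            fields.put("DOJ", "");
            fields.put("Email Id", emailId);
            fields.put("Emp id", "");
            fields.put("Phone", "");
            fields.put("Qualification", "");
            fields.put("Reporting Manager Email Id", "");
            fields.put("Reporting Manager Id", "");
            fields.put("Reporting Manager Name", "");
            fields.put("User Name", "");
            fields.put("User Type", "");
            fields.put("Weekly Off", "");

            FirebaseFirestore.getInstance()
                    .collection("employees")   // hypothetical collection
                    .document(emailId)         // document keyed by email id
                    .set(fields);              // creates or overwrites the document
        }
    }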
DATABASE DESIGN
Database design is concerned with the data focus (data, origin, usage, and format) from
the perspective of the system designer. The end product of the definition phase is called a
database.
• A database should provide for the efficient storage, update and retrieval of data.
• The technique used to improve a data model in preparation for database design is
called data analysis.
• Data analysis is a process that prepares a data model for implementation as a simple,
non-redundant, flexible and adaptable database.
PROCESS MODEL
Process models are processes of the same nature that are classified together into a model.
Thus, a process model is a description of a process at the type level. The same process model
is used repeatedly for the development of many applications and thus, has many
instantiations.
A context diagram, sometimes called a level 0 data-flow diagram, is drawn in order to define
and clarify the boundaries of the software system. It identifies the flows of information
between the system and external entities. The entire software system is shown as a single
process.
DATAFLOW DIAGRAM
DFD describes the processes that are involved in a system to transfer data from the input to
the file storage and reports generation. It uses defined symbols like rectangles, circles and
arrows, plus short text labels, to show data inputs, outputs, storage points and the routes
between each destination. It depicts the data flow from one step to another.
● 0-level DFD
● 1-level DFD
● 2-level DFD
0-level DFD: A context diagram is a top-level data flow diagram which is also known as
"Level 0". It only contains one process node ("Process 0") that generalizes the function of the
entire system in relationship to external entities.
1-level DFD: A level-1 DFD notates each of the main sub-processes that together form the
complete system. A level-1 DFD is an "exploded view" of the context diagram.
2-level DFD: A level-2 DFD offers a more detailed look at the processes that make up an
information system than a level-1 DFD does. It can be used to plan or record the specific
makeup of a system.
Components of Data Flow Diagram:
The data flow diagram includes four main components: entity, process, data store and
data flow. External Entity – Also known as actors, sources or sinks, and terminators, external
entities produce and consume data that flows between the entity and the system being
diagrammed.
CHAPTER V
SOFTWARE DEVELOPMENT
WATERFALL MODEL:
The first formal description of the method is often cited as an article published by
Winston Royce in 1970, although Royce did not use the term "waterfall" in that article.
Royce presented this model as an example of a flawed, non-working model.
● Project is divided into sequential phases, with some overlap and splash back
acceptable between phases.
● Emphasis is on planning, time schedules, target dates, budgets and
implementation of an entire system at one time.
● Tight control is maintained over the life of the project via extensive written
documentation, formal reviews, and approval/signoff by the user and information
technology management occurring at the end of most phases before beginning the
next phase. Written documentation is an explicit deliverable of each phase.
PHASE I-PLANNING
Planning is to generate a high-level view of the intended project and determine the
goals of the project. Our project plan depends on the customer's needs and requirements. The
first phase estimates how much time and cost it will take to finish the project. It plays a vital
role in developing the software project.
PHASE II-ANALYSIS
The goal of systems analysis is to determine where the problem is in an attempt to fix
the system. This step involves breaking down the system into different diagrams to analyse
the situation: analysing project goals, breaking down the functions that need to be created, and
attempting to engage users so that the definite requirements can be defined.
PHASE III-DESIGN
Systems design is the phase where system engineers analyse and understand the
business of the proposed system by studying the user requirements document. The system
design is developed based on the user requirements documents. The designers figure out the
possibilities and techniques by which the user requirements can be implemented. A software
specification document, which serves as a blueprint for the development phase, is generated.
PHASE IV-DEVELOPMENT
Modular and subsystem programming code is accomplished during this stage. Unit
testing and module testing are done in this stage by the developers. The development stage is
intermingled with the next, in that individual modules need testing. The goals and targets of
the system are achieved by establishing schedules during project development.
MODULAR DESCRIPTION
The modular description contains a full description of every module used in the
application. The modules are:
Log In
Admin
Employee Module
Report and Mail Module
Login Module: Authenticates the user with an email id and password and directs them to
the admin or employee view based on their user type.
Admin Module: Allows the admin to maintain employee records, monitor attendance, and
act on leave and regularization requests.
Employee Module:
Check In & Out: Employees use this to check in and check out for attendance.
Apply Leave: Allows an employee to request leave and mention the reason for the
request (see the sketch below).
Regularization: If an employee forgets to sign out, they can use this to send the reason
to the admin.
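A minimal sketch of how a leave request from the employee module might be stored,
assuming a Firebase Realtime Database node named "leave_requests"; the node and field
names are hypothetical, not the project's actual schema.

    // Sketch only: the "leave_requests" node and field names are
    // assumptions for illustration; the admin module would read this
    // node to approve or reject requests.
    import com.google.firebase.database.FirebaseDatabase;

    import java.util.HashMap;
    import java.util.Map;

    public class LeaveRequestWriter {

        public void applyLeave(String empId, String fromDate,
                               String toDate, String reason) {
            Map<String, Object> request = new HashMap<>();
            request.put("empId", empId);
            request.put("from", fromDate);
            request.put("to", toDate);
            request.put("reason", reason);
            request.put("status", "PENDING");   // updated by the admin later

            FirebaseDatabase.getInstance()
                    .getReference("leave_requests")
                    .push()                      // unique key per request
                    .setValue(request);
        }
    }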
Report and Mail Module: Generates attendance reports and mails them to the concerned
reporting manager or admin.
TESTING
Testing is the process or group of procedures carried out to evaluate some aspect of a
piece of software. Testing plays a vital role in the success of the system. System testing
makes a logical assumption that if all parts of the system are correct, the goal will be
successfully achieved. Once program code has been developed, testing begins. The minimum
aim of the testing process is to identify all defects existing in a software product.
Testing establishes that the software has attained a specified degree of quality with
respect to selected attributes. The testing process focuses on the logical internals of the
software, ensuring that all statements have been tested, and on the functional externals; that is,
it conducts tests to uncover errors and ensure that defined input will produce actual results
that agree with required results. Testing is related to two processes, namely validation and
verification.
TYPES OF TESTING
Unit Testing
Integration Testing
System Testing
Acceptance Testing
SYSTEM TESTING
System testing begins at the requirements phase with the development of a master test plan
and requirements-based tests. System testing is a more complicated task and requires a large
amount of resources. The goal is to ensure that the system performs according to its
requirements. It evaluates both functional behaviour and quality requirements such as
reliability, usability, performance and security. Testing is one of the important steps in the
software development phase.
Testing checks for errors; as a whole, the project's testing involves the following test cases:
Functional Testing
Performance Testing
Stress Testing
Configuration Testing
Security Testing
Recovery Testing
Functional Testing:
Functional testing is used to ensure that the behaviour of the system conforms to the
requirements specification. All functional requirements for the system must be achievable by
the system. It focuses on the inputs and proper outputs for each function. Improper and illegal
inputs must also be handled by the system. All functions must be tested. Some of the goals of
functional testing are:
● All types or classes of legal inputs must be accepted by the software.
● All classes of illegal inputs must be rejected by the software.
● All functions must be exercised and examined.
Performance Testing:
The goal of system performance testing is to see that the software meets the performance
requirements. Performance testing allows testers to tune the system, that is, to optimize the
allocation of system resources. Resources for system testing must be allocated in the system
test plan. Results of performance tests are quantifiable. Performance testing has test-bed
requirements that include special laboratory equipment and space that must be reserved
for the tests.
Test managers should ascertain the availability of these resources and allocate the
necessary time for training in the test plan. Usage requirements for these resources need to be
described as part of the test plan.
Stress Testing:
When a system is tested with a load that causes it to allocate its resources in
maximum amounts, this is called stress testing. Stress testing is most important because it can
reveal defects in real-time and other types of systems, as well as weak areas where poor
design could cause unavailability of service. This is particularly important for real-time
systems where unpredictable events may occur resulting in input loads that exceed those
described in the requirements documents.
Configuration Testing:
Configuration testing checks that the software operates correctly across the hardware
and software configurations it is expected to support.
Security Testing:
Security testing handles safety and security issues for commercial applications for use
on the Internet. If Internet users believe that their personal information is not secure and is
available to those with intent to do harm, the future of e-commerce is in peril. Security testing
evaluates system characteristics that relate to the availability, integrity, and confidentiality of
system data and services. Damage can be done through various means such as viruses, Trojan
horses, trap doors and illicit channels, resulting in:
● Loss of information
● Corruption of information
● Misinformation
● Privacy violations
● Denial of service
Recovery Testing:
Recovery testing subjects a system to losses of resources in order to determine whether
it can recover properly from them.
Test Data:
Test data is data which has been specifically identified for use in tests, typically of a
computer program.
Some data may be used in a confirmatory way, typically to verify that a given set of inputs to
a given function produces some expected result. Other data may be used to challenge the
ability of the program to respond to unusual, extreme, exceptional or unexpected input.
Test data may be produced in a focused or systematic way (as is typically the case in domain
testing) or by using other, less-focused approaches (as is typically the case in high-volume
randomized automated tests). Test data may be produced by the tester, or by a program or
function that aids the tester. Test data may be recorded for re-use, or used once and then
forgotten.
UNIT TESTING
A unit is the smallest possible testable software component. A unit can be a function or
procedure implemented in a procedural programming language. A unit may also be a
small-sized COTS component purchased from an outside vendor that is undergoing evaluation
by the purchaser, or a simple module retrieved from an in-house reuse library. Unit test results
are recorded for the future testing process; this results document is used for the integration
and system tests.
Unit Testing:
1 Unit Enter the Email The character should be Displays the Success
displayed character
Testing
2 Unit Enter the Email Error should display Accept the mail Failure
Testing without @ id
TESTCASE SUCCESS: (screenshot of a successful unit test case)
TESTCASE FAILURE: (screenshot of a failed unit test case)
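As a hedged sketch, the email check exercised by the unit test cases above could be
automated with JUnit as follows; isValidEmail() is a hypothetical stand-in for the app's real
input validation, not the project's actual code.

    // Minimal JUnit 4 sketch of the unit test cases in Table 6.1.
    // isValidEmail() is a hypothetical stand-in for the app's actual
    // validator, written in plain Java so the test runs on the JVM.
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class EmailValidationTest {

        // Hypothetical validator used only for this sketch.
        private boolean isValidEmail(String email) {
            return email != null
                    && email.matches("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");
        }

        @Test
        public void emailWithAtSign_isAccepted() {
            assertTrue(isValidEmail("user@example.com"));   // expected: Success
        }

        @Test
        public void emailWithoutAtSign_isRejected() {
            assertFalse(isValidEmail("user.example.com"));  // expected: Failure
        }
    }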
INTEGRATION TESTING
Integration testing is a systematic technique for constructing the program structure while at
the same time conducting tests to uncover errors associated with interfacing; i.e., integration
testing is the complete testing of the set of modules which makes up the product. The
objective is to take unit-tested modules and build a program structure, and the tester should
identify critical modules. Critical modules should be tested as early as possible. One approach
is to wait until all the units have passed testing, and then combine and test them together. This
approach evolved from the unstructured testing of small programs.
Another strategy is to construct the product in increments of tested units. A small set of
modules is integrated together and tested, to which another module is added and tested in
combination, and so on. The advantage of this approach is that interface discrepancies can be
easily found and corrected.
The major error faced during the project was a linking error: when all the modules were
combined, the links were not set properly with all support files, so we checked the
interconnections and the links. Errors are localized to the new module and its
intercommunications. The product development can be staged, and modules integrated as
they complete unit testing. Testing is completed when the last module is integrated and
tested. Integration testing is a systematic technique for constructing the program structure
while at the same time conducting tests to uncover errors associated with interfacing.
Individual modules, which are highly prone to interface errors, should not be assumed to
work instantly when we put them together. The problem, of course, is putting them together:
interfacing. Data may be lost across an interface; one module's sub-functions, when
combined, may not produce the desired major function; individually acceptable imprecision
may be magnified to unacceptable levels; and global data structures can present problems.
TABLE: 6.2 Integration Testing
TESTING TECHNIQUES / TESTING STRATEGIES
The description of behaviour or functionality for the software under test may come
from a formal specification, an Input/Process/Output (IPO) diagram, or a well-defined set of
pre- and post-conditions. Another source of information is a requirements specification
document, which usually describes the functionality of the software under test and its inputs
and expected outputs.
White Box Testing:
This testing is also called glass box testing. In this testing, by knowing the internal
operation of a product, tests can be conducted to ensure that the internal operation performs
according to specification and that all internal components have been adequately exercised. It
is a test case design method that uses the control structure of the procedural design to derive
test cases. Basis path testing is a white box testing technique; other white box techniques
exercise conditions, loops and data flows.
Black Box Testing:
In this testing, by knowing the specific functions that a product has been designed to
perform, tests can be conducted that demonstrate each function is fully operational while at
the same time searching for errors in each function. It fundamentally focuses on the functional
requirements of the software. Black box techniques include equivalence partitioning and
boundary value analysis.
VALIDATION TESTING
Validation testing takes place after integration testing, when the software is completely
assembled as a package, interfacing errors have been uncovered and corrected, and a final
series of software tests, validation testing, begins. Validation testing can be defined in many
ways, but a simple definition is that validation succeeds when the software functions in a
manner that is reasonably expected by the customer. Software validation is achieved through
a series of black box tests that demonstrate conformity with requirements.
After validation testing has been conducted, one of two conditions exists:
● The function or performance characteristics conform to specification and are accepted.
● A deviation from specification is uncovered and a deficiency list is created.
Deviations or errors discovered at this step in the project are corrected prior to completion of
the project, with the help of the user, by negotiating to establish a method for resolving
deficiencies. Thus the proposed system under consideration has been tested using validation
testing and found to be working satisfactorily. Though there were deficiencies in the system,
they were not catastrophic. Validation is the process of evaluating software during the
development process, or at the end of it, to determine whether it satisfies the specified
business requirements. Validation testing ensures that the product actually meets the client's
needs.
TABLE 6.3: Validation Testing
CHAPTER - VII
SYSTEM IMPLEMENTATION:
INTRODUCTION
IMPLEMENTATION: