
ABSTRACT

A robust complaint tracking infrastructure, seamlessly integrated into a mobile platform and exemplified by an apartment maintenance system catering to both commercial and non-commercial needs, serves as the cornerstone of this project. The primary objective is to address the numerous flaws within the flats that have compromised residents' basic necessities. After a thorough analysis of these issues and the formulation of effective solutions, the secretary convenes a meeting of the apartment association to initiate immediate corrective measures. Subsequently, upon successful resolution of each complaint, a comprehensive review of the residents' feedback is conducted. This systematic approach ensures that every resident's concerns are methodically addressed, facilitating the swift resolution of their problems and enhancing overall satisfaction with their living environment.
CHAPTER I

INTRODUCTION

Revolutionizing resident engagement in apartment complexes, this system integrates advanced technology with user-centric design. At its core, an Android application acts as both client app and web server, ensuring strong connectivity and efficiency. Unlike traditional complaint management systems, this solution enables effortless registration and login via mobile devices, streamlining the grievance resolution process. Through a user-friendly interface, residents can swiftly lodge complaints in the app, prompting rapid responses from the server. The system also goes beyond complaint handling and marks a shift towards resident empowerment: users can now raise issues with the authorities within the complex, fostering a new era of community engagement. In essence, it introduces a transformative approach to communication and harmony within apartment living.

1.1 PROBLEM DEFINITION

Many organizations struggle with outdated methods of tracking employee attendance, such as manual registers and basic spreadsheets. These methods lead to errors, consume a lot of time, and do not provide real-time insight into attendance. As companies grow, these systems become less effective, causing issues such as payroll errors and difficulties in managing remote work. The problem addressed here is the need for a modern solution: an Automated Attendance Management System using Java. The current manual processes are prone to mistakes and do not adapt well to today's diverse work environments. We need a system that not only automates attendance tracking but also handles leave requests, provides real-time monitoring, generates reports, and offers a user-friendly calendar interface. This project aims to create a simple, efficient solution that makes attendance management hassle-free for both employees and administrators.
CHAPTER II

SYSTEM ANALYSIS

Analysis is a detailed study of the various operations performed by the system and their relationships within the system and outside the system. System analysis may be considered an interface between the actual problem and the computer. It is a management technique used in designing a new system.

EXISTING SYSTEM

The current method of tracking employee attendance relies on manual processes, such
as using physical attendance registers or basic spreadsheets. In this system, employees are
required to sign in and out, and their attendance information is recorded manually.

DISADVANTAGES:

 Inaccuracy
 Time-Consuming
 Limited Accessibility
 Scalability Issues
 Security Concerns
 Difficulty in Leave Management
 Lack of Modern Features
PROPOSED SYSTEM

The proposed system aims to implement an Automated Attendance Management System using Java, providing a modern solution to the challenges posed by manual attendance tracking methods.

ADVANTAGES:

 Accuracy Improvement
 Time Efficiency
 Real-time Accessibility
 Scalability
 Enhanced Security
 Streamlined Leave Management
 Modern Features
 Regularize Module
CHAPTER III

DEVELOPMENT ENVIRONMENT
The development environment comprises hardware requirements and software requirements. The hardware requirements consist of the processor, hard disk, mouse, RAM and keyboard. The software requirements consist of the operating system, front-end tool, back-end tool and coding language.

HARDWARE REQUIREMENTS

The processor and RAM play a vital role in system performance. For the development of this project, the following hardware requirements have been considered.

Table 3.1: Hardware Requirements

Processor : Intel 10th Gen
Hard Disk : 500 GB
RAM       : 8 GB (minimum)

SOFTWARE REQUIREMENTS

The operating system is the major part of the software requirements. The front-end tool and back-end tool are used for storing and retrieving information, and the coding language is the most important element in developing the application. For the development of this project, the following software requirements have been considered.
Table 3.2: Software Requirements

Operating System : Android 10 and above
Front End        : Java, XML
IDE              : Android Studio
Back End         : Firebase

SOFTWARE DESCRIPTION

FRONT END:

ANDROID STUDIO:

Android Studio is the official integrated development environment (IDE) for Google's Android operating system, built on JetBrains' IntelliJ IDEA software and designed specifically for Android development. It is available for download on Windows, macOS, and Linux-based operating systems. It replaced the Eclipse Android Development Tools (E-ADT) as the primary IDE for native Android application development.

Android Studio was announced on May 16, 2013, at the Google I/O conference. It was in
early access preview stage starting from version 0.1 in May 2013, then entered the beta stage
starting from version 0.8 which was released in June 2014. The first stable build was released
in December 2014, starting from version 1.0. At the end of 2015, Google dropped support for
Eclipse ADT, making Android Studio the only officially supported IDE for Android
development. On May 7, 2019, Kotlin replaced Java as Google's preferred language for
Android app development. Java is still supported, as is C++.
Features:

● Gradle-based build support


● Android-specific refactoring and quick fixes
● Lint tools to catch performance, usability, version compatibility and other problems
● ProGuard integration and app-signing capabilities
● Template-based wizards to create common Android designs and components
● A rich layout editor that allows users to drag-and-drop UI components, option
to preview layouts on multiple screen configurations
● Support for building Android Wear apps
● Built-in support for Google Cloud Platform, enabling integration with Firebase Cloud
Messaging (Earlier 'Google Cloud Messaging') and Google App Engine
● Android Virtual Device (Emulator) to run and debug apps within Android Studio.

JAVA:

Java is a high-level, class-based, object-oriented programming language that is designed to have as few implementation dependencies as possible. It is a general-purpose programming language intended to let programmers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need to recompile. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of the underlying computer architecture. The syntax of Java is similar to C and C++ but has fewer low-level facilities than either of them. The Java runtime provides dynamic capabilities (such as reflection and runtime code modification) that are typically not available in traditional compiled languages.
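As a minimal illustration of the write once, run anywhere idea, the short class below can be compiled once with javac and the resulting bytecode run unchanged on any platform with a JVM; the class name and output are purely illustrative.

// HelloAttendance.java – compile once (javac HelloAttendance.java),
// then the same bytecode runs on any JVM, regardless of operating system.
public class HelloAttendance {
    public static void main(String[] args) {
        String user = (args.length > 0) ? args[0] : "employee";
        System.out.println("Attendance system ready for " + user);
    }
}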

Java was originally developed by James Gosling at Sun Microsystems and released in May 1995 as a core component of Sun Microsystems' Java platform. The original and reference implementations of the Java compilers, virtual machines, and class libraries were originally released by Sun under proprietary licenses. As of May 2007, in compliance with the specifications of the Java Community Process, Sun had relicensed most of its Java technologies under the GPL-2.0-only license. Oracle offers its own HotSpot Java Virtual Machine; however, the official reference implementation is the OpenJDK JVM, which is free, open-source software used by most developers and is the default JVM for almost all Linux distributions.
Principles

There were five primary goals in the creation of the Java language:

1. It must be simple, object-oriented, and familiar.


2. It must be robust and secure.
3. It must be architecture-neutral and portable.
4. It must execute with high performance.
5. It must be interpreted, threaded, and dynamic.

XML:

Extensible Markup Language (XML) is a markup language and file format for storing, transmitting, and reconstructing arbitrary data. It defines a set of rules for
encoding documents in a format that is both human-readable and machine-readable.
The World Wide Web Consortium's XML 1.0 Specification of 1998 and several other related
specifications—all of them free open standards—define XML. The design goals of XML
emphasize simplicity, generality, and usability across the Internet. It is a textual data format
with strong support via Unicode for different human languages. Although the design of XML
focuses on documents, the language is widely used for the representation of arbitrary data
structures such as those used in web services.

The main purpose of XML is serialization, i.e. storing, transmitting, and reconstructing
arbitrary data. For two disparate systems to exchange information, they need to agree upon a
file format. XML standardizes this process. XML is analogous to a lingua franca for
representing information. As a mark-up language, XML labels, categorizes, and structurally
organizes information. XML tags represent the data structure and contain metadata. What's
within the tags is data, encoded in the way the XML standard specifies. An additional XML
schema (XSD) defines the necessary metadata for interpreting and validating XML. (This is
also referred to as the canonical schema.) An XML document that adheres to basic XML
rules is "well-formed"; one that adheres to its schema is "valid."
FIREBASE:

Firebase is a set of hosting services for many types of application (Android, iOS, JavaScript, Node.js, Java, Unity, PHP, C++ and more). It offers NoSQL and real-time hosting of databases, content, social authentication (Google, Facebook, Twitter and GitHub) and notifications, as well as services such as a real-time communication server. Firebase is a backend platform for building web, Android and iOS applications, offering a real-time database, various APIs, multiple authentication types and a hosting platform. This section covers the basics of the Firebase platform and explains how its various components and sub-components are used in this project.

Firebase Features

● Real-time Database − Firebase supports JSON data, and all users connected to it receive live updates after every change.
● Authentication − We can use anonymous, password or different social authentications.
● Hosting − The applications can be deployed over a secured connection to Firebase servers.

Firebase Advantages

● It is simple and user-friendly, with no need for complicated configuration.
● The data is real-time, which means that every change automatically updates connected clients.
● Firebase offers a simple control dashboard.
● There are a number of useful services to choose from.

Firebase Limitations

● Firebase free plan is limited to 50 Connections and 100 MB of storage.
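To make the real-time behaviour described above concrete, the following minimal Java sketch writes a value to the Realtime Database and listens for live updates on the same path. It assumes the Firebase Realtime Database Android SDK has been added to the project; the node names mirror the "Time data" structure described in Chapter IV and are illustrative.

import androidx.annotation.NonNull;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ValueEventListener;

public class FirebaseDemo {

    private final DatabaseReference root = FirebaseDatabase.getInstance().getReference();

    // Write a simple value; every connected client listening on this path receives it in real time.
    public void markStatus(String empId, String status) {
        root.child("Time data").child(empId).child("status").setValue(status);
    }

    // Listen for live updates on the same path.
    public void watchStatus(String empId) {
        root.child("Time data").child(empId).child("status")
            .addValueEventListener(new ValueEventListener() {
                @Override
                public void onDataChange(@NonNull DataSnapshot snapshot) {
                    System.out.println("Current status: " + snapshot.getValue(String.class));
                }

                @Override
                public void onCancelled(@NonNull DatabaseError error) {
                    System.out.println("Listener cancelled: " + error.getMessage());
                }
            });
    }
}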


NO SQL:

NoSQL, originally referring to "non-SQL" or "non-relational", is a class of databases that provides a mechanism for the storage and retrieval of data modelled in means other than the tabular relations used in relational databases. Such databases came into existence in the late 1960s but did not obtain the NoSQL moniker until a surge of popularity in the early twenty-first century.

NoSQL databases are used in real-time web applications and big data, and their use is increasing over time. NoSQL systems are also sometimes called "Not only SQL" to emphasize the fact that they may support SQL-like query languages. A NoSQL database offers simplicity of design, simpler horizontal scaling to clusters of machines and finer control over availability. The data structures used by NoSQL databases are different from those used by default in relational databases, which makes some operations faster in NoSQL. The suitability of a given NoSQL database depends on the problem it must solve. Data structures used by NoSQL databases are also sometimes viewed as more flexible than relational database tables.

Many NoSQL stores compromise consistency in favour of availability, speed and partition tolerance. Barriers to the greater adoption of NoSQL stores include the use of low-level query languages, lack of standardized interfaces and huge previous investments in existing relational databases. Most NoSQL stores lack true ACID (Atomicity, Consistency, Isolation, Durability) transactions, but a few databases, such as MarkLogic, Aerospike, FairCom c-treeACE, Google Spanner (though technically a NewSQL database), Symas LMDB and OrientDB, have made them central to their designs. Most NoSQL databases offer a concept of eventual consistency, in which database changes are propagated to all nodes eventually, so queries for data might not return updated data immediately or might return data that is not accurate, a problem known as stale reads. Some NoSQL systems may also exhibit lost writes and other forms of data loss; some provide concepts such as write-ahead logging to avoid this. For distributed transaction processing across multiple databases, data consistency is an even bigger challenge, and it is difficult for both NoSQL and relational databases. Even current relational databases do not allow referential integrity constraints to span databases. There are few systems that maintain both X/Open XA standards and ACID transactions for distributed transaction processing.

Advantages of NoSQL:

High scalability – NoSQL databases use sharding for horizontal scaling. Sharding is the partitioning of data and placing it on multiple machines in such a way that the order of the data is preserved. Vertical scaling means adding more resources to the existing machine, whereas horizontal scaling means adding more machines to handle the data. Vertical scaling is not easy to implement, but horizontal scaling is. Examples of horizontally scaling databases are MongoDB, Cassandra, etc. NoSQL can handle a huge amount of data because of this scalability; as the data grows, NoSQL scales itself to handle that data efficiently.

High availability – The auto-replication feature in NoSQL databases makes them highly available, because in case of any failure the data replicates itself back to the previous consistent state.

Disadvantages of NoSQL

Narrow focus – NoSQL databases have a very narrow focus, as they are mainly designed for storage and provide very little functionality beyond it. Relational databases are a better choice in the field of transaction management than NoSQL.

Open-source – NoSQL is an open-source database approach, and there is no reliable standard for NoSQL yet. In other words, two database systems are likely to be unequal.

Management challenge – The purpose of big data tools is to make the management of a
large amount of data as simple as possible. But it is not so easy. Data management in NoSQL
is much more complex than in a relational database. NoSQL, in particular, has a reputation
for being challenging to install and even more hectic to manage on a daily basis.

Backup – Backup is a weak point for some NoSQL databases such as MongoDB, which has no approach for backing up data in a consistent manner.

Large document size – Some database systems such as MongoDB and CouchDB store data in JSON format. This means that documents are quite large (big data, network bandwidth, speed), and having descriptive key names actually hurts, since they increase the document size.
CHAPTER IV

SYSTEM DESIGN

DATA MODEL

A data model is a set of concepts used to describe the structure of a database and certain constraints that the database should obey. The main aim of a data model is to support the development of information systems by providing the definition and format of data. A data model can be a diagram or flowchart that illustrates the relationships between data. Usually, data models are specified in a data modelling language. Although capturing all the possible relationships in a data model can be very time-intensive, it is an important step and should not be rushed. Well-documented models allow stakeholders to identify errors and make changes before any programming code has been written. Data modellers often use multiple models to view the same data and ensure that all processes, entities, relationships and data flows have been identified. Data models have evolved through several stages, including the Hierarchical Model, Network Model, Relational Model, Entity-Relationship Model and Object-Oriented Model.

The structural part of a data model refers to the collection of data structures which make up the database when it is being created. These data structures represent entities and objects in a database model. The manipulation part of a data model refers to the collection of operators which can be applied to those data structures.

ROLE OF DATA MODEL

The main aim of a data model is to support the development of information systems by providing the definition and format of data. If this is done consistently across systems, then compatibility of data can be achieved. If the same data structures are used to store and access data, then different applications can share data. A data model is based on data, data relationships, data semantics and data constraints. A data model provides the details of the information to be stored and is of primary use when the final product is the generation of computer software code for an application or the preparation of a functional specification to aid a computer software make-or-buy decision.
CATEGORIES OF DATA MODEL:

i. Conceptual Data Model


ii. Physical Data Model
iii. Implementation Data Model

Conceptual Data Model: This data model provides concepts that are close to the way many users perceive data.

Physical Data Model: This data model provides concepts that describe the details of how data is stored in the computer.

Implementation Data Model: This data model provides concepts that fall between the above two, balancing user views with some computer storage details.

FIREBASE DATA STRUCTURE:

A Firebase data structure diagram is a visual representation that illustrates the organization and relationships of data within a Firebase database. Firebase is a cloud-based platform that offers various services, including a NoSQL database. Unlike traditional relational databases with fixed schemas, Firebase's Realtime Database and Firestore are NoSQL databases that store data in a JSON-like format, allowing for more flexibility in data structures.

In a Firebase data structure diagram, you would typically represent collections, documents and the fields within those documents. Here is an example of how such a diagram might be structured:

1. Collections:

In Firebase, data is organized into collections. Each collection is a group of documents

that share a common structure. Represent collections as containers in your diagram.

Collection: User data

Collection: Regularize data

Collection: Time data

Collection: Leave data

2. Documents:
Documents are individual records within a collection. Each document contains a set of fields, and these fields can hold various types of data. Represent documents within the corresponding collection.

Collection: User data

Document: [email protected]

Document: [email protected]

3. Fields:

Fields represent the individual pieces of data within a document. They can be strings, numbers, arrays or nested objects. Include the fields within each document to illustrate the data they contain.

Collection: User data

Document: [email protected]

DOB: “ ”
DOJ: “ ”
Email Id: “ ”
Emp id: “ ”
Phone: “ ”
Qualification: “ ”
Reporting Manager Email Id: “ ”
Reporting Manager Id: “ ”
Reporting Manager Name: “ ”
User Name: “ ”
User Type: “ ”
Weekly Off: “ ”

Collection: Regularize data


Accepted: “ ”
Accepted By: “ ”
Accepted On: “ ”
Date: “ ”
Id: “ ”
New In Time: “ ”
New Out Time: “ ”
Reason: “ ”
User id: “ ”
User Name: “ ”

Collection: Time data


Date: “ ”
In hr: “ ”
In Time: “ ”
Leave: “ ”
Leave Type: “ ”
Out Time: “ ”

Collection: Leave data


Approved: “ ”
Approved Date: “ ”
Approver Name: “ ”
Cancelled: “ ”
From Date: “ ”
Leave Id: “ ”
Leave Type: “ ”
Name: “ ”
Reason: “ ”
To Date: “ ”
User Id: “ ”

4. Relationships: If your Firebase data model includes relationships between collections or documents, represent these relationships using lines or annotations. This could include referencing document IDs or embedding related data within a document.
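As a hedged sketch of how the "User data" collection above could be populated from the app, the following Java snippet writes one employee record to the Firebase Realtime Database; the sample values and the choice of employee id as the key are assumptions for illustration (the report keys documents by email id, which Cloud Firestore also permits).

import java.util.HashMap;
import java.util.Map;

import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

public class UserDataWriter {

    // Stores one employee record under "User data/<empId>", mirroring the fields listed above.
    public void saveUser(String empId) {
        Map<String, Object> user = new HashMap<>();
        user.put("DOB", "01-01-1995");                // sample value
        user.put("DOJ", "01-06-2023");                // sample value
        user.put("Email Id", "employee@example.com"); // sample value
        user.put("Emp id", empId);
        user.put("Phone", "9999999999");
        user.put("Qualification", "B.E.");
        user.put("Reporting Manager Name", "Manager Name");
        user.put("User Name", "Employee Name");
        user.put("User Type", "Employee");
        user.put("Weekly Off", "Sunday");

        DatabaseReference ref = FirebaseDatabase.getInstance().getReference("User data");
        ref.child(empId).setValue(user);
    }
}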
DATA DICTIONARY

A data dictionary or metadata repository, as defined in the IBM Dictionary of Computing, is a centralized repository of information about data, such as its meaning, relationships to other data, origin, usage and format. Database design is concerned with the data focus from the perspective of the system designer; the end product, the database, translates the definitions produced during the definition phase into data structures supported by the chosen database technology.

The goals of database design are as follows.

• A database should provide for the efficient storage, update and retrieval of data.
• The technique used to improve a data model in preparation for database design is
called data analysis.
• Data analysis is a process that prepares a data model for implementation as a simple,
non-redundant, flexible and adaptable database.

PROCESS MODEL

Process models are processes of the same nature that are classified together into a model.
Thus, a process model is a description of a process at the type level. The same process model
is used repeatedly for the development of many applications and thus, has many
instantiations.

CONTEXT ANALYSIS DIAGRAM

A context diagram, sometimes called a level 0 data-flow diagram, is drawn in order to define

and clarify the boundaries of the software system. It identifies the flows of information
between the system and external entities. The entire software system is shown as a single
process.
DATAFLOW DIAGRAM

A data-flow diagram (DFD) is a way of representing the flow of data through a process or a system (usually an information system). The DFD also provides information about the outputs and inputs of each entity and of the process itself. A data-flow diagram has no control flow: there are no decision rules and no loops. A DFD is a graphical representation of the "flow" of data through an information system, and DFDs can also be used for the visualization of data processing (structured design).

DFD describes the processes that are involved in a system to transfer data from the input to
the file storage and reports generation. It uses defined symbols like rectangles, circles and
arrows, plus short text labels, to show data inputs, outputs, storage points and the routes
between each destination. It depicts the data flow from one step to another.

Data Flow Diagram Levels:

There are three levels of data flow diagrams:

● 0-level DFD

● 1-level DFD

● 2-level DFD.

0- level DFD: A context diagram is a top-level data flow diagram which is also known as
"Level 0". It only contains one process node ("Process 0") that generalizes the function of the
entire system in relationship to external entities.

1- level DFD: A level-1 DFD notates each of the main sub-processes that together form the
complete system. A level-1 DFD is an “exploded view” of the context diagram.

2- level DFD: A level 2 data flow diagram (DFD) offers a more detailed look at the processes
that make up an information system than a level-1 DFD does. It can be used to plan or record
the specific makeup of a system.
Components of Data Flow Diagram:

Data flow diagrams include four main component elements: the external entity, the process, the data store and the data flow. External Entity – Also known as actors, sources or sinks, and terminators, external entities produce and consume the data that flows between the entity and the system being diagrammed.
CHAPTER V

SOFTWARE DEVELOPMENT

The Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates. The SDLC is a process followed for a software project within a software organization. It consists of a detailed plan describing how to develop, maintain, replace and alter or enhance specific software. The life cycle defines a methodology for improving the quality of the software and the overall development process.

WATERFALL MODEL:

The waterfall model is a sequential development approach, in which development is seen as flowing steadily downwards (like a waterfall) through several phases:

● Requirement analysis resulting in a software requirements specification


● Software design
● Implementation
● Testing
● Integration, if there are multiple subsystems
● Deployment
● Maintenance

The first formal description of the method is often cited as an article published by Winston Royce in 1970, although Royce did not use the term "waterfall" in this article. Royce actually presented this model as an example of a flawed, non-working approach.

The basic principles are:

● Project is divided into sequential phases, with some overlap and splash back
acceptable between phases.
● Emphasis is on planning, time schedules, target dates, budgets and
implementation of an entire system at one time.
● Tight control is maintained over the life of the project via extensive written
documentation, formal reviews, and approval/signoff by the user and information
technology management occurring at the end of most phases before beginning the
next phase. Written documentation is an explicit deliverable of each phase.

The waterfall model is a traditional engineering approach applied to software engineering. A strict waterfall approach discourages revisiting and revising any prior phase once it is complete. This "inflexibility" in a pure waterfall model has been a source of criticism by supporters of other, more "flexible" models. It has been widely blamed for several large-scale government projects running over budget, running over time and sometimes failing to deliver on requirements due to the Big Design Up Front approach.

PHASES OF SOFTWARE DEVELOPMENT

The normal phases of a development project are planning, analysis, design, development, testing, implementation and enhancement. Different methodologies may call these phases by different names, but they are always present. Each phase has its own products, which may be documents, code or test results. In this system, the V-model software development life cycle is followed.
PHASE I-PLANNING

Planning generates a high-level view of the intended project and determines the goals of the project. The project plan depends on the customer's needs and requirements. This first phase estimates how much time and how much cost the project will take to finish, and it plays a vital role in developing the software project.

PHASE II-ANALYSIS

The goal of systems analysis is to determine where the problem lies in an attempt to fix the system. This step involves breaking down the system into different diagrams to analyse the situation: analysing project goals, breaking down the functions that need to be created, and attempting to engage users so that definite requirements can be defined.

PHASE III-DESIGN

Systems design is the phase where system engineers analyse and understand the business of the proposed system by studying the user requirements document. The system design is developed from the user requirements document; the designers figure out possibilities and techniques by which the user requirements can be implemented. A software specification document, which serves as a blueprint for the development phase, is generated.

PHASE IV-DEVELOPMENT

Modular and subsystem programming code is produced during this stage. Unit testing and module testing are done in this stage by the developers. The development stage is intermingled with the next, in that individual modules need testing. The goals and targets of the system are achieved by establishing schedules during project development.
MODULAR DESCRIPTION

The modular description contains a full description of every module used in the project. The various modules used in this project are:

 Log In
 Admin
 Employee Module
 Report and Mail Module

Login Module:

 Access to Admin Module


 Access to Employee Module

Admin Module:

 Add Employee: Used to create employee accounts for that branch.
 Leave Permission: The admin can view the employee's reason for applying for leave and can accept or decline the request.
 Week Off: Allows the admin to assign a week off to the employees for every week.

Employee Module:

 Check In & Out: Employees use this to check in and check out for attendance.
 apply leave: Allows employee to request for the leave and also mention the reason for
the request.
 Regularization: If employee forget to sign out, then using this the employee can send
the reason to the admin.
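A minimal sketch of how the Check In & Out option could record times, assuming the "Time data" structure from Chapter IV and the Firebase Realtime Database SDK; the date and time formats are illustrative choices, not taken from the project code.

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

public class AttendanceMarker {

    private final DatabaseReference timeData =
            FirebaseDatabase.getInstance().getReference("Time data");

    // Records the check-in time for today under "Time data/<empId>/<date>/In Time".
    public void checkIn(String empId) {
        timeData.child(empId).child(today()).child("In Time").setValue(now());
    }

    // Records the check-out time; missing entries can later be regularized by the admin.
    public void checkOut(String empId) {
        timeData.child(empId).child(today()).child("Out Time").setValue(now());
    }

    private String today() {
        return new SimpleDateFormat("dd-MM-yyyy", Locale.getDefault()).format(new Date());
    }

    private String now() {
        return new SimpleDateFormat("HH:mm", Locale.getDefault()).format(new Date());
    }
}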

Report and Mail Module:

REPORT:

 Employee: This allows employees to keep track of their current attendance percentage and number of leaves.
 Admin: This gives the admin permission to view the attendance details of all employees.
 MAIL: When the admin accepts or rejects an employee's leave request, the response is sent through mail (see the sketch after this list).
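The report does not specify how the mail is sent; one common Android approach, shown below as an assumption, is to hand the message to the device's email client through an ACTION_SENDTO intent.

import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

public class LeaveMailNotifier {

    // Opens the email client pre-filled with the admin's leave decision.
    public void sendDecision(Activity activity, String employeeEmail, boolean approved) {
        String subject = approved ? "Leave Request Approved" : "Leave Request Rejected";
        String body = "Your leave request has been " + (approved ? "approved." : "rejected.");

        Intent intent = new Intent(Intent.ACTION_SENDTO);
        intent.setData(Uri.parse("mailto:" + employeeEmail)); // only email apps handle this scheme
        intent.putExtra(Intent.EXTRA_SUBJECT, subject);
        intent.putExtra(Intent.EXTRA_TEXT, body);
        activity.startActivity(intent);
    }
}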
CHAPTER – VI

TESTING

Testing is the process or group of procedures carried out to evaluate some aspect of a piece of software. Testing plays a vital role in the success of the system. System testing makes a logical assumption that if all parts of the system are correct, the goal will be successfully achieved. Once program code has been developed, testing begins. The minimum aim of the testing process is to identify all defects existing in the software product.

Testing establishes that the software has attained a specified degree of quality with respect to selected attributes. The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals; that is, tests are conducted to uncover errors and ensure that defined inputs will produce actual results that agree with the required results. Testing is related to two processes, namely validation and verification.

VALIDATION: Validation is a process of evaluating a software system or component


during or at the end of the development cycle in order to determine whether it satisfies
specified requirements. It is usually associated with traditional execution-based testing.

VERIFICATION: Verification is a process of evaluating a software system or component to


determine whether the products of a given development phase satisfy the conditions imposed
at the start of that phase. It is associated with activities such as inspection and reviews of the
software deliverable.

TYPES OF TESTING

 Unit Testing
 Integration Testing
 System Testing
 Acceptance Testing
SYSTEM TESTING

System testing begins at the requirements phase with the development of a master test plan and requirements-based tests. System testing is a more complicated task and requires a large amount of resources. The goal is to ensure that the system performs according to its requirements. It evaluates both functional behaviour and quality requirements such as reliability, usability, performance and security. Testing is one of the important steps in the software development phase.

Testing checks for errors; for the project as a whole, testing involves the following cases:

 Static analysis is used to investigate the structural properties of the source code.
 Dynamic testing is used to investigate the behaviour of the source code by executing the program on the test data.

TYPES OF SYSTEM TESTING:

 Functional Testing
 Performance Testing
 Stress Testing
 Configuration Testing
 Security Testing
 Recovery Testing

Functional Testing:

Functional testing is used to ensure that the behaviour of the system conforms to the requirements specification. All functional requirements for the system must be achievable by the system. It focuses on the inputs and proper outputs for each function. Improper and illegal inputs must also be handled by the system, and all functions must be tested. Some of the goals of functional testing are:

 All types or classes of legal inputs must be accepted by the software.
 All classes of illegal inputs must be rejected.
 All possible classes of system output must be exercised and examined.
 All functions must be exercised.

Performance Testing:

The goal of system performance testing is to see whether the software meets its performance requirements. Performance testing also allows testers to tune the system, that is, to optimize the allocation of system resources. Resources for performance testing must be allocated in the system test plan, and the results of performance tests are quantifiable. Performance testing requires a test bed that includes special laboratory equipment and space that must be reserved for the tests.

Test managers should ascertain the availability of these resources and allocate the necessary time for training in the test plan. Usage requirements for these resources need to be described as part of the test plan.

Stress Testing:

When a system is tested with a load that causes it to allocate its resources in
maximum amounts, this is called stress testing. Stress testing is most important because it can
reveal defects in real-time and other types of systems, as well as weak areas where poor
design could cause unavailability of service. This is particularly important for real-time
systems where unpredictable events may occur resulting in input loads that exceed those
described in the requirements documents.

Stress testing often uncovers race conditions, deadlocks, depletion of resources in unusual or unplanned patterns, and upsets in the normal operation of the operating system. All these conditions are likely to reveal defects and design flaws which may not be revealed under normal testing conditions.

Configuration Testing:

Configuration testing allows developers and users to evaluate system performance and availability when hardware exchanges and reconfigurations occur. Software systems interact with hardware devices such as disk drives, tape drives and printers. Many software systems interact with multiple CPUs, some of which are redundant. Software that controls real-time processes or embedded software also interfaces with devices, but these are very specialized hardware items such as missile launchers and nuclear power device sensors.

Several types of operations should be performed during configuration testing.


Security Testing:

Security testing handles safety and security issues for commercial applications for use on the Internet. If Internet users believe that their personal information is not secure and is available to those with intent to do harm, the future of e-commerce is in peril. Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of system data and services. Damage can be done through various means such as viruses, Trojan horses, trap doors and illicit channels.

Some of the effects caused by security breaches are:

 Loss of information
 Corruption of information
 Misinformation
 Privacy violations
 Denial of service

Recovery Testing:

Recovery testing subjects a system to losses of resources in order to determine whether it can recover properly from these losses. Multiple CPUs or multiple instances of devices are used, and the failure of devices is simulated, to observe how the system recovers. The recovery testers must ensure that the device monitoring system and the checkpoint software are working properly. Recovery testing focuses on the process of restart and switchover; problems can be detected through loss of transactions, merging of transactions, incorrect transactions and unnecessary duplication of a transaction.

TEST DATA AND OUTPUT

Test data is data which has been specifically identified for use in tests, typically of a
computer program.

Some data may be used in a confirmatory way, typically to verify that a given set of inputs to a given function produces some expected result. Other data may be used to challenge the ability of the program to respond to unusual, extreme, exceptional or unexpected input.

Test data may be produced in a focused or systematic way (as is typically the case in domain
testing) or by using other, less-focused approaches (as is typically the case in high- volume
randomized automated tests). Test data may be produced by the tester, or by a program or
function that aids the tester. Test data may be recorded for re-use, or used once and then
forgotten.

UNIT TESTING

A unit is the smallest possible testable software component. A unit can be a function or procedure implemented in a procedural programming language. A unit may also be a small-sized COTS component purchased from an outside vendor that is undergoing evaluation by the purchaser, or a simple module retrieved from an in-house reuse library. Unit test results are recorded for the future testing process; this results document is used for the integration and system tests.

Some of the phases of unit test planning are:

• Describe the unit test approach and risks.
• Identify the unit features to be tested.
• Add levels of detail to the test plan.

Table 6.1: Unit Testing

Test No | Test Type    | Test Input                                   | Expected Output                   | Actual Output          | Result
1       | Unit Testing | Enter the email                              | The character should be displayed | Displays the character | Success
2       | Unit Testing | Enter the email without @                    | Error should be displayed         | Accepts the mail id    | Failure
3       | Unit Testing | Enter the password                           | The password should be displayed  | Displays the password  | Success
4       | Unit Testing | Enter a password of more than six characters | Invalid password                  | Accepts the password   | Failure
5       | Unit Testing | Password not entered                         | Enter the password                | Can't login            | Success

Test case screenshots: unit testing (success and failure).
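The email checks in test cases 1 and 2 of Table 6.1 could also be automated as plain JUnit tests; the isValidEmail helper below is a hypothetical stand-in, not the project's actual validation code.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class EmailValidationTest {

    // Hypothetical helper mirroring the check exercised in test cases 1 and 2.
    private boolean isValidEmail(String email) {
        return email != null && email.indexOf('@') > 0;
    }

    @Test
    public void emailWithAtSymbolIsAccepted() {
        assertTrue(isValidEmail("user@example.com"));  // test case 1: success expected
    }

    @Test
    public void emailWithoutAtSymbolIsRejected() {
        assertFalse(isValidEmail("userexample.com"));  // test case 2: error expected
    }
}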
INTEGRATION TESTING

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing; that is, integration testing is the complete testing of the set of modules which make up the product. The objective is to take unit-tested modules and build a program structure; the tester should identify critical modules, and critical modules should be tested as early as possible. One approach is to wait until all the units have passed testing, and then combine and test them together. This approach evolved from the unstructured testing of small programs.

Another strategy is to construct the product in increments of tested units: a small set of modules is integrated together and tested, another module is added and tested in combination, and so on. The advantage of this approach is that interface discrepancies can be found and corrected easily.

The major error faced during this project was a linking error: when all the modules were combined, the links were not set properly with all the support files, so the interconnections and links had to be checked. Errors are localized to the new module and its intercommunications. The product development can be staged, and modules integrated in as they complete unit testing; testing is completed when the last module is integrated and tested. Individual modules, which are highly prone to interface errors, should not be assumed to work instantly when put together; the problem, of course, is "putting them together", that is, interfacing.

Data may be lost across an interface; one module's sub-functions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; and global data structures can present problems.
Table 6.2: Integration Testing

Test No | Test Type           | Test Input                  | Expected Output   | Actual Output    | Result
1       | Integration Testing | Test with required field    | Data inserted     | Data inserted    | Success
2       | Integration Testing | Test without required field | Data not inserted | Insertion failed | Failure
3       | Integration Testing | Edit with required field    | Data inserted     | Data inserted    | Success
4       | Integration Testing | Edit without required field | Data not inserted | Insertion failed | Failure
Test case screenshots: integration testing (success and failure).
TESTING TECHNIQUES / TESTING STRATEGIES
The description of the behaviour or functionality of the software under test may come from a formal specification, an Input/Process/Output (IPO) diagram, or a well-defined set of pre- and post-conditions. Another source of information is a requirements specification document, which usually describes the functionality of the software under test and its inputs and expected outputs.

WHITE BOX TESTING

This testing is also called glass-box testing. In this testing, knowing the internal workings of a product, tests can be conducted to ensure that internal operations are performed according to specifications and that all internal components have been adequately exercised. It is a test case design method that uses the control structure of the procedural design to derive test cases. Basis path testing is a white-box technique. White-box testing techniques are as follows:

 Control-flow testing - The purpose of control-flow testing is to set up test cases which cover all statements and branch conditions. The branch conditions are tested for both true and false outcomes, so that all statements can be covered.
 Data-flow testing - This testing technique emphasizes covering all the data variables included in the program. It tests where the variables were declared and defined and where they were used or changed.

BLACK BOX TESTING

In this testing, knowing the specified functions that the product has been designed to perform, tests can be conducted to demonstrate that each function is fully operational while searching for errors in each function. It fundamentally focuses on the functional requirements of the software. Black-box testing techniques:

 Equivalence class - The input is divided into similar classes. If one element of a class passes the test, the whole class is assumed to pass.
 Boundary values - The input is divided into higher and lower end values. If these values pass the test, it is assumed that all values in between may pass too (a small sketch follows this list).
 Cause-effect graphing - In both previous methods, only one input value at a time is tested. Cause (input) – effect (output) graphing is a testing technique where combinations of input values are tested in a systematic way.
 Pair-wise testing - The behaviour of software depends on multiple parameters. In pair-wise testing, the multiple parameters are tested pair-wise for their different values.
 State-based testing - The system changes state on provision of input. These systems are tested based on their states and inputs.
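As a small sketch of the boundary-value idea applied to this project, the JUnit test below exercises the six-character password boundary from Table 6.1; the isValidPassword rule is a hypothetical assumption based on that table, not the application's real validator.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class PasswordBoundaryTest {

    // Hypothetical rule taken from Table 6.1: passwords longer than six characters are invalid.
    private boolean isValidPassword(String password) {
        return password != null && !password.isEmpty() && password.length() <= 6;
    }

    @Test
    public void sixCharacterPasswordIsOnTheValidBoundary() {
        assertTrue(isValidPassword("abc123"));    // exactly 6 characters
    }

    @Test
    public void sevenCharacterPasswordIsJustOverTheBoundary() {
        assertFalse(isValidPassword("abc1234"));  // 7 characters
    }
}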

VALIDATION TESTING

Validation testing takes place after integration testing, when the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests, validation testing, begins. Validation testing can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that is reasonably expected by the customer. Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements.

After validation testing has been conducted, one of two conditions exists:

 The function or performance characteristics conform to specifications and are accepted.
 A deviation from specification is uncovered and a deficiency list is created.

Deviations or errors discovered at this step are corrected prior to the completion of the project, with the help of the user, by negotiating to establish a method for resolving deficiencies. Thus the proposed system under consideration has been tested using validation testing and found to be working satisfactorily. Though there were deficiencies in the system, they were not catastrophic. Validation testing is the process of evaluating software during or at the end of the development process to determine whether it satisfies the specified business requirements; it ensures that the product actually meets the client's needs.
Table 6.3: Validation Testing

Test No | Test Type          | Test Input                      | Expected Output                             | Actual Output      | Result
1       | Validation Testing | Enter email id with @ symbol    | Allowed if @ is included in the email id    | Login successful   | Success
2       | Validation Testing | Enter email id without @ symbol | Not allowed if the email id has no @ symbol | Login unsuccessful | Failure
Test case screenshots: validation testing (success and failure).
USER ACCEPTANCE TESTING
When the software is ready to be handed over to the customer, it has to go through a last phase of testing where it is tested for user interaction and response. This is important because even if the software matches all user requirements, it may be rejected if the user does not like the way it appears or works.

 Alpha testing - The team of developers themselves performs alpha testing by using the system as if it were being used in the work environment. They try to find out how a user would react to some action in the software and how the system should respond to inputs.
 Beta testing - After the software is tested internally, it is handed over to the users to use under their production environment, only for testing purposes. This is not yet the delivered product. Developers expect that users at this stage will surface minor problems that were previously overlooked.

CHAPTER - VII

SYSTEM IMPLEMENTATION:

INTRODUCTION

The development of an attendance management application for cafeteria settings represents a critical endeavor aimed at streamlining administrative tasks, enhancing employee accountability, and fostering organizational efficiency. This system implementation plan delineates the multifaceted approach undertaken to create a robust Android application using Java and XML for frontend development, complemented by Firebase as the backend infrastructure.

By seamlessly integrating authentication mechanisms, data storage solutions, email communication, and user-friendly interfaces, the envisioned application seeks to address the intricate requirements of both administrators and employees within the context of attendance tracking and management. This introduction provides an overview of the comprehensive implementation strategy encompassing the various modules, functionalities, security considerations, and deployment protocols, underscoring the significance of meticulous planning and execution to achieve the desired outcomes in terms of functionality, usability, and scalability.

Through this systematic approach, the attendance management app aspires to revolutionize traditional processes, optimize resource utilization, and empower stakeholders with a reliable and intuitive solution tailored to the unique needs of cafeteria environments.

IMPLEMENTATION:

Our project aims to develop an Android application for managing attendance in a cafeteria setting. This app will facilitate both admin and employee functionalities, enabling efficient tracking of attendance and leave requests and the generation of reports. The implementation will utilize Java and XML for frontend development, while Firebase will serve as the backend infrastructure for authentication, data storage, and email communication.
To ensure secure access, Firebase Authentication will be implemented for both admin
and employee logins. This feature will provide a seamless and reliable authentication
mechanism, allowing users to securely access their respective modules within the application.
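A minimal sketch of the email/password sign-in flow with Firebase Authentication is given below; the class name and the routing comment are illustrative, and error handling is reduced to a log message.

import androidx.annotation.NonNull;

import com.google.android.gms.tasks.OnCompleteListener;
import com.google.android.gms.tasks.Task;
import com.google.firebase.auth.AuthResult;
import com.google.firebase.auth.FirebaseAuth;

public class LoginHelper {

    private final FirebaseAuth auth = FirebaseAuth.getInstance();

    // Signs a user (admin or employee) in with email and password.
    public void login(String email, String password) {
        auth.signInWithEmailAndPassword(email, password)
            .addOnCompleteListener(new OnCompleteListener<AuthResult>() {
                @Override
                public void onComplete(@NonNull Task<AuthResult> task) {
                    if (task.isSuccessful()) {
                        // Route to the admin or employee module based on the stored User Type.
                        System.out.println("Login successful: " + auth.getCurrentUser().getEmail());
                    } else {
                        System.out.println("Login failed: " + task.getException().getMessage());
                    }
                }
            });
    }
}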
The admin module will feature a user-friendly login interface built using Java and XML. Upon successful login, the admin will be able to add employees, view leave requests, regularize attendance, and generate reports. The Firebase Realtime Database or Cloud Firestore will be utilized to store employee data and leave requests. Additionally, email functionality will be integrated to send notifications for leave request approvals or rejections. The generation of PDF reports will be facilitated through libraries such as iText or Android's PdfDocument.
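One possible way to produce the PDF report with Android's built-in PdfDocument class is sketched below; the page size, layout, and report lines are assumptions for illustration.

import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.pdf.PdfDocument;

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.List;

public class AttendanceReportWriter {

    // Writes a simple one-page attendance report (A4-like page size in points).
    public void writeReport(List<String> lines, String outputPath) throws IOException {
        PdfDocument pdf = new PdfDocument();
        PdfDocument.PageInfo pageInfo = new PdfDocument.PageInfo.Builder(595, 842, 1).create();
        PdfDocument.Page page = pdf.startPage(pageInfo);

        Canvas canvas = page.getCanvas();
        Paint paint = new Paint();
        paint.setTextSize(12f);

        int y = 40;
        canvas.drawText("Attendance Report", 40, y, paint);
        for (String line : lines) {
            y += 20;
            canvas.drawText(line, 40, y, paint);
        }

        pdf.finishPage(page);
        try (FileOutputStream out = new FileOutputStream(outputPath)) {
            pdf.writeTo(out);
        } finally {
            pdf.close();
        }
    }
}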
Similarly, the employee module will include a login interface designed using Java and XML. Once authenticated, employees will have options to mark attendance using fingerprint authentication, ensuring accurate and secure time tracking. Location services will be enabled to restrict attendance marking to the cafeteria premises. Employees will be able to view their attendance history, submit leave requests, and request attendance regularization if they forget to mark their attendance. The Firebase Realtime Database or Cloud Firestore will store employee attendance data and handle leave and attendance requests.
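Fingerprint confirmation is typically wired up through the AndroidX BiometricPrompt API; the sketch below shows one way the check-in could be gated on a successful fingerprint scan, with the prompt text as an illustrative assumption.

import java.util.concurrent.Executor;

import androidx.annotation.NonNull;
import androidx.biometric.BiometricPrompt;
import androidx.core.content.ContextCompat;
import androidx.fragment.app.FragmentActivity;

public class FingerprintCheckIn {

    // Shows the fingerprint prompt; on success the attendance entry can be written to Firebase.
    public void confirmAndCheckIn(FragmentActivity activity, final Runnable onVerified) {
        Executor executor = ContextCompat.getMainExecutor(activity);

        BiometricPrompt prompt = new BiometricPrompt(activity, executor,
                new BiometricPrompt.AuthenticationCallback() {
                    @Override
                    public void onAuthenticationSucceeded(@NonNull BiometricPrompt.AuthenticationResult result) {
                        onVerified.run(); // e.g. record "In Time" for the current employee
                    }
                });

        BiometricPrompt.PromptInfo promptInfo = new BiometricPrompt.PromptInfo.Builder()
                .setTitle("Confirm attendance")
                .setSubtitle("Touch the fingerprint sensor to check in")
                .setNegativeButtonText("Cancel")
                .build();

        prompt.authenticate(promptInfo);
    }
}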
To enhance security and user experience, a session timeout of one hour will be implemented. After the specified duration, users will be automatically logged out to prevent unauthorized access, and proper session management will ensure data security and adherence to privacy standards. The implementation will prioritize error handling and input validation to ensure data integrity and prevent application crashes. Robust security measures will be incorporated to safeguard user data and prevent unauthorized access. Thorough testing, including edge cases and network connectivity scenarios, will be conducted to ensure the application's reliability and performance. Upon completion, the application will be deployed on the Google Play Store or distributed internally within the organization, depending on the specific requirements and preferences.
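The one-hour session timeout could be implemented in several ways; a simple sketch using a Handler that signs the user out of Firebase after a period of inactivity is shown below, as one possible approach rather than the project's actual mechanism.

import android.os.Handler;
import android.os.Looper;

import com.google.firebase.auth.FirebaseAuth;

public class SessionTimeoutManager {

    private static final long TIMEOUT_MS = 60L * 60L * 1000L; // 1 hour

    private final Handler handler = new Handler(Looper.getMainLooper());

    private final Runnable logout = new Runnable() {
        @Override
        public void run() {
            FirebaseAuth.getInstance().signOut(); // forces a fresh login after the timeout
        }
    };

    // Call on login and on every user interaction to restart the one-hour countdown.
    public void resetTimer() {
        handler.removeCallbacks(logout);
        handler.postDelayed(logout, TIMEOUT_MS);
    }

    // Call when the user logs out manually.
    public void stop() {
        handler.removeCallbacks(logout);
    }
}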
The implementation plan outlines a systematic approach to developing an attendance management app for the cafeteria, leveraging Java, XML, and Firebase technologies. By incorporating features such as authentication, admin functionalities, employee modules, session management, and deployment considerations, the project aims to deliver a robust and user-friendly solution for efficient attendance tracking and management.
