Blind Helper Documentation NEW

The document describes a system that aims to help blind and visually impaired people with daily tasks like reading, location, weather, time, and battery status using voice commands. It analyzes existing assistive technologies and proposes a new system with improved functionality using technologies like OCR and speech recognition to be more compatible at a lower cost.


CONTENT

CHAPTER TITLE PAGE NO

1 INTRODUCTION 1

1.1 ABSTRACT 2

2 SYSTEM ANALYSIS 5

2.1 EXISTING SYSTEM ARCHITECTURE 5

2.2 PROPOSED SYSTEM ARCHITECTURE 5

2.3 SYSTEM ARCHITECTURE 6

3 DEVELOPMENT ENVIRONMENT 8

3.1 HARDWARE ENVIRONMENT 8

3.2 SOFTWARE ENVIRONMENT 8

3.3 ABOUT SOFTWARE 9

4 SYSTEM DESIGN 17

4.1 APPLICATION ARCHITECTURE 18

4.2 UML DIAGRAMS 19

4.2.1 USE CASE DIAGRAM 19

4.2.2 SEQUENCE DIAGRAM 22

4.2.3 ACTIVITY DIAGRAM 24

4.2.4 STATE DIAGRAM 26

4.2.5 CLASS DIAGRAM 28

4.2.6 DATA FLOW DIAGRAM 30

5 SYSTEM IMPLEMENTATION 34

5.1 SAMPLE CODING 34


6 SYSTEM TESTING 53

6.1 TESTING 53

6.2 UNIT TESTING 54

6.3 INTEGRATION TESTING 54

6.4 FUNCTIONAL TESTING 54

6.5 USER INTERFACE TESTING 54

6.6 USABILITY TESTING 54

6.7 SECURITY TESTING 55

6.8 COMPATIBILITY TESTING 55

6.9 REGRESSION TESTING 55

7 PERFORMANCE AND LIMITATIONS 57

7.1 MERITS OF THE APPLICATIONS 57

7.2 LIMITATIONS OF THE APPLICATIONS 57

7.3 FUTURE ENHANCEMENTS OF THE APPLICATIONS 58

8 APPENDICES 59

8.1 HOME PAGE OF THE APPLICATION 60

8.2 VOICE ASSISTANT 61

8.3 FEATURES OF THE APPLICATION 62

9 CONCLUSION 64

10 REFERENCES 66

10.1 BIBLIOGRAPHY 66
1. INTRODUCTION
Visually impaired people are those who are completely or partially blind. According to an estimate made by the World Health Organization (WHO), 285 million people suffer from visual impairment, 39 million of them are blind, and approximately 3% of people of all ages in a nation are visually impaired. This project was conceived with the day-to-day struggles of blind and visually impaired people in mind, such as reading, finding the current location, checking the weather, the phone battery status, and the time and date. For these tasks I have used Google Speech Input, where the blind user says certain words to open the corresponding task. One of the major factors in developing such technical aids is compatibility with the user: the user should not have trouble getting acquainted with the product, and its features should not be too difficult to use. The application therefore works simply; the user swipes right or left on the screen to open the voice assistant and talks to it. I have also added a text-to-speech method for listening to an explanation of the working and use of the application, and it is also intended to help deaf-blind people interact with others with ease. With a combination of a few touches and taps on the mobile screen, the application gives the blind user the ability to perform basic daily activities such as reading, using the calculator, checking the weather, the location, the time and date, and the phone battery status. Most importantly, each function is opened by a voice command; for example, when the user says "read", the reading activity opens automatically.

1.1 ABSTRACT

Blind individuals do not have the luxury of reading and writing. Hence, I am creating an Android application that reads out written text captured with the camera when the user simply taps on the screen, using a speech engine. I have also designed a talking calculator so that blind individuals can use the calculator via voice commands. In addition, I have added supplementary features to help blind individuals throughout their daily life. The application also announces the current location, and with its help the user can find out the weather of any town or location. It requires negligible effort from the user in everyday use. With the growth of wireless communications, the need for voice recognition techniques has increased greatly. Voice applications built on voice interfaces, voice recognition, and voice dialogue management help users stay focused on their current work without additional effort for hands or eyes. This application listens to your commands and then responds by speaking.

1.2 PROBLEM STATEMENT

Blind people cannot tell the exact time. A blind person usually needs an assistant or helper to go out: if they have visited a place more than once, they can go there without any helper, but a new place requires an assistant. Blind people also face problems recognizing money, and they cannot easily distinguish, for example, a departmental store from a restaurant. Only by hearing voices or noise can they differentiate among humans, vehicles, and other animals. Most blind people cannot cross roads without help, and in communication they can recognize only voices they already know.

1.3 LITERATURE SURVEY

The literature survey tells us that our society has a notable number of disadvantaged people, and a large proportion of them are visually challenged. Visual impairment is commonly classified as:

(1) Mild disability – presenting visual acuity worse than 6/12.

(2) Moderate disability – presenting visual acuity worse than 6/18.

(3) Severe disability – presenting visual acuity worse than 6/60.

(4) Unsighted – presenting visual acuity worse than 3/60.
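As a rough illustration, the four bands above can be expressed as comparisons against the stated acuity fractions (6/12 = 0.5, 6/18 ≈ 0.33, 6/60 = 0.1, 3/60 = 0.05). The following is a minimal Java sketch; the class and method names, and the decimal representation of acuity, are illustrative assumptions rather than part of the project.

```java
public class AcuityClassifier {
    // Classifies presenting visual acuity, given as a decimal fraction
    // (e.g. 6/12 = 0.5), into the four bands listed above. A value below a
    // band's threshold means "worse than" that acuity.
    public static String classify(double acuity) {
        if (acuity < 3.0 / 60.0) return "Unsighted";
        if (acuity < 6.0 / 60.0) return "Severe disability";
        if (acuity < 6.0 / 18.0) return "Moderate disability";
        if (acuity < 6.0 / 12.0) return "Mild disability";
        return "No visual disability";
    }
}
```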

Among all the senses, the ability to see is one of the most crucial in humans, and the loss of this ability significantly affects all the actions a person is likely to take in his or her life. Since such people are not expected to progress in their careers as much as an abled person, they often experience violations of their rights and discrimination on social platforms and at the workplace.

Government and civil society play an important role in making life easier and safer for the visually impaired by organizing campaigns and providing education with new tools and technologies. In one proposed method, an image is captured using a camera and scanned from left to right to detect an obstacle and produce sound: the top of the image is converted into a high-frequency sound and the lower part into a low-frequency sound. With different applications, blind users are able to operate a mobile phone and even a laptop or computer. We contacted some blind people and told them about our project; most of them appreciated it and happily thanked us. An intelligent assistant for the blind named GUIDING LIGHT FOR THE BLIND COMMUNITY is an Android application. It uses speech synthesis and text recognition to extract the text from a PDF report and read it out to the user. As the application is built on Android, it uses predefined APIs for text-to-speech conversion, which makes the process even more efficient. However, it does not recognize text in images by itself; Google's Vision API is used for that. Of the total population, about 3.44% are blind, and of those, 53.1% use Android mobile phones while the rest do not.

2. SYSTEM ANALYSIS
2.1 EXISTING SYSTEM ARCHITECTURE

Existing systems for guiding-light projects for the blind community often have drawbacks: they may lack robustness in real-world scenarios, have limited compatibility with other devices or software, and may not adequately address the diverse needs of visually impaired individuals.

DISADVANTAGES

 Limited functionality
 Dependency on specific hardware
 Lack of adaptability to different environments
 Sometimes high cost

2.2 PROPOSED SYSTEM ARCHITECTURE

The proposed system for a guiding light for the blind community offers several advantages, including enhanced functionality through innovative technologies such as an OCR reader.

ADVANTAGES

 Improved compatibility with different environments.

 Improved compatibility with existing applications and software.

 Lower-cost model.

 Affordable hardware components.

 OCR reader: after swiping right on the screen, the user says "read"; the application then asks whether to read, and the user says yes to continue or no to return to the main menu.

2.3 SYSTEM ARCHITECTURE

The system proposes the following applications:

1. OCR reader: after swiping right on the screen, the user says "read"; the application then asks whether to read, and the user says yes to continue or no to return to the main menu.

2. Calculator: the user says "calculator", then taps on the screen and says what to calculate; the application speaks the answer.

3. Location: the user says "location", then taps on the screen; the application speaks the current location.

4. Weather: the user says "weather" and then the name of a city; the application speaks the weather of that particular city.

5. Battery: to check the current phone battery status, the user says "battery".

6. Time and date: to check the current time and date, the user says "time and date".
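The keyword dispatch described above can be sketched in plain Java. The class below is a hypothetical sketch (the names are assumptions, not taken from the app's source); in the real application the returned feature name would correspond to starting the matching Activity once Google Speech Input returns the recognized phrase.

```java
import java.util.Locale;

public class CommandRouter {
    // Maps a recognized phrase to the feature it should open. The feature
    // names mirror the numbered list above; the actual app would start the
    // corresponding Activity instead of returning a string.
    public static String route(String spokenText) {
        String cmd = spokenText == null ? "" : spokenText.trim().toLowerCase(Locale.ROOT);
        if (cmd.contains("read"))       return "OCR_READER";
        if (cmd.contains("calculator")) return "CALCULATOR";
        if (cmd.contains("location"))   return "LOCATION";
        if (cmd.contains("weather"))    return "WEATHER";
        if (cmd.contains("battery"))    return "BATTERY";
        if (cmd.contains("time"))       return "TIME_AND_DATE";
        return "UNKNOWN"; // unrecognized command: stay on the main menu
    }
}
```

Substring matching keeps the router forgiving of extra words ("please read"), which suits free-form speech input.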

3. DEVELOPMENT ENVIRONMENT
3.1 HARDWARE ENVIRONMENT

Processor : AMD PRO A4-4350B R4

Ram : 4GB

Hard Disk : 320 GB

Monitor : 15" LED Monitor

Mouse : Optical Mouse

Keyboard : Multimedia keyboard

3.2 SOFTWARE ENVIRONMENT

Operating System : Windows 10

Platform : Android Studio

Programming Language : Java

API : Google Speech API.

3.3 ABOUT SOFTWARE:


ANDROID STUDIO:

Android Studio is the official Integrated Development Environment (IDE) for Android
app development. Based on the powerful code editor and developer tools from IntelliJ IDEA,
Android Studio offers even more features that enhance your productivity when building Android
apps, such as:

 A flexible Gradle-based build system

 A fast and feature-rich emulator

 A unified environment where you can develop for all Android devices

 Live Edit to update composables in emulators and physical devices in real time

 Code templates and GitHub integration to help you build common app features and import
sample code

 Extensive testing tools and frameworks

 Lint tools to catch performance, usability, version compatibility, and other problems

 C++ and NDK support

 Built-in support for Google Cloud Platform, making it easy to integrate Google Cloud Messaging
and App Engine

This page provides an introduction to basic Android Studio features. For a summary of the latest
changes, see the Android Studio release notes.

PROJECT STRUCTURE:

Each project in Android Studio contains one or more modules with source code files and
resource files. The types of modules include:

 Android app modules

 Library modules

 Google App Engine modules

By default, Android Studio displays your project files in the Android project view, as shown in
figure 1. This view is organized by modules to provide quick access to your project's key source
files. All the build files are visible at the top level, under Gradle Scripts.

Each app module contains the following folders:

 manifests: Contains the AndroidManifest.xml file.

 java: Contains the Kotlin and Java source code files, including JUnit test code.

 res: Contains all non-code resources such as UI strings and bitmap images.

The Android project structure on disk differs from this flattened representation. To see the actual
file structure of the project, select Project instead of Android from the Project menu.

Gradle build system:

Android Studio uses Gradle as the foundation of the build system, with more Android-specific
capabilities provided by the Android Gradle plugin. This build system runs as an integrated tool
from the Android Studio menu and independently from the command line.

You can use the features of the build system to do the following:

 Customize, configure, and extend the build process.

 Create multiple APKs for your app with different features, using the same project and modules.

 Reuse code and resources across source sets.

By employing the flexibility of Gradle, you can achieve all of this without modifying your app's
core source files.

Android Studio build files are named build.gradle.kts if you use Kotlin (recommended)
or build.gradle if you use Groovy. They are plain text files that use the Kotlin or Groovy syntax
to configure the build with elements provided by the Android Gradle plugin. Each project has
one top-level build file for the entire project and separate module-level build files for each
module. When you import an existing project, Android Studio automatically generates the
necessary build files.
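For illustration, a minimal module-level build.gradle.kts might look like the sketch below. The SDK levels and the dependency are assumptions for the sake of a complete fragment; only the application id mirrors the package declared in the AndroidManifest.xml shown later in this document.

```kotlin
// Module-level build file (Kotlin DSL, recommended). Values are illustrative.
plugins {
    id("com.android.application")
}

android {
    namespace = "com.example.software2.ocrhy"
    compileSdk = 34

    defaultConfig {
        applicationId = "com.example.software2.ocrhy"
        minSdk = 21
        targetSdk = 34
    }
}

dependencies {
    // A remote binary dependency, resolved from the configured repositories.
    implementation("androidx.appcompat:appcompat:1.6.1")
}
```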

Build variants:

The build system can help you create different versions of the same app from a single project.
This is useful when you have both a free version and a paid version of your app or if you want to
distribute multiple APKs for different device configurations on Google Play.

For more information about configuring build variants, see Configure build variants.

Multiple APK support:

Multiple APK support lets you efficiently create multiple APKs based on screen density or ABI.
For example, you can create separate APKs of an app for the hdpi and mdpi screen densities,
while still considering them a single variant and letting them share test APK, javac, dx, and
ProGuard settings.

Resource shrinking:

Resource shrinking in Android Studio automatically removes unused resources from your
packaged app and library dependencies. For example, if your app uses Google Play services to

access Google Drive functionality, and you are not currently using Google Sign-In, then resource
shrinking can remove the various drawable assets for the SignInButton buttons.

Manage dependencies:

Dependencies for your project are specified by name in the module-level build script. Gradle
finds dependencies and makes them available in your build. You can declare module
dependencies, remote binary dependencies, and local binary dependencies in
your build.gradle.kts file.

Android Studio configures projects to use the Maven Central Repository by default. This
configuration is included in the top-level build file for the project.

For more information about configuring dependencies, read Add build dependencies.

Debug and profile tools:

Android Studio helps you debug and improve the performance of your code, including inline
debugging and performance analysis tools.

Inline debugging

Use inline debugging to enhance your code walkthroughs in the debugger view with inline
verification of references, expressions, and variable values.

Inline debug information includes:

 Inline variable values

 Objects that reference a selected object

 Method return values

 Lambda and operator expressions

 Tooltip values

To enable inline debugging, in the Debug window, click Settings and select Show Variable
Values in Editor.

Performance profilers:

Android Studio provides performance profilers so you can easily track your app's memory and
CPU usage, find deallocated objects, locate memory leaks, optimize graphics performance, and
analyze network requests.

To use performance profilers, with your app running on a device or emulator, open the Android
Profiler by selecting View > Tool Windows > Profiler.

For more information about performance profilers, see Profile your app performance.

Heap dump:

When profiling memory usage in Android Studio, you can simultaneously initiate garbage
collection and dump the Java heap to a heap snapshot in an Android-specific HPROF binary
format file. The HPROF viewer displays classes, instances of each class, and a reference tree to
help you track memory usage and find memory leaks.

For more information about working with heap dumps, see Capture a heap dump.

Memory Profiler:

Use Memory Profiler to track memory allocation and watch where objects are being allocated
when you perform certain actions. These allocations help you optimize your app’s performance
and memory use by adjusting the method calls related to those actions.

For information about tracking and analyzing allocations, see View memory allocations.

Data file access:

The Android SDK tools, such as Systrace and Logcat, generate performance and debugging data
for detailed app analysis.

To view the available generated data files:

1. Open the Captures tool window.

2. In the list of generated files, double-click a file to view the data.

3. Right-click any HPROF file to convert it to the standard file format for investigating RAM usage.

Code inspections:

Whenever you compile your program, Android Studio automatically runs configured lint checks
and other IDE inspections to help you easily identify and correct problems with the structural
quality of your code.

The lint tool checks your Android project source files for potential bugs and optimization
improvements for correctness, security, performance, usability, accessibility, and
internationalization.

Annotations in Android Studio:

Android Studio supports annotations for variables, parameters, and return values to help you
catch bugs, such as null pointer exceptions and resource type conflicts.

The Android SDK Manager packages the Jetpack Annotations library in the Android Support
Repository for use with Android Studio. Android Studio validates the configured annotations
during code inspection.
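A minimal sketch of how a nullness annotation is used follows. To keep the example self-contained it declares its own @NonNull; a real Android project would instead import androidx.annotation.NonNull from the Jetpack Annotations library, as described above.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationDemo {
    // Stand-in for androidx.annotation.NonNull so the sketch compiles on its
    // own; real code would import the Jetpack annotation instead.
    @Retention(RetentionPolicy.CLASS)
    @Target({ElementType.PARAMETER, ElementType.METHOD})
    public @interface NonNull {}

    // The annotation tells code inspections that name must never be null,
    // so Android Studio can flag callers that pass a possibly-null value.
    public static String greet(@NonNull String name) {
        return "Hello, " + name;
    }
}
```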

Log messages:

When you build and run your app with Android Studio, you can view adb output and device log
messages in the Logcat window.

GOOGLE SPEECH API:

Advanced speech AI

Speech-to-Text can utilize Chirp, Google Cloud’s foundation model for speech trained
on millions of hours of audio data and billions of text sentences. This contrasts with traditional
speech recognition techniques that focus on large amounts of language-specific supervised data.
These techniques give users improved recognition and transcription for more spoken languages
and accents.
Support for 125 languages and variants
Build for a global user base with extensive language support. Transcribe short, long, and even
streaming audio data. Speech-to-Text also offers users more accurate and globe-spanning
translation and recognition with Chirp, the next generation of universal speech models. Chirp
was built using self-supervised training on millions of hours of audio and 28 billion sentences of
text spanning 100+ languages.

Transcribe short, long, or streaming audio

Pretrained or customizable models for transcription

Choose from a selection of trained models for voice control, phone call, and video
transcription optimized for domain-specific quality requirements. Easily customize, experiment
with, create, and manage custom resources with the Speech-to-Text UI.

4. SYSTEM DESIGN:
Design is the first step in the development phase for any technique and principles for the
purpose of defining a device, a process or system in sufficient detail to permit its physical
realization.
Once the software requirements have been analyzed and specified, software design involves the technical activities of design, coding, and testing that are required to build and verify the software.

16
The design activities are of main importance in this phase, because the decisions made in this activity ultimately affect the success of the software implementation and its ease of maintenance. These decisions have the final bearing upon the reliability and maintainability of the system. Design is the only way to accurately translate the customer's requirements into finished software or a system.
Design is the place where quality is fostered in development. Software design is a process through which requirements are translated into a representation of software. Software design is conducted in two steps: preliminary design is concerned with the transformation of requirements into the data and software architecture.

4.1 APPLICATION ARCHITECTURE

17
4.2 UML DIAGRAMS:
UML stands for Unified Modeling Language. UML is a language for specifying, visualizing, and documenting the system. This is the step after analysis while developing any product. The goal is to produce a model of the entities involved in the project which later need to be built. The representation of the entities that are to be used in the product being developed needs to be designed. There are various kinds of diagrams used in software design, as follows:

 Use case diagram

 Sequence Diagram

 Activity Diagram

 State Diagram

 Class Diagram

 Data Flow Diagram

4.2.1 Use Case Diagram:

In UML, use-case diagrams model the behavior of a system and help to capture the requirements
of the system.

Use-case diagrams describe the high-level functions and scope of a system. These diagrams also
identify the interactions between the system and its actors. The use cases and actors in use-case
diagrams describe what the system does and how the actors use it, but not how the system
operates internally.

Use-case diagrams illustrate and define the context and requirements of either an entire system or
the important parts of the system. You can model a complex system with a single use-case
diagram, or create many use-case diagrams to model the components of the system. You would
typically develop use-case diagrams in the early phases of a project and refer to them throughout
the development process.

Use-case diagrams are helpful in the following situations:

 Before starting a project, you can create use-case diagrams to model a business so that all
participants in the project share an understanding of the workers, customers, and activities of the
business.
 While gathering requirements, you can create use-case diagrams to capture the system
requirements and to present to others what the system should do.
 During the analysis and design phases, you can use the use cases and actors from your use-case
diagrams to identify the classes that the system requires.
 During the testing phase, you can use use-case diagrams to identify tests for the system.

USE CASE DIAGRAM:

4.2.2 Sequence Diagram:

A sequence diagram is a Unified Modeling Language (UML) diagram that illustrates the
sequence of messages between objects in an interaction. A sequence diagram consists of a group
of objects that are represented by lifelines, and the messages that they exchange over time during
the interaction.

A sequence diagram shows the sequence of messages passed between objects. Sequence
diagrams can also show the control structures between objects. For example, lifelines in a
sequence diagram for a banking scenario can represent a customer, bank teller, or bank manager.
The communication between the customer, teller, and manager is represented by messages passed between them. The sequence diagram shows the objects and the messages between the objects.

Purpose of a Sequence Diagram:

1. To model high-level interaction among active objects within a system.


2. To model interaction among objects inside a collaboration realizing a use case.
3. To model either generic interactions or certain instances of interaction.

SEQUENCE DIAGRAM:

4.2.3 Activity Diagram:

An activity diagram is a type of Unified Modeling Language (UML) flowchart that
shows the flow from one activity to another in a system or process. It's used to describe the
different dynamic aspects of a system and is referred to as a 'behavior diagram' because it
describes what should happen in the modeled system.

Even very complex systems can be visualized by activity diagrams. As a result, activity diagrams
are often used in business process modeling or to describe the steps of a use case diagram within
organizations. They show the individual steps in an activity and the order in which they are
presented. They can also show the flow of data between activities.

Activity diagrams show the process from the start (the initial state) to the end (the final state).
Each activity diagram includes an action, decision node, control flows, start node, and end node.

Benefits of Activity Diagrams:

1. Shows the progress of workflow amongst the users, and the system

2. Demonstrates the logic of an algorithm

3. Models the software architecture elements, including the method, operation, and
function

4. Simplifies the majority of the UML processes by clarifying complicated use cases

ACTIVITY DIAGRAM:

4.2.4 State Diagram:

A state machine is any device that stores the status of an object at a given time and can
change status or cause other actions based on the input it receives. States refer to the different
combinations of information that an object can hold, not how the object behaves. In order to
understand the different states of an object, you might want to visualize all of the possible states
and show how an object gets to each state, and you can do so with a UML state diagram.

Each state diagram typically begins with a dark circle that indicates the initial state and
ends with a bordered circle that denotes the final state. However, despite having clear start and
end points, state diagrams are not necessarily the best tool for capturing an overall progression of
events. Rather, they illustrate specific kinds of behavior—in particular, shifts from one state to
another.

State diagrams mainly depict states and transitions. States are represented with rectangles
with rounded corners that are labeled with the name of the state. Transitions are marked with
arrows that flow from one state to another, showing how the states change. Below, you can see
both these elements at work in a basic diagram for student life. Our UML diagram tool can help
you design any custom state machine diagram.
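The idea can be made concrete with a small state machine in Java. The states and event names below are illustrative assumptions modeled on this application's flow (idle, listening for a command, speaking a response); they are not taken from the app's source.

```java
public class AssistantStateMachine {
    public enum State { IDLE, LISTENING, SPEAKING }

    private State state = State.IDLE; // dark-circle initial state

    public State getState() { return state; }

    // Each case is a rounded-rectangle state; each if is a labeled
    // transition arrow to another state.
    public State onEvent(String event) {
        switch (state) {
            case IDLE:
                if (event.equals("swipe")) state = State.LISTENING;
                break;
            case LISTENING:
                if (event.equals("command_recognized")) state = State.SPEAKING;
                else if (event.equals("cancel")) state = State.IDLE;
                break;
            case SPEAKING:
                if (event.equals("done")) state = State.IDLE;
                break;
        }
        return state;
    }
}
```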

STATE DIAGRAM:

4.2.5 Class Diagram:

A class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, their operations (or methods), and the relationships among objects.

The class diagram is the main building block of object-oriented modeling. It is used for
general conceptual modeling of the structure of the application, and for detailed modeling,
translating the models into programming code. Class diagrams can also be used for data
modeling. The classes in a class diagram represent both the main elements, interactions in the
application, and the classes to be programmed.

In the diagram, classes are represented with boxes that contain three compartments:

 The top compartment contains the name of the class. It is printed in bold and centered, and
the first letter is capitalized.
 The middle compartment contains the attributes of the class. They are left-aligned and the
first letter is lowercase.
 The bottom compartment contains the operations the class can execute. They are also left-
aligned and the first letter is lowercase.
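As an illustration, the three compartments map directly onto a Java class. WeatherService below is a hypothetical example, not a class from the application: the class name is the top compartment, the attribute the middle one, and the operation the bottom one.

```java
// Top compartment:    WeatherService (class name, first letter capitalized)
public class WeatherService {
    // Middle compartment: attributes, first letter lowercase
    private String cityName;

    public WeatherService(String cityName) {
        this.cityName = cityName;
    }

    // Bottom compartment: operations, first letter lowercase
    public String fetchWeather() {
        return "Weather for " + cityName;
    }
}
```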

CLASS DIAGRAM:

4.2.6 Data Flow Diagram:

A data flow diagram (DFD) maps out the flow of information for any process or system. It
uses defined symbols like rectangles, circles and arrows, plus short text labels, to show data
inputs, outputs, storage points and the routes between each destination. Data flowcharts can
range from simple, even hand-drawn process overviews, to in-depth, multi-level DFDs that dig
progressively deeper into how the data is handled. They can be used to analyze an existing
system or model a new one. Like all the best diagrams and charts, a DFD can often visually
“say” things that would be hard to explain in words, and they work for both technical and
nontechnical audiences, from developer to CEO. That’s why DFDs remain so popular after all
these years. While they work well for data flow software and systems, they are less applicable
nowadays to visualizing interactive, real-time or database-oriented software or systems.

Rules and tips:

 Each process should have at least one input and an output.

 Each data store should have at least one data flow in and one data flow out.

 Data stored in a system must go through a process.

 All processes in a DFD go to another process or a data store.

DATA FLOW DIAGRAM:

5. SYSTEM IMPLEMENTATION
5.1 SAMPLE CODING:

AndroidManifest.xml:

<?xml version="1.0" encoding="utf-8"?>

<manifest xmlns:android="http://schemas.android.com/apk/res/android"

xmlns:tools="http://schemas.android.com/tools"

package="com.example.software2.ocrhy">

<uses-permission android:name="android.permission.INTERNET" />

<uses-permission android:name="android.permission.CAMERA" />

<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />

<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

<application

android:allowBackup="true"

android:icon="@drawable/home"

android:label="@string/app_name"

android:roundIcon="@drawable/home"

android:supportsRtl="true"

android:theme="@style/Theme.OCRhy">

<activity

android:name=".MainActivity8"

android:exported="true" />

<activity

android:name=".MainActivity7"

android:exported="true" />

<activity

android:name=".MainActivity6"

android:exported="true" />

<activity

android:name=".MainActivity5"

android:exported="true" />

<activity

android:name=".MainActivity4"

android:exported="true" />

<activity

android:name=".MainActivity3"

android:exported="true"

android:screenOrientation="portrait"

android:supportsRtl="true" />

<activity

android:name=".MainActivity2"

android:exported="true" />

<activity

android:name=".MainActivity"

android:exported="true"

android:screenOrientation="portrait">

<intent-filter>

<action android:name="android.intent.action.MAIN" />

<category android:name="android.intent.category.LAUNCHER" />

</intent-filter>

</activity>

<service android:name=".GetAllData" />

</application>

</manifest>

GetAllData.java

package com.example.software2.ocrhy;

import android.app.IntentService;

import android.content.Intent;

import android.location.Address;

import android.location.Geocoder;

import android.location.Location;

import android.os.Bundle;

import android.os.ResultReceiver;

import android.speech.tts.TextToSpeech;

import android.util.Log;

import java.util.List;

import java.util.Locale;

import java.util.Objects;

import androidx.annotation.Nullable;

public class GetAllData extends IntentService {

private static final String IDENTIFIER = "GetAddressIntentService";

// IDENTIFIER names the worker thread and tags this service in log output.

private ResultReceiver addressResultReceiver;

// ResultReceiver used to deliver the resolved address back to the calling activity.

private TextToSpeech texttospeech;

public GetAllData() {

super(IDENTIFIER);

// super(IDENTIFIER) passes the worker-thread name to the IntentService base class.

}

@Override

protected void onHandleIntent(@Nullable Intent intent) {

// onHandleIntent runs on a background worker thread and processes the incoming Intent.

String msg;

addressResultReceiver =
Objects.requireNonNull(intent).getParcelableExtra("add_receiver");

if (addressResultReceiver == null) {

Log.e(IDENTIFIER, "No receiver, not processing the request further");

return;

}

Location location = intent.getParcelableExtra("add_location");

if (location == null) {

msg = "No location, can't go further without location";

if (texttospeech != null) { // TextToSpeech is only usable if it was initialized elsewhere

texttospeech.speak(msg, TextToSpeech.QUEUE_FLUSH, null);

}

sendResultsToReceiver(0, msg);

return;

}

Geocoder geocoder = new Geocoder(this, Locale.getDefault());

// Reverse geocoding: turn the latitude/longitude pair into a human-readable address.

List<Address> addresses = null;

try {

addresses = geocoder.getFromLocation(location.getLatitude(), location.getLongitude(), 1);

} catch (Exception ioException) {

Log.e(IDENTIFIER, "Error in getting address for the location", ioException);

}
if (addresses == null || addresses.size() == 0) {

msg = "No address found for the location";

sendResultsToReceiver(1, msg);

} else {

Address address = addresses.get(0);

String addressDetails = address.getFeatureName() + "." + "\n" +

"Locality is, " + address.getLocality() + "." + "\n" +

"City is, " + address.getSubAdminArea() + "." + "\n" +

"State is, " + address.getAdminArea() + "." + "\n" +

"Country is, " + address.getCountryName() + "." + "\n";

sendResultsToReceiver(2, addressDetails);

}

}

private void sendResultsToReceiver(int resultCode, String message) {

Bundle bundle = new Bundle();

bundle.putString("address_result", message);

addressResultReceiver.send(resultCode, bundle);

}

}
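Note that the string concatenation in onHandleIntent will speak the literal word "null" whenever the geocoder fails to return a field. A framework-free sketch of a null-safe formatter for the same spoken address is shown below; the `AddressFormatter` class and its method names are illustrative, not part of the app's source.

```java
class AddressFormatter {

    // Append "label, value." only when the geocoder actually returned the field.
    private static void appendPart(StringBuilder sb, String label, String value) {
        if (value != null && !value.isEmpty()) {
            sb.append(label).append(", ").append(value).append(".\n");
        }
    }

    // Build the text that would be handed to TextToSpeech.
    static String format(String feature, String locality, String city,
                         String state, String country) {
        StringBuilder sb = new StringBuilder();
        if (feature != null && !feature.isEmpty()) {
            sb.append(feature).append(".\n");
        }
        appendPart(sb, "Locality is", locality);
        appendPart(sb, "City is", city);
        appendPart(sb, "State is", state);
        appendPart(sb, "Country is", country);
        return sb.toString();
    }
}
```

With this helper, a missing city simply drops the "City is" line instead of announcing "City is, null" to the user.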
MainActivity.java:

package com.example.software2.ocrhy;

import android.content.ActivityNotFoundException;

import android.content.Intent;

import android.content.IntentSender;

import android.os.Bundle;

import android.os.Handler;

import android.os.Looper;

import android.speech.RecognizerIntent;

import android.speech.tts.TextToSpeech;

import android.util.Log;

import android.view.MotionEvent;

import android.view.SurfaceView;

import android.widget.TextView;

import androidx.annotation.NonNull;

import androidx.appcompat.app.AppCompatActivity;

import com.google.android.gms.common.ConnectionResult;

import com.google.android.gms.common.api.GoogleApiClient;

import com.google.android.gms.common.api.PendingResult;

import com.google.android.gms.common.api.ResultCallback;

import com.google.android.gms.common.api.Status;

import com.google.android.gms.location.LocationRequest;

import com.google.android.gms.location.LocationServices;

import com.google.android.gms.location.LocationSettingsRequest;

import com.google.android.gms.location.LocationSettingsResult;

import com.google.android.gms.location.LocationSettingsStatusCodes;

import com.google.android.gms.maps.GoogleMap;

import java.util.ArrayList;

import java.util.Locale;

public class MainActivity extends AppCompatActivity {

private static final int REQ_CODE_SPEECH_INPUT = 100;

private static int firstTime = 0;

private GoogleMap mMap;

private GoogleApiClient googleApiClient;

final static int REQUEST_LOCATION = 199;

private TextView mVoiceInputTv;

float x1, x2, y1, y2;

private TextView mSpeakBtn;

private SurfaceView surfaceView;

private static TextToSpeech textToSpeech;

@Override

protected void onCreate(Bundle savedInstanceState) {

super.onCreate(savedInstanceState);

setContentView(R.layout.activity_main);

textToSpeech = new TextToSpeech(this, new TextToSpeech.OnInitListener() {

@Override

public void onInit(int status) {

if (status != TextToSpeech.ERROR) {

textToSpeech.setLanguage(Locale.US);

textToSpeech.setSpeechRate(1f);

if (firstTime == 0)

textToSpeech.speak("Welcome to Blind App. Swipe left to listen the features of the app and swipe right and say what you want", TextToSpeech.QUEUE_FLUSH, null);

}

}

});

mVoiceInputTv = (TextView) findViewById(R.id.voiceInput);

}
@Override

public boolean onTouchEvent(MotionEvent touchEvent) {

firstTime = 1;

switch (touchEvent.getAction()) {

case MotionEvent.ACTION_DOWN:

x1 = touchEvent.getX();

y1 = touchEvent.getY();

break;

case MotionEvent.ACTION_UP:

x2 = touchEvent.getX();

y2 = touchEvent.getY();

if (x1 < x2) { // left-to-right swipe: open the features screen

Intent intent = new Intent(MainActivity.this, MainActivity7.class);

startActivity(intent);

}

if (x1 > x2) { // right-to-left swipe: start listening for a voice command

startVoiceInput();

}

break;

}

return false;

}
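Stripped of the Android framework, the gesture handling above reduces to comparing the touch-down and touch-up x-coordinates. A minimal, testable sketch of that decision follows; the `SwipeClassifier` class and the returned labels are illustrative stand-ins for the two actions the activity takes.

```java
class SwipeClassifier {

    // Mirrors the onTouchEvent logic: a left-to-right swipe (downX < upX)
    // opens the features screen, a right-to-left swipe starts voice input.
    static String classify(float downX, float upX) {
        if (downX < upX) {
            return "OPEN_FEATURES"; // swipe left to right
        } else if (downX > upX) {
            return "START_VOICE";   // swipe right to left
        }
        return "NONE";              // no horizontal movement
    }
}
```

Factoring the decision out this way also makes it easy to add a minimum swipe distance later, so that a stray tap is not misread as a gesture.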

private void startVoiceInput() {

Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);

intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);

intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());

intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Hello, How can I help you?");

try {

startActivityForResult(intent, REQ_CODE_SPEECH_INPUT);

} catch (ActivityNotFoundException a) {

a.printStackTrace();

}

}
@Override

protected void onActivityResult(int requestCode, int resultCode, Intent data) {

super.onActivityResult(requestCode, resultCode, data);

if (requestCode == REQ_CODE_SPEECH_INPUT) {

if (resultCode == RESULT_OK && null != data) {

ArrayList<String> result = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);

mVoiceInputTv.setText(result.get(0));

String command = mVoiceInputTv.getText().toString();

if (command.equals("exit")) {

finishAffinity();

System.exit(0);

} else if (command.equals("read")) {

startActivity(new Intent(getApplicationContext(), MainActivity2.class));

mVoiceInputTv.setText(null);

} else if (command.equals("calculator")) {

startActivity(new Intent(getApplicationContext(), MainActivity3.class));

mVoiceInputTv.setText(null);

} else if (command.equals("time and date")) {

startActivity(new Intent(getApplicationContext(), MainActivity4.class));

mVoiceInputTv.setText(null);

} else if (command.equals("weather")) {

startActivity(new Intent(getApplicationContext(), MainActivity5.class));

mVoiceInputTv.setText(null);

} else if (command.equals("battery")) {

startActivity(new Intent(getApplicationContext(), MainActivity6.class));

mVoiceInputTv.setText(null);

} else if (command.equals("location")) {

startActivity(new Intent(getApplicationContext(), MainActivity8.class));

mVoiceInputTv.setText(null);

} else if (command.equals("yes")) {

textToSpeech.speak("Say Read for reading, calculator for calculator, time and date, weather for weather, battery for battery. Do you want to listen again", TextToSpeech.QUEUE_FLUSH, null);

mVoiceInputTv.setText(null);

} else if (command.equals("no")) {

textToSpeech.speak("then Swipe right and say what you want", TextToSpeech.QUEUE_FLUSH, null);

} else {

textToSpeech.speak("Do not understand just Swipe right Say again", TextToSpeech.QUEUE_FLUSH, null);

}

}

}

}
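The command handling in onActivityResult can equivalently be expressed as a lookup table from spoken command to target screen, which is easier to extend than a long if/else chain. The sketch below is framework-free: the `CommandRouter` class is illustrative, and the map values are the activity names used by this app.

```java
import java.util.HashMap;
import java.util.Map;

class CommandRouter {

    private static final Map<String, String> ROUTES = new HashMap<>();

    static {
        // Spoken command -> activity that handles it, mirroring onActivityResult.
        ROUTES.put("read", "MainActivity2");
        ROUTES.put("calculator", "MainActivity3");
        ROUTES.put("time and date", "MainActivity4");
        ROUTES.put("weather", "MainActivity5");
        ROUTES.put("battery", "MainActivity6");
        ROUTES.put("location", "MainActivity8");
    }

    // Returns the target activity name, or null for unrecognized speech
    // (the caller would then speak the "Swipe right, Say again" prompt).
    static String route(String command) {
        return ROUTES.get(command.trim().toLowerCase());
    }
}
```

Normalizing the recognized text before the lookup also makes the matching tolerant of stray capitalization and whitespace from the speech recognizer.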

@Override

public void onPause() {

if (textToSpeech != null) {

textToSpeech.stop();

}

super.onPause();

}

}
TESTING

6. SYSTEM TESTING:

After all phases have been completed, the system is deployed to the server and is ready for use.

6.1 TESTING:

Software testing is the process of checking the quality, functionality, and performance of a software product before launch. Testers either interact with the software manually or execute test scripts to find bugs and errors, ensuring that the software works as expected. Testing also verifies that the business logic is fulfilled and surfaces any gaps in the requirements that need to be addressed immediately.

Software testing is a crucial part of the software development life cycle. Without it, app-
breaking bugs that can negatively impact the bottom-line may go undetected. Over time, as
applications get more complex, software testing activities have also evolved, with many new
techniques and approaches introduced.

Software testing today is most effective when it is continuous: testing starts during design, continues as the software is built out, and even occurs after deployment into production. Continuous testing means that organizations don't have to wait for all the pieces to be deployed before testing can start. Shift-left, which moves testing closer to design, and shift-right, where end users perform validation, are testing philosophies that have recently gained traction in the software community. Once the test strategy and management plans are understood, automating all aspects of testing becomes essential to support the required speed of delivery.

In the development of this Android app, various types of software testing can be
employed to ensure its quality and reliability. Here are some common types of software testing
that are often used in Android app development:

6.1.1 Unit Testing:

This involves testing individual components or units of code in isolation to ensure they function correctly. Unit tests are typically written by developers and focus on verifying the behavior of specific methods or classes.
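As a concrete illustration, a unit test isolates one pure function. The sketch below tests a hypothetical speech-text normalizer of the kind this app's voice handler depends on; plain checks stand in for a test framework such as JUnit, and all names are illustrative.

```java
class SpeechNormalizerTest {

    // Unit under test: trims and lower-cases raw recognizer output so that
    // "Weather " and "weather" are treated as the same voice command.
    static String normalize(String recognized) {
        return recognized == null ? "" : recognized.trim().toLowerCase();
    }

    // Minimal stand-in for a test framework's assertion.
    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("Failed: " + name);
    }

    public static void main(String[] args) {
        // Each check exercises one behavior of the unit in isolation.
        check(normalize("  Weather ").equals("weather"), "trims and lower-cases");
        check(normalize(null).equals(""), "null input yields empty string");
        check(normalize("BATTERY").equals("battery"), "upper-case input");
    }
}
```

In a real project these checks would live in a JUnit class under `src/test`, but the structure (one unit, several focused assertions) is the same.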

6.1.2 Integration Testing:

Integration testing verifies that individual units of code work together correctly
when integrated into larger modules or components. It ensures that interactions between different
parts of the app are functioning as expected.

6.1.3 Functional Testing:

Functional testing focuses on testing the functionality of the app as a whole. Test
cases are designed to verify that the app behaves according to its specifications and meets the
intended requirements.

6.1.4 User Interface (UI) Testing:

UI testing involves testing the user interface of the app to ensure it is intuitive,
responsive, and visually consistent across different devices and screen sizes. It includes testing
elements such as layout, navigation, and user interactions.

6.1.5 Usability Testing:

Usability testing evaluates the overall user experience of the app by observing how
real users interact with it. It helps identify usability issues, user interface problems, and areas for
improvement to enhance user satisfaction.

6.1.6 Performance Testing:

Performance testing evaluates the app's performance under various conditions, such
as different levels of user load, network speeds, and device configurations. It helps ensure that
the app performs well and responds quickly to user interactions.

6.1.7 Security Testing:

Security testing involves identifying and addressing potential security vulnerabilities in the app, such as data breaches, unauthorized access, and malicious attacks. It helps ensure the app's data and functionality are protected from potential threats.

6.1.8 Compatibility Testing:

Compatibility testing verifies that the app functions correctly across different
devices, operating systems, and software configurations. It ensures the app is compatible with a
wide range of devices and platforms to reach a larger audience.

6.1.9 Regression Testing:

Regression testing involves retesting previously implemented features to ensure that recent changes or updates have not introduced new bugs or issues. It helps maintain the stability and reliability of the app over time.

By incorporating these various types of software testing into your Android app
development process, you can ensure that your app is of high quality, reliable, and meets the
needs and expectations of its users. Each type of testing serves a specific purpose and contributes
to the overall quality assurance of the app.

PERFORMANCE AND LIMITATIONS

7. PERFORMANCE AND LIMITATIONS:


7.1 MERITS OF THE APPLICATION:

Accessibility: Enhances accessibility for visually impaired individuals.

Enhanced Daily Living: Provides essential information for daily activities.

Reduced Struggles: Eases daily challenges through voice commands and swipe gestures.

Empowerment and Independence: Fosters independence and autonomy in users' lives.

Customization and Personalization: Allows users to tailor settings to their preferences.

Community Engagement and Support: Builds a supportive community among users.

7.2 LIMITATIONS OF THE APPLICATIONS:

Hardware Constraints: May face limitations on older or lower-end devices.

Accuracy and Reliability: Dependence on external services can lead to variations in accuracy and reliability.

Dependency on External Services: Disruptions or limitations with external services can impact performance.

User Interface Complexity: Swipe gestures may introduce complexity for some users.

Language and Cultural Considerations: Language barriers and cultural differences may affect comprehension and usage.

Training and Adoption: Users may require training and ongoing support to fully utilize
the application.

7.3 FUTURE ENHANCEMENTS OF THE APPLICATIONS:

Advanced Voice Commands: Improve accuracy and responsiveness using advanced natural language processing.

Augmented Reality Features: Provide spatial awareness and navigation assistance
through AR technology.

Machine Learning Algorithms: Personalize user experience and recommendations based on behavior patterns.

Wearable Device Integration: Enable hands-free access to features through compatibility with wearable devices.

Social Connectivity: Foster communication and support among users through social
features.

Offline Functionality: Ensure accessibility in areas with limited internet connectivity by developing offline features.

Multimodal Interfaces: Enhance user experience with intuitive combinations of voice, gesture, and touch inputs.

Accessibility Audits and Feedback: Continuously improve inclusivity based on regular audits and user feedback.

APPENDICES

8. APPENDICES
8.1 HOME PAGE OF THE APPLICATION

[Screenshot: home page of the application]

8.2 VOICE ASSISTANT

[Screenshot: voice assistant screen]

8.3 FEATURES OF THE APPLICATIONS

[Screenshot: features of the application]
CONCLUSION

9. CONCLUSION:

At present, mobile apps on smartphones are used to perform most of our daily activities, but people with vision impairment require assistance to access these apps through handheld devices such as mobiles and tablets. Google and other Android developers have been building various mobile apps for visually impaired people, but these still need to provide more effective facilities by adopting and synergizing suitable techniques from Artificial Intelligence. This report introduced two user-friendly designs for blind people and presented information about the Blind Helper application.

This application will be effective for blind people, and it is important to continue developing it in the future. The system is designed for blind users, but sighted people can use it as well. In the future, the proposed system will be able to interpret textual descriptions in a much better way. Image recognition can be enhanced with much more detail about the image captured through the camera, and the system can be extended with currency-recognition features.

BIBLIOGRAPHY

10. REFERENCES:

10.1 BIBLIOGRAPHY:

[1] Vijayalakshmi, N., Kiruthika, K.: Voice Based Navigation System for the Blind People. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, vol. 5, pp. 256–259. IJSRCSEIT (2019).

[2]https://www.ijert.org/research/Assistance-System-for-Visually-Impaired-using-AI-
IJERTCONV7IS08078.pdf

[3] Noura A. Semary, Sondos M. Fadl, Magda S. Essa, Ahmed F. Gad,” Currency Recognition
System for Visually Impaired: Egyptian Banknote as a Study Case”, Menoufia, Egypt, 2015

[4] Visual impairment and blindness 2010. http://www.who.int/blind-ness/data_maps/VIFACTSHEETGLODAT2010full.pdf, accessed August 2016.

[5]. http://www.warse.org/IJATCSE/static/pdf/file/ijatcse27 912020.pd

[6] World Health Organization, “Visual Impairment and Blindness,” WHO Factsheet no. FS282 ,
Dec. 2014.

[7] Mingmin Zhao, Fadel Adib, Dina Katabi, "Emotion Recognition Using Wireless Signals."

[8] N. Senthil kumar, A. Abinaya, E. Arthi, M. Atchaya, M. Elakkiya, “SMART EYE FOR
VISUALLY IMPAIRED PEOPLE”, International Research Journal of Engineering and
Technology, Volume: 07 Issue: 06, June 2020.

[9] Liang-Bi Chen, Ming-Che Chen, "An Implementation of an Intelligent Assistance System for Visually Impaired/Blind People," IEEE, 2018.

[10] Shagufta Md. Rafique Bagwan, Prof. L. J. Sankpal, "VisualPal: A Mobile App for Object Recognition for the Visually Impaired," IEEE International Conference on Computer, Communication and Control (IC4-2015).

[11] Shahed Anzarus Sabab, Md. Hamjajul Ashmafee, “Blind Reader: An Intelligent Assistant
for Blind”, 19th International Conference on Computer and Information Technology, December
18-20, 2016, North South University, Dhaka, Bangladesh.

[12] Shreyash Patil, Oshin Gawande, Shivam Kumar, Pradip Shewale,“Assistant Systems for the
Visually Impaired”, International Research Journal of Engineering and Technology (IRJET),
Volume: 07 Issue: 01 | Jan 2020.

[13] Gagandeep Singh, Omkar Kandale, Kevin Takhtani, Nandini Dadhwal, “A Smart Personal
AI Assistant for Visually Impaired People”, International Research Journal of Engineering and
Technology (IRJET), Volume: 07 Issue: 06 | June 2020.

[14] Pilling, D., Barrett, P. and Floyd, M. (2004). Disabled people and the Internet: experiences,
barriers and opportunities. York, UK: Joseph Rowntree Foundation, unpublished.

[15] Porter, P. (1997) ‘The reading washing machine’, Vine, Vol. 106, pp. 34–7

[16] JAWS - https://www.freedomscientific.com/products/software/jaws/ accessed in April 2020

[17] Ferati, Mexhid & Vogel, Bahtijar & Kurti, Arianit & Raufi, Bujar & Astals, David. (2016).
Web accessibility for visually impaired people: requirements and design issues. 9312. 79-96.
10.1007/978-3-319-45916- 5_6.
