A project report on
“Android Application for Visually Impaired”
Under the guidance of
Mrs. Akhilaa
Assistant Professor
Dept. of ISE, CMRIT, Bengaluru
2019-20
VISVESVARAYA TECHNOLOGICAL UNIVERSITY
“Jnana Sangama”, Belgaum – 590 018
Certificate
This is to certify that the project entitled “Android Application for Visually Impaired” is a
bonafide work carried out by Shrikesh S (1CR16IS104), Tilak Hegde (1CR16IS117), Abhishek
Vijay (1CR16IS119), and Samruddhi S (1CR16IS092) in partial fulfillment of the requirements
for the award of the degree of Bachelor of Engineering in Information Science & Engineering of
Visvesvaraya Technological University, Belgaum, during the year 2019-20. It is certified that all
corrections/suggestions indicated during reviews have been incorporated in the report. The project
report has been approved as it satisfies the academic requirements in respect of the project work
prescribed for the said degree.
Name & Signature of Guide        Name & Signature of HOD        Signature of Principal
(Mrs. Akhilaa)                   (Dr. Farida Begam)             (Dr. Sanjay Jain)
External Viva
Name of the Examiners Signature with date
1.
2.
Acknowledgment
The satisfaction and euphoria that accompany the successful completion of any task would be
incomplete without mentioning the people who made it possible. Success is the epitome of hard
work and perseverance, but steadfast, encouraging guidance matters most of all.
So, with gratitude, we acknowledge all those whose guidance and encouragement served
as a beacon of light and crowned our efforts with success.
We would like to thank Dr. Farida Begam, Associate Professor and HOD, Department of
Information Science & Engineering, who shared her opinions and experience, through which we
received the information crucial for this project.
Finally, we would like to thank all our family members and friends, whose encouragement
and support were invaluable.
Shrikesh S (1CR16IS104)
Tilak Hegde (1CR16IS117)
Abhishek Vijay (1CR16IS119)
Samruddhi S (1CR16IS092)
ABSTRACT
As far as outdoor activities are concerned, the blind face difficulties in safe and independent
mobility, depriving them of a normal professional and social life. There are also issues of
communication and access to information. This project is for visually impaired people and is
based on the Android platform. The major module of the project scans and detects objects in
images captured by the smartphone's built-in camera. It is a dedicated image recognition
application running on an Android smartphone.
Nowadays, the mobile phone is one of the most powerful entities in the world; it helps us
communicate and eases our day-to-day tasks. This Android application integrates accessibility
with navigation and features for the safety of the user. The main aim is to guide visually impaired
people travelling from source to destination. The user gives input vocally and is guided through
audio commands. We have included features like image recognition, making it easier for the
visually impaired to identify the objects around them. The app also includes a news reader,
eliminating the issue of access to information: the user can hear the day-to-day news, which the
app reads out for them. In addition, the app helps the user set alarms as well as reminders.
Keywords: Image Recognition (TensorFlow), Navigation, News Reader, Alarm and Reminder
Keeper.
TABLE OF CONTENTS
Chapter 1 1-4
1.1 Introduction……………………………………………………………… 1
1.2 Existing System…………………………………………………………… 2
1.2.1 Drawbacks…………………………………………………………. 3
1.3 Proposed System…………………………………………………………. 3
1.4 Problem Statement………………………………………………………. 4
1.5 Objective of Project……………………………………………………… 4
Chapter 2 5-6
LITERATURE SURVEY ……………………………………….. 5
Chapter 3 7-11
THEORETICAL BACKGROUND ………………………….. 7
3.1 Android framework……………………………………………………….. 7
3.2 Android History……………………………………………………………. 7
3.3 Development methods used……………………………………………….. 8
3.4 Rapid application development method………………………………….. 8
3.5 System Testing……………………………………………………………… 9
3.6 TensorFlow object detection API…………………………………………. 9
3.7 General Framework for object detection…………………………………. 10
Chapter 4 12-15
SYSTEM REQUIREMENT SPECIFICATION………………….. 12
4.1 Introduction………………………………………………………………….. 13
4.1.1 Purpose…………………………………………………………………. 13
4.1.2 Need/Motivation……………………………………………………….. 13
4.2 Requirements………………………………………………………………… 14
4.2.1 Functional Requirements……………………………………………… 14
4.2.2 Non-Functional Requirements………………………………………... 14
4.2.2.1 Safety Requirements…………………………………………… 14
4.2.2.2 Security Requirements………………………………………… 14
4.2.2.3 Software Quality Attributes…………………………………… 15
4.3 Hardware Requirements…………………………………………………….. 15
4.4 Software Requirements……………………………………………………… 15
Chapter 5 16-17
SYSTEM ANALYSIS……………………………………………….. 16
5.1 Feasibility Study………………………………………………………………. 16
5.2 Economical Feasibility………………………………………………………... 16
5.3 Technical Feasibility………………………………………………………….. 17
Chapter 6 18-24
SYSTEM DESIGN………………………………………………….. 18
6.1 Product features………………………………………………………………. 18
6.2 Waterfall Model………………………………………………………………. 19
6.3 Use case diagram……………………………………………………………… 20
6.4 Sequence diagram…………………………………………………………….. 21
6.5 Class diagram…………………………………………………………………. 23
Chapter 7 25-30
IMPLEMENTATION……………………………………………….. 25
Chapter 8 31-33
TESTING……………………………………………………………... 31
8.1 Testing Methodologies…………………………………………………………. 31
8.1.1 Structural Testing……………………………………………………….. 31
8.1.2 Functional Testing………………………………………………………. 32
8.2 Performance and Reliability Testing…………………………………………. 32
8.3 System Testing…………………………………………………………………. 32
8.4 Usability Testing………………………………………………………………… 33
8.5 Mobility Testing………………………………………………………………... 33
8.6 GUI Testing……………………………………………………………………… 33
Chapter 9 33-36
SCREENSHOTS WITH DESCRIPTION…………………………… 33
Chapter 10 37
CONCLUSION AND FUTURE SCOPE……………………………… 37
Chapter 11 38
REFERENCES…………………………………………………………. 38
Chapter 1
PREAMBLE
1.1 Introduction
Android software development is the process by which new applications are created for devices
running the Android operating system. Google states that "Android apps can be written
using Kotlin, Java, and C++ languages" using the Android software development kit (SDK), while
using other languages is also possible. All non-JVM languages, such as Go, JavaScript, C, C++
or assembly, need the help of JVM language code, which may be supplied by tools, likely with
restricted API support. Some programming languages and tools allow cross-platform app support
(i.e. for both Android and iOS). Third-party tools, development environments, and language support
have also continued to evolve and expand since the initial SDK was released in 2008. In addition,
with major business entities like Walmart, Amazon, and Bank of America looking to engage
customers and sell through mobile devices, mobile application development is witnessing a
transformation.
This Android application is intended to help the partially blind carry out their day-to-day duties
more easily. Smartphone technologies and mobile applications make our daily activities easier to
carry out. As far as these activities are concerned, the blind face difficulties in safe and
independent mobility, depriving them of a normal professional and social life. They also face
issues such as communication and access to information. The ability to navigate from one place
to another is an integral part of daily life; vision plays a critical role in this, which makes it
difficult for visually impaired people. Although moving through a familiar environment without
vision is comparatively easy, navigating unfamiliar places without vision is very difficult.
Despite this, visually challenged people travel to different places independently on a daily basis.
To facilitate safe and efficient navigation, however, visually challenged individuals must be
guided properly. The blind also face difficulties in recognising the objects around them. Image
recognition, also known as computer vision, allows applications using specific algorithms to
understand images or videos; it helps identify objects through a mobile device's camera.
Nowadays, the mobile phone is one of the most powerful entities in the world; it helps us
communicate and eases our day-to-day tasks. This Android application integrates accessibility
with navigation and features for the safety of the user. The main aim is to guide visually impaired
people travelling from source to destination. The user gives input vocally and is guided through
audio commands. We have included features like image recognition, making it easier for the
visually impaired to identify the objects around them. The app also includes a news reader,
eliminating the issue of access to information: the user can hear the day-to-day news, which the
app reads out for them. In addition, the app helps the user set alarms as well as reminders.
1.2 Existing System
TapTapSee is an app designed to help the blind and visually impaired identify objects they
encounter in their daily lives. The user simply double-taps the screen to take a photo of anything
around them at any angle and hears the app speak the identification back. However, this app has
trouble identifying currency. Therefore, in our project we have implemented image recognition
using TensorFlow, such that it identifies various objects, including currency.
Another application, part of Google's Android accessibility services, is designed to help visually
impaired users operate their mobile devices. The app monitors and speaks out every movement the
user makes on his or her phone. It can also read out texts for them.
iSee is an Android-based application that benefits from commercially available technology to help
visually impaired people improve their day-to-day activities. In this app, the user just has to hold
the phone, point it anywhere he or she desires, and tap on the screen. The application's algorithm
runs in the background and then communicates audibly, via a voice message, the object type,
name, and description.
Therefore, our aim was to combine all these features and implement them in a single Android
application, thereby making life easier for the visually impaired.
1.2.1 Drawbacks
1. Although the code is optimised, its readability is inadequate, and it cannot be modified by
the user even if required; any change to the code requires a change in the model itself.
2. The complexity of, and effort required by, the code has led to reduced usage of such
applications, and manual code conversion is preferred.
1.3 Proposed System
An Android application to help the partially blind carry out their day-to-day duties more easily.
The image recognition and navigation features of the application enable users to identify items
proficiently and navigate to the items that are of prime importance to them.
Chapter 2
LITERATURE SURVEY
Early work in the field of helping the blind focused on developing scientific methods to cure
partial blindness. In response to this popular discourse, technology seeped into the cracks of
science, not only to improve the standard of living but also to engage in curing blindness.
According to the WHO, the estimated number of visually impaired people in the world is 285
million: 39 million are blind and 246 million have low vision; 65% of the visually impaired and
82% of all blind people are 50 years or older. We are looking to build an Android application to
close the gap between what is and what could be.
The World Health Organization and the International Agency for the Prevention of Blindness's
Vision 2020 initiative states a goal of "eliminating avoidable blindness by 2020." We are looking
to do our part in completing that goal.
Blind-Not is a powerful tool that initiates a process to help the blind carry out their everyday
duties more efficiently. The initial software will be built on the Android platform. We are looking
to include numerous modules such as Google APIs, Firebase to store user data, Python libraries,
machine learning, etc.
According to our findings, there are few applications that work towards the same cause.
Few applications are mentioned below:
• LookTel: The Money Identifier Mobile App:
LookTel Money Reader instantly recognizes currency and speaks the denomination, enabling
people experiencing visual impairments or blindness to quickly and easily identify and count
bills.
• TapTapSee:
TapTapSee is designed to help the blind and visually impaired identify objects they encounter
in their daily lives. Simply double tap the screen and take a photo of anything, at any angle.
You’ll hear the app speak the identification back to you.
• iSee:
iSee is an android based application that benefits from commercially available technology to
help the visually impaired people to improve their day-to-day activities. In this app, the user
has to just hold the point and point anywhere he/she desires and tap on the screen. The
application’s algorithm runs in the background, and then communicates audibly, via a voice
message, the object type, name and description.
Further research indicates that Blind-Not will not only derive features from these applications but
also add the following functionalities:
• Navigation: Aims to guide the visually impaired travelling from source to destination.
• Image Recognition: Helps the user recognize the objects that he or she scans.
• News Reader: Keeps the user updated with the day-to-day news.
According to an excerpt by Mr. Paul Adam on Quora, one of the primary difficulties for the blind
using an application is the numerous buttons and forms that are not labelled. These issues are
common in most applications dedicated to the blind. Blind-Not will address these issues and
provide a user-friendly environment.
In conclusion, although Blind-Not might not be the end goal, it is definitely a step towards that
goal.
Chapter 3
THEORETICAL BACKGROUND
The background theory in this work serves as a foundation for developing the application. It
allows us to better understand the principles and technologies of Android development and gives
us an idea of the further structure of the prototype project.
3.1 Android Framework
Android is an open-source platform. It was created by Google and is owned by the Open Handset
Alliance. It was designed with the goal of "accelerating innovation in mobile", and as such,
Android has taken over the field of mobile innovation. It is a free and open platform that separates
hardware from the software that runs on it, which allows many more devices to run the same
applications and creates a friendlier environment for developers and consumers. Android is a
complete software package for a mobile device. Since the beginning, the Android team has offered
a development kit (tools and frameworks) for creating mobile applications as quickly and easily as
possible. In some cases, you do not specifically need an Android phone, though you are very
welcome to have one: the kit works right out of the box, and of course users can customize it for
their particular needs. For manufacturers, it is a ready and free solution for their devices. Except
for device-specific drivers, the Android community provides everything else needed to create a
device.
3.2 Android History
The actual history of Android starts with Google's purchase of Android Inc. in 2005, but
development did not start immediately. Real progress on the Android platform began in 2007,
when the Open Handset Alliance announced Android as an open-source platform, followed a year
later by the Android SDK 1.0. In the same year, 2008, the G1 phone was produced by HTC and
retailed through the T-Mobile carrier. Over the next two years, four versions of Android came out.
By 2010 there were at least 60 devices running Android, and it had become the second most
widespread mobile platform after BlackBerry.
3.6 TensorFlow Object Detection API
The TensorFlow object detection API is a framework for creating deep learning networks that
solve object detection problems. The framework ships with pretrained models, referred to as the
Model Zoo, including a collection of models trained on the COCO dataset, the KITTI dataset, and
the Open Images dataset. These models can be used directly for inference if we are interested only
in the categories present in those datasets. They are also useful for initializing your own models
when training on a novel dataset.
[Table: architectures used in the pretrained models — not reproduced here.]
3.7 General Framework for Object Detection
A general framework for object detection proceeds in three steps (a minimal sketch of the final
step follows the list):
1. First, a deep learning model or algorithm is used to generate a large set of bounding boxes
spanning the full image (that is, an object localization component).
2. Next, visual features are extracted for each of the bounding boxes; based on these features,
it is determined whether and which objects are present in the boxes (that is, an object
classification component).
3. In the final post-processing step, overlapping boxes are combined into a single bounding
box (that is, non-maximum suppression).
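To make that final step concrete, the following is a minimal sketch (ours, not taken from the
report's code base) of non-maximum suppression in plain Java: overlapping boxes that refer to the
same object are collapsed into the single highest-scoring box.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class Nms {
    // A detection: box corner coordinates plus a confidence score.
    static class Box {
        float x1, y1, x2, y2, score;
        Box(float x1, float y1, float x2, float y2, float score) {
            this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2; this.score = score;
        }
    }

    // Intersection-over-union of two boxes: overlap area divided by union area.
    static float iou(Box a, Box b) {
        float ix = Math.max(0, Math.min(a.x2, b.x2) - Math.max(a.x1, b.x1));
        float iy = Math.max(0, Math.min(a.y2, b.y2) - Math.max(a.y1, b.y1));
        float inter = ix * iy;
        float areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
        float areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
        return inter / (areaA + areaB - inter);
    }

    // Keep the highest-scoring box; drop any box overlapping a kept box by
    // more than the threshold; repeat down the score-sorted list.
    static List<Box> nms(List<Box> boxes, float iouThreshold) {
        List<Box> sorted = new ArrayList<>(boxes);
        sorted.sort(Comparator.comparingDouble((Box b) -> b.score).reversed());
        List<Box> kept = new ArrayList<>();
        for (Box candidate : sorted) {
            boolean suppressed = false;
            for (Box keptBox : kept) {
                if (iou(candidate, keptBox) > iouThreshold) { suppressed = true; break; }
            }
            if (!suppressed) kept.add(candidate);
        }
        return kept;
    }
}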
Chapter 4
SYSTEM REQUIREMENT SPECIFICATION
The SRS document states in precise and explicit language the functions and capabilities a software
system (i.e., a software application, an e-commerce website, and so on) must provide, as well as
any required constraints by which the system must abide. The SRS also functions as a blueprint
for completing a project with as little cost growth as possible. The SRS is often referred to as the
"parent" document because all subsequent project management documents, such as design
specifications, statements of work, software architecture specifications, testing and validation
plans, and documentation plans, are related to it.
4.1 INTRODUCTION
Early work in the field of helping the blind focused on developing scientific methods to cure
partial blindness. In response to this popular discourse, technology seeped into the cracks of
science, not only to improve the standard of living but also to engage in curing blindness.
According to the WHO, the estimated number of visually impaired people in the world is 285
million: 39 million are blind and 246 million have low vision; 65% of the visually impaired and
82% of all blind people are 50 years or older.
4.1.1 Purpose
The project “Vision” is an automated system for visually impaired individuals. We are focusing
on providing ease of use and multiple functionalities consolidated within an Android application.
We are looking to build an Android application to close the gap between what is and what could
be. The main objective of the project is to provide facilities like image recognition and image
labelling.
4.1.2 Need/Motivation
The motivation for this particular topic is derived from the lack of cheaply available options for
visually impaired individuals. According to a study in The Lancet, there were 8.8 million blind
people in India in 2015. These numbers are expected to decrease according to a recent excerpt in
the Hindustan Times, which states: "The government is set to change a four-decade-old definition
of blindness to bring it in line with the WHO criteria and ensure the Indian data on blindness meets
the global estimates. The change in definition will bring down the number of blind persons by 4
million in India."
4.2 Requirements
4.2.2.1 Safety Requirements
If there is extensive damage to a wide portion of the database due to a catastrophic failure, such as
a disk crash, the recovery method restores a past copy of the database that was backed up to
archival storage (typically tape) and reconstructs a more current state by reapplying or redoing the
operations of committed transactions from the backed-up log, up to the time of failure. We are
implementing database features with Firebase by Google; hence all the stored user data will be
safely handled, and in case of an application crash, the data can be reloaded and replicated without
assistance.
4.2.2.2 Security Requirements
Firebase Auth will provide the authentication and validation required to protect the user data and
details provided at the time of registration (a minimal sketch of this flow is shown below).
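As an illustration, here is a minimal sketch of user registration with the standard Firebase Auth
API. This is our sketch, not the report's listing (which appears in Chapter 7); the email and
password values are placeholders supplied by the caller.

// Minimal registration sketch using the standard Firebase Auth API.
private void register(String email, String password) {
    FirebaseAuth auth = FirebaseAuth.getInstance();
    auth.createUserWithEmailAndPassword(email, password)
            .addOnCompleteListener(task -> {
                if (task.isSuccessful()) {
                    // Registration succeeded; the user's details are now
                    // protected behind Firebase's authentication layer.
                    FirebaseUser user = auth.getCurrentUser();
                } else {
                    // Validation failed (weak password, malformed email, etc.).
                }
            });
}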
4.2.2.3 Software Quality Attributes
● CORRECTNESS: The system should respond appropriately to the different activities of
the user and should keep track of all records.
● MAINTAINABILITY: The system should be easy to maintain, with up-to-date
documentation of all its components.
● USABILITY: The system should satisfy the maximum number of users' needs.
4.3 Hardware Requirements
● 256 MB RAM
● Android smartphone
4.4 Software Requirements
● Adobe XD
● Java
● Firebase
Chapter 5
SYSTEM ANALYSIS
5.1 Feasibility Study
Analysis is the process of finding the best solution to a problem. System analysis is the process by
which we learn about existing problems, define objectives and requirements, and evaluate
solutions. It is a way of thinking about the organization and the problems it involves, and a set of
techniques that help in solving these problems. The feasibility study plays an important role in
system analysis, as it gives the target for design and development.
5.2 Economical Feasibility
This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and development
of the system is limited, and the expenditures must be justified. The Vision application will be
economically viable for all users, as it will be an open-source application and reduces the need for
manual aid.
5.3 Technical Feasibility
This study is carried out to check the technical feasibility, that is, the technical requirements of the
system. Any system developed must not place a high demand on the available technical resources,
as this would lead to high demands being placed on the client. The developed system must have
modest requirements, and only minimal or no changes are required for implementing this system.
Keeping the above in view, nowadays all organizations are automating the repetitive and
monotonous work previously done by humans. The key process areas of the current system are
nicely amenable to automation, and hence the technical feasibility is proved beyond doubt.
Chapter 6
SYSTEM DESIGN
6.1 Product Features
● Unauthorized access is prevented: only an authorized user can address complaints and
access the resources.
● The user will be able to use the Image Recognition and Image Labelling features for object
detection.
● Users can geo-tag objects of primary use to them and will be navigated to those objects.
6.2 Waterfall Model
● Fewer human resources are required, since once one phase is finished, those people can
start working on the next phase.
6.3 Use Case Diagram
A use case diagram at its simplest is a representation of a user's interaction with the system that
shows the relationship between the user and the different use cases in which the user is involved.
A use case diagram can identify the different types of users of a system and the different use cases,
and it will often be accompanied by other types of diagrams as well.
In software and systems engineering, a use case is a list of steps, typically defining interactions
between a role (known in Unified Modelling Language (UML) as an "actor") and a system, to
achieve a goal. The actor can be a human, an external system, or time.
In systems engineering, use cases are used at a higher level than within software engineering,
often representing missions or stakeholder goals. The detailed requirements may then be captured
in Systems Modelling Language (SysML) or as contractual statements.
The sequence of activities carried out is the same as in the other diagrams. The use case for this
module indicates the user's interaction with the system as a whole rather than with individual
modules. All the encryption mechanisms are carried out via the login page, which redirects the
user to the particular functionality that he or she wishes to use.
6.4 Sequence Diagram
Sequence diagrams are an easy and intuitive way of describing the behaviour of a system by
viewing the interaction between the system and its environment. A sequence diagram shows an
interaction arranged in a time sequence. It has two dimensions: the vertical dimension represents
time; the horizontal dimension represents the objects involved in the interaction.
A sequence diagram is an interaction diagram that shows how processes operate with one another
and in what order. It is a construct of a Message Sequence Chart. A sequence diagram shows
object interactions arranged in time sequence. It depicts the objects and classes involved in the
scenario and the sequence of messages exchanged between the objects needed to carry out the
functionality of the scenario. Sequence diagrams are typically associated with use case realizations
in the Logical View of the system under development. Sequence diagrams are sometimes called
event diagrams or event scenarios.
A sequence diagram shows, as parallel vertical lines (lifelines), different processes or objects that
live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order
in which they occur. This allows the specification of simple runtime scenarios in a graphical
manner.
6.5 Class Diagram
In software engineering, a class diagram in the Unified Modelling Language (UML) is a type of
static structure diagram that describes the structure of a system by showing the system's classes,
their attributes, operations (or methods), and the relationships among objects.
The class diagram is the main building block of object-oriented modelling. It is used both for
general conceptual modelling of the systematics of the application and for detailed modelling,
translating the models into programming code. Class diagrams can also be used for data modelling.
The classes in a class diagram represent both the main objects and interactions in the application
and the classes to be programmed.
In the diagram, classes are represented with boxes which contain three parts (a small example
follows the list):
• The top part contains the name of the class. It is printed in bold and centered, and the first
letter is capitalized.
• The middle part contains the attributes of the class. They are left-aligned and the first letter
is lowercase.
• The bottom part contains the methods the class can execute. They are also left-aligned and
the first letter is lowercase.
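For illustration, here is a hypothetical class from this application (ours, not from the report) and
how it would map onto the three compartments:

// Top compartment: the class name (NewsReader), capitalized.
public class NewsReader {
    // Middle compartment: attributes, lowercase and left-aligned.
    private String feedUrl;

    // Bottom compartment: the methods the class can execute.
    public void readHeadlines() {
        // ... fetch the RSS feed and speak each headline ...
    }
}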
In the design of a system, a number of classes are identified and grouped together in a class diagram
which helps to determine the static relations between those objects. With detailed modelling, the
classes of the conceptual design are often split into a number of subclasses.
In order to further describe the behaviour of systems, these class diagrams can be complemented
by a state diagram or UML state machine.
Chapter 7
IMPLEMENTATION
This Android application involves four modules: image recognition, navigation, a news reader, and
a time and reminder keeper. Image recognition enables the visually impaired to identify the objects
in their surroundings. Navigation lets the blind travel from source to destination without any
problem. The news reader reads out the day-to-day news headlines to blind users, keeping them
updated. The time and reminder keeper helps the blind set alarms and reminders (a sketch of this
module follows below).
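The report does not show the reminder module's code; the following is a hedged sketch of how a
reminder could be scheduled with Android's standard AlarmManager. AlarmReceiver is a
hypothetical BroadcastReceiver (not in the original listing) that would announce the reminder
aloud when it fires.

import android.app.AlarmManager;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;

public class ReminderScheduler {
    // Schedule a one-shot reminder at the given wall-clock time.
    public static void schedule(Context context, long triggerAtMillis) {
        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        Intent intent = new Intent(context, AlarmReceiver.class);
        PendingIntent pi = PendingIntent.getBroadcast(
                context, 0, intent, PendingIntent.FLAG_IMMUTABLE);
        // RTC_WAKEUP fires the reminder even if the device is asleep.
        am.setExact(AlarmManager.RTC_WAKEUP, triggerAtMillis, pi);
    }
}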
The image recognition module is implemented using TensorFlow. Image recognition helps identify
objects in a picture taken with the phone's camera: the user just takes a snapshot, and the app
labels the objects identified in the picture, as sketched below.
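Here is a sketch of how such on-device labelling can be wired up with the TensorFlow Lite Java
API. This is our illustration, not the report's listing: the model and label file names, the 224x224
input size, and the [0, 1] normalisation are assumptions that depend on the actual model used.

import android.content.Context;
import android.graphics.Bitmap;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.support.common.FileUtil;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.List;

public class ImageLabeler {
    private final Interpreter interpreter;
    private final List<String> labels;
    private static final int SIZE = 224; // model input resolution (assumed)

    public ImageLabeler(Context context) throws Exception {
        // Load the bundled .tflite model and its label file from assets.
        interpreter = new Interpreter(FileUtil.loadMappedFile(context, "model.tflite"));
        labels = FileUtil.loadLabels(context, "labels.txt");
    }

    // Run the model on a camera snapshot and return the best-scoring label.
    public String label(Bitmap photo) {
        Bitmap scaled = Bitmap.createScaledBitmap(photo, SIZE, SIZE, true);
        ByteBuffer input = ByteBuffer.allocateDirect(4 * SIZE * SIZE * 3)
                .order(ByteOrder.nativeOrder());
        int[] pixels = new int[SIZE * SIZE];
        scaled.getPixels(pixels, 0, SIZE, 0, 0, SIZE, SIZE);
        for (int p : pixels) { // normalise RGB channels to [0, 1]
            input.putFloat(((p >> 16) & 0xFF) / 255f);
            input.putFloat(((p >> 8) & 0xFF) / 255f);
            input.putFloat((p & 0xFF) / 255f);
        }
        input.rewind();
        float[][] output = new float[1][labels.size()];
        interpreter.run(input, output);
        int best = 0;
        for (int i = 1; i < labels.size(); i++)
            if (output[0][i] > output[0][best]) best = i;
        return labels.get(best); // this label is then spoken to the user
    }
}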
The navigation module aims to guide the visually impaired travelling from source to destination.
The user can pin a location on the map and is then guided through audio commands. This module
is implemented using Google APIs.
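The report does not reproduce this module's code. One common way to obtain turn-by-turn voice
guidance, sketched here under the assumption that the app delegates to the Google Maps app via
its documented navigation intent ("destination" is a placeholder taken from the user's voice
input; the method assumes it lives inside an Activity):

// Launch Google Maps turn-by-turn navigation to the spoken destination.
private void navigateTo(String destination) {
    Uri gmmIntentUri = Uri.parse("google.navigation:q=" + Uri.encode(destination));
    Intent mapIntent = new Intent(Intent.ACTION_VIEW, gmmIntentUri);
    mapIntent.setPackage("com.google.android.apps.maps"); // open in Google Maps only
    if (mapIntent.resolveActivity(getPackageManager()) != null) {
        startActivity(mapIntent); // Maps then guides the user with audio prompts
    }
}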
The news reader module is voice activated. It reads each day's headlines, so that users stay updated
with the day-to-day news. It is built from an RSS news feed fed into the application; it is real-time
and updates quickly.
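A minimal sketch of the read-aloud step using Android's standard TextToSpeech API follows (the
headline string stands in for one parsed RSS item; the wrapper class is ours, not the report's):

import android.speech.tts.TextToSpeech;
import java.util.Locale;

// Reads one RSS headline aloud; constructed with an Android Context
// (e.g. from an Activity).
public class HeadlineReader {
    private TextToSpeech tts;

    public HeadlineReader(android.content.Context context, final String headline) {
        tts = new TextToSpeech(context, new TextToSpeech.OnInitListener() {
            @Override
            public void onInit(int status) {
                if (status == TextToSpeech.SUCCESS) {
                    // Engine ready: speak the headline, replacing any queued speech.
                    tts.setLanguage(Locale.US);
                    tts.speak(headline, TextToSpeech.QUEUE_FLUSH, null, "headline");
                }
            }
        });
    }
}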
The application also keeps a local user record, apparently via a Room database. The following
excerpt (reconstructed here; the report's listing begins mid-method, so the enclosing method and
the missing first branch are assumptions) checks the entered password against the stored record
and opens the main page on success:

// Local login check (reconstructed around the original fragment).
private void login(User temp, String pass) {
    if (temp == null) {
        // Assumed first branch: no matching user record was found.
        Toast.makeText(MainActivity.this, "User not found", Toast.LENGTH_SHORT).show();
    } else {
        if (temp.getPassword().equals(pass)) {
            // Mark the user as logged in and persist the change.
            temp.setIsloggedIn(1);
            roomDAO.Update(temp);
            AppDatabase.destroyInstance();
            Intent intent = new Intent(MainActivity.this, MainPage.class);
            startActivity(intent);
            finish();
        } else {
            Toast.makeText(MainActivity.this, "Invalid Password", Toast.LENGTH_SHORT).show();
        }
    }
}
Firebase Authentication is implemented, which provides backend services, easy-to-use SDKs, and
ready-made UI libraries to authenticate users to the app. It supports authentication using
passwords, phone numbers, popular federated identity providers like Google, Facebook and
Twitter, and more.
Firebase Auth will provide the authentication and validation required to protect the user data and
details provided at the time of registration.
package com.example.phoneauth;

import android.content.Intent;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;

import com.google.firebase.auth.FirebaseAuth;
import com.google.firebase.auth.FirebaseUser;

public class MainActivity extends AppCompatActivity {

    private FirebaseAuth firebaseAuth;
    FirebaseAuth.AuthStateListener mAuthListener;
    private String uid;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        TextView text = findViewById(R.id.txtView);
        Button button = findViewById(R.id.signOut);

        // Check whether a user is already signed in with Firebase.
        firebaseAuth = FirebaseAuth.getInstance();
        FirebaseUser user = firebaseAuth.getCurrentUser();
        if (user != null) {
            text.setText("You are logged in. Your Uid is: " + user.getUid());
        } else {
            // No signed-in user: redirect to the login screen and clear the back stack.
            Intent intent = new Intent(MainActivity.this, LoginActivity.class);
            intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP
                    | Intent.FLAG_ACTIVITY_NEW_TASK
                    | Intent.FLAG_ACTIVITY_CLEAR_TASK);
            startActivity(intent);
        }

        // Sign the user out and return to the login screen.
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                signout();
                Intent intent = new Intent(MainActivity.this, LoginActivity.class);
                intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP
                        | Intent.FLAG_ACTIVITY_NEW_TASK
                        | Intent.FLAG_ACTIVITY_CLEAR_TASK);
                startActivity(intent);
            }
        });
    }

    private void signout() {
        firebaseAuth.signOut();
        Log.e("OUT", "OUT");
    }
}
Chapter 8
TESTING
System testing is actually a series of different tests whose primary purpose is to fully exercise the
computer-based system. Although each test has a different purpose, all work to verify that all the
system elements have been properly integrated and perform their allocated functions. The testing
process is carried out to make sure that the product does exactly what it is supposed to do. The
testing stage tries to achieve the following goal:
• To affirm the quality of the project.
8.1 Testing Methodologies
There are many different types of testing methods or techniques used as part of the software
testing methodologies for mobile applications. Some of these methodologies are described below.
8.1.1 Structural Testing
Mobile application languages add specific constructs for managing mobility, sensing, and energy
consumption. The peculiarities of these new programming languages have to be taken into account
when producing control or data flow graphs (and their respective coverage criteria) from the
mobile programming language. New coverage criteria (and, if needed, new control and data flow
graphs) should be devised to best account for the new mobility, sensing, and energy constructs. In
case the source code is not available, new bytecode analysis tools can be built.
8.4 Usability Testing
Validating a mobile native app generally involves testing mobile device-based gestures, content,
interfaces, and the general user experience, for example, how the user interacts with the camera,
GPS, or a fingerprint sensor. In contrast, usability testing for mobile Web apps typically focuses on
Web-based GUI content, interfaces, and user operation flows. For example, a mobile travel app
such as Dwellable (www.dwellable.com) supports travel information and related content on mobile
browsers in different languages based on user location. Validating such a mobile application needs
mobile usability testing to assure the quality of mobile Web content as well as its presentation
formats, styles, and languages.
8.5 Mobility Testing
Mobility testing on a native device usually involves testing the device's location-based functions,
features, data, profiles, and APIs. For example, a mobile travel app's content should be delivered
and presented to users based on their current location; this would include airport information,
rental service offices, maps, attractions, and related data. If the device cannot accept that data, the
testing engineer needs to know. In contrast, mobility testing for mobile Web apps focuses on
testing the quality of location-based system functions, data, and behaviours.
8.6 GUI Testing
There are two main challenges we foresee in GUI testing of mobile applications:
i) testing whether different devices provide an adequate rendering of data, and
ii) testing whether native applications are correctly displayed on different devices.
A sketch of a simple automated GUI test follows.
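As an illustration, a minimal Espresso test against the MainActivity from Chapter 7 might look
like this. This is a sketch under stated assumptions: it uses the AndroidX test libraries (the report
itself uses the older support libraries), and R.id.signOut is the sign-out button from that listing.

import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

import androidx.test.ext.junit.rules.ActivityScenarioRule;
import org.junit.Rule;
import org.junit.Test;

public class MainActivityGuiTest {
    @Rule
    public ActivityScenarioRule<MainActivity> rule =
            new ActivityScenarioRule<>(MainActivity.class);

    @Test
    public void signOutButtonIsDisplayedAndClickable() {
        // Verify the control renders on this device, then exercise it.
        onView(withId(R.id.signOut)).check(matches(isDisplayed()));
        onView(withId(R.id.signOut)).perform(click());
    }
}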
Chapter 9
SCREENSHOTS WITH DESCRIPTION
Image recognition refers to the task of inputting an image into a neural network and having it
output some kind of label for that image.
9.3 Navigation:
➔ Aim is to guide the visually impaired to travel from source to destination.
➔ The user can pin a location on the map.
➔ The user is then guided through audio commands.
9.4 News Reader:
➔ It is voice activated and reads each day's headlines.
➔ Built from an RSS news feed fed into the application; it is real-time and updates fast.
Chapter 10
CONCLUSION AND FUTURE SCOPE
We have presented an Android application for the assistance of visually challenged users that
guides the user in navigating from source to destination and helps him notify his contacts of his
location. The app was developed with Google Maps integrated to search different locations on the
map. An SMS-sending module is integrated into the app: on triggering the SMS button, an SMS
with the user's current location is sent to registered users. The application requires internet
connectivity and a GPS-enabled smartphone and thus can be easily accessed by the user. The
developed application is more accurate than the existing systems. Its use will surely ease some of
the difficulties faced by visually challenged users and can help them in achieving an independent
livelihood.
The government is set to change a four-decade-old definition of blindness to bring it in line with
the WHO criteria and ensure the Indian data on blindness meets the global estimates. The change
in definition is expected to bring down the number of blind persons in India by 4 million by 2020.
We are looking to be part of that goal and do our part for the society.
Chapter 11
REFERENCES
[1] Fu Cheng, Build Mobile Apps with Ionic 4 and Firebase: Hybrid Mobile App Development,
2nd Edition, 2018.
[2] Shagufta Md. Rafique Bagwan and L. J. Sankpal, "VisualPal: A Mobile App for Object
Recognition for the Visually Impaired," Proc. of IEEE, Sept. 2015.
[3] Nada N. Saeed, Mohammad A. M. Salem, and Alaa Khamis, "Android-Based Object
Recognition for the Visually Impaired," IEEE, Dec. 2013.
[4] Senjuti Dutta, Mridul S. Barik, Chandreyee Chowdhury, and Deep Gupta, "Divya-Dristi: A
Smartphone-Based Campus Navigation System for the Visually Impaired," IEEE, Jan. 2018.
[5] Bill Phillips, Chris Stewart, Kristin Marsicano, and Brian Gardner, Android Programming:
The Big Nerd Ranch Guide, Nov. 4, 2015.