Project Report Final
PROJECT REPORT ON
“WILDLIFE ANIMAL TRACKING & LOCATION IDENTIFICATION
SYSTEM”
Submitted in partial fulfilment of the requirements for the award of the degree of Bachelor of Engineering
in Electrical and Electronics Engineering
Submitted by:
HENISH THINGUJAM 1MV14EE043
JACKSON THOUNAOJAM 1MV14EE122
MAINAK BANERJEE 1MV14EE059
MANJARESH SINGH 1MV14EE062
Under the guidance of
MRS. P SUMALATHA
Dept of EEE Sir MVIT,
Bengaluru
CERTIFICATE
It is certified that the project work entitled “Wildlife Animal Tracking and Location
Identification System” is a bonafide work carried out by Henish Thingujam (1MV14EE043),
Mainak Banerjee (1MV14EE059), Manjaresh Singh (1MV14EE062) and Jackson Thounaojam
(1MV14EE122) in partial fulfilment for the award of the Degree of Bachelor of Engineering
in Electrical and Electronics Engineering of the Visvesvaraya Technological University,
Belgaum, during the year 2017-2018. It is certified that all corrections and suggestions
indicated for Internal Assessment have been incorporated in the report. The project report has
been approved as it satisfies the academic requirements in respect of project work prescribed
for the course of Bachelor of Engineering.
Name and Signature of Guide: ……………………
Name and Signature of HOD: ……………………
Name and Signature of Principal: ……………………
ABSTRACT
This project is used to track the location of animals in wildlife reserves or national parks. It
utilizes a GPS (Global Positioning System) receiver, a GSM (Global System for Mobile
communications) modem and an ARM processor.

The forest officer receives an SMS containing the coordinates of the area where the animal is
located. If an animal meets with an accident or is hurt, this project provides the means to find
it, so that it can be caught and given the required treatment to tend its wounds.

The main problem in such situations is that in large wildlife sanctuaries these animals are
really hard to locate, so the entire area has to be searched, which is tiresome. Instead, the
device is fitted to each animal, which is then left to roam freely. This project emphasises
preserving wildlife and attending to the needs of the animals whenever we want.

Less manpower, less fuel for vehicles, treatment of animals as soon as they are wounded, and
less time taken to find an animal: these are some of the advantages of this project.
DECLARATION
We hereby declare that the entire project work embodied in this dissertation has been carried
out by us and that no part of it has been submitted previously for any degree or diploma of
any institution.
Place: Bangalore
HENISH THINGUJAM(1MV14EE043)
JACKSON THOUNAOJAM(1MV14EE122)
MAINAK BANERJEE(1MV14EE059)
MANJARESH SINGH(1MV14EE062)
ACKNOWLEDGEMENT
CONTENTS
CHAPTER 1
INTRODUCTION TO EMBEDDED
SYSTEMS
Embedded system
An embedded system is a computer system with a dedicated function within a larger
mechanical or electrical system. Modern embedded systems are often based on
microcontrollers (i.e. CPUs with integrated
memory or peripheral interfaces), but ordinary microprocessors (using external chips for
memory and peripheral interface circuits) are also common, especially in more-complex
systems. In either case, the processor(s) used may be types ranging from general purpose to
those specialized in a certain class of computations, or even custom designed for the application
at hand. A common standard class of dedicated processors is the digital signal processor (DSP).
Since the embedded system is dedicated to specific tasks, design engineers can optimize it to
reduce the size and cost of the product and increase the reliability and performance. Some
embedded systems are mass-produced, benefiting from economies of scale.
Embedded systems range from portable devices such as digital watches and MP3 players, to
large stationary installations like traffic lights and factory controllers, and large complex
systems like hybrid vehicles, MRI machines, and avionics. Complexity varies from low, with a single
microcontroller chip, to very high with multiple units, peripherals and networks mounted inside
a large chassis or enclosure.
1.1 History
One of the very first recognizably modern embedded systems was the Apollo Guidance
Computer, developed ca. 1965 by Charles Stark Draper at the MIT Instrumentation Laboratory.
At the project's inception, the Apollo guidance computer was considered the riskiest item in
the Apollo project as it employed the then newly developed monolithic integrated circuits to
reduce the size and weight. An early mass-produced embedded system was the Autonetics D-
17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II
went into production in 1966, the D-17 was replaced with a new computer that was the first
high-volume use of integrated circuits.
Since these early applications in the 1960s, embedded systems have come down in price and
there has been a dramatic rise in processing power and functionality. An early microprocessor
for example, the Intel 4004, was designed for calculators and other small systems but still
required external memory and support chips. In 1978 the National Engineering Manufacturers
Association released a "standard" for programmable microcontrollers, including almost any
computer-based controllers, such as single board computers, numerical, and event-based
controllers.
As the cost of microprocessors and microcontrollers fell it became feasible to replace expensive
knob-based analog components such as potentiometers and variable capacitors with up/down
buttons or knobs read out by a microprocessor even in consumer products. By the early 1980s,
memory, input and output system components had been integrated into the same chip as the
processor forming a microcontroller. Microcontrollers find applications where a general-
purpose computer would be too costly.
1.2 Applications
Consumer electronics include MP3 players, mobile phones, video game consoles, digital
cameras, GPS receivers, and printers. Household appliances, such as microwave ovens,
washing machines and dishwashers, include embedded systems to provide flexibility,
efficiency and features. Advanced HVAC systems use networked thermostats to more
accurately and efficiently control temperature that can change by time of day and season. Home
automation uses wired- and wireless-networking that can be used to control lights, climate,
security, audio/visual, surveillance, etc., all of which use embedded devices for sensing and
controlling.
Transportation systems from flight to automobiles increasingly use embedded systems. New
airplanes contain advanced avionics such as inertial guidance systems and GPS receivers that
also have considerable safety requirements. Various electric motors — brushless DC motors,
induction motors and DC motors — use electric/electronic motor controllers. Automobiles,
electric vehicles, and hybrid vehicles increasingly use embedded systems to maximize
efficiency and reduce pollution. Other automotive safety systems include anti-lock braking
system (ABS), Electronic Stability Control (ESC/ESP), traction control (TCS) and automatic
four-wheel drive.
Medical equipment uses embedded systems for vital signs monitoring, electronic stethoscopes
for amplifying sounds, and various medical imaging (PET, SPECT, CT, and MRI) for non-
invasive internal inspections. Embedded systems within medical equipment are often powered
by industrial computers.
Embedded systems are used in transportation, fire safety, safety and security, medical
applications and life critical systems, as these systems can be isolated from hacking and thus,
be more reliable, unless connected to wired or wireless networks via on-chip 3G cellular or
other methods for IoT monitoring and control purposes.[citation needed] For fire safety, the
systems can be designed to have greater ability to handle higher temperatures and continue to
operate. In dealing with security, the embedded systems can be self-sufficient and be able to
deal with cut electrical and communication systems.
A new class of miniature wireless devices called motes are networked wireless sensors.
Wireless sensor networking, WSN, makes use of miniaturization made possible by advanced
IC design to couple full wireless subsystems to sophisticated sensors, enabling people and
companies to measure a myriad of things in the physical world and act on this information
through IT monitoring and control systems. These motes are completely self-contained, and
will typically run off a battery source for years before the batteries need to be changed or
charged.
Embedded Wi-Fi modules provide a simple means of wirelessly enabling any device that
communicates via a serial port.
1.3 Characteristics
Embedded systems are designed to do some specific task, rather than be a general-purpose
computer for multiple tasks. Some also have real-time performance constraints that must be
met, for reasons such as safety and usability; others may have low or no performance
requirements, allowing the system hardware to be simplified to reduce costs.
Embedded systems are not always standalone devices. Many embedded systems consist of
small parts within a larger device that serves a more general purpose. For example, the Gibson
Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the
Robot Guitar is, of course, to play music. Similarly, an embedded system in an automobile
provides a specific function as a subsystem of the car itself.
The program instructions written for embedded systems are referred to as firmware, and are
stored in read-only memory or flash memory chips. They run with limited computer hardware
resources: little memory, small or non-existent keyboard or screen.
More sophisticated devices that use a graphical screen with touch sensing or screen-edge
buttons provide flexibility while minimizing space used: the meaning of the buttons can change
with the screen, and selection involves the natural behaviour of pointing at what is desired.
Handheld systems often have a screen with a "joystick button" for a pointing device.
Some systems provide user interface remotely with the help of a serial (e.g. RS-232, USB, I²C,
etc.) or network (e.g. Ethernet) connection. This approach gives several advantages: extends
the capabilities of embedded system, avoids the cost of a display, simplifies BSP and allows
one to build a rich user interface on the PC. A good example of this is the combination of an
embedded web server running on an embedded device (such as an IP camera) or a network
router. The user interface is displayed in a web browser on a PC connected to the device,
therefore needing no software to be installed.
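As a rough sketch of this remote-UI pattern, assuming a device capable of running Python, an embedded web server can render its status page to any browser on the network. The device name, readings and port below are hypothetical placeholders, not part of this project:

```python
# Minimal sketch of the remote-UI idea: the embedded device serves its own
# status page, so any PC browser becomes the user interface.
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_status_page(device_name, readings):
    """Build the HTML status page shown in the operator's browser."""
    rows = "".join(f"<tr><td>{k}</td><td>{v}</td></tr>"
                   for k, v in readings.items())
    return (f"<html><body><h1>{device_name}</h1>"
            f"<table>{rows}</table></body></html>")

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical readings; a real device would query its sensors here.
        body = render_status_page("Demo Device", {"temperature": "24 C"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
# then browse to http://<device-ip>:8080/ from any PC on the network.
```

Because the page is plain HTML, no software needs to be installed on the viewing PC, which is exactly the advantage described above.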
CHAPTER 2
INTRODUCTION TO GPS
(GLOBAL POSITIONING SYSTEM)
GSM (Global System for Mobile communications) is the technology that underpins most of
the world's mobile phone networks. The GSM platform is a hugely successful wireless
technology and an unprecedented story of global achievement and cooperation. Today's GSM
platform is living, growing and evolving and already offers an expanded and feature-rich
'family' of voice and multimedia services. GSM currently has a data transfer rate of 9.6 kbit/s.
New developments that push up data transfer rates for GSM users, HSCSD (high speed
circuit switched data) and GPRS (general packet radio service), are now available.
In large reserves there is no wired infrastructure for reporting an animal's position, and
searching on foot or by vehicle is slow. Keeping this in mind, we are designing a device that
uses mobile technology, namely a GSM modem. One advantage of this technology is that any
data to be transferred can be sent without a wired link, i.e. by wireless communication.
Mobile communication is most often used for conversation, but in this project it is used to
carry the tracking data.
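In practice a GSM modem is driven over a serial link using AT commands; `AT+CMGF` and `AT+CMGS` are the standard GSM text-mode SMS commands. The sketch below only frames the command strings; the phone number, coordinates and serial port mentioned in the comments are hypothetical examples, not values from this project:

```python
# Hedged sketch: the ordered AT commands a typical GSM modem expects for
# sending one SMS in text mode (per the standard GSM AT command set).
def sms_command_sequence(number, text):
    """Return the AT command strings for one text-mode SMS.
    Ctrl+Z (0x1A) terminates the message body."""
    return [
        "AT\r",                    # sanity check: modem should reply OK
        "AT+CMGF=1\r",             # select SMS text mode
        f'AT+CMGS="{number}"\r',   # start a message to this number
        text + "\x1a",             # message body, terminated by Ctrl+Z
    ]

# With a serial library (e.g. pyserial) one would write each string to the
# modem's port and wait for its response between commands, e.g.:
#   port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
#   for cmd in sms_command_sequence("+911234567890", "LAT 12.97 LON 77.59"):
#       port.write(cmd.encode())
```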
The Global Positioning System (GPS), originally Navstar GPS, is a satellite-based
radionavigation system owned by the United States government and operated by the United
States Air Force. It is a global navigation satellite system that provides geolocation and time
information to a GPS receiver anywhere on or near the Earth where there is an unobstructed
line of sight to four or more GPS satellites. Obstacles such as mountains and buildings block
the relatively weak GPS signals.
The GPS does not require the user to transmit any data, and it operates independently of any
telephonic or internet reception, though these technologies can enhance the usefulness of the
GPS positioning information. The GPS provides critical positioning capabilities to military,
civil, and commercial users around the world. The United States government created the
system, maintains it, and makes it freely accessible to anyone with a GPS receiver.
The GPS project was launched by the U.S. Department of Defense in 1973 for use by the United
States military and became fully operational in 1995. Civilian use was allowed from the
1980s. Advances in technology and new demands on the existing system have since led to
efforts to modernize the GPS and implement the next generation of GPS Block IIIA satellites
and the Next Generation Operational Control System (OCX). Announcements from Vice
President Al Gore and the White House in 1998 initiated these changes, and in 2000 the U.S.
Congress authorized the modernization effort, GPS III. During the 1990s, GPS quality was
degraded by the United States government in a program called "Selective Availability"; this
was discontinued in May 2000 by a law signed by President Bill Clinton. New GPS receiver
devices using the L5 frequency, beginning release in 2018, are expected to have much higher
accuracy, pinpointing a device to within 30 centimeters, or just under one foot.
The GPS system is provided by the United States government, which can selectively deny
access to the system, as happened to the Indian military in 1999 during the Kargil War, or
degrade the service at any time. As a result, a number of countries have developed or are in the
process of setting up other global or regional navigation systems. The Russian Global
Navigation Satellite System (GLONASS) was developed contemporaneously with GPS, but
suffered from incomplete coverage of the globe until the mid-2000s. GLONASS can be added
to GPS devices, making more satellites available and enabling positions to be fixed more
quickly and accurately, to within two meters. China's BeiDou Navigation Satellite System is
due to achieve global reach in 2020. There are also the European Union Galileo positioning
system, and India's NAVIC. Japan's Quasi-Zenith Satellite System (scheduled to commence in
November 2018) will be a GPS satellite-based augmentation system to enhance GPS's
accuracy.
2.1 History:
The GPS project was launched in the United States in 1973 to overcome the limitations of
previous navigation systems, integrating ideas from several predecessors, including a number
of classified engineering design studies from the 1960s. The U.S. Department of
Defense developed the system, which originally used 24 satellites. It was initially developed
for use by the United States military and became fully operational in 1995. Civilian use was
allowed from the 1980s. Roger L. Easton of the Naval Research Laboratory, Ivan A.
Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics
Laboratory are credited with inventing it.
The design of GPS is based partly on similar ground-based radio-navigation systems, such
as LORAN and the Decca Navigator, developed in the early 1940s.
With these parallel developments in the 1960s, it was realized that a superior system could be
developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in
a multi-service program. However, satellite orbital position errors, induced by variations in the
gravity field and radar refraction among others, had to be resolved. A team led by Harold L.
Jury of Pan Am Aerospace Division in Florida from 1970 to 1973 used real-time data
assimilation and recursive estimation to do so, modeling the systematic and residual errors
down to a manageable level to permit accurate navigation.
During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon
discussed the creation of a Defense Navigation Satellite System (DNSS). It was at this meeting
that the real synthesis that became GPS was created. Later that year, the DNSS program was
named Navstar, or Navigation System Using Timing and Ranging. With the individual
satellites being associated with the name Navstar (as with the predecessors Transit and
Timation), a more fully encompassing name was used to identify the constellation of Navstar
satellites, Navstar-GPS. Ten "Block I" prototype satellites were launched between 1978 and
1985 (an additional unit was destroyed in a launch failure).
The effect of the ionosphere on radio transmission was investigated
within a geophysics laboratory of the Air Force Cambridge Research Laboratory. Located
at Hanscom Air Force Base, outside Boston, the lab was renamed the Air Force Geophysical
Research Lab (AFGRL) in 1974. AFGRL developed the Klobuchar Model for computing
ionospheric corrections to GPS location. Of note is work done by Australian Space Scientist
Elizabeth Essex-Cohen at AFGRL in 1974. She was concerned with the curving of the path of
radio waves traversing the ionosphere from NavSTAR satellites.
After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down in 1983
after straying into the USSR's prohibited airspace, in the vicinity of Sakhalin and Moneron
Islands, President Ronald Reagan issued a directive making GPS freely available for civilian
use, once it was sufficiently developed, as a common good. The first Block II satellite was
launched on February 14, 1989, and the 24th satellite was launched in 1994. The cost of the
GPS program to this point, not including user equipment but including the costs of the
satellite launches, has been estimated at about USD 5 billion (then-year dollars).
Initially, the highest quality signal was reserved for military use, and the signal available for
civilian use was intentionally degraded (Selective Availability). This changed with
President Bill Clinton signing a policy directive to turn off Selective Availability on May 1, 2000,
to provide the same accuracy to civilians that was afforded to the military. The directive was
proposed by the U.S. Secretary of Defense, William Perry, because of the widespread growth
of differential GPS services to improve civilian accuracy and eliminate the U.S. military
advantage. Moreover, the U.S. military was actively developing technologies to deny GPS
service to potential adversaries on a regional basis.
Since its deployment, the U.S. has implemented several improvements to the GPS service
including new signals for civil use and increased accuracy and integrity for all users, all the
while maintaining compatibility with existing GPS equipment. Modernization of the satellite
system has been an ongoing initiative by the U.S. Department of Defense through a series
of satellite acquisitions to meet the growing needs of the military, civilians, and the commercial
market.
As of early 2015, high-quality, FAA grade, Standard Positioning Service (SPS) GPS receivers
provide horizontal accuracy of better than 3.5 meters, although many factors such as receiver
quality and atmospheric issues can affect this accuracy.
GPS is owned and operated by the United States government as a national resource. The
Department of Defense is the steward of GPS. The Interagency GPS Executive Board
(IGEB) oversaw GPS policy matters from 1996 to 2004. After that the National Space-Based
Positioning, Navigation and Timing Executive Committee was established by presidential
directive in 2004 to advise and coordinate federal departments and agencies on matters
concerning the GPS and related systems. The executive committee is chaired jointly by the
Deputy Secretaries of Defense and Transportation. Its membership includes equivalent-level
officials from the Departments of State, Commerce, and Homeland Security, the Joint Chiefs
of Staff and NASA. Components of the executive office of the president participate as
observers to the executive committee, and the FCC chairman participates as a liaison.
The U.S. Department of Defense is required by law to "maintain a Standard Positioning Service
(as defined in the federal radio navigation plan and the standard positioning service signal
specification) that will be available on a continuous, worldwide basis," and "develop measures
to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading
civilian uses."
• On May 2, 2000 "Selective Availability" was discontinued as a result of the 1996 executive
order, allowing civilian users to receive a non-degraded signal globally.
• In 2004, the United States Government signed an agreement with the European Community
establishing cooperation related to GPS and Europe's Galileo system.
• In 2004, United States President George W. Bush updated the national policy and replaced
the executive board with the National Executive Committee for Space-Based Positioning,
Navigation, and Timing.
• In November 2004, Qualcomm announced successful tests of assisted GPS for mobile
phones.
• In 2005, the first modernized GPS satellite was launched and began transmitting a second
civilian signal (L2C) for enhanced user performance.
• On September 14, 2007, the aging mainframe-based Ground Segment Control System was
transferred to the new Architecture Evolution Plan.
• On May 19, 2009, the United States Government Accountability Office issued a report
warning that some GPS satellites could fail as soon as 2010.
• On May 21, 2009, the Air Force Space Command allayed fears of GPS failure, saying
"There's only a small risk we will not continue to exceed our performance standard."
• On January 11, 2010, an update of ground control systems caused a software
incompatibility with 8,000 to 10,000 military receivers manufactured by a division of
Trimble Navigation Limited of Sunnyvale, Calif.
• On February 25, 2010, the U.S. Air Force awarded the contract to develop the GPS Next
Generation Operational Control System (OCX) to improve accuracy and availability of
GPS navigation signals, and serve as a critical part of GPS modernization.
2.3 Fundamentals
The GPS concept is based on time and the known position of GPS specialized satellites. The
satellites carry very stable atomic clocks that are synchronized with one another and with the
ground clocks. Any drift from true time maintained on the ground is corrected daily. In the
same manner, the satellite locations are known with great precision. GPS receivers have
clocks as well, but they are less stable and less precise.

GPS satellites continuously transmit data about their current time and position. A GPS
receiver monitors multiple satellites and solves equations to determine the precise position of
the receiver and its deviation from true time. At a minimum, four satellites must be in view of
the receiver for it to compute four unknown quantities (three position coordinates and clock
deviation from satellite time).
Each GPS satellite continually broadcasts a signal (carrier wave with modulation) that
includes:
• A pseudorandom code (sequence of ones and zeros) that is known to the receiver. By time-
aligning a receiver-generated version and the receiver-measured version of the code, the
time of arrival (TOA) of a defined point in the code sequence, called an epoch, can be
found in the receiver clock time scale
• A message that includes the time of transmission (TOT) of the code epoch (in GPS time
scale) and the satellite position at that time
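The time-alignment of the pseudorandom code can be illustrated with a toy correlation search: slide a local replica of the code against the received samples and pick the shift with the highest correlation. The 16-chip ±1 sequence and 5-sample delay below are made-up examples, far shorter than the real 1023-chip C/A code:

```python
# Toy sketch of code alignment: the shift that maximizes the circular
# correlation between received samples and the known replica gives the
# time of arrival of the code epoch (in units of samples).
def best_alignment(received, replica):
    """Return the circular shift of `replica` that best matches `received`."""
    n = len(replica)
    def corr(shift):
        return sum(received[(i + shift) % n] * replica[i] for i in range(n))
    return max(range(n), key=corr)

# Example: a +/-1 chip sequence, circularly delayed by 5 samples.
code = [1, -1, 1, 1, -1, -1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1]
delay = 5
received = code[-delay:] + code[:-delay]   # received[i] == code[(i-5) % 16]
# best_alignment(received, code) recovers the 5-sample delay.
```

A real receiver does the same thing with noisy RF samples and fractional-chip interpolation, but the principle — correlation peak marks the epoch — is identical.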
Conceptually, the receiver measures the TOAs (according to its own clock) of four satellite
signals. From the TOAs and the TOTs, the receiver forms four time of flight (TOF) values,
which are (given the speed of light) approximately equivalent to receiver-satellite ranges. The
receiver then computes its three-dimensional position and clock deviation from the four TOFs.
In practice the receiver position (in three dimensional Cartesian coordinates with origin at the
Earth's center) and the offset of the receiver clock relative to the GPS time are computed
simultaneously, using the navigation equations to process the TOFs.
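As a hedged illustration of those navigation equations, the sketch below solves for the receiver position and clock bias from four satellite positions and pseudoranges (TOF multiplied by the speed of light) using Newton iteration. The function names and satellite coordinates are illustrative, not taken from this project:

```python
import math

def gauss_solve(A, b):
    """Solve the small linear system A x = b by Gaussian elimination
    with partial pivoting (used for each Newton step)."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def solve_position(sats, pseudoranges, iters=20):
    """Newton iteration for the navigation equations: each pseudorange is
    |satellite - receiver| + clock_bias (all in metres). Returns
    [x, y, z, clock_bias], starting from a guess at the Earth's centre."""
    est = [0.0, 0.0, 0.0, 0.0]
    for _ in range(iters):
        J, resid = [], []
        for (sx, sy, sz), rho in zip(sats, pseudoranges):
            dx, dy, dz = est[0] - sx, est[1] - sy, est[2] - sz
            r = math.sqrt(dx * dx + dy * dy + dz * dz)
            J.append([dx / r, dy / r, dz / r, 1.0])  # partial derivatives
            resid.append(rho - (r + est[3]))         # measurement residual
        step = gauss_solve(J, resid)
        est = [e + s for e, s in zip(est, step)]
    return est
```

With exact, noise-free pseudoranges this converges in a few iterations; real receivers solve a least-squares version of the same system over more than four satellites.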
The receiver's Earth-centered solution location is usually converted to latitude, longitude and
height relative to an ellipsoidal Earth model. The height may then be further converted to height
relative to the geoid (e.g., EGM96) (essentially, mean sea level). These coordinates may be
displayed, e.g., on a moving map display, and/or recorded and/or used by some other system
(e.g., a vehicle guidance system).
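The conversion from the Earth-centred solution to latitude, longitude and ellipsoidal height can be sketched with the standard WGS-84 fixed-point iteration. This is a generic textbook method under the WGS-84 ellipsoid constants, not code from this project:

```python
import math

A = 6378137.0               # WGS-84 semi-major axis (m)
F = 1 / 298.257223563       # WGS-84 flattening
E2 = F * (2 - F)            # first eccentricity squared

def ecef_to_geodetic(x, y, z, iters=10):
    """Convert Earth-centred Cartesian coordinates (m) to geodetic
    latitude, longitude (degrees) and ellipsoidal height (m)."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)                     # distance from the spin axis
    lat = math.atan2(z, p * (1 - E2))        # initial geodetic-latitude guess
    for _ in range(iters):
        n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime-vertical radius
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1 - E2 * n / (n + h)))
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    h = p / math.cos(lat) - n
    return math.degrees(lat), math.degrees(lon), h
```

Height above the geoid (mean sea level) would then require subtracting a geoid undulation model such as EGM96, which is omitted here.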
Although usually not formed explicitly in the receiver processing, the conceptual time
differences of arrival (TDOAs) define the measurement geometry. Each TDOA corresponds
to a hyperboloid of revolution. The line connecting the two satellites involved (and its
extensions) forms the axis of the hyperboloid. The receiver is located at the point where three
hyperboloids intersect.

It is sometimes incorrectly said that the user location is at the intersection of three spheres.
While simpler to visualize, this is only the case if the receiver has a clock synchronized with
the satellite clocks (i.e., the receiver measures true ranges to the satellites rather than range
differences). There are significant performance benefits to the user carrying a clock
synchronized with the satellites. Foremost is that only three satellites are needed to compute a
position solution. If this were an essential part of the GPS concept so that all users needed to
carry a synchronized clock, then a smaller number of satellites could be deployed. However,
the cost and complexity of the user equipment would increase significantly.
The description above is representative of a receiver start-up situation. Most receivers have
a track algorithm, sometimes called a tracker, that combines sets of satellite measurements
collected at different times—in effect, taking advantage of the fact that successive receiver
positions are usually close to each other. After a set of measurements are processed, the tracker
predicts the receiver location corresponding to the next set of satellite measurements. When
the new measurements are collected, the receiver uses a weighting scheme to combine the new
measurements with the tracker prediction. In general, a tracker can (a) improve receiver
position and time accuracy, (b) reject bad measurements, and (c) estimate receiver speed and
direction.
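A tracker of this kind can be sketched, in one dimension, as an alpha-beta filter: each new fix is blended with a prediction from the previous estimate, which smooths noise and yields a speed estimate as a by-product. The gains below are illustrative; real receivers typically use a Kalman filter in three dimensions:

```python
# Toy sketch of the tracker idea: blend each new position fix with a
# prediction from the previous estimate (an alpha-beta filter).
class AlphaBetaTracker:
    def __init__(self, x0, alpha=0.5, beta=0.1):
        self.x, self.v = x0, 0.0        # position and speed estimates
        self.alpha, self.beta = alpha, beta

    def update(self, measurement, dt):
        predicted = self.x + self.v * dt    # where we expected the next fix
        residual = measurement - predicted  # how far the new fix disagrees
        self.x = predicted + self.alpha * residual   # weighted blend
        self.v += self.beta * residual / dt          # refine speed estimate
        return self.x
```

A large residual can also be used to reject a bad measurement outright, which is the outlier-rejection benefit mentioned above.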
The disadvantage of a tracker is that changes in speed or direction can only be computed with
a delay, and that derived direction becomes inaccurate when the distance traveled between two
position measurements drops below or near the random error of position measurement. GPS
units can use measurements of the Doppler shift of the signals received to compute velocity
accurately. More advanced navigation systems use additional sensors like a compass or
an inertial navigation system to complement GPS.
In typical GPS operation as a navigator, four or more satellites must be visible to obtain an
accurate result. The solution of the navigation equations gives the position of the receiver along
with the difference between the time kept by the receiver's on-board clock and the true
time-of-day, thereby eliminating the need for a more precise and possibly impractical receiver-based
clock. Applications for GPS such as time transfer, traffic signal timing, and synchronization of
cell phone base stations, make use of this cheap and highly accurate timing. Some GPS
applications use this time for display, or, other than for the basic position calculations, do not
use it at all.
Although four satellites are required for normal operation, fewer apply in special cases. If one
variable is already known, a receiver can determine its position using only three satellites. For
example, a ship or aircraft may have known elevation. Some GPS receivers may use additional
clues or assumptions such as reusing the last known altitude, dead reckoning, inertial
navigation, or including information from the vehicle computer, to give a (possibly degraded)
position when fewer than four satellites are visible.
CHAPTER 3
INTRODUCTION TO GSM
(GLOBAL SYSTEM FOR MOBILE COMMUNICATIONS)
The GSM logo is used to identify compatible devices and equipment. The dots symbolize
three clients in the home network and one roaming client.
GSM (Global System for Mobile communications) is a standard developed by the European
Telecommunications Standards Institute (ETSI) to describe the protocols for second-
generation digital cellular networks used by mobile devices such as mobile phones and tablets, first deployed in
Finland in December 1991. As of 2014, it has become the global standard for mobile
communications – with over 90% market share, operating in over 193 countries and territories.
2G networks developed as a replacement for first generation (1G) analog cellular networks,
and the GSM standard originally described a digital, circuit-switched network optimized
for full duplex voice telephony. This expanded over time to include data communications, first
by circuit-switched transport, then by packet data transport via GPRS (General Packet Radio
Services) and EDGE (Enhanced Data rates for GSM Evolution, or EGPRS).
Subsequently, the 3GPP developed third-generation (3G) UMTS standards, followed by
fourth-generation (4G) LTE Advanced standards, which do not form part of the ETSI GSM
standard.
"GSM" is a trademark owned by the GSM Association. It may also refer to the (initially) most
common voice codec used, Full Rate.
3.1 History:
In 1983 work began to develop a European standard for digital cellular voice
telecommunications when the European Conference of Postal and Telecommunications
Administrations (CEPT) set up the Groupe Spécial Mobile committee and later provided a
permanent technical-support group based in Paris. Five years later, in 1987, 15 representatives
from 13 European countries signed a memorandum of understanding in Copenhagen to
develop and deploy a common cellular telephone system across Europe, and EU rules were
passed to make GSM a mandatory standard. The decision to develop a continental standard
eventually resulted in a unified, open, standard-based network which was larger than that in
the United States.
In February 1987 Europe produced the very first agreed GSM Technical Specification.
Ministers from the four big EU countries cemented their political support for GSM with the
Bonn Declaration on Global Information Networks in May and the GSM MoU was tabled for
signature in September. The MoU drew in mobile operators from across Europe to pledge to
invest in new GSM networks to an ambitious common date.
In this short 38-week period the whole of Europe (countries and industries) had been brought
behind GSM in a rare unity and speed guided by four public officials: Armin Silberhorn
(Germany), Stephen Temple (UK), Philippe Dupuis (France), and Renzo Failli (Italy). In 1989
the Groupe Spécial Mobile committee was transferred from CEPT to the European
Telecommunications Standards Institute (ETSI).
In parallel France and Germany signed a joint development agreement in 1984 and were joined
by Italy and the UK in 1986. In 1986, the European Commission proposed reserving the
900 MHz spectrum band for GSM. The former Finnish prime minister Harri Holkeri made the
world's first GSM call on July 1, 1991, calling Kaarina Suonio (mayor of the city of Tampere)
using a network built by Telenokia and Siemens and operated by Radiolinja. The following
year saw the sending of the first short messaging service (SMS or "text message") message,
and Vodafone UK and Telecom Finland signed the first international roaming agreement.
Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band and the
first 1800 MHz network became operational in the UK by 1993, called DCS 1800. Also
that year, Telecom Australia became the first network operator to deploy a GSM network
outside Europe and the first practical hand-held GSM mobile phone became available.
In 1995 fax, data and SMS messaging services were launched commercially, the first
1900 MHz GSM network became operational in the United States and GSM subscribers
worldwide exceeded 10 million. In the same year, the GSM Association formed. Pre-paid GSM
SIM cards were launched in 1996 and worldwide GSM subscribers passed 100 million in 1998.
In 2000 the first commercial GPRS services were launched and the first GPRS-compatible
handsets became available for sale. In 2001, the first UMTS (W-CDMA) network was
launched, a 3G technology that is not part of GSM. Worldwide GSM subscribers exceeded 500
million. In 2002, the first Multimedia Messaging Service (MMS) was introduced and the first
GSM network in the 800 MHz frequency band became operational. EDGE services first
became operational in a network in 2003, and the number of worldwide GSM subscribers
exceeded 1 billion in 2004.
By 2005 GSM networks accounted for more than 75% of the worldwide cellular network
market, serving 1.5 billion subscribers. In 2005, the first HSDPA-capable network also became
operational. The first HSUPA network launched in 2007. (High-Speed Packet Access (HSPA)
and its uplink and downlink versions are 3G technologies, not part of GSM.) Worldwide GSM
subscribers exceeded three billion in 2008.
The GSM Association estimated in 2010 that technologies defined in the GSM standard served
80% of the mobile market, encompassing more than 5 billion people across more than 212
countries and territories, making GSM the most ubiquitous of the many standards for cellular
networks.
GSM is a second-generation (2G) standard employing time-division multiple access (TDMA)
spectrum-sharing, issued by the European Telecommunications Standards Institute (ETSI).
The GSM standard does not include the 3G Universal Mobile Telecommunications
System (UMTS) code division multiple access (CDMA) technology nor the 4G
LTE orthogonal frequency-division multiple access (OFDMA) technology standards issued by
the 3GPP.
GSM, for the first time, set a common standard for Europe for wireless networks. It was also
adopted by many countries outside Europe. This allowed subscribers to use other GSM
networks that have roaming agreements with each other. The common standard reduced
research and development costs, since hardware and software could be sold with only minor
adaptations for the local market.
Telstra in Australia shut down its 2G GSM network on December 1, 2016, the first mobile
network operator to decommission a GSM network. The second mobile provider to shut down
its GSM network (on January 1, 2017) was AT&T Mobility from the United
States. Optus in Australia completed the shutdown of its 2G GSM network on August 1, 2017;
the part of the Optus GSM network covering Western Australia and the Northern Territory had
been shut down earlier, in April 2017. Singapore shut down 2G services entirely in
April 2017.
The GSM network structure is divided into a number of discrete sections:
Base station subsystem – the base stations and their controllers
Network and Switching Subsystem – the part of the network most similar to a fixed
network, sometimes just called the "core network"
GPRS core network – the optional part which allows packet-based Internet connections
GSM is a cellular network, which means that cell phones connect to it by searching for cells in
the immediate vicinity. There are five different cell sizes in a GSM network:
macro, micro, pico, femto, and umbrella cells. The coverage area of each cell varies according
to the implementation environment. Macro cells can be regarded as cells where the base
station antenna is installed on a mast or a building above average rooftop level. Micro cells are
cells whose antenna height is under average rooftop level; they are typically used in urban
areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly
used indoors. Femtocells are cells designed for use in residential or small business
environments and connect to the service provider’s network via a broadband internet
connection. Umbrella cells are used to cover shadowed regions of smaller cells and fill in gaps
in coverage between those cells.
Cell horizontal radius varies depending on antenna height, antenna gain, and propagation
conditions from a couple of hundred meters to several tens of kilometres. The longest distance
the GSM specification supports in practical use is 35 kilometres (22 mi). There are also several
implementations of the concept of an extended cell, where the cell radius could be double or
even more, depending on the antenna system, the type of terrain, and the timing advance.
Indoor coverage is also supported by GSM and may be achieved by using an indoor picocell
base station, or an indoor repeater with distributed indoor antennas fed through power splitters,
to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna
system. These are typically deployed when significant call capacity is needed indoors, like in
shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also
provided by in-building penetration of the radio signals from any nearby cell.
GSM networks operate in a number of different carrier frequency ranges (separated into
GSM frequency ranges for 2G and UMTS frequency bands for 3G), with most 2G GSM
networks operating in the 900 MHz or 1800 MHz bands. Where these bands were already
allocated, the 850 MHz and 1900 MHz bands were used instead (for example in Canada and
the United States). In rare cases the 400 and 450 MHz frequency bands are assigned in some
countries because they were previously used for first-generation systems.
For comparison most 3G networks in Europe operate in the 2100 MHz frequency band. For
more information on worldwide GSM frequency usage, see GSM frequency bands.
Regardless of the frequency selected by an operator, it is divided into timeslots for individual
phones. This allows eight full-rate or sixteen half-rate speech channels per radio frequency.
These eight radio timeslots (or burst periods) are grouped into a TDMA frame. Half-rate
channels use alternate frames in the same timeslot. The channel data rate for all 8 channels is
270.833 kbit/s, and the frame duration is 4.615 ms.
The transmission power in the handset is limited to a maximum of 2 watts in GSM 850/900
and 1 watt in GSM 1800/1900.
GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 6.5 and 13
kbit/s. Originally, two codecs, named after the types of data channel they were allocated,
were used, called Half Rate (6.5 kbit/s) and Full Rate (13 kbit/s). These used a system based
on linear predictive coding (LPC). In addition to being efficient with bitrates, these codecs
also made it easier to identify more important parts of the audio, allowing the air interface
layer to prioritize and better protect these parts of the signal. GSM was further enhanced in
1997 with the enhanced full rate (EFR) codec, a 12.2 kbit/s codec that uses a full-rate
channel. Finally, with the development of UMTS, EFR was refactored into a variable-rate
codec called AMR-Narrowband, which is high quality and robust against interference when
used on full-rate channels, or less robust but still relatively high quality when used in good
radio conditions on half-rate channels.
Network switching subsystem (NSS) (or GSM core network) is the component of a GSM
system that carries out call switching and mobility management functions for mobile phones
roaming on the network of base stations. It is owned and deployed by mobile phone operators
and allows mobile devices to communicate with each other and telephones in the wider
public switched telephone network (PSTN). The architecture contains specific features and
functions which are needed because the phones are not fixed in one location.
The NSS originally consisted of the circuit-switched core network, used for traditional GSM
services such as voice calls, SMS, and circuit switched data calls. It was extended with an
overlay architecture to provide packet-switched data services known as the GPRS core
network. This allows mobile phones to have access to services such as WAP, MMS and the
Internet.
The mobile switching center (MSC) is the primary service delivery node for GSM/CDMA,
responsible for routing voice calls and SMS as well as other services (such as conference
calls, FAX and circuit switched data).
The MSC sets up and releases the end-to-end connection, handles mobility and hand-over
requirements during the call and takes care of charging and real time prepaid account
monitoring.
In the GSM mobile phone system, in contrast with earlier analogue services, fax and data
information is sent digitally encoded directly to the MSC. Only at the MSC is this re-coded
into an "analogue" signal (although actually this will almost certainly mean sound is encoded
digitally as a pulse-code modulation (PCM) signal in a 64-kbit/s timeslot, known as a DS0 in
America).
MSCs go by various names in different contexts, reflecting their complex role in the
network; all of these terms can refer to the same MSC performing different functions at
different times.
The gateway MSC (G-MSC) is the MSC that determines which "visited MSC" (V-MSC) the
subscriber who is being called is currently located at. It also interfaces with the PSTN. All
mobile to mobile calls and PSTN to mobile calls are routed through a G-MSC. The term is
only valid in the context of one call, since any MSC may provide both the gateway function
and the visited MSC function. However, some manufacturers design dedicated high capacity
MSCs which do not have any base station subsystems (BSS) connected to them. These MSCs
will then be the gateway MSC for many of the calls they handle.
The visited MSC (V-MSC) is the MSC where a customer is currently located. The visitor
location register (VLR) associated with this MSC will have the subscriber's data in it.
The anchor MSC is the MSC from which a handover has been initiated. The target MSC is
the MSC toward which a handover should take place. A mobile switching center server is a
part of the redesigned MSC concept starting from 3GPP Release 4.
The mobile switching center server is a soft-switch variant of the mobile switching center
(and is therefore sometimes referred to as a mobile soft switch, MSS). It provides
circuit-switched calling, mobility management, and GSM services to the mobile phones
roaming within the area that it serves. MSS functionality enables a split between the control
plane (signalling) and the user plane (the bearer, in a network element called a media
gateway, MGW), which allows better placement of network elements within the network.
The MSS and the media gateway (MGW) make it possible to cross-connect circuit-switched
calls using IP or ATM AAL2 as well as TDM. More information is available in 3GPP
TS 23.205.
The term circuit switching (CS) used here originates from traditional telecommunications
systems. However, modern MSS and MGW devices mostly use generic Internet technologies
and form next-generation telecommunication networks. MSS software may run on generic
computers or on virtual machines in a cloud environment.
The MSC connects to the following elements:
The home location register (HLR), for obtaining data about the SIM and the mobile services
ISDN number (MSISDN; i.e., the telephone number).
The base station subsystem (BSS), which handles the radio communication with 2G and 2.5G
mobile phones.
The UMTS terrestrial radio access network (UTRAN), which handles the radio
communication with 3G mobile phones.
The visitor location register (VLR), which provides subscriber information when the
subscriber is outside its home network.
Tasks of the MSC include:
Delivering calls to subscribers as they arrive, based on information from the VLR.
Delivering SMSs from subscribers to the short message service center (SMSC) and vice
versa.
The GPRS core network provides mobility management, session management and transport
for Internet Protocol packet services in GSM and WCDMA networks. It also provides
support for additional functions such as billing and lawful interception. It was also
proposed, at one stage, to support packet radio services in the US D-AMPS TDMA system;
however, in practice, all of these networks have been converted to GSM, so this option has
become irrelevant.
The GPRS core network is an open, standards-driven system. The standardization body is the 3GPP.
Operations support systems (OSS), or operational support systems in British usage, are
computer systems used by telecommunications service providers to manage their networks
(e.g., telephone networks). They support management functions such as network inventory,
service provisioning, network configuration and fault management.
Together with business support systems (BSS), they are used to support various end-to-end
telecommunication services. BSS and OSS have their own data and service responsibilities.
The two systems together are often abbreviated OSS/BSS, BSS/OSS or simply B/OSS.
The abbreviation OSS is also used in a singular form to refer to all the Operations Support
Systems viewed as a whole system.
Different subdivisions of OSS have been proposed by the TM Forum, industrial research labs
or OSS vendors. In general, an OSS covers at least the following five functions:
Network management systems
Service delivery
Service fulfilment (including network inventory, activation and provisioning)
Service assurance
Customer care
3.4.3.1 History
Before about 1980, many OSS activities were performed by manual administrative processes.
However, it became obvious that much of this activity could be replaced by computers. In the
next five years or so, telephone companies created a number of computer systems (or
software applications) which automated much of this activity. This was one of the driving
factors for the development of the Unix operating system and the C programming language.
The Bell System purchased their own product line of PDP-11 computers from Digital
Equipment Corporation for a variety of OSS applications. OSS systems used in the Bell
System include AMATPS, CSOBS, EADAS, Remote Memory Administration System
(RMAS), Switching Control Center System (SCCS), Service Evaluation System (SES),
Trunks Integrated Record Keeping System (TIRKS), and many more. OSS systems from this
era are described in the Bell System Technical Journal, Bell Labs Record, and Telcordia
Technologies (now part of Ericsson) SR-2275.
Many OSS systems were initially not linked to each other and often required manual
intervention. For example, consider the case where a customer wants to order a new telephone
service. The ordering system would take the customer's details and details of their order, but
would not be able to configure the telephone exchange directly; this would be done by a switch
management system. Details of the new service would need to be transferred from the order
handling system to the switch management system, and this would normally be done by a
technician re-keying the details from one screen into another, a process often referred to as
"swivel chair integration". This was another source of inefficiency, so the focus for the next
few years was on creating automated interfaces between the OSS applications.
3.4.3.2 Architecture
A lot of the work on OSS has been centered on defining its architecture. Put simply, there are
four key elements of OSS:
-Processes
-Data
-Applications
-Technology
During the 1990s, new OSS architecture definitions were produced by the ITU
Telecommunication Standardization Sector (ITU-T) in its Telecommunications Management
Network (TMN) model. This established a 4-layer model of TMN applicable within an OSS:
Business management level (BML)
Service management level (SML)
Network management level (NML)
Element management level (EML)
A fifth level is mentioned at times, being the elements themselves, though the standards speak
of only four levels. This was a basis for later work. Network management was further defined
by the ISO using the FCAPS model - Fault, Configuration, Accounting, Performance and
Security. This basis was adopted by the ITU-T TMN standards as the Functional model for
the technology base of the TMN standards M.3000 - M.3599 series. Although the FCAPS
model was originally conceived and is applicable for an IT enterprise network, it was adopted
for use in the public networks run by telecommunication service providers adhering to ITU-T
TMN standards.
A big issue of network and service management is the ability to manage and control the
network elements of the access and core networks. Historically, much effort has been
spent in standardization fora (ITU-T, 3GPP) to define a standard protocol for network
management, but with no success or practical results. On the other hand, the IETF SNMP
protocol (Simple Network Management Protocol) has become the de facto standard for
Internet and telco management, at the EML-NML communication level.
From 2000 and beyond, with the growth of the new broadband and VoIP services, the
management of home networks is also entering the scope of OSS and network management.
The DSL Forum TR-069 specification defines the CPE WAN Management Protocol (CWMP),
suitable for managing home-network devices and terminals at the EML-NML interface.
GSM was intended to be a secure wireless system. It considered user authentication
using a pre-shared key and challenge-response, and over-the-air encryption. However, GSM is
vulnerable to different types of attack, each of them aimed at a different part of the network.
The development of UMTS introduced an optional Universal Subscriber Identity
Module (USIM), that uses a longer authentication key to give greater security, as well as
mutually authenticating the network and the user, whereas GSM only authenticates the user to
the network (and not vice versa). The security model therefore offers confidentiality and
authentication, but limited authorization capabilities, and no non-repudiation.
GSM uses several cryptographic algorithms for security. The A5/1, A5/2, and A5/3 stream
ciphers are used for ensuring over-the-air voice privacy. A5/1 was developed first and is a
stronger algorithm used within Europe and the United States; A5/2 is weaker and used in other
countries. Serious weaknesses have been found in both algorithms: it is possible to break A5/2
in real-time with a ciphertext-only attack, and in January 2007, The Hacker's Choice started
the A5/1 cracking project with plans to use FPGAs that allow A5/1 to be broken with a rainbow
table attack. The system supports multiple algorithms so operators may replace that cipher with
a stronger one.
Since 2000 different efforts have been made in order to crack the A5 encryption algorithms.
Both A5/1 and A5/2 algorithms have been broken, and their cryptanalysis has been revealed in
the literature. As an example, Karsten Nohl developed a number of rainbow tables (static
values which reduce the time needed to carry out an attack) and found new sources
for known-plaintext attacks. He said that it is possible to build "a full GSM interceptor...from
open-source components" but that they had not done so because of legal concerns. Nohl
claimed that he was able to intercept voice and text conversations by impersonating another
user to listen to voicemail, make calls, or send text messages using a seven-year-
old Motorola cellphone and decryption software available for free online.
GSM uses General Packet Radio Service (GPRS) for data transmissions like browsing the web.
The most commonly deployed GPRS ciphers were publicly broken in 2011.
The researchers revealed flaws in the commonly used GEA/1 and GEA/2 ciphers and published
the open-source "gprsdecode" software for sniffing GPRS networks. They also noted that some
carriers do not encrypt the data (i.e., using GEA/0) in order to detect the use of traffic or
protocols they do not like (e.g., Skype), leaving customers unprotected. GEA/3 seems to
remain relatively hard to break and is said to be in use on some more modern networks. If used
with USIM to prevent connections to fake base stations and downgrade attacks, users will be
protected in the medium term, though migration to 128-bit GEA/4 is still recommended.
CHAPTER 4
BLOCK DIAGRAM:
[Figure: block diagram. The LPC2148 at the centre receives data from the GPS module and
drives the GSM modem and the LCD; a power supply feeds the system.]
A GPS module is attached to the vehicle that is to be traced; using this project, the position
of the GPS module (vehicle) is sent to the control unit. The GPS receiver collects NMEA data
from the GPS satellites, which incorporates the latitude, longitude, time of receiving the data,
and date. The ARM processor receives these data from the GPS receiver through serial
communication. These data identify the position of the vehicle on the earth; the collected
information is also shown on the display board of the vehicle. The ARM processor then sends
the exact position of the vehicle to the GSM module.
This GSM module sends the information to another GSM module kept at the receiver side,
where the position of the vehicle needs to be displayed. The received message contains the
longitude and latitude along with the receiving time and date; using this information we can
easily find the position of the GPS receiver, or simply the position of the vehicle. The
information is passed to the ARM processor at the receiver side, and the position is displayed
at the required place.
CHAPTER 5
INTRODUCTION TO ARM LPC2148
Over the last few years, the ARM architecture has become the most pervasive 32-bit
architecture in the world, with a wide range of ICs available from various IC manufacturers.
ARM processors are embedded in products ranging from cell/mobile phones to automotive
braking systems. A worldwide community of ARM partners and third-party vendors has
developed among semiconductor and product design companies, including hardware
engineers, system designers, and software developers.
ARM7 is one of the most widely used microcontroller families in embedded system
applications. This section is a humble effort at explaining the basic features of the ARM7.
ARM is a family of instruction set architectures for computer processors based on a reduced
instruction set computing (RISC) architecture developed by the British company ARM
Holdings. A RISC-based design approach means ARM processors require significantly fewer
transistors than typical processors in average computers. This approach reduces costs, heat
and power use. These are desirable traits for light, portable, battery-powered devices,
including smartphones, laptops, and tablet and notepad computers, and for other embedded
systems. A simpler design facilitates more efficient multi-core CPUs and higher core counts
at lower cost, providing higher processing power and improved energy efficiency for servers
and supercomputers.
In 2005, about 98% of all mobile phones sold used at least one ARM processor. The low
power consumption of ARM processors has made them very popular: 37 billion ARM
processors had been produced as of 2013, up from 10 billion in 2008. The ARM architecture
(32-bit) is the most widely used architecture in mobile devices and the most popular 32-bit
architecture in embedded systems.
According to ARM Holdings, in 2010 alone, producers of chips based on ARM architectures
reported shipments of 6.1 billion ARM-based processors, representing 95% of smartphones,
35% of digital televisions and set-top boxes, and 10% of mobile computers. It is the most
widely used 32-bit instruction set architecture in terms of quantity produced.
The LPC2148 is a widely used IC from the ARM7 family. It is manufactured by Philips (now
NXP) and comes pre-loaded with many inbuilt peripherals, making it an efficient and reliable
option for beginners as well as high-end application developers.
Let us go through the features of LPC214x series controllers.
• 8 to 40 kB of on-chip static RAM and 32 to 512 kB of on-chip flash program memory. A
128-bit wide interface/accelerator enables high-speed 60 MHz operation.
• In-System/In-Application Programming (ISP/IAP) via on-chip boot-loader software. Single
flash sector or full chip erase in 400 ms and programming of 256 bytes in 1 ms.
• EmbeddedICE-RT and Embedded Trace interfaces offer real-time debugging with the
on-chip RealMonitor software and high-speed tracing of instruction execution.
• USB 2.0 Full Speed compliant device controller with 2 kB of endpoint RAM. In addition,
the LPC2146/8 provides 8 kB of on-chip RAM accessible to USB by DMA.
• One or two (LPC2141/2 vs. LPC2144/6/8) 10-bit A/D converters provide a total of 6/14
analog inputs, with conversion times as low as 2.44 μs per channel.
5.2 MEMORY:
The LPC2148 has 32 kB of on-chip SRAM and 512 kB of on-chip flash memory, plus up to
2 kB of endpoint USB RAM. This amount of memory is well suited for almost all
applications.
We will now explain the basic GPIO features of the device.
Of the 32 pins on PORT0, a total of 30 are available as general-purpose input/output and one
as output only. PORT1 has up to 16 pins available for GPIO functions. PORT0 and PORT1
are controlled via two groups of registers, explained below.
1. IOPIN
The GPIO port pin value register. The current state of the GPIO-configured port pins can
always be read from this register, regardless of pin direction.
2. IODIR
The GPIO port direction control register. This register individually controls the direction of
each port pin.
3. IOCLR
The GPIO port output clear register. This register controls the state of the output pins.
Writing ones produces lows at the corresponding port pins and clears the corresponding bits
in the IOSET register. Writing zeroes has no effect.
4. IOSET
The GPIO port output set register. This register controls the state of the output pins in
conjunction with the IOCLR register. Writing ones produces highs at the corresponding port
pins. Writing zeroes has no effect.
These are the registers used to configure the I/O pins. Now let's look at the individual
registers in depth.
Port 0 has 32 pins (P0.0 to P0.31). Each pin can have multiple functions. On RESET, all pins
are configured as GPIO pins; however, they can be re-configured using the pin-function
select registers PINSEL0, PINSEL1 and PINSEL2.
1. PINSEL0
PINSEL0 selects the function of pins P0.0 to P0.15. Each pin has up to 4 functions, so 2 bits
per pin are provided in PINSEL0 for selecting the function.
2. PINSEL1
PINSEL1 selects the function of pins P0.16 to P0.31 in the same way.
3. PINSEL2
PINSEL2 controls whether the Port 1 pins are used as GPIO or for the debug and trace
functions.
4. IO0DIR
Sets the direction of each PORT0 pin: 1 = output pin, 0 = input pin.
Example: IO0DIR=0x0000ffff means P0.0 to P0.15 are configured as output pins and P0.16
to P0.31 are configured as input pins.
5. IO1DIR
Sets the direction of each PORT1 pin: 1 = output pin, 0 = input pin.
Example: IO1DIR=0xaaaaaaaa means the even pins (P1.0, P1.2, P1.4 etc.) are configured as
input pins and the odd pins (P1.1, P1.3, P1.5 etc.) are configured as output pins.
6. IO0SET
Example: IO0SET=0x0000ffff will set pins P0.0 to P0.15 to logic 1. It will not affect other
pins.
7. IO0CLR
Example: IO0CLR=0x0000ffff will clear pins P0.0 to P0.15 to logic 0. It will not affect other
pins.
8. IO1SET
Example: IO1SET=0x0000ffff will set pins P1.0 to P1.15 to logic 1. It will not affect other
pins.
9. IO1CLR
Example: IO1CLR=0x0000ffff will clear pins P1.0 to P1.15 to logic 0. It will not affect other
pins.
CHAPTER 6
RESULTS AND CONCLUSION
LCD DISPLAY:
6.2 Illumination:
Since LCD panels produce no light of their own, they require external light to produce a
visible image. In a transmissive type of LCD, this light is provided at the back of the glass
stack and is called the backlight. While passive-matrix displays are usually not backlit (e.g.
calculators, wristwatches), active-matrix displays almost always are. Over the years
1990-2017, LCD backlight technologies have been strongly advanced by lighting companies
such as Philips, Lumileds (a Philips subsidiary) and others.
The common implementations of LCD backlight technology are:
• CCFL: The LCD panel is lit either by two cold cathode fluorescent lamps placed at
opposite edges of the display or by an array of parallel CCFLs behind larger displays. A
diffuser then spreads the light out evenly across the whole display. For many years, this
technology had been used almost exclusively. Unlike white LEDs, most CCFLs have an
even-white spectral output, resulting in a better color gamut for the display. However,
CCFLs are less energy efficient than LEDs and require a somewhat costly inverter to convert
whatever DC voltage the device uses (usually 5 or 12 V) to the ~1000 V needed to light a
CCFL. The thickness of the inverter transformers also limits how thin the display can be
made.
• EL-WLED: The LCD panel is lit by a row of white LEDs placed at one or more edges of
the screen. A light diffuser is then used to spread the light evenly across the whole display.
As of 2012, this design is the most popular one in desktop computer monitors. It allows for
the thinnest displays. Some LCD monitors using this technology have a feature called
dynamic contrast, invented by Philips researchers Douglas Stanton, Martinus Stroomer and
Adrianus de Vaan. Using PWM (pulse-width modulation, a technique in which the intensity
of the LEDs is kept constant but the brightness adjustment is achieved by varying the
on-time of these constant-intensity light sources), the backlight is dimmed to the brightest
color that appears on the screen while the LCD contrast is simultaneously boosted to the
maximum achievable level, allowing the 1000:1 contrast ratio of the LCD panel to be scaled
to different light intensities and producing the "30000:1" contrast ratios seen in the
advertising of some of these monitors. Since computer screen images usually contain full
white somewhere in the image, the backlight will usually be at full intensity, making this
"feature" mostly a marketing gimmick for computer monitors; for TV screens, however, it
drastically increases the perceived contrast ratio and dynamic range, improves the
viewing-angle dependency, and drastically reduces the power consumption of conventional
LCD televisions.
• WLED array: The LCD panel is lit by a full array of white LEDs placed behind a diffuser
behind the panel. LCDs that use this implementation usually have the ability to dim the
LEDs in the dark areas of the image being displayed, effectively increasing the contrast ratio
of the display. As of 2012, this design gets most of its use from upscale, larger-screen LCD
televisions.
• RGB-LED array: Similar to the WLED array, except the panel is lit by a full array of
RGB LEDs. While displays lit with white LEDs usually have a poorer color gamut than
CCFL-lit displays, panels lit with RGB LEDs have very wide color gamuts. This
implementation is most popular on professional graphics-editing LCDs. As of 2012, LCDs in
this category usually cost more than $1000. By 2016 the cost of this category had dropped
drastically, and such LCD televisions reached the same price levels as the former 28"
(71 cm) CRT-based categories.
Today, most LCD screens are designed with an LED backlight instead of the traditional
CCFL backlight, with the backlight dynamically controlled by the video information
(dynamic backlight control). The combination with dynamic backlight control, invented by
Philips researchers Douglas Stanton, Martinus Stroomer and Adrianus de Vaan,
simultaneously increases the dynamic range of the display system (also marketed as HDR,
high dynamic range television).
LCD backlight systems are made highly efficient by applying optical films such as prismatic
structures, which direct the light into the desired viewer directions, and reflective polarizing
films, which recycle the polarized light that was formerly absorbed by the first polarizer of
the LCD (invented by Philips researchers Adrianus de Vaan and Paulus Schaareman),
generally achieved using the so-called DBEF films manufactured and supplied by 3M. These
polarizers consist of a large stack of uniaxially oriented birefringent films that reflect the
formerly absorbed polarization mode of the light. Such reflective polarizers, using uniaxially
oriented polymerized liquid crystals (birefringent polymers or birefringent glue), were
invented in 1989 by Philips researchers Dirk Broer, Adrianus de Vaan and Joerg Brambring.
The combination of such reflective polarizers and LED dynamic backlight control makes
today's LCD televisions far more efficient than the CRT-based sets, leading to a worldwide
energy saving of 600 TWh (2017), equal to 10% of the electricity consumption of all
households worldwide, or twice the energy production of all solar cells in the world.
Because the LCD layer generates the desired high-resolution images at flashing video speeds using very low-power electronics, in combination with these excellent LED-based backlight technologies, LCD technology has become the dominant display technology for products such as televisions, desktop monitors, notebooks, tablets, smartphones and mobile phones. Although competing OLED technology is being pushed to the market, such OLED displays do not offer the HDR capabilities that LCDs combined with 2D LED backlight technologies have, which is why the annual market for such LCD-based products is still growing faster (in volume) than that for OLED-based products. The efficiency of LCDs (and of products like portable computers, mobile phones and televisions) may be improved even further by preventing the light from being absorbed in the colour filters of the LCD. Although such reflective colour-filter solutions have not yet been implemented by the LCD industry and have not made it further than laboratory prototypes, they are still likely to be adopted to increase the performance gap with OLED technologies.
6.3 Advantages:
• Very compact, thin and light, especially in comparison with bulky, heavy CRT displays.
• Low power consumption. Depending on the set display brightness and the content being displayed, older CCFL-backlit models typically use less than half the power of a CRT monitor with the same viewing area, and modern LED-backlit models typically use 10–25% of the power a CRT monitor would use.
• Little heat emitted during operation, due to low power consumption.
• No geometric distortion.
• Little or no flicker, depending on backlight technology.
• Usually no refresh-rate flicker, because the LCD pixels hold their state between refreshes (which are usually done at 200 Hz or faster, regardless of the input refresh rate).
• Sharp image with no bleeding or smearing when operated at native resolution.
• Emits almost no undesirable electromagnetic radiation (in the extremely low frequency range), unlike a CRT monitor.
• Can be made in almost any size or shape.
• No theoretical resolution limit. When multiple LCD panels are used together to create a single canvas, each additional panel increases the total resolution of the display, which is commonly called stacked resolution.
• Can be made in large sizes of over 60-inch (150 cm) diagonal.
• Masking effect: the LCD grid can mask the effects of spatial and grayscale quantization, creating the illusion of higher image quality.
• Unaffected by magnetic fields, including the Earth's.
• As an inherently digital device, the LCD can natively display digital data from a DVI or HDMI connection without requiring conversion to analog. Some LCD panels have native fiber-optic inputs in addition to DVI and HDMI.
• Many LCD monitors are powered by a 12 V power supply, and if built into a computer can be powered by its 12 V power supply.
• Can be made with very narrow frame borders, allowing multiple LCD screens to be arrayed side-by-side to make up what looks like one big screen.
6.4 Disadvantages:
• Limited viewing angle in some older or cheaper monitors, causing color, saturation,
contrast and brightness to vary with user position, even within the intended viewing angle.
• Uneven backlighting in some monitors (more common in IPS-types and older TNs),
causing brightness distortion, especially toward the edges ("backlight bleed").
• Black levels may not be as dark as required because individual liquid crystals cannot
completely block all of the backlight from passing through.
• Display motion blur on moving objects caused by slow response times (>8 ms) and eye-
tracking on a sample-and-hold display, unless a strobing backlight is used. However, this
strobing can cause eye strain, as is noted next:
• As of 2012, most implementations of LCD backlighting use pulse-width modulation (PWM) to dim the display, which makes the screen flicker more acutely (though not necessarily visibly) than a CRT monitor at an 85 Hz refresh rate would. This is because the entire screen strobes on and off, rather than a CRT's phosphor-sustained dot continually scanning across the display and leaving some part of it always lit, causing severe eye strain for some people. Many of these people do not know that their eye strain is caused by the invisible strobe effect of PWM.[102] The problem is worse on many LED-backlit monitors, because the LEDs switch on and off faster than a CCFL lamp.
• Only one native resolution. Displaying any other resolution either requires a video scaler,
causing blurriness and jagged edges, or running the display at native resolution using 1:1
pixel mapping, causing the image either not to fill the screen (letterboxed display), or to
run off the lower or right edges of the screen.
• Fixed bit depth (also called color depth). Many cheaper LCDs are only able to display
262,000 colors. 8-bit S-IPS panels can display 16 million colors and have significantly
better black level, but are expensive and have slower response time.
• Low refresh rate. All but a few high-end monitors support no higher than 60 or 75 Hz;
while this does not cause visible flicker due to the LCD panel's high internal refresh rate,
the low input refresh rate limits the maximum frame-rate that can be displayed, affecting
gaming and 3D graphics.
• Input lag, because the LCD's A/D converter waits for each frame to be completely output before drawing it to the LCD panel. Many LCD monitors do post-processing before displaying the image in an attempt to compensate for poor color fidelity, which adds additional lag. Further, a video scaler must be used when displaying non-native resolutions, which adds yet more lag. Scaling and post-processing are usually done in a single chip on modern monitors, but each function that chip performs adds some delay. Some displays have a video-gaming mode which disables all or most processing to reduce perceivable input lag.
• Dead or stuck pixels may occur during manufacturing or after a period of use. A stuck pixel
will glow with color even on an all-black screen, while a dead one will always remain
black.
• Subject to a burn-in effect, although the cause differs from CRT burn-in and the effect may not be permanent; on badly designed displays, a static image can cause burn-in in a matter of hours.
• In a constant-on situation, thermalization may occur if thermal management is poor: part of the screen overheats and looks discolored compared to the rest of the screen.
• Loss of brightness and much slower response times in low temperature environments. In
sub-zero environments, LCD screens may cease to function without the use of
supplemental heating.
• Loss of contrast in high temperature environments.
A liquid crystal display (LCD) is a thin, flat display device made up of any number of color or
monochrome pixels arrayed in front of a light source or reflector. It is often utilized in battery-
powered electronic devices because it uses very small amounts of electric power.
Each pixel of an LCD typically consists of a layer of molecules aligned between two
transparent electrodes, and two polarizing filters, the axes of transmission of which are (in most
of the cases) perpendicular to each other. With no liquid crystal between the polarizing filters,
light passing through the first filter would be blocked by the second (crossed) polarizer.
The surfaces of the electrodes that are in contact with the liquid crystal material are treated so
as to align the liquid crystal molecules in a particular direction. This treatment typically consists
of a thin polymer layer that is unidirectionally rubbed using, for example, a cloth. The direction
of the liquid crystal alignment is then defined by the direction of rubbing.
PIN DETAILS:
- The ability to display numbers, characters and graphics. This is in contrast to LED seven-segment displays, which are limited to numbers and a few characters.
- Incorporation of a refreshing controller into the LCD, thereby relieving the CPU of the task of refreshing the LCD. In contrast, multiplexed LED seven-segment displays must be refreshed by the CPU (or in some other way) to keep displaying data.
6.6 Conclusion:
A new mobile tracking/searching system in a compact platform for observing animal behavior over GSM networks is proposed in this report. We used the mobile phone's Android OS functions to make it easy for mobile users to observe animal behavior alongside environmental and global location data. We chose the ARM7, Android OS, IOIO, GSM and GPS modules mainly because of their availability and accessibility in an open-platform architecture at reasonable cost. Each module is also self-contained and uncomplicated to troubleshoot, recreate and analyze for the research community.
Result:
CHAPTER 7
KEIL SOFTWARE
7.1 Introduction
Many companies provide ARM7 assemblers, and some of them offer shareware versions of their products on the web. Keil is one of them; the tools can be downloaded from their website. However, the code size supported by these shareware versions is limited, so we have to consider which assembler is suitable for our application.
The µVision IDE provides:
1. A project manager.
2. A make facility.
3. Tool configuration.
4. An editor.
5. A powerful debugger.
6. Several example programs to help you get started.
3. Select Project - Select Device, and choose an 8051 or C16x/ST10 device from the Device Database.
5. Select Project - Targets, Groups and Files. Choose Add Files, select Source Group 1, and add the source files to the project.
6. Select Project - Options and set the tool options. Note that when you select the target device from the Device Database, all special options are set automatically. You typically only need to configure the memory map of your target hardware. The default memory model settings are optimal for most applications.
The following steps are to be followed in order to develop code and test the equipment with the software.
Step 1:
Install Keil µVision4 on your PC, then click the "Keil uVision4" icon. After the window opens, go to the toolbar, select the Project tab and close any previous project.
Step 2:
Step 3:
Step 4:
Next, the "Select Device for Target" window opens; it shows a list of companies, from which you can select the device manufacturer.
Step 5:
For example, for this project you can select the 89C51/52 chip from the Atmel group. Next, click the OK button; an empty window appears, with a small "Project Window" visible on the left side. Next, create a new file.
Step 6:
From the main toolbar menu, select the "File" tab and go to New; a window will open in which you can edit the program.
Step 7:
Here you can write the program in whichever language you prefer, either assembly or C.
Step 8:
After editing the program, save the file with the extension ".c" or ".asm": if you write the program in assembly language save it as ".asm", or if you write it in C save it as ".c", in the selected path. As an example, save the file as "test.c".
Step 9:
After saving the file, compile the program. For compilation, go to the project window, right-click on "Source Group" and go to "Add Files to Group".
Step 10:
Here it will ask which files to add. For example, you can add "test.c" as saved before.
Step 11:
After adding the file, go to the project window again, right-click on your C file and select "Build Target" for compilation. If there are any errors or warnings in your program, you can check them in the "Output Window" shown at the bottom of the Keil window.
Step 12:
Here in this step you can observe the output window for “errors and warnings”.
Step 13:
If you make any mistake in your program, you can see which error occurred and where it is by clicking on that error.
Step 14:
After compilation, go to the debug session. In the toolbar menu, go to the "Debug" tab and select "Start/Stop Debug Session".
Step 15:
Here is a simple program for blinking LEDs. The LEDs are connected to PORT-1, and you can observe the output on that port.
Step 16:
To see the ports and other peripheral features go to main toolbar menu and select peripherals.
Step 17:
Step 18:
Start tracing the program in a sequential manner, i.e., step-by-step execution, and observe the output in the port window.
Step 19:
After completing the debug session, create a HEX file for programming the processor. To create a HEX file, go to the project window, right-click on Target and select "Options for Target".
Step 20:
A window appears; in the "Target" tab, set the crystal frequency to match the crystal connected to your microcontroller.
Step 21:
Next go to “Output tab”. In that Output tab click on “Create HEX file” and then click OK.
Step 22:
Finally, compile your program once again. The created HEX file will appear in your project folder.
You must:
2. Use the Step toolbar buttons to single-step through your program. You may enter G, main in the Output Window to execute up to the main C function.
3. Open the Serial Window using the Serial #1 button on the toolbar.
4. Debug your program using standard options like Step, Go, Break and so on.
ii) Peripheral Simulation:
The µVision4 debugger provides complete simulation of the CPU and on-chip peripherals of most embedded devices. To discover which peripherals of a device are supported in µVision4, select the Simulated Peripherals item from the Help menu. You may also use the web-based Device Database. New devices and simulation support for on-chip peripherals are added constantly, so be sure to check the Device Database often.
CHAPTER 8
FUTURE SCOPE
In addition, to use the battery efficiently, especially on outdoor journeys, and to cut transmission costs for SMS services, an alternative analog light sensor was investigated in place of the digital motion sensor, together with movement-decision logic applied before the actual transmission, resulting in cost savings on SMS transmission.
Note that due to the limitations of the sensor's capability and the requirement of day-time measurement, we did not investigate the night-time period; this may be a subject for further research. However, other similar mote architectures and mobile-phone technologies are also applicable, e.g., RFID and GPRS/EDGE, or iPhone, WP7, BlackBerry and Bada mobile devices, which require further investigation.
In addition, because practical measurements are time-consuming (several hours), we did not run multiple tests for the power-consumption evaluation; that is left for further study. Also, although the battery power saved is a small factor on a short time scale, the technique will have a greater impact in long-term use. Finally, since the mobile animal-tracking system is currently at the prototype stage, the size of the sensing device is limited by each individual module; a more compact size is left for future investigation.