Data Input and Topology
The choice of data input method is governed largely by the application, the available
budget, and the type and the complexity of data being input.
There are at least four basic procedures for inputting spatial data into a GIS. These are:
• Manual digitizing;
• Automatic scanning;
• Entry of coordinates using coordinate geometry; and
• Conversion of existing digital data.
Digitizing
While considerable work has been done with newer technologies, the overwhelming
majority of GIS spatial data entry is done by manual digitizing. A digitizer is an electronic device
consisting of a table upon which the map or drawing is placed. The user traces the spatial features
with a hand-held magnetic pen, often called a mouse or cursor. While tracing the features the
coordinates of selected points, e.g. vertices, are sent to the computer and stored. All points that are
recorded are registered against positional control points, usually the map corners that are keyed in
by the user at the beginning of the digitizing session. The coordinates are recorded in a user-defined coordinate system or map projection; latitude/longitude and UTM are the most commonly used.
The ability to adjust or transform data during digitizing from one projection to another is a
desirable function of the GIS software. Numerous functional techniques exist to aid the operator
in the digitizing process.
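For example, registration against control points typically amounts to fitting an affine transformation from table coordinates to map coordinates. The following is a minimal sketch of that idea, assuming four invented control points and the numpy library; it is illustrative, not any particular vendor's implementation.

```python
# A minimal sketch of digitizer-to-map registration: fit a 6-parameter
# affine transform to control points (e.g. the map corners). All
# coordinate values below are invented for illustration.
import numpy as np

# Digitizer table coordinates (cm) of the four control points
table_xy = np.array([[0.0, 0.0], [80.0, 0.0], [80.0, 60.0], [0.0, 60.0]])
# Known map coordinates of the same points (projected, metres)
map_xy = np.array([[500000.0, 4200000.0], [504000.0, 4200000.0],
                   [504000.0, 4203000.0], [500000.0, 4203000.0]])

# Solve map = [x y 1] @ params by least squares
A = np.column_stack([table_xy, np.ones(len(table_xy))])
params, *_ = np.linalg.lstsq(A, map_xy, rcond=None)

def table_to_map(x, y):
    """Transform a digitized table point into map coordinates."""
    return np.array([x, y, 1.0]) @ params

print(table_to_map(12.5, 30.2))
```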
Digitizing can be done in point mode, where single points are recorded one at a time, or in stream mode, where points are collected at regular intervals of time or of distance measured by the X and Y movement of the cursor, e.g. every 3 metres. Digitizing can also be done blindly or with a graphics terminal. Blind digitizing means that the graphic result is not immediately viewable to the person digitizing. Most systems display the digitized linework on an accompanying graphics terminal as it is being digitized.
Most GISs use a spaghetti mode of digitizing, which allows the user to simply digitize lines by indicating a start point and an end point. Data can be captured in point or stream mode. However, some systems do allow the user to capture the data in an arc/node topological data structure; this requires the operator to identify nodes while digitizing.
Automatic Scanning
The sheer cost of scanning usually eliminates the possibility of using scanning methods for data capture in most GIS implementations. Large data capture shops and government agencies are those most likely to be using scanning technology.
The current consensus is that the quality of data captured by scanning devices is not high enough to justify the cost of using scanning technology. However, major breakthroughs are being made in the field, both in scanning techniques and in capabilities to automatically clean and prepare scanned data for topological encoding. These include a variety of line-following and text recognition techniques. Users should be aware that this technology has great potential in the years to come, particularly for larger GIS installations.
Coordinate Geometry
A third technique for the input of spatial data involves the calculation and entry of
coordinates using coordinate geometry (COGO) procedures. This involves entering, from survey
data, the explicit measurement of features from some known monument. This input technique is
obviously very costly and labour intensive. In fact, it is rarely used for natural resource
applications in GIS. This method is useful for creating very precise cartographic definitions of
property, and accordingly is more appropriate for land records management at the cadastral or
municipal scale.
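The core COGO operation is computing a new coordinate from a known point, a bearing, and a distance. A hedged sketch follows; the monument coordinates, bearing, and distance are invented for illustration.

```python
# A basic COGO computation: locating a parcel corner from a known survey
# monument by bearing and distance. All values are illustrative.
import math

def cogo_point(easting, northing, bearing_deg, distance):
    """Compute the coordinates of a point a given distance from a known
    monument along a bearing measured clockwise from grid north."""
    theta = math.radians(bearing_deg)
    return (easting + distance * math.sin(theta),
            northing + distance * math.cos(theta))

# Example: 150.00 m from the monument at a bearing of N 45 deg E
corner = cogo_point(500000.0, 4200000.0, 45.0, 150.0)
print(corner)
```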
Conversion of Existing Digital Data
Most GIS software vendors also provide an ASCII data exchange format specific to their product, and a programming subroutine library that allows users to write their own data conversion routines to fulfil specific needs. As digital data becomes more readily available, this capability becomes a necessity for any GIS. Data conversion from existing digital data is not a problem for most technical persons in the GIS field. However, for smaller GIS installations that have limited access to a GIS analyst, it can be a major stumbling block in getting a GIS operational. Government agencies are usually a good source for technical information on data conversion requirements.
Some of the data formats common to the GIS marketplace are listed below. Please note
that most formats are only utilized for graphic data. Attribute data is usually handled as ASCII
text files. Vendor names are supplied where appropriate.
• IGDS – Interactive Graphics Design Software (Intergraph / MicroStation). This binary format is a standard in the turnkey CAD market and has become a de facto standard in Canada's mapping industry. It is a proprietary format; however, most GIS software vendors provide DGN translators.
• DLG – Digital Line Graph (US Geological Survey). This ASCII format is used by the USGS as a distribution standard and consequently is widely used in the United States. It is not used very much in Canada, even though most software vendors provide two-way conversions to DLG.
• DXF – Drawing Exchange Format (AutoCAD). This ASCII format is used primarily to convert to/from the AutoCAD drawing format and is a standard in the engineering discipline. Most GIS software vendors provide a DXF translator.
• GENERATE – ARC/INFO Graphic Exchange Format. A generic ASCII format for spatial data used by the ARC/INFO software to accommodate generic spatial data.
• EXPORT – ARC/INFO Export Format. An exchange format that includes both graphic and attribute data. This format is intended for transferring ARC/INFO data from one hardware platform, or site, to another; it is also often used for archiving ARC/INFO data. This is not a published data format; however, some GIS and desktop mapping vendors provide translators. EXPORT files can be uncompressed, partially compressed, or fully compressed.
A wide variety of other vendor-specific data formats exist within the mapping and GIS industry. In particular, most GIS software vendors have their own proprietary formats. However, almost all provide data conversion to/from the above formats. As well, most GIS software vendors will develop data conversion programs dependent on specific requests by customers. Potential purchasers of commercial GIS packages should determine their data conversion needs and clearly identify them to the software vendor prior to purchase.
3.3.1. Grids
Grids are an ESRI file format used to store both discrete features such as buildings, roads,
and parcels, and continuous phenomena such as elevation, temperature, and precipitation. Recall
that the basic unit of the raster data model is the cell. Cells store information about what things are
like at a particular location on the earth's surface. Depending on the type of data being stored, cell values can be either integers (whole numbers) or floating-point numbers (numbers with decimals). There are two types of grids: one stores integers and the other stores floating-point values.
A discrete grid contains cells whose values are integers, often code numbers for a
particular category. Cells can have the same value in a discrete grid. For example, in a discrete
grid of land use, each land use type is coded by a different integer, but many cells may have the
same code. Discrete grids have an attribute table that stores the cell values and their associated
attributes.
A continuous grid is used to represent continuous phenomena; its cell values are floating
points. Each cell in a continuous grid can have a different floating point value. For example, in a
continuous grid representing elevation, one cell might store an elevation value of 564.3 meters,
while the cell to the left might store an elevation value of 565.1 meters. Unlike discrete grids,
continuous grids don't have an attribute table.
Discrete grids represent discrete features such as land use categories with integer values.
Continuous grids represent continuous phenomena such as elevation with floating point values.
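A minimal sketch of the distinction, using plain numpy arrays as stand-ins for grids (the cell values are invented); the unique-value counts play the role of a discrete grid's attribute table.

```python
import numpy as np

# Discrete grid: integer codes for land-use categories (values invented);
# many cells may share the same code.
landuse = np.array([[1, 1, 2],
                    [3, 2, 2],
                    [3, 3, 1]], dtype=np.int32)

# The analogue of a discrete grid's attribute table: one row per distinct
# cell value, with a count of the cells that carry it.
values, counts = np.unique(landuse, return_counts=True)
for v, c in zip(values, counts):
    print(f"code {v}: {c} cells")

# Continuous grid: floating-point elevations; nearly every cell differs,
# so no per-value attribute table is kept.
elevation = np.array([[564.3, 565.1],
                      [563.8, 566.0]], dtype=np.float64)
```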
The attribute tables of discrete grids are INFO format, the same format in which coverage
feature class attribute tables are stored. As with coverage attribute tables, the INFO table of a
discrete grid is stored within an info folder, which is stored at the same level as the grid in a
workspace folder. Again like coverages, there is one info folder for all the grids in a workspace
folder. To avoid breaking or corrupting the connection between grid files and the info folder,
always use ArcCatalog to move, copy, rename, and delete grids.
The Grids workspace folder contains two grids: soils and vegetation. The attribute tables
for both grids are stored in the info folder. Auxiliary files called soils.aux and vegetation.aux link
the grids and their attribute tables.
3.3.2. Images
The term "image" is a collective term for raster’s whose cells, or pixels, store brightness
values of reflected visible light or other types of electromagnetic radiation, such as emitted heat
(infrared) or ultraviolet (UV). Aerial photos, satellite images, and scanned paper maps are
examples of images commonly used in a GIS.
Images can be displayed as layers in a map or they can be used as attributes for vector
features. For example, a real estate company might include photos of available houses as an attribute of a homes layer. To be displayed as a layer, however, images must be referenced to
real-world locations.
For example, an aerial photo as it comes from the camera is just a static picture, like a
picture of a house. There's no information about what part of the world the photo has captured,
and the photo may contain distortion and scale variations caused by the angle of the camera. To
display properly with other map layers, the aerial photo must be assigned a coordinate system and
some of its pixels must be linked to known geographic coordinates.
Raster images, such as aerial photographs and scanned maps, can be referenced to real-
world locations, then displayed as a layer in a GIS map.
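Such referencing is commonly expressed as a six-parameter affine transformation from pixel column/row to world coordinates, the model used by world files and GDAL-style geotransforms. A small sketch, with invented numbers:

```python
# Georeferencing sketch: the common six-parameter affine model that maps
# a pixel's column/row to real-world coordinates. This simplified version
# omits the two rotation terms (a north-up image); numbers are invented.
def pixel_to_world(col, row, origin_x, origin_y, pixel_w, pixel_h):
    """Return the world coordinate of a pixel's upper-left corner."""
    x = origin_x + col * pixel_w
    y = origin_y + row * pixel_h   # pixel_h is negative for north-up rasters
    return x, y

# A 1 m resolution aerial photo whose upper-left corner sits at
# (358000, 4576000) in a projected coordinate system:
print(pixel_to_world(250, 100, 358000.0, 4576000.0, 1.0, -1.0))
```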
There are many image file formats, which differ in the type of compression used to reduce the file size. Many of these image formats are supported by ArcGIS software.
3.5. DIGITIZERS
• Digitizers are the most common device for extracting spatial information from maps
and photographs
◦ the map, photo, or other document is placed on the flat surface of the digitizing
tablet
3.5.1. Hardware
• The position of an indicator as it is moved over the surface of the digitizing tablet is
detected by the computer and interpreted as pairs of x,y coordinates
◦ the indicator may be a pen-like stylus or a cursor (a small flat plate the size of a
hockey puck with a cross-hair)
• frequently, there are control buttons on the cursor which permit control of the system
without having to turn attention from the digitizing tablet to a computer terminal
• digitizing tablets can be purchased in sizes from 25x25 cm to 200x150 cm, at
approximate costs from $500 to $5,000
• early digitizers (ca. 1965) were backlit glass tables
◦ a magnetic field generated by the cursor was tracked mechanically by an arm
located behind the table
◦ the arm's motion was encoded, coordinates computed and sent to a host processor
◦ some early low-cost systems had mechanically linked cursors - the free-cursor
digitizer was initially much more expensive
• the first solid-state systems used a spark generated by the cursor and detected by linear
microphones
◦ problems with errors generated by ambient noise
• contemporary tablets use a grid of wires embedded in the tablet to generate a magnetic
field which is detected by the cursor
◦ accuracies are typically better than 0.1 mm
◦ this is better than the accuracy with which the average operator can position the
cursor
◦ functions for transforming coordinates are sometimes built into the tablet and used
to process data before it is sent to the host
Dimensionality - the distinction between point, line, area, and volume, which are said to
have topological dimensions of 0, 1, 2, and 3 respectively.
3.6.1. Adjacency
Adjacency describes the touching of land parcels, counties, and nation-states that share a common border.
3.6.2. Connectivity
Connectivity describes junctions between streets, roads, railroads, and rivers. Broken connectivity at such junctions, for example an overshoot where a line extends past its intended node, is a very common topological error.
3.6.3. Containment
Containment describes whether a point lies inside rather than outside an area.
Topology defines and enforces data integrity rules (for example, there should be no gaps between polygons). It supports topological relationship queries and navigation (such as navigating feature adjacency or connectivity), supports sophisticated editing tools, and allows feature construction from unstructured geometry (for example, constructing polygons from lines).
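These relationships can be queried directly in code. Below is a small illustration using the open-source shapely library; the parcel, well, and road geometries are invented for the example.

```python
# Adjacency, containment, and connectivity with shapely (one common
# open-source geometry engine); all shapes are invented sample data.
from shapely.geometry import LineString, Point, Polygon

parcel_a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
parcel_b = Polygon([(2, 0), (4, 0), (4, 2), (2, 2)])  # shares an edge with A
well = Point(1, 1)

print(parcel_a.touches(parcel_b))  # adjacency: True, common border
print(parcel_a.contains(well))     # containment: True, well lies inside A

road_1 = LineString([(0, 0), (2, 2)])
road_2 = LineString([(2, 2), (4, 1)])
print(road_1.touches(road_2))      # connectivity: True, roads meet at a node
```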
Addressing topology involves more than providing a data storage mechanism. In a GIS, topology is maintained by means of some of the following aspects:
• The geo-database includes a topological data model using an open storage format for
simple features (i.e., feature classes of points, lines, and polygons), topology rules, and
topologically integrated coordinates among features with shared geometry. The data
model includes the ability to define the integrity rules and topological behaviour of the
feature classes that participate in a topology.
• Most GIS programs include a set of tools for query, editing, validation, and error
correction of topology.
• GIS software can navigate topological relationships, work with adjacency and connectivity, and assemble features from these elements. It can identify the polygons that share a specific common edge; list the edges that connect at a certain node; navigate along connected edges from the current location; add a new line and "burn" it into the topological graph; split lines at intersections; and create the resulting edges, faces, and nodes (a toy sketch of such navigation follows this list).
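A toy sketch of such a topological graph, using hand-built sample data in plain Python dictionaries rather than any particular GIS product's internal structures:

```python
# A toy arc-node structure: each edge records its endpoint nodes and the
# faces (polygons) on either side. The sample data is invented.
edges = {
    "e1": {"from": "n1", "to": "n2", "left": "A", "right": "B"},
    "e2": {"from": "n2", "to": "n3", "left": "A", "right": None},
    "e3": {"from": "n2", "to": "n4", "left": "B", "right": None},
}

def edges_at_node(node):
    """List the edges that connect at a given node."""
    return [e for e, d in edges.items() if node in (d["from"], d["to"])]

def faces_sharing(edge):
    """Identify the polygons that share a specific common edge."""
    d = edges[edge]
    return {d["left"], d["right"]} - {None}

print(edges_at_node("n2"))   # ['e1', 'e2', 'e3']
print(faces_sharing("e1"))   # {'A', 'B'}
```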
Vector formats
• AutoCAD DXF – contour elevation plots in AutoCAD DXF format (by Autodesk)
• Cartesian coordinate system (XYZ) – simple point cloud
• Digital line graph (DLG) – a USGS format for vector data
• Esri TIN - proprietary binary format for triangulated irregular network data used
by Esri
• Geography Markup Language (GML) – XML based open standard (by OpenGIS)
for GIS data exchange
• GeoJSON – a lightweight format based on JSON, used by many open source GIS packages (a minimal example follows this list)
• GeoMedia – Intergraph's Microsoft Access based format for spatial vector storage
• ISFC – Intergraph's MicroStation based CAD solution attaching vector elements to a
relational Microsoft Access database
• Keyhole Markup Language (KML) – XML based open standard (by OpenGIS) for
GIS data exchange
• MapInfo TAB format – MapInfo's vector data format using TAB, DAT, ID and
MAP files
• National Transfer Format (NTF) – National Transfer Format (mostly used by the
UK Ordnance Survey)
• Spatialite – a spatial extension to SQLite, providing vector geo-database functionality. It is similar to PostGIS, Oracle Spatial, and SQL Server with spatial extensions
• Shapefile – a popular vector data GIS format, developed by Esri
• Simple Features – Open Geospatial Consortium specification for vector data
• SOSI – a spatial data format used for all public exchange of spatial data in Norway
• Spatial Data File – Autodesk's high-performance geo-database format, native
to MapGuide
• TIGER – Topologically Integrated Geographic Encoding and Referencing
• Vector Product Format (VPF) – National Geospatial-Intelligence Agency (NGA)'s
format of vectored data for large geographic databases
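As promised above, a minimal GeoJSON Feature, assembled with Python's json module; the geometry and attribute values are invented.

```python
# A minimal GeoJSON Feature showing how the format pairs a geometry with
# attribute properties; the coordinates and properties are sample data.
import json

feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-122.42, 37.77]},
    "properties": {"name": "Sample well", "depth_m": 120},
}
print(json.dumps(feature, indent=2))
```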
Grid formats
• USGS DEM – The USGS' Digital Elevation Model
• GTOPO30 – Large complete Earth elevation model at 30 arc seconds, delivered in the
USGS DEM format
• DTED – National Geospatial-Intelligence Agency (NGA)'s Digital Terrain Elevation
Data, the military standard for elevation data
• GeoTIFF – TIFF variant enriched with GIS relevant metadata
• SDTS – The USGS' successor to DEM
In the layer bar, right-click the COUNTY layer name to open the pop-up menu for the
COUNTY layer. Select Edit to open the GIS Layer window. In the definition for the COUNTY
layer, select Thematic. The GIS Attribute Data Sets window appears for you to define the link to
the theme data set.
In the GIS Attribute Data Sets window, select New to define a new link. In the
resulting Select a Member window, select MAPS.USAAC. You must next specify the values that
are common to both the attribute and spatial data, because the common values provide the
connection between the spatial data and the attribute data.
The spatial database and the MAPS.USAAC data set share compatible state and county codes, so first select STATE in both the Data Set Vars and Composites lists, and then select COUNTY in both lists. Select Save to save the link definition to the Links list. Finally, select Continue to close the GIS Attribute Data Sets window.
After the GIS Attribute Data Sets window closes, the Var window automatically opens for
you. Select which variable in the attribute data provides the theme data for your theme. Select the
CHANGE variable to have the counties colored according to the level of change in the county
population. Select OK to close the Var window.
The counties in the spatial data are colored according to the demographic values in the
attribute data set, as shown in the following display.
If the external table has an existing spatial column that contains no data, the
ArcGIS Maps Connect workflow populates the column based on other location information in
the table (for example, address). If no spatial column exists, the ArcGIS Maps Connect
workflow creates a geography spatial type column named EsriShape with a Spatial Reference
Identifier (SRID) of 4326 (WGS 84). The EsriShape field supports all geometries including
points, lines, and polygons. In all scenarios, the external content can be enriched with
additional geographic data variables from ArcGIS.
3.7.2. Note
If the ArcGIS Maps Connect workflow fails, ensure the appropriate permissions for Microsoft SQL Server have been set. You can check the error messages in the SharePoint site workflow history for exact details on the settings that need to be corrected.
When the ArcGIS Maps Connect workflow completes, the result is a regular SharePoint
list, not an external list. That said, the fields created from the SQL Server database are of an
external type, and edits made to these fields in SharePoint cannot be passed back to the database.
SharePoint can only pass back the fields it has created, such as for the ArcGIS Maps Locate
workflow and geoenrichment.
3.8. ODBC
ODBC was originally developed by Microsoft and Simba Technologies during the early
1990s, and became the basis for the Call Level Interface (CLI) standardized by SQL Access
Group in the UNIX and mainframe field. ODBC retained several features that were removed as
part of the CLI effort. Full ODBC was later ported back to those platforms, and became a de facto
standard considerably better known than CLI. The CLI remains similar to ODBC, and
applications can be ported from one platform to the other with few changes.
3.8.1. History: Before ODBC
The introduction of the mainframe-based relational database during the 1970s led to a
proliferation of data access methods. Generally these systems operated together with a simple
command processor that allowed users to type in English-like commands, and receive output. The
best-known examples are SQL from IBM and QUEL from the Ingres project. These systems may
or may not allow other applications to access the data directly, and those that did use a wide
variety of methodologies. The introduction of SQL aimed to solve the problem of language
standardization, although substantial differences in implementation remained.
Also, since the SQL language had only rudimentary programming features, users often
wanted to use SQL within a program written in another language, say Fortran or C. This led to the
concept of Embedded SQL, which allowed SQL code to be embedded within another language.
For instance, a SQL statement like SELECT * FROM city could be inserted as text within C
source code, and during compiling it would be converted into a custom format that directly called
a function within a library that would pass the statement into the SQL system. Results returned
from the statements would be interpreted back into C data formats like char * using similar library
code.
There were several problems with the Embedded SQL approach. Like the different
varieties of SQL, the Embedded SQLs that used them varied widely, not only from platform to
platform, but even across languages on one platform – a system that allowed calls into IBM's DB2
would look very different from one that called into their own SQL/DS. Another key problem to
the Embedded SQL concept was that the SQL code could only be changed in the program's source
code, so that even small changes to the query required considerable programmer effort to modify.
The SQL market referred to this as static SQL, versus dynamic SQL which could be changed at
any time, like the command-line interfaces that shipped with almost all SQL systems, or a
programming interface that left the SQL as plain text until it was called. Dynamic SQL systems
became a major focus for SQL vendors during the 1980s.
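Modern ODBC-style interfaces expose exactly this dynamic model: the SQL stays plain text until it is executed at run time. A minimal sketch using Python's pyodbc bridge; the DSN, credentials, and city table are hypothetical.

```python
# Dynamic SQL in the ODBC style: the statement remains plain text until
# it is executed at run time. DSN name, credentials, and the "city"
# table are hypothetical sample values.
import pyodbc

conn = pyodbc.connect("DSN=citydb;UID=gis;PWD=secret")
cur = conn.cursor()

# The query can be assembled or changed at any time before execution,
# unlike static Embedded SQL compiled into a program's source.
sql = "SELECT name, population FROM city WHERE population > ?"
for name, population in cur.execute(sql, 1_000_000):
    print(name, population)
conn.close()
```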
Older mainframe databases, and the newer microcomputer based systems that were based
on them, generally did not have a SQL-like command processor between the user and the database
engine. Instead, the data was accessed directly by the program – a programming library in the case
of large mainframe systems, or a command line interface or interactive forms system in the case
of dBASE and similar applications. Data from dBASE could not generally be accessed directly by
other programs running on the machine. Those programs may be given a way to access this data,
often through libraries, but it would not work with any other database engine, or even different
databases in the same engine. In effect, all such systems were static, which presented considerable
problems.
By the late 1980s there were several efforts underway to provide an abstraction layer for
this purpose. Some of these were mainframe related, designed to allow programs running on those
machines to translate between the variety of SQL dialects and provide a single common interface which
could then be called by other mainframe or microcomputer programs. These solutions included
IBM's Distributed Relational Database Architecture (DRDA) and Apple Computer's Data Access
Language. Much more common, however, were systems that ran entirely on microcomputers,
including a complete protocol stack that included any required networking or file translation
support.
One of the early examples of such a system was Lotus Development's DataLens, initially
known as Blueprint. Blueprint, developed for 1-2-3, supported a variety of data sources, including
SQL/DS, DB2, FOCUS and a variety of similar mainframe systems, as well as microcomputer
systems like dBase and the early Microsoft/Ashton-Tate efforts that would eventually develop into
Microsoft SQL Server. Unlike the later ODBC, Blueprint was a purely code-based system, lacking
anything approximating a command language like SQL. Instead, programmers used data
structures to store the query information, constructing a query by linking many of these structures
together. Lotus referred to these compound structures as query trees.
Around the same time, an industry team including members from Sybase (Tom Haggin),
Tandem Computers (Jim Gray & Rao Yendluri) and Microsoft (Kyle G) were working on a
standardized dynamic SQL concept. Much of the system was based on Sybase's DB-Library
system, with the Sybase-specific sections removed and several additions to support other
platforms. DB-Library was aided by an industry-wide move from library systems that were tightly
linked to a specific language, to library systems that were provided by the operating system and
required the languages on that platform to conform to its standards. This meant that a single
library could be used with (potentially) any programming language on a given platform.
The first draft of the Microsoft Data Access API was published in April 1989, about the
same time as Lotus' announcement of Blueprint. In spite of Blueprint's great lead – it was running
when MSDA was still a paper project – Lotus eventually joined the MSDA efforts as it became
clear that SQL would become the de facto database standard. After considerable industry input, in
the summer of 1989 the standard became SQL Connectivity (SQLC).
MS continued working with the original SQLC standard, retaining many of the advanced
features that were removed from the CLI version. These included features like scrollable cursors,
and metadata information queries. The commands in the API were split into groups; the Core
group was identical to the CLI, the Level 1 extensions were commands that would be easy to
implement in drivers, while Level 2 commands contained the more advanced features like cursors.
A proposed standard was released in December 1991, and industry input was gathered and worked
into the system through 1992, resulting in yet another name change to ODBC.
The SAG standardization efforts presented an opportunity for Microsoft to adapt their Jet
system to the new CLI standard. This would not only make Windows a premier platform for CLI
development, but also allow users to use SQL to access both Jet and other databases as well. What
was missing was the SQL parser that could convert those calls from their text form into the C-
interface used in Jet. To solve this, MS partnered with PageAhead Software to use their existing
query processor, SIMBA. SIMBA was used as a parser above Jet's C library, turning Jet into an
SQL database. And because Jet could forward those C-based calls to other databases, this also
allowed SIMBA to query other systems. Microsoft included drivers for Excel to turn its
spreadsheet documents into SQL-accessible database tables.
Meanwhile, the CLI standard effort dragged on, and it was not until March 1995 that the
definitive version was finalized. By then, Microsoft had already granted Visigenic Software a
source code license to develop ODBC on non-Windows platforms. Visigenic ported ODBC to a
wide variety of Unix platforms, where ODBC quickly became the de facto standard. "Real" CLI is
rare today. The two systems remain similar, and many applications can be ported from ODBC to
CLI with few or no changes.
Over time, database vendors took over the driver interfaces and provided direct links to
their products. Skipping the intermediate conversions to and from Jet or similar wrappers often
resulted in higher performance. However, by then Microsoft had changed focus to their OLE DB
concept (recently reinstated), which provided direct access to a wider variety of data sources from
address books to text files. Several new systems followed which further turned their attention
from ODBC, including ActiveX Data Objects (ADO) and ADO.net, which interacted more or less
with ODBC over their lifetimes.
As Microsoft turned its attention away from working directly on ODBC, the UNIX field
was increasingly embracing it. This was propelled by two changes within the market, the
introduction of graphical user interfaces (GUIs) like GNOME that provided a need to access these
sources in non-text form, and the emergence of open software database systems like PostgreSQL
and MySQL, initially under Unix. The later adoption by Apple of the standard Unix-side iODBC package in Mac OS X 10.2 (Jaguar) (which OpenLink Software had been independently providing for Mac OS X 10.0 and even Mac OS 9 since 2001) further cemented ODBC as the standard for cross-platform data access.
Sun Microsystems used the ODBC system as the basis for their own open standard, Java
Database Connectivity (JDBC). In most ways, JDBC can be considered a version of ODBC for
the programming language Java instead of C. JDBC-to-ODBC bridges allow Java-based programs
to access data sources through ODBC drivers on platforms lacking a native JDBC driver, although
these are now relatively rare. Inversely, ODBC-to-JDBC bridges allow C-based programs to
access data sources through JDBC drivers on platforms or from databases lacking suitable ODBC
drivers.
However, the rise of thin client computing using HTML as an intermediate format has
reduced the need for ODBC. Many web development platforms contain direct links to target
databases – MySQL being very common. In these scenarios, there is no direct client-side access
nor multiple client software systems to support; everything goes through the programmer-supplied
HTML application. The virtualization that ODBC offers is no longer a strong requirement, and
development of ODBC is no longer as active as it once was.
3.9. GPS
Stands for "Global Positioning System." GPS is a satellite navigation system used to
determine the ground position of an object. GPS technology was first used by the United States
military in the 1960s and expanded into civilian use over the next few decades. Today, GPS receivers are included in many commercial products, such as automobiles, smartphones, exercise watches, and GIS devices.
The GPS system includes 24 satellites deployed in space about 12,000 miles (19,300
kilometers) above the earth's surface. They orbit the earth once every 12 hours at an extremely fast
pace of roughly 7,000 miles per hour (11,200 kilometers per hour). The satellites are evenly spread out so that at least four satellites are accessible via direct line-of-sight from anywhere on the globe.
Each GPS satellite broadcasts a message that includes the satellite's current position, orbit,
and exact time. A GPS receiver combines the broadcasts from multiple satellites to calculate its
exact position using a process called triangulation. Three satellites are required in order to
determine a receiver's location, though a connection to four satellites is ideal since it provides
greater accuracy.
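Numerically, the position fix is a least-squares problem: find the point whose distances to the satellites best match the measured ranges. A simplified sketch (ignoring receiver clock error, with invented coordinates in kilometres) using Gauss-Newton iteration:

```python
# Simplified position fixing from satellite ranges. Clock error is
# ignored and all coordinates (km) are invented for illustration.
import numpy as np

sats = np.array([[15600, 7540, 20140],
                 [18760, 2750, 18610],
                 [17610, 14630, 13480],
                 [19170, 610, 18390]], dtype=float)
truth = np.array([-41.8, -16.8, 6370.0])          # "unknown" receiver
ranges = np.linalg.norm(sats - truth, axis=1)     # measured distances

x = np.zeros(3)                                   # initial guess: Earth centre
for _ in range(10):                               # Gauss-Newton iterations
    diff = x - sats
    dist = np.linalg.norm(diff, axis=1)
    J = diff / dist[:, None]                      # Jacobian of range w.r.t. x
    dx, *_ = np.linalg.lstsq(J, ranges - dist, rcond=None)
    x += dx
print(x)                                          # converges to the receiver position
```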
In order for a GPS device to work correctly, it must first establish a connection to the
required number of satellites. This process can take anywhere from a few seconds to a few
minutes, depending on the strength of the receiver. For example, a car's GPS unit will typically
establish a GPS connection faster than the receiver in a watch or smartphone. Most GPS devices
also use some type of location caching to speed up GPS detection. By memorizing its previous
location, a GPS device can quickly determine what satellites will be available the next time it
scans for a GPS signal.
• Data: geospatial information (where things are located) and the details of objects such as services, roads, and buildings are collected and entered into the GIS software
• Software: GIS software analyses the data and presents it in different combinations for the user
• Hardware: includes hand-held devices for collecting data and computers with GIS software
Positioning accuracy
Factors that trigger GPS position errors
Ionosphere
The ionosphere is a portion of the upper atmosphere, between the thermosphere and the exosphere. When GPS signals pass through this layer, their propagation velocity slows, causing propagation error.
Troposphere
The troposphere is the lowest portion of Earth's atmosphere. Refraction of the radio signal caused by the dry atmosphere and the water vapour within it provokes GPS position error.
Multipath propagation
A GPS signal is not immune to reflection when it hits the ground, structures, and other surfaces. This phenomenon, called multipath propagation, is one of the causes of GPS position errors.
Fig. 3.13. GPS position error
3.9.7. DOP (Dilution of Precision)
DOP is a value that shows the degree of degradation of the GPS positioning accuracy. The
smaller the value, the higher the positioning accuracy. This value depends upon the positions of the GPS satellites tracked for positioning. If the tracked satellites are spread evenly across the sky, the positioning accuracy is higher; if the positions of the tracked satellites are clustered together, the positioning accuracy is lower.
If a large number of satellites are tracked, GPS positioning accuracy improves, but if only a few satellites can be tracked, it may be difficult to generate a GPS position at all. Fig. 1-11 illustrates the occasion where the GPS receiver tracks a greater number of satellites for positioning; Fig. 1-12 illustrates the occasion where it tracks only a few.
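The dependence of accuracy on satellite geometry can be quantified: DOP is derived from the geometry matrix of unit line-of-sight vectors to the tracked satellites. A sketch with invented satellite directions, showing that well-spread satellites yield a smaller (better) DOP than clustered ones:

```python
# DOP from satellite geometry: rows of the design matrix hold the unit
# line-of-sight vectors plus a clock term, and GDOP is the root trace of
# (A^T A)^-1. The satellite directions below are invented.
import numpy as np

def gdop(unit_vectors):
    A = np.column_stack([unit_vectors, np.ones(len(unit_vectors))])
    return np.sqrt(np.trace(np.linalg.inv(A.T @ A)))

spread = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0.2],
                   [0, -1, 0.2], [0, 0, 1]], dtype=float)
clustered = np.array([[0.2, 0, 1], [0, 0.2, 1], [-0.2, 0, 1],
                      [0, -0.2, 1], [0.1, 0.1, 1]], dtype=float)
# Normalize each row to a unit direction vector
spread /= np.linalg.norm(spread, axis=1, keepdims=True)
clustered /= np.linalg.norm(clustered, axis=1, keepdims=True)

print(gdop(spread))     # well-spread satellites -> smaller DOP
print(gdop(clustered))  # clustered satellites -> larger DOP
```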