Section3 Exercise1 ProcessDroneDataToModelALargeConstructionProject
Exercise
Process drone data to model a large construction project
Section 3 Exercise 1
September 27, 2022
Transform AEC Projects with GIS and BIM
Time to complete
40 minutes
Technical note
This exercise requires downloading a large amount of data, and depending on your
individual system specifications, processing that data can take a long time. It is
recommended that you briefly review the exercise before downloading the data.
Additionally, please review ArcGIS Drone2Map Help: Hardware resources and performance
to verify your specifications and to gauge performance expectations. If you determine
that your system may take too long to generate the products, a solution set (a project
file containing a limited set of the generated products) is available.
Due to individual system variables that cannot be accounted for, the estimated
time to complete this exercise (40 minutes) does not include any of the
download time for data nor the potential processing time to create the various
2D and 3D products in ArcGIS Drone2Map.
Software requirements
ArcGIS Drone2Map 2022.1
ArcGIS Pro 3.0
Introduction
ArcGIS is capable of using imagery from many sources, including unmanned aerial vehicles
(UAV), or drones. With these sensors, high-resolution imagery can be captured and quickly
added to your GIS to provide an updated view of your project or construction site, or for
use in advanced analysis. Several Esri products can help manage your drone data, as
described in the following table.
Additionally, ArcGIS Pro's ortho mapping capability can be used to create orthorectified
imagery. Depending on the origin of the input imagery and the organization requirements,
the choice of which application to use can vary.
Exercise scenario
In this exercise, you have been provided with recent imagery of a building under construction
(Building E) on the Esri Redlands, California, campus, which was collected by a drone. Imagine
that you are a GIS analyst with the company providing aerial surveys of the construction site.
Your task is to use the data to create 2D and 3D imagery products. These products will be
used to show progress on the development to stakeholders, project engineers, and the
construction team. You will use ArcGIS Drone2Map and the following workflow to create these
imagery products.
Creating 2D and 3D products in ArcGIS Drone2Map follows a generalized workflow based on your AEC project
needs and requirements. You can customize your workflow as necessary.
Note: To learn how to enable private browsing, see this How to Enable Private Browsing on
Any Web Browser article (https://links.esri.com/HowToBrowse).
d Under ArcGIS Login, copy and paste or type your MOOC course ArcGIS username and
password.
e On the Transform AEC Projects With GIS And BIM Home page, click the Content tab.
On the My Organization page, you will see a Desktop Application download button for
ArcGIS Drone2Map.
g On the My Organization page, click the ArcGIS Drone2Map graphic to download ArcGIS Drone2Map.
i After the download completes, extract the file on your local computer.
This ZIP file contains the .exe file to install ArcGIS Drone2Map.
k Follow the installation instructions, accept the Master Agreement, and then accept the
rest of the defaults.
l When you are finished, close the private or incognito web browser.
The size of this dataset is approximately 581 MB, so be sure that your computer
has enough space to download the data.
c When you are finished, close the web browser and File Explorer, if necessary.
e In the Choose Project Location dialog box, browse to ..\EsriMOOC and click the Projects
folder to select it.
f Click OK, and then notice that your new project name and the location have been
updated.
h In the Browse For Image Folder dialog box, browse to ..\EsriMOOC\Data and select the
2020-05-05_Oblique folder.
i Click OK.
There are now 68 oblique drone images collected of Building E under construction added to
your project.
Next you will set the template to use as the default output product template for your project.
Drone2Map includes several templates that you can use, allowing you to quickly create
output products based on the needs or requirements of your project. However, if you choose
to, you can modify the output products required for your individual needs. In other words, if
you select the 2D Products template but decide while working in your project that you want
to create 3D products, you have the ability to modify the processing options to reflect your
requirements. For this project, you will be creating both 2D and 3D products to show project
progress. You will start with the Rapid template to quickly verify that the import of your
drone imagery was successful.
j In the 2D Products Template field, click the down arrow and choose Rapid Template.
For more information on the default project templates available in Drone2Map, including the
output product type and example uses, see ArcGIS Drone2Map Help: Work with project
templates.
All your input options to create your project and add images are now set.
k Click Create.
The Drone2Map project is created, and a 2D map is displayed, showing the construction
project site.
l If necessary, from the Home tab, in the Layers group, click Basemap and select the
Topographic basemap.
In Drone2Map, you will see 2D and 3D maps added to the display. The flight line pattern in
the 2D map will be visible based on the arrangement of the input data.
m On the left side, review the Contents pane that shows the layers added to the map.
1. What does the orange line represent?
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
n In the map, along the northeast part of the flight line pattern, click the blue dot for
image DJI_0915, as shown in the following
graphic.
The Image Viewer appears in a new window, showing the image that you just selected from
the map. The purpose of the Image Viewer is to review input images before imagery products
are created. You can add notes to images or even remove them if they are not needed for
your project.
o On your own, examine a few of the other images in this collection of images.
c In the Image Collection section, for Orthorectification Method, click the down arrow and
choose Solution Points.
d In the Orthomosaic section, confirm that the Create Orthomosaic box is checked.
e In the Digital Surface Model section, check the box for Create DSM.
f In the Digital Terrain Model section, check the box for Create DTM.
Hint: You can point to the Information icon to see a detailed description of the individual
parameters and settings.
For more information about the 2D Products options, see ArcGIS Drone2Map Help:
Processing options – 2D products.
Now you will set the processing options for the 3D products that you want to create. You will
select options to create scene layer packages (SLPK) for the DSM and 3D textured meshes.
Additionally, you will create an LAS point cloud. The LAS point cloud is a set of points
representing locations in the project area that were matched across multiple overlapping
input images. These matched keypoints are used to build a point cloud that can be used
to model different elevation imagery products.
i In the Create Point Clouds section, check the boxes for SLPK, LAS, and Merge LAS Tiles.
The textured mesh options will create an object that can be viewed in three dimensions. The
meshes can be used to model what the project area looks like as if you were on the ground
looking around at the features.
j In the Create DSM Textured Meshes section, check the box for SLPK.
k In the Create 3D Textured Meshes section, check the box for SLPK.
For more information about the 3D Product options, see ArcGIS Drone2Map Help: Processing
options – 3D products.
m Click the General tab, and then ensure that Point Cloud Density is set to Medium and
Project Resolution: Automatic is set to 4x.
If you are working on a computer that is GPU-enabled, Processor Type is set automatically to
use both CPU and GPU processors to maximize processing speeds. For more information on
different factors that can influence processing speed of your drone collections, see ArcGIS
Drone2Map Help: Hardware resources and performance.
n If possible, in the Hardware section, for CPU Threads, set the slider bar to the maximum
number available for your machine.
o If possible, and if necessary, in the Hardware section, set the Processor Type to CPU +
GPU.
Note: In the preceding graphic, the system used has only 4 CPU Threads available and does
not have an NVIDIA GPU, so CPU is the option shown. Set your Hardware options to
maximize your processing speed based on your available resources.
More information: Point Cloud Density
The Density Matching (Point Cloud Density) setting defines the level of density of the final
point cloud used to derive and build the geometric features of the meshes. As the density of
the point cloud increases from Low to Ultra, the processing time increases, and the size of
the mesh products increases as well. For the Rapid Template, the Project Resolution is
set to 4x by default. This option multiplies the default ground sampling distance (GSD) by a
factor of 4, in this case, and helps reduce the file size on disk of the final product. GSD is the
distance between the center points of adjacent pixels, and it is related to pixel size and spatial
resolution. Increasing the multiplier coarsens the product GSD and image resolution, which
reduces file size and processing time; decreasing it toward 1x produces finer resolution but a
larger file and a longer processing time.
Use a lower point cloud density—such as Low or Medium—and a coarser GSD
setting for initial product creation to maximize processing speed. Use a
higher point cloud density—such as High or Ultra—and 1x only for final
products or products for presentation.
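The GSD arithmetic behind these settings can be sketched in Python. This is an illustrative calculation, not a Drone2Map formula; the 0.040 m base GSD is the value this exercise's imagery produces at 1x, and the relative pixel count is a rough indicator of why a finer GSD means a larger file.

```python
def product_gsd(base_gsd_m: float, multiplier: int) -> float:
    """Product ground sampling distance after applying the multiplier."""
    return base_gsd_m * multiplier

def relative_pixel_count(multiplier: int) -> float:
    """Pixel count relative to a 1x product: doubling the GSD quarters it."""
    return 1.0 / (multiplier ** 2)

base = 0.040  # meters per pixel at 1x (from this exercise's imagery)
for m in (1, 2, 4):
    print(f"{m}x -> GSD {product_gsd(base, m):.3f} m, "
          f"~{relative_pixel_count(m):.0%} of the 1x pixel count")
```

At 4x, the product GSD is 0.160 m and the orthomosaic contains roughly one-sixteenth as many pixels as a 1x product, which is why the Rapid Template processes so much faster.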
The following graphics provide three examples of not only the point cloud density
settings but also examples of different project resolution settings.
This graphic represents a 3D mesh product generated using a Medium point cloud density and a project resolution
of 4x GSD. The resulting file size is 41.3 MB.
This graphic represents a 3D mesh product generated using a High point cloud density and a project resolution of
4x GSD. The resulting file size is 144.4 MB.
This graphic represents a 3D mesh product generated using an Ultra point cloud density and a project resolution of
1x GSD. The resulting file size is 713.2 MB.
For more information and specifics about all the General options, see ArcGIS Drone2Map
Help: Processing options – General.
p Click the Adjust Images tab and review the various options that are available.
The options on the Adjust Images tab give you finer control of certain image parameters, if
necessary. You can define key adjustments to be used in the block adjustment process, tie
point matching, and point cloud generation options. These parameters, and their settings,
depend on the type of drone used for collection, the internal accuracy of the drone
collection, different orientation variables, and calibration types. Refer to your drone
manuals and documentation to determine which of these you may need to adjust.
q Verify that the Adjust Images tab parameters look like the following graphic.
For this exercise, and for these images, there is no need to adjust any properties or options
for these images on this tab. For more information on these options, see ArcGIS Drone2Map
Help: Processing options – Adjust Images.
r On your own, examine the final two options tabs: Coordinate Systems and Resources.
In most cases, your coordinate system will be set based on the input parameter of your drone
imagery, but you have the option to modify these if desired. If you want to have a different
output coordinate system, for instance, if your final project files are in a different spatial
reference system than the drone imagery, the Coordinate Systems tab is where you can
modify those parameters.
The Resources tab provides you with the flexibility of setting how you want to allocate various
image resources, project settings, files, and image locations when necessary.
s Click OK.
For more information about the remaining options available on the Coordinate Systems and Resources tabs, see ArcGIS Drone2Map Help.
a On the Home tab, in the Control group, click the Control down arrow and choose Import
Control.
b In the Import Control window, select the Import From CSV Or Text File option, if
necessary.
c Click OK.
d In the Import Control dialog box, for Import Control From, click Browse.
f Click OK.
The default horizontal spatial reference system (Current XY) is the projected coordinate
system of WGS 1984 UTM Zone 11N, which is the UTM zone for this portion of Redlands,
California. The points in your GCP file are the geographic coordinate system of WGS 1984.
You will need to set WGS 1984 as your input horizontal spatial reference system.
h In the Import Control dialog box, in the Control Coordinate System section, click the Set
i In the Spatial Reference dialog box, under XY Coordinate Systems Available, expand
Layers, if necessary, and select WGS 1984.
j Click OK.
Drone2Map will perform a coordinate transformation to convert the input WGS 1984 latitude
and longitude coordinates into the UTM zone for this map.
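The choice of UTM Zone 11N follows the standard UTM convention of 6-degree longitude bands. As a minimal sketch (the longitude value below is an assumed approximation for the Redlands campus, not taken from the exercise data):

```python
import math

def utm_zone(longitude_deg: float) -> int:
    """Standard UTM zone number: 6-degree bands counted from 180 W."""
    return int(math.floor((longitude_deg + 180.0) / 6.0)) + 1

def central_meridian(zone: int) -> float:
    """Central meridian (degrees) of a UTM zone."""
    return -183.0 + 6.0 * zone

# Assumed approximate longitude for the Esri Redlands campus.
lon = -117.19
zone = utm_zone(lon)
print(zone, central_meridian(zone))  # 11 -117.0
```

This is only the zone selection; the actual latitude/longitude-to-UTM conversion that Drone2Map performs is a full ellipsoidal transverse Mercator projection handled by the software.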
l Accept all other default values and click OK to add the GCPs.
The 12 new GCPs will appear in the map as green plus symbols. Simultaneously, the Control
Manager pane appears with information about the new GCPs.
In this step, you added GCPs to use during processing of your 68 drone images to improve
the accuracy of your drone data. Not only does this action improve the spatial accuracy of
your final products, but it also improves the overall quality of your image products.
Processing of the drone images in ArcGIS Drone2Map can take a long time,
depending on multiple factors. The results shown in this exercise were
generated on a system using an AMD EPYC 7v12 64-Core Processor, 2.44 GHz (4
Cores), no NVIDIA GPU, and with 14 GB of RAM. Based on the options set in the
Select processing options step, creating all the 2D and 3D products took
approximately 57 minutes. A zipped ArcGIS Pro project file of selected finished
2D and 3D products can be downloaded from this location https://links.esri.com
/Products and extracted to the ..\EsriMOOC\Data folder. This zipped file is
approximately 310 MB in size.
You can use ArcGIS Pro to examine these products; however, the remainder of
this exercise examines and reviews these output products directly in ArcGIS
Drone2Map, so some of the steps may vary.
Next you will create the 2D and 3D products that you specified in the processing options.
These products are what you will use to share with interested parties on the progress of the
Building E construction.
a On the Home tab, in the Processing group, click Start to create the imagery products.
As the different products are created, they will automatically be added to your map.
Note: The progress for the project will be indicated at the bottom of the Manage pane on the
right. Depending on your computer system, processing times may be lengthy.
When the processing is complete, the 2D map still shows the original project data, such as the
images and flight lines, but now it also includes the 2D products.
b In the Contents pane, turn off the visibility of the Project Data layer.
In the Contents pane, the 2D imagery products are displayed in two group layers: the Imagery
Products group layer, which contains the Orthomosaic layer, and the DEM Products group
layer, which contains the Digital Surface Model and Digital Terrain Model layers.
Next, you will review the processing report to verify that the products will meet the quality standards necessary for your project.
a On the Home tab, in the Processing group, click Report to open the Processing Report.
The Processing Report includes information about the process and the resulting products.
Ground Resolution represents the ground sampling distance (GSD) of the original sensor.
3. In the Project Summary section, what is the ground resolution?
__________________________________________________________________________________
__________________________________________________________________________________
Knowing the ground resolution of your sensor is important when conveying information to
engineers, architects, and other construction team members, including survey teams. This
value determines the size of the smallest feature that you can resolve, or see, in your images.
This information becomes very important to members of the construction and survey team,
not only for project site status and updates but also as inputs for survey information.
One measure of successful processing performance is whether all input images were used to
create the imagery products.
4. In the Project Summary section, how many images were calibrated in the project?
__________________________________________________________________________________
__________________________________________________________________________________
After confirming the number of images calibrated in the project, you should confirm the
coverage, including overlapping coverage, of those images.
The graphic indicates the coverage relative to the number of overlapping images. Areas in
green indicate more coverage. Due to the number of overlapping images in the green areas,
the quality of the product is greater in those areas.
5. What other information or statistics could be important for your AEC projects and
why?
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
a In the Contents pane, right-click Orthomosaic and choose Zoom To Source Resolution.
Drone imagery data is collected at low altitudes and is capable of creating high-resolution
imagery.
6. In the bottom-left corner of the map, what is the reported scale?
__________________________________________________________________________________
__________________________________________________________________________________
b In the Contents pane, in the Imagery Products group layer, right-click Orthomosaic and
choose Properties.
c In the Layer Properties dialog box, click the Source tab, and then expand Raster
Information.
One of the advantages of drone data is that the sensors are capable of capturing extremely
high-resolution data. With such a fine spatial resolution, you can recognize small
features within the orthomosaic.
7. What is the cell size reported?
__________________________________________________________________________________
__________________________________________________________________________________
When you configured the processing options for your 2D products, the automatic resolution
was set at 4x GSD, which is reflected in the cell size, or spatial resolution, of the new
orthomosaic (4 × 0.040 = 0.160).
While generated at only the Medium point cloud density and 4x the GSD of the original imagery, at
this scale, you can clearly see many features. This orthomosaic will allow the project team to
visually inspect the construction site. If necessary, and warranted for your project, you can
create a new orthomosaic at 1x the GSD and either High or Ultra point cloud density.
e On your own, pan and zoom in your map to explore the imagery and construction site.
The orthomosaic is an orthorectified image and can be used to accurately measure features.
The process of orthorectification creates images that have been corrected to mitigate errors
and distortions caused by the sensor and the terrain.
f In the Contents pane, right-click Orthomosaic, choose Zoom To Source Resolution, and
ensure that Building E is in the center of your map view.
h Measure the area of the newly installed solar panel arrays indicated in the following
graphic by clicking the outline of the array.
Note: You will need to take the area measurements of both arrays separately.
Hint: You can zoom in to the northeast section of the roof to aid in your measurements.
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
j In the Mensuration Results pane, select the measurement listed in the table, click the
Delete button, and then click Yes to delete the results.
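Conceptually, the area reported for a polygon drawn on the orthomosaic is a planar area in the projected (UTM) coordinate system. A minimal sketch using the shoelace formula; the coordinates below are hypothetical illustrative values, not the actual solar-array outline:

```python
def polygon_area(vertices):
    """Planar area via the shoelace formula. Vertices are (x, y) pairs
    in a projected coordinate system such as UTM (meters)."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the ring
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Hypothetical 20 m x 10 m rectangle in UTM-like coordinates.
rect = [(482000.0, 3768000.0), (482020.0, 3768000.0),
        (482020.0, 3768010.0), (482000.0, 3768010.0)]
print(polygon_area(rect))  # 200.0
```

The accuracy of such a measurement depends directly on the orthorectification quality and the GCP adjustment performed earlier, which is why control was imported before processing.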
a If necessary, in the Contents pane, right-click Orthomosaic and choose Zoom To Source
Resolution, and then ensure that Building E is again in the center of your map.
b In the Contents pane, in the Imagery Products group layer, turn off the visibility of the
Orthomosaic layer.
The digital surface model (DSM) is created from the LAS point cloud and indicates the surface
elevation throughout the raster. Building E is clearly visible. Trees and other features around
the construction site are also easily discernible. The values in this raster can be used for
measuring the height of features.
LAS points were also used to create the Digital Terrain Model layer, which represents a bare
earth surface.
c In the Contents pane, in the DEM Products group layer, turn off the visibility of the Digital
Surface Model layer.
The digital terrain model (DTM) shows what the area would look like if surface features such as
buildings and trees were removed; it is sometimes referred to as a bare-earth model or a bare-earth
surface.
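The relationship between the DSM, the DTM, and feature heights can be sketched as a normalized DSM (nDSM = DSM - DTM), which is how heights of buildings and trees are typically derived from these two rasters. The grids below are toy values for illustration only, not the exercise's Building E rasters:

```python
# Toy 3x3 elevation grids in meters (illustrative values only).
dsm = [[402.0, 402.0, 415.0],
       [402.0, 416.0, 416.0],
       [401.0, 401.0, 401.0]]  # surface: ground plus buildings and trees
dtm = [[402.0, 402.0, 402.0],
       [402.0, 402.0, 402.0],
       [401.0, 401.0, 401.0]]  # bare-earth terrain

# Normalized DSM: per-cell feature height above the ground.
ndsm = [[s - t for s, t in zip(srow, trow)]
        for srow, trow in zip(dsm, dtm)]

tallest = max(max(row) for row in ndsm)
print(tallest)  # 14.0
```

Cells where the DSM equals the DTM are bare ground (height 0); the remaining values are the heights of above-ground features, which is what makes the DSM/DTM pair useful for tracking construction progress.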
You have reviewed the point cloud and the products derived from it (a DSM and a DTM) and
determined that they can provide additional context for communicating progress at the
construction site to your stakeholders.
a At the top of the map view, click the 3D Map tab to view the data in three dimensions.
Note: It may take a moment for the data to load.
b On the Home tab, in the Layers group, click Basemap and select the Topographic
basemap.
9. What is the advantage of changing the basemap to topographic?
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
c On your own, experiment with other basemaps to see whether another basemap provides
for easier interpretation.
d If necessary, in the Contents pane, in the 3D Layers group layer, right-click the Flight Lines
layer and choose Zoom To Layer.
e In the Contents pane, under Elevation Surfaces, turn off the visibility of the DEM layer.
This DEM layer, derived from the drone imagery and LAS point cloud, is more localized than
the surrounding WorldElevation3D layer. This discrepancy causes some distortion. Because
the WorldElevation3D layer covers a wider area (and for visualization purposes), the DEM in
this example is turned off.
g Rotate the scene until the flight lines are aligned, as shown in the following graphic.
Note: The 3D mesh results look a bit distorted. There are methods in ArcGIS Drone2Map and
ArcGIS Pro to improve the level of detail and quality of the 3D objects.
With the flight lines visible, you can see the position of the drone during the collection of the
imagery.
10. How can the observed distortion of the trees and other features outside of these
flight lines be explained?
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
These 3D products can now be used to represent your construction site, including the
progress of Building E, with a realistic view for your stakeholders. Additionally, this view, along
with others collected during the life cycle of the construction project, can be examined in its
historic context to aid in lessons learned, safety checks, or to understand and track various
project milestones.
Stretch goals are community-supported (meaning that your fellow MOOC participants can
assist you with the steps to complete the stretch goal using the Lesson Forum), and they are a
great opportunity to work together to learn.
• What are some similarities that you notice in the output products?
• What are some observed differences between products created by the oblique and
nadir image collections?
• What are some applications for the products created from the nadir collection as
opposed to the oblique collection?
Use the Lesson Forum to post your questions, observations, and screenshot examples that
best represent your answers and observations. Be sure to include the #stretch hashtag in the
posting title.
4. In the Project Summary section, how many images were calibrated in the project?
All 68 images were calibrated.
5. What other information or statistics could be important for your AEC projects and why?
Answers may vary but can include the following:
• Information related to tie points (Images with low tie point counts may indicate
problematic areas, such as areas with poor image quality, insufficient image overlap,
or homogeneous image textures.)
• Solution points (Solution points with a higher number of image observations
generally produce more accurate results.)
• Various photogrammetric solution parameters (internal camera parameters, standard
deviation of exterior orientation)
• Information on system parameters and options set for processing
10. How can the observed distortion of the trees and other features outside of these flight lines
be explained?
Because the drone collected images inside the ring of the flight lines—that is, the
image collection was centered on Building E—there is minimal, if any, overlapping
data beyond or outside the ring of flight lines. Combined with the lack of ground
control in those areas, only a rough mesh can be generated from the output LAS dataset
and point cloud. The result is blurred or highly distorted 3D features outside the main area of
drone imagery collection.