Camera Notes For Photogrammetry
Introduction to Photogrammetry
Cyrill Stachniss
[Courtesy: ImagingSource]
What is Photogrammetry?
“Estimation of the geometric and semantic properties of objects based on images or observations from similar sensors.”
Two Key Problems in Photogrammetry
Involved Disciplines
At the intersection of 4 disciplines
§ Traditional photogrammetry
§ Computer vision
§ Machine learning
§ Robotics
Photogrammetry Connections
§ Developed for surveying purposes and is a part of the geodetic sciences
§ A form of optical remote sensing
§ Digital photogrammetry has strong connections to image processing and computer vision
§ Strong links between photogrammetry and state estimation and robotics
§ Uses machine learning approaches
Advantages (1)
§ Contact-free sensing
Why is contact-free sensing relevant?
Advantages (1)
§ Contact-free sensing is important for
§ inaccessible (but visible) areas
§ sensitive material
§ hot/cold material
§ toxic material
Advantages (1)
§ Contact-free sensing
§ Relatively easy to acquire a large number of measurements
§ Dense coverage of comparably large areas
§ Flexible resolution (small but accurate or large but coarse models)
§ 2D sensing and 3D sensing
Advantages (2)
§ Ability to record dynamic scenes
§ More than just geometry (image interpretation, inferring semantics, classification, …)
§ Data can be interpreted by humans
§ Recorded images document the measuring process
§ Automatic data processing
§ Possibility for real-time processing
There is no free lunch!
Disadvantages
§ A light source is needed
§ Cameras only measure intensities from certain directions
§ Occlusions and visibility constraints
§ One image is a projection of the 3D world onto a 2D image plane
§ Other techniques may achieve a higher measurement accuracy
Cameras to Measure Directions
An image point in a camera image defines a ray to the object point (see the back-projection sketch below).
[Diagram: an object (geometry, location, type, …) is mapped by the physics of image formation, given the camera intrinsics and extrinsics, to an image]
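To make this concrete, here is a minimal Python sketch (not from the lecture) of back-projecting a pixel to a viewing ray. It assumes an ideal pinhole camera described by an intrinsic matrix K, ignores lens distortion, and uses made-up intrinsics for the example; the resulting ray is expressed in the camera frame, so the extrinsics would still be needed to place it in the world.

import numpy as np

def pixel_to_ray(u, v, K):
    """Unit direction of the viewing ray through pixel (u, v), in camera coordinates."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction, defined only up to scale
    return d / np.linalg.norm(d)                  # normalize to unit length

# Example with assumed intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
print(pixel_to_ray(400.0, 300.0, K))

Every 3D point along that ray projects to the same pixel, which is why a single image fixes only the direction to an object point, not its distance.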
The Inverted Mapping
[Diagram: images are mapped by an algorithm, given the camera intrinsics and extrinsics, the physics of image formation, and background knowledge, back to the properties of the object (geometry, location, type, …)]
Two Key Problems in Photogrammetry
Experiment
§ A person who has been blind from birth
§ A camera records a scene
§ The image is “printed” on the person’s skin, using a pin for each pixel
§ Result: the person can recognize different objects and interpret the scene
Typical Sensors
§ Industrial cameras
[Courtesy: Microsoft]
Typical Sensors
§ Laser range finders
Application: Maps
[Courtesy: NEXTMap]
Application: Environment Monitoring
Segmentation and Instances
Application: Aerial Mapping (1)
Application: Aerial Mapping (2)
Application: Orthophotos
[Courtesy: SIGPAC]
Application: City Mapping
[Courtesy: Früh]
Application: 3D City Models
[Courtesy: Google]
Application: Digital Preservation of Cultural Heritage
Image-Based 3D Reconstruction
§ Seven cameras in a known configuration
§ Seeing points in multiple images allows for estimating their 3D locations (see the triangulation sketch below)
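As a rough illustration of how seeing a point in several calibrated images constrains its 3D position, below is a minimal Python sketch (not the lecture's implementation) of linear triangulation from two views via the direct linear transform. The projection matrices, intrinsics, and the test point are made-up values for the toy example; a real pipeline would also handle noise, outliers, and more than two views.

import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from pixel observations x1 and x2
    in two cameras with known 3x4 projection matrices P1 and P2."""
    u1, v1 = x1
    u2, v2 = x2
    # Each observation gives two linear constraints A X = 0 on the homogeneous point X
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Toy setup with assumed calibration: camera 1 at the origin, camera 2 shifted 1 m along x
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # should be close to X_true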
3D Model of Cultural Heritage Site (Catacombe di Priscilla)
Application: Robotics
Semantics in Robotics
Visual Localization
Requires Solving Challenging Image Matching Problems
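To give a feel for what image matching involves, here is a minimal Python sketch using OpenCV's ORB features; this is an assumed, generic pipeline rather than the method shown in the lecture, and the image file names are placeholders for two pictures of the same place (e.g., in summer and in winter).

import cv2

# Placeholder file names: two images of the same place under different conditions
img1 = cv2.imread("place_summer.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("place_winter.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)          # keypoint detector + binary descriptor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with cross-check; Hamming distance suits binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "tentative correspondences")

Under strong appearance changes such as seasons or lighting, many of these tentative matches are wrong, which is why robust estimation and, increasingly, learned features are used on top of such a pipeline.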
Purely Vision-Based Localization Across Seasonal Changes
Robotic Cars
[Courtesy: Google]
What Does the Car See?
[Courtesy: Google]
Camera-Based Semantic Segmentation
LiDAR-Based Semantic Segmentation
What Do We Need to Estimate?
§ Poses
§ Instances
§ 3D geometry
§ Tracking
§ Semantics
§ Predictions
Today’s Autonomous Cars
[Courtesy: Google/Waymo]
Photogrammetry I + II
§ This module (Photo I + II) is intended to provide the foundations of photogrammetry
§ Key building blocks for interesting and exciting applications
Relevant Literature
Used in this course:
§ Förstner & Wrobel: Photogrammetric Computer Vision
§ Förstner: Photogrammetrie I Skriptum (lecture notes)
§ Szeliski: Computer Vision: Algorithms and Applications. Springer, 2010
§ Alpaydin: Introduction to Machine Learning, 2009
§ Hartley & Zisserman: Multiple View Geometry in Computer Vision, 2004
Slide Information
§ The slides have been created by Cyrill Stachniss as part of the photogrammetry and robotics courses.
§ I tried to acknowledge all people from whom I used images or videos. In case I made a mistake or missed someone, please let me know.
§ The photogrammetry material heavily relies on the very well-written lecture notes by Wolfgang Förstner and the Photogrammetric Computer Vision book by Förstner & Wrobel.
§ Parts of the robotics material stem from the great Probabilistic Robotics book by Thrun, Burgard, and Fox.
§ If you are a university lecturer, feel free to use the course material. If you adapt the course material, please make sure that you keep the acknowledgements to others and please acknowledge me as well. To satisfy my own curiosity, please send me an email notice if you use my slides.