
Computer Science

BotPro Case Study - Summary Notes


Case Study
Overview
● BotPro's Focus: Specialises in manufacturing rescue robots for disaster
scenarios like earthquakes and tsunamis.
● Previous Performance Issues: Rescue robots faced challenges navigating
and locating survivors in GPS-degraded environments like fire-damaged and
earthquake-hit factories.
● CEO's Directive: Redesign a cost-effective rescue robot with efficient
algorithms for interior mapping, navigation, survivor detection, and
communication.
● Key Problems to Address:
○ Accurate mapping in GPS-degraded environments.
○ Navigation in dynamic and unknown environments.
○ Survivor detection under various conditions.
○ Effective communication with the rescue team.
● Technologies Employed:
○ Computer Vision: Using visual situational awareness, vSLAM, and
pose estimation.
○ vSLAM (Visual SLAM): Combines odometry sensor and camera data
for real-time mapping and localization in unknown indoor spaces.
○ Pose Estimation: Determines the configuration of human body parts
using key points.
● Social and Ethical Considerations:
○ Benefits include reaching dangerous areas and aiding human rescue
workers.
○ Questions about safety, accountability, and survivor comfort with robot
assistance.
● Future Research and Challenges:
○ Sensor fusion models for greater accuracy.
○ Collaboration among multiple rescue robots and with human operators.
○ Development of rescuer GUI for real-time disaster zone mapping.
● Ongoing Challenges for Design Team:
○ Understanding vSLAM navigation in unknown environments.
○ Minimising scanning time for learning environments.
○ Accurate pose estimation in varying conditions and occlusion.
○ Adapting to dynamically changing environments and ethical
considerations.

Challenges for BotPro


● Accurate mapping of the area
○ Find way inside buildings
○ Reliably operate in a GPS-degraded or GPS-denied environment
○ Find its way despite absence of a map
● Navigate in a dynamic and unknown environment
○ Find way through structures that have been damaged/changed due to
the disaster.
● Finding survivors
○ Detect debris and humans under different light conditions
○ Deal with occlusion by other objects
○ Recognize deformation
● Communication
○ Needs to keep in touch with the rescue team outside the space.
○ Extract and meaningfully use data from large databases and central
computer inputs.

Challenge 1: Accurate Mapping


● Edge computing is the processing and analysis of data near its source,
rather than on a centralised cloud server or data centre. It brings
computation closer to the data source, which is often at the "edge" of
the network.

Solution: Edge Processing of Data


● The implementation of edge processing allows the computer to make
on-board decisions and analysis.
● This can enhance the robot’s autonomy, reduce latency and ensure robust
operation in environments with limited or intermittent connectivity.
● The rescue robots must be equipped with a powerful edge processing unit
that is capable of handling real-time data processing and execution of
complex algorithms.
● An RT-OS (Real Time Operating System) must be employed to ensure timely
and deterministic execution of critical tasks.
● An example of a rescue robot currently adopting edge computing is
SPOT, created by Boston Dynamics.
● It makes use of the Spot CORE I/O, which is a new high-efficiency computer
payload that enables Spot to process data in the field for tasks including
computer vision-based site inspections, continuous data collection and so on.
○ The NVIDIA Jetson Xavier NX is a popular choice for high-efficiency
computing and is also being used by Spot CORE I/O for its tasks.
● Spot also makes use of 5G network connectivity from its provider AT&T
to retain connection with its servers and operators. Although AT&T
coverage is limited to the US, it provides the robot with sufficient
network resources for data transmission.
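
A minimal sketch of this on-board pattern is shown below. It is hypothetical (the detect and transmit callbacks are placeholders, not Spot APIs): frames are analysed locally and only compact detection summaries leave the robot, which is what keeps latency low and tolerates intermittent connectivity.

```python
import queue

# Hypothetical sketch of the edge-processing pattern: analyse sensor
# frames on-board and transmit only compact detection summaries to the
# rescue team, not raw video. `detect` and `transmit` are placeholders.
frame_queue = queue.Queue(maxsize=10)   # bounded queue keeps latency predictable

def on_board_loop(detect, transmit):
    """Run perception locally; send kilobytes of results, not megabytes of video."""
    while True:
        frame = frame_queue.get()
        detections = detect(frame)                  # e.g. survivor detection model
        if detections:
            transmit({"count": len(detections),     # compact summary only
                      "boxes": detections})
```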

(Source: Doing More with Spot | Boston Dynamics.)

Challenge 2: Navigation
● Multimodalism is an autonomous system's integration of data from
multiple sensing and operational modalities. By combining information
and processes from different sources, these systems can build a more
comprehensive and accurate understanding of the environment and
navigate it more reliably.

Solution: Double Track Wheel and Mobilisation System Improvements


● In a study produced at the Korea Advanced Institute of Science and
Technology (KAIST), a new type of rescue robot called the ROBHAZ-DT3 was
produced with a semi-autonomous mapping and localization algorithm. This
robot was found to be reliable in travelling through rugged terrain and the
algorithm proved accurate in an unstructured environment with uneven
ground.
● The ROBHAZ-DT3 incorporated a double-track wheel mechanism that
allows mobility on rough terrain, consisting of two driving shafts with
motors to actuate them.
● The control system of the ROBHAZ-DT3 runs the Linux (Kernel
2.4.18-4) operating system and comprises two Linux-based CPU boards,
two sub-controllers, and two BLDC motor drives. These components are
connected to the remote control station via a CAN (Controller Area
Network) bus, which carries Pan/Tilt position data as well as other
sensor information.
● For map building and localization, a rear camera, 2D laser scanner and
non-contact temperature sensors were installed. These support a
semi-autonomous navigation system that displays images to a user
controlling the robot from a control PC.
● This model could be adapted for BotPro's rescue robot by using the
double-track wheel mechanism, which eases navigation and prevents the
robot from overturning on rough, uneven terrain. The control system
could be implemented to further support this; however, aspects such as
the semi-autonomous operation and sensor scanning are already handled
by the Spot CORE I/O discussed in the previous challenge.

(Source: ROBHAZ-DT3 with sensors for rescue application | Download Scientific Diagram)

Challenge 3: Finding Survivors
● Sensor fusion is the process of combining several different types of sensors
in order to extract a single and reliable insight about a system’s surroundings.

Solution: Human Sniffing


● A particular sensor that could be incorporated into the rescue robot,
alongside any Human Pose Estimation (HPE) technologies, for better
identification of human beings is a gas sensor.
● The gas sensor has the potential to "sniff out" the presence of humans
by detecting the chemicals and gases released from their bodies.
● A research study at the University of Innsbruck, Austria has revealed a
portable sensor system for the detection of human volatile compounds.
○ The human body constantly releases hundreds of traces of VOCs
through breath, sweat and skin, offering a continuous source of
biomarkers. The combination of these volatile biomarkers ultimately
produces a human chemical signature detectable with sensitive
analytical chemical techniques.
○ The sensor system comprises the following equipment, each with a
different purpose:
■ Controller: manages the operation of sensor detection and
determination.
■ Aldehyde sensor: a type of gas sensor able to detect the VOCs
released from human bodies.
■ Sampling system: draws in a sample of air for analysis.
■ LAN Computer: interfaces with the database and checks readings
against stored data.
■ aIMS: a dual channel carbon dioxide sensor module that helps
in the detection of gases.
○ For the BotPro system, the aIMS, the aldehyde sensor and the
sampling system are sufficient to detect a human.
(Source: Science Direct - A portable sensor system for the detection of human
volatile compounds against transnational crime)
● The CURSOR rescue system already makes use of similar human-sniffing
technology in its rescue robot SMURF (Soft Miniaturised Underground
Robotic Finder)
○ It makes use of a sensor array technology combining quartz crystal
microbalance transducers with off-the-shelf CO2 and Volatile Organic
Compound (VOC) sensors.
○ These sensors are then coupled with pattern recognition algorithms to
recognize the chemical signatures of living vs. deceased victims and
humans vs. animals (a toy illustration follows below).
○ These sniffers are highly capable of detecting live persons from 2-3
metres away.
(Source: CURSOR Ingenious Cluster Event)
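
As a rough illustration of the pattern-recognition step, the sketch below classifies a sensor-array reading against stored chemical signatures with a nearest-centroid rule; the sensor channels and reference values are invented for the example, not taken from CURSOR.

```python
import numpy as np

# Hypothetical nearest-centroid classification of a sensor-array reading
# (CO2 ppm, VOC ppb, aldehyde ppb) against reference chemical signatures.
# The reference values below are made up for the sketch, not measured data.
SIGNATURES = {
    "live_human": np.array([800.0, 120.0, 40.0]),
    "background": np.array([420.0,  10.0,  2.0]),
}

def classify(reading: np.ndarray) -> str:
    # Pick the signature whose centroid is closest to the reading.
    return min(SIGNATURES, key=lambda k: np.linalg.norm(reading - SIGNATURES[k]))

print(classify(np.array([760.0, 95.0, 33.0])))   # -> "live_human"
```
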
Challenge 4: Communication
● RF-based localization estimates position from radio frequency (RF)
signals, using techniques such as triangulation or trilateration with
signals from fixed beacons or communication towers

Solution: Using RF-based localization for signal-wise tracking of robot


● A detailed study of RF-based localisation and its associated techniques was
conducted at the University of Luxembourg
● Time-of-Arrival (TOA) technique: measures the time taken for an RF signal to
travel from the transmitter at the robot to the receiver.
○ Occlusion does not affect the propagation speed of the wave, so TOA
can deliver high accuracy provided that the clocks of the receiver and
transmitter are synchronised.
○ However, there must be a line-of-sight (LOS) path between the
transmitter and the receiver; otherwise the signal may arrive via
reflected paths and distort the measured travel time.
● Angle of Arrival (AoA) Technique: 5G cellular networks and directional
antennas popularly use the AoA technique for localisation. Although
this is more reliable than the TOA method, the AoA method likewise
requires a LOS path.
● Combining these techniques gives range-based localisation, in which
the distance or angle of a target from a node is inferred from the
measurements. Range and bearing measurements obtained from TOA, TDOA
and AoA are combined to estimate the location using mathematical tools
such as Maximum Likelihood, the Least Squares approach, Bayesian models
or Kalman filters (a small worked example follows the source below).

(Source: A Review of Radio Frequency Based Localisation for Aerial and Ground
Robots with 5G Future Perspectives)
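
A minimal sketch of TOA-based trilateration is shown below, assuming three fixed beacons with synchronised clocks; the beacon positions and arrival times are illustrative values, and SciPy's least-squares solver stands in for the estimation tools named above.

```python
import numpy as np
from scipy.optimize import least_squares

# TOA trilateration sketch: each fixed beacon i at position p_i measures
# a signal travel time t_i, giving a range d_i = c * t_i. The robot
# position x minimises the residuals ||x - p_i|| - d_i in a least-squares
# sense. All positions and times below are illustrative.
C = 299_792_458.0                                     # speed of light, m/s

beacons = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])
toa     = np.array([1.2027e-7, 1.4152e-7, 0.9435e-7])  # seconds
ranges  = C * toa                                      # metres

def residuals(x):
    return np.linalg.norm(beacons - x, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([25.0, 25.0])).x
print(estimate)        # -> approximately [20. 30.]
```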

Ethical Implications of the Methodologies


● Privacy Concerns:
○ The use of gas sensors to detect human survivors by analysing volatile
compounds raises privacy concerns.
○ The collected data may inadvertently reveal personal health
information, and the implementation must be approached with
sensitivity to privacy rights.
● Data Security and Cybersecurity:
○ As rescue robots communicate and process data, ensuring the security
of transmitted information is critical.
○ Safeguards against cyber threats, data breaches, and unauthorised
access are essential to protect sensitive data related to disaster
response.
● Bias in Sensor Technologies:
○ Sensor technologies, such as gas sensors, may have inherent biases.
○ Developers must actively work to identify and mitigate biases to ensure
fair and unbiased detection, especially in scenarios where decisions
based on sensor data can impact lives.
● Informed Consent:
○ In situations where rescue robots interact with individuals,
considerations for obtaining informed consent become relevant.
○ Users should be informed about the capabilities of the robot, the type
of data collected, and how that data will be used to ensure ethical
interactions.
● Human-Robot Interaction:
○ As rescue robots become more autonomous, it's crucial to align their
actions with human values.
○ Decisions made by the robot, especially in dynamic and unknown
environments, should be guided by ethical principles, prioritising the
well-being of individuals.
● Transparency and Explainability:
○ Transparency in the design and functionality of rescue robots is vital.
○ Users, operators, and the general public should have access to
information about how the technology works, what data is collected,
and how decisions are made.
○ This promotes trust and accountability.
● Equitable Resource Allocation:
○ In disaster scenarios with limited resources, the deployment of rescue
robots should be guided by principles of fairness and equity.
○ Decisions about where to deploy robots and allocate resources should
prioritise areas where the impact is most significant.
● Human Dignity:
○ The use of technology in disaster response should uphold human
dignity.
○ This includes respecting cultural norms, ensuring non-discriminatory
practices, and treating individuals with respect and empathy, even in
high-stress situations.
● Avoiding Harm:
○ Developers and operators should actively work to minimise the
potential negative consequences of the technology.
○ This includes avoiding harm to individuals, communities, and the
environment, even as the robot fulfils its rescue and relief functions.
● Human Oversight:
○ While rescue robots may operate autonomously, there should be
mechanisms for human oversight. Clear lines of accountability and
responsibility should be established, and human operators should have
the ability to intervene or override automated decisions if necessary.
● Long-Term Impact:
○ Societal Impact: Consideration should be given to the long-term impact
of rescue robots on society. Ethical analysis should encompass
potential societal changes, employment impact, and the broader
implications of widespread use of autonomous systems in disaster
response.

Additional Terminology Definitions

Bundle Adjustment
● Technique used in computer vision and photogrammetry
● Refines parameters of a 3D reconstruction system
● Used to align scene points observed in multiple views with the
estimated camera poses
● Bundle - a collection of features observed in multiple images of the same
scene.
● The bundles are then optimised, taking into account camera parameters
such as distortion, calibration and relative positioning, to
reconstruct a consistent scene.
● Iterative minimisation of reprojection error provides a more precise
representation of the scene
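
A toy sketch of this idea follows, assuming an ideal pinhole camera with unit focal length: one camera translation and three scene points are jointly refined to minimise reprojection error. Real bundle adjustment also optimises rotations, intrinsics and distortion, and fixes a gauge; all values here are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy bundle-adjustment sketch: jointly refine one camera translation and
# the 3D points so that projected points match the 2D observations. The
# problem is deliberately tiny and under-constrained; real systems fix a
# gauge and optimise rotation, focal length and distortion as well.

def project(points3d, cam_t):
    p = points3d + cam_t               # world -> camera (translation only)
    return p[:, :2] / p[:, 2:3]        # pinhole projection, f = 1

observed = np.array([[0.10, 0.05], [-0.08, 0.12], [0.02, -0.09]])  # 2D points
points0  = np.array([[0.5, 0.3, 5.0], [-0.4, 0.6, 5.0], [0.1, -0.5, 5.0]])
cam_t0   = np.zeros(3)

def residuals(params):
    cam_t, pts = params[:3], params[3:].reshape(-1, 3)
    return (project(pts, cam_t) - observed).ravel()    # reprojection error

x0 = np.concatenate([cam_t0, points0.ravel()])
result = least_squares(residuals, x0)                  # iterative minimisation
print(result.cost)                                     # residual after refinement
```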

Computer Vision
● Field of study and research focusing on enabling computers to understand,
interpret and analyse information from images and videos
● Development of algorithms, techniques and models to extract meaningful
information from visual data to replicate human vision capabilities.
● Steps in computer vision
○ Accepts digital images and video frames as input
○ Extract high-level information from visual data using complex
mathematical and statistical algorithms to analyse patterns, shapes,
colours and textures to recognise objects.

Dead Reckoning Data


● The technique of estimating current position, orientation or velocity of an
object using previously interpreted measurements of kinematic properties
● Dead reckoning data: the data gathered about the position using dead
reckoning.
○ Initial position: starting position, reference point.
○ Acceleration: measured along the X, Y and Z axes by an accelerometer
to estimate changes in velocity and position.
○ Rotation: provided by gyroscopes or IMUs to estimate changes in
orientation or angular velocity
○ Time: the interval over which the other measured parameters are
integrated
● Prone to accumulating errors due to sensor drift, noise and imprecise
measurements leading to inaccurate position estimates if not properly
calibrated using GPS or visual tracking.
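
A minimal 2D dead-reckoning sketch follows: gyroscope yaw rate and body-frame acceleration are integrated over time from a known initial position. The sample values are illustrative, and the drift noted above is exactly what accumulates in the position estimate without external correction.

```python
import numpy as np

# Dead-reckoning sketch: propagate a 2D pose from a known initial position
# by integrating gyro yaw rate and body-frame acceleration. Illustrative
# values; real systems must correct for sensor drift and noise.
dt = 0.01                      # sample period, s
pos = np.zeros(2)              # initial position (reference point)
vel = np.zeros(2)
heading = 0.0                  # rad

def step(accel_body, yaw_rate):
    """One dead-reckoning update from body-frame acceleration and gyro yaw rate."""
    global pos, vel, heading
    heading += yaw_rate * dt                              # integrate rotation
    c, s = np.cos(heading), np.sin(heading)
    accel_world = np.array([c * accel_body[0] - s * accel_body[1],
                            s * accel_body[0] + c * accel_body[1]])
    vel += accel_world * dt                               # integrate acceleration
    pos += vel * dt                                       # integrate velocity

for _ in range(100):           # 1 s of gentle forward acceleration while turning
    step(np.array([0.5, 0.0]), yaw_rate=0.1)
print(pos, heading)
```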

Edge Computing
● Distributed computing paradigm
● Brings computation and data storage to the edge of the network, where
data is generated or consumed, rather than relying solely on
centralised cloud computing infrastructure.
● Processing algorithms are located near the edge devices or sensors which
reduces the need to transmit data to remote cloud servers to compute data.
● Idea is to enable real time processing and processing closer to the data
source
● Advantages
○ Low latency: shorter response times; good for time-sensitive
applications such as autonomous vehicles and real-time analytics.
○ Bandwidth Optimisation: reduces amount of data needed to be
transferred across the internet, alleviates congestion and high
bandwidth costs
○ Privacy and security: data does not need to traverse external networks,
allows localised data storage and processing
○ Offline operation: allows operation during times of limited cloud/network
connectivity.

Global Map Optimisation


● Technique used to refine and improve accuracy of 3D map reconstruction
● Involves simultaneous optimisation of the 3D positions of landmarks
(scene points) and the corresponding camera poses.
● Employed in SLAM algorithms to create a map of the environment while
simultaneously determining position of sensor within the map
● Minimises discrepancy between predicted positions of 3D points and actual
observed points.
● Bundle adjustment or nonlinear least squares optimisation is used.

GPS Signal
● Refers to radio frequency signals emitted by GPS satellites that provide
information to receivers on earth
● Allows receivers to calculate their precise position and to
synchronise time
● Consists of satellites that continuously transmit signals about their orbital
parameters and exact time signals
● Components of the GPS signal
○ Navigation message - information about the satellite orbit, clock
errors and other parameters, transmitted at 50 bits per second
○ Carrier wave - radio wave carrying the navigation message on L1 or L2
in the form of modulated signals
○ Spread spectrum signal - spreads the signal over a wider frequency
band to enhance signal quality and resistance to interference or
disruptions
● Receivers intercept the signal to analyse time delay between sending and
receiving to calculate distance.

GPS-Degraded Environment
● The GPS signals are severely compromised or degraded, leading to
challenges/limitations in accurate positioning and navigation.
● Causes:
○ Signal obstruction - physical obstructions along the line of sight
between the GPS receiver and satellites
○ Multi-path interference - signals reflect off buildings and other
terrain before reaching the receiver, interfering with the direct
signals and causing errors/inaccuracies.
○ Signal jamming - intentional or unintentional EM interference that
disrupts or blocks GPS; a particular concern in electronically active
areas.
● Alternative positioning methods are required, such as an inertial
navigation system (INS) using accelerometers and gyroscopes.

GPS-Denied Environment
● Situation where the GPS signals are too weak/completely unavailable
● Indoor, underground, dense areas where signals may be weakened, distorted
or blocked
● Again, alternative positioning methods or technologies are required

Human Pose Tracking


● Computer vision task that involves estimating position and orientation of
human body joints or body parts
● Understand and analyse human movement and posture
● Algorithms apply machine learning techniques to visual data to
estimate the 2D or 3D positions and orientations of body joints and so
represent human poses
● Can employ deep-learning, graphical or optimisation-based methodologies
● Leverage CNNs (convolutional neural networks) or recurrent neural
networks (RNNs) to learn features and relationships between body key
points
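
A short sketch of this pipeline follows, assuming the MediaPipe pose solution is installed (any CNN-based keypoint detector fits the same pattern); the image path is a placeholder.

```python
import cv2
import mediapipe as mp

# 2D human pose estimation sketch using MediaPipe's pose solution.
# "survivor.jpg" is a placeholder image path.
image = cv2.imread("survivor.jpg")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    results = pose.process(rgb)

if results.pose_landmarks:
    for lm in results.pose_landmarks.landmark:
        # Each landmark is a normalised (x, y) body key point with a
        # visibility score, which drops under the occlusion discussed above.
        print(lm.x, lm.y, lm.visibility)
```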

Inertial Measurement Unit (IMU)


● Electronic sensor device combining multiple sensors to measure linear and
angular motion of object
● Integrates multiple devices including accelerometer, gyroscope and
magnetometer into one single compact unit
● Provide complete insight into object’s kinematic state
● Used in navigation systems to estimate changes in position to enable precise
control and localisation

Keyframe Selection
● Video processing technique involving the identification and selection of key
frames that represent the entire scene from a sequence of videos / images
● Capture essential information from the content of the visual sequence
● Reduces amount of data to be processed or analysed while limiting the data
to only relevant information
● Criteria for a frame to be considered a keyframe
○ Visual Saliency: capture visually salient regions or objects in the
video
○ Content Diversity: represent different scenes, perspectives and/or
actions to provide a comprehensive overview of the content
○ Temporal Significance: select specific points in time of significance
○ Motion Characteristics: based on motion analysis
○ Redundancy: select only frames that offer unique information
compared to neighbouring frames
○ Computational efficiency: strike balance between accuracy and
complexity
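
A minimal sketch using the redundancy criterion alone is shown below: a frame becomes a keyframe only when it differs enough from the previous keyframe. The video path and difference threshold are illustrative.

```python
import cv2
import numpy as np

# Keyframe-selection sketch based on redundancy: keep a frame only when
# it differs enough from the last keyframe. "input.mp4" and the threshold
# of 30 grey levels are illustrative choices.
cap = cv2.VideoCapture("input.mp4")
keyframes, last = [], None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if last is None or np.mean(cv2.absdiff(gray, last)) > 30:   # content changed
        keyframes.append(frame)
        last = gray

print(len(keyframes), "keyframes selected")
```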

Key Points / Pairs


● Distinctive or informative locations or regions in a set of images
● Identified based on unique visual characteristics including corners, edges and
blobs
● Serve as landmarks or reference points for computer vision tasks
● Detected using feature extraction algorithms that analyse local properties
such as intensity gradients, texture or scale-space representation.
● After detection, they are described using feature descriptors that
encode the local appearance around the key point
● Key pairs - corresponding key points detected in two or more images
● Matching key pairs allows the tracking of movement by observing changes in
the environments
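
A short sketch of detecting key points and matching key pairs with OpenCV's ORB features follows; the image paths are placeholders.

```python
import cv2

# Detect key points in two images and match key pairs between them using
# ORB features. "view1.png" / "view2.png" are placeholder image paths.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)   # key points + descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (ORB descriptors are binary);
# each match is a corresponding key pair between the two views.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "key pairs found")
```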

Light Detection and Ranging (LiDAR)


● Remote sensing technology using laser light to measure distances and create
precise 3D representations of surrounding environment
● Emits laser pulses and measures the time taken for them to bounce
back; the delay is used to calculate distance.
● Components
○ Laser source: emits pulsed bursts of light in rapid succession
○ Scanner / Receiver: detects the reflected pulses and steers the laser
beam across the scene
○ Timing and Positioning: measures the time until reflected pulses are
detected to enable calculation of distance
● Applications
○ Mapping and Surveying
○ Autonomous Vehicles
○ Environmental Monitoring
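
The timing principle reduces to distance = c * t / 2, since the pulse travels out and back. A minimal sketch with illustrative scan angles and round-trip times:

```python
import numpy as np

# LiDAR timing sketch: distance = c * t / 2 (the pulse travels out and
# back), then (angle, time) scan pairs become 2D points. The round-trip
# times and scan angles below are illustrative.
C = 299_792_458.0                                   # speed of light, m/s

angles = np.radians(np.arange(0, 360, 45))          # scanner directions
times  = np.full(angles.shape, 3.3e-8)              # round-trip times, s

dists = C * times / 2.0                             # ~5 m per return
points = np.column_stack([dists * np.cos(angles),   # polar -> Cartesian
                          dists * np.sin(angles)])
print(points)
```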

Object Occlusion
● Phenomenon in which an object positioned in front of another obstructs
the visibility of the obscured object from the viewpoint of the observer
● Affects tracking, segmentation and recognition
● Causes complexities due to partial visibility, making it difficult to
infer the object's full shape and identity
● Disadvantages
○ Full extent and boundary of object not visible
○ Loss of tracking on an object - continuity requires complex algorithms
○ Exhibit limited visual cues or fragmented appearance
○ Depth relationships between occluding and occluded objects are not
directly visible

Odometer Sensor
● Device used to measure movement and displacement of mobile robot or
vehicle
● Provides information about the vehicle's change in position based on
wheel motion
● Use rotational encoders or sensors on wheels / motor shafts to measure
movement
● Combined with other information such as from IMU or GPS to improve
accuracy and reliability of vehicle pose estimation and localisation
● Provide real-time feedback to allow for precise control and monitoring
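
A minimal differential-drive odometry sketch follows: encoder tick counts from two wheels are converted into a pose update. The wheel radius, encoder resolution and wheel base are illustrative parameters.

```python
import math

# Differential-drive odometry sketch: convert encoder tick counts from
# two wheels into a pose update. Wheel radius, ticks-per-revolution and
# wheel base are illustrative parameters.
R, TICKS, BASE = 0.05, 1024, 0.30        # m, ticks/rev, m
x = y = theta = 0.0

def update(ticks_left, ticks_right):
    global x, y, theta
    dl = 2 * math.pi * R * ticks_left / TICKS     # left wheel travel
    dr = 2 * math.pi * R * ticks_right / TICKS    # right wheel travel
    d = (dl + dr) / 2.0                           # forward motion
    theta += (dr - dl) / BASE                     # heading change
    x += d * math.cos(theta)
    y += d * math.sin(theta)

update(512, 540)
print(x, y, theta)
```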

Optimisation
● Process of finding the best possible solution by maximising (or
minimising) an objective function within a given set of constraints
● Involves systematic exploration and performance improvement
● Process
○ Defining the problem and its constraints
○ Identification of the space of possible solutions through range and
bound determination
○ Objective function to measure quality of solution based on optimisation
goal
○ Reference to any constraints of the problem that the solution must
satisfy
○ Developing algorithms and techniques to solve the problem
○ Assessing the optimised solution and evaluating its performance
against defined objectives and constraints
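
A tiny end-to-end instance of this process, using SciPy's general-purpose solver on an illustrative problem:

```python
from scipy.optimize import minimize

# Tiny optimisation instance: objective function, constraint, solver,
# then evaluation. The problem itself is illustrative.
objective = lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2            # quality measure
constraint = {"type": "ineq", "fun": lambda v: 4 - (v[0] + v[1])}  # x + y <= 4

result = minimize(objective, x0=[0.0, 0.0], constraints=[constraint])
print(result.x, result.fun)      # optimised solution and its objective value
```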

Re-localization
● Re-establishing the position when a camera or robot loses track or
encounters environmental change due to sensor drift, occlusion,
movement or lighting
● Successful re-localization allows the system to accurately recover its
pose estimate and continue operation
● Steps to re-localization
○ Map of the environment is created with key reference points for pose
estimation
○ Extraction of visual features including key points and key frames
○ Features matched against map
○ Estimation of camera or robot pose calculated using matched features
○ Refinement or verification of estimation to improve accuracy and
reliability
○ Most useful in SLAM situations where the robot/camera needs to
constantly update its location

Rigid Pose Estimation (RPE)


● Process of determining precise position and orientation of rigid object in 3D
space
● Estimates six degrees of freedom (6DoF) transformation
● Rigid - object does not deform or change its shape during pose estimation
● Process steps
○ Feature detection - distinctive key points and frames detected
○ Feature matching - detected features matched with corresponding
features in a reference view
○ Pose estimation - solving for the 6DoF transformation and estimating
the pose using key points in the reference frame
○ Refinement - to improve accuracy
● Performed using various algorithmic techniques including PnP, ICP or
RANSAC algorithms
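
A short sketch of the pose-estimation step using OpenCV's PnP solver follows: given matched 3D object points and their 2D image projections, it recovers the 6DoF rotation and translation. The point correspondences and camera intrinsics are illustrative.

```python
import cv2
import numpy as np

# PnP sketch: recover the 6DoF rigid transformation from matched 3D
# object points and their 2D image projections. All values illustrative.
object_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                       [0, 0, 1], [1, 0, 1]], dtype=np.float64)
image_pts  = np.array([[320, 240], [400, 238], [322, 160], [402, 158],
                       [318, 260], [398, 258]], dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)            # rotation vector -> rotation matrix
print(R, tvec)                        # the 6DoF rigid transformation
```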

Robot Drift
● Robot’s estimated position gradually deviates from actual position over time
● Factors that contribute
○ Sensor noise: inaccuracies or anomalies
○ Calibration errors: misalignment of components, incorrect calibration
○ Environmental changes: terrain, lighting or magnetic field affect
readings
○ Accumulative integration: repeatedly integrating sensor measurements
propagates and accumulates error
○ Uncertainty: complex or dynamic environments
● Methods for mitigation
○ Sensor fusion: integrating data from multiple sensors measuring the
same quantity
○ Kalman Filtering: to mitigate noise and uncertainties
○ Loop closure: correction mechanisms to correct accumulated error
○ Environmental Constraints: correct drift by aligning the estimated
pose with known features of the actual environment
○ Online calibration/recalibration: reduce systematic errors
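
As one minimal illustration of drift mitigation, the complementary-filter sketch below blends the drift-prone integrated gyro angle with the noisy but drift-free accelerometer angle; the blend factor is an illustrative tuning value.

```python
# Complementary-filter sketch for drift mitigation: trust the gyro
# short-term (smooth) and the accelerometer long-term (anchored to
# gravity), so integration drift cannot accumulate. ALPHA is illustrative.
ALPHA, DT = 0.98, 0.01

def fuse(angle, gyro_rate, accel_angle):
    # Blend integrated gyro estimate with the absolute accelerometer angle.
    return ALPHA * (angle + gyro_rate * DT) + (1 - ALPHA) * accel_angle

angle = 0.0
for _ in range(100):
    angle = fuse(angle, gyro_rate=0.02, accel_angle=0.0)
print(angle)     # stays bounded instead of drifting
```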

Simultaneous Localisation and Mapping (SLAM)


● Technique used in order to enable a robot or an autonomous system to build a
map of an unknown environment while estimating its own position
● Process
○ Data Acquisition: images, range measurements, visual cues
○ Extraction: key frames, key pairs and reference points
○ Data association: identify common features across the different
positions and viewpoints to create consistent map
○ Mapping: construct map using point clouds, occupancy grids and
feature-based maps
○ Localisation: use of IMU and GPS data to estimate robot position and
orientation
○ Loop Closure: revisit previously visited areas to correct
accumulated error and improve map consistency

Sensor Fusion Model


● Process of combining information from multiple sensors to obtain more
accurate and comprehensive understanding of the environment/state
● Integration of data from different sensors to overcome the limitations
of any individual sensor
● Process
○ Sensor selection: choosing appropriate sensors based on the
application and environment
○ Data acquisition: from selected sensors including measurements,
images, point clouds
○ Preprocessing: to remove noise, anomalies and align data spatially and
temporally
○ Data Fusion Algorithms: combine the data using statistical methods
such as the Kalman filter or Bayesian networks
○ Fusion output: generation of data that provides a more accurate and
comprehensive representation
● Benefits
○ Improved accuracy and reliability
○ More robust
○ Enhance situational awareness
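
A minimal one-dimensional sketch of the fusion step follows, using a Kalman filter to weight a motion prediction against a noisy measurement by their uncertainties; all variances and readings are illustrative.

```python
# 1D Kalman-filter sketch of sensor fusion: combine a motion prediction
# with a sensor measurement, weighting each by its uncertainty.
# All variances and readings below are illustrative.
def kalman_step(x, P, u, z, Q=0.01, R=0.25):
    x_pred = x + u                        # predict: apply motion (e.g. odometry)
    P_pred = P + Q                        # prediction uncertainty grows
    K = P_pred / (P_pred + R)             # Kalman gain: prediction vs. sensor noise
    x_new = x_pred + K * (z - x_pred)     # correct with measurement (e.g. range)
    return x_new, (1 - K) * P_pred

x, P = 0.0, 1.0
for z in [1.1, 2.05, 2.9]:                # noisy position measurements
    x, P = kalman_step(x, P, u=1.0, z=z)  # robot commanded to move 1.0 each step
print(x, P)
```
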
Visual Simultaneous Localization and Mapping (vSLAM)
● Technique of using visual information from cameras to simultaneously
estimate pose and construct map
● Step 1: Initialisation
○ Camera calibration: estimation of intrinsic parameters (distortion,
focal length)
○ Feature extraction: visual features or key points from initial frames to
serve as reference points.
○ Pose estimation: estimate initial position relative to initial reference
frame using PnP algorithm
○ Map initialisation: sparse drawing of surroundings as starting point
○ Scale estimation: obtaining absolute size using LiDAR or depth
cameras for accuracy
● Step 2: Local Mapping
○ Feature extraction: visual features or key points from current camera
frames that are distinctive and unique using SIFT, SURF or ORB
○ Feature tracking: track the position of the key points across
consecutive frames to maintain consistency
○ Triangulation: estimate 3D position of key points and calculate spatial
coordinates
○ Map representation: Triangulated key points placed on map
○ Update: as per new processing of camera frames
● Step 3: Loop closure
○ Feature matching: compares current frames with previous to find
similarities or matches
○ Similarity detection: determination of whether or not current frames
contain similarities
○ Hypotheses Generation: the algorithm generates candidate loop
closures that would correct accumulated error
○ Verification and Consistency: to determine true loop closure
○ Update and Correction: based on changes to revisited area
● Step 4: Re-localization (if any)
○ Image/frame matching: find match between current frames and existing
map
○ Hypothesis Generation: about camera pose or position in environment
○ Hypothesis verification: using RANSAC or geometric verification
○ Map Re-association: associate the camera's current position with the
map to continue the mapping process
● Step 5: Tracking
○ Feature extraction: of visual key points or features from frames
○ Feature matching: with corresponding features from previous frames
○ Motion estimation: using IMU / kinematic data
○ Pose Update: based on pose estimation using features of current
frame
○ Robustness and error handling: recover from errors and give
real-time updates
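
As a small illustration of the triangulation step in local mapping, the sketch below recovers 3D points from key points matched across two frames with known projection matrices; the matrices and pixel coordinates are illustrative values.

```python
import cv2
import numpy as np

# Triangulation sketch for local mapping: given matched key points in two
# frames with known projection matrices, recover 3D points. The matrices
# and normalised image coordinates below are illustrative values.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # first camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # second camera 1 m along x

pts1 = np.array([[0.10, 0.02], [0.30, -0.10]]).T           # 2 x N matched key points
pts2 = np.array([[0.05, 0.02], [0.25, -0.10]]).T

hom = cv2.triangulatePoints(P1, P2, pts1, pts2)            # 4 x N homogeneous
points3d = (hom[:3] / hom[3]).T                            # normalise to 3D
print(points3d)
```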
