CAI3034 Set 3
Generally, there are three types of robotic work cell: 1) robot-centered work cell, 2) in-line robot work cell, and 3) mobile robot work cell.
Figure: work cell layout with part-feeders and pallets.
CAI3034 Prepared by Ooi W.S.
4. There are three types of work part transport systems used in the in-line robot work cell:
a) Intermittent Transfer
b) Continuous Transfer
c) Non-synchronous Transfer
1. In this arrangement, the robot is provided with a means of transport, such as a mobile base, within
the work cell to perform various tasks at different locations.
2. The transport mechanism can be floor mounted tracks or overhead railing system that allows the
robot to be moved along linear paths.
3. Mobile robot work cells are suitable for installations where the robot must service more than one
station (production machine) that has long processing cycles, and the stations cannot be arranged
around the robot in a robot-centred cell arrangement.
4. One such reason could be that the stations are geographically separated by distances greater than the robot's reach. This type of layout allows for time-sharing of tasks, which lowers the robot's idle time.
5. One of the problems in designing this work cell is to find the optimum number of stations or machines
for the robot to service.
Mobile Collaborative Robot - FANUC’s New CR-7iA/L Uses AGV to Move Between Robotic Assembly
Stations: https://www.youtube.com/watch?v=rQBnZuby05s
Safety Monitoring
Emergency stopping requires an alert operator to be present to notice the emergency and take action
to interrupt the cycle (however, safety emergencies do not always occur at convenient times, when the
operator is present).
Therefore, a more automatic and reliable means of protecting the cell equipment, and the people who might wander into the work zone, is imperative. This is the role of safety monitoring.
Safety monitoring (or hazard monitoring) is a work cell control function in which various types of sensors are used for this purpose. For example, limit switches and proximity sensors can be used to monitor the status and activities of the cell, to detect unsafe or potentially unsafe conditions. Other types of sensors, such as temperature sensors, pressure-sensitive floor mats, light beams combined with photosensitive sensors, and machine vision, can also be used to protect the cell equipment and people who might wander into the work zone.
Passive sensors
Passive sensors measure ambient energy from the environment entering the system. Examples of passive sensors: thermocouple, camera, Light Dependent Resistor (LDR).
Advantage – simplicity of design, and therefore low cost and low power consumption; their operation is not limited by a battery.
Disadvantage – they can only detect energy when the naturally occurring energy is available.
Active sensors
Active sensors emit energy into the environment, and then measure the environment's response. Examples of active sensors: laser range finders, sonar sensors, synthetic aperture radar (SAR).
Advantage – the ability to obtain measurements at any time, regardless of the time of day, season, or amount of natural illumination.
Disadvantage – they require the generation of a fairly large amount of energy to adequately illuminate the targets.
Braitenberg vehicle
A Braitenberg vehicle is a conceptual mobile robot in which simple sensors are connected directly to
drive wheels.
A Braitenberg vehicle with two light sensors at the front will be attracted to light, while two proximity sensors beside the light sensors will turn the vehicle away from obstacles. After the Braitenberg vehicle has acquired the phototaxis competence, it can be taught to avoid obstacles as well. If the vehicle approaches an obstacle, the proximity sensor with a high activation accelerates the motor on its own side and slows down the motor on the opposite side. The presence of an obstacle therefore leads to different motor speeds, which causes the vehicle to turn, without losing the ability to move towards a light source. If no obstacle is detected and the left half of the field of view is brighter than the right half (by a certain threshold), then the robot should move left. If the right half of the field of view is brighter than the left half (by a certain threshold), then the robot should move right. If both halves of the field of view are close to each other in brightness, then the Braitenberg vehicle should move straight ahead.
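The combined light-following and obstacle-avoidance rule above can be sketched as a single control step. The gains, value ranges, and function name below are illustrative assumptions, not a specific robot platform's API:

```python
# Minimal sketch of the Braitenberg behaviour described above.
# base_speed, light_gain, and prox_gain are assumed constants.

def braitenberg_step(left_light, right_light, left_prox, right_prox,
                     base_speed=50, light_gain=0.5, prox_gain=1.0):
    """Return (left_motor, right_motor) speeds for one control step."""
    # Phototaxis: crossed connections, so the brighter side's opposite
    # wheel speeds up and the vehicle turns toward the light.
    left_motor = base_speed + light_gain * right_light
    right_motor = base_speed + light_gain * left_light

    # Obstacle avoidance: a high proximity reading accelerates the motor
    # on its own side and slows the opposite one, turning the vehicle
    # away from the obstacle.
    left_motor += prox_gain * (left_prox - right_prox)
    right_motor += prox_gain * (right_prox - left_prox)

    return left_motor, right_motor
```

With equal readings on both sides the motors run at the same speed and the vehicle moves straight ahead, matching the threshold rule in the text.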
Brownouts typically happen when batteries are already running low and the servos suddenly demand more power. When the battery supply drops below the normal level, the controller resets and the program restarts. After the restart, the Boe-Bot can exhibit different behaviour patterns, appearing to move erratically or do other things it is not programmed to do. A brownout can be signalled using a piezospeaker or buzzer each time the controller resets: the piezospeaker can produce a tone, for example via the FREQOUT command, indicating that a brownout has occurred.
According to Goodrich and Schultz (2007), the interaction between a human and a robot is categorized into two types: remote interaction and proximate interaction.
Anthropomorphism
Anthropomorphism is the tendency to attribute human characteristics to inanimate objects, animals and others, with a view to helping us rationalise their actions.
Personification: Your computer does not actually hate you. We use this expression to
mean that your computer isn’t working at that particular moment.
Anthropomorphism: Would be if your computer grew arms and legs, punched you in the
neck, and stole your wallet…then it would ACTUALLY hate you.
Figure 1: Anthropomorphism design space for robot heads. Notes: The diagram refers uniquely
to the head construction and ignores body function and form. This is also by no means an
exhaustive list. Examples were chosen to illustrate the proposed idea.
In order to portray artificial emotional states, some utilise a strongly realistic humanlike
construction (i.e. with synthetic skin and hair) for facial gestures. Building mannequin-like
robotic heads, where the objective is to hide the “robotic” element as much as possible and
blur the issue as to whether one is talking to a machine or a person, results in effectively
unconstrained anthropomorphism. However, as Mori outlined with “The Uncanny Valley”
(Mori, 1982), the closer the design and functionality of the robot comes to the human, the
more susceptible it is to failure unless such a high degree of resolution is achieved that its
distinction from a human becomes very difficult.
You made the robot interact with you while working on your assignments.
But how many of you:
o Swore at your robot?
o Complimented your robot?
o Referred to your robot as he/she?
o Gave your robot a name?
Why?
The term “uncanny valley” describes our strange revulsion toward things that appear nearly
human, but not quite right. The uncanny valley is a common unsettling feeling people
experience when humanoid robots closely resemble humans in many respects but are not quite
convincingly realistic. This strange revulsion usually involves robots, but can also include
computer animations and some medical conditions.
In 1970, Japanese roboticist Masahiro Mori proposed the “uncanny valley” hypothesis, which
predicted a nonlinear relation between robots’ perceived human likeness and their likeability as
follows:
The uncanny valley is a psychological theory about the effect involving art and robots and
human emotions. As something starts to look more human-like, there is a point at which people
start to feel it looks wrong. At this point, they have negative feelings toward the object. These
feelings keep getting worse as the object is made to look more human-like. The uncanny valley
occurs because of mismatches between aspects of the robot’s appearance and/or behaviour.
At a certain point, as the object starts to get very close to looking human-like, how people feel towards it tends to reverse, and they have more positive emotional feelings towards it.
The uncanny valley effect can be reduced by ensuring that a character’s facial expressions
match its emotive tones of speech, and that its body movements are responsive and reflect its
hypothetical emotional state. Special attention must also be paid to facial elements such as the
forehead, eyes, and mouth, which depict the complexities of emotion and thought.
To investigate the uncanny valley theory, a collection of video clips depicting robots of varying
human similarity and at varying levels of sophistication is used in the experiment. Video clips rather than still images are used because animated stimuli provide a richer set of cues for evaluating the human likeness of a given stimulus. These clips will be presented to participants. The clips and scales are randomized to prevent order effects. Participants will then be asked to rate the clips using human likeness, familiarity, and eeriness scales. From the subjective ratings
collected, statistical measures will be employed to determine whether the set of humanlike
robots reflects an uncanny valley.
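The randomization step above might be sketched as follows: each participant sees the same clips in an independently shuffled order, so order effects average out across participants. The clip names and seeding scheme are illustrative assumptions:

```python
# Hypothetical sketch of the clip-presentation procedure: same clips
# for everyone, independently shuffled order per participant.
import random

def presentation_order(clips, participant_seed):
    """Return a per-participant randomized ordering of the video clips."""
    rng = random.Random(participant_seed)  # reproducible per participant
    order = list(clips)
    rng.shuffle(order)
    return order

# Assumed clip labels, for illustration only.
clips = ["industrial_arm", "android", "humanoid", "cg_character"]
orders = {p: presentation_order(clips, p) for p in range(3)}
```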
Social Robots
The use of robots is swiftly shifting from industrial settings, where they are mainly deployed for manufacturing and for tasks too dangerous for human beings, to social robots, which have the capability to interact with human beings in a particular environment.
Social robots have been widely deployed in healthcare in recent times as a result of low accessibility to healthcare services. Social robots are autonomous mobile machines that are designed to interact with humans and exhibit social behaviors such as recognizing, following and assisting their owners, and engaging in conversation.
As with any technology, robots are susceptible to hardware malfunctions and failures and
may involve a high cost to repair and maintain.
It is hard to make judgements on what an emotion should be; someone might laugh at something that others find offensive.
If people rely on these machines rather than real people, we could end up with a very poor society where people avoid each other because machines seem far better company.
If the device is used by people with emotional issues, they may become too attached, which could cause problems if the device does not act as expected.
HRI in healthcare is primarily concerned with helping patients improve or monitor their health.
Examples of social robots in healthcare are surgical assistance robots, rehabilitation robots,
and companion robots.
Surgical assistance robots are robots that allow physicians to perform surgical operations with greater precision. They support both face-to-face and remote surgical operations: in face-to-face operations, the physician and patient are physically present together, whereas in remote operations the human surgeon is not physically present with the patient. The use of surgical assistance robots results in minimally invasive surgeries.
Rehabilitation robots are robots that assist people with disability and provide therapy for
people seeking to improve physical or cognitive functions. Examples of rehabilitation robots
are assistive robots for mobility, assistive robots for manipulation and therapy robots.
Examples of assistive robots for mobility include robotic wheelchairs, intelligent wheelchairs,
robotic walkers and robotic aids for the blind. Assistive robots for manipulation are used for
handling physical objects. They are usually used by people with impairments of the arm,
hand and fingers. Therapy robots are robots that provide treatment for people with physical
and mental challenges.
Companion robots are typically designed to enhance the health and psychological well-being of the aged and the sick by providing companionship, alleviating stress and strengthening their immune systems.
Ethical Issue
Social robots are now widely used in healthcare. Their applications range from surgery,
emotional and aging care, companionship to telemedicine and rehabilitation. However, there
are numerous challenges associated with the interaction between humans and social robots in
healthcare. These challenges range from ethical challenges, design issues to safety,
usefulness, acceptability and appropriateness.
Ethics is a philosophical discipline which is concerned with the morality of human behavior, with
right and wrong. Examples of the ethical challenges confronting HRI in healthcare are as follows:
The human-human relation (HHR) is a very important ethical issue that has to be considered when using assistive/socialized robots. Typically, robots are used as a means of supplementing or enhancing the therapy given by caregivers, not as a replacement for them. Thus, the patient-caregiver relation and interaction must not be disturbed. However, if the robot is used as a replacement for the human therapist, then the robot may lead to a reduction in the amount of "human-human" contact. This is a serious ethical problem if the robot is employed as the only therapeutic aid in the patient's life. In that case, for fragile persons (children with developmental disorders, elderly people with dementia, etc.) who suffer from loneliness, the isolation syndrome may be worsened.
A robot designed to play the role of a therapist is given some authority to exert influence on the patient. Therefore, the ethical question arises of who actually controls the type, the level, and the duration of interaction. For example, if a patient wants to stop an exercise due to stress or pain, a human therapist would accept this on the basis of a general humanized evaluation of the patient's physical state. Such a feature should be technically embedded in the robot in order to ethically balance the patient's autonomy with the robot's authority.
Reactive Control
Reactive Control is a technique for tightly coupling sensory inputs and effector outputs, to
allow the robot to respond very quickly to changing and unstructured environments, i.e.,
sensors directly determine the actions.
Limitations to this approach are that such robots, because they only look up actions for any
sensory input, do not usually keep much information around, have no memory, no internal
representations of the world around them, and no ability to learn over time.
An example of reactive control is a light-chasing robot:
(behavior chase-light
  :period (1 ms)
  :actions ((set left-motor (right-sensor-value))
            (set right-motor (left-sensor-value))))
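The same crossed-wiring behaviour can be sketched in Python as one pure mapping from sensor values to motor commands, which a 1 ms loop would call repeatedly; the function names are assumptions, not a real platform's API:

```python
# Python sketch of the chase-light pseudocode: sensors directly
# determine the actions, with no memory or world model.

def chase_light(left_sensor_value, right_sensor_value):
    """One reactive step mapping sensor readings to motor speeds."""
    # Crossed wiring: the right sensor drives the left motor and vice
    # versa, so the wheel opposite the brighter side speeds up and the
    # robot turns toward the light.
    left_motor = right_sensor_value
    right_motor = left_sensor_value
    return left_motor, right_motor
```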
Deliberative Control
In deliberative control, the robot takes all of the available sensory information, and all of the
internally stored knowledge it has, and it thinks ("reasons") about it in order to create a plan
of action. To do so, the robot must search through potentially all possible plans until it finds
one that will do the job. This requires the robot to look ahead, and think in terms of: "if I do
this next, and then this happens, then what if I do this next, then this happens,..." and so on.
This can take a long time. However, if there is time, this allows the robot to act strategically.
The deliberative systems were also used in non-physical domains, such as playing chess.
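The "if I do this next, then this happens" look-ahead described above can be sketched as a breadth-first search over a toy grid world; the grid, the action set, and all names below are illustrative assumptions, not a specific planner:

```python
# Minimal sketch of deliberative planning: search through possible
# action sequences until one reaching the goal is found.
from collections import deque

def plan(start, goal, obstacles, width, height):
    """Breadth-first search returning a list of actions, or None."""
    actions = {"up": (0, -1), "down": (0, 1),
               "left": (-1, 0), "right": (1, 0)}
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (x, y), steps = frontier.popleft()
        if (x, y) == goal:
            return steps                  # complete plan found
        for name, (dx, dy) in actions.items():
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in obstacles and nxt not in visited):
                visited.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None                           # no plan exists
```

The search can take a long time on large state spaces, which is exactly the weakness the text attributes to deliberative control.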
Hybrid Control
In hybrid control, the goal is to combine the speed of reactive control and the brains of
deliberative control. In it, one part of the robot's "brain" plans, while another deals with
immediate reaction, such as avoiding obstacles and staying on the road.
The three-layer architecture is the best-known hybrid architecture; it uses higher-level planning to guide the lower-level reactive components.
The bottom layer is the reactive/behavior-based layer, in which sensors and actuators are closely coupled.
The upper layer provides the deliberative components, such as planning and localisation.
The control execution layer, in between, supervises the interaction between the high-level and low-level layers, combining the speed of reactive control with the brains of deliberative control.
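The three layers can be sketched on a toy one-dimensional world; all function names and the arbitration rule (reactive overrides win) are illustrative assumptions:

```python
# Schematic sketch of the three-layer hybrid idea on a 1-D world.

def deliberative_layer(position, goal):
    """Slow planning: pick the direction that makes progress to goal."""
    return 1 if goal > position else -1

def reactive_layer(obstacle_ahead):
    """Fast reflex: stop immediately if something is in the way."""
    return 0 if obstacle_ahead else None   # None = no objection

def execution_layer(position, goal, obstacle_ahead):
    """Supervise the two layers: a reactive override wins over the plan."""
    override = reactive_layer(obstacle_ahead)
    if override is not None:
        return override
    return deliberative_layer(position, goal)
```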
With AI integrated into industrial robotics technology, robots can monitor their own accuracy and performance, signaling when maintenance is required to avoid expensive downtime. Predictive maintenance thus increases uptime and productivity.
AI helps to simplify the robot training process. AI can train several robots at once, so less time is spent on training. This is especially helpful to industries experiencing labor shortages. AI
technology lets the operator teach the robot more intuitively. With AI, it’s easier for an
operator to track the robot’s learning than it previously was with conventional training
methods.
AI is a highly useful tool in robotic assembly applications. When combined with advanced vision systems, AI can help with real-time course correction, which is particularly useful in complex manufacturing sectors. AI can also be used to help a robot learn on its own which paths are best for certain processes while it is in operation.
Robotic packaging uses forms of AI frequently for quicker, lower cost and more accurate
packaging. AI helps save certain motions a robotic system makes, while constantly refining
them, which makes installing and moving robotic systems easy enough for anybody to do.
Another possible impact is social unrest. If these devices displace workers from tasks they were previously performing, workers will lose career opportunities; what will they do to earn money to pay for food?
In a robot soccer competition, two cameras are mounted over the field. Each camera
captures half of the field so that the whole field can be captured.
The first step in extracting information from the cameras is data pre-processing. To pre-process the data, the continuous video is first sampled into images. Each RGB image is then transformed to HSV or YUV, which are more perceptually uniform colour spaces. Colour normalization is used to handle non-uniform lighting across the field, and an adaptive colour histogram is used to remove the background, in this case the green field colour.
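The RGB-to-HSV step can be sketched per pixel with Python's standard-library colorsys; a real pipeline would apply this (or a vectorised equivalent from an image library) to whole frames:

```python
# One-pixel sketch of the colour-space conversion described above.
import colorsys

def rgb_pixel_to_hsv(r, g, b):
    """Convert one RGB pixel (0-255 channels) to HSV, each in [0, 1]."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

# A saturated orange pixel (like the ball) keeps roughly the same hue
# under brightness changes, which is why HSV suits colour thresholding.
bright_orange = rgb_pixel_to_hsv(255, 128, 0)
dark_orange = rgb_pixel_to_hsv(128, 64, 0)
```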
Background removal
After the data is pre-processed, image segmentation is performed to extract useful information. To segment the image, each image pixel is classified into different object classes (robots and ball) using a thresholding method. The thresholded image is dilated to fill holes which might appear inside a region. Relevant information such as the colour, area, and bounding box is computed from each segmented region. This information is stored in a data structure for recognition to be done.
After image segmentation, the objects in the image must be classified. In the classification process, objects of interest are recognized by the attributes assigned to them in a set of rules. For example, the ball is a small orange spherical object, so it can be detected as a small orange circular region on the image plane. Robots are detected as red or blue regions (the team colours). The goals are detected as large yellow objects in the image. Thus, the objects can be classified.
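The segmentation-and-classification steps can be sketched on a toy image whose pixels are already colour labels, standing in for the result of per-pixel HSV thresholding; the rule set and all names below are illustrative assumptions:

```python
# Toy sketch of segmentation and rule-based classification.

def segment(image, colour):
    """Return the (x, y) pixels in one colour class (thresholding)."""
    return [(x, y) for y, row in enumerate(image)
            for x, c in enumerate(row) if c == colour]

def describe(region):
    """Area and bounding box of a segmented region."""
    xs = [x for x, _ in region]
    ys = [y for _, y in region]
    return {"area": len(region),
            "bbox": (min(xs), min(ys), max(xs), max(ys))}

def classify(image):
    """Apply the rules: orange = ball, red/blue = robots, yellow = goal."""
    objects = {}
    for colour, label in [("orange", "ball"), ("red", "robot_red"),
                          ("blue", "robot_blue"), ("yellow", "goal")]:
        region = segment(image, colour)
        if region:
            objects[label] = describe(region)
    return objects
```

A real system would additionally dilate the thresholded mask and check region shape and size, as described in the text.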
One of the possible challenges of implementing computer vision is the narrow field of view obtained if a single camera is used. Although special lenses can be mounted on the camera to broaden the field of view, they cause image deformation. Using two cameras improves the field of view; however, this method increases image processing complexity as well as doubling the required hardware. Another challenge is that real-time reaction has to be achieved. Higher image resolutions give better precision in image segmentation but also increase the time needed to process each image, since the bigger the image, the longer it takes to be analysed.