CAI3034 Set 3

CAI3034 Prepared by Ooi W.S.

Robot Work Cell


Most people know that using industrial robots in a manufacturing process can improve product
quality, reduce labor and production costs, and decrease waste; an additional benefit of a robotic cell is
that it enhances the overall flow and efficiency of production.

A robot work cell is a complete system that includes the robot, the controller, and other peripherals
such as a part positioner and the safety environment. Robotic work cells are designed to let robots operate
at full capacity and speed. Regulations limit the speed of a robot in the presence of a human worker; with
the right barriers in place, a robotic work cell removes this limitation. The layout of these work cells can be
customized to a specific application process in order to increase how quickly a part can be completed and
moved down the production line. Operators can load and unload parts while another part is being
completed within the same work cell area.

Generally, there are three types of robotic work cell: 1) robot-centered work cell, 2) in-line robot
work cell, and 3) mobile robot work cell.

1) Robot-centered work cell


1. The robot is positioned at the approximate centre of the work cell.
2. Other components and equipment are arranged in a partial circle around the robot.
3. This layout allows for high utilization of the robot.
4. Parts are presented in a known location and orientation (using conveyors, part feeders, or pallets).

End-of-Line Robotic Case Packer & Palletizer - Schneider Packaging


https://www.youtube.com/watch?v=zM_8pHBvYm0

Combi RCE Robotic Random Case Erector installed by SWS Packaging


https://www.youtube.com/watch?v=9-W9gfCtZhE

(Images: part feeders and pallets)


2) In-line robot work cell


1. One or more robots are located along an in-line conveyor or other material transport system.
2. Work is organized so that parts are presented to the robots by the transport system.
3. Typical applications, such as welding lines used to spot-weld car body frames, usually utilize
multiple robots.

In-line robotic cell layout for spot welding

4. There are 3 types of work part transport systems used in in-line robot work cell:
a) Intermittent Transfer
b) Continuous Transfer
c) Non-synchronous Transfer

a) Intermittent Transfer System


 The parts are moved in a start-and-stop motion from one station to another along the line. This is
also called synchronous transfer, since all parts are moved simultaneously to the next station.
 The advantage of this system is that the parts are registered in a fixed location and orientation
with respect to the robot during the robot’s work cycle.

b) Continuous Transfer System


 Work parts are moved continuously along the line at constant speed, and the robot(s) must
perform the tasks as the parts move along.
 The position and orientation of the parts with respect to any fixed location along the line are
continuously changing.
 This results in a “tracking” problem, that is, the robot must maintain the relative position and
orientation of its tool with respect to the work part.
 This tracking problem can be partly solved by the moving baseline tracking system i.e. by
moving the robot parallel to the conveyor at the same speed, or by the stationary baseline
tracking system i.e. by computing and adjusting the robot tool to maintain the position and
orientation with respect to the moving part.
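The stationary baseline tracking idea above can be sketched as a small calculation: the robot base stays fixed, and the controller keeps recomputing the tool target so it holds a constant offset from a part moving at constant conveyor speed. All names and numbers below are illustrative assumptions, not from the notes.

```python
# Stationary baseline tracking (illustrative sketch): predict where
# the moving part will be at time t and place the tool at a fixed
# offset from it.

def tool_target(part_pos_at_t0, conveyor_vel, t0, t, tool_offset):
    """Predict the part's position at time t and add the tool offset."""
    x0, y0 = part_pos_at_t0
    vx, vy = conveyor_vel                 # conveyor velocity (m/s)
    dt = t - t0
    part_x, part_y = x0 + vx * dt, y0 + vy * dt
    ox, oy = tool_offset
    return (part_x + ox, part_y + oy)

# Part seen at (0.0, 0.5) m at t = 0 s; conveyor moves +x at 0.2 m/s;
# the tool works 0.1 m above the part. Target after 2 s:
target = tool_target((0.0, 0.5), (0.2, 0.0), 0.0, 2.0, (0.0, 0.1))
# target is approximately (0.4, 0.6)
```

In a real controller this prediction would run every control period, with the part position refreshed from an encoder or vision system rather than extrapolated indefinitely.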

c) Non-synchronous Transfer System


 This is a “power and free system”. Each work part moves independently of other parts, in a stop-
and-go manner.
 When a work station has finished working on a work part, that part then proceeds to the next
work station. Hence, some parts are being processed on the line at the same time that others
are being transported or located between stations. Here, the timing varies according to the cycle
time requirements of each station.
 The design and operation of this type of transfer system is more complicated than the other two
because each part must be provided with its own independently operated moving cart.
 However, the problem of designing and controlling the robot system used in the power-and-free
method is less complicated than for the continuous transfer method.
 Because arrivals are irregularly timed, sensors must be provided to indicate to the robot when to
begin its work cycle. The more complex problem of part registration with respect to the robot,
which must be solved in continuously moving conveyor systems, is not encountered in either
the intermittent transfer or the non-synchronous transfer.
 Non-synchronous transfer systems offer a greater flexibility than the other two systems.
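The stop-and-go, independently timed behaviour described above can be sketched with a small recurrence (illustrative assumptions only): a part starts at a station as soon as it has arrived there and the previous part has left that station, so the timing varies with each station's own cycle time.

```python
# A tiny sketch of non-synchronous transfer timing: each part moves
# independently, entering a station once it has arrived AND the
# previous part has left it. Cycle times are arbitrary examples.

def completion_times(cycle_times, n_parts):
    """Return each part's exit time from the last station."""
    n_stations = len(cycle_times)
    done = [[0.0] * n_stations for _ in range(n_parts)]
    for part in range(n_parts):
        for st in range(n_stations):
            arrived = done[part][st - 1] if st > 0 else 0.0
            station_free = done[part - 1][st] if part > 0 else 0.0
            done[part][st] = max(arrived, station_free) + cycle_times[st]
    return [row[-1] for row in done]

# Three stations with cycle times 4, 2 and 3 time units, three parts:
print(completion_times([4, 2, 3], 3))   # -> [9.0, 13.0, 17.0]
```

Note how parts overlap: while part 0 is at station 2, part 1 is already being processed at station 0, which is exactly the "some parts are being processed while others are being transported" behaviour described above.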

Mercedes-Benz E-Class Production:


https://www.youtube.com/watch?v=LEgGqSybLTo


3) Mobile robot work cell

1. In this arrangement, the robot is provided with a means of transport, such as a mobile base, within
the work cell to perform various tasks at different locations.
2. The transport mechanism can be floor-mounted tracks or an overhead railing system that allows the
robot to be moved along linear paths.

3. Mobile robot work cells are suitable for installations where the robot must service more than one
station (production machine) that has long processing cycles, and the stations cannot be arranged
around the robot in a robot-centred cell arrangement.
4. One such reason could be that the stations are geographically separated by distances greater than
the robot’s reach. This type of layout allows the robot to time-share tasks, lowering its idle time.
5. One of the problems in designing this work cell is to find the optimum number of stations or machines
for the robot to service.
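One common rough estimate for this sizing problem (a standard time-sharing bound, not taken from these notes) is that while one machine runs unattended for its cycle time, the robot can serve the other stations, each needing a fixed service time for load/unload and travel:

```python
import math

# Rough upper bound on how many stations one robot can service
# without keeping machines waiting: n * Ts <= Tm + Ts, where Tm is
# the machine's unattended cycle time and Ts the robot's service
# time per station. Numbers below are illustrative assumptions.

def max_stations(machine_cycle_s, service_s):
    """Largest n with n * service_s <= machine_cycle_s + service_s."""
    return math.floor((machine_cycle_s + service_s) / service_s)

# A station runs 90 s per cycle and needs 10 s of robot attention:
print(max_stations(90, 10))   # -> 10
```

The real optimum is usually lower, since travel times differ between stations and arrivals are not perfectly synchronized, which is why the notes call this a design problem.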

Mobile Collaborative Robot - FANUC’s New CR-7iA/L Uses AGV to Move Between Robotic Assembly
Stations: https://www.youtube.com/watch?v=rQBnZuby05s

Safety Monitoring

 Emergency stopping requires an alert operator to be present to notice the emergency and take action
to interrupt the cycle (however, safety emergencies do not always occur at convenient times, when the
operator is present).

 Therefore, a more automatic and reliable means of protecting the cell equipment and people who
might wander into the work zone, is imperative. This is safety monitoring.

 Safety monitoring (or hazard monitoring) is a work cell control function in which various types of sensors
are used for this purpose. For example, limit switches and proximity sensors can be used to monitor the
status and activities of the cell and to detect unsafe or potentially unsafe conditions. Other types of
sensors, such as temperature sensors, pressure-sensitive floor mats, light beams combined with
photosensitive sensors, and machine vision, can also be used to protect the cell equipment and people
who might wander into the work zone.
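A minimal sketch of such a monitoring function is shown below. The sensor names and the two-level response (stop vs. reduced speed) are illustrative assumptions; a real cell controller would follow the applicable safety standards rather than this toy logic.

```python
# Toy safety-monitoring loop: each sensor reports whether it has been
# tripped, and the cell controller picks the most restrictive response.

def safety_action(readings):
    """Map sensor readings to a cell-level response.

    readings: dict like {"light_curtain": True, "floor_mat": False},
    where True means the sensor has been tripped.
    """
    if readings.get("emergency_stop") or readings.get("light_curtain"):
        return "STOP"            # person may be inside the work zone
    if readings.get("floor_mat") or readings.get("proximity"):
        return "SLOW"            # person approaching: reduced speed
    return "RUN"

print(safety_action({"light_curtain": True}))   # -> STOP
print(safety_action({"floor_mat": True}))       # -> SLOW
print(safety_action({}))                        # -> RUN
```

The point of automating this decision, as the text notes, is that it does not depend on an alert operator being present when the emergency occurs.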

Robotics Risk Assessment: Recognizing Potential Hazards:


https://www.youtube.com/watch?v=GtNKX4kpC18


Passive sensors
Passive sensors measure ambient environment energy entering the system. Examples of passive sensor:
thermocouple, camera, Light Dependent Resistor (LDR).

 Advantage – simple design, therefore low cost and low power consumption; operation is not limited
by battery capacity.
 Disadvantage – can only detect energy when naturally occurring energy is available.

Active sensors
Active sensors emit energy into the environment, and then measure the environment response.
Examples of active sensor: laser range finders, sonar sensor, synthetic aperture radar (SAR)

 Advantage – the ability to obtain measurements at any time, regardless of the time of day, season
or amount of natural illumination.
 Disadvantage – they must generate a fairly large amount of energy to adequately illuminate their
targets.

Braitenberg vehicle

A Braitenberg vehicle is a conceptual mobile robot in which simple sensors are connected directly to
drive wheels.

A Braitenberg vehicle with two light sensors at the front is attracted to light, while two proximity
sensors beside the light sensors turn the vehicle away from obstacles. After the vehicle has acquired the
phototaxis competence, it can be taught to avoid obstacles as well. If the vehicle approaches an obstacle,
the proximity sensor with the higher activation accelerates the motor on its own side and slows down the
motor on the opposite side. The presence of an obstacle therefore leads to different motor speeds, which
causes the vehicle to turn, without losing the ability to move towards a light source. If no obstacle is
detected and the left half of the field of view is brighter than the right half (by a certain threshold), the
robot should move left. If the right half is brighter than the left half (by a certain threshold), the robot
should move right. If both halves of the field of view are close in brightness, the vehicle should move
straight ahead.
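The controller described above can be sketched as a single step function. Light sensors are cross-coupled to the motors to produce phototaxis, and proximity sensors override them near obstacles; the scaling constants and threshold are illustrative assumptions.

```python
# Braitenberg-style controller sketch: crossed light connections
# attract the vehicle to light; proximity sensors override them to
# steer away from obstacles.

def braitenberg_step(left_light, right_light, left_prox, right_prox,
                     base_speed=0.5, prox_threshold=0.3):
    """Return (left_motor, right_motor) speeds for one control step."""
    if max(left_prox, right_prox) > prox_threshold:
        # Obstacle: speed up the motor on the sensor's side and slow
        # the opposite motor, turning the vehicle away.
        left_motor = base_speed + left_prox - right_prox
        right_motor = base_speed + right_prox - left_prox
    else:
        # Phototaxis: each motor is driven by the *opposite* light
        # sensor, so the brighter side pulls the vehicle toward it.
        left_motor = right_light
        right_motor = left_light
    return left_motor, right_motor

# Light brighter on the left, no obstacle: the right motor runs
# faster, so the vehicle turns left, toward the light.
l, r = braitenberg_step(0.8, 0.2, 0.0, 0.0)
# here l < r
```

The crossed excitatory wiring is what makes the vehicle light-seeking; same-side wiring would instead produce light avoidance.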

Brownouts typically happen when batteries are already running low and the servos suddenly demand
more power. The symptom is that the program restarts. When the program restarts, the Boe-Bot can
show different behaviour patterns, meaning that it may move erratically or do other things it is not
programmed to do. When the battery supply drops below the normal level, the controller resets. This
can be detected by sounding a piezospeaker or buzzer each time the controller resets: the piezospeaker
can create a tone indicating that a brownout has occurred. The FREQOUT command makes the
piezospeaker create the tone.
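On the Boe-Bot itself this is one PBASIC statement at the top of the program (e.g. `FREQOUT 4, 2000, 3000`, beeping the pin-4 piezospeaker at 3 kHz for 2 s on every start-up). The Python sketch below mirrors only the logic; the callback interface is an assumption for illustration.

```python
# Brownout detection by start-up tone: the tone plays every time the
# program (re)starts, so hearing it mid-run reveals that a brownout
# reset has occurred.

def on_program_start(play_tone):
    """Run once at the top of the program; a brownout reset runs it again."""
    play_tone(frequency_hz=3000, duration_ms=2000)   # reset-alert beep

# Record tones instead of playing them, to show the call being made:
tones = []
on_program_start(lambda frequency_hz, duration_ms:
                 tones.append((frequency_hz, duration_ms)))
# tones now holds [(3000, 2000)]
```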


Human-Robot Interaction (HRI)

Human-Robot Interaction (HRI) is a field of study dedicated to understanding, designing, and


evaluating robotic systems for use by or with humans. Interaction, by definition, requires
communication between robots and humans.

According to Goodrich and Schultz (Goodrich and Schultz, 2007), the interaction between a
human and a robot is categorized into two types: remote interaction and proximate interaction.

Remote interaction – The human and the robot are not co-located and are separated spatially or even
temporally (for example, the Mars Rovers are separated from Earth in both space and time).

Proximate interaction – The humans and the robots are co-located (for example, service robots may
be in the same room as humans).

Anthropomorphism

 Is the tendency to attribute human characteristics to inanimate objects, animals and others
with a view to helping us rationalise their actions.

 Examples in cartoons (Disney being particularly prolific)

 You might be thinking that anthropomorphism sounds a lot like personification—and


you're right. But here's the difference. With anthropomorphism, the object or animal is
actually doing something human. With personification, the object or animal just seems like
it's doing something human. You generally hear personification in poetry.

 For example: If you were to say: My computer hates me.

Personification: Your computer does not actually hate you. We use this expression to
mean that your computer isn’t working at that particular moment.

Anthropomorphism: Would be if your computer grew arms and legs, punched you in the
neck, and stole your wallet…then it would ACTUALLY hate you.


Figure 1: Anthropomorphism design space for robot heads. Notes: The diagram refers uniquely
to the head construction and ignores body function and form. This is also by no means an
exhaustive list. Examples were chosen to illustrate the proposed idea.

 Figure 1 provides an illustrative “map” of anthropomorphic features as applied to a design of


existing robotic heads in the development of social relationships between a physical robot
and people. The three extremities of the diagram (human, iconic and abstract) embrace
the primary categorisations for robots employing anthropomorphism to some degree.
“Human” corresponds to an as-close-as-possible replication of the human head.
“Iconic” seeks a very minimum set of features as often found in comics that still succeeds in
being expressive. The “Abstract” corner refers to more mechanistic functional design of the
robot with minimal attention to humanlike aesthetics.

 In order to portray artificial emotional states, some utilise a strongly realistic humanlike
construction (i.e. with synthetic skin and hair) for facial gestures. Building mannequin-like
robotic heads, where the objective is to hide the “robotic” element as much as possible and
blur the issue as to whether one is talking to a machine or a person, results in effectively
unconstrained anthropomorphism. However, as Mori outlined with “The Uncanny Valley”
(Mori, 1982), the closer the design and functionality of the robot comes to the human, the
more susceptible it is to failure unless such a high degree of resolution is achieved that its
distinction from a human becomes very difficult.

 You made the robot interact with you while working on your assignments.
But how many of you:
o Swore at your robot?
o Complimented your robot?
o Referred to your robot as he/she?
o Gave your robot a name?

Why?

 Good papers for reading:


a) Duffy, B.R., 2003. Anthropomorphism and the social robot. Robotics and autonomous
systems, 42(3-4), pp.177-190.
b) Złotowski, J., Proudfoot, D., Yogeeswaran, K. and Bartneck, C., 2015.
Anthropomorphism: opportunities and challenges in human–robot interaction. International
Journal of Social Robotics, 7(3), pp.347-360.


The “Uncanny Valley”

"These were robots in human form with distorted faces, and


they gave my daughter nightmares. When I asked her why
she was frightened of the Cybermen but not of the Daleks,
she replied that the Cybermen looked like terrible human
beings, whereas the Daleks were just Daleks."
— Ann Lawrence, writer for The Morning Star on Doctor
Who: The Tomb of the Cybermen

The term “uncanny valley” describes our strange revulsion toward things that appear nearly
human, but not quite right. The uncanny valley is a common unsettling feeling people
experience when humanoid robots closely resemble humans in many respects but are not quite
convincingly realistic. This strange revulsion usually involves robots, but can also include
computer animations and some medical conditions.

In 1970, Japanese roboticist Masahiro Mori proposed the “uncanny valley” hypothesis, which
predicts a nonlinear relation between robots’ perceived human likeness and their likeability.

The uncanny valley is a psychological theory about an effect involving art, robots and human
emotions. As something starts to look more human-like, there is a point at which people start to feel
it looks wrong, and they have negative feelings toward the object. These feelings keep getting worse
as the object is made to look more human-like. The uncanny valley occurs because of mismatches
between aspects of the robot’s appearance and/or behaviour. At a certain point, as the object gets
very close to looking human, how people feel towards it tends to reverse and they have more positive
emotional feelings towards it.

The uncanny valley effect can be reduced by ensuring that a character’s facial expressions
match its emotive tones of speech, and that its body movements are responsive and reflect its
hypothetical emotional state. Special attention must also be paid to facial elements such as the
forehead, eyes, and mouth, which depict the complexities of emotion and thought.

To investigate the uncanny valley theory, a collection of video clips depicting robots of varying
human similarity and levels of sophistication is used in the experiment. Video clips rather than still
images are used because animated stimuli provide a richer set of cues for evaluating the human
likeness of a given stimulus. These clips will be presented to participants, with the clips and scales
randomized to prevent order effects. Participants will then be asked to rate the clips using human
likeness, familiarity, and eeriness scales. From the subjective ratings collected, statistical measures
will be employed to determine whether the set of humanlike robots reflects an uncanny valley.

Social Robots

The use of robots is swiftly shifting from industrial uses where they are basically deployed for
manufacturing purposes and tasks that are too dangerous for human beings to the use of social
robots which have the capability to interact with human beings in a particular environment.
Social robots have been widely deployed in healthcare in recent times as a result of low
accessibility to healthcare services. Social robots are autonomous mobile machines that are
designed to interact with humans and exhibit social behaviors such as recognizing, following and
assisting their owners and engaging in conversation.

Issues related to the use of social robots:


 They’re not humans and lack empathy, emotion and reasoning. They handle the routine
tasks they are programmed to do, but may respond unpredictably to situations for which
they were not trained.

 As with any technology, robots are susceptible to hardware malfunctions and failures and
may involve a high cost to repair and maintain.

 In addition, humans that develop an over-dependence on social robots, such as for


emotional companionship, may miss out on the person-to-person interactions that are the
essence of the human condition.

 It is hard to make judgements on what an emotional response should be; someone might laugh at
something that others find offensive.

 If people rely on these machines rather than on real people, we could end up with a very poor
society where people avoid each other because they find machines preferable.

 If the device is used by people with emotional issues they may become too attached which
could cause problems if the device does not act as expected.

Social Robots in Healthcare

HRI in healthcare is primarily concerned with helping patients improve or monitor their health.
Examples of social robots in healthcare are surgical assistance robots, rehabilitation robots,
and companion robots.

 Surgical assistance robots are robots that allow physicians to perform surgical operations
with greater precision. Surgical robots support both face-to-face and remote surgical
operations: in face-to-face operations the physician and patient are physically co-located,
while in remote operations the human surgeon is not physically present with the patient.
The use of surgical assistance robots enables minimally invasive surgeries.

 Rehabilitation robots are robots that assist people with disability and provide therapy for
people seeking to improve physical or cognitive functions. Examples of rehabilitation robots
are assistive robots for mobility, assistive robots for manipulation and therapy robots.

Examples of assistive robots for mobility include robotic wheelchairs, intelligent wheelchairs,
robotic walkers and robotic aids for the blind. Assistive robots for manipulation are used for
handling physical objects; they are usually used by people with impairments of the arm,
hand and fingers. Therapy robots are robots that provide treatment for people with physical
and mental challenges.

 Companion robots are typically designed to enhance the health and psychological well-being
of the aged and the sick by providing companionship, alleviating stress and strengthening
their immune systems.

Ethical Issue

Social robots are now widely used in healthcare. Their applications range from surgery,
emotional and aging care, companionship to telemedicine and rehabilitation. However, there
are numerous challenges associated with the interaction between humans and social robots in
healthcare. These challenges range from ethical challenges, design issues to safety,
usefulness, acceptability and appropriateness.

Ethics is a philosophical discipline which is concerned with the morality of human behavior, with
right and wrong. Examples of the ethical challenges confronting HRI in healthcare are as
follows,

 Human-human relation (HHR) is a very important ethical issue that has to be considered
when using assistive/social robots. Typically, robots are used as a means of adding to or
enhancing the therapy given by caregivers, not as a replacement for them. Thus, the
patient-caregiver relation and interaction must not be disturbed. If the robot is used as
a replacement for the human therapist, it may lead to a reduction in the amount
of “human-human” contact. This is a serious ethical problem if the robot is employed as the
only therapeutic aid in the patient's life. In this case, in fragile persons (children with
developmental disorders, elderly with dementia, etc.) who suffer from loneliness, the isolation
syndrome may be worsened.

 A robot designed to play the role of a therapist is given some authority to exert influence
on the patient. Therefore, the ethical question arises of who actually controls the
type, the level, and the duration of interaction. For example, if a patient wants to stop an
exercise due to stress or pain, a human therapist would accept this on the basis of a
general humanized evaluation of the patient's physical state. Such a feature should be
technically embedded in the robot in order to ethically balance the patient's autonomy
against the robot's authority.

Architectures for Robot Control


Mobile robot architecture can be classified according to the relationship between sensing,
planning and acting components inside the architecture. Based on this, there are three types of
architectures: reactive, deliberative, and hybrid (deliberative/reactive).

Reactive Control
 Reactive Control is a technique for tightly coupling sensory inputs and effector outputs, to
allow the robot to respond very quickly to changing and unstructured environments, i.e.,
sensors directly determine the actions.
 Limitations to this approach are that such robots, because they only look up actions for any
sensory input, do not usually keep much information around, have no memory, no internal
representations of the world around them, and no ability to learn over time.
 Example of reactive control is a light-chasing robot:
(behavior chase-light
:period (1 ms)
:actions ((set left-motor (right-sensor-value))
(set right-motor (left-sensor-value))))
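The same light-chasing behavior can be sketched in Python. Sensor readings map directly to motor commands every control period, with no memory or world model; the callback interface is an assumption for illustration.

```python
# Reactive control sketch: pure sensor-to-actuator mapping, repeated
# every control period.
import time

def chase_light_step(left_sensor, right_sensor):
    """One reactive step: sensors directly determine the actions."""
    left_motor = right_sensor     # crossed connections: the brighter
    right_motor = left_sensor     # side pulls the robot toward it
    return left_motor, right_motor

def run(read_sensors, set_motors, period_s=0.001):
    while True:                   # the whole controller is this loop
        set_motors(*chase_light_step(*read_sensors()))
        time.sleep(period_s)
```

Note how the limitations listed above are visible in the code: there is no state between iterations, so the robot cannot remember, represent the world, or learn.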


Deliberative Control
 In deliberative control, the robot takes all of the available sensory information, and all of the
internally stored knowledge it has, and it thinks ("reasons") about it in order to create a plan
of action. To do so, the robot must search through potentially all possible plans until it finds
one that will do the job. This requires the robot to look ahead, and think in terms of: "if I do
this next, and then this happens, then what if I do this next, then this happens,..." and so on.
 This can take a long time. However, if there is time, this allows the robot to act strategically.
 Deliberative systems have also been used in non-physical domains, such as playing chess.

Hybrid Control
 In hybrid control, the goal is to combine the speed of reactive control and the brains of
deliberative control. In it, one part of the robot's "brain" plans, while another deals with
immediate reaction, such as avoiding obstacles and staying on the road.

 The three-layer architecture is the most common hybrid architecture; it uses higher-level
planning to guide the lower-level reactive components.
 The bottom layer is the reactive/behavior-based layer, in which sensors and actuators are
closely coupled.
 The upper layer provides the deliberative components, such as planning and localisation.
 The control execution layer in between supervises the interaction between the high-level and
low-level layers, combining the speed of reactive control with the brains of deliberative control.
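The layering above can be sketched as a skeleton class (illustrative structure only; the action names and the obstacle check are assumptions). The execution layer arbitrates: reactive responses pre-empt the deliberative plan.

```python
# Skeletal three-layer hybrid controller: a deliberative plan,
# a reactive hazard response, and an execution layer in between.

class HybridController:
    def __init__(self, plan):
        self.plan = list(plan)          # deliberative layer's output

    def reactive_layer(self, sensors):
        # Tight sensor-actuator coupling: obstacle -> evasive action.
        if sensors.get("obstacle"):
            return "avoid_obstacle"
        return None

    def execution_layer(self, sensors):
        # Supervises the interaction between the two layers.
        reaction = self.reactive_layer(sensors)
        if reaction is not None:
            return reaction             # safety pre-empts the plan
        if self.plan:
            return self.plan.pop(0)     # next planned step
        return "idle"

ctrl = HybridController(plan=["go_to_A", "pick", "go_to_B"])
print(ctrl.execution_layer({"obstacle": True}))   # -> avoid_obstacle
print(ctrl.execution_layer({}))                   # -> go_to_A
```

The plan is untouched while the reactive layer is in control, so deliberation resumes where it left off once the hazard clears.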

When we compare deliberative architectures with reactive architectures, we observe that
deliberative architectures work in a more predictable way, depend heavily on a precise and
complete model of the world, and can generate optimized trajectories for the robot. On the
other hand, reactive architectures respond faster to dynamic changes in the environment, can
work without a model of the world, and are computationally much simpler. Finally, hybrid
architectures try to combine the best characteristics of the other two.

 Good paper for reading:


Nakhaeinia, D., Tang, S.H., Noor, S.M. and Motlagh, O., 2011. A review of control
architectures for autonomous navigation of mobile robots. International Journal of Physical
Sciences, 6(2), pp.169-174.

Artificial Intelligence (AI) in Robotics Automation

 With AI integrated into industrial robotics technology, robots can monitor their own accuracy
and performance, signaling when maintenance is required to avoid expensive downtime. This
predictive maintenance increases uptime and productivity.

 AI helps to simplify the robot training process. AI can train several robots at once, so less time
is spent on training. This is especially helpful to industries experiencing labor shortages. AI
technology lets the operator teach the robot more intuitively. With AI, it is easier for an
operator to track the robot’s learning than it previously was with conventional training
methods.

 AI is a highly useful tool in robotic assembly applications. When combined with advanced
vision systems, AI can help with real-time course correction, which is particularly useful in
complex manufacturing sectors. AI can also be used to help a robot learn on its own which
paths are best for certain processes while it is in operation.

 Robotic packaging frequently uses AI for quicker, lower-cost and more accurate packaging.
AI helps save and constantly refine the motions a robotic system makes, which makes
installing and moving robotic systems easy enough for anybody to do.

The impact of this AI on the future workforce:

 One of the impacts is over-reliance on robots. Recent studies compared automotive
manufacturers in the US on their relative performance since introducing industrial robots
to their production lines, and found that over-reliance on robots over humans actually led
to a drop-off in efficiency and an increase in waste and downtime.

 Another possible impact is social unrest. If these devices displace workers from tasks they
were previously performing, those workers will lose career opportunities; what will they do to
earn money to pay for food?

Application of Computer Vision in Robotics

In a robot soccer competition, two cameras are mounted over the field. Each camera
captures half of the field, so that together they cover the whole field.

The first step in extracting information from the cameras is data pre-processing. To pre-process
the data, the continuous video is first sampled into images. Each RGB image is then transformed
to HSV or YUV, which are more perceptually uniform colour spaces. Colour normalization is used
to handle non-uniform lighting across the field, and an adaptive colour histogram is used to
remove the background, in this case the green field colour.
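A toy version of this pre-processing stage is sketched below using only the standard library: each RGB pixel is converted to HSV with the `colorsys` module, and pixels whose hue falls in an (assumed) green band are masked out. The hue band and saturation threshold are illustrative assumptions, not values from the notes.

```python
# Toy background removal: convert RGB pixels to HSV and drop pixels
# whose hue lies in the green band of the playing field.
import colorsys

def remove_green_background(pixels, hue_lo=0.20, hue_hi=0.45):
    """pixels: list of (r, g, b) with components in [0, 1].
    Returns the pixels whose hue is outside the green band."""
    kept = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if not (hue_lo <= h <= hue_hi and s > 0.2):
            kept.append((r, g, b))
    return kept

field_green = (0.1, 0.8, 0.2)
orange_ball = (1.0, 0.5, 0.0)
print(remove_green_background([field_green, orange_ball]))
# -> [(1.0, 0.5, 0.0)]
```

Working in HSV makes this robust to brightness changes: the hue of the field stays roughly constant even when the lighting across the field is non-uniform, which is exactly why the pipeline converts out of RGB first.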

(Figures: RGB colour model; HSV and YUV colour models; background removal)

After the data is pre-processed, image segmentation is performed to extract useful information.
To segment the image, each pixel is classified into the different object classes (robots and
ball) using a thresholding method. The thresholded image is dilated to fill holes which might
appear inside a region. Relevant information such as the colour, area, and bounding box is
computed for each segmented region and stored in a data structure for recognition.

After image segmentation, the objects in the image must be classified. In the classification
process, each region’s attributes are matched against rules describing the objects of interest.
For example, the ball is a small orange spherical object, so it can be detected as a small orange
circular region on the image plane. Robots are detected as red or blue regions (the team
colours), and the goals are detected as large yellow objects. Thus, the objects can be classified.
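The rule matching described above can be sketched as a simple function over region attributes. The regions are assumed to come from the segmentation stage; the colour names and area thresholds are illustrative assumptions.

```python
# Rule-based classification of segmented regions: each region is a
# dict of attributes (colour, area) produced by segmentation.

def classify_region(region):
    colour, area = region["colour"], region["area"]
    if colour == "orange" and area < 200:
        return "ball"                    # small orange circular blob
    if colour in ("red", "blue"):
        return "robot"                   # team colour markers
    if colour == "yellow" and area > 1000:
        return "goal"                    # large yellow region
    return "unknown"

regions = [
    {"colour": "orange", "area": 80},
    {"colour": "blue", "area": 400},
    {"colour": "yellow", "area": 5000},
]
print([classify_region(r) for r in regions])
# -> ['ball', 'robot', 'goal']
```

In practice the rules would also use shape attributes (e.g. circularity from the bounding box) to reject orange regions that are not the ball.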

One of the possible challenges of implementing computer vision is the narrow field of view
achieved if a single camera is used. Although special lenses can be mounted to the camera to
broaden the field of view, they cause image deformation. Using two cameras can improve the
field of view. However, this method causes higher image processing complexity as well as
doubling required hardware. Another challenge is real-time reaction has to be achieved. Higher
image resolutions result in better precision to image segmentation but also increase the time to
process each image since the bigger the image the longer it takes to be analysed.
