M3(11&12)

This document discusses the primary goals of AI research, distinguishing between the science goal of replicating human intelligence and the innovation goal of enhancing human capabilities. It highlights key themes in AI, including the shift from symbolic AI to machine learning, the importance of human-centered design, and the need for transparency and accountability in AI systems. The document emphasizes balancing automation and human control to ensure AI serves societal needs while maintaining safety and reliability.


HUMAN-CENTERED ARTIFICIAL INTELLIGENCE
Module 3: Design Metaphors
Chapter 11: Introduction: What Are the Goals of AI Research?
INTRODUCTION
This chapter explores the primary goals of artificial intelligence (AI) research,
breaking them down into two main categories: science and innovation. The science
goal focuses on creating AI systems that mimic or surpass human perceptual, cognitive,
and motor abilities, often driven by the ambition to build intelligent, autonomous
systems. The innovation goal, on the other hand, prioritizes developing AI
technologies that enhance human capabilities, emphasizing transparency, human
control, and practical applications.
KEY THEMES AND DEBATES IN AI RESEARCH
Early AI Aspirations and the Turing Test
AI research initially sought to answer Alan Turing’s question, "Can machines think?",
leading to the development of systems aimed at passing the Turing Test—where AI
would be indistinguishable from a human in conversation.
Over time, AI research expanded to include areas like pattern recognition, natural
language processing, emotion recognition, and game-playing systems (e.g., chess,
Go).
Shift from Symbolic AI to Machine Learning
Early AI research focused on symbolic reasoning, but this approach gave way to
machine learning and deep learning techniques such as neural networks,
generative adversarial networks (GANs), and reinforcement learning.
These advances led to breakthroughs in areas like speech recognition, image
generation, and self-learning models.
Criticism and Challenges
Some argue that AI systems remain brittle and unreliable in real-world applications.
Critics highlight AI’s failures in areas such as bias in language models, self-driving
car accidents, and failures in medical recommendations.
AI methods sometimes lose out to traditional engineering solutions, such as IBM’s
Deep Blue, which defeated Garry Kasparov using brute-force computation rather
than AI-style learning.
Human-Centered AI (HCAI) and Explainability
• AI research should integrate human-centered design with transparent algorithms, user control, and audit trails for accountability.
• Systems should be explainable and predictable, ensuring human oversight in critical applications like healthcare, finance, and autonomous vehicles.
Finding a Middle Ground: Combined AI Designs
• The book suggests four pairs of design strategies that blend automation and human control, ensuring AI serves societal needs while maintaining safety and reliability.
• Examples include self-driving car assist features (like lane-keeping and collision avoidance), control panels for AI-driven automation, and AI-assisted search and messaging services.
Ultimately, AI research must balance the science of autonomous intelligence with the
practical goal of enhancing human capabilities, fostering systems that are reliable, safe,
and beneficial.
HUMAN-CENTERED ARTIFICIAL INTELLIGENCE
Chapter 12: Science and Innovation Goals
INTRODUCTION
This chapter distinguishes between two fundamental goals in AI research:
the science goal (understanding and replicating human intelligence)
the innovation goal (using AI to enhance human capabilities).
While some researchers may identify with both goals, this distinction helps clarify
the different motivations and methodologies within AI development.
THE SCIENCE GOAL: UNDERSTANDING AND EMULATING HUMAN INTELLIGENCE
The science goal focuses on studying and replicating human perceptual, cognitive,
and motor abilities in machines. This includes ambitions such as:
Artificial General Intelligence (AGI) – AI that can perform any intellectual task a
human can do.
Social robots – Machines that interact with humans in a human-like way.
Common-sense reasoning – AI that understands and applies everyday human logic.
Affective computing – AI that can recognize and respond to human emotions.
Researchers in this field aim to develop AI that can think and act like humans or
think and act rationally, as described in textbooks by Stuart Russell & Peter Norvig
and David Poole & Alan Mackworth.
Many AI scientists believe that humans are complex biological machines, so creating an
artificial human is a logical scientific pursuit. The AI 100 Report even suggests that the
difference between a human brain and a computer is only a matter of scale, speed,
autonomy, and generality rather than a fundamental distinction.
This belief drives efforts to build machines that learn like humans, requiring training and
development.
Science goal researchers often compare AI performance to human experts, such as AI
outperforming doctors in detecting cancer.
The media amplifies this competitive framing with headlines like “Robots Can Now Read
Better than Humans”.
Despite these advancements, some experts warn against overhyping AI capabilities. Gary
Marcus and Ernest Davis, in Rebooting AI, argue that the media makes AI seem closer to
human-like intelligence than it actually is, leading to misplaced public expectations.
AUTONOMY VS. AUTOMATION
Automation – AI performs tasks based on pre-set instructions and data.
Autonomy – AI develops new goals and decisions independently, responding to new
information.
Many science goal researchers believe AI should move beyond automation and become
fully autonomous systems that can set their own objectives and monitor their own
actions. This leads to debates on whether AI should have moral and legal
responsibilities, similar to humans and corporations.
Some ethicists even suggest that future AI could have rights and ethical obligations, but
this remains a controversial topic. The book focuses on practical AI design rather than
theoretical discussions on AI morality.

SCIENCE GOAL
•The science goal aims to understand and replicate human intelligence, with long-term
ambitions for AGI and autonomous AI.
•AI scientists often see humans as biological machines, making full emulation a realistic goal.
•Media and hype can exaggerate AI’s progress, misleading the public.
•The shift from automation to autonomy raises ethical and legal questions about AI's
responsibility and rights.
THE INNOVATION GOAL IN AI
The innovation goal in AI focuses on developing practical, widely used products and services that enhance human abilities. Sometimes called the engineering goal, it prioritizes human-centered AI (HCAI), ensuring that AI serves as a supertool rather than replacing human intelligence.
THE INNOVATION GOAL: AI AS A SUPERTOOL
Instead of striving to create autonomous human-like machines (as in the science goal),
innovation-focused AI aims to build tools that extend human capabilities. These tools
are often:
Embedded in technology such as cloud computing, websites, mobile devices,
home automation, manufacturing, and virtual assistants.
Designed for user convenience – instead of a humanoid airport robot, an
innovation-focused solution might be an airport app that provides maps, wait times,
and flight information.
To develop such AI systems, researchers study human behavior and social dynamics
to ensure user acceptance and ease of use. This often involves:
Design thinking – focusing on user needs and behavior.
User testing and market research – refining AI products to make them intuitive and
beneficial.
Balancing automation and human control – allowing AI to assist in tasks without
removing human oversight.
1. DESIGN THINKING: FOCUSING ON USER NEEDS AND BEHAVIOR
What is Design Thinking?
Design thinking is a problem-solving approach that prioritizes human needs, emphasizing
empathy, experimentation, and iterative design. Instead of starting with technology, AI
developers begin by understanding user challenges and behaviors before creating
solutions.
Key Phases of Design Thinking in AI Development
Empathize – Understand users’ needs, frustrations, and behaviors.
Define – Identify the core problem AI should solve.
Ideate – Brainstorm possible AI-driven solutions.
Prototype – Build quick, testable versions of AI interfaces.
Test & Iterate – Refine AI systems based on user feedback.
Example: AI-Powered Virtual Assistants
Instead of just focusing on speech recognition accuracy, design thinking ensures that AI
assistants like Siri, Alexa, and Google Assistant are intuitive and context-aware.
Developers study how people naturally speak and structure conversations to make
interactions more human-friendly.
AI assistants now understand follow-up questions and multiple requests in a single sentence.
Why it matters: AI should solve real user problems, not just showcase technological
advancements.
2. USER TESTING AND MARKET RESEARCH: REFINING AI FOR INTUITIVENESS AND PRACTICAL BENEFITS
Once an AI concept is designed, it must be tested with real users to ensure it is intuitive,
effective, and beneficial. This involves:
User Testing – Ensuring AI Works for People
What it is: Observing real users interact with AI to identify problems, confusion, or usability
gaps.
Methods:
✔ A/B Testing – Comparing two AI versions to see which performs better.
✔ Usability Testing – Asking users to complete tasks while analyzing their experience.
✔ Eye-Tracking Studies – Understanding how users visually navigate AI interfaces.
✔ Think-Aloud Testing – Users describe their thoughts while interacting with AI, revealing
confusion points.
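The A/B testing idea above can be sketched as a simple two-proportion z-test comparing task-success rates of two AI variants. This is a minimal illustration, not from the book; the success counts and function name are invented for the example.

```python
from math import sqrt

def ab_z_score(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """Two-proportion z-score comparing success rates of AI variants A and B."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)      # pooled success rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    return (p_b - p_a) / se

# Variant B's interface completed 150/1000 user tasks vs. A's 120/1000
z = ab_z_score(120, 1000, 150, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at ~95% confidence
```

In practice, teams would also fix a sample size in advance and correct for repeated looks at the data, but the core comparison is this simple.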
Market Research – Understanding Real-World Adoption
What it is: Studying customer needs, preferences, and expectations to align AI products with market
demands.
Methods:
✔ Surveys & Interviews – Gathering opinions on AI usability and functionality.
✔ Competitor Analysis – Studying existing AI solutions to find improvement opportunities.
✔ Behavioral Data Analysis – Understanding how people interact with AI in real-world settings.
Example: AI-Powered Image Recognition (Google Photos & Apple Photos)
Early AI image tagging misclassified people and objects, leading to backlash.
User feedback led to better AI training and error correction.
Now, users can manually adjust AI-generated tags to improve future recommendations.
Why it matters: AI must continuously learn from real users to remain effective and relevant.
3. BALANCING AUTOMATION AND HUMAN CONTROL: ENSURING AI ASSISTS WITHOUT REPLACING HUMANS
AI should enhance human decision-making, not take full control. Finding the right balance is critical to
avoiding over-reliance on AI while still benefiting from its efficiency.
Three Levels of AI Control & Automation
Full Human Control – AI provides insights, but humans make all decisions.
Examples:
Medical AI suggests diagnoses, but doctors make final treatment decisions.
AI-powered writing tools suggest edits, but users accept or reject them.
Shared Control – AI automates repetitive tasks but allows human intervention when needed.
Examples:
Self-driving car assist features (lane keeping, adaptive cruise control) help drivers but don’t take
full control.
AI in customer support handles simple queries but transfers complex issues to human agents.
Full AI Automation (Only in Safe Scenarios) – AI handles tasks without human input, only where
failure has low consequences.
Examples:
Spam filters in email automatically remove junk messages.
AI in digital cameras automatically adjusts lighting and focus for better photos.
Example: AI in Healthcare
Full automation can be risky (e.g., an AI making a medical diagnosis without doctor verification).
Shared control is safer: AI assists in analyzing scans, but doctors validate the findings.
Why it matters: AI should empower humans, not take away control over critical decisions.
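The three levels above can be sketched as a confidence- and stakes-based routing policy. This is a hypothetical illustration; the threshold, task names, and `Decision` fields are invented, not taken from the book.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # what the AI proposes to do
    confidence: float    # model's self-reported confidence, 0..1
    high_stakes: bool    # does failure have serious consequences?

def route(decision: Decision) -> str:
    """Route an AI proposal to one of the three control levels."""
    if decision.high_stakes:
        # Full human control: AI only advises (e.g., medical diagnosis).
        return "advise_human"
    if decision.confidence < 0.9:
        # Shared control: automate, but hand uncertain cases to a person.
        return "escalate_to_human"
    # Full automation only for confident, low-consequence tasks (e.g., spam).
    return "act_autonomously"

print(route(Decision("flag_as_spam", 0.97, high_stakes=False)))     # act_autonomously
print(route(Decision("suggest_diagnosis", 0.99, high_stakes=True))) # advise_human
```

Note that stakes, not confidence, dominate: even a 99%-confident diagnosis is routed to a human, matching the healthcare example above.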
BALANCING AUTOMATION AND HUMAN CONTROL

The HCAI framework emphasizes finding a balance between automation and user control:
Some applications require full automation, such as airbag deployment or pacemakers, where instant
responses are necessary.
Other applications demand full human control, such as bicycle riding or playing the piano.
Most AI applications lie in between, combining automation and human oversight to create safe,
reliable, and explainable systems.
To achieve this balance, AI engineers implement:
Interlocks to prevent human mistakes.
Controls to prevent AI failures.
Audit trails and logs to track system decisions and identify errors (useful in critical applications like
self-driving cars and medical devices).
1. INTERLOCKS – PREVENTING HUMAN MISTAKES
What are interlocks?
Interlocks are fail-safe mechanisms designed to prevent humans from making errors when
interacting with AI systems. They act as safeguards to ensure users cannot accidentally trigger unsafe
or incorrect actions.
How do interlocks work?
They introduce checkpoints, warnings, or permissions before allowing critical actions.
Examples of Interlocks in AI Applications
Self-Driving Cars (Tesla, Waymo, etc.)
Require hands on the steering wheel at intervals to ensure drivers are still alert.
Issue audible and visual alerts if driver attention drops.
Healthcare AI (Medical Diagnosis Systems, Robotic Surgery)
Double-check confirmations before allowing an AI-suggested diagnosis or surgery step.
Require human approval before AI can administer medication.
Financial AI (Automated Trading Systems, Fraud Detection)
AI-based trading platforms include risk limits that prevent excessive losses.
Fraud detection AI flags suspicious transactions but requires human validation for blocking
accounts.
Why interlocks matter: They prevent users from making dangerous mistakes, ensuring AI operates
as an assistive tool, not an uncontrolled system.
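As a toy illustration, an interlock can be a hard checkpoint in code that refuses to act until a human has explicitly approved. The function, names, and dose limit below are hypothetical, not drawn from any real medical system.

```python
from typing import Optional

class InterlockError(Exception):
    """Raised when a required human checkpoint has not been passed."""

def administer_medication(dose_mg: float, approved_by: Optional[str]) -> str:
    # Interlock 1: a human must explicitly approve before the AI may act.
    if approved_by is None:
        raise InterlockError("human approval required")
    # Interlock 2: a hard safety limit applies regardless of approval.
    if dose_mg > 500:
        raise InterlockError("dose exceeds safety limit")
    return f"administered {dose_mg} mg (approved by {approved_by})"

print(administer_medication(250, approved_by="Dr. Rao"))
```

The key property is that the unsafe path is unreachable by accident: the system raises an error rather than silently proceeding.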
2. CONTROLS – PREVENTING AI FAILURES
What are controls?
Controls are system-based safeguards that limit AI errors and prevent unintended
consequences.
How do controls work?
They restrict AI’s autonomy when high-stakes decisions are involved.
They provide override options so humans can correct or stop AI actions.
Examples of AI Controls in Different Industries
Self-Driving Cars (Waymo, Tesla, etc.)
AI may handle lane-keeping and braking, but human drivers can override and take full
control.
In emergencies, the car automatically slows down or pulls over instead of taking
unexpected actions.
Medical AI (AI-Powered Diagnosis, Robotic Surgery)
AI-assisted cancer detection models highlight possible tumors, but doctors make the final call.
Surgical robots (e.g., Da Vinci Surgical System) allow human surgeons to remain in control, rather
than AI operating autonomously.
AI in Air Traffic Control (Flight Systems, Boeing & Airbus Autopilot)
AI helps adjust altitude and speed, but pilots can manually override AI when necessary.
AI detects weather risks and recommends alternate routes, but human pilots confirm the decision.
Why controls matter: They prevent AI from acting unpredictably, ensuring humans can intervene
whenever necessary.
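A control can be sketched as a restricted autonomy envelope plus a human override that always wins. This is a hypothetical lane-keeping example; the steering units and limits are invented for illustration.

```python
from typing import Optional

def steering_command(ai_steer: float, human_steer: Optional[float],
                     max_ai_steer: float = 5.0) -> float:
    """Control: clamp the AI's steering authority; a human override always wins."""
    if human_steer is not None:
        return human_steer                          # override option: human input wins
    # Restrict the AI's autonomy to a small correction envelope.
    return max(-max_ai_steer, min(max_ai_steer, ai_steer))

print(steering_command(12.0, None))    # AI request clamped to 5.0
print(steering_command(12.0, -2.0))    # human override wins: -2.0
```

Clamping limits how wrong an unpredictable AI action can be, while the override preserves the human's ability to intervene at any moment.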
3. AUDIT TRAILS & LOGS – TRACKING AI DECISIONS FOR ACCOUNTABILITY
What are audit trails?
Audit trails are record-keeping mechanisms that track AI’s decisions, actions, and interactions over time.
Why are they important?
Allow engineers to analyze past AI behavior and detect errors or biases.
Help regulators and companies investigate failures in high-risk applications.
Ensure AI remains accountable and transparent in its decision-making.
Examples of AI Audit Trails in Critical Applications
Self-Driving Cars (Accident Investigation & Safety Improvements)
AI logs sensor data, steering adjustments, and braking decisions in case of accidents.
If a Tesla crashes, investigators can review the AI’s decision-making process to determine fault.
Healthcare AI (Patient Safety & Compliance Monitoring)
AI-based medical imaging systems (e.g., IBM Watson Health, Google DeepMind) track how AI
arrived at a diagnosis.
If AI misdiagnoses a disease, logs help doctors understand why and improve future models.
AI in Financial Systems (Fraud Detection & Compliance)
Banks use AI to detect fraudulent transactions and record each flagged case.
AI audit logs help prevent bias in loan approvals and explain why customers were rejected.
Why audit trails matter: They provide transparency, accountability, and oversight, ensuring AI
remains trustworthy and fair.
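At its simplest, an audit trail is an append-only log of each decision with enough context to reconstruct it later. A minimal sketch follows; the record fields and system name are illustrative, not a real logging API.

```python
import json
import time

class AuditLog:
    """Append-only record of AI decisions for later review."""

    def __init__(self):
        self.records = []

    def record(self, system: str, inputs: dict, decision: str, confidence: float):
        # Each entry captures what was decided, from what inputs, and when.
        self.records.append({
            "ts": time.time(),
            "system": system,
            "inputs": inputs,
            "decision": decision,
            "confidence": confidence,
        })

    def export(self) -> str:
        # Serialize for investigators, regulators, or model developers.
        return json.dumps(self.records, indent=2)

log = AuditLog()
log.record("fraud_detector", {"amount": 9800, "country": "XX"},
           "flag_for_review", 0.83)
print(log.export())
```

A production system would write to tamper-evident storage rather than an in-memory list, but the principle is the same: every decision leaves a reviewable trace.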
INNOVATION GOAL
Innovation-focused AI solutions often start from scientific research and adapt it into
user-friendly commercial products. For example:
Speech recognition research led to virtual assistants like Siri, Alexa, and Google
Assistant.
Machine translation research powered services like Google Translate.
Image recognition AI enabled automatic alt-text for accessibility on websites.
MOVING BEYOND HUMAN-LIKE AI
Unlike science-focused AI, which often tries to mimic humans, innovation-driven AI prioritizes
function over form.
Instead of humanoid robots, engineers build specialized machines suited to specific tasks.
Rovers and drones perform rescue missions better than a bipedal robot.
Surgical “robots” are actually tele-operated tools for precision surgery.
This shift aligns with Lewis Mumford’s concept of avoiding animism in technology. He argued
that early attempts at new machines were often misguided by human and animal models. Instead,
more effective designs emerge when engineers focus on utility rather than imitating human forms.
Examples:
Four-wheeled vehicles outperform two-legged robots in transport.
Airplanes have wings, but they don’t flap like birds.
Exoskeletons and prosthetics enhance human abilities rather than mimicking biological limbs
exactly.
AI IN SOCIAL AND COLLABORATIVE TECHNOLOGIES
Many innovation-focused AI solutions emphasize human connection and collaboration
rather than autonomy:
Google Docs and shared databases enable real-time teamwork.
Zoom, Microsoft Teams, and Webex revolutionized remote work and education during the
COVID-19 pandemic.
Social media platforms connect billions of users but also raise concerns about privacy,
misinformation, and AI-driven manipulation.
These challenges highlight the need for human oversight in AI—combining AI’s efficiency
with human ethical responsibility.
INNOVATION GOAL: KEY POINTS
•The innovation goal focuses on AI as a supertool, helping humans rather than replacing
them.
•AI solutions should balance automation and human control, ensuring safety, reliability,
and usability.
•Functionality matters more than human-like design—robots and AI should be optimized
for their tasks rather than imitating humans.
•AI plays a critical role in collaboration and communication, enhancing teamwork,
education, and remote work.
•AI must be explainable and accountable, ensuring ethical and responsible
implementation.
COMPARISON: SCIENCE GOAL VS. INNOVATION GOAL IN AI

Objective
Science goal: Replicate or surpass human intelligence.
Innovation goal: Develop AI to enhance human capabilities and productivity.

Key Question
Science goal: “Can machines think like humans?”
Innovation goal: “How can AI assist and empower humans?”

Research Focus
Science goal: AI that mimics perception, cognition, and motor abilities.
Innovation goal: AI as supertools that improve human decision-making and efficiency.

Approach
Science goal: Long-term, fundamental research into AI’s cognitive functions.
Innovation goal: Applied research and engineering to solve real-world problems.

Examples of AI Development
Science goal: Artificial General Intelligence (AGI); common-sense reasoning; emotionally aware AI; fully autonomous robots.
Innovation goal: AI-powered recommendation systems; AI in healthcare (decision-support tools); smart assistants (Siri, Alexa, Google Assistant); self-driving car assist features.

Automation vs. Human Control
Science goal: Focus on fully autonomous AI systems.
Innovation goal: Prioritizes human oversight with AI assistance.

Design Approach
Science goal: AI as independent agents capable of learning, reasoning, and making decisions autonomously.
Innovation goal: AI as supertools, designed to work under human control.

Challenges & Risks
Science goal: Risk of AI autonomy without explainability; ethical concerns over AI decision-making; computational complexity.
Innovation goal: Need for transparent, explainable AI; risk of over-reliance on AI in critical tasks; balancing automation with user control.

Real-World Examples
Science goal: Humanoid robots designed to assist in homes and workplaces; AGI systems capable of learning any task.
Innovation goal: AI-powered grammar checkers, recommendation systems, and medical AI that assists doctors but doesn’t replace them.
