M3(11&12)
SCIENCE GOAL
•The science goal aims to understand and replicate human intelligence, with long-term
ambitions for artificial general intelligence (AGI) and autonomous AI.
•AI scientists often view humans as biological machines, which makes full emulation seem a realistic goal.
•Media and hype can exaggerate AI’s progress, misleading the public.
•The shift from automation to autonomy raises ethical and legal questions about AI's
responsibility and rights.
THE INNOVATION GOAL IN AI
To develop AI systems that serve the innovation goal, researchers study human behavior
and social dynamics to ensure user acceptance and ease of use. This often involves:
Design thinking – focusing on user needs and behavior.
User testing and market research – refining AI products to make them intuitive and
beneficial.
Balancing automation and human control – allowing AI to assist in tasks without
removing human oversight.
1. DESIGN THINKING: FOCUSING ON USER NEEDS AND BEHAVIOR
What is Design Thinking?
Design thinking is a problem-solving approach that prioritizes human needs, emphasizing
empathy, experimentation, and iterative design. Instead of starting with technology, AI
developers begin by understanding user challenges and behaviors before creating
solutions.
Key Phases of Design Thinking in AI Development
Empathize – Understand users’ needs, frustrations, and behaviors.
Define – Identify the core problem AI should solve.
Ideate – Brainstorm possible AI-driven solutions.
Prototype – Build quick, testable versions of AI interfaces.
Test & Iterate – Refine AI systems based on user feedback.
Example: AI-Powered Virtual Assistants
Instead of just focusing on speech recognition accuracy, design thinking ensures that AI
assistants like Siri, Alexa, and Google Assistant are intuitive and context-aware.
Developers study how people naturally speak and structure conversations to make
interactions more human-friendly.
AI assistants now understand follow-up questions and multiple requests in a single sentence.
Why it matters: AI should solve real user problems, not just showcase technological
advancements.
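Illustrative sketch (Python): one common way to support follow-up questions is to carry conversational "slots" between turns. The class, slot, and function names below are invented for illustration; they do not describe Siri's, Alexa's, or Google Assistant's internals.

```python
# Minimal sketch of dialogue-context tracking for follow-up questions.
# All names here are illustrative assumptions, not a vendor's API.
from dataclasses import dataclass, field

@dataclass
class DialogueContext:
    """Carries slots (e.g., place, date) between turns so a follow-up
    question can omit information the user already gave."""
    slots: dict = field(default_factory=dict)

    def update(self, **new_slots):
        # A later turn overrides only the slots it actually mentions.
        self.slots.update({k: v for k, v in new_slots.items() if v is not None})

def answer_weather(ctx: DialogueContext, place=None, date=None) -> str:
    ctx.update(place=place, date=date)
    return (f"Weather for {ctx.slots.get('place', 'your location')} "
            f"({ctx.slots.get('date', 'today')}): ...")

ctx = DialogueContext()
print(answer_weather(ctx, place="Paris"))    # "What's the weather in Paris?"
print(answer_weather(ctx, date="tomorrow"))  # "And tomorrow?" -> still Paris
```

The design choice is the point: the assistant resolves "And tomorrow?" not by re-parsing speech more accurately, but by remembering what the user already said.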
2. USER TESTING AND MARKET RESEARCH: REFINING AI FOR INTUITIVENESS AND PRACTICAL BENEFITS
Once an AI concept is designed, it must be tested with real users to ensure it is intuitive,
effective, and beneficial. This involves:
User Testing – Ensuring AI Works for People
What it is: Observing how real users interact with AI to identify problems, confusion, or usability
gaps.
Methods:
✔ A/B Testing – Comparing two AI versions to see which performs better (see the sketch after this list).
✔ Usability Testing – Asking users to complete tasks while analyzing their experience.
✔ Eye-Tracking Studies – Understanding how users visually navigate AI interfaces.
✔ Think-Aloud Testing – Users describe their thoughts while interacting with AI, revealing
confusion points.
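To make the A/B testing step concrete, here is a minimal sketch that compares task-completion rates between two assistant versions using a two-proportion z-test. The user counts are invented for illustration.

```python
# A/B test sketch: did version B's completion rate really improve over A's?
# The counts below are made-up example data, not real study results.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided test
    return z, p_value

# Version A: 412 of 500 users completed the task; Version B: 448 of 500.
z, p = two_proportion_ztest(412, 500, 448, 500)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests B's gain is not chance
```

A small p-value (conventionally below 0.05) suggests the difference between versions reflects a real usability improvement rather than random variation.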
Market Research – Understanding Real-World Adoption
What it is: Studying customer needs, preferences, and expectations to align AI products with market
demands.
Methods:
✔ Surveys & Interviews – Gathering opinions on AI usability and functionality.
✔ Competitor Analysis – Studying existing AI solutions to find improvement opportunities.
✔ Behavioral Data Analysis – Understanding how people interact with AI in real-world settings.
Example: AI-Powered Image Recognition (Google Photos & Apple Photos)
Early AI image tagging misclassified people and objects, leading to backlash.
User feedback led to better AI training and error correction.
Now, users can manually adjust AI-generated tags to improve future recommendations.
Why it matters: AI must continuously learn from real users to remain effective and relevant.
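A hedged sketch of such a feedback loop: user corrections are appended to a log and later reused as labeled training examples. The file name and field names are assumptions for illustration, not any vendor's actual pipeline.

```python
# Sketch of a tag-correction feedback loop, assuming corrections are
# queued and periodically folded into retraining.
import json, time

CORRECTIONS_LOG = "tag_corrections.jsonl"

def record_correction(photo_id: str, predicted_tag: str, corrected_tag: str):
    """Store a user's manual fix as a future ground-truth training example."""
    entry = {"ts": time.time(), "photo_id": photo_id,
             "predicted": predicted_tag, "corrected": corrected_tag}
    with open(CORRECTIONS_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def load_retraining_examples() -> list:
    """Corrected tags become labels for the next training run."""
    with open(CORRECTIONS_LOG) as f:
        return [json.loads(line) for line in f]

record_correction("img_0042", predicted_tag="cat", corrected_tag="dog")
print(load_retraining_examples()[-1]["corrected"])  # dog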
3. BALANCING AUTOMATION AND HUMAN CONTROL: ENSURING AI ASSISTS WITHOUT REPLACING HUMANS
AI should enhance human decision-making, not take full control. Finding the right balance is critical to
avoiding over-reliance on AI while still benefiting from its efficiency.
Three Levels of AI Control & Automation
Full Human Control – AI provides insights, but humans make all decisions.
Examples:
Medical AI suggests diagnoses, but doctors make final treatment decisions.
AI-powered writing tools suggest edits, but users accept or reject them.
Shared Control – AI automates repetitive tasks but allows human intervention when needed.
Examples:
Self-driving car assist features (lane keeping, adaptive cruise control) help drivers but don’t take
full control.
AI in customer support handles simple queries but transfers complex issues to human agents.
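As a sketch of shared control, the routing logic below lets the AI answer only high-confidence, recognized queries and hands everything else to a human agent. The confidence threshold and the toy intent classifier are stand-ins, not a specific product's behavior.

```python
# Shared-control sketch for customer support: AI answers only when it is
# confident; otherwise a human keeps control. Threshold is an assumption.
CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tuned per deployment

def classify_intent(message: str):
    """Stand-in for a real intent model: returns (intent, confidence)."""
    if "refund" in message.lower():
        return "refund_status", 0.95
    return "unknown", 0.40

def route(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence >= CONFIDENCE_THRESHOLD and intent != "unknown":
        return f"AI handles: {intent}"
    return "Escalated to a human agent"  # humans keep the hard cases

print(route("Where is my refund?"))                              # AI handles it
print(route("My order arrived broken and I need legal advice"))  # escalated
```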
Full AI Automation (Only in Safe Scenarios) – AI handles tasks entirely without human input,
reserved for cases where failure has low consequences.
Examples:
Spam filters in email automatically remove junk messages.
AI in digital cameras automatically adjusts lighting and focus for better photos.
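A minimal full-automation sketch, assuming scikit-learn is available: a naive Bayes spam filter that acts without human review, acceptable here because a misclassified email only lands in the wrong folder. The training data is a toy set for illustration.

```python
# Full-automation sketch: a naive Bayes spam filter acting on its own.
# Toy data; a real filter trains on a large labeled corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "win a free prize now", "claim your free money",  # spam
    "meeting moved to 3pm", "lunch tomorrow?",        # ham
]
train_labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
model = MultinomialNB().fit(X, train_labels)

new_mail = ["free prize waiting for you"]
print(model.predict(vectorizer.transform(new_mail)))  # ['spam']
```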
Example: AI in Healthcare
Full automation can be risky (e.g., an AI making a medical diagnosis without doctor verification).
Shared control is safer: AI assists in analyzing scans, but doctors validate the findings.
Why it matters: AI should empower humans, not take away control over critical decisions.
BALANCING AUTOMATION AND HUMAN CONTROL
The HCAI framework emphasizes finding a balance between automation and user control:
Some applications require full automation, such as airbag deployment or pacemakers, where instant
responses are necessary.
Other applications demand full human control, such as bicycle riding or playing the piano.
Most AI applications lie in between, combining automation and human oversight to create safe,
reliable, and explainable systems.
To achieve this balance, AI engineers implement:
Interlocks to prevent human mistakes.
Controls to prevent AI failures.
Audit trails and logs to track system decisions and identify errors (useful in critical applications like
self-driving cars and medical devices).
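A sketch of such an audit trail: each decision is appended to a log with a hash chain so that missing or altered records can be detected. The field names and chaining scheme are illustrative, not a regulatory standard.

```python
# Audit-trail sketch: every AI decision is appended to a tamper-evident
# log so errors can be traced afterwards. All names are illustrative.
import hashlib, json, time

AUDIT_LOG = "decisions.jsonl"
_prev_hash = "0" * 64  # start of the chain

def log_decision(system: str, inputs: dict, decision: str, model_version: str):
    global _prev_hash
    record = {
        "ts": time.time(),
        "system": system,
        "inputs": inputs,
        "decision": decision,
        "model_version": model_version,
        "prev_hash": _prev_hash,  # links records so deletions are detectable
    }
    _prev_hash = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    record["hash"] = _prev_hash
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("lane_keeping", {"speed_kmh": 92, "lane_offset_m": 0.4},
             "steer_correction", model_version="v2.3.1")
```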
1. INTERLOCKS – PREVENTING HUMAN MISTAKES
What are interlocks?
Interlocks are fail-safe mechanisms designed to prevent humans from making errors when
interacting with AI systems. They act as safeguards to ensure users cannot accidentally trigger unsafe
or incorrect actions.
How do interlocks work?
They introduce checkpoints, warnings, or permissions before allowing critical actions.
Examples of Interlocks in AI Applications
Self-Driving Cars (Tesla, Waymo, etc.)
Require hands on the steering wheel at intervals to ensure drivers are still alert.
Issue audible and visual alerts if driver attention drops.
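Sketching the self-driving interlock above in code: a timer tracks when hands were last detected on the wheel, warns first, and only then falls back to a safe action. All timing values are invented for illustration, not any manufacturer's settings.

```python
# Interlock sketch: block continued hands-free operation by escalating
# from warnings to a safe fallback. Timing values are assumptions.
import time

HANDS_ON_TIMEOUT_S = 30  # assumed: require hands on wheel every 30 s
ALERT_ESCALATION_S = 10  # assumed: escalate if the alert is ignored 10 s

class AttentionInterlock:
    def __init__(self):
        self.last_hands_on = time.monotonic()

    def hands_detected(self):
        self.last_hands_on = time.monotonic()

    def check(self) -> str:
        idle = time.monotonic() - self.last_hands_on
        if idle < HANDS_ON_TIMEOUT_S:
            return "ok"
        if idle < HANDS_ON_TIMEOUT_S + ALERT_ESCALATION_S:
            return "visual_and_audible_alert"  # warn before acting
        return "disengage_and_slow_down"       # fail safe, not fail silent

interlock = AttentionInterlock()
interlock.hands_detected()
print(interlock.check())  # 'ok' right after hands are detected
```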
THE INNOVATION GOAL: KEY TAKEAWAYS
•The innovation goal focuses on AI as a supertool, helping humans rather than replacing
them.
•AI solutions should balance automation and human control, ensuring safety, reliability,
and usability.
•Functionality matters more than human-like design—robots and AI should be optimized
for their tasks rather than imitating humans.
•AI plays a critical role in collaboration and communication, enhancing teamwork,
education, and remote work.
•AI must be explainable and accountable, ensuring ethical and responsible
implementation.
COMPARISON: SCIENCE GOAL VS. INNOVATION GOAL IN AI

Objective
• Science Goal (Understanding Intelligence): Replicate or surpass human intelligence.
• Innovation Goal (Human-Centered AI Tools): Develop AI to enhance human capabilities and productivity.

Key Question
• Science Goal: "Can machines think like humans?"
• Innovation Goal: "How can AI assist and empower humans?"

Research Focus
• Science Goal: AI that mimics perception, cognition, and motor abilities.
• Innovation Goal: AI as supertools that improve human decision-making and efficiency.

Approach
• Science Goal: Long-term, fundamental research into AI's cognitive functions.
• Innovation Goal: Applied research & engineering to solve real-world problems.

Examples of AI Development
• Science Goal: Artificial General Intelligence (AGI); common-sense reasoning; emotionally aware AI; fully autonomous robots.
• Innovation Goal: AI-powered recommendation systems; AI in healthcare (decision-support tools); smart assistants (Siri, Alexa, Google Assistant); self-driving car assist features.

Automation vs. Human Control
• Science Goal: Focus on fully autonomous AI systems.
• Innovation Goal: Prioritizes human oversight with AI assistance.

Design Approach
• Science Goal: AI as independent agents capable of learning, reasoning, and making decisions autonomously.
• Innovation Goal: AI as supertools, designed to work under human control.

Challenges & Risks
• Science Goal: Risk of AI autonomy without explainability; ethical concerns over AI decision-making; computational complexity.
• Innovation Goal: Need for transparent, explainable AI; risk of over-reliance on AI in critical tasks; balancing automation with user control.

Real-World Examples
• Science Goal: Humanoid robots designed to assist in homes and workplaces; AGI systems capable of learning any task.
• Innovation Goal: AI-powered grammar checkers, recommendation systems, and medical AI that assists doctors but doesn't replace them.