Unit-1 Introduction to AI
By,
Nabaraj Bahadur Negi
Contents
• Intelligence, Artificial Intelligence (AI)
• AI Perspectives: acting and thinking humanly, acting and thinking rationally
• History of AI
• Foundations of AI: Philosophy, Economics, Psychology, Sociology, Linguistics,
Neuroscience, Mathematics, Computer Science, Control Theory
• AI Ethics and Responsible AI
• Applications of AI.
What is Artificial Intelligence (AI)?
• Intelligence is
• ability to reason
• ability to understand
• ability to create
• ability to learn from experience
• ability to plan and execute complex tasks
• Artificial is
• made by humans as a copy or imitation of something natural.
• Major AI textbooks define artificial intelligence as "the study and design of intelligent
agents," where an intelligent agent is a system that perceives its environment and takes
actions which maximize its chances of success.
• The definitions of AI given in various textbooks fall into four approaches, summarized below:
Systems that think like humans:
• “The exciting new effort to make computers think… machines with minds, in the full and literal sense.” (Haugeland, 1985)
• “The automation of activities that we associate with human thinking, activities such as decision making, problem solving, learning…” (Bellman, 1978)
Systems that think rationally:
• “The study of mental faculties through the use of computational models.” (Charniak and McDermott, 1985)
• “The study of the computations that make it possible to perceive, reason, and act.” (Winston, 1992)
Systems that act like humans:
• “The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)
• “The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)
Systems that act rationally:
• “Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)
• “AI is concerned with intelligent behavior in artifacts.” (Nilsson, 1998)
Contd….
• The top row is concerned with thought processes and reasoning, whereas the bottom row addresses behavior.
• The definitions on the left measure success in terms of fidelity to human performance, whereas the definitions on the right measure success against an ideal concept of intelligence, called rationality.
• "In 1991, the New York businessman Hugh Loebner announces the prize competition,
offering a $100,000 prize for the first computer to pass the Turing test. However, no
AI program to till date, come close to passing an undiluted Turing test".
Chatbots that have attempted the Turing test:
• ELIZA: ELIZA was a natural language processing program created by Joseph Weizenbaum. It was built to demonstrate conversation between machines and humans, and was one of the first chatterbots to attempt the Turing test (a minimal sketch of its pattern-matching style follows this list).
• Parry: Parry was a chatterbot created by Kenneth Colby in 1972. It was designed to simulate a person with paranoid schizophrenia, a chronic mental disorder, and was described as "ELIZA with attitude." Parry was tested using a variation of the Turing test in the early 1970s.
• Eugene Goostman: Eugene Goostman was a chatbot developed in Saint Petersburg in 2001 and presented as a 13-year-old virtual boy. It has competed in a number of Turing test contests; in June 2012, it won an event promoted as the largest-ever Turing test contest, convincing 29% of the judges that it was human.
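The sketch below illustrates, in Python, the keyword-and-template style of ELIZA-like chatterbots. It is an illustrative reconstruction, not Weizenbaum's original program, and the rules shown are invented examples.

import re

# A few invented ELIZA-style rules: a regex pattern plus a response
# template that reuses the matched fragment of the user's input.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the first rule-based response that matches, else a default."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(respond("I am feeling anxious"))  # -> How long have you been feeling anxious?

The real ELIZA additionally ranked keywords and swapped pronouns (e.g., "my" to "your"); this sketch omits those details.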
Features required for a machine to pass the Turing test
• Natural language processing: to communicate successfully in a human language;
• Knowledge representation: to store what it knows or hears;
• Automated reasoning: to answer questions and to draw new conclusions;
• Machine learning: to adapt to new circumstances and to detect and extrapolate patterns.
• To pass the total Turing test, a robot will need
• Computer vision and speech recognition to perceive the world;
• Robotics to manipulate objects and move about.
2. Thinking Humanly: Cognitive Modeling Approach
• To say that a program thinks like a human, we must know how humans think. We can
learn about human thought in three ways:
• introspection—trying to catch our own thoughts as they go by;
• psychological experiments—observing a person in action;
• brain imaging—observing the brain in action.
• Once we have a sufficiently precise theory of the mind, it becomes possible to express that theory as a computer program.
• Unfortunately, there is as yet no precise theory of the thinking processes of the human brain, so it is not yet possible to build a machine that thinks the way a human brain does.
3. Thinking Rationally: Laws of Thought Approach
• First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain.
• In the laws of thought approach to AI, the emphasis is on making correct inferences.
• Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to reason logically to a conclusion and then act on that conclusion.
• On the other hand, there are also ways of acting rationally that cannot be said to involve inference.
• For example, recoiling from a hot stove is a reflex action that is usually more successful than a slower action taken after careful deliberation.
Foundations of Artificial Intelligence
Psychology:
How do humans and animals think and act?
• Psychology investigates human cognition, learning, perception, and behavior. AI benefits
from psychological research by incorporating models of human-like reasoning, decision-
making, and problem-solving into intelligent systems.
• The origins of scientific psychology are usually traced to the work of the German physicist
Hermann von Helmholtz (1821-1894) and his student Wilhelm Wundt (1832-1920).
• Helmholtz applied the scientific method to the study of human vision, and his Handbook of
Physiological Optics is even now described as “the single most important treatise on the physics
and physiology of human vision” (Nalwa, 1993, p.15).
• In 1879, Wundt opened the first laboratory of experimental psychology at the University of
Leipzig. Wundt insisted on carefully controlled experiments in which his workers would perform
a perceptual or associative task while introspecting on their thought processes.
• Behaviorism discovered a lot about rats and pigeons, but had less success at understanding
humans.
Philosophy:
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
• Philosophy addresses questions about intelligence, consciousness, ethics, and knowledge representation. It influences AI through discussions on machine ethics, explainable AI, and the societal implications of AI technologies.
• Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing the rational part
of the mind. He developed an informal system of syllogisms for proper reasoning, which in
principle allowed one to generate conclusions mechanically, given initial premises.
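To make "generating conclusions mechanically" concrete, here is a minimal Python sketch of one classic syllogism pattern (All M are P; S is M; therefore S is P). The rules and facts are invented for illustration, and real syllogistic logic covers many more forms.

# Toy application of the "Barbara" syllogism: All M are P; S is M => S is P.
rules = {("man", "mortal")}      # "All men are mortal"
facts = {("Socrates", "man")}    # "Socrates is a man"

def conclude(rules, facts):
    """Mechanically derive new facts from the premises."""
    derived = set()
    for individual, category in facts:
        for middle, predicate in rules:
            if category == middle:
                derived.add((individual, predicate))
    return derived

print(conclude(rules, facts))  # -> {('Socrates', 'mortal')}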
• Much later, Ramon Lull (d. 1315) had the idea that useful reasoning could actually be carried out by a mechanical artifact – "concept wheels".
• Thomas Hobbes (1588-1679) proposed that reasoning was like numerical computation that “we add
and subtract in our silent thoughts.” The automation of computation itself was already well under
way.
• Around 1500, Leonardo da Vinci (1452-1519) designed but did not build a mechanical calculator;
recent reconstructions have shown the design to be functional.
• The first known calculating machine was constructed around 1623 by the German scientist
Wilhelm Schickard (1592-1635).
• The Pascaline, built in 1642 by Blaise Pascal (1623-1662), is more famous. Pascal wrote that “the
arithmetical machine produces effects which appear nearer to thought than all the actions of
animals.”
• Gottfried Wilhelm Leibniz (1646-1716) built a mechanical device intended to carry out
operations on concepts rather than numbers, but its scope was rather limited.
• Now that we have the idea of a set of rules that can describe the formal, rational part of the mind,
the next step is to consider the mind as a physical system.
• Rene Descartes (1596-1650) gave the first clear discussion of the distinction between mind and
matter and of the problems that arise.
• One problem with a purely physical conception of the mind is that it seems to leave little room
for free will: if the mind is governed entirely by physical laws, then it has no more free will than
a rock “deciding” to fall toward the center of the earth.
Mathematics:
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
• Mathematics provides the foundation for AI algorithms, including calculus, linear algebra,
probability theory, statistics, and optimization methods. AI applications heavily rely on
mathematical concepts for modeling, analysis, and decision-making.
• Philosophers staked out most of the important ideas of AI, but the leap to a formal science
required a level of mathematical formalization in three fundamental areas: logic, computation,
and probability.
• The idea of formal logic can be traced back to the philosophers of ancient Greece, but its
mathematical development really began with the work of George Boole (1815-1864), who
worked out the details of propositional or Boolean logic.
• In 1879, Gottlob Frege (1848-1925) extended Boole’s logic to include objects and relations,
creating the first-order logic that is used today as the most basic knowledge representation
system.
• Alfred Tarski (1902-1983) introduced a theory of reference that shows how to relate the
objects in a logic to objects in the real world.
• The first nontrivial algorithm is thought to be Euclid’s algorithm for computing greatest common divisors.
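For reference, Euclid's algorithm is short enough to state directly; here it is in Python, in its standard modern form.

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 198))  # -> 18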
Linguistics:
How does language relate to thought?
• Linguistics studies language structure, semantics, pragmatics, and language processing. In AI,
linguistics informs natural language processing (NLP) tasks such as machine translation,
sentiment analysis, and dialogue systems.
• In 1957, B. F. Skinner published Verbal Behavior. This was a comprehensive, detailed account of the
behaviorist approach to language learning, written by the foremost expert in the field.
• A review of the book became as well known as the book itself, and served to almost kill off interest in
behaviorism. The author of the review was Noam Chomsky, who had just published a book on his
own theory, Syntactic Structures.
• Chomsky showed how the behaviorist theory did not address the notion of creativity in language. It
did not explain how a child could understand and make up sentences that he or she had never heard
before.
• Chomsky’s theory, based on syntactic models going back to the Indian linguist Panini (c. 350 B.C.), could explain this, and unlike previous theories, it was formal enough that it could in principle be programmed.
• Modern linguistics and AI, then, were “born” at about the same time and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing.
• Understanding language requires an understanding of the subject matter and context, not
just an understanding of the structure of sentences.
Neuroscience:
How do brains process information?
• Neuroscience studies the brain's structure, function, and neural processes. AI draws
inspiration from neuroscience for developing neural networks, deep learning algorithms, and
cognitive models of intelligence.
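As a hint of what AI borrows from neuroscience, the sketch below shows a single artificial neuron in Python: a weighted sum of inputs squashed through a sigmoid, loosely inspired by how a biological neuron integrates incoming signals. The weights, bias, and inputs are made-up illustrative values.

import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum passed through a sigmoid."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # squash into (0, 1)

# Fires strongly only when both inputs are active (AND-like behavior).
print(neuron([1.0, 1.0], [4.0, 4.0], -6.0))  # -> ~0.88
print(neuron([0.0, 1.0], [4.0, 4.0], -6.0))  # -> ~0.12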
• Paul Broca's (1824-1880) study of aphasia (speech deficit) in brain-damaged patients in 1861
reinvigorated the field and persuaded the medical establishment of the existence of localized areas
of the brain responsible for specific cognitive functions.
• Despite these advances, we are still a long way from understanding how any of these cognitive
processes actually work.
• The truly amazing conclusion is that a collection of simple cells can lead to thought, action, and
consciousness or, in other words, that brains cause minds (Searle, 1992).
• Brains and computers perform quite different tasks and have different properties.
• Moore’s Law predicts that the CPU’s gate count will equal the brain’s neuron count around 2020.
Moore’s Law says that the number of transistors per square inch doubles every 1 to 1.5 years.
Human brain capacity doubles roughly every 2 to 4 million years.
• Even though a computer is a million times faster in raw switching speed, the brain ends up being
100,000 times faster at what it does.
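A rough sense of the doubling arithmetic behind such projections can be had from a short Python sketch; the starting gate count, target neuron count, and 1.5-year doubling period below are assumed round numbers, not measured data.

import math

# From ~10 million gates (circa 2000) to ~100 billion neurons (assumed figures).
start, target = 1e7, 1e11
doublings = math.log2(target / start)  # ~13.3 doublings needed
years = doublings * 1.5                # at one doubling per 1.5 years
print(f"{doublings:.1f} doublings, ~{years:.0f} years")  # -> ~20 years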
Computer engineering:
How can we build an efficient computer?
• Computer science encompasses algorithms, data structures, programming languages,
software engineering, and computational theory. AI research in computer science focuses on
developing AI algorithms, systems, and applications.
• The first operational computer was the electromechanical Heath Robinson, built in 1943 by Alan Turing’s team for a single purpose: deciphering German messages. In 1943, the same group developed the Colossus, a powerful general-purpose machine based on vacuum tubes. The first operational programmable computer was the Z-3, the invention of Konrad Zuse in Germany in 1941.
Control theory :
How can artifacts operate under their own control?
• Control theory deals with regulating and optimizing systems based on feedback and control
signals. In AI, control theory is applied to robotics, autonomous systems, industrial
automation, and adaptive control algorithms.
• Ktesibios of Alexandria (c. 250 BCE) built the first self-controlling machine: a water clock with a
regulator that maintained a constant flow rate.
• Other examples of self-regulating feedback control systems include the steam engine governor,
created by James Watt (1736–1819), and the thermostat, invented by Cornelis Drebbel (1572–1633),
who also invented the submarine. James Clerk Maxwell (1868) initiated the mathematical theory of
control systems.
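To make the feedback idea concrete, here is a minimal thermostat-style proportional controller simulated in Python; the gain, leakage constant, and temperatures are invented for illustration.

# Proportional feedback control of a room temperature (illustrative values).
setpoint = 20.0  # desired temperature (degrees C)
temp = 15.0      # starting room temperature
gain = 0.3       # proportional gain: how strongly to correct the error

for step in range(10):
    error = setpoint - temp                  # feedback: deviation from the goal
    heating = gain * error                   # control signal proportional to error
    temp += heating - 0.05 * (temp - 15.0)   # apply heat, minus leakage to ambient
    print(f"step {step}: temp = {temp:.2f}")  # temperature climbs toward ~19.3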
AI Ethics and Responsible AI
Strategies for Promoting Responsible AI Development
To address these ethical challenges and promote responsible AI development, there are several steps
that can be taken.
Bias detection and mitigation:
• It is important to work to identify potential sources of bias in AI systems and develop strategies to
mitigate them. This can involve careful data selection, reweighting, or using techniques such as
adversarial training.
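As one concrete flavor of such mitigation, the sketch below reweights training examples in Python so that each group contributes equal total weight regardless of how many examples it has; the group labels are invented illustration data.

from collections import Counter

groups = ["A", "A", "A", "A", "B", "B"]  # made-up group label per training example

# Give each group equal total weight despite unequal example counts.
counts = Counter(groups)
weights = [1.0 / (len(counts) * counts[g]) for g in groups]

print(weights)       # -> A examples get 0.125 each, B examples get 0.25 each
print(sum(weights))  # -> 1.0 (weights normalized over the dataset)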
Fairness testing:
• Rigorous testing can be used to ensure that AI systems are fair and unbiased. This can involve using
techniques such as statistical parity, equal opportunity, or equalized odds.
• For example, when developing an AI-powered credit scoring system, it is important to ensure that
the system is not biased against any particular demographic group and that it treats everyone fairly
and equally.
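As an illustration, the sketch below computes a statistical parity gap in Python: the difference in positive-prediction rates between two groups. The predictions and group labels are invented illustration data; a gap near zero is what this fairness test looks for.

# Statistical parity: positive-prediction rates should be similar across groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]                  # 1 = approved (made-up predictions)
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # made-up group memberships

def positive_rate(group):
    selected = [p for p, g in zip(preds, groups) if g == group]
    return sum(selected) / len(selected)

gap = positive_rate("A") - positive_rate("B")
print(f"parity gap: {gap:.2f}")  # -> 0.50, a large gap suggesting possible bias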
Transparency and explainability:
• AI systems should be designed to be transparent and explainable, so that stakeholders can
understand how decisions are being made. This can help build trust in the system and prevent
unintended consequences.
Ethical design principles:
• Ethical design principles can be used to guide the development of AI systems, ensuring that they
are designed to be ethical and responsible from the outset.
• For example, it is important to ensure that the data used to train an AI system is diverse and
representative, and that the system does not perpetuate any existing inequalities or biases.
Ethical AI rests on three pillars of equal importance:
• Accountability: the need to explain and justify the system’s decisions and actions to its partners, users, and others with whom the system interacts. To ensure accountability, decisions must be derivable from, and explained by, the decision-making algorithms used.
• Responsibility: refers to the role of the people involved and to the capability of AI systems to answer for their decisions and identify errors or unexpected results. As the chain of responsibility grows, means are needed to link the AI system’s decisions to the fair use of data and to the actions of the stakeholders involved in the system’s decisions.
• Transparency: the need to describe, inspect, and reproduce the mechanisms through which AI systems make decisions and learn to adapt to their environment, and to govern the data used and created. Current AI algorithms are essentially black boxes.
Applications of AI
Healthcare: AI is used for medical imaging analysis, disease diagnosis, drug discovery, personalized
treatment plans, and patient data analysis for better healthcare outcomes.
Finance: AI is employed in fraud detection, algorithmic trading, credit scoring, risk assessment, and
personalized financial services based on customer behavior analysis.
Retail: AI powers recommendation systems, demand forecasting, supply chain optimization,
inventory management, and customer service chatbots.
Manufacturing: AI is used for predictive maintenance, quality control, process optimization, supply
chain management, and autonomous robots for assembly and logistics.
Transportation: AI enables autonomous vehicles, route optimization, traffic management, predictive
maintenance for vehicles and infrastructure, and intelligent transportation systems.
Marketing: AI is used for customer segmentation, personalized marketing campaigns, sentiment
analysis, social media monitoring, and content generation.
Education: AI applications include personalized learning platforms, automated grading and
feedback systems, educational content generation, and adaptive learning environments.
Agriculture: AI is used for precision agriculture, crop monitoring, yield prediction, pest
detection, and autonomous farming equipment.
Cybersecurity: AI is employed for threat detection, anomaly detection, user behavior analysis,
security automation, and response orchestration.
Smart Cities: AI is used for traffic management, energy optimization, waste management,
public safety monitoring, and infrastructure maintenance.
Thank you