
Unit 1 PPT

The document discusses artificial intelligence (AI), including definitions of AI, goals of AI, what comprises AI, problems with AI, and challenges for AI in 2020. Some key challenges discussed are limited knowledge of AI, the 'black box' problem of not understanding how AI arrives at predictions, requiring high computing power, potential for AI bias, and issues with data scarcity.

Uploaded by

Dhruv Dalwadi

ARTIFICIAL INTELLIGENCE

UNIT 1
What is Artificial Intelligence (AI)?

• In today's world, technology is growing very fast, and we encounter new technologies every day.
• One of the booming technologies of computer science is Artificial Intelligence, which is ready to create a new revolution in the world by making intelligent machines. AI is now all around us, spanning a variety of subfields ranging from the general to the specific, such as self-driving cars, playing chess, proving theorems, composing music, painting, etc.
• AI is one of the fascinating and universal fields of computer science, with great scope in the future. It aims to make a machine work like a human.

Artificial Intelligence is composed of two words, Artificial and Intelligence, where "artificial" means "man-made" and "intelligence" means "thinking power"; hence AI means "a man-made thinking power." So, we can define AI as:

"A branch of computer science by which we can create intelligent machines that can behave like humans, think like humans, and make decisions."

Artificial Intelligence exists when a machine has human-like skills such as learning, reasoning, and problem solving.
With Artificial Intelligence, you do not need to preprogram a machine to do some work; instead, you can create a machine with programmed algorithms that can work with its own intelligence, and that is the awesomeness of AI.
It is believed that AI is not a new idea: some people say that, as per Greek myth, there were mechanical men in early days which could work and behave like humans.
Why Artificial Intelligence?

Before learning about Artificial Intelligence, we should know the importance of AI and why we should learn it.
Following are some main reasons to learn about AI:

•With the help of AI, you can create software or devices that can solve real-world problems easily and accurately, such as health issues, marketing, and traffic issues.

•With the help of AI, you can create your personal virtual Assistant,
such as Cortana, Google Assistant, Siri, etc.
•With the help of AI, you can build robots that can work in environments where human survival is at risk.
•AI opens a path to other new technologies, new devices, and new opportunities.
Goals of Artificial Intelligence

Following are the main goals of Artificial Intelligence:

1.Replicate human intelligence

2.Solve Knowledge-intensive tasks

3.An intelligent connection of perception and action


4.Building a machine which can perform tasks that require human intelligence, such as:

 1. Proving a theorem
 2. Playing chess
 3. Planning a surgical operation
 4. Driving a car in traffic

5.Creating a system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and advise its user.
What Comprises Artificial Intelligence?

• Artificial Intelligence is not just a part of computer science; it is vast and draws on many other factors that contribute to it. To create AI, we should first know how intelligence is composed. Intelligence is an intangible capability of our brain that combines reasoning, learning, problem-solving, perception, language understanding, etc.
To achieve these capabilities in a machine or software, Artificial Intelligence requires the following disciplines:

•Mathematics
•Biology
•Psychology
•Sociology
•Computer Science
•Neuroscience (the study of neurons)
•Statistics
The AI Problems
AI problems are the challenges and limitations that hinder the development and adoption of artificial intelligence. Some of them include:

•Lack of AI transparency and explainability
•Job losses due to AI automation
•Social manipulation and surveillance
through AI algorithms
•Lack of data privacy using AI tools
•Biases and inequality due to AI
•Weakening ethics and goodwill because of
AI
•Safety and trust issues with AI
•Computation power and resource
constraints for AI
•Irrecoverable consequences of AI actions
Top Challenges for Artificial Intelligence in
2020
There’s no doubt that Artificial Intelligence is very popular with it being a hot
topic in tech circles! Many companies already use it in their business
operations with huge successes (ever heard of Google, Facebook, Amazon?!)
But there are still many real-world challenges, especially for small and
medium-sized companies to fully embrace Artificial Intelligence. Some
companies believe this is just because AI is not needed in their corporate
culture, while others think it's because there is not enough quality data. Other
reasons can be because companies don’t have access to highly skilled AI
professionals or they don’t have the infrastructure to sustain high-level AI
solutions.
Most of these challenges for Artificial Intelligence faced by companies
can be handled if they are keen to move ahead into the AI market. And so,
this article details some of these challenges and how companies can
overcome them to implement AI in their work culture. So let’s see these
challenges now!

1. Limited Knowledge
Artificial Intelligence may be a buzzword in tech circles but there are
very few people who understand what it is. Many myths are floating
about Artificial Intelligence such as only big companies like Google,
Facebook, etc. have AI capabilities or even that AI can become
smarter than humans and end the world!
This lack of knowledge about how Artificial Intelligence is
practically implemented in the day to day operations of
companies means that it is very difficult for smaller and
mid-level companies to use it successfully. Another factor that
leads to limited knowledge of AI is that there are very few AI
experts that can apply AI solutions to real-life business
problems. Most smaller companies struggle to find good
AI talent to form their in-house AI team. However, a
solution is for these companies to outsource their
Artificial Intelligence and Data Science teams.
2. Black Box Problem
Artificial Intelligence algorithms are like black boxes, which
means humans know what prediction the algorithm generated
but not how it arrived at that prediction.
This means that people have no means of understanding the inner
working of AI algorithms. This makes them slightly unreliable. If
the predictions generated are the same as those anticipated by the
AI professionals then that’s great, but what if they are not? There is
no way to understand how AI algorithms arrive at their predictions.
An approach that aims to solve this problem is Local interpretable
model-agnostic explanations or LIME.
This means that the AI algorithm will also provide the pieces of
data that led to its eventual prediction. So if humans are
provided the rationale behind why an algorithm made a
particular prediction, it eliminates the black box problem and
also makes the algorithm more trustworthy in general.
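The core idea behind LIME can be sketched in a few lines: perturb the input around a single example, query the black-box model on the perturbed samples, and fit a simple weighted linear model whose coefficients explain the local prediction. The black-box model and data below are invented for illustration; a real application would use the `lime` package rather than this minimal sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# A "black box": a random forest trained on a hidden rule (y = 3*x0 - 2*x1).
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] - 2 * X[:, 1]
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def lime_explain(model, x, n_samples=1000, width=0.5):
    """LIME-style sketch: fit a local linear surrogate around the point x."""
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))
    # 2. Query the black box on the perturbed samples.
    preds = model.predict(Z)
    # 3. Weight samples by proximity to x (RBF kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width ** 2))
    # 4. Fit an interpretable weighted linear model; its coefficients
    #    are the local feature attributions.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=w)
    return surrogate.coef_

coefs = lime_explain(black_box, np.zeros(4))
print(coefs)  # features 0 and 1 should dominate, with opposite signs
```

The surrogate's coefficients recover the hidden rule near the chosen point, which is exactly the "rationale" described above: the pieces of data that drove this one prediction.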

3. High Computing Power


Artificial Intelligence is becoming more and more popular but it takes up
a lot of computing power to train the AI. As deep learning algorithms
become more and more complex, it becomes even more difficult to
arrange the number of cores and GPUs they require to work efficiently.
This is the reason that Artificial Intelligence is still underused in some
fields like asteroid tracking, healthcare deployment, even though it could
contribute a lot of value. Another factor is that Artificial Intelligence
algorithms require a supercomputer’s computing power at complex
levels of computing. There are only a few supercomputers in the world
and they are expensive, so this limits the types of algorithms that can be
implemented and also reduces the companies that can try high-level AI
to those that have high levels of resources. The integration of cloud
computing and parallel processing systems has made it easier to work
with artificial intelligence, but it is still a power that few can afford to
utilize fully.
4. Artificial Intelligence Bias
Artificial Intelligence Bias is also a challenge for companies to fully
integrate AI into their business practices. AI bias can unconsciously
enter into the Artificial Intelligence Systems that are developed by
human beings as they are inherently biased.
The bias may also creep into the systems because of the flawed data
that is generated by human beings. For example, Amazon recently
found out that their Machine Learning based recruiting algorithm was
biased against women. This algorithm was based on the number of
resumes submitted over the past 10 years and the candidates hired. And
since most of the candidates were men, so the algorithm also favored
men over women.
So the clear question for companies is: how to tackle this bias? How to
make sure that Artificial Intelligence is not racist or sexist like some
humans in this world? The main way to handle this is for AI researchers
to manually try to remove the bias while developing and training the AI
systems and selecting the data.

5. Data Scarcity

Artificial Intelligence algorithms learn from the data already available.
Therefore, the better the data they are provided, the better the final
algorithm will be. However, this requires a lot of data, which may
sometimes not even be available.
The only way to resolve this is to understand which data is available
and which data is missing. When AI experts know what data is missing,
they may be able to obtain it if it is publicly available, or even buy it
from third-party data vendors.

However, some data is difficult or even illegal to obtain. In that case,
some AI algorithms use synthetic data, which is artificially created from
scratch while simulating the real data. Using synthetic data is a good
option when there is data scarcity and not enough data is available to
train the AI model.
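One simple way to generate synthetic tabular data is to estimate the distribution of the real data and sample from it. The sketch below fits a Gaussian to a tiny invented dataset and draws synthetic rows; real systems use far more sophisticated generators (e.g. GANs), so treat this only as an illustration of the idea.

```python
import numpy as np

rng = np.random.default_rng(42)

# A small "real" dataset we cannot collect more of (invented for illustration):
# columns: age, annual income (k$).
real = np.array([[25, 40], [31, 55], [45, 80], [52, 95], [38, 62]], dtype=float)

# Estimate its distribution (here, a multivariate Gaussian).
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample as many synthetic rows as we need for training.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print(synthetic.shape)         # (1000, 2)
print(synthetic.mean(axis=0))  # close to the real column means
```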
6. Situation-specific Learning

Artificial Intelligence algorithms can be trained for a particular situation,
but they cannot transfer their learning from one situation to another. For
example, humans can use their experience in one situation to help them in
other situations as well. But this is not possible for AI algorithms, as they
are trained on data for only one specified task. However, what if AI
could transfer its learning from one situation to another related situation
instead of developing a new AI model from scratch?
This can be done using transfer learning wherein an AI model is trained
on data in a particular situation but it can transfer its learning to another
similar situation without starting from scratch. This means that an AI
model developed for a particular task can then be used as a starting point
for another AI model for a related task.
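A minimal sketch of this idea: train a small neural network on task A, then reuse its learned hidden layer as a frozen feature extractor for task B, training only a small head on top. The two tasks and all data here are toy examples invented for illustration; real transfer learning typically reuses large pretrained networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Task A: classify points by whether x0 + x1 > 0 (plenty of data).
X_a = rng.normal(size=(2000, 2))
y_a = (X_a[:, 0] + X_a[:, 1] > 0).astype(int)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(X_a, y_a)

def hidden_features(X):
    """Frozen feature extractor: the trained net's hidden layer (ReLU)."""
    return np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])

# Task B: a related task (x0 + x1 > 1) with very little data.
X_b = rng.normal(size=(40, 2))
y_b = (X_b[:, 0] + X_b[:, 1] > 1).astype(int)

# Train only a small head on top of the transferred features.
head = LogisticRegression().fit(hidden_features(X_b), y_b)

X_test = rng.normal(size=(500, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 1).astype(int)
print("task-B accuracy:", head.score(hidden_features(X_test), y_test))
```

Because the task-A network already learned a useful representation, the task-B model needs only 40 labeled examples instead of starting from scratch.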

The most important thing to remember is that these challenges cannot just
be handled in a short time. So companies must familiarise themselves
with Artificial Intelligence and understand the process to create AI
solutions. Then they must create an AI strategy for its implementation
into their work culture. After the strategy is created, it is much easier to
just follow it and handle the challenges as they arise.
The Underlying Assumption

• The underlying assumption in artificial intelligence is
that intelligence can be realized in appropriate physical
processes, whose nature and characteristics are identifiable
and understandable. This assumption is based on the Physical
Symbol System Hypothesis, which states that a physical symbol
system has the necessary and sufficient means for general
intelligent action. To predict what will happen in the future, AI needs
to be able to learn and recognize behavior patterns. Research in AI
is focused on identifying and understanding the processes that
might or might not lead to intelligence
What are AI Techniques?
• Artificial intelligence can be divided into categories based on the
machine’s capacity to use past experiences to predict future
decisions, memory, and self-awareness. IBM came up with Deep
Blue, a chess program that can identify the pieces on the
chessboard. But it does not have the memory to predict future
actions. This system, though valuable, is not adaptable to other
situations. Another type of AI system uses past experiences and
has the bonus of a limited memory to predict decisions.
1. Machine Learning

It is one of the applications of AI where machines are not explicitly
programmed to perform specific tasks; instead, they learn and improve
from experience automatically. Deep Learning is a subset of machine
learning based on artificial neural networks for predictive analysis.
There are various types of machine learning, such as Unsupervised,
Supervised, and Reinforcement Learning.
In Unsupervised Learning, the algorithm acts on unclassified
information without any guidance.

Supervised Learning deduces a function from the training data, which
consists of input objects and their desired outputs.
In Reinforcement Learning, machines take suitable actions to maximize
a reward, in order to find the best possible behavior in a given situation.
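The supervised setting can be shown with a tiny example: the model is given inputs together with the desired outputs and deduces the mapping between them. The data here is invented for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Supervised learning: each training input comes with a desired output (label).
# Invented toy data: [height_cm, weight_kg] -> 0 = cat, 1 = dog.
X_train = np.array([[25, 4], [23, 3], [30, 5], [55, 20], [60, 25], [50, 18]])
y_train = np.array([0, 0, 0, 1, 1, 1])

model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# The learned function generalizes to unseen inputs.
print(model.predict([[24, 4], [58, 22]]))  # [0 1]: cat, dog
```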
2. NLP (Natural Language Processing)
Natural Language Processing involves programming
computers to process human languages to facilitate
interactions between humans and computers. Machine
Learning is a reliable technology for Natural Language
Processing to obtain meaning from human languages.

In NLP, the machine captures the audio of a human talking. After
audio-to-text conversion, the text is processed and the response is
converted back into audio data. Then the machine uses the audio to
respond to humans.
Applications of Natural Language Processing can be
found in IVR (Interactive Voice Response) applications
used in call centers, language translation applications like
Google Translate, and word processors such as Microsoft
Word to check the accuracy of grammar in text.

However, the nature of human languages makes Natural Language
Processing difficult, because the rules involved in passing information
using natural language are challenging for computers to understand.
NLP leverages algorithms to recognize and abstract the
rules of natural languages, converting unstructured human
language data into a computer-understandable format.
Moreover, NLP can also be found in content optimization,
such as paraphrasing applications, which helps to improve
the readability of complex text.
3. Automation and Robotics

Automation aims to improve productivity and efficiency by having
machines perform monotonous and repetitive tasks, resulting in
cost-effective outcomes. Many organizations use machine learning,
neural networks, and graphs in automation. Using CAPTCHA
technology, such automation can also prevent fraud during online
financial transactions. Programmers create robotic process automation
(RPA) to perform high-volume repetitive tasks that can adapt to
changes in different circumstances.
4. Machine Vision
Machines can capture visual information and then
analyze it. This process involves using cameras to
capture visual information, converting the analog image
to digital data, and processing the data through digital
signal processing. Then the resulting data is fed to a
computer. In machine vision, two vital aspects are
sensitivity, the ability to perceive weak impulses, and
resolution, the range to which the machine can
distinguish objects. The usage of machine vision can be
found in signature identification, pattern recognition,
medical image analysis, etc.
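The processing step described above can be sketched with a minimal digital signal processing operation: differencing vertically adjacent pixels of a digitized image to detect edges. The "image" here is a handmade array standing in for camera data.

```python
import numpy as np

# A digitized "image": a bright square on a dark background
# (a stand-in for data captured by a camera).
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Minimal edge detection: difference between vertically adjacent pixels.
edges = image[1:, :] - image[:-1, :]

# Non-zero rows mark where brightness changes: the square's top and bottom edges.
rows_with_edges = np.unique(np.nonzero(edges)[0])
print(rows_with_edges)  # [1 5]
```

Production machine vision uses learned convolutional filters rather than this hand-written difference, but the pipeline (capture, digitize, filter, analyze) is the one described above.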
The Level Of The Model
• The level of an AI system can help predict how the system will
change over time. The levels of AI systems range from
traditional software (Level 0) to fully intelligent software (Level
4). The level of an AI system can be measured using a score,
which is grouped into three ranges: low (0 to 40), medium (40 to
60), and high (60 to 100).
Level 0: Deterministic
No required training data, no required testing data

Algorithms that involve no learning (e.g. adapting parameters
to data) are at level zero.

The great benefit of level 0 (traditional algorithms in computer
science) is that they are very reliable and, if you solve the
problem, can be shown to be the optimal solution. If you can
solve a problem at level 0, it's hard to beat.
In some respects, all algorithms, even search algorithms like
binary search, are "adaptive" to the data. We do not generally
consider such algorithms to be "learning". Learning involves
memory: the system changing how it behaves in the future,
based on what it has learned in the past.
However, some problems defy a pre-specified algorithmic
solution. The downside of level 0 is that, for problems that
defy human understanding (either individually or at scale), it
is difficult to perform well (e.g. speech to text, translation,
image recognition, utterance suggestion, etc.).
Examples:
•Luhn algorithm for credit card validation
•Regex-based systems (e.g. simple redaction systems for credit card numbers)
•Information retrieval algorithms like TF-IDF retrieval or BM25
•Dictionary-based spell correction
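The Luhn algorithm is a good illustration of a level-0 system: a fixed, fully specified checksum with no learned parameters and no training data. A standard implementation:

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right;
    digits above 9 have 9 subtracted; the total must end in 0."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Walk from the rightmost digit; double every second one.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4539148803436467"))  # True: a valid test number
print(luhn_valid("4539148803436468"))  # False: last digit changed
```

Because the rule is fully specified, the system is perfectly reliable: it never drifts, needs no retraining, and its failure modes are completely understood.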


Level 1: Learned

Static training data, static testing data

Systems where you train the model in an offline setting
and deploy it to production with "frozen" weights. There
may be an updating cadence to the model (e.g. adding
more annotated data), but the environment the model
operates in does not affect the model.
The benefit of level 1 is that you can learn and deploy
any function at the modest cost of some training data.
This is a great place to experiment with different types of
solutions. And, for problems with common elements
(e.g. speech recognition) you can benefit from
diminishing marginal costs.
The downside is that the cost of customization is linear in
the number of use cases: you need to curate training data
for each one. And the data can change over time, so you
need to continuously add annotations to preserve
performance. This cost can be hard to bear.
Examples:
•Custom text classification models
•Speech to text (acoustic model)
Level 2: Self-learning
Dynamic + static training data, static testing data
Systems that use training data generated by the system
itself to improve the model. In some cases, the data
generation is independent of the model (so we expect
increasing model performance over time as more data is
added);
in other cases, the model intervening can reinforce model
biases and performance can get worse over time. To
eliminate the chance of reinforcing biases, practitioners
need to evaluate new models on static (potentially
annotated) data sets.
Level 2 is great because performance seems to improve
over time for free. The downside is that, left unattended,
the system can get worse – it may not be consistent in
getting better with more data. The other limitation is that
some systems at level two might have limited capacity to
improve as they essentially feed on themselves
(generating their own training data); addressing this bias
can be challenging.
Examples:
•Naive spam filters
•Common speech to text models (language model)

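A naive spam filter sits at level 2 because its training data is generated by the system's own operation: every time a user marks a message, that label feeds the next model. A minimal sketch (the scoring rule and all messages are invented for illustration, not a production Naive Bayes):

```python
from collections import Counter

class NaiveSpamFilter:
    """Toy word-frequency filter that learns from user feedback."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def report(self, message: str, is_spam: bool):
        """User feedback becomes new training data (the level-2 loop)."""
        target = self.spam_words if is_spam else self.ham_words
        target.update(message.lower().split())

    def is_spam(self, message: str) -> bool:
        score = 0.0
        for word in message.lower().split():
            # Add-one smoothing so unseen words don't dominate the score.
            score += (self.spam_words[word] + 1) / (self.ham_words[word] + 1)
        return score / max(len(message.split()), 1) > 1.0

f = NaiveSpamFilter()
f.report("win free money now", is_spam=True)
f.report("meeting notes attached", is_spam=False)
print(f.is_spam("free money offer"))    # True
print(f.is_spam("notes from meeting"))  # False
```

Each `report` call is data generated by the deployed system itself, which is exactly why such filters can improve for free over time, and also why they can drift if the feedback is biased.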
Level 3: Autonomous (or self-correcting)

Dynamic training data, dynamic test data
Systems that both alter human behavior (e.g. recommend
an action and let the user opt-in) and learn directly from that
behavior, including how the systems’ choice changes the
user behavior. Moving from Level 2 to 3 potentially
represents a big increase in system reliability and total
achievable performance.
Level 3 is great because it can consistently get better over
time. However, it is more complex: it might require truly
staggering amounts of data, or a very carefully designed
setup, to do better than simpler systems; its ability to adapt
to the environment also makes it very hard to debug.

It is also possible to have truly catastrophic feedback loops.
For example, a human corrects an email spam filter; however,
because the human can only ever correct misclassifications
that the system made, it learns that all its predictions are
wrong and inverts its own predictions.
Level 4: Intelligent (or globally optimizing)

Dynamic training data, dynamic test data, dynamic goal

Systems that both dynamically interact with an environment
and globally optimize (e.g. towards some set of downstream
objectives), for example facilitating an agent while optimizing
for AHT (average handle time) and CSAT (customer
satisfaction), or optimizing directly for profit. An example is
an AutoCompose system that optimizes for the best series of
clicks to optimize the conversation.
Level 4 can be very attractive. However, it is not always
obvious how to get there, and unless carefully designed,
these systems can optimize towards degenerate
solutions. Aiming them at the right problem, shaping the
reward, and auditing its behavior are large and non-trivial
tasks.
Why consider levels?

Designing and building AI systems is difficult. A core part
of that difficulty is understanding how they change over
time (or don't change!): how the performance and
maintenance cost of the system will develop.
In general, there is increasing value as you move up levels
(e.g. one goal might be to move a system operating at Level 1
to Level 2), but the complexity and cost of building the
system also increase as levels go up.
It can make a lot of sense to start with a novel feature at a
"low" level, where the system behavior is well understood, and
progressively increase the level, since understanding the failure
cases of the system becomes more difficult as the level
increases.
The focus should be on learning about the problem and the
solution space. Lower levels are more consistent and can be
much better avenues to explore possible solutions than higher
levels, whose cost and variability in performance can be large
hindrances.
This set of levels provides some core breakpoints for how
different AI systems can behave. Employing these levels – and
making trade-offs between levels – can help provide a
shorthand for differences post-deployment.
Criteria For Success
The 9 Key Critical Success Factors
• Create a Clear, Futuristic Business Vision
• Evaluate the Complexity of the AI Model to be Created
• Ensure Appropriate Resource Management at the Right Time
• Focus on Detailed Business and Data Analysis
• Select Data and Data Sources Intelligently
• Create a Project Structure with Definite Landmarks
• Achieve Scientific Consistency Between AI Model and Actuality
• Testing and Operationalization of AI Model

You might also like