AI 3rd and 4th Unit Notes
An intelligent agent needs knowledge about the real world for taking decisions and reasoning to
act efficiently. Knowledge-based agents are those agents who have the capability of maintaining
an internal state of knowledge, reason over that knowledge, update their knowledge after
observations and take actions. These agents can represent the world with some formal
representation and act intelligently.
Knowledge-based agents are composed of two main parts:
• Knowledge-base and
• Inference system.
In a knowledge-based agent:
• An agent should be able to represent states, actions, etc.
• An agent should be able to incorporate new percepts.
• An agent can update the internal representation of the world.
• An agent can deduce the internal representation of the world.
• An agent can deduce appropriate actions.
The architecture of knowledge-based agent:
The agent TELLs the knowledge base what it perceives, ASKs the knowledge base what action it should perform, and the inference system derives the answer from the stored sentences; the chosen action is then executed and also recorded in the knowledge base.
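A rough sketch of this TELL/ASK loop, assuming a very simple knowledge-base class (the class and method names below are illustrative assumptions, not a prescribed API):

```python
# Sketch of a knowledge-based agent's two parts: a knowledge base and an
# inference step. A real agent would run logical inference inside ask().
class KnowledgeBase:
    def __init__(self):
        self.sentences = []                 # internal representation of the world

    def tell(self, sentence):
        self.sentences.append(sentence)     # incorporate a new percept or fact

    def ask(self, query):
        # Placeholder "inference": just check whether the query is already known.
        return query in self.sentences

kb = KnowledgeBase()
kb.tell("Breeze in [2,1]")                  # TELL the KB what was perceived
print(kb.ask("Breeze in [2,1]"))            # ASK the KB; prints True
```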
The Wumpus world:
The Wumpus world is a simple cave environment used to illustrate knowledge-based agents. There are some components which can help the agent to navigate the cave. These components are given as follows:
a. The rooms adjacent to the Wumpus room are smelly, so that it would have some stench.
b. The room adjacent to PITs has a breeze, so if the agent reaches near to PIT, then he will
perceive the breeze.
c. There will be glitter in the room if and only if the room has gold.
d. The Wumpus can be killed by the agent if the agent is facing it, and the Wumpus will emit a horrible scream which can be heard anywhere in the cave.
PEAS description of Wumpus world
• To explain the Wumpus world, the PEAS description is given below:
Performance measure:
• +1000 reward points if the agent comes out of the cave with the gold.
• -1000 points penalty for being eaten by the Wumpus or falling into the pit.
• -1 for each action, and -10 for using an arrow.
• The game ends if either the agent dies or comes out of the cave.
Environment:
• A 4*4 grid of rooms.
• The agent is initially in square [1,1], facing toward the right.
• The locations of the Wumpus and the gold are chosen randomly, excluding the first square [1,1].
• Each square of the cave can be a pit with probability 0.2 except the first square.
Actuators:
• Left turn,
• Right turn
• Move forward
• Grab
• Release
• Shoot.
Sensors: The agent will perceive a stench if it is in a room adjacent to the Wumpus (not diagonally). The agent will perceive a breeze if it is in a room directly adjacent to a pit. The agent will perceive glitter in the room where the gold is present. The agent will perceive a bump if it walks into a wall. When the Wumpus is shot, it emits a horrible scream which can be perceived anywhere in the cave. These percepts can be represented as a five-element list with one indicator for each sensor. For example, if the agent perceives a stench and a breeze, but no glitter, no bump, and no scream, the percept can be represented as
[Stench, Breeze, None, None, None].
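As a small illustration (not part of the original notes), the five-element percept can be held in an ordinary list; the variable names below are arbitrary:

```python
# Illustrative sketch: the agent's percept as a five-element list
# [Stench, Breeze, Glitter, Bump, Scream]; None marks an absent percept.
percept = ["Stench", "Breeze", None, None, None]

stench, breeze, glitter, bump, scream = percept
if breeze:
    print("A pit may be in an adjacent square.")
if glitter:
    print("Gold is in this square - Grab it.")
```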
The Wumpus world properties:
Partially observable: The Wumpus world is partially observable because the agent can only perceive its immediate surroundings, such as the adjacent rooms.
Deterministic: It is deterministic, as the outcome of every action is exactly determined once the layout of the cave is fixed.
Sequential: The order is important, so it is sequential.
Static: It is static as Wumpus and Pits are not moving.
Discrete: The environment is discrete.
One agent: The environment is single-agent, as we have only one agent and the Wumpus is not considered an agent.
Exploring the Wumpus world:
• Now we will explore the Wumpus world and will determine how the agent will find its
goal by applying logical reasoning.
• Agent's First step:
• Initially, the agent is in the first room, square [1,1], and we already know that this room is safe for the agent, so to indicate that the room is safe we add the symbol OK in diagram (a) below.
• Symbol A is used to represent the agent, B for a breeze, G for glitter or gold, V for a visited room, P for a pit, and W for the Wumpus.
• At Room [1,1] agent does not feel any breeze or any Stench which means the adjacent
squares are also OK.
Agent's first step:
Precedence of operators (highest to lowest):
First precedence: Parentheses
Second precedence: Negation (¬)
Third precedence: Conjunction (∧)
Fourth precedence: Disjunction (∨)
Fifth precedence: Implication (→)
Sixth precedence: Biconditional (↔)
Truth table for negation, implication, and its contrapositive:
P    Q    ¬P    P → Q    ¬Q → ¬P
T    T    F     T        T
T    F    F     F        F
F    T    T     T        T
F    F    T     T        T
• Commutativity:
• P∧ Q= Q ∧ P, or
• P ∨ Q = Q ∨ P.
• Associativity:
• (P ∧ Q) ∧ R= P ∧ (Q ∧ R),
• (P ∨ Q) ∨ R= P ∨ (Q ∨ R)
• Identity element:
• P ∧ True = P,
• P ∨ False = P (and P ∨ True = True).
• Distributive:
• P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).
• P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
• De Morgan's Laws:
• ¬ (P ∧ Q) = (¬P) ∨ (¬Q)
• ¬ (P ∨ Q) = (¬ P) ∧ (¬Q).
• Double-negation elimination:
• ¬ (¬P) = P.
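The equivalences above can be checked mechanically by enumerating truth values. A minimal sketch (not from the notes) that verifies De Morgan's laws and double-negation elimination:

```python
# Verify De Morgan's laws and double-negation elimination for all truth values.
from itertools import product

for P, Q in product([True, False], repeat=2):
    assert (not (P and Q)) == ((not P) or (not Q))   # ¬(P ∧ Q) = ¬P ∨ ¬Q
    assert (not (P or Q)) == ((not P) and (not Q))   # ¬(P ∨ Q) = ¬P ∧ ¬Q
    assert (not (not P)) == P                        # ¬(¬P) = P

print("All equivalences hold for every combination of truth values.")
```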
• Limitations of Propositional logic:
• We cannot represent relations like ALL, some, or none with propositional logic. Example:
• All the girls are intelligent.
• Some apples are sweet.
• Propositional logic has limited expressive power.
• In propositional logic, we cannot describe statements in terms of their properties or logical
relationships.
Inference:
In artificial intelligence, we need intelligent computers which can derive new conclusions from existing knowledge or from evidence; generating conclusions from evidence and facts is termed inference.
Inference rules are the templates for generating valid arguments. Inference rules are applied to derive proofs in artificial intelligence, and a proof is a sequence of conclusions that leads to the desired goal.
In inference rules, the implication among all the connectives plays an important role. Following
are some terminologies related to inference rules:
Implication: It is one of the logical connectives which can be represented as P → Q. It is a Boolean
expression.
Converse: The converse of an implication is obtained by swapping the left-hand and right-hand propositions. It can be written as Q → P.
Contrapositive: The negation of converse is termed as contrapositive, and it can be represented
as ¬ Q → ¬ P.
Inverse: The negation of implication is called inverse. It can be represented as ¬ P → ¬ Q.
Truth Table:
P    Q    P → Q    Q → P (converse)    ¬Q → ¬P (contrapositive)    ¬P → ¬Q (inverse)
T    T    T        T                   T                           T
T    F    F        T                   F                           T
F    T    T        F                   T                           F
F    F    T        T                   T                           T
Logical Equivalence: Two sentences are logically equivalent if they are true in the same set of models; the commutativity, associativity, distributivity, De Morgan, and double-negation laws listed above are the equivalences used as rules for propositional theorem proving.
Types of Inference rules:
Modus Ponens: The Modus Ponens rule is one of the most important rules of inference. It states that if P → Q is true and P is true, then we can infer that Q is true. It can be represented as:
P → Q, P ⊢ Q
Example:
Statement-1: "If I am sleepy then I go to bed" ==> P→ Q
Statement-2: "I am sleepy" ==> P
Conclusion: "I go to bed." ==> Q.
Hence, we can say that, if P→ Q is true and P is true then Q will be true.
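A minimal sketch (illustrative only; the fact and rule encoding is an assumption) of applying Modus Ponens repeatedly to a set of known propositions:

```python
# Apply Modus Ponens: from P and P -> Q, infer Q.
facts = {"P"}                      # known true propositions, e.g. "I am sleepy"
rules = [("P", "Q")]               # implications premise -> conclusion

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)  # the conclusion becomes a new known fact
            changed = True

print(facts)                       # e.g. {'P', 'Q'}
```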
Knowledge base for the Wumpus world:
In the first row, we have the propositional variables for room [1,1], which show that the room has no Wumpus (¬W11), no stench (¬S11), no pit (¬P11), no breeze (¬B11), no gold (¬G11), has been visited (V11), and is safe (OK11).
In the second row, we have the propositional variables for room [1,2], which show that there is no Wumpus, the stench and breeze are unknown (as the agent has not visited room [1,2]), there is no pit, the room is not yet visited, and the room is safe.
In the third row, we have the propositional variables for room [2,1], which show that there is no Wumpus (¬W21), no stench (¬S21), no pit (¬P21), a perceived breeze (B21), no glitter (¬G21), the room is visited (V21), and the room is safe (OK21).
Prove that Wumpus is in the room (1, 3)
We can prove that wumpus is in the room (1, 3) using propositional rules which we have derived
for the wumpus world and using inference rule.
Apply Modus Ponens with ¬S11 and R1:
We first apply the Modus Ponens rule to R1, which is ¬S11 → (¬W11 ∧ ¬W12 ∧ ¬W21), together with ¬S11, which gives the output ¬W11 ∧ ¬W12 ∧ ¬W21.
Syntax for First-Order Logic:
Variables: x, y, z, a, b, ...
Connectives: ∧, ∨, ¬, ⇒, ⇔
Equality: =
Quantifiers: ∀, ∃
Atomic sentences: Atomic sentences are the most basic sentences of first-order logic. These sentences are formed from a predicate symbol followed by a parenthesized sequence of terms. We can represent an atomic sentence as Predicate(term1, term2, ..., termn).
Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat (Chinky).
Complex Sentences:
Complex sentences are made by combining atomic sentences using connectives. First-order logic
statements can be divided into two parts:
Subject: Subject is the main part of the statement.
Predicate: A predicate can be defined as a relation, which binds two atoms together in a statement.
Consider the statement "x is an integer": it consists of two parts; the first part, x, is the subject of the statement, and the second part, "is an integer," is known as the predicate.
Universal Generalization:
Universal generalization is a valid inference rule which states that if P(c) is true for an arbitrary element c in the universe of discourse, then we can conclude ∀x P(x). It can be represented as: P(c) ⊢ ∀x P(x).
This rule can be used if we want to show that every element has a similar property. In this rule, x must not appear as a free variable.
Example: Let's represent P(c) as "A byte contains 8 bits"; then ∀x P(x), "All bytes contain 8 bits," will also be true.
Universal Instantiation:
Universal instantiation, also called universal elimination or UI, is a valid inference rule. It can be applied multiple times to add new sentences, and the new KB is logically equivalent to the previous KB.
As per UI, we can infer any sentence obtained by substituting a ground term for the variable: from ∀x P(x) we can infer P(c) for any ground term c (a constant within the domain of x).
It can be represented as: ∀x P(x) ⊢ P(c).
• Example:
• If "Every person likes ice-cream" => ∀x P(x), we can infer that
"John likes ice-cream" => P(c), where the ground term c stands for John.
Existential Instantiation:
Existential instantiation, also called existential elimination, is a valid inference rule in first-order logic. It can be applied only once to replace the existential sentence. The new KB is not logically equivalent to the old KB, but it is satisfiable if the old KB was satisfiable. This rule states that one can infer P(c) from a formula of the form ∃x P(x) for a new constant symbol c.
The restriction with this rule is that the c used in the rule must be a new term for which P(c) is true.
It can be represented as: ∃x P(x) ⊢ P(c), where c is a new constant symbol.
Example:
From the given sentence ∃x Crown(x) ∧ OnHead(x, John),
we can infer Crown(K) ∧ OnHead(K, John), as long as K does not appear elsewhere in the knowledge base. The K used above is a constant symbol, called a Skolem constant. Existential instantiation is a special case of the Skolemization process.
Existential introduction
• An existential introduction is also known as an existential generalization, which is a valid
inference rule in first-order logic.
• This rule states that if there is some element c in the universe of discourse which has a
property P, then we can infer that there exists something in the universe which has the
property P.
• It can be represented as: P(c) ⊢ ∃x P(x).
• Example: Let's say that,
"Priyanka got good marks in English."
"Therefore, someone got good marks in English."
What is Unification?
Unification is a process of making two different logical atomic expressions identical by finding a
substitution. Unification depends on the substitution process. It takes two literals as input and
makes them identical using substitution. Let Ψ1 and Ψ2 be two atomic sentences and σ be a unifier such that Ψ1σ = Ψ2σ; finding such a σ is expressed as UNIFY(Ψ1, Ψ2).
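A minimal unification sketch over simple term structures (illustrative; the term encoding and function names are assumptions, and the occurs check and full chained dereferencing are omitted):

```python
# Terms: lowercase strings are variables, capitalised strings are constants,
# compound terms are tuples such as ("Knows", "John", "x").

def is_variable(t):
    return isinstance(t, str) and t[0].islower()

def substitute(term, theta):
    if is_variable(term):
        return substitute(theta[term], theta) if term in theta else term
    if isinstance(term, tuple):
        return tuple(substitute(a, theta) for a in term)
    return term

def unify(x, y, theta=None):
    """Return a substitution (dict) that makes x and y identical, or None."""
    if theta is None:
        theta = {}
    x, y = substitute(x, theta), substitute(y, theta)
    if x == y:
        return theta
    if is_variable(x):
        return {**theta, x: y}
    if is_variable(y):
        return {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None

# UNIFY(Knows(John, x), Knows(y, Bill)) -> {'y': 'John', 'x': 'Bill'}
print(unify(("Knows", "John", "x"), ("Knows", "y", "Bill")))
```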
Forward Chaining and backward chaining in AI
Inference engine: The inference engine is the component of the intelligent system in artificial
intelligence, which applies logical rules to the knowledge base to infer new information from
known facts. The first inference engine was part of the expert system. Inference engine commonly
proceeds in two modes, which are:
• Forward chaining
• Backward chaining
Forward Chaining: Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. Forward chaining is a form of reasoning which starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached. The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
Properties of Forward Chaining
It is a bottom-up approach, as it moves from bottom to top. It is a process of making a conclusion based on known facts or data, starting from the initial state and reaching the goal state. The forward-chaining approach is also called data-driven, as we reach the goal using the available data. The forward-chaining approach is commonly used in expert systems, such as CLIPS, and in business and production rule systems.
"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A,
an enemy of America, has some missiles, and all the missiles were sold to it by Robert, who
is an American citizen." Prove that "Robert is criminal."
• To solve the above problem, first, we will convert all the above facts into first-order definite
clauses, and then we will use a forward-chaining algorithm to reach the goal.
• Facts Conversion into FOL:
• It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are
variables)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
• Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). It can be written as two definite clauses by using Existential Instantiation, introducing the new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
• All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
• Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
• Enemy of America is known as hostile.
Enemy(p, America) →Hostile(p) ........(6)
• Country A is an enemy of America.
Enemy (A, America) .........(7)
• Robert is American
American(Robert). ..........(8)
Forward Chaining Proof:
Step-1:
In the first step, we will start with the known facts and choose the sentences which do not have implications, such as American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). All these facts will be represented as below.
Step-2:
At the second step, we will see which new facts can be inferred from the available facts by rules whose premises are satisfied.
Rule (1) does not yet have all its premises satisfied, so it is not applied in the first iteration.
Facts (2) and (3) are already in the knowledge base.
Rule (4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added; it is inferred from the conjunction of facts (2) and (3).
Rule (5) is satisfied with the substitution {p/T1}, so Weapon(T1) is added.
Rule (6) is satisfied with the substitution {p/A}, so Hostile(A) is added; it is inferred from fact (7).
Step-3:
At step 3, Rule (1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. Hence we have reached our goal statement, and it is proved that Robert is a criminal using the forward-chaining approach.
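A rough forward-chaining sketch for this crime example (illustrative only; the tuple encoding of facts and rules is an assumption, not the notes' notation):

```python
# Facts are tuples; rule premises may contain variables (lowercase strings).
facts = {
    ("American", "Robert"),
    ("Missile", "T1"),
    ("Owns", "A", "T1"),
    ("Enemy", "A", "America"),
}

rules = [  # (premises, conclusion)
    ([("Missile", "p")], ("Weapon", "p")),
    ([("Missile", "p"), ("Owns", "A", "p")], ("Sells", "Robert", "p", "A")),
    ([("Enemy", "p", "America")], ("Hostile", "p")),
    ([("American", "p"), ("Weapon", "q"), ("Sells", "p", "q", "r"), ("Hostile", "r")],
     ("Criminal", "p")),
]

def is_var(x):
    return isinstance(x, str) and x[0].islower()

def match(pattern, fact, theta):
    """Try to extend substitution theta so that pattern matches fact."""
    if len(pattern) != len(fact):
        return None
    theta = dict(theta)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if p in theta and theta[p] != f:
                return None
            theta[p] = f
        elif p != f:
            return None
    return theta

def satisfy(premises, theta):
    """Yield every substitution that satisfies all premises against the facts."""
    if not premises:
        yield theta
        return
    first, rest = premises[0], premises[1:]
    for fact in facts:
        t = match(first, fact, theta)
        if t is not None:
            yield from satisfy(rest, t)

changed = True
while changed:                      # keep firing rules until nothing new is added
    changed = False
    for premises, conclusion in rules:
        for theta in list(satisfy(premises, {})):
            new_fact = tuple(theta.get(x, x) for x in conclusion)
            if new_fact not in facts:
                facts.add(new_fact)  # Modus Ponens applied in the forward direction
                changed = True

print(("Criminal", "Robert") in facts)   # True
```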
Backward Chaining: Backward-chaining is also known as a backward deduction or backward
reasoning method when using an inference engine.
• A backward chaining algorithm is a form of reasoning, which starts with the goal and works
backward, chaining through rules to find known facts that support the goal.
Properties of backward chaining:
• It is known as a top-down approach.
• Backward-chaining is based on modus ponens inference rule.
• In backward chaining, the goal is broken into sub-goal or sub-goals to prove the facts true.
• It is called a goal-driven approach, as a list of goals decides which rules are selected and
used.
• The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
• The backward-chaining method mostly uses a depth-first search strategy for proofs.
In backward chaining, we will use the same example as above and rewrite all the rules.
• American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
• Owns(A, T1) ........(2)
• Missile(T1) .........(3)
• ∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
• Missile(p) → Weapon(p) .......(5)
• Enemy(p, America) → Hostile(p) ........(6)
• Enemy(A, America) .........(7)
• American(Robert). ..........(8)
Backward-Chaining proof:
In backward chaining, we will start with our goal predicate, which is Criminal(Robert), and then infer further rules.
Step-1: At the first step, we will take the goal fact, and from the goal fact we will infer other facts; at last, we will prove those facts true. So our goal fact is "Robert is criminal," and the following is the predicate for it.
Step-2: At the second step, we will infer other facts from the goal fact which satisfy the rules. As we can see in Rule (1), the goal predicate Criminal(Robert) is present with the substitution {p/Robert}. So we will add all the conjunctive facts below the first level and replace p with Robert.
• Here we can see that American(Robert) is a fact, so it is proved at this point.
Step-3: At step 3, we will extract the further fact Missile(q), which is inferred from Weapon(q), as it satisfies Rule (5). Weapon(q) is also true with the substitution of the constant T1 for q.
In the same way, the remaining sub-goals Sells(Robert, T1, A) and Hostile(A) are proved from facts (2), (3), and (7), which completes the proof of Criminal(Robert).
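A rough backward-chaining sketch for the same example (illustrative; the encoding and helper names are assumptions, variables in different rules are given distinct names to avoid clashes, and a full implementation would standardize variables apart and dereference substitution chains):

```python
facts = [
    ("American", "Robert"), ("Missile", "T1"),
    ("Owns", "A", "T1"), ("Enemy", "A", "America"),
]
rules = [  # (conclusion, premises) - read as "conclusion holds if all premises hold"
    (("Criminal", "p"),
     [("American", "p"), ("Weapon", "q"), ("Sells", "p", "q", "r"), ("Hostile", "r")]),
    (("Weapon", "m"), [("Missile", "m")]),
    (("Sells", "Robert", "s", "A"), [("Missile", "s"), ("Owns", "A", "s")]),
    (("Hostile", "h"), [("Enemy", "h", "America")]),
]

def is_var(x):
    return x[0].islower()

def subst(term, theta):
    return tuple(theta.get(x, x) for x in term)

def unify(a, b, theta):
    """Unify two atoms under the current substitution (simplified)."""
    if len(a) != len(b):
        return None
    theta = dict(theta)
    for x, y in zip(a, b):
        x, y = theta.get(x, x), theta.get(y, y)
        if is_var(x):
            theta[x] = y
        elif is_var(y):
            theta[y] = x
        elif x != y:
            return None
    return theta

def prove(goals, theta):
    """Depth-first, goal-driven search: prove every goal from facts and rules."""
    if not goals:
        yield theta
        return
    goal, rest = subst(goals[0], theta), goals[1:]
    for fact in facts:                      # 1) match the goal against known facts
        t = unify(goal, fact, theta)
        if t is not None:
            yield from prove(rest, t)
    for head, body in rules:                # 2) or reduce it to a rule's sub-goals
        t = unify(goal, head, theta)
        if t is not None:
            yield from prove(body + rest, t)

print(any(prove([("Criminal", "Robert")], {})))   # True
```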
IV UNIT:
Machine Learning
Machine learning is a subset of AI, which enables the machine to automatically learn from data,
improve performance from past experiences, and make predictions. Machine learning contains a
set of algorithms that work on a huge amount of data. Data is fed to these algorithms to train them,
and on the basis of training, they build the model & perform a specific task.
ML algorithms: These ML algorithms help to solve different business problems like Regression,
Classification, Forecasting, Clustering, and Associations, etc. Based on the methods and way of
learning, machine learning is divided into mainly four types, which are:
• Supervised Machine Learning
• Unsupervised Machine Learning
• Semi-Supervised Machine Learning
• Reinforcement Learning
Supervised Machine Learning: As its name suggests, supervised machine learning is based on
supervision. It means in the supervised learning technique, we train the machines using the
"labelled" dataset, and based on the training, the machine predicts the output. Here, the labelled
data specifies that some of the inputs are already mapped to the output. More precisely, we can say that first we train the machine with the input and the corresponding output, and then we ask the machine to predict the output for a test dataset.
Example: Let's understand supervised learning with an example. Suppose we have an input dataset
of cat and dog images.
• So, first, we will provide the training to the machine to understand the images, such as
the shape & size of the tail of cat and dog, Shape of eyes, colour, height (dogs are taller,
cats are smaller), etc.
• After completion of training, we input the picture of a cat and ask the machine to identify
the object and predict the output. Now, the machine is well trained, so it will check all the
features of the object, such as height, shape, colour, eyes, ears, tail, etc., and find that it's a
cat.
• So, it will put it in the Cat category. This is the process of how the machine identifies the
objects in Supervised Learning.
• The main goal of the supervised learning technique is to map the input variable(x) with the
output variable(y). Some real-world applications of supervised learning are Risk
Assessment, Fraud Detection, Spam filtering, etc.
Categories of Supervised Machine Learning:
Classification
Regression
a) Classification
Classification algorithms are used to solve the classification problems in which the output variable
is categorical, such as "Yes" or "No", Male or Female, Red or Blue, etc. Classification algorithms predict the categories present in the dataset. Some real-world examples of classification algorithms are spam detection, email filtering, etc. Some popular classification algorithms are given below (a small illustrative sketch follows the list):
• Random Forest Algorithm
• Decision Tree Algorithm
• Logistic Regression Algorithm
• Support Vector Machine Algorithm
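A minimal classification sketch, assuming scikit-learn (the library, the toy cat/dog features, and the numbers are illustrative assumptions, not prescribed by the notes):

```python
# Train a classifier on labelled data, then predict the category of a new input.
from sklearn.tree import DecisionTreeClassifier

# Toy labelled dataset: [height_cm, tail_length_cm] -> "cat" or "dog"
X_train = [[25, 30], [23, 28], [60, 35], [55, 33]]
y_train = ["cat", "cat", "dog", "dog"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)          # learn from the labelled examples

print(model.predict([[24, 29]]))     # expected output: ['cat']
```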
Regression: Regression algorithms are used to solve regression problems, in which there is a relationship between the input and output variables and the output variable is continuous. They are used to predict continuous output values, such as market trends, weather forecasts, etc.
• Some popular regression algorithms are given below (a small sketch follows the list):
• Simple Linear Regression Algorithm
• Multivariate Regression Algorithm
• Decision Tree Algorithm
• Lasso Regression
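A minimal simple linear regression sketch, again assuming scikit-learn and made-up data purely for demonstration:

```python
# Fit a line to labelled (input, continuous output) pairs and predict a new value.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]             # input variable, e.g. years of experience
y = [30, 35, 40, 45]                 # continuous output, e.g. salary in thousands

model = LinearRegression().fit(X, y)
print(model.predict([[5]]))          # expected output: approximately [50.]
```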
Applications of Supervised Learning:
Some common applications of Supervised Learning are given below:
• Image Segmentation:
Supervised Learning algorithms are used in image segmentation. In this process, image
classification is performed on different image data with pre-defined labels.
• Medical Diagnosis:
Supervised algorithms are also used in the medical field for diagnosis purposes. It is done
by using medical images and past labelled data with labels for disease conditions. With
such a process, the machine can identify a disease for the new patients.
Fraud Detection - Supervised Learning classification algorithms are used for identifying fraud
transactions, fraud customers, etc. It is done by using historic data to identify the patterns that can
lead to possible fraud.
Spam detection - In spam detection & filtering, classification algorithms are used. These
algorithms classify an email as spam or not spam. The spam emails are sent to the spam folder.
Speech Recognition - Supervised learning algorithms are also used in speech recognition. The
algorithm is trained with voice data, and various identifications can be done using the same, such
as voice-activated passwords, voice commands, etc.
Unsupervised Learning:
Unsupervised learning is a type of machine learning that learns from unlabeled data. This means
that the data does not have any pre-existing labels or categories. The goal of unsupervised
learning is to discover patterns and relationships in the data without any explicit guidance.
Unsupervised learning is the training of a machine using information that is neither classified
nor labeled and allowing the algorithm to act on that information without guidance. Here the
task of the machine is to group unsorted information according to similarities, patterns, and
differences without any prior training of data.
Unlike supervised learning, no teacher is provided, which means no training labels are given to the machine. Therefore, the machine is restricted to finding the hidden structure in unlabeled data by itself.
For example, suppose we have gathered a set of animal data without labels. You can use unsupervised learning to examine that data and distinguish between several groups according to the traits and actions of the animals. These groupings might correspond to various animal species, allowing you to categorize the creatures without depending on labels that already exist.
Types of Unsupervised Learning
Unsupervised learning is classified into two categories of algorithms:
Clustering: A clustering problem is where you want to discover the inherent groupings in
the data, such as grouping customers by purchasing behavior.
Association: An association rule learning problem is where you want to discover rules that
describe large portions of your data, such as people that buy X also tend to buy Y.
Clustering
Clustering is a type of unsupervised learning that is used to group similar data points
together. Many clustering algorithms, such as k-means, work by iteratively assigning data points to their nearest cluster center and updating the centers, so that points end up close to their own cluster center and far from points in other clusters. Broad approaches to clustering include:
1. Exclusive (partitioning)
2. Agglomerative
3. Overlapping
4. Probabilistic
Common clustering and dimensionality-reduction algorithms used in unsupervised learning (a small k-means sketch follows the list):
1. Hierarchical clustering
2. K-means clustering
3. Principal Component Analysis
4. Singular Value Decomposition
5. Independent Component Analysis
6. Gaussian Mixture Models (GMMs)
7. Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
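A minimal k-means clustering sketch, assuming scikit-learn; the 2-D points are made up so that two natural groups exist:

```python
# Group unlabeled points into clusters based only on their similarity.
from sklearn.cluster import KMeans

X = [[1, 2], [1, 4], [1, 0],         # one natural group
     [10, 2], [10, 4], [10, 0]]      # another natural group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                # e.g. [1 1 1 0 0 0] - two discovered clusters
print(kmeans.cluster_centers_)       # the learned cluster centers
```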
Association rule learning
Association rule learning is a type of unsupervised learning that is used to identify patterns in data. Association rule learning algorithms work by finding relationships between different items in a dataset.
Some common association rule learning algorithms include (a small support/confidence sketch follows the list):
Apriori Algorithm
Eclat Algorithm
FP-Growth Algorithm
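A rough sketch of the idea behind association rule mining, counting support and confidence with plain Python (the transactions and thresholds are made up for illustration; Apriori and FP-Growth add efficient pruning on top of this idea):

```python
# Find rules of the form {X} -> {Y}: "people who buy X also tend to buy Y".
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

items = set().union(*transactions)
for x, y in combinations(items, 2):
    for a, b in ((x, y), (y, x)):
        conf = support({a, b}) / support({a})          # confidence of {a} -> {b}
        if support({a, b}) >= 0.5 and conf >= 0.6:
            print(f"{{{a}}} -> {{{b}}}  support={support({a, b]):.2f}"
                  if False else
                  f"{{{a}}} -> {{{b}}}  support={support({a, b}):.2f}  confidence={conf:.2f}")
```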
Application of Unsupervised learning
Unsupervised learning can be used to solve a wide variety of problems, including:
Anomaly detection: Unsupervised learning can identify unusual patterns or deviations from
normal behavior in data, enabling the detection of fraud, intrusion, or system failures.
Scientific discovery: Unsupervised learning can uncover hidden relationships and patterns in
scientific data, leading to new hypotheses and insights in various scientific fields.
Recommendation systems: Unsupervised learning can identify patterns and similarities in
user behavior and preferences to recommend products, movies, or music that align with their
interests.
Customer segmentation: Unsupervised learning can identify groups of customers with
similar characteristics, allowing businesses to target marketing campaigns and improve
customer service more effectively.
Image analysis: Unsupervised learning can group images based on their content, facilitating
tasks such as image classification, object detection, and image retrieval.
Advantages of Unsupervised learning
It does not require training data to be labeled.
Dimensionality reduction can be easily accomplished using unsupervised learning.
Capable of finding previously unknown patterns in data.
Unsupervised learning can help you gain insights from unlabeled data that you might not
have been able to get otherwise.
Unsupervised learning is good at finding patterns and relationships in data without being told
what to look for. This can help you learn new things about your data.
Disadvantages of Unsupervised learning
Difficult to measure accuracy or effectiveness due to lack of predefined answers during
training.
The results often have lesser accuracy.
The user needs to spend time interpreting and labeling the classes that result from the clustering.
Unsupervised learning can be sensitive to data quality, including missing values, outliers,
and noisy data.
Without labeled data, it can be difficult to evaluate the performance of unsupervised learning
models, making it challenging to assess their effectiveness.
Supervised vs. Unsupervised Learning:
Criteria                   Supervised Learning        Unsupervised Learning
Computational Complexity   Simpler method             Computationally complex
Model                      We can test our model.     We cannot test our model.
Natural Language Processing (NLP)
Natural Language Processing is a part of artificial intelligence that aims to teach computers the human language with all its complexities, so that machines can understand and interpret human language and eventually understand human communication in a better way. Natural Language Processing is a cross between many different fields, such as artificial intelligence, computational linguistics, and human-computer interaction. There are many different methods in NLP to understand human language, including statistical and machine learning methods. These involve breaking down human language into its most basic pieces and then understanding how these pieces relate to each other and work together to create meaning in sentences (a small tokenization sketch follows below).
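A tiny illustrative sketch of breaking text into its basic pieces (tokens) and computing simple statistics over them, using plain Python only; this is not a full NLP pipeline:

```python
# Split a sentence into word tokens and count how often each token occurs.
from collections import Counter

sentence = "Machines can understand and interpret the human language"
tokens = sentence.lower().split()        # the most basic pieces of the sentence
print(tokens)
print(Counter(tokens).most_common(3))    # simple statistics over the tokens
```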
Chatbots: Chatbots are a form of artificial intelligence that are programmed to interact with humans in such
a way that they sound like humans themselves. Depending on the complexity of the chatbots, they
can either just respond to specific keywords or they can even hold full conversations that make it
tough to distinguish them from humans. Chatbots are created using Natural Language Processing
and Machine Learning, which means that they understand the complexities of the English language
and find the actual meaning of the sentence and they also learn from their conversations with
humans and become better with time. Chatbots work in two simple steps. First, they identify the
meaning of the question asked and collect all the data from the user that may be required to answer
the question. Then they answer the question appropriately.
Autocomplete in Search Engines
Have you noticed that search engines tend to guess what you are typing and automatically complete
your sentences? For example, On typing “game” in Google, you may get further suggestions for
“game of thrones”, “game of life” or if you are interested in maths then “game theory”. All these
suggestions are provided using autocomplete that uses Natural Language Processing to guess what
you want to ask. Search engines use their enormous data sets to analyze what their customers are
probably typing when they enter particular words and suggest the most common possibilities. They
use Natural Language Processing to make sense of these words and how they are interconnected
to form different sentences.
Voice Assistants
These days voice assistants are all the rage! Whether it's Siri, Alexa, or Google Assistant, almost
everyone uses one of these to make calls, place reminders, schedule meetings, set alarms, surf the
internet, etc. These voice assistants have made life much easier. But how do they work? They use
a complex combination of speech recognition, natural language understanding, and natural
language processing to understand what humans are saying and then act on it. The long term goal
of voice assistants is to become a bridge between humans and the internet and provide all manner
of services based on just voice interaction. However, they are still a little far from that goal seeing
as Siri still can’t understand what you are saying sometimes!
Language Translator
Want to translate a text from English to Hindi but don’t know Hindi? Well, Google Translate is
the tool for you! While it’s not exactly 100% accurate, it is still a great tool to convert text from
one language to another. Google Translate and other translation tools use sequence-to-sequence modeling, which is a technique in Natural Language Processing. It allows the algorithm to convert a sequence of words from one language to another, which is translation. Earlier, language
translators used Statistical machine translation (SMT) which meant they analyzed millions of
documents that were already translated from one language to another (English to Hindi in this
case) and then looked for the common patterns and basic vocabulary of the language. However,
this method was not that accurate as compared to Sequence to sequence modeling.
Applications of AI in Robotics
o Self-Moving Robots: AI makes robots really smart at moving around on their own. It's
like giving them a built-in GPS and a clever brain. They can figure out where to go and
how to get there without bumping into things or needing a person to show them the way.
This helps them do tasks like delivering packages or exploring places on their own, making
them super independent.
o Object Recognition and Manipulation: AI gives robots sharp eyes and clever hands. It
helps them see objects clearly and then pick them up and move them just right. This is
super useful, especially in places like warehouses, where they can do things like sorting
and packing items accurately.
o Collaboration of Humans and Robots: AI makes it possible for robots to be great team
players with people. They can work alongside humans, helping out and learning from them.
If a person does something, the robot can understand and follow their lead. This makes
workplaces safer and more efficient, like having a trusty robot colleague who understands
and supports you.