
Introduction to AI

The document outlines various types of artificial intelligence, including Artificial Narrow Intelligence (ANI), Generative AI, and Artificial General Intelligence (AGI), detailing their definitions, capabilities, and examples. It also discusses machine learning techniques such as supervised, unsupervised, and reinforcement learning, along with their applications and common algorithms. Additionally, it covers neural networks, deep learning, and graphical models, providing insights into their structures and functionalities.


1. ANI (Artificial Narrow Intelligence)

• Definition: ANI refers to AI designed to perform a single or narrow set of tasks.

• Capabilities: It is highly specialized but lacks general understanding or adaptability beyond its pre-defined functions.

• Examples:

o Virtual assistants like Siri and Alexa

o Recommendation systems (Netflix, YouTube)

o Image recognition in security systems

2. Generative AI

• Definition: A branch of AI that generates new content (text, images, music, code, etc.) based on training data.

• How it Works: Often uses deep learning models like GANs (Generative Adversarial Networks) or Transformers (GPT models).

• Examples:

o ChatGPT – generates human-like text

o DALL·E – creates images from text prompts

o MidJourney – AI-generated artwork

o Music generation models – create original compositions

3. AGI (Artificial General Intelligence)

• Definition: AGI refers to AI with human-like cognitive abilities. It can understand, learn, and apply knowledge across different domains without specific programming for each task.

• Goal: To create machines that can perform any intellectual task a human can do.

• Current Status: Still theoretical; no true AGI exists yet.

• Challenges: Complex human-like reasoning, adaptability, ethical and safety concerns.

Supervised Learning

Supervised Learning is a type of machine learning where the algorithm learns from labeled training data to make predictions or decisions. It’s like learning with a teacher guiding you. In supervised learning, the dataset has input-output pairs, where the input data (features) is linked to the correct output (labels), and the model tries to generalize this relationship.

How It Works:

1. Training Phase:
The algorithm learns by finding patterns in labeled data.
Example: If you’re training a model to identify fruits, the input might be
features like color, shape, and size, with labels like "apple," "banana,"
etc.

2. Prediction Phase:
Once trained, the model predicts labels for new, unseen data.

Types of Supervised Learning:

1. Classification:
Predicts discrete labels (categories).
Example: Email classification as "Spam" or "Not Spam."

2. Regression:
Predicts continuous values (numbers).
Example: Predicting house prices based on size, location, etc.

Common Algorithms:

• Linear Regression

• Logistic Regression

• Decision Trees

• Support Vector Machines (SVM)

• Neural Networks

• Random Forest

Real-Life Applications:

• Fraud Detection (Classifying transactions as fraudulent or not)

• Weather Forecasting (Predicting temperature)

• Medical Diagnosis (Identifying diseases based on symptoms)
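As a concrete illustration of the training and prediction phases above, here is a minimal supervised-learning sketch using scikit-learn (assumed to be installed) and its built-in Iris dataset; the dataset choice and model settings are illustrative, not part of the original notes.

```python
# A minimal supervised-learning sketch with scikit-learn:
# train a classifier on labeled data, then predict labels for unseen samples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)          # features (inputs) and labels (outputs)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)  # one of the common algorithms listed above
model.fit(X_train, y_train)                # training phase: learn from labeled pairs
predictions = model.predict(X_test)        # prediction phase: label new, unseen data
print("Accuracy:", accuracy_score(y_test, predictions))
```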

LLM (Large Language Model)

An LLM (Large Language Model) is a type of artificial intelligence model trained on massive amounts of text data to understand, generate, and manipulate human language. These models are based on neural networks, especially the Transformer architecture, and are capable of tasks like language generation, translation, summarization, question answering, and more.
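As a hedged sketch of what language generation looks like in code, the snippet below uses the Hugging Face transformers library with the small GPT-2 model; the library choice and model name are assumptions for illustration, not something the notes specify.

```python
# A minimal text-generation sketch using the Hugging Face `transformers` library
# (assumed installed); GPT-2 is downloaded on first run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # Transformer-based language model
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```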

Types of Data in AI

1. Structured Data (Organized and easily searchable)
Example: Excel sheets, databases (e.g., employee details, sales data, sensor readings).

2. Unstructured Data (Unorganized and more complex)
Example: Images, videos, text documents, emails, social media posts.

3. Semi-Structured Data (Hybrid format with some structure)
Example: JSON, XML, or log files.
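A small sketch contrasting structured and semi-structured data in Python; the records themselves are hypothetical and only illustrate the difference in shape.

```python
# Hypothetical records contrasting structured and semi-structured data.
import csv
import io
import json

# Structured: rows and columns with a fixed schema (like a database table).
structured = io.StringIO("name,department,salary\nAisha,Engineering,90000\nOmar,Sales,70000\n")
for row in csv.DictReader(structured):
    print(row["name"], row["salary"])

# Semi-structured: JSON has some structure (keys), but fields can vary per record.
record = json.loads('{"user": "saad", "tags": ["ai", "ml"], "profile": {"city": "Lahore"}}')
print(record["tags"], record["profile"]["city"])
```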

How Data Is Used in AI

1. Training Data: Used to teach the AI model (e.g., images labeled as “crack” or “no crack” in a construction defect detection model).

2. Validation Data: Helps tune the model to avoid overfitting.

3. Test Data: Evaluates how well the AI model generalizes to new, unseen data.
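A minimal sketch of how a dataset is commonly split into these three parts, using scikit-learn (assumed installed); the 60/20/20 proportions are illustrative.

```python
# Train/validation/test split with scikit-learn; X and y stand in for
# any feature matrix and label vector.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out 20% as the final test set, then carve a validation set out of the rest.
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.25, random_state=0)

print(len(X_train), "training /", len(X_val), "validation /", len(X_test), "test samples")
```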
Neural Network (NN)

A neural network is a computational model inspired by how the human brain works. It consists of layers of connected nodes (neurons) that process input data and make decisions or predictions.

Structure of a Neural Network:

1. Input Layer – Receives the data (e.g., an image, a sentence, or numerical data).

2. Hidden Layers – Process data through multiple neurons using weights and biases. Activation functions (like ReLU, Sigmoid) introduce non-linearity.

3. Output Layer – Produces the final result (e.g., classification label or numerical prediction).

Example: Classifying emails as spam or not spam.
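A minimal sketch of this input/hidden/output structure in PyTorch (assumed installed); the layer sizes and the random input batch are illustrative, not from the notes.

```python
# Input layer -> hidden layer (with ReLU) -> output layer (with Sigmoid).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 16),   # input layer -> hidden layer (10 input features)
    nn.ReLU(),           # activation introduces non-linearity
    nn.Linear(16, 1),    # hidden layer -> output layer
    nn.Sigmoid(),        # squashes output to a 0-1 "spam probability"
)

x = torch.randn(4, 10)         # a batch of 4 examples with 10 features each
spam_probability = model(x)    # forward pass: prediction between 0 and 1
print(spam_probability.shape)  # torch.Size([4, 1])
```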

Deep Learning (DL)

Deep Learning is a subfield of machine learning that uses deep neural networks—networks with many hidden layers. It’s called “deep” because of the depth (number of layers) in the network.

Why Deep Learning?

Traditional machine learning struggles with large datasets or unstructured data (images, text, etc.). Deep learning shines here by automatically learning complex features without manual intervention.

Common Deep Learning Architectures:

1. Feedforward Neural Network (FNN) – Basic architecture for structured data.

2. Convolutional Neural Network (CNN) – For image and video recognition.

3. Recurrent Neural Network (RNN) – For sequential data like time series or language models.

4. Transformer – Revolutionized natural language processing (e.g., GPT-based models).
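As one example of these architectures, here is a hedged CNN sketch in PyTorch (assumed installed) for the image-recognition use case; the 28x28 grayscale input shape and 10 output classes are assumptions made purely for illustration.

```python
# A tiny convolutional network: convolution learns local image features,
# pooling downsamples, and a linear layer classifies.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # learn local image features
    nn.ReLU(),
    nn.MaxPool2d(2),                            # downsample 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # classify into 10 categories
)

images = torch.randn(4, 1, 28, 28)  # batch of 4 fake grayscale images
logits = cnn(images)
print(logits.shape)                 # torch.Size([4, 10])
```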

1. Generative AI (Step-by-Step Explanation)

Definition: Generative AI focuses on creating new content, such as images, text, music, and more, by learning patterns from existing data.

How it Works:

1. Data Collection: The model is trained on large datasets (e.g., images, text documents).

2. Model Training: Generative models, such as GANs (Generative Adversarial Networks) or Transformers, learn to generate realistic content.

o GANs: Two neural networks (Generator & Discriminator) compete with each other. The generator creates fake data, and the discriminator tries to distinguish it from real data, until the generator produces high-quality content (a toy training loop is sketched after this list).

o Transformers: Used in text generation (like ChatGPT) for understanding and generating coherent sentences.
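A toy sketch of the adversarial loop described above, in PyTorch (assumed installed); the generator here learns to mimic a simple one-dimensional Gaussian rather than images, and all sizes and hyperparameters are illustrative.

```python
# Toy GAN: the generator learns to produce numbers that look like samples
# from a Gaussian with mean 5, while the discriminator tries to tell
# real samples from generated ones.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 1) * 2 + 5          # "real" data: Gaussian with mean 5
    noise = torch.randn(32, 8)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call fakes "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("Generated sample mean:", generator(torch.randn(500, 8)).mean().item())
```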

Example Applications:

• Text Generation: Chatbots, email writing assistance.

• Image Generation: AI-generated art or synthetic image datasets.

• Code Generation: Automatic code completion (like GitHub Copilot).

2. Unsupervised Learning (Step-by-Step Explanation)

Definition: A type of machine learning where the model finds patterns or structures in data without labeled outcomes.

How it Works:

1. Data Input: Raw, unlabeled data is provided (e.g., customer purchase history).

2. Pattern Detection: The model uses clustering or dimensionality reduction algorithms to group similar data points or reduce complexity (a short sketch follows this list).

o Clustering (e.g., k-means): Groups data into clusters based on similarity.

o Dimensionality Reduction (e.g., PCA): Reduces data to a smaller set of features while preserving important information.
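A minimal unsupervised-learning sketch with scikit-learn (assumed installed), combining k-means clustering and PCA on randomly generated, unlabeled data.

```python
# k-means groups unlabeled samples into clusters; PCA compresses the features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # 200 unlabeled samples, 5 features each

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))

X_2d = PCA(n_components=2).fit_transform(X)   # compress 5 features down to 2
print("Reduced shape:", X_2d.shape)           # (200, 2)
```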

Example Applications:

• Customer Segmentation: Grouping customers into segments for targeted marketing.

• Anomaly Detection: Detecting fraud by identifying unusual behavior.

• Recommendation Systems: Suggesting products based on similar user behaviors.

3. Reinforcement Learning (Step-by-Step Explanation)

Definition: A learning approach where an agent interacts with an environment, learns through trial and error, and receives rewards or penalties for its actions.

How it Works:

1. Agent & Environment: The agent performs actions in an environment (e.g., a robot moving in a room).

2. Reward Mechanism: The agent receives positive rewards for good actions and penalties for bad ones.

3. Policy Learning: The agent learns an optimal strategy (policy) to maximize rewards over time (a toy Q-learning sketch follows the key concepts below).

Key Concepts:

• State: The current condition of the agent.

• Action: What the agent can do.

• Reward: Feedback from the environment.

• Policy: The strategy the agent follows.
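A toy tabular Q-learning sketch that uses these concepts; the five-cell corridor environment, the two actions, and the reward of +1 at the goal are all illustrative assumptions.

```python
# Tabular Q-learning on a 5-cell corridor: the agent starts at cell 0 and
# earns a reward of +1 for reaching cell 4.
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))   # the policy is derived from this table
alpha, gamma, epsilon = 0.1, 0.9, 0.2

rng = np.random.default_rng(0)
for episode in range(500):
    state = 0
    while state != n_states - 1:                        # until the goal is reached
        if rng.random() < epsilon:                      # explore occasionally
            action = int(rng.integers(n_actions))
        else:                                           # otherwise exploit Q-values
            action = int(np.argmax(Q[state]))
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Update: move the estimate toward reward + discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print("Learned policy (0=left, 1=right):", np.argmax(Q, axis=1))
```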

Example Applications:

• Robotics: Teaching robots to walk or manipulate objects.

• Game AI: Training AI to play games like chess or Go.

• Self-Driving Cars: Learning optimal driving behavior.

4. Graphical Models (Step-by-Step Explanation)

Definition: A mathematical model that represents random variables and their conditional dependencies using a graph structure.

Types of Graphical Models:

1. Bayesian Networks: Directed graphs representing probabilistic relationships.

2. Markov Random Fields: Undirected graphs for modeling spatial or sequential data.

How it Works:

1. Nodes: Represent random variables (e.g., symptoms of a disease).

2. Edges: Represent dependencies between variables.

3. Inference: Calculates probabilities of certain outcomes given observed data (see the sketch below).
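A hand-rolled sketch of inference in the simplest possible Bayesian network, a single edge from Disease to Symptom, using Bayes' rule; the probabilities are made up purely for illustration.

```python
# Two-node Bayesian network Disease -> Symptom: compute P(Disease | Symptom).
p_disease = 0.01                         # prior: P(Disease = true)
p_symptom_given_disease = 0.9            # edge: P(Symptom | Disease = true)
p_symptom_given_no_disease = 0.05        # edge: P(Symptom | Disease = false)

# Inference: probability of the disease given that the symptom was observed.
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_no_disease * (1 - p_disease))
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom

print(f"P(Disease | Symptom) = {p_disease_given_symptom:.3f}")  # about 0.154
```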

Example Applications:

• Medical Diagnosis: Inferring the likelihood of a disease based on symptoms.

• Natural Language Processing: Sentence structure analysis.

• Image Denoising: Removing noise from images.

5. Planning (Step-by-Step Explanation)

Definition: AI planning involves creating a sequence of actions to achieve a specific goal or solve a problem.

How it Works:

1. Define Initial State: The current situation (e.g., robot at the start of a maze).

2. Define Goal State: The desired outcome (e.g., robot reaching the maze exit).

3. Action Space: Possible actions the agent can take (e.g., move forward, turn left).

4. Plan Generation: The AI generates an optimal sequence of actions to reach the goal (a small search-based sketch follows this list).
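A minimal planning sketch that uses breadth-first search over a tiny 3x3 grid maze to produce an action sequence from a start cell to a goal cell; the grid layout and action names are illustrative, and breadth-first search is just one simple planner among many.

```python
# Breadth-first search as a toy planner: find the shortest action sequence
# from the start cell to the goal cell on a small grid.
from collections import deque

grid = ["...",      # '.' = free cell, '#' = wall
        ".#.",
        "..."]
start, goal = (0, 0), (2, 2)
moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def plan(start, goal):
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        (r, c), actions = queue.popleft()
        if (r, c) == goal:
            return actions                      # optimal (shortest) action sequence
        for name, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3 and grid[nr][nc] == "." and (nr, nc) not in visited:
                visited.add((nr, nc))
                queue.append(((nr, nc), actions + [name]))
    return None

print(plan(start, goal))  # ['down', 'down', 'right', 'right']
```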

Example Applications:

• Robotics: Path planning for autonomous robots.

• Logistics: Optimizing delivery routes.

• Game AI: Planning strategies in video games.

6. Knowledge Graph (Step-by-Step Explanation)

Definition: A knowledge graph is a structured representation of entities and their relationships, often visualized as a graph.

How it Works:

1. Nodes: Represent entities (e.g., people, places, organizations).

2. Edges: Represent relationships between entities (e.g., "works at," "located in").

3. Querying: You can query the graph to find relationships or infer new knowledge (see the sketch below).
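A toy knowledge-graph sketch in plain Python, storing (subject, relation, object) triples and answering simple pattern queries; the entities and relations are hypothetical examples.

```python
# Entities as nodes, relationships as labeled edges, stored as triples.
triples = [
    ("Ada Lovelace", "born in", "London"),
    ("Ada Lovelace", "worked with", "Charles Babbage"),
    ("Charles Babbage", "designed", "Analytical Engine"),
    ("London", "located in", "United Kingdom"),
]

def query(subject=None, relation=None, obj=None):
    """Return every triple matching the given (possibly partial) pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

print(query(subject="Ada Lovelace"))   # everything known about one entity
print(query(relation="located in"))    # all "located in" relationships
```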

Example Applications:

• Search Engines: Google’s Knowledge Graph enhances search results by linking related information.

• Recommendation Systems: Suggesting movies or books based on relationships between genres and user preferences.

• Fraud Detection: Analyzing transaction networks to detect suspicious activity.
