AI Notes Module 1

Artificial Intelligence, B.Com 3rd semester, Module 1 notes (BU)

SYLLABUS

Course 1 – Azure AI Fundamentals (AI-900) (05 hours)
The AI-900 pathway consists of 5 courses and 2 reading materials:
i. Introduction to AI on Azure
ii. Use visual tools to create machine learning models with Azure Machine Learning
iii. Explore computer vision in Microsoft Azure
iv. Explore natural language processing
v. Explore conversational AI
vi. Tune Model Hyperparameters - Azure Machine Learning (Reading)
vii. Neural Network Regression: Module Reference - Azure Machine Learning (Reading)

Practical (13 hours)
1. Prepare the data
2. Model the data
3. Visualize the data
4. Analyse the data
5. Deploy and maintain deliverables

Course 2 – Data Analyst Associate (DA-100) (08 hours)
The DA-100 pathway consists of 5 courses and 2 reading materials:
1. Get started with Microsoft data analytics
2. Prepare data for analysis
3. Model data in Power BI
4. Visualize data in Power BI
5. Data analysis in Power BI
6. Manage workspaces and datasets in Power BI
7. Key Influencers Visualizations Tutorial - Power BI
8. Smart Narratives Tutorial - Power BI Microsoft Docs

Practical (13 hours)
1. Describe Artificial Intelligence workloads and considerations
ARTIFICIAL INTELLIGENCE, MACHINE LEARNING AND DEEP
LEARNING
• AI stands for Artificial Intelligence. It is a broad field of computer science that focuses on creating
intelligent machines capable of performing tasks that typically require human intelligence. AI
involves the development of algorithms and systems that can analyze data, reason, learn from
experience, and make decisions.
• ML stands for Machine Learning. It is a subset of AI that focuses on enabling computers to learn
and make predictions or decisions without being explicitly programmed. ML algorithms learn
from data and improve their performance over time through experience. They can recognize
patterns, make predictions, and uncover insights from large datasets.
• DL stands for Deep Learning. It is a subfield of ML that is inspired by the structure and function of
the human brain's neural networks. Deep Learning models, also known as artificial neural
networks, are designed to automatically learn hierarchical representations of data by using
multiple layers of interconnected nodes, or "neurons." DL algorithms excel at solving complex
problems, such as image and speech recognition, natural language processing, and autonomous
driving, by learning directly from raw data.
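The "neurons" described above can be sketched in a few lines of Python. This is a toy illustration only (the weights and inputs here are arbitrary, not learned): each neuron computes a weighted sum of its inputs plus a bias, applies an activation function, and deep learning stacks layers of such neurons.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum + sigmoid activation."""
    # Weighted sum of the inputs plus a bias term
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The sigmoid squashes the result into the range (0, 1)
    return 1 / (1 + math.exp(-z))

# A deep network stacks such neurons into layers: the outputs of
# one layer become the inputs of the next. (Toy weights, not trained.)
hidden = [neuron([1.0, 0.5], [0.4, -0.6], 0.1),
          neuron([1.0, 0.5], [-0.3, 0.8], 0.0)]
output = neuron(hidden, [1.2, -0.7], 0.2)
print(round(output, 3))
```

In a real deep-learning system the weights are not hand-picked; they are adjusted automatically during training so the network's outputs match labeled examples.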

Here are some examples of AI, ML, and DL applications:

Artificial Intelligence

1. Personal Assistants - Virtual assistants like Siri, Google Assistant, and Amazon Alexa use AI to
understand natural language, perform tasks, and provide information or assistance to users.
2. Chatbots - AI-powered chatbots are used in customer service to interact with users, answer
questions, provide recommendations, and assist with various tasks, often using natural
language processing and machine learning techniques.
3. Autonomous Vehicles - Self-driving cars leverage AI technologies, including computer vision,
sensor fusion, and decision-making algorithms, to navigate roads, detect obstacles, and make
real-time driving decisions.

Machine Learning

1. Spam Detection - Email providers often use ML algorithms to analyze incoming emails and
classify them as either spam or legitimate based on patterns and characteristics identified in
a large dataset of labeled emails.
2. Recommendation Systems - Online platforms like Netflix and Amazon use ML algorithms to
analyze user preferences, past behavior, and item attributes to generate personalized
recommendations for movies, TV shows, products, and more.
3. Fraud Detection - ML algorithms can analyze large volumes of financial transaction data and
detect patterns indicative of fraudulent activity, helping financial institutions identify and
prevent fraudulent transactions.
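The spam-detection idea above can be illustrated with a deliberately tiny sketch. This is not how real providers work (they train statistical models over huge labeled datasets), but it shows the core pattern: learn word signals from labeled examples, then score new messages. All messages and word lists below are made up.

```python
# A toy spam classifier: learn which words appear in labeled spam
# vs. legitimate ("ham") emails, then score new messages by which
# set of learned words they match more.
training = [
    ("win cash prize now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on friday", "ham"),
]

spam_words, ham_words = set(), set()
for text, label in training:
    (spam_words if label == "spam" else ham_words).update(text.split())

def classify(text):
    words = set(text.lower().split())
    spam_hits = len(words & spam_words)   # overlap with spam vocabulary
    ham_hits = len(words & ham_words)     # overlap with ham vocabulary
    return "spam" if spam_hits > ham_hits else "ham"

print(classify("free cash prize"))
print(classify("monday meeting"))
```

Real systems replace the simple word-set overlap with learned per-word probabilities (e.g. naive Bayes) or neural models, but the learn-from-labels structure is the same.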

Deep Learning

1. Image Recognition - Deep Learning models, such as Convolutional Neural Networks (CNNs),
can be trained to accurately classify and recognize objects in images. Applications include
facial recognition, self-driving cars identifying objects on the road, and medical image
analysis.
2. Natural Language Processing - Deep Learning models, such as Recurrent Neural Networks
(RNNs) and Transformers, are used in language processing tasks like machine translation,
sentiment analysis, and speech recognition.
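The CNNs mentioned above are built from one core operation, the convolution: a small filter slides across the image and computes weighted sums of the pixels beneath it. Here is a minimal sketch with made-up toy values; the vertical-edge filter responds where brightness changes from left to right.

```python
# A 4x4 grayscale "image": dark on the left, bright on the right.
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
# A 3x3 vertical-edge filter (kernel).
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, k):
    """Slide the 3x3 kernel over the image; each output pixel is the
    weighted sum of the 3x3 neighbourhood under the kernel."""
    out = []
    for i in range(len(img) - 2):
        row = []
        for j in range(len(img[0]) - 2):
            row.append(sum(img[i + a][j + b] * k[a][b]
                           for a in range(3) for b in range(3)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)
```

A CNN applies many such filters, but instead of being hand-designed like this edge detector, the filter weights are learned from training images.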

ARTIFICIAL INTELLIGENCE TECHNOLOGY LANDSCAPE

1. Autonomous systems - Self-operating machines or software capable of performing tasks


without human intervention.
2. Neural networks - Computational models inspired by the human brain's interconnected
neurons, used for pattern recognition and machine learning.
3. Pattern recognition - The process of identifying and classifying patterns or regularities in
data.
4. Natural language processing - AI technology that enables computers to understand,
interpret, and generate human language.
5. Chatbots - AI-powered conversational agents designed to interact with users and provide
automated responses or assistance.
6. Real-time emotion analytics - Analyzing and interpreting human emotions in real-time using
AI algorithms and data processing.
7. Virtual companions - AI-based digital entities that simulate human-like interactions and
provide companionship or assistance.
8. Real-time universal translation - AI technology that enables instant translation between
different languages in real-time.
9. Thought-controlled gaming - Gaming systems that use brain-computer interface technology
to allow users to control the game using their thoughts.
10. Next-gen cloud robotics - Integration of AI, robotics, and cloud computing to enable
advanced capabilities and collaboration among robots.
11. Autonomous surgical robotics - Robotic systems designed to perform surgical procedures
with minimal human intervention.
12. Robotic personal assistants - AI-powered robots or software that assist users in daily tasks
or activities.
13. Cognitive cybersecurity - AI-based systems that use machine learning and natural language
processing to detect and respond to cyber threats.
14. Neuromorphic computing - Computing systems designed to mimic the structure and
function of the human brain for efficient and parallel processing.

Trends of AI in Healthcare
1. Early detection - Using AI and ML algorithms to analyze medical data and identify early
signs or risk factors for diseases or conditions.
2. Diagnosis - Leveraging AI systems to assist in diagnosing diseases or medical conditions by
analyzing patient data, medical images, and clinical records.
3. Decision making - AI systems providing support to healthcare professionals in making
informed decisions about treatment plans or interventions based on patient data and
medical knowledge.
4. Treatment - Utilizing AI and ML techniques to develop personalized treatment plans,
recommend therapies, or optimize drug dosage based on patient-specific characteristics
and medical evidence.
5. End of life care - AI systems providing support and guidance in managing palliative care,
pain management, and emotional support for patients and their families during the end-of-
life stage.
6. Research - AI algorithms analyzing large-scale medical datasets to uncover patterns,
correlations, and insights that can contribute to medical research, drug discovery, and
advancements in healthcare.
7. Training - AI-based simulations, virtual reality, or augmented reality tools used for medical
training and education, allowing healthcare professionals to practice procedures, surgical
techniques, and decision-making in a safe and controlled environment.

Difference between AI, ML and DL

• Scope: AI is the broadest field; ML is a subset of AI; DL is a subset of ML.
• Approach: AI covers any technique that makes machines behave intelligently; ML learns patterns from data rather than following explicitly programmed rules; DL uses multi-layered artificial neural networks to learn hierarchical representations directly from raw data.
• Data and compute: ML methods can work with modest, structured datasets, while DL typically needs large datasets and substantially more computing power.

Who uses Machine Learning?
1. Financial Services: Banks and other businesses in the financial industry use machine
learning technology for two key purposes: to identify important insights in data, and
prevent fraud. The insights can identify investment opportunities, or help investors know
when to trade. Data mining can also identify clients with high-risk profiles, or use cyber-
surveillance to pinpoint warning signs of fraud.
2. Government Agencies: Government agencies such as public safety and utilities have a
particular need for machine learning since they have multiple sources of data that can be
mined for insights. Analyzing sensor data, for example, identifies ways to increase efficiency
and save money. Machine learning can also help detect fraud and minimize identity theft.
3. Healthcare: Machine learning is a fast-growing trend in the health care industry, thanks to
the advent of wearable devices and sensors that can use data to assess a patient's health in
real time. The technology can also help medical experts analyze data to identify trends or
red flags that may lead to improved diagnoses and treatment.
4. Retail: Websites recommending items you might like based on previous purchases are using
machine learning to analyze your buying history. Retailers rely on machine learning to
capture data, analyze it and use it to personalize a shopping experience, implement a
marketing campaign, price optimization, merchandise supply planning, and for customer
insights.
5. Oil and Gas: Finding new energy sources. Analyzing minerals in the ground. Predicting
refinery sensor failure. Streamlining oil distribution to make it more efficient and cost-
effective. The number of machine learning use cases for this industry is vast – and still
expanding.
6. Transportation and Logistics: Analyzing data to identify patterns and trends is key to the
transportation industry, which relies on making routes more efficient and predicting
potential problems to increase profitability. The data analysis and modeling aspects of
machine learning are important tools to delivery companies, public transportation and other
transportation organizations.

Machine Learning Methods


1. Supervised Learning: Supervised learning is commonly used in applications where historical
data predicts likely future events. For example, it can anticipate when credit card transactions
are likely to be fraudulent or which insurance customer is likely to file a claim. The most
common fields of use for supervised learning are price prediction and trend forecasting in
sales, retail commerce, and stock trading. In both cases, an algorithm uses incoming data to
assess the possibility and calculate possible outcomes.
2. Unsupervised Learning: Unsupervised learning works well on transactional data. For
example, it can identify segments of customers with similar attributes who can then be
treated similarly in marketing campaigns. Or it can find the main attributes that separate
customer segments from each other. Digital marketing and ad tech are the fields where
unsupervised learning is used to its maximum effect. In addition to that, this algorithm is
often applied to explore customer information and adjust the service accordingly.
3. Reinforcement Learning: This is often used for robotics, gaming, and navigation. With
reinforcement learning, the system discovers through trial and error which actions yield the
greatest rewards. Modern NPCs (non-player characters) in video games make heavy use of
this type of machine learning model. Reinforcement learning lets the AI adapt its reactions
to the player's actions, providing viable challenges. For example, the collision-detection
feature for moving vehicles and people in the Grand Theft Auto series uses this type of ML
algorithm.
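The trial-and-error loop described for reinforcement learning can be sketched as a toy "two-button" problem (a multi-armed bandit; the payout probabilities below are made up). The agent starts knowing nothing, tries actions, observes rewards, and gradually favors the action that pays off most.

```python
import random

random.seed(0)
rewards = {0: 0.2, 1: 0.8}   # true payout probabilities (hidden from the agent)
value = {0: 0.0, 1: 0.0}     # the agent's running reward estimates
counts = {0: 0, 1: 0}

for step in range(2000):
    # Explore a random action 10% of the time; otherwise exploit
    # the action with the best estimate so far.
    if random.random() < 0.1:
        action = random.choice([0, 1])
    else:
        action = max(value, key=value.get)
    reward = 1 if random.random() < rewards[action] else 0
    counts[action] += 1
    # Incremental running average of the observed rewards
    value[action] += (reward - value[action]) / counts[action]

print("learned estimates:", value)
```

After enough trials the estimate for action 1 rises toward its true payout rate, so the agent exploits it; full reinforcement learning extends this idea to sequences of states and actions rather than a single choice.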

Artificial Intelligence and Azure


Definition - Azure is a cloud computing platform offered by Microsoft that provides a comprehensive
suite of services and tools for building, deploying, and managing applications and services.

Azure provides a robust platform for developing, deploying, and managing AI applications. Azure
offers a wide range of AI services and tools, such as Azure Cognitive Services and Azure Machine
Learning, that enable developers to integrate AI capabilities into their applications without the need
for extensive infrastructure setup. Azure provides scalable computing resources, data storage and
analytics capabilities, and a supportive developer ecosystem, making it an ideal platform for
leveraging AI technologies and building intelligent applications.

Popular tools for visualizing machine learning experiments


1. TensorBoard: TensorBoard is a web-based visualization tool provided by TensorFlow that
allows users to visualize and analyze machine learning models. It provides interactive
visualizations of metrics, graphs, and histograms to monitor and debug model performance.
2. MLflow: MLflow is an open-source platform that offers experiment tracking, model
packaging, and deployment capabilities. It includes a UI component that allows users to
visualize experiment runs, compare metrics, and track model versions.
3. Neptune: Neptune is a cloud-based experiment management tool designed for machine
learning workflows. It provides interactive visualizations, charts, and tables to track
experiment results, compare models, and collaborate with team members.
4. Sacredboard: Sacredboard is a lightweight web interface for visualizing experiments
conducted with the Sacred library. It provides real-time visualizations of metrics, plots, and
configurations, making it easy to monitor and analyze experiment results.
5. Comet.ml: Comet.ml is a cloud-based experiment tracking and visualization platform. It
offers interactive visualizations, comparison tools, and collaboration features to track and
analyze experiments across different frameworks and languages.
6. Guild AI: Guild AI is a command-line tool that helps with experiment management and
visualization. It provides various visualizations, including performance comparisons and
model diagnostics, to gain insights into experiment results.
7. Weights & Biases: Weights & Biases is a platform that offers experiment tracking,
visualization, and collaboration features. It provides interactive visualizations, charts, and
dashboards to monitor and analyze machine learning experiments.

Computer Vision
Computer vision is a field of study where computers learn to see and understand images and videos,
just like humans do. It involves developing algorithms and techniques that allow computers to
extract information from visual data and make sense of what they "see." In simpler terms, computer
vision is about teaching computers to recognize and understand the world through visual
information. It enables computers to identify objects, detect patterns, recognize faces, understand
gestures, and even interpret emotions from images or videos. This technology has many practical
applications, such as self-driving cars, facial recognition, medical imaging, surveillance systems, and
augmented reality.
Computer vision algorithms analyze pixels in images or frames in videos, looking for specific patterns,
shapes, colors, or movements. By using machine learning techniques, computers can learn from
large datasets to recognize objects, classify images, and perform tasks like object tracking or image
segmentation.
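The "pixels as numbers" idea can be shown concretely. Below is a made-up 3x3 grayscale image; thresholding its pixel values to separate bright objects from a dark background is one of the simplest computer-vision operations.

```python
# A grayscale image is just a grid of pixel intensities
# (0 = black, 255 = white). This tiny 3x3 example has a bright
# region on the right and a dark background on the left.
image = [
    [ 12,  10, 200],
    [  9, 210, 215],
    [  8,  11, 198],
]

# Thresholding: mark every pixel brighter than 128 as "object" (1)
# and everything else as "background" (0).
threshold = 128
mask = [[1 if px > threshold else 0 for px in row] for row in image]

for row in mask:
    print(row)
```

Modern vision models operate on the same numeric grids, only with millions of pixels and learned filters instead of a hand-picked threshold.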
Computer vision is an exciting field that aims to bridge the gap between human vision and machine
intelligence. Its applications are diverse and can enhance various aspects of our lives, making
computers more perceptive and enabling them to assist us in tasks that require visual understanding.
Working of Computer Vision
Scenario 1

To an AI application, an image is just an array of pixel values. These numeric values can be used as
features to train machine learning models that make predictions about the image and its contents.

Training machine learning models from scratch can be very time intensive and require a large amount
of data. Microsoft's Computer Vision service gives you access to pre-trained computer vision
capabilities.
Key capabilities:

1. Image analysis - Analyzing and extracting meaningful information from images.
2. Face detection - Identifying and locating human faces in images or videos.
3. Optical character recognition - Converting printed or handwritten text into machine-readable text.

Conversational AI

Natural Language Processing


Natural Language Processing (NLP) is a field of study where computers learn to understand and
interact with human language, just like how we communicate with each other. It involves developing
algorithms and techniques that allow computers to process and interpret text or speech, enabling
them to understand, analyze, and generate human language.

In simpler terms, NLP is about teaching computers to understand and make sense of human
language. It helps computers to read, comprehend, and respond to text or speech in a way that is
similar to how we humans do. NLP algorithms can recognize the meaning of words, identify the
structure of sentences, extract information from texts, and even generate coherent responses.

NLP technology powers many everyday applications, such as virtual assistants (like Siri or Alexa),
chatbots, language translation services, spam filters, sentiment analysis tools, and recommendation
systems. It allows computers to understand our questions, provide relevant answers, analyze our
sentiments, and assist us in various language-related tasks.

NLP algorithms use techniques like machine learning and natural language understanding to process
and interpret human language. They learn from large amounts of text data to recognize patterns,
understand context, and make predictions about the meaning of words or sentences.
In summary, NLP is a field that focuses on enabling computers to understand and interact with
human language, enabling them to read, comprehend, and generate text or speech in a way that
resembles human communication.

Examples of NLP
1. Sentiment Analysis: Analyzing text to determine the sentiment expressed (positive, negative,
neutral).
2. Language Translation: Automatic translation of text from one language to another.
3. Chatbots: AI-powered conversational agents that interact with users in natural language.
4. Named Entity Recognition: Identifying and extracting specific entities from text (names,
locations, dates).
5. Question Answering Systems: Understanding and providing accurate answers to user
questions using NLP techniques.
6. Text Summarization: Generating concise summaries of long texts by extracting key
information.
7. Text Classification: Categorizing text into predefined categories or topics.
8. Voice Assistants: Virtual assistants that understand spoken commands and perform tasks
using NLP capabilities.
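Sentiment analysis (example 1 above) can be sketched with a simple word-lexicon approach. Real NLP systems use trained models rather than fixed word lists, and the lexicons below are illustrative only.

```python
# A minimal lexicon-based sentiment analyzer: count positive and
# negative words in the text and compare the totals.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))
print(sentiment("terrible service, I hate it"))
```

This sketch misses negation ("not good"), sarcasm, and context, which is exactly why production systems learn sentiment from large labeled corpora instead of counting words.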

Working of Conversational AI
How to Create Conversational AI
Difference between Traditional Chatbots and AI-Powered Chatbots

Traditional chatbots (low complexity):
▪ Basic answer and response machines
▪ Allow for simple integration
▪ Based on limited …

AI-powered chatbots (focused, transactional):
▪ Can manage complex dialogues
▪ Integrate with multiple legacy/back-end systems

AI-powered chatbots (complex, contextual):
▪ Goes beyond conversations
▪ Contextually aware and intelligent
▪ Can self-learn and improve over time

Tune Model Hyperparameters
Tuning model hyperparameters refers to the process of selecting the optimal values for various
parameters that define the architecture, settings, or behavior of a machine learning model. Here's
a definition of "Tune Model Hyperparameters" in the context of Conversational AI:

In Conversational AI, tuning model hyperparameters involves adjusting the settings and
configurations of the conversational model to optimize its performance and achieve the desired
outcomes. This process helps improve the accuracy, robustness, and responsiveness of the
Conversational AI system.

Hyperparameters can vary depending on the specific conversational model or framework being
used. Here are some examples of hyperparameters that can be tuned in Conversational AI:

1. Learning rate: This hyperparameter determines the step size at which the model's
parameters are updated during training. Adjusting the learning rate can impact the
convergence speed and stability of the model.
2. Number of layers: The number of layers in a conversational model, such as a neural
network, can affect its capacity to capture complex patterns and generalize well. Tuning the
number of layers helps find the right balance between model complexity and generalization
ability.
3. Hidden units/neurons: The number of hidden units or neurons in each layer of a
conversational model influences its representational power and computational efficiency.
Finding the appropriate number of hidden units can optimize the model's performance.
4. Activation functions: Choosing the right activation functions for the model's neurons can
impact its ability to capture non-linear relationships within the conversational data.
Common choices include sigmoid, ReLU, or tanh functions, and tuning them can improve
the model's performance.
5. Regularization parameters: Regularization techniques like L1 or L2 regularization help
prevent overfitting by adding penalties to the model's loss function. Tuning the
regularization parameters helps control the trade-off between model complexity and
regularization strength.
6. Dropout rate: Dropout is a regularization technique that randomly drops out a fraction of
neurons during training to reduce over-reliance on specific connections. Tuning the dropout
rate helps control the amount of regularization applied and prevent overfitting.
7. Batch size: The batch size determines the number of training examples processed in each
iteration during training. Adjusting the batch size can affect the convergence speed,
memory usage, and generalization performance of the model.
8. Epochs: The number of epochs defines the number of times the training data is passed
through the model during training. Tuning the number of epochs helps strike a balance
between underfitting and overfitting, ensuring the model converges to an optimal state.

The process of tuning hyperparameters typically involves performing multiple experiments with
different parameter values, evaluating the model's performance on a validation set, and selecting
the best combination of hyperparameters that maximizes the desired metrics (e.g., accuracy, F1
score, or user satisfaction).
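That experiment loop — train with each hyperparameter combination, evaluate on a validation set, keep the best — can be sketched as a small grid search. The toy model, data, and hyperparameter values below are all made up for illustration.

```python
# Grid search over two hyperparameters (learning rate and epochs)
# for a toy model y = w * x fitted by gradient descent.
train = [(1, 2), (2, 4), (3, 6)]   # underlying rule: y = 2x
valid = [(4, 8), (5, 10)]          # held-out validation set

def fit(lr, epochs):
    """Train weight w by gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in train:
            w -= lr * 2 * (w * x - y) * x   # gradient step
    return w

def val_error(w):
    """Total squared error on the validation set."""
    return sum((w * x - y) ** 2 for x, y in valid)

best = None
for lr in [0.001, 0.01, 0.05]:
    for epochs in [5, 50]:
        err = val_error(fit(lr, epochs))
        # Keep the combination with the lowest validation error
        if best is None or err < best[0]:
            best = (err, lr, epochs)

print("best (error, lr, epochs):", best)
```

Azure's Tune Model Hyperparameters component automates exactly this loop (and smarter variants such as random sweeps) over the parameter ranges you specify, so you do not write the search by hand.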

Train a model by using a parameter sweep


This section describes how to perform a basic parameter sweep, which trains a model by using the
Tune Model Hyperparameters component.

1. Add the Tune Model Hyperparameters component to your pipeline in the designer.
2. Connect an untrained model to the leftmost input.
3. Add the dataset that you want to use for training, and connect it to the middle input of
Tune Model Hyperparameters.
