AI Notes Module 1
Practical 13 hours
1. Describe Artificial Intelligence workloads and
considerations
ARTIFICIAL INTELLIGENCE, MACHINE LEARNING AND DEEP
LEARNING
• AI stands for Artificial Intelligence. It is a broad field of computer science that focuses on creating
intelligent machines capable of performing tasks that typically require human intelligence. AI
involves the development of algorithms and systems that can analyze data, reason, learn from
experience, and make decisions.
• ML stands for Machine Learning. It is a subset of AI that focuses on enabling computers to learn
and make predictions or decisions without being explicitly programmed. ML algorithms learn
from data and improve their performance over time through experience. They can recognize
patterns, make predictions, and uncover insights from large datasets.
• DL stands for Deep Learning. It is a subfield of ML that is inspired by the structure and function of
the human brain's neural networks. Deep Learning models, also known as artificial neural
networks, are designed to automatically learn hierarchical representations of data by using
multiple layers of interconnected nodes, or "neurons." DL algorithms excel at solving complex
problems, such as image and speech recognition, natural language processing, and autonomous
driving, by learning directly from raw data.
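The idea that an ML model "learns from data and improves through experience" can be sketched in a few lines. The following toy example (data and learning rate invented for illustration) fits the slope of a line with gradient descent; each pass over the data is one unit of "experience":

```python
# A minimal illustration of "learning from data": fitting a line y = w * x
# with gradient descent. The data and learning rate are made up for the example.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # pairs (x, y) following y = 2x

w = 0.0    # model parameter, starts with no knowledge
lr = 0.01  # learning rate (a hyperparameter)

for epoch in range(200):      # repeated exposure = "experience"
    for x, y in data:
        error = w * x - y     # how wrong the current prediction is
        w -= lr * error * x   # nudge w to reduce the squared error

print(round(w, 2))  # converges toward 2.0, the true slope
```

No rule "multiply by 2" was ever programmed; the parameter was discovered from examples, which is the essence of machine learning.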
Artificial Intelligence
1. Personal Assistants - Virtual assistants like Siri, Google Assistant, and Amazon Alexa use AI to
understand natural language, perform tasks, and provide information or assistance to users.
2. Chatbots - AI-powered chatbots are used in customer service to interact with users, answer
questions, provide recommendations, and assist with various tasks, often using natural
language processing and machine learning techniques.
3. Autonomous Vehicles - Self-driving cars leverage AI technologies, including computer vision,
sensor fusion, and decision-making algorithms, to navigate roads, detect obstacles, and make
real-time driving decisions.
Machine Learning
1. Spam Detection - Email providers often use ML algorithms to analyze incoming emails and
classify them as either spam or legitimate based on patterns and characteristics identified in
a large dataset of labeled emails.
2. Recommendation Systems - Online platforms like Netflix and Amazon use ML algorithms to
analyze user preferences, past behavior, and item attributes to generate personalized
recommendations for movies, TV shows, products, and more.
3. Fraud Detection - ML algorithms can analyze large volumes of financial transaction data and
detect patterns indicative of fraudulent activity, helping financial institutions identify and
prevent fraudulent transactions.
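The spam-detection example above can be made concrete with a tiny classifier in the spirit of Naive Bayes. All "training emails" below are invented for illustration, not a real labeled dataset:

```python
from collections import Counter
import math

# Toy spam classifier: learn word frequencies per class from labeled examples,
# then score new text by summed log-probabilities with add-one smoothing.
train = [
    ("win money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(text):
    vocab = len(set(w for c in counts.values() for w in c))
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(
            math.log((c[w] + 1) / (total + vocab)) for w in text.split()
        )
    return max(scores, key=scores.get)

print(classify("claim your free money"))  # → spam
print(classify("team meeting tomorrow"))  # → ham
```

Real providers train on millions of labeled emails with far richer features, but the principle — patterns learned from labeled data, not hand-written rules — is the same.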
Deep Learning
1. Image Recognition - Deep Learning models, such as Convolutional Neural Networks (CNNs),
can be trained to accurately classify and recognize objects in images. Applications include
facial recognition, self-driving cars identifying objects on the road, and medical image
analysis.
2. Natural Language Processing - Deep Learning models, such as Recurrent Neural Networks
(RNNs) and Transformers, are used in language processing tasks like machine translation,
sentiment analysis, and speech recognition.
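The "multiple layers of interconnected neurons" behind these models can be sketched as a forward pass through two fully connected layers. The weights here are hand-picked for illustration, not learned:

```python
# A minimal "deep" forward pass: two layers of neurons with a ReLU in between.
def relu(v):
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    # One fully connected layer: each neuron computes a weighted sum + bias.
    return [
        sum(w * x for w, x in zip(neuron, inputs)) + b
        for neuron, b in zip(weights, biases)
    ]

x = [1.0, 2.0]                                               # raw input features
h = relu(dense(x, [[0.5, -1.0], [1.0, 1.0]], [0.0, -1.0]))   # hidden layer
y = dense(h, [[1.0, 0.5]], [0.0])                            # output layer
print(y)  # → [1.0]
```

Each layer transforms the previous layer's output, which is what lets deep networks build up hierarchical representations; training (adjusting the weights) is what a real framework like PyTorch or TensorFlow adds on top.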
Trends of AI in Healthcare
1. Early detection - Using AI and ML algorithms to analyze medical data and identify early
signs or risk factors for diseases or conditions.
2. Diagnosis - Leveraging AI systems to assist in diagnosing diseases or medical conditions by
analyzing patient data, medical images, and clinical records.
3. Decision making - AI systems providing support to healthcare professionals in making
informed decisions about treatment plans or interventions based on patient data and
medical knowledge.
4. Treatment - Utilizing AI and ML techniques to develop personalized treatment plans,
recommend therapies, or optimize drug dosage based on patient-specific characteristics
and medical evidence.
5. End of life care - AI systems providing support and guidance in managing palliative care,
pain management, and emotional support for patients and their families during the end-of-
life stage.
6. Research - AI algorithms analyzing large-scale medical datasets to uncover patterns,
correlations, and insights that can contribute to medical research, drug discovery, and
advancements in healthcare.
7. Training - AI-based simulations, virtual reality, or augmented reality tools used for medical
training and education, allowing healthcare professionals to practice procedures, surgical
techniques, and decision-making in a safe and controlled environment.
Azure provides a robust platform for developing, deploying, and managing AI applications. Azure
offers a wide range of AI services and tools, such as Azure Cognitive Services and Azure Machine
Learning, that enable developers to integrate AI capabilities into their applications without the need
for extensive infrastructure setup. Azure provides scalable computing resources, data storage and
analytics capabilities, and a supportive developer ecosystem, making it an ideal platform for
leveraging AI technologies and building intelligent applications.
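As a sketch of how a Cognitive Services call is shaped, the snippet below builds (but does not send) a request to the Computer Vision "analyze" REST endpoint. The endpoint, key, and API version are placeholders/assumptions — check the Azure portal and current documentation for your resource's actual values:

```python
import json
import urllib.request

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-subscription-key>"

# Assumed v3.2 analyze endpoint; visualFeatures selects what to extract.
url = endpoint + "/vision/v3.2/analyze?visualFeatures=Description,Tags"
body = json.dumps({"url": "https://example.com/image.jpg"}).encode()

req = urllib.request.Request(
    url,
    data=body,
    headers={
        "Ocp-Apim-Subscription-Key": key,  # authenticates the request
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; the JSON response contains
# a generated description and tags for the image.
```

The point is that no model training or infrastructure is involved on the caller's side — a single authenticated HTTP request returns the pre-trained model's analysis.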
Computer Vision
Computer vision is a field of study where computers learn to see and understand images and videos,
just like humans do. It involves developing algorithms and techniques that allow computers to
extract information from visual data and make sense of what they "see." In simpler terms, computer
vision is about teaching computers to recognize and understand the world through visual
information. It enables computers to identify objects, detect patterns, recognize faces, understand
gestures, and even interpret emotions from images or videos. This technology has many practical
applications, such as self-driving cars, facial recognition, medical imaging, surveillance systems, and
augmented reality.
Computer vision algorithms analyze pixels in images or frames in videos, looking for specific patterns,
shapes, colors, or movements. By using machine learning techniques, computers can learn from
large datasets to recognize objects, classify images, and perform tasks like object tracking or image
segmentation.
Computer vision is an exciting field that aims to bridge the gap between human vision and machine
intelligence. Its applications are diverse and can enhance various aspects of our lives, making
computers more perceptive and enabling them to assist us in tasks that require visual understanding.
Working of Computer Vision
Scenario 1
To an AI application, an image is just an array of pixel values. These numeric values can be used as
features to train machine learning models that make predictions about the image and its contents.
Training machine learning models from scratch can be very time intensive and require a large amount
of data. Microsoft's Computer Vision service gives you access to pre-trained computer vision
capabilities.
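The "image is just an array of pixel values" idea can be shown directly. Below, a tiny made-up 3x3 grayscale image is flattened into numbers and a single hand-crafted feature (mean brightness) drives a trivial classifier; the threshold is arbitrary:

```python
# 0 = black, 255 = white; the pixel values are invented for the example.
image = [
    [200, 210, 205],
    [198, 220, 215],
    [190, 205, 210],
]

pixels = [p for row in image for p in row]   # flatten to a feature vector
mean_brightness = sum(pixels) / len(pixels)  # one numeric feature

label = "light" if mean_brightness > 128 else "dark"
print(label)  # → light
```

Real computer vision models consume the same kind of numeric arrays, only with millions of pixels and learned (rather than hand-written) features — which is why pre-trained services save so much effort.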
Conversational AI
Conversational AI builds on Natural Language Processing (NLP). In simpler terms, NLP is about teaching computers to understand and make sense of human language. It helps computers read, comprehend, and respond to text or speech in a way that is similar to how we humans do. NLP algorithms can recognize the meaning of words, identify the structure of sentences, extract information from texts, and even generate coherent responses.
NLP technology powers many everyday applications, such as virtual assistants (like Siri or Alexa),
chatbots, language translation services, spam filters, sentiment analysis tools, and recommendation
systems. It allows computers to understand our questions, provide relevant answers, analyze our
sentiments, and assist us in various language-related tasks.
NLP algorithms use techniques like machine learning and natural language understanding to process
and interpret human language. They learn from large amounts of text data to recognize patterns,
understand context, and make predictions about the meaning of words or sentences.
In summary, NLP is a field that focuses on enabling computers to understand and interact with
human language, enabling them to read, comprehend, and generate text or speech in a way that
resembles human communication.
Examples of NLP
1. Sentiment Analysis: Analyzing text to determine the sentiment expressed (positive, negative,
neutral).
2. Language Translation: Automatic translation of text from one language to another.
3. Chatbots: AI-powered conversational agents that interact with users in natural language.
4. Named Entity Recognition: Identifying and extracting specific entities from text (names,
locations, dates).
5. Question Answering Systems: Understanding and providing accurate answers to user
questions using NLP techniques.
6. Text Summarization: Generating concise summaries of long texts by extracting key
information.
7. Text Classification: Categorizing text into predefined categories or topics.
8. Voice Assistants: Virtual assistants that understand spoken commands and perform tasks
using NLP capabilities.
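Sentiment analysis (example 1 above) can be illustrated with a deliberately simple word-counting approach. The positive and negative word lists are invented for this sketch; real NLP systems learn such associations from data rather than using fixed lists:

```python
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text):
    # Score = (# positive words) - (# negative words).
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))   # → positive
print(sentiment("terrible and awful service"))  # → negative
```

This lexicon approach fails on negation ("not good") and sarcasm, which is exactly why modern sentiment tools use machine-learned models instead.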
Working of Conversational AI
How to Create Conversational AI
Difference between Traditional Chatbots and AI-Powered Chatbots

Traditional Chatbots:
▪ Low complexity
▪ Focused, transactional
▪ Basic answer and response machines
▪ Allow for simple integration
▪ Based on limited dialogues

AI-Powered Chatbots:
▪ Complex, contextual conversations
▪ Go beyond basic answer and response
▪ Can manage complex dialogues
▪ Contextually aware and intelligent
▪ Integrate with multiple legacy/back-end systems
▪ Can self-learn and improve over time
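A minimal "traditional" chatbot of the answer-and-response kind can be written as fixed keyword-to-reply rules, with no context and no learning. The rules here are invented for illustration:

```python
# Fixed pattern -> response rules: the defining trait of a traditional chatbot.
RULES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "hello": "Hi! How can I help you today?",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand. Could you rephrase?"

print(reply("Hello there"))           # matches the "hello" rule
print(reply("What are your hours?"))  # matches the "hours" rule
```

Everything an AI-powered chatbot adds — remembering earlier turns, understanding paraphrases, improving from conversation logs — is precisely what this rule table cannot do.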
Tune Model Hyperparameters
Tuning model hyperparameters refers to the process of selecting the optimal values for various
parameters that define the architecture, settings, or behavior of a machine learning model. Here's
a definition of "Tune Model Hyperparameters" in the context of Conversational AI:
In Conversational AI, tuning model hyperparameters involves adjusting the settings and
configurations of the conversational model to optimize its performance and achieve the desired
outcomes. This process helps improve the accuracy, robustness, and responsiveness of the
Conversational AI system.
Hyperparameters can vary depending on the specific conversational model or framework being
used. Here are some examples of hyperparameters that can be tuned in Conversational AI:
1. Learning rate: This hyperparameter determines the step size at which the model's
parameters are updated during training. Adjusting the learning rate can impact the
convergence speed and stability of the model.
2. Number of layers: The number of layers in a conversational model, such as a neural
network, can affect its capacity to capture complex patterns and generalize well. Tuning the
number of layers helps find the right balance between model complexity and generalization
ability.
3. Hidden units/neurons: The number of hidden units or neurons in each layer of a
conversational model influences its representational power and computational efficiency.
Finding the appropriate number of hidden units can optimize the model's performance.
4. Activation functions: Choosing the right activation functions for the model's neurons can
impact its ability to capture non-linear relationships within the conversational data.
Common choices include sigmoid, ReLU, or tanh functions, and tuning them can improve
the model's performance.
5. Regularization parameters: Regularization techniques like L1 or L2 regularization help
prevent overfitting by adding penalties to the model's loss function. Tuning the
regularization parameters helps control the trade-off between model complexity and
regularization strength.
6. Dropout rate: Dropout is a regularization technique that randomly drops out a fraction of
neurons during training to reduce over-reliance on specific connections. Tuning the dropout
rate helps control the amount of regularization applied and prevent overfitting.
7. Batch size: The batch size determines the number of training examples processed in each
iteration during training. Adjusting the batch size can affect the convergence speed,
memory usage, and generalization performance of the model.
8. Epochs: The number of epochs defines the number of times the training data is passed
through the model during training. Tuning the number of epochs helps strike a balance
between underfitting and overfitting, ensuring the model converges to an optimal state.
The process of tuning hyperparameters typically involves performing multiple experiments with
different parameter values, evaluating the model's performance on a validation set, and selecting
the best combination of hyperparameters that maximizes the desired metrics (e.g., accuracy, F1
score, or user satisfaction).
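The "multiple experiments, evaluate on a validation set, keep the best combination" process above is, at its simplest, a grid search. In this sketch, evaluate() is a hypothetical stand-in for training a model and scoring it on validation data; its formula is invented so the example runs:

```python
import itertools

def evaluate(learning_rate, batch_size):
    # Hypothetical validation score, peaking at lr=0.01, batch_size=32.
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(batch_size - 32) / 1000

# The hyperparameter grid: every combination will be tried.
grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64],
}

best_score, best_params = float("-inf"), None
for lr, bs in itertools.product(grid["learning_rate"], grid["batch_size"]):
    score = evaluate(lr, bs)  # train + validate with this combination
    if score > best_score:
        best_score, best_params = score, (lr, bs)

print(best_params)  # → (0.01, 32)
```

Tools such as Azure Machine Learning's Tune Model Hyperparameters component automate this loop (and smarter variants like random or Bayesian search) so each combination does not have to be run by hand.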
1. Add the Tune Model Hyperparameters component to your pipeline in the designer.
2. Connect an untrained model to the leftmost input.
3. Add the dataset that you want to use for training, and connect it to the middle input of
Tune Model Hyperparameters.