Welcome to the Coursera Community

Throughout the course, you will have the opportunity to connect with the Coursera Community - a
dedicated space for engaging in discussions, seeking guidance, and building a network of support.
This Community is a place to connect, start conversations, ask questions, support each other, and
learn together. This space is for you. Engage with your peers while earning reputation points and
badges along the way. Use the Coursera Community to get help when you’re stuck, get career
advice, and meet people with similar interests.

Use the Machine Learning topic to ask questions and learn more about subject matter related to this
course, or join a group or two to find people similar to you to talk about your common learning and
career interests. To learn more about the Coursera Community, view the Quick Start Guide.

Start by introducing yourself to others in the Machine Learning topic on Coursera Community. Let
others know a bit about your background, your interests, the course or program you’re currently
pursuing, and what got you interested in the topic. If you’re not yet ready to post, simply explore
and follow the topics and groups you’re interested in so you can stay abreast of the conversations
taking place.

Discussion: AI/ML engineer responsibilities

Discussion prompt

In this discussion, you will explore the roles and responsibilities of a professional AI/ML engineer.
Reflect on what you've learned so far, and answer the following questions:

1. Core responsibilities: What do you believe are the key responsibilities of an AI/ML engineer
in a professional setting? Consider tasks such as model development, data management,
deployment, and monitoring.

2. Skill set: What specific skills (technical and non-technical) are essential for success in this
role? How do these skills contribute to solving real-world problems?

3. Challenges and trends: What are some of the challenges AI/ML engineers face in today's
industry? How do emerging trends, such as ethical AI and model interpretability, impact the
role?

Instructions

• Write a post between 150 and 300 words, addressing these questions.

• Be specific, and provide examples where possible.

• After posting, respond to at least two of your peers' posts, offering feedback or expanding on their ideas. Consider discussing different roles within AI/ML, such as research, development, or deployment.

Example post

As an AI/ML engineer, one of the core responsibilities is developing and training ML models that
solve specific business problems. For example, in finance, this could mean building models to detect
fraud. Engineers also need strong data management skills, ensuring data is clean, properly
formatted, and representative of the real-world problem. Additionally, technical skills in Python,
TensorFlow, and cloud platforms such as Azure are essential, but so are soft skills such as
communication, especially when explaining model results to nontechnical stakeholders. One
emerging challenge is ensuring AI systems are ethical and transparent. As models become more
complex, maintaining interpretability becomes crucial for responsible AI deployment.

Coursera Community

Submit your answer in this Coursera Community discussion thread.

The Coursera Community is a place to connect, start conversations, ask questions, support each
other, and learn together.

Microsoft updates

In the fast-paced world of AI/ML development, staying up to date with the latest software updates is
crucial. Microsoft Azure is a dynamic platform that regularly introduces new features,
enhancements, and fixes to improve its services. As you progress through this course, it's essential to
be aware that some tools and functionalities in Azure may change. These updates can impact how
you implement solutions, so it’s important to keep an eye on the latest developments.

To help you stay informed, we recommend regularly checking the official Microsoft Azure Release
Notes page. This resource provides detailed information on all recent updates, ensuring you are
always aware of new features or changes. Remember, adapting to these updates is a standard
practice in the industry, and being comfortable with evolving platforms will make you a more
effective and agile AI/ML engineer.


Practice activity: Setting up your environment in Microsoft Azure

Introduction

In this reading, you’ll set up your environment in Microsoft Azure, building a solid foundation for all
testing, application, and deployment tasks you’ll be working on throughout the course. Follow each
step to ensure your environment is configured accurately, setting you up for success as you dive into
hands-on exercises and apply new concepts.

Step-by-step guide for setting up your environment in Microsoft Azure

Creating your environment in Azure consists of the following five phases, each made up of several steps:

• Phase 1: Create an Azure account

• Phase 2: Set up a workspace

• Phase 3: Create a compute instance

• Phase 4: Execute code in your Jupyter Notebook

• Phase 5: Identify common pitfalls with Notebooks


Phase 1: Create an Azure account

Step 1: Sign up for Azure

If you don't already have an Azure account, visit the Azure Portal, and sign up for a free account.

IMPORTANT: Your free account includes a $200 credit that expires after 30 days. If you do not complete this program within 30 days, you’ll need to upgrade to a pay-as-you-go account to finish it.

Students at eligible institutions may qualify for a $100 credit that expires after 12 months.

Step 2: Access the Azure Portal

Once you’ve signed in, you’ll be directed to the Azure Portal dashboard.

Phase 2: Set up a workspace

Step 1: Access the Azure Machine Learning Studio

Go to https://ml.azure.com.

Step 2: Create a new workspace

1. Azure may prompt you to create a new workspace the first time you visit ml.azure.com. If it does not, click “Create workspace” near the top right.

2. Choose a name for your workspace. Optionally, choose a friendly name.

3. Select your existing Azure subscription.

4. Create a new resource group; the default name is fine.

5. Select the region that’s geographically closest to you.

6. Click “Create.”

Step 3: Enter the workspace

It will take a few minutes for Azure to create your workspace. Once it’s finished, click on the
workspace to enter it.

Step 4: Enter the Notebooks section

In the left panel, under “Authoring,” is the “Notebooks” section. Click on it.

Phase 3: Create a compute instance

Step 1: In the Notebooks section, click “Create compute”


Step 2: Define the required settings

1. Give your compute instance a name.

2. Select virtual machine type CPU.

3. Keep the preselected virtual machine size.

4. Click “Review + Create.”

Step 3: Create the compute instance

Click “Create.”

Phase 4: Execute code in your Jupyter Notebook

Step 1: Create a Jupyter Notebook

1. Click the “+ Files” button.

2. Select “Create new file.”

3. Change the file name to test.ipynb.

4. Click “Create.”

Step 2: Attach a compute instance

If you just created your first compute instance, you’ll have to wait for Azure to finish creating it
before you can proceed.

The instance should automatically attach. If it doesn’t, select your instance from the drop-down
menu.

Step 3: Select the appropriate kernel

The kernel selection menu is on the top right. Select “Python 3.8 - Azure ML” or the latest version of
the Azure ML kernel. It’ll take a moment to become active.

Step 4: Execute code

Type this code into the code cell in your notebook:

import tensorflow as tf

print("TensorFlow Version: " + tf.__version__)

Press Shift+Enter to run the code and proceed to the next cell. You should see some warning or error messages (explained in Pitfall 1 below), followed by your TensorFlow version.

Phase 5: Identify common pitfalls with Notebooks

Pitfall 1: Error messages

When using libraries like TensorFlow, you will get error messages that you can safely ignore because they pertain to GPU functionality and you are using a CPU-only instance.
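
If you want to confirm that these messages relate to missing GPU hardware rather than a problem with your code, a minimal check like the sketch below (which uses TensorFlow’s standard device-listing call) should print an empty list on a CPU-only compute instance:

import tensorflow as tf

# On a CPU-only compute instance, TensorFlow is expected to report no GPUs,
# which is why the GPU-related warnings can be safely ignored.
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))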

Pitfall 2: Modifications to previous cells don’t take effect until you run those cells

Modifications to code in cells don’t immediately take effect. If you change code in a previous cell, it’s recommended to click the “Restart kernel and run all cells” button.

Pitfall 3: Wrong kernel

New notebooks will use the Python 3.10 - SDK v2 kernel by default. Make sure you change your
kernel to Python 3.8 - Azure ML or the latest version of the Azure ML kernel.

Conclusion

After following the steps above, your Azure environment should now be set up and ready to use.

If you need further help, see the following documents or consult Microsoft Copilot:

• Quickstart: Get started with Azure Machine Learning - Azure Machine Learning | Microsoft Learn

• Tutorial: Create workspace resources - Azure Machine Learning | Microsoft Learn

Walkthrough: Setting up your environment in Microsoft Azure (Optional)

By now, you have set up a lab environment in Microsoft Azure. If you haven't, see "Practice activity: Setting Up Your Environment in Microsoft Azure" for step-by-step instructions. This reading provides a more general overview, explains the rationale behind some of the steps you took to set up your environment, and describes how professionals in the AI/ML industry use these steps.

By the end of this walkthrough, you will be able to:

• Understand the process of setting up an AI/ML environment in Azure.

• Apply professional cloud management practices.

• Install and configure essential AI/ML tools.

Overview of setting up your environment in Microsoft Azure

An Azure account
What it does: Creating an Azure account is your entry point into the Azure ecosystem. This account
gives you access to Azure's vast array of cloud services, including those necessary for AI/ML
development. By signing up, you get a centralized dashboard (Azure Portal) in which you can
manage all your resources, such as virtual machines (VMs), databases, and networking components.

Professional use: Professionals use Azure accounts to deploy and manage scalable applications in
the cloud. Having an Azure account is essential for accessing the tools and services required for
AI/ML projects, from initial data processing to model deployment.

Resource groups

What it does: A resource group in Azure is a logical container for resources that share the same life
cycle, such as VMs, storage accounts, and databases. By organizing resources into groups, you can
manage and monitor them collectively, apply access controls, and track costs more effectively.

Professional use: In a professional setting, resource groups help teams organize resources for
different projects or environments (e.g., development, testing, production). This organizational
structure simplifies resource management and groups related assets together, making it easier to
manage and scale AI/ML solutions.

Virtual machines

What it does: A VM is a software-based emulation of a physical computer. In Azure, a VM allows you to run an isolated environment in which you can install and configure the software needed for AI/ML tasks, such as Python, Jupyter Notebooks, and ML libraries.

Professional use: Professionals use VMs to create development environments tailored to specific
projects. For AI/ML engineers, VMs provide the flexibility to experiment with different tools, test
code, and run ML models without impacting their local machine or production systems. Azure’s VMs
are scalable, meaning you can adjust computing resources based on your workload’s demands.

SSH access

What it does: Secure Shell (SSH) access allows you to connect to your VM securely from your local
machine. It encrypts the connection between your computer and the VM, ensuring that your data
and commands are secure.

Professional use: SSH is a fundamental tool for professionals who need to manage and operate their
VMs remotely. By using SSH, AI/ML engineers can interact with their cloud-based environments as if
they were sitting directly at the machine, allowing them to run scripts, install software, and
troubleshoot issues from anywhere.
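
For illustration, an SSH session is usually opened from a terminal (for example, ssh azureuser@<vm-public-ip>), but remote commands can also be run from Python. The sketch below uses the third-party paramiko library, with hypothetical placeholder values for the VM's address, username, and key file:

import paramiko  # third-party SSH library

# Hypothetical placeholders; use your VM's public IP, admin username, and private key path
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("<vm-public-ip>", username="azureuser", key_filename="/path/to/private_key")

# Run a command on the VM and print its output
stdin, stdout, stderr = client.exec_command("python3 --version")
print(stdout.read().decode())

client.close()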

Essential tools and libraries

What it does: Python is the primary programming language used for ML, and libraries such
as NumPy, pandas, and Scikit-learn provide the tools necessary for data manipulation, analysis, and
model development. Jupyter Notebook offers an interactive environment for writing and running
code, making it easier to visualize data and share results.

Professional use: Professionals rely on these tools to conduct data analysis, develop ML models, and
iterate quickly on their experiments. Python's rich ecosystem of libraries makes it the go-to language
for AI/ML development. Jupyter Notebooks are widely used in the industry for developing and
documenting ML workflows, especially in collaborative environments.
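
To illustrate how these tools work together, here is a minimal sketch you could run in a notebook cell; the tiny dataset is made up for demonstration and is not part of the course materials:

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# A tiny, made-up customer table built with pandas
df = pd.DataFrame({
    "purchases": [1, 5, 3, 8, 2, 7],
    "support_tickets": [4, 0, 2, 0, 3, 1],
    "churned": [1, 0, 0, 0, 1, 0],
})

# NumPy arrays are the common data format shared by pandas and scikit-learn
X = df[["purchases", "support_tickets"]].to_numpy()
y = np.asarray(df["churned"])

# Fit a simple model and check its accuracy on the training data
model = LogisticRegression().fit(X, y)
print("Training accuracy:", model.score(X, y))
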
Save as a template

What it does: Saving your environment as an Azure Resource Manager (ARM) template allows you
to capture the configuration of your lab environment in a reusable format. You can use this template
to recreate the environment quickly, ensuring consistency across different projects or team
members.

Professional use: Professionals use ARM templates to automate the deployment of complex
environments. By saving a VM configuration as a template, AI/ML teams can ensure that their
environments are reproducible, which is critical for collaborative projects and scaling solutions
across multiple instances or regions. It also allows for quick recovery if an environment needs to be
rebuilt.

Conclusion

Setting up your environment in Azure is a crucial step in your journey to becoming proficient in
AI/ML deployment. Each step in this process mirrors the real-world practices of AI/ML professionals,
from organizing resources and managing environments to ensuring security and scalability.
Understanding the purpose and application of each step will help you work more effectively and
prepare you for the tasks you’ll face as an AI/ML engineer in the industry.

Selecting the right model deployment strategy in Microsoft Azure

Introduction

Deploying a machine learning model in a Microsoft Azure environment involves several critical
decisions. The choices you make can significantly impact the performance, cost, and scalability of
your solution. In this reading, we'll explore the key factors to consider when selecting the right
model deployment strategy in Azure. By understanding these elements, you'll be better equipped to
choose a deployment method that aligns with your project requirements and business goals.

By the end of this reading, you will be able to:

• Evaluate and select the appropriate model deployment strategy in Azure by considering key factors such as speed, cost, ease of use, scalability, updates, and security to ensure effective and efficient AI/ML project outcomes.

Deployment speed

Why it matters

Speed is a crucial factor when deploying models, especially in scenarios where quick iteration or
real-time predictions are necessary. The faster you can deploy your model, the quicker you can start
gathering insights and adjusting your strategies based on real-world performance.

Considerations

• Azure Machine Learning service: This service offers a streamlined way to deploy models with minimal setup time. It supports deploying models as RESTful web services, allowing for rapid deployment and easy integration into existing applications.

• Azure Kubernetes Service (AKS): If you require high availability and rapid scaling, AKS can quickly deploy containerized models. However, it requires more initial setup and familiarity with Kubernetes.

Professional tip

For projects requiring rapid prototyping or low-latency predictions, Azure Machine Learning service
is often the best choice due to its simplicity and speed.
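
For context, a model deployed through Azure Machine Learning as a real-time web service is typically called over HTTPS. The sketch below is a generic illustration of such a call; the scoring URL, key, and payload shape are hypothetical placeholders rather than values from a real deployment:

import json
import requests  # third-party HTTP library

# Hypothetical placeholders; real values come from your deployed endpoint
scoring_url = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"
api_key = "<your-endpoint-key>"

payload = {"data": [[34, 2, 1500.0]]}  # example feature values
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + api_key,
}

# Send the features to the scoring endpoint and print the model's response
response = requests.post(scoring_url, data=json.dumps(payload), headers=headers)
print(response.json())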

Cost efficiency

Why it matters

Cost is a significant consideration, especially when deploying models at scale. Azure offers various
pricing tiers and services, each with different cost implications. Understanding the cost structure can
help you optimize your deployment for budget constraints.

Considerations

• Azure Functions: For infrequent or lightweight deployments, Azure Functions offers a serverless computing option where you only pay for the execution time of your function. This can be cost-effective for models that don't require constant availability (see the sketch after this list).

• Azure Container Instances (ACI): ACI is a lower-cost option for deploying containerized models without the need for orchestration. It’s ideal for small-scale or temporary deployments.

• Reserved instances: For long-term deployments, consider using reserved instances, which offer significant discounts compared to pay-as-you-go pricing.
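
As a rough illustration of the serverless option mentioned above, the following sketch shows the general shape of an HTTP-triggered Azure Function using the Python programming model; the scoring logic is a stand-in, and a real function would load an actual trained model:

import json
import azure.functions as func  # Azure Functions Python library

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Parse the JSON request body sent by the client
    body = req.get_json()
    features = body.get("features", [])

    # Placeholder "model": a real function would load and call a trained model here
    score = 1.0 if sum(features) > 10 else 0.0

    return func.HttpResponse(
        json.dumps({"churn_score": score}),
        mimetype="application/json",
    )

Because the function only runs (and is only billed) when a request arrives, this pattern suits models that are called infrequently.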

Professional tip

Evaluate the expected usage of your model, and choose a deployment option that balances
performance and cost. For enterprise-level deployments, consider reserved instances or volume
discounts.

Ease of use

Why it matters

The complexity of setting up and maintaining your deployment environment can affect your
productivity and the overall success of your project. Selecting an option that matches your team's
expertise and project requirements is essential.

Considerations

• Azure Machine Learning Studio: This low-code/no-code environment allows for easy deployment with a graphical interface. It’s ideal for teams that may not have deep DevOps or cloud computing expertise.

• Azure App Service: This option offers a straightforward way to deploy web applications and APIs. If your model needs to be part of a web-based application, Azure App Service provides an easy-to-manage environment with integrated deployment pipelines.

Professional tip

For teams with limited cloud or DevOps experience, Azure Machine Learning Studio provides a user-friendly interface that simplifies the deployment process.

Scalability

Why it matters

As your model's usage grows, so too will the need for a scalable deployment solution. Azure provides
various options that allow your deployment to scale seamlessly, ensuring that your model can
handle increased demand without compromising performance.

Considerations

• Azure Kubernetes Service (AKS): For large-scale, enterprise-level deployments, AKS provides robust scalability features. It supports autoscaling, load balancing, and orchestrating multiple container instances.

• Azure Batch: If your deployment involves processing large volumes of data or requires parallel execution of multiple models, Azure Batch offers a scalable solution that can distribute workloads across many virtual machines.

Professional tip

Choose AKS for deployments that require extensive scaling and high availability, particularly in
production environments where performance is critical.

Updates and maintenance

Why it matters

Maintaining and updating deployed models is an ongoing process. The ease with which you can push
updates, monitor performance, and troubleshoot issues can greatly impact the long-term success of
your deployment.

Considerations

• Azure DevOps: Integrating your deployment pipeline with Azure DevOps allows for continuous integration and continuous deployment (CI/CD). This makes it easier to push updates, roll back changes, and automate testing.

• Azure monitoring tools: Azure provides a range of monitoring tools such as Azure Monitor, Log Analytics, and Application Insights. These tools help you track model performance, detect anomalies, and troubleshoot issues in real time.

Professional tip

Integrate Azure DevOps into your deployment strategy to ensure smooth and consistent updates.
Use Azure’s monitoring tools to keep a close eye on your model’s performance and health.

Security and compliance


Why it matters

Security and compliance are critical, especially when dealing with sensitive data or deploying models
in regulated industries. Azure provides built-in security features and compliance certifications that
can help protect your deployment.

Considerations

• Azure Security Center: This service provides a unified security management system and advanced threat protection across your Azure environment. It helps identify vulnerabilities and ensures that your deployment complies with industry standards.

• Compliance certifications: Azure meets a wide range of international and industry-specific compliance standards, such as GDPR, HIPAA, and ISO/IEC 27001. Ensure that your deployment strategy aligns with the necessary compliance requirements.

Professional tip

Always review the security and compliance requirements of your project before choosing a
deployment method. Use the Azure Security Center to maintain a secure deployment environment.

Conclusion

Selecting the right model deployment strategy in Azure involves balancing multiple factors, including
speed, cost, ease of use, scalability, updates, and security. By carefully considering each of these
elements, you can choose a deployment method that not only meets your immediate needs but also
supports the long-term success of your AI/ML projects. As you continue to develop your skills and
expertise in Azure, you'll become more adept at making these critical decisions, ensuring your
deployments are both effective and efficient.

Practice activity: Selecting the right model deployment strategy in Microsoft Azure

Introduction

In this activity, you will play the role of an AI/ML engineer at an e-commerce company whose goal is to improve customer retention. Review the business use case below, and follow the instructions to solve the business problem.

By the end of this activity, you will be able to:

• Analyze and evaluate different ML models in the context of predicting customer churn.

• Make informed decisions about the most suitable model for this use case.

• Justify your choice of model based on model interpretability, predictive accuracy, and resource constraints.

Scenario

Business use case: Predicting customer churn for an e-commerce platform


You are working as an AI/ML engineer for an e-commerce company that is concerned about
customer retention. The company has noticed that a significant number of customers are not
returning to make additional purchases after their initial transactions. Your task is to develop an ML
model that can predict customer churn—the likelihood that a customer will stop doing business with
the company.

The company has provided a dataset that includes various customer attributes, such as purchase
history, browsing behavior, customer support interactions, and demographic information. Your goal
is to build a model that identifies customers at high risk of churning so that the marketing team can
target these customers with retention strategies.

Models to consider

Model A: Logistic regression

A simple, interpretable model that uses a linear approach to classify customers into “will churn” or
“will not churn” categories based on their attributes.

Pros:

• High interpretability: easy to understand and explain to non-technical stakeholders.

• Quick to train and deploy: suitable for situations where rapid deployment is needed.

• Low computational cost: ideal for smaller datasets and less complex environments.

Cons:

• Limited complexity: may not capture complex patterns in customer behavior.

• May struggle with large, highly variable datasets.

Model B: Random forest

An ensemble learning method that creates multiple decision trees and merges them to improve
accuracy and prevent overfitting.

Pros:

• Handles large datasets well: can manage thousands of variables and data points.

• High accuracy: often performs well in prediction tasks, especially with complex data.

• Robust to overfitting: ensemble methods reduce the risk of overfitting to the training data.

Cons:

• Less interpretable: the model's complexity makes it harder to explain the results.

• Slower to train: requires more computational resources and time compared to simpler models.

Model C: Gradient boosting machine (GBM)

Another ensemble technique that builds models sequentially, each new model correcting the errors
of the previous one. It’s known for its high predictive power.

Pros:

• High predictive accuracy: particularly effective for small to medium-sized structured datasets.

• Can handle non-linear relationships: excels in capturing complex patterns in the data.

Cons:

• Computationally intensive: requires significant processing power and time, especially with large datasets.

• Risk of overfitting: if not tuned properly, GBM models can overfit the training data.
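
Before committing to one of these models, it can help to compare them empirically. The following is a minimal sketch, using scikit-learn with synthetic stand-in data rather than the company's actual dataset, of how the three candidates might be compared with cross-validation:

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the churn dataset (1,000 customers, 20 attributes)
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

candidates = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "Gradient boosting": GradientBoostingClassifier(random_state=42),
}

# Five-fold cross-validated accuracy for each candidate model
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(name, "- mean accuracy:", round(scores.mean(), 3))

On the real churn data, you would also want a metric suited to class imbalance, such as ROC AUC, rather than accuracy alone.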

Activity instructions

1. Analyze the use case. Consider the specific business problem—predicting customer churn in
an e-commerce setting. Think about what aspects of the models are most important for this
use case, such as interpretability, accuracy, and computational efficiency.

2. Evaluate each model. Reflect on the pros and cons of logistic regression, random forest, and
gradient boosting machine (GBM) in the context of the customer churn prediction task.
Consider the dataset you are working with, the importance of model interpretability to your
business stakeholders, and the resources (time and computational power) you have
available for training and deploying the model.

3. Make your selection. Choose the model that you believe is best suited for this business use
case, and justify your choice by explaining why you selected this model over the others.
Consider factors such as the complexity of the problem, the need for accuracy versus
interpretability, and the resources available for deployment.

4. Justify your selection. Write down one to two paragraphs about which model you chose and
your justification. Be sure to include why that model works better than others.

Conclusion

In this activity, you assessed various ML models to predict customer churn for an e-commerce
platform. By balancing model interpretability with predictive accuracy and computational efficiency,
you made an informed choice that addresses the specific needs of the business. Your ability to justify
this selection will empower the marketing team to implement targeted retention strategies,
ultimately enhancing customer loyalty and driving sustainable growth.

Walkthrough: Justifying your choice of model selection (Optional)

Introduction

You’ve just selected a model for an e-commerce company to enhance customer retention. Now, you
will review a solution that uses the random forest model.

By the end of this walkthrough, you will be able to:

• Explain the rationale behind selecting the random forest model for customer churn prediction.

• Describe the limitations of alternative models in the context of this specific use case, enabling you to make informed decisions in similar scenarios.

Chosen model: Random forest

For the customer churn prediction use case, the random forest model is the ideal choice. This model
excels at handling large and complex datasets, which is crucial given the diverse range of customer
attributes available—such as purchase history, browsing behavior, and demographic information.
The ability of random forest to manage thousands of variables and detect subtle patterns in the data
makes it highly effective for identifying customers at risk of churning. Additionally, random forest is
robust against overfitting due to its ensemble nature, where multiple decision trees work together
to produce a more accurate and generalized prediction.
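
As a brief, illustrative sketch of why this matters in practice, a random forest fitted with scikit-learn exposes both per-customer churn probabilities (useful for ranking at-risk customers) and feature importances (useful for seeing which attributes drive the prediction). The data below is a synthetic stand-in, not the company's dataset:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the prepared churn features (X) and labels (y)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X, y)

# Probability of churn for each customer, used to rank at-risk customers
churn_probabilities = model.predict_proba(X)[:, 1]

# Relative importance of each attribute (e.g., purchase history, browsing behavior)
print(model.feature_importances_)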

Why random forest is ideal

The random forest model offers a strong balance between predictive accuracy and resilience, making
it well-suited for the e-commerce environment where predicting customer behavior is challenging
due to the variability in customer interactions and the large volume of data. While it requires more
computational resources and time to train compared with simpler models, the increase in accuracy
and robustness makes it a worthwhile investment. Given that the business goal is to identify churn
risk with high precision, the model's complexity and predictive power align well with the needs of
the company.

Why logistic regression was not selected

Logistic regression, while highly interpretable and easy to deploy, was not selected due to its
limitations in handling complex patterns and interactions in the data. Customer churn prediction
often involves nonlinear relationships and interactions between various features, such as the
combined effect of a customer’s purchasing frequency and their browsing behavior. Logistic
regression's linear approach may not capture these intricate patterns, leading to less accurate
predictions. Furthermore, while interpretability is important, the need for a more sophisticated
model that can better handle the complexity of the data takes precedence in this scenario.

Why gradient boosting machine was not selected

Although the gradient boosting machine (GBM) offers excellent predictive accuracy and the ability to
model complex relationships, it was not chosen due to its computational intensity and higher risk of
overfitting. GBM models require significant processing power and time, especially when dealing with
large datasets. In the context of this use case, where the goal is to deploy a model that is both
effective and manageable, the increased complexity and potential for overfitting make GBM a less
favorable option compared with random forest. While GBM could be considered in scenarios where
maximum accuracy is essential and computational resources are abundant, random forest provides a
more balanced solution for the task at hand.

Conclusion

The random forest model emerges as the optimal choice for predicting customer churn in an e-
commerce setting due to its ability to manage large and intricate datasets effectively. While logistic regression and GBM each have their merits, the robustness and accuracy of random forest,
combined with its resistance to overfitting, make it particularly well-suited for this challenge. By
leveraging this model, the company can accurately identify customers at risk of churning, allowing
for targeted retention strategies that enhance overall business performance.

Course syllabus: Foundations of AI and Machine Learning Infrastructure

Introduction

This course is part of the Microsoft AI & ML Engineering Professional Certificate, which includes this
series of courses:

1. Foundations of AI and Machine Learning Infrastructure

2. AI and Machine Learning Algorithms and Techniques

3. Building Intelligent Troubleshooting Agents

4. Microsoft Azure for AI and Machine Learning

5. Advanced AI and Machine Learning Techniques and Capstone

In this course, “Foundations of AI and Machine Learning Infrastructure,” you will explore advanced
AI/ML techniques, ending with a comprehensive capstone project. You will learn about cutting-edge
ML methods, ethical considerations in GenAI, and strategies for building scalable AI systems. The
capstone project allows you to apply all your learned skills to solve a real-world problem.

By the end of the course, you will be able to:

• Implement advanced ML techniques, such as ensemble methods and transfer learning.

• Analyze ethical implications and develop strategies for responsible AI.

• Design scalable AI/ML systems for high-performance scenarios.

• Develop and present a comprehensive AI/ML solution addressing a real-world problem.

Activities and assignments

There are plenty of hands-on practice activities in this course. You should take advantage of them all
to ensure you can pass the graded assessments below.

Assignment | Type | Percentage of grade
AI/ML applications | Quiz | 15 percent
Data management in AI/ML | Quiz | 15 percent
Selecting a framework | Quiz | 15 percent
Platform deployment | Quiz | 15 percent
AI/ML concepts in practice | Quiz | 15 percent
Draft your pitch to the C-suite | Peer review assignment | 25 percent

Peer review assignments

In this course, you will encounter peer review assignments, where other learners review and grade
your submission. To get a grade, you have to receive a certain number of peer reviews and review a
certain number of submissions.

Review the instructions for submitting a peer review assignment before attempting to submit.

Optional walkthroughs

As previously mentioned, throughout this course, you will engage in practical hands-on activities.
Following each activity, you will have an opportunity to reflect on your experiences and evaluate
your understanding of the material. For those seeking further assistance, optional walkthroughs will
be provided for each activity. These walkthroughs are designed to offer support as needed;
however, you are welcome to bypass them if you feel confident in your abilities.

Code snippets

On video pages, you’ll find the code used in the screen recordings attached to the page as “Downloads.”

Coursera Learner Help Center

If you have questions or issues while using the Coursera platform, please navigate to the Learner
Help Center for assistance.

Coursera Honor Code

All users of learning materials hosted on the Coursera platform are expected to abide by the
Coursera standards to ensure the integrity of learning within Coursera learning experiences.

Please take some time to review the Coursera Honor Code before beginning this course.

Subject matter experts

This course was created by the following industry experts.

Mark DiMauro, PhD, assistant professor

Dr. Mark DiMauro is an assistant professor of digital humanities and literature at the University of
Pittsburgh at Johnstown. He has studied AI and GenAI for nearly five years, focusing on deployment,
edge cases, and futurism, with some of his projects receiving global media coverage (Sophocles
reconstruction) and others serving as the basis for novel prompting methodologies (MDAP, MPE). In
addition to his teaching responsibilities, he has worked with various Fortune 500 companies on AI use cases, deployment, L&D, ethics, and governance, including PricewaterhouseCoopers, Rakuten, Microsoft, AccuWeather, and a series of colleges and universities.

Jimmy Ririe, software developer

Mr. Jimmy Ririe is a software developer, cybersecurity researcher, and AI entrepreneur, focusing on
the development of AI systems for processing sensitive data, such as health records and confidential
corporate knowledge. He studied cognitive science with a focus on ML before earning bachelor’s
degrees in cybersecurity and computer science.

Mike Pino, PhD, managing director at Problem Solutions

Dr. Mike Pino brings over 20 years of expertise in technology innovation, business strategy, and
enterprise growth, helping deliver platforms, products, and capabilities. He has led major initiatives
at Harvard Business School Publishing, GE’s Crotonville Leadership Development Institute, and
Cognizant Technology Solutions, where he served as Global Learning Leader. Additionally, he was a
principal (partner) at PricewaterhouseCoopers, where he helped launch the Future of Work practice,
and he is now the general manager of Problem Solutions. Mike is focused on developing processes,
practices, and technology to improve human performance.

Diane Weaver, chief intelligence officer (CIO) at Problem Solutions

Ms. Diane Weaver is a multigenerational entrepreneur and educator with a passion for emerging
technologies. As CIO, she brings subject matter expertise in AI, human intelligence, and how and
when to bring the two together to create smart, simple, and delightful solutions to deeply complex
problems. Diane’s skilled leadership in product development, business strategy and development,
and operations demonstrates tenacity and the ability to establish a common vision while driving
initiatives through collaboration. Adept at building coalitions and employing a systems-thinking
approach to ensure successful execution, she is proficient in inspiring and empowering stakeholders
to achieve common goals.
