Azure Fundamentals
3. Cloud models
d. Multi-cloud: A fourth, and increasingly common, scenario is multi-cloud, in which you use multiple public
cloud providers. Maybe you use different features from different providers, or maybe you started your cloud
journey with one provider and are in the process of migrating to another. Regardless, in a multi-cloud
environment you deal with two or more public cloud providers and manage resources and security in each
environment.
e. Azure Arc: Azure Arc is a set of technologies that helps you manage your cloud environment, whether it's a
public cloud solely on Azure, a private cloud in your datacenter, a hybrid configuration, or even a multi-cloud
environment running on multiple cloud providers at once.
What if you’re already established with VMware in a private cloud environment but want to migrate to a
public or hybrid cloud? Azure VMware Solution lets you run your VMware workloads in Azure with seamless
integration and scalability.
4. Consumption-based model
5. Describe the benefits of high availability and scalability in the cloud
a. High availability
b. Scalability
i. Horizontal
ii. Vertical
6. Describe the benefits of reliability and predictability in the cloud
a. Reliability
b. Predictability
7. Describe the benefits of security and governance in the cloud
a. Whether you’re deploying infrastructure as a service or software as a service, cloud features support
governance and compliance. Things like set templates help ensure that all your deployed resources meet
corporate standards and government regulatory requirements. Plus, you can update all your deployed
resources to new standards as standards change. Cloud-based auditing helps flag any resource that’s out of
compliance with your corporate standards and provides mitigation strategies. Depending on your operating
model, software patches and updates may also automatically be applied, which helps with both governance
and security.
b. On the security side, you can find a cloud solution that matches your security needs. If you want maximum
control of security, infrastructure as a service provides you with physical resources but lets you manage the
operating systems and installed software, including patches and maintenance. If you want patches and
maintenance taken care of automatically, platform as a service or software as a service deployments may be
the best cloud strategies for you.
c. And because the cloud is intended as an over-the-internet delivery of IT resources, cloud providers are
typically well suited to handle things like distributed denial of service (DDoS) attacks, making your network
more robust and secure.
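The auditing idea described above (checking deployed resources against corporate standards and flagging anything out of compliance with a mitigation) can be sketched in Python. This is a hypothetical illustration; the standards, resource records, and mitigation strings are all invented, not an Azure API.

```python
# Hypothetical sketch of cloud-style compliance auditing: each deployed
# resource is checked against corporate standards, and any resource
# that's out of compliance is flagged with a mitigation strategy.
ALLOWED_REGIONS = {"westus", "eastus"}  # invented corporate standard

def audit(resources):
    """Return (resource_name, mitigation) pairs for non-compliant resources."""
    findings = []
    for r in resources:
        if not r["encrypted"]:
            findings.append((r["name"], "enable encryption at rest"))
        if r["region"] not in ALLOWED_REGIONS:
            findings.append((r["name"], "redeploy to an approved region"))
    return findings

resources = [
    {"name": "vm-prod-01", "region": "westus", "encrypted": True},
    {"name": "db-dev-02", "region": "centralindia", "encrypted": False},
]
print(audit(resources))
# [('db-dev-02', 'enable encryption at rest'), ('db-dev-02', 'redeploy to an approved region')]
```

A real deployment would express the same checks as Azure Policy definitions rather than hand-rolled code.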
8. Management of the cloud
a. Automatically scale resource deployment based on need.
b. Deploy resources based on a preconfigured template, removing the need for manual configuration.
c. Monitor the health of resources and automatically replace failing resources.
d. Receive automatic alerts based on configured metrics, so you’re aware of performance in real time.
a. Lift-and-shift migration: You stand up cloud resources similar to your on-premises datacenter, and then
simply move the workloads running on-premises onto the IaaS infrastructure.
b. Testing and development: You have established configurations for development and test environments that
you need to rapidly replicate. You can stand up or shut down the different environments rapidly with an IaaS
structure, while maintaining complete control.
11. Describe Platform as a Service
In a PaaS environment, the cloud provider maintains the physical infrastructure, physical security, and connection to the
internet. They also maintain the operating systems, middleware, development tools, and business intelligence services
that make up a cloud solution. In a PaaS scenario, you don't have to worry about the licensing or patching for operating
systems and databases.
PaaS is well suited to provide a complete development environment without the headache of maintaining all the
development infrastructure.
a. Development framework: PaaS provides a framework that developers can build upon to develop or customize
cloud-based applications. Similar to the way you create an Excel macro, PaaS lets developers create
applications using built-in software components. Cloud features such as scalability, high-availability, and multi-
tenant capability are included, reducing the amount of coding that developers must do.
b. Analytics or business intelligence: Tools provided as a service with PaaS allow organizations to analyze and
mine their data, finding insights and patterns and predicting outcomes to improve forecasting, product design
decisions, investment returns, and other business decisions.
12. Describe Software as a Service
Software as a service (SaaS) is the most complete cloud service model from a product perspective. With SaaS, you’re
essentially renting or using a fully developed application. Email, financial software, messaging applications, and
connectivity software are all common examples of a SaaS implementation.
2. Sandbox:
a. Commands:
i. Get-Date
1. Returns the current date and time in PowerShell. (The PowerShell prompt is signified by PS at
the start of the line.) To switch to PowerShell from bash, use the command: pwsh.
ii. az version
1. Most Azure CLI commands start with az. This command returns the version of the Azure CLI
and its installed extensions. Output:
{
  "azure-cli": "2.56.0",
  "azure-cli-core": "2.56.0",
  "azure-cli-telemetry": "1.1.0",
  "extensions": {
    "ai-examples": "0.2.5",
    "ml": "2.22.0",
    "ssh": "2.0.2"
  }
}
g. NOTE: Not all Azure services automatically replicate data or automatically fall back from a failed region to
cross-replicate to another enabled region. In these scenarios, recovery and replication must be
configured by the customer.
h. NOTE: Directional: Most regions are paired in two directions, meaning each is the backup for the region
that backs it up (West US and East US back each other up). However, some regions, such as West India and
Brazil South, are paired in only one direction. In a one-direction pairing, the primary region does not serve
as the backup for its secondary region. West India's secondary region is South India, but South India's
secondary region is Central India, not West India. Brazil South is unique because it's paired with a region
outside of its geography: its secondary region is South Central US, yet the secondary region of
South Central US isn't Brazil South.
i. Sovereign Regions: Sovereign regions are instances of Azure that are isolated from the main instance of
Azure. You may need to use a sovereign region for compliance or legal purposes. Examples:
i. US DoD Central, US Gov Virginia, US Gov Iowa and more: These regions are physical and logical
network-isolated instances of Azure for U.S. government agencies and partners. These
datacenters are operated by screened U.S. personnel and include additional compliance
certifications.
ii. China East, China North, and more: These regions are available through a unique partnership
between Microsoft and 21Vianet, whereby Microsoft doesn't directly maintain the datacenters.
4. Azure Management infrastructure
a. Azure resources and resource groups: A resource is the basic building block of Azure. Anything you create,
provision, deploy, etc. is a resource. Virtual Machines (VMs), virtual networks, databases, cognitive
services, etc. are all considered resources within Azure.
b. Important points for resource groups:
i. A single resource can only be part of a single resource group at a time.
ii. When you move a resource from one group to another, it is no longer part of the old resource
group.
iii. Resource groups cannot be nested; a resource group can't contain another resource group.
c. Azure subscriptions: In Azure, subscriptions are a unit of management, billing, and scale. Like how
resource groups are a way to logically organize resources, subscriptions allow you to logically organize
your resource groups and facilitate billing. An Azure subscription links to an Azure account, which is an
identity in Microsoft Entra ID or in a directory that Microsoft Entra ID trusts. An account can have multiple
subscriptions, but it’s only required to have one. In a multi-subscription account, you can use the
subscriptions to configure different billing models and apply different access-management policies.
Types of subscription boundaries:
i. Billing boundary: This subscription type determines how an Azure account is billed for using
Azure. You can create multiple subscriptions for different types of billing requirements. Azure
generates separate billing reports and invoices for each subscription so that you can organize and
manage costs.
ii. Access control boundary: Azure applies access-management policies at the subscription level, and
you can create separate subscriptions to reflect different organizational structures. An example is
that within a business, you have different departments to which you apply distinct Azure
subscription policies. This design allows you to manage and control access to the resources
that users provision with specific subscriptions.
d. Purposes for creating Azure subscriptions:
i. Environments: You can choose to create subscriptions to set up separate environments for
development and testing, security, or to isolate data for compliance reasons. This design is
particularly useful because resource access control occurs at the subscription level.
ii. Organizational structures: You can create subscriptions to reflect different organizational
structures. For example, you could limit one team to lower-cost resources, while allowing the IT
department a full range. This design allows you to manage and control access to the resources
that users provision within each subscription.
iii. Billing: You can create additional subscriptions for billing purposes. Because costs are first
aggregated at the subscription level, you might want to create subscriptions to manage and track
costs based on your needs. For instance, you might want to create one subscription for your
production workloads and another subscription for your development and testing workloads.
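Because costs are first aggregated at the subscription level, splitting production and dev/test workloads across subscriptions yields separate totals per subscription. A minimal Python sketch of that aggregation (the subscription names, resource names, and amounts are hypothetical):

```python
# Sketch of subscription-level cost aggregation: each charge belongs to a
# subscription, and totals roll up per subscription for separate billing
# reports. All names and amounts here are invented for illustration.
from collections import defaultdict

charges = [
    ("prod-subscription", "vm-web-01", 210.50),
    ("prod-subscription", "sql-db-01", 95.25),
    ("devtest-subscription", "vm-test-01", 42.00),
]

def costs_by_subscription(charges):
    totals = defaultdict(float)
    for subscription, _resource, amount in charges:
        totals[subscription] += amount
    return dict(totals)

print(costs_by_subscription(charges))
# {'prod-subscription': 305.75, 'devtest-subscription': 42.0}
```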
e. Management groups: Azure management groups provide a level of scope above subscriptions. You
organize subscriptions into containers called management groups and apply governance conditions to the
management groups. All subscriptions within a management group automatically inherit the conditions
applied to the management group, the same way that resource groups inherit settings from subscriptions
and resources inherit from resource groups. Management groups give you enterprise-grade management
at a large scale, no matter what type of subscriptions you might have. Management groups can be nested.
i. Some examples of how you could use management groups might be:
1. Create a hierarchy that applies a policy. You could limit VM locations to the US West
Region in a group called Production. This policy will inherit onto all the subscriptions that
are descendants of that management group and will apply to all VMs under those
subscriptions. This security policy can't be altered by the resource or subscription owner,
which allows for improved governance.
2. Provide user access to multiple subscriptions. By moving multiple subscriptions under a
management group, you can create one Azure role-based access control (Azure RBAC)
assignment on the management group. Assigning Azure RBAC at the management group
level means that all sub-management groups, subscriptions, resource groups, and
resources underneath that management group would also inherit those permissions. One
assignment on the management group can enable users to have access to everything they
need instead of scripting Azure RBAC over different subscriptions.
ii. Important facts about management groups:
1. 10,000 management groups can be supported in a single directory.
2. A management group tree can support up to six levels of depth. This limit doesn't include
the root level or the subscription level.
3. Each management group and subscription can support only one parent.
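The inheritance chain described above (management group → subscription → resource group → resource) can be sketched as a walk up a scope hierarchy, collecting every policy assigned at or above a scope. The hierarchy, scope names, and policy string below are hypothetical:

```python
# Sketch of governance inheritance: a condition applied to a management
# group flows down to every subscription, resource group, and resource
# beneath it. The hierarchy and policy names are invented.
hierarchy = {               # parent -> children
    "root-mg": ["production-mg"],
    "production-mg": ["sub-a", "sub-b"],
    "sub-a": ["rg-web"],
    "sub-b": [],
    "rg-web": [],
}

policies = {"production-mg": ["allowed-locations: westus"]}

def effective_policies(scope):
    """Collect policies assigned at this scope and every ancestor above it."""
    parent_of = {c: p for p, children in hierarchy.items() for c in children}
    result = []
    while scope is not None:
        result = policies.get(scope, []) + result
        scope = parent_of.get(scope)
    return result

print(effective_policies("rg-web"))  # ['allowed-locations: westus']
```

This mirrors why a policy set on the Production management group can't be altered by a subscription or resource owner lower in the tree.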
b. Virtual machine scale sets: Scale sets allow you to centrally manage, configure, and update a large
number of VMs in minutes. The number of VM instances can automatically increase or decrease in
response to demand, or you can set it to scale based on a defined schedule. Virtual machine scale sets
also automatically deploy a load balancer to make sure that your resources are being used efficiently.
With virtual machine scale sets, you can build large-scale services for areas such as compute, big data, and
container workloads.
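The scale-out/scale-in decision that a scale set automates can be sketched as a simple threshold rule: compare a load metric against bounds and adjust the instance count within a configured minimum and maximum. The thresholds and counts below are hypothetical, not Azure defaults:

```python
# Minimal sketch of an autoscale decision like the one a virtual machine
# scale set makes: scale out under high load, scale in under low load,
# and never leave the configured instance-count bounds.
def autoscale(current_instances, cpu_percent,
              scale_out_at=75, scale_in_at=25,
              minimum=2, maximum=10):
    if cpu_percent > scale_out_at and current_instances < maximum:
        return current_instances + 1
    if cpu_percent < scale_in_at and current_instances > minimum:
        return current_instances - 1
    return current_instances

print(autoscale(4, 90))  # 5  (demand is high, add an instance)
print(autoscale(4, 10))  # 3  (demand is low, remove one)
print(autoscale(2, 10))  # 2  (already at the configured minimum)
```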
c. Virtual machine availability sets: Availability sets are designed to ensure that VMs stagger updates and
have varied power and network connectivity, preventing you from losing all your VMs with a single
network or power failure.
d. Types of domains for availability sets:
i. Update domain: The update domain groups VMs that can be rebooted at the same time. This
allows you to apply updates while knowing that only one update domain grouping will be offline at
a time. All of the machines in one update domain are updated together. An update domain going through
the update process is given 30 minutes to recover before maintenance on the next update
domain starts.
ii. Fault domain: The fault domain groups your VMs by common power source and network switch.
By default, an availability set will split your VMs across up to three fault domains. This helps
protect against a physical power or networking failure by having VMs in different fault domains
(thus being connected to different power and networking resources).
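The spreading behavior described above can be sketched as round-robin placement of VMs across fault domains, so that no single power or network failure takes out every VM. The round-robin strategy and VM names are assumptions for illustration:

```python
# Sketch of how an availability set spreads VMs across up to three fault
# domains (separate power source and network switch). Round-robin
# placement is assumed here purely for illustration.
def assign_fault_domains(vm_names, fault_domain_count=3):
    return {vm: i % fault_domain_count for i, vm in enumerate(vm_names)}

placement = assign_fault_domains(["vm-0", "vm-1", "vm-2", "vm-3"])
print(placement)
# {'vm-0': 0, 'vm-1': 1, 'vm-2': 2, 'vm-3': 0}
```

Losing fault domain 0 in this placement would take down vm-0 and vm-3, but vm-1 and vm-2 keep running.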
e. Examples of when to use VMs
i. During testing and development. VMs provide a quick and easy way to create different OS and
application configurations. Test and development personnel can then easily delete the VMs when
they no longer need them.
ii. When running applications in the cloud. The ability to run certain applications in the public cloud
as opposed to creating a traditional infrastructure to run them can provide substantial economic
benefits. For example, an application might need to handle fluctuations in demand. Shutting down
VMs when you don't need them or quickly starting them up to meet a sudden increase in demand
means you pay only for the resources you use.
iii. When extending your datacenter to the cloud: An organization can extend the capabilities of its
own on-premises network by creating a virtual network in Azure and adding VMs to that virtual
network. Applications like SharePoint can then run on an Azure VM instead of running locally, making
them easier and less expensive to deploy than in an on-premises environment.
iv. During disaster recovery: As with running certain types of applications in the cloud and extending
an on-premises network to the cloud, you can get significant cost savings by using an IaaS-based
approach to disaster recovery. If a primary datacenter fails, you can create VMs running on Azure
to run your critical applications and then shut them down when the primary datacenter becomes
operational again.
f. VM Resources: When you provision a VM, you’ll also have the chance to pick the resources that are
associated with that VM, including:
i. Size (purpose, number of processor cores, and amount of RAM)
ii. Storage disks (hard disk drives, solid state drives, etc.)
iii. Networking (virtual network, public IP address, and port configuration)
6. Azure virtual desktop: Azure Virtual Desktop is a desktop and application virtualization service that runs on the
cloud. It enables you to use a cloud-hosted version of Windows from any location. Azure Virtual Desktop works
across devices and operating systems, and works with apps that you can use to access remote desktops or most
modern browsers.
a. Enhance security: Azure Virtual Desktop provides centralized security management for users' desktops
with Microsoft Entra ID. You can enable multifactor authentication to secure user sign-ins. You can also
secure access to data by assigning granular role-based access controls (RBACs) to users. With Azure Virtual
Desktop, the data and apps are separated from the local hardware. The actual desktop and apps are
running in the cloud, meaning the risk of confidential data being left on a personal device is reduced.
Additionally, user sessions are isolated in both single and multi-session environments.
7. Azure containers: If you want to run multiple instances of an application on a single host machine, containers are
an excellent choice.
a. Containers are a virtualization environment. Much like running multiple virtual machines on a single
physical host, you can run multiple containers on a single physical or virtual host. Unlike virtual machines,
you don't manage the operating system for a container. Virtual machines appear to be an instance of an
operating system that you can connect to and manage. Containers are lightweight and designed to be
created, scaled out, and stopped dynamically. It's possible to create and deploy virtual machines as
application demand increases, but containers are a lighter weight, more agile method. Containers are
designed to allow you to respond to changes on demand. With containers, you can quickly restart if
there's a crash or hardware interruption. One of the most popular container engines is Docker, and Azure
supports Docker.
b. Azure Container Instances: Azure Container Instances offer the fastest and simplest way to run a container
in Azure, without having to manage any virtual machines or adopt any additional services. Azure Container
Instances are a platform as a service (PaaS) offering. Azure Container Instances allow you to upload your
containers and then the service will run the containers for you.
c. Azure Container Apps are similar in many ways to a container instance. They allow you to get up and
running right away, they remove the container management piece, and they're a PaaS offering. Container
Apps have extra benefits such as the ability to incorporate load balancing and scaling. These other
functions allow you to be more elastic in your design.
d. Azure Kubernetes Service (AKS) is a container orchestration service. An orchestration service manages the
lifecycle of containers. When you're deploying a fleet of containers, AKS can make fleet management
simpler and more efficient.
e. Use containers in your solutions : Containers are often used to create solutions by using a microservice
architecture. This architecture is where you break solutions into smaller, independent pieces. For example,
you might split a website into a container hosting your front end, another hosting your back end, and a
third for storage. This split allows you to separate portions of your app into logical sections that can be
maintained, scaled, or updated independently. Imagine your website back-end has reached capacity but
the front end and storage aren't being stressed. With containers, you could scale the back end separately
to improve performance. If something necessitated such a change, you could also choose to change the
storage service or modify the front end without impacting any of the other components.
8. Azure functions
a. Azure Functions is an event-driven, serverless compute option that doesn’t require maintaining virtual
machines or containers. If you build an app using VMs or containers, those resources have to be “running”
in order for your app to function. With Azure Functions, an event wakes the function, alleviating the need
to keep resources provisioned when there are no events.
b. Benefits of Azure Functions:
i. Using Azure Functions is ideal when you're only concerned about the code running your service
and not about the underlying platform or infrastructure. Functions are commonly used when you
need to perform work in response to an event (often via a REST request), timer, or message from
another Azure service, and when that work can be completed quickly, within seconds or less.
ii. Functions scale automatically based on demand, so they may be a good choice when demand is
variable.
iii. Azure Functions runs your code when it's triggered and automatically deallocates resources
when the function is finished. In this model, you're only charged for the CPU time used while your
function runs.
iv. Functions can be either stateless or stateful. When they're stateless (the default), they behave as
if they're restarted every time they respond to an event. When they're stateful (called Durable
Functions), a context is passed through the function to track prior activity.
v. Functions are a key component of serverless computing. They're also a general compute platform
for running any type of code. If the needs of the developer's app change, you can deploy the
project in an environment that isn't serverless. This flexibility allows you to manage scaling, run on
virtual networks, and even completely isolate the functions.
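The stateless vs. stateful (Durable Functions) contrast in point iv can be sketched conceptually: a stateless handler starts fresh on every event, while a stateful one receives a context that tracks prior activity. This is a conceptual model only, not the actual Azure Functions programming model:

```python
# Conceptual sketch of stateless vs. stateful function behavior.
# A stateless handler behaves as if restarted on every event; a stateful
# (Durable-style) handler gets a context that remembers prior activity.
def stateless_handler(event):
    count = 1  # no memory of earlier invocations
    return f"processed {event} (call #{count})"

def stateful_handler(event, context):
    context["calls"] = context.get("calls", 0) + 1  # context persists
    return f"processed {event} (call #{context['calls']})"

ctx = {}
print(stateless_handler("order-1"))      # processed order-1 (call #1)
print(stateless_handler("order-2"))      # processed order-2 (call #1)
print(stateful_handler("order-1", ctx))  # processed order-1 (call #1)
print(stateful_handler("order-2", ctx))  # processed order-2 (call #2)
```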
i. Azure virtual network allows you to create multiple isolated virtual networks. When you set up a
virtual network, you define a private IP address space by using either public or private IP address
ranges. The IP range only exists within the virtual network and isn't internet routable. You can
divide that IP address space into subnets and allocate part of the defined address space to each
named subnet.
ii. For name resolution, you can use the name resolution service that's built into Azure. You also can
configure the virtual network to use either an internal or an external DNS server.
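The address-space division in point i can be demonstrated with Python's standard ipaddress module: define a private address space for the virtual network, then carve it into named subnets. The /16 range and subnet names are hypothetical:

```python
# Sketch of defining a virtual network's private IP address space and
# dividing it into subnets, using the standard-library ipaddress module.
# The address range and tier names are invented for illustration.
import ipaddress

vnet = ipaddress.ip_network("10.0.0.0/16")  # private, not internet routable

# Carve the space into /24 subnets and allocate one per tier.
subnets = dict(zip(["frontend", "backend", "storage"],
                   vnet.subnets(new_prefix=24)))

for name, net in subnets.items():
    print(name, net)
# frontend 10.0.0.0/24
# backend 10.0.1.0/24
# storage 10.0.2.0/24
```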
c. Internet communications:
i. You can enable incoming connections from the internet by assigning a public IP address to an
Azure resource, or putting the resource behind a public load balancer.
d. Communicate between Azure resources:
i. Virtual networks can connect not only VMs but other Azure resources, such as the App Service
Environment for Power Apps, Azure Kubernetes Service, and Azure virtual machine scale sets.
ii. Service endpoints can connect to other Azure resource types, such as Azure SQL databases and
storage accounts. This approach enables you to link multiple Azure resources to virtual networks
to improve security and provide optimal routing between resources.
e. Communicate with on-premises resources: Azure virtual networks enable you to link resources together in
your on-premises environment and within your Azure subscription. In effect, you can create a network
that spans both your local and cloud environments.
i. Point-to-site virtual private network connections are from a computer outside your organization
back into your corporate network. In this case, the client computer initiates an encrypted VPN
connection to connect to the Azure virtual network.
ii. Site-to-site virtual private networks link your on-premises VPN device or gateway to the Azure
VPN gateway in a virtual network. In effect, the devices in Azure can appear as being on the local
network. The connection is encrypted and works over the internet.
iii. Azure ExpressRoute provides dedicated private connectivity to Azure that doesn't travel over
the internet. ExpressRoute is useful for environments where you need greater bandwidth and
even higher levels of security.
f. Route network traffic: By default, Azure routes traffic between subnets on any connected virtual
networks, on-premises networks, and the internet. You also can control routing and override those
settings, as follows:
i. Route tables allow you to define rules about how traffic should be directed. You can create
custom route tables that control how packets are routed between subnets.
ii. Border Gateway Protocol (BGP) works with Azure VPN gateways, Azure Route Server, or Azure
ExpressRoute to propagate on-premises BGP routes to Azure virtual networks.
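Route tables like those in point i resolve a packet's next hop by longest-prefix match: the most specific route that contains the destination wins. A sketch of that selection, with hypothetical routes and next-hop names:

```python
# Sketch of custom-route selection: pick the most specific
# (longest-prefix) route that matches the packet's destination,
# as a route table does. Routes and next hops are invented.
import ipaddress

routes = [
    ("0.0.0.0/0", "internet"),
    ("10.0.0.0/16", "vnet-local"),
    ("10.0.5.0/24", "network-virtual-appliance"),
]

def next_hop(destination, routes):
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(prefix), hop)
               for prefix, hop in routes
               if dest in ipaddress.ip_network(prefix)]
    # The most specific matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.5.20", routes))  # network-virtual-appliance
print(next_hop("10.0.9.1", routes))   # vnet-local
print(next_hop("8.8.8.8", routes))    # internet
```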
g. Filter network traffic: Azure virtual networks enable you to filter traffic between subnets by using the
following approaches:
i. Network security groups are Azure resources that can contain multiple inbound and outbound
security rules. You can define these rules to allow or block traffic, based on factors such as source
and destination IP address, port, and protocol.
ii. Network virtual appliances are specialized VMs that can be compared to a hardened network
appliance. A network virtual appliance carries out a particular network function, such as running a
firewall or performing wide area network (WAN) optimization.
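Network security group rules like those in point i are evaluated in priority order (lowest number first), and the first match decides whether traffic is allowed or denied. A simplified sketch matching on port only; the rules and priorities are hypothetical:

```python
# Sketch of NSG-style rule processing: rules are checked in priority
# order (lowest number first) and the first matching rule decides the
# outcome. Real NSG rules also match source/destination IP and protocol.
rules = [
    {"priority": 100, "port": 443, "action": "allow"},
    {"priority": 200, "port": 22, "action": "allow"},
    {"priority": 4096, "port": "any", "action": "deny"},  # catch-all deny
]

def evaluate(port, rules):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] in (port, "any"):
            return rule["action"]
    return "deny"

print(evaluate(443, rules))   # allow
print(evaluate(3389, rules))  # deny
```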
ii. All data transfer is encrypted inside a private tunnel as it crosses the internet. You can deploy only
one VPN gateway in each virtual network. However, you can use one gateway to connect to
multiple locations, which includes other virtual networks or on-premises datacenters.
iii. When setting up a VPN gateway, you must specify the type of VPN - either policy-based or route-
based. The primary distinction between these two types is how they determine which traffic
needs encryption. In Azure, regardless of the VPN type, the method of authentication employed is
a pre-shared key.
1. Policy-based VPN gateways statically specify the IP addresses of packets that should be
encrypted through each tunnel. This type of device evaluates every data packet against
those sets of IP addresses to choose the tunnel that the packet will be sent
through.
2. In Route-based gateways, IPSec tunnels are modeled as a network interface or virtual
tunnel interface. IP routing (either static routes or dynamic routing protocols) decides
which one of these tunnel interfaces to use when sending each packet. Route-based
VPNs are the preferred connection method for on-premises devices. They're more
resilient to topology changes such as the creation of new subnets. Use a route-based VPN
gateway if you need any of the following types of connectivity:
a. Connections between virtual networks
b. Point-to-site connections
c. Multisite connections
d. Coexistence with an Azure ExpressRoute gateway
b. High-availability scenarios: There are a few ways to maximize the resiliency of your VPN gateway.
i. Active/standby: By default, VPN gateways are deployed as two instances in an active/standby
configuration, even if you only see one VPN gateway resource in Azure. When planned
maintenance or unplanned disruption affects the active instance, the standby instance
automatically assumes responsibility for connections without any user intervention. Connections
are interrupted during this failover, but they're typically restored within a few seconds for
planned maintenance and within 90 seconds for unplanned disruptions.
ii. Active/active: With the introduction of support for the BGP routing protocol, you can also deploy
VPN gateways in an active/active configuration. In this configuration, you assign a unique public IP
address to each instance. You then create separate tunnels from the on-premises device to each
IP address. You can extend the high availability by deploying an additional VPN device on-
premises.
iii. ExpressRoute failover: Another high-availability option is to configure a VPN gateway as a secure
failover path for ExpressRoute connections. ExpressRoute circuits have resiliency built in.
However, they aren't immune to physical problems that affect the cables delivering connectivity
or outages that affect the complete ExpressRoute location. In high-availability scenarios, where
there's risk associated with an outage of an ExpressRoute circuit, you can also provision a VPN
gateway that uses the internet as an alternative method of connectivity. In this way, you can
ensure there's always a connection to the virtual networks.
iv. Zone-redundant gateways: In regions that support availability zones, VPN gateways and
ExpressRoute gateways can be deployed in a zone-redundant configuration. This configuration
brings resiliency, scalability, and higher availability to virtual network gateways. Deploying
gateways in Azure availability zones physically and logically separates gateways within a region
while protecting your on-premises network connectivity to Azure from zone-level failures. These
gateways require different gateway stock keeping units (SKUs) and use Standard public IP
addresses instead of Basic public IP addresses.
12. Azure ExpressRoute
a. Azure ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private
connection, with the help of a connectivity provider. This connection is called an ExpressRoute circuit.
With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure
and Microsoft 365. This allows you to connect offices, datacenters, or other facilities to the Microsoft
cloud. Each location would have its own ExpressRoute circuit.
b. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual
cross-connection through a connectivity provider at a colocation facility. ExpressRoute connections don't
go over the public Internet.
c. Features and benefits of ExpressRoute:
i. Connectivity to Microsoft cloud services: ExpressRoute enables direct access to the following
services in all regions:
1. Microsoft Office 365
2. Microsoft Dynamics 365
3. Azure compute services, such as Azure Virtual Machines
4. Azure cloud services, such as Azure Cosmos DB and Azure Storage
ii. Global connectivity: You can enable ExpressRoute Global Reach to exchange data across your on-
premises sites by connecting your ExpressRoute circuits. For example, say you had an office in Asia
and a datacenter in Europe, both with ExpressRoute circuits connecting them to the Microsoft
network. You could use ExpressRoute Global Reach to connect those two facilities, allowing them
to communicate without transferring data over the public internet.
iii. Dynamic routing: ExpressRoute uses BGP to exchange routes between on-premises networks and
resources running in Azure. This protocol enables dynamic routing between your on-premises
network and services running in the Microsoft cloud.
iv. Built-in redundancy: Each connectivity provider uses redundant devices to ensure that
connections established with Microsoft are highly available. You can configure multiple circuits to
complement this feature.
d. ExpressRoute connectivity models:
i. Co-location at a cloud exchange: Co-location refers to your datacenter, office, or other facility
being physically co-located at a cloud exchange, such as an ISP. If your facility is co-located at a
cloud exchange, you can request a virtual cross-connect to the Microsoft cloud.
ii. Point-to-point Ethernet connection: Point-to-point ethernet connection refers to using a point-to-
point connection to connect your facility to the Microsoft cloud.
iii. Any-to-any networks: With any-to-any connectivity, you can integrate your wide area network
(WAN) with Azure by providing connections to your offices and datacenters. Azure integrates with
your WAN connection to provide a connection like you would have between your datacenter and
any branch offices.
iv. Directly from ExpressRoute sites: You can connect directly into Microsoft's global network at
peering locations strategically distributed across the world. ExpressRoute Direct provides dual
100-Gbps or 10-Gbps connectivity, which supports Active/Active connectivity at scale.
e. Security considerations: With ExpressRoute, your data doesn't travel over the public internet, so it's not
exposed to the potential risks associated with internet communications. ExpressRoute is a private
connection from your on-premises infrastructure to your Azure infrastructure. Even if you have an
ExpressRoute connection, DNS queries, certificate revocation list checking, and Azure Content Delivery
Network requests are still sent over the public internet.
13. Azure DNS: Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft
Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records using the same
credentials, APIs, tools, and billing as your other Azure services.
a. Benefits of Azure DNS:
i. Reliability and performance: DNS domains in Azure DNS are hosted on Azure's global network of
DNS name servers, providing resiliency and high availability. Azure DNS uses anycast networking,
so each DNS query is answered by the closest available DNS server to provide fast performance
and high availability for your domain.
ii. Security: Azure DNS is based on Azure Resource Manager, which provides features such as:
1. Azure role-based access control (Azure RBAC) to control who has access to specific actions
for your organization.
2. Activity logs to monitor how a user in your organization modified a resource or to find an
error when troubleshooting.
3. Resource locking to lock a subscription, resource group, or resource. Locking prevents
other users in your organization from accidentally deleting or modifying critical resources.
iii. Ease of use: Azure DNS can manage DNS records for your Azure services and provide DNS for your
external resources as well. Azure DNS is integrated in the Azure portal and uses the same
credentials, support contract, and billing as your other Azure services. Because Azure DNS runs
on Azure, you can manage your domains and records with the Azure portal, Azure PowerShell
cmdlets, and the cross-platform Azure CLI. Applications that require automated DNS
management can integrate with the service by using the REST API and SDKs.
iv. Customizable virtual networks with private domains: Azure DNS also supports private DNS
domains. This feature allows you to use your own custom domain names in your private virtual
networks, rather than being stuck with the Azure-provided names.
v. Alias records: Azure DNS also supports alias record sets. You can use an alias record set to refer to
an Azure resource, such as an Azure public IP address, an Azure Traffic Manager profile, or an
Azure Content Delivery Network (CDN) endpoint. If the IP address of the underlying resource
changes, the alias record set seamlessly updates itself during DNS resolution. The alias record set
points to the service instance, and the service instance is associated with an IP address.
b. NOTE: You can't use Azure DNS to buy a domain name. For an annual fee, you can buy a domain name by
using App Service domains or a third-party domain name registrar. Once purchased, your domains can be
hosted in Azure DNS for record management.
14. Azure storage accounts: A storage account provides a unique namespace for your Azure Storage data that's
accessible from anywhere in the world over HTTP or HTTPS.
a. When you create your storage account, you’ll start by picking the storage account type. The type of
account determines the storage services and redundancy options and has an impact on the use cases.
Below is a list of redundancy options that will be covered later in this module:
i. Locally redundant storage (LRS)
ii. Geo-redundant storage (GRS)
iii. Read-access geo-redundant storage (RA-GRS)
iv. Zone-redundant storage (ZRS)
v. Geo-zone-redundant storage (GZRS)
vi. Read-access geo-zone-redundant storage (RA-GZRS)
b. Storage account endpoints
i. One of the benefits of using an Azure storage account is having a unique namespace in Azure for
your data. To provide that namespace, every storage account must have an account name
that's unique within Azure. The combination of the account name and the Azure Storage service endpoint
forms the endpoints for your storage account. When naming your storage account, keep these
rules in mind:
1. Storage account names must be between 3 and 24 characters in length and may contain
numbers and lowercase letters only.
2. Your storage account name must be unique within Azure. No two storage accounts can
have the same name. This supports the ability to have a unique, accessible namespace in
Azure.
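The two naming rules above are easy to capture in a small validation helper. The sketch below is my own illustration (the function name is hypothetical, not part of any Azure SDK), and it can only check the format; uniqueness within Azure can only be verified by the platform when you actually create the account.

```python
import re

# Documented format rules: 3-24 characters, lowercase letters and digits only.
STORAGE_NAME_PATTERN = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_storage_account_name(name: str) -> bool:
    """Check the documented format rules for an Azure storage account name.

    Note: this cannot check global uniqueness; Azure verifies that
    at account-creation time.
    """
    return bool(STORAGE_NAME_PATTERN.fullmatch(name))

print(is_valid_storage_account_name("mystorageacct01"))  # True
print(is_valid_storage_account_name("My-Storage"))       # False: uppercase and hyphen
print(is_valid_storage_account_name("ab"))               # False: too short
```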
ii. The endpoint for each Azure Storage service follows the pattern
https://<storage-account-name>.<service>.core.windows.net:
1. Blob Storage: https://<storage-account-name>.blob.core.windows.net
2. Data Lake Storage Gen2: https://<storage-account-name>.dfs.core.windows.net
3. Azure Files: https://<storage-account-name>.file.core.windows.net
4. Queue Storage: https://<storage-account-name>.queue.core.windows.net
5. Table Storage: https://<storage-account-name>.table.core.windows.net
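Because the default public-cloud endpoints follow a predictable pattern, they can be derived from the account name. A minimal sketch for illustration (the helper name is my own; real applications normally take endpoints from the SDK or the portal rather than building them by hand):

```python
# Default public-cloud endpoint suffixes for the core Azure Storage services.
SERVICE_SUFFIXES = {
    "blob": "blob.core.windows.net",
    "dfs": "dfs.core.windows.net",     # Data Lake Storage Gen2
    "file": "file.core.windows.net",
    "queue": "queue.core.windows.net",
    "table": "table.core.windows.net",
}

def storage_endpoint(account_name: str, service: str) -> str:
    """Build the default endpoint URL for a storage account and service."""
    return f"https://{account_name}.{SERVICE_SUFFIXES[service]}"

print(storage_endpoint("mystorageacct", "blob"))
# https://mystorageacct.blob.core.windows.net
```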
15. Azure storage redundancy: Azure Storage always stores multiple copies of your data so that it's protected from
planned and unplanned events such as transient hardware failures, network or power outages, and natural
disasters.
a. The factors that help determine which redundancy option you should choose include:
i. How your data is replicated in the primary region.
ii. Whether your data is replicated to a second region that is geographically distant to the primary
region, to protect against regional disasters.
iii. Whether your application requires read access to the replicated data in the secondary region if the
primary region becomes unavailable.
b. Redundancy in the primary region: Data in an Azure Storage account is always replicated three times in
the primary region.
i. Locally redundant storage:
1. Locally redundant storage (LRS) replicates your data three times within a single data
center in the primary region. LRS provides at least 11 nines of durability (99.999999999%)
of objects over a given year.
2. LRS is the lowest-cost redundancy option and offers the least durability compared to other
options. LRS protects your data against server rack and drive failures. However, if a
disaster such as fire or flooding occurs within the data center, all replicas of a storage
account using LRS may be lost or unrecoverable. To mitigate this risk, Microsoft
recommends using zone-redundant storage (ZRS), geo-redundant storage (GRS), or geo-
zone-redundant storage (GZRS).
ii. Zone-redundant storage:
1. In regions that support availability zones, zone-redundant storage (ZRS) replicates your
Azure Storage data synchronously across three Azure availability zones in the primary region.
ZRS offers durability for Azure Storage data objects of at least 12 nines (99.9999999999%)
over a given year.
2. With ZRS, your data is still accessible for both read and write operations even if a zone
becomes unavailable. No remounting of Azure file shares from the connected clients is
required. If a zone becomes unavailable, Azure undertakes networking updates, such as
DNS repointing. These updates may affect your application if you access data before the
updates have completed.
3. Microsoft recommends using ZRS in the primary region for scenarios that require high
availability. ZRS is also recommended for restricting replication of data within a country or
region to meet data governance requirements.
c. Redundancy in a secondary region: When you create a storage account, you select the primary region for
the account. The paired secondary region is based on Azure Region Pairs, and can't be changed. GRS is
similar to running LRS in two regions, and GZRS is similar to running ZRS in the primary region and LRS in
the secondary region. By default, data in the secondary region isn't available for read or write access
unless there's a failover to the secondary region. If the primary region becomes unavailable, you can
choose to fail over to the secondary region. After the failover has completed, the secondary region
becomes the primary region, and you can again read and write data.
NOTE : Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may
result in data loss if the primary region can't be recovered. The interval between the most recent writes to the primary
region and the last write to the secondary region is known as the recovery point objective (RPO). The RPO indicates the
point in time to which data can be recovered. Azure Storage typically has an RPO of less than 15 minutes, although there's
currently no SLA on how long it takes to replicate data to the secondary region.
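The RPO in the note above is simply the replication lag at the moment of failure. The arithmetic can be sketched as follows (the function name and timestamps are invented for illustration; Azure doesn't expose per-write replication timestamps this way):

```python
from datetime import datetime, timedelta

def recovery_point_objective(last_primary_write: datetime,
                             last_replicated_write: datetime) -> timedelta:
    """RPO: how far the secondary region lags behind the primary.

    Writes made after last_replicated_write would be lost if the
    primary region failed and couldn't be recovered.
    """
    return last_primary_write - last_replicated_write

# Hypothetical timestamps: primary last written at 10:14,
# secondary caught up only through 10:05.
rpo = recovery_point_objective(datetime(2024, 1, 1, 10, 14),
                               datetime(2024, 1, 1, 10, 5))
print(rpo)  # 0:09:00 -> within Azure Storage's typical sub-15-minute RPO
```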
i. Geo-redundant storage: GRS copies your data synchronously three times within a single physical
location in the primary region using LRS. It then copies your data asynchronously to a single
physical location in the secondary region (the region pair) using LRS. GRS offers durability for
Azure Storage data objects of at least 16 nines (99.99999999999999%) over a given year.
ii. Geo-zone-redundant storage: GZRS combines the high availability provided by redundancy across
availability zones with protection from regional outages provided by geo-replication. Data in a
GZRS storage account is copied across three Azure availability zones in the primary region (similar
to ZRS) and is also replicated to a secondary geographic region, using LRS, for protection from
regional disasters. Microsoft recommends using GZRS for applications requiring maximum
consistency, durability, and availability, excellent performance, and resilience for disaster
recovery. GZRS is designed to provide at least 16 nines (99.99999999999999%) of durability of
objects over a given year.
d. Read access to data in the secondary region: Geo-redundant storage (with GRS or GZRS) replicates your
data to another physical location in the secondary region to protect against regional outages. However,
that data is available to be read only if the customer or Microsoft initiates a failover from the primary to
the secondary region. If you enable read access to the secondary region, however, your data is always
available, even when the primary region is running optimally. For read access to the secondary region,
enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage
(RA-GZRS). Remember that the data in your secondary region may not be up to date because of the
recovery point objective (RPO).
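One way to internalize how the redundancy options relate is as answers to the three questions in section 15.a. The decision helper below is my own simplification (not an official API), and it ignores cost and regional-availability constraints:

```python
def pick_redundancy(cross_region: bool, zone_redundant: bool,
                    secondary_read_access: bool) -> str:
    """Map the three redundancy questions to a storage redundancy option.

    Simplified sketch: real choices also depend on cost and on
    whether the region supports availability zones.
    """
    if not cross_region:
        # Data stays in the primary region only.
        return "ZRS" if zone_redundant else "LRS"
    base = "GZRS" if zone_redundant else "GRS"
    # Read access to the secondary region adds the "RA-" prefix.
    return f"RA-{base}" if secondary_read_access else base

print(pick_redundancy(cross_region=True, zone_redundant=True,
                      secondary_read_access=True))  # RA-GZRS
```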
16. Azure storage services:
a. The Azure Storage platform includes the following data services:
i. Azure Blobs: A massively scalable object store for text and binary data. Also includes support for
big data analytics through Data Lake Storage Gen2.
ii. Azure Files: Managed file shares for cloud or on-premises deployments.
iii. Azure Queues: A messaging store for reliable messaging between application components.
iv. Azure Disks: Block-level storage volumes for Azure VMs.
v. Azure Tables: NoSQL table option for structured, non-relational data.
c. Azure Blobs: Azure Blob storage is an object storage solution for the cloud. It can store massive amounts
of data, such as text or binary data. Azure Blob storage is unstructured, meaning that there are no
restrictions on the kinds of data it can hold. Blob storage can handle thousands of simultaneous
uploads, massive amounts of video data, and constantly growing log files, all reachable from
anywhere with an internet connection. Blobs aren't limited to common file formats. A blob could contain
gigabytes of binary data streamed from a scientific instrument, an encrypted message for another
application, or data in a custom format for an app you're developing. One advantage of blob storage over
disk storage is that it doesn't require developers to think about or manage disks. Data is uploaded as
blobs, and Azure takes care of the physical storage needs.
i. Blob storage is ideal for:
1. Serving images or documents directly to a browser.
2. Storing files for distributed access.
3. Streaming video and audio.
4. Storing data for backup and restore, disaster recovery, and archiving.
5. Storing data for analysis by an on-premises or Azure-hosted service.
ii. Accessing blob storage: Objects in blob storage can be accessed from anywhere in the world via
HTTP or HTTPS. Users or client applications can access blobs via URLs, the Azure Storage REST API,
Azure PowerShell, Azure CLI, or an Azure Storage client library. The storage client libraries are
available for multiple languages, including .NET, Java, Node.js, Python, PHP, and Ruby.
iii. Blob storage tiers: The available access tiers include:
1. Hot access tier: Optimized for storing data that is accessed frequently (for example,
images for your website).
2. Cool access tier: Optimized for data that is infrequently accessed and stored for at least 30
days (for example, invoices for your customers).
3. Cold access tier: Optimized for storing data that is infrequently accessed and stored for at
least 90 days.
4. Archive access tier: Appropriate for data that is rarely accessed and stored for at least 180
days, with flexible latency requirements (for example, long-term backups).
iv. The following considerations apply to the different access tiers:
1. Hot and cool access tiers can be set at the account level. The cold and archive access tiers
aren't available at the account level.
2. Hot, cool, cold, and archive tiers can be set at the blob level, during or after upload.
3. Data in the cool and cold access tiers can tolerate slightly lower availability, but it still
requires high durability, with retrieval latency and throughput similar to hot
data. For cool and cold data, a lower availability service-level agreement (SLA) and higher
access costs compared to hot data are acceptable trade-offs for lower storage costs.
4. Archive storage stores data offline and offers the lowest storage costs, but also the highest
costs to rehydrate and access data.
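The minimum storage durations above (30, 90, and 180 days; hot has no minimum) lend themselves to a quick tier-selection sketch. This is my own simplification, not an official rule set; real tier choice also weighs access frequency, latency needs, and early-deletion charges:

```python
# Documented minimum storage duration, in days, per blob access tier.
MIN_RETENTION_DAYS = {"hot": 0, "cool": 30, "cold": 90, "archive": 180}

def cheapest_suitable_tier(expected_retention_days: int) -> str:
    """Pick the coolest tier whose minimum duration the data will satisfy.

    Cooler tiers have lower storage costs but higher access costs, so
    only choose them when the data will stay long enough.
    """
    for tier in ("archive", "cold", "cool", "hot"):
        if expected_retention_days >= MIN_RETENTION_DAYS[tier]:
            return tier
    return "hot"

print(cheapest_suitable_tier(200))  # archive
print(cheapest_suitable_tier(45))   # cool
```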
d. Azure Files: Azure File storage offers fully managed file shares in the cloud that are accessible via the
industry standard Server Message Block (SMB) or Network File System (NFS) protocols. SMB Azure file
shares are accessible from Windows, Linux, and macOS clients. NFS Azure file shares are accessible from
Linux or macOS clients. Additionally, SMB Azure file shares can be cached on Windows Servers with Azure
File Sync for fast access near where the data is being used.
i. Azure Files key benefits:
1. Shared access: Azure file shares support the industry standard SMB and NFS protocols,
meaning you can seamlessly replace your on-premises file shares with Azure file shares
without worrying about application compatibility.
2. Fully managed: Azure file shares can be created without the need to manage hardware or
an OS. This means you don't have to deal with patching the server OS with critical security
upgrades or replacing faulty hard disks.
3. Scripting and tooling: PowerShell cmdlets and Azure CLI can be used to create, mount, and
manage Azure file shares as part of the administration of Azure applications. You can
create and manage Azure file shares using Azure portal and Azure Storage Explorer.
4. Resiliency: Azure Files has been built from the ground up to always be available. Replacing
on-premises file shares with Azure Files means you don't have to wake up in the middle of
the night to deal with local power outages or network issues.
5. Familiar programmability: Applications running in Azure can access data in the share via
file system I/O APIs. Developers can therefore use their existing code and skills to migrate
existing applications. In addition to file system I/O APIs, you can use the Azure Storage client
libraries or the Azure Storage REST API.
e. Azure Queues: Azure Queue storage is a service for storing large numbers of messages. Once stored, you
can access the messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue
can contain as many messages as your storage account has room for (potentially millions). Each individual
message can be up to 64 KB in size. Queues are commonly used to create a backlog of work to process
asynchronously. Queue storage can be combined with compute functions like Azure Functions to take an
action when a message is received. For example, suppose you want to perform an action after a
customer uploads a form to your website. You could have the submit button on the website trigger a
message to Queue storage, and then use Azure Functions to trigger an action once the message is received.
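The 64-KB message limit is easy to trip over with UTF-8 text, since multi-byte characters count for more than one byte. A small pre-send check, sketched as an illustration (the helper is hypothetical; the real azure-storage-queue SDK rejects oversized messages server-side):

```python
MAX_QUEUE_MESSAGE_BYTES = 64 * 1024  # 64 KB per-message limit

def fits_in_queue_message(text: str) -> bool:
    """Check whether a string fits the 64 KB Queue storage message limit.

    Size is measured on the encoded bytes, so multi-byte UTF-8
    characters count for more than one.
    """
    return len(text.encode("utf-8")) <= MAX_QUEUE_MESSAGE_BYTES

print(fits_in_queue_message("order received: #12345"))  # True
print(fits_in_queue_message("x" * 70_000))              # False: over 64 KB
```

Larger payloads are usually handled by storing the data in Blob storage and queuing only a reference to it.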
f. Azure Disks: Azure Disk storage, or Azure managed disks, are block-level storage volumes managed by
Azure for use with Azure VMs.
g. Azure Tables: Azure Table storage stores large amounts of structured data. Azure tables are a NoSQL
datastore that accepts authenticated calls from inside and outside the Azure cloud. This enables you to
use Azure tables to build your hybrid or multi-cloud solution and have your data always available. Azure
tables are ideal for storing structured, non-relational data.
17. Identify Azure data migration options: Azure supports both real-time migration of infrastructure, applications, and
data using Azure Migrate as well as asynchronous migration of data using Azure Data Box.
a. Azure Migrate: Azure Migrate is a service that helps you migrate from an on-premises environment to the
cloud. Azure Migrate functions as a hub to help you manage the assessment and migration of your on-
premises datacenter to Azure. It provides the following:
i. Unified migration platform: A single portal to start, run, and track your migration to Azure.
ii. Range of tools: A range of tools for assessment and migration. Azure Migrate tools include Azure
Migrate: Discovery and assessment and Azure Migrate: Server Migration. Azure Migrate also
integrates with other Azure services and tools, and with independent software vendor (ISV)
offerings.
iii. Assessment and migration: In the Azure Migrate hub, you can assess and migrate your on-
premises infrastructure to Azure.
b. Integrated tools: In addition to working with tools from ISVs, the Azure Migrate hub also includes the
following tools to help with migration:
i. Azure Migrate: Discovery and assessment. Discover and assess on-premises servers running on
VMware, Hyper-V, and physical servers in preparation for migration to Azure.
ii. Azure Migrate: Server Migration. Migrate VMware VMs, Hyper-V VMs, physical servers, other
virtualized servers, and public cloud VMs to Azure.
iii. Data Migration Assistant. Data Migration Assistant is a stand-alone tool to assess SQL Servers. It
helps pinpoint potential problems blocking migration. It identifies unsupported features, new
features that can benefit you after migration, and the right path for database migration.
iv. Azure Database Migration Service. Migrate on-premises databases to Azure VMs running SQL
Server, Azure SQL Database, or SQL Managed Instances.
v. Azure App Service migration assistant. Azure App Service migration assistant is a standalone tool
to assess on-premises websites for migration to Azure App Service. Use Migration Assistant to
migrate .NET and PHP web apps to Azure.
vi. Azure Data Box. Use Azure Data Box products to move large amounts of offline data to Azure.
1. Here are the various scenarios where Data Box can be used to export data from Azure:
a. Disaster recovery - when a copy of the data from Azure is restored to an on-
premises network. In a typical disaster recovery scenario, a large amount of Azure
data is exported to a Data Box. Microsoft then ships this Data Box, and the data is
restored on your premises in a short time.
b. Security requirements - when you need to be able to export data out of Azure due
to government or security requirements.
c. Migrate back to on-premises or to another cloud service provider - when you
want to move all the data back to on-premises, or to another cloud service
provider, export data via Data Box to migrate the workloads.
2. Once the data from your import order is uploaded to Azure, the disks on the device are
wiped clean in accordance with NIST 800-88r1 standards. For an export order, the disks
are erased once the device reaches the Azure datacenter.
18. Azure file movement options: In addition to large scale migration using services like Azure Migrate and Azure Data
Box, Azure also has tools designed to help you move or interact with individual files or small file groups.
a. AzCopy:
i. AzCopy is a command-line utility that you can use to copy blobs or files to or from your storage
account. With AzCopy, you can upload files, download files, copy files between storage accounts,
and even synchronize files. AzCopy can even be configured to work with other cloud providers to
help move files back and forth between clouds.
ii. Synchronizing blobs or files with AzCopy is one-directional. When you synchronize, you
designate the source and destination, and AzCopy copies files or blobs in that direction. It
doesn't synchronize bi-directionally based on timestamps or other metadata.
b. Azure Storage Explorer: Azure Storage Explorer is a standalone app that provides a graphical interface to
manage files and blobs in your Azure Storage Account. It works on Windows, macOS, and Linux operating
systems and uses AzCopy on the backend to perform all of the file and blob management tasks. With
Storage Explorer, you can upload to Azure, download from Azure, or move between storage accounts.
c. Azure File Sync: Azure File Sync is a tool that lets you centralize your file shares in Azure Files and keep the
flexibility, performance, and compatibility of a Windows file server. It’s almost like turning your Windows
file server into a miniature content delivery network. Once you install Azure File Sync on your local
Windows server, it will automatically stay bi-directionally synced with your files in Azure. With Azure File
Sync, you can:
i. Use any protocol that's available on Windows Server to access your data locally, including SMB,
NFS, and FTPS.
ii. Have as many caches as you need across the world.
iii. Replace a failed local server by installing Azure File Sync on a new server in the same datacenter.
iv. Configure cloud tiering so the most frequently accessed files are replicated locally, while
infrequently accessed files are kept in the cloud until requested.
19. Azure directory services: Microsoft Entra ID is a directory service that enables you to sign in and access both
Microsoft cloud applications and cloud applications that you develop. Microsoft Entra ID can also help you
maintain your on-premises Active Directory deployment. For on-premises environments, Active Directory running
on Windows Server provides an identity and access management service that's managed by your organization.
Microsoft Entra ID is Microsoft's cloud-based identity and access management service. With Microsoft Entra ID,
you control the identity accounts, but Microsoft ensures that the service is available globally. When you secure
identities on-premises with Active Directory, Microsoft doesn't monitor sign-in attempts. When you connect
Active Directory with Microsoft Entra ID, Microsoft can help protect you by detecting suspicious sign-in attempts
at no extra cost. For example, Microsoft Entra ID can detect sign-in attempts from unexpected locations or
unknown devices.
a. Who uses Microsoft Entra ID?
i. IT administrators. Administrators can use Microsoft Entra ID to control access to applications and
resources based on their business requirements.
ii. App developers. Developers can use Microsoft Entra ID to provide a standards-based approach for
adding functionality to applications that they build, such as adding SSO functionality to an app or
enabling an app to work with a user's existing credentials.
iii. Users. Users can manage their identities and take maintenance actions like self-service password
reset.
iv. Online service subscribers. Microsoft 365, Microsoft Office 365, Azure, and Microsoft Dynamics
CRM Online subscribers are already using Microsoft Entra ID to authenticate into their account.
b. Single sign-on: Single sign-on (SSO) enables a user to sign in one time and use that credential to access
multiple resources and applications from different providers.
NOTE: Single sign-on is only as secure as the initial authenticator because the subsequent connections are all based on the
security of the initial authenticator.
c. Multifactor authentication: Multifactor authentication (MFA) is the process of prompting a user for an extra form
(or factor) of identification during the sign-in process. MFA helps protect against password compromise
in situations where the password was compromised but the second factor wasn't.
i. Multifactor authentication provides additional security for your identities by requiring two or
more elements to fully authenticate. These elements fall into three categories:
1. Something the user knows – this might be a challenge question.
2. Something the user has – this might be a code that's sent to the user's mobile phone.
3. Something the user is – this is typically some sort of biometric property, such as a
fingerprint or face scan.
20. Azure Monitor: Azure Monitor is a platform for collecting data on your resources, analyzing that data,
visualizing the information, and even acting on the results.
c. In the Azure Monitor data-flow diagram, on the left is a list of the sources of logging and metric data
that can be collected at every layer in your
application architecture, from application to operating system and network. In the center, the logging and
metric data are stored in central repositories. On the right, the data is used in several ways. You can view
real-time and historical performance across each layer of your architecture or aggregated and detailed
information. The data is displayed at different levels for different audiences. You can view high-level
reports on the Azure Monitor Dashboard or create custom views by using Power BI and Kusto queries.
Additionally, you can use the data to help you react to critical events in real time, through alerts delivered
to teams via SMS, email, and so on. Or you can use thresholds to trigger autoscaling functionality to scale
to meet the demand.
d. Azure Log Analytics: Azure Log Analytics is the tool in the Azure portal where you’ll write and run log
queries on the data gathered by Azure Monitor. Log Analytics is a robust tool that supports both simple
and complex queries as well as data analysis. You can write a simple query that returns a set of records and then
use features of Log Analytics to sort, filter, and analyze the records. You can write an advanced query to
perform statistical analysis and visualize the results in a chart to identify a particular trend. Whether you
work with the results of your queries interactively or use them with other Azure Monitor features such as
log query alerts or workbooks, Log Analytics is the tool that you're going to use to write and test those
queries.