Azure Security Fundamentals
Overview
We know that security is job one in the cloud, and we know how important it is that you find
accurate and timely information about Azure security. One of the best reasons to use
Azure for your applications and services is to take advantage of its wide array of security
tools and capabilities. These tools and capabilities help make it possible to create secure
solutions on the secure Azure platform. Microsoft Azure provides confidentiality,
integrity, and availability of customer data, while also enabling transparent
accountability.
This article provides a comprehensive look at the security available with Azure.
Azure platform
Azure is a public cloud service platform that supports a broad selection of operating
systems, programming languages, frameworks, tools, databases, and devices. It can run
Linux containers with Docker integration; build apps with JavaScript, Python, .NET, PHP,
Java, and Node.js; build back-ends for iOS, Android, and Windows devices.
Azure public cloud services support the same technologies millions of developers and IT
professionals already rely on and trust. When you build on, or migrate IT assets to, a
public cloud service provider, you are relying on that organization's ability to protect
your applications and data with the services and the controls they provide to manage
the security of your cloud-based assets.
In addition, Azure provides you with a wide array of configurable security options and
the ability to control them so that you can customize security to meet the unique
requirements of your organization’s deployments. This document helps you understand
how Azure security capabilities can help you fulfill these requirements.
Note
The primary focus of this document is on customer-facing controls that you can use
to customize and increase security for your applications and services.
For information on how Microsoft secures the Azure platform itself, see Azure
infrastructure security.
The built-in capabilities are organized into six functional areas: Operations, Applications,
Storage, Networking, Compute, and Identity. Additional detail on the features and
capabilities available in the Azure platform in these six areas is provided through
summary information.
Operations
This section provides additional information regarding key features in security
operations and summary information about these capabilities.
Microsoft Sentinel
Microsoft Sentinel is a scalable, cloud-native, security information and event
management (SIEM) and security orchestration, automation, and response (SOAR)
solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence
across the enterprise, providing a single solution for attack detection, threat visibility,
proactive hunting, and threat response.
Application Insights
Application Insights is an extensible Application Performance Management (APM)
service for web developers. With Application Insights, you can monitor your live web
applications and automatically detect performance anomalies. It includes powerful
analytics tools to help you diagnose issues and to understand what users actually do
with your apps. It monitors your application all the time it's running, both during testing
and after you've published or deployed it.
Application Insights creates charts and tables that show you, for example, what times of
day you get most users, how responsive the app is, and how well it is served by any
external services that it depends on.
If there are crashes, failures or performance issues, you can search through the
telemetry data in detail to diagnose the cause. And the service sends you emails if there
are any changes in the availability and performance of your app. Application Insights thus
becomes a valuable security tool because it helps with the availability leg of the
confidentiality, integrity, and availability security triad.
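As a hedged illustration only, the sketch below shows how a Python service might send telemetry to Application Insights through the azure-monitor-opentelemetry distro; the connection string environment variable, span name, and attribute are placeholders, and your instrumentation approach may differ.

```python
# Sketch: emit telemetry to Application Insights from a Python app.
# Assumes the azure-monitor-opentelemetry package is installed and that
# APPLICATIONINSIGHTS_CONNECTION_STRING points at your resource (placeholder).
import os

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Wire the OpenTelemetry SDK to the Application Insights resource.
configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
)

tracer = trace.get_tracer(__name__)


def handle_checkout(order_id: str) -> None:
    # Each span becomes searchable telemetry, so latency anomalies and
    # failures can be diagnosed after the fact.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("order.id", order_id)
        # ... application logic ...


if __name__ == "__main__":
    handle_checkout("demo-123")
```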
Azure Monitor
Azure Monitor offers visualization, query, routing, alerting, auto scale, and automation
on data both from the Azure subscription (Activity Log) and each individual Azure
resource (Resource Logs). You can use Azure Monitor to alert you on security-related
events that are generated in Azure logs.
Azure Monitor logs can be a useful tool in forensic and other security analysis, as the
tool enables you to quickly search through large amounts of security-related entries
with a flexible query approach. In addition, on-premises firewall and proxy logs can be
exported into Azure and made available for analysis using Azure Monitor logs.
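As a minimal sketch, assuming you have a Log Analytics workspace and the azure-monitor-query and azure-identity packages, the following shows how security-related entries could be queried with KQL from Python; the workspace ID environment variable and the query itself are illustrative placeholders.

```python
# Sketch: query security-related entries in Azure Monitor logs with KQL.
import os
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

credential = DefaultAzureCredential()
client = LogsQueryClient(credential)

# Illustrative KQL: count failed operations recorded in the activity log table.
query = """
AzureActivity
| where ActivityStatusValue == "Failure"
| summarize FailedOperations = count() by OperationNameValue
| top 10 by FailedOperations desc
"""

response = client.query_workspace(
    workspace_id=os.environ["LOG_ANALYTICS_WORKSPACE_ID"],  # placeholder
    query=query,
    timespan=timedelta(days=1),
)

# Print each returned row as a column-name/value mapping.
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```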
Azure Advisor
Azure Advisor is a personalized cloud consultant that helps you to optimize your Azure
deployments. It analyzes your resource configuration and usage telemetry. It then
recommends solutions to help improve the performance, security, and reliability of your
resources while looking for opportunities to reduce your overall Azure spend. Azure
Advisor provides security recommendations, which can significantly improve your overall
security posture for solutions you deploy in Azure. These recommendations are drawn
from security analysis performed by Microsoft Defender for Cloud.
Applications
This section provides additional information regarding key features in application
security and summary information about these capabilities.
Penetration Testing
We don’t perform penetration testing of your application for you, but we do understand
that you want and need to perform testing on your own applications. That’s a good
thing, because when you enhance the security of your applications you help make the
entire Azure ecosystem more secure. While notifying Microsoft of pen testing activities
is no longer required, customers must still comply with the Microsoft Cloud Penetration
Testing Rules of Engagement.
Web Application firewall
The web application firewall (WAF) in Azure Application Gateway helps protect web
applications from common web-based attacks like SQL injection, cross-site scripting
attacks, and session hijacking. It comes preconfigured with protection from threats
identified by the Open Web Application Security Project (OWASP) as the top 10
common vulnerabilities.
The first new feature is real-time state information about application pools, worker
processes, sites, application domains, and running requests. The second new advantage
is the detailed trace events that track a request throughout the complete request-and-
response process.
To enable the collection of these trace events, IIS 7 can be configured to automatically
capture full trace logs, in XML format, for any particular request based on elapsed time
or error response codes.
Storage
This section provides additional information regarding key features in Azure storage
security and summary information about these capabilities.
Encryption in Transit
Encryption in transit is a mechanism of protecting data when it is transmitted across
networks. With Azure Storage, you can secure data using:
Transport-level encryption, such as HTTPS when you transfer data into or out of
Azure Storage.
Wire encryption, such as SMB 3.0 encryption for Azure File shares.
Client-side encryption, to encrypt the data before it is transferred into storage and
to decrypt the data after it is transferred out of storage.
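As a minimal sketch of a transfer that stays encrypted in transit, the following uploads a blob over the HTTPS endpoint with the Azure Storage Blob SDK; Microsoft Entra token authentication only works over HTTPS, so the transfer is protected by TLS. The account URL, container, blob, and file names are placeholders.

```python
# Sketch: upload a blob over an HTTPS (TLS) endpoint so data is encrypted in transit.
# Assumes azure-identity and azure-storage-blob are installed.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Token (Microsoft Entra) authentication requires the https:// endpoint,
# so the upload below travels over TLS.
service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)

blob = service.get_blob_client(container="backups", blob="report.csv")
with open("report.csv", "rb") as data:
    blob.upload_blob(data, overwrite=True)
```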
Encryption at rest
For many organizations, data encryption at rest is a mandatory step towards data
privacy, compliance, and data sovereignty. Azure storage security
features that provide encryption of data that is "at rest" include:
Storage Service Encryption allows you to request that the storage service
automatically encrypt data when writing it to Azure Storage.
Azure Disk Encryption for Linux VMs and Azure Disk Encryption for Windows VMs
allows you to encrypt the OS disks and data disks used by an IaaS virtual machine.
Storage Analytics
Azure Storage Analytics performs logging and provides metrics data for a storage
account. You can use this data to trace requests, analyze usage trends, and diagnose
issues with your storage account. Storage Analytics logs detailed information about
successful and failed requests to a storage service. This information can be used to
monitor individual requests and to diagnose issues with a storage service. Requests are
logged on a best-effort basis. The following types of authenticated requests are logged:
Successful requests.
Failed requests, including timeout, throttling, network, authorization, and other
errors.
Requests using a Shared Access Signature (SAS), including failed and successful
requests.
Requests to analytics data.
Azure storage services now support CORS so that once you set the CORS rules for the
service, a properly authenticated request made against the service from a different
domain is evaluated to determine whether it is allowed according to the rules you have
specified.
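The sketch below illustrates both ideas with the Python Storage SDK: turning on Storage Analytics logging and setting a single CORS rule for the Blob service. The account URL and allowed origin are placeholders, and property names can vary between SDK versions, so treat this as a starting point rather than a definitive configuration.

```python
# Sketch: enable Storage Analytics logging and set a CORS rule on the Blob service.
# Assumes azure-identity and azure-storage-blob are installed.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import (
    BlobAnalyticsLogging,
    BlobServiceClient,
    CorsRule,
    RetentionPolicy,
)

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)

# Log successful and failed read/write/delete requests, kept for 30 days.
logging = BlobAnalyticsLogging(
    read=True,
    write=True,
    delete=True,
    retention_policy=RetentionPolicy(enabled=True, days=30),
)

# Allow a single trusted web origin to issue GET requests against the service.
cors_rule = CorsRule(
    allowed_origins=["https://www.contoso.com"],  # placeholder origin
    allowed_methods=["GET"],
    allowed_headers=["*"],
    exposed_headers=["*"],
    max_age_in_seconds=3600,
)

service.set_service_properties(analytics_logging=logging, cors=[cors_rule])
```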
Networking
This section provides additional information regarding key features in Azure network
security and summary information about these capabilities.
Network Layer Controls
Network access control is the act of limiting connectivity to and from specific devices or
subnets and represents the core of network security. The goal of network access control
is to make sure that your virtual machines and services are accessible to only users and
devices to which you want them accessible.
A Network Security Group (NSG) is a basic stateful packet filtering firewall and it enables
you to control access based on a 5-tuple. NSGs do not provide application layer
inspection or authenticated access controls. They can be used to control traffic moving
between subnets within an Azure Virtual Network and traffic between an Azure Virtual
Network and the Internet.
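To make the 5-tuple idea concrete, here is a hedged sketch that adds an allow rule to an existing NSG with the azure-mgmt-network SDK; the subscription, resource group, NSG name, and address ranges are placeholders, and exact model names can differ between SDK versions.

```python
# Sketch: add a 5-tuple allow rule (source/destination address, source/destination
# port, protocol) to an existing network security group.
# Assumes azure-identity and azure-mgmt-network are installed.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

network_client = NetworkManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],  # placeholder
)

# Allow HTTPS from a trusted on-premises range to the web subnet, nothing else.
allow_https = SecurityRule(
    protocol="Tcp",
    source_address_prefix="203.0.113.0/24",    # trusted source range (placeholder)
    source_port_range="*",
    destination_address_prefix="10.0.1.0/24",  # web subnet (placeholder)
    destination_port_range="443",
    access="Allow",
    direction="Inbound",
    priority=200,
)

poller = network_client.security_rules.begin_create_or_update(
    resource_group_name="rg-web",
    network_security_group_name="nsg-web",
    security_rule_name="allow-https-from-onprem",
    security_rule_parameters=allow_https,
)
print(poller.result().provisioning_state)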
Azure Firewall
Azure Firewall is a cloud-native and intelligent network firewall security service that
provides threat protection for your cloud workloads running in Azure. It's a fully stateful
firewall as a service with built-in high availability and unrestricted cloud scalability. It
provides both east-west and north-south traffic inspection.
Azure Firewall is offered in two SKUs: Standard and Premium. Azure Firewall Standard
provides L3-L7 filtering and threat intelligence feeds directly from Microsoft Cyber
Security. Azure Firewall Premium provides advanced capabilities that include signature-based
IDPS to allow rapid detection of attacks by looking for specific patterns.
User-Defined Routes allow you to customize inbound and outbound paths for traffic
moving into and out of individual virtual machines or subnets to ensure the most secure
route possible. Forced tunneling is a mechanism you can use to ensure that your
services are not allowed to initiate a connection to devices on the Internet.
This is different from being able to accept incoming connections and then responding
to them. Front-end web servers need to respond to requests from Internet hosts, and so
Internet-sourced traffic is allowed inbound to these web servers and the web servers can
respond.
While Network Security Groups, User-Defined Routes, and forced tunneling provide you
with a level of security at the network and transport layers of the OSI model, there may be
times when you want to enable security at higher levels of the stack. You can access
these enhanced network security features by using an Azure partner network security
appliance solution. You can find the most current Azure partner network security
solutions by visiting the Azure Marketplace and searching for “security” and “network
security.”
Additionally, you can connect the virtual network to your on-premises network using
one of the connectivity options available in Azure. In essence, you can expand your
network to Azure, with complete control on IP address blocks with the benefit of
enterprise scale Azure provides.
Azure networking supports various secure remote access scenarios; some of these
are described below.
Private Endpoints allow you to secure your critical Azure service resources to only your
virtual networks. Azure Private Endpoint uses a private IP address from your VNet to
connect you privately and securely to a service powered by Azure Private Link,
effectively bringing the service into your VNet. Exposing your virtual network to the
public internet is no longer necessary to consume services on Azure.
You can also create your own private link service in your virtual network. Azure Private
Link service is the reference to your own service that is powered by Azure Private Link.
Your service that is running behind Azure Standard Load Balancer can be enabled for
Private Link access so that consumers to your service can access it privately from their
own virtual networks. Your customers can create a private endpoint inside their virtual
network and map it to this service. Exposing your service to the public internet is no
longer necessary to render services on Azure.
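As a rough sketch only, the following creates a private endpoint that maps a storage account's blob service into a subnet using the azure-mgmt-network SDK; all resource IDs, names, and the region are placeholders, and model or parameter names may differ slightly between SDK versions.

```python
# Sketch: create a private endpoint so traffic to a storage account's blob
# service stays on your virtual network instead of the public internet.
# Assumes azure-identity and azure-mgmt-network are installed.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    PrivateEndpoint,
    PrivateLinkServiceConnection,
    Subnet,
)

network_client = NetworkManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]  # placeholder
)

subnet_id = (
    "/subscriptions/<sub-id>/resourceGroups/rg-net/providers/"
    "Microsoft.Network/virtualNetworks/vnet-hub/subnets/snet-private-endpoints"
)
storage_id = (
    "/subscriptions/<sub-id>/resourceGroups/rg-data/providers/"
    "Microsoft.Storage/storageAccounts/<storage-account>"
)

endpoint = PrivateEndpoint(
    location="eastus",
    subnet=Subnet(id=subnet_id),
    private_link_service_connections=[
        PrivateLinkServiceConnection(
            name="blob-connection",
            private_link_service_id=storage_id,
            group_ids=["blob"],  # target the blob sub-resource of the account
        )
    ],
)

poller = network_client.private_endpoints.begin_create_or_update(
    resource_group_name="rg-net",
    private_endpoint_name="pe-storage-blob",
    parameters=endpoint,
)
print(poller.result().provisioning_state)
```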
VPN Gateway
To send network traffic between your Azure Virtual Network and your on-premises site,
you must create a VPN gateway for your Azure Virtual Network. A VPN gateway is a type
of virtual network gateway that sends encrypted traffic across a public connection. You
can also use VPN gateways to send traffic between Azure Virtual Networks over the
Azure network fabric.
ExpressRoute
Microsoft Azure ExpressRoute is a dedicated WAN link that lets you extend your on-
premises networks into the Microsoft cloud over a dedicated private connection
facilitated by a connectivity provider.
With ExpressRoute, you can establish connections to Microsoft cloud services, such as
Microsoft Azure, Microsoft 365, and CRM Online. Connectivity can be from an any-to-
any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection
through a connectivity provider at a co-location facility.
ExpressRoute connections do not go over the public Internet and thus can be
considered more secure than VPN-based solutions. This allows ExpressRoute
connections to offer more reliability, faster speeds, lower latencies, and higher security
than typical connections over the Internet.
Application Gateway
Microsoft Azure Application Gateway provides an Application Delivery Controller
(ADC) as a service, offering various layer 7 load balancing capabilities for your
application.
It allows you to optimize web farm productivity by offloading CPU intensive TLS
termination to the Application Gateway (also known as “TLS offload” or “TLS bridging”).
It also provides other Layer 7 routing capabilities including round-robin distribution of
incoming traffic, cookie-based session affinity, URL path-based routing, and the ability
to host multiple websites behind a single Application Gateway.
The web application firewall available with Application Gateway also protects against
HTTP protocol anomalies, such as missing host, user-agent, and accept headers.
A centralized web application firewall that protects against web attacks makes security
management much simpler and gives the application better assurance against the
threats of intrusions. A WAF solution can also react to a security threat faster by
patching a known vulnerability at a central location, rather than securing each
individual web application. Existing application gateways can be converted to an application
gateway with web application firewall easily.
Traffic Manager
Microsoft Azure Traffic Manager allows you to control the distribution of user traffic for
service endpoints in different data centers. Service endpoints supported by Traffic
Manager include Azure VMs, Web Apps, and Cloud services. You can also use Traffic
Manager with external, non-Azure endpoints. Traffic Manager uses the Domain Name
System (DNS) to direct client requests to the most appropriate endpoint based on a
traffic-routing method and the health of the endpoints.
Azure Load Balancer can load balance traffic between virtual machines in a virtual
network, between virtual machines in cloud services, or between on-premises computers
and virtual machines in a cross-premises virtual network. This configuration is known as
internal load balancing.
Internal DNS
You can manage the list of DNS servers used in a VNet in the Management Portal, or in
the network configuration file. Customers can add up to 12 DNS servers for each VNet.
When specifying DNS servers, it's important to verify that you list your DNS servers in
the correct order for your environment. DNS server lists do not work round-robin; they
are used in the order that they are specified. If the first DNS server on the list can be
reached, the client uses that DNS server regardless of whether the DNS server is
functioning properly or not. To change the DNS server order for your virtual network,
remove the DNS servers from the list and add them back in the order that you want.
DNS supports the availability aspect of the "CIA" security triad.
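As a hedged sketch, the following sets the ordered list of custom DNS servers on a virtual network with the azure-mgmt-network SDK; note that create-or-update replaces the VNet definition supplied, and all names, the region, and the IP addresses are placeholders.

```python
# Sketch: set the ordered list of custom DNS servers on a virtual network.
# The list is not used round-robin; clients try the servers in the order given.
# Assumes azure-identity and azure-mgmt-network are installed.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import AddressSpace, DhcpOptions, VirtualNetwork

client = NetworkManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]  # placeholder
)

vnet = VirtualNetwork(
    location="eastus",
    address_space=AddressSpace(address_prefixes=["10.0.0.0/16"]),
    # Order matters: the first reachable server is used even if it cannot
    # resolve the query, so list the preferred server first.
    dhcp_options=DhcpOptions(dns_servers=["10.0.0.4", "10.0.0.5"]),
)

# Creates the VNet if it does not exist, or updates it with this definition.
client.virtual_networks.begin_create_or_update(
    resource_group_name="rg-net",
    virtual_network_name="vnet-hub",
    parameters=vnet,
).result()
```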
Azure DNS
The Domain Name System, or DNS, is responsible for translating (or resolving) a website
or service name to its IP address. Azure DNS is a hosting service for DNS domains,
providing name resolution using Microsoft Azure infrastructure. By hosting your
domains in Azure, you can manage your DNS records using the same credentials, APIs,
tools, and billing as your other Azure services. DNS supports the availability aspect of
the “CIA” security triad.
NSG logging provides two categories of information:
Event: Contains entries for which NSG rules are applied to VMs and instance roles
based on MAC address. The status for these rules is collected every 60 seconds.
Rules counter: Contains entries for how many times each NSG rule is applied to
deny or allow traffic.
Compute
This section provides additional information regarding key features in this area and
summary information about these capabilities.
The spectrum of options ranges from enabling "lift and shift" scenarios of existing
applications to full control of security features. For Infrastructure as a Service (IaaS),
you can use confidential virtual machines powered by AMD SEV-SNP or confidential
application enclaves for virtual machines that run Intel Software Guard Extensions (SGX).
For Platform as a Service, we have multiple container-based options, including
integrations with Azure Kubernetes Service (AKS).
Antimalware & Antivirus
With Azure IaaS, you can use antimalware software from security vendors such as
Microsoft, Symantec, Trend Micro, McAfee, and Kaspersky to protect your virtual
machines from malicious files, adware, and other threats. Microsoft Antimalware for
Azure Cloud Services and Virtual Machines is a protection capability that helps identify
and remove viruses, spyware, and other malicious software. Microsoft Antimalware
provides configurable alerts when known malicious or unwanted software attempts to
install itself or run on your Azure systems. Microsoft Antimalware can also be deployed
using Microsoft Defender for Cloud.
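As a rough sketch only, the following shows how the Microsoft Antimalware VM extension might be enabled on an existing Windows VM with the azure-mgmt-compute SDK; the publisher, type, version, and settings keys reflect commonly documented values but should be verified against current documentation, and all resource names are placeholders.

```python
# Sketch: enable the Microsoft Antimalware extension on an existing Windows VM.
# Assumes azure-identity and azure-mgmt-compute are installed; extension
# publisher/type/version and settings keys should be verified before use.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import VirtualMachineExtension

compute_client = ComputeManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]  # placeholder
)

extension = VirtualMachineExtension(
    location="eastus",
    publisher="Microsoft.Azure.Security",   # assumed publisher name
    type_properties_type="IaaSAntimalware", # assumed extension type
    type_handler_version="1.3",             # assumed handler version
    auto_upgrade_minor_version=True,
    settings={
        "AntimalwareEnabled": True,
        "RealtimeProtectionEnabled": True,
        "ScheduledScanSettings": {"isEnabled": True, "day": 7, "time": 120},
    },
)

compute_client.virtual_machine_extensions.begin_create_or_update(
    resource_group_name="rg-vms",
    vm_name="vm-web-01",
    vm_extension_name="IaaSAntimalware",
    extension_parameters=extension,
).result()
```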
SQL VM TDE
Transparent data encryption (TDE) and column level encryption (CLE) are SQL Server
encryption features. This form of encryption requires you to manage and store
the cryptographic keys that you use for encryption.
The Azure Key Vault (AKV) service is designed to improve the security and management
of these keys in a secure and highly available location. The SQL Server Connector
enables SQL Server to use these keys from Azure Key Vault.
If you are running SQL Server with on-premises machines, there are steps you can follow
to access Azure Key Vault from your on-premises SQL Server instance. But for SQL
Server in Azure VMs, you can save time by using the Azure Key Vault Integration feature.
With a few Azure PowerShell cmdlets to enable this feature, you can automate the
configuration necessary for a SQL VM to access your key vault.
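To illustrate the Key Vault side of this pattern, the sketch below creates and retrieves an RSA key with the azure-keyvault-keys package; it is not the SQL Server Connector or the PowerShell integration itself, and the vault URL and key name are placeholders.

```python
# Sketch: create and read back an RSA key in Azure Key Vault, the kind of
# asymmetric key a SQL Server TDE configuration would reference.
# Assumes azure-identity and azure-keyvault-keys are installed.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()
key_client = KeyClient(
    vault_url="https://<your-vault>.vault.azure.net",  # placeholder
    credential=credential,
)

# Create (or add a new version of) a 2048-bit RSA key for TDE key protection.
tde_key = key_client.create_rsa_key("sql-tde-protector", size=2048)
print(f"Created key {tde_key.name}, version {tde_key.properties.version}")

# Later, the key can be fetched by name; the private material never leaves the vault.
fetched = key_client.get_key("sql-tde-protector")
print(f"Key type: {fetched.key_type}, enabled: {fetched.properties.enabled}")
```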
VM Disk Encryption
Azure Disk Encryption for Linux VMs and Azure Disk Encryption for Windows VMs helps
you encrypt your IaaS virtual machine disks. It applies the industry standard BitLocker
feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for
the OS and the data disks. The solution is integrated with Azure Key Vault to help you
control and manage the disk-encryption keys and secrets in your Key Vault subscription.
The solution also ensures that all data on the virtual machine disks is encrypted at rest
in your Azure storage.
Virtual networking
Virtual machines need network connectivity. To support that requirement, Azure requires
virtual machines to be connected to an Azure Virtual Network. An Azure Virtual Network
is a logical construct built on top of the physical Azure network fabric. Each logical Azure
Virtual Network is isolated from all other Azure Virtual Networks. This isolation helps
ensure that network traffic in your deployments is not accessible to other Microsoft
Azure customers.
Patch Updates
Patch Updates provide the basis for finding and fixing potential problems and simplify
the software update management process, both by reducing the number of software
updates you must deploy in your enterprise and by increasing your ability to monitor
compliance.
Secure Identity
Microsoft uses multiple security practices and technologies across its products and
services to manage identity and access.
Azure role-based access control (Azure RBAC) enables you to grant access based
on the user’s assigned role, making it easy to give users only the amount of access
they need to perform their job duties. You can customize Azure RBAC per your
organization’s business model and risk tolerance.
Cloud App Discovery is a premium feature of Microsoft Entra ID that enables you
to identify cloud applications that are used by the employees in your organization.
Microsoft Entra application proxy provides SSO and secure remote access for web
applications hosted on-premises.
Next Steps
Understand your shared responsibility in the cloud.
Learn how Microsoft Defender for Cloud can help you prevent, detect, and
respond to threats with increased visibility and control over the security of your
Azure resources.
End-to-end security in Azure
One of the best reasons to use Azure for your applications and services is to take
advantage of its wide array of security tools and capabilities. These tools and capabilities
help make it possible to create secure solutions on the secure Azure platform. Microsoft
Azure provides confidentiality, integrity, and availability of customer data, while also
enabling transparent accountability.
The following diagram and documentation introduces you to the security services in
Azure. These security services help you meet the security needs of your business and
protect your users, devices, resources, data, and applications in the cloud.
Secure and protect - Services that let you implement a layered, defense in-depth
strategy across identity, hosts, networks, and data. This collection of security
services and capabilities provides a way to understand and improve your security
posture across your Azure environment.
Detect threats – Services that identify suspicious activities and facilitate mitigating
the threat.
Investigate and respond – Services that pull logging data so you can assess a
suspicious activity and respond.
Security controls and baselines
The Microsoft cloud security benchmark includes a collection of high-impact security
recommendations you can use to help secure the services you use in Azure.
Secure and protect
Microsoft Defender for Cloud: A unified infrastructure security management system that strengthens the security posture of your data centers, and provides advanced threat protection across your hybrid workloads in the cloud, whether they're in Azure or not.
Microsoft Entra ID Protection: A tool that allows organizations to automate the detection and remediation of identity-based risks, investigate risks using data in the portal, and export risk detection data to third-party utilities for further analysis.
VPN Gateway: A virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet and to send encrypted traffic between Azure virtual networks over the Microsoft network.
Azure DDoS Protection: Provides enhanced DDoS mitigation features to defend against DDoS attacks. It is automatically tuned to help protect your specific Azure resources in a virtual network.
Azure Front Door: A global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications.
Azure Key Vault: A secure secrets store for tokens, passwords, certificates, API keys, and other secrets. Key Vault can also be used to create and control the encryption keys used to encrypt your data.
Key Vault Managed HSM: A fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs.
Azure Private Link: Enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network.
Azure Application Gateway: An advanced web traffic load balancer that enables you to manage traffic to your web applications. Application Gateway can make routing decisions based on additional attributes of an HTTP request, for example URI path or host headers.
Azure Service Bus: A fully managed enterprise message broker with message queues and publish-subscribe topics. Service Bus is used to decouple applications and services from each other.
Web Application Firewall: Provides centralized protection of your web applications from common exploits and vulnerabilities. WAF can be deployed with Azure Application Gateway and Azure Front Door.
API Management: A way to create consistent and modern API gateways for existing back-end services.
Azure confidential computing: Allows you to isolate your sensitive data while it's being processed in the cloud.
Microsoft Entra External ID: With External Identities in Microsoft Entra ID, you can allow people outside your organization to access your apps and resources, while letting them sign in using whatever identity they prefer. You can share your apps and resources with external users via Microsoft Entra B2B collaboration.
Detect threats
Microsoft Defender for Cloud: Brings advanced, intelligent protection of your Azure and hybrid resources and workloads. The workload protection dashboard in Defender for Cloud provides visibility and control of the cloud workload protection features for your environment.
Microsoft Defender XDR: A unified pre- and post-breach enterprise defense suite that natively coordinates detection, prevention, investigation, and response across endpoints, identities, email, and applications to provide integrated protection against sophisticated attacks.
Microsoft Entra ID Protection: Sends two types of automated notification emails to help you manage user risk and risk detections: Users at risk detected email and Weekly digest email.
Microsoft Defender for IoT: A unified security solution for identifying IoT/OT devices, vulnerabilities, and threats. It enables you to secure your entire IoT/OT environment, whether you need to protect existing IoT/OT devices or build security into new IoT innovations.
Azure Network Watcher: Provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS products, which includes virtual machines, virtual networks, application gateways, and load balancers.
Microsoft Defender for Containers: A cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications.
Microsoft Defender for Cloud Apps: A cloud access security broker (CASB) that operates on multiple clouds. It provides rich visibility, control over data travel, and sophisticated analytics to identify and combat cyberthreats across all your cloud services.
Investigate and respond
Microsoft Sentinel: Powerful search and query tools to hunt for security threats across your organization's data sources.
Azure Monitor logs and metrics: Delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. Azure Monitor collects and aggregates data from a variety of sources into a common data platform where it can be used for analysis, visualization, and alerting.
Azure AD reports and monitoring: Microsoft Entra reports provide a comprehensive view of activity in your environment.
Microsoft Entra PIM audit history: Shows all role assignments and activations within the past 30 days for all privileged roles.
Microsoft Defender for Cloud Apps: Provides tools to gain a deeper understanding of what's happening in your cloud environment.
Next steps
Understand your shared responsibility in the cloud.
Understand the isolation choices in the Azure cloud against both malicious and
non-malicious users.
Shared responsibility in the cloud
As you consider and evaluate public cloud services, it's critical to understand the shared
responsibility model and which security tasks the cloud provider handles and which
tasks you handle. The workload responsibilities vary depending on whether the
workload is hosted on Software as a Service (SaaS), Platform as a Service (PaaS),
Infrastructure as a Service (IaaS), or in an on-premises datacenter.
Division of responsibility
In an on-premises datacenter, you own the whole stack. As you move to the cloud some
responsibilities transfer to Microsoft. The following diagram illustrates the areas of
responsibility between you and Microsoft, according to the type of deployment of your
stack.
For all cloud deployment types, you own your data and identities. You're responsible for
protecting the security of your data and identities, on-premises resources, and the cloud
components you control. Cloud components you control vary by service type.
Regardless of the type of deployment, you always retain the following responsibilities:
Data
Endpoints
Account
Access management
Next step
Learn more about shared responsibility and strategies to improve your security posture
in the Well-Architected Framework's overview of the security pillar.
Artificial intelligence (AI) shared
responsibility model
As you consider and evaluate AI enabled integration, it's critical to understand the
shared responsibility model and which tasks the AI platform or application provider
handle and which tasks you handle. The workload responsibilities vary depending on
whether the AI integration is based on Software as a Service (SaaS), Platform as a Service
(PaaS), or Infrastructure as a Service (IaaS).
Division of responsibility
As with cloud services, you have options when implementing AI capabilities for your
organization. Depending on which option you choose, you take responsibility for
different parts of the necessary operations and policies needed to use AI safely.
The following diagram illustrates the areas of responsibility between you and Microsoft
according to the type of deployment.
AI layer overview
An AI enabled application consists of three layers of functionality that group together
tasks, which you or an AI provider perform. The security responsibilities generally reside
with whoever performs the tasks, but an AI provider might choose to expose security or
other controls as a configuration option to you as appropriate. These three layers
include:
AI platform
The AI platform layer provides the AI capabilities to the applications. At the platform
layer, there's a need to build and safeguard the infrastructure that runs the AI model,
training data, and specific configurations that change the behavior of the model, such as
weights and biases. This layer provides access to functionality via APIs, which pass text
known as a Metaprompt to the AI model for processing, then return the generated
outcome, known as a Prompt-Response.
AI platform security considerations - To protect the AI platform from malicious inputs,
a safety system must be built to filter out the potentially harmful instructions sent to the
AI model (inputs). As AI models are generative, there's also a potential that some
harmful content might be generated and returned to the user (outputs). Any safety
system must first protect against potentially harmful inputs and outputs of many
classifications including hate, jailbreaks, and others. These classifications will likely
evolve over time based on model knowledge, locale, and industry.
Microsoft has built-in safety systems for both PaaS and SaaS offerings.
AI application
The AI application accesses the AI capabilities and provides the service or interface that
the user consumes. The components in this layer can vary from relatively simple to
highly complex, depending on the application. The simplest standalone AI applications
act as an interface to a set of APIs taking a text-based user-prompt and passing that
data to the model for a response. More complex AI applications include the ability to
ground the user-prompt with extra context, including a persistence layer, semantic
index, or via plugins to allow access to more data sources. Advanced AI applications
might also interface with existing applications and systems. Existing applications and
systems might work across text, audio, and images to generate various types of content.
AI usage
The AI usage layer describes how the AI capabilities are ultimately used and consumed.
Generative AI offers a new type of user/computer interface that is fundamentally
different from other computer interfaces, such as API, command-prompt, and graphical
user interfaces (GUIs). The generative AI interface is both interactive and dynamic,
allowing the computer capabilities to adjust to the user and their intent. The generative
AI interface contrasts with previous interfaces that primarily force users to learn the
system design and functionality and adjust to it. This interactivity allows user input,
instead of application designers, to have a high level of influence of the output of the
system, making safety guardrails critical to protecting people, data, and business assets.
More emphasis is required on user behavior and accountability because of the increased
influence users have on the output of the systems. It's critical to update acceptable use
policies and educate users on the difference of standard IT applications to AI enabled
applications. These should include AI specific considerations related to security, privacy,
and ethics. Additionally, users should be educated on AI based attacks that can be used
to trick them with convincing fake text, voices, videos, and more.
Security lifecycle
As with security for other types of capability, it's critical to plan for a complete approach.
A complete approach includes people, process, and technology across the full security
lifecycle: identify, protect, detect, respond, recover, and govern. Any gap or weakness in
this lifecycle could put your AI workloads and data at risk.
To learn more about the unique nature of AI threat testing, read how Microsoft AI Red
Team is building the future of safer AI .
If the current "off the shelf" capabilities don't meet the specific needs for a workload,
you can adopt a PaaS model by using AI services, such as Azure OpenAI Service, to meet
those specific requirements.
Custom model building should only be adopted by organizations with deep expertise in
data science and the security, privacy, and ethical considerations of AI.
To help bring AI to the world, Microsoft is developing Copilot solutions for each of the
main productivity solutions: from Bing and Windows, to GitHub and Office 365.
Microsoft is developing full stack solutions for all types of productivity scenarios. These
are offered as SaaS solutions. Built into the user interface of the product, they're tuned
to assist the user with specific tasks to increase productivity.
Microsoft ensures that every Copilot solution is engineered following our strong
principles for AI governance .
Next steps
Learn more about Microsoft's product development requirements for responsible AI in
the Microsoft Responsible AI Standard .
Zero Trust security
Zero Trust is a new security model that assumes breach and verifies each request as
though it originated from an uncontrolled network. In this article, you'll learn about the
guiding principles of Zero Trust and find resources to help you implement Zero Trust.
To address this new world of computing, Microsoft highly recommends the Zero Trust
security model, which is based on these guiding principles:
Verify explicitly - Always authenticate and authorize based on all available data
points.
Use least privilege access - Limit user access with Just-In-Time and Just-Enough-
Access (JIT/JEA), risk-based adaptive policies, and data protection.
Assume breach - Minimize blast radius and segment access. Verify end-to-end
encryption and use analytics to get visibility, drive threat detection, and improve
defenses.
For more information about Zero Trust, see Microsoft's Zero Trust Guidance Center.
For more information about deploying technology components of the Zero Trust
architecture, see Microsoft's Deploying Zero Trust solutions.
Organizations must embrace a Zero Trust approach to access control as they embrace
remote work and use cloud technology to digitally transform their business model,
customer engagement model, employee engagement, and empowerment model.
Zero trust principles help establish and continuously improve security assurances, while
maintaining flexibility to keep pace with this new world. Most zero trust journeys start
with access control and focus on identity as a preferred and primary control while they
continue to embrace network security technology as a key element. Network technology
and the security perimeter tactic are still present in a modern access control model, but
they aren't the dominant and preferred approach in a complete access control strategy.
For more information on the Zero Trust transformation of access control, see the Cloud
Adoption Framework's access control.
To learn more about creating an access model based on Conditional Access that's
aligned with the guiding principles of Zero Trust, see Conditional Access for Zero Trust.
Develop apps using Zero Trust principles
Zero Trust is a security framework that does not rely on the implicit trust afforded to
interactions behind a secure network perimeter. Instead, it uses the principles of explicit
verification, least privileged access, and assuming breach to keep users and data secure
while allowing for common scenarios like access to applications from outside the
network perimeter.
As a developer, it is essential that you use Zero Trust principles to keep users safe and
data secure. App developers can improve app security, minimize the impact of breaches,
and ensure that their applications meet their customers' security requirements by
adopting Zero Trust principles.
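As a minimal sketch of the "verify explicitly" and least-privilege ideas in application code, the following acquires a token with MSAL using the client credentials flow; the tenant ID, client ID, client secret, and scope are placeholders, and your identity flow may differ.

```python
# Sketch: acquire a token with MSAL so every call to a downstream API is
# explicitly authenticated and scoped to the permissions the app actually needs.
# Assumes the msal package is installed; IDs, secret, and scope are placeholders.
import os

import msal

app = msal.ConfidentialClientApplication(
    client_id=os.environ["AZURE_CLIENT_ID"],
    client_credential=os.environ["AZURE_CLIENT_SECRET"],
    authority=f"https://login.microsoftonline.com/{os.environ['AZURE_TENANT_ID']}",
)

# Request only the application permissions already consented for Microsoft Graph
# (least privilege): no broader scopes than the app needs.
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)

if "access_token" in result:
    # Send the token as a bearer credential; never cache it beyond its lifetime.
    print("Token acquired, expires in", result["expires_in"], "seconds")
else:
    # Surface errors instead of silently retrying (assume breach: log and alert).
    print("Token request failed:", result.get("error_description"))
```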
For more information on best practices key to keeping your apps secure, see:
To learn about recommendations and core concepts for deploying secure email, docs,
and apps policies and configurations for Zero Trust access to Microsoft 365, see Zero
Trust identity and device access configurations.
Next steps
To learn how to enhance your security solutions by integrating with Microsoft
products, see Integrate with Microsoft's Zero Trust solutions
Ransomware protection in Azure
Ransomware and extortion are a high profit, low-cost business, which has a debilitating
impact on targeted organizations, national/regional security, economic security, and
public health and safety. What started as simple, single-PC ransomware grew to include
various extortion techniques directed at all types of corporate networks and cloud
platforms.
By using Azure native ransomware protections and implementing the best practices
recommended in this article, you're taking measures that position your organization to
prevent, protect against, and detect potential ransomware attacks on your Azure assets.
This article lays out key Azure native capabilities and defenses for ransomware attacks
and guidance on how to proactively use these to protect your assets on Azure cloud.
A growing threat
Ransomware attacks are one of the biggest security challenges facing businesses today.
When successful, ransomware attacks can disable a business's core IT infrastructure and
cause destruction that could have a debilitating impact on the physical or economic
security or safety of a business. Ransomware attacks target businesses of all
types. This requires that all businesses take preventive measures to ensure protection.
Recent trends on the number of attacks are alarming. While 2020 wasn't a good year for
ransomware attacks on businesses, 2021 started on a bad trajectory. On May 7, 2021, the
Colonial Pipeline (Colonial) attack temporarily halted services such as pipeline
transportation of diesel, gasoline, and jet fuel. Colonial shut down the critical fuel
network supplying the populous eastern states.
For many organizations, the cost to rebuild from scratch after a ransomware incident far
outweighs the original ransom demanded. With a limited understanding of the threat
landscape and how ransomware operates, paying the ransom seems like the better
business decision to return to operations. However, the real damage is often done when
the cybercriminal exfiltrates files for release or sale, while leaving backdoors in the
network for future criminal activity—and these risks persist whether or not the ransom is
paid.
What is ransomware
Ransomware is a type of malware that infects a computer and restricts a user's access to
the infected system or specific files in order to extort them for money. After the target
system is compromised, it typically locks out most interaction and displays an on-screen
alert, commonly stating that the system is locked or that all of the files have been
encrypted. It then demands a substantial ransom be paid before the system is released
or files decrypted.
Any business or organization that operates an IT system with data in it can be attacked.
Although individuals can be targeted in a ransomware attack, most attacks are targeted
at businesses. While the Colonial ransomware attack of May 2021 drew considerable
public attention, our Detection and Response team (DART)'s ransomware engagement
data shows that the energy sector represents one of the most targeted sectors, along
with the financial, healthcare, and entertainment sectors. And despite continued
promises not to attack hospitals or healthcare companies during a pandemic, healthcare
remains the number one target of human operated ransomware.
How your assets are targeted
When attacking cloud infrastructure, adversaries often attack multiple resources to try to
obtain access to customer data or company secrets. The cloud "kill chain" model
explains how attackers attempt to gain access to any of your resources running in the
public cloud through a four-step process: exposure, access, lateral movement, and
actions.
Several trends make these attacks more likely to succeed:
The attack surface is increased as more businesses offer more services through
digital outlets
Off-the-shelf malware and Ransomware-as-a-Service (RaaS) offerings are easy to
obtain
The option to use cryptocurrency for blackmail payments opens new avenues for
exploitation
Expansion of computers and their usage in different workplaces (local school
districts, police departments, police squad cars, etc.), each of which is a potential
access point for malware, results in a growing attack surface
Prevalence of old, outdated, and antiquated infrastructure systems and software
Poor patch-management regimens
Outdated or old operating systems that are close to or have gone beyond end-of-
support dates
Lack of resources to modernize the IT footprint
Knowledge gap
Lack of skilled staff and key personnel overdependency
Poor security architecture
Attackers use different techniques, such as Remote Desktop Protocol (RDP) brute force
attack to exploit vulnerabilities.
Should you pay?
There are varying opinions on what the best option is when confronted with this vexing
demand. The Federal Bureau of Investigation (FBI) advises victims not to pay ransom but
to instead be vigilant and take proactive measures to secure their data before an attack.
They contend that paying doesn't guarantee that locked systems and encrypted data
are released again. The FBI says another reason not to pay is that payments to cyber
criminals incentivize them to continue to attack organizations.
Nevertheless, some victims elect to pay the ransom demand even though system and
data access isn't guaranteed after paying the ransom. By paying, such organizations take
the calculated risk to pay in hopes of getting back their system and data and quickly
resuming normal operations. Part of the calculation is reduction in collateral costs such
as lost productivity, decreased revenue over time, exposure of sensitive data, and
potential reputational damage.
The best way to avoid paying ransom is to not fall victim in the first place, by
implementing preventive measures and having tool saturation to protect your
organization from every step that attackers take, wholly or incrementally, to hack into
your system. In addition, the ability to recover impacted assets ensures restoration of
business operations in a timely fashion. Azure Cloud has a robust set of tools to guide
you all the way.
What is the typical cost to a business?
The impact of a ransomware attack on any organization is difficult to quantify
accurately. However, depending on the scope and type, the impact is multi-dimensional
and is broadly expressed in:
Colonial Pipeline paid about $4.4 million in ransom to have its data released. This
doesn't include the cost of downtime, lost productivity, lost sales, and the cost of
restoring services. More broadly, a significant impact is the "knock-on effect" of
impacting high numbers of businesses and organizations of all kinds including towns
and cities in their local areas. The financial impact is also staggering. According to
Microsoft, the global cost associated with ransomware recovery is projected to exceed
$20 billion in 2021.
Next steps
See the white paper: Azure defenses for ransomware attack whitepaper .
Ultimately, the NIST Cybersecurity Framework is aimed at reducing and better managing
cybersecurity risks.
Prioritize mitigation
Based on our experience with ransomware attacks, we find that prioritization should
focus on: 1) prepare, 2) limit, 3) prevent. This may seem counterintuitive, since most
people want to prevent an attack and move on. Unfortunately, we must assume breach
(a key Zero Trust principle) and focus on reliably mitigating the most damage first. This
prioritization is critical because of the high likelihood of a worst-case scenario with
ransomware. While it's not a pleasant truth to accept, we're facing creative and
motivated human attackers who are adept at finding a way to control the complex real-
world environments in which we operate. Against that reality, it's important to prepare
for the worst and establish frameworks to contain and prevent attackers' ability to get
what they're after.
While these priorities should govern what to do first, we encourage organizations to run
steps in parallel where possible, including pulling quick wins forward from step 1 when
you can.
To achieve this, organizations should identify and execute quick wins to strengthen
security controls to prevent entry, and rapidly detect/evict attackers while implementing
a sustained program that helps them stay secure. Microsoft recommends organizations
follow the principles outlined in the Zero Trust strategy here . Specifically, against
Ransomware, organizations should prioritize:
Improving security hygiene by focusing efforts on attack surface reduction and
threat and vulnerability management for assets in their estate.
Implementing Protection, Detection and Response controls for their digital assets
that can protect against commodity and advanced threats, provide visibility and
alerting on attacker activity and respond to active threats.
Organizations should have elevated security for privileged accounts (tightly protect,
closely monitor, and rapidly respond to incidents related to these roles). See Microsoft's
Security rapid modernization plan, which covers:
End to End Session Security (including multifactor authentication (MFA) for admins)
Protect and Monitor Identity Systems
Mitigate Lateral Traversal
Rapid Threat Response
Limits damage for the worst-case scenario – While restoring all systems from
backups is highly disruptive to business, this is more effective and efficient than
trying to recover using (low quality) attacker-provided decryption tools after
paying to get the key. Note: Paying is an uncertain path – You have no formal or
legal guarantee that the key works on all files, the tools work effectively, or that the
attacker (who may be an amateur affiliate using a professional's toolkit) will act in
good faith.
Limit the financial return for attackers – If an organization can restore business
operations without paying the attackers, the attack fails and results in zero return
on investment (ROI) for the attackers. This makes it less likely that they'll target the
organization in the future (and deprives them of more funding to attack others).
The attackers may still attempt to extort the organization through data disclosure or
abusing/selling the stolen data, but this gives them less leverage than if they have the
only access path to your data and systems.
Register Risk - Add ransomware to risk register as high likelihood and high impact
scenario. Track mitigation status via Enterprise Risk Management (ERM) assessment
cycle.
Define and Backup Critical Business Assets – Define systems required for critical
business operations and automatically back them up on a regular schedule
(including correct backup of critical dependencies like Active Directory). Protect
backups against deliberate erasure and encryption with offline storage, immutable
storage, and/or out-of-band steps (MFA or PIN) before modifying/erasing online
backups.
Test 'Recover from Zero' Scenario – test to ensure your business continuity /
disaster recovery (BC/DR) can rapidly bring critical business operations online from
zero functionality (all systems down). Conduct practice exercises to validate cross-
team processes and technical procedures, including out-of-band employee and
customer communications (assume all email/chat/etc. is down).
It's critical to protect (or print) supporting documents and systems required for
recovery including restoration procedure documents, CMDBs, network diagrams,
SolarWinds instances, etc. Attackers destroy these regularly.
Reduce on-premises exposure – by moving data to cloud services with automatic
backup & self-service rollback.
Data protection
Implement data protection to ensure rapid and reliable recovery from a
ransomware attack + block some techniques.
Designate Protected Folders – to make it more difficult for unauthorized
applications to modify the data in these folders.
Review Permissions – to reduce risk from broad access enabling ransomware
Discover broad write/delete permissions on fileshares, SharePoint, and other
solutions
Reduce broad permissions while meeting business collaboration requirements
Audit and monitor to ensure broad permissions don't reappear
Secure backups
Ensure critical systems are backed up and backups are protected against
deliberate attacker erasure/encryption.
Back up all critical systems automatically on a regular schedule
Ensure Rapid Recovery of business operations by regularly exercising business
continuity / disaster recovery (BC/DR) plan
Protect backups against deliberate erasure and encryption
Strong Protection – Require out of band steps (like MUA/MFA) before modifying
online backups such as Azure Backup
Strongest Protection – Isolate backups from online/production workloads to
enhance the protection of backup data.
Protect supporting documents required for recovery such as restoration
procedure documents, CMDB, and network diagrams
1. Preparation: This stage describes the various measures that should be put into
place prior to an incident. This may include both technical preparations (such as
the implementation of suitable security controls and other technologies) and non-
technical preparations (such as the preparation of processes and procedures).
2. Triggers / Detection: This stage describes how this type of incident may be
detected and what triggers may be available that should be used to initiate either
further investigation or the declaration of an incident. These are generally
separated into high-confidence and low-confidence triggers.
3. Investigation / Analysis: This stage describes the activities that should be
undertaken to investigate and analyze available data when it isn't clear that an
incident has occurred, with the goal of either confirming that an incident should be
declared or concluding that an incident hasn't occurred.
4. Incident Declaration: This stage covers the steps that must be taken to declare an
incident, typically with the raising of a ticket within the enterprise incident
management (ticketing) system and directing the ticket to the appropriate
personnel for further evaluation and action.
5. Containment / Mitigation: This stage covers the steps that may be taken either by
the Security Operations Center (SOC), or by others, to contain or mitigate (stop)
the incident from continuing to occur or limiting the effect of the incident using
available tools, techniques, and procedures.
6. Remediation / Recovery: This stage covers the steps that may be taken to
remediate or recover from damage that was caused by the incident before it was
contained and mitigated.
7. Post-Incident Activity: This stage covers the activities that should be performed
once the incident has been closed. This can include capturing the final narrative
associated with the incident as well as identifying lessons learned.
Ensure that you have well-documented procedures for engaging any third-party
support, particularly support from threat intelligence providers, antimalware solution
providers, and malware analysis providers. These contacts may be useful if the
ransomware variant has known weaknesses or if decryption tools are available.
The Azure platform provides backup and recovery options through Azure Backup, as well
as through capabilities built into various data services and workloads.
Azure Files
Azure Blobs
Azure Disks
Data services like Azure Databases (SQL, MySQL, MariaDB, PostgreSQL), Azure
Cosmos DB, and Azure NetApp Files (ANF) offer built-in backup capabilities
What's Next
See the white paper: Azure defenses for ransomware attack whitepaper .
There are several potential triggers that might indicate a ransomware incident. Unlike
many other types of malware, most will be higher-confidence triggers (where little
additional investigation or analysis should be required prior to the declaration of an
incident) rather than lower-confidence triggers (where more investigation or analysis
would likely be required before an incident should be declared).
In general, such infections are obvious from basic system behavior, the absence of key
system or user files, and the demand for ransom. In this case, the analyst should consider
whether to immediately declare and escalate the incident, including taking any
automated actions to mitigate the attack.
Ensure rapid detection and remediation of common attacks on VMs, SQL Servers, Web
applications, and identity.
Watch for indicators such as:
Event Logs Clearing – especially the Security Event log and PowerShell Operational
logs
Disabling of security tools/controls (associated with some groups)
Incident declaration
Once a successful ransomware infection has been confirmed, the analyst should verify
whether this represents a new incident or whether it might be related to an existing incident.
Look for currently open tickets that indicate similar incidents. If so, update the current
incident ticket with new information in the ticketing system. If this is a new incident, an
incident should be declared in the relevant ticketing system and escalated to the
appropriate teams or providers to contain and mitigate the incident. Be mindful that
managing ransomware incidents might require actions taken by multiple IT and security
teams. Where possible, ensure that the ticket is clearly identified as a ransomware
incident to guide workflow.
Containment/Mitigation
In general, various server/endpoint antimalware, email antimalware and network
protection solutions should be configured to automatically contain and mitigate known
ransomware. There might be cases, however, where the specific ransomware variant has
been able to bypass such protections and successfully infect target systems.
Microsoft provides extensive resources to help you update your incident response
processes in the Top Azure Security Best Practices.
Road to recovery
The Microsoft Detection and Response Team will help protect you from attacks
Understanding and fixing the fundamental security issues that led to the compromise in
the first place should be a priority for ransomware targets.
Customers can engage our security experts directly from within the Microsoft Defender
portal for timely and accurate response. Experts provide the insights needed to better
understand the complex threats affecting your organization, from alert inquiries and
potentially compromised devices to the root cause of a suspicious network connection
and additional threat intelligence regarding ongoing advanced persistent threat campaigns.
Our Rapid Ransomware Recovery services are treated as "Confidential" for the duration
of the engagement. Rapid Ransomware Recovery engagements are exclusively delivered
by the Compromise Recovery Security Practice (CRSP) team, part of the Azure Cloud &
AI Domain. For more information, you can contact CRSP at Request contact about Azure
security .
What's next
See the white paper: Azure defenses for ransomware attack whitepaper .
Microsoft has invested in Azure native security capabilities that organizations can
leverage to defeat ransomware attack techniques found in both high-volume, everyday
attacks, and sophisticated targeted attacks.
Microsoft Defender for Cloud delivers protection for all resources from directly within
the Azure experience and extends protection to on-premises and multi-cloud virtual
machines and SQL databases using Azure Arc:
Microsoft Defender for Cloud provides you with the tools to detect and block
ransomware, advanced malware, and threats for your resources.
Keeping your resources safe is a joint effort between your cloud provider, Azure, and
you, the customer. You have to make sure your workloads are secure as you move to the
cloud, and when you move to IaaS (infrastructure as a service) you take on more
responsibility than with PaaS (platform as a service) or SaaS (software as a service).
Microsoft Defender for Cloud provides you with the tools needed to
harden your network, secure your services and make sure you're on top of your security
posture.
Defender for Cloud's threat protection enables you to detect and prevent threats at the
infrastructure as a service (IaaS) layer, on non-Azure servers, and for platform as a
service (PaaS) resources in Azure.
Defender for Cloud's threat protection includes fusion kill-chain analysis, which
automatically correlates alerts in your environment based on cyber kill-chain analysis, to
help you better understand the full story of an attack campaign, where it started and
what kind of impact it had on your resources.
Key Features:
Microsoft Sentinel
Microsoft Sentinel helps to create a complete view of a kill chain
With Sentinel, you can connect to any of your security sources using built-in connectors
and industry standards and then take advantage of artificial intelligence to correlate
multiple low fidelity signals spanning multiple sources to create a complete view of a
ransomware kill chain and prioritized alerts so that defenders can accelerate their time
to evict adversaries.
Microsoft Sentinel is your bird's-eye view across the enterprise, alleviating the stress of
increasingly sophisticated attacks, increasing volumes of alerts, and long resolution time
frames.
Collect data at cloud scale across all users, devices, applications, and infrastructure, both
on-premises and in multiple clouds.
Detect previously undetected threats, and minimize false positives using Microsoft's
analytics and unparalleled threat intelligence.
Investigate threats with artificial intelligence, and hunt for suspicious activities at scale,
tapping into years of cybersecurity work at Microsoft.
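As a starting point for that correlation, the alerts that Sentinel and connected products raise can be pulled and grouped by the affected entity. A minimal sketch, assuming a Microsoft Sentinel workspace with the standard SecurityAlert table and the azure-monitor-query package; the workspace ID is a placeholder.

```python
# A triage sketch: group the last 24 hours of alerts by the entity they affect,
# so related low-fidelity signals from different products show up together.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<sentinel-workspace-id>"  # placeholder

KQL = """
SecurityAlert
| where TimeGenerated > ago(24h)
| summarize AlertCount = count(), Alerts = make_set(AlertName),
            Products = make_set(ProductName) by CompromisedEntity
| order by AlertCount desc
"""

client = LogsQueryClient(DefaultAzureCredential())
for table in client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(days=1)).tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```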
Microsoft Defender for Cloud provides security alerts and advanced threat protection
for virtual machines, SQL databases, containers, web applications, your network, and
more. When Microsoft Defender for Cloud detects a threat in any area of your
environment, it generates a security alert. These alerts describe details of the affected
resources, suggested remediation steps, and in some cases an option to trigger a logic
app in response.
Azure Backup: The Azure Backup service provides a simple, secure, and cost-effective
solution to back up your Azure VMs. Currently, Azure Backup supports backing up
all the disks (OS and data disks) in a VM using the backup solution for Azure virtual
machines.
Azure Disaster Recovery: With disaster recovery from on-premises to the cloud, or
from one cloud to another, you can avoid downtime and keep your applications up
and running.
Built-in Security and Management in Azure: To be successful in the cloud era,
enterprises must have visibility, metrics, and controls on every component to
pinpoint issues efficiently, optimize and scale effectively, and have assurance that
security, compliance, and policies are in place.
Key Features:
Azure comes with locally redundant storage (LRS), where data is stored locally, as
well as geo-redundant storage (GRS), which keeps a copy of your data in a second
region
All data stored on Azure is protected by an advanced encryption process, and all
Microsoft datacenters have two-tier authentication, proxy card access readers, and
biometric scanners
Azure has more certifications than any other public cloud provider on the market,
including ISO 27001, HIPAA, FedRAMP, SOC 1, SOC 2, and many international
specifications
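A storage account's redundancy is visible from its SKU. Here is a minimal sketch, assuming the azure-mgmt-storage package and Reader access to the subscription, that flags accounts that don't keep a secondary-region copy; the subscription ID is a placeholder.

```python
# A quick inventory sketch: list storage accounts whose SKU has no
# geo-redundant (secondary region) copy of the data.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for account in client.storage_accounts.list():
    sku = account.sku.name  # e.g. "Standard_LRS", "Standard_GRS", "Standard_RAGZRS"
    if "GRS" not in sku and "GZRS" not in sku:
        print(f"{account.name}: {sku} (no secondary-region copy)")
```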
Additional resources
Microsoft Cloud Adoption Framework for Azure
Build great solutions with the Microsoft Azure Well-Architected Framework
Azure Top Security Best Practices
Security Baselines
Microsoft Azure Resource Center
Azure Migration Guide
Security Compliance Management
Azure Security Control – Incident Response
Zero Trust Guidance Center
Azure Web Application Firewall
Azure VPN gateway
Azure Active Directory Multi-Factor Authentication (MFA)
Azure AD Identity Protection
Azure AD Conditional Access
Microsoft Defender for Cloud documentation
Conclusion
Microsoft focuses heavily on both security of our cloud and providing you the security
controls you need to protect your cloud workloads. As a leader in cybersecurity, we
embrace our responsibility to make the world a safer place. This is reflected in our
comprehensive approach to ransomware prevention and detection in our security
framework, designs, products, legal efforts, industry partnerships, and services.
For detailed information on how Microsoft secures our cloud, visit the Service Trust
Portal.
What's Next
See the white paper: Azure defenses for ransomware attack whitepaper .
Ransomware attacks deliberately encrypt or erase data and systems to force your
organization to pay money to attackers. These attacks target your data, your backups,
and also key documentation required for you to recover without paying the attackers (as
a means to increase the chances your organization will pay).
This article addresses what to do before an attack to protect your critical business
systems and during an attack to ensure a rapid recovery of business operations.
What is ransomware?
Ransomware is a type of extortion attack that encrypts files and folders, preventing
access to important data and systems. Attackers use ransomware to extort money from
victims by demanding payment, usually in the form of cryptocurrency, in exchange for a
decryption key or for not releasing sensitive data to the dark web or the public internet.
While early ransomware mostly used malware that spread with phishing or between
devices, human-operated ransomware has emerged where a gang of active attackers,
driven by human attack operators, target all systems in an organization (rather than a
single device or set of devices). An attack can:
The ransomware leverages the attackers’ knowledge of common system and security
misconfigurations and vulnerabilities to infiltrate the organization, navigate the
enterprise network, and adapt to the environment and its weaknesses as they go.
Ransomware can be staged to exfiltrate your data first, over several weeks or months,
before the ransomware actually executes on a specific date.
Ransomware can also slowly encrypt your data while keeping your key on the system.
With your key still available, your data is usable to you and the ransomware goes
unnoticed. Your backups, though, are of the encrypted data. Once all of your data is
encrypted and recent backups are also of encrypted data, your key is removed so you
can no longer read your data.
The real damage is often done when the attack exfiltrates files while leaving backdoors
in the network for future malicious activity—and these risks persist whether or not the
ransom is paid. These attacks can be catastrophic to business operations and difficult to
clean up, requiring complete adversary eviction to protect against future attacks. Unlike
early forms of ransomware that only required malware remediation, human-operated
ransomware can continue to threaten your business operations after the initial
encounter.
Impact of an attack
The impact of a ransomware attack on any organization is difficult to quantify
accurately. Depending on the scope of the attack, the impact could include:
You can reduce your on-premises exposure by moving your organization to a cloud
service. Microsoft has invested in native security capabilities that make Microsoft Azure
resilient against ransomware attacks and helps organizations defeat ransomware attack
techniques. For a comprehensive view of ransomware and extortion and how to protect
your organization, use the information in the Human-Operated Ransomware Mitigation
Project Plan PowerPoint presentation.
You should assume that at some point in time you'll fall victim to a ransomware attack.
One of the most important steps you can take to protect your data and avoid paying a
ransom is to have a reliable backup and restore plan for your business-critical
information. Since ransomware attackers have invested heavily into neutralizing backup
applications and operating system features like volume shadow copy, it's critical to have
backups that are inaccessible to a malicious attacker.
Azure Backup
Azure Backup provides security to your backup environment, both when your data is in
transit and at rest. With Azure Backup, you can back up Azure virtual machines, SQL
Server and SAP HANA databases running in Azure VMs, Azure Files shares, and
on-premises workloads.
The backup data is stored in Azure storage and the guest or attacker has no direct
access to backup storage or its contents. With virtual machine backup, the backup
snapshot creation and storage is done by Azure fabric where the guest or attacker has
no involvement other than quiescing the workload for application consistent backups.
With SQL and SAP HANA, the backup extension gets temporary access to write to
specific blobs. In this way, even in a compromised environment, existing backups can't
be tampered with or deleted by the attacker.
Azure Backup provides built-in monitoring and alerting capabilities to view and
configure actions for events related to Azure Backup. Backup Reports serve as a one-
stop destination for tracking usage, auditing of backups and restores, and identifying
key trends at different levels of granularity. Azure Backup's monitoring and reporting
tools can alert you to unauthorized, suspicious, or malicious activity as soon as it occurs.
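Backup activity can also be monitored programmatically. The sketch below assumes you route Azure Backup diagnostics (Backup Reports) to a Log Analytics workspace; the AddonAzureBackupJobs table and its columns reflect that configuration and are an assumption here, so adjust the query to what your workspace actually contains.

```python
# A monitoring sketch: summarize backup, restore, and delete operations over
# the last week so unexpected deletions stand out.
# Table and column names assume Backup Reports diagnostics are enabled.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

KQL = """
AddonAzureBackupJobs
| where JobOperation in ("Backup", "Restore", "DeleteBackupData")
| summarize count() by JobOperation, JobStatus
"""

client = LogsQueryClient(DefaultAzureCredential())
for table in client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(days=7)).tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```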
Checks have been added to make sure only valid users can perform various operations,
including an extra layer of authentication for critical operations: you're prompted to
enter a security PIN before modifying online backups.
Learn more about the security features built into Azure Backup.
Validate backups
Validate that your backup is good when it's created and again before you restore. We
recommend that you use a Recovery Services vault, which is a storage entity in Azure
that houses data. The data is typically copies of data, or configuration information for
virtual machines (VMs), workloads, servers, or workstations. You can use Recovery
Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or
Windows) and Azure SQL databases as well as on-premises assets. Recovery Services
vaults make it easy to organize your backup data and provide features such as:
Enhanced capabilities to ensure you can secure your backups, and safely recover
data, even if production and backup servers are compromised. Learn more.
Monitoring for your hybrid IT environment (Azure IaaS VMs and on-premises
assets) from a central portal. Learn more.
Compatibility with Azure role-based access control (Azure RBAC), which restricts
backup and restore access to a defined set of user roles. Azure RBAC provides
various built-in roles, and Azure Backup has three built-in roles to manage
recovery points. Learn more.
Soft delete protection, even if a malicious actor deletes a backup (or backup data is
accidentally deleted). Backup data is retained for 14 additional days, allowing the
recovery of a backup item with no data loss. Learn more.
Cross Region Restore which allows you to restore Azure VMs in a secondary
region, which is an Azure paired region. You can restore the replicated data in the
secondary region any time. This enables you to restore the secondary region data
for audit-compliance, and during outage scenarios, without waiting for Azure to
declare a disaster (unlike the GRS settings of the vault). Learn more.
7 Note
There are two types of vaults in Azure Backup. In addition to the Recovery Services
vaults, there are also Backup vaults that house data for newer workloads supported
by Azure Backup.
In our experience, the five most important applications to customers fall into the
following categories in this priority order:
Identity systems – required for users to access any systems (including all others
described below) such as Active Directory, Microsoft Entra Connect, AD domain
controllers
Human life – any system that supports human life or could put it at risk such as
medical or life support systems, safety systems (ambulance, dispatch systems,
traffic light control), large machinery, chemical/biological systems, production of
food or personal products, and others
Financial systems – systems that process monetary transactions and keep the
business operating, such as payment systems and related databases, financial
system for quarterly reporting
Product or service enablement – any systems that are required to provide the
business services or produce/deliver physical products that your customers pay
you for, factory control systems, product delivery/dispatch systems, and similar
Security (minimum) – You should also prioritize the security systems required to
monitor for attacks and provide minimum security services. This should be focused
on ensuring that the current attacks (or easy opportunistic ones) aren't
immediately able to gain (or regain) access to your restored systems
Your prioritized backup list also becomes your prioritized restore list. Once you’ve
identified your critical systems and are performing regular backups, then take steps to
reduce your exposure level.
Task: Identify the important systems that you need to bring back online first (using the top five categories above) and immediately begin performing regular backups of those systems.
Detail: To get back up and running as quickly as possible after an attack, determine today what is most important to you.

Task: Migrate your organization to the cloud. Consider purchasing a Microsoft Unified Support plan or working with a Microsoft partner to help support your move to the cloud.
Detail: Reduce your on-premises exposure by moving data to cloud services with automatic backup and self-service rollback. Microsoft Azure has a robust set of tools to help you back up your business-critical systems and restore your backups faster. Microsoft Unified Support is a cloud services support model that is there to help you whenever you need it.

Task: Move user data to cloud solutions like OneDrive and SharePoint to take advantage of versioning and recycle bin capabilities. Educate users on how to recover their files by themselves to reduce delays and cost of recovery. For example, if a user’s OneDrive files were infected by malware, they can restore their entire OneDrive to a previous time. Consider a defense strategy, such as Microsoft Defender XDR, before allowing users to restore their own files.
Detail: User data in the Microsoft cloud can be protected by built-in security and data management features. It's good to teach users how to restore their own files, but you need to be careful that your users don't restore the malware used to carry out the attack. You need to: ensure your users don't restore their files until you're confident that the attacker has been evicted, and have a mitigation in place in case a user does restore some of the malware. Microsoft Defender XDR uses AI-powered automatic actions and playbooks to remediate impacted assets back to a secure state, leveraging the automatic remediation capabilities of the suite products to ensure all impacted assets related to an incident are automatically remediated where possible.
Task: Implement the Microsoft cloud security benchmark.
Detail: The Microsoft cloud security benchmark is our security control framework based on industry security control frameworks such as NIST SP800-53 and CIS Controls v7.1. It provides organizations guidance on how to configure Azure and Azure services and implement the security controls. See Backup and Recovery.

Task: Consider creating a risk register to identify potential risks and address how you'll mitigate them through preventative controls and actions. Add ransomware to the risk register as a high-likelihood and high-impact scenario.
Detail: A risk register can help you prioritize risks based on the likelihood of that risk occurring and the severity to your business should that risk occur. Track mitigation status via your Enterprise Risk Management (ERM) assessment cycle.

Task: Back up all critical business systems automatically on a regular schedule (including backup of critical dependencies like Active Directory).
Detail: Allows you to recover data up to the last backup.

Task: Protect (or print) supporting documents and systems required for recovery, such as restoration procedure documents, CMDB, network diagrams, and SolarWinds instances.
Detail: Attackers deliberately target these resources because it impacts your ability to recover.

Task: Ensure you have well-documented procedures for engaging any third-party support, particularly support from threat intelligence providers, antimalware solution providers, and the malware analysis provider. Protect (or print) these procedures.
Detail: Third-party contacts may be useful if the given ransomware variant has known weaknesses or decryption tools are available.

Task: Ensure your backup and recovery strategy includes the ability to back up data to a specific point in time.
Detail: Backups are essential for resilience after an organization has been breached. Apply the 3-2-1 rule for maximum protection and availability: 3 copies (original + 2 backups), 2 storage types, and 1 offsite or cold copy.
Task: Protect backups against deliberate erasure and encryption.
Detail: Backups that are accessible by attackers can be rendered unusable for business recovery.

Task: Store backups in offline or off-site storage and/or immutable storage. Require out-of-band steps (such as MFA or a security PIN) before permitting an online backup to be modified or erased. Create private endpoints within your Azure Virtual Network to securely back up and restore data from your Recovery Services vault.
Detail: Offline storage ensures robust transfer of backup data without using any network bandwidth. Azure Backup supports offline backup, which transfers initial backup data offline, without the use of network bandwidth. It provides a mechanism to copy backup data onto physical storage devices, which are then shipped to a nearby Azure datacenter and uploaded onto a Recovery Services vault. Online immutable storage (such as Azure Blob) enables you to store business-critical data objects in a WORM (Write Once, Read Many) state, which makes the data non-erasable and non-modifiable for a user-specified interval (see the sketch after this table).

Task: Protect against phishing attempts. Conduct security awareness training regularly to help users identify a phishing attempt and avoid clicking on something that can create an initial entry point for a compromise.
Detail: The most common method used by attackers to infiltrate an organization is phishing via email. Exchange Online Protection (EOP) is the cloud-based filtering service that protects your organization against spam, malware, and other email threats. EOP is included in all Microsoft 365 organizations with Exchange Online mailboxes.

Task: Apply security filtering controls to email to detect and minimize the likelihood of a successful phishing attempt.
Detail: An example of a security filtering control for email is Safe Links. Safe Links is a feature in Defender for Office 365 that provides scanning and rewriting of URLs and links in email messages during inbound mail flow, and time-of-click verification of URLs and links in email messages and other locations (Microsoft Teams and Office documents). Safe Links scanning occurs in addition to the regular anti-spam and anti-malware protection of inbound email messages in EOP and can help protect your organization from malicious links that are used in phishing and other attacks.
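As a concrete example of the immutable-storage recommendation above, the sketch below uploads recovery documentation to a blob container. It assumes the container already has a time-based retention (immutability) policy configured, so the uploaded blobs become WORM-protected; the account URL, container name, and file name are placeholders.

```python
# A minimal sketch: store recovery documentation in a blob container that has
# an immutability (time-based retention) policy applied, so an attacker can't
# modify or delete it during the retention interval.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

ACCOUNT_URL = "https://<storage-account>.blob.core.windows.net"  # placeholder
CONTAINER = "recovery-docs"  # assumed to have an immutability policy configured

service = BlobServiceClient(account_url=ACCOUNT_URL, credential=DefaultAzureCredential())
container = service.get_container_client(CONTAINER)

with open("restoration-procedures.pdf", "rb") as data:
    # overwrite=False: fail rather than silently replace an existing copy.
    container.upload_blob(name="restoration-procedures.pdf", data=data, overwrite=False)
```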
Task: Early in the attack, engage third-party support, particularly support from threat intelligence providers, antimalware solution providers, and the malware analysis provider.
Detail: These contacts may be useful if the given ransomware variant has a known weakness or decryption tools are available. The Microsoft Incident Response team can help protect you from attacks. Microsoft Incident Response engages with customers around the world, helping to protect and harden against attacks before they occur, as well as to investigate and remediate when an attack has occurred.

Task: Contact your local or federal law enforcement agencies.
Detail: If you're in the United States, contact the FBI to report a ransomware breach using the IC3 Complaint Referral Form.

Task: Take steps to remove the malware or ransomware payload from your environment and stop the spread. Run a full, current antivirus scan on all suspected computers and devices to detect and remove the payload that's associated with the ransomware.
Detail: You can use Windows Defender or (for older clients) Microsoft Security Essentials. An alternative that will also help you remove ransomware or malware is the Malicious Software Removal Tool (MSRT).
Task: Restore business-critical systems first. Remember to validate again that your backup is good before you restore.
Detail: At this point, you don’t need to restore everything. Focus on the top five business-critical systems from your restore list.

Task: If you have offline backups, you can probably restore the encrypted data after you've removed the ransomware payload (malware) from your environment.
Detail: To prevent future attacks, ensure ransomware or malware is not on your offline backup before restoring.

Task: Identify a safe point-in-time backup image that is known not to be infected.
Detail: To prevent future attacks, scan the backup for ransomware or malware before restoring.

Task: Use a safety scanner and other tools for full operating system restore as well as data restore scenarios.
Detail: Microsoft Safety Scanner is a scan tool designed to find and remove malware from Windows computers. Simply download it and run a scan to find malware and try to reverse changes made by identified threats.

Task: Ensure that your antivirus or endpoint detection and response (EDR) solution is up to date. You also need to have up-to-date patches.
Detail: An EDR solution, such as Microsoft Defender for Endpoint, is preferred.

Task: After business-critical systems are up and running, restore other systems.
Detail: Telemetry data should help you identify whether malware is still on your systems.
1. Identify lessons learned where the process didn't work well (and opportunities to
simplify, accelerate, or otherwise improve the process)
2. Perform root cause analysis on the biggest challenges (at enough detail to ensure
solutions address the right problem — considering people, process, and
technology)
3. Investigate and remediate the original breach (engage the Microsoft Detection and
Response Team (DART) to help)
4. Update your backup and restore strategy based on lessons learned and
opportunities — prioritizing based on highest impact and quickest implementation
steps first
Next steps
In this article, you learned how to improve your backup and restore plan to protect
against ransomware. For best practices on deploying ransomware protection, see
Rapidly protect against ransomware and extortion.
Microsoft Azure:
Help protect from ransomware with Microsoft Azure Backup (26-minute video)
Microsoft 365:
In this article, you learn how Azure Firewall Premium can help you protect against
ransomware.
What is ransomware?
Ransomware is a type of malicious software designed to block access to your computer
system until a sum of money is paid. The attacker usually exploits an existing
vulnerability in your system to penetrate your network and execute the malicious
software on the target host.
Azure Firewall Premium provides signature-based IDPS, where every packet is inspected
thoroughly, including all its headers and payload, to identify malicious activity and
prevent it from penetrating your network.
The IDPS signatures are applicable to both application-level and network-level traffic
(Layers 4-7), are fully managed, and contain more than 65,000 signatures in over 50
different categories. To keep the signatures up to date with the dynamic, ever-changing
attack landscape:
Azure Firewall has early access to vulnerability information from Microsoft Active
Protections Program (MAPP) and Microsoft Security Response Center (MSRC) .
Azure Firewall releases 30 to 50 new signatures each day.
Today, modern encryption (SSL/TLS) is used globally to secure Internet traffic. Attackers
use encryption to carry their malicious software into the victim’s network. Therefore,
customers must inspect their encrypted traffic just like any other traffic.
Azure Firewall Premium IDPS allows you to detect attacks in all ports and protocols for
non-encrypted traffic. However, when HTTPS traffic needs to be inspected, Azure
Firewall can use its TLS inspection capability to decrypt the traffic and accurately detect
malicious activities.
After the ransomware is installed on the target machine, it may try to encrypt the
machine’s data. The ransomware requires an encryption key and may use the Command
and Control (C&C) to get the encryption key from the C&C server hosted by the
attacker. CryptoLocker, WannaCry, TeslaCrypt, Cerber, and Locky are some of the
ransomware families that use C&C to fetch the required encryption keys.
Azure Firewall Premium has hundreds of signatures that are designed to detect C&C
connectivity and block it to prevent the attacker from encrypting your data. The
following diagram shows Azure Firewall protection against a ransomware attack using
the C&C channel.
Firewall Policy can be used for centralized configuration of firewalls, which helps you
respond to threats rapidly. You can enable threat intelligence and IDPS across multiple
firewalls with just a few clicks. Web categories let administrators allow or deny user
access to categories of websites such as gambling and social media. URL filtering
provides scoped access to external sites and can cut down risk even further. In other
words, Azure Firewall has everything necessary for companies to defend
comprehensively against malware and ransomware.
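As an illustration of that centralized configuration, the sketch below creates a Premium firewall policy with threat intelligence and IDPS set to deny. It assumes the azure-mgmt-network package; the model names (FirewallPolicy, FirewallPolicyIntrusionDetection, FirewallPolicySku) and field values reflect the SDK as the author understands it and should be verified against your installed version, and the subscription, resource group, and region are placeholders.

```python
# A configuration sketch: a Premium firewall policy with threat intelligence
# and IDPS in "Deny" mode, applied centrally and attachable to many firewalls.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    FirewallPolicy,
    FirewallPolicyIntrusionDetection,
    FirewallPolicySku,
)

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "<resource-group>"    # placeholder
POLICY_NAME = "ransomware-baseline"    # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

policy = FirewallPolicy(
    location="eastus",
    sku=FirewallPolicySku(tier="Premium"),
    threat_intel_mode="Deny",  # block traffic to/from known-bad addresses
    intrusion_detection=FirewallPolicyIntrusionDetection(mode="Deny"),  # IDPS alert-and-deny
)

client.firewall_policies.begin_create_or_update(
    RESOURCE_GROUP, POLICY_NAME, policy
).result()
```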
Detection is equally important as prevention. The Azure Firewall solution for Microsoft
Sentinel gives you both detection and prevention in an easy-to-deploy package.
Combining prevention and detection lets you stop sophisticated threats where you can,
while maintaining an "assume breach" mentality to detect and quickly respond to
cyberattacks.
Next steps
See Ransomware protection in Azure to learn more about defenses for ransomware
attacks in Azure and for guidance on how to proactively protect your assets.
This article describes Microsoft resources and recommendations for recovering from a
systemic identity compromise attack against your organization.
The content in this article is based on guidance provided by Microsoft's Detection and
Response Team (DART), which works to respond to compromises and help customers
become cyber-resilient. For more guidance from the DART team, see their Microsoft
security blog series .
) Important
If this has happened to your organization, you are in a race against the attacker to
secure your environment before further damage can be done.
Attackers can then use the certificate to forge SAML tokens to impersonate any
of the organization's existing users and accounts without requiring access to
account credentials, and without leaving any traces.
Step: Establish secure communications
Description: An organization that has experienced a systemic identity compromise must assume that all communication is affected. Before taking any recovery action, you must ensure that the members of your team who are key to your investigation and response effort can communicate securely. Securing communications must be your very first step so that you can proceed without the attacker's knowledge.

Step: Investigate your environment
Description: After you have secured communications on your core investigation team, you can start looking for initial access points and persistence techniques. Identify your indications of compromise, and then look for initial access points and persistence. At the same time, start establishing continuous monitoring operations during your recovery efforts.

Step: Improve security posture
Description: Enable security features and capabilities following best practice recommendations for improved system security moving forward.

Step: Regain / retain control
Description: You must regain administrative control of your environment from the attacker. After you have control again and have refreshed your system's security posture, remediate any identified persistence mechanisms so that the attacker can't regain access.
For example:
1. For initial one-on-one and group communications, you may want to use PSTN
calls, conference bridges that are not connected to the corporate infrastructure,
and end-to-end encrypted messaging solutions.
2. After those initial conversations, you may want to create an entirely new Microsoft
365 tenant, isolated from the organization's production tenant. Create accounts
only for key personnel who need to be part of the response.
If you do create a new Microsoft 365 tenant, make sure to follow all best practices for
the tenant, and especially for administrative accounts and rights. Limit administrative
rights, with no trusts for outside applications or vendors.
) Important
Make sure that you do not communicate about your new tenant on your existing,
and potentially compromised, email accounts.
For more information, see Best practices for securely using Microsoft 365 .
Microsoft Sentinel
Microsoft 365 security solutions and services
Windows 10 Enterprise Security
Microsoft Defender for Cloud Apps
Microsoft Defender for IoT
Implementing new updates will help identify any prior campaigns and prevent future
campaigns against your system. Keep in mind that lists of IOCs may not be exhaustive,
and may expand as investigations continue.
Make sure that you've applied the Microsoft cloud security benchmark, and are
monitoring compliance via Microsoft Defender for Cloud.
Make sure that any extended detection and response tools, such as Microsoft
Defender for IoT, are using the most recent threat intelligence data.
You'll need to balance getting to the bottom of every anomalous behavior and taking
quick action to stop any further activity by the attacker. Any successful remediation
requires an understanding of the initial method of entry and persistence methods that
the attacker used, as complete as is possible at the time. Any persistence methods
missed during the investigation can result in continued access by the attacker, and a
potential recompromise.
At this point, you may want to perform a risk analysis to prioritize your actions. For more
information, see:
Datacenter threat, vulnerability, and risk assessment
Track and respond to emerging threats with threat analytics
Threat and vulnerability management
Microsoft's security services provide extensive resources for detailed investigations. The
following sections describe top recommended actions.
7 Note
If you find that one or more of the listed logging sources is not currently part of
your security program, we recommend configuring them as soon as possible to
enable detections and future log reviews.
Especially consider any of these changes that occur along with other typical signs of
compromise or activity.
Review administrative rights in your environments
Review administrative rights in both your cloud and on-premises environments. For
example:
Environment: All cloud environments
- Review any privileged access rights in the cloud and remove any unnecessary permissions
- Implement Privileged Identity Management (PIM)
- Set up Conditional Access policies to limit administrative access during hardening

Environment: All Enterprise applications
- Review for delegated permissions and consent grants that allow any of the following actions:

Environment: Microsoft 365 environments
Review access and configuration settings for your Microsoft 365 environment, including:
- SharePoint Online sharing
- Microsoft Teams
- Power Apps
- Microsoft OneDrive for Business

Environment: User accounts in your environments
- Review and remove guest user accounts that are no longer needed.
- Review email configurations for delegates, mailbox folder permissions, ActiveSync mobile device registrations, Inbox rules, and Outlook on the Web options.
- Review ApplicationImpersonation rights and reduce any use of legacy authentication as much as possible.
- Validate that MFA is enforced and that both MFA and self-service password reset (SSPR) contact information for all users is correct.
For example, Microsoft security services may have specific resources and guidance that's
relevant to the attack, as described in the sections below.
) Important
Use Microsoft Sentinel's content hub to install extended security solutions and data
connectors that stream content from other services in your environment. For more
information, see:
Deploy Microsoft Defender for IoT to monitor and secure those devices, especially any
that aren't protected by traditional security monitoring systems. Install Defender for IoT
network sensors at specific points of interest in your environment to detect threats in
ongoing network activity using agentless monitoring and dynamic threat intelligence.
For more information, see Get started with OT network security monitoring.
Check for other examples of detections, hunting queries, and threat analytics reports in
the Microsoft security center, such as in Microsoft 365 Defender, Microsoft 365
Defender for Identity, and Microsoft Defender for Cloud Apps. To ensure coverage,
make sure that you install the Microsoft Defender for Identity agent on ADFS servers in
addition to all domain controllers.
For example, search or filter the results for when the MFA results field has a value of
MFA requirement satisfied by claim in the token. If your organization uses ADFS and
the claims logged are not included in the ADFS configuration, these claims may indicate
attacker activity.
Search or filter your results further to exclude extra noise. For example, you may want to
include results only from federated domains. If you find suspicious sign-ins, drill down
even further based on IP addresses, user accounts, and so on.
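The filter described above can be run directly against exported sign-in logs. A minimal sketch, assuming Microsoft Entra sign-in logs are streamed to a Log Analytics workspace (SigninLogs table) and using the azure-monitor-query package; the AuthenticationDetails field layout is an assumption and may need adjusting to your data, and the workspace ID is a placeholder.

```python
# An investigation sketch: find sign-ins where MFA was "satisfied by claim in
# the token", which can indicate forged SAML tokens when it doesn't match your
# ADFS configuration.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

KQL = """
SigninLogs
| extend details = todynamic(AuthenticationDetails)
| mv-expand details
| where tostring(details.authenticationStepResultDetail)
        == "MFA requirement satisfied by claim in the token"
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress
"""

client = LogsQueryClient(DefaultAzureCredential())
for table in client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(days=7)).tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```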
The following table describes more methods for using Microsoft Entra logs in your
investigation:
Method: Analyze risky sign-in events
Description: Microsoft Entra ID and its Identity Protection platform may generate risk events associated with the use of attacker-generated SAML tokens. We recommend that you closely analyze all risk events associated with accounts that have administrative privileges, including any that may have been automatically dismissed or remediated. For example, a risk event for an anonymous IP address might be automatically remediated because the system considered it low risk. Make sure to use ADFS Connect Health so that all authentication events are visible in Microsoft Entra ID.

Method: Detect domain authentication properties
Description: Any attempt by the attacker to manipulate domain authentication policies will be recorded in the Microsoft Entra audit logs, and reflected in the Unified Audit log.
Method: Detect credentials for OAuth applications
Description: Attackers who have gained control of a privileged account may search for an application with the ability to access any user's email in the organization, and then add attacker-controlled credentials to that application. For example, you may want to search for any of the following activities, which would be consistent with attacker behavior (see the hunting sketch after this table):
- Adding or updating service principal credentials
- Updating application certificates and secrets
- Adding an app role assignment grant to a user
- Adding OAuth2PermissionGrant

Method: Detect e-mail access by applications
Description: Search for access to email by applications in your environment. For example, use the Microsoft Purview Audit (Premium) features to investigate compromised accounts.

Method: Detect non-interactive sign-ins to service principals
Description: The Microsoft Entra sign-in reports provide details about any non-interactive sign-ins that used service principal credentials. For example, you can use the sign-in reports to find valuable data for your investigation, such as an IP address used by the attacker to access email applications.
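The credential-related activities listed above can be hunted for in the Entra audit logs. A minimal sketch, assuming audit logs are exported to a Log Analytics workspace (AuditLogs table) and using the azure-monitor-query package; the operation-name filters are examples and may differ slightly in your tenant, and the workspace ID is a placeholder.

```python
# A hunting sketch: surface audit events where credentials or broad grants
# were added to applications or service principals.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

KQL = """
AuditLogs
| where OperationName has_any (
    "Certificates and secrets management",
    "Add service principal credentials",
    "Add app role assignment grant to user",
    "Add OAuth2PermissionGrant")
| project TimeGenerated, OperationName, InitiatedBy, TargetResources
"""

client = LogsQueryClient(DefaultAzureCredential())
for table in client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(days=30)).tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```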
The following sections list recommendations to improve both general and identity
security posture.
Ensure that your organization has extended detection and response (XDR) and
security information and event management (SIEM) solutions in place, such as
Microsoft Defender for Endpoint, Microsoft Sentinel, and Microsoft Defender
for IoT.
Restrict local administrative access to the system, including the account that is
used to run the ADFS service.
The least privilege necessary for the account running ADFS is the Log on as a
Service User Right Assignment.
Block all inbound SMB access to the systems from anywhere in the environment.
For more information, see Beyond the Edge: How to Secure SMB Traffic in
Windows . We also recommend that you stream the Windows Firewall logs to a
SIEM for historical and proactive monitoring.
If you are using a Service Account and your environment supports it, migrate from
a Service Account to a group-Managed Service Account (gMSA). If you cannot
move to a gMSA, rotate the password on the Service Account to a complex
password.
This section provides possible methods and steps to consider when building your
administrative control recovery plan.
) Important
The exact steps required in your organization will depend on what persistence
you've discovered in your investigation, and how confident you are that your
investigation was complete and has discovered all possible entry and persistence
methods.
Ensure that any actions taken are performed from a trusted device, built from a
clean source. For example, use a fresh, privileged access workstation.
The following sections include recommendations for remediating and retaining
administrative control.
Removing trust and switching to cloud-mastered identity requires careful planning and
an in-depth understanding of the business operation effects of isolating identity. For
more information, see Protecting Microsoft 365 from on-premises attacks.
Rotating the token-signing certificate a single time still allows the previous token-
signing certificate to work. Continuing to allow previous certificates to work is a built-in
functionality for normal certificate rotations, which permits a grace period for
organizations to update any relying party trusts before the certificate expires.
If there was an attack, you don't want the attacker to retain access at all. Make sure that
the attacker doesn't retain the ability to forge tokens for your domain.
Reset passwords: Reset passwords on any break-glass accounts and reduce the number of break-glass accounts to the absolute minimum required.

Restrict privileged access accounts: Ensure that service and user accounts with privileged access are cloud-only accounts, and do not use on-premises accounts that are synced or federated to Microsoft Entra ID.

Enforce MFA: Enforce Multi-Factor Authentication (MFA) across all elevated users in the tenant. We recommend enforcing MFA across all users in the tenant.

Limit administrative access: Implement Privileged Identity Management (PIM) and Conditional Access to limit administrative access.

Review / reduce delegated permissions and consent grants: Review and reduce all Enterprise Applications delegated permissions or consent grants that allow any of the following functionalities:
Rebuild affected systems: Rebuild systems that were identified as compromised by the attacker during your investigation.

Remove unnecessary admin users: Remove unnecessary members from the Domain Admins, Backup Operators, and Enterprise Admins groups. For more information, see Securing Privileged Access.

Reset the krbtgt account: Reset the krbtgt account twice using the New-KrbtgtKeys script. Note: If you are using Read-Only Domain Controllers, you will need to run the script separately for Read-Write Domain Controllers and for Read-Only Domain Controllers.

Schedule a system restart: After you validate that no persistence mechanisms created by the attacker exist or remain on your system, schedule a system restart to assist with removing memory-resident malware.

Reset the DSRM password: Reset each domain controller’s DSRM (Directory Services Restore Mode) password to something unique and complex.
For example, an attacker who becomes aware of the detection might change techniques
or create more persistence.
Make sure to remediate any persistence techniques that you've identified in earlier
stages of the investigation.
Reset passwords after eviction for any user accounts that may have been
compromised. Make sure to also implement a mid-term plan to reset credentials
for all accounts in your directory.
Next steps
Get help from inside Microsoft products, including the Microsoft 365 Defender
portal, Microsoft Purview compliance portal, and Office 365 Security & Compliance
Center by selecting the Help (?) button in the top navigation bar.
) Important
If you believe you have been compromised and require assistance through an
incident response, open a Sev A Microsoft support case.
Azure threat protection
Article • 06/27/2024
Azure offers built-in threat protection functionality through services such as Microsoft
Entra ID, Azure Monitor logs, and Microsoft Defender for Cloud. This collection of
security services and capabilities provides a simple and fast way to understand what is
happening within your Azure deployments.
Azure provides a wide array of options to configure and customize security to meet the
requirements of your app deployments. This article discusses how to meet these
requirements.
Identity Protection uses adaptive machine learning algorithms and heuristics to detect
anomalies and risk detections that might indicate that an identity has been
compromised. Using this data, Identity Protection generates reports and alerts so that
you can investigate these risk detections and take appropriate remediation or mitigation
action.
Examples of some of the ways that Azure Identity Protection can help secure your
accounts and identities include:
Detect six risk detection types using machine learning and heuristic rules.
Calculate user risk levels.
Provide custom recommendations to improve overall security posture by
highlighting vulnerabilities.
Get alerts and reports about Microsoft Entra administrators and just-in-time (JIT)
administrative access to Microsoft online services, such as Microsoft 365 and
Intune.
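Risk detections can also be retrieved programmatically. The sketch below uses the Microsoft Graph riskyUsers endpoint and assumes the calling identity has been granted the IdentityRiskyUser.Read.All permission; the filter and property names follow Graph v1.0 but should be verified for your tenant.

```python
# A minimal sketch: list users that Identity Protection currently flags as
# "atRisk", using a token from DefaultAzureCredential against Microsoft Graph.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token

resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers",
    headers={"Authorization": f"Bearer {token}"},
    params={"$filter": "riskState eq 'atRisk'"},
    timeout=30,
)
resp.raise_for_status()

for user in resp.json().get("value", []):
    print(user["userPrincipalName"], user["riskLevel"], user["riskLastUpdatedDateTime"])
```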
Azure Monitor logs help you quickly and easily understand the overall security posture
of any environment, all within the context of IT Operations, including software update
assessment, antimalware assessment, and configuration baselines. Security log data is
readily accessible to streamline the security and compliance audit processes.
You collect data into the repository from connected sources by configuring data sources
and adding solutions to your subscription.
Data sources and solutions each create separate record types with their own set of
properties, but you can still analyze them together in queries to the repository. You can
use the same tools and methods to work with a variety of data that's collected by
various sources.
Most of your interaction with Azure Monitor logs is through the Azure portal, which runs
in any browser and provides you with access to configuration settings and multiple tools
to analyze and act on collected data.
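Beyond the portal, you can query the repository programmatically. A minimal sketch, assuming update-assessment data is collected into the workspace (the Update table) and using the azure-monitor-query package; the workspace ID is a placeholder.

```python
# A posture sketch: count missing security updates per machine from the
# workspace's update-assessment data.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

KQL = """
Update
| where Classification == "Security Updates" and UpdateState == "Needed"
| summarize MissingUpdates = dcount(Title) by Computer
| order by MissingUpdates desc
"""

client = LogsQueryClient(DefaultAzureCredential())
for table in client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(days=1)).tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```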
Solutions add functionality to Azure Monitor logs. They primarily run in the cloud and
provide analysis of data that's collected in the log analytics repository. Solutions might
also define new record types to be collected that can be analyzed with log searches or
by using an additional user interface that the solution provides in the log analytics
dashboard.
You can create and manage Desired State Configuration (DSC) resources that are hosted in Azure and apply them to
cloud and on-premises systems. By doing so, you can define and automatically enforce
their configuration or get reports on drift to help ensure that security configurations
remain within policy.
Enabling Defender for Cloud's enhanced security features brings advanced, intelligent
protection of your Azure, hybrid, and multicloud resources and workloads. Learn more in
Microsoft Defender for Cloud's enhanced security features.
The workload protection dashboard in Defender for Cloud provides visibility and control
of the integrated cloud workload protection features provided by a range of Microsoft
Defender plans:
Tip
Learn more about the numbered sections in The workload protections dashboard.
Microsoft security researchers are constantly on the lookout for threats. They have
access to an expansive set of telemetry gained from Microsoft’s global presence in the
cloud and on-premises. This wide-reaching and diverse collection of datasets enables
Microsoft to discover new attack patterns and trends across its on-premises consumer
and enterprise products, as well as its online services.
Thus, Defender for Cloud can rapidly update its detection algorithms as attackers
release new and increasingly sophisticated exploits. This approach helps you keep pace
with a fast-moving threat environment.
Microsoft Defender for Cloud automatically collects security information from your
resources, the network, and connected partner solutions. It analyzes this information,
correlating information from multiple sources, to identify threats.
Security alerts are prioritized in Defender for Cloud along with recommendations on
how to remediate the threats.
Defender for Cloud employs advanced security analytics, which go far beyond
signature-based approaches. Breakthroughs in big data and machine learning
technologies are used to evaluate events across the entire cloud. Advanced analytics can
detect threats that would be impossible to identify through manual approaches and
predict the evolution of attacks. These security analytics types are covered in the next
sections.
Threat intelligence
Microsoft has access to an immense amount of global threat intelligence.
Telemetry flows in from multiple sources, such as Azure, Microsoft 365, Microsoft CRM
online, Microsoft Dynamics AX, outlook.com, MSN.com, the Microsoft Digital Crimes
Unit (DCU), and Microsoft Security Response Center (MSRC).
Researchers also receive threat intelligence information that is shared among major
cloud service providers, and they subscribe to threat intelligence feeds from third
parties. Microsoft Defender for Cloud can use this information to alert you to threats
from known bad actors. Some examples include:
Harnessing the power of machine learning: Microsoft Defender for Cloud has
access to a vast amount of data about cloud network activity, which can be used to
detect threats targeting your Azure deployments.
Outbound DDoS and botnet detection: A common objective of attacks that target
cloud resources is to use the compute power of these resources to execute other
attacks.
New behavioral analytics servers and VMs: After a server or virtual machine is
compromised, attackers employ a wide variety of techniques to execute malicious
code on that system while avoiding detection, ensuring persistence, and obviating
security controls.
Azure SQL Database Threat Detection: Threat detection for Azure SQL Database,
which identifies anomalous database activities that indicate unusual and
potentially harmful attempts to access or exploit databases.
Behavioral analytics
Behavioral analytics is a technique that analyzes and compares data to a collection of
known patterns. However, these patterns aren't simple signatures. They're determined
through complex machine learning algorithms that are applied to massive datasets.
The patterns are also determined through careful analysis of malicious behaviors by
expert analysts. Microsoft Defender for Cloud can use behavioral analytics to identify
compromised resources based on analysis of virtual machine logs, virtual network
device logs, fabric logs, crash dumps, and other sources.
In addition, patterns are correlated with other signals to check for supporting evidence
of a widespread campaign. This correlation helps to identify events that are consistent
with established indicators of compromise.
Outgoing attacks: Attackers often target cloud resources with the goal of using
those resources to mount additional attacks. Compromised virtual machines, for
example, might be used to launch brute force attacks against other virtual
machines, send spam, or scan open ports and other devices on the internet. By
applying machine learning to network traffic, Defender for Cloud can detect when
outbound network communications exceed the norm. When spam is detected,
Defender for Cloud also correlates unusual email traffic with intelligence from
Microsoft 365 to determine whether the mail is likely nefarious or the result of a
legitimate email campaign.
Anomaly detection
Microsoft Defender for Cloud also uses anomaly detection to identify threats. In contrast
to behavioral analytics (which depends on known patterns derived from large data sets),
anomaly detection is more “personalized” and focuses on baselines that are specific to
your deployments. Machine learning is applied to determine normal activity for your
deployments, and then rules are generated to define outlier conditions that could
represent a security event. Here’s an example:
Inbound RDP/SSH brute force attacks: Your deployments might have busy virtual
machines with many logins each day and other virtual machines that have few, if
any, logins. Microsoft Defender for Cloud can determine baseline login activity for
these virtual machines and use machine learning to define what constitutes normal
login activity. If there's any discrepancy with the baseline defined for login-related
characteristics, an alert might be generated. Again, machine learning determines
what is significant (a simple query-based sketch follows this list).
Signal sharing: Insights from security teams across the broad Microsoft portfolio
of cloud and on-premises services, servers, and client endpoint devices are shared
and analyzed.
Detection tuning: Algorithms are run against real customer data sets, and security
researchers work with customers to validate the results. True and false positives are
used to refine machine learning algorithms.
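For comparison with the learned baselines described above, a simple threshold-based query can surface the same brute-force pattern. The sketch assumes Windows security events are collected into a Log Analytics workspace (SecurityEvent table, event ID 4625 = failed logon); Linux SSH failures would come from Syslog instead, and the threshold is arbitrary.

```python
# A baseline sketch: count failed logons per machine/account/source IP and
# flag anything above a simple threshold.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

KQL = """
SecurityEvent
| where EventID == 4625
| summarize FailedLogons = count() by Computer, Account, IpAddress
| where FailedLogons > 20          // simple threshold; tune to your own baseline
| order by FailedLogons desc
"""

client = LogsQueryClient(DefaultAzureCredential())
for table in client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(hours=24)).tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```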
These combined efforts culminate in new and improved detections, which you can
benefit from instantly. There’s no action for you to take.
Microsoft Defender for Storage
Microsoft Defender for Storage is an Azure-native layer of security intelligence that
detects unusual and potentially harmful attempts to access or exploit your storage
accounts. It uses advanced threat detection capabilities and Microsoft Threat
Intelligence data to provide contextual security alerts. Those alerts also include steps
to mitigate the detected threats and prevent future attacks.
Here are the features of Azure that deploy and enable Microsoft antimalware for your
applications:
Currently, Azure SQL Database Threat Detection detects potential vulnerabilities and
SQL injection attacks, and anomalous database access patterns.
Upon receiving a threat-detection email notification, users are able to navigate and view
the relevant audit records through a deep link in the mail. The link opens an audit
viewer or a preconfigured auditing Excel template that shows the relevant audit records
around the time of the suspicious event, according to the following:
Audit storage for the database/server with the anomalous database activities.
Relevant audit storage table that was used at the time of the event to write the
audit log.
SQL Database threat detectors use one of the following detection methodologies:
Deterministic detection: Detects suspicious, rules-based patterns in SQL client
queries that match known attacks. This methodology has a high detection rate and a
low false positive rate, but limited coverage because it falls within the category of
“atomic detections.”
Protections include:
SQL injection protection.
Protection against HTTP protocol anomalies, such as missing host user-agent and
accept headers.
Protects your web application from web vulnerabilities and attacks without
modification of the back-end code.
Monitors web applications against attacks by using real-time reports that are
generated by application gateway WAF logs.
With tools that help uncover shadow IT, assess risk, enforce policies, investigate
activities, and stop threats, your organization can more safely move to the cloud while
maintaining control of critical data.
Discover: Uncover shadow IT with Defender for Cloud Apps. Gain visibility by discovering apps, activities, users, data, and files in your cloud environment. Discover third-party apps that are connected to your cloud.

Investigate: Investigate your cloud apps by using cloud forensics tools to deep-dive into risky apps, specific users, and files in your network. Find patterns in the data collected from your cloud. Generate reports to monitor your cloud.

Control: Mitigate risk by setting policies and alerts to achieve maximum control over network cloud traffic. Use Defender for Cloud Apps to migrate your users to safe, sanctioned cloud app alternatives.

Protect: Use Defender for Cloud Apps to sanction or prohibit applications, enforce data loss prevention, control permissions and sharing, and generate custom reports and alerts.
Defender for Cloud Apps integrates visibility with your cloud by:
Using Cloud Discovery to map and identify your cloud environment and the cloud
apps your organization is using.
Using easy-to-deploy app connectors that take advantage of provider APIs, for
visibility and governance of apps that you connect to.
Helping you have continuous control by setting, and then continually fine-tuning,
policies.
On collecting data from these sources, Defender for Cloud Apps runs sophisticated
analysis on it. It immediately alerts you to anomalous activities, and gives you deep
visibility into your cloud environment. You can configure a policy in Defender for Cloud
Apps and use it to protect everything in your cloud environment.
Third-party threat protection capabilities
through the Azure Marketplace
Scans outbound traffic to detect sensitive data and can mask or block the information to prevent it from leaking.
For examples of web application firewalls that are available in the Azure Marketplace, see Barracuda WAF, Brocade virtual web application firewall (vWAF), Imperva SecureSphere, and the ThreatSTOP IP firewall.
Next step
Responding to today's threats: Helps identify active threats that target your Azure
resources and provides the insights you need to respond quickly.
This article provides an introduction to security services in Azure that help you protect
your data, resources, and applications in the cloud and meet the security needs of your
business.
Azure platform
Microsoft Azure is a cloud platform composed of infrastructure and application services, with integrated data services, advanced analytics, and developer tools and services, hosted within Microsoft's public cloud datacenters. Customers use Azure in many different capacities and scenarios, from basic compute, networking, and storage, to mobile and web app services, to full cloud scenarios such as the Internet of Things. Azure can be used with open-source technologies and deployed as a hybrid cloud or hosted within a customer's datacenter. Azure provides cloud technology as building blocks to help companies save costs, innovate quickly, and manage systems proactively. When you build on, or migrate IT assets to, a cloud provider, you are relying on that organization's abilities to protect your applications and data with the services and the controls they provide to manage the security of your cloud-based assets.
Microsoft Azure is the only cloud computing provider that offers a secure, consistent application platform and infrastructure as a service for teams to work within their different cloud skill sets and levels of project complexity. It provides integrated data services and analytics that uncover intelligence from data wherever it exists, across both Microsoft and non-Microsoft platforms, along with open frameworks and tools, giving you choices for integrating the cloud with on-premises environments as well as for deploying Azure cloud services within on-premises datacenters. As part of the Microsoft Trusted Cloud, customers rely on Azure for industry-leading security, reliability, compliance, privacy, and the vast network of people, partners, and processes available to support organizations in the cloud.
Microsoft Entra ID
Microsoft identity and access management solutions help IT protect access to
applications and resources across the corporate datacenter and into the cloud, enabling
additional levels of validation such as multifactor authentication and Conditional Access
policies. Monitoring suspicious activity through advanced security reporting, auditing
and alerting helps mitigate potential security issues. Microsoft Entra ID P1 or P2
provides single sign-on to thousands of cloud apps and access to web apps you run on-
premises.
Create and manage a single identity for each user across your hybrid enterprise,
keeping users, groups, and devices in sync.
Single sign-on
Multifactor authentication
Device registration
Identity protection
Single sign-on
Single sign-on (SSO) means being able to access all the applications and resources that
you need to do business, by signing in only once using a single user account. Once
signed in, you can access all the applications you need without being required to
authenticate (for example, type a password) a second time.
Many organizations rely upon software as a service (SaaS) applications such as Microsoft
365, Box, and Salesforce for end-user productivity. Historically, IT staff needed to
individually create and update user accounts in each SaaS application, and users had to
remember a password for each SaaS application.
Microsoft Entra ID extends on-premises Active Directory into the cloud, enabling users
to use their primary organizational account to not only sign in to their domain-joined
devices and company resources, but also all the web and SaaS applications needed for
their job.
Not only do users not have to manage multiple sets of usernames and passwords,
application access can be automatically provisioned or de-provisioned based on
organizational groups and their status as an employee. Microsoft Entra ID introduces
security and access governance controls that enable you to centrally manage users'
access across SaaS applications.
Multifactor authentication
Microsoft Entra multifactor authentication (MFA) is a method of authentication that requires the use of more than one verification method and adds a critical second layer of security to user sign-ins and transactions. MFA helps safeguard access to data and applications while meeting user demand for a simple sign-in process. It delivers strong authentication via a range of verification options: phone call, text message, mobile app notification or verification code, and third-party OAuth tokens.
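As a rough illustration of how an application defers MFA to Microsoft Entra ID, the following Python sketch uses the MSAL library (an assumption; this article doesn't prescribe a client library) to acquire a token interactively. The tenant, client ID, and scope are placeholders; any MFA or Conditional Access requirement is enforced by Entra ID during the browser sign-in, not by the application code.

```python
import msal

# Hypothetical tenant and app registration values; replace with your own.
app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)

# Interactive sign-in: Microsoft Entra ID applies MFA and Conditional Access
# during this browser-based flow; the app never handles the second factor itself.
result = app.acquire_token_interactive(scopes=["User.Read"])

if "access_token" in result:
    print("Signed in; access token acquired.")
else:
    print("Sign-in failed:", result.get("error_description"))
```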
Anomaly reports – contain sign-in events that we found to be anomalous. Our goal is to make you aware of such activity and to enable you to decide whether an event is suspicious.
Integrated application reports – provide insights into how cloud applications are being used in your organization. Microsoft Entra ID offers integration with thousands of cloud applications.
Error reports – indicate errors that may occur when provisioning accounts to external applications.
User-specific reports – display device and sign-in activity data for a specific user.
Activity logs – contain a record of all audited events within the last 24 hours, 7 days, or 30 days, as well as group activity changes and password reset and registration activity.
Azure Active Directory B2C is a highly available, global, identity management service for
consumer-facing applications that scales to hundreds of millions of identities. It can be
integrated across mobile and web platforms. Your consumers can log on to all your
applications through customizable experiences by using their existing social accounts or
by creating new credentials.
In the past, application developers who wanted to sign up and sign in consumers to their applications would have written their own code, and they would have used on-premises databases or systems to store usernames and passwords. Azure Active Directory B2C offers your organization a better way to integrate consumer identity management into applications with the help of a secure, standards-based platform and a large set of extensible policies.
When you use Azure Active Directory B2C, your consumers can sign up for your
applications by using their existing social accounts (Facebook, Google, Amazon,
LinkedIn) or by creating new credentials (email address and password, or username and
password).
Device registration
Microsoft Entra device registration is the foundation for device-based Conditional
Access scenarios. When a device is registered, Microsoft Entra device registration
provides the device with an identity that is used to authenticate the device when the
user signs in. The authenticated device, and the attributes of the device, can then be
used to enforce Conditional Access policies for applications that are hosted in the cloud
and on-premises.
When combined with a mobile device management (MDM) solution such as Intune,
the device attributes in Microsoft Entra ID are updated with additional information
about the device. This allows you to create Conditional Access rules that enforce access
from devices to meet your standards for security and compliance.
Microsoft Entra Privileged Identity Management lets you manage, control, and monitor
your privileged identities and access to resources in Microsoft Entra ID as well as other
Microsoft online services like Microsoft 365 or Microsoft Intune.
Sometimes users need to carry out privileged operations in Azure or Microsoft 365
resources, or other SaaS apps. This often means organizations have to give them
permanent privileged access in Microsoft Entra ID. This is a growing security risk for
cloud-hosted resources because organizations can't sufficiently monitor what those
users are doing with their admin privileges. Additionally, if a user account with
privileged access is compromised, that one breach could impact their overall cloud
security. Microsoft Entra Privileged Identity Management helps to resolve this risk.
Identity protection
Microsoft Entra ID Protection is a security service that provides a consolidated view into
risk detections and potential vulnerabilities affecting your organization’s identities.
Identity Protection uses existing Microsoft Entra ID’s anomaly detection capabilities
(available through Microsoft Entra ID’s Anomalous Activity Reports), and introduces new
risk detection types that can detect anomalies in real time.
Subscriptions also have an association with a directory. The directory defines a set of
users. These can be users from the work or school that created the directory, or they can
be external users (that is, Microsoft Accounts). Subscriptions are accessible by a subset
of those directory users who have been assigned as either Service Administrator (SA) or
Co-Administrator (CA); the only exception is that, for legacy reasons, Microsoft Accounts
(formerly Windows Live ID) can be assigned as SA or CA without being present in the
directory.
At-rest: This includes all information storage objects, containers, and types that exist statically on physical media, be it magnetic or optical disk.
In-transit: When data is being transferred between components, locations, or programs, such as over the network, across a service bus (from on-premises to cloud and vice-versa, including hybrid connections such as ExpressRoute), or during an input/output process, it is considered to be in transit.
Encryption at rest
Encryption at rest is discussed in detail in Azure Data Encryption at Rest.
Encryption in-transit
Protecting data in transit should be an essential part of your data protection strategy. Because data moves back and forth between many locations, the general recommendation is to always use SSL/TLS protocols to exchange data across different locations. In some circumstances, you may want to isolate the entire communication channel between your on-premises and cloud infrastructure by using a virtual private network (VPN).
For data moving between your on-premises infrastructure and Azure, you should
consider appropriate safeguards such as HTTPS or VPN.
For organizations that need to secure access from multiple workstations located on-
premises to Azure, use Azure site-to-site VPN.
For organizations that need to secure access from one workstation located on-premises
to Azure, use Point-to-Site VPN.
Larger data sets can be moved over a dedicated high-speed WAN link such as
ExpressRoute. If you choose to use ExpressRoute, you can also encrypt the data at the
application-level using SSL/TLS or other protocols for added protection.
If you are interacting with Azure Storage through the Azure portal, all transactions occur
via HTTPS. Storage REST API over HTTPS can also be used to interact with Azure Storage
and Azure SQL Database.
You can learn more about Azure VPN options by reading the article Planning and design for VPN Gateway.
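As a minimal sketch of the "always use SSL/TLS" recommendation, the following Python example pins an outbound HTTPS call to TLS 1.2 or later using only the standard library. The storage account URL is a placeholder; an unauthenticated request is expected to be rejected, which is still enough to demonstrate the handshake requirement.

```python
import ssl
import urllib.error
import urllib.request

# Require TLS 1.2 or later for an outbound HTTPS call.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Hypothetical storage endpoint; any HTTPS endpoint works the same way.
url = "https://examplestorageaccount.blob.core.windows.net/"

try:
    with urllib.request.urlopen(url, context=context) as response:
        print("HTTP status:", response.status)
except urllib.error.URLError as err:
    # An authentication or name-resolution error here still means the client
    # refused to negotiate anything weaker than TLS 1.2.
    print("Request ended with:", err)
```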
Web application firewall is based on rules from the OWASP core rule sets. Web applications are increasingly targets of malicious attacks that exploit commonly known vulnerabilities, such as SQL injection and cross-site scripting attacks. Preventing such attacks in application code can be challenging and may require rigorous maintenance, patching, and monitoring at multiple layers of the application topology. A centralized web application firewall makes security management much simpler and gives application administrators better assurance against threats and intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location, versus securing each individual web application. Existing application gateways can easily be converted to a web application firewall enabled application gateway.
Some of the common web vulnerabilities that web application firewall protects against include:
Protection against HTTP protocol anomalies, such as missing host, user-agent, and accept headers
Note
For a more detailed list of rules and their protections, see the Core rule sets.
Azure provides several easy-to-use features to help secure both inbound and outbound traffic for your app. Azure also helps customers secure their application code by providing externally sourced functionality to scan your web application for vulnerabilities. See Azure App Service to learn more.
Azure App Service uses the same antimalware solution used by Azure Cloud Services and Virtual Machines. To learn more, refer to the Antimalware documentation.
The Azure network infrastructure enables you to securely connect Azure resources to
each other with virtual networks (VNets). A VNet is a representation of your own
network in the cloud. A VNet is a logical isolation of the Azure cloud network dedicated
to your subscription. You can connect VNets to your on-premises networks.
If you need basic network level access control (based on IP address and the TCP or UDP
protocols), then you can use Network Security Groups. A Network Security Group (NSG)
is a basic stateful packet filtering firewall that enables you to control access.
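As an example of defining such a rule programmatically, the following sketch uses the Azure SDK for Python (the azure-identity and azure-mgmt-network packages, which are assumptions rather than something this article specifies) to create an NSG with a single inbound rule. The subscription ID, resource group, and resource names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical subscription, resource group, and NSG names.
network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = network.network_security_groups.begin_create_or_update(
    "example-rg",
    "example-nsg",
    {
        "location": "eastus",
        "security_rules": [
            {
                # Allow inbound HTTPS from the Internet; everything else keeps
                # the platform default rules.
                "name": "allow-https-inbound",
                "protocol": "Tcp",
                "source_address_prefix": "Internet",
                "source_port_range": "*",
                "destination_address_prefix": "*",
                "destination_port_range": "443",
                "access": "Allow",
                "direction": "Inbound",
                "priority": 100,
            }
        ],
    },
)
print("NSG provisioned:", poller.result().name)
```

The resulting NSG can then be associated with a subnet or a network interface to take effect.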
Azure Firewall is a cloud-native and intelligent network firewall security service that
provides threat protection for your cloud workloads running in Azure. It's a fully stateful
firewall as a service with built-in high availability and unrestricted cloud scalability. It
provides both east-west and north-south traffic inspection.
Azure Firewall is offered in two SKUs: Standard and Premium. Azure Firewall Standard provides L3-L7 filtering and threat intelligence feeds directly from Microsoft Cyber Security. Azure Firewall Premium provides advanced capabilities that include signature-based IDPS to allow rapid detection of attacks by looking for specific patterns.
Azure networking supports the ability to customize the routing behavior for network
traffic on your Azure Virtual Networks. You can do this by configuring User-Defined
Routes in Azure.
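As a hedged sketch of a user-defined route, the following example (again using azure-mgmt-network; all names and the appliance IP address are hypothetical) creates a route table that sends all outbound traffic through a virtual appliance. The table would then be associated with a subnet to take effect.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Hypothetical route table: send all outbound traffic through a virtual
# appliance at a placeholder private IP address.
poller = network.route_tables.begin_create_or_update(
    "example-rg",
    "example-route-table",
    {
        "location": "eastus",
        "routes": [
            {
                "name": "default-via-appliance",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.2.4",
            }
        ],
    },
)
print("Route table provisioned:", poller.result().name)
```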
Forced tunneling is a mechanism you can use to ensure that your services are not
allowed to initiate a connection to devices on the Internet.
Azure supports dedicated WAN link connectivity to your on-premises network and an
Azure Virtual Network with ExpressRoute. The link between Azure and your site uses a
dedicated connection that does not go over the public Internet. If your Azure
application is running in multiple datacenters, you can use Azure Traffic Manager to
route requests from users intelligently across instances of the application. You can also
route traffic to services not running in Azure if they are accessible from the Internet.
Azure also supports private and secure connectivity to your PaaS resources (for example, Azure Storage and SQL Database) from your Azure Virtual Network with Azure Private Link. The PaaS resource is mapped to a private endpoint in your virtual network. The link between the private endpoint in your virtual network and your PaaS resource uses the Microsoft backbone network and does not go over the public Internet. Exposing your service to the public internet is no longer necessary. You can also use Azure Private Link to access Azure-hosted customer-owned and partner services in your virtual network. In addition, Azure Private Link enables you to create your own private link service in your virtual network and deliver it to your customers privately in their virtual networks. Setup and consumption using Azure Private Link is consistent across Azure PaaS, customer-owned, and shared partner services.
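A rough sketch of creating such a private endpoint with the Python management SDK follows; the subnet and storage account resource IDs are placeholders, and the property names assume the current azure-mgmt-network object model.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Hypothetical resource IDs for an existing subnet and storage account.
subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/example-rg"
    "/providers/Microsoft.Network/virtualNetworks/example-vnet/subnets/default"
)
storage_id = (
    "/subscriptions/<subscription-id>/resourceGroups/example-rg"
    "/providers/Microsoft.Storage/storageAccounts/examplestorage"
)

poller = network.private_endpoints.begin_create_or_update(
    "example-rg",
    "examplestorage-blob-pe",
    {
        "location": "eastus",
        "subnet": {"id": subnet_id},
        "private_link_service_connections": [
            {
                "name": "blob-connection",
                "private_link_service_id": storage_id,
                "group_ids": ["blob"],  # the blob sub-resource of the account
            }
        ],
    },
)
print("Private endpoint provisioned:", poller.result().name)
```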
With Azure, you can use antimalware software from security vendors such as Microsoft,
Symantec, Trend Micro, and Kaspersky to protect your virtual machines from malicious
files, adware, and other threats.
Microsoft Antimalware for Azure Cloud Services and Virtual Machines is a real-time
protection capability that helps identify and remove viruses, spyware, and other
malicious software. Microsoft Antimalware provides configurable alerts when known
malicious or unwanted software attempts to install itself or run on your Azure systems.
Azure Backup is a scalable solution that protects your application data with zero capital
investment and minimal operating costs. Application errors can corrupt your data, and
human errors can introduce bugs into your applications. With Azure Backup, your virtual
machines running Windows and Linux are protected.
Azure Site Recovery helps orchestrate replication, failover, and recovery of workloads
and apps so that they are available from a secondary location if your primary location
goes down.
The checklist provides a framework that aligns clause-by-clause with a new international
standard for cloud service agreements, ISO/IEC 19086. This standard offers a unified set
of considerations for organizations to help them make decisions about cloud adoption,
and create a common ground for comparing cloud service offerings.
The checklist promotes a thoroughly vetted move to the cloud, providing structured
guidance and a consistent, repeatable approach for choosing a cloud service provider.
Exposes key discussion topics for decision-makers at the beginning of the cloud
adoption process.
Helps organizations identify any potential issues that could affect a cloud project.
Provides a consistent set of questions, with the same terms, definitions, metrics,
and deliverables for each provider, to simplify the process of comparing offerings
from different cloud service providers.
With Azure Monitor, you can manage any instance in any cloud, including on-premises, Azure, AWS, Windows Server, Linux, VMware, and OpenStack, at a lower cost than competitive solutions. Built for the cloud-first world, Azure Monitor offers a new approach to managing your enterprise: a fast, cost-effective way to meet new business challenges and accommodate new workloads, applications, and cloud environments.
This method allows you to consolidate data from a variety of sources, so you can
combine data from your Azure services with your existing on-premises environment. It
also clearly separates the collection of the data from the action taken on that data so
that all actions are available to all kinds of data.
Microsoft Sentinel
Microsoft Sentinel is a scalable, cloud-native, security information and event
management (SIEM) and security orchestration, automation, and response (SOAR)
solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence
across the enterprise, providing a single solution for attack detection, threat visibility,
proactive hunting, and threat response.
Defender for Cloud analyzes the security state of your Azure resources to identify
potential security vulnerabilities. A list of recommendations guides you through the
process of configuring needed controls.
Examples include:
Provisioning of web application firewalls to help defend against attacks that target
your web applications
Defender for Cloud automatically collects, analyzes, and integrates log data from your Azure resources, the network, and partner solutions like antimalware programs and firewalls. When threats are detected, a security alert is created.
Azure Monitor
Azure Monitor provides pointers to information on specific types of resources. It offers
visualization, query, routing, alerting, auto scale, and automation on data both from the
Azure infrastructure (Activity Log) and each individual Azure resource (Diagnostic Logs).
Cloud applications are complex with many moving parts. Monitoring provides data to
ensure that your application stays up and running in a healthy state. It also helps you to
stave off potential problems or troubleshoot past ones.
In addition, you can use monitoring data to gain deep insights about your application.
That knowledge can help you to improve application performance or maintainability, or
automate actions that would otherwise require manual intervention.
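As one hedged example of acting on that collected data, the following sketch uses the azure-monitor-query package (an assumption, not something this article names) to run a Kusto query against a Log Analytics workspace. The workspace ID is a placeholder, and the query assumes the standard AzureActivity table.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Hypothetical workspace ID; summarize the most frequent operations
# recorded in the Activity Log over the past day.
response = client.query_workspace(
    workspace_id="<workspace-id>",
    query="AzureActivity | summarize count() by OperationNameValue | top 10 by count_",
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(list(row))
```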
Auditing your network security is vital for detecting network vulnerabilities and ensuring compliance with your IT security and regulatory governance model. With Security Group view, you can retrieve the configured network security group and security rules, as well as the effective security rules. With the list of rules applied, you can determine the ports that are open and assess network vulnerability.
Network Watcher
Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network level in, to, and from Azure. Network diagnostic and visualization tools available with Network Watcher help you understand, diagnose, and gain insights into your network in Azure. This service includes packet capture, next hop, IP flow verify, security group view, and NSG flow logs. Scenario-level monitoring provides an end-to-end view of network resources, in contrast to individual network resource monitoring.
Storage analytics
Storage Analytics can store metrics that include aggregated transaction statistics and capacity data about requests to a storage service. Transactions are reported at both the API operation level and the storage service level, and capacity is reported at the storage service level. Metrics data can be used to analyze storage service usage, diagnose issues with requests made against the storage service, and improve the performance of applications that use a service.
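As a hedged sketch of configuring these metrics programmatically, the following example uses the azure-storage-blob package (an assumption, not referenced elsewhere in this article) against a hypothetical account to enable hourly Blob service metrics with a seven-day retention policy.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, Metrics, RetentionPolicy

# Hypothetical storage account; the caller needs rights to modify service properties.
service = BlobServiceClient(
    account_url="https://examplestorageaccount.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)

hour_metrics = Metrics(
    enabled=True,
    include_apis=True,
    retention_policy=RetentionPolicy(enabled=True, days=7),
)

# Enable hourly, per-API transaction metrics for the Blob service.
service.set_service_properties(hour_metrics=hour_metrics)
print(service.get_service_properties()["hour_metrics"])
```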
Application Insights
Application Insights is an extensible Application Performance Management (APM)
service for web developers on multiple platforms. Use it to monitor your live web
application. It will automatically detect performance anomalies. It includes powerful
analytics tools to help you diagnose issues and to understand what users do with your
app. It's designed to help you continuously improve performance and usability. It works for apps on a wide variety of platforms, including .NET, Node.js, and Java EE, hosted on-premises or in the cloud. It integrates with your DevOps process and has connection points to a variety of development tools.
It monitors:
Request rates, response times, and failure rates - Find out which pages are most
popular, at what times of day, and where your users are. See which pages perform
best. If your response times and failure rates go high when there are more
requests, then perhaps you have a resourcing problem.
Dependency rates, response times, and failure rates - Find out whether external
services are slowing you down.
Exceptions - Analyze the aggregated statistics, or pick specific instances and drill
into the stack trace and related requests. Both server and browser exceptions are
reported.
AJAX calls from web pages - rates, response times, and failure rates.
Performance counters from your Windows or Linux server machines, such as CPU,
memory, and network usage.
Diagnostic trace logs from your app - so that you can correlate trace events with
requests.
Custom events and metrics that you write yourself in the client or server code, to track business events such as items sold or games won (see the sketch after this list).
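As a loose sketch of that last point, the example below uses the azure-monitor-opentelemetry distro, one of several ways to send telemetry to Application Insights and an assumption rather than something prescribed by this article. The connection string is a placeholder copied from an Application Insights resource.

```python
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Hypothetical connection string from an Application Insights resource.
configure_azure_monitor(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000",
)

tracer = trace.get_tracer(__name__)

# Record a custom operation with business attributes (for example, items sold).
with tracer.start_as_current_span("checkout") as span:
    span.set_attribute("items_sold", 3)
    span.set_attribute("order_value", 42.50)
```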
You can deploy, update, or delete all the resources for your solution in a single,
coordinated operation. You use a template for deployment and that template can work
for different environments such as testing, staging, and production. Resource Manager
provides security, auditing, and tagging features to help you manage your resources
after deployment.
You can deploy, manage, and monitor all the resources for your solution as a
group, rather than handling these resources individually.
You can repeatedly deploy your solution throughout the development lifecycle and
have confidence your resources are deployed in a consistent state.
You can manage your infrastructure through declarative templates rather than
scripts.
You can define the dependencies between resources, so they are deployed in the
correct order.
You can apply access control to all services in your resource group because Azure
role-based access control (Azure RBAC) is natively integrated into the management
platform.
You can apply tags to resources to logically organize all the resources in your
subscription.
You can clarify your organization's billing by viewing costs for a group of resources
sharing the same tag.
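As a hedged illustration of template-driven deployment, the following Python sketch uses azure-mgmt-resource to deploy a minimal inline template. The subscription ID, resource group, and storage account name are placeholders; in practice the template usually lives in source control so the same file can serve testing, staging, and production.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Minimal inline template that declares one storage account.
# Storage account names must be globally unique, 3-24 lowercase letters and digits.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "examplestgacct001",
            "location": "eastus",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }
    ],
}

poller = client.deployments.begin_create_or_update(
    "example-rg",
    "example-deployment",
    {"properties": {"mode": "Incremental", "template": template}},
)
print("Provisioning state:", poller.result().properties.provisioning_state)
```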
Note
Resource Manager provides a new way to deploy and manage your solutions. If you
used the earlier deployment model and want to learn about the changes, see
Understanding Resource Manager Deployment and classic deployment.
Next step
The Microsoft cloud security benchmark includes a collection of security
recommendations you can use to help secure the services you use in Azure.
Physical security
Availability
Components and boundaries
Network architecture
Production network
SQL Database
Operations
Monitoring
Integrity
Data protection
Next steps
Understand your shared responsibility in the cloud.
Learn how Microsoft Defender for Cloud can help you prevent, detect, and
respond to threats with increased visibility and control over the security of your
Azure resources.
Azure facilities, premises, and physical
security
Article • 03/27/2024
This article describes what Microsoft does to secure the Azure infrastructure.
Datacenter infrastructure
Azure is composed of a globally distributed datacenter infrastructure, supporting thousands of online services and spanning more than 100 highly secure facilities worldwide.
The infrastructure is designed to bring applications closer to users around the world,
preserving data residency, and offering comprehensive compliance and resiliency
options for customers. Azure has over 60 regions worldwide, and is available in 140
countries/regions.
Azure regions are organized into geographies. An Azure geography ensures that data
residency, sovereignty, compliance, and resiliency requirements are honored within
geographical boundaries.
Availability zones are physically separate locations within an Azure region. Each
availability zone is made up of one or more datacenters equipped with independent
power, cooling, and networking. Availability zones allow you to run mission-critical
applications with high availability and low-latency replication.
The Azure global infrastructure pairs regions and availability zones within the same data residency boundary for high availability, disaster recovery, and backup.
Geographically distributed datacenters enable Microsoft to be close to customers, reducing network latency and allowing for geo-redundant backup and failover.
Physical security
Microsoft designs, builds, and operates datacenters in a way that strictly controls
physical access to the areas where your data is stored. Microsoft understands the
importance of protecting your data, and is committed to helping secure the datacenters
that contain your data. We have an entire division at Microsoft devoted to designing,
building, and operating the physical facilities supporting Azure. This team is invested in
maintaining state-of-the-art physical security.
Access request and approval. You must request access prior to arriving at the
datacenter. You're required to provide a valid business justification for your visit,
such as compliance or auditing purposes. All requests are approved on a need-to-
access basis by Microsoft employees. A need-to-access basis helps keep the
number of individuals needed to complete a task in the datacenters to the bare
minimum. After Microsoft grants permission, an individual only has access to the
discrete area of the datacenter required, based on the approved business
justification. Permissions are limited to a certain period of time, and then expire.
Visitor access. Temporary access badges are stored within the access-controlled
SOC and inventoried at the beginning and end of each shift. All visitors that have
approved access to the datacenter are designated as Escort Only on their badges
and are required to always remain with their escorts. Escorted visitors do not have
any access levels granted to them and can only travel on the access of their
escorts. The escort is responsible for reviewing the actions and access of their
visitor during their visit to the datacenter. Microsoft requires visitors to surrender
badges upon departure from any Microsoft facility. All visitor badges have their
access levels removed before they are reused for future visits.
Inside the building. After you enter the building, you must pass two-factor
authentication with biometrics to continue moving through the datacenter. If your
identity is validated, you can enter only the portion of the datacenter that you have
approved access to. You can stay there only for the duration of the time approved.
Datacenter floor. You are only allowed onto the floor that you're approved to
enter. You are required to pass a full body metal detection screening. To reduce the
risk of unauthorized data entering or leaving the datacenter without our
knowledge, only approved devices can make their way into the datacenter floor.
Additionally, video cameras monitor the front and back of every server rack. When
you exit the datacenter floor, you again must pass through full body metal
detection screening. To leave the datacenter, you're required to pass through an
additional security scan.
Equipment disposal
Upon a system's end-of-life, Microsoft operational personnel follow rigorous data
handling and hardware disposal procedures to assure that hardware containing your
data is not made available to untrusted parties. We use a secure erase approach for hard
drives that support it. For hard drives that can’t be wiped, we use a destruction process
that destroys the drive and renders the recovery of information impossible. This
destruction process can be to disintegrate, shred, pulverize, or incinerate. We determine
the means of disposal according to the asset type. We retain records of the destruction.
All Azure services use approved media storage and disposal management services.
Compliance
We design and manage the Azure infrastructure to meet a broad set of international and
industry-specific compliance standards, such as ISO 27001, HIPAA, FedRAMP, SOC 1,
and SOC 2. We also meet country-/region-specific standards, including Australia IRAP,
UK G-Cloud, and Singapore MTCS. Rigorous third-party audits, such as those done by
the British Standards Institute, verify adherence to the strict security controls these
standards mandate.
For a full list of compliance standards that Azure adheres to, see the Compliance
offerings.
Next steps
To learn more about what Microsoft does to help secure the Azure infrastructure, see the other articles in this series.
This article provides information about what Microsoft does to secure the Azure
infrastructure and provide maximum availability of customers' data. Azure provides
robust availability, based on extensive redundancy achieved with virtualization
technology.
Uninterruptible power supplies and vast banks of batteries ensure that electricity
remains continuous if a short-term power disruption occurs. Emergency generators
provide backup power for extended outages and planned maintenance. If a natural
disaster occurs, the datacenter can use onsite fuel reserves.
High-speed and robust fiber optic networks connect datacenters with other major hubs
and internet users. Compute nodes host workloads closer to users to reduce latency,
provide geo-redundancy, and increase overall service resiliency. A team of engineers
works around the clock to ensure services are persistently available.
Microsoft ensures high availability through advanced monitoring and incident response,
service support, and backup failover capability. Geographically distributed Microsoft
operations centers operate 24/7/365. The Azure network is one of the largest in the
world. The fiber optic and content distribution network connects datacenters and edge
nodes to ensure high performance and reliability.
Disaster recovery
Azure keeps your data durable in two locations. You can choose the location of the
backup site. In the primary location, Azure constantly maintains three healthy replicas of
your data.
Database availability
Azure ensures that a database is accessible through an internet gateway and sustains database availability. Monitoring assesses the health and state of the active databases at five-minute intervals.
Storage availability
Azure delivers storage through a highly scalable and durable storage service, which
provides connectivity endpoints. This means that an application can access the storage
service directly. The storage service processes incoming storage requests efficiently, with
transactional integrity.
Next steps
To learn more about what Microsoft does to help secure the Azure infrastructure, see the other articles in this series.
This article provides a general description of the Azure architecture and management.
The Azure system environment is made up of multiple networks; separate IT teams are responsible for the operations and maintenance of these networks.
Azure architecture
Azure is a cloud computing platform and infrastructure for building, deploying, and managing applications and services through a network of datacenters. Microsoft manages these datacenters. Azure creates virtual machines (VMs) based on the resources you specify. These VMs run on an Azure hypervisor, which is designed for use in the cloud and isn't accessible to the public.
On each Azure physical server node, there's a hypervisor that runs directly over the
hardware. The hypervisor divides a node into a variable number of guest VMs. Each
node also has one root VM, which runs the host operating system. Windows Firewall is
enabled on each VM. You define which ports are addressable by configuring the service
definition file. These ports are the only ones open and addressable, internally or
externally. All traffic and access to the disk and network is mediated by the hypervisor
and root operating system.
At the host layer, Azure VMs run a customized and hardened version of the latest
Windows Server. Azure uses a version of Windows Server that includes only those
components necessary to host VMs. This improves performance and reduces attack
surface. Machine boundaries are enforced by the hypervisor, which doesn't depend on
the operating system security.
The datacenter is divided into clusters. Clusters isolate faults at the fabric controller (FC) level, and prevent certain classes of errors from affecting servers beyond the cluster in which they occur. FCs that serve a particular Azure cluster are grouped into an FC cluster.
Hardware inventory
The FC prepares an inventory of Azure hardware and network devices during the
bootstrap configuration process. Any new hardware and network components entering
the Azure production environment must follow the bootstrap configuration process. The
FC is responsible for managing the entire inventory listed in the datacenter.xml
configuration file.
The host and native FC-managed operating systems are designed for use in the cloud,
and aren't publicly accessible.
Azure datacenters
The Microsoft Cloud Infrastructure and Operations (MCIO) team manages the physical
infrastructure and datacenter facilities for all Microsoft online services. MCIO is primarily
responsible for managing the physical and environmental controls within the
datacenters, as well as managing and supporting outer perimeter network devices (such
as edge routers and datacenter routers). MCIO is also responsible for setting up the bare
minimum server hardware on racks in the datacenter. Customers have no direct
interaction with Azure.
Application Platform
Microsoft Entra ID
Azure Compute
Azure Net
Cloud Engineering Services
ISSD: Security
Multifactor Authentication
SQL Database
Storage
Types of users
Employees (or contractors) of Microsoft are considered to be internal users. All other
users are considered to be external users. All Azure internal users have their employee
status categorized with a sensitivity level that defines their access to customer data
(access or no access). User privileges to Azure (authorization permission after authentication takes place) are granted based on that categorization.
Azure uses unique identifiers to authenticate organizational users and customers (or
processes acting on behalf of organizational users). This applies to all assets and devices
that are part of the Azure environment.
Microsoft uses encryption based on the FC's master identity public key. This occurs at FC
setup and FC reconfiguration times, to transfer the credentials used to access
networking hardware devices. When the FC needs the credentials, the FC retrieves and
decrypts them.
Network devices
The Azure networking team configures network service accounts to enable an Azure
client to authenticate to network devices (routers, switches, and load balancers).
Next steps
To learn more about what Microsoft does to help secure the Azure infrastructure, see the other articles in this series.
The Azure network architecture provides connectivity from the Internet to the Azure datacenters. Any workload deployed on Azure (IaaS, PaaS, or SaaS) uses the Azure datacenter network.
Network topology
The network architecture of an Azure datacenter consists of the following components:
Edge network
Wide area network
Regional gateways network
Datacenter network
Network components
The following is a brief description of the network components.
Edge network
Demarcation point between Microsoft networking and other networks (for
example, Internet, Enterprise network)
Provides Internet and ExpressRoute peering into Azure
Regional gateway
Point of aggregation for all of the datacenters in an Azure region
Provides massive connectivity between datacenters within an Azure region (for example, multiple hundreds of terabits per datacenter)
Datacenter network
Provides connectivity between servers within the datacenter with low
oversubscribed bandwidth
The datacenter network is a modified version of a Clos network, providing high bi-sectional bandwidth for cloud-scale traffic. The network is constructed using a large number of commodity devices to reduce the impact caused by individual hardware failure. These devices are strategically located in different physical locations, with separate power and cooling domains, to reduce the impact of an environmental event. On the control plane, all network devices run in OSI model Layer 3 routing mode, which eliminates the historical problem of traffic loops. All paths between different tiers are active to provide high redundancy and bandwidth, using Equal-Cost Multi-Path (ECMP) routing.
The users of the Azure production network include both external customers who access
their own Azure applications and internal Azure support personnel who manage the
production network. This article discusses the security access methods and protection
mechanisms for establishing connections to the Azure production network.
The Azure DNS servers are located at multiple datacenter facilities. The Azure DNS implementation incorporates a hierarchy of secondary and primary DNS servers to publicly resolve Azure customer domain names. The domain names usually resolve to a CloudApp.net address, which wraps the virtual IP (VIP) address for the customer's service. Unique to Azure, the translation of the VIP to the internal dedicated IP (DIP) address of the tenant is done by the Microsoft load balancers responsible for that VIP.
Azure is hosted in geographically distributed Azure datacenters within the US, and it's built on state-of-the-art routing platforms that implement robust, scalable architectural standards. Because Microsoft owns its own network circuits between datacenters, these attributes help the Azure offering achieve 99.9+ percent network availability without the need for traditional third-party internet service providers.
Connection to production network and
associated firewalls
The Azure network internet traffic flow policy directs traffic to the Azure production
network that's located in the nearest regional datacenter within the US. Because the
Azure production datacenters maintain consistent network architecture and hardware,
the traffic flow description that follows applies consistently to all datacenters.
After internet traffic for Azure is routed to the nearest datacenter, a connection is
established to the access routers. These access routers serve to isolate traffic between
Azure nodes and customer-instantiated VMs. Network infrastructure devices at the
access and edge locations are the boundary points where ingress and egress filters are
applied. These routers are configured through a tiered access-control list (ACL) to filter
unwanted network traffic and apply traffic rate limits, if necessary. Traffic that is allowed
by ACL is routed to the load balancers. Distribution routers are designed to allow only
Microsoft-approved IP addresses, provide anti-spoofing, and establish TCP connections
that use ACLs.
External load-balancing devices are located behind the access routers to perform
network address translation (NAT) from internet-routable IPs to Azure internal IPs. The
devices also route packets to valid production internal IPs and ports, and they act as a
protection mechanism to limit exposing the internal production network address space.
By default, Microsoft enforces Hypertext Transfer Protocol Secure (HTTPS) for all traffic
that's transmitted to customers' web browsers, including sign-in and all traffic
thereafter. The use of TLS v1.2 enables a secure tunnel for traffic to flow through. ACLs
on access and core routers ensure that the source of the traffic is consistent with what is
expected.
Hypervisor firewall (packet filter): This firewall is implemented in the hypervisor and
configured by the fabric controller (FC) agent. This firewall protects the tenant that runs
inside the VM from unauthorized access. By default, when a VM is created, all traffic is
blocked and then the FC agent adds rules and exceptions in the filter to allow
authorized traffic.
Native host firewall: Azure Service Fabric and Azure Storage run on a native OS, which
has no hypervisor and, therefore, Windows Firewall is configured with the preceding two
sets of rules.
Host firewall: The host firewall protects the host partition, which runs the hypervisor.
The rules are programmed to allow only the FC and jump boxes to talk to the host
partition on a specific port. The other exceptions are to allow DHCP response and DNS
replies. Azure uses a machine configuration file, which contains a template of firewall
rules for the host partition. A host firewall exception also exists that allows VMs to
communicate to host components, wire server, and metadata server, through specific
protocol/ports.
Guest firewall: The Windows Firewall piece of the guest OS, which is configurable by
customers on customer VMs and storage.
Additional security features that are built into the Azure capabilities include:
Infrastructure components that are assigned IP addresses that are from DIPs. An
attacker on the internet cannot address traffic to those addresses because it would
not reach Microsoft. Internet gateway routers filter packets that are addressed
solely to internal addresses, so they would not enter the production network. The
only components that accept traffic that's directed to VIPs are load balancers.
Firewalls that are implemented on all internal nodes have three primary security
architecture considerations for any given scenario:
Firewalls are placed behind the load balancer and accept packets from
anywhere. These packets are intended to be externally exposed and would
correspond to the open ports in a traditional perimeter firewall.
Firewalls accept packets only from a limited set of addresses. This consideration is part of the defense-in-depth strategy against DDoS attacks. Such connections are cryptographically authenticated.
Firewalls can be accessed only from select internal nodes. They accept packets
only from an enumerated list of source IP addresses, all of which are DIPs within
the Azure network. For example, an attack on the corporate network could
direct requests to these addresses, but the attacks would be blocked unless the
source address of the packet was one in the enumerated list within the Azure
network.
The access router at the perimeter blocks outbound packets that are
addressed to an address that's inside the Azure network because of its
configured static routes.
Next steps
To learn more about what Microsoft does to secure the Azure infrastructure, see the other articles in this series.
Azure SQL Database provides a relational database service in Azure. To protect customer
data and provide strong security features that customers expect from a relational
database service, SQL Database has its own sets of security capabilities. These
capabilities build upon the controls that are inherited from Azure.
Security capabilities
The overarching principle for network security of the Azure SQL Database offering is to
allow only the connection and communication that is necessary to allow the service to
operate. All other ports, protocols, and connections are blocked by default. Virtual local
area networks (VLANs) and ACLs are used to restrict network communications by source
and destination networks, protocols, and port numbers.
All publicly accessible information is managed within the Azure production network.
VLAN isolation
The Azure production network is logically segregated into three primary VLANs.
Packet filtering
The IPFilter and the software firewalls that are implemented on the root OS and guest
OS of the nodes enforce connectivity restrictions and prevent unauthorized traffic
between VMs.
Each rule is defined as the following tuple: {Src IP, Src Port, Destination IP, Destination Port, Destination Protocol, In/Out, Stateful/Stateless, Stateful Flow Timeout}.
Synchronize (SYN) packets are allowed in or out only if one of the rules permits them. For TCP, Azure uses stateless rules, where the principle is to allow all non-SYN packets into or out of the VM. The security premise is that any host stack is resilient to ignoring a non-SYN packet if it hasn't seen a SYN packet previously. The TCP protocol itself is stateful, and in combination with the stateless SYN-based rule it achieves the overall behavior of a stateful implementation.
For User Datagram Protocol (UDP), Azure uses a stateful rule. Every time a UDP packet
matches a rule, a reverse flow is created in the other direction. This flow has a built-in
timeout.
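The following Python sketch is a conceptual illustration of that stateless SYN-based rule, not Azure's implementation: non-SYN TCP segments pass unconditionally, while SYN packets pass only when a hypothetical rule permits them.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    syn: bool = False  # TCP SYN flag

# Hypothetical allow rules: (source prefix, destination port) pairs that may
# open new inbound TCP connections.
ALLOW_SYN_RULES = [("10.0.0.", 1433), ("10.0.0.", 443)]

def allow_tcp(packet: Packet) -> bool:
    if not packet.syn:
        # Non-SYN segments are allowed through; the host TCP stack ignores
        # them unless it has already seen a matching SYN.
        return True
    return any(
        packet.src_ip.startswith(prefix) and packet.dst_port == port
        for prefix, port in ALLOW_SYN_RULES
    )

print(allow_tcp(Packet("10.0.0.5", 1433, syn=True)))    # True: rule match
print(allow_tcp(Packet("203.0.113.9", 22, syn=True)))   # False: no rule
print(allow_tcp(Packet("203.0.113.9", 22, syn=False)))  # True: non-SYN segment
```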
Customers are responsible for setting up their own firewalls on top of what Azure
provides. Here, customers are able to define the rules for inbound and outbound traffic.
All configuration changes to Azure are developed and tested in the staging
environment, and they're thereafter deployed in production environment. Software
builds are reviewed as part of testing. Security and privacy checks are reviewed as part
of entry checklist criteria. Changes are deployed on scheduled intervals by the respective
deployment team. Releases are reviewed and signed off by the respective deployment
team personnel before they're deployed into production.
Changes are monitored for success. In a failure scenario, the change is rolled back to its previous state, or a hotfix is deployed to address the failure with the approval of the designated personnel. Source Depot, Git, TFS, Master Data Services (MDS), runners, Azure security monitoring, the FC, and the WinFabric platform are used to centrally manage, apply, and verify the configuration settings in the Azure virtual environment.
Similarly, hardware and network changes have established validation steps to evaluate
their adherence to the build requirements. The releases are reviewed and authorized
through a coordinated change advisory board (CAB) of respective groups across the
stack.
Next steps
To learn more about what Microsoft does to secure the Azure infrastructure, see the other articles in this series.
This article describes how Microsoft manages and operates the Azure production
network to secure the Azure datacenters.
To ensure the secure execution of services running in the Azure environment, the operations teams implement multiple levels of monitoring, logging, and reporting.
If any anomalies occur, the incident response process followed by the Azure incident
triage team is activated. The appropriate Azure support personnel are notified to
respond to the incident. Issue tracking and resolution are documented and managed in
a centralized ticketing system. System uptime metrics are available under the non-
disclosure agreement (NDA) and upon request.
Next steps
To learn more about what Microsoft does to secure the Azure infrastructure, see:
Azure facilities, premises, and physical security
Azure infrastructure availability
Azure information system components and boundaries
Azure network architecture
Azure production network
Azure SQL Database security features
Azure infrastructure monitoring
Azure infrastructure integrity
Azure customer data protection
Azure infrastructure monitoring
Article • 08/29/2023
The baseline configurations that are required for Azure-based services are reviewed by
the Azure security and compliance team and by service teams. A service team review is
part of the testing that occurs before the deployment of their production service.
Vulnerability management
Security update management helps protect systems from known vulnerabilities. Azure
uses integrated deployment systems to manage the distribution and installation of
security updates for Microsoft software. Azure is also able to draw on the resources of
the Microsoft Security Response Center (MSRC). The MSRC identifies, monitors,
responds to, and resolves security incidents and cloud vulnerabilities around the clock,
every day of the year.
Vulnerability scanning
Vulnerability scanning is performed on server operating systems, databases, and
network devices. The vulnerability scans are performed on a quarterly basis at minimum.
Azure contracts with independent assessors to perform penetration testing of the Azure
boundary. Red-team exercises are also routinely performed and the results are used to
make security improvements.
Protective monitoring
Azure security has defined requirements for active monitoring. Service teams configure active monitoring tools in accordance with these requirements. Active monitoring tools include the Microsoft Monitoring Agent (MMA) and System Center Operations Manager. These tools are configured to provide timely alerts to Azure security personnel in situations that require immediate action.
Incident management
Microsoft implements a security incident management process to facilitate a
coordinated response to incidents, should one occur.
If Microsoft becomes aware of unauthorized access to customer data that's stored on its equipment or in its facilities, or it becomes aware of unauthorized access to such equipment or facilities resulting in loss, disclosure, or alteration of customer data, Microsoft responds according to its security incident management process.
An incident management framework has been established that defines roles and
allocates responsibilities. The Azure security incident management team is responsible
for managing security incidents, including escalation, and ensuring the involvement of
specialist teams when necessary. Azure operations managers are responsible for
overseeing the investigation and resolution of security and privacy incidents.
Next steps
To learn more about what Microsoft does to secure the Azure infrastructure, see the other articles in this series.
Software installation
All components in the software stack that are installed in the Azure environment are
custom built following the Microsoft Security Development Lifecycle (SDL) process. All
software components, including operating system (OS) images and SQL Database, are
deployed as part of the change management and release management process. The OS
that runs on all nodes is a customized version. The exact version is chosen by the fabric
controller (FC) according to the role it intends for the OS to play. In addition, the host
OS doesn't allow installation of any unauthorized software components.
Web protocols
Compute connectivity
Azure ensures that the deployed application or service is reachable via standard web-
based protocols. Virtual instances of internet-facing web roles have external internet
connectivity and are reachable directly by web users. To protect the sensitivity and
integrity of the operations that worker roles perform on behalf of the publicly accessible
web role virtual instances, virtual instances of back-end processing worker roles have
external internet connectivity but can't be accessed directly by external web users.
Next steps
To learn more about what Microsoft does to secure the Azure infrastructure, see the other articles in this series.
Azure support personnel are assigned unique corporate Active Directory accounts by
Microsoft. Azure relies on Microsoft corporate Active Directory, managed by Microsoft
Information Technology (MSIT), to control access to key information systems. Multi-
factor authentication is required, and access is granted only from secure consoles.
Data protection
Azure provides customers with strong data security, both by default and as customer
options.
Data segregation: Azure is a multi-tenant service, which means that multiple customer
deployments and VMs are stored on the same physical hardware. Azure uses logical
isolation to segregate each customer’s data from the data of others. Segregation
provides the scale and economic benefits of multi-tenant services while rigorously
preventing customers from accessing one another’s data.
At-rest data protection: Customers are responsible for ensuring that data stored in
Azure is encrypted in accordance with their standards. Azure offers a wide range of
encryption capabilities, giving customers the flexibility to choose the solution that best
meets their needs. Azure Key Vault helps customers easily maintain control of keys that
are used by cloud applications and services to encrypt data. Azure Disk Encryption
enables customers to encrypt VMs. Azure Storage Service Encryption makes it possible
to encrypt all data placed into a customer's storage account.
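As a small illustration of the Key Vault option, the following Python sketch (using the azure-keyvault-secrets and azure-identity packages, with a hypothetical vault URL) stores and retrieves an application secret instead of embedding it in code or configuration.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical vault URL; DefaultAzureCredential resolves a managed identity,
# environment credentials, or a developer sign-in.
client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Store and later retrieve an application secret.
client.set_secret("storage-connection-string", "<connection-string-value>")
retrieved = client.get_secret("storage-connection-string")
print(retrieved.name, "retrieved; value length:", len(retrieved.value))
```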
In-transit data protection: Microsoft provides a number of options that can be utilized
by customers for securing data in transit internally within the Azure network and
externally across the Internet to the end user. These include communication through
Virtual Private Networks (utilizing IPsec/IKE encryption), Transport Layer Security (TLS)
1.2 or later (via Azure components such as Application Gateway or Azure Front Door),
protocols directly on the Azure virtual machines (such as Windows IPsec or SMB), and
more.
Additionally, "encryption by default" using MACsec (an IEEE standard at the data-link
layer) is enabled for all Azure traffic traveling between Azure datacenters to ensure
confidentiality and integrity of customer data.
Data redundancy: Microsoft helps ensure that data is protected if there is a cyberattack or physical damage to a datacenter. Data can be replicated within a selected geographic area for redundancy, but it cannot be transmitted outside that area. Customers have multiple options for replicating data, including the number of copies and the number and location of replication datacenters.
When you create your storage account, select one of the following replication options (a minimal SDK sketch follows the list):
Locally redundant storage (LRS): Locally redundant storage maintains three copies
of your data. LRS is replicated three times within a single facility in a single region.
LRS protects your data from normal hardware failures, but not from a failure of a
single facility.
Zone-redundant storage (ZRS): Zone-redundant storage maintains three copies of
your data. ZRS is replicated three times across two to three facilities to provide
higher durability than LRS. Replication occurs within a single region or across two
regions. ZRS helps ensure that your data is durable within a single region.
Geo-redundant storage (GRS): Geo-redundant storage is enabled for your storage
account by default when you create it. GRS maintains six copies of your data. With
GRS, your data is replicated three times within the primary region. Your data is also
replicated three times in a secondary region hundreds of miles away from the
primary region, providing the highest level of durability. In the event of a failure at
the primary region, Azure Storage fails over to the secondary region. GRS helps
ensure that your data is durable in two separate regions.
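The replication option is chosen through the storage account SKU when the account is created. The following hedged Python sketch uses the azure-mgmt-storage package to create a geo-redundant (Standard_GRS) account; the subscription ID, resource group, account name, and region are placeholders rather than values from this article.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

credential = DefaultAzureCredential()
storage_client = StorageManagementClient(credential, "<subscription-id>")

# Request a StorageV2 account whose data is replicated with geo-redundant storage (GRS).
# Swap the SKU name for Standard_LRS or Standard_ZRS to pick a different replication option.
poller = storage_client.storage_accounts.begin_create(
    "<resource-group>",
    "<storageaccountname>",
    {
        "location": "eastus",
        "kind": "StorageV2",
        "sku": {"name": "Standard_GRS"},
    },
)
account = poller.result()
print(account.name, account.sku.name)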
Data destruction: When customers delete data or leave Azure, Microsoft follows strict
standards for deleting data, as well as the physical destruction of decommissioned
hardware. Microsoft executes a complete deletion of data on customer request and on
contract termination. For more information, see Data management at Microsoft.
Customer data ownership
Microsoft does not inspect, approve, or monitor applications that customers deploy to
Azure. Moreover, Microsoft does not know what kind of data customers choose to store
in Azure. Microsoft does not claim data ownership over the customer information that's
entered into Azure.
Records management
Azure has established internal records-retention requirements for back-end data.
Customers are responsible for identifying their own record retention requirements. For
records that are stored in Azure, customers are responsible for extracting their data and
retaining their content outside of Azure for a customer-specified retention period.
Azure allows customers to export data and audit reports from the product. The exports
are saved locally to retain the information for a customer-defined retention time period.
Next steps
To learn more about what Microsoft does to secure the Azure infrastructure, see the Azure
infrastructure security documentation.
Platform integrity and security
The Azure fleet is composed of millions of servers (hosts) with thousands more added
on a daily basis. Thousands of hosts also undergo maintenance on a daily basis through
reboots, operating system refreshes, or repairs. Before a host can join the fleet and
begin accepting customer workloads, Microsoft verifies that the host is in a secure and
trustworthy state. This verification ensures that malicious or inadvertent changes have
not occurred on boot sequence components during the supply chain or maintenance
workflows.
Platform integrity and security covers the following areas:
Firmware security
Platform code integrity
UEFI Secure Boot
Measured boot and host attestation
Project Cerberus
Encryption at rest
Hypervisor security
Next steps
Learn how Microsoft actively partners within the cloud hardware ecosystem to
drive continuous firmware security improvements.
Firmware security
This article describes how Microsoft secures the cloud hardware ecosystem and supply
chains.
Security assurance processes applied across the hardware and firmware supply chain include:
Threat modeling
Secure design reviews
Firmware reviews and penetration testing
Secure build and test environments
Security vulnerability management and incident response
Next steps
To learn more about what we do to drive platform integrity and security, see the related
articles in this series.
Platform code integrity
Running unauthorized or unverified software exposes any organization to several risks:
Security risks such as dedicated attack tools, custom malware, and third-party
software with known vulnerabilities
Compliance risks when the approved change management process isn't used to
bring in new software
Quality risk from externally developed software, which may not meet the
operational requirements of the business
In Azure, we face the same challenge, and at significant complexity. We have thousands
of servers running software developed and maintained by thousands of engineers. This
presents a large attack surface that cannot be managed through business processes
alone.
Code Integrity allows a system administrator to define a policy that authorizes only
binaries and scripts that have been signed by particular certificates or match specified
SHA256 hashes. The kernel enforces this policy by blocking execution of everything that
doesn't meet the set policy.
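The enforcement idea can be illustrated with a simplified sketch: compute the SHA-256 digest of a binary and allow execution only if the digest (or, in the real system, the signing certificate) appears in the policy. This is only a minimal model of the allow-by-hash rule, not Azure's implementation; the digest and path below are hypothetical.

import hashlib

# Hypothetical policy: the set of SHA-256 digests the administrator has authorized.
AUTHORIZED_SHA256 = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_execution_allowed(path: str) -> bool:
    """Return True only if the file's SHA-256 digest is in the authorized set."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in AUTHORIZED_SHA256

# In enforcement mode the kernel blocks the file; in audit mode it only logs the violation.
if not is_execution_allowed("/usr/local/bin/example-tool"):
    print("blocked: binary is not in the code integrity policy")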
A concern with a code integrity policy is that unless the policy is perfectly correct, it can
block critical software in production and cause an outage. Given this concern, one may
ask why it isn’t sufficient to use security monitoring to detect when unauthorized
software has executed. Code integrity has an audit mode that, instead of preventing
execution, can alert when unauthorized software is run. Alerting certainly can add much
value in addressing compliance risks, but for security risks such as ransomware or
custom malware, delaying the response by even a few seconds can be the difference
between protection and an adversary gaining a persistent foothold in your fleet. In
Azure, we've invested significantly to manage any risk of code integrity contributing to a
customer-impacting outage.
Build process
As discussed above, the Azure build system has a rich set of tests to ensure software
changes are secure and compliant. Once a build has progressed through validation, the
build system signs it using an Azure build certificate. The certificate indicates the build
has passed through the entire change management process. The final test that the build
goes through is called Code Signature Validation (CSV). CSV confirms the newly built
binaries meet the code integrity policy before we deploy to production. This gives us
high confidence that we won't cause a customer-impacting outage because of
incorrectly signed binaries. If CSV finds a problem, the build breaks and the relevant
engineers are paged to investigate and fix the issue.
All changes in Azure are required to deploy through a series of stages. The first of these
are internal Azure testing instances. The next stage is used only to serve other Microsoft
product teams. The final stage serves third-party customers. When a change is
deployed, it moves to each of these stages in turn, and pauses to measure the health of
the stage. If the change is found to have no negative impact, then it moves to the next
stage. If we make a bad change to a code integrity policy, the change is detected during
this staged deployment and rolled back.
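The staged rollout can be pictured as a loop over deployment rings with a health gate between them. The sketch below is illustrative only; the ring names and the health check are assumptions, not Azure's deployment tooling.

# Illustrative safe-deployment loop: promote a change ring by ring, pause to
# measure health, and stop (triggering rollback) if any ring regresses.
STAGES = ["internal-test", "microsoft-product-teams", "external-customers"]

def is_healthy(stage: str) -> bool:
    # Placeholder for real telemetry checks (error rates, blocked-binary alerts, and so on).
    return True

def deploy(change_id: str) -> None:
    for stage in STAGES:
        print(f"deploying {change_id} to {stage}")
        if not is_healthy(stage):
            print(f"health regression in {stage}; rolling back {change_id}")
            return
    print(f"{change_id} fully deployed")

deploy("code-integrity-policy-update-042")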
Incident response
Even with this layered protection, it's still possible that some server in the fleet may
block properly authorized software and cause a customer-facing issue, one of our worst-
case scenarios. Our final layer of defense is human investigation. Each time code
integrity blocks a file, it raises an alert for the on-call engineers to investigate. The alert
allows us to start security investigations and intervene, whether the issue is an indicator
of a real attack, a false positive, or other customer-impacting situation. This minimizes
the time it takes to mitigate any code integrity related issues.
Next steps
Learn how Windows 10 uses configurable code integrity.
To learn more about what we do to drive platform integrity and security, see:
Firmware security
Secure boot
Measured boot and host attestation
Project Cerberus
Encryption at rest
Hypervisor security
Secure Boot
Article • 11/11/2022
Secure Boot is a feature of the Unified Extensible Firmware Interface (UEFI) that
requires all low-level firmware and software components to be verified prior to loading.
During boot, UEFI Secure Boot checks the signature of each piece of boot software,
including UEFI firmware drivers (also known as option ROMs), Extensible Firmware
Interface (EFI) applications, and the operating system drivers and binaries. If the
signatures are valid or trusted by the Original Equipment Manufacturer (OEM), the
machine boots and the firmware gives control to the operating system.
Secure Boot relies on the following keys and databases:
Platform key (PK) - Establishes trust between the platform owner (Microsoft) and
the firmware. The public half is PKpub and the private half is PKpriv.
Key enrollment key database (KEK) - Establishes trust between the OS and the
platform firmware. The public half is KEKpub and the private half is KEKpriv.
Signature database (db) - Holds the digests for trusted signers (public keys and
certificates) of the firmware and software code modules authorized to interact with
platform firmware.
Revoked signatures database (dbx) – Holds revoked digests of code modules that
have been identified to be malicious, vulnerable, compromised, or untrusted. If a
hash is in the signature db and the revoked signatures db, the revoked signatures
database takes precedence.
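The precedence rule can be expressed in a few lines: a module whose digest appears in dbx is rejected even if it also appears in db. The sketch below is illustrative; real firmware also evaluates signatures and certificates, and the digests shown are placeholders.

# Hypothetical digests; real db/dbx entries hold certificates and image hashes.
db = {"digest-of-trusted-bootloader", "digest-of-os-loader"}
dbx = {"digest-of-os-loader"}  # later found vulnerable and revoked

def may_load(digest: str) -> bool:
    """dbx takes precedence over db: revoked modules never load."""
    if digest in dbx:
        return False
    return digest in db

print(may_load("digest-of-trusted-bootloader"))  # True
print(may_load("digest-of-os-loader"))           # False: the revoked entry wins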
The following process explains how these components are updated:
The OEM stores the Secure Boot digests on the machine’s nonvolatile RAM (NV-RAM) at
the time of manufacturing.
1. The signature database (db) is populated with the signers or image hashes of UEFI
applications, operating system loaders (such as the Microsoft Operating System
Loader or Boot Manager), and UEFI drivers that are trusted.
2. The revoked signatures database (dbx) is populated with digests of modules that
are no longer trusted.
3. The key enrollment key (KEK) database is populated with signing keys that can be
used to update the signature database and revoked signatures database. The
databases can be edited via updates that are signed with the correct key or via
updates by a physically present authorized user using firmware menus.
4. After the db, dbx, and KEK databases have been added and final firmware
validation and testing is complete, the OEM locks the firmware from editing and
generates a platform key (PK). The PK can be used to sign updates to the KEK or to
turn off Secure Boot.
During each stage in the boot process, the digests of the firmware, bootloader,
operating system, kernel drivers, and other boot chain artifacts are calculated and
compared to acceptable values. Firmware and software that are discovered to be
untrusted are not allowed to load. Thus, low-level malware injection or pre-boot
malware attacks can be blocked.
By validating the signatures of KEKpub and PKpub, we can confirm that only trusted
parties have permission to modify the definitions of what software is considered trusted.
Lastly, by ensuring that secure boot is active, we can validate that these definitions are
being enforced.
Next steps
To learn more about what we do to drive platform integrity and security, see:
Firmware security
Platform code integrity
Measured boot and host attestation
Project Cerberus
Encryption at rest
Hypervisor security
Measured boot and host attestation
Article • 11/11/2022
This article describes how Microsoft ensures integrity and security of hosts through
measured boot and host attestation.
Measured boot
The Trusted Platform Module (TPM) is a tamper-proof, cryptographically secure auditing
component with firmware supplied by a trusted third party. The boot configuration log
contains hash-chained measurements recorded in its Platform Configuration Registers
(PCR) when the host last underwent the bootstrapping sequence. Hash-chaining works by
combining each new measurement with the previous accumulated hash value and running the
hashing algorithm over the combined value, so every PCR value depends on the entire
sequence of measurements that preceded it.
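Conceptually, extending a PCR looks like the sketch below: each new measurement is hashed together with the register's current value, so the final value depends on every component and on the order in which they loaded. This is a simplified model for illustration, not TPM code.

import hashlib

def extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """Simplified PCR extend: new_value = SHA-256(old_value || SHA-256(measurement))."""
    measurement_digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr_value + measurement_digest).digest()

# Start from an all-zero register and extend it with each boot component in order.
pcr = bytes(32)
for component in [b"uefi-firmware", b"bootloader", b"os-kernel", b"early-drivers"]:
    pcr = extend(pcr, component)

print(pcr.hex())  # altering or reordering any component changes this value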
Attestation is accomplished when a host furnishes proof of its configuration state using
its boot configuration log (TCGLog). Forgery of a boot log is difficult because the TPM
doesn't expose its PCR values other than the read and extend operations. Furthermore,
the credentials supplied by the Host Attestation Service are sealed to specific PCR
values. The use of hash-chaining makes it computationally infeasible to spoof or unseal
the credentials out-of-band.
Host Attestation Service is present in each Azure cluster within a specialized locked-
down environment. The locked down environment includes other gatekeeper services
that participate in the host machine bootstrapping protocol. A public key infrastructure
(PKI) acts as an intermediary for validating the provenance of attestation requests and as
an identity issuer (contingent upon successful host attestation). The post-attestation
credentials issued to the attesting host are sealed to its identity. Only the requesting
host can unseal the credentials and leverage them for obtaining incremental
permissions. This protects against man-in-the-middle and spoofing attacks.
If an Azure host arrives from factory with a security misconfiguration or is tampered with
in the datacenter, its TCGLog contains indicators of compromise flagged by the Host
Attestation Service upon the next attestation, which causes an attestation failure.
Attestation failures prevent the Azure fleet from trusting the offending host. This
prevention effectively blocks all communications to and from the host and triggers an
incident workflow. Investigation and a detailed post-mortem analysis are conducted to
determine root causes and any potential indications of compromise. It's only after the
analysis is complete that a host is remediated and has the opportunity to join the Azure
fleet and take on customer workloads.
Attestation measurements
Following are examples of the many measurements captured today.
Secure Boot and Secure Boot keys
By validating that the signature database and revoked signatures database digests are
correct, the Host Attestation Service assures the client agent considers the right software
to be trusted. By validating the signatures of the public key enrollment key database
and public platform key, the Host Attestation Service confirms that only trusted parties
have permission to modify the definitions of what software is considered trusted. Lastly,
by ensuring that secure boot is active, the Host Attestation Service validates that these
definitions are being enforced.
Debug controls
Debuggers are powerful tools for developers. However, the unfettered access to
memory and other debug commands could weaken data protection and system
integrity if given to a non-trusted party. Host Attestation Service ensures any kind of
debugging is disabled on boot on production machines.
Code integrity
UEFI Secure Boot ensures that only trusted low-level software can run during the boot
sequence. The same checks, though, must also be applied in the post-boot environment
to drivers and other executables with kernel-mode access. To that end, a code integrity
(CI) policy is used to define which drivers, binaries, and other executables are considered
trusted by specifying valid and invalid signatures. These policies are enforced. Violations
of policy generate alerts to the security incident response team for investigation.
Next steps
To learn more about what we do to drive platform integrity and security, see:
Firmware security
Platform code integrity
Secure boot
Project Cerberus
Encryption at rest
Hypervisor security
Project Cerberus
Article • 11/11/2022
Project Cerberus is a hardware root of trust that measures and attests the integrity of
platform firmware, including firmware for the:
Host
Baseboard Management Controller (BMC)
All peripherals, including network interface card and system-on-a-chip (SoC)
Cerberus attestation
Cerberus authenticates firmware integrity for server components using a Platform
Firmware Manifest (PFM). PFM defines a list of authorized firmware versions and
provides a platform measurement to the Azure Host Attestation Service. The Host
Attestation Service validates the measurements and makes a determination to only
allow trusted hosts to join the Azure fleet and host customer workloads.
In conjunction with the Host Attestation Service, Cerberus’ capabilities enhance and
promote a highly secure Azure production infrastructure.
Next steps
To learn more about what we do to drive platform integrity and security, see:
Firmware security
Platform code integrity
Secure boot
Measured boot and host attestation
Encryption at rest
Hypervisor security
Azure Data Encryption at rest
Article • 11/15/2022
Microsoft Azure includes tools to safeguard data according to your company's security
and compliance needs. This article focuses on how data is protected at rest across
Microsoft Azure and on the components that take part in that protection.
In practice, key management and control scenarios, as well as scale and availability
assurances, require additional constructs. Microsoft Azure Encryption at Rest concepts
and components are described below.
Encryption at rest is designed to prevent the attacker from accessing the unencrypted
data by ensuring the data is encrypted when on disk. If an attacker obtains a hard drive
with encrypted data but not the encryption keys, the attacker must defeat the
encryption to read the data. This attack is much more complex and resource consuming
than accessing unencrypted data on a hard drive. For this reason, encryption at rest is
highly recommended and is a high priority requirement for many organizations.
Encryption at rest may also be required by an organization's need for data governance
and compliance efforts. Industry and government regulations such as HIPAA, PCI, and
FedRAMP lay out specific safeguards regarding data protection and encryption
requirements. Encryption at rest is a mandatory measure required for compliance with
some of those regulations. For more information on Microsoft's approach to FIPS 140-2
validation, see Federal Information Processing Standard (FIPS) Publication 140-2.
Microsoft is committed to encryption at rest options across cloud services and giving
customers control of encryption keys and logs of key use. Additionally, Microsoft is
working towards encrypting all customer data at rest by default.
Resource providers and application instances store the encrypted Data Encryption Keys
as metadata. Only an entity with access to the Key Encryption Key can decrypt these
Data Encryption Keys. Different models of key storage are supported. For more
information, see data encryption models.
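A hedged sketch of that envelope-encryption pattern with the azure-keyvault-keys package: a locally generated data encryption key is wrapped by a key encryption key held in Key Vault, and only the wrapped form would be stored as metadata. The vault URL and key name are placeholders.

import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient
from azure.keyvault.keys.crypto import CryptographyClient, KeyWrapAlgorithm

credential = DefaultAzureCredential()
key_client = KeyClient(vault_url="https://<your-vault-name>.vault.azure.net", credential=credential)

# The key encryption key (KEK) lives in Key Vault and never leaves it.
kek = key_client.get_key("app-data-protection-key")
crypto_client = CryptographyClient(kek, credential)

# The data encryption key (DEK) is generated locally and used to encrypt the data itself.
dek = os.urandom(32)

# Store only the wrapped DEK alongside the data; unwrapping requires access to the KEK.
wrapped_dek = crypto_client.wrap_key(KeyWrapAlgorithm.rsa_oaep, dek).encrypted_key
unwrapped_dek = crypto_client.unwrap_key(KeyWrapAlgorithm.rsa_oaep, wrapped_dek).key
assert unwrapped_dek == dek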
Encrypted storage
Like PaaS, IaaS solutions can leverage other Azure services that store data encrypted at
rest. In these cases, you can enable the Encryption at Rest support as provided by each
consumed Azure service. The Data encryption models: supporting services table
enumerates the major storage, services, and application platforms and the model of
Encryption at Rest supported.
Encrypted compute
All Managed Disks, Snapshots, and Images are encrypted with Storage Service
Encryption using a service-managed key. A more complete Encryption at Rest solution
ensures that the data is never persisted in unencrypted form. While processing the data
on a virtual machine, data can be persisted to the Windows page file or Linux swap file,
a crash dump, or to an application log. To ensure this data is encrypted at rest, IaaS
applications can use Azure Disk Encryption on an Azure IaaS virtual machine (Windows
or Linux) and virtual disk.
Azure storage
All Azure Storage services (Blob storage, Queue storage, Table storage, and Azure Files)
support server-side encryption at rest; some services additionally support customer-
managed keys and client-side encryption.
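For example, a blob uploaded with the azure-storage-blob package is encrypted at rest by the service automatically; the connection string, container, and blob names below are placeholders, and the snippet is a minimal sketch rather than a complete data-protection solution.

from azure.storage.blob import BlobServiceClient

# Placeholder connection string; in production prefer Microsoft Entra ID credentials.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="docs", blob="report.txt")

# Storage Service Encryption encrypts the data server-side before it is persisted;
# no extra client configuration is needed when service-managed keys are used.
blob.upload_blob(b"hello, encrypted world", overwrite=True)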
Azure SQL Database
Support for server encryption is currently provided through the SQL feature called
Transparent Data Encryption (TDE). Once an Azure SQL Database customer enables TDE, keys
are automatically created and managed for them. Encryption at rest can be enabled at
the database and server levels. As of June 2017, Transparent Data Encryption (TDE) is
enabled by default on newly created databases. Azure SQL Database supports RSA
2048-bit customer-managed keys in Azure Key Vault. For more information, see
Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database
and Data Warehouse.
Client-side encryption of Azure SQL Database data is supported through the Always
Encrypted feature. Always Encrypted uses a key that's created and stored by the client.
Customers can store the master key in a Windows certificate store, Azure Key Vault, or a
local Hardware Security Module. Using SQL Server Management Studio, SQL users
choose what key they'd like to use to encrypt which column.
Conclusion
Protection of customer data stored within Azure Services is of paramount importance to
Microsoft. All Azure hosted services are committed to providing Encryption at Rest
options. Azure services support either service-managed keys, customer-managed keys,
or client-side encryption. Azure services are broadly enhancing Encryption at Rest
availability and new options are planned for preview and general availability in the
upcoming months.
Next steps
See data encryption models to learn more about service-managed keys and
customer-managed keys.
Learn how Azure uses double encryption to mitigate threats that come with
encrypting data.
Learn what Microsoft does to ensure platform integrity and security of hosts
traversing the hardware and firmware build-out, integration, operationalization,
and repair pipelines.
Hypervisor security on the Azure fleet
Article • 11/11/2022
The Azure hypervisor system is based on Windows Hyper-V. The hypervisor system
enables the computer administrator to specify guest partitions that have separate
address spaces. The separate address spaces allow you to load an operating system and
applications that operate in parallel with the host operating system, which executes in the
root partition of the computer. The host OS (also known as the privileged root partition) has
direct access to all the physical devices and peripherals on the system (storage
controllers, network adapters). The host OS allows guest partitions to share the use
of these physical devices by exposing “virtual devices” to each guest partition. Thus, an
operating system executing in a guest partition has access to virtualized peripheral
devices that are provided by virtualization services executing in the root partition.
The Azure hypervisor is built keeping the following security objectives in mind:
Isolation: A security policy mandates no information transfer between VMs. This constraint
requires capabilities in the Virtual Machine Manager (VMM) and hardware for isolation of
memory, devices, the network, and managed resources such as persisted data.
VMM integrity: To achieve overall system integrity, the integrity of individual hypervisor
components is established and maintained.
Platform integrity: The integrity of the hypervisor depends on the integrity of the hardware
and software on which it relies. Although the hypervisor doesn't have direct control over
the integrity of the platform, Azure relies on hardware and firmware mechanisms such as the
Cerberus chip to protect and detect the underlying platform integrity. The VMM and guests
are prevented from running if platform integrity is compromised.
Audit: Azure enables audit capability to capture and protect data about what happens on a
system so that it can later be inspected.
Confidentiality, integrity, and availability are assured for the hypervisor security
boundaries. The boundaries defend against a range of attacks including side-channel
information leaks, denial-of-service, and elevation of privilege.
The hypervisor security boundary also provides segmentation between tenants for
network traffic, virtual devices, storage, compute resources, and all other VM resources.
These mitigations are designed to make the development of an exploit for a cross-VM
vulnerability infeasible.
Next steps
To learn more about what we do to drive platform integrity and security, see:
Firmware security
Platform code integrity
Secure boot
Measured boot and host attestation
Project Cerberus
Encryption at rest
Isolation in the Azure Public Cloud
Article • 10/12/2023
Azure allows you to run applications and virtual machines (VMs) on shared physical
infrastructure. One of the prime economic motivations for running applications in a
cloud environment is the ability to distribute the cost of shared resources among
multiple customers. This practice of multi-tenancy improves efficiency by multiplexing
resources among disparate customers at low cost. Unfortunately, it also introduces the
risk that the physical servers and other infrastructure resources that run your sensitive
applications and VMs are shared with arbitrary and potentially malicious users.
This article outlines how Azure provides isolation against both malicious and non-
malicious users and serves as a guide for architecting cloud solutions by offering various
isolation choices to architects.
Each Microsoft Entra directory is distinct and separate from other Microsoft Entra
directories. Just like a corporate office building is a secure asset specific to only your
organization, a Microsoft Entra directory was also designed to be a secure asset for use
by only your organization. The Microsoft Entra architecture isolates customer data and
identity information from co-mingling. This means that users and administrators of one
Microsoft Entra directory can't accidentally or maliciously access data in another
directory.
Azure Tenancy
Azure tenancy (Azure Subscription) refers to a “customer/billing” relationship and a
unique tenant in Microsoft Entra ID. Tenant level isolation in Microsoft Azure is achieved
using Microsoft Entra ID and Azure role-based access control offered by it. Each Azure
subscription is associated with one Microsoft Entra directory.
Users, groups, and applications from that directory can manage resources in the Azure
subscription. You can assign these access rights using the Azure portal, Azure
command-line tools, and Azure Management APIs. A Microsoft Entra tenant is logically
isolated using security boundaries so that no customer can access or compromise co-
tenants, either maliciously or accidentally. Microsoft Entra ID runs on “bare metal”
servers isolated on a segregated network segment, where host-level packet filtering and
Windows Firewall block unwanted connections and traffic.
Physical access to servers that comprise the Microsoft Entra service, and direct
access to Microsoft Entra ID’s back-end systems, is restricted.
Microsoft Entra users have no access to physical assets or locations, and therefore
it isn't possible for them to bypass the logical Azure RBAC policy checks stated
below.
For diagnostics and maintenance needs, an operational model that employs a just-in-
time privilege elevation system is required and used. Microsoft Entra Privileged Identity
Management (PIM) introduces the concept of an eligible admin. Eligible admins should
be users that need privileged access now and then, but not every day. The role is
inactive until the user needs access, then they complete an activation process and
become an active admin for a predetermined amount of time.
Microsoft Entra ID hosts each tenant in its own protected container, with policies and
permissions to and within the container solely owned and managed by the tenant.
The concept of tenant containers is deeply ingrained in the directory service at all layers,
from portals all the way to persistent storage.
Even when metadata from multiple Microsoft Entra tenants is stored on the same
physical disk, there's no relationship between the containers other than what is defined
by the directory service, which in turn is dictated by the tenant administrator.
Azure RBAC has three basic roles that apply to all resource types:
Owner has full access to all resources including the right to delegate access to
others.
Contributor can create and manage all types of Azure resources but can’t grant
access to others.
Reader can view existing Azure resources.
The rest of the roles in Azure allow management of specific Azure resources. For
example, the Virtual Machine Contributor role allows the user to create and manage
virtual machines. It doesn't give them access to the Azure Virtual Network or the subnet
that the virtual machine connects to.
Azure built-in roles list the roles available in Azure. It specifies the operations and scope
that each built-in role grants to users. If you're looking to define your own roles for even
more control, see how to build Custom roles in Azure RBAC.
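As a hedged illustration of scoping access, the following Python sketch (assuming a recent version of the azure-mgmt-authorization package) assigns a built-in role to a principal at resource-group scope. The subscription ID, resource group, role definition GUID, and principal object ID are all placeholders.

import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

credential = DefaultAzureCredential()
auth_client = AuthorizationManagementClient(credential, "<subscription-id>")

# Scope the assignment to a single resource group rather than the whole subscription.
scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"

auth_client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # each role assignment needs its own unique name (a GUID)
    {
        "role_definition_id": "/subscriptions/<subscription-id>/providers/"
        "Microsoft.Authorization/roleDefinitions/<role-definition-guid>",
        "principal_id": "<user-group-or-service-principal-object-id>",
    },
)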
Microsoft Entra Domain Services lets you join Azure virtual machines to an
Active Directory domain without deploying domain controllers. You can sign in to
these virtual machines with your corporate Active Directory credentials and
administer domain-joined virtual machines by using Group Policy to enforce
security baselines on all your Azure virtual machines.
Microsoft engineers don't have default access to your data in the cloud. Instead,
they're granted access, under management oversight, only when necessary. That
access is carefully controlled and logged, and revoked when it's no longer needed.
Microsoft may hire other companies to provide limited services on its behalf.
Subcontractors may access customer data only to deliver the services for which, we
have hired them to provide, and they're prohibited from using it for any other
purpose. Further, they're contractually bound to maintain the confidentiality of our
customers’ information.
Business services with audited certifications such as ISO/IEC 27001 are regularly verified
by Microsoft and accredited audit firms, which perform sample audits to attest that access
is granted only for legitimate business purposes. You can always access your own customer
data at any time and for any reason.
If you delete any data, Microsoft Azure deletes the data, including any cached or backup
copies. For in-scope services, that deletion will occur within 90 days after the end of the
retention period. (In-scope services are defined in the Data Processing Terms section of
our Online Services Terms.)
If a disk drive used for storage suffers a hardware failure, it's securely erased or
destroyed before Microsoft returns it to the manufacturer for replacement or repair.
The data on the drive is overwritten to ensure that the data can't be recovered by any
means.
Compute Isolation
Microsoft Azure provides various cloud-based computing services that include a wide
selection of compute instances & services that can scale up and down automatically to
meet the needs of your application or enterprise. These compute instances and services
offer isolation at multiple levels to secure data without sacrificing the flexibility in
configuration that customers demand.
Isolated virtual machine sizes are best suited for workloads that require a high degree of
isolation from other customers’ workloads. This is sometimes required to meet
compliance and regulatory requirements. Utilizing an isolated size guarantees that your
virtual machine is the only one running on that specific server instance.
Additionally, as the Isolated size VMs are large, customers may choose to subdivide the
resources of these VMs by using Azure support for nested virtual machines. Isolated VM
sizes include:
Standard_E80ids_v4
Standard_E80is_v4
Standard_E104i_v5
Standard_E104is_v5
Standard_E104id_v5
Standard_E104ids_v5
Standard_M192is_v2
Standard_M192ims_v2
Standard_M192ids_v2
Standard_M192idms_v2
Standard_F72s_v2
Standard_M128ms
FAQ
Q: What are the milestones for G5, Gs5, E64i_v3 and
E64is_v3 isolation retirement?
A: Existing customers using these sizes will receive an announcement email with detailed
instructions on the next steps.
Next steps
Customers can also choose to further subdivide the resources of these Isolated virtual
machines by using Azure support for nested virtual machines.
Dedicated hosts
In addition to the isolated hosts described in the preceding section, Azure also offers
dedicated hosts. Dedicated hosts in Azure is a service that provides physical servers that
can host one or more virtual machines, and which are dedicated to a single Azure
subscription. Dedicated hosts provide hardware isolation at the physical server level. No
other VMs will be placed on your hosts. Dedicated hosts are deployed in the same
datacenters and share the same network and underlying storage infrastructure as other,
non-isolated hosts. For more information, see the detailed overview of Azure dedicated
hosts.
The Azure platform uses a virtualized environment. User instances operate as standalone
virtual machines that don't have access to a physical host server.
The Azure hypervisor acts like a micro-kernel and passes all hardware access requests
from guest virtual machines to the host for processing by using a shared-memory
interface called VM Bus. This prevents users from obtaining raw read/write/execute
access to the system and mitigates the risk of sharing system resources.
The Azure hypervisor enforces memory and process separation between virtual
machines, and it securely routes network traffic to guest OS tenants. This eliminates the
possibility of side-channel attacks at the VM level.
In Azure, the root VM is special: it runs a hardened operating system called the root OS
that hosts a fabric agent (FA). FAs are used in turn to manage guest agents (GA) within
guest operating systems on customer VMs. FAs also manage storage nodes.
The collection of Azure hypervisor, root OS/FA, and customer VMs/GAs comprises a
compute node. FAs are managed by a fabric controller (FC), which exists outside of
compute and storage nodes (compute and storage clusters are managed by separate
FCs). If a customer updates their application’s configuration file while it’s running, the FC
communicates with the FA, which then contacts GAs, which notify the application of the
configuration change. In the event of a hardware failure, the FC will automatically find
available hardware and restart the VM there.
Communication from a Fabric Controller to an agent is unidirectional. The agent
implements an SSL-protected service that only responds to requests from the controller.
It cannot initiate connections to the controller or other privileged internal nodes. The FC
treats all responses as if they were untrusted.
Isolation separates the root VM from the guest VMs, and the guest VMs from one
another. Compute nodes are also isolated from storage nodes for increased protection.
The hypervisor and the host OS provide network packet filters to help assure that
untrusted virtual machines cannot generate spoofed traffic or receive traffic not
addressed to them, direct traffic to protected infrastructure endpoints, or send/receive
inappropriate broadcast traffic.
VLAN Isolation
There are three VLANs in each cluster: the main VLAN, which interconnects untrusted
customer nodes; the FC VLAN, which contains trusted FCs and supporting systems; and the
device VLAN, which contains trusted network and other infrastructure devices.
Communication is permitted from the FC VLAN to the main VLAN, but cannot be
initiated from the main VLAN to the FC VLAN. Communication is also blocked from the
main VLAN to the device VLAN. This assures that even if a node running customer code
is compromised, it cannot attack nodes on either the FC or device VLANs.
Storage Isolation
A shared access signature (SAS) means that we can grant a client limited permissions to
objects in our storage account for a specified period of time and with a specified set of
permissions, without having to share your account access keys.
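A hedged Python sketch of that pattern with the azure-storage-blob package: generate a read-only SAS for a single blob that expires after one hour. The account, container, and blob names, and the account key, are placeholders.

from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Grant read-only access to one blob for one hour without handing out the account key.
sas_token = generate_blob_sas(
    account_name="<storage-account>",
    container_name="docs",
    blob_name="report.txt",
    account_key="<storage-account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

url = f"https://<storage-account>.blob.core.windows.net/docs/report.txt?{sas_token}"
print(url)  # hand this URL to the client instead of the account key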
IP storage data can be protected from unauthorized users via a networking mechanism
that's used to allocate a dedicated channel or tunnel of traffic to IP storage.
Encryption
Azure offers the following types of Encryption to protect data:
Encryption in transit
Encryption at rest
Encryption in Transit
Encryption in transit is a mechanism of protecting data when it's transmitted across
networks. With Azure Storage, you can secure data using:
Transport-level encryption, such as HTTPS when you transfer data into or out of
Azure Storage.
Wire encryption, such as SMB 3.0 encryption for Azure File shares.
Client-side encryption, to encrypt the data before it's transferred into storage and
to decrypt the data after it's transferred out of storage.
Encryption at Rest
For many organizations, data encryption at rest is a mandatory step towards data
privacy, compliance, and data sovereignty. There are three Azure features that provide
encryption of data that's "at rest":
Storage Service Encryption allows you to request that the storage service
automatically encrypt data when writing it to Azure Storage.
Client-side Encryption also provides the feature of encryption at rest.
Azure Disk Encryption for Linux VMs and Azure Disk Encryption for Windows VMs.
The Disk Encryption solution for Windows is based on Microsoft BitLocker Drive
Encryption, and the Linux solution is based on dm-crypt.
Azure Disk Encryption supports a defined set of scenarios for IaaS VMs when they're
enabled in Microsoft Azure; certain scenarios, features, and technologies aren't supported
in the release. See the Azure Disk Encryption documentation for the current lists of
supported and unsupported scenarios.
SQL Database isolation
From an application perspective, SQL Database provides a hierarchy of account,
subscription, logical SQL server, and database, where each level has one-to-many
containment of the levels below.
The account and subscription are Microsoft Azure platform concepts to associate billing
and management.
Logical SQL servers and databases are SQL Database-specific concepts and are
managed by using SQL Database, provided OData and TSQL interfaces or via the Azure
portal.
Servers in SQL Database aren't physical or VM instances; instead, they're collections of
databases sharing management and security policies, which are stored in the so-called
“logical master” database.
Logical master databases include server-level metadata such as the SQL logins used to
connect to the server, firewall rules, and billing and usage-related information.
Databases from the same server aren't guaranteed to be on the same physical instance in
the cluster; instead, applications must provide the target database name when connecting.
Behind the VIP (virtual IP address), we have a collection of stateless gateway services. In
general, gateways get involved when there's coordination needed between multiple
data sources (master database, user database, etc.). Gateway services implement the
following:
TDS connection proxying. This includes locating user database in the backend
cluster, implementing the login sequence and then forwarding the TDS packets to
the backend and back.
Database management. This includes implementing a collection of workflows to
do CREATE/ALTER/DROP database operations. The database operations can be
invoked by either sniffing TDS packets or explicit OData APIs.
CREATE/ALTER/DROP login/user operations
Server management operations via OData API
The tier behind the gateways is called “back-end”. This is where all the data is stored in a
highly available fashion. Each piece of data is said to belong to a “partition” or “failover
unit”, each of them having at least three replicas. Replicas are stored and replicated by
SQL Server engine and managed by a failover system often referred to as “fabric”.
Traffic isolation: A virtual network is the traffic isolation boundary on the Azure
platform. Virtual machines (VMs) in one virtual network cannot communicate directly to
VMs in a different virtual network, even if both virtual networks are created by the same
customer. Isolation is a critical property that ensures customer VMs and communication
remains private within a virtual network.
A subnet offers an additional layer of isolation within a virtual network based on IP range.
Using IP address ranges in the virtual network, you can divide a virtual network into multiple
subnets for organization and security. VMs and PaaS role instances deployed to subnets
(same or different) within a VNet can communicate with each other without any extra
configuration. You can also configure network security groups (NSGs) to allow or deny
network traffic to a VM instance based on rules configured in access control list (ACL) of
NSG. NSGs can be associated with either subnets or individual VM instances within that
subnet. When an NSG is associated with a subnet, the ACL rules apply to all the VM
instances in that subnet.
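As an illustrative sketch of NSG-based filtering (assuming the azure-mgmt-network package), the following creates a network security group with a single rule that denies inbound RDP; the subscription, resource group, and location values are placeholders, and the rule is an example rather than a recommended baseline.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import NetworkSecurityGroup, SecurityRule

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# A single deny rule for inbound TCP 3389 (RDP) from any source.
deny_rdp = SecurityRule(
    name="deny-inbound-rdp",
    priority=100,
    direction="Inbound",
    access="Deny",
    protocol="Tcp",
    source_address_prefix="*",
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="3389",
)

nsg = NetworkSecurityGroup(location="eastus", security_rules=[deny_rdp])
poller = network_client.network_security_groups.begin_create_or_update(
    "<resource-group>", "example-nsg", nsg
)
print(poller.result().name)  # associate the NSG with a subnet or NIC afterwards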
Next Steps
Learn about Network Isolation Options for Machines in Windows Azure Virtual
Networks. This includes the classic front-end and back-end scenario where
machines in a particular back-end network or subnetwork may only allow certain
clients or other computers to connect to a particular endpoint based on an
allowlist of IP addresses.
Learn about virtual machine isolation in Azure. Azure Compute offers virtual
machine sizes that are isolated to a specific hardware type and dedicated to a
single customer.
Azure identity management security
overview
Article • 01/25/2024
By taking advantage of the security benefits of Microsoft Entra ID, you can:
Create and manage a single identity for each user across your hybrid enterprise,
keeping users, groups, and devices in sync.
Provide SSO access to your applications, including thousands of pre-integrated
SaaS apps.
Enable application access security by enforcing rules-based multifactor
authentication for both on-premises and cloud applications.
Provision secure remote access to on-premises web applications through Microsoft
Entra application proxy.
The goal of this article is to provide an overview of the core Azure security features that
help with identity management. We also provide links to articles that give details of each
feature so you can learn more.
The article focuses on the following core Azure Identity management capabilities:
Single sign-on
Reverse proxy
Multifactor authentication
Azure role-based access control (Azure RBAC)
Security monitoring, alerts, and machine learning-based reports
Consumer identity and access management
Device registration
Privileged identity management
Identity protection
Hybrid identity management/Azure AD connect
Microsoft Entra access reviews
Single sign-on
Single sign-on (SSO) means being able to access all the applications and resources that
you need to do business, by signing in only once using a single user account. Once
signed in, you can access all of the applications you need without being required to
authenticate (for example, type a password) a second time.
Many organizations rely upon SaaS applications such as Microsoft 365, Box, and
Salesforce for user productivity. Historically, IT staff needed to individually create and
update user accounts in each SaaS application, and users had to remember a password
for each SaaS application.
Microsoft Entra ID extends on-premises Active Directory environments into the cloud,
enabling users to use their primary organizational account to sign in not only to their
domain-joined devices and company resources, but also to all the web and SaaS
applications they need for their jobs.
Not only do users not have to manage multiple sets of usernames and passwords, you
can provision or de-provision application access automatically, based on their
organizational groups and their employee status. Microsoft Entra ID introduces security
and access governance controls with which you can centrally manage users' access
across SaaS applications.
Learn more:
Overview on SSO
Video on authentication fundamentals
Quickstart series on application management
Reverse proxy
Microsoft Entra application proxy lets you publish applications that reside on a private
network, such as SharePoint sites, Outlook Web App, and IIS-based apps, and provides
secure access to users outside your network. Application Proxy
provides remote access and SSO for many types of on-premises web applications with
the thousands of SaaS applications that Microsoft Entra ID supports. Employees can sign
in to your apps from home on their own devices and authenticate through this cloud-
based proxy.
Learn more:
Multifactor authentication
Microsoft Entra multifactor authentication is a method of authentication that requires
the use of more than one verification method and adds a critical second layer of security
to user sign-ins and transactions. Multifactor authentication helps safeguard access to
data and applications while meeting user demand for a simple sign-in process. It
delivers strong authentication via a range of verification options: phone calls, text
messages, or mobile app notifications or verification codes and third-party OAuth
tokens.
Azure RBAC
Azure RBAC is an authorization system built on Azure Resource Manager that provides
fine-grained access management of resources in Azure. Azure RBAC allows you to
granularly control the level of access that users have. For example, you can limit a user
to only manage virtual networks and another user to manage all resources in a resource
group. Azure includes several built-in roles that you can use. The following lists four
fundamental built-in roles. The first three apply to all resource types.
Owner - Has full access to all resources including the right to delegate access to
others.
Contributor - Can create and manage all types of Azure resources but can't grant
access to others.
Reader - Can view existing Azure resources.
User Access Administrator - Lets you manage user access to Azure resources.
Learn more:
Security monitoring, alerts, and machine learning-based reports
Security monitoring, alerts, and machine learning-based reports that identify inconsistent
access patterns can help you protect your business. Reporting capabilities include:
Anomaly reports: Contain sign-in events that we found to be anomalous. Our goal
is to make you aware of such activity and enable you to determine whether an
event is suspicious.
Integrated Application reports: Provide insights into how cloud applications are
being used in your organization. Microsoft Entra ID offers integration with
thousands of cloud applications.
Error reports: Indicate errors that might occur when you provision accounts to
external applications.
User-specific reports: Display device sign-in activity data for a specific user.
Activity logs: Contain a record of all audited events within the last 24 hours, last 7
days, or last 30 days, and group activity changes and password reset and
registration activity.
Consumer identity and access management
In the past, application developers who wanted to sign up customers and sign them in
to their applications would have written their own code. And they would have used on-
premises databases or systems to store usernames and passwords. Azure AD B2C offers
your organization a better way to integrate consumer identity management into
applications with the help of a secure, standards-based platform and a large set of
extensible policies.
When you use Azure AD B2C, your consumers can sign up for your applications by using
their existing social accounts (Facebook, Google, Amazon, LinkedIn) or by creating new
credentials (email address and password, or username and password).
Learn more:
Device registration
Microsoft Entra device registration is the foundation for device-based Conditional
Access scenarios. When a device is registered, Microsoft Entra device registration
provides the device with an identity that it uses to authenticate the device when a user
signs in. The authenticated device and the attributes of the device can then be used to
enforce Conditional Access policies for applications that are hosted in the cloud and on-
premises.
When combined with a mobile device management solution such as Intune, the device
attributes in Microsoft Entra ID are updated with additional information about the
device. You can then create Conditional Access rules that enforce access from devices to
meet your standards for security and compliance.
Learn more:
Privileged identity management
Users sometimes need to carry out privileged operations in Azure or Microsoft 365
resources, or in other SaaS apps. This need often means that organizations have to give
users permanent privileged access in Microsoft Entra ID. Such access is a growing
security risk for cloud-hosted resources, because organizations can't sufficiently monitor
what the users are doing with their administrator privileges. Additionally, if a user
account with privileged access is compromised, that one breach could affect the
organization's overall cloud security. Microsoft Entra Privileged Identity Management
helps to mitigate this risk.
Learn more:
Identity protection
Microsoft Entra ID Protection is a security service that provides a consolidated view into
risk detections and potential vulnerabilities that affect your organization's identities.
Identity Protection takes advantage of existing Microsoft Entra anomaly-detection
capabilities, which are available through Microsoft Entra Anomalous Activity reports.
Identity Protection also introduces new risk detection types that can detect anomalies in
real time.
Hybrid identity management/Azure AD connect
Microsoft Entra Connect integrates your on-premises directories with Microsoft Entra ID
and provides capabilities that include:
Synchronization
AD FS and federation integration
Pass-through authentication
Health monitoring
Learn more:
Azure identity management and access control security best practices
In this article, we discuss a collection of Azure identity management and access control
security best practices. These best practices are derived from our experience with
Microsoft Entra ID and the experiences of customers like yourself.
This Azure identity management and access control security best practices article is
based on a consensus opinion and Azure platform capabilities and feature sets, as they
exist at the time this article was written.
The intention in writing this article is to provide a general roadmap to a more robust
security posture after deployment guided by our “5 steps to securing your identity
infrastructure” checklist, which walks you through some of our core features and
services.
Opinions and technologies change over time and this article will be updated on a
regular basis to reflect those changes.
Azure identity management and access control security best practices discussed in this
article include:
Microsoft Entra ID is the Azure solution for identity and access management. Microsoft
Entra ID is a multitenant, cloud-based directory and identity management service from
Microsoft. It combines core directory services, application access management, and
identity protection into a single solution.
The following sections list best practices for identity and access security using Microsoft
Entra ID.
Best practice: Center security controls and detections around user and service identities.
Detail: Use Microsoft Entra ID to collocate controls and identities.
Best practice: Establish a single Microsoft Entra instance. Consistency and a single
authoritative source will increase clarity and reduce security risks from human errors and
configuration complexity.
Detail: Designate a single Microsoft Entra directory as the authoritative source for
corporate and organizational accounts.
Best practice: Integrate your on-premises directories with Microsoft Entra ID.
Detail: Use Microsoft Entra Connect to synchronize your on-premises directory with
your cloud directory.
Note
There are factors that affect the performance of Microsoft Entra Connect. Ensure
Microsoft Entra Connect has enough capacity to keep underperforming systems
from impeding security and productivity. Large or complex organizations
(organizations provisioning more than 100,000 objects) should follow the
recommendations to optimize their Microsoft Entra Connect implementation.
Best practice: Don’t synchronize accounts to Microsoft Entra ID that have high privileges
in your existing Active Directory instance.
Detail: Don’t change the default Microsoft Entra Connect configuration that filters out
these accounts. This configuration mitigates the risk of adversaries pivoting from cloud
to on-premises assets (which could create a major incident).
Even if you decide to use federation with Active Directory Federation Services (AD FS) or
other identity providers, you can optionally set up password hash synchronization as a
backup in case your on-premises servers fail or become temporarily unavailable. This
sync enables users to sign in to the service by using the same password that they use to
sign in to their on-premises Active Directory instance. It also allows Identity Protection
to detect compromised credentials by comparing synchronized password hashes with
passwords known to be compromised, if a user has used the same email address and
password on other services that aren't connected to Microsoft Entra ID.
For more information, see Implement password hash synchronization with Microsoft
Entra Connect Sync.
Best practice: For new application development, use Microsoft Entra ID for
authentication.
Detail: Use the correct capabilities to support authentication: Microsoft Entra ID for
employees and organizational identities, Microsoft Entra B2B for guest users and external
partners, and Azure AD B2C to control how customers sign up, sign in, and manage their
profiles when they use your applications.
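For example, a new daemon or web service can authenticate to Microsoft Entra ID with the MSAL for Python library. The tenant ID, client ID, and client secret below are placeholders, and the Microsoft Graph scope is just one common choice.

import msal

# Placeholder app registration details for a confidential client (daemon or web app).
app = msal.ConfidentialClientApplication(
    client_id="<application-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)

# Acquire a token with the client credentials flow; MSAL caches and refreshes tokens.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" in result:
    print("token acquired; call the downstream API with it")
else:
    print("error:", result.get("error_description"))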
Organizations that don’t integrate their on-premises identity with their cloud identity
can have more overhead in managing accounts. This overhead increases the likelihood
of mistakes and security breaches.
Note
You need to choose which directories critical accounts will reside in and whether
the admin workstation used is managed by new cloud services or existing
processes. Using existing management and identity provisioning processes can
decrease some risks but can also create the risk of an attacker compromising an
on-premises account and pivoting to the cloud. You might want to use a different
strategy for different roles (for example, IT admins vs. business unit admins). You
have two options. First option is to create Microsoft Entra accounts that aren’t
synchronized with your on-premises Active Directory instance. Join your admin
workstation to Microsoft Entra ID, which you can manage and patch by using
Microsoft Intune. Second option is to use existing admin accounts by synchronizing
to your on-premises Active Directory instance. Use existing workstations in your
Active Directory domain for management and security.
See elevate access to manage all Azure subscriptions and management groups to
ensure that you and your security group can view all subscriptions or management
groups connected to your environment. You should remove this elevated access after
you’ve assessed risks.
By using the same identity solution for all your apps and resources, you can achieve
SSO. And your users can use the same set of credentials to sign in and access the
resources that they need, whether the resources are located on-premises or in the
cloud.
Use SSO to enable users to access their SaaS applications based on their work or school
account in Microsoft Entra ID. This is applicable not only for Microsoft SaaS apps, but
also other apps, such as Google Apps and Salesforce. You can configure your application
to use Microsoft Entra ID as a SAML-based identity provider. As a security control,
Microsoft Entra ID does not issue a token that allows users to sign in to the application
unless they have been granted access through Microsoft Entra ID. You can grant access
directly, or through a group that users are a member of.
Organizations that don’t create a common identity to establish SSO for their users and
applications are more exposed to scenarios where users have multiple passwords. These
scenarios increase the likelihood of users reusing passwords or using weak passwords.
To balance security and productivity, you need to think about how a resource is
accessed before you can make a decision about access control. With Microsoft Entra
Conditional Access, you can address this requirement. With Conditional Access, you can
make automated access control decisions based on conditions for accessing your cloud
apps.
Identity Secure Score is a set of recommended security controls that Microsoft publishes
that works to provide you a numerical score to objectively measure your security
posture and help plan future security improvements. You can also view your score in
comparison to those in other industries as well as your own trends over time.
Best practice: Plan routine security reviews and improvements based on best practices
in your industry.
Detail: Use the Identity Secure Score feature to rank your improvements over time.
Best practice: Set up self-service password reset (SSPR) for your users.
Detail: Use the Microsoft Entra ID self-service password reset feature.
There are multiple options for requiring two-step verification. The best option for you
depends on your goals, the Microsoft Entra edition you’re running, and your licensing
program. See How to require two-step verification for a user to determine the best
option for you. See the Microsoft Entra ID and Microsoft Entra multifactor
Authentication pricing pages for more information about licenses and pricing.
Option 1: Enable MFA for all users and login methods with Microsoft Entra Security
Defaults
Benefit: This option enables you to easily and quickly enforce MFA for all users in your
environment with a stringent policy that challenges administrative accounts and
administrative sign-in mechanisms, requires MFA challenges for all users, and restricts
legacy authentication protocols.
This method is available to all licensing tiers but can't be mixed with existing
Conditional Access policies. You can find more information in Microsoft Entra Security
Defaults.
Another option uses the Microsoft Entra ID Protection risk evaluation to determine if two-
step verification is required based on user and sign-in risk for all cloud applications. This
method requires Microsoft Entra ID P2 licensing. You can find more information on this
method in Microsoft Entra ID Protection.
Organizations that don’t add extra layers of identity protection, such as two-step
verification, are more susceptible to credential theft attacks. A credential theft attack can
lead to data compromise.
Designating groups or individual roles responsible for specific functions in Azure helps
avoid confusion that can lead to human and automation errors that create security risks.
Restricting access based on the need to know and least privilege security principles
is imperative for organizations that want to enforce security policies for data access.
Your security team needs visibility into your Azure resources in order to assess and
remediate risk. If the security team has operational responsibilities, they need additional
permissions to do their jobs.
You can use Azure RBAC to assign permissions to users, groups, and applications at a
certain scope. The scope of a role assignment can be a subscription, a resource group,
or a single resource.
Best practice: Segregate duties within your team and grant only the amount of access to
users that they need to perform their jobs. Instead of giving everybody unrestricted
permissions in your Azure subscription or resources, allow only certain actions at a
particular scope.
Detail: Use Azure built-in roles in Azure to assign privileges to users.
Best practice: Grant security teams with Azure responsibilities access to see Azure
resources so they can assess and remediate risk.
Detail: Grant security teams the Azure RBAC Security Reader role. You can use the root
management group or the segment management group, depending on the scope of
responsibilities:
Root management group for teams responsible for all enterprise resources
Segment management group for teams with limited scope (commonly because of
regulatory or other organizational boundaries)
Best practice: Grant the appropriate permissions to security teams that have direct
operational responsibilities.
Detail: Review the Azure built-in roles for the appropriate role assignment. If the built-in
roles don't meet the specific needs of your organization, you can create Azure custom
roles. As with built-in roles, you can assign custom roles to users, groups, and service
principals at subscription, resource group, and resource scopes.
Best practice: Grant Microsoft Defender for Cloud access to security roles that need it.
Defender for Cloud allows security teams to quickly identify and remediate risks.
Detail: Add security teams with these needs to the Azure RBAC Security Admin role so
they can view security policies, view security states, edit security policies, view alerts and
recommendations, and dismiss alerts and recommendations. You can do this by using
the root management group or the segment management group, depending on the
scope of responsibilities.
Organizations that don’t enforce data access control by using capabilities like Azure
RBAC might be giving more privileges than necessary to their users. This can lead to
data compromise by allowing users to access types of data (for example, high business
impact) that they shouldn’t have.
Privileged accounts are accounts that administer and manage IT systems. Cyber
attackers target these accounts to gain access to an organization’s data and systems. To
secure privileged access, you should isolate the accounts and systems from the risk of
being exposed to a malicious user.
We recommend that you develop and follow a roadmap to secure privileged access
against cyber attackers. For information about creating a detailed roadmap to secure
identities and access that are managed or reported in Microsoft Entra ID, Microsoft
Azure, Microsoft 365, and other cloud services, review Securing privileged access for
hybrid and cloud deployments in Microsoft Entra ID.
The following summarizes the best practices found in Securing privileged access for
hybrid and cloud deployments in Microsoft Entra ID:
Best practice: Ensure all critical admin accounts are managed Microsoft Entra accounts.
Detail: Remove any consumer accounts from critical admin roles (for example, Microsoft
accounts like hotmail.com, live.com, and outlook.com).
Best practice: Ensure all critical admin roles have a separate account for administrative
tasks in order to avoid phishing and other attacks to compromise administrative
privileges.
Detail: Create a separate admin account that’s assigned the privileges needed to
perform the administrative tasks. Block the use of these administrative accounts for daily
productivity tools like Microsoft 365 email or arbitrary web browsing.
Best practice: Identify and categorize accounts that are in highly privileged roles.
Detail: After turning on Microsoft Entra Privileged Identity Management, view the users
who are in the global administrator, privileged role administrator, and other highly
privileged roles. Remove any accounts that are no longer needed in those roles, and
categorize the remaining accounts that are assigned to admin roles.
Best practice: Implement “just in time” (JIT) access to further lower the exposure time of
privileges and increase your visibility into the use of privileged accounts.
Detail: Use Microsoft Entra Privileged Identity Management to provide time-bound, just-in-time access to privileged roles.
Evaluate the accounts that are assigned or eligible for the global admin role. If you don’t
see any cloud-only accounts by using the *.onmicrosoft.com domain (intended for
emergency access), create them. For more information, see Managing emergency access
administrative accounts in Microsoft Entra ID.
Best practice: Have a “break glass” process in place in case of an emergency.
Detail: Follow the steps in Securing privileged access for hybrid and cloud deployments
in Microsoft Entra ID.
Require Microsoft Entra multifactor authentication at sign-in for all individual users who
are permanently assigned to one or more of the Microsoft Entra admin roles: Global
Administrator, Privileged Role Administrator, Exchange Online Administrator, and
SharePoint Online Administrator. Enable multifactor authentication for your admin
accounts and ensure that admin account users have registered.
Best practice: For critical admin accounts, have an admin workstation where production
tasks aren’t allowed (for example, browsing and email). This will protect your admin
accounts from attack vectors that use browsing and email and significantly lower your
risk of a major incident.
Detail: Use an admin workstation. Choose a level of workstation security:
Highly secure productivity devices provide advanced security for browsing and
other productivity tasks.
Privileged Access Workstations (PAWs) provide a dedicated operating system
that’s protected from internet attacks and threat vectors for sensitive tasks.
Best practice: Deprovision admin accounts when employees leave your organization.
Detail: Have a process in place that disables or deletes admin accounts when employees
leave your organization.
Best practice: Regularly test admin accounts by using current attack techniques.
Detail: Use Microsoft 365 Attack Simulator or a third-party offering to run realistic attack
scenarios in your organization. This can help you find vulnerable users before a real
attack occurs.
Best practice: Take steps to mitigate the most frequently used attack techniques.
Detail: Identify Microsoft accounts in administrative roles that need to be switched to
work or school accounts
Ensure separate user accounts and mail forwarding for global administrator accounts
Require multifactor authentication for users in all privileged roles as well as exposed
users
Obtain your Microsoft 365 Secure Score (if using Microsoft 365)
Review the Microsoft 365 security guidance (if using Microsoft 365)
If you don’t secure privileged access, you might find that you have too many users in
highly privileged roles and are more vulnerable to attacks. Malicious actors, including
cyber attackers, often target admin accounts and other elements of privileged access to
gain access to sensitive data and systems by using credential theft.
You can use Azure Resource Manager to create security policies whose definitions
describe the actions or resources that are specifically denied. You assign those policy
definitions at the desired scope, such as the subscription, the resource group, or an
individual resource.
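As a hedged sketch of the idea, the following Python snippet creates and assigns a custom policy definition that denies creation of a specific resource type. It assumes the azure-identity and azure-mgmt-resource packages; the policy name, rule, and scope are illustrative only.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "<subscription-id>"  # placeholder
client = PolicyClient(DefaultAzureCredential(), subscription_id)

# Illustrative rule: deny creation of public IP addresses in this subscription.
definition = client.policy_definitions.create_or_update(
    "deny-public-ip",
    {
        "policy_type": "Custom",
        "display_name": "Deny public IP addresses",
        "policy_rule": {
            "if": {"field": "type", "equals": "Microsoft.Network/publicIPAddresses"},
            "then": {"effect": "deny"},
        },
    },
)

# Assign the definition at subscription scope; it could also target a resource group.
client.policy_assignments.create(
    f"/subscriptions/{subscription_id}",
    "deny-public-ip-assignment",
    {"policy_definition_id": definition.id},
)
```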
7 Note
Security policies are not the same as Azure RBAC. They actually use Azure RBAC to
authorize users to create those resources.
Organizations that are not controlling how resources are created are more susceptible
to users who might abuse the service by creating more resources than they need.
Hardening the resource creation process is an important step to securing a multitenant
scenario.
Actively monitor for suspicious activities
An active identity monitoring system can quickly detect suspicious behavior and trigger
an alert for further investigation. The following Microsoft Entra capabilities can help
organizations monitor their identities:
Detail: Use Microsoft Entra ID P1 or P2 anomaly reports. Have processes and procedures
in place for IT admins to run these reports on a daily basis or on demand (usually in an
incident response scenario).
Best practice: Have an active monitoring system that notifies you of risks and can adjust
risk level (high, medium, or low) to your business requirements.
Detail: Use Microsoft Entra ID Protection, which flags the current risks on its own
dashboard and sends daily summary notifications via email. To help protect your
organization's identities, you can configure risk-based policies that automatically
respond to detected issues when a specified risk level is reached.
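If you want to pull the same risk data programmatically, for example in an incident response runbook, a minimal sketch using Microsoft Graph follows. It assumes the requests and azure-identity packages and an identity that has been granted the IdentityRiskyUser.Read.All permission; the filter is illustrative.

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire a Microsoft Graph token for the signed-in identity or managed identity.
token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token

response = requests.get(
    "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers",
    headers={"Authorization": f"Bearer {token}"},
    params={"$filter": "riskLevel eq 'high'"},  # illustrative: only high-risk users
    timeout=30,
)
response.raise_for_status()

for user in response.json().get("value", []):
    print(user["userPrincipalName"], user["riskLevel"], user["riskState"])
```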
Organizations that don’t actively monitor their identity systems are at risk of having user
credentials compromised. Without knowledge that suspicious activities are taking place
through these credentials, organizations can’t mitigate this type of threat.
We recommend that you use Microsoft Entra ID for authenticating access to storage.
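For example, an application can reach Blob Storage with a Microsoft Entra token instead of account keys or SAS tokens. This is a minimal sketch assuming the azure-identity and azure-storage-blob packages and an identity that holds a data-plane role such as Storage Blob Data Reader; the account and container names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Authenticate to storage with Microsoft Entra ID; no account key or SAS is used.
service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)

container = service.get_container_client("<container-name>")  # placeholder
for blob in container.list_blobs():
    print(blob.name)
```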
Next step
See Azure security best practices and patterns for more security best practices to use
when you’re designing, deploying, and managing your cloud solutions by using Azure.
Five steps to securing your identity
infrastructure
Article • 10/12/2023
If you're reading this document, you're aware of the significance of security. You likely
already carry the responsibility for securing your organization. If you need to convince
others of the importance of security, send them to read the latest Microsoft Digital
Defense Report.
This document helps you achieve a more secure posture by using the capabilities of
Microsoft Entra ID and a five-step checklist to improve your organization's protection
against cyber-attacks.
This checklist helps you quickly deploy critical recommended actions to protect your
organization immediately.
7 Note
Many of the recommendations in this document apply only to applications that are
configured to use Microsoft Entra ID as their identity provider. Configuring apps for
Single Sign-On assures the benefits of credential policies, threat detection,
auditing, logging, and other features add to those applications. Microsoft Entra
Application Management is the foundation on which all these recommendations
are based.
The recommendations in this document are aligned with the Identity Secure Score, an
automated assessment of your Microsoft Entra tenant’s identity security configuration.
Organizations can use the Identity Secure Score page in the Microsoft Entra admin
center to find gaps in their current security configuration to ensure they follow current
Microsoft best practices for security. Implementing each recommendation in the Secure
Score page will increase your score and allow you to track your progress, plus help you
compare your implementation against other organizations of similar size.
7 Note
Attackers who get control of privileged accounts can do tremendous damage, so it's
critical to protect these accounts before proceeding. Enable and require Microsoft Entra
multifactor authentication (MFA) for all administrators in your organization by using
Microsoft Entra Security Defaults or Conditional Access.
All set? Let's get started on the checklist.
As an organization you need to make sure that your identities are validated and secured
with MFA everywhere. In 2020, the FBI IC3 Report identified phishing as the top crime
type for victim complaints. The number of reports doubled compared to the previous
year. Phishing poses a significant threat to both businesses and individuals, and
credential phishing was used in many of the most damaging attacks last year. Microsoft
Entra multifactor authentication (MFA) helps safeguard access to data and applications,
providing another layer of security by using a second form of authentication.
Organizations can enable multifactor authentication with Conditional Access to make
the solution fit their specific needs. Take a look at this deployment guide to see how
to plan, implement, and roll out Microsoft Entra multifactor authentication.
If your organization has Microsoft Entra ID P1 or P2 licenses, then you can also use the
Conditional Access insights and reporting workbook to help you discover gaps in your
configuration and coverage. From these recommendations, you can easily close this gap
by creating a policy using the new Conditional Access templates experience. Conditional
Access templates are designed to provide an easy method to deploy new policies that
align with Microsoft recommended best practices, making it easy to deploy common
policies to protect your identities and devices.
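To make the shape of such a policy concrete, here's a hedged sketch that creates a report-only Conditional Access policy requiring MFA for all users through Microsoft Graph. It assumes the requests and azure-identity packages and the Policy.ReadWrite.ConditionalAccess permission; in practice you would exclude your emergency access accounts and review the Conditional Access templates first.

```python
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token

policy = {
    "displayName": "Require MFA for all users (report-only)",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
    "conditions": {
        "clientAppTypes": ["all"],
        "users": {"includeUsers": ["All"]},       # exclude break-glass accounts in practice
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token}"},
    json=policy,
    timeout=30,
)
response.raise_for_status()
print("created policy", response.json()["id"])
```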
The Users with leaked credentials report in Microsoft Entra ID warns of username
and password pairs that have been exposed publicly. An incredible volume of
passwords is leaked via phishing, malware, and password reuse on third-party sites
that are later breached. Microsoft finds many of these leaked credentials and will
tell you, in this report, if they match credentials in your organization – but only if
you enable password hash sync or have cloud-only identities.
If an on-premises outage happens, like a ransomware attack, you can switch over
to using cloud authentication using password hash sync. This backup
authentication method will allow you to continue accessing apps configured for
authentication with Microsoft Entra ID, including Microsoft 365. In this case, IT staff
won't need to resort to shadow IT or personal email accounts to share data until
the on-premises outage is resolved.
Passwords are never stored in clear text or encrypted with a reversible algorithm in
Microsoft Entra ID. For more information on the actual process of password hash
synchronization, see Detailed description of how password hash synchronization works.
For more information, see the article Blocking legacy authentication protocols in
Microsoft Entra ID.
For more information on how to use Conditional Access for your Cloud Apps and user
actions, see Conditional Access Cloud apps, actions, and authentication context.
Privileged roles in Microsoft Entra ID should be held by cloud-only accounts in order to
isolate them from any on-premises environments, and their credentials shouldn't be
stored in on-premises password vaults.
Microsoft Entra Privileged Identity Management (PIM) helps you minimize account
privileges by providing time-bound, just-in-time access to privileged roles.
Enable Microsoft Entra PIM, then view the users who are assigned administrative roles
and remove unnecessary accounts in those roles. For remaining privileged users, move
them from permanent to eligible. Finally, establish appropriate policies to make sure
when they need to gain access to those privileged roles, they can do so securely, with
the necessary change control.
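One way to review who currently holds administrative roles is to enumerate activated directory roles and their members through Microsoft Graph. The sketch below is illustrative and assumes the requests and azure-identity packages plus a permission such as RoleManagement.Read.Directory; PIM-eligible (not yet activated) assignments need the PIM APIs instead.

```python
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# directoryRoles returns only roles that have been activated in the tenant.
roles = requests.get(
    "https://graph.microsoft.com/v1.0/directoryRoles", headers=headers, timeout=30
).json()["value"]

for role in roles:
    members = requests.get(
        f"https://graph.microsoft.com/v1.0/directoryRoles/{role['id']}/members",
        headers=headers,
        timeout=30,
    ).json()["value"]
    for member in members:
        print(role["displayName"], "->", member.get("userPrincipalName", member["id"]))
```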
Microsoft Entra built-in and custom roles operate on concepts similar to roles found in
the role-based access control system for Azure resources (Azure roles). The differences
between these two role-based access control systems are:
Microsoft Entra roles control access to Microsoft Entra resources such as users,
groups, and applications using the Microsoft Graph API
Azure roles control access to Azure resources such as virtual machines or storage
using Azure Resource Management
Both systems contain similarly used role definitions and role assignments. However,
Microsoft Entra role permissions can't be used in Azure custom roles and vice versa. As
part of deploying your privileged account process, follow the best practice to create at
least two emergency accounts to make sure you still have access to Microsoft Entra ID if
you lock yourself out.
For more information, see the article Plan a Privileged Identity Management deployment
and securing privileged access.
Microsoft recommends restricting user consent to allow end-user consent only for apps
from verified publishers and only for permissions you select. If end-user consent is
restricted, previous consent grants will still be honored but all future consent operations
must be performed by an administrator. For restricted cases, admin consent can be
requested by users through an integrated admin consent request workflow or through
your own support processes. Before restricting end-user consent, use our
recommendations to plan this change in your organization. For applications you wish to
allow all users to access, consider granting consent on behalf of all users, making sure
users who haven’t yet consented individually will be able to access the app. If you don’t
want these applications to be available to all users in all scenarios, use application
assignment and Conditional Access to restrict user access to specific apps.
Make sure users can request admin approval for new applications to reduce user
friction, minimize support volume, and prevent users from signing up for applications
using non-Microsoft Entra credentials. Once you regulate your consent operations,
administrators should audit app and consent permissions regularly.
For more information, see the article Microsoft Entra consent framework.
For more information, see the article How To: Configure and enable risk policies.
Included in the user risk detection is a check of whether the user's credentials match
credentials leaked by cybercriminals. To function optimally, it's important to implement
password hash synchronization with Microsoft Entra Connect Sync.
Learn more about Microsoft Threat Protection and the importance of integrating
different domains, in the following short video.
https://www.microsoft.com/en-us/videoplayer/embed/RE4Bzww?postJsllMsg=true
1. Risky sign-in reports surface user sign-in activities that you should investigate;
the legitimate owner might not have performed the sign-in.
2. Risky user reports surface user accounts that might have been compromised,
such as a leaked credential that was detected or a sign-in from different
locations that causes an impossible travel event.
Microsoft Entra ID currently provides three areas of automated provisioning.
Find out more here: What is provisioning with Microsoft Entra ID?
Summary
There are many aspects to a secure identity infrastructure, but this five-step checklist will
help you quickly accomplish a safer and more secure identity infrastructure.
We appreciate how seriously you take security and hope this document is a useful
roadmap to a more secure posture for your organization.
Next steps
If you need assistance to plan and deploy the recommendations, refer to the Microsoft
Entra ID project deployment plans for help.
If you're confident all these steps are complete, use Microsoft’s Identity Secure Score,
which will keep you up to date with the latest best practices and security threats.
Passwordless authentication options for
Microsoft Entra ID
Article • 08/06/2024
Features like multifactor authentication (MFA) are a great way to secure your
organization, but users often get frustrated with the extra security layer on top of having
to remember their passwords. Passwordless authentication methods are more
convenient because the password is removed and replaced with something you have
plus something you are or something you know.
Each organization has different needs when it comes to authentication. Microsoft Entra
ID and Azure Government integrate the following passwordless authentication options:
Windows Hello for Business, Microsoft Authenticator, passkeys (FIDO2), and
certificate-based authentication.
The following steps show how the Windows Hello for Business sign-in process works with Microsoft Entra ID (a conceptual sketch of the nonce exchange follows the list):
1. A user signs into Windows using biometric or PIN gesture. The gesture unlocks the
Windows Hello for Business private key and is sent to the Cloud Authentication
security support provider, called the Cloud Authentication Provider (CloudAP). For
more information about CloudAP, see What is a Primary Refresh Token?.
2. The CloudAP requests a nonce (a random arbitrary number that can be used once)
from Microsoft Entra ID.
3. Microsoft Entra ID returns a nonce that's valid for 5 minutes.
4. The CloudAP signs the nonce using the user's private key and returns the signed
nonce to the Microsoft Entra ID.
5. Microsoft Entra ID validates the signed nonce using the user's securely registered
public key against the nonce signature. Microsoft Entra ID validates the signature,
and then validates the returned signed nonce. When the nonce is validated,
Microsoft Entra ID creates a primary refresh token (PRT) with session key that is
encrypted to the device's transport key, and returns it to the CloudAP.
6. The CloudAP receives the encrypted PRT with session key. The CloudAP uses the
device's private transport key to decrypt the session key, and protects the session
key by using the device's Trusted Platform Module (TPM).
7. The CloudAP returns a successful authentication response to Windows. The user is
then able to access Windows and cloud and on-premises applications by using
seamless sign-on (SSO).
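The exchange in steps 2 through 5 is a standard public-key challenge/response. The following conceptual sketch, written with the Python cryptography package, shows the pattern only; it is not the actual Windows Hello for Business or Microsoft Entra implementation.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Device-bound key pair; in Windows Hello for Business the private key never leaves
# the device and is unlocked only by the user's PIN or biometric gesture.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()  # registered with the identity provider at enrollment

# The identity provider issues a short-lived, single-use nonce (steps 2-3).
nonce = os.urandom(32)

# The client signs the nonce with the device-bound private key (step 4).
signature = private_key.sign(nonce, ec.ECDSA(hashes.SHA256()))

# The identity provider verifies the signature with the registered public key (step 5).
public_key.verify(signature, nonce, ec.ECDSA(hashes.SHA256()))  # raises if invalid
print("nonce signature verified")
```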
The Windows Hello for Business planning guide can be used to help you make decisions
on the type of Windows Hello for Business deployment and the options you need to
consider.
Platform Credential for macOS can also be used as a phishing-resistant credential for
use in WebAuthn challenges, including browser re-authentication scenarios.
Authentication Policy Administrators need to enable the Passkey (FIDO2) authentication
method to support Platform Credential for macOS as a phishing-resistant credential. If
you use Key Restriction Policies in your FIDO policy, you need to add the AAGUID for the
macOS Platform Credential to your list of allowed AAGUIDs: 7FD635B3-2EF9-4542-8D9D-
164F2C771EFC .
1. A user unlocks macOS using fingerprint or password gesture, which unlocks the
key bag to provide access to UserSecureEnclaveKey.
2. The macOS requests a nonce (a random arbitrary number that can be used just
once) from Microsoft Entra ID.
3. Microsoft Entra ID returns a nonce that's valid for 5 minutes.
4. The operating system (OS) sends a login request to Microsoft Entra ID with an
embedded assertion signed with the UserSecureEnclaveKey that resides in the
Secure Enclave.
5. Microsoft Entra ID validates the signed assertion using the user's securely
registered public key of UserSecureEnclave key. Microsoft Entra ID validates the
signature and nonce. Once the assertion is validated, Microsoft Entra ID creates a
primary refresh token (PRT) encrypted with the public key of the
UserDeviceEncryptionKey that is exchanged during registration and sends the
response back to the OS.
6. The OS decrypts and validates the response, retrieves the SSO tokens, and stores
and shares them with the SSO extension to provide SSO. The user is able to access
macOS, cloud, and on-premises applications by using SSO.
Refer to macOS Platform SSO for more information on how to configure and deploy
Platform Credential for macOS.
Microsoft Authenticator
You can also allow your employee's phone to become a passwordless authentication
method. You could already be using the Authenticator app as a convenient multifactor
authentication option in addition to a password. You can also use the Authenticator App
as a passwordless option.
The Authenticator App turns any iOS or Android phone into a strong, passwordless
credential. Users can sign in to any platform or browser by getting a notification to their
phone, matching a number displayed on the screen to the one on their phone. Then
they can use their biometric (touch or face) or PIN to confirm. For installation details, see
Download and install the Microsoft Authenticator.
Passkeys (FIDO2)
Users can register a passkey (FIDO2) and choose it as their primary sign-in method. With
a hardware device that handles the authentication, the security of an account is
increased as there's no password that can be exposed or guessed. Currently in preview,
an Authentication Administrator can also provision a FIDO2 security key on behalf of a
user by using the Microsoft Graph API and a custom client. Provisioning on behalf of
users is currently limited to security keys.
The FIDO (Fast IDentity Online) Alliance helps to promote open authentication standards
and reduce the use of passwords as a form of authentication. FIDO2 is the latest
standard that incorporates the web authentication (WebAuthn) standard. FIDO allows
organizations to apply the WebAuthn standard by using an external security key, or a
platform key built into a device, to sign in without a username or password.
Users can use FIDO2 security keys to sign in to their Microsoft Entra ID or Microsoft Entra
hybrid joined Windows 10 devices and get single sign-on to their cloud and on-
premises resources. Users can also sign in to supported browsers. FIDO2 security keys
are a great option for enterprises who are very security sensitive or have scenarios or
employees who aren't willing or able to use their phone as a second factor.
For more information about passkey (FIDO2) support, see Support for passkey (FIDO2)
authentication with Microsoft Entra ID. For developer best practices, see Support FIDO2
auth in the applications they develop.
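In practice, applications don't implement FIDO2 themselves; they delegate sign-in to Microsoft Entra ID, and the browser or platform performs the WebAuthn ceremony. A minimal sketch with MSAL for Python follows; the client ID and tenant ID are placeholders for an app registration you control.

```python
import msal

app = msal.PublicClientApplication(
    client_id="<application-client-id>",                       # placeholder
    authority="https://login.microsoftonline.com/<tenant-id>",  # placeholder
)

# The system browser handles the credential ceremony, so whatever the tenant allows
# (passkey, Windows Hello, Authenticator, password + MFA) works without app changes.
result = app.acquire_token_interactive(scopes=["User.Read"])

if "access_token" in result:
    print("signed in as", result.get("id_token_claims", {}).get("preferred_username"))
else:
    print("sign-in failed:", result.get("error_description"))
```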
The following process is used when a user signs in with a FIDO2 security key:
1. The user plugs the FIDO2 security key into their computer.
2. Windows detects the FIDO2 security key.
3. Windows sends an authentication request.
4. Microsoft Entra ID sends back a nonce.
5. The user completes their gesture to unlock the private key stored in the FIDO2
security key's secure enclave.
6. The FIDO2 security key signs the nonce with the private key.
7. The primary refresh token (PRT) request with the signed nonce is sent to
Microsoft Entra ID.
8. Microsoft Entra ID verifies the signed nonce using the FIDO2 public key.
9. Microsoft Entra ID returns PRT to enable access to on-premises resources.
For a list of FIDO2 security key providers, see Become a Microsoft-compatible FIDO2
security key vendor.
To get started with FIDO2 security keys, complete the following how-to:
Certificate-based authentication
Microsoft Entra certificate-based authentication (CBA) enables customers to allow or
require users to authenticate directly with X.509 certificates against their Microsoft Entra
ID for applications and browser sign-in. CBA enables customers to adopt phishing-
resistant authentication and sign in with an X.509 certificate against their Public Key
Infrastructure (PKI).
Key benefits of using Microsoft Entra CBA
Great user experience:
- Users who need certificate-based authentication can now directly authenticate against Microsoft Entra ID and not have to invest in federation.
- Portal UI enables users to easily configure how to map certificate fields to a user object attribute to look up the user in the tenant (certificate username bindings).
- Portal UI to configure authentication policies to help determine which certificates are single-factor versus multifactor.

Easy to deploy and administer:
- Microsoft Entra CBA is a free feature, and you don't need any paid editions of Microsoft Entra ID to use it.
- No need for complex on-premises deployments or network configuration.
- Directly authenticate against Microsoft Entra ID.

Secure:
- On-premises passwords don't need to be stored in the cloud in any form.
- Protects your user accounts by working seamlessly with Microsoft Entra Conditional Access policies, including phishing-resistant multifactor authentication (MFA requires a licensed edition) and blocking legacy authentication.
- Strong authentication support where users can define authentication policies through the certificate fields, such as issuer or policy OID (object identifiers), to determine which certificates qualify as single-factor versus multifactor.
- The feature works seamlessly with Conditional Access features and authentication strength capability to enforce MFA to help secure your users.
Supported scenarios
The following scenarios are supported:
The following considerations apply:
Unsupported scenarios
We recommend no more than 20 sets of keys for each passwordless method for any
user account. As more keys are added, the user object size increases, and you could
notice degradation for some operations. In that case, you should remove unnecessary
keys. For more information and the PowerShell cmdlets to query and remove keys, see
Using WHfBTools PowerShell module for cleaning up orphaned Windows Hello for
Business Keys. Use the /UserPrincipalName optional parameter to query only keys for
a specific user. The permissions required are to run as an administrator or the specified
user.
When you use PowerShell to create a CSV file with all of the existing keys, carefully
identify the keys that you need to keep, and remove those rows from the CSV. Then use
the modified CSV with PowerShell to delete the remaining keys to bring the account key
count under the limit.
It's safe to delete any key reported as "Orphaned"="True" in the CSV. An orphaned key
is one for a device that is no longer registered in Microsoft Entra ID. If removing all
Orphans still doesn't bring the User account below the limit, it's necessary to look at the
DeviceId and CreationTime columns to identify which keys to target for deletion. Be
careful to remove any row in the CSV for keys you want to keep. Keys for any DeviceID
corresponding to devices the user actively uses should be removed from the CSV before
the deletion step.
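As an illustration of that CSV-editing step, the sketch below keeps only the rows flagged as orphaned so that the file handed to the deletion step can't touch keys that are still in use. The file names are hypothetical; the Orphaned column name comes from the export described above.

```python
import csv

# Hypothetical input/output file names.
with open("all_keys.csv", newline="") as source, \
        open("keys_to_delete.csv", "w", newline="") as target:
    reader = csv.DictReader(source)
    writer = csv.DictWriter(target, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Only orphaned keys survive into the deletion CSV; anything you want to
        # keep must not appear in that file.
        if row.get("Orphaned", "").strip().lower() == "true":
            writer.writerow(row)
```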
Here are some factors for you to consider when choosing Microsoft passwordless
technology:
Windows Hello for Business
- Prerequisite: Windows 10, version 1809 or later; Microsoft Entra ID
- Systems and devices: PC with a built-in Trusted Platform Module (TPM); PIN and biometrics recognition
- User experience: Sign in using a PIN or biometric recognition (facial, iris, or fingerprint) with Windows devices

Passwordless sign-in with the Authenticator app
- Prerequisite: Authenticator app; phone (iOS and Android devices)
- Systems and devices: PIN and biometrics recognition on phone
- User experience: Sign in using a mobile phone with the Authenticator app

FIDO2 security keys
- Prerequisite: Windows 10, version 1903 or later; Microsoft Entra ID
- Systems and devices: FIDO2 security devices that are Microsoft compatible
- User experience: Sign in using a FIDO2 security device (biometrics, PIN, and NFC)
Use the following persona-based guidance to choose which method supports your
requirements and users. For example, an admin needs secure access to a device for
management tasks, typically on an assigned Windows 10 device, using Windows Hello
for Business and/or a FIDO2 security key.
Next steps
To get started with passwordless in Microsoft Entra ID, complete one of the following
how-tos:
External Links
FIDO Alliance
FIDO2 Client to Authenticator Protocol (CTAP) specification
SE:05 Implement strict, conditional, and auditable identity and access management (IAM)
across all workload users, team members, and system components. Limit access
exclusively to as necessary. Use modern industry standards for all authentication
and authorization implementations. Restrict and rigorously audit access that's not
based on identity.
This guide describes the recommendations for authenticating and authorizing identities
that are attempting to access your workload resources.
From a technical control perspective, identity is always the primary perimeter. This
scope doesn't just include the edges of your workload. It also includes individual
components that are inside your workload. Typical identities include:
Systems. Workload identities, managed identities, API keys, service principals, and
Azure resources.
Anonymous. Entities who haven't provided any evidence about who they are.
Definitions
Authentication (AuthN): A process that verifies that an identity is who or what it says it is.
Persona: A job function or a title that has a set of responsibilities and actions.
Preshared keys: A type of secret that's shared between a provider and consumer and used through a secure and agreed upon mechanism.
Resource identity: An identity defined for cloud resources that's managed by the platform.
Role: A set of permissions that define what a user or group can do.
Security principal: An identity that provides permissions. It can be a user, a group, or a service principal. Any group members get the same level of access.
Workload identity: A system identity for an application, service, script, container, or other component of a workload that's used to authenticate itself to other services and resources.
7 Note
An identity can be grouped with other, similar identities under a parent called a
security principal. A security group is an example of a security principal. This
hierarchical relationship simplifies maintenance and improves consistency. Because
identity attributes aren't handled at the individual level, chances of errors are also
reduced. In this article, the term identity is inclusive of security principals.
Take advantage of the capabilities provided by a trusted IdP for your identity and
access management. Don't implement custom systems to replace an IdP. IdP systems
are improved frequently based on the latest attack vectors by capturing billions of
signals across multiple tenants each day. Microsoft Entra ID is the IdP for the Azure cloud
platform.
Authentication
Authentication is a process that verifies identities. The requesting identity is required to
provide some form of verifiable identification. For example:
Authorization
Authorization is a process that allows or denies actions that are requested by the
verified identity. The action might be operational or related to resource management.
Authorization requires that you assign permissions to the identities, which you need to
do by using the functionality provided by your IdP.
Each use case will probably have its own set of controls that you need to design with an
assume-breach mindset. Based on the identity requirements of the use case or the
personas, identify the conditional choices. Avoid using one solution for all use cases.
Conversely, the controls shouldn't be so granular that you introduce unnecessary
management overhead.
You need to log the identity access trail. Doing so helps validate the controls, and you
can use the logs for compliance audits.
Inside-out access. Your application will need to access other resources. For
example, reading from or writing to the data platform, retrieving secrets from the
secret store, and logging telemetry to monitoring services. It might even need to
access third-party services. These access needs require workload identity, which
enables the application to authenticate itself against the other resources.
The concept applies at the component level. In the following example, the
container might need access to deployment pipelines to get its configuration.
These access needs require resource identity.
Data plane access. Actions that take place in the data plane cause data transfer for
inside-out or outside-in access. For example, an application reading data from a
database and writing data to a database, fetching secrets, or writing logs to a
monitoring sink. At the component level, compute that's pulling or pushing images
to or from a registry are considered data plane operations.
Control plane access. Actions that take place in the control plane cause an Azure
resource to be created, modified, or deleted. For example, changes to resource
properties.
Applications typically target data plane operations, while operations often access both
control and data planes. To identify authorization needs, note the operational actions
that can be performed on the resource. For information about the permitted actions for
each resource, see Azure resource provider operations.
Consider a workload identity as an example. The application must have data plane
access to the database, so read and write actions to the data resource must be allowed.
However, does the application need control plane access to the secret store? If the
workload identity is compromised by a bad actor, what would the impact to the system
be, in terms of confidentiality, integrity, and availability?
Role assignment
A role is a set of permissions that's assigned to an identity. Assign roles that only allow
the identity to complete the task, and no more. When users' permissions are restricted
to their job requirements, it's easier to identify suspicious or unauthorized behavior in
the system.
There are scenarios in which users need more access because of the organizational
structure and team organization. There might be an overlap between various roles, or
single users might perform multiple standard roles. In this case, use multiple role
assignments that are based on the business function instead of creating a custom role
for each of these users. Doing so makes the roles easier to manage.
A role also has an associated scope. The role can operate at the allowed management
group, subscription, resource group, or resource scope, or at another custom scope.
Even if the identity has a limited set of permissions, widening the scope to include
resources that are outside the identity's job function is risky. For example, read access to
all source code and data can be dangerous and must be controlled.
You assign roles to identities by using role-based access control (RBAC). Always use IdP-
provided RBAC to take advantage of features that enable you to apply access control
consistently and revoke it rigorously.
Use built-in roles. They're designed to cover most use cases. Custom roles are powerful
and sometimes useful, but you should reserve them for scenarios in which built-in roles
won't work. Customization leads to complexity that increases confusion and makes
automation more complex, challenging, and fragile. These factors all negatively impact
security.
Grant roles that start with least privilege and add more based on your operational or
data access needs. Your technical teams must have clear guidance to implement
permissions.
If you want fine-grained control on RBAC, add conditions on the role assignment based
on context, such as actions and attributes.
Those factors aren't mutually exclusive. A compromised identity that has more privileges
and unlimited duration of access can gain more control over the system and data or use
that access to continue to change the environment. Constrain those access factors both
as a preventive measure and to control the blast radius.
Just in Time (JIT) approaches provide the required privileges only when they're
needed.
Although time and privilege are the primary factors, there are other conditions that
apply. For example, you can also use the device, network, and location from which the
access originated to set policies.
Use strong controls that filter, detect, and block unauthorized access, including
parameters like user identity and location, device health, workload context, data
classification, and anomalies.
For example, your workload might need to be accessed by third-party identities like
vendors, partners, and customers. They need the appropriate level of access rather than
the default permissions that you provide to full-time employees. Clear differentiation of
external accounts makes it easier to prevent and detect attacks that come from these
vectors.
Your choice of IdP must be able to provide that differentiation, provide built-in features
that grant permissions based on the least privilege, and provide built-in threat
intelligence. This includes monitoring of access requests and sign-ins. The Azure IdP is
Microsoft Entra ID. For more information, see the Azure facilitation section of this article.
Avoid permanent or standing access by using the JIT features of your IdP. For
break glass situations, follow an emergency access process.
Use a single identity across environments and associate a single identity with the user or
principal. Consistency of identities across cloud and on-premises environments reduces
human errors and the resulting security risks. Teams in both environments that manage
resources need a consistent, authoritative source in order to meet security assurances.
Work with your central identity team to ensure that identities in hybrid environments are
synchronized.
When you can, avoid using secrets and consider using identity-based authentication for
user access to the application itself, not just to its resources.
The following list provides a summary of guidance. For more information, see
Recommendations for application secrets.
Treat these secrets as entities that can be dynamically pulled from a secret store,
as shown in the sketch after this list. They shouldn't be hard coded in your
application code, IaC scripts, deployment pipelines, or in any other artifact.
Apply operational practices that handle tasks like key rotation and expiration.
For information about rotation policies, see Automate the rotation of a secret for
resources that have two sets of authentication credentials and Tutorial: Updating
certificate auto-rotation frequency in Key Vault.
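A minimal sketch of pulling a secret at runtime instead of embedding it in code or pipelines is shown below. It assumes the azure-identity and azure-keyvault-secrets packages, placeholder vault and secret names, and an identity that holds an appropriate Key Vault data-plane role.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# The workload's identity (managed identity in Azure, developer sign-in locally)
# is resolved by DefaultAzureCredential, so no secret is embedded in the code.
client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",  # placeholder
    credential=DefaultAzureCredential(),
)

database_password = client.get_secret("<secret-name>").value  # placeholder name
```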
Verify that identity is authenticated with strong authentication. Any action must be
traceable to prevent repudiation attacks.
Detect weak or missing authentication protocols and get visibility into and
insights about user and application sign-ins.
Evaluate access from identities to the workload based on security and compliance
requirements and consider user account risk, device status, and other criteria and
policies that you set.
Most resources have data plane access. You need to know the identities that access
resources and the actions that they perform. You can use that information for security
diagnostics.
For more information, see Recommendations on security monitoring and threat analysis.
Azure facilitation
We recommend that you always use modern authentication protocols that take into
account all available data points and use conditional access. Microsoft Entra ID
provides identity and access management in Azure. It covers the management plane of
Azure and is integrated with the data planes of most Azure services. Microsoft Entra ID
is the tenant that's associated with the workload subscription. It tracks and manages
identities and their allowed permissions and simplifies overall management to minimize
the risk of oversight or human error.
These capabilities natively integrate into the same Microsoft Entra identity and
permission model for user segments:
You can use Microsoft Entra ID for authentication and authorization of custom
applications via Microsoft Authentication Library (MSAL) or platform features, like
authentication for web apps. It covers the management plane of Azure, the data planes
of most of Azure services, and integration capabilities for your applications.
You can stay current by visiting What's new in Microsoft Entra ID.
Tradeoff: Microsoft Entra ID is a single point of failure just like any
other foundational service. There's no workaround until the outage is fixed by
Microsoft. However, the rich feature set of Microsoft Entra ID outweighs the risk of
using custom solutions.
Azure supports open protocols like OAuth2 and OpenID Connect. We recommend that
you use these standard authentication and authorization mechanisms instead of
designing your own flows.
Azure RBAC
Azure RBAC represents security principals in Microsoft Entra ID. All role assignments are
done via Azure RBAC. Take advantage of built-in roles that provide most of the
permissions that you need. For more information, see Microsoft Entra built-in roles.
By assigning users to roles, you can control access to Azure resources. For more
information, see Overview of role-based access control in Microsoft Entra ID.
You can use Privileged Identity Management to provide time-based and approval-
based role activation for roles that are associated with high-impact identities. For
more information, see What is Privileged Identity Management?.
For more information about RBAC, see Best practices for Azure RBAC.
Workload identity
Microsoft Entra ID can handle your application's identity. The service principal that's
associated with the application can dictate its access scope.
For more information, see What are workload identities?.
The service principal is also abstracted when you use a managed identity. The
advantage is that Azure manages all credentials for the application.
Not all services support managed identities. If you can't use managed identities, you can
use service principals. However, using service principals increases your management
overhead. For more information, see What are managed identities for Azure resources?.
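To make the tradeoff concrete, here's a hedged sketch that contrasts the two credential types with the azure-identity package; the IDs are placeholders. With a managed identity, Azure owns the credential lifecycle; with a service principal, you own rotation and safe storage of the client secret.

```python
from azure.identity import ClientSecretCredential, ManagedIdentityCredential

# Preferred: a managed identity; pass client_id only for a user-assigned identity.
credential = ManagedIdentityCredential(client_id="<user-assigned-identity-client-id>")

# Fallback when a service doesn't support managed identities: a service principal.
# The client secret must come from a secret store and be rotated by you.
fallback_credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-registration-client-id>",
    client_secret="<client-secret-from-a-secret-store>",
)
```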
Resource identity
The concept of managed identities can be extended to Azure resources. Azure
resources can use managed identities to authenticate themselves to other services that
support Microsoft Entra authentication. For more information, see Azure services that
can use managed identities to access other services.
For more information, see Conditional access: Users, groups, and workload identities.
For more information, see Secure access control using groups in Microsoft Entra ID.
Threat detection
Microsoft Entra ID Protection can help you detect, investigate, and remediate identity-
based risks. For more information, see What is Identity Protection?.
Threat detection can take the form of reacting to an alert of suspicious activity or
proactively searching for anomalous events in activity logs. User and Entity Behavior
Analytics (UEBA) in Microsoft Sentinel makes it easy to detect suspicious activities. For
more information, see Identify advanced threats with UEBA.
Hybrid systems
On Azure, don't synchronize accounts to Microsoft Entra ID that have high privileges
in your existing Active Directory. This synchronization is blocked in the default
Microsoft Entra Connect Sync configuration, so you only need to confirm that you
haven't customized this configuration.
For information about filtering in Microsoft Entra ID, see Microsoft Entra Connect Sync:
Configure filtering.
Identity logging
Enable diagnostic settings on Azure resources to emit information that you can use as
an audit trail. The diagnostic information shows which identities attempt to access which
resources and the outcome of those attempts. The collected logs are sent to Azure
Monitor.
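For example, a diagnostic setting can route a resource's audit logs to a Log Analytics workspace. The sketch below uses the azure-identity and azure-mgmt-monitor packages against a hypothetical Key Vault resource; the log category and resource IDs are illustrative and differ per resource type.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"  # placeholder
monitor_client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

resource_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.KeyVault/vaults/<vault-name>"  # placeholder resource
)
workspace_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

# Send the vault's audit events to Azure Monitor so access attempts are traceable.
monitor_client.diagnostic_settings.create_or_update(
    resource_uri=resource_id,
    name="send-audit-to-log-analytics",
    parameters={
        "workspace_id": workspace_id,
        "logs": [{"category": "AuditEvent", "enabled": True}],
    },
)
```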
Tradeoff: Logging incurs costs because of the data storage that's used to store
the logs. It also might cause a performance impact, especially on the code and on
logging solutions that you add to the application.
Example
The following example shows an identity implementation. Different types of identities
are used together to provide the required levels of access.
Identity components
System-managed identities. Microsoft Entra ID provides access to service data
planes that don't face users, like Azure Key Vault and data stores. These identities
also control access, via RBAC, to the Azure management plane for workload
components, deployment agents, and team members.
The security of preshared secrets is critical for any application. Azure Key Vault provides
a secure storage mechanism for these secrets, including Redis and third-party secrets.
A rotation mechanism is used to help ensure that secrets aren't compromised. Tokens
for the Microsoft identity platform implementation of OAuth 2 and OpenID Connect are
used to authenticate users.
Azure Policy is used to ensure that identity components like Key Vault use RBAC instead
of access policies. JIT and JEA replace traditional standing permissions for human
operators.
Access logs are enabled across all components via Azure Diagnostics, or via code for
code components.
Related links
Tutorial: Automate the rotation of a secret for resources that have two sets of
authentication credentials
Tutorial: Updating certificate auto-rotation frequency in Key Vault
What's new in Microsoft Entra ID?
Microsoft Entra built-in roles
Overview of role-based access control in Microsoft Entra ID
What are workload identities?
What are managed identities for Azure resources?
Conditional access: Users, groups, and workload identities
Microsoft Entra Connect Sync: Configure filtering
Security checklist
Refer to the complete set of recommendations.
Security checklist
Azure network security overview
Article • 06/27/2024
This article covers some of the options that Azure offers in the area of network security.
You can learn about:
Azure networking
Network access control
Azure Firewall
Secure remote access and cross-premises connectivity
Availability
Name resolution
Perimeter network (DMZ) architecture
Azure DDoS protection
Azure Front Door
Traffic manager
Monitoring and threat detection
7 Note
For web workloads, we highly recommend utilizing Azure DDoS protection and a
web application firewall to safeguard against emerging DDoS attacks. Another
option is to deploy Azure Front Door along with a web application firewall. Azure
Front Door offers platform-level protection against network-level DDoS attacks.
Azure networking
Azure requires virtual machines to be connected to an Azure Virtual Network. A virtual
network is a logical construct built on top of the physical Azure network fabric. Each
virtual network is isolated from all other virtual networks. This helps ensure that network
traffic in your deployments is not accessible to other Azure customers.
Learn more:
Virtual network overview
7 Note
Storage Firewalls are covered in the Azure storage security overview article
If you need basic network level access control (based on IP address and the TCP or UDP
protocols), you can use Network Security Groups (NSGs). An NSG is a basic, stateful,
packet filtering firewall, and it enables you to control access based on a 5-tuple. NSGs
include functionality to simplify management and reduce the chances of configuration
mistakes (a short sketch follows this list):
Augmented security rules simplify NSG rule definition and allow you to create
complex rules rather than having to create multiple simple rules to achieve the
same result.
Service tags are Microsoft created labels that represent a group of IP addresses.
They update dynamically to include IP ranges that meet the conditions that define
inclusion in the label. For example, if you want to create a rule that applies to all
Azure Storage in the East US region, you can use Storage.EastUS.
Application security groups allow you to deploy resources to application groups
and control the access to those resources by creating rules that use those
application groups. For example, if you have web servers deployed to the
'Webservers' application group, you can create a rule that applies an NSG allowing
443 traffic from the Internet to all systems in the 'Webservers' application group.
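The sketch below shows one NSG rule that combines a service tag and an application security group, using the azure-identity and azure-mgmt-network packages. The resource names and ASG ID are placeholders, and the dictionary shape may vary slightly between SDK versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

asg_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Network/applicationSecurityGroups/Webservers"  # placeholder
)

# Allow HTTPS from the Internet service tag to members of the 'Webservers' ASG.
network_client.security_rules.begin_create_or_update(
    "<resource-group>",
    "<nsg-name>",
    "Allow-HTTPS-To-Webservers",
    {
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Allow",
        "priority": 310,
        "source_address_prefix": "Internet",  # service tag
        "source_port_range": "*",
        "destination_port_range": "443",
        "destination_application_security_groups": [{"id": asg_id}],
    },
).result()
```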
Learn more:
Service endpoints
Service endpoints are another way to apply control over your traffic. You can limit
communication with supported services to just your VNets over a direct connection.
Traffic from your VNet to the specified Azure service remains on the Microsoft Azure
backbone network.
Learn more:
Service endpoints
For example, you might have a virtual network security appliance on your virtual
network. You want to make sure that all traffic to and from your virtual network goes
through that virtual security appliance. You can do this by configuring User Defined
Routes (UDRs) in Azure.
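As a minimal sketch under the azure-identity and azure-mgmt-network packages, the route below sends all outbound traffic from a subnet's route table to a network virtual appliance; the names and IP address are placeholders, and the route table still has to be associated with the subnet.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Send all outbound traffic (0.0.0.0/0) through the virtual security appliance.
network_client.routes.begin_create_or_update(
    "<resource-group>",
    "<route-table-name>",
    "default-via-nva",
    {
        "address_prefix": "0.0.0.0/0",
        "next_hop_type": "VirtualAppliance",
        "next_hop_ip_address": "10.0.2.4",  # placeholder appliance IP
    },
).result()
```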
Forced tunneling is a mechanism you can use to ensure that your services are not
allowed to initiate a connection to devices on the internet. Note that this is different
from accepting incoming connections and then responding to them. Front-end web
servers need to respond to requests from internet hosts, and so internet-sourced traffic
is allowed inbound to these web servers and the web servers are allowed to respond.
What you don't want to allow is a front-end web server to initiate an outbound request.
Such requests might represent a security risk because these connections can be used to
download malware. Even if you do want these front-end servers to initiate outbound
requests to the internet, you might want to force them to go through your on-premises
web proxies. This enables you to take advantage of URL filtering and logging.
Instead, you would want to use forced tunneling to prevent this. When you enable
forced tunneling, all connections to the internet are forced through your on-premises
gateway. You can configure forced tunneling by taking advantage of UDRs.
Learn more:
You can access these enhanced network security features by using an Azure partner
solution. You can find the most current Azure partner network security solutions by
visiting the Azure Marketplace , and searching for "security" and "network security."
Azure Firewall
Azure Firewall is a cloud-native and intelligent network firewall security service that
provides threat protection for your cloud workloads running in Azure. It's a fully stateful
firewall as a service with built-in high availability and unrestricted cloud scalability. It
provides both east-west and north-south traffic inspection.
Azure Firewall is offered in three SKUs: Standard, Premium, and Basic. Azure Firewall
Standard provides L3-L7 filtering and threat intelligence feeds directly from Microsoft
Cyber Security. Azure Firewall Premium provides advanced capabilities that include
signature-based IDPS to allow rapid detection of attacks by looking for specific patterns.
Azure Firewall Basic is a simplified SKU that provides the same level of security as the
Standard SKU but without the advanced capabilities.
Learn more:
The point-to-site VPN connection enables you to set up a private and secure connection
between the user and the virtual network. When the VPN connection is established, the
user can RDP or SSH over the VPN link into any virtual machine on the virtual network.
(This assumes that the user can authenticate and is authorized.) Point-to-site VPN
supports:
IKEv2 VPN, a standards-based IPsec VPN solution. IKEv2 VPN can be used to
connect from Mac devices (OSX versions 10.11 and above).
OpenVPN
Learn more:
One way to accomplish this is to use a site-to-site VPN. The difference between a site-
to-site VPN and a point-to-site VPN is that the latter connects a single device to a virtual
network. A site-to-site VPN connects an entire network (such as your on-premises
network) to a virtual network. Site-to-site VPNs to a virtual network use the highly
secure IPsec tunnel mode VPN protocol.
Learn more:
Create a Resource Manager VNet with a site-to-site VPN connection using the
Azure portal
About VPN Gateway
VPN connections move data over the internet. This exposes these connections to
potential security issues involved with moving data over a public network. In
addition, reliability and availability for internet connections cannot be guaranteed.
VPN connections to virtual networks might not have the bandwidth for some
applications and purposes, as they max out at around 200 Mbps.
Organizations that need the highest level of security and availability for their cross-
premises connections typically use dedicated WAN links to connect to remote sites.
Azure provides you the ability to use a dedicated WAN link that you can use to connect
your on-premises network to a virtual network. Azure ExpressRoute, ExpressRoute Direct,
and ExpressRoute Global Reach enable this.
Learn more:
A better option might be to create a site-to-site VPN that connects between two virtual
networks. This method uses the same IPSec tunnel mode protocol as the cross-premises
site-to-site VPN connection mentioned above.
The advantage of this approach is that the VPN connection is established over the Azure
network fabric, instead of connecting over the internet. This provides you an extra layer
of security, compared to site-to-site VPNs that connect over the internet.
Learn more:
Another way to connect your virtual networks is VNET peering. This feature allows you
to connect two Azure networks so that communication between them happens over the
Microsoft backbone infrastructure without it ever going over the Internet. VNET peering
can connect two VNETs within the same region or two VNETs across Azure regions.
NSGs can be used to limit connectivity between different subnets or systems.
Availability
Availability is a key component of any security program. If your users and systems can't
access what they need to access over the network, the service can be considered
compromised. Azure has networking technologies that support the following high-
availability mechanisms:
Azure Application Gateway provides HTTP-based load balancing for your web-based
services. Application Gateway supports:
Learn more:
This load-balancing strategy can also yield performance benefits. You can direct
requests for the service to the datacenter that is nearest to the device that is making the
request.
In Azure, you can gain the benefits of global load balancing by using Azure Traffic
Manager.
Learn more:
Name resolution
Name resolution is a critical function for all services you host in Azure. From a security
perspective, compromise of the name resolution function can lead to an attacker
redirecting requests from your sites to an attacker's site. Secure name resolution is a
requirement for all your cloud hosted services.
Internal name resolution. This is used by services on your virtual networks, your
on-premises networks, or both. Names used for internal name resolution are not
accessible over the internet. For optimal security, it's important that your internal
name resolution scheme is not accessible to external users.
External name resolution. This is used by people and devices outside of your on-
premises networks and virtual networks. These are the names that are visible to the
internet, and are used to direct connections to your cloud-based services. For internal
name resolution, you have two options:
A virtual network DNS server. When you create a new virtual network, a DNS server
is created for you. This DNS server can resolve the names of the machines located
on that virtual network. This DNS server is not configurable, is managed by the
Azure fabric manager, and can therefore help you secure your name resolution
solution.
Bring your own DNS server. You have the option of putting a DNS server of your
own choosing on your virtual network. This DNS server can be an Active Directory
integrated DNS server, or a dedicated DNS server solution provided by an Azure
partner, which you can obtain from the Azure Marketplace.
Learn more:
Many large organizations host their own DNS servers on-premises. They can do this
because they have the networking expertise and global presence to do so.
In most cases, it's better to host your DNS name resolution services with a service
provider. These service providers have the network expertise and global presence to
ensure very high availability for your name resolution services. Availability is essential for
DNS services, because if your name resolution services fail, no one will be able to reach
your internet-facing services.
Azure provides you with a highly available and high-performing external DNS solution in
the form of Azure DNS. This external name resolution solution takes advantage of the
worldwide Azure DNS infrastructure. It allows you to host your domain in Azure, using
the same credentials, APIs, tools, and billing as your other Azure services. As part of
Azure, it also inherits the strong security controls built into the platform.
Learn more:
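As a minimal, hedged sketch of hosting a zone in Azure DNS with the Az PowerShell
module (the zone name, resource group, and IP address are placeholders):
PowerShell
# Create a public DNS zone and add an A record for www.
New-AzDnsZone -Name "contoso.com" -ResourceGroupName "dns-rg"

New-AzDnsRecordSet -Name "www" -RecordType A -ZoneName "contoso.com" `
    -ResourceGroupName "dns-rg" -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Ipv4Address "203.0.113.10")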
You can design perimeter networks in a number of different ways. The decision to
deploy a perimeter network, and then what type of perimeter network to use if you
decide to use one, depends on your network security requirements.
Learn more:
7 Note
For web workloads, we highly recommend utilizing Azure DDoS protection and a
web application firewall to safeguard against emerging DDoS attacks. Another
option is to deploy Azure Front Door along with a web application firewall. Azure
Front Door offers platform-level protection against network-level DDoS attacks.
Learn more:
For more information on the full set of Azure Front Door capabilities, see the Azure
Front Door overview.
Learn more:
Security Group View helps with auditing and security compliance of Virtual Machines.
Use this feature to perform programmatic audits, comparing the baseline policies
defined by your organization to effective rules for each of your VMs. This can help you
identify any configuration drift.
Packet capture allows you to capture network traffic to and from the virtual machine.
You can collect network statistics and troubleshoot application issues, which can be
invaluable in the investigation of network intrusions. You can also use this feature
together with Azure Functions to start network captures in response to specific Azure
alerts.
For more information on Network Watcher and how to start testing some of the
functionality in your labs, see the Azure Network Watcher monitoring overview.
7 Note
For the most up-to-date notifications on availability and status of this service, check
the Azure updates page .
Defender for Cloud helps you optimize and monitor network security by:
Learn more:
Logging
Logging at a network level is a key function for any network security scenario. In Azure,
you can log information obtained for NSGs to get network level logging information.
With NSG logging, you get information from:
Activity logs. Use these logs to view all operations submitted to your Azure
subscriptions. These logs are enabled by default and can be viewed in the Azure
portal. They were previously known as audit or operational logs.
Event logs. These logs provide information about what NSG rules were applied.
Counter logs. These logs let you know how many times each NSG rule was applied
to deny or allow traffic.
You can also use Microsoft Power BI, a powerful data visualization tool, to view and
analyze these logs. Learn more:
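The following is a hedged sketch of collecting this information with the Az PowerShell
module; the resource names and storage account ID are placeholders, and newer Az.Monitor
versions replace Set-AzDiagnosticSetting with New-AzDiagnosticSetting.
PowerShell
# Review recent activity log entries for a resource group.
Get-AzActivityLog -ResourceGroupName "network-rg" -StartTime (Get-Date).AddDays(-1)

# Send NSG event and rule counter logs to a storage account.
$nsg = Get-AzNetworkSecurityGroup -Name "app-nsg" -ResourceGroupName "network-rg"
Set-AzDiagnosticSetting -ResourceId $nsg.Id -Enabled $true `
    -Category NetworkSecurityGroupEvent, NetworkSecurityGroupRuleCounter `
    -StorageAccountId "/subscriptions/<subscription-id>/resourceGroups/network-rg/providers/Microsoft.Storage/storageAccounts/<account>"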
This article discusses a collection of Azure best practices to enhance your network
security. These best practices are derived from our experience with Azure networking
and the experiences of customers like yourself.
These best practices are based on a consensus opinion, and Azure platform capabilities
and feature sets, as they exist at the time this article was written. Opinions and
technologies change over time and this article will be updated regularly to reflect those
changes.
As you plan your network and the security of your network, we recommend that you
centralize:
If you use a common set of management tools to monitor your network and the security
of your network, you get clear visibility into both. A straightforward, unified security
strategy reduces errors because it increases human understanding and the reliability of
automation.
Best practice: Don't assign allow rules with broad ranges (for example, allow 0.0.0.0
through 255.255.255.255).
Detail: Ensure troubleshooting procedures discourage or ban setting up these types of
rules. These allow rules lead to a false sense of security and are frequently found and
exploited by red teams.
Best practice: Create network access controls between subnets. Routing between
subnets happens automatically, and you don't need to manually configure routing
tables. By default, there are no network access controls between the subnets that you
create on an Azure virtual network.
Detail: Use a network security group to protect against unsolicited traffic into Azure
subnets. Network security groups (NSGs) are simple, stateful packet inspection devices.
NSGs use the 5-tuple approach (source IP, source port, destination IP, destination port,
and layer 4 protocol) to create allow/deny rules for network traffic. You allow or deny
traffic to and from a single IP address, to and from multiple IP addresses, or to and from
entire subnets.
When you use network security groups for network access control between subnets, you
can put resources that belong to the same security zone or role in their own subnets.
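As a hedged illustration of this pattern, the following Az PowerShell sketch creates an
NSG with one narrowly scoped allow rule and associates it with a subnet; all names and
address ranges are placeholders.
PowerShell
# Create an allow rule scoped to specific source and destination ranges.
$rule = New-AzNetworkSecurityRuleConfig -Name "allow-web-from-frontend" `
    -Access Allow -Direction Inbound -Priority 100 -Protocol Tcp `
    -SourceAddressPrefix "10.0.1.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "10.0.2.0/24" -DestinationPortRange 443

$nsg = New-AzNetworkSecurityGroup -Name "app-subnet-nsg" -ResourceGroupName "network-rg" `
    -Location "eastus" -SecurityRules $rule

# Associate the NSG with the subnet (re-specify the subnet's existing address prefix) and commit.
$vnet = Get-AzVirtualNetwork -Name "hub-vnet" -ResourceGroupName "network-rg"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "app-subnet" `
    -AddressPrefix "10.0.2.0/24" -NetworkSecurityGroup $nsg
$vnet | Set-AzVirtualNetwork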
Best practice: Avoid small virtual networks and subnets to ensure simplicity and
flexibility.
Detail: Most organizations add more resources than initially planned, and
reallocating addresses is labor intensive. Using small subnets adds limited security value,
and mapping a network security group to each subnet adds overhead. Define subnets
broadly to ensure that you have flexibility for growth.
Best practice: Simplify network security group rule management by defining Application
Security Groups.
Detail: Define an Application Security Group for lists of IP addresses that you think
might change in the future or be used across many network security groups. Be sure to
name Application Security Groups clearly so others can understand their content and
purpose.
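A minimal sketch of this approach with the Az PowerShell module, assuming placeholder
names, might look like the following; member NICs still need to be added to the
application security group separately.
PowerShell
# Create an application security group and reference it from an NSG rule
# instead of maintaining raw IP address lists.
$asg = New-AzApplicationSecurityGroup -Name "web-servers-asg" `
    -ResourceGroupName "network-rg" -Location "eastus"

$rule = New-AzNetworkSecurityRuleConfig -Name "allow-https-to-web-servers" `
    -Access Allow -Direction Inbound -Priority 110 -Protocol Tcp `
    -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" `
    -DestinationApplicationSecurityGroup $asg -DestinationPortRange 443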
Best practice: Give Conditional Access to resources based on device, identity, assurance,
network location, and more.
Detail: Microsoft Entra Conditional Access lets you apply the right access controls by
implementing automated access control decisions based on the required conditions. For
more information, see Manage access to Azure management with Conditional Access.
Best practice: Grant temporary permissions to perform privileged tasks, which prevents
malicious or unauthorized users from gaining access after the permissions have expired.
Access is granted only when users need it.
Detail: Use just-in-time access in Microsoft Entra Privileged Identity Management or in a
third-party solution to grant permissions to perform privileged tasks.
Zero Trust is the next evolution in network security. The state of cyberattacks drives
organizations to take the "assume breach" mindset, but this approach shouldn't be
limiting. Zero Trust networks protect corporate data and resources while ensuring that
organizations can build a modern workplace by using technologies that empower
employees to be productive anytime, anywhere, in any way.
Although the default system routes are useful for many deployment scenarios, there are
times when you want to customize the routing configuration for your deployments. You
can configure the next-hop address to reach specific destinations.
We recommend that you configure user-defined routes when you deploy a security
appliance for a virtual network. We talk about this recommendation in a later section
titled secure your critical Azure service resources to only your virtual networks.
7 Note
User-defined routes aren't required, and the default system routes usually work.
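If you do deploy a network virtual appliance, a user-defined route similar to the
following hedged Az PowerShell sketch sends a subnet's outbound traffic through it; the
appliance IP address, names, and prefixes are placeholders.
PowerShell
# Route outbound traffic from a subnet through a network virtual appliance at 10.0.0.4.
$route = New-AzRouteTable -Name "app-subnet-routes" -ResourceGroupName "network-rg" -Location "eastus"
Add-AzRouteConfig -RouteTable $route -Name "default-via-nva" `
    -AddressPrefix "0.0.0.0/0" -NextHopType VirtualAppliance -NextHopIpAddress "10.0.0.4"
Set-AzRouteTable -RouteTable $route

# Attach the route table to the subnet and commit the change.
$vnet = Get-AzVirtualNetwork -Name "hub-vnet" -ResourceGroupName "network-rg"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "app-subnet" `
    -AddressPrefix "10.0.2.0/24" -RouteTable $route
$vnet | Set-AzVirtualNetwork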
Azure network security appliances can deliver better security than what network-level
controls provide. Network security capabilities of virtual network security appliances
include:
Firewalling
Intrusion detection/intrusion prevention
Vulnerability management
Application control
Network-based anomaly detection
Web filtering
Antivirus
Botnet protection
Perimeter networks are useful because you can focus your network access control
management, monitoring, logging, and reporting on the devices at the edge of your
Azure virtual network. A perimeter network is where you typically enable distributed
denial of service (DDoS) protection, intrusion detection/intrusion prevention systems
(IDS/IPS), firewall rules and policies, web filtering, network antimalware, and more. The
network security devices sit between the internet and your Azure virtual network and
have an interface on both networks.
Although this is the basic design of a perimeter network, there are many different
designs, like back-to-back, tri-homed, and multi-homed.
Based on the Zero Trust concept mentioned earlier, we recommend that you consider
using a perimeter network for all high security deployments to enhance the level of
network security and access control for your Azure resources. You can use Azure or a
third-party solution to provide an extra layer of security between your assets and the
internet:
Azure native controls. Azure Firewall and Azure Web Application Firewall offer
basic security advantages. Advantages are a fully stateful firewall as a service, built-
in high availability, unrestricted cloud scalability, FQDN filtering, support for
OWASP core rule sets, and simple setup and configuration.
Third-party offerings. Search the Azure Marketplace for next-generation firewall
(NGFW) and other third-party offerings that provide familiar security tools and
enhanced levels of network security. Configuration might be more complex, but a
third-party offering might allow you to use existing capabilities and skillsets.
Avoid exposure to the internet with dedicated
WAN links
Many organizations have chosen the hybrid IT route. With hybrid IT, some of the
company's information assets are in Azure, and others remain on-premises. In many
cases, some components of a service are running in Azure while other components
remain on-premises.
Site-to-site VPN. It's a trusted, reliable, and established technology, but the
connection takes place over the internet. Bandwidth is constrained to a maximum
of about 1.25 Gbps. Site-to-site VPN is a desirable option in some scenarios.
Azure ExpressRoute. We recommend that you use ExpressRoute for your cross-
premises connectivity. ExpressRoute lets you extend your on-premises networks
into the Microsoft cloud over a private connection facilitated by a connectivity
provider. With ExpressRoute, you can establish connections to Microsoft cloud
services like Azure, Microsoft 365, and Dynamics 365. ExpressRoute is a dedicated
WAN link between your on-premises location (or an exchange provider's colocation
facility) and the Microsoft cloud. Because this is a telco connection, your data doesn't
travel over the internet, so it isn't exposed to the potential risks of internet
communications.
The location of your ExpressRoute connection can affect firewall capacity, scalability,
reliability, and network traffic visibility. You'll need to identify where to terminate
ExpressRoute in existing (on-premises) networks. You can:
Terminate outside the firewall (the perimeter network paradigm). Use this
recommendation if you require visibility into the traffic, if you need to continue an
existing practice of isolating datacenters, or if you're solely putting extranet
resources on Azure.
Terminate inside the firewall (the network extension paradigm). This is the default
recommendation. In all other cases, we recommend treating Azure as another
datacenter.
A popular and effective method for enhancing availability and performance is load
balancing. Load balancing is a method of distributing network traffic across servers that
are part of a service. For example, if you have front-end web servers as part of your
service, you can use load balancing to distribute the traffic across your multiple front-
end web servers.
This distribution of traffic increases availability because if one of the web servers
becomes unavailable, the load balancer stops sending traffic to that server and redirects
it to the servers that are still online. Load balancing also helps performance, because the
processor, network, and memory overhead for serving requests is distributed across all
the load-balanced servers.
We recommend that you employ load balancing whenever you can, and as appropriate
for your services. Following are scenarios at both the Azure virtual network level and the
global level, along with load-balancing options for each.
Scenario: You have an application that:
Requires requests from the same user/client session to reach the same back-end
virtual machine. Examples of this are shopping cart apps and web mail servers.
Accepts only a secure connection, so unencrypted communication to the server
isn't an acceptable option.
Requires multiple HTTP requests on the same long-running TCP connection to be
routed or load balanced to different back-end servers.
Load-balancing option: Use Azure Application Gateway, an HTTP web traffic load
balancer. Application Gateway supports end-to-end TLS encryption and TLS termination
at the gateway. Web servers can then be unburdened from encryption and decryption
overhead, and traffic can flow unencrypted to the back-end servers.
Scenario: You need to load balance incoming connections from the internet among your
servers located in an Azure virtual network. Scenarios are when you:
Have stateless applications that accept incoming requests from the internet.
Don't require sticky sessions or TLS offload. Sticky sessions is a method used with
application load balancing to achieve server affinity.
Load-balancing option: Use the Azure portal to create an external load balancer that
spreads incoming requests across multiple VMs to provide a higher level of availability.
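Although the text describes using the Azure portal, the same kind of external load
balancer can be sketched with the Az PowerShell module, as in the following hedged
example with placeholder names.
PowerShell
# Create a public Standard load balancer that spreads TCP 443 across a back-end pool.
# VMs are added to the back-end pool separately.
$pip   = New-AzPublicIpAddress -Name "web-lb-pip" -ResourceGroupName "network-rg" `
    -Location "eastus" -Sku Standard -AllocationMethod Static
$fe    = New-AzLoadBalancerFrontendIpConfig -Name "frontend" -PublicIpAddress $pip
$pool  = New-AzLoadBalancerBackendAddressPoolConfig -Name "web-pool"
$probe = New-AzLoadBalancerProbeConfig -Name "tcp-443" -Protocol Tcp -Port 443 `
    -IntervalInSeconds 15 -ProbeCount 2
$rule  = New-AzLoadBalancerRuleConfig -Name "https" -FrontendIpConfiguration $fe `
    -BackendAddressPool $pool -Probe $probe -Protocol Tcp -FrontendPort 443 -BackendPort 443

New-AzLoadBalancer -Name "web-lb" -ResourceGroupName "network-rg" -Location "eastus" `
    -Sku Standard -FrontendIpConfiguration $fe -BackendAddressPool $pool `
    -Probe $probe -LoadBalancingRule $rule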
Scenario: You need to load balance connections from VMs that are not on the internet.
In most cases, the connections that are accepted for load balancing are initiated by
devices on an Azure virtual network, such as SQL Server instances or internal web
servers.
Load-balancing option: Use the Azure portal to create an internal load balancer that
spreads incoming requests across multiple VMs to provide a higher level of availability.
Scenario: You need global load balancing because you:
Have a cloud solution that is widely distributed across multiple regions and
requires the highest level of uptime (availability) possible.
Need the highest level of uptime possible to make sure that your service is
available even if an entire datacenter becomes unavailable.
Load-balancing option: Use Azure Traffic Manager. Traffic Manager makes it possible to
load balance connections to your services based on the location of the user.
For example, if the user makes a request to your service from the EU, the connection is
directed to your services located in an EU datacenter. This part of Traffic Manager global
load balancing helps to improve performance because connecting to the nearest
datacenter is faster than connecting to datacenters that are far away.
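A minimal, hedged sketch of creating such a profile with the Az PowerShell module, using
placeholder names and an external endpoint, might look like this; the relative DNS name
must be globally unique.
PowerShell
# Create a Traffic Manager profile that routes users to the closest healthy endpoint.
New-AzTrafficManagerProfile -Name "contoso-web" -ResourceGroupName "network-rg" `
    -ProfileStatus Enabled -TrafficRoutingMethod Performance `
    -RelativeDnsName "contoso-web" -Ttl 30 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/"

# Add an endpoint located in West Europe.
New-AzTrafficManagerEndpoint -Name "eu-endpoint" -ProfileName "contoso-web" `
    -ResourceGroupName "network-rg" -Type ExternalEndpoints `
    -Target "contoso-eu.azurewebsites.net" -EndpointLocation "westeurope" -EndpointStatus Enabled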
The potential security problem with using remote management protocols such as RDP and
SSH over the internet is that attackers can use brute force techniques to gain access to
Azure virtual machines.
After the attackers gain access, they can use your VM as a launch point for
compromising other machines on your virtual network or even attack networked devices
outside Azure.
We recommend that you disable direct RDP and SSH access to your Azure virtual
machines from the internet. After direct RDP and SSH access from the internet is
disabled, you have other options that you can use to access these VMs for remote
management.
Scenario: Enable a single user to connect to an Azure virtual network over the internet.
Option: Point-to-site VPN is another term for a remote access VPN client/server
connection. After the point-to-site connection is established, the user can use RDP or
SSH to connect to any VMs located on the Azure virtual network that the user
connected to via point-to-site VPN. This assumes that the user is authorized to reach
those VMs.
Point-to-site VPN is more secure than direct RDP or SSH connections because the user
has to authenticate twice before connecting to a VM. First, the user needs to
authenticate (and be authorized) to establish the point-to-site VPN connection. Second,
the user needs to authenticate (and be authorized) to establish the RDP or SSH session.
Scenario: Enable users on your on-premises network to connect to VMs on your Azure
virtual network.
Option: A site-to-site VPN connects an entire network to another network over the
internet. You can use a site-to-site VPN to connect your on-premises network to an
Azure virtual network. Users on your on-premises network connect by using the RDP or
SSH protocol over the site-to-site VPN connection. You don't have to allow direct RDP
or SSH access over the internet.
Scenario: Use a dedicated WAN link to provide functionality similar to the site-to-site
VPN.
Option: Use ExpressRoute. It provides functionality similar to the site-to-site VPN. The
main differences are:
Improved security for your Azure service resources: With Azure Private Link,
Azure service resources can be secured to your virtual network using private
endpoint. Securing service resources to a private endpoint in virtual network
provides improved security by fully removing public internet access to resources,
and allowing traffic only from private endpoint in your virtual network.
Privately access Azure service resources on the Azure platform: Connect your
virtual network to services in Azure using private endpoints. There's no need for a
public IP address. The Private Link platform will handle the connectivity between
the consumer and services over the Azure backbone network.
Access from On-premises and peered networks: Access services running in Azure
from on-premises over ExpressRoute private peering, VPN tunnels, and peered
virtual networks using private endpoints. There's no need to configure
ExpressRoute Microsoft peering or traverse the internet to reach the service.
Private Link provides a secure way to migrate workloads to Azure.
Protection against data leakage: A private endpoint is mapped to an instance of a
PaaS resource instead of the entire service. Consumers can only connect to the
specific resource. Access to any other resource in the service is blocked. This
mechanism provides protection against data leakage risks.
Global reach: Connect privately to services running in other regions. The
consumer's virtual network could be in region A and it can connect to services in
region B.
Simple to set up and manage: You no longer need reserved, public IP addresses in
your virtual networks to secure Azure resources through an IP firewall. There are no
NAT or gateway devices required to set up the private endpoints. Private endpoints
are configured through a simple workflow. On the service side, you can also manage
the connection requests on your Azure service resource with ease. Azure Private
Link works for consumers and services belonging to different Microsoft Entra
tenants too.
To learn more about private endpoints and the Azure services and regions that private
endpoints are available for, see Azure Private Link.
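As a hedged illustration, the following Az PowerShell sketch creates a private endpoint
for a storage account; the subscription ID, resource names, and group ID are
placeholders.
PowerShell
# Create a private endpoint so the storage account is reachable only over the virtual network.
$vnet   = Get-AzVirtualNetwork -Name "hub-vnet" -ResourceGroupName "network-rg"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "pe-subnet"

$connection = New-AzPrivateLinkServiceConnection -Name "storage-connection" `
    -PrivateLinkServiceId "/subscriptions/<subscription-id>/resourceGroups/data-rg/providers/Microsoft.Storage/storageAccounts/<account>" `
    -GroupId "blob"

New-AzPrivateEndpoint -Name "storage-pe" -ResourceGroupName "network-rg" `
    -Location "eastus" -Subnet $subnet -PrivateLinkServiceConnection $connection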
Next steps
See Azure security best practices and patterns for more security best practices to use
when you're designing, deploying, and managing your cloud solutions by using Azure.
Azure DDoS Protection fundamental
best practices
Article • 07/17/2024
To help protect a service running on Microsoft Azure, you should have a good
understanding of your application architecture and focus on the five pillars of software
quality. You should know typical traffic volumes, the connectivity model between the
application and other applications, and the service endpoints that are exposed to the
public internet.
For Azure App Service, select an App Service plan that offers multiple instances. For
Azure Cloud Services, configure each of your roles to use multiple instances. For Azure
Virtual Machines, ensure that your virtual machine (VM) architecture includes more than
one VM and that each VM is included in an availability set. We recommend using virtual
machine scale sets for autoscaling capabilities.
Defense in depth
The idea behind defense in depth is to manage risk by using diverse defensive
strategies. Layering security defenses in an application reduces the chance of a
successful attack. We recommend that you implement secure designs for your
applications by using the built-in capabilities of the Azure platform.
For example, the risk of attack increases with the size (surface area) of the application.
You can reduce the surface area by using an approval list to close down the exposed IP
address space and listening ports that aren't needed on the load balancers (Azure Load
Balancer and Azure Application Gateway). Network security groups (NSGs) are another
way to reduce the attack surface. You can use service tags and application security
groups to minimize complexity for creating security rules and configuring network
security, as a natural extension of an application’s structure. Additionally, you can use
Azure DDoS Solution for Microsoft Sentinel to pinpoint offending DDoS sources and
to block them from launching other, sophisticated attacks, such as data theft.
You should deploy Azure services in a virtual network whenever possible. This practice
allows service resources to communicate through private IP addresses. Azure service
traffic from a virtual network uses public IP addresses as source IP addresses by default.
Using service endpoints will switch service traffic to use virtual network private
addresses as the source IP addresses when they're accessing the Azure service from a
virtual network.
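A minimal sketch of enabling a service endpoint on a subnet with the Az PowerShell
module, assuming placeholder names and prefixes, is shown below.
PowerShell
# Enable a Microsoft.Storage service endpoint so traffic to Azure Storage uses the
# subnet's private addresses as the source (re-specify the existing address prefix).
$vnet = Get-AzVirtualNetwork -Name "hub-vnet" -ResourceGroupName "network-rg"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "app-subnet" `
    -AddressPrefix "10.0.2.0/24" -ServiceEndpoint "Microsoft.Storage"
$vnet | Set-AzVirtualNetwork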
We often see customers' on-premises resources getting attacked along with their
resources in Azure. If you're connecting an on-premises environment to Azure, we
recommend that you minimize exposure of on-premises resources to the public
internet. You can use the scale and advanced DDoS protection capabilities of Azure by
deploying your well-known public entities in Azure. Because these publicly accessible
entities are often a target for DDoS attacks, putting them in Azure reduces the impact
on your on-premises resources.
Next steps
Learn more about business continuity.
This security baseline applies guidance from the Microsoft cloud security benchmark
version 1.0 to Azure DDoS Protection. The Microsoft cloud security benchmark provides
recommendations on how you can secure your cloud solutions on Azure. The content is
grouped by the security controls defined by the Microsoft cloud security benchmark and
the related guidance applicable to Azure DDoS Protection.
You can monitor this security baseline and its recommendations using Microsoft
Defender for Cloud. Azure Policy definitions will be listed in the Regulatory Compliance
section of the Microsoft Defender for Cloud portal page.
When a feature has relevant Azure Policy Definitions, they are listed in this baseline to
help you measure compliance with the Microsoft cloud security benchmark controls and
recommendations. Some recommendations may require a paid Microsoft Defender plan
to enable certain security scenarios.
7 Note
Features not applicable to Azure DDoS Protection have been excluded. To see how
Azure DDoS Protection completely maps to the Microsoft cloud security
benchmark, see the full Azure DDoS Protection security baseline mapping file .
Security profile
The security profile summarizes high-impact behaviors of Azure DDoS Protection, which
may result in increased security considerations.
Features
Description: Service configurations can be monitored and enforced via Azure Policy.
Learn more.
Configuration Guidance: Use Microsoft Defender for Cloud to configure Azure Policy to
audit and enforce configurations of your Azure resources. Use Azure Monitor to create
alerts when there is a configuration deviation detected on the resources.
Features
Description: Service produces resource logs that can provide enhanced service-specific
metrics and logging. The customer can configure these resource logs and send them to
their own data sink like a storage account or log analytics workspace. Learn more.
Next steps
See the Microsoft cloud security benchmark overview
Learn more about Azure security baselines
Prevent dangling DNS entries and avoid subdomain
takeover
Article • 03/27/2024
This article describes the common security threat of subdomain takeover and the steps you can take to mitigate against
it.
1. CREATION:
a. You provision an Azure resource with a fully qualified domain name (FQDN) of app-contogreat-dev-
001.azurewebsites.net .
b. You assign a CNAME record in your DNS zone with the subdomain greatapp.contoso.com that routes traffic to
your Azure resource.
2. DEPROVISIONING:
a. The Azure resource is deprovisioned or deleted when it's no longer needed. At this point, the CNAME record
greatapp.contoso.com should be removed from your DNS zone. If the CNAME record isn't removed, it's advertised as an
active domain but doesn't route traffic to an active Azure resource. You now have a "dangling" DNS record.
b. The dangling subdomain, greatapp.contoso.com , is now vulnerable and can be taken over by being assigned to
another Azure subscription's resource.
3. TAKEOVER:
a. Using commonly available methods and tools, a threat actor discovers the dangling subdomain.
b. The threat actor provisions an Azure resource with the same FQDN of the resource you previously controlled. In
this example, app-contogreat-dev-001.azurewebsites.net .
c. Traffic being sent to the subdomain greatapp.contoso.com is now routed to the malicious actor's resource where
they control the content.
The risks of subdomain takeover
When a DNS record points to a resource that isn't available, the record itself should be removed from your DNS zone. If it
isn't deleted, it's a "dangling DNS" record and creates the possibility for subdomain takeover.
Dangling DNS entries make it possible for threat actors to take control of the associated DNS name to host a malicious
website or service. Malicious pages and services on an organization's subdomain might result in:
Loss of control over the content of the subdomain - Negative press about your organization's inability to secure its
content, brand damage, and loss of trust.
Cookie harvesting from unsuspecting visitors - It's common for web apps to expose session cookies to
subdomains (*.contoso.com). Any subdomain can access them. Threat actors can use subdomain takeover to build
an authentic looking page, trick unsuspecting users to visit it, and harvest their cookies (even secure cookies). A
common misconception is that using SSL certificates protects your site, and your users' cookies, from a takeover.
However, a threat actor can use the hijacked subdomain to apply for and receive a valid SSL certificate. Valid SSL
certificates grant them access to secure cookies and can further increase the perceived legitimacy of the malicious
site.
Phishing campaigns - Malicious actors often exploit authentic-looking subdomains in phishing campaigns. The risk
extends to both malicious websites and MX records, which could enable threat actors to receive emails directed to
legitimate subdomains associated with trusted brands.
Further risks - Malicious sites might be used to escalate into other classic attacks such as XSS, CSRF, CORS bypass,
and more.
This tool helps Azure customers list all domains with a CNAME associated to an existing Azure resource that was created
on their subscriptions or tenants.
If your CNAMEs are in other DNS services and point to Azure resources, provide the CNAMEs in an input file to the tool.
The tool supports the Azure resources listed in the following table. The tool extracts, or takes as inputs, all the tenant's
CNAMEs.
Prerequisites
Run the query as a user who has:
If you're a Global Administrator of your organization's tenant, follow the guidance in Elevate access to manage all Azure
subscriptions and management groups to gain access to all your organization's subscriptions
Tip
Azure Resource Graph has throttling and paging limits that you should consider if you have a large Azure
environment.
Learn more about working with large Azure resource data sets.
1. From your DNS zone, remove all CNAME records that point to FQDNs of resources no longer provisioned.
2. To enable traffic to be routed to resources in your control, provision more resources with the FQDNs specified in the
CNAME records of the dangling subdomains.
3. Review your application code for references to specific subdomains and update any incorrect or outdated
subdomain references.
4. Investigate whether any compromise occurred and take action per your organization's incident response
procedures. Tips and best practices for investigating:
If your application logic results in secrets, such as OAuth credentials, being sent to dangling subdomains or if
privacy-sensitive information is transmitted to those subdomains, there is a possibility for this data to be exposed to
third parties.
5. Understand why the CNAME record was not removed from your DNS zone when the resource was deprovisioned
and take steps to ensure that DNS records are updated appropriately when Azure resources are deprovisioned in
the future.
Some Azure services offer features to aid in creating preventative measures and are detailed below. Other methods to
prevent this issue must be established through your organization's best practices or standard operating procedures.
The Microsoft Defender for App Service plan includes dangling DNS detection. With this plan enabled, you'll get security
alerts if you decommission an App Service website but don't remove its custom domain from your DNS registrar.
Microsoft Defender for Cloud's dangling DNS protection is available whether your domains are managed with Azure DNS
or an external domain registrar and applies to App Service on both Windows and Linux.
Learn more about this and other benefits of the Microsoft Defender plans in Introduction to Microsoft Defender for App
Service.
Despite the limited service offerings today, we recommend using alias records to defend against subdomain takeover
whenever possible.
These records don't prevent someone from creating the Azure App Service with the same name that's in your CNAME
entry. Without the ability to prove ownership of the domain name, threat actors can't receive traffic or control the
content.
Learn more about how to map an existing custom DNS name to Azure App Service.
Educate your application developers to reroute addresses whenever they delete resources.
Put "Remove DNS entry" on the list of required checks when decommissioning a service.
Put delete locks on any resources that have a custom DNS entry. A delete lock serves as an indicator that the
mapping must be removed before the resource is deprovisioned. Measures like this can only work when
combined with internal education programs.
Review your DNS records regularly to ensure that your subdomains are all mapped to Azure resources that:
Exist - Query your DNS zones for resources pointing to Azure subdomains such as *.azurewebsites.net or
*.cloudapp.azure.com (see the Reference list of Azure domains).
You own - Confirm that you own all resources that your DNS subdomains are targeting.
Maintain a service catalog of your Azure fully qualified domain name (FQDN) endpoints and the application
owners. To build your service catalog, run the following Azure Resource Graph query script. This script projects
the FQDN endpoint information of the resources you have access to and outputs them in a CSV file. If you have
access to all the subscriptions for your tenant, the script considers all those subscriptions as shown in the
following sample script. To limit the results to a specific set of subscriptions, edit the script as shown.
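The original query script isn't reproduced here; the following is a minimal, hedged sketch using the Az.ResourceGraph
module that projects common FQDN properties for App Service sites and public IP addresses and writes them to a CSV
file. The output file name and the 1,000-row page size are arbitrary choices.
PowerShell
# Build a simple FQDN catalog of App Service sites and public IP addresses you can access.
$query = @"
resources
| where type =~ 'microsoft.web/sites' or type =~ 'microsoft.network/publicipaddresses'
| project name, type, subscriptionId, resourceGroup,
    appServiceFqdn = tostring(properties.defaultHostName),
    publicIpFqdn   = tostring(properties.dnsSettings.fqdn)
"@
Search-AzGraph -Query $query -First 1000 | Export-Csv -Path ".\fqdn-catalog.csv" -NoTypeInformation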
Classic cloud service DNS names use a suffix that depends on the Azure environment:
Public - cloudapp.net
Mooncake - chinacloudapp.cn
Fairfax - usgovcloudapp.net
BlackForest - azurecloudapp.de
For example, a hosted service in Public named "test" would have DNS "test.cloudapp.net"
Example: Subscription 'A' and subscription 'B' are the only subscriptions belonging to Microsoft Entra tenant 'AB'.
Subscription 'A' contains a classic cloud service 'test' with DNS name 'test.cloudapp.net'. Upon deletion of the cloud
service, a reservation is taken on DNS name 'test.cloudapp.net'. During the reservation period, only subscription 'A' or
subscription 'B' will be able to claim the DNS name 'test.cloudapp.net' by creating a classic cloud service named 'test'. No
other subscriptions will be allowed to claim it. After the reservation period, any subscription in Azure can now claim
'test.cloudapp.net'.
Next steps
To learn more about related services and Azure features you can use to defend against subdomain takeover, see the
following pages.
Enable Microsoft Defender for App Service - to receive alerts when dangling DNS entries are detected
Use a domain verification ID when adding custom domains in Azure App Service
Quickstart: Run your first Resource Graph query using Azure PowerShell
Implement a secure hybrid
network
Azure Firewall, Azure Load Balancer, Azure Virtual Machines, Azure Virtual Network
This reference architecture shows a secure hybrid network that extends an on-premises
network to Azure. The architecture implements a perimeter network, also called a DMZ,
between the on-premises network and an Azure virtual network. All inbound and
outbound traffic passes through Azure Firewall.
Architecture
Components
The architecture consists of the following aspects:
Azure virtual network. The virtual network hosts the solution components and
other resources running in Azure.
Virtual network routes define the flow of IP traffic within the Azure virtual network.
In the diagram, there are two user-defined route tables.
In the gateway subnet, traffic is routed through the Azure Firewall instance.
7 Note
Depending on the requirements of your VPN connection, you can configure
Border Gateway Protocol (BGP) routes to implement the forwarding rules that
direct traffic back through the on-premises network.
Gateway. The gateway provides connectivity between the routers in the on-
premises network and the virtual network. The gateway is placed in its own subnet.
Network security groups. Use security groups to restrict network traffic within the
virtual network.
Azure Bastion. Azure Bastion allows you to log into virtual machines (VMs) in the
virtual network through SSH or remote desktop protocol (RDP) without exposing
the VMs directly to the internet. Use Bastion to manage the VMs in the virtual
network.
Typical uses for this architecture include:
Hybrid applications where workloads run partly on-premises and partly in Azure.
Infrastructure that requires granular control over traffic entering an Azure virtual
network from an on-premises datacenter.
Applications that must audit outgoing traffic. Auditing is often a regulatory
requirement of many commercial systems and can help to prevent public
disclosure of private information.
Recommendations
The following recommendations apply for most scenarios. Follow these
recommendations unless you have a specific requirement that overrides them.
The IT administrator role shouldn't have access to the firewall resources. Access should
be restricted to the security IT administrator role.
Create separate resource groups for the following resources:
A resource group containing the virtual network (excluding the VMs), NSGs, and
the gateway resources for connecting to the on-premises network. Assign the
centralized IT administrator role to this resource group.
A resource group containing the VMs for the Azure Firewall instance and the user-
defined routes for the gateway subnet. Assign the security IT administrator role to
this resource group.
Separate resource groups for each spoke virtual network that contains the load
balancer and VMs.
Networking recommendations
To accept inbound traffic from the internet, add a Destination Network Address
Translation (DNAT) rule to Azure Firewall.
Force-tunnel all outbound internet traffic through your on-premises network using the
site-to-site VPN tunnel, and route to the internet using network address translation
(NAT). This design prevents accidental leakage of any confidential information and
allows inspection and auditing of all outgoing traffic.
Don't completely block internet traffic from the resources in the spoke network subnets.
Blocking traffic will prevent these resources from using Azure PaaS services that rely on
public IP addresses, such as VM diagnostics logging, downloading of VM extensions,
and other functionality. Azure diagnostics also requires that components can read and
write to an Azure Storage account.
Verify that outbound internet traffic is force-tunneled correctly. If you're using a VPN
connection with the routing and remote access service on an on-premises server, use a
tool such as Wireshark.
Consider using Application Gateway or Azure Front Door for SSL termination.
Considerations
These considerations implement the pillars of the Azure Well-Architected Framework,
which is a set of guiding tenets that can be used to improve the quality of a workload.
For more information, see Microsoft Azure Well-Architected Framework.
Performance efficiency
Performance efficiency is the ability of your workload to scale to meet the demands
placed on it by users in an efficient manner. For more information, see Performance
efficiency pillar overview.
For details about the bandwidth limits of VPN Gateway, see Gateway SKUs. For higher
bandwidths, consider upgrading to an ExpressRoute gateway. ExpressRoute provides up
to 10-Gbps bandwidth with lower latency than a VPN connection.
For more information about the scalability of Azure gateways, see the scalability
consideration sections in:
For details about managing virtual networks and NSGs at scale, see Azure Virtual
Network Manager (AVNM): Create a secured hub and spoke network to create new (and
onboard existing) hub and spoke virtual network topologies for central management of
connectivity and NSG rules.
Reliability
Reliability ensures your application can meet the commitments you make to your
customers. For more information, see Overview of the reliability pillar.
If you're using Azure ExpressRoute to provide connectivity between the virtual network
and on-premises network, configure a VPN gateway to provide failover if the
ExpressRoute connection becomes unavailable.
For information on maintaining availability for VPN and ExpressRoute connections, see
the availability considerations in:
Operational excellence
Operational excellence covers the operations processes that deploy an application and
keep it running in production. For more information, see Overview of the operational
excellence pillar.
If gateway connectivity from your on-premises network to Azure is down, you can still
reach the VMs in the Azure virtual network through Azure Bastion.
Each tier's subnet in the reference architecture is protected by NSG rules. You may need
to create a rule to open port 3389 for remote desktop protocol (RDP) access on
Windows VMs or port 22 for secure shell (SSH) access on Linux VMs. Other management
and monitoring tools may require rules to open additional ports.
You can find additional information about monitoring and managing VPN and
ExpressRoute connections in the article Implementing a hybrid network architecture with
Azure and on-premises VPN.
Security
Security provides assurances against deliberate attacks and the abuse of your valuable
data and systems. For more information, see Overview of the security pillar.
DDoS protection
DevOps access
Use Azure RBAC to restrict the operations that DevOps can perform on each tier. When
granting permissions, use the principle of least privilege. Log all administrative
operations and perform regular audits to ensure any configuration changes were
planned.
Cost optimization
Cost optimization is about looking at ways to reduce unnecessary expenses and
improve operational efficiencies. For more information, see Overview of the cost
optimization pillar.
Use the Azure pricing calculator to estimate costs. Other considerations are described
in the Cost optimization section in Microsoft Azure Well-Architected Framework.
Here are cost considerations for the services used in this architecture.
Azure Firewall
In this architecture, Azure Firewall is deployed in the virtual network to control traffic
between the gateway's subnet and the resources in the spoke virtual networks. In this
way Azure Firewall is cost effective because it's used as a shared solution consumed by
multiple workloads. Here are the Azure Firewall pricing models:
When compared to network virtual appliances (NVAs), with Azure Firewall you can save
up to 30-50%. For more information, see Azure Firewall vs NVA .
Azure Bastion
Azure Bastion securely connects to your virtual machine over RDP and SSH without
having the need to configure a public IP on the virtual machine.
In this architecture, internal load balancers are used to load balance traffic inside a
virtual network.
Deploy this scenario
This deployment creates two resource groups; the first holds a mock on-premises
network, the second a set of hub and spoke networks. The mock on-premises network
and the hub network are connected using Azure Virtual Network gateways to form a
site-to-site connection. This configuration is very similar to how you would connect your
on-premises datacenter to Azure.
Azure portal
Use the following button to deploy the reference using the Azure portal.
Once the deployment is complete, verify site-to-site connectivity by looking at
the newly created connection resources. While in the Azure portal, search for
'connections' and note the status of each connection.
The IIS instance found in the spoke network can be accessed from the virtual machine
located in the mock on-premises network. Create a connection to the virtual machine
using the included Azure Bastion host, open a web browser, and navigate to the address
of the application's network load balancer.
For detailed information and additional deployment options, see the Azure Resource
Manager templates (ARM templates) used to deploy this solution: Secure Hybrid
Network.
Next steps
The virtual datacenter: A network perspective.
Azure security documentation.
Related resources
Connect an on-premises network to Azure using ExpressRoute.
Configure ExpressRoute and Site-to-Site coexisting connections using PowerShell
Extend an on-premises network using ExpressRoute.
Microsoft Antimalware for Azure Cloud
Services and Virtual Machines
Article • 06/27/2024
Microsoft Antimalware for Azure is a free real-time protection that helps identify and
remove viruses, spyware, and other malicious software. It generates alerts when known
malicious or unwanted software tries to install itself or run on your Azure systems.
The solution is built on the same antimalware platform as Microsoft Security Essentials
(MSE), Microsoft Forefront Endpoint Protection, Microsoft System Center Endpoint
Protection, Microsoft Intune, and Microsoft Defender for Cloud. Microsoft Antimalware
for Azure is a single-agent solution for applications and tenant environments, designed
to run in the background without human intervention. Protection may be deployed
based on the needs of application workloads, with either basic secure-by-default or
advanced custom configuration, including antimalware monitoring.
When you deploy and enable Microsoft Antimalware for Azure for your applications, the
following core features are available:
7 Note
Microsoft Antimalware can also be deployed using Microsoft Defender for Cloud.
Read Install Endpoint Protection in Microsoft Defender for Cloud for more
information.
Architecture
Microsoft Antimalware for Azure includes the Microsoft Antimalware Client and Service,
Antimalware classic deployment model, Antimalware PowerShell cmdlets, and Azure
Diagnostics Extension. Microsoft Antimalware is supported on Windows Server 2008 R2,
Windows Server 2012, and Windows Server 2012 R2 operating system families. It isn't
supported on the Windows Server 2008 operating system, and it also isn't supported on
Linux.
The Microsoft Antimalware Client and Service is installed by default in a disabled state in
all supported Azure guest operating system families in the Cloud Services platform. The
Microsoft Antimalware Client and Service isn't installed by default in the Virtual
Machines platform and is available as an optional feature through the Azure portal and
Visual Studio Virtual Machine configuration under Security Extensions.
When using Azure App Service on Windows, the underlying service that hosts the web
app has Microsoft Antimalware enabled on it. This is used to protect Azure App Service
infrastructure and does not run on customer content.
The Azure portal or PowerShell cmdlets push the Antimalware extension package file to
the Azure system at a predetermined fixed location. The Azure Guest Agent (or the
Fabric Agent) launches the Antimalware Extension, applying the Antimalware
configuration settings supplied as input. This step enables the Antimalware service with
either default or custom configuration settings. If no custom configuration is provided,
then the antimalware service is enabled with the default configuration settings. For
more information, see the Samples section of this article.
Once running, the Microsoft Antimalware client downloads the latest protection engine
and signature definitions from the Internet and loads them on the Azure system. The
Microsoft Antimalware service writes service-related events to the system OS events log
under the "Microsoft Antimalware" event source. Events include the Antimalware client
health state, protection and remediation status, new and old configuration settings,
engine updates and signature definitions, and others.
You can enable Antimalware monitoring for your Cloud Service or Virtual Machine to
have the Antimalware event log events written as they're produced to your Azure
storage account. The Antimalware Service uses the Azure Diagnostics extension to
collect Antimalware events from the Azure system into tables in the customer's Azure
Storage account.
The deployment workflow including configuration steps and options supported for the
above scenarios are documented in Antimalware deployment scenarios section of this
document.
7 Note
You can, however, use PowerShell/APIs and Azure Resource Manager templates to
deploy Virtual Machine Scale Sets with the Microsoft Antimalware extension. To
install an extension on an already running virtual machine, you can use the
sample Python script vmssextn.py. This script retrieves the existing extension
configuration on the scale set and adds the extension to the list of existing
extensions on the VM scale sets.
Follow these steps to enable and configure Microsoft Antimalware for Azure Virtual
Machines using the Azure portal while provisioning a Virtual Machine:
5. Provide a Name, Username, Password, and create a new resource group or choose
an existing resource group.
6. Select Ok.
7. Choose a VM size.
8. In the next section, make the appropriate choices for your needs, and then select
the Extensions section.
9. Select Add extension
10. Under New resource, choose Microsoft Antimalware.
11. Select Create
12. In the Install extension section, you can configure file, location, and process
exclusions, as well as other scan options. Choose Ok.
13. Choose Ok.
14. Back in the Settings section, choose Ok.
15. In the Create screen, choose Ok.
To enable and configure the Microsoft Antimalware service using Visual Studio:
2. Choose your Virtual Machine in the Virtual Machines node in Server Explorer
3. Right-click the Virtual Machine and choose Configure to view the Virtual Machine configuration page
4. Select Microsoft Antimalware extension from the dropdown list under Installed
Extensions and click Add to configure with default antimalware configuration.
7. Click the Update button to push the configuration updates to your Virtual
Machine.
7 Note
The Visual Studio Virtual Machines configuration for Antimalware supports only
JSON format configuration. For more information, see the Samples section of this
article.
7 Note
The Azure Virtual Machines configuration for Antimalware supports only JSON
format configuration. For more information, see the Samples section of this article.
Cloud Services and Virtual Machines - Configuration
Using PowerShell cmdlets
An Azure application or service can retrieve the Microsoft Antimalware configuration for
Cloud Services and Virtual Machines using PowerShell cmdlets.
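For example, the following hedged sketch retrieves the extension settings for a single
Resource Manager VM with the Az module; the resource group, VM name, and extension name
("IaaSAntimalware", the name given at install time) are placeholders.
PowerShell
# Retrieve the current Microsoft Antimalware extension settings for a VM.
Get-AzVMExtension -ResourceGroupName "app-rg" -VMName "app-vm01" -Name "IaaSAntimalware" |
    Select-Object Name, Publisher, ExtensionType, TypeHandlerVersion, PublicSettings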
Samples
To enable antimalware event collection for a virtual machine using the Azure Preview
Portal:
1. Click any part of the Monitoring lens in the Virtual Machine blade
2. Click the Diagnostics command on Metric blade
3. Select Status ON and check the option for Windows event system
4. You can choose to uncheck all other options in the list, or leave them enabled per
your application service needs.
5. The Antimalware event categories "Error", "Warning", "Informational", etc., are
captured in your Azure Storage account.
Antimalware events are collected from the Windows event system logs to your Azure
Storage account. You can configure the Storage Account for your Virtual Machine to
collect Antimalware events by selecting the appropriate storage account.
Next steps
See code samples to enable and configure Microsoft Antimalware for Azure Resource
Manager (ARM) virtual machines.
You can enable and configure Microsoft Antimalware for Azure Resource Manager VMs.
This article provides code samples using PowerShell cmdlets.
7 Note
Before executing this code sample, you must uncomment the variables and provide
appropriate values.
PowerShell
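# NOTE: The original AzureRM-based sample isn't reproduced here. The following is a
# minimal, hedged sketch using the current Az module with the documented
# IaaSAntimalware extension publisher and type; the variable values, extension
# version, and settings JSON are illustrative placeholders.
$resourceGroup = "app-rg"
$location      = "eastus"
$vmName        = "app-vm01"

# Basic secure-by-default configuration; add exclusions or a scan schedule as needed.
$settings = '{ "AntimalwareEnabled": true, "RealtimeProtectionEnabled": true }'

Set-AzVMExtension -ResourceGroupName $resourceGroup -VMName $vmName -Location $location `
    -Name "IaaSAntimalware" -Publisher "Microsoft.Azure.Security" `
    -ExtensionType "IaaSAntimalware" -TypeHandlerVersion "1.3" `
    -SettingString $settings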
The code sample below shows how you can enable IaaS Antimalware extension using
the AzureRmVmss PowerShell cmdlets.
7 Note
Before executing this code sample, you must uncomment the variables and provide
appropriate values.
PowerShell
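# NOTE: The original AzureRmVmss-based sample isn't reproduced here. The following is a
# minimal, hedged sketch using the equivalent Az cmdlets; the names, extension version,
# and settings are illustrative placeholders.
$resourceGroup = "app-rg"
$vmssName      = "app-vmss"

$vmss = Get-AzVmss -ResourceGroupName $resourceGroup -VMScaleSetName $vmssName

Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "IaaSAntimalware" `
    -Publisher "Microsoft.Azure.Security" -Type "IaaSAntimalware" `
    -TypeHandlerVersion "1.3" -AutoUpgradeMinorVersion $true `
    -Setting @{ "AntimalwareEnabled" = $true; "RealtimeProtectionEnabled" = $true }

# Push the updated model to the scale set.
Update-AzVmss -ResourceGroupName $resourceGroup -VMScaleSetName $vmssName -VirtualMachineScaleSet $vmss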
7 Note
Before executing this code sample, you must uncomment the variables and provide
appropriate values.
PowerShell
7 Note
Before executing this code sample, you must uncomment the variables and provide
appropriate values.
PowerShell
Next steps
Learn more about Microsoft Antimalware for Azure.
Azure Virtual Machines security
overview
Article • 06/27/2024
This article provides an overview of the core Azure security features that can be used
with virtual machines.
You can use Azure Virtual Machines to deploy a wide range of computing solutions in an
agile way. The service supports Microsoft Windows, Linux, Microsoft SQL Server, Oracle,
IBM, SAP, and Azure BizTalk Services. So you can deploy any workload and any language
on nearly any operating system.
An Azure virtual machine gives you the flexibility of virtualization without having to buy
and maintain the physical hardware that runs the virtual machine. You can build and
deploy your applications with the assurance that your data is protected and safe in
highly secure datacenters.
Antimalware
With Azure, you can use antimalware software from security vendors such as Microsoft,
Symantec, Trend Micro, and Kaspersky. This software helps protect your virtual machines
from malicious files, adware, and other threats.
Microsoft Antimalware for Azure Cloud Services and Virtual Machines is a real-time
protection capability that helps identify and remove viruses, spyware, and other
malicious software. Microsoft Antimalware for Azure provides configurable alerts when
known malicious or unwanted software attempts to install itself or run on your Azure
systems.
Microsoft Antimalware for Azure is a single-agent solution for applications and tenant
environments. It's designed to run in the background without human intervention. You
can deploy protection based on the needs of your application workloads, with either
basic secure-by-default or advanced custom configuration, including antimalware
monitoring.
Learn more about Microsoft Antimalware for Azure and the core features available.
Learn more about antimalware software to help protect your virtual machines:
For even more powerful protection, consider using Microsoft Defender for Endpoint.
With Defender for Endpoint, you get:
Key Vault provides the option to store your keys in hardware security modules (HSMs)
certified to FIPS 140 validated standards. Your SQL Server encryption keys for backup or
transparent data encryption can all be stored in Key Vault with any keys or secrets from
your applications. Permissions and access to these protected items are managed
through Microsoft Entra ID.
Learn more:
The solution is integrated with Azure Key Vault to help you control and manage the disk
encryption keys and secrets in your key vault subscription. It ensures that all data in the
virtual machine disks are encrypted at rest in Azure Storage.
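As a hedged illustration, the following Az PowerShell sketch enables Azure Disk
Encryption on an existing VM with keys managed in Key Vault; the vault and VM names are
placeholders, and the key vault must already be enabled for disk encryption.
PowerShell
# Enable Azure Disk Encryption on a VM using an existing key vault.
$kv = Get-AzKeyVault -VaultName "contoso-kv" -ResourceGroupName "security-rg"

Set-AzVMDiskEncryptionExtension -ResourceGroupName "app-rg" -VMName "app-vm01" `
    -DiskEncryptionKeyVaultUrl $kv.VaultUri -DiskEncryptionKeyVaultId $kv.ResourceId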
Learn more:
Azure Disk Encryption for Linux VMs and Azure Disk Encryption for Windows VMs
Quickstart: Encrypt a Linux IaaS VM with Azure PowerShell
Learn more:
Site Recovery:
Simplifies your BCDR strategy: Site Recovery makes it easy to handle replication,
failover, and recovery of multiple business workloads and apps from a single
location. Site Recovery orchestrates replication and failover but doesn't intercept
your application data or have any information about it.
Provides flexible replication: By using Site Recovery, you can replicate workloads
running on Hyper-V virtual machines, VMware virtual machines, and
Windows/Linux physical servers.
Supports failover and recovery: Site Recovery provides test failovers to support
disaster recovery drills without affecting production environments. You can also
run planned failovers with a zero-data loss for expected outages, or unplanned
failovers with minimal data loss (depending on replication frequency) for
unexpected disasters. After failover, you can fail back to your primary sites. Site
Recovery provides recovery plans that can include scripts and Azure Automation
workbooks so that you can customize failover and recovery of multi-tier
applications.
Eliminates secondary datacenters: You can replicate to a secondary on-premises
site, or to Azure. Using Azure as a destination for disaster recovery eliminates the
cost and complexity of maintaining a secondary site. Replicated data is stored in
Azure Storage.
Integrates with existing BCDR technologies: Site Recovery partners with other
applications' BCDR features. For example, you can use Site Recovery to help
protect the SQL Server back end of corporate workloads. This includes native
support for SQL Server Always On to manage the failover of availability groups.
Learn more:
Virtual networking
Virtual machines need network connectivity. To support that requirement, Azure requires
virtual machines to be connected to an Azure virtual network.
An Azure virtual network is a logical construct built on top of the physical Azure network
fabric. Each logical Azure virtual network is isolated from all other Azure virtual
networks. This isolation helps ensure that network traffic in your deployments is not
accessible to other Microsoft Azure customers.
Learn more:
Defender for Cloud helps you optimize and monitor the security of your virtual
machines by:
Learn more:
Compliance
Azure Virtual Machines is certified for FISMA, FedRAMP, HIPAA, PCI DSS Level 1, and
other key compliance programs. This certification makes it easier for your own Azure
applications to meet compliance requirements and for your business to address a wide
range of domestic and international regulatory requirements.
Learn more:
Confidential Computing
While confidential computing is not technically part of virtual machine security, both
topics belong to the higher-level category of "compute" security.
Confidential computing ensures that when data is "in the clear," which is required for
efficient processing, the data is protected inside a Trusted Execution Environment (TEE),
also known as an enclave.
TEEs ensure there is no way to view data or the operations inside from the outside, even
with a debugger. They even ensure that only authorized code is permitted to access
data. If the code is altered or tampered with, the operations are denied and the
environment is disabled. The TEE enforces these protections throughout the execution of code within it.
Learn more:
Next steps
Learn about security best practices for VMs and operating systems.
Enabling automatic guest patching for your Azure Virtual Machines (VMs) and Virtual
Machine Scale Sets (VMSS) helps ease update management by safely and automatically
patching virtual machines to maintain security compliance, while limiting the impact of a
faulty patch to a small set of VMs at a time.
The VM is assessed periodically, every few days and multiple times within any 30-day
period, to determine the applicable patches for that VM. Patches can be installed on any
day during the VM's off-peak hours. This automatic assessment
ensures that any missing patches are discovered at the earliest possible opportunity.
Patches are installed within 30 days of the monthly patch releases, following availability-
first orchestration. Patches are installed only during off-peak hours for the VM,
depending on the time zone of the VM. The VM must be running during the off-peak
hours for patches to be automatically installed. If a VM is powered off during a periodic
assessment, the platform will automatically assess and apply patches (if required) during
the next periodic assessment (usually within a few days) when the VM is powered on.
Definition updates and other patches not classified as Critical or Security won't be
installed through automatic VM guest patching. To install patches with other patch
classifications or schedule patch installation within your own custom maintenance
window, you can use Update Management.
Availability-first Updates
Azure orchestrates the patch installation process across all public and private clouds for
VMs that have enabled Automatic Guest Patching. The orchestration follows availability-
first principles across different levels of availability provided by Azure.
For a group of virtual machines undergoing an update, the Azure platform will
orchestrate updates:
Across regions: A monthly update is orchestrated across Azure globally in a phased
manner to prevent global deployment failures.
Within a region:
VMs in different Availability Zones aren't updated concurrently with the same
update.
VMs that aren't part of an availability set are batched on a best effort basis to
avoid concurrent updates for all VMs in a subscription.
Restricting the number of concurrently patched VMs across regions, within a region, or
within an availability set limits the impact of a faulty patch on a given set of VMs. With
health monitoring, any potential issues are flagged before they impact the entire
workload.
The patch installation date for a given VM may vary month-to-month, as a specific VM
may be picked up in a different batch between monthly patching cycles.
The exact set of patches to be installed varies based on the VM configuration, including
OS type and assessment timing. It's possible for two identical VMs in different regions
to get different patches installed if there are more or fewer patches available when the
patch orchestration reaches different regions at different times. Similarly, but less
frequently, VMs within the same region but assessed at different times (due to different
Availability Zone or availability set batches) might get different patches.
Because automatic VM guest patching doesn't configure the patch source, two similar
VMs configured to different patch sources, such as a public repository versus a private
repository, may also see a difference in the exact set of patches installed.
For OS types that release patches on a fixed cadence, VMs configured to the public
repository for the OS can expect to receive the same set of patches across the different
rollout phases in a month. For example, Windows VMs configured to the public
Windows Update repository.
As a new rollout is triggered every month, a VM will receive at least one patch rollout
every month if the VM is powered on during off-peak hours. This process ensures that
the VM is patched with the latest available security and critical patches on a monthly
basis. To ensure consistency in the set of patches installed, you can configure your VMs
to assess and download patches from your own private repositories.
Supported OS images
) Important
Red Hat, RHEL: 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7_9, 7-RAW, 7-LVM
AutomaticByPlatform:
To use this mode on Linux VMs, set the property
osProfile.linuxConfiguration.patchSettings.patchMode=AutomaticByPlatform in
the VM template.
To use this mode on Windows VMs, set the property
osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform in
the VM template.
Enabling this mode will set the Registry Key
SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU\NoAutoUpdate to 1
AutomaticByOS:
To use this mode on Windows VMs, set the property
osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByOS in the VM template.
Enabling this mode will set the Registry Key
SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU\NoAutoUpdate to 0
Manual: You control the application of patches to the virtual machine. To use this mode
on Windows VMs, set the property
osProfile.windowsConfiguration.patchSettings.patchMode=Manual in the VM template.
ImageDefault: To use this mode on Linux VMs, set the property
osProfile.linuxConfiguration.patchSettings.patchMode=ImageDefault in the VM
template.
7 Note
The osProfile.windowsConfiguration.enableAutomaticUpdates property can only be set when the
VM is first created. This impacts certain patch mode transitions. Switching between
AutomaticByPlatform and Manual modes is supported on VMs that have
osProfile.windowsConfiguration.enableAutomaticUpdates=false . Similarly, switching between
AutomaticByPlatform and AutomaticByOS modes is supported on VMs that have
osProfile.windowsConfiguration.enableAutomaticUpdates=true .
To enable automatic VM guest patching on a Linux VM, execute a PUT on
`/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2020-12-01`
JSON
{
"location": "<location>",
"properties": {
"osProfile": {
"linuxConfiguration": {
"provisionVMAgent": true,
"patchSettings": {
"patchMode": "AutomaticByPlatform"
}
}
}
}
}
To enable automatic VM guest patching on a Windows VM, execute a PUT on
`/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2020-12-01`
JSON
{
"location": "<location>",
"properties": {
"osProfile": {
"windowsConfiguration": {
"provisionVMAgent": true,
"enableAutomaticUpdates": true,
"patchSettings": {
"patchMode": "AutomaticByPlatform"
}
}
}
}
}
Azure PowerShell
Azure PowerShell
Azure CLI
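For example, a minimal sketch of enabling Azure-orchestrated patching at creation time with the Azure CLI (the image alias, names, and credentials are placeholders):
# --patch-mode AutomaticByPlatform opts the VM into Azure-orchestrated guest patching.
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys \
  --enable-agent true \
  --patch-mode AutomaticByPlatform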
Azure portal
When creating a VM using the Azure portal, patch orchestration modes can be set
under the Management tab for both Linux and Windows.
7 Note
It can take more than three hours to enable automatic VM guest updates on a VM,
as the enablement is completed during the VM's off-peak hours. As assessment
and patch installation occur only during off-peak hours, your VM must also be
running during off-peak hours to apply patches.
The platform will make periodic patching configuration calls to ensure alignment when
model changes are detected on IaaS VMs or scale sets in Flexible orchestration. Certain
model changes, such as (but not limited to) updating the assessment mode, the patch
mode, or an extension, may trigger a patching configuration call.
Automatic updates are disabled in most scenarios, and patch installation is done
through the extension going forward. The following conditions apply.
Bash
Bash
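As an illustrative check only (assuming an Ubuntu image; the file location and update mechanism differ by distribution), you can inspect whether the distribution's own unattended-upgrade mechanism is currently enabled:
# A value of "0" for Unattended-Upgrade means the OS-level automatic upgrades are disabled.
cat /etc/apt/apt.conf.d/20auto-upgrades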
To verify whether automatic VM guest patching has completed and the patching
extension is installed on the VM, you can review the VM's instance view. If the
enablement process is complete, the extension will be installed and the assessment
results for the VM will be available under patchStatus . The VM's instance view can be
accessed through multiple ways as described below.
REST API
GET on
`/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine/instanceView?api-version=2020-12-01`
Azure PowerShell
Use the Get-AzVM cmdlet with the -Status parameter to access the instance view for
your VM.
Azure PowerShell
Azure CLI
Use az vm get-instance-view to access the instance view for your VM.
Azure CLI
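For example (resource names are placeholders), the patch status can be read directly from the instance view:
az vm get-instance-view \
  --resource-group myResourceGroup \
  --name myVM \
  --query "instanceView.patchStatus"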
The assessment results for your VM can be reviewed under the availablePatchSummary
section. An assessment is periodically conducted for a VM that has automatic VM guest
patching enabled. The count of available patches after an assessment is provided under
criticalAndSecurityPatchCount and otherPatchCount results. Automatic VM guest
patching will install all patches assessed under the Critical and Security patch
classifications. Any other assessed patch is skipped.
The patch installation results for your VM can be reviewed under the
lastPatchInstallationSummary section. This section provides details on the last patch
installation attempt on the VM, including the number of patches that were installed,
pending, failed or skipped. Patches are installed only during the off-peak hours
maintenance window for the VM. Pending and failed patches are automatically retried
during the next off-peak hours maintenance window.
Disable automatic VM guest patching
Automatic VM guest patching can be disabled by changing the patch orchestration
mode for the VM.
To disable automatic VM guest patching on a Linux VM, change the patch mode to
ImageDefault .
For Windows VMs, the osProfile.windowsConfiguration.enableAutomaticUpdates property
determines which patch modes can be set on the VM, and this property can only be set when
the VM is first created. This impacts certain patch mode transitions, as described in the
note earlier in this article.
Use the examples from the enablement section above in this article for API, PowerShell
and CLI usage examples to set the required patch mode.
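As a sketch (resource names are placeholders), the patch mode on a Linux VM can also be switched back with a generic property update:
Azure CLI
az vm update \
  --resource-group myResourceGroup \
  --name myVM \
  --set osProfile.linuxConfiguration.patchSettings.patchMode=ImageDefault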
7 Note
REST API
Use the Assess Patches API to assess available patches for your virtual machine.
POST on
`/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine/assessPatches?api-version=2020-12-01`
Azure PowerShell
Use the Invoke-AzVmPatchAssessment cmdlet to assess available patches for your
virtual machine.
Azure PowerShell
Azure CLI
Use az vm assess-patches to assess available patches for your virtual machine.
Azure CLI
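For example (resource names are placeholders):
az vm assess-patches --resource-group myResourceGroup --name myVM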
You can also trigger an on-demand patch installation for your VM at any time. Patch
installation can take a few minutes to complete and the status of the latest installation is
updated on the VM's instance view.
You can use on-demand patch installation to install all patches of one or more patch
classifications. You can also choose to include or exclude specific packages for Linux or
specific KB IDs for Windows. When triggering an on-demand patch installation, ensure
that you specify at least one patch classification or at least one patch (package for Linux,
KB ID for Windows) in the inclusion list.
REST API
Use the Install Patches API to install patches on your virtual machine.
POST on
`/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine/installPatches?api-version=2020-12-01`
Example request body for a Linux VM:
JSON
{
"maximumDuration": "PT1H",
"Setting": "IfRequired",
"linuxParameters": {
"classificationsToInclude": [
"Critical",
"Security"
]
}
}
Example request body for a Windows VM:
JSON
{
"maximumDuration": "PT1H",
"rebootSetting": "IfRequired",
"windowsParameters": {
"classificationsToInclude": [
"Critical",
"Security"
]
}
}
Azure PowerShell
Use the Invoke-AzVMInstallPatch cmdlet to install patches on your virtual machine.
Example to install certain packages on a Linux VM:
Azure PowerShell
Azure PowerShell
Example to install all Security patches on a Windows VM, while including and excluding
patches with specific KB IDs and excluding any patch that requires a reboot:
Azure PowerShell
Azure CLI
Use az vm install-patches to install patches on your virtual machine.
Azure CLI
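For example, a sketch of installing Critical and Security patches on a Linux VM (the classification flag name is taken from the CLI reference; resource names are placeholders):
az vm install-patches \
  --resource-group myResourceGroup \
  --name myVM \
  --maximum-duration PT2H \
  --reboot-setting IfRequired \
  --classifications-to-include-linux Critical Security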
Example to install all Critical and Security patches on a Windows VM, while excluding
any patch that requires a reboot:
Azure CLI
az vm install-patches --resource-group myResourceGroup --name myVM --maximum-duration PT2H --reboot-setting IfRequired --classifications-to-include-win Critical Security --exclude-kbs-requiring-reboot true
Azure will store the package-related updates within the customer repository for up to 90
days, depending on the available space. This allows customers to update their fleet by
leveraging Strict Safe Deployment for VMs that are up to three months behind on updates.
There is no action required for customers that have enabled Auto Patching. The platform
will install a package that is snapped to a point-in-time by default. In the event a
snapshot-based update cannot be installed, Azure will apply the latest package on the
VM to ensure the VM remains secure. The point-in-time updates will be consistent on all
VMs across regions to ensure homogeneity. Customers can view the published date
information related to the applied update in Azure Resource Graph and the Instance
View of the VM.
Next steps
Learn more about creating and managing Windows virtual machines
This article describes security best practices for VMs and operating systems.
The best practices are based on a consensus of opinion, and they work with current
Azure platform capabilities and feature sets. Because opinions and technologies can
change over time, this article will be updated to reflect those changes.
In most infrastructure as a service (IaaS) scenarios, Azure virtual machines (VMs) are the
main workload for organizations that use cloud computing. This fact is evident in hybrid
scenarios where organizations want to slowly migrate workloads to the cloud. In such
scenarios, follow the general security considerations for IaaS, and apply security best
practices to all your VMs.
7 Note
To improve the security of Linux VMs on Azure, you can integrate with Azure AD
authentication. When you use Azure AD authentication for Linux VMs, you
centrally control and enforce policies that allow or deny access to the VMs.
Best practice: Control VM access. Detail: Use Azure policies to establish conventions for
resources in your organization and create customized policies. Apply these policies to
resources, such as resource groups. VMs that belong to a resource group inherit its
policies.
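For example, a sketch of assigning a built-in policy definition at resource group scope with the Azure CLI (the display name and resource names are placeholders; any built-in or custom definition works the same way):
Azure CLI
# Look up the built-in definition by its display name, then assign it to a resource group.
definitionName=$(az policy definition list \
  --query "[?displayName=='Audit VMs that do not use managed disks'].name" -o tsv)

az policy assignment create \
  --name audit-unmanaged-disks \
  --policy "$definitionName" \
  --resource-group myResourceGroup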
If your organization has many subscriptions, you might need a way to efficiently manage
access, policies, and compliance for those subscriptions. Azure management groups
provide a level of scope above subscriptions. You organize subscriptions into
management groups (containers) and apply your governance conditions to those
groups. All subscriptions within a management group automatically inherit the
conditions applied to the group. Management groups give you enterprise-grade
management at a large scale no matter what type of subscriptions you might have.
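For example, a sketch of creating a management group and moving a subscription into it (the names and subscription ID are placeholders):
Azure CLI
az account management-group create --name Contoso-Platform --display-name "Contoso Platform"
az account management-group subscription add \
  --name Contoso-Platform \
  --subscription 00000000-0000-0000-0000-000000000000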
Best practice: Reduce variability in your setup and deployment of VMs. Detail: Use
Azure Resource Manager templates to strengthen your deployment choices and make it
easier to understand and inventory the VMs in your environment.
Best practice: Secure privileged access. Detail: Use a least privilege approach and
built-in Azure roles to enable users to access and set up VMs (see the example after this list):
Virtual Machine Contributor: Can manage VMs, but not the virtual network or
storage account to which they are connected.
Classic Virtual Machine Contributor: Can manage VMs created by using the classic
deployment model, but not the virtual network or storage account to which the
VMs are connected.
Security Admin: In Defender for Cloud only: Can view security policies, view
security states, edit security policies, view alerts and recommendations, dismiss
alerts and recommendations.
DevTest Labs User: Can view everything and connect, start, restart, and shut down
VMs.
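For example, a sketch of granting the Virtual Machine Contributor role at resource group scope (the assignee and resource names are placeholders; parameter names are taken from the CLI reference):
Azure CLI
az role assignment create \
  --assignee someone@contoso.com \
  --role "Virtual Machine Contributor" \
  --resource-group myResourceGroup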
Your subscription admins and coadmins can change this setting, making them
administrators of all the VMs in a subscription. Be sure that you trust all of your
subscription admins and coadmins to log in to any of your machines.
7 Note
We recommend that you consolidate VMs with the same lifecycle into the same
resource group. By using resource groups, you can deploy, monitor, and roll up
billing costs for your resources.
Organizations that control VM access and setup improve their overall VM security.
An availability set is a logical grouping that you can use in Azure to ensure that the VM
resources you place within it are isolated from each other when they're deployed in an
Azure datacenter. Azure ensures that the VMs you place in an availability set run across
multiple physical servers, compute racks, storage units, and network switches. If a
hardware or Azure software failure occurs, only a subset of your VMs are affected, and
your overall application continues to be available to your customers. Availability sets are
an essential capability when you want to build reliable cloud solutions.
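For example, a sketch of creating an availability set and placing a VM in it (names, credentials, and the image alias are placeholders):
Azure CLI
az vm availability-set create \
  --resource-group myResourceGroup \
  --name myAvailabilitySet \
  --platform-fault-domain-count 2 \
  --platform-update-domain-count 5

az vm create \
  --resource-group myResourceGroup \
  --name myVM1 \
  --image Ubuntu2204 \
  --availability-set myAvailabilitySet \
  --admin-username azureuser \
  --generate-ssh-keys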
You can integrate Microsoft Antimalware and partner solutions with Microsoft Defender
for Cloud for ease of deployment and built-in detections (alerts and incidents).
Best practice: Integrate your antimalware solution with Defender for Cloud to monitor
the status of your protection.
Detail: Manage endpoint protection issues with Defender for Cloud
Computers that are managed by Update Management use the following configurations
to perform assessment and update deployments:
Microsoft Monitoring Agent (MMA) for Windows or Linux
PowerShell Desired State Configuration (DSC) for Linux
Automation Hybrid Runbook Worker
Microsoft Update or Windows Server Update Services (WSUS) for Windows
computers
If you use Windows Update, leave the automatic Windows Update setting enabled.
Best practice: Ensure at deployment that images you built include the most recent
round of Windows updates.
Detail: Check for and install all Windows updates as a first step of every deployment.
This measure is especially important to apply when you deploy images that you build
yourself or that come from your own library. Although images from the Azure Marketplace are
updated automatically by default, there can be a lag time (up to a few weeks) after a
public release.
Best practice: Periodically redeploy your VMs to force a fresh version of the OS.
Detail: Define your VM with an Azure Resource Manager template so you can easily
redeploy it. Using a template gives you a patched and secure VM when you need it.
Test and dev systems must follow backup strategies that provide restore capabilities that
are similar to what users have grown accustomed to, based on their experience with on-
premises environments. Production workloads moved to Azure should integrate with
existing backup solutions when possible. Or, you can use Azure Backup to help address
your backup requirements.
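For example, a sketch of enabling Azure Backup for a VM with the default policy (vault, region, and resource names are placeholders):
Azure CLI
az backup vault create \
  --resource-group myResourceGroup \
  --name myRecoveryVault \
  --location eastus

az backup protection enable-for-vm \
  --resource-group myResourceGroup \
  --vault-name myRecoveryVault \
  --vm myVM \
  --policy-name DefaultPolicy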
Organizations that don't enforce software-update policies are more exposed to threats
that exploit known, previously fixed vulnerabilities. To comply with industry regulations,
companies must prove that they are diligent and using correct security controls to help
ensure the security of their workloads located in the cloud.
Software-update best practices for a traditional datacenter and Azure IaaS have many
similarities. We recommend that you evaluate your current software update policies to
include VMs located in Azure.
To monitor the security posture of your Windows and Linux VMs, use Microsoft
Defender for Cloud. In Defender for Cloud, safeguard your VMs by taking advantage of
the following capabilities:
Defender for Cloud can actively monitor for threats, and potential threats are exposed in
security alerts. Correlated threats are aggregated in a single view called a security
incident.
Defender for Cloud stores data in Azure Monitor logs. Azure Monitor logs provides a
query language and analytics engine that gives you insights into the operation of your
applications and resources. Data is also collected from Azure Monitor, management
solutions, and agents installed on virtual machines in the cloud or on-premises. This
shared functionality helps you form a complete picture of your environment.
Organizations that don't enforce strong security for their VMs remain unaware of
potential attempts by unauthorized users to circumvent security controls.
Monitor VM performance
Resource abuse can be a problem when VM processes consume more resources than
they should. Performance issues with a VM can lead to service disruption, which violates
the security principle of availability. This is particularly important for VMs that are
hosting IIS or other web servers, because high CPU or memory usage might indicate a
denial of service (DoS) attack. It’s imperative to monitor VM access not only reactively
while an issue is occurring, but also proactively against baseline performance as
measured during normal operation.
We recommend that you use Azure Monitor to gain visibility into your resource’s health.
Azure Monitor features:
Resource diagnostic log files: Monitors your VM resources and identifies potential
issues that might compromise performance and availability.
Azure Diagnostics extension: Provides monitoring and diagnostics capabilities on
Windows VMs. You can enable these capabilities by including the extension as part
of the Azure Resource Manager template.
Azure Disk Encryption for Linux VMs and Azure Disk Encryption for Windows VMs helps
you encrypt your Linux and Windows IaaS virtual machine disks. Azure Disk Encryption
uses the industry-standard DM-Crypt feature of Linux and the BitLocker feature of
Windows to provide volume encryption for the OS and the data disks. The solution is
integrated with Azure Key Vault to help you control and manage the disk-encryption
keys and secrets in your key vault subscription. The solution also ensures that all data on
the virtual machine disks is encrypted at rest in Azure Storage.
Best practice: Use a key encryption key (KEK) for an additional layer of security for
encryption keys. Add a KEK to your key vault.
Detail: Use the Add-AzKeyVaultKey cmdlet to create a key encryption key in the key
vault. You can also import a KEK from your on-premises hardware security module
(HSM) for key management. For more information, see the Key Vault documentation.
When a key encryption key is specified, Azure Disk Encryption uses that key to wrap the
encryption secrets before writing to Key Vault. Keeping an escrow copy of this key in an
on-premises key management HSM offers additional protection against accidental
deletion of keys.
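An Azure CLI equivalent of creating an HSM-backed KEK looks like the following sketch (vault and key names are placeholders; HSM-backed keys require the Key Vault Premium tier):
Azure CLI
az keyvault key create \
  --vault-name myKeyVault \
  --name myKEK \
  --kty RSA-HSM \
  --size 2048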
Best practice: Take a snapshot and/or backup before disks are encrypted. Backups
provide a recovery option if an unexpected failure happens during encryption.
Detail: VMs with managed disks require a backup before encryption occurs. After a
backup is made, you can use the Set-AzVMDiskEncryptionExtension cmdlet to encrypt
managed disks by specifying the -skipVmBackup parameter. For more information about
how to back up and restore encrypted VMs, see the Azure Backup article.
Best practice: To make sure the encryption secrets don’t cross regional boundaries,
Azure Disk Encryption needs the key vault and the VMs to be located in the same
region.
Detail: Create and use a key vault that is in the same region as the VM to be encrypted.
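For example, a sketch of creating the key vault in the VM's region, enabling it for disk encryption, and then encrypting the VM with the KEK (names and the region are placeholders):
Azure CLI
az keyvault create \
  --resource-group myResourceGroup \
  --name myKeyVault \
  --location eastus \
  --enabled-for-disk-encryption true

az vm encryption enable \
  --resource-group myResourceGroup \
  --name myVM \
  --disk-encryption-keyvault myKeyVault \
  --key-encryption-key myKEK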
When you apply Azure Disk Encryption, you can satisfy the following business needs:
Next steps
See Azure security best practices and patterns for more security best practices to use
when you’re designing, deploying, and managing your cloud solutions by using Azure.
The following resources are available to provide more general information about Azure
security and related Microsoft services:
Azure Security Team Blog - for up-to-date information on the latest in Azure
Security
Microsoft Security Response Center - where Microsoft security vulnerabilities,
including issues with Azure, can be reported; they can also be reported via email to [email protected]
Security Recommendations for Azure Marketplace Images
Article • 02/06/2024
Prior to uploading images to the Azure Marketplace, your image must be updated with
several security configuration requirements. These requirements help maintain a high
level of security for partner solution images across the Azure Marketplace.
Make sure to run a security vulnerability detection on your image prior to submitting it
to the Azure Marketplace. If you detect a security vulnerability in your own already
published image, you must inform your customers in a timely manner both of the
vulnerability's details and how to correct it in current deployments.
Category Check
Security Install all the latest security patches for the Linux distribution.
Security Follow industry guidelines to secure the VM image for the specific Linux
distribution.
Security Limit the attack surface by keeping minimal footprint with only necessary Windows
Server roles, features, services, and networking ports.
Security The VHD image only includes necessary locked accounts that do not have default
passwords that would allow interactive login; no back doors.
Security Disable firewall rules unless application functionally relies on them, such as a
firewall appliance.
Security Remove all sensitive information from the VHD image, such as test SSH keys,
known hosts file, log files, and unnecessary certificates.
Security Avoid using LVM. LVM is vulnerable to write-caching issues with VM hypervisors
and also increases data recovery complexity for users of your image.
Security Clear Bash/Shell history entries. This could include private information or plain-text
credentials for other systems.
Networking Include the SSH server by default. Set the SSH keep-alive in the sshd config with the
following option: ClientAliveInterval 180.
Networking Remove any custom network configuration from the image. Delete the resolv.conf:
rm /etc/resolv.conf .
Deployment Ensure Azure Support can provide our partners with serial console output when
needed and provide adequate timeout for OS disk mounting from cloud storage.
Add the following parameters to the image Kernel Boot Line: console=ttyS0
earlyprintk=ttyS0 rootdelay=300 .
Deployment No swap partition on the OS disk. Swap can be requested for creation on the local
resource disk by the Linux Agent.
Category Check
Security Use a secure OS base image. The VHD used for the source of any image based on
Windows Server must be from the Windows Server OS images provided through
Microsoft Azure.
Security Applications should not depend on restricted user names like administrator, root,
or admin.
Security Enable BitLocker Drive Encryption for both OS hard drives and data hard drives.
Security Limit the attack surface by keeping minimal footprint with only necessary Windows
Server roles, features, services, and networking ports enabled.
Security The VHD image only includes necessary locked accounts that do not have default
passwords that would allow interactive login; no back doors.
Security Disable firewall rules unless application functionally relies on them, such as a
firewall appliance.
Security Remove all sensitive information from the VHD image, including HOSTS files, log
files, and unnecessary certificates.
Even if your organization does not have images in the Azure marketplace, consider
checking your Windows and Linux image configurations against these
recommendations.
Best practices for protecting secrets
Article • 11/15/2023
This article provides guidance on protecting secrets. Follow this guidance to help ensure
you do not log sensitive information, such as credentials, into GitHub repositories or
continuous integration/continuous deployment (CI/CD) pipelines.
Best practices
These best practices are intended to be a resource for IT pros. This might include
designers, architects, developers, and testers who build and deploy secure Azure
solutions.
Next steps
Minimizing security risk is a shared responsibility. You need to be proactive in taking
steps to secure your workloads. Learn more about shared responsibility in the cloud.
See Azure security best practices and patterns for more security best practices to use
when you're designing, deploying, and managing your cloud solutions by using Azure.
This article provides an overview of how encryption is used in Microsoft Azure. It covers
the major areas of encryption, including encryption at rest, encryption in flight, and key
management with Azure Key Vault. Each section includes links to more detailed
information.
Data encryption at rest using AES 256 data encryption is available for services across the
software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service
(IaaS) cloud models. This article summarizes and provides resources to help you use the
Azure encryption options.
For a more detailed discussion of how data at rest is encrypted in Azure, see Azure Data
Encryption-at-Rest.
Client-side encryption
Client-side encryption is performed outside of Azure. It includes:
Server-side encryption
The three server-side encryption models offer different key management characteristics,
which you can choose according to your requirements:
Customer-managed keys: Gives you control over the keys, including Bring Your
Own Keys (BYOK) support, or allows you to generate new ones.
Azure Storage Service Encryption (SSE) can automatically encrypt data before it is
stored, and it automatically decrypts the data when you retrieve it. The process is
completely transparent to users. Storage Service Encryption uses 256-bit Advanced
Encryption Standard (AES) encryption , which is one of the strongest block ciphers
available. AES handles encryption, decryption, and key management transparently.
To learn more about and download the Azure Storage Client Library for .NET NuGet
package, see Windows Azure Storage 8.3.0 .
When you use client-side encryption with Key Vault, your data is encrypted using a one-
time symmetric Content Encryption Key (CEK) that is generated by the Azure Storage
client SDK. The CEK is encrypted using a Key Encryption Key (KEK), which can be either a
symmetric key or an asymmetric key pair. You can manage it locally or store it in Key
Vault. The encrypted data is then uploaded to Azure Storage.
To learn more about client-side encryption with Key Vault and get started with how-to
instructions, see Tutorial: Encrypt and decrypt blobs in Azure Storage by using Key Vault.
Finally, you can also use the Azure Storage Client Library for Java to perform client-side
encryption before you upload data to Azure Storage, and to decrypt the data when you
download it to the client. This library also supports integration with Key Vault for
storage account key management.
TDE is used to encrypt SQL Server, Azure SQL Database, and Azure Synapse Analytics
data files in real time, using a Database Encryption Key (DEK), which is stored in the
database boot record for availability during recovery.
TDE protects data and log files, using AES and Triple Data Encryption Standard (3DES)
encryption algorithms. Encryption of the database file is performed at the page level.
The pages in an encrypted database are encrypted before they are written to disk and
are decrypted when they’re read into memory. TDE is now enabled by default on newly
created Azure SQL databases.
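TDE can also be managed per database from the command line; a sketch with the Azure CLI (server, database, and resource group names are placeholders):
Azure CLI
az sql db tde set \
  --resource-group myResourceGroup \
  --server myserver \
  --database mydb \
  --status Enabled

az sql db tde show \
  --resource-group myResourceGroup \
  --server myserver \
  --database mydb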
CLE has built-in functions that you can use to encrypt data by using either symmetric or
asymmetric keys, the public key of a certificate, or a passphrase using 3DES.
Three types of keys are used in encrypting and decrypting data: the Master Encryption
Key (MEK), Data Encryption Key (DEK), and Block Encryption Key (BEK). The MEK is used
to encrypt the DEK, which is stored on persistent media, and the BEK is derived from the
DEK and the data block. If you are managing your own keys, you can rotate the MEK.
Encryption of data in transit
Azure offers many mechanisms for keeping data private as it moves from one location
to another.
Perfect Forward Secrecy (PFS) protects connections between customers’ client systems
and Microsoft cloud services by using unique keys. Connections also support RSA-based
2,048-bit key lengths, ECC 256-bit key lengths, SHA-384 message authentication, and
AES-256 data encryption. This combination makes it difficult for someone to intercept
and access data that is in transit.
SMB 3.0, which is used to access Azure Files shares, supports encryption, and it's available
in Windows Server 2012 R2, Windows 8, Windows 8.1, and Windows 10. It allows cross-
region access and even access on the desktop.
Client-side encryption encrypts the data before it’s sent to your Azure Storage instance,
so that it’s encrypted as it travels across the network.
By default, after SMB encryption is turned on for a share or server, only SMB 3.0 clients
are allowed to access the encrypted shares.
RDP sessions
You can connect and sign in to a VM by using the Remote Desktop Protocol (RDP) from
a Windows client computer, or from a Mac with an RDP client installed. Data in transit
over the network in RDP sessions can be protected by TLS.
Site-to-site VPNs use IPsec for transport encryption. Azure VPN gateways use a set of
default proposals. You can configure Azure VPN gateways to use a custom IPsec/IKE
policy with specific cryptographic algorithms and key strengths, rather than the Azure
default policy sets.
Point-to-site VPNs
Point-to-site VPNs allow individual client computers access to an Azure virtual network.
The Secure Socket Tunneling Protocol (SSTP) is used to create the VPN tunnel. It can
traverse firewalls (the tunnel appears as an HTTPS connection). You can use your own
internal public key infrastructure (PKI) root certificate authority (CA) for point-to-site
connectivity.
You can configure a point-to-site VPN connection to a virtual network by using the
Azure portal with certificate authentication or PowerShell.
To learn more about point-to-site VPN connections to Azure virtual networks, see:
Site-to-site VPNs
You can use a site-to-site VPN gateway connection to connect your on-premises
network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This
type of connection requires an on-premises VPN device that has an external-facing
public IP address assigned to it.
You can configure a site-to-site VPN connection to a virtual network by using the Azure
portal, PowerShell, or Azure CLI.
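For example, a sketch of creating the IPsec connection once the virtual network gateway and local network gateway already exist (names and the shared key are placeholders):
Azure CLI
az network vpn-connection create \
  --resource-group myResourceGroup \
  --name mySiteToSiteConnection \
  --vnet-gateway1 myVnetGateway \
  --local-gateway2 myOnPremisesGateway \
  --shared-key "ReplaceWithAPreSharedKey"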
To learn more about encryption of data in transit in Data Lake, see Encryption of data in
Data Lake Store.
Key Vault relieves organizations of the need to configure, patch, and maintain hardware
security modules (HSMs) and key management software. When you use Key Vault, you
maintain control. Microsoft never sees your keys, and applications don’t have direct
access to them. You can also import or generate keys in HSMs.
Next steps
Azure security overview
Azure network security overview
Azure database security overview
Azure virtual machines security overview
Data encryption at rest
Data security and encryption best practices
Key management in Azure
Article • 06/29/2023
7 Note
Zero Trust is a security strategy comprising three principles: "Verify explicitly", "Use
least privilege access", and "Assume breach". Data protection, including key
management, supports the "use least privilege access" principle. For more
information, see What is Zero Trust?
Platform-managed keys (PMKs) are encryption keys generated, stored, and managed
entirely by Azure. Customers do not interact with PMKs. The keys used for Azure Data
Encryption-at-Rest, for instance, are PMKs by default.
Customer-managed keys (CMK), on the other hand, are keys read, created, deleted,
updated, and/or administered by one or more customers. Keys stored in a customer-
owned key vault or hardware security module (HSM) are CMKs. Bring Your Own Key
(BYOK) is a CMK scenario in which a customer imports (brings) keys from an outside
storage location into an Azure key management service (see the Azure Key Vault: Bring
your own key specification).
A specific type of customer-managed key is the "key encryption key" (KEK). A KEK is a
primary key that controls access to one or more encryption keys that are themselves
encrypted.
Customer-managed keys can be stored on-premises or, more commonly, in a cloud key
management service.
Service Limits
Managed HSM, Dedicated HSM, and Payments HSM offer dedicated capacity. Key Vault
Standard and Premium are multi-tenant offerings and have throttling limits. For service
limits, see Key Vault service limits.
Encryption-At-Rest
Azure Key Vault and Azure Key Vault Managed HSM have integrations with Azure
Services and Microsoft 365 for Customer Managed Keys, meaning customers may use
their own keys in Azure Key Vault and Azure Key Managed HSM for encryption-at-rest of
data stored in these services. Dedicated HSM and Payments HSM are Infrastructure-as-
Service offerings and do not offer integrations with Azure Services. For an overview of
encryption-at-rest with Azure Key Vault and Managed HSM, see Azure Data Encryption-
at-Rest.
APIs
Dedicated HSM and Payments HSM support the PKCS#11, JCE/JCA, and KSP/CNG APIs,
but Azure Key Vault and Managed HSM do not. Azure Key Vault and Managed HSM use
the Azure Key Vault REST API and offer SDK support. For more information on the Azure
Key Vault API, see Azure Key Vault REST API Reference.
What's next
How to Choose the Right Key Management Solution
Azure Key Vault
Azure Managed HSM
Azure Dedicated HSM
Azure Payment HSM
What is Zero Trust?
How to choose the right key
management solution
Article • 02/11/2024
Azure offers multiple solutions for cryptographic key storage and management in the
cloud: Azure Key Vault (standard and premium offerings), Azure Managed HSM, Azure
Dedicated HSM, and Azure Payment HSM. It may be overwhelming for customers to
decide which key management solution is correct for them. This paper aims to help
customers navigate this decision-making process by presenting the range of solutions
based on three different considerations: scenarios, requirements, and industry.
To begin narrowing down a key management solution, follow the flowchart based on
common high-level requirements and key management scenarios. Alternatively, use the
table based on specific customer requirements that directly follows it. If either provides
multiple products as solutions, use a combination of the flowchart and table to help in
making a final decision. If curious about what other customers in the same industry are
using, read the table of common key management solutions by industry segment. To learn
more about a specific solution, use the links at the end of the document.
Encryption at rest is typically enabled for Azure IaaS, PaaS, and SaaS models.
Applications such as Microsoft 365; Microsoft Purview Information Protection;
platform services in which the cloud is used for storage, analytics, and service bus
functionality; and infrastructure services in which operating systems and applications
are hosted and deployed in the cloud use encryption at rest. Customer managed keys
for encryption at rest are used with Azure Storage and Microsoft Entra ID. For the highest
security, keys should be HSM-backed 3072-bit or 4096-bit RSA keys. For more information about
encryption at rest, see Azure Data Encryption at Rest.
SSL/TLS offload is supported on Azure Managed HSM and Azure Dedicated HSM. Azure
Managed HSM offers improved high availability, security, and the best price point for F5
and Nginx.
Lift and shift refers to scenarios where an on-premises PKCS#11 application is migrated
to Azure Virtual Machines, for example to run software such as Oracle TDE in Azure Virtual
Machines. Lift and shift requiring payment PIN processing is supported by Azure
Payment HSM. All other scenarios are supported by Azure Dedicated HSM. Legacy
APIs and libraries such as PKCS11, JCA/JCE, and CNG/KSP are only supported by
Azure Dedicated HSM.
Payment PIN processing includes allowing card and mobile payment authorization
and 3D-Secure authentication; PIN generation, management, and validation;
payment credential issuing for cards, wearables, and connected devices; securing
keys and authentication data; and sensitive data protection for point-to-point
encryption, security tokenization, and EMV payment tokenization. This also includes
certifications such as PCI DSS, PCI 3DS, and PCI PIN. These are supported by Azure
Payment HSM.
The flowchart result is a starting point to identify the solution that best matches your
needs.
Compare other customer requirements
Azure provides multiple key management solutions to allow customers to choose a
product based on both high-level requirements and management responsibilities. There is
a spectrum of management responsibility, ranging from Azure Key Vault and Azure
Managed HSM, which have less customer responsibility, to Azure Dedicated HSM and
Azure Payment HSM, which have the most customer responsibility.
This trade-off of management responsibility between the customer and Microsoft and
other requirements is detailed in the table below.
Provisioning and hosting are managed by Microsoft across all solutions. Key generation
and management, roles and permissions granting, and monitoring and auditing are the
responsibility of the customer across all solutions.
Use the table to compare all the solutions side by side. Begin from top to bottom,
answering each question found on the left-most column to help you choose the solution
that meets all your needs, including management overhead and costs.
What level of compliance do you need?
Azure Key Vault Standard: FIPS 140-2 Level 1
Azure Key Vault Premium: FIPS 140-2 Level 3, PCI DSS, PCI 3DS**
Azure Managed HSM: FIPS 140-2 Level 3, PCI DSS, PCI 3DS
Azure Dedicated HSM: FIPS 140-2 Level 3, HIPAA, PCI DSS, PCI 3DS, eIDAS CC EAL4+, GSMA
Azure Payment HSM: FIPS 140-2 Level 3, PCI PTS HSM v3, PCI DSS, PCI 3DS, PCI PIN
What are your use cases?
Azure Key Vault Standard: Encryption at rest, CMK, custom
Azure Key Vault Premium: Encryption at rest, CMK, custom
Azure Managed HSM: Encryption at rest, TLS offload, CMK, custom
Azure Dedicated HSM: PKCS11, TLS offload, code/document signing, custom
Azure Payment HSM: Payment PIN processing, custom
Common key management solutions by industry segment:
I am a service provider for financial services, an issuer, a card acquirer, a card
network, a payment gateway/PSP, or a 3DS solution provider looking for a single-tenant
service that can meet PCI and multiple major compliance frameworks.
Recommended solution: Azure Payment HSM. It provides FIPS 140-2 Level 3, PCI HSM v3,
PCI DSS, PCI 3DS, and PCI PIN compliance. It provides key sovereignty and single
tenancy, which are common internal compliance requirements around payment processing,
and full payment transaction and PIN processing support.
I am an early-stage startup customer looking to prototype a cloud-native application.
Recommended solution: Azure Key Vault Standard. It provides software-backed keys at an
economy price.
I am a startup customer looking to produce a cloud-native application.
Recommended solution: Azure Key Vault Premium or Azure Managed HSM. Both provide
HSM-backed keys* and are the best solutions for building cloud-native applications.
I am an IaaS customer wanting to move my application to use Azure VMs/HSMs.
Recommended solution: Azure Dedicated HSM. It supports SQL IaaS customers and is the
only solution that supports PKCS11 and custom non-cloud-native applications.
Azure Managed HSM: A FIPS 140-2 Level 3 validated, PCI compliant, single-tenant HSM
offering that gives customers full control of an HSM for encryption-at-rest, Keyless
SSL/TLS offload, and custom applications. Azure Managed HSM is the only key
management solution offering confidential keys. Customers receive a pool of three HSM
partitions—together acting as one logical, highly available HSM appliance—fronted by a
service that exposes crypto functionality through the Key Vault API. Microsoft handles the
provisioning, patching, maintenance, and hardware failover of the HSMs, but doesn't have
access to the keys themselves, because the service executes within Azure's Confidential
Compute Infrastructure. Azure Managed HSM is integrated with the Azure SQL, Azure
Storage, and Azure Information Protection PaaS services and offers support for Keyless TLS
with F5 and Nginx. For more information, see What is Azure Key Vault Managed HSM?
Azure Dedicated HSM: A FIPS 140-2 Level 3 validated single-tenant bare metal HSM
offering that lets customers lease a general-purpose HSM appliance that resides in
Microsoft datacenters. The customer has complete ownership over the HSM device and is
responsible for patching and updating the firmware when required. Microsoft has no
permissions on the device or access to the key material, and Azure Dedicated HSM is not
integrated with any Azure PaaS offerings. Customers can interact with the HSM using the
PKCS#11, JCE/JCA, and KSP/CNG APIs. This offering is most useful for legacy lift-and-shift
workloads, PKI, SSL Offloading and Keyless TLS (supported integrations include F5, Nginx,
Apache, Palo Alto, IBM GW and more), OpenSSL applications, Oracle TDE, and Azure SQL
TDE IaaS. For more information, see What is Azure Dedicated HSM?
Azure Payment HSM: A FIPS 140-2 Level 3, PCI HSM v3, validated single-tenant bare
metal HSM offering that lets customers lease a payment HSM appliance in Microsoft
datacenters for payments operations, including payment PIN processing, payment
credential issuing, securing keys and authentication data, and sensitive data protection.
The service is PCI DSS, PCI 3DS, and PCI PIN compliant. Azure Payment HSM offers single-
tenant HSMs for customers to have complete administrative control and exclusive access
to the HSM. Once the HSM is allocated to a customer, Microsoft has no access to
customer data. Likewise, when the HSM is no longer required, customer data is zeroized
and erased as soon as the HSM is released, to ensure complete privacy and security is
maintained. For more information, see About Azure Payment HSM.
7 Note
* Azure Key Vault Premium allows the creation of both software-protected and HSM
protected keys. If using Azure Key Vault Premium, check to ensure that the key
created is HSM protected.
What's next
Key management in Azure
Azure Key Vault
Azure Managed HSM
Azure Dedicated HSM
Azure Payment HSM
What is Zero Trust?
Double encryption is where two or more independent layers of encryption are enabled
to protect against compromises of any one layer of encryption. Using two layers of
encryption mitigates threats that come with encrypting data. For example:
Azure provides double encryption for data at rest and data in transit.
Data at rest
Microsoft’s approach to enabling two layers of encryption for data at rest is:
Encryption at rest using customer-managed keys. You provide your own key for
data encryption at rest. You can bring your own keys to your Key Vault (BYOK –
Bring Your Own Key), or generate new keys in Azure Key Vault to encrypt the
desired resources.
Infrastructure encryption using platform-managed keys. By default, data is
automatically encrypted at rest using platform-managed encryption keys (see the sketch after this list).
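As a sketch of the storage case (the account name is a placeholder and the infrastructure-encryption flag name is taken from the storage CLI reference), the second layer of encryption at rest is requested when the account is created:
Azure CLI
az storage account create \
  --resource-group myResourceGroup \
  --name mydoubleencrypted001 \
  --location eastus \
  --sku Standard_LRS \
  --require-infrastructure-encryption true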
Data in transit
Microsoft’s approach to enabling two layers of encryption for data in transit is:
Transit encryption using Transport Layer Security (TLS) 1.2 to protect data when
it’s traveling between the cloud services and you. All traffic leaving a datacenter is
encrypted in transit, even if the traffic destination is another domain controller in
the same region. TLS 1.2 is the default security protocol used. TLS provides strong
authentication, message privacy, and integrity (enabling detection of message
tampering, interception, and forgery), interoperability, algorithm flexibility, and
ease of deployment and use.
Additional layer of encryption provided at the infrastructure layer. Whenever
Azure customer traffic moves between datacenters-- outside physical boundaries
not controlled by Microsoft or on behalf of Microsoft-- a data-link layer encryption
method using the IEEE 802.1AE MAC Security Standards (also known as MACsec)
is applied from point-to-point across the underlying network hardware. The
packets are encrypted and decrypted on the devices before being sent, preventing
physical “man-in-the-middle” or snooping/wiretapping attacks. Because this
technology is integrated on the network hardware itself, it provides line rate
encryption on the network hardware with no measurable link latency increase. This
MACsec encryption is on by default for all Azure traffic traveling within a region or
between regions, and no action is required on customers’ part to enable.
Next steps
Learn how encryption is used in Azure.
This article outlines the specific root and subordinate Certificate Authorities (CAs) that
are employed by Azure's service endpoints. It is important to note that this list is distinct
from the trust anchors provided on Azure VMs and hosted services, which leverage the
trust anchors provided by the operating systems themselves. The scope includes
government and national clouds. The minimum requirements for public key encryption
and signature algorithms, links to certificate downloads and revocation lists, and
information about key concepts are provided below the CA details tables. The host
names for the URIs that should be added to your firewall allowlists are also provided.
The Serial Number (top string in the table) contains the hexadecimal value of the
certificate serial number.
The Thumbprint (bottom string in the table) is the SHA1 thumbprint.
CAs listed in italics are the most recently added CAs.
Signature algorithms:
ES256
ES384
ES512
RS256
RS384
RS512
Elliptical curves:
P256
P384
P521
Key sizes:
ECDSA 256
ECDSA 384
ECDSA 521
RSA 2048
RSA 3072
RSA 4096
AIA:
cacerts.digicert.com
cacerts.digicert.cn
cacerts.geotrust.com
www.microsoft.com
CRL:
crl.microsoft.com
crl3.digicert.com
crl4.digicert.com
crl.digicert.cn
cdp.geotrust.com
mscrl.microsoft.com
www.microsoft.com
OCSP:
ocsp.msocsp.com
ocsp.digicert.com
ocsp.digicert.cn
oneocsp.microsoft.com
status.geotrust.com
Certificate Pinning
Certificate Pinning is a security technique where only authorized, or pinned, certificates
are accepted when establishing a secure session. Any attempt to establish a secure
session using a different certificate is rejected. Learn about the history and implications
of certificate pinning.
Java Applications
To determine if the Microsoft ECC Root Certificate Authority 2017 and Microsoft RSA
Root Certificate Authority 2017 root certificates are trusted by your Java application,
you can check the list of trusted root certificates used by the Java Virtual Machine (JVM).
Bash
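A sketch of listing the JVM's trusted roots (assuming a JDK 9 or later keystore layout and the default changeit store password):
keytool -list -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit | grep -i "Microsoft"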
If you're unsure of the path, you can find it by running the following
command:
Bash
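For example, the JVM's installation directory (under which recent JDKs keep lib/security/cacerts) can be printed with:
java -XshowSettings:properties -version 2>&1 | grep 'java.home'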
3. Look for the Microsoft RSA Root Certificate Authority 2017 in the output. It
should look something like this:
If the Microsoft ECC Root Certificate Authority 2017 and Microsoft RSA
Root Certificate Authority 2017 root certificates are trusted, they should
appear in the list of trusted root certificates used by the JVM.
If it's not in the list, you'll need to add it.
The output should look like the following sample:
Bash
...
Microsoft ECC Root Certificate Authority 2017, 20-Aug-2022, Root
CA,
Microsoft RSA Root Certificate Authority 2017, 20-Aug-2022, Root
CA,
...
4. To add a root certificate to the trusted root certificate store in Java, you can use the
keytool utility. The following example adds the Microsoft RSA Root Certificate Authority 2017 certificate:
Bash
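A sketch, assuming the certificate has been downloaded from the Microsoft PKI repository to the file shown (the file name is a placeholder and changeit is the default keystore password):
keytool -importcert \
  -alias MicrosoftRSARootCA2017 \
  -file MicrosoftRSARootCertificateAuthority2017.crt \
  -keystore "$JAVA_HOME/lib/security/cacerts" \
  -storepass changeit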
7 Note
Past changes
The CA/Browser Forum updated the Baseline Requirements to require all publicly
trusted Public Key Infrastructures (PKIs) to end usage of the SHA-1 hash algorithms for
the Online Certificate Status Protocol (OCSP) on May 31, 2022. Microsoft updated all
remaining OCSP Responders that used the SHA-1 hash algorithm to use the SHA-256
hash algorithm. View the Sunset for SHA-1 OCSP signing article for additional
information.
Microsoft updated Azure services to use TLS certificates from a different set of Root
Certificate Authorities (CAs) on February 15, 2021, to comply with changes set forth by
the CA/Browser Forum Baseline Requirements. Some services finalized these updates in
2022. View the Azure TLS certificate changes article for additional information.
June 27, 2024: Removed the following CAs, which were superseded by both
versions of Microsoft Azure ECC TLS Issuing CAs 03, 04, 07, 08.
Next steps
To learn more about Certificate Authorities and PKI, see:
Though web browsers such as Chrome and Firefox were among the first applications to
implement this technique, the range of use cases rapidly expanded. Internet of Things
(IoT) devices, iOS and Android mobile apps, and a disparate collection of software
applications began using this technique to defend against Man-in-the-Middle attacks.
For several years, certificate pinning was considered good security practice. Oversight
over the public Public Key Infrastructure (PKI) landscape has improved with transparency
into issuance practices of publicly trusted CAs.
If your application explicitly specifies a list of acceptable CAs, you might periodically
need to update pinned certificates when Certificate Authorities change or expire. To
detect certificate pinning, we recommend taking the following steps:
If you're an application developer, search your source code for any of the following
references for the CA that is changing or expiring. If there's a match, update the
application to include the missing CAs.
Certificate thumbprints
Subject Distinguished Names
Common Names
Serial numbers
Public keys
Other certificate properties
If your custom client application integrates with Azure APIs or other Azure services
and you're unsure if it uses certificate pinning, check with the application vendor.
As there's no single web standard for how certificate pinning is performed, we can't
offer direct guidance in detecting its usage. While we don't recommend against
certificate pinning, customers should be aware of the limitations this practice creates if
they choose to use it.
Next steps
Check the Azure Certificate Authority details for upcoming changes
Review the Azure Security Fundamentals best practices and patterns
Sunset for SHA-1 Online Certificate
Status Protocol signing
Article • 04/21/2023
) Important
This article was published concurrent with the change described, and is not being
updated. For up-to-date information about CAs, see Azure Certificate Authority
details.
Microsoft is updating the Online Certificate Status Protocol (OCSP) service to comply
with a recent change to the Certificate Authority / Browser Forum (CA/B Forum)
Baseline Requirements. This change requires that all publicly-trusted Public Key
Infrastructures (PKIs) end usage of the SHA-1 hash algorithms for OCSP responses by
May 31, 2022.
Microsoft leverages certificates from multiple PKIs to secure its services. Many of those
certificates already use OCSP responses that use the SHA-256 hash algorithm. This
change brings all remaining PKIs used by Microsoft into compliance with this new
requirement.
After May 31, 2022, clients that don't support SHA-256 hashes will be unable to validate
the revocation status of a certificate, which could result in a failure in the client,
depending on the configuration.
If you're unable to update your legacy client to one that supports SHA-256, you can
disable revocation checking to bypass OCSP until you update your client. If your
Transport Layer Security (TLS) stack is older than 2015, you should review your
configuration for potential incompatibilities.
Next steps
If you have questions, contact us through support .
Azure TLS certificate changes
Article • 05/23/2023
) Important
This article was published concurrent with the TLS certificate change, and is not
being updated. For up-to-date information about CAs, see Azure Certificate
Authority details.
Microsoft uses TLS certificates from the set of Root Certificate Authorities (CAs) that
adhere to the CA/Browser Forum Baseline Requirements. All Azure TLS/SSL endpoints
contain certificates chaining up to the Root CAs provided in this article. Changes to
Azure endpoints began transitioning in August 2020, with some services completing
their updates in 2022. All newly created Azure TLS/SSL endpoints contain updated
certificates chaining up to the new Root CAs.
All Azure services are impacted by this change. Details for some services are listed
below:
Azure Active Directory (Azure AD) services began this transition on July 7, 2020.
For the most up-to-date information about the TLS certificate changes for Azure
IoT services, refer to this Azure IoT blog post .
Azure IoT Hub began this transition in February 2023 with an expected
completion in October 2023.
Azure IoT Central will begin this transition in July 2023.
Azure IoT Hub Device Provisioning Service will begin this transition in January
2024.
Azure Cosmos DB began this transition in July 2022 with an expected completion
in October 2022.
Details on Azure Storage TLS certificate changes can be found in this Azure
Storage blog post .
Azure Cache for Redis is moving away from TLS certificates issued by Baltimore
CyberTrust Root starting May 2022, as described in this Azure Cache for Redis
article
Azure Instance Metadata Service has an expected completion in May 2022, as
described in this Azure Governance and Management blog post .
What changed?
Prior to the change, most of the TLS certificates used by Azure services chained up to
the Baltimore CyberTrust Root. After the change, TLS certificates used by Azure services
chain up to one of several new Root CAs; see the Azure Certificate Authority details
article for the current list.
If your application pins certificates or restricts outbound traffic, take the following
steps:
Search your source code for the thumbprint, Common Name, and other certificate
properties of any of the Microsoft IT TLS CAs in the Microsoft PKI repository . If
there's a match, then your application will be impacted. To resolve this problem,
update the source code to include the new CAs. As a best practice, ensure that CAs
can be added or edited on short notice. Industry regulations require CA certificates
to be replaced within seven days of the change, and hence customers relying on
pinning need to react swiftly.
If you have an application that integrates with Azure APIs or other Azure services
and you're unsure if it uses certificate pinning, check with the application vendor.
Different operating systems and language runtimes that communicate with Azure
services may require more steps to correctly build the certificate chain with these
new roots (a quick verification sketch in Python follows the URL list below):
Linux: Many distributions require you to add CAs to /etc/ssl/certs. For specific
instructions, refer to the distribution’s documentation.
Java: Ensure that the Java key store contains the CAs listed above.
Windows running in disconnected environments: Systems running in
disconnected environments will need to have the new roots added to the
Trusted Root Certification Authorities store, and the intermediates added to the
Intermediate Certification Authorities store.
Android: Check the documentation for your device and version of Android.
Other hardware devices, especially IoT: Contact the device manufacturer.
If you have an environment where firewall rules are set to allow outbound calls to
only specific Certificate Revocation List (CRL) download and/or Online Certificate
Status Protocol (OCSP) verification locations, you'll need to allow the following CRL
and OCSP URLs. For a complete list of CRL and OCSP URLs used in Azure, see the
Azure CA details article.
http://crl3.digicert.com
http://crl4.digicert.com
http://ocsp.digicert.com
http://crl.microsoft.com
http://oneocsp.microsoft.com
http://ocsp.msocsp.com
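As a quick sanity check after updating trust stores, the following standard-library Python sketch completes a TLS handshake against an Azure endpoint using the local trust store and prints the issuer of the served certificate. The host name is only an example; substitute the endpoint your application actually calls.

```python
import socket
import ssl

host = "management.azure.com"  # example endpoint; use the one your app calls
context = ssl.create_default_context()  # uses the platform's trust store

with socket.create_connection((host, 443), timeout=10) as sock:
    # Raises ssl.SSLCertVerificationError if a chain to a trusted root can't be built.
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("issuer:", dict(pair[0] for pair in cert["issuer"]))
```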
Next steps
If you have questions, contact us through support .
Azure data security and encryption best
practices
Article • 03/27/2024
This article describes best practices for data security and encryption.
The best practices are based on a consensus of opinion, and they work with current
Azure platform capabilities and feature sets. Opinions and technologies change over
time and this article is updated on a regular basis to reflect those changes.
Protect data
To help protect data in the cloud, you need to account for the possible states in which
your data can occur, and what controls are available for that state. Best practices for
Azure data security and encryption relate to the following data states:
At rest: This includes all information storage objects, containers, and types that
exist statically on physical media, whether magnetic or optical disk.
In transit: When data is being transferred between components, locations, or
programs, it's in transit. Examples are transfer over the network, across a service
bus (from on-premises to cloud and vice-versa, including hybrid connections such
as ExpressRoute), or during an input/output process.
In use: When data is being processed, confidential computing VMs based on
specialized AMD and Intel chipsets keep the data encrypted in memory by using
hardware-managed keys.
Azure Key Vault helps safeguard cryptographic keys and secrets that cloud applications
and services use. Key Vault streamlines the key management process and enables you to
maintain control of keys that access and encrypt your data. Developers can create keys
for development and testing in minutes, and then migrate them to production keys.
Security administrators can grant (and revoke) permission to keys, as needed.
You can use Key Vault to create multiple secure containers, called vaults. These vaults are
backed by HSMs. Vaults help reduce the chances of accidental loss of security
information by centralizing the storage of application secrets. Key vaults also control
and log the access to anything stored in them. Azure Key Vault can handle requesting
and renewing Transport Layer Security (TLS) certificates. It provides features for a robust
solution for certificate lifecycle management.
Azure Key Vault is designed to support application keys and secrets. Key Vault is not
intended to be a store for user passwords.
Best practice: Grant access to users, groups, and applications at a specific scope. Detail:
Use Azure RBAC predefined roles. For example, to grant access to a user to manage key
vaults, you would assign the predefined role Key Vault Contributor to this user at a
specific scope. The scope in this case would be a subscription, a resource group, or just
a specific key vault. If the predefined roles don't fit your needs, you can define your own
roles.
Best practice: Control what users have access to. Detail: Access to a key vault is
controlled through two separate interfaces: management plane and data plane. The
management plane and data plane access controls work independently.
Use Azure RBAC to control what users have access to. For example, if you want to grant
an application access to use keys in a key vault, you only need to grant data plane
access permissions by using key vault access policies, and no management plane access
is needed for this application. Conversely, if you want a user to be able to read vault
properties and tags but not have any access to keys, secrets, or certificates, you can
grant this user read access by using Azure RBAC, and no access to the data plane is
required.
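As an illustration of data-plane-only access, the following sketch uses the Azure SDK for Python (azure-identity and azure-keyvault-keys) to read a key with an identity that holds only a data-plane role such as Key Vault Crypto User and no management-plane rights. The vault URL and key name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()  # picks up managed identity, CLI login, and so on
client = KeyClient(
    vault_url="https://<your-vault-name>.vault.azure.net",
    credential=credential,
)

# Succeeds with data-plane read access only; no management-plane role is required.
key = client.get_key("data-encryption-key")
print(key.name, key.key_type)
```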
Best practice: Store certificates in your key vault. Your certificates are of high value. In
the wrong hands, your application's security or the security of your data can be
compromised. Detail: Azure Resource Manager can securely deploy certificates stored in
Azure Key Vault to Azure VMs when the VMs are deployed. By setting appropriate
access policies for the key vault, you also control who gets access to your certificate.
Another benefit is that you manage all your certificates in one place in Azure Key Vault.
See Deploy Certificates to VMs from customer-managed Key Vault for more information.
Best practice: Ensure that you can recover a deletion of key vaults or key vault objects.
Detail: Deletion of key vaults or key vault objects can be inadvertent or malicious.
Enable the soft delete and purge protection features of Key Vault, particularly for keys
that are used to encrypt data at rest. Deletion of these keys is equivalent to data loss;
with these features enabled, you can recover deleted vaults and vault objects if needed.
Practice Key Vault recovery operations on a regular basis.
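The sketch below, again using azure-identity and azure-keyvault-keys, shows what such a recovery drill can look like when soft delete is enabled: list the keys currently in the soft-deleted state and recover one of them. The vault URL and key name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://<your-vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Enumerate keys currently in the soft-deleted state.
for deleted in client.list_deleted_keys():
    print("soft-deleted key:", deleted.name)

# Recover one of them; the poller completes when the key is usable again.
client.begin_recover_deleted_key("tde-protector").result()
```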
7 Note
If a user has contributor permissions (Azure RBAC) to a key vault management
plane, they can grant themselves access to the data plane by setting a key vault
access policy. We recommend that you tightly control who has contributor access
to your key vaults, to ensure that only authorized persons can access and manage
your key vaults, keys, secrets, and certificates.
7 Note
Because the vast majority of attacks target the end user, the endpoint becomes one of
the primary points of attack. An attacker who compromises the endpoint can use the
user's credentials to gain access to the organization's data. Most endpoint attacks take
advantage of the fact that users are administrators in their local workstations.
Best practice: Ensure endpoint protection. Detail: Enforce security policies across all
devices that are used to consume data, regardless of the data location (cloud or on-
premises).
Best practice: Apply disk encryption to help safeguard your data. Detail: Use Azure Disk
Encryption for Linux VMs or Azure Disk Encryption for Windows VMs. Disk Encryption
combines the industry-standard Linux dm-crypt or Windows BitLocker feature to
provide volume encryption for the OS and the data disks.
Azure Storage and Azure SQL Database encrypt data at rest by default, and many
services offer encryption as an option. You can use Azure Key Vault to maintain control
of keys that access and encrypt your data. See Azure resource providers encryption
model support to learn more.
Best practices: Use encryption to help mitigate risks related to unauthorized data
access. Detail: Encrypt your drives before you write sensitive data to them.
Organizations that don't enforce data encryption are more exposed to data-
confidentiality issues. For example, unauthorized or rogue users might steal data in
compromised accounts or gain unauthorized access to data stored in clear format. To
comply with industry regulations, companies also must prove that they are diligent and
using correct security controls to enhance their data security.
For data moving between your on-premises infrastructure and Azure, consider
appropriate safeguards such as HTTPS or VPN. When sending encrypted traffic between
an Azure virtual network and an on-premises location over the public internet, use Azure
VPN Gateway.
Following are best practices specific to using Azure VPN Gateway, SSL/TLS, and HTTPS.
Best practice: Move larger data sets over a dedicated high-speed WAN link. Detail: Use
ExpressRoute. If you choose to use ExpressRoute, you can also encrypt the data at the
application level by using SSL/TLS or other protocols for added protection.
Best practice: Interact with Azure Storage through the Azure portal. Detail: All
transactions occur via HTTPS. You can also use Storage REST API over HTTPS to interact
with Azure Storage.
Organizations that fail to protect data in transit are more susceptible to man-in-the-
middle attacks, eavesdropping, and session hijacking. These attacks can be the first step
in gaining access to confidential data.
Protect data in use
Lessen the need for trust: Running workloads on the cloud requires trust. You give this
trust to various providers enabling different components of your application.
Reducing the attack surface: The Trusted Computing Base (TCB) refers to all of a
system's hardware, firmware, and software components that provide a secure
environment. The components inside the TCB are considered "critical." If one
component inside the TCB is compromised, the entire system's security may be
jeopardized. A lower TCB means higher security. There's less risk of exposure to various
vulnerabilities, malware, attacks, and malicious people.
Prevent unauthorized access: Run sensitive data in the cloud. Trust that Azure
provides the best data protection possible, with little to no change from what gets
done today.
Meet regulatory compliance: Migrate to the cloud and keep full control of data to
satisfy government regulations for protecting personal information and secure
organizational IP.
Ensure secure and untrusted collaboration: Tackle industry-wide work-scale
problems by combining data across organizations, even competitors, to unlock
broad data analytics and deeper insights.
Isolate processing: Offer a new wave of products that remove liability on private
data with blind processing. User data can't even be retrieved by the service
provider.
Classification is identifiable at all times, regardless of where the data is stored or with
whom it's shared. The labels include visual markings such as a header, footer, or
watermark. Metadata is added to files and email headers in clear text. The clear text
ensures that other services, such as solutions to prevent data loss, can identify the
classification and take appropriate action.
The protection technology uses Azure Rights Management (Azure RMS). This
technology is integrated with other Microsoft cloud services and applications, such as
Microsoft 365 and Microsoft Entra ID. This protection technology uses encryption,
identity, and authorization policies. Protection that is applied through Azure RMS stays
with the documents and emails, regardless of location: inside or outside your
organization, networks, file servers, and applications.
This information protection solution keeps you in control of your data, even when it's
shared with other people. You can also use Azure RMS with your own line-of-business
applications and information protection solutions from software vendors, whether these
applications and solutions are on-premises or in the cloud.
Organizations that are weak on data classification and file protection might be more
susceptible to data leakage or data misuse. With proper file protection, you can analyze
data flows to gain insight into your business, detect risky behaviors and take corrective
measures, track access to documents, and so on.
Next steps
See Azure security best practices and patterns for more security best practices to use
when you're designing, deploying, and managing your cloud solutions by using Azure.
The following resources are available to provide more general information about Azure
security and related Microsoft services:
Azure Security Team Blog - for up to date information on the latest in Azure
Security
Microsoft Security Response Center - where Microsoft security vulnerabilities,
including issues with Azure, can be reported, or reported via email to [email protected]
Azure Data Encryption at rest
Article • 11/15/2022
Microsoft Azure includes tools to safeguard data according to your company's security
and compliance needs. This article focuses on how data is protected at rest across
Microsoft Azure and on the components that take part in that protection.
In practice, key management and control scenarios, as well as scale and availability
assurances, require additional constructs. Microsoft Azure Encryption at Rest concepts
and components are described below.
Encryption at rest is designed to prevent the attacker from accessing the unencrypted
data by ensuring the data is encrypted when on disk. If an attacker obtains a hard drive
with encrypted data but not the encryption keys, the attacker must defeat the
encryption to read the data. This attack is much more complex and resource consuming
than accessing unencrypted data on a hard drive. For this reason, encryption at rest is
highly recommended and is a high priority requirement for many organizations.
Encryption at rest may also be required by an organization's need for data governance
and compliance efforts. Industry and government regulations such as HIPAA, PCI and
FedRAMP, lay out specific safeguards regarding data protection and encryption
requirements. Encryption at rest is a mandatory measure required for compliance with
some of those regulations. For more information on Microsoft's approach to FIPS 140-2
validation, see Federal Information Processing Standard (FIPS) Publication 140-2.
Microsoft is committed to encryption at rest options across cloud services and giving
customers control of encryption keys and logs of key use. Additionally, Microsoft is
working towards encrypting all customer data at rest by default.
Resource providers and application instances store the encrypted Data Encryption Keys
as metadata. Only an entity with access to the Key Encryption Key can decrypt these
Data Encryption Keys. Different models of key storage are supported. For more
information, see data encryption models.
Encrypted storage
Like PaaS, IaaS solutions can leverage other Azure services that store data encrypted at
rest. In these cases, you can enable the Encryption at Rest support as provided by each
consumed Azure service. The Data encryption models: supporting services table
enumerates the major storage, services, and application platforms and the model of
Encryption at Rest supported.
Encrypted compute
All Managed Disks, Snapshots, and Images are encrypted using Storage Service
Encryption using a service-managed key. A more complete Encryption at Rest solution
ensures that the data is never persisted in unencrypted form. While processing the data
on a virtual machine, data can be persisted to the Windows page file or Linux swap file,
a crash dump, or to an application log. To ensure this data is encrypted at rest, IaaS
applications can use Azure Disk Encryption on an Azure IaaS virtual machine (Windows
or Linux) and virtual disk.
Azure storage
All Azure Storage services (Blob storage, Queue storage, Table storage, and Azure Files)
support server-side encryption at rest; some services additionally support customer-
managed keys and client-side encryption.
Azure SQL Database
Support for server encryption is currently provided through the SQL feature called
Transparent Data Encryption (TDE). Once an Azure SQL Database customer enables TDE,
keys are automatically created and managed for them. Encryption at rest can be enabled
at the database and server levels. As of June 2017, TDE is enabled by default on newly
created databases. Azure SQL Database supports RSA 2048-bit customer-managed keys
in Azure Key Vault. For more information, see Transparent Data Encryption with Bring
Your Own Key support for Azure SQL Database and Data Warehouse.
Client-side encryption of Azure SQL Database data is supported through the Always
Encrypted feature. Always Encrypted uses a key that is created and stored by the client.
Customers can store the master key in a Windows certificate store, Azure Key Vault, or a
local Hardware Security Module. Using SQL Server Management Studio, SQL users
choose what key they'd like to use to encrypt which column.
Conclusion
Protection of customer data stored within Azure Services is of paramount importance to
Microsoft. All Azure hosted services are committed to providing Encryption at Rest
options. Azure services support either service-managed keys, customer-managed keys,
or client-side encryption. Azure services are broadly enhancing Encryption at Rest
availability and new options are planned for preview and general availability in the
upcoming months.
Next steps
See data encryption models to learn more about service-managed keys and
customer-managed keys.
Learn how Azure uses double encryption to mitigate threats that come with
encrypting data.
Learn what Microsoft does to ensure platform integrity and security of hosts
traversing the hardware and firmware build-out, integration, operationalization,
and repair pipelines.
Data encryption models
Article • 07/19/2024
An understanding of the various encryption models and their pros and cons is essential
for understanding how the various resource providers in Azure implement encryption at
Rest. These definitions are shared across all resource providers in Azure to ensure
common language and taxonomy.
Each of the server-side encryption at rest models implies distinctive characteristics of
key management. This includes where and how encryption keys are created and stored,
as well as the access models and the key rotation procedures.
The supported encryption models in Azure split into two main groups: "Client
Encryption" and "Server-side Encryption" as mentioned previously. Independent of the
encryption at rest model used, Azure services always recommend the use of a secure
transport such as TLS or HTTPS. Therefore, encryption in transport should be addressed
by the transport protocol and should not be a major factor in determining which
encryption at rest model to use.
Server-side encryption with Microsoft-managed keys does imply the service has full
access to store and manage the keys. While some customers might want to manage the
keys because they feel they gain greater security, the cost and risk associated with a
custom key storage solution should be considered when evaluating this model. In many
cases, an organization might determine that resource constraints or risks of an on-
premises solution might be greater than the risk of cloud management of the
encryption at rest keys. However, this model might not be sufficient for organizations
that have requirements to control the creation or lifecycle of the encryption keys or to
have different personnel manage a service's encryption keys than those managing the
service (that is, segregation of key management from the overall management model
for the service).
Key access
When Server-side encryption with service-managed keys is used, the key creation,
storage, and service access are all managed by the service. Typically, the foundational
Azure resource providers will store the Data Encryption Keys in a store that is close to
the data and quickly available and accessible while the Key Encryption Keys are stored in
a secure internal store.
Advantages
Simple setup
Microsoft manages key rotation, backup, and redundancy
Customer does not have the cost associated with implementation or the risk of a
custom key management scheme.
Disadvantages
Loss of key encryption keys means loss of data. For this reason, keys should not be
deleted. Keys should be backed up whenever created or rotated. Soft-Delete and purge
protection must be enabled on any vault storing key encryption keys to protect against
accidental or malicious cryptographic erasure. Instead of deleting a key, it is
recommended to set enabled to false on the key encryption key. Use access controls to
revoke access to individual users or services in Azure Key Vault or Managed HSM.
Key Access
The server-side encryption model with customer-managed keys in Azure Key Vault
involves the service accessing the keys to encrypt and decrypt as needed. Encryption at
rest keys are made accessible to a service through an access control policy. This policy
grants the service identity access to receive the key. An Azure service running on behalf
of an associated subscription can be configured with an identity in that subscription. The
service can perform Microsoft Entra authentication and receive an authentication token
identifying itself as that service acting on behalf of the subscription. That token can then
be presented to Key Vault to obtain a key it has been given access to.
For operations using encryption keys, a service identity can be granted access to any of
the following operations: decrypt, encrypt, unwrapKey, wrapKey, verify, sign, get, list,
update, create, import, delete, backup, and restore.
To obtain a key for use in encrypting or decrypting data at rest, the service identity that
the Resource Manager service instance runs as must have UnwrapKey (to get the key
for decryption) and WrapKey (to insert a key into Key Vault when creating a new key).
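A minimal sketch of this wrap/unwrap pattern from a client's perspective, using azure-identity and azure-keyvault-keys, is shown below. The key identifier is a placeholder, and the identity used must hold the wrapKey and unwrapKey permissions (or an equivalent RBAC role) on that key.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys.crypto import CryptographyClient, KeyWrapAlgorithm

# Placeholder: full identifier of the key encryption key (KEK) in Key Vault.
kek_id = "https://<your-vault-name>.vault.azure.net/keys/<kek-name>/<version>"
crypto = CryptographyClient(kek_id, credential=DefaultAzureCredential())

# A 256-bit data encryption key (DEK), generated and used locally to encrypt data.
dek = os.urandom(32)

wrapped = crypto.wrap_key(KeyWrapAlgorithm.rsa_oaep_256, dek)        # needs wrapKey
unwrapped = crypto.unwrap_key(KeyWrapAlgorithm.rsa_oaep_256, wrapped.encrypted_key)  # needs unwrapKey

# Only the wrapped DEK is persisted alongside the data; the KEK never leaves Key Vault.
assert unwrapped.key == dek
```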
7 Note
For more detail on Key Vault authorization see the secure your key vault page in
the Azure Key Vault documentation.
Advantages
Full control over the keys used – encryption keys are managed in the customer's
Key Vault under the customer's control.
Ability to encrypt multiple services to one master
Can segregate key management from overall management model for the service
Can define service and key location across regions
Disadvantages
Key Access
When server-side encryption using customer-managed keys in customer-controlled
hardware is used, the key encryption keys are maintained on a system configured by the
customer. Azure services that support this model provide a means of establishing a
secure connection to a customer supplied key store.
Advantages
Full control over the root key used – encryption keys are managed by a customer
provided store
Ability to encrypt multiple services to one master
Can segregate key management from overall management model for the service
Can define service and key location across regions
Disadvantages
Supporting services
The Azure services that support each encryption model:
Analytics
Containers
Compute
Databases
PostgreSQL HSM
Identity
Integration
IoT Services
Management and
Governance
Media
Security
Storage
Other
* This service doesn't persist data. Transient caches, if any, are encrypted with a
Microsoft key.
** This service supports storing data in your own Key Vault, Storage Account, or other
data persisting service that already supports Server-Side Encryption with Customer-
Managed Key.
*** Any transient data stored temporarily on disk such as pagefiles or swap files are
encrypted with a Microsoft key (all tiers) or a customer-managed key (using the
Enterprise and Enterprise Flash tiers). For more information, see Configure disk
encryption in Azure Cache for Redis.
Related content
Learn how encryption is used in Azure
Learn how Azure uses double encryption
There are several types of encryption available for your managed disks, including Azure
Disk Encryption (ADE), Server-Side Encryption (SSE), and encryption at host.
Encryption at host is a Virtual Machine option that enhances Azure Disk Storage
Server-Side Encryption to ensure that all temp disks and disk caches are encrypted
at rest and flow encrypted to the Storage clusters. For full details, see Encryption at
host - End-to-end encryption for your VM data.
Azure Disk Encryption helps protect and safeguard your data to meet your
organizational security and compliance commitments. ADE encrypts the OS and
data disks of Azure virtual machines (VMs) inside your VMs by using the DM-
Crypt feature of Linux or the BitLocker feature of Windows. ADE is integrated
with Azure Key Vault to help you control and manage the disk encryption keys and
secrets, with the option to encrypt with a key encryption key (KEK). For full details,
see Azure Disk Encryption for Linux VMs or Azure Disk Encryption for Windows
VMs.
Confidential disk encryption binds disk encryption keys to the virtual machine's
TPM and makes the protected disk content accessible only to the VM. The TPM
and VM guest state is always encrypted in attested code using keys released by a
secure protocol that bypasses the hypervisor and host operating system. Currently
only available for the OS disk; temp disk support is in preview . Encryption at host
may be used for other disks on a Confidential VM in addition to Confidential Disk
Encryption. For full details, see DCasv5 and ECasv5 series confidential VMs.
Encryption is part of a layered approach to security and should be used with other
recommendations to secure Virtual Machines and their disks. For full details, see
Security recommendations for virtual machines in Azure and Restrict import/export
access to managed disks.
Comparison
Here's a comparison of Disk Storage SSE, ADE, encryption at host, and Confidential disk
encryption.
Encryption at rest (OS and data disks): SSE ✅; ADE ✅; encryption at host ✅; Confidential disk encryption ✅
Encryption of caches: SSE ❌; ADE ✅; encryption at host ✅; Confidential disk encryption ✅
Data flows encrypted between Compute and Storage: SSE ❌; ADE ✅; encryption at host ✅; Confidential disk encryption ✅
HSM support: SSE, ADE, and Confidential disk encryption support Azure Key Vault Premium and Managed HSM; encryption at host supports Azure Key Vault Premium
Enhanced key protection: SSE ❌; ADE ❌; encryption at host ❌; Confidential disk encryption ✅
Microsoft Defender for Cloud disk encryption status*: see the recommendations listed below for what each option detects
) Important
For Confidential disk encryption, Microsoft Defender for Cloud does not currently
have a recommendation that is applicable.
* Microsoft Defender for Cloud has the following disk encryption recommendations:
Virtual machines and virtual machine scale sets should have encryption at host
enabled (Only detects Encryption at Host)
Virtual machines should encrypt temp disks, caches, and data flows between
Compute and Storage resources (Only detects Azure Disk Encryption)
Windows virtual machines should enable Azure Disk Encryption or
EncryptionAtHost (Detects both Azure Disk Encryption and EncryptionAtHost)
Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost
(Detects both Azure Disk Encryption and EncryptionAtHost)
Next steps
Azure Disk Encryption for Linux VMs
Azure Disk Encryption for Windows VMs
Server-side encryption of Azure Disk Storage
Encryption at host
DCasv5 and ECasv5 series confidential VMs
Azure Security Fundamentals - Azure encryption overview
Applies to: Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics
This article outlines the basics of securing the data tier of an application using Azure
SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. The security
strategy described follows the layered defense-in-depth approach and moves from the
outside in: network security, access management, threat protection, and information
protection.
7 Note
Microsoft Entra ID was previously known as Azure Active Directory (Azure AD).
Network security
Microsoft Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse
Analytics provide a relational database service for cloud and enterprise applications. To
help protect customer data, firewalls prevent network access to the server until access is
explicitly granted based on IP address or Azure Virtual network traffic origin.
IP firewall rules
IP firewall rules grant access to databases based on the originating IP address of each
request. For more information, see Overview of Azure SQL Database and Azure Synapse
Analytics firewall rules.
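Server-level IP firewall rules can also be managed programmatically. The following is a hedged sketch that assumes the azure-mgmt-sql and azure-identity packages and their firewall_rules operation group; the subscription, resource group, server name, and IP range are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import FirewallRule

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow a single office egress address (placeholder range) to reach the server.
rule = client.firewall_rules.create_or_update(
    resource_group_name="<resource-group>",
    server_name="<server-name>",
    firewall_rule_name="office-egress",
    parameters=FirewallRule(start_ip_address="203.0.113.10", end_ip_address="203.0.113.10"),
)
print(rule.name, rule.start_ip_address, rule.end_ip_address)
```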
Virtual network rules enable Azure SQL Database to only accept communications that
are sent from selected subnets inside a virtual network.
7 Note
Controlling access with firewall rules does not apply to SQL Managed Instance. For
more information about the networking configuration needed, see Connecting to a
managed instance
Access management
) Important
Managing databases and servers within Azure is controlled by your portal user
account's role assignments. For more information, see Azure role-based access
control in the Azure portal.
Authentication
Authentication is the process of proving the user is who they claim to be. SQL Database
and SQL Managed Instance support SQL authentication and authentication with
Microsoft Entra ID (formerly Azure Active Directory). SQL Managed Instance additionally
supports Windows authentication for Microsoft Entra principals.
SQL authentication:
SQL authentication refers to the authentication of a user when connecting to Azure
SQL Database or Azure SQL Managed Instance using username and password. A
server admin login with a username and password must be specified when the
server is being created. Using these credentials, a server admin can authenticate to
any database on that server or instance as the database owner. After that,
additional SQL logins and users can be created by the server admin, which enable
users to connect using username and password.
Microsoft Entra authentication:
A server admin called the Microsoft Entra administrator must be created to use
Microsoft Entra authentication with SQL Database. For more information, see
Connecting to SQL Database with Microsoft Entra authentication. Microsoft Entra
authentication supports both managed and federated accounts. The federated
accounts support Windows users and groups for a customer domain federated
with Microsoft Entra ID.
To enable Windows authentication for Microsoft Entra principals, you will turn your
Microsoft Entra tenant into an independent Kerberos realm and create an
incoming trust in the customer domain. Learn how Windows authentication for
Azure SQL Managed Instance is implemented with Microsoft Entra ID and
Kerberos.
) Important
Managing databases and servers within Azure is controlled by your portal user
account's role assignments. For more information, see Azure role-based access
control in the Azure portal. Controlling access with firewall rules does not apply to
SQL Managed Instance. For more information about the networking configuration
needed, see the article on connecting to a managed instance.
Authorization
Authorization refers to controlling access on resources and commands within a
database. This is done by assigning permissions to a user within a database in Azure
SQL Database or Azure SQL Managed Instance. Permissions are ideally managed by
adding user accounts to database roles and assigning database-level permissions to
those roles. Alternatively an individual user can also be granted certain object-level
permissions. For more information, see Logins and users
As a best practice, create custom roles when needed. Add users to the role with the least
privileges required to do their job function. Do not assign permissions directly to users.
The server admin account is a member of the built-in db_owner role, which has
extensive permissions and should only be granted to a few users with administrative
duties. To further limit the scope of what a user can do, the EXECUTE AS statement can
be used to specify the execution context of the called module. Following these best
practices is also a fundamental step towards separation of duties.
Row-level security
Row-Level Security enables customers to control access to rows in a database table
based on the characteristics of the user executing a query (for example, group
membership or execution context). Row-Level Security can also be used to implement
custom Label-based security concepts. For more information, see Row-Level security.
Threat protection
SQL Database and SQL Managed Instance secure customer data by providing auditing
and threat detection capabilities.
SQL Database, SQL Managed Instance, and Azure Synapse Analytics enforce encryption
(SSL/TLS) at all times for all connections. This ensures all data is encrypted "in transit"
between the client and server irrespective of the setting of Encrypt or
TrustServerCertificate in the connection string.
As a best practice, we recommend that in the connection string used by the application,
you specify an encrypted connection and do not trust the server certificate. This forces
your application to verify the server certificate and thus prevents your application from
being vulnerable to man-in-the-middle attacks.
For example, when using the ADO.NET driver, this is accomplished via Encrypt=True and
TrustServerCertificate=False. If you obtain your connection string from the Azure portal,
it will have the correct settings.
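Drivers other than ADO.NET use equivalent keywords. For example, a minimal pyodbc connection sketch with the Microsoft ODBC Driver 18 for SQL Server might look like the following; the server, database, and credentials are placeholders.

```python
import pyodbc

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-database>;"
    "Uid=<user>;Pwd=<password>;"
    # Encrypt the connection and validate the server certificate.
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)

with pyodbc.connect(conn_str) as conn:
    print(conn.cursor().execute("SELECT @@VERSION;").fetchone()[0])
```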
) Important
Note that some non-Microsoft drivers may not use TLS by default or rely on an
older version of TLS (<1.2) in order to function. In this case the server still allows
you to connect to your database. However, we recommend that you evaluate the
security risks of allowing such drivers and application to connect to SQL Database,
especially if you store sensitive data.
For further information about TLS and connectivity, see TLS considerations
In Azure, all newly created databases are encrypted by default and the database
encryption key is protected by a built-in server certificate. Certificate maintenance and
rotation are managed by the service and require no input from the user. Customers who
prefer to take control of the encryption keys can manage the keys in Azure Key Vault.
Security management
Vulnerability assessment
Vulnerability assessment is an easy-to-configure service that can discover, track, and help
remediate potential database vulnerabilities with the goal to proactively improve overall
database security. Vulnerability assessment (VA) is part of the Microsoft Defender for
SQL offering, which is a unified package for advanced SQL security capabilities.
Vulnerability assessment can be accessed and managed via the central Microsoft
Defender for SQL portal.
Data discovery and classification
Data discovery and classification (currently in preview) provides basic capabilities built
into Azure SQL Database and SQL Managed Instance for discovering, classifying and
labeling the sensitive data in your databases. Discovering and classifying your most
sensitive data (business, financial, healthcare, personal data, and so on) can play a
pivotal role in your organization's information protection posture, and can serve as
infrastructure for scenarios such as auditing access to sensitive data and meeting
regulatory compliance requirements.
For more information, see Get started with data discovery and classification.
Compliance
In addition to the above features and functionality that can help your application meet
various security requirements, Azure SQL Database also participates in regular audits,
and has been certified against a number of compliance standards. For more information,
see the Microsoft Azure Trust Center where you can find the most current list of SQL
Database compliance certifications.
Next steps
For a discussion of the use of logins, user accounts, database roles, and
permissions in SQL Database and SQL Managed Instance, see Manage logins and
user accounts.
For a discussion of database auditing, see auditing.
For a discussion of threat detection, see threat detection.
This article provides best practices on how to solve common security requirements. Not
all requirements are applicable to all environments, and you should consult your
database and security team on which features to implement.
7 Note
Microsoft Entra ID was previously known as Azure Active Directory (Azure AD).
This guidance is intended for the following audiences:
Security Architects
Security Managers
Compliance Officers
Privacy Officers
Security Engineers
Unless otherwise stated, we recommend you follow all best practices listed in each
section to achieve the respective goal or requirement. To meet specific security
compliance standards or best practices, important regulatory compliance controls are
listed under the Requirements or Goals section wherever applicable. The security
standards and regulations referenced in this article include FedRAMP, NIST, and OSA.
We plan on continuing to update the recommendations and best practices listed here.
Provide input or any corrections for this document using the Feedback link at the
bottom of this article.
Authentication
Authentication is the process of proving the user is who they claim to be. Azure SQL
Database and SQL Managed Instance support two types of authentication:
SQL authentication
Microsoft Entra authentication
7 Note
Microsoft Entra authentication may not be supported for all tools and 3rd party
applications.
Manage group accounts and control user permissions without duplicating logins
across servers, databases and managed instances.
Simplified and flexible permission management.
Management of applications at scale.
How to implement
Best practices
Create a Microsoft Entra tenant and create users to represent human users and
create service principals to represent apps, services, and automation tools. Service
principals are equivalent to service accounts in Windows and Linux.
7 Note
In SQL Managed Instance, you can also create logins that map to Microsoft
Entra principals in the master database. See CREATE LOGIN (Transact-SQL).
Using Microsoft Entra groups simplifies permission management; both the group
owner and the resource owner can add or remove members of the group.
Create a separate group for Microsoft Entra administrators for each server or
managed instance.
See the article, Provision a Microsoft Entra administrator for your server.
7 Note
Microsoft Entra authentication is recorded in Azure SQL audit logs, but not in
Microsoft Entra sign-in logs.
Azure RBAC permissions granted in Azure do not apply to Azure SQL
Database or SQL Managed Instance permissions. Such permissions must be
created/mapped manually using existing SQL permissions.
On the client-side, Microsoft Entra authentication needs access to the internet
or via User Defined Route (UDR) to a virtual network.
The Microsoft Entra access token is cached on the client side and its lifetime
depends on token configuration. See the article, Configurable token lifetimes
in Microsoft Entra ID
For guidance on troubleshooting Microsoft Entra authentication issues, see
the following blog: Troubleshooting Microsoft Entra ID .
How to implement
Best practices
Create Microsoft Entra group(s) and enable multifactor authentication policy for
selected groups using Microsoft Entra Conditional Access.
See the article, Plan Conditional Access Deployment.
Multifactor authentication can be enabled for the entire Microsoft Entra tenant or
for Active Directory federated with Microsoft Entra ID.
Use Microsoft Entra interactive authentication mode for Azure SQL Database and
Azure SQL Managed Instance where a password is requested interactively,
followed by multifactor authentication:
Use universal authentication in SSMS. See the article, Using Microsoft Entra
multifactor authentication with Azure SQL Database, SQL Managed Instance,
Azure Synapse (SSMS support for multifactor authentication).
Use interactive authentication supported in SQL Server Data Tools (SSDT). See
the article, Microsoft Entra ID support in SQL Server Data Tools (SSDT).
Use other SQL tools supporting multifactor authentication.
SSMS Wizard support for export/extract/deploy database
SqlPackage: option '/ua'
sqlcmd Utility: option -G (interactive)
bcp Utility: option -G (interactive)
How to implement
Best practices
Use single sign-on authentication using Windows credentials. Federate the on-
premises Active Directory domain with Microsoft Entra ID and use integrated
Windows authentication (for domain-joined machines with Microsoft Entra ID).
See the article, SSMS support for Microsoft Entra integrated authentication.
How to implement
Enable Azure Managed Identity. You can also use integrated or certificate-based
authentication.
Best practices
Use Microsoft Entra authentication for integrated federated domain and domain-
joined machine (see section above).
See the sample application for integrated authentication .
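For an Azure-hosted application, a hedged pyodbc sketch using the ODBC Driver 18 for SQL Server and the app's system-assigned managed identity looks like the following; the server and database names are placeholders, and the identity must already exist as a contained database user.

```python
import pyodbc

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-database>;"
    # System-assigned managed identity; no secret is stored in the application.
    "Authentication=ActiveDirectoryMsi;"
    "Encrypt=yes;TrustServerCertificate=no;"
)

with pyodbc.connect(conn_str) as conn:
    # Print the identity the database sees for this connection.
    print(conn.cursor().execute("SELECT SUSER_SNAME();").fetchval())
```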
How to implement
Use Azure Key Vault to store passwords and secrets. Whenever applicable, use
multifactor authentication for Azure SQL Database with Microsoft Entra users.
Best practices
How to implement
Best practices
As a server or instance admin, create logins and users. Unless using contained
database users with passwords, all passwords are stored in the master database.
See the article, Controlling and granting database access to SQL Database, SQL
Managed Instance and Azure Synapse Analytics.
Access management
Access management (also called Authorization) is the process of controlling and
managing authorized users' access and privileges to Azure SQL Database or SQL
Managed Instance.
Implement principle of least privilege
Mentioned in: FedRamp controls AC-06, NIST: AC-6, OSA Practice #3
The principle of least privilege states that users shouldn't have more privileges than
needed to complete their tasks. For more information, see the article Just enough
administration.
How to implement
In SQL Databases:
Use granular permissions and user-defined database roles (or server-roles in
SQL Managed Instance):
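For example, the following hedged sketch (pyodbc with placeholder connection details) creates a user-defined role that can only read one schema and adds a user to it; the role, schema, and principal names are placeholders to adapt to your environment.

```python
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-database>;"
    "Uid=<admin-user>;Pwd=<password>;Encrypt=yes;",
    autocommit=True,
)
cursor = conn.cursor()

# A custom role limited to reading a single schema.
cursor.execute("CREATE ROLE app_reader;")
cursor.execute("GRANT SELECT ON SCHEMA::Sales TO app_reader;")

# Add a database user (here, a placeholder Microsoft Entra user) to the role.
cursor.execute("ALTER ROLE app_reader ADD MEMBER [appuser@contoso.com];")
```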
Best practices
The following best practices are optional but will result in better manageability and
supportability of your security strategy:
If possible, start with the least possible set of permissions and start adding
permissions one by one if there's a real necessity (and justification) – as opposed
to the opposite approach: taking permissions away step by step.
Refrain from assigning permissions to individual users. Use roles (database or
server roles) consistently instead. Roles help greatly with reporting and
troubleshooting permissions. (Azure RBAC only supports permission assignment
via roles.)
Create and use custom roles with the exact permissions needed. Typical roles that
are used in practice:
Security deployment
Administrator
Developer
Support personnel
Auditor
Automated processes
End user
Use built-in roles only when the permissions of the roles match exactly the needed
permissions for the user. You can assign users to multiple roles.
Remember that permissions in the database engine can be applied within the
following scopes (the smaller the scope, the smaller the impact of the granted
permissions):
Server (special roles in the master database) in Azure
Database
Schema
It is a best practice to use schemas to grant permissions inside a database.
Object (table, view, procedure, and so on)
7 Note
Perform regular checks using Vulnerability Assessment (VA) to test for too many
permissions.
How to implement
Create roles according to the needed user-groups and assign permissions to roles.
For management-level tasks in the Azure portal or via PowerShell automation, use
Azure roles. Either find a built-in role matching the requirement, or create an
Azure custom role using the available permissions.
Create Server roles for server-wide tasks (creating new logins, databases) in a
managed instance.
Create Database Roles for database-level tasks.
For certain sensitive tasks, consider creating special stored procedures signed by a
certificate to execute the tasks on behalf of the users. One important advantage of
digitally signed stored procedures is that if the procedure is changed, the
permissions that were granted to the previous version of the procedure are
immediately removed.
Example: Tutorial: Signing Stored Procedures with a Certificate
To ensure that a DBA can't see data that is considered highly sensitive and can still
do DBA tasks, you can use Always Encrypted with role separation.
See the articles, Overview of Key Management for Always Encrypted, Key
Provisioning with Role Separation, and Column Master Key Rotation with Role
Separation.
In cases where the use of Always Encrypted isn't feasible, or at least not without
major costs and efforts that may even render the system near unusable,
compromises can be made and mitigated through the use of compensating
controls such as:
Human intervention in processes.
Audit trails – for more information on Auditing, see, Audit critical security
events.
Best practices
Make sure that different accounts are used for Development/Test and Production
environments. Different accounts help to comply with separation of Test and
Production systems.
Use built-in roles when the permissions match exactly the needed permissions – if
the union of all permissions from multiple built-in roles leads to a 100% match,
you can assign multiple roles concurrently as well.
Create and use user-defined roles when built-in roles grant too many permissions
or insufficient permissions.
Role assignments can also be done temporarily, also known as Dynamic Separation
of Duties (DSD), either within SQL Agent Job steps in T-SQL or using Azure PIM for
Azure roles.
Make sure that DBAs don't have access to the encryption keys or key stores, and
that Security Administrators with access to the keys have no access to the database
in turn. The use of Extensible Key Management (EKM) can make this separation
easier to achieve. Azure Key Vault can be used to implement EKM.
You can retrieve the definition of the Azure built-in roles to see the permissions
used and create a custom role based on excerpts and cumulations of these via
PowerShell.
Because any member of the db_owner database role can change security settings
like Transparent Data Encryption (TDE), or change the SLO, this membership should
be granted with care. However, there are many tasks that require db_owner
privileges, such as changing any database setting or DB option.
Auditing plays a key role in any solution.
7 Note
For readers who want to dive deeper into SoD, we recommend the following
resources:
Separation of Duties is not limited to the data in a database, but includes application
code. Malicious code can potentially circumvent security controls. Before deploying
custom code to production, it is essential to review what's being deployed.
How to implement
Use a database tool like Azure Data Studio that supports source control.
Best practices
Vulnerability Assessment contains rules that check for excessive permissions, the
use of old encryption algorithms, and other security problems within a database
schema.
Make sure the person conducting the review is an individual other than the
originating code author and knowledgeable in code-reviews and secure coding.
Be sure to know all sources of code-changes. Code can be in T-SQL Scripts. It can
be ad hoc commands to be executed or be deployed in forms of Views, Functions,
Triggers, and Stored Procedures. It can be part of SQL Agent Job definitions
(Steps). It can also be executed from within SSIS packages, Azure Data Factory, and
other services.
Data protection
Data protection is a set of capabilities for safeguarding important information from
compromise by encryption or obfuscation.
7 Note
Microsoft attests to Azure SQL Database and SQL Managed Instance as being FIPS
140-2 Level 1 compliant. This is done after verifying the strict use of FIPS 140-2
Level 1 acceptable algorithms and FIPS 140-2 Level 1 validated instances of those
algorithms including consistency with required key lengths, key management, key
generation, and key storage. This attestation is meant to allow our customers to
respond to the need or requirement for the use of FIPS 140-2 Level 1 validated
instances in the processing of data or delivery of systems or applications. We define
the terms "FIPS 140-2 Level 1 compliant" and "FIPS 140-2 Level 1 compliance" used
in the above statement to demonstrate their intended applicability to U.S. and
Canadian government use of the different term "FIPS 140-2 Level 1 validated."
Protects your data while data moves between your client and server. Refer to Network
Security.
How to implement
Transparent data encryption (TDE) with service-managed keys is enabled by
default for any databases created after 2017 in Azure SQL Database and SQL
Managed Instance.
In a managed instance, if the database is created from a restore operation using an
on-premises server, the TDE setting of the original database will be honored. If the
original database doesn't have TDE enabled, we recommend that TDE be manually
turned on for the managed instance.
Best practices
Don't store data that requires encryption-at-rest in the master database. The
master database can't be encrypted with TDE.
Use customer-managed keys in Azure Key Vault if you need increased transparency
and granular control over the TDE protection. Azure Key Vault gives you the ability
to revoke permissions at any time to render the database inaccessible. You can
centrally manage TDE protectors along with other keys, or rotate the TDE protector
on your own schedule using Azure Key Vault.
If you're using customer-managed keys in Azure Key Vault, follow the articles,
Guidelines for configuring TDE with Azure Key Vault and How to configure Geo-DR
with Azure Key Vault.
7 Note
Some items considered customer content, such as table names, object names, and
index names, may be transmitted in log files for support and troubleshooting by
Microsoft.
The policies that determine which data is sensitive and whether the sensitive data must
be encrypted in memory and not accessible to administrators in plaintext, are specific to
your organization and compliance regulations you need to adhere to. Please see the
related requirement: Identify and tag sensitive data.
How to implement
Use Always Encrypted to ensure sensitive data isn't exposed in plaintext in Azure
SQL Database or SQL Managed Instance, even in memory/in use. Always Encrypted
protects the data from Database Administrators (DBAs) and cloud admins (or bad
actors who can impersonate high-privileged but unauthorized users) and gives you
more control over who can access your data.
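A hedged connection sketch follows, showing how a client opts in to Always Encrypted with pyodbc and the Microsoft ODBC Driver 17.6+ or 18 for SQL Server when the column master key is kept in Azure Key Vault. The server, database, credentials, service principal values, and table are placeholders, and parameterized queries are required when writing to encrypted columns.

```python
import pyodbc

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-database>;"
    "Uid=<user>;Pwd=<password>;"
    "Encrypt=yes;"
    "ColumnEncryption=Enabled;"                      # turn on Always Encrypted in the driver
    "KeyStoreAuthentication=KeyVaultClientSecret;"   # how the driver reaches the column master key
    "KeyStorePrincipalId=<app-registration-client-id>;"
    "KeyStoreSecret=<client-secret>;"
)

with pyodbc.connect(conn_str) as conn:
    # Values in encrypted columns are returned as plaintext to this authorized client.
    for row in conn.cursor().execute("SELECT TOP 5 SSN FROM dbo.Patients;"):
        print(row.SSN)
```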
Best practices
Manage Always Encrypted keys with role separation if you're using Always
Encrypted to protect data from malicious DBAs. With role separation, a security
admin creates the physical keys. The DBA creates the column master key and
column encryption key metadata objects describing the physical keys in the
database. During this process, the security admin doesn't need access to the
database, and the DBA doesn't need access to the physical keys in plaintext.
See the article, Managing Keys with Role Separation for details.
Store your column master keys in Azure Key Vault for ease of management. Avoid
using the Windows certificate store (and, in general, distributed key store solutions,
as opposed to central key management solutions), which make key management hard.
Think carefully through the tradeoffs of using multiple keys (column master key or
column encryption keys). Keep the number of keys small to reduce key
management cost. One column master key and one column encryption key per
database is typically sufficient in steady-state environments (not in the middle of a
key rotation). You may need additional keys if you have different user groups, each
using different keys and accessing different data.
Rotate column master keys per your compliance requirements. If you also need to
rotate column encryption keys, consider using online encryption to minimize
application downtime.
See the article, Performance and Availability Considerations.
If you're concerned about third parties accessing your data legally without your
consent, ensure that all application and tools that have access to the keys and data
in plaintext run outside of Microsoft Azure Cloud. Without access to the keys, the
third party will have no way of decrypting the data unless they bypass the
encryption.
Always Encrypted doesn't easily support granting temporary access to the keys
(and the protected data). For example, if you need to share the keys with a DBA to
allow the DBA to do some cleansing operations on sensitive and encrypted data.
The only way to reliably revoke the access to the data from the DBA will be to
rotate both the column encryption keys and the column master keys protecting
the data, which is an expensive operation.
To access the plaintext values in encrypted columns, a user needs to have access to
the Column Master Key (CMK) that protects columns, which is configured in the
key store holding the CMK. The user also needs to have the VIEW ANY COLUMN
MASTER KEY DEFINITION and VIEW ANY COLUMN ENCRYPTION KEY
DEFINITION database permissions.
How to implement
Use Cell-level Encryption (CLE). See the article, Encrypt a Column of Data for
details.
Use Always Encrypted, but be aware of its limitation. The limitations are listed
below.
Best practices:
Use AES (AES 256 recommended) for data encryption. Algorithms such as RC4, DES,
and TripleDES are deprecated and shouldn't be used because of known
vulnerabilities.
Keep in mind that Always Encrypted is primarily designed to protect sensitive data in use
from high-privilege users of Azure SQL Database (cloud operators, DBAs) - see Protect
sensitive data in use from high-privileged, unauthorized users. Be aware of the following
challenges when using Always Encrypted to protect data from application users:
How to implement
7 Note
Always Encrypted does not work with Dynamic Data Masking. It is not possible to
encrypt and mask the same column, which implies that you need to prioritize
protecting data in use vs. masking the data for your app users via Dynamic Data
Masking.
Best practices
7 Note
Dynamic Data Masking cannot be used to protect data from high-privilege users.
Masking policies do not apply to users with administrative access like db_owner.
Don't permit app users to run ad hoc queries (as they may be able to work around
Dynamic Data Masking).
See the article, Bypassing masking using inference or brute-force techniques for
details.
Use a proper access control policy (via SQL permissions, roles, RLS) to limit user
permissions to make updates in the masked columns. Creating a mask on a
column doesn't prevent updates to that column. Users that receive masked data
when querying the masked column, can update the data if they have write-
permissions.
Dynamic Data Masking doesn't preserve the statistical properties of the masked
values. This may impact query results (for example, queries containing filtering
predicates or joins on the masked data).
Network security
Network security refers to access controls and best practices to secure your data in
transit to Azure SQL Database.
How to implement
Ensure that client machines connecting to Azure SQL Database and SQL Managed
Instance are using the latest Transport Layer Security (TLS) version.
Best practices
Enforce a minimal TLS version at the logical server in Azure or SQL Managed
Instance level by using the minimal TLS version setting. We recommend setting the
minimal TLS version to 1.2, after testing to confirm your applications supports it.
TLS 1.2 includes fixes for vulnerabilities found in previous versions.
Configure all your apps and tools to connect to SQL Database with encryption
enabled
Encrypt = On, TrustServerCertificate = Off (or equivalent with non-Microsoft
drivers).
If your app uses a driver that doesn't support TLS or supports an older version of
TLS, replace the driver, if possible. If not possible, carefully evaluate the security
risks.
Reduce attack vectors via vulnerabilities in SSL 2.0, SSL 3.0, TLS 1.0, and TLS 1.1
by disabling them on client machines connecting to Azure SQL Database per
Transport Layer Security (TLS) registry settings.
Check cipher suites available on the client: Cipher Suites in TLS/SSL (Schannel
SSP). Specifically, disable 3DES per Configuring TLS Cipher Suite Order.
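A minimal sketch of these two practices (the server name, resource group, and connection string are placeholders):
Bash
# Hedged example: enforce TLS 1.2 at the logical server level with the Azure CLI.
az sql server update \
  --resource-group myResourceGroup \
  --name mysqlserver \
  --minimal-tls-version 1.2

# An encrypted ADO.NET-style connection string; equivalent settings exist for
# other drivers:
# Server=tcp:mysqlserver.database.windows.net,1433;Database=mydb;Encrypt=True;TrustServerCertificate=False;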
How to implement
In SQL Database:
Best practices
You can access Azure SQL Database and SQL Managed Instance by connecting to a
public endpoint (for example, using a public data path). The following best
practices should be considered:
For a server in SQL Database, use IP firewall rules to restrict access to only
authorized IP addresses.
For SQL Managed Instance, use Network Security Groups (NSG) to restrict
access over port 3342 only to required resources. For more information, see Use
a managed instance securely with public endpoints.
7 Note
The SQL Managed Instance public endpoint is not enabled by default and must be
explicitly enabled. If company policy disallows the use of public
endpoints, use Azure Policy to prevent enabling public endpoints in the first
place.
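For example, the public endpoint restrictions above can be expressed with the Azure CLI (all names and address ranges are placeholders):
Bash
# Allow only an approved public IP range to reach a logical server, and restrict
# managed instance public endpoint traffic (port 3342) in the NSG.
az sql server firewall-rule create \
  --resource-group myResourceGroup \
  --server mysqlserver \
  --name AllowCorpRange \
  --start-ip-address 203.0.113.0 \
  --end-ip-address 203.0.113.255

az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name miNsg \
  --name AllowSqlMiPublicEndpoint \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 3342 \
  --source-address-prefixes 203.0.113.0/24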
Ensure that Power BI Desktop is connecting using TLS1.2 by setting the registry key
on the client machine as per Transport Layer Security (TLS) registry settings.
Restrict data access for specific users via Row-level security (RLS) with Power BI.
For Power BI Service, use the on-premises data gateway, keeping in mind
Limitations and Considerations.
For a simple Web App, connecting over public endpoint requires setting Allow
Azure Services to ON.
Integrate your app with an Azure Virtual Network for private data path connectivity
to a managed instance. Optionally, you can also deploy a Web App with App
Service Environments (ASE).
For Web App with ASE or virtual network Integrated Web App connecting to a
database in SQL Database, you can use virtual network service endpoints and
virtual network firewall rules to limit access from a specific virtual network and
subnet. Then set Allow Azure Services to OFF. You can also connect ASE to a
managed instance in SQL Managed Instance over a private data path.
Ensure that your Web App is configured per the article, Best practices for securing
platform as a service (PaaS) web and mobile applications using Azure App Service.
Install Web Application Firewall (WAF) to protect your web app from common
exploits and vulnerabilities.
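As an illustration of the virtual network service endpoint and virtual network firewall rule guidance above (all names are placeholders):
Bash
# Enable the Microsoft.Sql service endpoint on the app's subnet and add a
# virtual network firewall rule on the logical server.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name appSubnet \
  --service-endpoints Microsoft.Sql

az sql server vnet-rule create \
  --resource-group myResourceGroup \
  --server mysqlserver \
  --name allowAppSubnet \
  --vnet-name myVnet \
  --subnet appSubnet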
Configure Azure Virtual Machine hosting for secure
connections to SQL Database/SQL Managed Instance
Best practices
Use a combination of Allow and Deny rules on the NSGs of Azure virtual machines
to control which regions can be accessed from the VM.
Ensure that your VM is configured per the article, Security best practices for IaaS
workloads in Azure.
Ensure that all VMs are associated with a specific virtual network and subnet.
Evaluate whether you need the default route 0.0.0.0/Internet per the guidance
about forced tunneling.
If yes (for example, a front-end subnet), then keep the default route.
If no (for example, a middle-tier or back-end subnet), then enable forced
tunneling so that no traffic goes over the Internet to reach on-premises
resources (also known as cross-premises traffic).
Implement User Defined Routes if you need to send all traffic in the virtual network
to a Network Virtual Appliance for packet inspection.
Use virtual network service endpoints for secure access to PaaS services like Azure
Storage via the Azure backbone network.
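For example, a hedged sketch of a user-defined route that sends all traffic to a network virtual appliance (the names and the appliance IP address are placeholders):
Bash
# Route all outbound traffic from a subnet through an NVA at 10.0.2.4.
az network route-table create \
  --resource-group myResourceGroup \
  --name nvaRouteTable

az network route-table route create \
  --resource-group myResourceGroup \
  --route-table-name nvaRouteTable \
  --name defaultToNva \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4

az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name backendSubnet \
  --route-table nvaRouteTable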
How to implement
Use Advanced Threat Protection for Azure SQL Database to detect Denial of
Service (DoS) attacks against databases.
Best practices
Following the practices described in Minimize Attack Surface helps minimize
DDoS attack threats.
The Advanced Threat Protection Brute force SQL credentials alert helps to detect
brute force attacks. In some cases, the alert can even distinguish penetration
testing workloads.
How to implement
Use Advanced Threat Protection for SQL to detect unusual and potentially harmful
attempts to access or exploit databases, including:
SQL injection attack.
Credentials theft/leak.
Privilege abuse.
Data exfiltration.
Best practices
Configure Microsoft Defender for SQL for a specific server or a managed instance.
You can also configure Microsoft Defender for SQL for all servers and managed
instances in a subscription by enabling Microsoft Defender for Cloud.
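A hedged sketch of enabling the Defender plans for SQL across a subscription with the Azure CLI (the plan names and az security commands are assumptions to verify against your CLI version):
Bash
az security pricing create --name SqlServers --tier Standard
az security pricing create --name SqlServerVirtualMachines --tier Standard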
How to implement
Best practices
Enabling auditing to Log Analytics incurs cost based on ingestion rates. Be
aware of the cost associated with using this option, or consider storing the
audit logs in an Azure storage account.
Further resources
How to implement
When saving audit logs to Azure Storage, make sure that access to the storage
account is restricted to the minimal set of security principals. Control who has
access to the storage account.
For more information, see Authorizing access to Azure Storage.
Best practices
Controlling Access to the Audit Target is a key concept in separating DBA from
Auditors.
When auditing access to sensitive data, consider securing the data with data
encryption to avoid information leakage to the Auditor. For more information, see
the section Protect sensitive data in use from high-privileged, unauthorized users.
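As an illustration of restricting access to the audit target (the account name, auditor identity, and scope are placeholders):
Bash
# Grant only the auditor read access scoped to the audit storage account,
# instead of broad access to the subscription.
az role assignment create \
  --assignee auditor@contoso.com \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/auditlogsaccount"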
Security Management
This section describes the different aspects and best practices for managing your
databases security posture. It includes best practices for ensuring your databases are
configured to meet security standards, for discovering and for classifying and tracking
access to potentially sensitive data in your databases.
How to implement
Enable SQL Vulnerability Assessment (VA) to scan your database for security issues,
and to automatically run periodically on your databases.
Best practices
Initially, run VA on your databases and iterate by remediating failing checks that
oppose security best practices. Set up baselines for acceptable configurations
until the scan comes out clean or all checks have passed.
Configure periodic recurring scans to run once a week and configure the relevant
person to receive summary emails.
Review the VA summary following each weekly scan. For any vulnerabilities found,
evaluate the drift from the previous scan result and determine if the check should
be resolved. Review if there's a legitimate reason for the change in configuration.
Resolve checks and update baselines where relevant. Create ticket items for
resolving actions and track these until they're resolved.
Further resources
How to implement
Use SQL Data Discovery and Classification to discover, classify, label, and protect
the sensitive data in your databases.
View the classification recommendations that are created by the automated
discovery in the SQL Data Discovery and Classification dashboard. Accept the
relevant classifications, such that your sensitive data is persistently tagged with
classification labels.
Manually add classifications for any additional sensitive data fields that were not
discovered by the automated mechanism.
For more information, see SQL Data Discovery and Classification.
Best practices
How to implement
Best practices
See best practices for the Auditing and Data Classification sections:
Audit critical security events
Identify and tag sensitive data
How to implement
Monitor SQL-related security recommendations and active threats in Microsoft
Defender for Cloud.
Scenario 2: A rogue DBA. This scenario is often raised by security-sensitive customers
from regulated industries. In this scenario, a high-privilege user might copy data from
Azure SQL Database to another subscription not controlled by the data owner.
Potential mitigations
Today, Azure SQL Database and SQL Managed Instance offer the following techniques
for mitigating data exfiltration threats:
Use a combination of Allow and Deny rules on the NSGs of Azure VMs to control
which regions can be accessed from the VM.
If using a server in SQL Database, set the following options:
Allow Azure Services to OFF.
Only allow traffic from the subnet containing your Azure VM by setting up a
VNet Firewall rule.
Use Private Link
For SQL Managed Instance, using private IP access by default addresses the first
data exfiltration concern of a rogue VM. Turn on the subnet delegation feature on
a subnet to automatically set the most restrictive policy on a SQL Managed
Instance subnet.
The Rogue DBA concern is more exposed with SQL Managed Instance as it has a
larger surface area and networking requirements are visible to customers. The best
mitigation for this is applying all of the practices in this security guide to prevent
the Rogue DBA scenario in the first place (not only for data exfiltration). Always
Encrypted is one method to protect sensitive data by encrypting it and keeping the
key inaccessible for the DBA.
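A sketch of the first two mitigations with the Azure CLI; all names are placeholders, and the rule name commonly used for Allow Azure Services (AllowAllWindowsAzureIps) is an assumption to confirm on your server:
Bash
# Turn "Allow Azure Services" off by removing the 0.0.0.0 rule, and connect
# privately with a private endpoint instead.
az sql server firewall-rule delete \
  --resource-group myResourceGroup \
  --server mysqlserver \
  --name AllowAllWindowsAzureIps

az network private-endpoint create \
  --resource-group myResourceGroup \
  --name sqlPrivateEndpoint \
  --vnet-name myVnet \
  --subnet appSubnet \
  --private-connection-resource-id "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Sql/servers/mysqlserver" \
  --group-id sqlServer \
  --connection-name sqlPrivateLink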
Azure offers built-in high-availability: High-availability with SQL Database and SQL
Managed Instance
The Business Critical tier includes failover groups, full and differential log backups,
and point-in-time-restore backups enabled by default:
Automated backups
Recover a database using automated database backups - Point-in-time restore
Next steps
See An overview of Azure SQL Database security capabilities
To help improve security, Azure Database includes many built-in security controls that
you can use to limit and control access.
Introduction
Cloud computing requires new security paradigms that are unfamiliar to many
application users, database administrators, and programmers. As a result, some
organizations are hesitant to implement a cloud infrastructure for data management
due to perceived security risks. However, much of this concern can be alleviated through
a better understanding of the security features built into Microsoft Azure and Microsoft
Azure SQL Database.
Checklist
We recommend that you read the Azure Database Security Best Practices article prior to
reviewing this checklist. You'll be able to get the most out of this checklist after you
understand the best practices. You can then use this checklist to make sure that you've
addressed the important issues in Azure database security.
Category: Protect Data
Encryption in Motion/Transit: Transport Layer Security provides data encryption when data is moving across networks. The database requires secure communication from clients based on the TDS (Tabular Data Stream) protocol over TLS (Transport Layer Security).
Category: Control Access
Application Access: Row-Level Security (using a security policy to restrict row-level access based on a user's identity, role, or execution context) and Dynamic Data Masking (using permissions and policy to limit sensitive data exposure by masking it for non-privileged users).
Category: Proactive Monitoring
Tracking & Detecting: Auditing tracks database events and writes them to an audit log/activity log in your Azure Storage account. Track Azure Database health using Azure Monitor activity logs. Threat Detection detects anomalous database activities that indicate potential security threats to the database.
Conclusion
Azure Database is a robust database platform, with a full range of security features that
meet many organizational and regulatory compliance requirements. You can easily
protect data by controlling the physical access to your data, and using various options
for data security at the file-, column-, or row-level with Transparent Data Encryption,
Cell-Level Encryption, or Row-Level Security. Always Encrypted also enables operations
against encrypted data, simplifying the process of application updates. In turn, access to
auditing logs of SQL Database activity provides you with the information you need,
allowing you to know how and when data is accessed.
Next steps
You can improve the protection of your database against malicious users or
unauthorized access with just a few simple steps. In this tutorial you learn to:
This article contains security recommendations for Blob storage. Implementing these
recommendations will help you fulfill your security obligations as described in our
shared responsibility model. For more information on how Microsoft fulfills service
provider responsibilities, see Shared responsibility in the cloud.
Microsoft Defender for Cloud periodically analyzes the security state of your Azure
resources to identify potential security vulnerabilities. It then provides you with
recommendations on how to address them. For more information on Microsoft
Defender for Cloud recommendations, see Review your security recommendations.
Data protection
Each entry below lists the recommendation, the related comments, and whether Microsoft Defender for Cloud surfaces it as a recommendation (Yes or -).

Use the Azure Resource Manager deployment model (Defender for Cloud: -): Create new storage accounts using the Azure Resource Manager deployment model for important security enhancements, including superior Azure role-based access control (Azure RBAC) and auditing, Resource Manager-based deployment and governance, access to managed identities, access to Azure Key Vault for secrets, and Microsoft Entra authentication and authorization for access to Azure Storage data and resources. If possible, migrate existing storage accounts that use the classic deployment model to use Azure Resource Manager. For more information about Azure Resource Manager, see Azure Resource Manager overview.

Turn on soft delete for blobs (Defender for Cloud: -): Soft delete for blobs enables you to recover blob data after it has been deleted. For more information on soft delete for blobs, see Soft delete for Azure Storage blobs.

Turn on soft delete for containers (Defender for Cloud: -): Soft delete for containers enables you to recover a container after it has been deleted. For more information on soft delete for containers, see Soft delete for containers.

Lock storage account to prevent accidental or malicious deletion or configuration changes (Defender for Cloud: -): Apply an Azure Resource Manager lock to your storage account to protect the account from accidental or malicious deletion or configuration change. Locking a storage account does not prevent data within that account from being deleted. It only prevents the account itself from being deleted. For more information, see Apply an Azure Resource Manager lock to a storage account.

Require secure transfer (HTTPS) to the storage account (Defender for Cloud: -): When you require secure transfer for a storage account, all requests to the storage account must be made over HTTPS. Any requests made over HTTP are rejected. Microsoft recommends that you always require secure transfer for all of your storage accounts. For more information, see Require secure transfer to ensure secure connections.

Limit shared access signature (SAS) tokens to HTTPS connections only (Defender for Cloud: -): Requiring HTTPS when a client uses a SAS token to access blob data helps to minimize the risk of eavesdropping. For more information, see Grant limited access to Azure Storage resources using shared access signatures (SAS).

Use Microsoft Entra ID to authorize access to blob data (Defender for Cloud: -): Microsoft Entra ID provides superior security and ease of use over Shared Key for authorizing requests to Blob storage. For more information, see Authorize access to data in Azure Storage.

Keep in mind the principle of least privilege when assigning permissions to a Microsoft Entra security principal via Azure RBAC (Defender for Cloud: -): When assigning a role to a user, group, or application, grant that security principal only those permissions that are necessary for them to perform their tasks. Limiting access to resources helps prevent both unintentional and malicious misuse of your data.

Use a user delegation SAS to grant limited access to blob data to clients (Defender for Cloud: -): A user delegation SAS is secured with Microsoft Entra credentials and also by the permissions specified for the SAS. A user delegation SAS is analogous to a service SAS in terms of its scope and function, but offers security benefits over the service SAS. For more information, see Grant limited access to Azure Storage resources using shared access signatures (SAS).

Regenerate your account keys periodically (Defender for Cloud: -): Rotating the account keys periodically reduces the risk of exposing your data to malicious actors.

Disallow Shared Key authorization (Defender for Cloud: -): When you disallow Shared Key authorization for a storage account, Azure Storage rejects all subsequent requests to that account that are authorized with the account access keys. Only secured requests that are authorized with Microsoft Entra ID will succeed.

Keep in mind the principle of least privilege when assigning permissions to a SAS (Defender for Cloud: -): When creating a SAS, specify only those permissions that are required by the client to perform its function. Limiting access to resources helps prevent both unintentional and malicious misuse of your data.

Have a revocation plan in place for any SAS that you issue to clients (Defender for Cloud: -): If a SAS is compromised, you will want to revoke that SAS as soon as possible. To revoke a user delegation SAS, revoke the user delegation key to quickly invalidate all signatures associated with that key. To revoke a service SAS that is associated with a stored access policy, you can delete the stored access policy, rename the policy, or change its expiry time to a time that is in the past. For more information, see Grant limited access to Azure Storage resources using shared access signatures (SAS).

If a service SAS is not associated with a stored access policy, then set the expiry time to one hour or less (Defender for Cloud: -): A service SAS that is not associated with a stored access policy cannot be revoked. For this reason, limiting the expiry time so that the SAS is valid for one hour or less is recommended.

Disable anonymous read access to containers and blobs (Defender for Cloud: -): Anonymous read access to a container and its blobs grants read-only access to those resources to any client. Avoid enabling anonymous read access unless your scenario requires it. To learn how to disable anonymous access for a storage account, see Overview: Remediating anonymous read access for blob data.
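A hedged Azure CLI sketch covering several of the recommendations above (the account and resource group names are placeholders):
Bash
# Require HTTPS, disallow anonymous blob access, turn on blob and container
# soft delete, and lock the account against deletion.
az storage account update \
  --resource-group myResourceGroup \
  --name mystorageaccount \
  --https-only true \
  --allow-blob-public-access false

az storage account blob-service-properties update \
  --resource-group myResourceGroup \
  --account-name mystorageaccount \
  --enable-delete-retention true \
  --delete-retention-days 7 \
  --enable-container-delete-retention true \
  --container-delete-retention-days 7

az lock create \
  --name preventDeletion \
  --lock-type CanNotDelete \
  --resource-group myResourceGroup \
  --resource-name mystorageaccount \
  --resource-type Microsoft.Storage/storageAccounts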
Networking
Each entry below lists the recommendation, the related comments, and whether Microsoft Defender for Cloud surfaces it as a recommendation (Yes or -).

Configure the minimum required version of Transport Layer Security (TLS) for a storage account (Defender for Cloud: -): Require that clients use a more secure version of TLS to make requests against an Azure Storage account by configuring the minimum version of TLS for that account. For more information, see Configure minimum required version of Transport Layer Security (TLS) for a storage account.

Enable the Secure transfer required option on all of your storage accounts (Defender for Cloud: Yes): When you enable the Secure transfer required option, all requests made against the storage account must take place over secure connections. Any requests made over HTTP will fail. For more information, see Require secure transfer in Azure Storage.

Enable firewall rules (Defender for Cloud: -): Configure firewall rules to limit access to your storage account to requests that originate from specified IP addresses or ranges, or from a list of subnets in an Azure Virtual Network (VNet). For more information about configuring firewall rules, see Configure Azure Storage firewalls and virtual networks.

Allow trusted Microsoft services to access the storage account (Defender for Cloud: -): Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service operating within an Azure Virtual Network (VNet) or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on. You can permit requests from other Azure services by adding an exception to allow trusted Microsoft services to access the storage account. For more information about adding an exception for trusted Microsoft services, see Configure Azure Storage firewalls and virtual networks.

Use private endpoints (Defender for Cloud: -): A private endpoint assigns a private IP address from your Azure Virtual Network (VNet) to the storage account. It secures all traffic between your VNet and the storage account over a private link. For more information about private endpoints, see Connect privately to a storage account using Azure Private Endpoint.

Use VNet service tags (Defender for Cloud: -): A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change. For more information about service tags supported by Azure Storage, see Azure service tags overview. For a tutorial that shows how to use service tags to create outbound network rules, see Restrict access to PaaS resources.

Limit network access to specific networks (Defender for Cloud: Yes): Limiting network access to networks hosting clients requiring access reduces the exposure of your resources to network attacks.

Configure network routing preference (Defender for Cloud: -): You can configure network routing preference for your Azure storage account to specify how network traffic is routed to your account from clients over the Internet, using the Microsoft global network or Internet routing. For more information, see Configure network routing preference for Azure Storage.
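A minimal sketch of the firewall-related recommendations, assuming placeholder account and network names:
Bash
# Deny public network traffic by default, allow trusted Microsoft services and a
# specific IP range, and require TLS 1.2.
az storage account update \
  --resource-group myResourceGroup \
  --name mystorageaccount \
  --default-action Deny \
  --bypass AzureServices \
  --min-tls-version TLS1_2

az storage account network-rule add \
  --resource-group myResourceGroup \
  --account-name mystorageaccount \
  --ip-address 203.0.113.0/24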
Logging/Monitoring
Track how requests are authorized (Defender for Cloud: -): Enable logging for Azure Storage to track how requests to the service are authorized. The logs indicate whether a request was made anonymously, by using an OAuth 2.0 token, by using Shared Key, or by using a shared access signature (SAS). For more information, see Monitoring Azure Blob Storage with Azure Monitor or Azure Storage analytics logging with Classic Monitoring.
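A hedged example of turning on resource logs for the blob service with a diagnostic setting (the resource IDs and workspace name are placeholders):
Bash
az monitor diagnostic-settings create \
  --name blobAuthLogs \
  --resource "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/blobServices/default" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace" \
  --logs '[{"category":"StorageRead","enabled":true},{"category":"StorageWrite","enabled":true},{"category":"StorageDelete","enabled":true}]'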
Next steps
Azure security documentation
Secure development documentation.
Customer Lockbox for Microsoft Azure
Article • 07/30/2024
7 Note
To use this feature, your organization must have an Azure support plan with a
minimal level of Developer.
This article covers how to enable Customer Lockbox for Microsoft Azure and how
requests are initiated, tracked, and stored for later reviews and audits.
Supported services
The following services are currently supported for Customer Lockbox for Microsoft
Azure:
7 Note
To enable Customer Lockbox for Microsoft Azure, the user account needs to have
the Global Administrator role assigned.
Workflow
The following steps outline a typical workflow for a Customer Lockbox for Microsoft
Azure request.
2. After this person troubleshoots the issue, but can't fix it, they open a support ticket
from the Azure portal . The ticket is assigned to an Azure Customer Support
Engineer.
3. An Azure Support Engineer reviews the service request and determines the next
steps to resolve the issue.
4. If the support engineer can't troubleshoot the issue by using standard tools and
service generated data, the next step is to request elevated permissions by using a
Just-In-Time (JIT) access service. This request can be from the original support
engineer or from a different engineer because the problem is escalated to the
Azure DevOps team.
5. After the Azure Engineer submits an access request, Just-In-Time service evaluates
the request taking into account factors such as:
6. When the request requires direct access to customer data, a Customer Lockbox
request is initiated. For example, remote desktop access to a customer's virtual
machine.
The request is now in a Customer Notified state, waiting for the customer's
approval before granting access.
9. The email notification provides a link to the Customer Lockbox blade in the
Administration module. The designated approver signs in to the Azure portal to
view any pending requests that their organization has for Customer Lockbox for
Microsoft Azure:
The request remains in the customer queue for four days. After this time, the
access request automatically expires and no access is granted to Microsoft
engineers.
10. To get the details of the pending request, the designated approver can select the
Customer Lockbox request from Pending Requests:
11. The designated approver can also select the SERVICE REQUEST ID to view the
support ticket request that was created by the original user. This information
provides context for why Microsoft Support is engaged, and the history of the
reported problem. For example:
12. The designated approver reviews the request and selects Approve or Deny:
As a result of the selection:
For auditing purposes, the actions taken in this workflow are logged in Customer
Lockbox request logs.
Auditing logs
The auditing logs for Customer Lockbox for Azure are written to the activity logs for
subscription-scoped requests and to the Entra Audit Log for tenant-scoped requests.
Create request
Request approved
Request denied
You can filter for Service = Access Reviews and Activity = one of the above
activities.
As an example:
7 Note
The History tab in the Azure Lockbox portal has been removed due to existing
technical limitations. To see Customer Lockbox request history, please use the
Activity Log for subscription-scoped requests and the Entra Audit Log for tenant-
scoped requests.
Exclusions
Customer Lockbox requests are not triggered in the following scenarios:
External legal demands for data also do not trigger Customer Lockbox requests. For
details, see the discussion of government requests for data on the Microsoft Trust
Center.
Next steps
Enable Customer Lockbox from the Administration module in the Customer Lockbox
blade. Customer Lockbox for Microsoft Azure is available for all customers who have an
Azure support plan with a minimal level of Developer.
7 Note
To use this feature, your organization must have an Azure support plan with a
minimal level of Developer.
The alternate email notification feature enables customers to use alternate email IDs for
receiving Customer Lockbox notifications. This lets Customer Lockbox for Microsoft
Azure customers receive notifications in scenarios where their Azure account isn't
email-enabled or where they have a service principal defined as the tenant admin or
subscription owner.
) Important
For example, Alice has the subscription owner role for subscription X, and she adds
the email address of Bob, who has only a reader role, as an alternate/other email in
her user profile. When a Customer Lockbox request is created for a resource scoped to
subscription X, Bob receives the email notification, but he can't approve or reject the
Customer Lockbox request because he doesn't have the required privileges for it
(the subscription owner role).
Prerequisites
To take advantage of the Customer Lockbox for Microsoft Azure alternate email feature,
you must have:
A Microsoft Entra ID tenant that has Customer Lockbox for Microsoft Azure
enabled on it.
A Developer or above Azure support plan.
Role Assignments:
A user account with Tenant admin/privileged authentication administrator/User
administrator role to update user settings.
[Optional] Subscription owner or the new Azure Customer Lockbox Approver for
Subscription role if you’d like to approve/reject Customer Lockbox requests.
Set up
Here are the steps to set up the Customer Lockbox for Microsoft Azure alternate email
feature.
4. Search for the user for whom you want to add an alternate email address.
7 Note
6. Navigate to the Contact Information tab.
7. Select Add email under the 'Other emails' category and then select Add.
8. Add the alternate email address in the text field and select Save.
9. Select the Save button in the Contact Information tab to save the updates.
10. The contact information tab for this user should now show updated information
with alternate email:
11. Anytime a Lockbox request is triggered and the above user is identified as a
Lockbox approver, the Lockbox email notification is sent to both the primary and the
other email addresses, notifying them that Microsoft Support is trying to access a
resource within their tenant and that they should take action by signing in to the
Azure portal to approve or reject the request. Here is an example screenshot:
Known Issues
Here are the known issues with this feature:
Duplicate emails are sent if the value for the primary and other email is the same.
Notifications are sent only to the first email address in 'other emails', even if
multiple email IDs are configured in the other email field.
If the primary email is not set, and the other email is set, two emails are sent to the
alternate email address.
Next steps
Customer Lockbox for Microsoft Azure
Customer Lockbox for Microsoft Azure frequently asked questions
This article answers frequently asked questions about Customer Lockbox for Microsoft
Azure.
General
Can I enable Customer Lockbox for Microsoft
Azure at management group or subscription
level?
No, Customer Lockbox for Microsoft Azure can only be enabled at tenant-level, and is
applicable to all the subscriptions and resources under that tenant.
Next steps
Customer Lockbox for Microsoft Azure overview
Customer Lockbox for Microsoft Azure alternate email notifications
This security baseline applies guidance from the Microsoft cloud security benchmark
version 1.0 to Customer Lockbox for Microsoft Azure. The Microsoft cloud security
benchmark provides recommendations on how you can secure your cloud solutions on
Azure. The content is grouped by the security controls defined by the Microsoft cloud
security benchmark and the related guidance applicable to Customer Lockbox for
Microsoft Azure.
You can monitor this security baseline and its recommendations using Microsoft
Defender for Cloud. Azure Policy definitions will be listed in the Regulatory Compliance
section of the Microsoft Defender for Cloud portal page.
When a feature has relevant Azure Policy Definitions, they are listed in this baseline to
help you measure compliance with the Microsoft cloud security benchmark controls and
recommendations. Some recommendations may require a paid Microsoft Defender plan
to enable certain security scenarios.
7 Note
Features not applicable to Customer Lockbox for Microsoft Azure have been
excluded. To see how Customer Lockbox for Microsoft Azure completely maps to
the Microsoft cloud security benchmark, see the full Customer Lockbox for
Microsoft Azure security baseline mapping file .
Security profile
The security profile summarizes high-impact behaviors of Customer Lockbox for
Microsoft Azure, which may result in increased security considerations.
Features
Description: Service network traffic respects Network Security Groups rule assignment
on its subnets. Learn more.
Features
Description: Service native IP filtering capability for filtering network traffic (not to be
confused with NSG or Azure Firewall). Learn more.
Description: Service supports disabling public network access either through using
service-level IP ACL filtering rule (not NSG or Azure Firewall) or using a 'Disable Public
Network Access' toggle switch. Learn more.
Identity management
For more information, see the Microsoft cloud security benchmark: Identity management.
Features
Description: Service supports using Azure AD authentication for data plane access.
Learn more.
Features
Managed Identities
Description: Data plane actions support authentication using managed identities. Learn
more.
Service Principals
Description: Data plane supports authentication using service principals. Learn more.
Features
Features
Description: Data plane supports native use of Azure Key Vault for credential and secrets
store. Learn more.
Privileged access
For more information, see the Microsoft cloud security benchmark: Privileged access.
Features
Description: Service has the concept of a local administrative account. Learn more.
Features
Description: Azure role-based access control (Azure RBAC) can be used to manage
access to the service's data plane actions. Learn more.
Features
Customer Lockbox
Description: Customer Lockbox can be used for Microsoft support access. Learn more.
Data protection
For more information, see the Microsoft cloud security benchmark: Data protection.
Features
Description: Tools (such as Azure Purview or Azure Information Protection) can be used
for data discovery and classification in the service. Learn more.
Features
Description: Service supports DLP solution to monitor sensitive data movement (in
customer's content). Learn more.
Description: Service supports data in-transit encryption for data plane. Learn more.
Features
Description: The service supports Azure Key Vault integration for any customer keys,
secrets, or certificates. Learn more.
Features
Description: The service supports Azure Key Vault integration for any customer
certificates. Learn more.
Features
Description: Service configurations can be monitored and enforced via Azure Policy.
Learn more.
Features
Features
Description: Service produces resource logs that can provide enhanced service-specific
metrics and logging. The customer can configure these resource logs and send them to
their own data sink like a storage account or log analytics workspace. Learn more.
Feature notes: Though Customer Lockbox does not support this feature, the customer
does have access to the activity logs for the service.
Next steps
See the Microsoft cloud security benchmark overview
Learn more about Azure security baselines
Trusted Hardware Identity Management
Article • 02/06/2024
The Open Enclave SDK and Azure Attestation don't look at the nextUpdate date,
however, and will pass attestation.
Ubuntu 20.04
Ubuntu 18.04
Windows
For newer versions of Ubuntu (for example, Ubuntu 22.04), you have to use the Intel
QPL.
Why do Trusted Hardware Identity Management and Intel
have different baselines?
Trusted Hardware Identity Management and Intel provide different baseline levels of the
trusted computing base. When customers assume that Intel has the latest baselines,
they must ensure that all the requirements are satisfied. This approach can lead to a
breakage if customers haven't updated to the specified requirements.
Trusted Hardware Identity Management takes a slower approach to updating the TCB
baseline, so customers can make the necessary changes at their own pace. Although this
approach provides an older TCB baseline, customers won't experience a breakage if they
haven't met the requirements of the new TCB baseline. This is why the TCB baseline from
Trusted Hardware Identity Management is a different version from Intel's baseline. We
want to empower customers to meet the requirements of the new TCB baseline at their
pace, instead of forcing them to update and causing a disruption that would require
reprioritization of workstreams.
The quote generation/verification collateral that's used to generate the Intel SGX or Intel
TDX quotes can be split into:
The PCK certificate. To retrieve it, customers must use a Trusted Hardware Identity
Management endpoint.
All other quote generation/verification collateral. To retrieve it, customers can
either use a Trusted Hardware Identity Management endpoint or an Intel
Provisioning Certification Service (PCS) endpoint.
The Intel QPL configuration file (sgx_default_qcnl.conf) contains three keys for defining
the collateral endpoints. The pccs_url key defines the endpoint that's used to retrieve
the PCK certificates. The collateral_service key can define the endpoint that's used to
retrieve all other quote generation/verification collateral. If the collateral_service key
is not defined, all quote verification collateral is retrieved from the endpoint defined
with the pccs_url key.
The following code snippet is from an example of an Intel QPL configuration file:
JSON
{
"pccs_url":
"https://global.acccache.azure.net/sgx/certification/v3/",
"use_secure_cert": true,
"collateral_service":
"https://global.acccache.azure.net/sgx/certification/v3/",
"pccs_api_version": "3.1",
"retry_times": 6,
"retry_delay": 5,
"local_pck_url":
"http://169.254.169.254/metadata/THIM/sgx/certification/v3/",
"pck_cache_expire_hours": 24,
"verify_collateral_cache_expire_hours": 24,
"custom_request_options": {
"get_cert": {
"headers": {
"metadata": "true"
},
"params": {
"api-version": "2021-07-22-preview"
}
}
}
}
The following procedures explain how to change the Intel QPL configuration file and
activate the changes.
On Windows
1. Make changes to the configuration file.
2. Ensure that there are read permissions to the file from the following registry
location and key/value:
Bash
[HKEY_LOCAL_MACHINE\SOFTWARE\Intel\SGX\QCNL]
"CONFIG_FILE"="<Full File Path>"
3. Restart the AESMD service. For instance, open PowerShell as an administrator and
use the following commands:
Bash
On Linux
1. Make changes to the configuration file. For example, you can use Vim for the
changes via the following command:
Bash
sudo vim /etc/sgx_default_qcnl.conf
2. Restart the AESMD service. Open any terminal and run the following commands:
Bash
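# The restart commands were not captured above; on Ubuntu the AESM daemon is
# typically managed by systemd (assumption - verify the service name on your
# system):
sudo systemctl restart aesmd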
URI parameters
Bash
GET "http://169.254.169.254/metadata/THIM/amd/certification"
Request body
Sample request
Bash
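# The original sample request was not captured; a typical call from inside an
# Azure VM (assumption, mirroring the Kubernetes job later in this article):
curl -H "Metadata:true" "http://169.254.169.254/metadata/THIM/amd/certification"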
Responses
200 OK: Lists available collateral in the HTTP body within JSON format
Definitions
certificateChain: AMD SEV Key (ASK) and AMD Root Key (ARK) certificates
Bash
b. Create an AKS cluster with one CVM node in the resource group:
Bash
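# The original commands were not captured; an illustrative sketch only. The
# resource names and the confidential VM size are assumptions - confirm the
# sizes available in your region before using this.
az aks create \
  --resource-group myResourceGroup \
  --name myAksCluster \
  --node-count 1 \
  --node-vm-size Standard_DC4as_v5 \
  --generate-ssh-keys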
Bash
Bash
2. Verify the connection to your cluster by using the kubectl get command. This
command returns a list of the cluster nodes.
Bash
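# List the cluster nodes; confirm each node shows a Ready status (command
# assumed from the step description):
kubectl get nodes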
The following output example shows the single node that you created in the
previous steps. Make sure that the node status is Ready.
3. Create a curl.yaml file with the following content. It defines a job that runs a curl
container to fetch AMD collateral from the Trusted Hardware Identity Management
endpoint. For more information about Kubernetes Jobs, see the Kubernetes
documentation .
YAML
apiVersion: batch/v1
kind: Job
metadata:
name: curl
spec:
template:
metadata:
labels:
app: curl
spec:
nodeSelector:
kubernetes.azure.com/security-type: ConfidentialVM
containers:
- name: curlcontainer
image: alpine/curl:3.14
imagePullPolicy: IfNotPresent
args: ["-H", "Metadata:true",
"http://169.254.169.254/metadata/THIM/amd/certification"]
restartPolicy: "Never"
Bash
Bash
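# The intermediate steps were not captured above; typically you apply the job
# manifest and wait for it to complete (assumption):
kubectl apply -f curl.yaml
kubectl get jobs --watch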
6. Run the following command to get the job logs and validate that it's working. A
successful output should include vcekCert, tcbm, and certificateChain.
Bash
kubectl logs job/curl
Next steps
Learn more about Azure Attestation documentation.
Learn more about Azure confidential computing .
Securing PaaS deployments
Article • 06/27/2024
Develop secure applications on Azure is a general guide to the security questions and
controls you should consider at each phase of the software development lifecycle when
developing applications for the cloud.
Organizations are able to improve their threat detection and response times by using a
provider's cloud-based security capabilities and cloud intelligence. By shifting
responsibilities to the cloud provider, organizations can get more security coverage,
which enables them to reallocate security resources and budget to other business
priorities.
In the middle of the stack, there is no difference between a PaaS deployment and on-
premises. At the application layer and the account and access management layer, you
have similar risks. In the next steps section of this article, we will guide you to best
practices for eliminating or minimizing these risks.
At the top of the stack, data governance and rights management, you take on one risk
that can be mitigated by key management. (Key management is covered in best
practices.) While key management is an additional responsibility, you have areas in a
PaaS deployment that you no longer have to manage so you can shift resources to key
management.
The Azure platform also provides you strong DDoS protection by using various network-
based technologies. However, all types of network-based DDoS protection methods
have their limits on a per-link and per-datacenter basis. To help avoid the impact of
large DDoS attacks, you can take advantage of Azure's core cloud capability of enabling
you to quickly and automatically scale out to defend against DDoS attacks. We'll go into
more detail on how you can do this in the recommended practices articles.
The following figure shows how the security perimeter has evolved from a network
perimeter to an identity perimeter. Security becomes less about defending your network
and more about defending your data, as well as managing the security of your apps and
users. The key difference is that you want to push security closer to what's important to
your company.
Initially, Azure PaaS services (for example, web roles and Azure SQL) provided little or no
traditional network perimeter defenses. It was understood that the element's purpose
was to be exposed to the Internet (web role) and that authentication provides the new
perimeter (for example, BLOB or Azure SQL).
Modern security practices assume that the adversary has breached the network
perimeter. Therefore, modern defense practices have moved to identity. Organizations
must establish an identity-based security perimeter with strong authentication and
authorization hygiene (best practices).
Principles and patterns for the network perimeter have been available for decades. In
contrast, the industry has relatively less experience with using identity as the primary
security perimeter. With that said, we have accumulated enough experience to provide
some general recommendations that are proven in the field and apply to almost all PaaS
services.
The following are best practices for managing the identity perimeter.
Best practice: Secure your keys and credentials to secure your PaaS deployment. Detail:
Losing keys and credentials is a common problem. You can use a centralized solution
where keys and secrets can be stored in hardware security modules (HSMs). Azure Key
Vault safeguards your keys and secrets by encrypting authentication keys, storage
account keys, data encryption keys, .pfx files, and passwords using keys that are
protected by HSMs.
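For example, a minimal sketch with placeholder names:
Bash
# Create a Key Vault and store a secret instead of embedding it in application
# configuration.
az keyvault create \
  --resource-group myResourceGroup \
  --name myAppKeyVault \
  --location eastus

az keyvault secret set \
  --vault-name myAppKeyVault \
  --name StorageConnectionString \
  --value "<connection-string>"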
Best practice: Don't put credentials and other secrets in source code or GitHub. Detail:
The only thing worse than losing your keys and credentials is having an unauthorized
party gain access to them. Attackers can take advantage of bot technologies to find keys
and secrets stored in code repositories such as GitHub. Do not put key and secrets in
these public code repositories.
Best practice: Protect your VM management interfaces on hybrid PaaS and IaaS services
by using a management interface that enables you to remotely manage these VMs
directly. Detail: Remote management protocols such as SSH, RDP, and PowerShell
remoting can be used. In general, we recommend that you do not enable direct remote
access to VMs from the internet.
If possible, use alternate approaches like using virtual private networks in an Azure
virtual network. If alternative approaches are not available, ensure that you use complex
passphrases and two-factor authentication (such as Microsoft Entra multifactor
authentication).
Best practice: Use strong authentication and authorization platforms. Detail: Use
federated identities in Microsoft Entra ID instead of custom user stores. When you use
federated identities, you take advantage of a platform-based approach and you
delegate the management of authorized identities to your partners. A federated identity
approach is especially important when employees are terminated and that information
needs to be reflected through multiple identity and authorization systems.
Use standard authentication protocols, such as OAuth2 and Kerberos. These protocols
have been extensively peer reviewed and are likely implemented as part of your
platform libraries for authentication and authorization.
The following table lists the STRIDE threats and gives some example mitigations that use
Azure features. These mitigations won't work in every situation.
Best practice: Authenticate through Microsoft Entra ID. Detail: App Service provides an
OAuth 2.0 service for your identity provider. OAuth 2.0 focuses on client developer
simplicity while providing specific authorization flows for web applications, desktop
applications, and mobile phones. Microsoft Entra ID uses OAuth 2.0 to enable you to
authorize access to mobile and web applications.
Best practice: Restrict access based on the need to know and least privilege security
principles. Detail: Restricting access is imperative for organizations that want to enforce
security policies for data access. You can use Azure RBAC to assign permissions to users,
groups, and applications at a certain scope. To learn more about granting users access
to applications, see Get started with access management.
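A minimal sketch of a least-privilege role assignment scoped to a single resource group (the identifiers are placeholders):
Bash
az role assignment create \
  --assignee "<group-object-id>" \
  --role "Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/myResourceGroup"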
Best practice: Protect your keys. Detail: Azure Key Vault helps safeguard cryptographic
keys and secrets that cloud applications and services use. With Key Vault, you can
encrypt keys and secrets (such as authentication keys, storage account keys, data
encryption keys, .PFX files, and passwords) by using keys that are protected by hardware
security modules (HSMs). For added assurance, you can import or generate keys in
HSMs. See Azure Key Vault to learn more. You can also use Key Vault to manage your
TLS certificates with auto-renewal.
Best practice: Restrict incoming source IP addresses. Detail: App Service Environment
has a virtual network integration feature that helps you restrict incoming source IP
addresses through network security groups. Virtual networks enable you to place Azure
resources in a non-internet, routable network that you control access to. To learn more,
see Integrate your app with an Azure virtual network.
Best practice: Monitor the security state of your App Service environments. Detail: Use
Microsoft Defender for Cloud to monitor your App Service environments. When
Defender for Cloud identifies potential security vulnerabilities, it creates
recommendations that guide you through the process of configuring the needed
controls.
Web Application Firewall (WAF) provides centralized protection of your web applications
from common exploits and vulnerabilities.
DDoS protection
Azure DDoS Protection, combined with application-design best practices, provides
enhanced DDoS mitigation features to provide more defense against DDoS attacks. You
should enable Azure DDOS Protection on any perimeter virtual network.
Monitor the performance of your applications
Monitoring is the act of collecting and analyzing data to determine the performance,
health, and availability of your application. An effective monitoring strategy helps you
understand the detailed operation of the components of your application. It helps you
increase your uptime by notifying you of critical issues so that you can resolve them
before they become problems. It also helps you detect anomalies that might be security
related.
Use Azure Application Insights to monitor availability, performance, and usage of your
application, whether it's hosted in the cloud or on-premises. By using Application
Insights, you can quickly identify and diagnose errors in your application without waiting
for a user to report them. With the information that you collect, you can make informed
choices on your application's maintenance and improvements.
Application Insights has extensive tools for interacting with the data that it collects.
Application Insights stores its data in a common repository. It can take advantage of
shared functionality such as alerts, dashboards, and deep analysis with the Kusto query
language.
Fuzz testing is a method for finding program failures (code errors) by supplying
malformed input data to program interfaces (entry points) that parse and consume this
data.
Next steps
In this article, we focused on security advantages of an Azure PaaS deployment and
security best practices for cloud applications. Next, learn recommended practices for
securing your PaaS web and mobile solutions using specific Azure services. We'll start
with Azure App Service, Azure SQL Database and Azure Synapse Analytics, Azure
Storage, and Azure Cloud Services. As articles on recommended practices for other
Azure services become available, links will be provided in the following list:
See Develop secure applications on Azure for security questions and controls you
should consider at each phase of the software development lifecycle when developing
applications for the cloud.
See Azure security best practices and patterns for more security best practices to use
when you're designing, deploying, and managing your cloud solutions by using Azure.
The following resources are available to provide more general information about Azure
security and related Microsoft services:
Microsoft Product Lifecycle - for consistent and predictable guidelines for support
throughout the life of a product
Microsoft Security Response Center - where Microsoft security vulnerabilities,
including issues with Azure, can be reported or via email to [email protected]
In this article, we discuss a collection of Azure App Service security best practices for
securing your PaaS web and mobile applications. These best practices are derived from
our experience with Azure and the experiences of customers like yourself.
Azure App Service is a platform-as-a-service (PaaS) offering that lets you create web and
mobile apps for any platform or device and connect to data anywhere, in the cloud or
on-premises. App Service includes the web and mobile capabilities that were previously
delivered separately as Azure Websites and Azure Mobile Services. It also includes new
capabilities for automating business processes and hosting cloud APIs. As a single
integrated service, App Service brings a rich set of capabilities to web, mobile, and
integration scenarios.
For App Service on Windows, you can also restrict IP addresses dynamically by
configuring the web.config. For more information, see Dynamic IP Security.
Next steps
This article introduced you to a collection of App Service security best practices for
securing your PaaS web and mobile applications. To learn more about securing your
PaaS deployments, see:
In this article, we discuss a collection of Azure Storage security best practices for
securing your platform-as-a-service (PaaS) web and mobile applications. These best
practices are derived from our experience with Azure and the experiences of customers
like yourself.
Azure makes it possible to deploy and use storage in ways not easily achievable on-
premises. With Azure storage, you can reach high levels of scalability and availability
with relatively little effort. Not only is Azure Storage the foundation for Windows and
Linux Azure Virtual Machines, it can also support large distributed applications.
Azure Storage provides the following four services: Blob storage, Table storage, Queue
storage, and File storage. To learn more, see Introduction to Microsoft Azure Storage.
Storage access keys are high priority secrets and should only be accessible to people
responsible for storage access control. If the wrong people get access to these keys,
they'll have complete control of storage and could replace, delete, or add files to
storage. This includes malware and other types of content that can potentially
compromise your organization or your customers.
You still need a way to provide access to objects in storage. To provide more granular
access, you can take advantage of shared access signature (SAS). The SAS makes it
possible for you to share specific objects in storage for a pre-defined time-interval and
with specific permissions. A shared access signature allows you to define:
The interval over which the SAS is valid, including the start time and the expiry
time.
The permissions granted by the SAS. For example, a SAS on a blob might grant a
user read and write permissions to that blob, but not delete permissions.
An optional IP address or range of IP addresses from which Azure Storage accepts
the SAS. For example, you might specify a range of IP addresses belonging to your
organization. This provides another measure of security for your SAS.
The protocol over which Azure Storage accepts the SAS. You can use this optional
parameter to restrict access to clients using HTTPS.
SAS allows you to share content the way you want to share it without giving away your
storage account keys. Always using SAS in your application is a secure way to share your
storage resources without compromising your storage account keys.
To learn more about shared access signature, see Using shared access signatures.
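For example, a hedged sketch that issues a short-lived, read-only, HTTPS-only user delegation SAS for a single blob (the account, container, blob name, and expiry are placeholders):
Bash
az storage blob generate-sas \
  --account-name mystorageaccount \
  --container-name reports \
  --name monthly.pdf \
  --permissions r \
  --expiry 2025-01-01T00:00Z \
  --https-only \
  --auth-mode login \
  --as-user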
You can use Azure built-in roles in Azure to assign privileges to users. For example, use
Storage Account Contributor for cloud operators that need to manage storage accounts
and Classic Storage Account Contributor role to manage classic storage accounts. For
cloud operators that need to manage VMs but not the virtual network or storage
account to which they're connected, you can add them to the Virtual Machine
Contributor role.
Organizations that don't enforce data access control by using capabilities such as Azure
RBAC may be giving more privileges than necessary for their users. More privileges than
necessary can lead to data compromise by allowing some users access to data they
shouldn't have in the first place.
To learn more about Azure RBAC see:
Client-side encryption also enables you to have sole control over your encryption keys.
You can generate and manage your own encryption keys. It uses an envelope technique
where the Azure storage client library generates a content encryption key (CEK) that is
then wrapped (encrypted) using the key encryption key (KEK). The KEK is identified by a
key identifier and can be an asymmetric key pair or a symmetric key and can be
managed locally or stored in Azure Key Vault.
Client-side encryption is built into the Java and the .NET storage client libraries. See
Client-side encryption and Azure Key Vault for Microsoft Azure Storage for information
on encrypting data within client applications and generating and managing your own
encryption keys.
Next steps
This article introduced you to a collection of Azure Storage security best practices for
securing your PaaS web and mobile applications. To learn more about securing your
PaaS deployments, see:
Securing PaaS deployments
Securing PaaS web and mobile applications using Azure App Services
Securing PaaS databases in Azure
Best practices for securing PaaS
databases in Azure
Article • 10/12/2023
In this article, we discuss a collection of Azure SQL Database and Azure Synapse
Analytics security best practices for securing your platform-as-a-service (PaaS) web and
mobile applications. These best practices are derived from our experience with Azure
and the experiences of customers like yourself.
Azure SQL Database and Azure Synapse Analytics provide a relational database service
for your internet-based applications. Let's look at services that help protect your
applications and data when using Azure SQL Database and Azure Synapse Analytics in a
PaaS deployment:
SQL authentication uses a username and password. When you created the server
for your database, you specified a "server admin" login with a username and
password. Using these credentials, you can authenticate to any database on that
server as the database owner.
7 Note
To ensure that Microsoft Entra ID is a good fit for your environment, see Microsoft
Entra features and limitations.
Azure SQL manages key-related issues for TDE. As with TDE on-premises, special care
must be taken to ensure recoverability and when moving databases. In more
sophisticated scenarios, the keys can be explicitly managed in Azure Key Vault through
extensible key management. See Enable TDE on SQL Server Using EKM. This also allows
for Bring Your Own Key (BYOK) through Azure Key Vault's BYOK capability.
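As a hedged illustration (the server and database names are placeholders), TDE can be checked or enabled with the Azure CLI; note that TDE is on by default for new Azure SQL databases:
Bash
az sql db tde set \
  --resource-group myResourceGroup \
  --server mysqlserver \
  --database mydb \
  --status Enabled

az sql db tde show \
  --resource-group myResourceGroup \
  --server mysqlserver \
  --database mydb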
Azure SQL provides encryption for columns through Always Encrypted. This allows only
authorized applications access to sensitive columns. Using this kind of encryption limits
SQL queries for encrypted columns to equality-based values.
Application-level encryption should also be used for selective data. Data sovereignty
concerns can sometimes be mitigated by encrypting data with a key that is kept in the
correct country/region. This prevents even accidental data transfer from causing an
issue, since it is impossible to decrypt the data without the key, assuming a strong
algorithm is used (such as AES-256).
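As a rough sketch of this approach, the application below fetches an AES-256 key from a Key Vault deployed in the required region and encrypts data before it leaves the application. The vault name, secret name, and the assumption that the secret holds a base64-encoded 256-bit key are placeholders for illustration (packages: azure-identity, azure-keyvault-secrets, cryptography).

```python
# Application-level AES-256-GCM encryption with a key held in a region-specific Key Vault.
import base64, os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

secrets = SecretClient(
    vault_url="https://contoso-emea-vault.vault.azure.net",  # placeholder vault in the required region
    credential=DefaultAzureCredential(),
)
key = base64.b64decode(secrets.get_secret("app-data-key").value)  # assumed 32-byte key

nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"customer record", None)
# Store nonce + ciphertext; without the regional key, the data can't be decrypted.
```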
You can use additional precautions to help secure the database, such as designing a
secure system, encrypting confidential assets, and building a firewall around the
database servers.
Next steps
This article introduced you to a collection of SQL Database and Azure Synapse Analytics
security best practices for securing your PaaS web and mobile applications. To learn
more about securing your PaaS deployments, see Securing PaaS deployments.
Azure Service Fabric security best practices
In addition to this article, also review the Service Fabric security checklist for more
information.
Deploying an application on Azure is fast, easy, and cost-effective. Before you deploy
your cloud application into production, review our list of essential and recommended
best practices for implementing secure clusters in your application.
Azure Service Fabric is a distributed systems platform that makes it easy to package,
deploy, and manage scalable and reliable microservices. Service Fabric also addresses
the significant challenges in developing and managing cloud applications. Developers
and administrators can avoid complex infrastructure problems and focus on
implementing mission-critical, demanding workloads that are scalable, reliable, and
manageable.
Use Azure Resource Manager templates and the Service Fabric PowerShell module
to create secure clusters.
Use X.509 certificates.
Configure security policies.
Implement the Reliable Actors security configuration.
Configure TLS for Azure Service Fabric.
Use network isolation and security with Azure Service Fabric.
Configure Azure Key Vault for security.
Assign users to roles.
Things to consider if hosting untrusted applications in a Service Fabric cluster.
Your clusters must be secured to prevent unauthorized users from connecting, especially
when a cluster is running in production. Although it's possible to create an unsecured
cluster, doing so allows anonymous users to connect to it if the cluster exposes
management endpoints to the public internet.
There are three scenarios for implementing cluster security by using various
technologies:
7 Note
Use Azure Resource Manager templates and the Service Fabric PowerShell module to
create a secure cluster. For step-by-step instructions to create a secure Service Fabric
cluster by using Azure Resource Manager templates, see Creating a Service Fabric
cluster.
Customize your cluster by using the template to configure managed storage for
VM virtual hard disks (VHDs).
Drive changes to your resource group by using the template for easy configuration
management and auditing.
Many aspects of the Service Fabric application lifecycle can be automated. The Service
Fabric PowerShell module automates common tasks for deploying, upgrading,
removing, and testing Azure Service Fabric applications. Managed APIs and HTTP APIs
for application management are also available.
To learn more about using X.509 certificates, see Add or remove certificates for a Service
Fabric cluster.
Use an Active Directory domain group or user: Run the service under the
credentials for an Active Directory user or group account. Be sure to use on-premises
Active Directory within your domain and not Azure Active Directory. Use that domain
user or group account to access other resources in the domain that have been granted
permissions, such as file shares.
Assign a security access policy for HTTP and HTTPS endpoints: Specify the
SecurityAccessPolicy property to apply a RunAs policy to a service when the
service manifest declares endpoint resources with HTTP. Ports allocated to the
HTTP endpoints are then correctly access-controlled (ACLed) for the RunAs user
account that the service runs under. When the policy isn't set, http.sys doesn't have
access to the service and calls from the client can fail.
To learn how to use security policies in a Service Fabric cluster, see Configure security
policies for your application.
In general, use the actor design pattern to help model solutions for the following
software problems or security scenarios:
In Service Fabric, actors are implemented in the Reliable Actors application framework.
This framework is based on the actor pattern and built on top of Service Fabric Reliable
Services. Each reliable actor service that you write is a partitioned stateful reliable
service.
Every actor is defined as an instance of an actor type, identical to the way a .NET object
is an instance of a .NET type. For example, an actor type that implements the
functionality of a calculator can have many actors of that type that are distributed on
various nodes across a cluster. Each of the distributed actors is uniquely characterized by
an actor identifier.
Replicator security configurations are used to secure the communication channel that is
used during replication. This configuration prevents services from seeing each other's
replication traffic and ensures that highly available data is secure. By default, the
security configuration section is empty and replication security isn't enabled. Replicator
configurations configure the replicator that is responsible for making the Actor State
Provider state highly reliable.
To configure TLS for an application, you first need to obtain an SSL/TLS certificate that
has been signed by a CA. The CA is a trusted third party that issues certificates for TLS
security purposes. If you don't already have an SSL/TLS certificate, you need to obtain
one from a company that sells SSL/TLS certificates.
The certificate must meet the following requirements for SSL/TLS certificates in Azure:
The certificate's subject name must match the domain name that is used to access
your cloud service.
Acquire a custom domain name to use for accessing your cloud service.
Request a certificate from a CA with a subject name that matches your service's
custom domain name. For example, if your custom domain name is
contoso.com, the certificate from your CA should have the subject name
.contoso.com or www.contoso.com.
7 Note
The HTTP protocol is insecure and subject to eavesdropping attacks. Data that is
transmitted over HTTP is sent as plain text from the web browser to the web server or
between other endpoints. Attackers can intercept and view sensitive data that is sent via
HTTP, such as credit card details and account logins. When data is sent or posted
through a browser via HTTPS, TLS ensures that sensitive information is encrypted and
secure from interception.
To learn more about using SSL/TLS certificates, see Configuring TLS for an application in
Azure.
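As a quick local check that a deployed endpoint presents a certificate whose subject matches the custom domain and negotiates TLS, the following sketch uses only the Python standard library; the host name is a placeholder.

```python
# Verify the certificate chain, the host-name match, and the negotiated TLS version.
import socket, ssl

host = "www.contoso.com"  # placeholder custom domain
context = ssl.create_default_context()  # verifies the CA chain and the host name

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("Negotiated:", tls.version())  # e.g. TLSv1.2 or TLSv1.3
        print("Subject:", dict(item[0] for item in cert["subject"]))
        # wrap_socket raises ssl.SSLCertVerificationError if the subject/SAN
        # doesn't match the host name or the chain isn't trusted.
```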
The template has an NSG for each of the virtual machine scale sets, which is used to
control the traffic in and out of the set. The rules are configured by default to allow all
traffic necessary for the system services and the application ports specified in the
template. Review these rules and make any changes to fit your needs, including adding
new rules for your applications.
For more information, see Common networking scenarios for Azure Service Fabric.
Service Fabric uses X.509 certificates to secure a cluster and to provide application
security features. You use Azure Key Vault to manage certificates for Service Fabric
clusters in Azure. The Azure resource provider that creates the clusters pulls the
certificates from a key vault. The provider then installs the certificates on the VMs when
the cluster is deployed on Azure.
A certificate relationship exists between Azure Key Vault, the Service Fabric cluster, and
the resource provider that uses the certificates. When the cluster is created, information
about the certificate relationship is stored in a key vault.
We recommend that you put the key vault in its own resource group. This action
helps to prevent the loss of your keys and secrets if other resource groups are
removed, such as storage, compute, or the group that contains your cluster. The
resource group that contains your key vault must be in the same region as the
cluster that is using it.
The key vault must be enabled for deployment. The compute resource provider
can then get the certificates from the vault and install them on the VM instances.
To learn more about how to set up a key vault, see What is Azure Key Vault?.
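For example, you might confirm which certificate the resource provider will install on the cluster VMs by reading its metadata from the key vault. This is a sketch using the azure-keyvault-certificates and azure-identity packages; the vault URL and certificate name are placeholders, and attribute names follow that package's object model.

```python
# Read certificate metadata from the cluster's key vault.
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient

client = CertificateClient(
    vault_url="https://sf-cluster-vault.vault.azure.net",  # placeholder
    credential=DefaultAzureCredential(),
)

cert = client.get_certificate("sfClusterCert")  # placeholder certificate name
print("Expires:", cert.properties.expires_on)
print("Thumbprint:", cert.properties.x509_thumbprint.hex())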
7 Note
For more information about using roles in Service Fabric, see Service Fabric role-
based access control for Service Fabric clients.
Azure Service Fabric supports two access control types for clients that are connected to
a Service Fabric cluster: administrator and user. The cluster administrator can use access
control to limit access to certain cluster operations for different groups of users. Access
control makes the cluster more secure.
Next steps
Service Fabric security checklist
Set up your Service Fabric development environment.
Learn about Service Fabric support options.
Azure security logging and auditing
Article • 08/29/2023
Azure provides a wide array of configurable security auditing and logging options to
help you identify gaps in your security policies and mechanisms. This article discusses
generating, collecting, and analyzing security logs from services hosted on Azure.
7 Note
Data plane logs provide information about events raised as part of Azure resource
usage. Examples of this type of log are the Windows event system, security, and
application logs in a virtual machine (VM) and the diagnostics logs that are
configured through Azure Monitor.
Processed events provide information about analyzed events/alerts that have been
processed on your behalf. Examples of this type are Microsoft Defender for Cloud
alerts where Microsoft Defender for Cloud has processed and analyzed your
subscription and provides concise security alerts.
The following table lists the most important types of logs available in Azure:
Activity logs (control-plane events on Azure Resource Manager resources): Provide insight into the operations that were performed on resources in your subscription. Integration: REST API, Azure Monitor.
Azure Resource logs (frequent data about the operation of Azure Resource Manager resources in your subscription): Provide insight into operations that your resource itself performed. Integration: Azure Monitor.
Azure Active Directory reporting (logs and reports): Reports user sign-in activities and system activity information about users and group management. Integration: Microsoft Graph.
Virtual machines and cloud services (Windows Event Log service and Linux Syslog): Captures system data and logging data on the virtual machines and transfers that data into a storage account of your choice. Integration: Windows (using Azure Diagnostics storage) and Linux in Azure Monitor.
Azure Storage Analytics (storage logging; provides metrics data for a storage account): Provides insight into trace requests, analyzes usage trends, and diagnoses issues with your storage account. Integration: REST API or the client library.
Process data / security alerts (Microsoft Defender for Cloud alerts, Azure Monitor logs alerts): Provides security information and alerts. Integration: REST APIs, JSON.
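If activity logs are routed to a Log Analytics workspace through a diagnostic setting, they can be queried programmatically. The sketch below uses the azure-monitor-query package; the workspace ID is a placeholder, and the column names assume the current AzureActivity table schema.

```python
# Pull recent control-plane (activity log) events from a Log Analytics workspace.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
query = """
AzureActivity
| where TimeGenerated > ago(1d)
| project TimeGenerated, OperationNameValue, Caller, ActivityStatusValue
| take 20
"""
response = client.query_workspace(
    workspace_id="<workspace-guid>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(list(row))
```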
Next steps
Auditing and logging: Protect data by maintaining visibility and responding quickly
to timely security alerts.
Search the audit log in the Microsoft 365 Defender portal: Use the Microsoft 365
Defender portal to search the unified audit log and view user and administrator
activity in your organization.
Azure security management and
monitoring overview
Article • 06/20/2024
This article provides an overview of the security features and services that Azure
provides to aid in the management and monitoring of Azure cloud services and virtual
machines.
Learn more:
Antimalware
With Azure, you can use antimalware software from major security vendors such as
Microsoft, Symantec, Trend Micro, McAfee, and Kaspersky. This software helps protect
your virtual machines from malicious files, adware, and other threats.
Microsoft Antimalware for Azure Cloud Services and Virtual Machines offers you the
ability to install an antimalware agent for both PaaS roles and virtual machines. Based on
System Center Endpoint Protection, this feature brings proven on-premises security
technology to the cloud.
Learn more:
Multifactor authentication
Microsoft Entra multifactor authentication is a method of authentication that requires
the use of more than one verification method. It adds a critical second layer of security
to user sign-ins and transactions.
Learn more:
Multifactor authentication
How Microsoft Entra multifactor authentication works
ExpressRoute
You can use Azure ExpressRoute to extend your on-premises networks into the
Microsoft Cloud over a dedicated private connection that's facilitated by a connectivity
provider. With ExpressRoute, you can establish connections to Microsoft cloud services
such as Azure, Microsoft 365, and CRM Online. Connectivity can be from an any-to-any
(IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection
through a connectivity provider at a colocation facility.
ExpressRoute connections don't go over the public internet. They can offer more
reliability, faster speeds, lower latencies, and higher security than typical connections
over the internet.
Learn more:
About VPN gateways
Azure network security overview
Unmanaged privileged access is a growing security risk for cloud-hosted resources
because organizations can't sufficiently monitor what those users are doing with their
privileged access. Additionally, if a user account with privileged access is compromised,
that one breach can affect an organization's overall cloud security. Microsoft Entra
Privileged Identity Management helps to resolve this risk by lowering the exposure time
of privileges and increasing visibility into usage.
Privileged Identity Management introduces the concept of a temporary admin for a role
or “just in time” administrator access. This kind of admin is a user who needs to
complete an activation process for that assigned role. The activation process changes
the assignment of the user to a role in Microsoft Entra ID from inactive to active, for a
specified time period.
Learn more:
Identity Protection
Microsoft Entra ID Protection provides a consolidated view of suspicious sign-in
activities and potential vulnerabilities to help protect your business. Identity Protection
detects suspicious activities for users and privileged (admin) identities, based on signals
like:
Brute-force attacks.
Leaked credentials.
Sign-ins from unfamiliar locations and infected devices.
Defender for Cloud helps you optimize and monitor the security of your Azure resources
by:
Enabling you to define policies for your Azure subscription resources according to:
Your organization's security needs.
The type of applications or sensitivity of the data in each subscription.
Any industry or regulatory standards or benchmarks you apply to your
subscriptions.
Monitoring the state of your Azure virtual machines, networking, and applications.
Providing a list of prioritized security alerts, including alerts from integrated
partner solutions. It also provides the information that you need to quickly
investigate an attack and recommendations on how to remediate it.
Learn more:
Next steps
Learn about the shared responsibility model and which security tasks are handled by
Microsoft and which tasks are handled by you.
For more information about security management, see Security management in Azure.
Azure subscribers may manage their cloud environments from multiple devices,
including management workstations, developer PCs, and even privileged end-user
devices that have task-specific permissions. In some cases, administrative functions are
performed through web-based consoles such as the Azure portal. In other cases, there
may be direct connections to Azure from on-premises systems over Virtual Private
Networks (VPNs), Terminal Services, client application protocols, or (programmatically)
the Azure classic deployment model. Additionally, client endpoints can be either domain
joined or isolated and unmanaged, such as tablets or smartphones.
Although multiple access and management capabilities provide a rich set of options, this
variability can add significant risk to a cloud deployment. It can be difficult to manage,
track, and audit administrative actions. This variability may also introduce security
threats through unregulated access to client endpoints that are used for managing
cloud services. Using general or personal workstations for developing and managing
infrastructure opens unpredictable threat vectors such as web browsing (for example,
watering hole attacks) or email (for example, social engineering and phishing).
The potential for attacks increases in this type of environment because it's challenging
to construct security policies and mechanisms to appropriately manage access to Azure
interfaces (such as SMAPI) from widely varied endpoints.
Remote management threats
Attackers often attempt to gain privileged access by compromising account credentials
(for example, through password brute forcing, phishing, and credential harvesting), or
by tricking users into running harmful code (for example, from harmful websites with
drive-by downloads or from harmful email attachments). In a remotely managed cloud
environment, account breaches can lead to an increased risk due to anywhere, anytime
access.
Even with tight controls on primary administrator accounts, lower-level user accounts
can be used to exploit weaknesses in one's security strategy. Lack of appropriate
security training can also lead to breaches through accidental disclosure or exposure of
account information.
When a user workstation is also used for administrative tasks, it can be compromised at
many different points, whether a user is browsing the web, using third-party and open-
source tools, or opening a harmful document file that contains a trojan.
In general, most targeted attacks that result in data breaches can be traced to browser
exploits, plug-ins (such as Flash, PDF, Java), and spear phishing (email) on desktop
machines. These machines may have administrative-level or service-level permissions to
access live servers or network devices for operations when used for development or
management of other assets.
Isolate sensitive functions from one another to decrease the likelihood that a mistake at
one level leads to a breach in another. Examples:
Security policies can include Group Policy settings that deny open Internet access
from the device and use of a restrictive firewall configuration.
Use Internet Protocol security (IPsec) VPNs if direct access is needed.
Configure separate management and development Active Directory domains.
Isolate and filter management workstation network traffic.
Use antimalware software.
Implement multi-factor authentication to reduce the risk of stolen credentials.
7 Note
Within an on-premises enterprise environment, you can limit the attack surface of your
physical infrastructure through dedicated management networks, server rooms that
have card access, and workstations that run on protected areas of the network. In a
cloud or hybrid IT model, being diligent about secure management services can be
more complex because of the lack of physical access to IT resources. Implementing
protection solutions requires careful software configuration, security-focused processes,
and comprehensive policies.
Virtual Machine deployed applications provide their own client tools and interfaces as
needed, such as the Microsoft Management Console (MMC), an enterprise management
console (such as Microsoft System Center or Windows Intune), or another management
application (Microsoft SQL Server Management Studio, for example). These tools typically
reside in an enterprise environment or client network. They may depend on specific
network protocols, such as Remote Desktop Protocol (RDP), that require direct, stateful
connections. Some may have web-enabled interfaces that shouldn't be openly
published or accessible via the Internet.
You can restrict access to infrastructure and platform services management in Azure by
using multi-factor authentication, X.509 management certificates, and firewall rules. The
Azure portal and SMAPI require Transport Layer Security (TLS). However, services and
applications that you deploy into Azure require you to take protection measures that are
appropriate based on your application. These mechanisms can frequently be enabled
more easily through a standardized hardened workstation configuration.
Security guidelines
In general, helping to secure administrator workstations for use with the cloud is similar
to the practices used for any workstation on-premises. For example, minimized build
and restrictive permissions. Some unique aspects of cloud management are more akin
to remote or out-of-band enterprise management. These include the use and auditing
of credentials, security-enhanced remote access, and threat detection and response.
Authentication
You can use Azure logon restrictions to constrain source IP addresses for accessing
administrative tools and audit access requests. To help Azure identify management
clients (workstations and/or applications), you can configure both SMAPI (via customer-
developed tools such as Windows PowerShell cmdlets) and the Azure portal to require
client-side management certificates to be installed, in addition to TLS/SSL certificates.
We also recommend that administrator access require multi-factor authentication.
Some applications or services that you deploy into Azure may have their own
authentication mechanisms for both end-user and administrator access, whereas others
take full advantage of Azure AD. Depending on whether you're federating credentials
via Active Directory Federation Services (AD FS), using directory synchronization, or
maintaining user accounts solely in the cloud, Microsoft Identity Manager (part of
Azure AD Premium) helps you manage identity lifecycles between the resources.
Connectivity
Several mechanisms are available to help secure client connections to your Azure virtual
networks. Two of these mechanisms, site-to-site VPN (S2S) and point-to-site VPN (P2S),
enable the use of industry-standard IPsec (S2S) for encryption and tunneling. When
you're connecting to public-facing Azure services management interfaces, such as the
Azure portal, Azure requires Hypertext Transfer Protocol Secure (HTTPS).
Monitoring, logging, and auditing provide a basis for tracking and understanding
administrative activities, but it may not always be feasible to audit all actions in
complete detail due to the amount of data generated. Auditing the effectiveness of the
management policies is a best practice, however.
Policy enforcement that includes strict access controls puts programmatic mechanisms
in place that can govern administrator actions, and it helps ensure that all possible
protection measures are being used. Logging provides proof of enforcement, in addition
to a record of who did what, from where, and when. Logging also enables you to audit
and crosscheck information about how administrators follow policies, and it provides
evidence of activities.
Client configuration
We recommend three primary configurations for a hardened workstation. The biggest
differentiators between them are cost, usability, and accessibility, while maintaining a
similar security profile across all options. The following table provides a short analysis of
the benefits and risks to each. (Note that "corporate PC" refers to a standard desktop PC
configuration that would be deployed for all domain users, regardless of roles.)
It's important that the hardened workstation is the host and not the guest, with nothing
between the host operating system and the hardware. Following the "clean source
principle" (also known as "secure origin") means that the host should be the most
hardened. Otherwise, the hardened workstation (guest) is subject to attacks on the
system on which it's hosted.
You can further segregate administrative functions through dedicated system images for
each hardened workstation that have only the tools and permissions needed for
managing select Azure and cloud applications, with specific local AD DS GPOs for the
necessary tasks.
The corporate PC virtual machine runs in a protected space and provides user
applications. The host remains a "clean source" and enforces strict network policies in
the root operating system (for example, blocking RDP access from the virtual machine).
Best practices
Consider the following additional guidelines when you're managing applications and
data in Azure.
Don't: Email credentials for administrator access or other secrets (for example, TLS/SSL or management certificates).
Do: Maintain confidentiality by delivering account names and passwords by voice (but not storing them in voice mail), performing a remote installation of client/server certificates (via an encrypted session), downloading from a protected network share, or distributing by hand via removable media.
Don't: Store account passwords unencrypted or un-hashed in application storage (such as in spreadsheets, SharePoint sites, or file shares).
Do: Establish security management principles and system hardening policies, and apply them to your development environment.
Don't: Share accounts and passwords between administrators, or reuse passwords across multiple user accounts or services, particularly those for social media or other nonadministrative activities.
Do: Create a dedicated Microsoft account to manage your Azure subscription, an account that is not used for personal email.
Don't: Email configuration files.
Do: Install configuration files and profiles from a trusted source (for example, an encrypted USB flash drive), not from a mechanism that can be easily compromised, such as email.
Don't: Use weak or simple logon passwords.
Do: Enforce strong password policies, expiration cycles (change-on-first-use), console timeouts, and automatic account lockouts. Use a client password management system with multi-factor authentication for password vault access.
Don't: Expose management ports to the Internet.
Do: Lock down Azure ports and IP addresses to restrict management access.
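Following the "don't email credentials" guidance, one option is to place administrative secrets in Azure Key Vault and have authorized administrators read them back over an authenticated TLS session. The sketch below assumes the azure-identity and azure-keyvault-secrets packages; the vault and secret names are placeholders.

```python
# Store and retrieve an administrative secret via Key Vault instead of emailing it.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault = SecretClient(
    vault_url="https://contoso-admin-vault.vault.azure.net",  # placeholder
    credential=DefaultAzureCredential(),
)

vault.set_secret("sql-admin-password", "<generated elsewhere, never emailed>")
password = vault.get_secret("sql-admin-password").value  # retrieved by an authorized admin
```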
Azure operations
Within Microsoft's operation of Azure, operations engineers and support personnel who
access Azure's production systems use hardened workstation PCs with VMs provisioned
on them for internal corporate network access and applications (such as e-mail, intranet,
etc.). All management workstation computers have TPMs, the host boot drive is
encrypted with BitLocker, and they're joined to a special organizational unit (OU) in
Microsoft's primary corporate domain.
System hardening is enforced through Group Policy, with centralized software updating.
For auditing and analysis, event logs (such as security and AppLocker) are collected from
management workstations and saved to a central location.
A web browser is a key entry point for harmful code due to its extensive
interactions with external servers. Review your client policies and enforce running
in protected mode, disabling add-ons, and disabling file downloads. Ensure that
security warnings are displayed. Take advantage of Internet zones and create a list
of trusted sites for which you have configured reasonable hardening. Block all
other sites and in-browser code, such as ActiveX and Java.
Standard user. Running as a standard user brings a number of benefits, the biggest
of which is that stealing administrator credentials via malware becomes more
difficult. In addition, a standard user account doesn't have elevated privileges on
the root operating system, and many configuration options and APIs are locked
out by default.
Code signing. Code signing all tools and scripts used by administrators provides a
manageable mechanism for deploying application lockdown policies. Hashes don't
scale with rapid changes to the code, and file paths don't provide a high level of
security. Set the PowerShell execution policies for Windows computers.
Group Policy. Create a global administrative policy that is applied to any domain
workstation that is used for management (and block access from all others), and to
user accounts authenticated on those workstations.
Security-enhanced provisioning. Safeguard your baseline hardened workstation
image to help protect against tampering. Use security measures like encryption
and isolation to store images, virtual machines, and scripts, and restrict access
(perhaps use an auditable check-in/check-out process).
Patching. Maintain a consistent build (or have separate images for development,
operations, and other administrative tasks), scan for changes and malware
routinely, keep the build up to date, and only activate machines when they're
needed.
Governance. Use AD DS GPOs to control all the administrators' Windows
interfaces, such as file sharing. Include management workstations in auditing,
monitoring, and logging processes. Track all administrator and developer access
and usage.
Summary
Using a hardened workstation configuration for administering your Azure cloud services,
Virtual Machines, and applications can help you avoid numerous risks and threats that
can come from remotely managing critical IT infrastructure. Both Azure and Windows
provide mechanisms that you can employ to help protect and control communications,
authentication, and client behavior.
Next steps
The following resources are available to provide more general information about Azure
and related Microsoft services:
Securing Privileged Access - get the technical details for designing and building a
secure administrative workstation for Azure management
Microsoft Trust Center - learn about Azure platform capabilities that protect the
Azure fabric and the workloads that run on Azure
Microsoft Security Response Center - where Microsoft security vulnerabilities,
including issues with Azure, can be reported, including via email to [email protected]
Azure operational security overview
Article • 08/29/2023
Azure operational security refers to the services, controls, and features available to users
for protecting their data, applications, and other assets in Microsoft Azure. It's a
framework that incorporates the knowledge gained through a variety of capabilities that
are unique to Microsoft. These capabilities include the Microsoft Security Development
Lifecycle (SDL), the Microsoft Security Response Center program, and deep awareness of
the cybersecurity threat landscape.
Microsoft Azure Monitor logs is a cloud-based IT management solution that helps you
manage and protect your on-premises and cloud infrastructure. Its core functionality is
provided by services that run in Azure. Azure includes multiple such services, each
providing a specific management function, and you can combine services to achieve
different management scenarios.
Azure Monitor
Azure Monitor collects data from managed sources into central data stores. This data
can include events, performance data, or custom data provided through the API. After
the data is collected, it's available for alerting, analysis, and export.
You can consolidate data from a variety of sources and combine data from your Azure
services with your existing on-premises environment. Azure Monitor logs also clearly
separates the collection of the data from the action taken on that data, so that all
actions are available to all kinds of data.
Automation
Azure Automation provides a way for you to automate the manual, long-running, error-
prone, and frequently repeated tasks that are commonly performed in a cloud and
enterprise environment. It saves time and increases the reliability of administrative tasks.
It even schedules these tasks to be automatically performed at regular intervals. You can
automate processes by using runbooks or automate configuration management by
using Desired State Configuration.
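Azure Automation also supports Python runbooks. As a rough sketch, the runbook body below reports the VMs in a subscription using the Automation account's managed identity; the package names (azure-identity, azure-mgmt-compute) and the subscription ID are assumptions, and the same code can be run outside Automation for testing.

```python
# Example Python runbook body: list VMs in a subscription using a managed identity.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-guid>"  # placeholder

credential = DefaultAzureCredential()  # picks up the managed identity inside Automation
compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location)
```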
Backup
Azure Backup is the Azure-based service that you can use to back up (or protect) and
restore your data in the Microsoft Cloud. Azure Backup replaces your existing on-
premises or off-site backup solution with a cloud-based solution that's reliable, secure,
and cost-competitive.
Azure Backup offers components that you download and deploy on the appropriate
computer or server, or in the cloud. The component, or agent, that you deploy depends
on what you want to protect. All Azure Backup components (whether you're protecting
data on-premises or in the cloud) can be used to back up data to an Azure Recovery
Services vault in Azure.
Site Recovery
Azure Site Recovery provides business continuity by orchestrating the replication of on-
premises virtual and physical machines to Azure, or to a secondary site. If your primary
site is unavailable, you fail over to the secondary location so that users can keep
working. You fail back when systems return to working order. Use Microsoft Defender
for Cloud to perform more intelligent and effective threat detection.
Azure AD also includes a full suite of identity management capabilities, including these:
Multi-factor authentication
Self-service password management
Self-service group management
Privileged account management
Azure role-based access control (Azure RBAC)
Application usage monitoring
Rich auditing
Security monitoring and alerting
With Azure Active Directory, all applications that you publish for your partners and
customers (business or consumer) have the same identity and access management
capabilities. This enables you to significantly reduce your operational costs.
Safeguard virtual machine (VM) data in Azure by providing visibility into your virtual
machine’s security settings and monitoring for threats. Defender for Cloud can monitor
your virtual machines for:
Defender for Cloud uses Azure role-based access control (Azure RBAC). Azure RBAC
provides built-in roles that can be assigned to users, groups, and services in Azure.
Defender for Cloud assesses the configuration of your resources to identify security
issues and vulnerabilities. In Defender for Cloud, you see information related to a
resource only when you're assigned the role of owner, contributor, or reader for the
subscription or resource group that a resource belongs to.
7 Note
To learn more about roles and allowed actions in Defender for Cloud, see
Permissions in Microsoft Defender for Cloud.
Defender for Cloud uses the Microsoft Monitoring Agent. This is the same agent that
the Azure Monitor service uses. Data collected from this agent is stored in either an
existing Log Analytics workspace associated with your Azure subscription or a new
workspace, taking into account the geolocation of the VM.
Azure Monitor
Performance issues in your cloud app can affect your business. With multiple
interconnected components and frequent releases, degradations can happen at any
time. And if you’re developing an app, your users usually discover issues that you didn’t
find in testing. You should know about these issues immediately, and you should have
tools for diagnosing and fixing the problems.
Azure Monitor is the basic tool for monitoring services running on Azure. It gives you
infrastructure-level data about the throughput of a service and the surrounding
environment. If you're managing your apps all in Azure and deciding whether to scale
up or down resources, Azure Monitor is the place to start.
You can also use monitoring data to gain deep insights about your application. That
knowledge can help you to improve application performance or maintainability, or
automate actions that would otherwise require manual intervention.
Windows event system logs are one category of diagnostic logs for VMs. Blob, table,
and queue logs are categories of diagnostic logs for storage accounts.
Diagnostic logs differ from the Activity Log. The Activity log provides insight into the
operations that were performed on resources in your subscription. Diagnostic logs
provide insight into operations that your resource performed itself.
Metrics
Azure Monitor provides telemetry that gives you visibility into the performance and
health of your workloads on Azure. The most important type of Azure telemetry data is
the metrics (also called performance counters) emitted by most Azure resources. Azure
Monitor provides several ways to configure and consume these metrics for monitoring
and troubleshooting.
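For example, platform metrics can be read programmatically with the azure-monitor-query package. In the sketch below, the resource ID is a placeholder and "Percentage CPU" is used as a representative virtual machine metric.

```python
# Query a platform metric for a virtual machine.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())
resource_id = (
    "/subscriptions/<sub>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)
result = client.query_resource(
    resource_id,
    metric_names=["Percentage CPU"],
    timespan=timedelta(hours=1),
)
for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```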
Azure Diagnostics
Azure Diagnostics enables the collection of diagnostic data on a deployed application.
You can use the Diagnostics extension to collect diagnostic data from various sources.
Currently supported sources are Azure cloud service roles, Azure virtual machines
running Microsoft Windows, and Azure Service Fabric.
The end-to-end network can have complex configurations and interactions between
resources. The result is complex scenarios that need scenario-based monitoring through
Azure Network Watcher.
Network Watcher simplifies monitoring and diagnosing of your Azure network. You can
use the diagnostic and visualization tools in Network Watcher to:
Role Assignments
Policy Assignments
Azure Resource Manager templates
Resource Groups
DevOps
Before Developer Operations (DevOps), application development teams were in
charge of gathering business requirements for a software program and writing code.
Then a separate QA team tested the program in an isolated development environment.
If requirements were met, the QA team released the code for operations to deploy. The
deployment teams were further fragmented into groups like networking and database.
Each time a software program was “thrown over the wall” to an independent team, it
added bottlenecks.
DevOps enables teams to deliver more secure, higher-quality solutions faster and more
cheaply. Customers expect a dynamic and reliable experience when consuming software
and services. Teams must rapidly iterate on software updates and measure the impact of
the updates. They must respond quickly with new development iterations to address
issues or provide more value.
Cloud platforms such as Microsoft Azure have removed traditional bottlenecks and
helped commoditize infrastructure. Software reigns in every business as the key
differentiator and factor in business outcomes. No organization, developer, or IT worker
can or should avoid the DevOps movement.
Mature DevOps practitioners adopt several of the following practices. These practices
involve people to form strategies based on the business scenarios. Tooling can help
automate the various practices.
Agile planning and project management techniques are used to plan and isolate
work into sprints, manage team capacity, and help teams quickly adapt to
changing business needs.
Version control, usually with Git, enables teams located anywhere in the world to
share source and integrate with software development tools to automate the
release pipeline.
Continuous integration drives the ongoing merging and testing of code, which
leads to finding defects early. Other benefits include less time wasted on fighting
merge issues and rapid feedback for development teams.
Continuous delivery of software solutions to production and testing environments
helps organizations quickly fix bugs and respond to ever-changing business
requirements.
Monitoring of running applications--including production environments for
application health, as well as customer usage--helps organizations form a
hypothesis and quickly validate or disprove strategies. Rich data is captured and
stored in various logging formats.
Infrastructure as Code (IaC) is a practice that enables the automation and
validation of creation and teardown of networks and virtual machines to help with
delivering secure, stable application hosting platforms.
Microservices architecture is used to isolate business use cases into small reusable
services. This architecture enables scalability and efficiency.
Next steps
To learn about the Security and Audit solution, see the following articles:
This article provides a set of operational best practices for protecting your data,
applications, and other assets in Azure.
The best practices are based on a consensus of opinion, and they work with current
Azure platform capabilities and feature sets. Opinions and technologies change over
time and this article is updated on a regular basis to reflect those changes.
There are multiple options for requiring two-step verification. The best option for you
depends on your goals, the Microsoft Entra edition you're running, and your licensing
program. See How to require two-step verification for a user to determine the best
option for you. See the Microsoft Entra ID and Microsoft Entra multifactor
Authentication pricing pages for more information about licenses and pricing.
Option 1: Enable MFA for all users and login methods with Microsoft Entra Security
Defaults. Benefit: This option enables you to easily and quickly enforce MFA for all users
in your environment with a stringent policy.
This method is available to all licensing tiers but can't be mixed with existing
Conditional Access policies. You can find more information in Microsoft Entra Security
Defaults.
Option 3: Enable multifactor authentication with Conditional Access policy. Benefit: This
option allows you to prompt for two-step verification under specific conditions by using
Conditional Access. Specific conditions can be user sign-in from different locations,
untrusted devices, or applications that you consider risky. Defining specific conditions
where you require two-step verification enables you to avoid constant prompting for
your users, which can be an unpleasant user experience.
This is the most flexible way to enable two-step verification for your users. Enabling a
Conditional Access policy works only for Microsoft Entra multifactor authentication in
the cloud and is a premium feature of Microsoft Entra ID. You can find more information
on this method in Deploy cloud-based Microsoft Entra multifactor authentication.
Option 4: Enable multifactor authentication with risk policies in Microsoft Entra ID
Protection. This method uses the Microsoft Entra ID Protection risk evaluation to
determine if two-step verification is required based on user and sign-in risk for all cloud
applications. This method requires Microsoft Entra ID P2 licensing. You can find more
information on this method in Microsoft Entra ID Protection.
7 Note
Option 2, enabling multifactor authentication by changing the user state, overrides
Conditional Access policies. Because options 3 and 4 use Conditional Access
policies, you cannot use option 2 with them.
Organizations that don't add extra layers of identity protection, such as two-step
verification, are more susceptible to credential theft attacks. A credential theft attack can
lead to data compromise.
Best practice: Ensure you have the proper level of password protection in the cloud.
Detail: Follow the guidance in Microsoft Password Guidance, which is scoped to users
of the Microsoft identity platforms (Microsoft Entra ID, Active Directory, and Microsoft
account).
Best practice: Monitor for suspicious actions related to your user accounts.
Detail: Monitor for users at risk and risky sign-ins by using Microsoft Entra security
reports.
In the Azure enrollment portal, you can ensure admin contact information includes
details that notify security operations. Contact information is an email address and
phone number.
Organize Azure subscriptions into management
groups
If your organization has many subscriptions, you might need a way to efficiently manage
access, policies, and compliance for those subscriptions. Azure management groups
provide a level of scope that’s above subscriptions. You organize subscriptions into
containers called management groups and apply your governance conditions to the
management groups. All subscriptions within a management group automatically inherit
the conditions applied to the management group.
You can build a flexible structure of management groups and subscriptions into a
directory. Each directory is given a single top-level management group called the root
management group. This root management group is built into the hierarchy to have all
management groups and subscriptions fold up to it. The root management group
allows global policies and Azure role assignments to be applied at the directory level.
Best practice: Ensure that new subscriptions apply governance elements like policies
and permissions as they are added.
Detail: Use the root management group to assign enterprise-wide security elements
that apply to all Azure assets. Policies and permissions are examples of elements.
Best practice: Align the top levels of management groups with segmentation strategy to
provide a point for control and policy consistency within each segment.
Detail: Create a single management group for each segment under the root
management group. Don’t create any other management groups under the root.
Best practice: Limit management group depth to avoid confusion that hampers both
operations and security.
Detail: Limit your hierarchy to three levels, including the root.
Best practice: Carefully select which items to apply to the entire enterprise with the root
management group.
Detail: Ensure root management group elements have a clear need to be applied across
every resource and that they’re low impact.
Best practice: Carefully plan and test all enterprise-wide changes on the root
management group before applying them (policy, Azure RBAC model, and so on).
Detail: Changes in the root management group can affect every resource on Azure.
While they provide a powerful way to ensure consistency across the enterprise, errors or
incorrect usage can negatively affect production operations. Test all changes to the root
management group in a test lab or production pilot.
You should continuously monitor the storage services that your application uses for any
unexpected changes in behavior (such as slower response times). Use logging to collect
more detailed data and to analyze a problem in depth. The diagnostics information that
you obtain from both monitoring and logging helps you to determine the root cause of
the issue that your application encountered. Then you can troubleshoot the issue and
determine the appropriate steps to remediate it.
Azure Storage Analytics performs logging and provides metrics data for an Azure
storage account. We recommend that you use this data to trace requests, analyze usage
trends, and diagnose issues with your storage account.
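As a sketch of turning this on programmatically, the example below enables Storage Analytics logging and hourly metrics for the Blob service with the azure-storage-blob package. The connection string is a placeholder and the 90-day retention is an example value.

```python
# Enable Storage Analytics logging and hourly metrics for the Blob service.
from azure.storage.blob import (
    BlobServiceClient, BlobAnalyticsLogging, Metrics, RetentionPolicy,
)

service = BlobServiceClient.from_connection_string("<storage-connection-string>")  # placeholder

retention = RetentionPolicy(enabled=True, days=90)  # example retention period
service.set_service_properties(
    analytics_logging=BlobAnalyticsLogging(
        read=True, write=True, delete=True, retention_policy=retention,
    ),
    hour_metrics=Metrics(enabled=True, include_apis=True, retention_policy=retention),
)
```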
Prevent, detect, and respond to threats
Microsoft Defender for Cloud helps you prevent, detect, and respond to threats by
providing increased visibility into (and control over) the security of your Azure resources.
It provides integrated security monitoring and policy management across your Azure
subscriptions, helps detect threats that might otherwise go unnoticed, and works with
various security solutions.
The Free tier of Defender for Cloud offers limited security for your resources in Azure as
well as Arc-enabled resources outside of Azure. The enhanced security features extend
these capabilities to include threat and vulnerability management, as well as regulatory
compliance reporting. Defender for Cloud plans help you find and fix security
vulnerabilities, apply access and application controls to block malicious activity, detect
threats by using analytics and intelligence, and respond quickly when under attack. You
can try Defender for Cloud Standard at no cost for the first 30 days. We recommend that
you enable enhanced security features on your Azure subscriptions in Defender for
Cloud.
Use Defender for Cloud to get a central view of the security state of all your resources in
your own data centers, Azure and other clouds. At a glance, verify that the appropriate
security controls are in place and configured correctly, and quickly identify any resources
that need attention.
Defender for Cloud also integrates with Microsoft Defender for Endpoint, which
provides comprehensive Endpoint Detection and Response (EDR) capabilities. With
Microsoft Defender for Endpoint integration, you can spot abnormalities and detect
vulnerabilities. You can also detect and respond to advanced attacks on server
endpoints monitored by Defender for Cloud.
Almost all enterprise organizations have a security information and event management
(SIEM) system to help identify emerging threats by consolidating log information from
diverse signal gathering devices. The logs are then analyzed by a data analytics system
to help identify what’s “interesting” from the noise that is inevitable in all log gathering
and analytics solutions.
Here are some best practices for preventing, detecting, and responding to threats:
Best practice: Increase the speed and scalability of your SIEM solution by using a cloud-
based SIEM.
Detail: Investigate the features and capabilities of Microsoft Sentinel and compare them
with the capabilities of what you’re currently using on-premises. Consider adopting
Microsoft Sentinel if it meets your organization’s SIEM requirements.
Best practice: Find the most serious security vulnerabilities so you can prioritize
investigation.
Detail: Review your Azure secure score to see the recommendations resulting from the
Azure policies and initiatives built into Microsoft Defender for Cloud. These
recommendations help address top risks like security updates, endpoint protection,
encryption, security configurations, missing WAF, internet-connected VMs, and many
more.
The secure score, which is based on Center for Internet Security (CIS) controls, lets you
benchmark your organization’s Azure security against external sources. External
validation helps validate and enrich your team’s security strategy.
Best practice: Monitor the security posture of machines, networks, storage and data
services, and applications to discover and prioritize potential security issues.
Detail: Follow the security recommendations in Defender for Cloud, starting with the
highest priority items.
Best practice: Integrate Defender for Cloud alerts into your security information and
event management (SIEM) solution.
Detail: Most organizations with a SIEM use it as a central clearinghouse for security
alerts that require an analyst response. Processed events produced by Defender for
Cloud are published to the Azure Activity Log, one of the logs available through Azure
Monitor. Azure Monitor offers a consolidated pipeline for routing any of your
monitoring data into a SIEM tool. See Stream alerts to a SIEM, SOAR, or IT Service
Management solution for instructions. If you’re using Microsoft Sentinel, see Connect
Microsoft Defender for Cloud.
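Once alerts are flowing into a Log Analytics workspace (for example, one connected to Microsoft Sentinel), they can be summarized programmatically. The sketch below uses the azure-monitor-query package; the workspace ID is a placeholder, and it assumes the SecurityAlert table is populated by the Defender for Cloud connector.

```python
# Summarize recent Defender for Cloud alerts from a Log Analytics workspace.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<workspace-guid>",  # placeholder
    query="""
        SecurityAlert
        | where TimeGenerated > ago(7d)
        | summarize count() by AlertSeverity, AlertName
        | order by count_ desc
    """,
    timespan=timedelta(days=7),
)
for table in response.tables:
    for row in table.rows:
        print(list(row))
```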
Best practice: Speed up your investigation and hunting processes and reduce false
positives by integrating Endpoint Detection and Response (EDR) capabilities into your
attack investigation.
Detail: Enable the Microsoft Defender for Endpoint integration via your Defender for
Cloud security policy. Consider using Microsoft Sentinel for threat hunting and incident
response.
Azure Network Watcher is a regional service. Use its diagnostic and visualization tools to
monitor and diagnose conditions at a network scenario level in, to, and from Azure.
The following are best practices for network monitoring and available tools.
Best practice: Gain insight into your network traffic by using flow logs.
Detail: Build a deeper understanding of your network traffic patterns by using network
security group flow logs. Information in flow logs helps you gather data for compliance,
auditing, and monitoring your network security profile.
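NSG flow logs are written as JSON blobs to the storage account you configure. The sketch below reads them with the azure-storage-blob package; the account URL is a placeholder, the container name shown is the one NSG flow logs use by default, and the parsing follows the published flow log schema (records, flows, flowTuples).

```python
# Read NSG flow log blobs and print the raw flow tuples.
import json
from azure.identity import DefaultAzureCredential
from azure.storage.blob import ContainerClient

container = ContainerClient(
    account_url="https://<flowlog-storage>.blob.core.windows.net",  # placeholder
    container_name="insights-logs-networksecuritygroupflowevent",
    credential=DefaultAzureCredential(),
)

for blob in container.list_blobs():
    data = json.loads(container.download_blob(blob.name).readall())
    for record in data.get("records", []):
        for flow_group in record["properties"]["flows"]:
            for flow in flow_group["flows"]:
                for tuple_str in flow["flowTuples"]:
                    # timestamp,srcIP,dstIP,srcPort,dstPort,protocol,direction,decision,...
                    print(tuple_str)
    break  # stop after the first blob in this sketch
```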
You can use Azure Resource Manager to provision your applications by using a
declarative template. In a single template, you can deploy multiple services along with
their dependencies. You use the same template to repeatedly deploy your application in
every stage of the application lifecycle.
Best practice: Automatically build and deploy to Azure web apps or cloud services.
Detail: You can configure your Azure DevOps Projects to automatically build and deploy
to Azure web apps or cloud services. Azure DevOps builds and deploys the binaries to
Azure after every code check-in. The package build process is equivalent to the Package
command in Visual Studio, and the publishing steps are equivalent to the Publish
command in Visual Studio.
Best practice: Check your app's performance before you launch it or deploy updates to
production.
Detail: Run cloud-based load tests to:
Apache JMeter is a free, popular open-source tool with strong community backing.
Designing and building for DDoS resiliency requires planning and designing for a variety
of failure modes. Following are best practices for building DDoS-resilient services on
Azure.
Best practice: Ensure that security is a priority throughout the entire lifecycle of an
application, from design and implementation to deployment and operations.
Applications can have bugs that allow a relatively low volume of requests to use a lot of
resources, resulting in a service outage.
Detail: To help protect a service running on Microsoft Azure, you should have a good
understanding of your application architecture and focus on the five pillars of software
quality. You should know typical traffic volumes, the connectivity model between the
application and other applications, and the service endpoints that are exposed to the
public internet.
Best practice: Design your applications to scale horizontally to meet the demand of an
amplified load, specifically in the event of a DDoS attack. If your application depends on
a single instance of a service, it creates a single point of failure. Provisioning multiple
instances makes your system more resilient and more scalable.
Detail: For Azure App Service, select an App Service plan that offers multiple instances.
For Azure Cloud Services, configure each of your roles to use multiple instances.
For Azure Virtual Machines, ensure that your VM architecture includes more than one
VM and that each VM is included in an availability set. We recommend using Virtual
Machine Scale Sets for autoscaling capabilities.
Network security groups are another way to reduce the attack surface. You can use
service tags and application security groups to minimize complexity for creating security
rules and configuring network security, as a natural extension of an application’s
structure.
You should deploy Azure services in a virtual network whenever possible. This practice
allows service resources to communicate through private IP addresses. Azure service
traffic from a virtual network uses public IP addresses as source IP addresses by default.
Using service endpoints switches service traffic to use virtual network private addresses
as the source IP addresses when they're accessing the Azure service from a virtual
network.
We often see customers' on-premises resources getting attacked along with their
resources in Azure. If you're connecting an on-premises environment to Azure, minimize
exposure of on-premises resources to the public internet.
Azure has two DDoS service offerings that provide protection from network attacks:
Basic protection is integrated into Azure by default at no additional cost. The scale
and capacity of the globally deployed Azure network provides defense against
common network-layer attacks through always-on traffic monitoring and real-time
mitigation. Basic requires no user configuration or application changes and helps
protect all Azure services, including PaaS services like Azure DNS.
Standard protection provides advanced DDoS mitigation capabilities against
network attacks. It's automatically tuned to protect your specific Azure resources.
Protection is simple to enable during the creation of virtual networks. It can also be
done after creation and requires no application or resource changes.
Enable Azure Policy to monitor and enforce your organization's written policy. This helps
ensure compliance with your company or regulatory security requirements by centrally
managing security policies across your hybrid cloud workloads. Learn how to create and
manage policies to enforce compliance. See Azure Policy definition structure for an
overview of the elements of a policy.
Here are some security best practices to follow after you adopt Azure Policy:
Best practice: Policy supports several types of effects. You can read about them in Azure
Policy definition structure. Business operations can be negatively affected by the deny
effect and the remediate effect, so start with the audit effect to limit the risk of negative
impact from policy.
Detail: Start policy deployments in audit mode and then later progress to deny or
remediate. Test and review the results of the audit effect before you move to deny or
remediate.
For more information, see Create and manage policies to enforce compliance.
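As an illustration of starting in audit mode, the following sketch expresses a minimal policy rule as a Python dict mirroring the JSON policy definition structure: it flags storage accounts that allow non-HTTPS traffic instead of denying them. The field alias shown follows the Microsoft.Storage policy aliases; deployment of the definition with your preferred tool is left out.

```python
# Minimal policy rule that audits (rather than denies) insecure storage transfer.
audit_https_only = {
    "mode": "All",
    "policyRule": {
        "if": {
            "allOf": [
                {"field": "type", "equals": "Microsoft.Storage/storageAccounts"},
                {
                    "field": "Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly",
                    "equals": "false",
                },
            ]
        },
        "then": {"effect": "audit"},  # switch to "deny" after reviewing audit results
    },
    "parameters": {},
}
```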
Best practice: Identify the roles responsible for monitoring for policy violations and
ensuring the right remediation action is taken quickly.
Detail: Have the assigned role monitor compliance through the Azure portal or via the
command line.
Next steps
See Azure security best practices and patterns for more security best practices to use
when you’re designing, deploying, and managing your cloud solutions by using Azure.
The following resources are available to provide more general information about Azure
security and related Microsoft services:
Azure Security Team Blog - for up-to-date information on the latest in Azure security
Microsoft Security Response Center - where you can report Microsoft security vulnerabilities, including issues with Azure, or send a report via email to [email protected]
Introduction
Azure provides a suite of infrastructure services that you can use to deploy your
applications. Azure Operational Security refers to the services, controls, and features
available to users for protecting their data, applications, and other assets in Microsoft
Azure.
To get the maximum benefit out of the cloud platform, we recommend that you use
Azure services and follow the checklist. Organizations that invest time and resources
assessing the operational readiness of their applications before launch have a higher
rate of satisfaction than those that don't. When performing this work, checklists can be
an invaluable mechanism to ensure that applications are evaluated consistently and
holistically.
Checklist
This checklist is intended to help enterprises think through various operational security
considerations as they deploy sophisticated enterprise applications on Azure. It can also
be used to help you build a secure cloud migration and operation strategy for your
organization.
Conclusion
Many organizations have successfully deployed and operated their cloud applications on Azure. The checklist highlights considerations that are essential and helps you increase the likelihood of successful deployments and frustration-free operations.
We highly recommend these operational and strategic considerations for your existing
and new application deployments on Azure.
Next steps
To learn more about security in Azure, see the following articles:
In our discussions with current and future Azure customers, we're often asked "do you
have a list of all the security-related services and technologies that Azure has to offer?"
When you evaluate cloud service provider options, it's helpful to have this information.
So we have provided this list to get you started.
Over time, this list will change and grow, just as Azure does. Make sure to check this
page on a regular basis to stay up-to-date on our security-related services and
technologies.
Microsoft Defender for Cloud - A cloud workload protection solution that provides security management and advanced threat protection across hybrid cloud workloads.
Microsoft Sentinel - A scalable, cloud-native solution that delivers intelligent security analytics and threat intelligence across the enterprise.
Azure Key Vault - A secure secrets store for the passwords, connection strings, and other information you need to keep your apps working.
Azure Monitor logs - A monitoring service that collects telemetry and other data, and provides a query language and analytics engine to deliver operational insights for your apps and resources. Can be used alone or with other services such as Defender for Cloud.
Azure Dev/Test Labs - A service that helps developers and testers quickly create environments in Azure while minimizing waste and controlling cost.
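As a small illustration of the Key Vault entry above, the following sketch retrieves a secret at runtime instead of storing it in application configuration. It assumes the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders.

```python
# Sketch: read a connection string from Key Vault instead of embedding it in app config.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",  # placeholder vault name
    credential=DefaultAzureCredential(),
)
secret = client.get_secret("sql-connection-string")  # placeholder secret name
print(secret.name, "retrieved; value length:", len(secret.value))
```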
Storage security
Azure Storage Service Encryption - A security feature that automatically encrypts your data in Azure Storage.
Azure StorSimple Virtual Array - An integrated storage solution that manages storage tasks between an on-premises virtual array running in a hypervisor and Microsoft Azure cloud storage.
Client-side encryption for blobs - A client-side encryption solution that supports encrypting data within client applications before uploading to Azure Storage, and decrypting data while downloading to the client.
Azure Storage shared access signatures - A shared access signature (SAS) provides delegated access to resources in your storage account.
Azure Storage account keys - An access control method for Azure Storage that is used to authorize requests to the storage account using either the account access keys or a Microsoft Entra account (default).
Azure File shares - A storage security technology that offers fully managed file shares in the cloud that are accessible via the industry-standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and the Azure Files REST API.
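To illustrate the shared access signature entry above, the following sketch issues a short-lived, read-only SAS for a single blob using the azure-storage-blob package; the account, container, and blob names are placeholders.

```python
# Sketch: grant time-limited, read-only access to one blob instead of sharing account keys.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas_token = generate_blob_sas(
    account_name="mystorageaccount",        # placeholder account
    container_name="reports",
    blob_name="q1-summary.pdf",
    account_key="<storage-account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
url = "https://mystorageaccount.blob.core.windows.net/reports/q1-summary.pdf?" + sas_token
print(url)
```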
Database security
Azure SQL Firewall - A network access control feature that protects against network-based attacks to databases.
Azure SQL Connection Encryption - To provide security, SQL Database controls access with firewall rules limiting connectivity by IP address, authentication mechanisms requiring users to prove their identity, and authorization mechanisms limiting users to specific actions and data.
Azure SQL Always Encrypted - Protects sensitive data, such as credit card numbers or national/regional identification numbers (for example, U.S. Social Security numbers), stored in Azure SQL Database, Azure SQL Managed Instance, and SQL Server databases.
Azure SQL transparent data encryption - A database security feature that helps protect Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics against the threat of malicious offline activity by encrypting data at rest.
Azure SQL Database Auditing - An auditing feature for Azure SQL Database and Azure Synapse Analytics that tracks database events and writes them to an audit log in your Azure storage account, Log Analytics workspace, or Event Hubs.
Virtual network rules - A firewall security feature that controls whether the server for your databases and elastic pools in Azure SQL Database, or for your dedicated SQL pool (formerly SQL DW) databases in Azure Synapse Analytics, accepts communications sent from particular subnets in virtual networks.
Identity and access management
Azure role-based access control - An access control feature designed to allow users to access only the resources they are required to access based on their roles within the organization.
Microsoft Entra ID - A cloud-based identity and access management service that supports a multi-tenant, cloud-based directory and multiple identity management services within Azure.
Azure Active Directory B2C - A customer identity and access management (CIAM) solution that enables control over how customers sign up, sign in, and manage their profiles when using Azure-based applications.
Microsoft Entra Domain Services - A cloud-based and managed version of Active Directory Domain Services that provides managed domain services such as domain join, group policy, Lightweight Directory Access Protocol (LDAP), and Kerberos/NTLM authentication.
Backup and disaster recovery
Azure Backup - An Azure-based service used to back up and restore data in the Azure cloud.
Azure Site Recovery - An online service that replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location to enable recovery of services after a failure.
Networking
Network Security Groups - A network-based access control feature that filters network traffic between Azure resources in an Azure virtual network.
Azure VPN Gateway - A network device used as a VPN endpoint to allow cross-premises access to Azure virtual networks.
Azure Application Gateway - An advanced web traffic load balancer that enables you to manage traffic to your web applications.
Web application firewall (WAF) - A feature that provides centralized protection of your web applications from common exploits and vulnerabilities.
Azure ExpressRoute - A feature that lets you extend your on-premises networks into the Microsoft cloud over a private connection with the help of a connectivity provider.
Microsoft Entra application proxy - An authenticating front end used to secure remote access to on-premises web applications.
Azure Firewall - A cloud-native and intelligent network firewall security service that provides threat protection for your cloud workloads running in Azure.
Azure DDoS Protection - Combined with application design best practices, provides defense against DDoS attacks.
Virtual Network service endpoints - Provide secure and direct connectivity to Azure services over an optimized route on the Azure backbone network.
Azure Private Link - Enables you to access Azure PaaS services (for example, Azure Storage and SQL Database) and Azure-hosted customer-owned/partner services over a private endpoint in your virtual network.
Azure Bastion - A service you deploy that lets you connect to a virtual machine using your browser and the Azure portal, or via the native SSH or RDP client already installed on your local computer.
Azure Front Door - Provides web application protection capability to safeguard your web applications from network attacks and common web vulnerability exploits like SQL injection or cross-site scripting (XSS).
Next steps
Learn more about Azure's end-to-end security and how Azure services can help you
meet the security needs of your business and protect your users, devices, resources,
data, and applications in the cloud.
This article describes feature availability in the Microsoft Azure and Azure Government clouds. Features are listed as GA (Generally Available), Public Preview, or Not Available for the security services covered in the sections that follow.
Azure Government
Azure Government uses the same underlying technologies as Azure (sometimes referred
to as Azure Commercial or Azure Public), which includes the core components of
Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-
Service (SaaS). Both Azure and Azure Government have comprehensive security controls in place and share Microsoft's commitment to safeguarding customer data.
For more information about Azure Government, see What is Azure Government?
7 Note
These lists and tables do not include feature or bundle availability in the Azure
Government Secret or Azure Government Top Secret clouds. For more information
about specific availability for air-gapped clouds, please contact your account team.
The following diagram displays the hierarchy of Microsoft clouds and how they relate to
each other.
The Office 365 GCC environment helps customers comply with US government
requirements, including FedRAMP High, CJIS, and IRS 1075. The Office 365 GCC High
and DoD environments support customers who need compliance with DoD IL4/5, DFARS
7012, NIST 800-171, and ITAR.
The following sections identify when a service has an integration with Microsoft 365 and the feature availability for Office 365 GCC, Office 365 GCC High, and Office 365 DoD.
Azure Information Protection
Azure Information Protection (AIP) is a cloud-based solution that enables organizations
to discover, classify, and protect documents and emails by applying labels to content.
AIP is part of the Microsoft Purview Information Protection (MIP) solution, and extends
the labeling and classification functionality provided by Microsoft 365.
For more information, see the Azure Information Protection product documentation.
Office 365 GCC is paired with Microsoft Entra ID in Azure. Office 365 GCC High and
Office 365 DoD are paired with Microsoft Entra ID in Azure Government. Make sure
to pay attention to the Azure environment to understand where interoperability is
possible. In the following table, interoperability that is not possible is marked with
a dash (-) to indicate that support is not relevant.
7 Note
More details about support for government customers are listed in footnotes
below the table.
Extra steps are required for configuring Azure Information Protection for GCC High
and DoD customers. For more information, see the Azure Information Protection
Premium Government Service Description.
The Azure Information Protection availability tables cover the Administration, SDK, Customizations, Key management, and Office files categories. The footnotes referenced in those tables are:
1 The scanner can function without Office 365 to scan files only. The scanner cannot apply labels to files without Office 365.
2 The classification and labeling add-in is only supported for government customers with Microsoft 365 Apps (version 9126.1001 or higher), including Professional Plus (ProPlus) and Click-to-Run (C2R) versions. Office 2010, Office 2013, and other Office 2016 versions are not supported.
4 Information Rights Management with SharePoint Online (IRM-protected sites and libraries) is currently not available.
5 Information Rights Management (IRM) is supported only for Microsoft 365 Apps (version 9126.1001 or higher), including Professional Plus (ProPlus) and Click-to-Run (C2R) versions. Office 2010, Office 2013, and other Office 2016 versions are not supported.
6 Only on-premises Exchange is supported. Outlook Protection Rules are not supported. File Classification Infrastructure is not supported. On-premises SharePoint is not supported.
7 Sharing of protected documents and emails from government clouds to users in the commercial cloud is not currently available. This includes Microsoft 365 Apps users in the commercial cloud, non-Microsoft 365 Apps users in the commercial cloud, and users with an RMS for Individuals license.
8 The number of Sensitive Information Types in your Microsoft Purview compliance portal may vary based on region.
Microsoft Defender for Cloud
For more information, see the Microsoft Defender for Cloud product documentation.
The following table displays the current Defender for Cloud feature availability in Azure
and Azure Government.
Feature/Service | Azure | Azure Government
Continuous export | GA | GA
Workflow automation | GA | GA
Asset inventory | GA | GA
Microsoft Defender for DNS | Not available for new subscriptions | Not available for new subscriptions
Just-in-time VM access | GA | GA
1 Partially GA: The ability to disable specific findings from vulnerability scans is in public preview.
2 Vulnerability scans of container registries on Azure Gov can only be performed with the scan on push feature.
4 Partially GA: Support for Azure Arc-enabled clusters is in public preview and not available on Azure Government.
5 Requires Microsoft Defender for Kubernetes.
6 Partially GA: Some of the threat protection alerts from Microsoft Defender for Storage are in public preview.
8 There may be differences in the standards offered per cloud type.
9 Partially GA: Support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is in public preview and not available on Azure Government. Run-time visibility of vulnerabilities in container images is also a preview feature.
Microsoft Sentinel
Microsoft Sentinel is a scalable, cloud-native security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response.
For Microsoft Sentinel feature availability in Azure, Azure Government, and Azure China 21Vianet, see Microsoft Sentinel feature support for Azure clouds.
Availability is also listed for Office IRM, Dynamics 365, Microsoft Power BI, Microsoft Project, Office 365, and Teams.
Microsoft Defender for IoT
The following table displays the current Microsoft Defender for IoT feature availability in Azure and Azure Government.
For organizations
Feature | Azure | Azure Government
Vulnerability management | GA | GA
Active Directory | GA | GA
ArcSight | GA | GA
CyberArk PSM | GA | GA
Email | GA | GA
FortiGate | GA | GA
FortiSIEM | GA | GA
Microsoft Sentinel | GA | GA
NetWitness | GA | GA
Splunk | GA | GA
Azure Attestation
Microsoft Azure Attestation is a unified solution for remotely verifying the
trustworthiness of a platform and integrity of the binaries running inside it. The service
receives evidence from the platform, validates it with security standards, evaluates it
against configurable policies, and produces an attestation token for claims-based
applications (e.g., relying parties, auditing authorities).
Azure Attestation is currently available in multiple regions across Azure public and
Government clouds. In Azure Government, the service is available in preview status
across US Gov Virginia and US Gov Arizona.
Feature | Azure | Azure Government
BCDR support | GA | -
Customer lockbox | GA | -
Next steps
Understand the shared responsibility model and which security tasks are handled
by the cloud provider and which tasks are handled by you.
Understand the Azure Government Cloud capabilities and the trustworthy design
and security used to support compliance applicable to federal, state, and local
government organizations and their partners.
Understand the Office 365 Government plan.
Understand compliance in Azure for legal and regulatory standards.
This article contains security best practices to use when you're designing, deploying, and
managing your cloud solutions by using Azure. These best practices come from our
experience with Azure security and the experiences of customers like you.
Best practices
These best practices are intended to be a resource for IT pros. IT pros include designers,
architects, developers, and testers who build and deploy secure Azure solutions.
Next steps
Microsoft finds that using security benchmarks can help you quickly secure cloud
deployments. Benchmark recommendations from your cloud service provider give you a
starting point for selecting specific security configuration settings in your environment
and allow you to quickly reduce risk to your organization. See the Microsoft cloud
security benchmark for a collection of high-impact security recommendations to help
secure the services you use in Azure.
Microsoft Services in Cybersecurity
Microsoft services can create solutions that integrate and enhance the latest security and identity capabilities of our products to help protect your business and drive innovation.
Our team of technical professionals consists of highly trained experts who offer a wealth
of security and identity experience.
Visit the Microsoft Security Response Center (MSRC) to report a security-specific issue. You can also create a tailored Azure support request in the Azure portal and follow the prompts to receive recommended solutions or to log a support request.
Next steps
MSRC is part of the security community. Learn how MSRC helps to protect customers
and the broader ecosystem.
Penetration testing
One of the benefits of using Azure for application testing and deployment is that you
can quickly get environments created. You don't have to worry about requisitioning,
acquiring, and "racking and stacking" your own on-premises hardware.
Quickly creating environments is great but you still need to make sure you perform your
normal security due diligence. One of the things you likely want to do is penetration test
the applications you deploy in Azure. We don't perform penetration testing of your
application for you, but we do understand that you want and need to perform testing
on your own applications. That's a good thing, because when you enhance the security
of your applications you help make the entire Azure ecosystem more secure.
Standard tests you can perform on your own applications and endpoints include:
Tests on your endpoints to uncover the Open Web Application Security Project (OWASP) top 10 vulnerabilities
Fuzz testing of your endpoints
Port scanning of your endpoints (a basic reachability sketch follows this list)
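As a very small illustration of the last item (not a substitute for dedicated scanning tools), the following standard-library sketch checks TCP reachability of a handful of ports on an endpoint you own; the host name is a placeholder.

```python
# Sketch: a basic TCP reachability check against endpoints you own, using only the standard library.
# Run such checks only against your own applications and within the Penetration Testing
# Rules of Engagement.
import socket

HOST = "myapp.example.com"   # placeholder endpoint you own
PORTS = [22, 80, 443, 3389]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(2)
        result = sock.connect_ex((HOST, port))
        state = "open" if result == 0 else "closed/filtered"
        print(f"{HOST}:{port} -> {state}")
```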
One type of pen test that you can't perform is any kind of denial of service (DoS) attack. This restriction includes initiating a DoS attack itself and performing related tests that might determine, demonstrate, or simulate any type of DoS attack.
7 Note
You may only simulate attacks by using Microsoft-approved testing partners. To learn more about these simulation partners, see testing with simulation partners.
Next steps
Learn more about the Penetration Testing Rules of Engagement .
This page is a partial list of the Azure domains in use. Some of them are REST API
endpoints.
Service Subdomain