
Microsoft Fabric security


Microsoft Fabric is a software as a service (SaaS) platform that offers a complete security
package. Fabric removes the cost and responsibility of maintaining your security
solution and transfers it to the cloud. With Fabric, you can use the expertise and
resources of Microsoft to keep your data secure, patch vulnerabilities, monitor threats,
and comply with regulations.

Fabric security fundamentals

OVERVIEW

Security in Microsoft Fabric

Microsoft Fabric security fundamentals

CONCEPT

Fabric and OneLake security

HOW-TO GUIDE

Configure Multi-Geo support for Fabric

Inbound network security

OVERVIEW

Microsoft Entra ID

Zero Trust

Conditional Access

HOW-TO GUIDE

Conditional access in Fabric

Outbound network security


CONCEPT

On-premises data gateway with Dataflow Gen 2

Integration runtime in Azure Data Factory

Azure Data Factory managed virtual network

Lakehouse SQL endpoints

Direct Lake

REFERENCE

Azure service tags

Service tags on-premises

Governance and compliance

GET STARTED

Governance and compliance documentation

OVERVIEW

Governance and compliance in Microsoft Fabric

CONCEPT

Microsoft Purview

Microsoft Purview hub


Security in Microsoft Fabric
Article • 06/19/2024

Microsoft Fabric is a software as a service (SaaS) platform that lets users get, create,
share, and visualize data.

As a SaaS service, Fabric offers a complete security package for the entire platform.
Fabric removes the cost and responsibility of maintaining your security solution, and
transfers it to the cloud. With Fabric, you can use the expertise and resources of
Microsoft to keep your data secure, patch vulnerabilities, monitor threats, and comply
with regulations. Fabric also allows you to manage, control and audit your security
settings, in line with your changing needs and demands.

As you bring your data to the cloud and use it with various analytic experiences such as
Power BI, Data Factory, and the next generation of Synapse, Microsoft ensures that built-
in security and reliability features secure your data at rest and in transit. Microsoft also
makes sure that your data is recoverable in cases of infrastructure failures or disasters.

Fabric security is:

Always on - Every interaction with Fabric is encrypted by default and authenticated using Microsoft Entra ID. All communication between Fabric experiences travels through the Microsoft backbone network. Data at rest is automatically stored encrypted. To regulate access to Fabric, you can add extra security features such as Private Links or Microsoft Entra Conditional Access. Fabric can also connect to data protected by a firewall or a private network using trusted access.

Compliant - Fabric has data sovereignty out of the box with multi-geo capacities. Fabric also supports a wide range of compliance standards.

Governable - Fabric comes with a set of governance tools such as data lineage, information protection labels, data loss prevention, and Microsoft Purview integration.

Configurable - You can configure Fabric security in accordance with your organizational policies.

Evolving - Microsoft is constantly improving Fabric security by adding new features and controls.

Authenticate
Microsoft Fabric is a SaaS platform, like many other Microsoft services such as Azure,
Microsoft Office, OneDrive, and Dynamics. All these Microsoft SaaS services, including
Fabric, use Microsoft Entra ID as their cloud-based identity provider. Microsoft Entra ID
helps users connect to these services quickly and easily from any device and any
network. Every request to connect to Fabric is authenticated with Microsoft Entra ID,
allowing users to safely connect to Fabric from their corporate office, when working at
home, or from a remote location.

Understand network security


Fabric is a SaaS service that runs in the Microsoft cloud. Some scenarios involve connecting to data that's outside of the Fabric platform, for example, viewing a report from your own network or connecting to data that's in another service. Interactions within Fabric use the internal Microsoft network, and traffic outside of the service is protected by default. For more information and a detailed description, see Data in transit.

Inbound network security


Your organization might want to restrict and secure the network traffic coming into
Fabric based on your company's requirements. With Microsoft Entra ID Conditional
Access and Private Links, you can select the right inbound solution for your
organization.

Microsoft Entra ID Conditional Access


Microsoft Entra ID provides Fabric with Conditional Access, which allows you to secure
access to Fabric on every connection. Here are a few examples of access restrictions you
can enforce using Conditional Access.

Define a list of IPs for inbound connectivity to Fabric.

Use Multifactor Authentication (MFA).

Restrict traffic based on parameters such as country of origin or device type.

To configure conditional access, see Conditional access in Fabric.
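Conditional Access policies are usually created in the Microsoft Entra admin center, but they can also be managed programmatically through Microsoft Graph. The sketch below is a hedged illustration rather than the documented Fabric procedure: it creates a report-only MFA policy scoped to the Power BI service application, and the app ID, payload shape, and permission scope are assumptions you should verify against the Graph reference.

```python
# Hypothetical sketch: create a Conditional Access policy scoped to the
# Power BI service (which covers Fabric) via Microsoft Graph.
# Assumes an access token with Policy.ReadWrite.ConditionalAccess consent.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"  # obtain via MSAL; placeholder here

policy = {
    "displayName": "Require MFA for Fabric",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "applications": {
            # 00000009-0000-0000-c000-000000000000 is the Power BI Service
            # app ID; verify it covers Fabric in your tenant before relying on it.
            "includeApplications": ["00000009-0000-0000-c000-000000000000"]
        },
        "users": {"includeUsers": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token}"},
    json=policy,
)
resp.raise_for_status()
print(resp.json()["id"])
```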

To understand more about authentication in Fabric, see Microsoft Fabric security fundamentals.
Private Links
Private links enable secure connectivity to Fabric by restricting access to your Fabric tenant from an Azure virtual network (VNet), and blocking all public access. This ensures that only network traffic from that VNet is allowed to access Fabric features such as notebooks, lakehouses, and data warehouses in your tenant.

To configure Private Links in Fabric, see Set up and use private links.

Outbound network security


Fabric has a set of tools that allow you to connect to external data sources and bring that data into Fabric in a secure way. This section lists different ways to import and connect to data from a secure network into Fabric.

Trusted workspace access

With Fabric you can access firewall-enabled Azure Data Lake Storage (ADLS) Gen2 accounts securely. Fabric workspaces that have a workspace identity can securely access ADLS Gen2 accounts with public network access enabled from selected virtual networks and IP addresses. You can limit ADLS Gen2 access to specific Fabric workspaces. For more information, see Trusted workspace access.

Note

Fabric workspace identities can only be created in workspaces associated with a Fabric capacity (F64 or higher). For information about buying a Fabric subscription, see Buy a Microsoft Fabric subscription.

Managed Private Endpoints


Managed private endpoints allow secure connections to data sources, such as Azure SQL databases, without exposing them to the public network or requiring complex network configurations.

Managed virtual networks

Managed virtual networks are virtual networks that are created and managed by
Microsoft Fabric for each Fabric workspace. Managed virtual networks provide network
isolation for Fabric Spark workloads, meaning that the compute clusters are deployed in
a dedicated network and are no longer part of the shared virtual network.

Managed virtual networks also enable network security features such as managed
private endpoints, and private link support for Data Engineering and Data Science items
in Microsoft Fabric that use Apache Spark.

Data gateway
To connect to on-premises data sources or a data source that might be protected by a
firewall or a virtual network, you can use one of these options:

On-premises data gateway - The gateway acts as a bridge between your on-premises data sources and Fabric. The gateway is installed on a server within your network, and it allows Fabric to connect to your data sources through a secure channel without the need to open ports or make changes to your network.

Virtual network (VNet) data gateway - The VNet gateway allows you to connect
from Microsoft Cloud services to your Azure data services within a VNet, without
the need of an on-premises data gateway.

Connect to OneLake from an existing service


You can connect to Fabric using your existing Azure Platform as a Service (PaaS) service.
For Synapse and Azure Data Factory (ADF) you can use Azure Integration Runtime (IR) or
Azure Data Factory managed virtual network. You can also connect to these services and
other services such as Mapping data flows, Synapse Spark clusters, Databricks Spark
clusters and Azure HDInsight using OneLake APIs.
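As one hedged illustration of this pattern, a Spark service such as Azure Databricks can read OneLake data through the ADLS Gen2-compatible abfss driver. The workspace, lakehouse, and file names below are placeholders, and the cluster is assumed to be configured with a Microsoft Entra credential that has access to the workspace.

```python
# Hypothetical sketch: read a OneLake file from an existing Spark service
# using the ADLS Gen2 driver that OneLake is compatible with.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# OneLake exposes ADLS Gen2-style abfss paths:
#   abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<item>.<ItemType>/...
path = (
    "abfss://SalesWorkspace@onelake.dfs.fabric.microsoft.com/"
    "SalesLakehouse.Lakehouse/Files/raw/orders.csv"
)

df = spark.read.option("header", "true").csv(path)
df.show(5)
```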

Azure service tags

Use service tags to ingest data without the use of data gateways, from data sources
deployed in an Azure virtual network, such as Azure SQL Virtual Machines (VMs), Azure
SQL Managed Instance (MI) and REST APIs. You can also use service tags to get traffic
from a virtual network or an Azure firewall. For example, service tags can allow
outbound traffic to Fabric so that a user on a VM can connect to Fabric SQL endpoints
from SSMS, while blocked from accessing other public internet resources.
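A minimal sketch of that last scenario, using the Azure SDK for Python: it adds an outbound network security group rule that allows traffic to the PowerBI service tag on port 443, while a lower-priority deny rule can keep other internet traffic blocked. The resource names are placeholders, and whether the PowerBI tag covers every Fabric endpoint you need is an assumption to verify for your region.

```python
# Hypothetical sketch: allow outbound traffic from a VM subnet to Fabric /
# Power BI endpoints using the "PowerBI" Azure service tag.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="*",
    source_port_range="*",
    destination_address_prefix="PowerBI",  # Azure service tag
    destination_port_range="443",
    access="Allow",
    direction="Outbound",
    priority=100,
)

poller = client.security_rules.begin_create_or_update(
    "<resource-group>", "<nsg-name>", "allow-fabric-outbound", rule
)
print(poller.result().provisioning_state)
```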

IP allowlists
If you have data that doesn't reside in Azure, you can enable an IP allowlist on your organization's network to allow traffic to and from Fabric. An IP allowlist is useful if you need to get data from data sources that don't support service tags, such as on-premises data sources. Using shortcuts, you can get data without copying it into OneLake, by using a lakehouse SQL endpoint or Direct Lake.

You can get the list of Fabric IPs from Service tags on-premises. The list is available as a
JSON file, or programmatically with REST APIs, PowerShell, and Azure Command-Line
Interface (CLI).
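For example, a minimal sketch that filters a downloaded service tags file down to the Power BI prefixes (the filename is a placeholder for whichever weekly ServiceTags_Public JSON you retrieved):

```python
# Filter the downloadable Azure service tags file down to the Power BI
# address prefixes, e.g. to build an on-premises allowlist.
import json

with open("ServiceTags_Public.json") as f:  # filename is a placeholder
    tags = json.load(f)

for entry in tags["values"]:
    if "PowerBI" in entry["name"]:
        prefixes = entry["properties"]["addressPrefixes"]
        print(entry["name"], len(prefixes), "prefixes")
        for p in prefixes[:5]:  # print a sample
            print("  ", p)
```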

Secure Data
In Fabric, all data that is stored in OneLake is encrypted at rest. All data at rest is stored
in your home region, or in one of your capacities at a remote region of your choice so
that you can meet data at rest sovereignty regulations. For more information, see
Microsoft Fabric security fundamentals.

Understand tenants in multiple geographies


Many organizations have a global presence and require services in multiple Azure
geographies. For example, a company can have its headquarters in the United States,
while doing business in other geographical areas, such as Australia. To comply with local
regulations, businesses with a global presence need to ensure that data remains stored
at rest in several regions. In Fabric, this is called multi-geo.

The query execution layer, query caches, and item data assigned to a multi-geo
workspace remain in the Azure geography of their creation. However, some metadata,
and processing, is stored at rest in the tenant's home geography.

Fabric is part of a larger Microsoft ecosystem. If your organization is already using other
cloud subscription services, such as Azure, Microsoft 365, or Dynamics 365, then Fabric
operates within the same Microsoft Entra tenant. Your organizational domain (for example, contoso.com) is associated with Microsoft Entra ID, like all Microsoft cloud services.

Fabric ensures that your data is secure across regions when you're working with several
tenants that have multiple capacities across a number of geographies.

Data logical separation - The Fabric platform provides logical isolation between tenants to protect your data.
Data sovereignty - To start working with multi-geo, see Configure Multi-Geo
support for Fabric.

Access data
Fabric controls data access using workspaces. In workspaces, data appears in the form of Fabric items, and users can't view or use items (data) unless you give them access to the workspace. You can find more information about workspace and item permissions in Permission model.

Workspace roles
Workspace access is listed in the table below. It includes workspace roles and Fabric and OneLake security. Users with a viewer role can run SQL, Data Analysis Expressions (DAX), or Multidimensional Expressions (MDX) queries, but they can't access Fabric items or run a notebook.

Role | Workspace access | OneLake access
Admin, member, and contributor | Can use all the items in the workspace | ✅
Viewer | Can see all the items in the workspace | ❌

Share items
You can share Fabric items with users in your organization that don't have any
workspace role. Sharing items gives restricted access, allowing users to only access the
shared item in the workspace.

Limit access
You can limit viewer access to data using row-level security (RLS), column-level security (CLS), and object-level security (OLS). With RLS, CLS, and OLS, you can create user identities that have access to certain portions of your data, and limit SQL results to return only what the user's identity can access.
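As a hedged sketch of what SQL-level RLS can look like, the following creates a filter predicate on an illustrative dbo.Orders table so each user only sees rows whose SalesRep column matches their identity. Table, schema, and connection details are assumptions; adapt them to your warehouse.

```python
# A minimal sketch of SQL row-level security, run through pyodbc against a
# Fabric SQL connection string. Names are illustrative assumptions.
import pyodbc

conn = pyodbc.connect("<fabric-sql-endpoint-connection-string>")
cur = conn.cursor()

# Predicate function: a row is visible only when its SalesRep column
# matches the identity of the querying user.
cur.execute("""
CREATE FUNCTION dbo.fn_rls_salesrep(@SalesRep AS nvarchar(128))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS allowed WHERE @SalesRep = USER_NAME();
""")

# Bind the predicate to the table as a filter policy.
cur.execute("""
CREATE SECURITY POLICY dbo.SalesFilter
ADD FILTER PREDICATE dbo.fn_rls_salesrep(SalesRep)
ON dbo.Orders
WITH (STATE = ON);
""")
conn.commit()
```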

You can also add RLS to a Direct Lake semantic model. If you define security for both SQL and DAX, Direct Lake falls back to DirectQuery for tables that have RLS in SQL. In such cases, DAX or MDX results are limited to the user's identity.

To expose reports using a Direct Lake semantic model with RLS without a DirectQuery fallback, use direct dataset sharing or apps in Power BI. With apps in Power BI, you can give access to reports without viewer access. This kind of access means that the users can't use SQL. To enable Direct Lake to read the data, you need to switch the data source credential from Single Sign On (SSO) to a fixed identity that has access to the files in the lake.

Protect data
Fabric supports sensitivity labels from Microsoft Purview Information Protection. These are the labels, such as General, Confidential, and Highly Confidential, that are widely used in Microsoft Office apps such as Word, PowerPoint, and Excel to protect sensitive
information. In Fabric, you can classify items that contain sensitive data using these
same sensitivity labels. The sensitivity labels then follow the data automatically from
item to item as it flows through Fabric, all the way from data source to business user.
The sensitivity label follows even when the data is exported to supported formats such
as PBIX, Excel, PowerPoint, and PDF, ensuring that your data remains protected. Only
authorized users can open the file. For more information, see Governance and
compliance in Microsoft Fabric.

To help you govern, protect, and manage your data, you can use Microsoft Purview.
Microsoft Purview and Fabric work together letting you store, analyze, and govern your
data from a single location, the Microsoft Purview hub.

Recover data
Fabric data resiliency ensures that your data is available if there is a disaster. Fabric also enables you to recover your data in case of a disaster through disaster recovery. For more information, see Reliability in Microsoft Fabric.

Administer Fabric
As an administrator in Fabric, you get to control capabilities for the entire organization. Fabric enables delegation of the admin role to capacities, workspaces, and domains. By delegating admin responsibilities to the right people, you can implement a model that lets several key admins control general Fabric settings across the organization, while other admins are in charge of settings related to specific areas.

Using various tools, admins can also monitor key Fabric aspects such as capacity
consumption.
Audit Logs
To view your audit logs, follow the instructions in Track user activities in Microsoft Fabric.
You can also refer to the Operation list to see which activities are available for searching
in the audit logs.
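One way to retrieve these events programmatically is the Power BI admin activity events REST API, which Fabric audit tooling builds on. The sketch below assumes a token with Fabric administrator permissions and shows the one-UTC-day window and continuation handling the API requires; treat it as illustrative rather than a complete auditing solution.

```python
# A minimal sketch: pull one day of audit activity events from the Power BI
# admin REST API, following continuation links.
import requests

token = "<admin-access-token>"  # placeholder; obtain via MSAL
headers = {"Authorization": f"Bearer {token}"}

# Start and end must fall within the same UTC day and be single-quoted.
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2024-06-01T00:00:00Z'"
    "&endDateTime='2024-06-01T23:59:59Z'"
)

events = []
while url:
    page = requests.get(url, headers=headers)
    page.raise_for_status()
    body = page.json()
    events.extend(body.get("activityEventEntities", []))
    url = body.get("continuationUri")  # None when the result set is complete

print(f"{len(events)} events")
```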

Note that an internal issue caused OneLake audit events to not be shown in the Microsoft 365 admin center from 4/21 through 5/6. If needed, you can request this data through support channels.

Capabilities
Review this section for a list of some of the security features available in Microsoft
Fabric.

Capability | Description
Conditional access | Secure your apps by using Microsoft Entra ID
Lockbox | Control how Microsoft engineers access your data
Fabric and OneLake security | Learn how to secure your data in Fabric and OneLake
Resiliency | Reliability and regional resiliency with Azure availability zones
Service tags | Enable an Azure SQL Managed Instance (MI) to allow incoming connections from Microsoft Fabric

Related content
Security fundamentals

Admin overview

Governance and compliance overview

Microsoft Fabric licenses



Microsoft Fabric security fundamentals
Article • 01/15/2024

This article presents a big-picture perspective of the Microsoft Fabric security architecture by describing how the main security flows in the system work. It also describes how users authenticate with Fabric, how data connections are established, and how Fabric stores and moves data through the service.

The article is primarily targeted at Fabric administrators, who are responsible for
overseeing Fabric in the organization. It's also relevant to enterprise security
stakeholders, including security administrators, network administrators, Azure
administrators, workspace administrators, and database administrators.

Fabric platform
Microsoft Fabric is an all-in-one analytics solution for enterprises that covers everything
from data movement to data science, real-time analytics, and business intelligence (BI).
The Fabric platform comprises a series of services and infrastructure components that
support the common functionality for all Fabric experiences. Collectively, they offer a
comprehensive set of analytics experiences designed to work together seamlessly.
Experiences include Lakehouse, Data Factory, Synapse Data Engineering, Synapse Data
Warehouse, Power BI, and others.

With Fabric, you don't need to piece together different services from multiple vendors.
Instead, you benefit from a highly integrated, end-to-end, and easy-to-use product
that's designed to simplify your analytics needs. Fabric was designed from the outset to
protect sensitive assets.

The Fabric platform is built on a foundation of software as a service (SaaS), which delivers reliability, simplicity, and scalability. It's built on Azure, which is Microsoft's public cloud computing platform. Traditionally, many data products have been platform as a service (PaaS), requiring an administrator of the service to set up security, compliance, and governance for each service. Because Fabric is a SaaS service, many of these features are built into the SaaS platform and require minimal or no setup.

Architectural diagram
The architectural diagram below shows a high-level representation of the Fabric security
architecture.
The architectural diagram depicts the following concepts.

1. A user uses a browser or a client application, like Power BI Desktop, to connect to the Fabric service.

2. Authentication is handled by Microsoft Entra ID, previously known as Azure Active Directory, which is the cloud-based identity and access management service that authenticates the user or service principal and manages access to Fabric.

3. The web front end receives user requests and facilitates sign-in. It also routes requests and serves front-end content to the user.

4. The metadata platform stores tenant metadata, which can include customer data. Fabric services query this platform on demand in order to retrieve authorization information and to authorize and validate user requests. It's located in the tenant home region.

5. The back-end capacity platform is responsible for compute operations and for storing customer data, and it's located in the capacity region. It leverages Azure core services in that region as necessary for specific Fabric experiences.

Fabric platform infrastructure services are multitenant. There is logical isolation between
tenants. These services don't process complex user input and are all written in managed
code. Platform services never run any user-written code.

The metadata platform and the back-end capacity platform each run in secured virtual
networks. These networks expose a series of secure endpoints to the internet so that
they can receive requests from customers and other services. Apart from these
endpoints, services are protected by network security rules that block access from the
public internet. Communication within virtual networks is also restricted based on the
privilege of each internal service.

The application layer ensures that tenants are only able to access data from within their
own tenant.
Authentication
Fabric relies on Microsoft Entra ID to authenticate users (or service principals). When
authenticated, users receive access tokens from Microsoft Entra ID. Fabric uses these
tokens to perform operations in the context of the user.

A key feature of Microsoft Entra ID is conditional access. Conditional access ensures that
tenants are secure by enforcing multifactor authentication, allowing only Microsoft
Intune enrolled devices to access specific services. Conditional access also restricts user
locations and IP ranges.

Authorization
All Fabric permissions are stored centrally by the metadata platform. Fabric services
query the metadata platform on demand in order to retrieve authorization information
and to authorize and validate user requests.

For performance reasons, Fabric sometimes encapsulates authorization information into signed tokens. Signed tokens are only issued by the back-end capacity platform, and they include the access token, authorization information, and other metadata.

Data residency
In Fabric, a tenant is assigned to a home metadata platform cluster, which is located in a
single region that meets the data residency requirements of that region's geography.
Tenant metadata, which can include customer data, is stored in this cluster.

Customers can control where their workspaces are located. They can choose to locate
their workspaces in the same geography as their metadata platform cluster, either
explicitly by assigning their workspaces on capacities in that region or implicitly by using
Fabric Trial, Power BI Pro, or Power BI Premium Per User license mode. In the latter case,
all customer data is stored and processed in this single geography. For more
information, see Microsoft Fabric concepts and licenses.

Customers can also create Multi-Geo capacities located in geographies (geos) other
than their home region. In this case, compute and storage (including OneLake and
experience-specific storage) is located in the multi-geo region, however the tenant
metadata remains in the home region. Customer data will only be stored and processed
in these two geographies. For more information, see Configure Multi-Geo support for
Fabric.
Data handling
This section provides an overview of how data handling works in Fabric. It describes
storage, processing, and the movement of customer data.

Data at rest
All Fabric data stores are encrypted at rest by using Microsoft-managed keys. Fabric
data includes customer data as well as system data and metadata.

While data can be processed in memory in an unencrypted state, it's never persisted to
permanent storage while in an unencrypted state.

Data in transit
Data in transit across the public internet between Microsoft services is always encrypted
with at least TLS 1.2. Fabric negotiates to TLS 1.3 whenever possible. Traffic between
Microsoft services always routes over the Microsoft global network.

Inbound Fabric communication also enforces TLS 1.2 and negotiates to TLS 1.3,
whenever possible. Outbound Fabric communication to customer-owned infrastructure
prefers secure protocols but might fall back to older, insecure protocols (including TLS
1.0) when newer protocols aren't supported.
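If you want to confirm the TLS floor from your own client, a minimal sketch like the following negotiates a connection to a Fabric endpoint and prints the agreed version. The hostname is illustrative; substitute an endpoint your tenant actually uses.

```python
# Confirm from a client machine that the negotiated TLS version for a
# Fabric endpoint meets the TLS 1.2 floor described above.
import socket
import ssl

host = "api.fabric.microsoft.com"
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # e.g. 'TLSv1.3' when both sides support it
```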

Telemetry
Telemetry is used to maintain performance and reliability of the Fabric platform. The
Fabric platform telemetry store is designed to be compliant with data and privacy
regulations for customers in all regions where Fabric is available, including the European
Union (EU). For more information, see EU Data Boundary Services.

OneLake
OneLake is a single, unified, logical data lake for the entire organization, and it's
automatically provisioned for every Fabric tenant. It's built on Azure and it can store any
type of file, structured or unstructured. Also, all Fabric items, like warehouses and
lakehouses, automatically store their data in OneLake.

OneLake supports the same Azure Data Lake Storage Gen2 (ADLS Gen2) APIs and SDKs,
therefore it's compatible with existing ADLS Gen2 applications, including Azure
Databricks.
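As a hedged sketch of that compatibility, the standard Azure Storage SDK can list lakehouse files through the OneLake endpoint. The workspace and lakehouse names are placeholders, and the caller is assumed to have a Microsoft Entra identity with access to the workspace.

```python
# A minimal sketch: list files in a lakehouse through OneLake's
# ADLS Gen2-compatible endpoint using the standard Azure Storage SDK.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

# In OneLake the file system is the workspace, and items are top-level folders.
fs = service.get_file_system_client("SalesWorkspace")
for path in fs.get_paths(path="SalesLakehouse.Lakehouse/Files"):
    print(path.name)
```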
For more information, see Fabric and OneLake security.

Workspace security
Workspaces represent the primary security boundary for data stored in OneLake. Each
workspace represents a single domain or project area where teams can collaborate on
data. You manage security in the workspace by assigning users to workspace roles.

For more information, see Fabric and OneLake security (Workspace security).

Item security
Within a workspace, you can assign permissions directly to Fabric items, like warehouses
and lakehouses. Item security provides the flexibility to grant access to an individual
Fabric item without granting access to the entire workspace. Users can set up per item
permissions either by sharing an item or by managing the permissions of an item.

Compliance resources
The Fabric service is governed by the Microsoft Online Services Terms and the
Microsoft Enterprise Privacy Statement.

For the location of data processing, refer to the Location of Data Processing terms in the
Microsoft Online Services Terms and to the Data Protection Addendum.

For compliance information, the Microsoft Trust Center is the primary resource for
Fabric. For more information about compliance, see Microsoft compliance offerings.

The Fabric service follows the Security Development Lifecycle (SDL), which consists of a
set of strict security practices that support security assurance and compliance
requirements. The SDL helps developers build more secure software by reducing the
number and severity of vulnerabilities in software, while reducing development cost. For
more information, see Microsoft Security Development Lifecycle Practices.

Related content
For more information about Fabric security, see the following resources.

Security in Microsoft Fabric


Microsoft Fabric end-to-end security scenario
OneLake security overview
Microsoft Fabric concepts and licenses
Questions? Try asking the Microsoft Fabric community.
Suggestions? Contribute ideas to improve Microsoft Fabric.



Power BI security
Article • 06/24/2024

Power BI is an online software service (SaaS, or Software as a Service) offering, as part of Microsoft Fabric, that lets you easily and quickly create self-service business intelligence dashboards, reports, semantic models, and visualizations. With Power BI, you can connect to many different data sources, combine and shape data from those connections, then create reports and dashboards that can be shared with others.

This article outlines Power BI data handling practices when it comes to storing,
processing, and transferring customer data.

Data at rest
Power BI uses two primary data storage resource types:

Azure Storage

Azure SQL Databases

In most scenarios, Azure Storage is utilized to persist the data of Power BI artifacts, while
Azure SQL Databases are used to persist artifact metadata.

All data persisted by Power BI is encrypted by default using Microsoft-managed keys.


Customer data stored in Azure SQL Databases is fully encrypted using Azure SQL's
Transparent Data Encryption (TDE) technology. Customer data stored in Azure storage is
encrypted using Azure Storage Encryption.

Optionally, organizations can utilize Power BI Premium to use their own keys to encrypt
data at rest that is imported into a semantic model. This approach is often described as
bring your own key (BYOK). Utilizing BYOK helps ensure that even in case of a service
operator error, customer data won't be exposed – something that can't easily be
achieved using transparent service-side encryption. See Bring your own encryption keys
for Power BI for more information.

Power BI semantic models allow for various data source connection modes that
determine whether the data source data is persisted in the service or not.

Semantic Model Mode (Kind) | Data Persisted in Power BI
Import | Yes
DirectQuery | No
Live Connect | No
Direct Lake | No (stored in OneLake)
Composite | If contains an Import data source
Streaming | If configured to persist

Regardless of the semantic model mode utilized, Power BI may temporarily cache any
retrieved data to optimize query and report load performance.

Data in processing
Data is in processing when it's either actively being used by one or more users as part of
an interactive scenario, or when a background process, such as refresh, touches this
data. Power BI loads actively processed data into the memory space of one or more
service workloads. To facilitate the functionality required by the workload, the processed
data in memory isn't encrypted.

Power BI embedded analytics


Independent Software Vendors (ISVs) and solution providers have two main modes of
embedding Power BI artifacts in their web applications and portals: embed for your
organization and embed for your customers. The artifact is embedded into an IFrame in
the application or portal. An IFrame isn't allowed to read or write data from the external
web application or portal, and the communication with the IFrame is done by using the
Power BI Client SDK using POST messages.

In an embed for your organization scenario, Microsoft Entra users access their own Power BI content through portals customized by their enterprises and IT departments. All Power BI policies and capabilities described in this paper, such as Row Level Security (RLS) and object-level security (OLS), are automatically applied to all users independently of whether they access Power BI through the Power BI portal or through customized portals.

In an embed for your customers scenario, ISVs typically own Power BI tenants and Power BI items (dashboards, reports, semantic models, and others). It's the responsibility of an ISV back-end service to authenticate its end users and decide which artifacts and which access level is appropriate for that end user. ISV policy decisions are encapsulated in an embed token generated by Power BI and passed to the ISV back-end for further distribution to the end users according to the business logic of the ISV. End users using a browser or other client applications aren't able to decrypt or modify the embed token. Power BI client APIs automatically append the encrypted embed token to Power BI requests as an Authorization: EmbedToken header. Based on this header, Power BI enforces all policies (such as access or RLS) precisely as was specified by the ISV during generation.

To enable embedding and automation, and to generate the embed tokens described
above, Power BI exposes a rich set of REST APIs. These Power BI REST APIs support both
user delegated and service principal Microsoft Entra methods of authentication and
authorization.
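A minimal sketch of the embed-token step in the embed for your customers flow, using the GenerateToken REST API: the workspace and report IDs are placeholders, and the service principal is assumed to already have access to the workspace.

```python
# The ISV back end generates an embed token for one report and hands it
# to the client-side embed SDK. IDs are placeholders.
import requests

token = "<service-principal-access-token>"  # via MSAL client credentials
workspace_id = "<workspace-guid>"
report_id = "<report-guid>"

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
    f"/reports/{report_id}/GenerateToken",
    headers={"Authorization": f"Bearer {token}"},
    json={"accessLevel": "View"},
)
resp.raise_for_status()
embed_token = resp.json()["token"]  # pass this to the client application
print(embed_token[:20], "...")
```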

Power BI embedded analytics and its REST APIs support all Power BI network isolation capabilities described in this article; for example, service tags and private links.

Paginated reports
Paginated reports are designed to be printed or shared. They're called paginated
because they're formatted to fit well on a page. They display all the data in a table, even
if the table spans multiple pages. You can control their report page layout exactly.

Paginated reports support rich and powerful expressions written in Microsoft Visual
Basic .NET. Expressions are widely used throughout Power BI Report Builder paginated
reports to retrieve, calculate, display, group, sort, filter, parameterize, and format data.

Expressions are created by the author of the report with access to the broad range of
features of the .NET framework. The processing and execution of paginated reports is
performed inside a sandbox.

Paginated report definitions (.rdl files) are stored as described in the Data at rest section. Authentication is described in the Authentication to the Power BI Service section.

The Microsoft Entra token obtained during the authentication is used to communicate
directly from the browser to the Power BI Premium cluster.

In Power BI Premium, the Power BI service runtime provides an appropriately isolated execution environment for each report render.

A paginated report can access a wide set of data sources as part of the rendering of the
report. The sandbox doesn't communicate directly with any of the data sources but
instead communicates with the trusted process to request data, and then the trusted
process appends the required credentials to the connection. In this way, the sandbox
never has access to any credential or secret.

In order to support features such as Bing maps, or calls to Azure Functions, the sandbox
does have access to the internet.

Power BI Mobile
Power BI Mobile is a collection of apps designed for the primary mobile platforms: Android and iOS. Security considerations for the Power BI Mobile apps fall into two categories:

Device communication

The application and data on the device

For device communication, all Power BI Mobile applications communicate with the
Power BI service, and use the same connection and authentication sequences used by
browsers, which are described in detail earlier in this white paper. The Power BI mobile
applications for iOS and Android bring up a browser session within the application itself.

Power BI Mobile supports certificate-based authentication (CBA) when authenticating with Power BI (sign in to service), SSRS ADFS on-premises (connect to SSRS server), and SSRS App Proxy on either iOS or Android.

Power BI Mobile apps actively communicate with the Power BI service. Telemetry is used
to gather mobile app usage statistics and similar data, which is transmitted to services
that are used to monitor usage and activity; no customer data is sent with telemetry.

The Power BI application stores data on the device that facilitates use of the app:

Microsoft Entra ID and refresh tokens are stored in a secure mechanism on the
device, using industry-standard security measures.

Data and settings (key-value pairs for user configuration) are cached in storage on the device in a sandbox and internal storage that is accessible only to the app, and can be encrypted by the OS. In iOS this is done automatically when the user sets a passcode; in Android this can be configured in the settings.

Geolocation is enabled or disabled explicitly by the user. If enabled, geolocation data isn't saved on the device and isn't shared with Microsoft.
Notifications are enabled or disabled explicitly by the user. If enabled, Android and
iOS don't support geographic data residency requirements for notifications.

Data encryption can be enhanced by applying file-level encryption via Microsoft Intune,
a software service that provides mobile device and application management. Both
platforms for which Power BI Mobile is available support Intune. With Intune enabled
and configured, data on the mobile device is encrypted, and the Power BI application
itself can't be installed on an SD card. Learn more about Microsoft Intune.

In order to implement SSO, some secured storage values related to the token-based
authentication are available for other Microsoft first party apps (such as Microsoft
Authenticator) and are managed by the Microsoft Authentication Library (MSAL).

Power BI Mobile cached data is deleted when the app is removed, when the user signs
out of Power BI Mobile, or when the user fails to sign in (such as after a token expiration
event or password change). The data cache includes dashboards and reports previously
accessed from the Power BI Mobile app.

Power BI Mobile doesn't access other application folders or files on the device.

The Power BI apps for iOS and Android let you protect your data by configuring extra
identification, such as providing Face ID, Touch ID, or a passcode for iOS, and biometric
ID (Fingerprint ID) for Android. Learn more about additional identification. Users can also
configure their app to require identification each time the app is brought to the
foreground using Face ID, Touch ID, or passcode.

Related content
Security in Microsoft Fabric

Security fundamentals



Permission model
Article • 06/27/2024

Microsoft Fabric has a flexible permission model that allows you to control access to
data in your organization. This article explains the different types of permissions in
Fabric and how they work together to control access to data in your organization.

A workspace is a logical entity for grouping items in Fabric. Workspace roles define
access permissions for workspaces. Although items are stored in one workspace, they
can be shared with other users across Fabric. When you share Fabric items, you can decide which permissions to grant the user you're sharing the item with. Certain items, such as Power BI reports, allow even more granular control of data. Reports can be set up so that, depending on their permissions, users who view them only see a portion of the data they hold.

Workspace roles
Workspace roles are used to control access to workspaces and the content within them.
A Fabric administrator can assign workspace roles to individual users or groups.
Workspace roles are confined to a specific workspace and don't apply to other
workspaces, the capacity the workspace is in, or the tenant.

There are four workspace roles, and they apply to all items within the workspace. Users that don't have any of these roles can't access the workspace. The roles are:

Viewer - Can view all content in the workspace, but can't modify it.

Contributor - Can view and modify all content in the workspace.

Member - Can view, modify, and share all content in the workspace.

Admin - Can view, modify, share, and manage all content in the workspace,
including managing permissions.

This table shows a small set of the capabilities each role has. For a full and more detailed
list, see Microsoft Fabric workspace roles.

Capability | Admin | Member | Contributor | Viewer
Delete the workspace | ✅ | ❌ | ❌ | ❌
Add admins | ✅ | ❌ | ❌ | ❌
Add members | ✅ | ✅ | ❌ | ❌
Write data | ✅ | ✅ | ✅ | ❌
Create items | ✅ | ✅ | ✅ | ❌
Read data | ✅ | ✅ | ✅ | ✅
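Workspace roles are typically assigned in the Fabric portal, but the Fabric REST API also exposes workspace role assignments. The following is a hedged sketch with placeholder GUIDs; check it against the current Add Workspace Role Assignment reference before relying on it.

```python
# Hypothetical sketch: grant a user the Contributor role on a workspace
# through the Fabric REST API. GUIDs are placeholders.
import requests

token = "<access-token>"
workspace_id = "<workspace-guid>"

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/roleAssignments",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "principal": {"id": "<user-object-guid>", "type": "User"},
        "role": "Contributor",
    },
)
resp.raise_for_status()
```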

Item permissions
Item permissions are used to control access to individual Fabric items within a
workspace. Item permissions are confined to a specific item and don't apply to other
items. Use item permissions to control who can view, modify, and manage individual
items in a workspace. You can use item permissions to give a user access to a single
item in a workspace that they don't have access to.

When you're sharing the item with a user or group, you can configure item permissions.
Sharing an item grants the user the read permission for that item by default. Read
permissions allow users to see the metadata for that item and view any reports
associated with it. However, read permissions don't allow users to access underlying
data in SQL or OneLake.

Different Fabric items have different permissions. To learn more about the permissions
for each item, see:

Semantic model

Warehouse

Data Factory

Lakehouse

Data science

Real-Time Intelligence

Compute permissions
Permissions can also be set within a specific compute engine in Fabric, specifically
through the SQL endpoint or in a semantic model. Compute engine permissions enable
a more granular data access control, such as table and row level security.

SQL endpoint - The SQL endpoint provides direct SQL access to tables in OneLake,
and can have security configured natively through SQL commands. These
permissions only apply to queries made through SQL.

Semantic model - Semantic models allow for security to be defined using DAX.
Restrictions defined using DAX apply to users querying through the semantic
model or Power BI reports built on the semantic model.

You can find more information in these articles:

Row-level security in Fabric data warehousing

Row-level security (RLS) with Power BI

Object-level security (OLS)

OneLake permissions (data access roles)


OneLake has its own permissions for governing access to files and folders in OneLake
through OneLake data access roles. OneLake data access roles allow users to create custom roles within a lakehouse and to grant read permissions only to the specified folders when accessing OneLake. For each OneLake role, users can assign users or security groups, or grant automatic assignment based on a workspace role.

Learn more about OneLake Data Access Control Model and view the how-to guides.

How to secure a lakehouse for Data Science teams

How to secure a lakehouse for Data Warehousing teams

How to secure data for common data architectures

Order of operation
Fabric has three different security levels. A user must have access at each level in order
to access the data. Each level evaluates sequentially to determine if a user has access.
Security rules such as Microsoft Information Protection policies evaluate at a given level
to allow or disallow access. The order of operation when evaluating Fabric security is:

1. Entra authentication: Checks if the user is able to authenticate to the Microsoft Entra tenant.
2. Fabric access: Checks if the user can access Microsoft Fabric.
3. Data security: Checks if the user can perform the requested action on a table or file.

Examples
This section provides two examples of how permissions can be set up in Fabric.

Example 1: Setting up team permissions


Wingtip Toys is set up with one tenant for the entire organization, and three capacities.
Each capacity represents a different region. Wingtip Toys operates in the United States,
Europe, and Asia. Each capacity has a workspace for each department in the
organization, including the sales department.

The sales department has a manager, a sales team lead, and sales team members.
Wingtip Toys also employs one analyst for the entire organization.

The following table shows the requirements for each role in the sales department and
how permissions are set up to enable them.

Role | Requirement | Setup
Manager | View and modify all content in the sales department in the entire organization | A member role for all the sales workspaces in the organization
Team lead | View and modify all content in the sales department in a specific region | A member role for the sales workspace in the region
Sales team member | View stats of other sales members in the region; view and modify their own sales report | No roles for any of the sales workspaces; access to a specific report that lists the member's sales figures
Analyst | View all content in the sales department in the entire organization | A viewer role for all the sales workspaces in the organization

Wingtip also has a quarterly report that lists its sales income per sales member. This report is stored in a finance workspace. By using row-level security, the report is set up so that each sales member can only see their own sales figures. Team leads can see the sales figures of all the sales members in their region, and the sales manager can see the sales figures of all the sales members in the organization.
Example 2: Workspace and item permissions
When you share an item, or change its permissions, workspace roles don't change. The
example in this section shows how workspace and item permissions interact.

Veronica and Marta work together. Veronica is the owner of a report she wants to share
with Marta. If Veronica shares the report with Marta, Marta will be able to access it
regardless of the workspace role she has.

Let's say that Marta has a viewer role in the workspace where the report is stored. If
Veronica decides to remove Marta's item permissions from the report, Marta will still be
able to view the report in the workspace. Marta will also be able to open the report from
the workspace and view its content. This is because Marta has view permissions to the
workspace.

If Veronica doesn't want Marta to view the report, removing Marta's item permissions
from the report isn't enough. Veronica also needs to remove Marta's viewer permissions
from the workspace. Without the workspace viewer permissions, Marta won't be able to
see that the report exists because she won't be able to access the workspace. Marta will
also not be able to use the link to the report, because she doesn't have access to the
report.

Now that Marta doesn't have a workspace viewer role, if Veronica decides to share the
report with her again, Marta will be able to view it using the link Veronica shares with
her, without having access to the workspace.

Example 3: Power BI App permissions


When sharing Power BI reports, you often want your recipients to only have access to
the reports and not to items in the workspace. For this you can use Power BI apps or
share reports directly with users.

Furthermore, you can limit viewer access to data using row-level security (RLS). With RLS, you can create roles that have access to certain portions of your data, and limit results to return only what the user's identity can access.

This works fine when using import models, as the data is imported into the semantic model and the recipients have access to it as part of the app. With Direct Lake, the report reads the data directly from the lakehouse, so the report recipient needs to have access to these files in the lake. You can do this in several ways:

Give ReadData permission on the Lakehouse directly.

Switch the data source credential from Single Sign On (SSO) to a fixed identity that has access to the files in the lake.

Because RLS is defined in the semantic model, the data will be read first and then the rows will be filtered.

If any security is defined in the SQL endpoint that the report is built on, the queries
automatically fall back to DirectQuery mode. If you do not want this default fallback
behavior, you can create a new Lakehouse using shortcuts to the tables in the original
Lakehouse and not define RLS or OLS in SQL on the new Lakehouse.

Related content
Security fundamentals

Microsoft Fabric licenses



Microsoft Fabric governance documentation
Govern, manage, and protect all your data in Fabric.

Secure, protect and comply

OVERVIEW

Governance and compliance in Fabric

Security overview

HOW-TO GUIDE

Manage access

Audit

DEPLOY

Information protection

Data loss prevention in Power BI

REFERENCE

Security documentation

Admin documentation

Manage your data estate

REFERENCE

Tenant settings and control

CONCEPT

Optimize business on domains

HOW-TO GUIDE

Workspace management

Govern capacity resources

Tenant multicloud abilities

Encourage data discovery, trust, and use

HOW-TO GUIDE

Discover

Track lineage

Analyze impact

OVERVIEW

Endorse and trust

Curate

Monitor, uncover insights and act

CONCEPT

Monitor

Data insights for admins

Data insights for creators

OVERVIEW

Automate
Microsoft Fabric documentation for admins
Learn about the Microsoft Fabric admin settings, options, and tools.

Fabric in your organization

OVERVIEW

What is Microsoft Fabric admin?

What is the admin portal?

GET STARTED

Enable Fabric for your organization

Region availability

Find your Fabric home region

HOW-TO GUIDE

Understand Fabric admin roles

REFERENCE

Governance documentation

Security documentation

Tools and settings

OVERVIEW

About tenant settings

HOW-TO GUIDE

Set up git integration

Set up item certification

Configure notifications

Set up metadata scanning

Enable content certification

Enable service principal authentication

Configure Multi-Geo support

Monitoring and management

OVERVIEW

What is the admin monitoring workspace?

CONCEPT

Feature usage and adoption report

HOW-TO GUIDE

Use the Monitoring hub

Workspace administration

CONCEPT

Manage workspaces

HOW-TO GUIDE

Workspace tenant settings


Microsoft Fabric security white paper
Article • 05/07/2024

Security is a top priority for Microsoft Fabric. As a Fabric customer, you need to
safeguard your assets from threats and follow your organization's security policies. The
Microsoft Fabric security white paper serves as an end-to-end security overview for
Fabric. It covers details on how Microsoft secures your data by default as a software as a
service (SaaS) service, and how you can secure, manage, and govern your data when
using Fabric.

The Fabric security white paper combines several online security documents into a single
downloadable PDF document for reading convenience. This PDF is updated at regular
intervals, while the online documentation at Microsoft Fabric security is always up to
date.

Download the Fabric security white paper


You can download the Fabric security white paper as a PDF from the following link:

Microsoft Fabric security white paper.

Related content
Microsoft Fabric security
Security in Microsoft Fabric
Microsoft Fabric security fundamentals



Protect inbound traffic
Article • 03/06/2024

Inbound traffic is traffic coming into Fabric from the internet. This article explains the
differences between the two ways to protect inbound traffic in Microsoft Fabric. Use this
article to decide which method is best for your organization.

Entra Conditional Access - When a user authenticates, access is determined based on a set of policies that might include IP address, location, and managed devices.

Private links - Fabric uses a private IP address from your virtual network. The endpoint allows users in your network to communicate with Fabric over the private IP address using private links.

Once traffic enters Fabric, it gets authenticated by Microsoft Entra ID, which is the same
authentication method used by Microsoft 365, OneDrive, and Dynamics 365. Microsoft
Entra ID authentication allows users to securely connect to cloud applications from any
device and any network, whether they’re at home, remote, or in their corporate office.

The Fabric backend platform is protected by a virtual network and isn't directly
accessible from the public internet other than through secure endpoints. To understand
how traffic is protected in Fabric, review Fabric's Architectural diagram.

By default, Fabric communicates between experiences using the internal Microsoft backbone network. When a Power BI report loads data from OneLake, the data goes through the internal Microsoft network. This configuration is different from having to set up multiple Platform as a Service (PaaS) services to connect to each other over a private network. Inbound communication between clients such as your browser or SQL Server Management Studio (SSMS) and Fabric uses the TLS 1.2 protocol and negotiates to TLS 1.3 when possible.

Fabric's default security settings include:

Microsoft Entra ID, which is used to authenticate every request.

Upon successful authentication, requests are routed to the appropriate backend service through secure Microsoft-managed endpoints.

Internal traffic between experiences in Fabric is routed over the Microsoft backbone.

Traffic between clients and Fabric is encrypted using at least the Transport Layer Security (TLS) 1.2 protocol.
Entra Conditional Access
Every interaction with Fabric is authenticated with Microsoft Entra ID. Microsoft Entra ID
is based upon the Zero Trust security model, which assumes that you're not fully
protected within your organization's network perimeter. Instead of looking at your
network as a security boundary, Zero Trust looks at identity as the primary perimeter for
security.

To determine access at the time of authentication you can define and enforce
conditional access policies based on your users' identity, device context, location,
network, and application sensitivity. For example, you can require multifactor
authentication, device compliance, or approved apps for accessing your data and
resources in Fabric. You can also block or limit access from risky locations, devices, or
networks.

Conditional access policies help you protect your data and applications without
compromising user productivity and experience. Here are a few examples of access
restrictions you can enforce using conditional access.

Define a list of IPs for inbound connectivity to Fabric.

Use Multifactor Authentication (MFA).

Restrict traffic based on parameters such as country of origin or device type.

Fabric doesn't support other authentication methods such as account keys or SQL
authentication, which rely on usernames and passwords.
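In practice this means clients authenticate with an Entra identity rather than a stored password. As a hedged sketch, connecting to a Fabric SQL endpoint from Python with interactive Entra sign-in might look like the following; the server and database names are placeholders copied from the Fabric portal.

```python
# Connect to a Fabric SQL endpoint with Microsoft Entra authentication
# only (no SQL username/password), via ODBC Driver 18.
import pyodbc

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<warehouse-name>;"
    "Authentication=ActiveDirectoryInteractive;"  # Entra sign-in prompt
    "Encrypt=yes;"
)
with pyodbc.connect(conn_str) as conn:
    row = conn.cursor().execute("SELECT @@VERSION;").fetchone()
    print(row[0])
```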

Configure conditional access


To configure conditional access in Fabric, you need to select several Fabric related Azure
services such as Power BI, Azure Data Explorer, Azure SQL Database, and Azure Storage.

Note

Conditional access can be considered too broad for some customers as any policy
will be applied to Fabric and the related Azure services.

Licensing
Conditional access requires Microsoft Entra ID P1 licenses. Often these licenses are
already available in your organization because they're shared with other Microsoft
products such as Microsoft 365. To find the right license for your requirements,
see License requirements.

Trusted access
Fabric doesn't need to reside in your private network, even when you have your data
stored inside one. With PaaS services, it's common to put the compute in the same
private network as the storage account. However, with Fabric this isn't needed. To enable
trusted access into Fabric, you can use features such as on-premises Data gateways,
Trusted workspace access and managed private endpoints. For more information, see
Security in Microsoft Fabric.

Private links
With private endpoints your service is assigned a private IP address from your virtual
network. The endpoint allows other resources in the network to communicate with the
service over the private IP address.

Using Private links, a tunnel from the service into one of your subnets creates a private
channel. Communication from external devices travels from their IP address, to a private
endpoint in that subnet, through the tunnel and into the service.

When implementing private links, Fabric is no longer accessible through the public
internet. To access Fabric, all users have to connect through the private network. The
private network is required for all communications with Fabric, including viewing a
Power BI report in the browser and using SQL Server Management Studio (SSMS) to
connect to an SQL endpoint.

On-premises networks
If you're using on-premises networks, you can extend them to the Azure Virtual Network
(VNet) using an ExpressRoute circuit, or a site-to-site VPN, to access Fabric using private
connections.

Bandwidth
With private links, all traffic to Fabric travels through the private endpoint, causing potential bandwidth issues. Users are no longer able to load globally distributed non-data resources, such as the image, .css, and .html files used by Fabric, from their own region. These resources are loaded from the location of the private endpoint instead. For example, for Australian users with a US private endpoint, traffic travels to the US first. This increases load times and might reduce performance.

Cost
The cost of private links, and the increase in ExpressRoute bandwidth needed to allow private connectivity from your network, might add costs to your organization.

Considerations and limitations


With private links you're closing off Fabric to the public internet. As a result, there are
many considerations and limitations you need to take into account.

Related content
Private links for secure access to Fabric

Conditional access in Fabric



Private links for secure access to Fabric
Article • 05/31/2024

You can use private links to provide secure access for data traffic in Fabric. Azure Private
Link and Azure Networking private endpoints are used to send data traffic privately
using Microsoft's backbone network infrastructure instead of going across the internet.

When private link connections are used, those connections go through the Microsoft
private network backbone when Fabric users access resources in Fabric.

To learn more about Azure Private Link, see What is Azure Private Link.

Enabling private endpoints has an impact on many items, so you should review this
entire article before enabling private endpoints.

What is a private endpoint


A private endpoint guarantees that traffic going into your organization's Fabric items
(for example, uploading a file into OneLake) always follows your organization's
configured private link network path. You can configure Fabric to deny all requests that
don't come from the configured network path.

Private endpoints don't guarantee that traffic from Fabric to your external data sources,
whether in the cloud or on-premises, is secured. Configure firewall rules and virtual
networks to further secure your data sources.

A private endpoint is a single-directional technology that lets clients initiate connections
to a given service but doesn't allow the service to initiate a connection into the
customer network. This private endpoint integration pattern provides management
isolation since the service can operate independently of customer network policy
configuration. For multitenant services, this private endpoint model provides link
identifiers to prevent access to other customers' resources hosted within the same
service.

The Fabric service implements private endpoints and not service endpoints.

Using private endpoints with Fabric provides the following benefits:

Restrict traffic from the internet to Fabric and route it through the Microsoft
backbone network.
Ensure only authorized client machines can access Fabric.
Comply with regulatory and compliance requirements that mandate private access
to your data and analytics services.

Understand private endpoint configuration


There are two tenant settings in the Fabric admin portal involved in Private Link
configuration: Azure Private Links and Block Public Internet Access.

If Azure Private Link is properly configured and Block public Internet access is enabled:

Supported Fabric items are only accessible for your organization from private
endpoints, and aren't accessible from the public Internet.
Traffic from the virtual network targeting endpoints and scenarios that support
private links are transported through the private link.
Traffic from the virtual network targeting endpoints and scenarios that don't
support private links will be blocked by the service, and won't work.
There might be scenarios that don't support private links, which therefore will be
blocked at the service when Block Public Internet Access is enabled.

If Azure Private Link is properly configured and Block public Internet access is disabled:

Traffic from the public Internet will be allowed by Fabric services.


Traffic from the virtual network targeting endpoints and scenarios that support
private links are transported through the private link.
Traffic from the virtual network targeting endpoints and scenarios that don't
support private links are transported through the public Internet, and will be
allowed by Fabric services.
If the virtual network is configured to block public Internet access, scenarios that
don't support private links will be blocked by the virtual network, and won't work.

Private Link in Fabric experiences

OneLake
OneLake supports Private Link. You can explore OneLake in the Fabric portal or from any
machine within your established virtual network using OneLake file explorer, Azure
Storage Explorer, PowerShell, and more.

Direct calls using OneLake regional endpoints don't work via private link to Fabric. For
more information about connecting to OneLake and regional endpoints, see How do I
connect to OneLake?.
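For example, from a machine in your virtual network, you could list lakehouse files over the private link by pointing an Azure Storage SDK client at the global OneLake endpoint. This is a minimal sketch, assuming the azure-storage-file-datalake and azure-identity packages; the workspace and lakehouse names are placeholders:

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Minimal sketch: browse OneLake with the ADLS Gen2 SDK from inside the VNet.
# Workspace and lakehouse names are placeholders; authentication uses Entra ID.
service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",  # global endpoint
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("<workspace-name>")
for path in fs.get_paths(path="<lakehouse-name>.Lakehouse/Files"):
    print(path.name)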
Warehouse and Lakehouse SQL endpoint
Accessing Warehouse items and Lakehouse SQL endpoints in the portal is protected by
Private Link. Customers can also use Tabular Data Stream (TDS) endpoints (for example,
SQL Server Management Studio, Azure Data Studio) to connect to Warehouse via Private
link.

Visual query in Warehouse doesn't work when the Block Public Internet Access tenant
setting is enabled.

Lakehouse, Notebook, Spark job definition, Environment


Once you've enabled the Azure Private Link tenant setting, running the first Spark job
(Notebook or Spark job definition) or performing a Lakehouse operation (Load to Table,
table maintenance operations such as Optimize or Vacuum) will result in the creation of
a managed virtual network for the workspace.

Once the managed virtual network has been provisioned, the starter pools (default
Compute option) for Spark are disabled, as these are prewarmed clusters hosted in a
shared virtual network. Spark jobs run on custom pools that are created on-demand at
the time of job submission within the dedicated managed virtual network of the
workspace. Workspace migration across capacities in different regions isn't supported
when a managed virtual network is allocated to your workspace.

When the private link setting is enabled, Spark jobs won't work for tenants whose home
region doesn't support Fabric Data Engineering, even if they use Fabric capacities from
other regions that do.

For more information, see Managed VNet for Fabric.

Dataflow Gen2
You can use Dataflow Gen2 to get data, transform data, and publish dataflows via private
link. When your data source is behind the firewall, you can use the VNet data gateway to
connect to your data sources. The VNet data gateway enables the injection of the
gateway (compute) into your existing virtual network, thus providing a managed
gateway experience. You can use VNet gateway connections to connect to a Lakehouse
or Warehouse in the tenant that requires a private link or connect to other data sources
with your virtual network.

Pipeline
When you connect to Pipeline via private link, you can use the data pipeline to load data
from any data source with public endpoints into a private-link-enabled Microsoft Fabric
lakehouse. Customers can also author and operationalize data pipelines with activities,
including Notebook and Dataflow activities, using the private link. However, copying
data from and into a Data Warehouse isn't currently possible when Fabric's private link is
enabled.

ML Model, Experiment, and AI skill


ML Model, Experiment, and AI skill items support private links.

Power BI
If internet access is disabled, and if the Power BI semantic model, Datamart, or
Dataflow Gen1 connects to a Power BI semantic model or Dataflow as a data
source, the connection will fail.

Publish to Web isn't supported when the tenant setting Azure Private Link is
enabled in Fabric.

Email subscriptions aren't supported when the tenant setting Block Public Internet
Access is enabled in Fabric.

Exporting a Power BI report as PDF or PowerPoint isn't supported when the tenant
setting Azure Private Link is enabled in Fabric.

If your organization is using Azure Private Link in Fabric, modern usage metrics
reports will contain partial data (only Report Open events). A current limitation
when transferring client information over private links prevents Fabric from
capturing Report Page Views and performance data over private links. If your
organization had enabled the Azure Private Link and Block Public Internet Access
tenant settings in Fabric, the refresh for the dataset fails and the usage metrics
report doesn't show any data.

Eventhouse
Eventhouse supports Private Link, allowing secure data ingestion and querying from
your Azure Virtual Network via a private link. You can ingest data from various sources,
including Azure Storage accounts, local files, and Dataflow Gen2. Streaming ingestion
ensures immediate data availability. Additionally, you can utilize KQL queries or Spark to
access data within an eventhouse.
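For example, a client in the virtual network could query a KQL database in an eventhouse with the azure-kusto-data package. This is a minimal sketch; the query URI (copied from the KQL database details page) and the database and table names are placeholders:

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Minimal sketch: query an eventhouse KQL database over the private link.
# The query URI and database name are placeholders; note that queued
# ingestion isn't supported (see the limitations below).
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication("<query-uri>")
client = KustoClient(kcsb)
response = client.execute("<database-name>", "<table-name> | take 10")
for row in response.primary_results[0]:
    print(row)
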
Limitations:

Ingesting data from OneLake isn't supported.


Creating a shortcut to an eventhouse isn't possible.
Connecting to an eventhouse in a data pipeline isn't possible.
Ingesting data using queued ingestion isn't supported.
Data connectors relying on queued ingestion aren't supported.
Querying an eventhouse using T-SQL isn't possible.

Other Fabric items


Other Fabric items, such as Event stream, don't currently support Private Link, and are
automatically disabled when you turn on the Block Public Internet Access tenant setting
in order to protect compliance status.

Microsoft Purview Information Protection


Microsoft Purview Information Protection doesn't currently support Private Link. This
means that in Power BI Desktop running in an isolated network, the Sensitivity button is
grayed out, label information won't appear, and decryption of .pbix files will fail.

To enable these capabilities in Desktop, admins can configure service tags for the
underlying services that support Microsoft Purview Information Protection: Exchange
Online Protection (EOP) and Azure Information Protection (AIP). Make sure you
understand the implications of using service tags in a private-link-isolated network.

Other considerations and limitations


There are several considerations to keep in mind while working with private endpoints in
Fabric:

Fabric supports up to 450 capacities in a tenant where Private Link is enabled.

Tenant migration is blocked when Private Link is turned on in the Fabric admin
portal.

Customers can't connect to Fabric resources in multiple tenants from a single
virtual network; only the last tenant to set up Private Link can be reached.

Private Link isn't supported for trial capacities. When accessing Fabric through a
private link, trial capacities won't work.

External images and themes aren't available when using a private link
environment.

Each private endpoint can be connected to one tenant only. You can't set up a
private link to be used by more than one tenant.

For Fabric users: On-premises data gateways aren't supported and fail to register
when Private Link is enabled. To run the gateway configurator successfully, Private
Link must be disabled. Learn more about this scenario. VNet data gateways will
work. For more information, see these considerations.

For non-Power BI (Power Apps or Logic Apps) gateway users: The gateway doesn't
work properly when Private Link is enabled. A potential workaround is to disable
the Azure Private Link tenant setting, configure the gateway in a remote region (a
region other than the recommended region), then re-enable Azure Private Link.
After Private Link is re-enabled, the gateway in the remote region won't use private
links.

Private links resource REST APIs don't support tags.

The following URLs must be accessible from the client browser:

Required for auth:

login.microsoftonline.com
aadcdn.msauth.net
msauth.net
msftauth.net
graph.microsoft.com
login.live.com, though this may be different based on account type.

Required for the Data Engineering and Data Science experiences:

http://res.cdn.office.net/
https://aznbcdn.notebooks.azure.net/
https://pypi.org/* (for example, https://pypi.org/pypi/azure-storage-blob/json)
local static endpoints for condaPackages
https://cdn.jsdelivr.net/npm/monaco-editor*

Related content
Set up and use secure private endpoints
Managed VNet for Fabric
Conditional Access
How to find your Microsoft Entra tenant ID



Set up and use private links
Article • 06/02/2024

In Fabric, you can configure and use an endpoint that allows your organization to access
Fabric privately. To configure private endpoints, you must be a Fabric administrator and
have permissions in Azure to create and configure resources such as virtual machines
(VMs) and virtual networks (VNets).

The steps that allow you to securely access Fabric from private endpoints are:

1. Set up private endpoints for Fabric.


2. Create a Microsoft.PowerBI private link services for Power BI resource in the Azure
portal.
3. Create a virtual network.
4. Create a virtual machine (VM).
5. Create a private endpoint.
6. Connect to a VM using Bastion.
7. Access Fabric privately from the virtual machine.
8. Disable public access for Fabric.

The following sections provide additional information for each step.

Step 1. Set up private endpoints for Fabric


1. Sign in to Fabric as an administrator.

2. Go to the tenant settings.

3. Find and expand the setting Azure Private Link.

4. Set the toggle to Enabled.


It takes about 15 minutes to configure a private link for your tenant. This includes
configuring a separate FQDN (fully qualified domain name) for the tenant in order to
communicate privately with Fabric services.

When this process is finished, move on to the next step.

Step 2. Create a Microsoft.PowerBI private link services


for Power BI resource in the Azure portal
This step is used to support Azure Private Endpoint association with your Fabric
resource.

1. Sign in to the Azure portal .

2. Select Create a resource.

3. Under Template deployment, select Create.


4. On the Custom deployment page, select Build your own template in the editor.

5. In the editor, create a Fabric resource using the following ARM template, where:

<resource-name> is the name you choose for the Fabric resource.

<tenant-object-id> is your Microsoft Entra tenant ID. See How to find your
Microsoft Entra tenant ID.

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": [
    {
      "type": "Microsoft.PowerBI/privateLinkServicesForPowerBI",
      "apiVersion": "2020-06-01",
      "name": "<resource-name>",
      "location": "global",
      "properties": {
        "tenantId": "<tenant-object-id>"
      }
    }
  ]
}

If you're using an Azure Government cloud for Power BI, location should be the
region name of the tenant. For example, if the tenant is in US Gov Texas, you
should put "location": "usgovtexas" in the ARM template. The list of Power BI US
Government regions can be found in the Power BI for US government article.

) Important

Use Microsoft.PowerBI/privateLinkServicesForPowerBI as the type value, even
though the resource is being created for Fabric.

6. Save the template. Then enter the following information.

Setting | Value
Project details
Subscription | Select your subscription.
Resource group | Select Create new. Enter test-PL as the name. Select OK.
Instance details
Region | Select the region.

7. On the review screen, select Create to accept the terms and conditions.
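
If you prefer to deploy the template programmatically instead of through the portal, the following is a minimal sketch using the azure-identity and azure-mgmt-resource Python packages; the subscription ID, resource group, and local template file name are placeholders, and the resource group must already exist:

import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Minimal sketch: deploy the ARM template above from a local file.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("fabric-private-link.json") as f:  # the template saved locally
    template = json.load(f)

client.deployments.begin_create_or_update(
    "<resource-group>",
    "fabric-private-link-deployment",
    {"properties": {"mode": "Incremental", "template": template}},
).result()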
Step 3. Create a virtual network
The following procedure creates a virtual network with a resource subnet, an Azure
Bastion subnet, and an Azure Bastion host.

The number of IP addresses your subnet will need is the number of capacities on your
tenant plus five. For example, if you're creating a subnet for a tenant with seven
capacities, you'll need twelve IP addresses.

1. In the Azure portal, search for and select Virtual networks.

2. On the Virtual networks page, select + Create.

3. On the Basics tab of Create virtual network, enter or select the following
information:

Setting | Value
Project details
Subscription | Select your subscription.
Resource group | Select test-PL, the name we created in Step 2.
Instance details
Name | Enter vnet-1.
Region | Select the region where you'll initiate the connection to Fabric.

4. Select Next to proceed to the Security tab. You can leave as default or change
based on business need.

5. Select Next to proceed to the IP Addresses tab. You can leave as default or change
based on business need.

6. Select Save.

7. Select Review + create at the bottom of the screen. When validation passes, select
Create.

Step 4. Create a virtual machine


The next step is to create a virtual machine.

1. In the Azure portal, go to Create a resource > Compute > Virtual machines.

2. On the Basics tab, enter or select the following information:

Settings | Value
Project details
Subscription | Select your Azure Subscription.
Resource group | Select the resource group you provided in Step 2.
Instance details
Virtual machine name | Enter a name for the new virtual machine. Select the info bubble next to the field name to see important information about virtual machine names.
Region | Select the region you selected in Step 3.
Availability options | For testing, choose No infrastructure redundancy required.
Security type | Leave the default.
Image | Select the image you want. For example, choose Windows Server 2022.
VM architecture | Leave the default of x64.
Size | Select a size.
Administrator account
Username | Enter a username of your choosing.
Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the defined complexity requirements.
Confirm password | Reenter the password.
Inbound port rules
Public inbound ports | Choose None.


3. Select Next: Disks.

4. On the Disks tab, leave the defaults and select Next: Networking.

5. On the Networking tab, select the following information:

Settings | Value
Virtual network | Select the virtual network you created in Step 3.
Subnet | Select the default subnet (10.0.0.0/24) you created in Step 3.

For the rest of the fields, leave the defaults.

6. Select Review + create. You're taken to the Review + create page where Azure
validates your configuration.

7. When you see the Validation passed message, select Create.


Step 5. Create a private endpoint
The next step is to create a private endpoint for Fabric.

1. In the search box at the top of the portal, enter Private endpoint. Select Private
endpoints.

2. Select + Create in Private endpoints.

3. On the Basics tab of Create a private endpoint, enter or select the following
information:

Settings | Value
Project details
Subscription | Select your Azure Subscription.
Resource group | Select the resource group you created in Step 2.
Instance details
Name | Enter FabricPrivateEndpoint. If this name is taken, create a unique name.
Region | Select the region you created for your virtual network in Step 3.

The following image shows the Create a private endpoint - Basics window.
4. Select Next: Resource. In the Resource pane, enter or select the following
information:

Settings | Value
Connection method | Select Connect to an Azure resource in my directory.
Subscription | Select your subscription.
Resource type | Select Microsoft.PowerBI/privateLinkServicesForPowerBI.
Resource | Choose the Fabric resource you created in Step 2.
Target subresource | Tenant

The following image shows the Create a private endpoint - Resource window.

5. Select Next: Virtual Network. In Virtual Network, enter or select the following
information.

Settings | Value
Networking
Virtual network | Select vnet-1, which you created in Step 3.
Subnet | Select subnet-1, which you created in Step 3.
Private DNS integration
Integrate with private DNS zone | Select Yes.
Private DNS Zone | Select (New)privatelink.analysis.windows.net, (New)privatelink.pbidedicated.windows.net, and (New)privatelink.prod.powerquery.microsoft.com.

6. Select Next: Tags, then Next: Review + create.

7. Select Create.

Step 6. Connect to a VM using Bastion


Azure Bastion protects your virtual machines by providing lightweight, browser-based
connectivity without the need to expose them through public IP addresses. For more
information, see What is Azure Bastion?.

Connect to your VM using the following steps:

1. Create a subnet called AzureBastionSubnet in the virtual network you created in
Step 3.

2. In the portal's search bar, enter the name of the VM you created in Step 4 (for
example, testVM).

3. Select the Connect button, and choose Connect via Bastion from the dropdown
menu.

4. Select Deploy Bastion.

5. On the Bastion page, enter the required authentication credentials, then click
Connect.

Step 7. Access Fabric privately from the VM


The next step is to access Fabric privately, from the virtual machine you created in the
previous step, using the following steps:

1. In the virtual machine, open PowerShell.

2. Enter nslookup <tenant-object-id-without-hyphens>-api.privatelink.analysis.windows.net .

3. You receive a response showing that a private IP address is returned for the
tenant FQDN. The OneLake endpoint and Warehouse endpoint also return private IPs.

4. Open the browser and go to app.fabric.microsoft.com to access Fabric privately.
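
If you'd rather script this check, the same DNS verification can be done in a few lines of Python. This is a minimal sketch using only the standard library; the FQDN is the tenant-specific placeholder from step 2:

import ipaddress
import socket

# Tenant-specific private link FQDN (placeholder from step 2).
fqdn = "<tenant-object-id-without-hyphens>-api.privatelink.analysis.windows.net"

ip = socket.gethostbyname(fqdn)  # resolves through the VNet's private DNS zone
print(fqdn, "->", ip)
print("Private address:", ipaddress.ip_address(ip).is_private)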

Step 8. Disable public access for Fabric


Finally, you can optionally disable public access for Fabric.

If you disable public access for Fabric, certain constraints on access to Fabric services are
put into place, as described in the next section.

) Important

When you turn on Block Public Internet Access, some unsupported Fabric items will be
disabled. For the full list of limitations and considerations, see About private links.

To disable public access for Fabric, sign in to Fabric as an administrator, and navigate
to the Admin portal. Select Tenant settings and scroll to the Advanced networking
section. Enable the toggle button in the Block Public Internet Access tenant setting.
It takes approximately 15 minutes for the system to disable your organization's access
to Fabric from the public Internet.

Completion of private endpoint configuration


Once you've followed the steps in the previous sections and the private link is
successfully configured, your organization implements private links based on the
following configuration selections, whether the selection is set upon initial configuration
or subsequently changed.

If Azure Private Link is properly configured and Block public Internet access is enabled:

Fabric is only accessible for your organization from private endpoints, and isn't
accessible from the public Internet.
Traffic from the virtual network targeting endpoints and scenarios that support
private links are transported through the private link.
Traffic from the virtual network targeting endpoints and scenarios that don't
support private links will be blocked by the service, and won't work.
There may be scenarios that don't support private links, which therefore will be
blocked at the service when Block public Internet access is enabled.

If Azure Private Link is properly configured and Block public Internet access is disabled:

Traffic from the public Internet will be allowed by Fabric services.


Traffic from the virtual network targeting endpoints and scenarios that support
private links are transported through the private link.
Traffic from the virtual network targeting endpoints and scenarios that don't
support private links are transported through the public Internet, and will be
allowed by Fabric services.
If the virtual network is configured to block public Internet access, scenarios that
don't support private links will be blocked by the virtual network, and won't work.

The following video shows how to connect a mobile device to Fabric, using private
endpoints:

7 Note

This video might use earlier versions of Power BI Desktop or the Power BI service.

https://www.youtube-nocookie.com/embed/-3yFtlZBpqs


Related content
About private links



Service tags
Article • 06/28/2024

You can use Azure service tags to enable connections to and from Microsoft Fabric. In
Azure, a service tag is a defined group of IP addresses that is automatically managed, as
a group, to minimize the complexity of updates or changes to network security rules.

Which service tags are supported?


In Microsoft Fabric, you can use the service tags listed in the table below. There's no
service tag for untrusted code that is used in Data Engineering items.

Tag | Purpose | Can use inbound or outbound? | Can be regional? | Can use with Azure Firewall?
DataFactory | Azure Data Factory | Both | No | Yes
DataFactoryManagement | On-premises data pipeline activity | Outbound | No | Yes
EventHub | Azure Event Hubs | Outbound | Yes | Yes
Power BI | Power BI and Microsoft Fabric | Both | No | Yes
PowerQueryOnline | Power Query Online | Both | No | Yes
KustoAnalytics | Real-Time Intelligence | Both | No | No

Use service tags


You can use the service tags to define network access controls on network security
groups, Azure Firewall, and user-defined routes.
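
For example, the following minimal sketch uses the azure-mgmt-network and azure-identity packages to add an outbound network security group rule whose destination is the PowerBI service tag. The subscription ID, resource group, NSG name, and rule priority are placeholders and must match an existing NSG:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

# Minimal sketch: allow outbound HTTPS to the PowerBI service tag on an NSG.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="VirtualNetwork",
    source_port_range="*",
    destination_address_prefix="PowerBI",  # the service tag as destination
    destination_port_range="443",
    access="Allow",
    direction="Outbound",
    priority=200,  # assumed to be a free priority slot
)

client.security_rules.begin_create_or_update(
    "<resource-group>", "<nsg-name>", "allow-powerbi-outbound", rule
).result()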

Related content
Private endpoints
Azure IP Ranges and Service Tags – Public Cloud
You can refer to the PowerBI tag. Microsoft Fabric currently doesn't support
regional service tags or a breakdown of IP ranges by region.



Conditional access in Fabric
Article • 11/15/2023

The Conditional Access feature in Microsoft Entra ID offers several ways enterprise
customers can secure apps in their tenants, including:

Multifactor authentication
Allowing only Intune enrolled devices to access specific services
Restricting user locations and IP ranges

For more information on the full capabilities of Conditional Access, see the article
Microsoft Entra Conditional Access documentation.

Configure conditional access for Fabric


To ensure that conditional access for Fabric works as intended, we recommend
adhering to the following best practices:

Configure a single, common, conditional access policy for the Power BI Service,
Azure Data Explorer, Azure SQL Database, and Azure Storage. Having a single,
common policy significantly reduces unexpected prompts that might arise from
different policies being applied to downstream services, and the consistent security
posture provides the best user experience in Microsoft Fabric and its related
products.

The products to include in the policy are the following:

Power BI Service
Azure Data Explorer
Azure SQL Database
Azure Storage

If you create a restrictive policy (such as one that blocks access for all apps except
Power BI), certain features, such as dataflows, won't work.

7 Note

If you already have a conditional access policy configured for Power BI, be sure to
include the other products listed above in your existing Power BI policy, otherwise
conditional access may not operate as intended in Fabric.
The following steps show how to configure a conditional access policy for Microsoft
Fabric.

1. Sign in to the Azure portal using an account with global administrator permissions.
2. Select Microsoft Entra ID.
3. On the Overview page, choose Security from the menu.
4. On the Security | Getting started page, choose Conditional Access.
5. On the Conditional Access | Overview page, select +Create new policy.
6. Provide a name for the policy.
7. Under Assignments, select the Users field. Then, on the Include tab, choose Select
users and groups, and then check the Users and groups checkbox. The Select
users and groups pane opens, and you can search for and select a Microsoft Entra
user or group for conditional access. When done, click Select.
8. Place your cursor in the Target resources field and choose Cloud apps from the
drop-down menu. Then, on the Include tab, choose Select apps and place your
cursor in the Select field. In the Select side pane that appears, find and select
Power BI Service, Azure Data Explorer, Azure SQL Database, and Azure Storage.
When you've selected all four items, close the side pane by clicking Select.
9. Under Access controls, put your cursor in the Grant field. In the Grant side pane
that appears, configure the policy you want to apply, and then click Select.
10. Set the Enable policy toggle to On, then select Create.

Next steps
Microsoft Entra Conditional Access documentation



Add Fabric URLs to your allowlist
Article • 05/28/2024

This article contains the allowlist of the Microsoft Fabric URLs required for interfacing
with Fabric workloads. For the Power BI allowlist, see Add Power BI URLs to your
allowlist.

The URLs are divided into two categories: required and optional. The required URLs are
necessary for the service to work correctly. The optional URLs are used for specific
features that you might not use. To use Fabric, you must be able to connect to the
endpoints marked required in the tables in this article, and to any endpoints marked
required on the linked sites. If the link to an external site refers to a specific section, you
only need to review the endpoints in that section. You can also add endpoints that are
marked optional to allowlists for specific functionality to work.

Fabric requires only TCP Port 443 to be opened for the listed endpoints.

The tables in this article use the following conventions:

Wildcards (*) represent all levels under the root domain.


N/A is used when information isn't available.

The Endpoint column lists domain names and links to external sites, which contain
further endpoint information.

Fabric Platform Endpoints


Purpose | Endpoint | Port
Required: Portal | *.fabric.microsoft.com | TCP 443
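
As a quick spot check that a firewall or proxy allows the required endpoints, you can test TCP connectivity on port 443 with a short script. This is a minimal sketch using only the Python standard library; wildcard entries must be tested against a concrete hostname, such as app.fabric.microsoft.com:

import socket

# Concrete hostnames standing in for the wildcard allowlist entries.
HOSTS = ["app.fabric.microsoft.com", "api.powerbi.com"]

for host in HOSTS:
    try:
        # Fabric requires only TCP 443 for the listed endpoints.
        socket.create_connection((host, 443), timeout=5).close()
        print(f"{host}: reachable on TCP 443")
    except OSError as err:
        print(f"{host}: blocked ({err})")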

OneLake
Purpose | Endpoint | Port
For OneLake access for DFS APIs (default OneLake endpoint) | *.onelake.dfs.fabric.microsoft.com | Port 1443
OneLake endpoint for calling Blob APIs | *.onelake.blob.fabric.microsoft.com | TCP 443
Optional: Regional endpoints for DFS APIs | *<region>-onelake.dfs.fabric.microsoft.com | TCP 443
Optional: Regional endpoints for Blob APIs | *<region>-onelake.blob.fabric.microsoft.com | TCP 443

Pipeline
Purpose | Endpoint | Port

For outbound connections:

Required: Portal | *.powerbi.com | TCP 443
Required: Backend APIs for Portal | *.pbidedicated.windows.net | TCP 443
Required: Cloud pipelines | No specific endpoint is required | N/A
Optional: On-premises data gateway login | *.login.windows.net, login.live.com, aadcdn.msauth.net, login.microsoftonline.com, *.microsoftonline-p.com. See the documentation for Adjust communication settings for the on-premises data gateway. | TCP 443
Optional: On-premises data gateway communication | *.servicebus.windows.net | TCP 443, TCP 5671-5672, TCP 9350-9354
Optional: On-premises data gateway pipelines | *.frontend.clouddatahub.net (you can use the DataFactory or DataFactoryManagement service tags) | TCP 443

For inbound connections:

No specific endpoints are required other than the customer's data store endpoints used in pipelines behind the firewall. (You can use the DataFactory service tag; regional tags are supported, like DataFactory.WestUs.)

Lakehouse
Purpose | Endpoint | Port
Inbound connections | https://cdn.jsdelivr.net/npm/monaco-editor* | N/A

Notebook
Purpose | Endpoint | Port
Inbound connections (icons) | http://res.cdn.office.net/ | N/A
Required: Notebook backend (HTTP/WebSocket) | https://*.pbidedicated.windows.net, wss://*.pbidedicated.windows.net | N/A
Required: Lakehouse backend | https://onelake.dfs.fabric.microsoft.com | N/A
Required: Shared backend | https://*.analysis.windows.net | N/A
Required: DE/DS extension UX | https://pbides.powerbi.com | N/A
Required: Notebooks UX | https://aznb-ame-prod.azureedge.net | N/A
Required: Notebooks UX | https://*.notebooks.azuresandbox.ms | N/A
Required: Notebooks UX | https://content.powerapps.com | N/A
Required: Notebooks UX | https://aznbcdn.notebooks.azure.net | N/A

Spark
Purpose | Endpoint | Port
Inbound connections (icons) | http://res.cdn.office.net/ | N/A
Inbound connections (library management for PyPI) | https://pypi.org/* | N/A
Inbound connections (library management for Conda) | local static endpoints for condaPackages | N/A

Data Warehouse
Purpose | Endpoint | Port
Required: Datamart SQL | datamart.fabric.microsoft.com | 1433
Required: Datamart SQL | datamart.pbidedicated.microsoft.com | 1433
Required: Fabric DW SQL | datawarehouse.fabric.microsoft.com | 1433
Required: Fabric SQL | datawarehouse.pbidedicated.microsoft.com | 1433

Data Science
Purpose | Endpoint | Port
Inbound connections (library management for PyPI) | https://pypi.org/* | N/A
Inbound connections (library management for Conda) | local static endpoints for condaPackages | N/A

KQL Database
Purpose | Endpoint | Port
N/A | https://*.z[0-9].kusto.fabric.microsoft.com | N/A
Eventstream
Purpose | Endpoint | Port
Customers can send/read events from Eventstream in their custom app | sb://*.servicebus.windows.net | http: 443; amqp: 5672/5673; kafka: 9093
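
Because the custom app endpoint speaks the Event Hubs protocol, a client can publish events with the azure-eventhub package. This is a minimal sketch; the connection string and entity (event hub) name are placeholders copied from the eventstream's custom app source details:

from azure.eventhub import EventData, EventHubProducerClient

# Minimal sketch: send one event to an eventstream custom app endpoint.
# The connection string and entity name below are placeholders.
producer = EventHubProducerClient.from_connection_string(
    "<connection-string>",  # Endpoint=sb://....servicebus.windows.net/;...
    eventhub_name="<entity-name>",
)
with producer:
    batch = producer.create_batch()
    batch.add(EventData('{"deviceId": 1, "temperature": 21.5}'))
    producer.send_batch(batch)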

Related content
Add Power BI URLs to allowlist



Add Power BI URLs to your allowlist
Article • 05/21/2024

This article contains the allowlist of the Power BI URLs required for interfacing with
Power BI. For the Microsoft Fabric allowlist, see Add Fabric URLs to your allowlist.

The Power BI service requires internet connectivity. The endpoints listed in the following
tables should be reachable for customers who use the Power BI service. All endpoints in
the Power BI service support HTTP/2.

To use the Power BI service, you must be able to connect to the endpoints marked
required in the tables in this article, and to any endpoints marked required on the
linked sites. If the link to an external site refers to a specific section, you only need to
review the endpoints in that section.

You can also add endpoints that are marked optional to allowlists for specific
functionality to work.

The Power BI service requires only TCP Port 443 to be opened for the listed endpoints.

Wildcards (*) represent all levels under the root domain. N/A is used when information
isn't available. The Destination(s) column lists domain names and links to external sites,
which contain further endpoint information.

) Important

The information in this article doesn't apply to Power BI China operated by 21Vianet
or Power BI for US government. Read Connect government and global Azure cloud
services to learn more about communicating between cloud services.

Authentication
Power BI depends on the required endpoints in the Microsoft 365 authentication and
identity sections. To use Power BI, you must be able to connect to the endpoints in the
following linked site.

Purpose | Destination | Port
Required: Authentication and identity | See the documentation for Microsoft 365 Common and Office Online URLs | N/A

General site usage


For the general use of Power BI, you must be able to connect to the endpoints and
linked sites in the following table.

Purpose | Destination | Port
Required: Backend APIs | api.powerbi.com | TCP 443
Required: Backend APIs | *.analysis.windows.net | TCP 443
Required: Backend APIs | *.pbidedicated.windows.net | TCP 443
Required: Content Delivery Network (CDN) | content.powerapps.com | TCP 443
Required: Datamart SQL | One of the following: datamart.fabric.microsoft.com, datamart.pbidedicated.windows.net | 1433
Required: Microsoft 365 integration | See the documentation for Microsoft 365 Common and Office Online URLs | N/A
Required: Portal | *.powerbi.com | TCP 443
Required: Manage gateways, connections and data policies (preview) | gatewayadminportal.azure.com | TCP 443
Required: Service telemetry | dc.services.visualstudio.com | TCP 443
Optional: Informational messages | arc.msn.com | TCP 443
Optional: NPS surveys | nps.onyx.azure.net | TCP 443
Administration
To perform administrative functions in Power BI, you must be able to connect to the
endpoints in the following linked sites.

Purpose | Destination | Port
Required: For managing users and viewing audit logs | See the documentation for Microsoft 365 Common and Office Online URLs | N/A

Getting data
To get data from specific data sources, such as OneDrive, you must be able to connect
to the endpoints in the following table. Access to other internet domains and URLs
might be required for specific data sources that your organization uses.

Purpose | Destination | Port
Required: AppSource (internal or external apps in Power BI) | appsource.microsoft.com, *.s-microsoft.com | TCP 443
Optional: Import files from OneDrive personal | See the Required URLs and ports for OneDrive site | N/A
Optional: Power BI in 60-Seconds tutorial video | *.doubleclick.net, *.ggpht.com, *.google.com, *.googlevideo.com, *.youtube.com, *.ytimg.com, fonts.gstatic.com | TCP 443
Optional: PubNub streaming data sources | See the PubNub documentation | N/A

Dashboard and report integration


Power BI depends on certain endpoints to support your dashboards and reports. You
must be able to connect to the endpoints and linked sites in the following table.
Purpose | Destination | Port
Required: Excel integration | See the documentation for Microsoft 365 Common and Office Online URLs | N/A

Power BI visuals
Power BI depends on certain endpoints to view and access Power BI visuals. You must be
able to connect to the endpoints and linked sites in the following table.

Purpose | Destination | Port
Required: Import a custom visual from the Marketplace interface or from a file | *.powerbi.com, *.osi.office.net, *.msecnd.net, store.office.com, store-images.s-microsoft.com, visuals.azureedge.net | TCP 443
Optional: Azure Maps | https://atlas.microsoft.com, https://us.atlas.microsoft.com, https://eu.atlas.microsoft.com | N/A
Optional: Bing Maps | bing.com, platform.bing.com, r.bing.com, *.virtualearth.net | TCP 443
Optional: Esri Maps | *.esri.com, *.arcgis.com | TCP 443
Optional: PowerApps | See the Required services section from the PowerApps system requirements site | N/A
Optional: Visio | See the documentation for Microsoft 365 Common and Office Online URLs, as well as SharePoint Online and OneDrive for work or school | N/A

Power BI OneDrive and SharePoint integration


Power BI depends on certain endpoints to support integration with OneDrive for
Business and SharePoint Online. You must be able to connect to the endpoints and
linked sites in the following table.

Purpose | Destination | Port
Required: OneDrive and SharePoint integration | See the documentation for SharePoint Online and OneDrive for Business URLs | N/A

Related external sites


Power BI links to other related sites. These sites host documentation, support, new
feature requests, and more. Access to these sites doesn't affect the functionality of
Power BI, so adding them to allowlists is optional.

Purpose | Destination | Port
Optional: Community site | community.powerbi.com, oxcrx34285.i.lithium.com | TCP 443
Optional: Documentation site | learn.microsoft.com, img-prod-cms-rt-microsoft-com.akamaized.net, statics-uhf-eas.akamaized.net, cdnssl.clicktale.net, ing-district.clicktale.net | TCP 443
Optional: Download site (for Power BI Desktop and other products) | download.microsoft.com | TCP 443
Optional: External redirects | aka.ms, go.microsoft.com | TCP 443
Optional: Ideas feedback site | ideas.powerbi.com, powerbi.uservoice.com | TCP 443
Optional: Power BI site - landing page, learn more links, support site, download links, partner showcase, and so on | powerbi.microsoft.com | TCP 443
Optional: Power BI Developer Center | dev.powerbi.com | TCP 443
Optional: Support site | support.powerbi.com, s3.amazonaws.com, *.olark.com, logx.optimizely.com, mscom.demdex.net, tags.tiqcdn.com | TCP 443



What are managed virtual networks?
Article • 05/30/2024

Managed virtual networks are virtual networks that are created and managed by
Microsoft Fabric for each Fabric workspace. Managed virtual networks provide network
isolation for Fabric Spark workloads, meaning that the compute clusters are deployed in
a dedicated network and are no longer part of the shared virtual network.

Managed virtual networks also enable network security features such as managed
private endpoints, and private link support for Data Engineering and Data Science items
in Microsoft Fabric that use Apache Spark.

Fabric workspaces that are provisioned with a dedicated virtual network provide you
with value in three ways:

With a managed virtual network you get complete network isolation for the Spark
clusters running your Spark jobs (which allow users to run arbitrary user code)
while offloading the burden of managing the virtual network to Microsoft Fabric.

You don't need to create a subnet for the Spark clusters based on peak load, as
this is managed for you by Microsoft Fabric.

A managed virtual network for your workspace, along with managed private
endpoints, allows you to access data sources that are behind firewalls or otherwise
blocked from public access.

7 Note
Managed virtual networks are currently not supported in the Switzerland West and
West Central US regions.

Outbound: Managed private endpoints are not available in Fabric workspaces
attached to capacities in the Switzerland West and West Central US regions.

Inbound: If workspaces are attached to Fabric capacities in these regions within
tenants where the Private Link setting is enabled, Data Engineering jobs originating
from notebooks, Spark job definitions, and lakehouse operations will result in
errors.

How to enable managed virtual networks for a Fabric workspace

Managed virtual networks are provisioned for a Fabric workspace when:

Managed private endpoints are added to a workspace. Workspace admins can
create and delete managed private endpoint connections from the workspace
settings of a Fabric workspace.

For more information, see About managed private endpoints in Fabric.

Private Link is enabled and a Spark job runs in a Fabric workspace. Tenant
admins can enable the Private Link setting in the Admin portal of their Microsoft
Fabric tenant.

Once you have enabled the Private Link setting, running the first Spark job
(Notebook or Spark job definition) or performing a Lakehouse operation (for
example, Load to Table, or a table maintenance operation such as Optimize or
Vacuum) will result in the creation of a managed virtual network for the workspace.
Learn more about configuring Private Links for Microsoft Fabric.

7 Note

The managed virtual network is provisioned automatically as part of the job
submission step for the first Spark job in the workspace. Once the managed
virtual network has been provisioned, the starter pools (default Compute
option) for Spark are disabled, as these are pre-warmed clusters hosted in a
shared virtual network. Spark jobs will run on custom pools created on-demand
at the time of job submission within the dedicated managed virtual network of
the workspace.

Related content
About managed private endpoints
How to create managed private endpoints
About private links



Overview of managed private endpoints for Fabric
Article • 05/30/2024

Managed private endpoints are a feature that allows secure and private access to data
sources from Fabric Spark workloads.

What are Managed Private Endpoints?


Managed private endpoints are connections that workspace admins can create to
access data sources that are behind a firewall or that are blocked from accessing
from the public internet.

Managed private endpoints allow Fabric Spark workloads to securely access data
sources without exposing them to the public network or requiring complex
network configurations.

The private endpoints provide a secure way to connect and access the data from
these data sources using items such as notebooks and Spark job definitions.

Microsoft Fabric creates and manages managed private endpoints based on the
inputs from the workspace admin. Workspace admins can set up managed private
endpoints from the workspace settings by specifying the resource ID of the data
source, identifying the target subresource, and providing a justification for the
private endpoint request.

Managed private endpoints support various data sources, such as Azure Storage,
Azure SQL Database and many more.
For more information about supported data sources for managed private endpoints in
Fabric, see Supported data sources.

Limitations and considerations


Starter pool limitation: Workspaces with managed virtual networks (VNets) can't
access starter pools. This category encompasses workspaces that use managed
private endpoints or are associated with a Fabric tenant enabled with Azure Private
Links and have executed Spark jobs. Such workspaces rely on on-demand clusters,
taking three to five minutes to start a session.

Managed private endpoints: Managed private endpoints are supported only for
Fabric trial capacity and Fabric capacities F64 or higher.

Tenant Region Compatibility: Managed private endpoints function only in regions
where Fabric Data Engineering workloads are available. Creating them in
unsupported Fabric tenant home regions results in errors. These unsupported
tenant home regions include:

West Central US
Israel Central
Switzerland West
Italy North
West India
Mexico Central
Qatar Central
Spain Central

Capacity Region Compatibility: Managed private endpoints function only in
regions where Fabric Data Engineering workloads are available. Creating them in
unsupported capacity regions results in errors. These unsupported regions include:

West Central US
Switzerland West
Italy North
Qatar Central
West India
France South
Germany North
Japan West
Korea South
South Africa West
UAE Central

Spark job resilience: To prevent Spark job failures or errors, migrate workspaces
with managed private endpoints to Fabric capacity SKUs of F64 or higher.

Workspace migration: Workspace migration across capacities in different regions
is unsupported.

OneLake shortcuts do not yet support connections to ADLS Gen2 storage
accounts using managed private endpoints.
These limitations and considerations might affect your use cases and workflows. Take
them into account before enabling the Azure Private Link tenant setting for your tenant.

Related content
Create and use managed private endpoints
Overview of private links in Fabric
Overview of managed virtual networks in Fabric



Create and use managed private endpoints
Article • 05/31/2024

Users with admin permissions to a Microsoft Fabric workspace can create, view, and
delete managed private endpoints from the Fabric portal through the workspace
settings.

The user can also monitor the status and the approval process of the managed
private endpoints from the Network security section of the workspace settings.

The user can access the data sources using the private endpoint name from the
Fabric Spark workloads.

Create a managed private endpoint


1. In a Fabric workspace, navigate to the workspace settings, select the Network
security tab, and then select the Create option in the Managed Private Endpoint
section.

The Create Managed Private endpoint dialog opens.


2. Specify a name for the private endpoint and copy in the resource identifier for the
Azure resource. The resource identifier can be found in the properties tab on the
Azure portal page.

When done, select Create.

3. When the managed private endpoint has been provisioned, the Activation status
changes to Succeeded.

In addition, the request for private endpoint access is sent to the data source.
The data source admins are notified on the Azure portal resource pages for their
data sources. There, they'll see a pending access request with the request message.

Taking SQL server as an example, users can navigate to the Azure portal and search for
the "SQL Server" resource.

1. On the Resource page, select Networking from the navigation menu and then
select the Private Access tab.

2. Data source administrators should be able to view the active private endpoint
connections and new connection requests.
3. Admins can either Approve or Reject by providing a business justification.

4. Once the request has been approved or rejected by the data source admin, the
status is updated in the Fabric workspace settings page upon refresh.

5. When the status has changed to approved, the endpoint can be used in notebooks
or Spark job definitions to access the data stored in the data source from Fabric
workspace.

Use managed private endpoints in Fabric


Microsoft Fabric notebooks support seamless interaction with data sources behind
secured networks, using managed private endpoints for data exploration and processing.
Within a notebook, users can quickly read data from protected data sources, and write
data back to their lakehouses, in a variety of file formats.

This guide provides code samples to help you get started in your own notebooks to
access data from data sources such as SQL DB through managed private endpoints.

Prerequisites
1. Access to the data source. This example looks at Azure SQL Server and Azure SQL
Database.

2. Sign into Microsoft Fabric and the Azure portal.

3. Navigate to the Azure SQL Server's resource page in the Azure portal and select
the Properties menu. Copy the Resource ID for the SQL Server that you would like
to connect to from Microsoft Fabric.

4. Using the steps listed in Create a managed private-endpoint, create the managed
private endpoint from the Fabric Network security settings page.

5. Once the data source administrator of the SQL server has approved the new
private endpoint connection request, you should be able to use the newly created
Managed Private Endpoint.

Connect to the Data Source from Notebooks


1. In the Microsoft Fabric workspace, use the experience switcher on the left-hand
side of your home page to switch to the Synapse Data Engineering experience.
2. Select Create and create a new notebook.

3. Now, in the notebook, by specifying the name of the SQL database and its
connection properties, you can connect through the managed private endpoint
connection that's been set up to read the tables in the database and write them to
your lakehouse in Microsoft Fabric.

4. The following PySpark code shows how to connect to an SQL database.

serverName = "<server_name>.database.windows.net"
database = "<database_name>"
dbPort = 1433
dbUserName = "<username>"
dbPassword = "<db_password, or a reference retrieved from Key Vault>"

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("Example") \
    .config("spark.jars.packages", "com.microsoft.azure:azure-sqldb-spark:1.0.2") \
    .config("spark.sql.catalogImplementation", "com.microsoft.azure.synapse.spark") \
    .config("spark.sql.catalog.testDB", "com.microsoft.azure.synapse.spark") \
    .config("spark.sql.catalog.testDB.spark.synapse.linkedServiceName", "AzureSqlDatabase") \
    .config("spark.sql.catalog.testDB.spark.synapse.linkedServiceName.connectionString", f"jdbc:sqlserver://{serverName}:{dbPort};database={database};user={dbUserName};password={dbPassword}") \
    .getOrCreate()

# Build the JDBC URL and connection properties for the SQL database.
jdbcURL = "jdbc:sqlserver://{0}:{1};database={2}".format(serverName, dbPort, database)
connection = {"user": dbUserName, "password": dbPassword, "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"}

# Read a table from the SQL database into a Spark dataframe.
df = spark.read.jdbc(url=jdbcURL, table="dbo.Employee", properties=connection)
df.show()
display(df)

# Write the dataframe as a Delta table in your lakehouse.
df.write.mode("overwrite").format("delta").saveAsTable("Employee")

# You can also specify a custom path for the table location:
# df.write.mode("overwrite").format("delta").option("path", "abfss://yourlakehouse.dfs.core.windows.net/Employee").saveAsTable("Employee")

Once the connection is established, the code creates a dataframe that reads the
Employee table from the SQL database and writes it to your lakehouse as a Delta table.

Supported data sources


Microsoft Fabric supports over 26 data sources to connect to using managed private
endpoints. Users need to specify the resource identifier, which can be found on the
Properties settings page of their data source in the Azure portal. Ensure the resource
ID follows the format shown in the following table.

Service | Resource ID format
Cognitive Services | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.CognitiveServices/accounts/{resource-name}
Azure Databricks | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Databricks/workspaces/{workspace-name}
Azure Database for MariaDB | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DBforMariaDB/servers/{server-name}
Azure Database for MySQL | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DBforMySQL/servers/{server-name}
Azure Database for PostgreSQL | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DBforPostgreSQL/servers/{server-name}
Azure Cosmos DB for MongoDB | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DocumentDB/databaseAccounts/{account-name}
Azure Cosmos DB for NoSQL | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DocumentDB/databaseAccounts/{account-name}
Azure Monitor Private Link Scopes | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Insights/privateLinkScopes/{scope-name}
Azure Key Vault | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{vault-name}
Azure Data Explorer (Kusto) | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Kusto/clusters/{cluster-name}
Azure Machine Learning | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.MachineLearningServices/workspaces/{workspace-name}
Private Link Service | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Network/privateLinkServices/{service-name}
Microsoft Purview | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Purview/accounts/{account-name}
Azure Search | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Search/searchServices/{service-name}
Azure SQL Database | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Sql/servers/{server-name}
Azure SQL Database (Azure SQL Managed Instance) | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Sql/managedInstances/{instance-name}
Azure Blob Storage | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Storage/storageAccounts/{storage-account-name}
Azure Data Lake Storage Gen2 | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Storage/storageAccounts/{storage-account-name}
Azure File Storage | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Storage/storageAccounts/{storage-account-name}
Azure Queue Storage | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Storage/storageAccounts/{storage-account-name}
Azure Table Storage | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Storage/storageAccounts/{storage-account-name}
Azure Synapse Analytics | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Synapse/workspaces/{workspace-name}
Azure Synapse Analytics (Artifacts) | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Synapse/workspaces/{workspace-name}
Azure Functions | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Web/sites/{function-app-name}
Azure Event Hubs | /subscriptions/{subscription-id}/resourcegroups/{resource-group-name}/providers/Microsoft.EventHub/namespaces/{namespace-name}
Azure IoT Hub | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Devices/IotHubs/{iothub-name}

Related content
About managed private endpoints in Fabric
About private links in Fabric
Overview of managed virtual networks in Fabric



Workspace identity
Article • 06/21/2024

A Fabric workspace identity is an automatically managed service principal that can be


associated with a Fabric workspace. Fabric workspaces with a workspace identity can
securely read or write to firewall-enabled Azure Data Lake Storage Gen2 accounts
through trusted workspace access for OneLake shortcuts. In the future, Fabric items will
be able to use the identity when connecting to resources that support Microsoft Entra
authentication. Fabric will use workspace identities to obtain Microsoft Entra tokens
without the customer having to manage any credentials.

Workspace identities can be created in the workspace settings of workspaces that are
associated with a Fabric capacity. A workspace identity is automatically assigned the
workspace contributor role and has access to workspace items.

When you create a workspace identity, Fabric creates a service principal in Microsoft
Entra ID to represent the identity. An accompanying app registration is also created.
Fabric automatically manages the credentials associated with workspace identities,
thereby preventing credential leaks and downtime due to improper credential handling.

Note

Fabric workspace identity is generally available. You can only create a workspace
identity in F64 or higher capacities. For information about buying a Fabric
subscription, see Buy a Microsoft Fabric subscription.

While Fabric workspace identities share some similarities with Azure managed identities,
their lifecycle, administration, and governance are different. A workspace identity has an
independent lifecycle that is managed entirely in Fabric. A Fabric workspace can
optionally be associated with an identity. When the workspace is deleted, the identity
gets deleted. The name of the workspace identity is always the same as the name of the
workspace it's associated with.

Create and manage a workspace identity


You must be a workspace admin to be able to create and manage a workspace identity.
The workspace you're creating the identity for must be associated with a Fabric F64
capacity or higher.

1. Navigate to the workspace and open the workspace settings.


2. Select the Workspace identity tab.
3. Select the + Workspace identity button.
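You can also script this step. The following Python sketch provisions a workspace identity through the Fabric REST API; the provisionIdentity endpoint name and token scope shown here are assumptions based on the public Fabric Core API surface, so verify them against the current API reference before relying on them.

Python

# A minimal sketch, assuming the Fabric Core REST API's provisionIdentity
# operation and the api.fabric.microsoft.com token scope (verify both against
# the current API reference). The caller must be a workspace admin, and the
# workspace must be on an F64 or higher capacity.
import requests
from azure.identity import InteractiveBrowserCredential

workspace_id = "<workspace-guid>"  # visible in the workspace URL

token = InteractiveBrowserCredential().get_token(
    "https://api.fabric.microsoft.com/.default"
).token

response = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/provisionIdentity",
    headers={"Authorization": f"Bearer {token}"},
    timeout=60,
)
response.raise_for_status()  # a long-running provision may return 202 Accepted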

When the workspace identity has been created, the tab displays the workspace identity
details and the list of authorized users.

The sections of the workspace identity configuration are described in the following
sections.

Identity details


Name: Workspace identity name. The workspace identity name is the same as the workspace name.
ID: The workspace identity GUID. This is a unique identifier for the identity.
Role: The workspace role assigned to the identity. Workspace identities are automatically assigned the contributor role upon creation.
State: The state of the workspace identity. Possible values: Active, Inactive, Deleting, Unusable, Failed, DeleteFailed.

Authorized users
For information, see Access control.
Delete a workspace identity
When an identity is deleted, Fabric items relying on the workspace identity for trusted
workspace access or authentication will break. Deleted workspace identities cannot be
restored.

Note

When a workspace is deleted, its workspace identity is deleted as well. If the workspace is restored after deletion, the workspace identity is not restored. If you want the restored workspace to have a workspace identity, you must create a new one.

How to use workspace identity


Shortcuts in a workspace that has a workspace identity can be used for trusted service
access. For more information, see trusted workspace access.

Security, administration, and governance of the workspace identity
The following sections describe who can use the workspace identity, and how you can
monitor it in Microsoft Purview and Azure.

Access control
Workspace identity can be created and deleted by workspace admins. The workspace
identity has the workspace contributor role on the workspace.

Currently, workspace identity isn't supported for authentication to target resources in


connections. Authentication to target resources in connections will be supported in the
future. Admins, members, and contributors will be able to use workspace identity in
authentication in connections in the future.

Application Administrators or users with higher roles can view, modify, and delete the
service principal and app registration associated with the workspace identity in Azure.

Warning
Modifying or deleting the service principal or app registration in Azure is not
recommended, as it will cause Fabric items relying on workspace identity to stop
working.

Administer the workspace identity in Fabric


Fabric administrators can administer the workspace identities created in their tenant on
the Fabric identities tab in the admin portal.

1. Navigate to the Fabric identities tab in the Admin portal.


2. Select a workspace identity, and then select Details.
3. In the Details tab, you can view additional information related to the workspace
identity.
4. You can also delete a workspace identity.

Note

Workspace identities cannot be restored after deletion. Be sure to review the consequences of deleting a workspace identity described in Delete a workspace identity.

Administer the workspace identity in Purview


You can view the audit events generated upon the creation and deletion of workspace identities in the Purview audit log. To access the log:

1. Navigate to the Microsoft Purview hub.


2. Select the Audit tile.
3. In the audit search form that appears, use the Activities - friendly names field to
search for fabric identity to find the activities related to workspace identities.
Currently, the following activities related to workspace identities are logged:

Created Fabric Identity for Workspace


Retrieved Fabric Identity for Workspace
Deleted Fabric Identity for Workspace
Retrieved Fabric Identity Token for Workspace

Administer the workspace identity in Azure


The application associated with the workspace identity can be viewed under both
Enterprise applications and App registrations in the Azure portal.

Enterprise applications

The application associated with the workspace identity can be seen in Enterprise
Applications in the Azure portal. Fabric Identity Management app is its configuration
owner.

Warning

Modifications to the application made here will cause the workspace identity to
stop working.

To view the audit logs and sign-in logs for this identity:

1. Sign in to the Azure portal.


2. Navigate to Microsoft Entra ID > Enterprise Applications.
3. Select either Audit logs or Sign in logs, as desired.

App registrations
The application associated with the workspace identity can be seen under App
registrations in the Azure portal. No modifications should be made there, as this will
cause the workspace identity to stop working.

Advanced scenarios
The following sections describe scenarios involving workspace identities that might
occur.

Deleting the identity


The workspace identity can be deleted in the workspace settings. When an identity is
deleted, Fabric items relying on the workspace identity for trusted workspace access or
authentication will break. Deleted workspace identities can't be restored.

When a workspace is deleted, its workspace identity is deleted as well. If the workspace
is restored after deletion, the workspace identity is not restored. If you want the
restored workspace to have a workspace identity, you must create a new one.
Renaming the workspace
When a workspace gets renamed, the workspace identity is also renamed to match the
workspace name. However, its Entra application and service principal remain the same. Note that there can be multiple application and app registration objects with the same name in a tenant.

Considerations and limitations


A workspace identity can only be created in workspaces associated with a Fabric
F64+ capacity. For information about buying a Fabric subscription, see Buy a
Microsoft Fabric subscription.
If a workspace with a workspace identity is migrated to a non-Fabric or a capacity
lower than F64, the identity won't be disabled or deleted, but Fabric items relying
on the workspace identity will stop working.
A maximum of 1,000 workspace identities can be created in a tenant. Once this
limit is reached, workspace identities must be deleted to enable newer ones to be
created.
Azure Data Lake Storage Gen2 shortcuts in a workspace that has a workspace
identity will be capable of trusted service access.

Troubleshooting issues with creating a workspace identity
If you can't create a workspace identity because the creation button is disabled,
make sure you have the workspace administrator role, and that the workspace is
associated with a Fabric F64+ capacity.

If you run into issues the first time you create a workspace identity in your tenant,
try the following steps:

1. If the workspace identity state is failed, wait for an hour and then delete the
identity.
2. After the identity has been deleted, wait 5 minutes and then create the
identity again.

Related content
Trusted workspace access
Fabric identities


Trusted workspace access
Article • 05/30/2024

Fabric allows you to access firewall-enabled Azure Data Lake Storage (ADLS) Gen2
accounts in a secure manner. Fabric workspaces that have a workspace identity can
securely access ADLS Gen2 accounts with public network access enabled from selected
virtual networks and IP addresses. You can limit ADLS Gen2 access to specific Fabric
workspaces.

Fabric workspaces that access a storage account with trusted workspace access need
proper authorization for the request. Authorization is supported with Microsoft Entra
credentials for organizational accounts or service principals. To find out more about
resource instance rules, see Grant access from Azure resource instances.

To limit and protect access to firewall-enabled storage accounts from certain Fabric
workspaces, you can set up a resource instance rule to allow access from specific Fabric
workspaces.

Note

Trusted workspace access is generally available. Fabric workspace identity can only
be created in workspaces associated with a Fabric capacity (F64 or higher). For
information about buying a Fabric subscription, see Buy a Microsoft Fabric
subscription.

This article shows you how to:

Configure trusted workspace access in an ADLS Gen2 storage account.

Create a OneLake shortcut in a Fabric Lakehouse that connects to a trusted-


workspace-access enabled ADLS Gen2 storage account.

Create a data pipeline to connect directly to a firewall-enabled ADLS Gen2 account


that has trusted workspace access enabled.

Use the T-SQL COPY statement to ingest data into your Warehouse from a firewall-
enabled ADLS Gen2 account that has trusted workspace access enabled.

Configure trusted workspace access in ADLS Gen2
Resource instance rule
You can configure specific Fabric workspaces to access your storage account based on
their workspace identity. You can create a resource instance rule by deploying an ARM
template with a resource instance rule. To create a resource instance rule:

1. Sign in to the Azure portal and go to Custom deployment.

2. Choose Build your own template in the editor. For a sample ARM template that
creates a resource instance rule, see ARM template sample.

3. Create the resource instance rule in the editor. When done, choose Review +
Create.

4. On the Basics tab that appears, specify the required project and instance details.
When done, choose Review + Create.

5. On the Review + Create tab that appears, review the summary and then select
Create. The rule will be submitted for deployment.

6. When deployment is complete, you'll be able to go to the resource.

Note

Resource instance rules for Fabric workspaces can only be created through
ARM templates. Creation through the Azure portal is not supported.
The subscriptionId "00000000-0000-0000-0000-000000000000" must be used
for the Fabric workspace resourceId.
You can get the workspace id for a Fabric workspace through its address bar
URL.

Here's an example of a resource instance rule that can be created through ARM
template. For a complete example, see ARM template sample.

JSON

"resourceAccessRules": [
    {
        "tenantId": "df96360b-9e69-4951-92da-f418a97a85eb",
        "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabric/providers/Microsoft.Fabric/workspaces/b2788a72-eef5-4258-a609-9b1c3e454624"
    }
]

Trusted service exception


If you select the trusted service exception for an ADLS Gen2 account that has public
network access enabled from selected virtual networks and IP addresses, Fabric
workspaces with a workspace identity will be able to access the storage account. When
the trusted service exception checkbox is selected, any workspaces in your tenant's
Fabric capacities that have a workspace identity can access data stored in the storage
account.

This configuration isn't recommended, and support might be discontinued in the future.
We recommend that you use resource instance rules to grant access to specific
resources.
Who can configure storage accounts for trusted service access?

A Contributor on the storage account (an Azure RBAC role) can configure resource instance rules or the trusted service exception.

How to use trusted workspace access in Fabric


There are currently three ways to use trusted workspace access to access your data from
Fabric in a secure manner:

You can create a new ADLS shortcut in a Fabric Lakehouse to start analyzing your
data with Spark, SQL, and Power BI.

You can create a data pipeline that leverages trusted workspace access to directly
access a firewall-enabled ADLS Gen2 account.

You can use a T-SQL Copy statement that leverages trusted workspace access to
ingest data into a Fabric warehouse.

The following sections show you how to use these methods.

Create a OneLake shortcut to a storage account with trusted workspace access
With the workspace identity configured in Fabric, and trusted workspace access enabled
in your ADLS Gen2 storage account, you can create OneLake shortcuts to access your
data from Fabric. You just create a new ADLS shortcut in a Fabric Lakehouse and you can
start analyzing your data with Spark, SQL, and Power BI.

Prerequisites
A Fabric workspace associated with a Fabric capacity. See Workspace identity.
Create a workspace identity associated with the Fabric workspace.
The user account or service principal used for creating the shortcut should have
Azure RBAC roles on the storage account. The principal must have a Storage Blob
Data Contributor, Storage Blob Data owner, or Storage Blob Data Reader role at
the storage account scope, or a Storage Blob Delegator role at the storage account
scope in addition to a Storage Blob Data Reader role at the container scope.
Configure a resource instance rule for the storage account.
Note

Preexisting shortcuts in a workspace that meets the prerequisites will automatically start to support trusted service access.

Steps
1. Start by creating a new shortcut in a Lakehouse.

The New shortcut wizard opens.

2. Under External sources select Azure Data Lake Storage Gen2.


3. Provide the URL of the storage account that has been configured with trusted
workspace access, and choose a name for the connection. For Authentication kind,
choose Organizational account, or Service Principal.

When done, select Next.

4. Provide the shortcut name and sub path.


When done, select Create.

5. The lakehouse shortcut is created, and you should be able to preview storage data
in the shortcut.
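If you want to automate shortcut creation instead of using the wizard, the OneLake Shortcuts REST API can create the same shortcut. The endpoint and payload shape in this Python sketch are assumptions drawn from the public API surface (verify field names against the current reference), and the connection ID must refer to an existing connection that authenticates with an organizational account or service principal.

Python

# A sketch, not the only way: create an ADLS Gen2 shortcut through the OneLake
# Shortcuts REST API. All GUIDs, names, and paths below are placeholders.
import requests
from azure.identity import InteractiveBrowserCredential

token = InteractiveBrowserCredential().get_token(
    "https://api.fabric.microsoft.com/.default"
).token

workspace_id = "<workspace-guid>"
lakehouse_id = "<lakehouse-item-guid>"

payload = {
    "name": "SalesData",  # hypothetical shortcut name
    "path": "Files",      # where the shortcut appears in the lakehouse
    "target": {
        "adlsGen2": {
            "url": "https://<storage-account>.dfs.core.windows.net",
            "subpath": "/<container>/<folder>",
            "connectionId": "<connection-guid>",
        }
    },
}

response = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/items/{lakehouse_id}/shortcuts",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()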

Use the OneLake shortcut to a storage account with trusted workspace access in Fabric items

With OneCopy in Fabric, you can access your OneLake shortcuts with trusted access
from all Fabric workloads.

Spark: You can use Spark to access data from your OneLake shortcuts. When shortcuts are used in Spark, they appear as folders in OneLake. You just need to reference the folder name to access the data. You can use the OneLake shortcut to storage accounts with trusted workspace access in Spark notebooks; a minimal sketch follows this list.
SQL endpoint: Shortcuts created in the "Tables" section of your lakehouse are also
available in the SQL endpoint. You can open the SQL endpoint and query your data
just like any other table.

Pipelines: Data pipelines can access managed shortcuts to storage accounts with
trusted workspace access. Data pipelines can be used to read from or write to
storage accounts through OneLake shortcuts.

Dataflows Gen2: Dataflows Gen2 can be used to access managed shortcuts to


storage accounts with trusted workspace access. Dataflows Gen2 can read from or
write to storage accounts through OneLake shortcuts.

Semantic models and reports: The default semantic model associated with a
Lakehouse SQL endpoint can read managed shortcuts to storage accounts with
trusted workspace access. To see the managed tables in the default semantic
model, go to the SQL endpoint, select Reporting, and choose Automatically
update semantic model.

You can also create new semantic models that reference table shortcuts to storage
accounts with trusted workspace access. Go to the SQL endpoint, select Reporting
and choose New semantic model.

You can create reports on top of the default semantic models and custom semantic
models.

KQL Database: You can also create OneLake shortcuts to ADLS Gen2 in a KQL
database. The steps to create the managed shortcut with trusted workspace access
remain the same.
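As a minimal illustration of the Spark path, the following sketch assumes a Fabric notebook with the lakehouse attached as the default lakehouse, a hypothetical file shortcut named SalesData under Files, and a hypothetical table shortcut named SalesOrders under Tables.

Python

# Runs in a Fabric notebook, where the spark session is predefined.
# Shortcut and table names are hypothetical.
df = spark.read.option("header", "true").csv("Files/SalesData")
df.show(5)

# A table shortcut under Tables is queryable like any other lakehouse table.
spark.sql("SELECT COUNT(*) AS row_count FROM SalesOrders").show()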

Create a data pipeline to a storage account with trusted workspace access
With the workspace identity configured in Fabric and trusted access enabled in your
ADLS Gen2 storage account, you can create data pipelines to access your data from
Fabric. You can create a new data pipeline to copy data into a Fabric lakehouse and then
you can start analyzing your data with Spark, SQL, and Power BI.

Prerequisites

A Fabric workspace associated with a Fabric capacity. See Workspace identity.


Create a workspace identity associated with the Fabric workspace.
The user account or service principal used for creating the connection should have
Azure RBAC roles on the storage account. The principal must have a Storage Blob
Data Contributor, Storage Blob Data owner, or Storage Blob Data Reader role at
the storage account scope.
Configure a resource instance rule for the storage account.

Steps
1. Start by selecting Get Data in a lakehouse.

2. Select New data pipeline. Provide a name for the pipeline and then select Create.

3. Choose Azure Data Lake Storage Gen2 as the data source.


4. Provide the URL of the storage account that has been configured with trusted
workspace access, and choose a name for the connection. For Authentication kind,
choose Organizational account or Service Principal.

When done, select Next.

5. Select the file that you need to copy into the lakehouse.

When done, select Next.

6. On the Review + save screen, select Start data transfer immediately. When done,
select Save + Run.


7. When the pipeline status changes from Queued to Succeeded, go to the lakehouse
and verify that the data tables were created.

Use the T-SQL COPY statement to ingest data into a warehouse
With the workspace identity configured in Fabric and trusted access enabled in your
ADLS Gen2 storage account, you can use the COPY T-SQL statement to ingest data into
your Fabric warehouse. Once the data is ingested into the warehouse, then you can start
analyzing your data with SQL and Power BI.
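As a sketch of what that can look like end to end, the following Python snippet submits a COPY statement to the warehouse's SQL connection string with pyodbc. The server, database, table, and storage path are placeholders; no storage credential is passed because, with trusted workspace access configured as described above, the statement runs under the caller's Microsoft Entra identity.

Python

# A minimal sketch, assuming ODBC Driver 18 for SQL Server and a warehouse
# whose SQL connection string you copied from the Fabric portal. All names
# and paths are placeholders.
import pyodbc

connection = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<warehouse-sql-connection-string>;"
    "Database=<warehouse-name>;"
    "Authentication=ActiveDirectoryInteractive;",
    autocommit=True,
)

copy_statement = """
COPY INTO dbo.SalesOrders
FROM 'https://<storage-account>.dfs.core.windows.net/<container>/sales/*.parquet'
WITH (FILE_TYPE = 'PARQUET')
"""
connection.cursor().execute(copy_statement)
connection.close()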

Restrictions and considerations


Trusted workspace access is only supported for workspaces in Fabric capacities
(F64 or higher).
You can only use trusted workspace access in OneLake shortcuts and data
pipelines. To securely access storage accounts from Fabric Spark, see Managed
private endpoints for Fabric.
If a workspace with a workspace identity is migrated to a non-Fabric capacity or
Fabric capacity lower than F64, trusted workspace access will stop working after an
hour.
Pre-existing shortcuts created before October 10, 2023 don't support trusted
workspace access.
Connections for trusted workspace access can't be created or modified in Manage
connections and gateways.
If you reuse connections that support trusted workspace access in Fabric items
other than shortcuts and pipelines, or in other workspaces, they might not work.
Only organizational accounts or service principals can be used for authentication to storage accounts for trusted workspace access.
Pipelines can't write to OneLake table shortcuts on storage accounts with trusted
workspace access. This is a temporary limitation.
A maximum of 200 resource instance rules can be configured. For more
information, see Azure subscription limits and quotas - Azure Resource Manager.
Trusted workspace access only works when public access is enabled from selected
virtual networks and IP addresses.
Resource instance rules for Fabric workspaces must be created through ARM
templates. Resource instance rules created through the Azure portal UI aren't
supported.
Pre-existing shortcuts in a workspace that meets the prerequisites will
automatically start to support trusted service access.
Troubleshooting issues with trusted workspace access
If a shortcut in a lakehouse that targets a firewall-protected ADLS Gen2 storage account
becomes inaccessible, it might be because the lakehouse has been shared with a user
who doesn't have an admin, member, or contributor role in the workspace where the
lakehouse resides. This is a known issue. The remedy is not to share the lakehouse with
users who don't have an admin, member, or contributor role in the workspace.

ARM template sample


JSON

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2023-01-01",
            "name": "<storage account name>",
            "id": "/subscriptions/<subscription id of storage account>/resourceGroups/<resource group name>/providers/Microsoft.Storage/storageAccounts/<storage account name>",
            "location": "<region>",
            "sku": {
                "name": "Standard_RAGRS",
                "tier": "Standard"
            },
            "kind": "StorageV2",
            "properties": {
                "networkAcls": {
                    "resourceAccessRules": [
                        {
                            "tenantId": "<tenantid>",
                            "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabric/providers/Microsoft.Fabric/workspaces/<workspace-id>"
                        }
                    ]
                }
            }
        }
    ]
}
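One way to deploy this template programmatically is with the Azure SDK for Python (azure-identity and azure-mgmt-resource), sketched below; you can equally use the portal's custom deployment experience described earlier. The resource group, file, and deployment names are placeholders, and the signed-in principal needs the Contributor role on the storage account.

Python

# A sketch of deploying the ARM template above with the Azure SDK for Python.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("trusted-workspace-access.json") as template_file:
    template = json.load(template_file)

poller = client.deployments.begin_create_or_update(
    "<resource-group-name>",
    "fabric-resource-instance-rule",  # hypothetical deployment name
    {"properties": {"mode": "Incremental", "template": template}},
)
print(poller.result().properties.provisioning_state)  # expect "Succeeded"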

Related content
Workspace identity
Grant access from Azure resource instances
Trusted access based on a managed identity



Guest user sharing
Article • 12/21/2023

Sharing items with guest users in Fabric is similar to sharing items with guest users in
Power BI, except that in Fabric, you can only share items by sharing the workspace.
Explicit sharing of particular items with guest users isn't supported, except for reports,
dashboards, semantic models, and apps.

For more information about Guest user sharing in Power BI, see Distribute Power BI
content to external guest users with Microsoft Entra B2B.



Customer Lockbox for Microsoft Fabric
Article • 02/08/2024

Use Customer Lockbox for Microsoft Azure to control how Microsoft engineers access your data. In this article, you'll learn how Customer Lockbox requests are initiated, tracked, and stored for later reviews and audits.

Typically, Customer Lockbox is used to help Microsoft engineers troubleshoot a


Microsoft Fabric service support request. Customer Lockbox can also be used when
Microsoft identifies a problem, and a Microsoft-initiated event is opened to investigate
the issue.

Enable Customer Lockbox for Microsoft Fabric


To enable Customer Lockbox for Microsoft Fabric, you must be a Microsoft Entra Global
Administrator. To assign roles in Microsoft Entra ID, see Assign Microsoft Entra roles to
users.

1. Open the Azure portal.

2. Go to Customer Lockbox for Microsoft Azure.

3. In the Administration tab, select Enabled.

Microsoft access request


In cases where the Microsoft engineer can't troubleshoot your issue by using standard
tools, elevated permissions are requested using the Just-In-Time (JIT) access service. The
request can come from the original support engineer, or from a different engineer.

After the access request is submitted, the JIT service evaluates the request, considering
factors such as:

The scope of the resource


Whether the requester is an isolated identity or using multi-factor authentication

Permissions levels

Based on the JIT role, the request may also include an approval from internal Microsoft
approvers. For example, the approver might be the customer support lead or the
DevOps Manager.

When the request requires direct access to customer data, a Customer Lockbox request
is initiated. For example, in cases where remote desktop access to a customer's virtual
machine is needed. Once the Customer Lockbox request is made, it awaits customer's
approval before access is granted.

These steps describe a Microsoft initiated Customer Lockbox request, for Microsoft
Fabric service.

1. The Microsoft Entra Global Administrator receives a pending access request


notification email from Microsoft. The admin who received the email becomes the designated approver.


2. The email provides a link to Customer Lockbox in the Azure Administration
module. Using the link, the designated approver signs in to the Azure portal to
view any pending Customer Lockbox requests. The request remains in the
customer queue for four days. After that, the access request automatically expires
and no access is granted to Microsoft engineers.

3. To get the details of the pending request, the designated approver can select the
Customer Lockbox request from the Pending Requests menu option.

4. After reviewing the request, the designated approver enters a justification and
selects one of the options below. For auditing purposes, the actions are logged
in the Customer Lockbox logs.

Approve - Access is granted to the Microsoft engineer for a default period of


eight hours.

Deny - The access request by the Microsoft engineer is rejected and no


further action is taken.

Logs
Customer Lockbox has two types of logs:

Activity logs - Available from the Azure Monitor activity log.

The following activity logs are available for Customer Lockbox:


Deny Lockbox Request
Create Lockbox Request
Approve Lockbox Request
Lockbox Request Expiry

To access the activity logs, in the Azure portal, select Activity Log. You can filter the results for specific actions; a programmatic sketch follows this section.

Audit logs - Available from the Microsoft Purview compliance portal. You can see
the audit logs in the admin portal.

Customer Lockbox for Microsoft Fabric has four audit logs:


Audit log: Friendly name

GetRefreshHistoryViaLockbox: Get refresh history via lockbox
DeleteAdminUsageDashboardsViaLockbox: Delete admin usage dashboards via lockbox
DeleteUsageMetricsv2PackageViaLockbox: Delete usage metrics v2 package via lockbox
DeleteAdminMonitoringFolderViaLockbox: Delete admin monitoring folder via lockbox
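The activity log can also be queried programmatically. The following Python sketch uses azure-mgmt-monitor to list recent events and filter for Lockbox operations; the time window and the substring match on the operation name are illustrative assumptions, not an official filter.

Python

# A sketch: list Customer Lockbox events from the Azure Monitor activity log.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

time_filter = (
    "eventTimestamp ge '2024-01-01T00:00:00Z' and "
    "eventTimestamp le '2024-01-31T00:00:00Z'"
)
for event in client.activity_logs.list(filter=time_filter):
    operation = (event.operation_name.value or "") if event.operation_name else ""
    if "lockbox" in operation.lower():  # e.g. Approve/Deny/Create Lockbox Request
        print(event.event_timestamp, operation, event.caller)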

Exclusions
Customer Lockbox requests aren't triggered in the following engineering support
scenarios:
Emergency scenarios that fall outside of standard operating procedures. For
example, a major service outage requires immediate attention to recover or restore
services in an unexpected scenario. These events are rare and usually don't require
access to customer data.

A Microsoft engineer accesses the Azure platform as part of troubleshooting, and


is accidentally exposed to customer data. For example, during troubleshooting the
Azure Network Team captures a packet on a network device. Such scenarios don't
usually result in access to meaningful customer data.

External legal demands for data. For details, see government requests for
data on the Microsoft Trust Center.

Data access
Access to data varies according to the Microsoft Fabric experience your request is for.
This section lists which data the Microsoft engineer can access, after you approve a
Customer Lockbox request.

Power BI - When running the operations listed below, the Microsoft engineer will
have access to a few tables linked to your request. Each operation the Microsoft
engineer uses, is reflected in the audit logs.
Get refresh history
Delete admin usage dashboard
Delete usage metrics v2 package
Delete admin monitoring folder

Real-Time Analytics - The Real-Time Analytics engineer will have access to the
data in the KQL database that's linked to your request.

Data Engineering - The Data Engineering engineer will have access to the
following Spark logs linked to your request:
Driver logs
Event logs
Executor logs

Data Factory - The Data Factory engineer will have access to data pipeline
definitions linked to your request, if permission is granted.

Related content
Microsoft Purview Customer Lockbox
Microsoft 365 guidance for security & compliance

Security overview



Row-level security (RLS) with Power BI
Article • 04/26/2024

Row-level security (RLS) with Power BI can be used to restrict data access for given users.
Filters restrict data access at the row level, and you can define filters within roles. In the
Power BI service, users with access to a workspace have access to semantic models in
that workspace. RLS only restricts data access for users with Viewer permissions. It
doesn't apply to Admins, Members, or Contributors.

You can configure RLS for data models imported into Power BI with Power BI Desktop. You can also configure RLS on semantic models that use DirectQuery, such as SQL Server. For Analysis Services or Azure Analysis Services live connections, you configure row-level security in the model, not in Power BI. The security option doesn't show up for live connection semantic models.

Define roles and rules in Power BI Desktop


You can define roles and rules within Power BI Desktop. When you publish to Power BI,
you also publish the role definitions.

To define security roles:

1. Import data into your Power BI Desktop report, or configure a DirectQuery


connection.

Note

You can't define roles within Power BI Desktop for Analysis Services live
connections. You need to do that within the Analysis Services model.

2. From the Modeling tab, select Manage Roles.

3. From the Manage roles window, select Create.


4. Under Roles, provide a name for the role.

Note

You can't define a role with a comma, for example London,ParisRole.

5. Under Tables, select the table to which you want to apply a DAX (Data Analysis
Expression) rule.

6. In the Table filter DAX expression box, enter the DAX expression. The expression returns a value of true or false. For example: [Entity ID] = "Value".

Note

You can use username() within this expression. Be aware that username() has the format of DOMAIN\username within Power BI Desktop. Within the Power BI service and Power BI Report Server, it's in the format of the user's User Principal Name (UPN). Alternatively, you can use userprincipalname(), which always returns the user in the format of their user principal name, such as username@contoso.com.

7. After you've created the DAX expression, select the checkmark above the
expression box to validate the expression.
Note

In this expression box, use commas to separate DAX function arguments even
if you're using a locale that normally uses semicolon separators (e.g. French or
German).

8. Select Save.

You can't assign users to a role within Power BI Desktop. You assign them in the Power
BI service. You can enable dynamic security within Power BI Desktop by making use of
the username() or userprincipalname() DAX functions and having the proper
relationships configured.
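For example, a common dynamic-security pattern, assuming a hypothetical Salesperson table with an EmailAddress column that stores each user's UPN, is a single role whose table filter DAX expression is [EmailAddress] = USERPRINCIPALNAME(). Every viewer assigned to that role then sees only the rows whose EmailAddress matches their own sign-in, and correctly configured relationships propagate that filter to the related fact tables.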

By default, row-level security filtering uses single-directional filters, whether the


relationships are set to single direction or bi-directional. You can manually enable bi-
directional cross-filtering with row-level security by selecting the relationship and
checking the Apply security filter in both directions checkbox. Note that if a table takes
part in multiple bi-directional relationships, you can only select this option for one of
those relationships. Select this option when you've also implemented dynamic row-level
security at the server level, where row-level security is based on username or login ID.

For more information, see Bidirectional cross-filtering using DirectQuery in Power BI and
the Securing the Tabular BI Semantic Model technical article.

Define roles and rules in Power BI using enhanced row-level security editor (Preview)
You can quickly and easily define row-level security roles and filters within Power BI
using the enhanced row-level security editor. With this editor, you can toggle between
using the default drop-down interface and a DAX interface. When you publish to Power
BI, you also publish the role definitions.

To define security roles using the enhanced row-level security editor:

1. In Power BI Desktop, enable the preview by going to File > Options and settings > Options > Preview features and turning on Enhanced row-level security editor. Alternatively, you can use this editor in the service by editing your data model in the Power BI service.

2. Import data into your Power BI semantic model, or configure a DirectQuery


connection.

3. From the ribbon, select Manage roles.

4. From the Manage roles window, select New to create a new role.

5. Under Roles, provide a name for the role and press Enter.
6. Under Select tables, select the table you want to apply a row-level security filter to.

7. Under Filter data, use the default editor to define your roles. The expressions
created return a true or false value.

Note

Not all row-level security filters supported in Power BI can be defined using
the default editor. Limitations include expressions that today can only be
defined using DAX including dynamic rules such as username() or
userprincipalname(). To define roles using these filters, switch to the DAX editor.
8. Optionally select Switch to DAX editor to switch to using the DAX editor to define
your role. You can switch back to the default editor by selecting Switch to default
editor. All changes made in either editor interface persist when switching
interfaces when possible.

When a role defined in the DAX editor can't be represented in the default editor, attempting to switch to the default editor prompts a warning that switching editors might result in some information being lost. To keep this information, select Cancel and continue editing this role only in the DAX editor.

9. Select Save.

Validate the roles within Power BI Desktop


After you've created your roles, test the results of the roles within Power BI Desktop.

1. From the Modeling tab, select View as.

Screenshot of the Modeling tab, highlighting View as.

The View as roles window appears, where you see the roles you've created.
Screenshot of the View as roles window with None selected.

2. Select a role you created. Then choose OK to apply that role.

The report renders the data relevant for that role.

3. You can also select Other user and supply a given user.

Screenshot of the View as roles window with an example user entered.

It's best to supply the User Principal Name (UPN) because that's what the Power BI
service and Power BI Report Server use.

Within Power BI Desktop, Other user displays different results only if you're using
dynamic security based on your DAX expressions. In this case, you need to include
the username as well as the role.

4. Select OK.

The report renders based on what the RLS filters allow the user to see.

Note

The View as roles feature doesn't work for DirectQuery models with Single
Sign-On (SSO) enabled.

Manage security on your model


To manage security on your semantic model, open the workspace where you saved your
semantic model in the Power BI service and do the following steps:

1. In the Power BI service, select the More options menu for a semantic model. This
menu appears when you hover on a semantic model name, whether you select it
from the navigation menu or the workspace page.
2. Select Security.

Security takes you to the Row-Level Security page where you add members to a role you created. Contributor (and higher workspace roles) will see Security and can assign users to a role.

Working with members

Add members
In the Power BI service, you can add a member to the role by typing in the email address
or name of the user or security group. You can't add Groups created in Power BI. You
can add members external to your organization.

You can use the following groups to set up row-level security.

Distribution Group
Mail-enabled Group
Microsoft Entra Security Group

Note that Microsoft 365 groups aren't supported and can't be added to any roles.

You can also see how many members are part of the role by the number in parentheses
next to the role name, or next to Members.

Remove members
You can remove members by selecting the X next to their name.

Validating the role within the Power BI service


You can validate that the role you defined is working correctly in the Power BI service by
testing the role.

1. Select More options (...) next to the role.


2. Select Test as role.
You're redirected to the report that was published from Power BI Desktop with this
semantic model, if it exists. Dashboards aren't available for testing using the Test as role
option.

In the page header, the role being applied is shown. Test other roles, a combination of
roles, or a specific person by selecting Now viewing as. Here you see important
permissions details pertaining to the individual or role being tested. For more
information about how permissions interact with RLS, see RLS user experience.

Test other reports connected to the semantic model by selecting Viewing in the page
header. You can only test reports located in the same workspace as your semantic
model.
To return to normal viewing, select Back to Row-Level Security.

Note

The Test as role feature doesn't work for DirectQuery models with Single Sign-On
(SSO) enabled. Additionally, not all aspects of a report can be validated in the Test
as role feature including Q&A visualizations, Quick insights visualizations, and
Copilot.

Using the username() or userprincipalname() DAX function
You can take advantage of the DAX functions username() or userprincipalname() within
your dataset. You can use them within expressions in Power BI Desktop. When you
publish your model, it will be used within the Power BI service.

Within Power BI Desktop, username() will return a user in the format of DOMAIN\User and userprincipalname() will return a user in the format of username@contoso.com.

Within the Power BI service, username() and userprincipalname() will both return the user's User Principal Name (UPN). This looks similar to an email address.

Using RLS with workspaces in Power BI


If you publish your Power BI Desktop report to a workspace in the Power BI service, the
RLS roles are applied to members who are assigned to the Viewer role in the workspace.
Even if Viewers are given Build permissions to the semantic model, RLS still applies. For
example, if Viewers with Build permissions use Analyze in Excel, their view of the data is
restricted by RLS. Workspace members assigned Admin, Member, or Contributor have
edit permission for the semantic model and, therefore, RLS doesn’t apply to them. If you
want RLS to apply to people in a workspace, you can only assign them the Viewer role.
Read more about roles in workspaces.

Considerations and limitations


The current limitations for row-level security on cloud models are as follows:

If you previously defined roles and rules in the Power BI service, you must re-create
them in Power BI Desktop.
You can define RLS only on the semantic models created with Power BI Desktop. If
you want to enable RLS for semantic models created with Excel, you must convert
your files into Power BI Desktop (PBIX) files first. Learn more.
Service principals can't be added to an RLS role. Accordingly, RLS isn't applied for
apps using a service principal as the final effective identity.
Only Import and DirectQuery connections are supported. Live connections to
Analysis Services are handled in the on-premises model.
The Test as role/View as role feature doesn't work for DirectQuery models with
single sign-on (SSO) enabled.
The Test as role/View as role feature shows only reports from the semantic model's workspace.
The Test as role/View as role feature doesn't work for paginated reports.

Keep in mind that if a Power BI report references a row with RLS configured, then the same message displays as for a deleted or non-existing field. To these users, it looks like the report is broken.

FAQ
Question: What if I have previously created roles and rules for a dataset in the Power BI
service? Do they still work if I do nothing?
Answer: No, visuals won't render properly. You have to re-create the roles and rules
within Power BI Desktop and then publish to the Power BI service.

Question: Can I create these roles for Analysis Services data sources?
Answer: Yes, if you imported the data into Power BI Desktop. If you're using a live
connection, you can't configure RLS within the Power BI service. You define RLS in the
Analysis Services model on-premises.

Question: Can I use RLS to limit the columns or measures accessible by my users?
Answer: No, if a user has access to a particular row of data, they can see all the columns
of data for that row. To restrict access to columns and column metadata, consider using
object-level security.

Question: Does RLS let me hide detailed data but give access to data summarized in
visuals?
Answer: No, you secure individual rows of data, but users can always see either the
details or the summarized data.

Question: My data source already has security roles defined (for example SQL Server
roles or SAP BW roles). What's the relationship between these roles and RLS?
Answer: The answer depends on whether you're importing data or using DirectQuery. If
you're importing data into your Power BI dataset, the security roles in your data source
aren't used. In this case, you should define RLS to enforce security rules for users who
connect in Power BI. If you're using DirectQuery, the security roles in your data source
are used. When a user opens a report, Power BI sends a query to the underlying data
source, which applies security rules to the data based on the user's credentials.

Question: Can a user belong to more than one role?


Answer: A user can belong to multiple roles, and the roles are additive. For example, if a
user belongs to both the "Sales" and "Marketing" roles, they can see data for both these
roles.

Related content
Restrict data access with row-level security (RLS) for Power BI Desktop
Row-level security (RLS) guidance in Power BI Desktop
Power BI implementation planning: Report consumer security planning
RLS for Embedded scenarios for ISVs

Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Object-level security (OLS)
Article • 04/26/2024

Object-level security (OLS) enables model authors to secure specific tables or columns
from report viewers. For example, a column that includes personal data can be restricted
so that only certain viewers can see and interact with it. In addition, you can also restrict
object names and metadata. This added layer of security prevents users without the
appropriate access levels from discovering business critical or sensitive personal
information like employee or financial records. For viewers that don’t have the required
permission, it's as if the secured tables or columns don't exist.

Create a report that uses OLS


Like RLS, OLS is also defined within model roles. Currently, you can't create OLS
definitions natively in Power BI Desktop.

To create roles on Power BI Desktop semantic models, use external tools such as Tabular Editor.

Configure object-level security using Tabular Editor


1. In Power BI Desktop, create the model that will define your OLS rules.

2. On the External Tools ribbon, select Tabular Editor. If you don't see the Tabular Editor button, install the program. When open, Tabular Editor will automatically connect to your model.

3. In the Model view, select the drop-down menu under Roles. The roles you created
in step one will appear.
4. Select the role you want to enable an OLS definition for, and expand the Table
Permissions.

5. Set the permissions for the table or column to None or Read.

None: OLS is enforced and the table or column will be hidden from that role
Read: The table or column will be visible to that role

To secure the whole table


Set categories under Table permissions to None.

6. After you define object-level security for the roles, save your changes.

7. In Power BI Desktop, publish your semantic model to the Power BI Service.

8. In the Power BI Service, navigate to the Security page by selecting the more
options menu on the semantic model, and assign members or groups to their
appropriate roles.

The OLS rules are now defined. Users without the required permission will receive a
message that the field can't be found for all report visuals using that field.
Considerations and limitations
OLS only applies to Viewers in a workspace. Workspace members assigned Admin,
Member, or Contributor have edit permission for the semantic model and,
therefore, OLS doesn’t apply to them. Read more about roles in workspaces.

Semantic models with OLS configured for one or more table or column objects
aren't supported with these Power BI features:
Q&A visualizations
Quick insights visualizations
Smart narrative visualizations
Excel Data Types gallery

See other OLS restrictions

Related content
Object-level security in Azure Analysis Services
Power BI implementation planning: Report consumer security planning
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI



Reliability in Microsoft Fabric
Article • 12/13/2023

This article describes reliability support in Microsoft Fabric, and both regional resiliency
with availability zones and cross-region recovery and business continuity. For a more
detailed overview of reliability in Azure, see Azure reliability.

Availability zone support


Azure availability zones are at least three physically separate groups of datacenters
within each Azure region. Datacenters within each zone are equipped with independent
power, cooling, and networking infrastructure. In the case of a local zone failure,
availability zones are designed so that if one zone is affected, regional services,
capacity, and high availability are supported by the remaining two zones.

Failures can range from software and hardware failures to events such as earthquakes,
floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation
of Azure services. For more detailed information on availability zones in Azure, see
Regions and availability zones.

Azure availability zones-enabled services are designed to provide the right level of
reliability and flexibility. They can be configured in two ways. They can be either zone
redundant, with automatic replication across zones, or zonal, with instances pinned to a
specific zone. You can also combine these approaches. For more information on zonal
vs. zone-redundant architecture, see Recommendations for using availability zones and
regions.

Fabric makes commercially reasonable efforts to support zone-redundant availability


zones, where resources automatically replicate across zones, without any need for you
to set up or configure.

Prerequisites
Fabric currently provides partial availability-zone support in a limited number of
regions. This partial availability-zone support covers experiences (and/or certain
functionalities within an experience).
Experiences such as Data Engineering, Data Science, and Event Streams don't
support availability zones.
Zone availability may or may not be available for Fabric experiences or
features/functionalities that are in preview.
On-premises gateways and large semantic models in Power BI don't support
availability zones.
Data Factory (pipelines) supports availability zones in West Europe, but new or in-progress pipeline runs might fail in the case of a zone outage.

Supported regions
Fabric makes commercially reasonable efforts to provide availability zone support in
various regions as follows:

The experiences with availability zone support are Power BI, Datamarts, Data Warehouses, Real-Time Analytics, and Data Factory (pipelines); which of these are supported varies by region.

Americas: Brazil South, Canada Central, Central US, East US, East US 2, South Central US, West US 2, West US 3
Europe: France Central, Germany West Central, North Europe, UK South, West Europe, Norway East
Middle East: Qatar Central
Africa: South Africa North
Asia Pacific: Australia East, Japan East, Southeast Asia

Zone down experience


During a zone-wide outage, no action is required during zone recovery. Fabric
capabilities in regions listed in supported regions self-heal and rebalance automatically
to take advantage of the healthy zone.

Important

While Microsoft strives to provide uniform and consistent availability zone support,
in some cases of availability-zone failure, Fabric capacities located in Azure regions
with higher customer demand fluctuations might experience higher than normal
latency.

Cross-region disaster recovery and business


continuity
Disaster recovery (DR) is about recovering from high-impact events, such as natural
disasters or failed deployments that result in downtime and data loss. Regardless of the
cause, the best remedy for a disaster is a well-defined and tested DR plan and an
application design that actively supports DR. Before you begin to think about creating
your disaster recovery plan, see Recommendations for designing a disaster recovery
strategy.
When it comes to DR, Microsoft uses the shared responsibility model. In a shared
responsibility model, Microsoft ensures that the baseline infrastructure and platform
services are available. At the same time, many Azure services don't automatically
replicate data or fall back from a failed region to cross-replicate to another enabled
region. For those services, you are responsible for setting up a disaster recovery plan
that works for your workload. Most services that run on Azure platform as a service
(PaaS) offerings provide features and guidance to support DR and you can use service-
specific features to support fast recovery to help develop your DR plan.

This section describes a disaster recovery plan for Fabric that's designed to help your
organization keep its data safe and accessible when an unplanned regional disaster
occurs. The plan covers the following topics:

Cross-region replication: Fabric offers cross-region replication for data stored in


OneLake. You can opt in or out of this feature based on your requirements.

Data access after disaster: In a regional disaster scenario, Fabric guarantees data
access, with certain limitations. While the creation or modification of new items is
restricted after failover, the primary focus remains on ensuring that existing data
remains accessible and intact.

Guidance for recovery: Fabric provides a structured set of instructions to guide


you through the recovery process. The structured guidance makes it easier for you
to transition back to regular operations.

Power BI, now a part of the Fabric, has a solid disaster recovery system in place and
offers the following features:

BCDR as default: Power BI automatically includes disaster recovery capabilities in


its default offering. You don't need to opt in or activate this feature separately.

Cross-region replication: Power BI uses Azure storage geo-redundant replication


and Azure SQL geo-redundant replication to guarantee that backup instances exist
in other regions and can be used. This means that data is duplicated across
different regions, enhancing its availability, and reducing the risks associated with
regional outages.

Continued services and access after disaster: Even during disruptive events, Power
BI items remain accessible in read-only mode. Items include semantic models,
reports, and dashboards, ensuring that businesses can continue their analysis and
decision-making processes without significant hindrance.

For more information, see the Power BI high availability, failover, and disaster recovery FAQ.
Important

For customers whose home regions don't have an Azure pair region and are
affected by a disaster, the ability to utilize Fabric capacities may be compromised—
even if the data within those capacities is replicated. This limitation is tied to the
home region’s infrastructure, essential for the capacities' operation.

Home region and capacity functionality


For effective disaster recovery planning, it's critical that you understand the relationship
between your home region and capacity locations. Understanding home region and
capacity locations helps you make strategic selections of capacity regions, as well as the
corresponding replication and recovery processes.

The home region for your organization's tenancy and data storage is set to the billing
address location of the first user that signs up. For further details on tenancy setup, go
to Power BI implementation planning: Tenant setup. When you create new capacities,
your data storage is set to the home region by default. If you wish to change your data
storage region to another region, you'll need to enable Multi-Geo, a Fabric Premium
feature.

Important

Choosing a different region for your capacity doesn't entirely relocate all of your
data to that region. Some data elements still remain stored in the home region. To
see which data remains in the home region and which data is stored in the Multi-
Geo enabled region, see Configure Multi-Geo support for Fabric Premium.

In the case of a home region that doesn't have a paired region, capacities in any
Multi-Geo enabled region may face operational issues if the home region
encounters a disaster, as the core service functionality is tethered to the home
region.

If you select a Multi-Geo enabled region within the EU, it's guaranteed that your
data is stored within the EU data boundary.

To learn how to identify your home region, see Find your Fabric home region.

Disaster recovery capacity setting


Fabric provides a disaster recovery switch on the capacity settings page. It's available
where Azure regional pairings align with Fabric's service presence. Here are the specifics
of this switch:

Role access: Only users with the capacity admin role or higher can use this switch.

Granularity: The granularity of the switch is the capacity level. It's available for both
Premium and Fabric capacities.

Data scope: The disaster recovery toggle specifically addresses OneLake data,
which includes Lakehouse and Warehouse data. The switch does not influence
your data stored outside OneLake.

BCDR continuity for Power BI: While disaster recovery for OneLake data can be
toggled on and off, BCDR for Power BI is always supported, regardless of whether
the switch is on or off.

Frequency: Once you change the disaster recovery capacity setting, you must wait
30 days before you can alter it again. The wait period is in place to maintain
stability and prevent constant toggling.

Note

After turning on the disaster recovery capacity setting, it can take up to one week
for the data to start replicating.

Data replication
When you turn on the disaster recovery capacity setting, cross-region replication is
enabled as a disaster recovery capability for OneLake data. The Fabric platform aligns
with Azure regions to provision the geo-redundancy pairs. However, some regions don't
have an Azure pair region, or the pair region doesn't support Fabric. For these regions,
data replication isn't available. For more information, see Regions with availability zones
and no region pair and Fabric region availability.

Note

While Fabric offers a data replication solution in OneLake to support disaster
recovery, there are notable limitations. For instance, the data of KQL databases and
querysets is stored externally to OneLake, which means that a separate disaster
recovery approach is needed. Refer to the rest of this document for details of the
disaster recovery approach for each Fabric item.

Billing
The disaster recovery feature in Fabric enables geo-replication of your data for
enhanced security and reliability. This feature consumes more storage and transactions,
which are billed as BCDR Storage and BCDR Operations respectively. You can monitor
and manage these costs in the Microsoft Fabric Capacity Metrics app, where they appear
as separate line items.

For an exhaustive breakdown of all associated disaster recovery costs to help you plan
and budget accordingly, see OneLake compute and storage consumption.

Set up disaster recovery


While Fabric provides disaster recovery features to support data resiliency, you must
follow certain manual steps to restore service during disruptions. This section details the
actions you should take to prepare for potential disruptions.

Phase 1: Prepare

Activate the disaster recovery capacity settings: Regularly review and set the
disaster recovery capacity settings to make sure they meet your protection and
performance needs.

Create data backups: Copy critical data stored outside of OneLake to another
region in a way that aligns with your disaster recovery plan.
Phase 2: Disaster failover
When a major disaster renders the primary region unrecoverable, Microsoft Fabric
initiates a regional failover. Access to the Fabric portal is unavailable until the failover is
complete and a notification is posted on the Microsoft Fabric support page .

The time it takes for failover to complete can vary, although it typically takes less than
one hour. Once failover is complete, here's what you can expect:

Fabric portal: You can access the portal, and read operations such as browsing
existing workspaces and items continue to work. All write operations, such as
creating or modifying a workspace, are paused.

Power BI: You can perform read operations, such as displaying dashboards and
reports. Refreshes, report publish operations, dashboard and report modifications,
and other operations that require changes to metadata aren't supported.

Lakehouse/Warehouse: You can't open these items, but files can be accessed via
OneLake APIs or tools.

Spark Job Definition: You can't open Spark job definitions, but code files can be
accessed via OneLake APIs or tools. Any metadata or configuration will be saved
after failover.

Notebook: You can't open notebooks, and code content won't be saved after the
disaster.

ML Model/Experiment: You can't open ML models or experiments. Code content
and metadata, such as run metrics and configurations, won't be saved after the
disaster.

Dataflow Gen2/Pipeline/Eventstream: You can't open these items, but you can use
supported disaster recovery destinations (lakehouses or warehouses) to protect
data.

KQL Database/Queryset: You won't be able to access KQL databases and query
sets after failover. More prerequisite steps are required to protect the data in KQL
databases and query sets.

In a disaster scenario, the Fabric portal and Power BI are in read-only mode, and other
Fabric items are unavailable. However, you can access the data stored in OneLake by
using APIs or third-party tools, and you retain the ability to perform read-write
operations on that data. This ensures that critical data remains accessible and
modifiable, and mitigates potential disruption of your business operations.
OneLake data remains accessible through multiple channels:

OneLake ADLS Gen2 API: See Connecting to Microsoft OneLake

Examples of tools that can connect to OneLake data:

Azure Storage Explorer: See Integrate OneLake with Azure Storage Explorer

OneLake File Explorer: See Use OneLake file explorer to access Fabric data
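
For example, a minimal Python sketch for enumerating and downloading OneLake files through the ADLS Gen2 API might look like the following. The workspace and item names are placeholder assumptions, and the azure-identity and azure-storage-file-datalake packages are required:

# Minimal sketch (assumed names): list and download OneLake files through the
# ADLS Gen2 API after a failover.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# OneLake exposes an ADLS Gen2-compatible endpoint; the workspace acts as the
# file system (container), and Fabric items are top-level directories.
service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("C1.W1")  # placeholder workspace name

# Enumerate files under a lakehouse item and download each one locally.
for path in fs.get_paths(path="LH1.Lakehouse/Files"):  # placeholder item name
    if not path.is_directory:
        data = fs.get_file_client(path.name).download_file().readall()
        with open(path.name.replace("/", "_"), "wb") as f:
            f.write(data)

Because OneLake exposes the same DFS endpoint shape as ADLS Gen2, most tools and SDKs that speak that API can be pointed at onelake.dfs.fabric.microsoft.com without other changes.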

Phase 3: Recovery plan


While Fabric ensures that data remains accessible after a disaster, you can also act to
fully restore your services to the state they were in before the incident. This section
provides a step-by-step guide to help you through the recovery process.

Recovery steps
1. Create a new Fabric capacity in any region after a disaster. Given the high demand
during such events, we recommend selecting a region outside your primary geo to
increase the likelihood of compute service availability. For information about creating a
capacity, see Buy a Microsoft Fabric subscription.

2. Create workspaces in the newly created capacity. If necessary, use the same names
as the old workspaces.

3. Create items with the same names as the ones you want to recover. This step is
important if you use the custom script to recover lakehouses and warehouses.

4. Restore the items. For each item, follow the relevant section in the Experience-
specific disaster recovery guidance to restore the item.

Next steps
Experience-specific disaster recovery guidance
Reliability in Azure

Experience-specific disaster recovery guidance
Article • 11/15/2023

This document provides experience-specific guidance for recovering your Fabric data in
the event of a regional disaster.

Sample scenario
A number of the guidance sections in this document use the following sample scenario
for purposes of explanation and illustration. Refer back to this scenario as necessary.

Let's say you have a capacity C1 in region A that has a workspace W1. If you've turned
on disaster recovery for capacity C1, OneLake data will be replicated to a backup in
region B. If region A faces disruptions, the Fabric service in C1 fails over to region B.

The following image illustrates this scenario. The box on the left shows the disrupted
region. The box in the middle represents the continued availability of the data after
failover, and the box on the right shows the fully restored situation after the customer
acts to return their services to full function.

Here's the general recovery plan:

1. Create a new Fabric capacity C2 in a new region.

2. Create a new W2 workspace in C2, including its corresponding items with the same
names as in C1.W1.

3. Copy data from the disrupted C1.W1 to C2.W2.

4. Follow the dedicated instructions for each component to restore items to their full
function.
Experience-specific recovery plans
The following sections provide step-by-step guides for each Fabric experience to help
customers through the recovery process.

Data Engineering
This guide walks you through the recovery procedures for the Data Engineering
experience. It covers lakehouses, notebooks, and Spark job definitions.

Lakehouse
Lakehouses from the original region remain unavailable to customers. To recover a
lakehouse, customers can re-create it in workspace C2.W2. We recommend two
approaches for recovering lakehouses:

Approach 1: Using a custom script to copy Lakehouse Delta tables and files
Customers can recreate lakehouses by using a custom Scala script.

1. Create the lakehouse (for example, LH1) in the newly created workspace C2.W2.

2. Create a new notebook in the workspace C2.W2.

3. To recover the tables and files from the original lakehouse, you need to use the
ABFS path to access the data (see Connecting to Microsoft OneLake). You can use
the code example below (see Introduction to Microsoft Spark Utilities) in the
notebook to get the ABFS paths of files and tables from the original lakehouse.
(Replace C1.W1 with the actual workspace name)

mssparkutils.fs.ls('abfs[s]://<C1.W1>@onelake.dfs.fabric.microsoft.com/<item>.<itemtype>/<Tables>/<fileName>')

4. Use the following code example to copy tables and files to the newly created
lakehouse.

a. For Delta tables, you need to copy tables one at a time to recover them in the
new lakehouse. In the case of Lakehouse files, you can copy the complete file
structure with all the underlying folders in a single execution.
b. Reach out to the support team for the failover timestamp required in the
script.

%%spark
val source = "abfs path to original Lakehouse file or table directory"
val destination = "abfs path to new Lakehouse file or table directory"
val timestamp = 0L // replace with the failover timestamp provided by Support

// Copy the table (or file) directory from the original lakehouse to the new one.
mssparkutils.fs.cp(source, destination, true)

// Remove any Delta log entries written after the failover timestamp, so that the
// table reflects a consistent pre-failover state.
val filesToDelete = mssparkutils.fs.ls(s"$source/_delta_log")
  .filter(sf => sf.isFile && sf.modifyTime > timestamp)

for (fileToDelete <- filesToDelete) {
  val destFileToDelete = s"$destination/_delta_log/${fileToDelete.name}"
  println(s"Deleting file $destFileToDelete")
  mssparkutils.fs.rm(destFileToDelete, false)
}

// Reset the checkpoint pointer so the Delta reader replays the log from the start.
mssparkutils.fs.write(s"$destination/_delta_log/_last_checkpoint", "", true)

5. Once you run the script, the tables will appear in the new lakehouse.

Approach 2: Use Azure Storage Explorer to copy files and tables

To recover only specific Lakehouse files or tables from the original lakehouse, use Azure
Storage Explorer. Refer to Integrate OneLake with Azure Storage Explorer for detailed
steps. For large data sizes, use Approach 1.

Note

The two approaches described above recover both the metadata and data for
Delta-formatted tables, because the metadata is co-located and stored with the
data in OneLake. For non-Delta formatted tables (such as CSV or Parquet) that are
created by using Spark Data Definition Language (DDL) scripts or commands, the
user is responsible for maintaining and re-running the Spark DDL scripts or
commands to recover them.
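
As a hypothetical illustration, re-running a saved Spark DDL command for a CSV-backed table in the recovered lakehouse might look like the following sketch. The table name, options, and ABFS path are assumptions, not values from this scenario:

# Minimal sketch (assumed names and path): re-create a non-Delta table by
# re-running the original Spark DDL against the recovered lakehouse location.
# Runs in a Fabric notebook, where the `spark` session is predefined.
spark.sql("""
    CREATE TABLE IF NOT EXISTS recovered_sales_csv
    USING CSV
    OPTIONS (header 'true', inferSchema 'true')
    LOCATION 'abfss://C2.W2@onelake.dfs.fabric.microsoft.com/LH1.Lakehouse/Files/raw/sales'
""")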

Notebook
Notebooks from the primary region remain unavailable to customers, and the code in
notebooks isn't replicated to the secondary region. There are two approaches to
recovering notebook code content in the new region.

Approach 1: User-managed redundancy with Git integration (in public preview)

The easiest and quickest approach is to use Fabric Git integration and synchronize your
notebook with your Azure DevOps (ADO) repo. After the service fails over to another
region, you can use the repo to rebuild the notebook in the new workspace you created.

1. Set up Git integration and select Connect and sync with ADO repo.

The following image shows the synced notebook.

2. Recover the notebook from the ADO repo.

a. In the newly created workspace, connect to your Azure DevOps repo again.
b. Select the Source control button. Then select the relevant branch of the repo.
Then select Update all. The original notebook will appear.
c. If the original notebook has a default lakehouse, users can refer to the
Lakehouse section to recover the lakehouse and then connect the newly
recovered lakehouse to the newly recovered notebook.

d. The Git integration doesn't support syncing files, folders, or notebook snapshots
in the notebook resource explorer.

   If the original notebook has files in the notebook resource explorer:

   i. Save the files or folders to a local disk or to some other location.

   ii. Re-upload the files from your local disk or cloud drive to the recovered
   notebook.

   If the original notebook has a notebook snapshot, also save the notebook
   snapshot to your own version control system or local disk.
For more information about Git integration, see Introduction to Git integration.

Approach 2: Manual approach to backing up code content


If you don't take the Git integration approach, you can save the latest version of your
code, files in the resource explorer, and notebook snapshot in a version control system
such as Git, and manually recover the notebook content after a disaster:

1. Use the "Import notebook" feature to import the notebook code you want to
recover.
2. After import, go to your desired workspace (for example, "C2.W2") to access it.

3. If the original notebook has a default lakehouse, refer to the Lakehouse section.
Then connect the newly recovered lakehouse (that has the same content as the
original default lakehouse) to the newly recovered notebook.

4. If the original notebook has files or folders in the resource explorer, re-upload the
files or folders saved in the user's version control system.

Spark Job Definition


Spark job definitions (SJD) from the primary region remain unavailable to customers,
but the main definition file and reference files are replicated to the secondary region
via OneLake. To recover an SJD in the new region, follow the manual steps described
below. Note that historical runs of the SJD won't be recovered.

You can recover the SJD items by copying the code from the original region by using
Azure Storage Explorer and manually reconnecting Lakehouse references after the
disaster.

1. Create a new SJD item (for example, SJD1) in the new workspace C2.W2, with the
same settings and configurations as the original SJD item (for example, language and
environment).

2. Use Azure Storage Explorer to copy Libs, Mains and Snapshots from the original
SJD item to the new SJD item.
3. The code content appears in the newly created SJD. You'll need to manually add
the newly recovered lakehouse reference to the job (refer to the Lakehouse recovery
steps) and manually reenter the original command-line arguments.

Now you can run or schedule your newly recovered SJD.

For details about Azure Storage Explorer, see Integrate OneLake with Azure Storage
Explorer.
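
As an alternative to Azure Storage Explorer, a notebook in the new workspace could copy the same folders programmatically. The following Python sketch is illustrative only; the workspace names, the SJD item name, and the exact folder layout are assumptions based on the steps above:

# Minimal sketch (assumed paths): copy SJD artifact folders from the original
# workspace to the new one with mssparkutils in a Fabric notebook, where
# mssparkutils is available by default.
src_item = "abfss://C1.W1@onelake.dfs.fabric.microsoft.com/SJD1.SparkJobDefinition"
dst_item = "abfss://C2.W2@onelake.dfs.fabric.microsoft.com/SJD1.SparkJobDefinition"

# Folder names as referenced in step 2 above; adjust to match your item.
for folder in ["Libs", "Mains", "Snapshots"]:
    mssparkutils.fs.cp(f"{src_item}/{folder}", f"{dst_item}/{folder}", True)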

Data Science
This guide walks you through the recovery procedures for the Data Science experience.
It covers ML models and experiments.

ML Model and Experiment


Data Science items from the primary region remain unavailable to customers, and the
content and metadata in ML models and experiments won't be replicated to the
secondary region. To fully recover them in the new region, save the code content in a
version control system (such as Git), and manually rerun the code content after the
disaster.

1. Recover the notebook. Refer to the Notebook recovery steps.

2. Configuration, historical run metrics, and metadata won't be replicated to the
paired region. You'll have to rerun each version of your data science code to fully
recover ML models and experiments after the disaster.
Data Warehouse
This guide walks you through the recovery procedures for the Data Warehouse
experience. It covers warehouses.

Warehouse
Warehouses from the original region remain unavailable to customers. To recover
warehouses, use the following two steps.

1. Create a new interim lakehouse in workspace C2.W2 for the data you'll copy over
from the original warehouse.

2. Populate the warehouse's Delta tables by using the warehouse explorer and
T-SQL capabilities (see Tables in data warehousing in Microsoft Fabric).

Note

It's recommended that you keep your Warehouse code (schema, table, view, stored
procedure, function definitions, and security codes) versioned and saved in a safe
location (such as Git) according to your development practices.

Data ingestion via Lakehouse and T-SQL code


In newly created workspace C2.W2:

1. Create an interim lakehouse "LH2" in C2.W2.

2. Recover the Delta tables in the interim lakehouse from the original warehouse by
following the Lakehouse recovery steps.

3. Create a new warehouse "WH2" in C2.W2.

4. Connect the interim lakehouse in your warehouse explorer.


5. Depending on how you deploy table definitions prior to data import, the actual
T-SQL used for imports can vary. You can use an INSERT INTO, SELECT INTO, or
CREATE TABLE AS SELECT approach to recover warehouse tables from lakehouses.
The following example uses INSERT INTO. (If you use the code below, replace the
sample table and column names with your own.)

USE WH2
GO

INSERT INTO [dbo].[aggregate_sale_by_date_city]
    ([Date], [City], [StateProvince], [SalesTerritory],
     [SumOfTotalExcludingTax], [SumOfTaxAmount],
     [SumOfTotalIncludingTax], [SumOfProfit])
SELECT [Date], [City], [StateProvince], [SalesTerritory],
    [SumOfTotalExcludingTax], [SumOfTaxAmount],
    [SumOfTotalIncludingTax], [SumOfProfit]
FROM [LH2].[dbo].[aggregate_sale_by_date_city];
GO

6. Lastly, change the connection string in applications using your Fabric warehouse.
Note

For customers who need cross-regional disaster recovery and fully automated
business continuity, we recommend keeping two Fabric Warehouse setups in
separate Fabric regions and maintaining code and data parity by doing regular
deployments and data ingestion to both sites.

Data Factory
Data Factory items from the primary region remain unavailable to customers, and the
settings and configuration in data pipelines or Dataflow Gen2 items aren't replicated
to the secondary region. To recover these items in the event of a regional failure, you
need to recreate your Data Integration items in another workspace in a different
region. The following sections outline the details.

Dataflows Gen2
If you want to recover a Dataflow Gen2 item in the new region, you need to export a
PQT file to a version control system such as Git and then manually recover the Dataflow
Gen2 content after the disaster.

1. From your Dataflow Gen2 item, in the Home tab of the Power Query editor, select
Export template.

2. In the Export template dialog, enter a name (mandatory) and description (optional)
for this template. When done, select OK.
3. After the disaster, create a new Dataflow Gen2 item in the new workspace "C2.W2".

4. From the current view pane of the Power Query editor, select Import from a Power
Query template.

5. In the Open dialog, browse to your default downloads folder and select the .pqt file
you saved in the previous steps. Then select Open.

6. The template is then imported into your new Dataflow Gen2 item.
Data Pipelines
Customers can't access data pipelines in the event of regional disaster, and the
configurations aren't replicated to the paired region. We recommend building your
critical data pipelines in multiple workspaces across different regions.

Real-Time Analytics
This guide walks you through the recovery procedures for the Real-Time Analytics
experience. It covers KQL databases/querysets and eventstreams.

KQL Database/Queryset
KQL database/queryset users must take proactive measures to protect against a
regional disaster. The following approach ensures that, in the event of a regional
disaster, the data in your KQL databases and querysets remains safe and accessible.

Use the following steps to guarantee an effective disaster recovery solution for KQL
databases and querysets.

1. Establish independent KQL databases: Configure two or more independent KQL
databases/querysets on dedicated Fabric capacities. These should be set up across
two different Azure regions (preferably Azure-paired regions) to maximize
resilience.

2. Replicate management activities: Any management action taken in one KQL
database should be mirrored in the other. This ensures that both databases remain
in sync. Key activities to replicate include:

Tables: Make sure that the table structures and schema definitions are
consistent across the databases.

Mapping: Duplicate any required mappings. Make sure that data sources and
destinations align correctly.

Policies: Make sure that both databases have similar data retention, access,
and other relevant policies.

3. Manage authentication and authorization: For each replica, set up the required
permissions. Make sure that proper authorization levels are established, granting
access to the required personnel while maintaining security standards.
4. Parallel data ingestion: To keep the data consistent and ready in multiple regions,
load the same dataset into each KQL database at the same time as you ingest it.
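
The following Python sketch illustrates the parallel-ingestion idea in step 4 by ingesting the same dataset into two independent KQL databases using the azure-kusto-ingest package. The ingestion URIs, database, and table names are placeholder assumptions:

# Minimal sketch (assumed endpoints and names): ingest the same batch into two
# independent KQL databases in different regions to keep replicas in sync.
import pandas as pd
from azure.kusto.data import KustoConnectionStringBuilder
from azure.kusto.ingest import QueuedIngestClient, IngestionProperties

# Placeholder ingestion endpoints, one per region.
replicas = [
    "https://ingest-<cluster-region-a>.kusto.windows.net",
    "https://ingest-<cluster-region-b>.kusto.windows.net",
]

# Example batch; in practice this is the dataset you're ingesting.
df = pd.DataFrame({"DeviceId": ["d1", "d2"], "Reading": [21.4, 19.8]})

for ingest_uri in replicas:
    kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(ingest_uri)
    client = QueuedIngestClient(kcsb)
    props = IngestionProperties(database="TelemetryDB", table="Events")
    client.ingest_from_dataframe(df, ingestion_properties=props)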

Eventstream
An eventstream is a centralized place in the Fabric platform for capturing, transforming,
and routing real-time events to various destinations (for example, lakehouses, KQL
databases/querysets) with a no-code experience. As long as the destinations are
supported by disaster recovery, eventstreams won't lose data. Therefore, customers
should use the disaster recovery capabilities of those destination systems to guarantee
data availability.

Customers can also achieve geo-redundancy by deploying identical eventstream
workloads in multiple Azure regions as part of a multi-site active/active strategy. With a
multi-site active/active approach, customers can access their workload in any of the
deployed regions. This is the most complex and costly approach to disaster recovery,
but it can reduce the recovery time to near zero in most situations. To be fully
geo-redundant, customers can:

1. Create replicas of their data sources in different regions.

2. Create Eventstream items in corresponding regions.

3. Connect these new items to the identical data sources.

4. Add identical destinations for each eventstream in different regions.

Related information
Microsoft Fabric disaster recovery guide



Microsoft Fabric end-to-end security scenario
Article • 05/23/2024

Security is a key aspect of any data analytics solution, especially when it involves
sensitive or confidential data. For this reason, Microsoft Fabric provides a comprehensive
set of security features that enables you to protect your data at rest and in transit, as
well as control access and permissions for your users and applications.

In this article, you'll learn about Fabric security concepts and features that can help you
confidently build your own analytical solution with Fabric.

Background
This article presents a scenario where you're a data engineer who works for a healthcare
organization in the United States. The organization collects and analyzes patient data
that's sourced from various systems, including electronic health records, lab results,
insurance claims, and wearable devices.

You plan to build a lakehouse by using the medallion architecture in Fabric, which
consists of three layers: bronze, silver, and gold.

The bronze layer stores the raw data as it arrives from the data sources.
The silver layer applies data quality checks and transformations to prepare the data
for analysis.
The gold layer provides aggregated and enriched data for reporting and
visualization.

While some data sources are located on your on-premises network, others are behind
firewalls and require secure, authenticated access. There are also some data sources that
are managed in Azure, such as Azure SQL Database and Azure Storage. You need to
connect to these Azure data sources in a way that doesn't expose data to the public
internet.

You've decided to use Fabric because it can securely ingest, store, process, and analyze
your data in the cloud. Importantly, it does so while complying with the regulations of
your industry and policies of your organization.

Because Fabric is software as a service (SaaS), you don't need to provision individual
resources, such as storage or compute resources. All you need is a Fabric capacity.
You need to set up data access requirements. Specifically, you need to ensure that only
you and your fellow data engineers have access to the data in the bronze and silver
layers of the lakehouse. These layers are where you plan to perform data cleansing,
validation, transformation, and enrichment. You also need to restrict access to the data
in the gold layer. Only authorized users, including data analysts and business users,
should have access to the gold layer. They require this access to use the data for various
analytical purposes, such as reporting, machine learning, and predictive analytics. Data
access needs to be further restricted by the role and department of the user.

Connect to Fabric (inbound protection)


You first set up inbound protection, which is concerned with how you and other users
sign in and have access to Fabric.

Because Fabric is deployed to a Microsoft Entra tenant, authentication and authorization
are handled by Microsoft Entra. You sign in with a Microsoft Entra organization account
(work or school account). Next, you consider how other users will connect to Fabric.

The Microsoft Entra tenant is an identity security boundary that's under the control of
your IT department. Within this security boundary, the administration of Microsoft Entra
objects (such as user accounts) and the configuration of tenant-wide settings are done
by your IT administrators. Like any SaaS service, Fabric logically isolates tenants. Data
and resources in your tenant can't ever be accessed by other tenants unless you
explicitly authorize them to do so.

Here's what happens when a user signs in to Fabric.

1. The user opens a browser (or a client application) and signs in to the Fabric portal.

2. The user is immediately redirected to Microsoft Entra ID, and they're required to
authenticate. Authentication verifies that it's the correct person signing in.

3. After authentication succeeds, the web front end receives the user's request and
delivers the front-end (HTML and CSS) content from the nearest location. It also routes
the request to the metadata platform and back-end capacity platform.

4. The metadata platform, which resides in your tenant's home region, stores your
tenant's metadata, such as workspaces and access controls. This platform ensures that
the user is authorized to access the relevant workspaces and Fabric items.

5. The back-end capacity platform performs compute operations and stores your data.
It's located in the capacity region. When a workspace is assigned to Fabric capacity, all
data that resides in the workspace, including the data lake OneLake, is stored and
processed in the capacity region.

The metadata platform and the back-end capacity platform each run in secured virtual
networks. These networks expose a series of secure endpoints to the internet so that
they can receive requests from users and other services. Apart from these endpoints,
services are protected by network security rules that block access from the public
internet.

When users sign in to Fabric, you can enforce other layers of protection. That way, your
tenant is only accessible to certain users, and only when other conditions, like network
location and device compliance, are met. This layer of protection is called inbound
protection.

In this scenario, you're responsible for sensitive patient information in Fabric. So, your
organization has mandated that all users who access Fabric must perform multifactor
authentication (MFA) and must be on the corporate network; securing user identity
alone isn't enough.

Your organization also provides flexibility for users by allowing them to work from
anywhere and to use their personal devices. Because Microsoft Intune supports bring-
your-own-device (BYOD), you enroll approved user devices in Intune.

Further, you need to ensure that these devices comply with the organization policies.
Specifically, these policies require that devices can only connect when they have the
latest operating system installed and the latest security patches. You set up these
security requirements by using Microsoft Entra Conditional Access.

Conditional Access offers several ways to secure your tenant. You can:

Grant or block access by network location.
Block access to devices that run on unsupported operating systems.
Require a compliant device, Intune-joined device, or MFA for all users.
And more.

If you need to lock down your entire Fabric tenant, you can use a virtual network and
block public internet access. Access to Fabric is then only allowed from within that
secure virtual network. This requirement is set up by enabling private links at the
tenant level for Fabric. It ensures that all Fabric endpoints resolve to a private IP
address in your virtual network, including access to all your Power BI reports. (Enabling
private endpoints impacts many Fabric items, so you should thoroughly read this
article before enabling them.)

Secure access to data outside of Fabric


(outbound protection)
Next, you set up outbound protection, which is concerned with securely accessing data
behind firewalls or private endpoints.

Your organization has some data sources that are located on your on-premises network.
Because these data sources are behind firewalls, Fabric requires secure access. To allow
Fabric to securely connect to your on-premises data source, you install an on-premises
data gateway.

The gateway can be used by Data Factory dataflows and data pipelines to ingest,
prepare, and transform the on-premises data, and then load it to OneLake with a copy
activity. Data Factory supports a comprehensive set of connectors that enable you to
connect to more than 100 different data stores.

You then build dataflows with Power Query, which provides an intuitive experience with
a low-code interface. You use it to ingest data from your data sources, and transform it
by using any of 300+ data transformations. You then build and orchestrate a complex
extract, transform, and load (ETL) process with data pipelines. Your ETL processes can
refresh dataflows and perform many different tasks at scale, processing petabytes of
data.

In this scenario, you already have multiple ETL processes. First, you have some pipelines
in Azure Data Factory (ADF). Currently, these pipelines ingest your on-premises data and
load it into a data lake in Azure Storage by using the self-hosted integration runtime.
Second, you have a data ingestion framework in Azure Databricks that's written in Spark.

Now that you're using Fabric, you simply redirect the output destination of the ADF
pipelines to use the lakehouse connector. And, for the ingestion framework in Azure
Databricks, you use the OneLake APIs that support the Azure Blob Filesystem (ABFS)
driver to integrate OneLake with Azure Databricks. (You could also use the same method
to integrate OneLake with Azure Synapse Analytics by using Apache Spark.)
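
For example, redirecting an existing Databricks ingestion job to OneLake can be as small as swapping the output path for an ABFS URI that points at a lakehouse. This Python sketch is a hedged illustration; the workspace name, lakehouse name, and source mount path are assumptions:

# Minimal sketch (assumed names): land data from an external Spark engine into a
# Fabric lakehouse over the ABFS driver instead of an Azure Storage data lake.
# Runs in Azure Databricks or Synapse Spark, where `spark` is predefined and the
# cluster is configured for Microsoft Entra authentication to OneLake.
onelake_path = (
    "abfss://HealthWorkspace@onelake.dfs.fabric.microsoft.com/"
    "Bronze.Lakehouse/Files/claims/2024"
)

# Existing ingestion source (placeholder mount); only the destination changes.
claims_df = spark.read.json("/mnt/raw/claims/2024")
claims_df.write.mode("overwrite").parquet(onelake_path)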

You also have some data sources that are in Azure SQL Database. You need to connect
to these data sources by using private endpoints. In this case, you decide to set up a
virtual network (VNet) data gateway and use dataflows to securely connect to your
Azure data and load it into Fabric. With VNet data gateways, you don't have to provision
and manage the infrastructure (as you need to do for on-premises data gateway). That's
because Fabric securely and dynamically creates the containers in your Azure Virtual
Network.

If you're developing or migrating your data ingestion framework in Spark, then you can
connect to data sources in Azure securely and privately from Fabric notebooks and jobs
with the help of managed private endpoints. Managed private endpoints can be created
in your Fabric workspaces to connect to data sources in Azure that have blocked public
internet access. They support private endpoints, such as Azure SQL Database and Azure
Storage. Managed private endpoints are provisioned and managed in a managed VNet
that's dedicated to a Fabric workspace. Unlike your typical Azure Virtual Networks,
managed VNets and managed private endpoints won't be found in the Azure portal.
That's because they're fully managed by Fabric, and you find them in your workspace
settings.

Because you already have a lot of data stored in Azure Data Lake Storage (ADLS) Gen2
accounts, you now only need to connect Fabric workloads, such as Spark and Power BI,
to it. Also, thanks to OneLake ADLS shortcuts, you can easily connect to your existing
data from any Fabric experience, such as data integration pipelines, data engineering
notebooks, and Power BI reports.

Fabric workspaces that have a workspace identity can securely access ADLS Gen2
storage accounts, even when you've disabled the public network. That's made possible
by trusted workspace access. It allows Fabric to securely connect to the storage accounts
by using a Microsoft backbone network. That means communication doesn't use the
public internet, which allows you to disable public network access to the storage
account while still allowing certain Fabric workspaces to connect to it.

Compliance
You want to use Fabric to securely ingest, store, process, and analyze your data in the
cloud, while maintaining compliance with the regulations of your industry and the
policies of your organization.
Fabric is part of Microsoft Azure Core Services, and it's governed by the Microsoft
Online Services Terms and the Microsoft Enterprise Privacy Statement . While
certifications typically occur after a product launch (Generally Available, or GA),
Microsoft integrates compliance best practices from the outset and throughout the
development lifecycle. This proactive approach ensures a strong foundation for future
certifications, even though they follow established audit cycles. In simpler terms, we
prioritize building compliance in from the start, even when formal certification comes
later.

Fabric is compliant with many industry standards such as ISO 27001, 27017, 27018 and
27701. Fabric is also HIPAA compliant, which is critical to healthcare data privacy and
security. You can check Appendixes A and B in the Microsoft Azure Compliance
Offerings for detailed insight into which cloud services are in scope for the
certifications. You can also access the audit documentation from the Service Trust Portal
(STP) .

Compliance is a shared responsibility. To comply with laws and regulations, cloud service
providers and their customers enter a shared responsibility to ensure that each does
their part. As you consider and evaluate public cloud services, it's critical to understand
the shared responsibility model and which security tasks the cloud provider handles and
which tasks you handle.

Data handling
Because you're dealing with sensitive patient information, you need to ensure that all
your data is sufficiently protected both at rest and in transit.

Encryption at rest provides data protection for stored data (at rest). Attacks against data
at rest include attempts to obtain physical access to the hardware on which the data is
stored, and then compromise the data on that hardware. Encryption at rest is designed
to prevent an attacker from accessing the unencrypted data by ensuring the data is
encrypted when on disk. Encryption at rest is a mandatory measure required for
compliance with some of the industry standards and regulations, such as the
International Organization for Standardization (ISO) and Health Insurance Portability and
Accountability Act (HIPAA).

All Fabric data stores are encrypted at rest by using Microsoft-managed keys, which
provides protection for customer data and also system data and metadata. Data is never
persisted to permanent storage while in an unencrypted state. With Microsoft-managed
keys, you benefit from the encryption of your data at rest without the risk or cost of a
custom key management solution.
Data is also encrypted in transit. All inbound traffic to Fabric endpoints from the client
systems enforces a minimum of Transport Layer Security (TLS) 1.2. It also negotiates
TLS 1.3 whenever possible. TLS provides strong authentication, message privacy, and
integrity (enabling detection of message tampering, interception, and forgery),
interoperability, algorithm flexibility, and ease of deployment and use.

In addition to encryption, network traffic between Microsoft services always routes over
the Microsoft global network, which is one of the largest backbone networks in the
world.

Data residency
As you're dealing with patient data, for compliance reasons your organization has
mandated that data should never leave the United States geographical boundary. Your
organization's main operations take place in New York and your head office in Seattle.
While setting up Power BI, your organization has chosen the East US region as the
tenant home region. For your operations, you have created a Fabric capacity in the West
US region, which is closer to your data sources. Because OneLake is available around the
globe, you're concerned whether you can meet your organization's data residency
policies while using Fabric.

In Fabric, you learn that you can create Multi-Geo capacities, which are capacities
located in geographies (geos) other than your tenant home region. You assign your
Fabric workspaces to those capacities. In this case, compute and storage (including
OneLake and experience-specific storage) for all items in the workspace reside in the
multi-geo region, while your tenant metadata remains in the home region. Your data will
only be stored and processed in these two geographies, thus ensuring your
organization's data residency requirements are met.

Access control
You need to ensure that only you and your fellow data engineers have full access to the
data in the bronze and silver layers of the lakehouse. These layers allow you to perform
data cleansing, validation, transformation, and enrichment. You need to restrict access to
the data in the gold layer to only authorized users, such as data analysts and business
users, who can use the data for various analytical purposes, such as reporting and
analytics.

Fabric provides a flexible permission model that allows you to control access to items
and data in your workspaces. A workspace is a securable logical entity for grouping
items in Fabric. You use workspace roles to control access to items in the workspaces.
The four basic roles of a workspace are:

Admin: Can view, modify, share, and manage all content in the workspace,
including managing permissions.
Member: Can view, modify, and share all content in the workspace.
Contributor: Can view and modify all content in the workspace.
Viewer: Can view all content in the workspace, but can't modify it.

In this scenario, you create three workspaces, one for each of the medallion layers
(bronze, silver, and gold). Because you created the workspace, you're automatically
assigned to the Admin role.

You then add a security group to the Contributor role of those three workspaces.
Because the security group includes your fellow engineers as members, they're able to
create and modify Fabric items in those workspaces—however they can't share any
items with anyone else. Nor can they grant access to other users.

In the bronze and silver workspaces, you and your fellow engineers create Fabric items
to ingest data, store the data, and process the data. Fabric items comprise a lakehouse,
pipelines, and notebooks. In the gold workspace, you create two lakehouses, multiple
pipelines and notebooks, and a Direct Lake semantic model, which delivers fast query
performance of data stored in one of the lakehouses.

You then give careful consideration to how the data analysts and business users can
access the data they're allowed to access. Specifically, they can only access data that's
relevant to their role and department.

The first lakehouse contains the actual data and doesn't enforce any data permissions in
its SQL analytics endpoint. The second lakehouse contains shortcuts to the first
lakehouse, and it enforces granular data permissions in its SQL analytics endpoint. The
semantic model connects to the first lakehouse. To enforce appropriate data
permissions for the users (so they can only access data that's relevant to their role and
department), you don't share the first lakehouse with the users. Instead, you share only
the Direct Lake semantic model and the second lakehouse that enforces data
permissions in its SQL analytics endpoint.

You set up the semantic model to use a fixed identity, and then implement row-level
security (RLS) in the semantic model to enforce model rules to govern what data the
users can access. You then share only the semantic model with the data analysts and
business users because they shouldn't access the other items in the workspace, such as
the pipelines and notebooks. Lastly, you grant Build permission on the semantic model
so that the users can create Power BI reports. That way, the semantic model becomes a
shared semantic model and a source for their Power BI reports.

Your data analysts need access to the second lakehouse in the gold workspace. They'll
connect to the SQL analytics endpoint of that lakehouse to write SQL queries and
perform analysis. So, you share that lakehouse with them and provide access only to
objects they need (such as tables, rows, and columns with masking rules) in the
lakehouse SQL analytics endpoint by using the SQL security model. Data analysts can
now only access data that's relevant to their role and department and they can't access
the other items in the workspace, such as the pipelines and notebooks.

Common security scenarios


The following list describes common security scenarios, the tools you can use to
accomplish them, and the network direction involved.

Scenario: I'm an ETL developer and I want to load large volumes of data to Fabric
at-scale from multiple source systems and tables. The source data is on-premises (or
other cloud) and is behind firewalls and/or Azure data sources with private endpoints.
Tools: Use on-premises data gateway with data pipelines (copy activity).
Direction: Outbound

Scenario: I'm a power user and I want to load data to Fabric from source systems that
I have access to. Because I'm not a developer, I need to transform the data by using a
low-code interface. The source data is on-premises (or other cloud) and is behind
firewalls.
Tools: Use on-premises data gateway with Dataflow Gen 2.
Direction: Outbound

Scenario: I'm a power user and I want to load data in Fabric from source systems that
I have access to. The source data is in Azure behind private endpoints, and I don't
want to install and maintain on-premises data gateway infrastructure.
Tools: Use a VNet data gateway with Dataflow Gen 2.
Direction: Outbound

Scenario: I'm a developer who can write data ingestion code by using Spark
notebooks. I want to load data in Fabric from source systems that I have access to.
The source data is in Azure behind private endpoints, and I don't want to install and
maintain on-premises data gateway infrastructure.
Tools: Use Fabric notebooks with Azure private endpoints.
Direction: Outbound

Scenario: I have many existing pipelines in Azure Data Factory (ADF) and Synapse
pipelines that connect to my data sources and load data into Azure. I now want to
modify those pipelines to load data into Fabric.
Tools: Use the Lakehouse connector in existing pipelines.
Direction: Outbound

Scenario: I have a data ingestion framework developed in Spark that connects to my
data sources securely and loads them into Azure. I'm running it on Azure Databricks
and/or Synapse Spark. I want to continue using Azure Databricks and/or Synapse
Spark to load data into Fabric.
Tools: Use OneLake and the Azure Data Lake Storage (ADLS) Gen2 API (Azure Blob
Filesystem driver).
Direction: Outbound

Scenario: I want to ensure that my Fabric endpoints are protected from the public
internet.
Tools: As a SaaS service, the Fabric back end is already protected from the public
internet. For more protection, use Microsoft Entra conditional access policies for
Fabric and/or enable private links at the tenant level for Fabric and block public
internet access.
Direction: Inbound

Scenario: I want to ensure that Fabric can be accessed from only within my corporate
network and/or from compliant devices.
Tools: Use Microsoft Entra conditional access policies for Fabric.
Direction: Inbound

Scenario: I want to ensure that anyone accessing Fabric must perform multifactor
authentication.
Tools: Use Microsoft Entra conditional access policies for Fabric.
Direction: Inbound

Scenario: I want to lock down my entire Fabric tenant from the public internet and
allow access only from within my virtual networks.
Tools: Enable private links at the tenant level for Fabric and block public internet
access.
Direction: Inbound

Related content
For more information about Fabric security, see the following resources.

Security in Microsoft Fabric


OneLake security overview
Microsoft Fabric concepts and licenses
Questions? Try asking the Microsoft Fabric community .
Suggestions? Contribute ideas to improve Microsoft Fabric .
