Fabric Security
Microsoft Fabric is a software as a service (SaaS) platform that lets users get, create,
share, and visualize data.
As a SaaS service, Fabric offers a complete security package for the entire platform.
Fabric removes the cost and responsibility of maintaining your security solution, and
transfers it to the cloud. With Fabric, you can use the expertise and resources of
Microsoft to keep your data secure, patch vulnerabilities, monitor threats, and comply
with regulations. Fabric also allows you to manage, control and audit your security
settings, in line with your changing needs and demands.
As you bring your data to the cloud and use it with various analytic experiences such as
Power BI, Data Factory, and the next generation of Synapse, Microsoft ensures that built-
in security and reliability features secure your data at rest and in transit. Microsoft also
makes sure that your data is recoverable in cases of infrastructure failures or disasters.
Compliant - Fabric has data sovereignty out of the box with multi-geo capacities.
Fabric also supports a wide range of compliance standards.
Governable - Fabric comes with a set of governance tools, such as data lineage,
information protection labels, data loss prevention, and Microsoft Purview integration.
Authenticate
Microsoft Fabric is a SaaS platform, like many other Microsoft services such as Azure,
Microsoft Office, OneDrive, and Dynamics. All these Microsoft SaaS services including
Fabric, use Microsoft Entra ID as their cloud-based identity provider. Microsoft Entra ID
helps users connect to these services quickly and easily from any device and any
network. Every request to connect to Fabric is authenticated with Microsoft Entra ID,
allowing users to safely connect to Fabric from their corporate office, when working at
home, or from a remote location.
To configure Private Links in Fabric, see Set up and use private links.
With Fabric you can securely access firewall-enabled Azure Data Lake Storage Gen2 accounts.
Fabric workspaces that have a workspace identity can securely access ADLS Gen2
accounts with public network access enabled, from selected virtual networks and
IP addresses. You can limit ADLS Gen2 access to specific Fabric workspaces. For more
information, see Trusted workspace access.
Managed virtual networks
Managed virtual networks are virtual networks that are created and managed by
Microsoft Fabric for each Fabric workspace. Managed virtual networks provide network
isolation for Fabric Spark workloads, meaning that the compute clusters are deployed in
a dedicated network and are no longer part of the shared virtual network.
Managed virtual networks also enable network security features such as managed
private endpoints, and private link support for Data Engineering and Data Science items
in Microsoft Fabric that use Apache Spark.
Data gateway
To connect to on-premises data sources or a data source that might be protected by a
firewall or a virtual network, you can use one of these options:
On-premises data gateway - The gateway acts as a bridge between your on-
premises data sources and Fabric. The gateway is installed on a server within your
network, and it allows Fabric to connect to your data sources through a secure
channel without the need to open ports or make changes to your network.
Virtual network (VNet) data gateway - The VNet gateway allows you to connect
from Microsoft Cloud services to your Azure data services within a VNet, without
the need of an on-premises data gateway.
Use service tags to ingest data, without data gateways, from data sources
deployed in an Azure virtual network, such as Azure SQL Virtual Machines (VMs), Azure
SQL Managed Instance (MI), and REST APIs. You can also use service tags to get traffic
from a virtual network or an Azure firewall. For example, service tags can allow
outbound traffic to Fabric so that a user on a VM can connect to Fabric SQL endpoints
from SSMS, while being blocked from accessing other public internet resources.
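As a sketch of what such a rule can look like, the following Azure PowerShell adds an outbound rule that allows HTTPS traffic to the PowerBI service tag. The network security group and resource group names (nsg-fabric, rg-network) are hypothetical placeholders, not values from this article.

# Load the existing network security group (hypothetical names).
$nsg = Get-AzNetworkSecurityGroup -Name "nsg-fabric" -ResourceGroupName "rg-network"

# Allow outbound HTTPS to the PowerBI service tag, which covers Fabric endpoints,
# while other outbound internet traffic can be blocked by lower-priority rules.
$nsg | Add-AzNetworkSecurityRuleConfig `
    -Name "AllowFabricOutbound" `
    -Direction Outbound `
    -Access Allow `
    -Protocol Tcp `
    -Priority 200 `
    -SourceAddressPrefix "VirtualNetwork" `
    -SourcePortRange "*" `
    -DestinationAddressPrefix "PowerBI" `
    -DestinationPortRange "443" |
    Set-AzNetworkSecurityGroup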
IP allowlists
If you have data that doesn't reside in Azure, you can enable an IP allowlist on your
organization's network to allow traffic to and from Fabric. An IP allowlist is useful if you
need to get data from data sources that don't support service tags, such as on-premises
data sources. With shortcuts, you can get that data without copying it into OneLake,
using a Lakehouse SQL endpoint or Direct Lake.
You can get the list of Fabric IPs from Service tags on-premises. The list is available as a
JSON file, or programmatically with REST APIs, PowerShell, and Azure Command-Line
Interface (CLI).
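For example, this Azure PowerShell sketch (Az.Network module) retrieves the service tag definitions for a region and pulls out the PowerBI address prefixes; the region name is only a placeholder:

# Download the current service tag definitions for a region (placeholder region).
$tags = Get-AzNetworkServiceTag -Location "westus2"

# Filter to the PowerBI tag, which Fabric shares, and list its IP ranges.
$powerBI = $tags.Values | Where-Object { $_.Name -eq "PowerBI" }
$powerBI.Properties.AddressPrefixes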
Secure Data
In Fabric, all data that is stored in OneLake is encrypted at rest. All data at rest is stored
in your home region, or in one of your capacities at a remote region of your choice so
that you can meet data at rest sovereignty regulations. For more information, see
Microsoft Fabric security fundamentals.
The query execution layer, query caches, and item data assigned to a multi-geo
workspace remain in the Azure geography of their creation. However, some metadata,
and processing, is stored at rest in the tenant's home geography.
Fabric is part of a larger Microsoft ecosystem. If your organization is already using other
cloud subscription services, such as Azure, Microsoft 365, or Dynamics 365, then Fabric
operates within the same Microsoft Entra tenant. Your organizational domain (for
example, contoso.com) is associated with Microsoft Entra ID, as it is for all Microsoft
cloud services.
Fabric ensures that your data is secure across regions when you're working with several
tenants that have multiple capacities across a number of geographies.
Data logical separation - The Fabric platform provides logical isolation between
tenants to protect your data.
Data sovereignty - To start working with multi-geo, see Configure Multi-Geo
support for Fabric.
Access data
Fabric controls data access using workspaces. In workspaces, data appears in the form of
Fabric items, and users can't view or use items (data) unless you give them access to the
workspace. You can find more information about workspace and item permissions, in
Permission model.
Workspace roles
Workspace access is governed by workspace roles together with Fabric and OneLake
security. Users with an admin, member, or contributor role can use all the items in the
workspace. Users with a viewer role can run SQL, Data Analysis Expressions (DAX), or
Multidimensional Expressions (MDX) queries, but they can't access Fabric items or run
a notebook.
Share items
You can share Fabric items with users in your organization that don't have any
workspace role. Sharing items gives restricted access, allowing users to only access the
shared item in the workspace.
Limit access
You can limit viewer access to data using row-level security (RLS), column-level security
(CLS), and object-level security (OLS). With RLS, CLS, and OLS, you can create user
identities that have access to certain portions of your data, and limit SQL results to
return only what the user's identity can access.
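On the SQL side, RLS can be expressed with a predicate function and a security policy. The following T-SQL is a minimal sketch against a hypothetical dbo.Sales table with a SalesRep column; the names are illustrative, not part of Fabric:

-- Predicate function: a row qualifies only when its SalesRep value
-- matches the identity of the querying user (hypothetical schema).
CREATE FUNCTION dbo.fn_SalesRepFilter (@SalesRep AS nvarchar(128))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
WHERE @SalesRep = USER_NAME();
GO

-- Security policy that attaches the predicate to the table as a filter.
CREATE SECURITY POLICY dbo.SalesRepPolicy
ADD FILTER PREDICATE dbo.fn_SalesRepFilter(SalesRep) ON dbo.Sales
WITH (STATE = ON);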
You can also add RLS to a Direct Lake semantic model. If you define security for both SQL and
DAX, Direct Lake falls back to DirectQuery for tables that have RLS in SQL. In such cases,
DAX or MDX results are limited to the user's identity.
To expose reports using a Direct Lake semantic model with RLS without a DirectQuery fallback,
use direct semantic model sharing or apps in Power BI. With apps in Power BI, you can give
access to reports without viewer access. This kind of access means that the users can't
use SQL. To enable Direct Lake to read the data, you need to switch the data source
credential from single sign-on (SSO) to a fixed identity that has access to the files in the
lake.
Protect data
Fabric supports sensitivity labels from Microsoft Purview Information Protection. These
are the labels, such as General, Confidential, and Highly Confidential that are widely used
in Microsoft Office apps such as Word, PowerPoint, and Excel to protect sensitive
information. In Fabric, you can classify items that contain sensitive data using these
same sensitivity labels. The sensitivity labels then follow the data automatically from
item to item as it flows through Fabric, all the way from data source to business user.
The sensitivity label follows even when the data is exported to supported formats such
as PBIX, Excel, PowerPoint, and PDF, ensuring that your data remains protected. Only
authorized users can open the file. For more information, see Governance and
compliance in Microsoft Fabric.
To help you govern, protect, and manage your data, you can use Microsoft Purview.
Microsoft Purview and Fabric work together letting you store, analyze, and govern your
data from a single location, the Microsoft Purview hub.
Recover data
Fabric data resiliency ensures that your data is available if there is a disaster. Fabric also
enables you to recover your data after a disaster through disaster recovery. For more
information, see Reliability in Microsoft Fabric.
Administer Fabric
As an administrator in Fabric, you get to control capabilities for the entire organization.
Fabric enables delegation of the admin role to capacities, workspaces, and domains. By
delegating admin responsibilities to the right people, you can implement a model that
lets several key admins control general Fabric settings across the organization, while
other admins are in charge of settings related to specific areas.
Using various tools, admins can also monitor key Fabric aspects such as capacity
consumption.
Audit Logs
To view your audit logs, follow the instructions in Track user activities in Microsoft Fabric.
You can also refer to the Operation list to see which activities are available for searching
in the audit logs.
Note that an internal issue caused OneLake audit events not to appear in the
Microsoft 365 admin center from April 21 through May 6. You can request this data
through support channels if needed.
Capabilities
Review this section for a list of some of the security features available in Microsoft
Fabric.
Fabric and OneLake security - Learn how to secure your data in Fabric and OneLake.
Service tags - Enable an Azure SQL Managed Instance (MI) to allow incoming
connections from Microsoft Fabric.
Related content
Security fundamentals
Admin overview
Microsoft Fabric security fundamentals
This article is primarily targeted at Fabric administrators, who are responsible for
overseeing Fabric in the organization. It's also relevant to enterprise security
stakeholders, including security administrators, network administrators, Azure
administrators, workspace administrators, and database administrators.
Fabric platform
Microsoft Fabric is an all-in-one analytics solution for enterprises that covers everything
from data movement to data science, real-time analytics, and business intelligence (BI).
The Fabric platform comprises a series of services and infrastructure components that
support the common functionality for all Fabric experiences. Collectively, they offer a
comprehensive set of analytics experiences designed to work together seamlessly.
Experiences include Lakehouse, Data Factory, Synapse Data Engineering, Synapse Data
Warehouse, Power BI, and others.
With Fabric, you don't need to piece together different services from multiple vendors.
Instead, you benefit from a highly integrated, end-to-end, and easy-to-use product
that's designed to simplify your analytics needs. Fabric was designed from the outset to
protect sensitive assets.
Architectural diagram
The architectural diagram below shows a high-level representation of the Fabric security
architecture.
The architectural diagram depicts the following concepts.
1. The web front end receives user requests and facilitates sign-in. It also routes
requests and serves front-end content to the user.
2. The metadata platform stores tenant metadata, which can include customer data.
Fabric services query this platform on demand in order to retrieve authorization
information and to authorize and validate user requests. It's located in the tenant
home region.
3. The back-end capacity platform is responsible for compute operations and for
storing customer data, and it's located in the capacity region. It leverages Azure
core services in that region as necessary for specific Fabric experiences.
Fabric platform infrastructure services are multitenant. There is logical isolation between
tenants. These services don't process complex user input and are all written in managed
code. Platform services never run any user-written code.
The metadata platform and the back-end capacity platform each run in secured virtual
networks. These networks expose a series of secure endpoints to the internet so that
they can receive requests from customers and other services. Apart from these
endpoints, services are protected by network security rules that block access from the
public internet. Communication within virtual networks is also restricted based on the
privilege of each internal service.
The application layer ensures that tenants are only able to access data from within their
own tenant.
Authentication
Fabric relies on Microsoft Entra ID to authenticate users (or service principals). When
authenticated, users receive access tokens from Microsoft Entra ID. Fabric uses these
tokens to perform operations in the context of the user.
A key feature of Microsoft Entra ID is conditional access. Conditional access helps keep
tenants secure by enforcing multifactor authentication, by allowing only Microsoft
Intune enrolled devices to access specific services, and by restricting user locations
and IP ranges.
Authorization
All Fabric permissions are stored centrally by the metadata platform. Fabric services
query the metadata platform on demand in order to retrieve authorization information
and to authorize and validate user requests.
Data residency
In Fabric, a tenant is assigned to a home metadata platform cluster, which is located in a
single region that meets the data residency requirements of that region's geography.
Tenant metadata, which can include customer data, is stored in this cluster.
Customers can control where their workspaces are located. They can choose to locate
their workspaces in the same geography as their metadata platform cluster, either
explicitly by assigning their workspaces to capacities in that region, or implicitly by using
Fabric Trial, Power BI Pro, or Power BI Premium Per User license mode. In the latter case,
all customer data is stored and processed in this single geography. For more
information, see Microsoft Fabric concepts and licenses.
Customers can also create Multi-Geo capacities located in geographies (geos) other
than their home region. In this case, compute and storage (including OneLake and
experience-specific storage) is located in the multi-geo region; however, the tenant
metadata remains in the home region. Customer data will only be stored and processed
in these two geographies. For more information, see Configure Multi-Geo support for
Fabric.
Data handling
This section provides an overview of how data handling works in Fabric. It describes
storage, processing, and the movement of customer data.
Data at rest
All Fabric data stores are encrypted at rest by using Microsoft-managed keys. Fabric
data includes customer data as well as system data and metadata.
While data can be processed in memory in an unencrypted state, it's never persisted to
permanent storage while in an unencrypted state.
Data in transit
Data in transit across the public internet between Microsoft services is always encrypted
with at least TLS 1.2. Fabric negotiates to TLS 1.3 whenever possible. Traffic between
Microsoft services always routes over the Microsoft global network.
Inbound Fabric communication also enforces TLS 1.2 and negotiates to TLS 1.3,
whenever possible. Outbound Fabric communication to customer-owned infrastructure
prefers secure protocols but might fall back to older, insecure protocols (including TLS
1.0) when newer protocols aren't supported.
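On the client side, one common defensive step, not a Fabric requirement, is to make sure older Windows PowerShell 5.1 sessions don't fall back to legacy TLS versions when calling these endpoints:

# Force .NET (and therefore Invoke-RestMethod/Invoke-WebRequest) to use TLS 1.2.
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12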
Telemetry
Telemetry is used to maintain performance and reliability of the Fabric platform. The
Fabric platform telemetry store is designed to be compliant with data and privacy
regulations for customers in all regions where Fabric is available, including the European
Union (EU). For more information, see EU Data Boundary Services .
OneLake
OneLake is a single, unified, logical data lake for the entire organization, and it's
automatically provisioned for every Fabric tenant. It's built on Azure and it can store any
type of file, structured or unstructured. Also, all Fabric items, like warehouses and
lakehouses, automatically store their data in OneLake.
OneLake supports the same Azure Data Lake Storage Gen2 (ADLS Gen2) APIs and SDKs,
therefore it's compatible with existing ADLS Gen2 applications, including Azure
Databricks.
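Because the ADLS Gen2 surface is the same, existing tools can point at the OneLake endpoint. As a sketch, AzCopy can list files in a lakehouse after signing in with Microsoft Entra ID; the workspace and lakehouse names are placeholders:

# Authenticate AzCopy with Microsoft Entra ID.
azcopy login

# List files in a lakehouse through the OneLake DFS endpoint (placeholder names).
azcopy list "https://onelake.dfs.fabric.microsoft.com/MyWorkspace/MyLakehouse.Lakehouse/Files"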
For more information, see Fabric and OneLake security.
Workspace security
Workspaces represent the primary security boundary for data stored in OneLake. Each
workspace represents a single domain or project area where teams can collaborate on
data. You manage security in the workspace by assigning users to workspace roles.
For more information, see Fabric and OneLake security (Workspace security).
Item security
Within a workspace, you can assign permissions directly to Fabric items, like warehouses
and lakehouses. Item security provides the flexibility to grant access to an individual
Fabric item without granting access to the entire workspace. Users can set up per item
permissions either by sharing an item or by managing the permissions of an item.
Compliance resources
The Fabric service is governed by the Microsoft Online Services Terms and the
Microsoft Enterprise Privacy Statement .
For the location of data processing, refer to the Location of Data Processing terms in the
Microsoft Online Services Terms and to the Data Protection Addendum .
For compliance information, the Microsoft Trust Center is the primary resource for
Fabric. For more information about compliance, see Microsoft compliance offerings.
The Fabric service follows the Security Development Lifecycle (SDL), which consists of a
set of strict security practices that support security assurance and compliance
requirements. The SDL helps developers build more secure software by reducing the
number and severity of vulnerabilities in software, while reducing development cost. For
more information, see Microsoft Security Development Lifecycle Practices .
Related content
For more information about Fabric security, see the following resources.
Power BI security
This article outlines Power BI data handling practices when it comes to storing,
processing, and transferring customer data.
Data at rest
Power BI uses two primary data storage resource types:
Azure Storage
Azure SQL Database
In most scenarios, Azure Storage is utilized to persist the data of Power BI artifacts, while
Azure SQL Databases are used to persist artifact metadata.
Optionally, organizations can utilize Power BI Premium to use their own keys to encrypt
data at rest that is imported into a semantic model. This approach is often described as
bring your own key (BYOK). Utilizing BYOK helps ensure that even in case of a service
operator error, customer data won't be exposed – something that can't easily be
achieved using transparent service-side encryption. See Bring your own encryption keys
for Power BI for more information.
Power BI semantic models allow for various data source connection modes that
determine whether the data source data is persisted in the service or not.
Semantic model mode (kind) - Data persisted in Power BI:
Import - Yes
DirectQuery - No
Live Connect - No
Regardless of the semantic model mode utilized, Power BI may temporarily cache any
retrieved data to optimize query and report load performance.
Data in processing
Data is in processing when it's either actively being used by one or more users as part of
an interactive scenario, or when a background process, such as refresh, touches this
data. Power BI loads actively processed data into the memory space of one or more
service workloads. To facilitate the functionality required by the workload, the processed
data in memory isn't encrypted.
In an embed for your customers scenario, ISVs typically own Power BI tenants and Power
BI items (dashboards, reports, semantic models, and others). It's the responsibility of an
ISV back-end service to authenticate its end users and decide which artifacts and which
access level is appropriate for that end user. ISV policy decisions are encrypted in
an embed token generated by Power BI and passed to the ISV back-end for further
distribution to the end users according to the business logic of the ISV. Power BI Client
APIs automatically append the encrypted embed token to Power BI requests as an
Authorization: EmbedToken header. Based on this header, Power BI enforces all policies
(such as access or RLS) precisely as specified by the ISV during generation.
To enable embedding and automation, and to generate the embed tokens described
above, Power BI exposes a rich set of REST APIs. These Power BI REST APIs support both
user delegated and service principal Microsoft Entra methods of authentication and
authorization.
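As a sketch of that flow, an ISV back end can call the GenerateToken REST API for a report. The workspace and report IDs are placeholders, and $accessToken is assumed to already hold a Microsoft Entra token for the service principal:

# Request an embed token scoped to view a single report (placeholder IDs).
$body = @{ accessLevel = "View" } | ConvertTo-Json
$response = Invoke-RestMethod -Method Post `
    -Uri "https://api.powerbi.com/v1.0/myorg/groups/<workspace-id>/reports/<report-id>/GenerateToken" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -ContentType "application/json" `
    -Body $body

# The ISV hands $response.token to its client code, which sends it on
# Power BI requests as an Authorization: EmbedToken header.
$response.token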
Power BI embedded analytics and its REST APIs support all Power BI network isolation
capabilities described in this article, such as service tags and private links.
Paginated reports
Paginated reports are designed to be printed or shared. They're called paginated
because they're formatted to fit well on a page. They display all the data in a table, even
if the table spans multiple pages. You can control their report page layout exactly.
Paginated reports support rich and powerful expressions written in Microsoft Visual
Basic .NET. Expressions are widely used throughout Power BI Report Builder paginated
reports to retrieve, calculate, display, group, sort, filter, parameterize, and format data.
Expressions are created by the author of the report with access to the broad range of
features of the .NET framework. The processing and execution of paginated reports is
performed inside a sandbox.
The Microsoft Entra token obtained during the authentication is used to communicate
directly from the browser to the Power BI Premium cluster.
A paginated report can access a wide set of data sources as part of the rendering of the
report. The sandbox doesn't communicate directly with any of the data sources but
instead communicates with the trusted process to request data, and then the trusted
process appends the required credentials to the connection. In this way, the sandbox
never has access to any credential or secret.
In order to support features such as Bing maps, or calls to Azure Functions, the sandbox
does have access to the internet.
Power BI Mobile
Power BI Mobile is a collection of apps designed for the primary mobile platforms:
Android and iOS. Security considerations for the Power BI Mobile apps fall into two
categories: device communication, and the application and data on the device.
Device communication
For device communication, all Power BI Mobile applications communicate with the
Power BI service, and use the same connection and authentication sequences used by
browsers, which are described in detail earlier in this white paper. The Power BI mobile
applications for iOS and Android bring up a browser session within the application itself.
Power BI Mobile apps actively communicate with the Power BI service. Telemetry is used
to gather mobile app usage statistics and similar data, which is transmitted to services
that are used to monitor usage and activity; no customer data is sent with telemetry.
The Power BI application stores data on the device that facilitates use of the app:
Microsoft Entra ID and refresh tokens are stored in a secure mechanism on the
device, using industry-standard security measures.
Data and settings (key-value pairs for user configuration) are cached in storage on
the device and can be encrypted by the OS. In iOS, this happens automatically when
the user sets a passcode. In Android, it can be configured in the settings. Cached data
and settings are kept in a sandbox and internal storage that is accessible only to the app.
Data encryption can be enhanced by applying file-level encryption via Microsoft Intune,
a software service that provides mobile device and application management. Both
platforms for which Power BI Mobile is available support Intune. With Intune enabled
and configured, data on the mobile device is encrypted, and the Power BI application
itself can't be installed on an SD card. Learn more about Microsoft Intune .
In order to implement SSO, some secured storage values related to the token-based
authentication are available for other Microsoft first party apps (such as Microsoft
Authenticator) and are managed by the Microsoft Authentication Library (MSAL).
Power BI Mobile cached data is deleted when the app is removed, when the user signs
out of Power BI Mobile, or when the user fails to sign in (such as after a token expiration
event or password change). The data cache includes dashboards and reports previously
accessed from the Power BI Mobile app.
Power BI Mobile doesn't access other application folders or files on the device.
The Power BI apps for iOS and Android let you protect your data by configuring extra
identification, such as providing Face ID, Touch ID, or a passcode for iOS, and biometric
ID (Fingerprint ID) for Android. Learn more about additional identification. Users can also
configure their app to require identification each time the app is brought to the
foreground using Face ID, Touch ID, or passcode.
Related content
Security in Microsoft Fabric
Security fundamentals
Permission model
Microsoft Fabric has a flexible permission model that allows you to control access to
data in your organization. This article explains the different types of permissions in
Fabric and how they work together to control access to data in your organization.
A workspace is a logical entity for grouping items in Fabric. Workspace roles define
access permissions for workspaces. Although items are stored in one workspace, they
can be shared with other users across Fabric. When you share Fabric items, you can
decide which permissions to grant the user you're sharing the item with. Certain items,
such as Power BI reports, allow even more granular control of data. Reports can be set
up so that, depending on their permissions, users who view them see only a portion of
the data they hold.
Workspace roles
Workspace roles are used to control access to workspaces and the content within them.
A Fabric administrator can assign workspace roles to individual users or groups.
Workspace roles are confined to a specific workspace and don't apply to other
workspaces, the capacity the workspace is in, or the tenant.
There are four workspace roles, and they apply to all items within the workspace. Users
that don't have any of these roles can't access the workspace. The roles are:
Viewer - Can view all content in the workspace, but can't modify it.
Contributor - Can view and modify all content in the workspace.
Member - Can view, modify, and share all content in the workspace.
Admin - Can view, modify, share, and manage all content in the workspace,
including managing permissions.
This table shows a small set of the capabilities each role has. For a full and more detailed
list, see Microsoft Fabric workspace roles.
Capability - Admin - Member - Contributor - Viewer
Add admins - ✅ - ❌ - ❌ - ❌
Add members - ✅ - ✅ - ❌ - ❌
Write data - ✅ - ✅ - ✅ - ❌
Create items - ✅ - ✅ - ✅ - ❌
Read data - ✅ - ✅ - ✅ - ✅
Item permissions
Item permissions are used to control access to individual Fabric items within a
workspace. Item permissions are confined to a specific item and don't apply to other
items. Use item permissions to control who can view, modify, and manage individual
items in a workspace. You can use item permissions to give a user access to a single
item in a workspace that they don't have access to.
When you're sharing the item with a user or group, you can configure item permissions.
Sharing an item grants the user the read permission for that item by default. Read
permissions allow users to see the metadata for that item and view any reports
associated with it. However, read permissions don't allow users to access underlying
data in SQL or OneLake.
Different Fabric items have different permissions. To learn more about the permissions
for each item, see:
Semantic model
Warehouse
Data Factory
Lakehouse
Data science
Real-Time Intelligence
Compute permissions
Permissions can also be set within a specific compute engine in Fabric, specifically
through the SQL endpoint or in a semantic model. Compute engine permissions enable
more granular data access control, such as table-level and row-level security.
SQL endpoint - The SQL endpoint provides direct SQL access to tables in OneLake,
and can have security configured natively through SQL commands. These
permissions only apply to queries made through SQL.
Semantic model - Semantic models allow security to be defined using DAX, as
sketched after this list. Restrictions defined using DAX apply to users querying
through the semantic model or Power BI reports built on the semantic model.
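For example, a row filter on a role in a semantic model is a single DAX expression that evaluates per row; the Sales table and SalesRepEmail column here are hypothetical:

// Keep only the rows where the email column matches the signed-in user.
'Sales'[SalesRepEmail] = USERPRINCIPALNAME()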
Learn more about OneLake Data Access Control Model and view the how-to guides.
Order of operation
Fabric has three different security levels. A user must have access at each level in order
to access the data. Each level evaluates sequentially to determine if a user has access.
Security rules such as Microsoft Information Protection policies evaluate at a given level
to allow or disallow access. The order of operation when evaluating Fabric security is:
1. Microsoft Entra ID authentication - can the user authenticate to the Fabric tenant?
2. Fabric access - does the user have a workspace role or item permission that grants access to the item?
3. Data security - granular controls, such as RLS defined in the SQL endpoint or semantic model, that determine which data the user can see.
Examples
This section provides two examples of how permissions can be set up in Fabric.
Example 1: Workspace roles
The sales department at Wingtip Toys has a manager, a sales team lead, and sales team
members. Wingtip Toys also employs one analyst for the entire organization.
The following table shows the requirements for each role in the sales department and
how permissions are set up to enable them.
Role - Requirement - Permissions:
Manager - View and modify all content in the sales department in the entire
organization - A member role for all the sales workspaces in the organization
Team lead - View and modify all content in the sales department in a specific
region - A member role for the sales workspace in the region
Sales team member - View stats of other sales members in the region - No roles
for any of the sales workspaces
Sales team member - View and modify their own sales report - Access to a specific
report that lists the member's sales figures
Analyst - View all content in the sales department in the entire organization - A
viewer role for all the sales workspaces in the organization
Wingtip also has a quarterly report that lists its sales income per sales member. This
report is stored in a finance workspace. By using row-level security, the report is set up
so that each sales member can only see their own sales figures. Team leads can see the
sales figures of all the sales members in their region, and the sales manager can see the
sales figures of all the sales members in the organization.
Example 2: Workspace and item permissions
When you share an item, or change its permissions, workspace roles don't change. The
example in this section shows how workspace and item permissions interact.
Veronica and Marta work together. Veronica is the owner of a report she wants to share
with Marta. If Veronica shares the report with Marta, Marta will be able to access it
regardless of the workspace role she has.
Let's say that Marta has a viewer role in the workspace where the report is stored. If
Veronica decides to remove Marta's item permissions from the report, Marta will still be
able to view the report in the workspace. Marta will also be able to open the report from
the workspace and view its content. This is because Marta has view permissions to the
workspace.
If Veronica doesn't want Marta to view the report, removing Marta's item permissions
from the report isn't enough. Veronica also needs to remove Marta's viewer permissions
from the workspace. Without the workspace viewer permissions, Marta won't be able to
see that the report exists because she won't be able to access the workspace. Marta will
also not be able to use the link to the report, because she doesn't have access to the
report.
Now that Marta doesn't have a workspace viewer role, if Veronica decides to share the
report with her again, Marta will be able to view it using the link Veronica shares with
her, without having access to the workspace.
Furthermore, you can limit viewer access to data using row-level security (RLS). With RLS,
you can create roles that have access to certain portions of your data, and limit results
to return only what the user's identity can access.
This works well with import models, because the data is imported into the semantic
model and the recipients have access to it as part of the app. With Direct Lake, the
report reads the data directly from the lakehouse, so the report recipient needs access
to these files in the lake. To grant that access you can, for example, share the semantic
model directly, distribute it through a Power BI app, or switch the data source credential
to a fixed identity that has access to the files in the lake.
Because RLS is defined in the semantic model, the data is read first and then the
rows are filtered.
If any security is defined in the SQL endpoint that the report is built on, the queries
automatically fall back to DirectQuery mode. If you don't want this default fallback
behavior, you can create a new lakehouse using shortcuts to the tables in the original
lakehouse, and not define RLS or OLS in SQL on the new lakehouse.
Related content
Security fundamentals
Microsoft Fabric security white paper
Security is a top priority for Microsoft Fabric. As a Fabric customer, you need to
safeguard your assets from threats and follow your organization's security policies. The
Microsoft Fabric security white paper serves as an end-to-end security overview for
Fabric. It covers details on how Microsoft secures your data by default as a software as a
service (SaaS) service, and how you can secure, manage, and govern your data when
using Fabric.
The Fabric security white paper combines several online security documents into a single
downloadable PDF document for reading convenience. This PDF is updated at regular
intervals, while the online documentation at Microsoft Fabric security is always up to
date.
Related content
Microsoft Fabric security
Security in Microsoft Fabric
Microsoft Fabric security fundamentals
Protect inbound traffic
Inbound traffic is traffic coming into Fabric from the internet. This article explains the
differences between the two ways to protect inbound traffic in Microsoft Fabric. Use this
article to decide which method is best for your organization.
Microsoft Entra Conditional Access - Every interaction with Fabric is authenticated
with Microsoft Entra ID, and you can add conditional access policies on top of that
authentication.
Private links - Fabric uses a private IP address from your virtual network. The
endpoint allows users in your network to communicate with Fabric over the private
IP address using private links.
Once traffic enters Fabric, it gets authenticated by Microsoft Entra ID, which is the same
authentication method used by Microsoft 365, OneDrive, and Dynamics 365. Microsoft
Entra ID authentication allows users to securely connect to cloud applications from any
device and any network, whether they’re at home, remote, or in their corporate office.
The Fabric backend platform is protected by a virtual network and isn't directly
accessible from the public internet other than through secure endpoints. To understand
how traffic is protected in Fabric, review Fabric's Architectural diagram.
Traffic between clients and Fabric is encrypted using at least the Transport Layer
Security (TLS) 1.2 protocol.
Entra Conditional Access
Every interaction with Fabric is authenticated with Microsoft Entra ID. Microsoft Entra ID
is based upon the Zero Trust security model, which assumes that you're not fully
protected within your organization's network perimeter. Instead of looking at your
network as a security boundary, Zero Trust looks at identity as the primary perimeter for
security.
To determine access at the time of authentication you can define and enforce
conditional access policies based on your users' identity, device context, location,
network, and application sensitivity. For example, you can require multifactor
authentication, device compliance, or approved apps for accessing your data and
resources in Fabric. You can also block or limit access from risky locations, devices, or
networks.
Conditional access policies help you protect your data and applications without
compromising user productivity and experience.
Fabric doesn't support other authentication methods such as account keys or SQL
authentication, which rely on usernames and passwords.
Note
Conditional access can be considered too broad for some customers, because any
policy is applied to Fabric and the related Azure services.
Licensing
Conditional access requires Microsoft Entra ID P1 licenses. Often these licenses are
already available in your organization because they're shared with other Microsoft
products such as Microsoft 365. To find the right license for your requirements,
see License requirements.
Trusted access
Fabric doesn't need to reside in your private network, even when you have your data
stored inside one. With PaaS services, it's common to put the compute in the same
private network as the storage account. However, with Fabric this isn't needed. To enable
trusted access into Fabric, you can use features such as on-premises data gateways,
trusted workspace access, and managed private endpoints. For more information, see
Security in Microsoft Fabric.
Private links
With private endpoints your service is assigned a private IP address from your virtual
network. The endpoint allows other resources in the network to communicate with the
service over the private IP address.
Using Private links, a tunnel from the service into one of your subnets creates a private
channel. Communication from external devices travels from their IP address, to a private
endpoint in that subnet, through the tunnel and into the service.
When implementing private links, Fabric is no longer accessible through the public
internet. To access Fabric, all users have to connect through the private network. The
private network is required for all communications with Fabric, including viewing a
Power BI report in the browser and using SQL Server Management Studio (SSMS) to
connect to an SQL endpoint.
On-premises networks
If you're using on-premises networks, you can extend them to the Azure Virtual Network
(VNet) using an ExpressRoute circuit, or a site-to-site VPN, to access Fabric using private
connections.
Bandwidth
With private links, all traffic to Fabric travels through the private endpoint, causing
potential bandwidth issues. Users are no longer able to load globally distributed
non-data resources, such as images and the .css and .html files used by Fabric, from
their own region. Instead, these resources are loaded from the location of the private
endpoint. For example, for Australian users with a US private endpoint, traffic travels to
the US first. This increases load times and might reduce performance.
Cost
Private links, and any increase in ExpressRoute bandwidth needed to allow private
connectivity from your network, might add costs to your organization.
Related content
Private links for secure access to Fabric
About private links
You can use private links to provide secure access for data traffic in Fabric. Azure Private
Link and Azure Networking private endpoints are used to send data traffic privately
using Microsoft's backbone network infrastructure instead of going across the internet.
When private link connections are used, those connections go through the Microsoft
private network backbone when Fabric users access resources in Fabric.
To learn more about Azure Private Link, see What is Azure Private Link.
Enabling private endpoints has an impact on many items, so you should review this
entire article before enabling private endpoints.
Private endpoints don't guarantee that traffic from Fabric to your external data sources,
whether in the cloud or on-premises, is secured. Configure firewall rules and virtual
networks to further secure your data sources.
A private endpoint is a single-directional technology that lets clients initiate connections
to a given service but doesn't allow the service to initiate a connection into the
customer network. This private endpoint integration pattern provides management
isolation, since the service can operate independently of customer network policy
configuration. For multitenant services, this private endpoint model provides link
identifiers to prevent access to other customers' resources hosted within the same
service.
The Fabric service implements private endpoints, not service endpoints. Using private
links with Fabric, you can:
Restrict traffic from the internet to Fabric and route it through the Microsoft
backbone network.
Ensure only authorized client machines can access Fabric.
Comply with regulatory and compliance requirements that mandate private access
to your data and analytics services.
If Azure Private Link is properly configured and Block public Internet access is enabled:
Supported Fabric items are only accessible for your organization from private
endpoints, and aren't accessible from the public Internet.
Traffic from the virtual network targeting endpoints and scenarios that support
private links are transported through the private link.
Traffic from the virtual network targeting endpoints and scenarios that don't
support private links will be blocked by the service, and won't work.
There might be scenarios that don't support private links, which therefore will be
blocked at the service when Block Public Internet Access is enabled.
If Azure Private Link is properly configured and Block public Internet access is disabled,
Fabric remains accessible both from your private endpoints and from the public internet.
OneLake
OneLake supports Private Link. You can explore OneLake in the Fabric portal or from any
machine within your established virtual network using OneLake file explorer, Azure
Storage Explorer, PowerShell, and more.
Direct calls using OneLake regional endpoints don't work via private link to Fabric. For
more information about connecting to OneLake and regional endpoints, see How do I
connect to OneLake?.
Warehouse and Lakehouse SQL endpoint
Accessing Warehouse items and Lakehouse SQL endpoints in the portal is protected by
Private Link. Customers can also use Tabular Data Stream (TDS) endpoints (for example,
SQL Server Management Studio, Azure Data Studio) to connect to Warehouse via Private
link.
Visual query in Warehouse doesn't work when the Block Public Internet Access tenant
setting is enabled.
Data Engineering
Once the managed virtual network has been provisioned, the starter pools (default
Compute option) for Spark are disabled, as these are prewarmed clusters hosted in a
shared virtual network. Spark jobs run on custom pools that are created on-demand at
the time of job submission within the dedicated managed virtual network of the
workspace. Workspace migration across capacities in different regions isn't supported
when a managed virtual network is allocated to your workspace.
When the private link setting is enabled, Spark jobs won't work for tenants whose home
region doesn't support Fabric Data Engineering, even if they use Fabric capacities from
other regions that do.
Dataflow Gen2
You can use Dataflow Gen2 to get data, transform data, and publish dataflows via private
link. When your data source is behind the firewall, you can use the VNet data gateway to
connect to your data sources. The VNet data gateway enables the injection of the
gateway (compute) into your existing virtual network, thus providing a managed
gateway experience. You can use VNet gateway connections to connect to a Lakehouse
or Warehouse in the tenant that requires a private link or connect to other data sources
with your virtual network.
Pipeline
When you connect to Pipeline via private link, you can use the data pipeline to load data
from any data source with public endpoints into a private-link-enabled Microsoft Fabric
lakehouse. Customers can also author and operationalize data pipelines with activities,
including Notebook and Dataflow activities, using the private link. However, copying
data from and into a Data Warehouse isn't currently possible when Fabric's private link is
enabled.
Power BI
If internet access is disabled, and if the Power BI semantic model, Datamart, or
Dataflow Gen1 connects to a Power BI semantic model or Dataflow as a data
source, the connection will fail.
Publish to Web isn't supported when the tenant setting Azure Private Link is
enabled in Fabric.
Email subscriptions aren't supported when the tenant setting Block Public Internet
Access is enabled in Fabric.
Exporting a Power BI report as PDF or PowerPoint isn't supported when the tenant
setting Azure Private Link is enabled in Fabric.
If your organization is using Azure Private Link in Fabric, modern usage metrics
reports will contain partial data (only Report Open events). A current limitation
when transferring client information over private links prevents Fabric from
capturing Report Page Views and performance data over private links. If your
organization had enabled the Azure Private Link and Block Public Internet Access
tenant settings in Fabric, the refresh for the dataset fails and the usage metrics
report doesn't show any data.
Eventhouse
Eventhouse supports Private Link, allowing secure data ingestion and querying from
your Azure Virtual Network via a private link. You can ingest data from various sources,
including Azure Storage accounts, local files, and Dataflow Gen2. Streaming ingestion
ensures immediate data availability. Additionally, you can utilize KQL queries or Spark to
access data within an eventhouse.
To enable these capabilities in Power BI Desktop, admins can configure service tags for the
underlying services that support Microsoft Purview Information Protection, Exchange
Online Protection (EOP), and Azure Information Protection (AIP). Make sure you
understand the implications of using service tags in a private-links-isolated network.
Limitations:
Tenant migration is blocked when Private Link is turned on in the Fabric admin
portal.
Private links aren't supported for trial capacities. When accessing Fabric via private
link, trial capacities won't work.
Any uses of external images or themes aren't available when using a private link
environment.
Each private endpoint can be connected to one tenant only. You can't set up a
private link to be used by more than one tenant.
For Fabric users: On-premises data gateways aren't supported and fail to register
when Private Link is enabled. To run the gateway configurator successfully, Private
Link must be disabled. Learn more about this scenario. VNet data gateways will
work. For more information, see these considerations.
For authentication, the following domains must remain reachable:
msauth.net
msftauth.net
graph.microsoft.com
Related content
Set up and use secure private endpoints
Managed VNet for Fabric
Conditional Access
How to find your Microsoft Entra tenant ID
Set up and use private links
In Fabric, you can configure and use an endpoint that allows your organization to access
Fabric privately. To configure private endpoints, you must be a Fabric administrator and
have permissions in Azure to create and configure resources such as virtual machines
(VMs) and virtual networks (VNets).
The steps that allow you to securely access Fabric from private endpoints are:
5. In the editor, create a Fabric resource using the following ARM template, where
<tenant-object-id> is your Microsoft Entra tenant ID. See How to find your
Microsoft Entra tenant ID.
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": [
    {
      "type": "Microsoft.PowerBI/privateLinkServicesForPowerBI",
      "apiVersion": "2020-06-01",
      "name": "<resource-name>",
      "location": "global",
      "properties": {
        "tenantId": "<tenant-object-id>"
      }
    }
  ]
}
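Once saved to a file, the template deploys like any other ARM template, for example with Azure PowerShell; the file name is a placeholder, and test-PL is the resource group created in step 2:

# Deploy the private link service template into the resource group.
New-AzResourceGroupDeployment `
    -ResourceGroupName "test-PL" `
    -TemplateFile "./fabric-private-link.json"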
If you're using an Azure Government cloud for Power BI, location should be the
region name of the tenant. For example, if the tenant is in US Gov Texas, you
should put "location": "usgovtexas" in the ARM template. The list of Power BI US
Government regions can be found in the Power BI for US government article.
Under Project details, configure the following settings:
Resource group - Select Create new. Enter test-PL as the name. Select OK.
Region
7. On the review screen, select Create to accept the terms and conditions.
Step 3. Create a virtual network
The following procedure creates a virtual network with a resource subnet, an Azure
Bastion subnet, and an Azure Bastion host.
The number of IP addresses your subnet will need is the number of capacities on your
tenant plus five. For example, if you're creating a subnet for a tenant with seven
capacities, you'll need twelve IP addresses.
3. On the Basics tab of Create virtual network, enter or select the following
information:
Region - Select the region where you'll initiate the connection to Fabric.
4. Select Next to proceed to the Security tab. You can leave as default or change
based on business need.
5. Select Next to proceed to the IP Addresses tab. You can leave as default or change
based on business need.
6. Select Save.
7. Select Review + create at the bottom of the screen. When validation passes, select
Create.
1. In the Azure portal, go to Create a resource > Compute > Virtual machines.
2. On the Basics tab, enter or select the following information:
Virtual machine name - Enter a name for the new virtual machine. Select the info
bubble next to the field name to see important information about virtual machine
names.
Image - Select the image you want. For example, choose Windows Server 2022.
Administrator account
Inbound port rules
4. On the Disks tab, leave the defaults and select Next: Networking.
6. Select Review + create. You're taken to the Review + create page where Azure
validates your configuration.
1. In the search box at the top of the portal, enter Private endpoint. Select Private
endpoints.
3. On the Basics tab of Create a private endpoint, enter or select the following
information:
Region - Select the region you created for your virtual network in Step 3.
The following image shows the Create a private endpoint - Basics window.
4. Select Next: Resource. In the Resource pane, enter or select the following
information:
The following image shows the Create a private endpoint - Resource window.
5. Select Next: Virtual Network. In Virtual Network, enter or select the following
information.
7. Select Create.
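If you prefer scripting to the portal, the same private endpoint can be sketched with Azure PowerShell. The resource names and IDs below are placeholders, and the tenant group ID is an assumption based on the privateLinkServicesForPowerBI resource created in step 1:

# Reference the virtual network and subnet created earlier (placeholder names).
$vnet = Get-AzVirtualNetwork -Name "vnet-fabric" -ResourceGroupName "test-PL"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "default" }

# Connection to the Fabric/Power BI private link service resource.
$connection = New-AzPrivateLinkServiceConnection `
    -Name "fabric-pl-connection" `
    -PrivateLinkServiceId "/subscriptions/<sub-id>/resourceGroups/test-PL/providers/Microsoft.PowerBI/privateLinkServicesForPowerBI/<resource-name>" `
    -GroupId "tenant"

# Create the private endpoint in the subnet.
New-AzPrivateEndpoint `
    -Name "fabric-private-endpoint" `
    -ResourceGroupName "test-PL" `
    -Location "<region>" `
    -Subnet $subnet `
    -PrivateLinkServiceConnection $connection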
3. Select the Connect button, and choose Connect via Bastion from the dropdown
menu.
5. On the Bastion page, enter the required authentication credentials, then click
Connect.
3. You receive a response similar to the following message, and you can see that a
private IP address is returned. The OneLake endpoint and Warehouse endpoint also
return private IPs.
If you disable public access for Fabric, certain constraints on access to Fabric services are
put into place, as described in the next section.
Important
When you turn on Block Public Internet Access, some unsupported Fabric items will be
disabled. For the full list of limitations and considerations, see About private links.
To disable public access for Fabric, sign in to Fabric as an administrator, and navigate
to the Admin portal. Select Tenant settings and scroll to the Advanced networking
section. Enable the toggle button in the Block Public Internet Access tenant setting.
It takes approximately 15 minutes for the system to disable your organization's access
to Fabric from the public Internet.
If Azure Private Link is properly configured and Block public Internet access is enabled:
Fabric is only accessible for your organization from private endpoints, and isn't
accessible from the public Internet.
Traffic from the virtual network targeting endpoints and scenarios that support
private links are transported through the private link.
Traffic from the virtual network targeting endpoints and scenarios that don't
support private links will be blocked by the service, and won't work.
There may be scenarios that don't support private links, which therefore will be
blocked at the service when Block public Internet access is enabled.
If Azure Private Link is properly configured and Block public Internet access is disabled,
Fabric remains accessible both from your private endpoints and from the public internet.
The following video shows how to connect a mobile device to Fabric, using private
endpoints:
Note
This video might use earlier versions of Power BI Desktop or the Power BI service.
https://www.youtube-nocookie.com/embed/-3yFtlZBpqs
Related content
About private links
Service tags
You can use Azure service tags to enable connections to and from Microsoft Fabric. In
Azure, a service tag is a defined group of IP addresses that is automatically managed, as
a group, to minimize the complexity of updates or changes to network security rules.
You can refer to the PowerBI tag; Microsoft Fabric currently doesn't support regional
service tags or a breakdown of IP ranges by region.
Related content
Private endpoints
Azure IP Ranges and Service Tags – Public Cloud
Conditional Access
The Conditional Access feature in Microsoft Entra ID offers several ways enterprise
customers can secure apps in their tenants, including:
Multifactor authentication
Allowing only Intune enrolled devices to access specific services
Restricting user locations and IP ranges
For more information on the full capabilities of Conditional Access, see the article
Microsoft Entra Conditional Access documentation.
Configure a single, common, conditional access policy for the Power BI Service,
Azure Data Explorer, Azure SQL Database, and Azure Storage. Having a single,
common policy significantly reduces unexpected prompts that might arise from
different policies being applied to downstream services, and the consistent security
posture provides the best user experience in Microsoft Fabric and its related
products.
The products to include in the policy are:
Power BI Service
Azure Data Explorer
Azure SQL Database
Azure Storage
If you create a restrictive policy (such as one that blocks access for all apps except
Power BI), certain features, such as dataflows, won't work.
Note
If you already have a conditional access policy configured for Power BI, be sure to
include the other products listed above in your existing Power BI policy, otherwise
conditional access may not operate as intended in Fabric.
The following steps show how to configure a conditional access policy for Microsoft
Fabric.
1. Sign in to the Azure portal using an account with global administrator permissions.
2. Select Microsoft Entra ID.
3. On the Overview page, choose Security from the menu.
4. On the Security | Getting started page, choose Conditional Access.
5. On the Conditional Access | Overview page, select +Create new policy.
6. Provide a name for the policy.
7. Under Assignments, select the Users field. Then, on the Include tab, choose Select
users and groups, and then check the Users and groups checkbox. The Select
users and groups pane opens, and you can search for and select a Microsoft Entra
user or group for conditional access. When done, click Select.
8. Place your cursor in the Target resources field and choose Cloud apps from the
drop-down menu. Then, on the Include tab, choose Select apps and place your
cursor in the Select field. In the Select side pane that appears, find and select
Power BI Service, Azure Data Explorer, Azure SQL Database, and Azure Storage.
When you've selected all four items, close the side pane by clicking Select.
9. Under Access controls, put your cursor in the Grant field. In the Grant side pane
that appears, configure the policy you want to apply, and then click Select.
10. Set the Enable policy toggle to On, then select Create.
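If you prefer to script this configuration, conditional access policies can also be created through the Microsoft Graph API. The following Python sketch is a minimal, hedged example: the group object ID and the four application IDs are placeholders you must replace with values from your tenant, and the token is assumed to carry the Policy.ReadWrite.ConditionalAccess permission.
Python
import requests

token = "<access token with Policy.ReadWrite.ConditionalAccess>"  # placeholder

policy = {
    "displayName": "Common policy for Fabric and related products",
    "state": "enabled",
    "conditions": {
        "users": {"includeGroups": ["<group-object-id>"]},  # placeholder
        "applications": {
            "includeApplications": [
                "<Power BI Service app ID>",     # placeholders; look up the
                "<Azure Data Explorer app ID>",  # first-party application IDs
                "<Azure SQL Database app ID>",   # in your tenant
                "<Azure Storage app ID>",
            ]
        },
    },
    # Example control: require multifactor authentication.
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token}"},
    json=policy,
)
resp.raise_for_status()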
Next steps
Microsoft Entra Conditional Access documentation
This article contains the allowlist of the Microsoft Fabric URLs required for interfacing
with Fabric workloads. For the Power BI allowlist, see Add Power BI URLs to your
allowlist.
The URLs are divided into two categories: required and optional. The required URLs are
necessary for the service to work correctly. The optional URLs are used for specific
features that you might not use. To use Fabric, you must be able to connect to the
endpoints marked required in the tables in this article, and to any endpoints marked
required on the linked sites. If the link to an external site refers to a specific section, you
only need to review the endpoints in that section. You can also add endpoints that are
marked optional to allowlists for specific functionality to work.
Fabric requires only TCP Port 443 to be opened for the listed endpoints.
The Endpoint column lists domain names and links to external sites, which contain
further endpoint information.
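As a quick, unofficial sanity check that outbound TCP 443 is open from a given machine, you could probe a few of the endpoints with a short script. The two host names below are examples only, not the full required list.
Python
import socket

endpoints = ["onelake.dfs.fabric.microsoft.com", "api.fabric.microsoft.com"]  # examples only
for host in endpoints:
    try:
        socket.create_connection((host, 443), timeout=5).close()
        print(f"{host}: TCP 443 reachable")
    except OSError as err:
        print(f"{host}: blocked or unreachable ({err})")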
OneLake
Pipeline
Purpose | Endpoint | Port
For inbound connections | No specific endpoints, other than the customer's data store endpoints required in pipelines and behind the firewall | N/A
Lakehouse
Notebook
Spark
Purpose | Endpoint | Port
Inbound connections (library management for Conda) | local static endpoints for condaPackages | N/A
Data Warehouse
Data Science
Purpose | Endpoint | Port
Inbound connections (library management for Conda) | local static endpoints for condaPackages | N/A
KQL Database
https://*.z[0-9].kusto.fabric.microsoft.com
Eventstream
Related content
Add Power BI URLs to allowlist
This article contains the allowlist of the Power BI URLs required for interfacing with
Power BI. For the Microsoft Fabric allowlist, see Add Fabric URLs to your allowlist.
The Power BI service requires internet connectivity. The endpoints listed in the following
tables should be reachable for customers who use the Power BI service. All endpoints in
the Power BI service support HTTP/2.
To use the Power BI service, you must be able to connect to the endpoints marked
required in the tables in this article, and to any endpoints marked required on the
linked sites. If the link to an external site refers to a specific section, you only need to
review the endpoints in that section.
You can also add endpoints that are marked optional to allowlists for specific
functionality to work.
The Power BI service requires only TCP Port 443 to be opened for the listed endpoints.
Wildcards (*) represent all levels under the root domain. N/A is used when information
isn't available. The Destination(s) column lists domain names and links to external sites,
which contain further endpoint information.
Authentication
Power BI depends on the required endpoints in the Microsoft 365 authentication and
identity sections. To use Power BI, you must be able to connect to the endpoints in the
following linked site.
Purpose | Destination | Port
Required: Authentication and identity | See the documentation for Microsoft 365 Common and Office Online URLs | N/A
Purpose | Destination | Port
Required: Microsoft 365 integration | See the documentation for Microsoft 365 Common and Office Online URLs | N/A
Purpose | Destination | Port
Required: For managing users and viewing audit logs | See the documentation for Microsoft 365 Common and Office Online URLs | N/A
Getting data
To get data from specific data sources, such as OneDrive, you must be able to connect
to the endpoints in the following table. Access to other internet domains and URLs
might be required for specific data sources that your organization uses.
Purpose | Destination | Port
Optional: Import files from OneDrive | See the Required URLs and ports for personal OneDrive site | N/A
Optional: PubNub streaming data sources | See the PubNub documentation | N/A
Required: Excel integration | See the documentation for Microsoft 365 Common and Office Online URLs | N/A
Power BI visuals
Power BI depends on certain endpoints to view and access Power BI visuals. You must be
able to connect to the endpoints and linked sites in the following table.
Purpose | Destination | Port
Optional: PowerApps | See the Required services section from the PowerApps system requirements site | N/A
Optional: Visio | See the documentation for Microsoft 365 Common and Office Online URLs, as well as SharePoint Online and OneDrive for work or school | N/A
Purpose | Destination | Port
Required: OneDrive and SharePoint integration | See the documentation for SharePoint Online and OneDrive for Business URLs | N/A
logx.optimizely.com
mscom.demdex.net
tags.tiqcdn.com
Managed virtual networks are virtual networks that are created and managed by
Microsoft Fabric for each Fabric workspace. Managed virtual networks provide network
isolation for Fabric Spark workloads, meaning that the compute clusters are deployed in
a dedicated network and are no longer part of the shared virtual network.
Managed virtual networks also enable network security features such as managed
private endpoints, and private link support for Data Engineering and Data Science items
in Microsoft Fabric that use Apache Spark.
Fabric workspaces that are provisioned with a dedicated virtual network provide you
with value in three ways:
With a managed virtual network you get complete network isolation for the Spark
clusters running your Spark jobs (which allow users to run arbitrary user code)
while offloading the burden of managing the virtual network to Microsoft Fabric.
You don't need to create a subnet for the Spark clusters based on peak load, as
this is managed for you by Microsoft Fabric.
A managed virtual network for your workspace, along with managed private
endpoints, allows you to access data sources that are behind firewalls or otherwise
blocked from public access.
Note
Managed virtual networks are currently not supported in the Switzerland West and
West Central US regions.
A managed virtual network is created for a workspace by enabling Private Link and
then running a Spark job in the workspace. Tenant admins can enable the Private
Link setting in the Admin portal of their Microsoft Fabric tenant.
Once you have enabled the Private Link setting, running the first Spark job
(Notebook or Spark job definition) or performing a Lakehouse operation (for
example, Load to Table, or a table maintenance operation such as Optimize or
Vacuum) will result in the creation of a managed virtual network for the workspace.
Learn more about configuring Private Links for Microsoft Fabric
Related content
About managed private endpoints
How to create managed private endpoints
About private links
Managed private endpoints are a feature that allows secure and private access to data
sources from Fabric Spark workloads.
Managed private endpoints allow Fabric Spark workloads to securely access data
sources without exposing them to the public network or requiring complex
network configurations.
The private endpoints provide a secure way to connect and access the data from
these data sources using items such as notebooks and Spark job definitions.
Microsoft Fabric creates and manages managed private endpoints based on the
inputs from the workspace admin. Workspace admins can set up managed private
endpoints from the workspace settings by specifying the resource ID of the data
source, identifying the target subresource, and providing a justification for the
private endpoint request.
Managed private endpoints support various data sources, such as Azure Storage,
Azure SQL Database and many more.
For more information about supported data sources for managed private endpoints in
Fabric, see Supported data sources.
Managed private endpoints: Managed private endpoints are supported only for
Fabric trial capacity and Fabric capacities F64 or higher.
Region
West Central US
Israel Central
Switzerland West
Italy North
West India
Mexico Central
Qatar Central
Spain Central
Region
West Central US
Switzerland West
Italy North
Qatar Central
West India
France South
Germany North
Japan West
Korea South
South Africa West
UAE Central
Spark job resilience: To prevent Spark job failures or errors, migrate workspaces
with managed private endpoints to Fabric capacity SKUs of F64 or higher.
Related content
Create and use managed private endpoints
Overview of private links in Fabric
Overview of managed virtual networks in Fabric
Users with admin permissions to a Microsoft Fabric workspace can create, view, and
delete managed private endpoints from the Fabric portal through the workspace
settings.
The user can also monitor the status and the approval process of the managed
private endpoints from the Network security section of the workspace settings.
The user can access the data sources using the private endpoint name from the
Fabric Spark workloads.
3. When the managed private endpoint has been provisioned, the Activation status
changes to Succeeded.
In addition, the request for private endpoint access is sent to the data source.
The data source admins are notified on the Azure portal resource pages for their
data sources. There they'll see a pending access request with the request message.
Taking SQL Server as an example, users can navigate to the Azure portal and search for
the "SQL Server" resource.
1. On the Resource page, select Networking from the navigation menu and then
select the Private Access tab.
2. Data source administrators should be able to view the active private endpoint
connections and new connection requests.
3. Admins can either Approve or Reject the request, providing a business justification.
4. Once the request has been approved or rejected by the data source admin, the
status is updated in the Fabric workspace settings page upon refresh.
5. When the status has changed to approved, the endpoint can be used in notebooks
or Spark job definitions to access the data stored in the data source from Fabric
workspace.
This guide provides code samples to help you get started in your own notebooks to
access data from data sources such as SQL DB through managed private endpoints.
Prerequisites
1. Access to the data source. This example looks at Azure SQL Server and Azure SQL
Database.
3. Navigate to the Azure SQL Server's resource page in the Azure portal and select
the Properties menu. Copy the Resource ID for the SQL Server that you would like
to connect to from Microsoft Fabric.
4. Using the steps listed in Create a managed private-endpoint, create the managed
private endpoint from the Fabric Network security settings page.
5. Once the data source administrator of the SQL server has approved the new
private endpoint connection request, you should be able to use the newly created
Managed Private Endpoint.
3. Now, in the notebook, by specifying the name of the SQL database and its
connection properties, you can connect through the managed private endpoint
connection that's been set up to read the tables in the database and write them to
your lakehouse in Microsoft Fabric.
Python
from pyspark.sql import SparkSession

serverName = "<server_name>.database.windows.net"
database = "<database_name>"
dbPort = 1433
dbUserName = "<username>"
dbPassword = "<db password or reference from Key Vault>"

spark = SparkSession.builder \
    .appName("Example") \
    .config("spark.jars.packages", "com.microsoft.azure:azure-sqldb-spark:1.0.2") \
    .config("spark.sql.catalogImplementation", "com.microsoft.azure.synapse.spark") \
    .config("spark.sql.catalog.testDB", "com.microsoft.azure.synapse.spark") \
    .config("spark.sql.catalog.testDB.spark.synapse.linkedServiceName", "AzureSqlDatabase") \
    .config("spark.sql.catalog.testDB.spark.synapse.linkedServiceName.connectionString",
            f"jdbc:sqlserver://{serverName}:{dbPort};database={database};user={dbUserName};password={dbPassword}") \
    .getOrCreate()

jdbcURL = "jdbc:sqlserver://{0}:{1};database={2}".format(serverName, dbPort, database)
connection = {
    "user": dbUserName,
    "password": dbPassword,
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
}

# You can also specify a custom path for the table location:
# df.write.mode("overwrite").format("delta").option("path",
#     "abfss://yourlakehouse.dfs.core.windows.net/Employee").saveAsTable("Employee")
Now that the connection has been established, the next step is to create a data frame
that reads the table in the SQL database.
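As a minimal sketch of that step (dbo.Employee is a placeholder table name, and the lakehouse is assumed to be the notebook's default):
Python
# Read the source table over the managed private endpoint connection and
# land it in the lakehouse as a Delta table.
df = spark.read.jdbc(url=jdbcURL, table="dbo.Employee", properties=connection)
df.write.mode("overwrite").format("delta").saveAsTable("Employee")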
The following table lists the resource ID formats for the data sources that managed
private endpoints support:

Service | Resource ID format
Cognitive Services | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.CognitiveServices/accounts/{resource-name}
Azure Databricks | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Databricks/workspaces/{workspace-name}
Azure Database for MariaDB | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DBforMariaDB/servers/{server-name}
Azure Database for MySQL | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DBforMySQL/servers/{server-name}
Azure Database for PostgreSQL | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DBforPostgreSQL/servers/{server-name}
Azure Cosmos DB for MongoDB | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DocumentDB/databaseAccounts/{account-name}
Azure Cosmos DB for NoSQL | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DocumentDB/databaseAccounts/{account-name}
Azure Monitor Private Link Scopes | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Insights/privateLinkScopes/{scope-name}
Azure Machine Learning | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.MachineLearningServices/workspaces/{workspace-name}
Microsoft Purview | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Purview/accounts/{account-name}
Azure Synapse Analytics | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Synapse/workspaces/{workspace-name}
Azure Synapse Analytics (Artifacts) | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Synapse/workspaces/{workspace-name}
Azure Functions | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Web/sites/{function-app-name}
Related content
About managed private endpoints in Fabric
About private links in Fabric
Overview of managed virtual networks in Fabric
Workspace identities can be created in the workspace settings of workspaces that are
associated with a Fabric capacity. A workspace identity is automatically assigned the
workspace contributor role and has access to workspace items.
When you create a workspace identity, Fabric creates a service principal in Microsoft
Entra ID to represent the identity. An accompanying app registration is also created.
Fabric automatically manages the credentials associated with workspace identities,
thereby preventing credential leaks and downtime due to improper credential handling.
Note
Fabric workspace identity is generally available. You can only create a workspace
identity in F64 or higher capacities. For information about buying a Fabric
subscription, see Buy a Microsoft Fabric subscription.
While Fabric workspace identities share some similarities with Azure managed identities,
their lifecycle, administration, and governance are different. A workspace identity has an
independent lifecycle that is managed entirely in Fabric. A Fabric workspace can
optionally be associated with an identity. When the workspace is deleted, the identity
gets deleted. The name of the workspace identity is always the same as the name of the
workspace it's associated with.
When the workspace identity has been created, the tab displays the workspace identity
details and the list of authorized users. The parts of the workspace identity
configuration are described in the following sections.
Identity details
Detail | Description
Name | The workspace identity name. The workspace identity name is the same as the workspace name.
ID | The workspace identity GUID, a unique identifier for the identity.
Role | The workspace role assigned to the identity. Workspace identities are automatically assigned the contributor role upon creation.
State | The state of the identity. Possible values: Active, Inactive, Deleting, Unusable, Failed, DeleteFailed
Authorized users
For information, see Access control.
Delete a workspace identity
When an identity is deleted, Fabric items relying on the workspace identity for trusted
workspace access or authentication will break. Deleted workspace identities cannot be
restored.
Access control
Workspace identity can be created and deleted by workspace admins. The workspace
identity has the workspace contributor role on the workspace.
Application Administrators or users with higher roles can view, modify, and delete the
service principal and app registration associated with the workspace identity in Azure.
Warning
Modifying or deleting the service principal or app registration in Azure is not
recommended, as it will cause Fabric items relying on workspace identity to stop
working.
Enterprise applications
The application associated with the workspace identity can be seen in Enterprise
Applications in the Azure portal. Fabric Identity Management app is its configuration
owner.
Warning
Modifications to the application made here will cause the workspace identity to
stop working.
The audit logs and sign-in logs for this identity can also be viewed from its
Enterprise application page.
App registrations
The application associated with the workspace identity can be seen under App
registrations in the Azure portal. No modifications should be made there, as this will
cause the workspace identity to stop working.
Advanced scenarios
The following sections describe scenarios involving workspace identities that might
occur.
Deleting the workspace
When a workspace is deleted, its workspace identity is deleted as well. If the workspace
is restored after deletion, the workspace identity is not restored. If you want the
restored workspace to have a workspace identity, you must create a new one.
Renaming the workspace
When a workspace is renamed, the workspace identity is also renamed to match the
workspace name. However, its Microsoft Entra application and service principal remain
the same. Note that there can be multiple application and app registration objects with
the same name in a tenant.
If you run into issues the first time you create a workspace identity in your tenant,
try the following steps:
1. If the workspace identity state is failed, wait for an hour and then delete the
identity.
2. After the identity has been deleted, wait 5 minutes and then create the
identity again.
Related content
Trusted workspace access
Fabric identities
Fabric allows you to access firewall-enabled Azure Data Lake Storage (ADLS) Gen2
accounts in a secure manner. Fabric workspaces that have a workspace identity can
securely access ADLS Gen2 accounts with public network access enabled from selected
virtual networks and IP addresses. You can limit ADLS Gen2 access to specific Fabric
workspaces.
Fabric workspaces that access a storage account with trusted workspace access need
proper authorization for the request. Authorization is supported with Microsoft Entra
credentials for organizational accounts or service principals. To find out more about
resource instance rules, see Grant access from Azure resource instances.
To limit and protect access to firewall-enabled storage accounts from certain Fabric
workspaces, you can set up a resource instance rule that allows access from specific
Fabric workspaces.
Note
Trusted workspace access is generally available. Fabric workspace identity can only
be created in workspaces associated with a Fabric capacity (F64 or higher). For
information about buying a Fabric subscription, see Buy a Microsoft Fabric
subscription.
Use the T-SQL COPY statement to ingest data into your Warehouse from a firewall-
enabled ADLS Gen2 account that has trusted workspace access enabled.
2. Choose Build your own template in the editor. For a sample ARM template that
creates a resource instance rule, see ARM template sample.
3. Create the resource instance rule in the editor. When done, choose Review +
Create.
4. On the Basics tab that appears, specify the required project and instance details.
When done, choose Review + Create.
5. On the Review + Create tab that appears, review the summary and then select
Create. The rule will be submitted for deployment.
Note
Resource instance rules for Fabric workspaces can only be created through
ARM templates. Creation through the Azure portal is not supported.
The subscriptionId "00000000-0000-0000-0000-000000000000" must be used
for the Fabric workspace resourceId.
You can get the workspace id for a Fabric workspace through its address bar
URL.
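As a small illustration (not an official tool), the workspace GUID is the path segment that follows /groups/ in the Fabric portal URL; the URL below is a placeholder:
Python
url = "https://app.fabric.microsoft.com/groups/<workspace-id>/list"  # placeholder
workspace_id = url.split("/groups/")[1].split("/")[0]
print(workspace_id)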
Here's an example of a resource instance rule that can be created through ARM
template. For a complete example, see ARM template sample.
JSON
"resourceAccessRules": [
    {
        "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabric/providers/Microsoft.Fabric/workspaces/b2788a72-eef5-4258-a609-9b1c3e454624"
    }
]
This configuration isn't recommended, and support might be discontinued in the future.
We recommend that you use resource instance rules to grant access to specific
resources.
Who can configure Storage accounts for trusted service
access?
A Contributor on the storage account (an Azure RBAC role) can configure resource
instance rules or the trusted service exception.
You can create a new ADLS shortcut in a Fabric Lakehouse to start analyzing your
data with Spark, SQL, and Power BI.
You can create a data pipeline that leverages trusted workspace access to directly
access a firewall-enabled ADLS Gen2 account.
You can use a T-SQL Copy statement that leverages trusted workspace access to
ingest data into a Fabric warehouse.
Prerequisites
A Fabric workspace associated with a Fabric capacity. See Workspace identity.
Create a workspace identity associated with the Fabric workspace.
The user account or service principal used for creating the shortcut should have
Azure RBAC roles on the storage account. The principal must have a Storage Blob
Data Contributor, Storage Blob Data Owner, or Storage Blob Data Reader role at
the storage account scope, or a Storage Blob Delegator role at the storage account
scope in addition to a Storage Blob Data Reader role at the container scope.
Configure a resource instance rule for the storage account.
Steps
1. Start by creating a new shortcut in a Lakehouse.
5. The lakehouse shortcut is created, and you should be able to preview storage data
in the shortcut.
With OneCopy in Fabric, you can access your OneLake shortcuts with trusted access
from all Fabric workloads.
Spark: You can use Spark to access data from your OneLake shortcuts. When
shortcuts are used in Spark, they appear as folders in OneLake, so you just need to
reference the folder name to access the data. You can use the OneLake shortcut to
storage accounts with trusted workspace access in Spark notebooks (see the sketch
after this list).
SQL endpoint: Shortcuts created in the "Tables" section of your lakehouse are also
available in the SQL endpoint. You can open the SQL endpoint and query your data
just like any other table.
Pipelines: Data pipelines can access managed shortcuts to storage accounts with
trusted workspace access. Data pipelines can be used to read from or write to
storage accounts through OneLake shortcuts.
Semantic models and reports: The default semantic model associated with a
Lakehouse SQL endpoint can read managed shortcuts to storage accounts with
trusted workspace access. To see the managed tables in the default semantic
model, go to the SQL endpoint, select Reporting, and choose Automatically
update semantic model.
You can also create new semantic models that reference table shortcuts to storage
accounts with trusted workspace access. Go to the SQL endpoint, select Reporting
and choose New semantic model.
You can create reports on top of the default semantic models and custom semantic
models.
KQL Database: You can also create OneLake shortcuts to ADLS Gen2 in a KQL
database. The steps to create the managed shortcut with trusted workspace access
remain the same.
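As a minimal sketch of the Spark case mentioned above (assuming the lakehouse is set as the notebook's default lakehouse, with a hypothetical shortcut named MyShortcut under Files that points at Parquet data):
Python
# The shortcut appears as an ordinary folder under the lakehouse Files section;
# reference the folder name like any other path.
df = spark.read.parquet("Files/MyShortcut/")
df.show(10)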
Prerequisites
Steps
1. Start by selecting Get Data in a lakehouse.
2. Select New data pipeline. Provide a name for the pipeline and then select Create.
4. Provide the URL of the storage account that has been configured with trusted
workspace access, and choose a name for the connection. For Authentication kind,
choose Organizational account or Service Principal.
5. Select the file that you need to copy into the lakehouse.
6. On the Review + save screen, select Start data transfer immediately. When done,
select Save + Run.
7. When the pipeline status changes from Queued to Succeeded, go to the lakehouse
and verify that the data tables were created.
ARM template sample
JSON
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2023-01-01",
            "name": "<storage account name>",
            "id": "/subscriptions/<subscription id of storage account>/resourceGroups/<resource group name>/providers/Microsoft.Storage/storageAccounts/<storage account name>",
            "location": "<region>",
            "sku": {
                "name": "Standard_RAGRS",
                "tier": "Standard"
            },
            "kind": "StorageV2",
            "properties": {
                "networkAcls": {
                    "resourceAccessRules": [
                        {
                            "tenantId": "<tenantid>",
                            "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabric/providers/Microsoft.Fabric/workspaces/<workspace-id>"
                        }
                    ]
                }
            }
        }
    ]
}
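One way to deploy such a template programmatically is with the Azure SDK for Python (azure-identity and azure-mgmt-resource). The sketch below is illustrative only: the subscription ID, resource group name, and template file name are placeholders, and the template file is assumed to hold the JSON above.
Python
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

with open("template.json") as f:  # the ARM template above, saved locally
    template = json.load(f)

subscription_id = "<subscription id of storage account>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.deployments.begin_create_or_update(
    "<resource group name>",          # placeholder
    "trusted-workspace-access-rule",  # deployment name (arbitrary)
    {"properties": {"template": template, "mode": "Incremental"}},
)
print(poller.result().properties.provisioning_state)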
Related content
Workspace identity
Grant access from Azure resource instances
Trusted access based on a managed identity
Sharing items with guest users in Fabric is similar to sharing items with guest users in
Power BI, except that in Fabric, you can only share items by sharing the workspace.
Explicit sharing of particular items with guest users isn't supported, except for reports,
dashboards, semantic models, and apps.
For more information about Guest user sharing in Power BI, see Distribute Power BI
content to external guest users with Microsoft Entra B2B.
Use Customer Lockbox for Microsoft Azure to control how Microsoft engineers access
your data. In this article you'll learn how Customer Lockbox requests are initiated,
tracked, and stored for later reviews and audits.
After the access request is submitted, the JIT service evaluates the request, considering
factors such as:
Permissions levels
Based on the JIT role, the request may also include an approval from internal Microsoft
approvers. For example, the approver might be the customer support lead or the
DevOps Manager.
When the request requires direct access to customer data, a Customer Lockbox request
is initiated. For example, in cases where remote desktop access to a customer's virtual
machine is needed. Once the Customer Lockbox request is made, it awaits the customer's
approval before access is granted.
These steps describe a Microsoft-initiated Customer Lockbox request for the Microsoft
Fabric service.
2. The email provides a link to Customer Lockbox in the Azure Administration
module. Using the link, the designated approver signs in to the Azure portal to
view any pending Customer Lockbox requests. The request remains in the
customer queue for four days. After that, the access request automatically expires
and no access is granted to Microsoft engineers.
3. To get the details of the pending request, the designated approver can select the
Customer Lockbox request from the Pending Requests menu option.
4. After reviewing the request, the designated approver enters a justification and
selects one of the available options. For auditing purposes, the actions are logged
in the Customer Lockbox logs.
Logs
Customer Lockbox has two types of logs:
Activity logs - Available from the Azure portal. To access the activity logs, in the
Azure portal, select Activity Log. You can filter the results for specific actions.
Audit logs - Available from the Microsoft Purview compliance portal. You can see
the audit logs in the admin portal.
Exclusions
Customer Lockbox requests aren't triggered in the following engineering support
scenarios:
Emergency scenarios that fall outside of standard operating procedures. For
example, a major service outage requires immediate attention to recover or restore
services in an unexpected scenario. These events are rare and usually don't require
access to customer data.
External legal demands for data. For details, see government requests for
data on the Microsoft Trust Center.
Data access
Access to data varies according to the Microsoft Fabric experience your request is for.
This section lists which data the Microsoft engineer can access, after you approve a
Customer Lockbox request.
Power BI - When running the operations listed below, the Microsoft engineer has
access to a few tables linked to your request. Each operation the Microsoft
engineer uses is reflected in the audit logs.
Get refresh history
Delete admin usage dashboard
Delete usage metrics v2 package
Delete admin monitoring folder
Real-Time Analytics - The Real-Time Analytics engineer will have access to the
data in the KQL database that's linked to your request.
Data Engineering - The Data Engineering engineer will have access to the
following Spark logs linked to your request:
Driver logs
Event logs
Executor logs
Data Factory - The Data Factory engineer will have access to data pipeline
definitions linked to your request, if permission is granted.
Related content
Microsoft Purview Customer Lockbox
Microsoft 365 guidance for security & compliance
Security overview
Row-level security (RLS) with Power BI can be used to restrict data access for given users.
Filters restrict data access at the row level, and you can define filters within roles. In the
Power BI service, users with access to a workspace have access to semantic models in
that workspace. RLS only restricts data access for users with Viewer permissions. It
doesn't apply to Admins, Members, or Contributors.
You can configure RLS for data models imported into Power BI with Power BI Desktop.
You can also configure RLS on semantic models that use DirectQuery, such as SQL Server.
For Analysis Services or Azure Analysis Services live connections, you configure row-
level security in the model, not in Power BI. The security option doesn't show up for live
connection semantic models.
Note
You can't define roles within Power BI Desktop for Analysis Services live
connections. You need to do that within the Analysis Services model.
5. Under Tables, select the table to which you want to apply a DAX (Data Analysis
Expression) rule.
6. In the Table filter DAX expression box, enter the DAX expression. This expression
must return a value of true or false. For example: [Entity ID] = "Value" .
Note
You can use username() within this expression. Be aware that username() has
the format of DOMAIN\username within Power BI Desktop. Within the Power
BI service and Power BI Report Server, it's in the format of the user's User
Principal Name (UPN). Alternatively, you can use userprincipalname(), which
always returns the user in the format of their user principal name,
[email protected]. For example, a dynamic rule such as
[UserEmail] = userprincipalname() limits each user to rows that match their
own UPN.
7. After you've created the DAX expression, select the checkmark above the
expression box to validate the expression.
Note
In this expression box, use commas to separate DAX function arguments even
if you're using a locale that normally uses semicolon separators (e.g. French or
German).
8. Select Save.
You can't assign users to a role within Power BI Desktop. You assign them in the Power
BI service. You can enable dynamic security within Power BI Desktop by making use of
the username() or userprincipalname() DAX functions and having the proper
relationships configured.
For more information, see Bidirectional cross-filtering using DirectQuery in Power BI and
the Securing the Tabular BI Semantic Model technical article.
1. In Power BI Desktop, enable the preview by going to File > Options and settings
> Options > Preview features and turning on "Enhanced row-level security editor".
Alternatively, you can use this editor in the Power BI service by editing your data
model there.
4. From the Manage roles window, select New to create a new role.
5. Under Roles, provide a name for the role and press Enter.
6. Under Select tables, select the table you want to apply a row-level security filter to.
7. Under Filter data, use the default editor to define your roles. The expressions
created return a true or false value.
Note
Not all row-level security filters supported in Power BI can be defined using
the default editor. Limitations include expressions that today can only be
defined using DAX, such as dynamic rules based on username() or
userprincipalname(). To define roles using these filters, switch to the DAX
editor.
8. Optionally select Switch to DAX editor to switch to using the DAX editor to define
your role. You can switch back to the default editor by selecting Switch to default
editor. All changes made in either editor interface persist when switching
interfaces when possible.
When defining a role using the DAX editor that can't be defined in the default
editor, if you attempt to switch to the default editor you'll be prompted with a
warning that switching editors may result in some information being lost. To keep
this information, select Cancel and continue only editing this role in the DAX
editor.
9. Select Save.
The View as roles window appears, where you see the roles you've created.
3. You can also select Other user and supply a given user.
It's best to supply the User Principal Name (UPN) because that's what the Power BI
service and Power BI Report Server use.
Within Power BI Desktop, Other user displays different results only if you're using
dynamic security based on your DAX expressions. In this case, you need to include
the username as well as the role.
4. Select OK.
The report renders based on what the RLS filters allow the user to see.
Note
The View as roles feature doesn't work for DirectQuery models with Single
Sign-On (SSO) enabled.
1. In the Power BI service, select the More options menu for a semantic model. This
menu appears when you hover on a semantic model name, whether you select it
from the navigation menu or the workspace page.
2. Select Security.
Security takes you to the Row-Level Security page, where you add members to a role
you created. Contributors (and higher workspace roles) see Security and can assign
users to a role.
Add members
In the Power BI service, you can add a member to the role by typing in the email address
or name of the user or security group. You can't add Groups created in Power BI. You
can add members external to your organization. The following types of groups can be
added:
Distribution Group
Mail-enabled Group
Microsoft Entra Security Group
Note that Microsoft 365 groups aren't supported and can't be added to any roles.
You can also see how many members are part of the role by the number in parentheses
next to the role name, or next to Members.
Remove members
You can remove members by selecting the X next to their name.
In the page header, the role being applied is shown. Test other roles, a combination of
roles, or a specific person by selecting Now viewing as. Here you see important
permissions details pertaining to the individual or role being tested. For more
information about how permissions interact with RLS, see RLS user experience.
Test other reports connected to the semantic model by selecting Viewing in the page
header. You can only test reports located in the same workspace as your semantic
model.
To return to normal viewing, select Back to Row-Level Security.
Note
The Test as role feature doesn't work for DirectQuery models with Single Sign-On
(SSO) enabled. Additionally, not all aspects of a report can be validated with the
Test as role feature, including Q&A visualizations, Quick insights visualizations, and
Copilot.
Within Power BI Desktop, username() will return a user in the format of DOMAIN\User
and userprincipalname() will return a user in the format of [email protected].
Within the Power BI service, username() and userprincipalname() will both return the
user's User Principal Name (UPN). This looks similar to an email address.
If you previously defined roles and rules in the Power BI service, you must re-create
them in Power BI Desktop.
You can define RLS only on the semantic models created with Power BI Desktop. If
you want to enable RLS for semantic models created with Excel, you must convert
your files into Power BI Desktop (PBIX) files first. Learn more.
Service principals can't be added to an RLS role. Accordingly, RLS isn't applied for
apps using a service principal as the final effective identity.
Only Import and DirectQuery connections are supported. Live connections to
Analysis Services are handled in the on-premises model.
The Test as role/View as role feature doesn't work for DirectQuery models with
single sign-on (SSO) enabled.
The Test as role/View as role feature shows only reports from the semantic model's
workspace.
The Test as role/View as role feature doesn't work for paginated reports.
Keep in mind that if a Power BI report references a row with RLS configured, the same
message displays as for a deleted or non-existent field. To these users, it looks like
the report is broken.
FAQ
Question: What if I have previously created roles and rules for a dataset in the Power BI
service? Do they still work if I do nothing?
Answer: No, visuals won't render properly. You have to re-create the roles and rules
within Power BI Desktop and then publish to the Power BI service.
Question: Can I create these roles for Analysis Services data sources?
Answer: Yes, if you imported the data into Power BI Desktop. If you're using a live
connection, you can't configure RLS within the Power BI service. You define RLS in the
Analysis Services model on-premises.
Question: Can I use RLS to limit the columns or measures accessible by my users?
Answer: No, if a user has access to a particular row of data, they can see all the columns
of data for that row. To restrict access to columns and column metadata, consider using
object-level security.
Question: Does RLS let me hide detailed data but give access to data summarized in
visuals?
Answer: No, you secure individual rows of data, but users can always see either the
details or the summarized data.
Question: My data source already has security roles defined (for example SQL Server
roles or SAP BW roles). What's the relationship between these roles and RLS?
Answer: The answer depends on whether you're importing data or using DirectQuery. If
you're importing data into your Power BI dataset, the security roles in your data source
aren't used. In this case, you should define RLS to enforce security rules for users who
connect in Power BI. If you're using DirectQuery, the security roles in your data source
are used. When a user opens a report, Power BI sends a query to the underlying data
source, which applies security rules to the data based on the user's credentials.
Related content
Restrict data access with row-level security (RLS) for Power BI Desktop
Row-level security (RLS) guidance in Power BI Desktop
Power BI implementation planning: Report consumer security planning
RLS for Embedded scenarios for ISVs
Object-level security (OLS)
Article • 04/26/2024
Object-level security (OLS) enables model authors to secure specific tables or columns
from report viewers. For example, a column that includes personal data can be restricted
so that only certain viewers can see and interact with it. In addition, you can also restrict
object names and metadata. This added layer of security prevents users without the
appropriate access levels from discovering business critical or sensitive personal
information like employee or financial records. For viewers that don’t have the required
permission, it's as if the secured tables or columns don't exist.
To create roles on Power BI Desktop semantic models, use external tools such as Tabular
Editor.
2. On the External Tools ribbon, select Tabular Editor. If you don't see the Tabular
Editor button, install the program. When open, Tabular Editor automatically
connects to your model.
3. In the Model view, select the drop-down menu under Roles. The roles you created
in step one will appear.
4. Select the role you want to enable an OLS definition for, and expand the Table
Permissions.
5. Set the permission value for each table or column to one of the following:
None: OLS is enforced, and the table or column is hidden from that role.
Read: The table or column is visible to that role.
6. After you define object-level security for the roles, save your changes.
8. In the Power BI Service, navigate to the Security page by selecting the more
options menu on the semantic model, and assign members or groups to their
appropriate roles.
The OLS rules are now defined. Users without the required permission will receive a
message that the field can't be found for all report visuals using that field.
Considerations and limitations
OLS only applies to Viewers in a workspace. Workspace members assigned Admin,
Member, or Contributor have edit permission for the semantic model and,
therefore, OLS doesn’t apply to them. Read more about roles in workspaces.
Semantic models with OLS configured for one or more table or column objects
aren't supported with these Power BI features:
Q&A visualizations
Quick insights visualizations
Smart narrative visualizations
Excel Data Types gallery
Related content
Object-level security in Azure Analysis Services
Power BI implementation planning: Report consumer security planning
This article describes reliability support in Microsoft Fabric, and both regional resiliency
with availability zones and cross-region recovery and business continuity. For a more
detailed overview of reliability in Azure, see Azure reliability.
Failures can range from software and hardware failures to events such as earthquakes,
floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation
of Azure services. For more detailed information on availability zones in Azure, see
Regions and availability zones.
Azure availability zones-enabled services are designed to provide the right level of
reliability and flexibility. They can be configured in two ways. They can be either zone
redundant, with automatic replication across zones, or zonal, with instances pinned to a
specific zone. You can also combine these approaches. For more information on zonal
vs. zone-redundant architecture, see Recommendations for using availability zones and
regions.
Prerequisites
Fabric currently provides partial availability-zone support in a limited number of
regions. This partial availability-zone support covers experiences (and/or certain
functionalities within an experience).
Experiences such as Data Engineering, Data Science, and Event Streams don't
support availability zones.
Zone availability may or may not be available for Fabric experiences or
features/functionalities that are in preview.
On-premises gateways and large semantic models in Power BI don't support
availability zones.
Data Factory pipelines support availability zones in West Europe, but new or
in-progress pipeline runs might fail in the case of a zone outage.
Supported regions
Fabric makes commercially reasonable efforts to provide availability zone support in
various regions as follows:
Brazil South
Canada Central
Central US
East US
East US 2
South Central US
West US 2
West US 3
France Central
Germany West Central
North Europe
UK South
West Europe
Norway East
Qatar Central
South Africa North
Australia East
Japan East
Southeast Asia
Important
While Microsoft strives to provide uniform and consistent availability zone support,
in some cases of availability-zone failure, Fabric capacities located in Azure regions
with higher customer demand fluctuations might experience higher than normal
latency.
This section describes a disaster recovery plan for Fabric that's designed to help your
organization keep its data safe and accessible when an unplanned regional disaster
occurs. The plan covers the following topics:
Data access after disaster: In a regional disaster scenario, Fabric guarantees data
access, with certain limitations. While the creation or modification of new items is
restricted after failover, the primary focus remains on ensuring that existing data
remains accessible and intact.
Power BI, now a part of Fabric, has a solid disaster recovery system in place and
offers the following features:
Continued services and access after disaster: Even during disruptive events, Power
BI items remain accessible in read-only mode. Items include semantic models,
reports, and dashboards, ensuring that businesses can continue their analysis and
decision-making processes without significant hindrance.
For more information, see the Power BI high availability, failover, and disaster recovery
FAQ
Important
For customers whose home regions don't have an Azure pair region and are
affected by a disaster, the ability to utilize Fabric capacities may be compromised—
even if the data within those capacities is replicated. This limitation is tied to the
home region’s infrastructure, essential for the capacities' operation.
The home region for your organization's tenancy and data storage is set to the billing
address location of the first user that signs up. For further details on tenancy setup, go
to Power BI implementation planning: Tenant setup. When you create new capacities,
your data storage is set to the home region by default. If you wish to change your data
storage region to another region, you'll need to enable Multi-Geo, a Fabric Premium
feature.
Important
Choosing a different region for your capacity doesn't entirely relocate all of your
data to that region. Some data elements still remain stored in the home region. To
see which data remains in the home region and which data is stored in the Multi-
Geo enabled region, see Configure Multi-Geo support for Fabric Premium.
In the case of a home region that doesn't have a paired region, capacities in any
Multi-Geo enabled region may face operational issues if the home region
encounters a disaster, as the core service functionality is tethered to the home
region.
If you select a Multi-Geo enabled region within the EU, it's guaranteed that your
data is stored within the EU data boundary.
To learn how to identify your home region, see Find your Fabric home region.
Role access: Only users with the capacity admin role or higher can use this switch.
Granularity: The granularity of the switch is the capacity level. It's available for both
Premium and Fabric capacities.
Data scope: The disaster recovery toggle specifically addresses OneLake data,
which includes Lakehouse and Warehouse data. The switch does not influence
your data stored outside OneLake.
BCDR continuity for Power BI: While disaster recovery for OneLake data can be
toggled on and off, BCDR for Power BI is always supported, regardless of whether
the switch is on or off.
Frequency: Once you change the disaster recovery capacity setting, you must wait
30 days before you can alter it again. The wait period is in place to maintain
stability and prevent constant toggling.
Note
After turning on the disaster recovery capacity setting, it can take up to one week
for the data to start replicating.
Data replication
When you turn on the disaster recovery capacity setting, cross-region replication is
enabled as a disaster recovery capability for OneLake data. The Fabric platform aligns
with Azure regions to provision the geo-redundancy pairs. However, some regions don't
have an Azure pair region, or the pair region doesn't support Fabric. For these regions,
data replication isn't available. For more information, see Regions with availability zones
and no region pair and Fabric region availability.
Billing
The disaster recovery feature in Fabric enables geo-replication of your data for
enhanced security and reliability. This feature consumes more storage and transactions,
which are billed as BCDR Storage and BCDR Operations respectively. You can monitor
and manage these costs in the Microsoft Fabric Capacity Metrics app, where they appear
as separate line items.
For an exhaustive breakdown of all associated disaster recovery costs to help you plan
and budget accordingly, see OneLake compute and storage consumption.
Phase 1: Prepare
Activate the disaster recovery capacity settings: Regularly review and set the
disaster recovery capacity settings to make sure they meet your protection and
performance needs.
Create data backups: Copy critical data stored outside of OneLake to another
region in a way that aligns to your disaster recovery plan.
Phase 2: Disaster failover
When a major disaster renders the primary region unrecoverable, Microsoft Fabric
initiates a regional failover. Access to the Fabric portal is unavailable until the failover is
complete and a notification is posted on the Microsoft Fabric support page .
The time it takes for failover to complete can vary, although it typically takes less than
one hour. Once failover is complete, here's what you can expect:
Fabric portal: You can access the portal, and read operations such as browsing
existing workspaces and items continue to work. All write operations, such as
creating or modifying a workspace, are paused.
Power BI: You can perform read operations, such as displaying dashboards and
reports. Refreshes, report publish operations, dashboard and report modifications,
and other operations that require changes to metadata aren't supported.
Lakehouse/Warehouse: You can't open these items, but files can be accessed via
OneLake APIs or tools.
Spark Job Definition: You can't open Spark job definitions, but code files can be
accessed via OneLake APIs or tools. Any metadata or configuration will be saved
after failover.
Notebook: You can't open notebooks, and code content won't be saved after the
disaster.
Dataflow Gen2/Pipeline/Eventstream: You can't open these items, but you can use
supported disaster recovery destinations (lakehouses or warehouses) to protect
data.
KQL Database/Queryset: You won't be able to access KQL databases and query
sets after failover. More prerequisite steps are required to protect the data in KQL
databases and query sets.
In a disaster scenario, while the Fabric portal and Power BI are in read-only mode and
other Fabric items are unavailable, you can still access the data stored in OneLake
using APIs or third-party tools, which retain the ability to perform read and write
operations on that data. This ability ensures that critical data remains accessible and
modifiable, and mitigates potential disruption to your business operations.
OneLake data remains accessible through multiple channels:
Azure Storage Explorer: See Integrate OneLake with Azure Storage Explorer
OneLake File Explorer: See Use OneLake file explorer to access Fabric data
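Programmatic access works with any ADLS Gen2-compatible client. As a minimal sketch using the Azure SDK for Python (azure-identity and azure-storage-file-datalake), with placeholder workspace and lakehouse names:
Python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# OneLake exposes an ADLS Gen2-compatible endpoint; the workspace acts as the
# file system (container) and each item is a top-level folder.
service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)
file_system = service.get_file_system_client("MyWorkspace")        # placeholder
for path in file_system.get_paths("MyLakehouse.Lakehouse/Files"):  # placeholder
    print(path.name)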
Recovery steps
1. Create a new Fabric capacity in any region after a disaster. Given the high demand
during such events, we recommend selecting a region outside your primary geo to
increase likelihood of compute service availability. For information about creating a
capacity, see Buy a Microsoft Fabric subscription.
2. Create workspaces in the newly created capacity. If necessary, use the same names
as the old workspaces.
3. Create items with the same names as the ones you want to recover. This step is
important if you use the custom script to recover lakehouses and warehouses.
4. Restore the items. For each item, follow the relevant section in the Experience-
specific disaster recovery guidance to restore the item.
Next steps
Experience-specific disaster recovery guidance
Reliability in Azure
Experience-specific disaster recovery guidance
Article • 11/15/2023
This document provides experience-specific guidance for recovering your Fabric data in
the event of a regional disaster.
Sample scenario
A number of the guidance sections in this document use the following sample scenario
for purposes of explanation and illustration. Refer back to this scenario as necessary.
Let's say you have a capacity C1 in region A that has a workspace W1. If you've turned
on disaster recovery for capacity C1, OneLake data will be replicated to a backup in
region B. If region A faces disruptions, the Fabric service in C1 fails over to region B.
The following image illustrates this scenario. The box on the left shows the disrupted
region. The box in the middle represents the continued availability of the data after
failover, and the box on the right shows the fully covered situation after the customer
acts to restore their services to full function.
2. Create a new W2 workspace in C2, including its corresponding items with same
names as in C1.W1.
4. Follow the dedicated instructions for each component to restore items to their full
function.
Experience-specific recovery plans
The following sections provide step-by-step guides for each Fabric experience to help
customers through the recovery process.
Data Engineering
This guide walks you through the recovery procedures for the Data Engineering
experience. It covers lakehouses, notebooks, and Spark job definitions.
Lakehouse
Lakehouses from the original region remain unavailable to customers. To recover a
lakehouse, customers can re-create it in workspace C2.W2. We recommend two
approaches for recovering lakehouses:
1. Create the lakehouse (for example, LH1) in the newly created workspace C2.W2.
3. To recover the tables and files from the original lakehouse, you need to use the
ABFS path to access the data (see Connecting to Microsoft OneLake). You can use
the code example below (see Introduction to Microsoft Spark Utilities) in the
notebook to get the ABFS paths of files and tables from the original lakehouse.
(Replace C1.W1 with the actual workspace name)
mssparkutils.fs.ls('abfs[s]://<C1.W1>@onelake.dfs.fabric.microsoft.com/
<item>.<itemtype>/<Tables>/<fileName>')
4. Use the following code example to copy tables and files to the newly created
lakehouse.
a. For Delta tables, you need to copy table one at a time to recover in the new
lakehouse. In the case of Lakehouse files, you can copy the complete file
structure with all the underlying folders with a single execution.
b. Reach out to the support team for the timestamp of failover required in the
script.
The following Scala sketch fills in the copy step; the delta log cleanup shown here is a reconstruction and might differ from the original sample:
%%spark
val source = "abfs path to original Lakehouse file or table directory"
val destination = "abfs path to new Lakehouse file or table directory"
val timestamp = 0L // replace with the failover timestamp from Support (epoch ms)
// Copy the table or file directory, then purge delta log entries written after
// the failover timestamp so the table reads consistently.
mssparkutils.fs.cp(source, destination, true)
mssparkutils.fs.ls(s"$destination/_delta_log")
  .filter(f => f.modifyTime > timestamp)
  .foreach(f => mssparkutils.fs.rm(s"$destination/_delta_log/${f.name}", true))
mssparkutils.fs.write(s"$destination/_delta_log/_last_checkpoint", "", true)
5. Once you run the script, the tables will appear in the new lakehouse.
To recover only specific Lakehouse files or tables from the original lakehouse, use Azure
Storage Explorer. Refer to Integrate OneLake with Azure Storage Explorer for detailed
steps. For large data sizes, use Approach 1.
Note
The two approaches described above recover both the metadata and data for
Delta-formatted tables, because the metadata is co-located and stored with the
data in OneLake. For non-Delta formatted tables (e.g. CSV, Parquet, etc.) that are
created using Spark Data Definition Language (DDL) scripts/commands, the user is
responsible for maintaining and re-running the Spark DDL scripts/commands to
recover them.
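Re-running those saved DDL scripts in a notebook attached to the recovered lakehouse re-creates the non-Delta tables. A minimal sketch, assuming a hypothetical CSV-backed table named dim_city whose files were recovered under the lakehouse Files area (all names and paths are illustrative placeholders):

# Re-create a non-Delta (CSV-backed) table by re-running its original DDL.
# dim_city, W2, LH1, and the Files path are hypothetical placeholders.
spark.sql("""
CREATE TABLE IF NOT EXISTS dim_city (CityKey INT, City STRING, StateProvince STRING)
USING CSV
OPTIONS (header 'true')
LOCATION 'abfss://W2@onelake.dfs.fabric.microsoft.com/LH1.Lakehouse/Files/reference/dim_city'
""")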
Notebook
Notebooks from the primary region remain unavailable to customers, and the code in
notebooks isn't replicated to the secondary region. There are two approaches to
recovering notebook code content in the new region.
Approach 1: Use Git integration
The quickest way to recover notebook code is to use Fabric Git integration and
synchronize your notebook with your ADO repo. After the service fails over to another
region, you can use the repo to rebuild the notebook in the new workspace you created.
1. Set up Git integration and select Connect and sync with ADO repo.
a. In the newly created workspace, connect to your Azure ADO repo again.
b. Select the Source control button. Then select the relevant branch of the repo.
Then select Update all. The original notebook will appear.
c. If the original notebook has a default lakehouse, users can refer to the
Lakehouse section to recover the lakehouse and then connect the newly
recovered lakehouse to the newly recovered notebook.
d. Git integration doesn't support syncing files, folders, or notebook snapshots in
the notebook resource explorer.
i. If the original notebook has files or folders in the resource explorer, save
them to your own version control system or local disk, and then re-upload
them from your local disk or cloud drives to the recovered notebook.
ii. If the original notebook has a notebook snapshot, also save the notebook
snapshot to your own version control system or local disk.
For more information about Git integration, see Introduction to Git integration.
Approach 2: Import the notebook from saved code
As a pre-emptive measure, save a copy of your notebook code in your own version
control system or on a local disk. Then, after the disaster:
1. Use the "Import notebook" feature to import the notebook code you want to
recover.
2. After import, go to your desired workspace (for example, "C2.W2") to access it.
3. If the original notebook has a default lakehouse, refer to the Lakehouse section.
Then connect the newly recovered lakehouse (that has the same content as the
original default lakehouse) to the newly recovered notebook.
4. If the original notebook has files or folders in the resource explorer, re-upload the
files or folders saved in the user's version control system.
Spark job definition
You can recover Spark job definition (SJD) items by using Azure Storage Explorer to
copy the code from the original region and then manually reconnecting lakehouse
references after the disaster.
1. Create a new SJD item (for example, SJD1) in the new workspace C2.W2, with the
same settings and configurations as the original SJD item (for example, language,
environment, etc.).
2. Use Azure Storage Explorer to copy Libs, Mains and Snapshots from the original
SJD item to the new SJD item.
3. The code content will appear in the newly created SJD item. You'll need to
manually add the newly recovered lakehouse reference to the job (refer to the
Lakehouse recovery steps) and reenter the original command-line arguments.
For details about Azure Storage Explorer, see Integrate OneLake with Azure Storage
Explorer.
Data Science
This guide walks you through the recovery procedures for the Data Science experience.
It covers ML models and experiments.
ML model and experiment
Data Science items from the primary region remain unavailable to customers, and the
content and metadata in ML models and experiments isn't replicated to the secondary
region. To fully recover them in the new region, save your code content in a version
control system (such as Git) and rerun it after the disaster.
Data Warehouse
This guide walks you through the recovery procedures for the Data Warehouse
experience. It covers warehouses.
Warehouse
Warehouses from the original region remain unavailable to customers. To recover
warehouses, use the following two steps.
1. Create a new interim lakehouse in workspace C2.W2 for the data you'll copy over
from the original warehouse.
2. Populate the warehouse's Delta tables by leveraging the warehouse Explorer and
the T-SQL capabilities (see Tables in data warehousing in Microsoft Fabric).
Note
It's recommended that you keep your Warehouse code (schema, table, view, stored
procedure, function definitions, and security codes) versioned and saved in a safe
location (such as Git) according to your development practices.
1. Create a new interim lakehouse (for example, LH11) in workspace C2.W2.
2. Recover the Delta tables in the interim lakehouse from the original warehouse by
following the Lakehouse recovery steps.
3. Create a new warehouse (for example, WH1) in workspace C2.W2.
4. Connect the interim lakehouse in the Warehouse Explorer.
5. Use T-SQL to re-create and populate each warehouse table from the
corresponding interim lakehouse table. For example:

USE WH1
CREATE TABLE [dbo].[aggregate_sale_by_date_city]
AS
SELECT [Date], [City], [StateProvince], [SalesTerritory],
       [SumOfTotalExcludingTax], [SumOfTaxAmount], [SumOfTotalIncludingTax],
       [SumOfProfit]
FROM [LH11].[dbo].[aggregate_sale_by_date_city]
GO

6. Lastly, change the connection string in applications that use your Fabric warehouse.
Note
For customers who need cross-regional disaster recovery and fully automated
business continuity, we recommend keeping two Fabric Warehouse setups in
separate Fabric regions and maintaining code and data parity by doing regular
deployments and data ingestion to both sites.
Data Factory
Data Factory items from the primary region remain unavailable to customers, and the
settings and configuration in data pipelines or Dataflow Gen2 items won't be replicated
to the secondary region. To recover these items in the event of a regional failure, you'll
need to recreate your Data Integration items in another workspace from a different
region. The following sections outline the details.
Dataflows Gen2
To recover a Dataflow Gen2 item in the new region, export your dataflow as a Power
Query template (PQT) file to a version control system such as Git, and then manually
recover the Dataflow Gen2 content after the disaster.
1. From your Dataflow Gen2 item, in the Home tab of the Power Query editor, select
Export template.
2. In the Export template dialog, enter a name (mandatory) and description (optional)
for this template. When done, select OK.
3. After the disaster, create a new Dataflow Gen2 item in the new workspace "C2.W2".
4. From the current view pane of the Power Query editor, select Import from a Power
Query template.
5. In the Open dialog, browse to your default downloads folder and select the .pqt file
you saved in the previous steps. Then select Open.
6. The template is then imported into your new Dataflow Gen2 item.
Data Pipelines
Customers can't access data pipelines in the event of regional disaster, and the
configurations aren't replicated to the paired region. We recommend building your
critical data pipelines in multiple workspaces across different regions.
Real-Time Analytics
This guide walks you through the recovery procedures for the Real-Time Analytics
experience. It covers KQL databases/querysets and eventstreams.
KQL Database/Queryset
KQL database/queryset users must take proactive measures to protect against a
regional disaster. The following approach ensures that, in the event of a regional
disaster, data in your KQL databases and querysets remains safe and accessible.
Use the following steps to guarantee an effective disaster recovery solution for KQL
databases and querysets.
1. Establish independent replicas: Configure a second, independent KQL database
and queryset in a Fabric capacity located in a different region.
2. Replicate the configuration: Make sure that the configuration of the second
database matches the original:
Tables: Make sure that the table structures and schema definitions are
consistent across the databases.
Mapping: Duplicate any required mappings. Make sure that data sources and
destinations align correctly.
Policies: Make sure that both databases have similar data retention, access,
and other relevant policies.
3. Manage authentication and authorization: For each replica, set up the required
permissions. Make sure that proper authorization levels are established, granting
access to the required personnel while maintaining security standards.
4. Parallel data ingestion: To keep the data consistent and ready in multiple regions,
load the same dataset into each KQL database at the same time as you ingest it.
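As one illustrative, non-authoritative sketch of this dual ingestion, the azure-kusto-ingest Python package can queue the same file for ingestion into each regional database. The ingestion URIs, database, and table names below are hypothetical placeholders, and your own pipeline may differ:

# Queue the same dataset for ingestion into both regional KQL databases.
from azure.kusto.data import KustoConnectionStringBuilder
from azure.kusto.data.data_format import DataFormat
from azure.kusto.ingest import IngestionProperties, QueuedIngestClient

for ingest_uri in ("https://ingest-primary.example.kusto.windows.net",   # hypothetical
                   "https://ingest-replica.example.kusto.windows.net"):  # hypothetical
    kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(ingest_uri)
    client = QueuedIngestClient(kcsb)
    props = IngestionProperties(database="PatientDB", table="Events",
                                data_format=DataFormat.CSV)
    client.ingest_from_file("events.csv", ingestion_properties=props)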
Eventstream
An eventstream is a centralized place in the Fabric platform for capturing, transforming,
and routing real-time events to various destinations (for example, lakehouses, KQL
databases/querysets) with a no-code experience. As long as the destinations support
disaster recovery, eventstreams won't lose data. Therefore, customers should use the
disaster recovery capabilities of those destination systems to guarantee data
availability.
Related information
Microsoft Fabric disaster recovery guide
Fabric end-to-end security scenario
Security is a key aspect of any data analytics solution, especially when it involves
sensitive or confidential data. For this reason, Microsoft Fabric provides a comprehensive
set of security features that enables you to protect your data at rest and in transit, as
well as control access and permissions for your users and applications.
In this article, you'll learn about Fabric security concepts and features that can help you
confidently build your own analytical solution with Fabric.
Background
This article presents a scenario where you're a data engineer who works for a healthcare
organization in the United States. The organization collects and analyzes patient data
that's sourced from various systems, including electronic health records, lab results,
insurance claims, and wearable devices.
You plan to build a lakehouse by using the medallion architecture in Fabric, which
consists of three layers: bronze, silver, and gold.
The bronze layer stores the raw data as it arrives from the data sources.
The silver layer applies data quality checks and transformations to prepare the data
for analysis.
The gold layer provides aggregated and enriched data for reporting and
visualization.
While some data sources are located on your on-premises network, others are behind
firewalls and require secure, authenticated access. There are also some data sources that
are managed in Azure, such as Azure SQL Database and Azure Storage. You need to
connect to these Azure data sources in a way that doesn't expose data to the public
internet.
You've decided to use Fabric because it can securely ingest, store, process, and analyze
your data in the cloud. Importantly, it does so while complying with the regulations of
your industry and policies of your organization.
Because Fabric is software as a service (SaaS), you don't need to provision individual
resources, such as storage or compute resources. All you need is a Fabric capacity.
You need to set up data access requirements. Specifically, you need to ensure that only
you and your fellow data engineers have access to the data in the bronze and silver
layers of the lakehouse. These layers are where you plan to perform data cleansing,
validation, transformation, and enrichment. You also need to restrict access to the data
in the gold layer. Only authorized users, including data analysts and business users,
should have access to the gold layer. They require this access to use the data for various
analytical purposes, such as reporting, machine learning, and predictive analytics. Data
access needs to be further restricted by the role and department of the user.
The Microsoft Entra tenant is an identity security boundary that's under the control of
your IT department. Within this security boundary, the administration of Microsoft Entra
objects (such as user accounts) and the configuration of tenant-wide settings are done
by your IT administrators. Like any SaaS service, Fabric logically isolates tenants. Data
and resources in your tenant can't ever be accessed by other tenants unless you
explicitly authorize them to do so.
When a user signs in and accesses Fabric, the request flows through the following stages:
1. The user opens a browser (or a client application) and signs in to the Fabric portal.
2. The user is immediately redirected to Microsoft Entra ID, where they're required to
authenticate. Authentication verifies that it's the correct person signing in.
3. After authentication succeeds, the web front end receives the user's request and
delivers the front-end (HTML and CSS) content from the nearest location. It also
routes the request to the metadata platform and back-end capacity platform.
4. The metadata platform, which resides in your tenant's home region, stores your
tenant's metadata, such as workspaces and access controls. This platform ensures
that the user is authorized to access the relevant workspaces and Fabric items.
5. The back-end capacity platform performs compute operations and stores your
data. It's located in the capacity region. When a workspace is assigned to a Fabric
capacity, all data that resides in the workspace, including the data lake OneLake, is
stored and processed in the capacity region.
The metadata platform and the back-end capacity platform each run in secured virtual
networks. These networks expose a series of secure endpoints to the internet so that
they can receive requests from users and other services. Apart from these endpoints,
services are protected by network security rules that block access from the public
internet.
When users sign in to Fabric, you can enforce other layers of protection so that your
tenant is only accessible to certain users, and only when other conditions, like network
location and device compliance, are met. This layer of protection is called inbound
protection.
In this scenario, you're responsible for sensitive patient information in Fabric. So, your
organization has mandated that all users who access Fabric must perform multifactor
authentication (MFA), and that they must be on the corporate network—just securing
user identity isn't enough.
Your organization also provides flexibility for users by allowing them to work from
anywhere and to use their personal devices. Because Microsoft Intune supports bring-
your-own-device (BYOD), you enroll approved user devices in Intune.
Further, you need to ensure that these devices comply with the organization policies.
Specifically, these policies require that devices can only connect when they have the
latest operating system installed and the latest security patches. You set up these
security requirements by using Microsoft Entra Conditional Access.
Conditional Access offers several ways to secure your tenant. For example, you can
require multifactor authentication, allow access only from compliant or Intune-enrolled
devices, and restrict access by user, group, or network location.
If you need to lock down your entire Fabric tenant, you can use a virtual network and
block public internet access. Access to Fabric is then only allowed from within that
secure virtual network. You set up this requirement by enabling private links at the
tenant level for Fabric. Doing so ensures that all Fabric endpoints resolve to a private IP
address in your virtual network, including access to all your Power BI reports. (Enabling
private endpoints affects many Fabric items, so you should thoroughly read this article
before enabling them.)
Your organization has some data sources that are located on your on-premises network.
Because these data sources are behind firewalls, Fabric requires secure access. To allow
Fabric to securely connect to your on-premises data source, you install an on-premises
data gateway.
The gateway can be used by Data Factory dataflows and data pipelines to ingest,
prepare, and transform the on-premises data, and then load it to OneLake with a copy
activity. Data Factory supports a comprehensive set of connectors that enable you to
connect to more than 100 different data stores.
You then build dataflows with Power Query, which provides an intuitive experience with
a low-code interface. You use it to ingest data from your data sources, and transform it
by using any of 300+ data transformations. You then build and orchestrate a complex
extract, transform, and load (ETL) process with data pipelines. Your ETL processes can
refresh dataflows and perform many different tasks at scale, processing petabytes of
data.
In this scenario, you already have multiple ETL processes. First, you have some pipelines
in Azure Data Factory (ADF). Currently, these pipelines ingest your on-premises data and
load it into a data lake in Azure Storage by using the self-hosted integration runtime.
Second, you have a data ingestion framework in Azure Databricks that's written in Spark.
Now that you're using Fabric, you simply redirect the output destination of the ADF
pipelines to use the lakehouse connector. And, for the ingestion framework in Azure
Databricks, you use the OneLake APIs that support the Azure Blob Filesystem (ABFS)
driver to integrate OneLake with Azure Databricks. (You could also use the same method
to integrate OneLake with Azure Synapse Analytics by using Apache Spark.)
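For example, here's a minimal sketch of the Databricks side, assuming the cluster is configured to authenticate to OneLake with Microsoft Entra ID; the workspace (W1), lakehouse (LH1), table name, and source path are hypothetical placeholders:

# Write a Spark DataFrame from Azure Databricks into OneLake via the ABFS driver.
# W1, LH1, and the source path are hypothetical placeholders.
df = spark.read.option("header", "true").csv("/mnt/raw/patients")
df.write.format("delta").mode("overwrite").save(
    "abfss://W1@onelake.dfs.fabric.microsoft.com/LH1.Lakehouse/Tables/patients"
)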
You also have some data sources that are in Azure SQL Database. You need to connect
to these data sources by using private endpoints. In this case, you decide to set up a
virtual network (VNet) data gateway and use dataflows to securely connect to your
Azure data and load it into Fabric. With VNet data gateways, you don't have to provision
and manage the infrastructure (as you do with an on-premises data gateway). That's
because Fabric securely and dynamically creates the containers in your Azure Virtual
Network.
If you're developing or migrating your data ingestion framework in Spark, then you can
connect to data sources in Azure securely and privately from Fabric notebooks and jobs
with the help of managed private endpoints. Managed private endpoints can be created
in your Fabric workspaces to connect to data sources in Azure that have blocked public
internet access. They support Azure services that expose private endpoints, such as
Azure SQL Database and Azure Storage. Managed private endpoints are provisioned
and managed in a managed VNet
that's dedicated to a Fabric workspace. Unlike your typical Azure Virtual Networks,
managed VNets and managed private endpoints won't be found in the Azure portal.
That's because they're fully managed by Fabric, and you find them in your workspace
settings.
Because you already have a lot of data stored in Azure Data Lake Storage (ADLS) Gen2
accounts, you now only need to connect Fabric workloads, such as Spark and Power BI,
to it. Also, thanks to OneLake ADLS shortcuts, you can easily connect to your existing
data from any Fabric experience, such as data integration pipelines, data engineering
notebooks, and Power BI reports.
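For instance, once an ADLS shortcut has been created in a lakehouse and that lakehouse is attached as the notebook's default, the shortcut can be read like any other folder. A sketch with a hypothetical shortcut name and folder layout:

# Read Parquet data through an ADLS Gen2 shortcut in the lakehouse Files area.
# adls_sales_shortcut and the subfolder are hypothetical placeholders.
df = spark.read.parquet("Files/adls_sales_shortcut/2024")
df.show(5)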
Fabric workspaces that have a workspace identity can securely access ADLS Gen2
storage accounts, even when you've disabled public network access. That's made possible
by trusted workspace access. It allows Fabric to securely connect to the storage accounts
by using a Microsoft backbone network. That means communication doesn't use the
public internet, which allows you to disable public network access to the storage
account but still allow certain Fabric workspaces to connect to them.
Compliance
You want to use Fabric to securely ingest, store, process, and analyze your data in the
cloud, while maintaining compliance with the regulations of your industry and the
policies of your organization.
Fabric is part of Microsoft Azure Core Services, and it's governed by the Microsoft
Online Services Terms and the Microsoft Enterprise Privacy Statement. While
certifications typically occur after a product launch (Generally Available, or GA),
Microsoft integrates compliance best practices from the outset and throughout the
development lifecycle. This proactive approach ensures a strong foundation for future
certifications, even though they follow established audit cycles. In simpler terms, we
prioritize building compliance in from the start, even when formal certification comes
later.
Fabric is compliant with many industry standards, such as ISO 27001, 27017, 27018, and
27701. Fabric is also HIPAA compliant, which is critical for healthcare data privacy and
security. You can check Appendixes A and B in the Microsoft Azure Compliance
Offerings for detailed insight into which cloud services are in scope for the
certifications. You can also access the audit documentation from the Service Trust Portal
(STP).
Compliance is a shared responsibility. To comply with laws and regulations, cloud service
providers and their customers share responsibility for ensuring that each does their
part. As you consider and evaluate public cloud services, it's critical to understand
the shared responsibility model and which security tasks the cloud provider handles and
which tasks you handle.
Data handling
Because you're dealing with sensitive patient information, you need to ensure that all
your data is sufficiently protected both at rest and in transit.
Encryption at rest provides protection for stored data. Attacks against data
at rest include attempts to obtain physical access to the hardware on which the data is
stored, and then compromise the data on that hardware. Encryption at rest is designed
to prevent an attacker from accessing the unencrypted data by ensuring the data is
encrypted when on disk. Encryption at rest is a mandatory measure required for
compliance with some of the industry standards and regulations, such as the
International Organization for Standardization (ISO) and Health Insurance Portability and
Accountability Act (HIPAA).
All Fabric data stores are encrypted at rest by using Microsoft-managed keys, which
provides protection for customer data and also system data and metadata. Data is never
persisted to permanent storage while in an unencrypted state. With Microsoft-managed
keys, you benefit from the encryption of your data at rest without the risk or cost of a
custom key management solution.
Data is also encrypted in transit. All inbound traffic to Fabric endpoints from the client
systems enforces a minimum of Transport Layer Security (TLS) 1.2. It also negotiates
TLS 1.3, whenever possible. TLS provides strong authentication, message privacy, and
integrity (enabling detection of message tampering, interception, and forgery),
interoperability, algorithm flexibility, and ease of deployment and use.
In addition to encryption, network traffic between Microsoft services always routes over
the Microsoft global network, which is one of the largest backbone networks in the
world.
Data residency
As you're dealing with patient data, for compliance reasons your organization has
mandated that data should never leave the United States geographical boundary. Your
organization's main operations take place in New York, and your head office is in Seattle.
While setting up Power BI, your organization has chosen the East US region as the
tenant home region. For your operations, you have created a Fabric capacity in the West
US region, which is closer to your data sources. Because OneLake is available around the
globe, you're concerned whether you can meet your organization's data residency
policies while using Fabric.
In Fabric, you learn that you can create Multi-Geo capacities, which are capacities
located in geographies (geos) other than your tenant home region. You assign your
Fabric workspaces to those capacities. In this case, compute and storage (including
OneLake and experience-specific storage) for all items in the workspace reside in the
multi-geo region, while your tenant metadata remains in the home region. Your data will
only be stored and processed in these two geographies, thus ensuring your
organization's data residency requirements are met.
Access control
You need to ensure that only you and your fellow data engineers have full access to the
data in the bronze and silver layers of the lakehouse. These layers allow you to perform
data cleansing, validation, transformation, and enrichment. You need to restrict access to
the data in the gold layer to only authorized users, such as data analysts and business
users, who can use the data for various analytical purposes, such as reporting and
analytics.
Fabric provides a flexible permission model that allows you to control access to items
and data in your workspaces. A workspace is a securable logical entity for grouping
items in Fabric. You use workspace roles to control access to items in the workspaces.
The four basic roles of a workspace are:
Admin: Can view, modify, share, and manage all content in the workspace,
including managing permissions.
Member: Can view, modify, and share all content in the workspace.
Contributor: Can view and modify all content in the workspace.
Viewer: Can view all content in the workspace, but can't modify it.
In this scenario, you create three workspaces, one for each of the medallion layers
(bronze, silver, and gold). Because you created the workspace, you're automatically
assigned to the Admin role.
You then add a security group to the Contributor role of those three workspaces.
Because the security group includes your fellow engineers as members, they're able to
create and modify Fabric items in those workspaces—however they can't share any
items with anyone else. Nor can they grant access to other users.
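If you'd rather script this assignment than use the portal, one possible approach is the Power BI REST API, which also manages Fabric workspace roles. This is a hedged sketch; the workspace ID, group ID, and token acquisition are placeholders you'd supply yourself:

# Add a security group to a workspace's Contributor role via the REST API.
import requests

workspace_id = "<workspace-guid>"         # hypothetical placeholder
security_group_id = "<entra-group-guid>"  # hypothetical placeholder
token = "<entra-access-token>"            # acquire via MSAL, Azure CLI, etc.

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/users",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "identifier": security_group_id,
        "principalType": "Group",
        "groupUserAccessRight": "Contributor",  # workspace role to grant
    },
)
resp.raise_for_status()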
In the bronze and silver workspaces, you and your fellow engineers create Fabric items
to ingest data, store the data, and process the data. Fabric items comprise a lakehouse,
pipelines, and notebooks. In the gold workspace, you create two lakehouses, multiple
pipelines and notebooks, and a Direct Lake semantic model, which delivers fast query
performance of data stored in one of the lakehouses.
You then give careful consideration to how the data analysts and business users can
access the data they're allowed to access. Specifically, they can only access data that's
relevant to their role and department.
The first lakehouse contains the actual data and doesn't enforce any data permissions in
its SQL analytics endpoint. The second lakehouse contains shortcuts to the first
lakehouse, and it enforces granular data permissions in its SQL analytics endpoint. The
semantic model connects to the first lakehouse. To enforce appropriate data
permissions for the users (so they can only access data that's relevant to their role and
department), you don't share the first lakehouse with the users. Instead, you share only
the Direct Lake semantic model and the second lakehouse that enforces data
permissions in its SQL analytics endpoint.
You set up the semantic model to use a fixed identity, and then implement row-level
security (RLS) in the semantic model to enforce model rules to govern what data the
users can access. You then share only the semantic model with the data analysts and
business users because they shouldn't access the other items in the workspace, such as
the pipelines and notebooks. Lastly, you grant Build permission on the semantic model
so that the users can create Power BI reports. That way, the semantic model becomes a
shared semantic model and a source for their Power BI reports.
Your data analysts need access to the second lakehouse in the gold workspace. They'll
connect to the SQL analytics endpoint of that lakehouse to write SQL queries and
perform analysis. So, you share that lakehouse with them and provide access only to
objects they need (such as tables, rows, and columns with masking rules) in the
lakehouse SQL analytics endpoint by using the SQL security model. Data analysts can
now only access data that's relevant to their role and department and they can't access
the other items in the workspace, such as the pipelines and notebooks.
Scenario: I'm an ETL developer and I want to load large volumes of data to Fabric
at-scale from multiple source systems and tables. The source data is on-premises (or
other cloud) and is behind firewalls and/or in Azure data sources with private endpoints.
Tools: Use on-premises data gateway with data pipelines (copy activity).
Direction: Outbound

Scenario: I'm a power user and I want to load data to Fabric from source systems that I
have access to. Because I'm not a developer, I need to transform the data by using a
low-code interface. The source data is on-premises (or other cloud) and is behind
firewalls.
Tools: Use on-premises data gateway with Dataflow Gen 2.
Direction: Outbound

Scenario: I'm a power user and I want to load data in Fabric from source systems that I
have access to. The source data is in Azure behind private endpoints, and I don't want
to install and maintain on-premises data gateway infrastructure.
Tools: Use a VNet data gateway with Dataflow Gen 2.
Direction: Outbound

Scenario: I'm a developer who can write data ingestion code by using Spark notebooks.
I want to load data in Fabric from source systems that I have access to. The source data
is in Azure behind private endpoints.
Tools: Use Fabric notebooks with Azure private endpoints.
Direction: Outbound

Scenario: I have many existing pipelines in Azure Data Factory (ADF) and Synapse
pipelines that connect to my data sources and load data into Azure. I now want to
modify those pipelines to load data into Fabric.
Tools: Use the Lakehouse connector in existing pipelines.
Direction: Outbound

Scenario: I have a data ingestion framework developed in Spark that connects to my
data sources securely and loads them into Azure. I'm running it on Azure Databricks
and/or Synapse Spark. I want to continue using Azure Databricks and/or Synapse Spark
to load data into Fabric.
Tools: Use OneLake and the Azure Data Lake Storage (ADLS) Gen2 API (Azure Blob
Filesystem driver).
Direction: Outbound

Scenario: I want to ensure that my Fabric endpoints are protected from the public
internet.
Tools: As a SaaS service, the Fabric back end is already protected from the public
internet. For more protection, use Microsoft Entra Conditional Access policies for Fabric
and/or enable private links at tenant level for Fabric and block public internet access.
Direction: Inbound

Scenario: I want to ensure that Fabric can be accessed from only within my corporate
network and/or from compliant devices.
Tools: Use Microsoft Entra Conditional Access policies for Fabric.
Direction: Inbound

Scenario: I want to ensure that anyone accessing Fabric must perform multifactor
authentication.
Tools: Use Microsoft Entra Conditional Access policies for Fabric.
Direction: Inbound

Scenario: I want to lock down my entire Fabric tenant from the public internet and
allow access only from within my virtual networks.
Tools: Enable private links at tenant level for Fabric and block public internet access.
Direction: Inbound
Related content
For more information about Fabric security, see the following resources.