Prepare for Management Activities in Microsoft Fabric

After migrating workloads to Microsoft Fabric, it is essential to plan and implement management activities early. Failure in this area can lead to system outages, security incidents, or performance issues.

Note
This guide extends the general principles of the Cloud Adoption Framework for Azure – Management. The same fundamental principles apply to Microsoft Fabric, adapted to Fabric-specific components such as Lakehouse, Data Warehouse, Notebooks, Pipelines, and Semantic Models.

Minimum Management Goals for Fabric Workloads

For each Fabric workload, at least the following management tasks should be implemented:

Logs and Telemetry

  • Monitoring via Microsoft Fabric Monitoring: Enable logs for pipelines, notebooks, semantic models, dataflows, SQL endpoints, events, and OneLake.
  • Integration with Azure Monitor Logs: For cross-organization correlation and centralized analysis (see the ingestion sketch after this list).
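
As one way to land Fabric telemetry in Azure Monitor Logs, the following sketch pushes custom run records from a notebook into a Log Analytics custom table via the Azure Monitor Logs Ingestion API (azure-monitor-ingestion). The data collection endpoint, DCR immutable ID, stream name, and record schema are placeholders for your own environment.

```python
# Minimal sketch, assuming a data collection endpoint (DCE), a data collection
# rule (DCR), and a custom table already exist; all IDs below are placeholders.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

ENDPOINT = "https://<your-dce>.ingest.monitor.azure.com"   # data collection endpoint
DCR_ID = "dcr-00000000000000000000000000000000"            # DCR immutable ID
STREAM = "Custom-FabricNotebookRuns_CL"                    # custom stream/table name

client = LogsIngestionClient(endpoint=ENDPOINT, credential=DefaultAzureCredential())

# Upload one custom record describing a notebook run; the schema is illustrative.
client.upload(
    rule_id=DCR_ID,
    stream_name=STREAM,
    logs=[{
        "TimeGenerated": datetime.now(timezone.utc).isoformat(),
        "Workspace": "sales-analytics",
        "Item": "nb_load_orders",
        "Status": "Succeeded",
        "DurationSeconds": 42,
    }],
)
```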

Alerts

  • Configure Fabric-based alerts in the Power BI Admin Portal, Fabric Monitoring, and Azure Monitor (e.g., via Kusto queries against Log Analytics).
  • Alerts for failed pipeline runs, expired tokens, data volume anomalies, capacity overload, and similar conditions (see the query sketch after this list).
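
A practical starting point for such alerts is a Kusto query against Log Analytics that surfaces failed runs; the same query can then back a scheduled alert rule in Azure Monitor. The sketch below uses the azure-monitor-query package; the table and column names are assumptions that depend on which diagnostics your workspace actually ingests.

```python
# Minimal sketch: list failed pipeline runs from a (hypothetical) custom table
# in Log Analytics; adjust table and column names to your own ingestion setup.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "<log-analytics-workspace-guid>"  # placeholder

QUERY = """
FabricPipelineRuns_CL
| where Status_s == "Failed"
| summarize FailedRuns = count() by PipelineName_s, bin(TimeGenerated, 15m)
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=24))

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(dict(zip(table.columns, row)))
```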

Backup and Restore

  • Backup strategies for:
    • OneLake data: via snapshot backups or geo-redundant storage.
    • Semantic models: version control and deployment via Git integration.
    • Pipelines and notebooks: export to code repositories, e.g., via Deployment Pipelines (see the export sketch after this list).
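
As one way to get pipeline, notebook, and semantic model definitions into a code repository, the sketch below exports item definitions through the Fabric REST API (List Items and Get Item Definition). The workspace ID and output folder are placeholders, and long-running Get Item Definition responses (HTTP 202) are skipped to keep the example short.

```python
# Minimal sketch: dump Fabric item definitions to a local folder that can be
# committed to Git as a backup. Workspace ID and output path are placeholders.
import base64
import pathlib

import requests
from azure.identity import DefaultAzureCredential

WORKSPACE_ID = "<fabric-workspace-guid>"
OUT_DIR = pathlib.Path("fabric-backup")

token = DefaultAzureCredential().get_token("https://api.fabric.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}"}
base = f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"

items = requests.get(f"{base}/items", headers=headers).json()["value"]
for item in items:
    resp = requests.post(f"{base}/items/{item['id']}/getDefinition", headers=headers)
    if resp.status_code != 200:  # 202 = long-running operation, not handled here
        continue
    for part in resp.json()["definition"]["parts"]:
        target = OUT_DIR / item["displayName"] / part["path"]
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(base64.b64decode(part["payload"]))
```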

Business Continuity / Disaster Recovery (BCDR)

  • Deployment across multiple Fabric capacities in separate regions.
  • Securing production data through geo-redundant Fabric capacities (e.g., EU + CH); a simple replication sketch follows this list.
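
For selected critical files, a simple form of cross-region redundancy is to copy them from a lakehouse in the primary workspace to a lakehouse in a secondary-region workspace over OneLake's ADLS Gen2-compatible endpoint, roughly as sketched below. Workspace and lakehouse names are placeholders; for larger volumes, a dedicated copy pipeline or tool is the better fit.

```python
# Minimal sketch: replicate one file between two workspaces via the OneLake
# DFS endpoint. Workspace, lakehouse, and file names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

src = service.get_file_system_client("ws-prod-westeurope") \
             .get_file_client("SalesLakehouse.Lakehouse/Files/orders.parquet")
dst = service.get_file_system_client("ws-dr-switzerlandnorth") \
             .get_file_client("SalesLakehouse.Lakehouse/Files/orders.parquet")

dst.upload_data(src.download_file().readall(), overwrite=True)
```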

Security Monitoring

  • Microsoft Purview integration for monitoring data classification and access.
  • Defender for Cloud for monitoring Fabric capacities and the Microsoft Entra ID resources in use.
  • Access monitoring and MFA enforcement on Microsoft Fabric and Power BI artifacts (a sign-in review sketch follows this list).
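
For access monitoring, recent sign-ins to the Power BI / Fabric service can be reviewed through the Microsoft Graph audit log API, as in the sketch below. The application display name filter and the required Graph permission (AuditLog.Read.All) are assumptions to verify for your tenant.

```python
# Minimal sketch: list recent sign-ins filtered by application display name.
# The filter value and the fields printed are illustrative.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token
resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
    params={"$filter": "appDisplayName eq 'Microsoft Power BI'", "$top": "50"},
    headers={"Authorization": f"Bearer {token}"},
)
for signin in resp.json().get("value", []):
    print(signin["userPrincipalName"], signin["conditionalAccessStatus"], signin["status"]["errorCode"])
```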

Lifecycle Management

  • Automatic pause or shutdown of inactive capacities, where the capacity type supports it (see the sketch after this list).
  • Automatic updates of the Fabric platform are performed by Microsoft but must be integrated into change processes.
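
Where a capacity runs as an Azure resource (F SKU), an idle capacity can be paused from a scheduled job through Azure Resource Manager, roughly as sketched below. Subscription, resource group, capacity name, and the api-version are placeholders to check against the current Microsoft.Fabric/capacities API.

```python
# Minimal sketch: suspend a Fabric capacity via ARM; all identifiers and the
# api-version are placeholders for your own environment.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, CAPACITY = "<subscription-id>", "<resource-group>", "<capacity-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Fabric/capacities/{CAPACITY}/suspend"
)
resp = requests.post(url, params={"api-version": "2023-11-01"},
                     headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()  # 202 Accepted means the suspend operation was started
```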

Tags

  • Compliance with tagging requirements (e.g., environment=prod/dev/test, workload=…); a tagging sketch follows this list.
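
On the underlying Fabric capacity resource, required tags can be applied or corrected via the ARM tags API, for example as sketched below; the resource ID, api-version, and tag values are illustrative.

```python
# Minimal sketch: merge the required tags onto a Fabric capacity resource.
# Resource ID, api-version, and tag values are placeholders.
import requests
from azure.identity import DefaultAzureCredential

RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Fabric/capacities/<capacity-name>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.patch(
    f"https://management.azure.com{RESOURCE_ID}/providers/Microsoft.Resources/tags/default",
    params={"api-version": "2021-04-01"},
    headers={"Authorization": f"Bearer {token}"},
    json={"operation": "Merge", "properties": {"tags": {"environment": "prod", "workload": "sales-analytics"}}},
)
resp.raise_for_status()
```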

Organizational Handover to Operations

To successfully hand over workloads to regular operations, the following measures should be taken:

  • Involvement of Data Product Owners (e.g., Power BI owners, domain-specific data product leads).
  • Coordination with Platform Teams for observability, SIEM integration, and data protection.
  • Role clarification across data mesh roles (e.g., Fabric admin as Platform Owner, deployment lead as Data Product Developer, data steward as Data Product Quality Manager, data owner as Domain Data Owner).
