
Optimize Costs After Microsoft Fabric Migration

After your data workloads have been migrated to Microsoft Fabric, it's important to optimize costs to ensure operational efficiency, maximize your investment, and avoid unnecessary spending. This article outlines tools, strategies, and a phased decommissioning process tailored for Fabric environments.


Optimize Fabric Workloads for Cost

Microsoft Fabric provides integrated visibility into storage, compute, and Fabric capacity usage. After migration, review actual workload performance and usage data to fine-tune compute capacity, refresh schedules, and dataset sizes.
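
For a quick inventory, the sketch below uses the semantic-link (SemPy) package that ships with Fabric notebooks; treat it as a starting point, since column names in the returned DataFrames vary between SemPy versions.

```python
# Sketch for a Fabric Python notebook, using the semantic-link (SemPy)
# package preinstalled in the Fabric Spark runtime. Column names in the
# returned pandas DataFrames vary by SemPy version, so inspect them first.
import sempy.fabric as fabric

datasets = fabric.list_datasets()  # semantic models in the current workspace
items = fabric.list_items()        # all Fabric items (lakehouses, reports, ...)

print(datasets.head())             # spot large or rarely refreshed models
print(items.head())                # spot orphaned or duplicate artifacts
```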

Fabric-Specific Optimization Techniques

| Strategy | Description |
| --- | --- |
| Rightsize Lakehouses | Analyze Lakehouse usage patterns and compact small files or adjust partitioning (see the sketch below) |
| Optimize Refresh Frequency | Use event-based or incremental refresh to reduce compute load |
| Manage Fabric Capacity | Rebalance workloads across capacities and scale up or down as needed |
| Pause Unused Warehouses | Suspend unused SQL analytics endpoints or Warehouse compute |
| Power BI Report Optimization | Identify expensive queries, cache models, and remove unused visuals |
| Unify Dataflows | Refactor complex Power Query transformations into reusable, shared dataflows |
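
As a concrete example of the "Rightsize Lakehouses" row, the following Fabric Spark notebook sketch compacts small files with OPTIMIZE and cleans up unreferenced files with VACUUM. The table name is a placeholder, and the retention window should match your time-travel and recovery requirements.

```python
# Fabric Spark notebook sketch: compact and clean a Lakehouse Delta table.
# "sales_orders" is a placeholder table name in the attached Lakehouse;
# the `spark` session is predefined in Fabric notebooks.
table_name = "sales_orders"

# Compact many small files into fewer, larger ones to cut scan costs.
spark.sql(f"OPTIMIZE {table_name}")

# Remove files no longer referenced by the table, keeping 7 days
# (168 hours) of history for time travel. Shorter windows reduce storage
# but limit point-in-time recovery.
spark.sql(f"VACUUM {table_name} RETAIN 168 HOURS")
```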

Tools for Cost Management

  • Microsoft Fabric Capacity Metrics App: Monitor workload consumption and identify peak usage times.
  • Power BI Admin Portal: View dataset refresh history, storage size, and Premium capacity utilization (a scripted alternative is sketched below).
  • Azure Cost Management + Billing: Use for global visibility when Fabric is part of a broader Azure ecosystem.
  • Usage Metrics in Fabric: Available for each workspace, covering capacity consumption, refresh duration, and query activity.
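
Refresh history can also be pulled programmatically. Below is a minimal sketch against the Power BI REST API's refresh-history endpoint; the workspace and dataset IDs are placeholders, and get_access_token() stands in for whatever Azure AD token acquisition your environment uses (e.g., MSAL).

```python
# Sketch: pull recent refresh history for a semantic model via the
# Power BI REST API. GROUP_ID, DATASET_ID, and get_access_token() are
# placeholders for your own IDs and token acquisition.
import requests

GROUP_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"
token = get_access_token()  # placeholder: returns an AAD bearer token

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
       f"/datasets/{DATASET_ID}/refreshes?$top=20")
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

for refresh in resp.json()["value"]:
    # Long-running or failing refreshes are candidates for incremental refresh.
    print(refresh["status"], refresh.get("startTime"), refresh.get("endTime"))
```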

🔗 Optimize cost in Microsoft Fabric


Decommission Retired Artifacts

Once data is verified and users have fully transitioned to Fabric:

  1. Disable Legacy Refresh Pipelines

    • Turn off on-premises or hybrid data refresh agents (e.g., gateways or scheduled Logic Apps); disabling scheduled refresh on legacy semantic models is sketched after this list.
  2. Archive Legacy Data Sources

    • Export metadata or back up old SQL Server instances or SSAS models if needed.
  3. Retire Classic Workspaces

    • Consolidate into Fabric workspaces and deprecate unused environments.
  4. Communicate Artifact Status

    • Notify owners of deprecated reports, datasets, or notebooks.
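
As referenced in step 1, scheduled refresh on a legacy semantic model can be switched off through the Power BI REST API's refresh-schedule endpoint. A minimal sketch, assuming placeholder IDs and a hypothetical get_access_token() helper:

```python
# Sketch: disable scheduled refresh on a legacy semantic model via the
# Power BI REST API. IDs and get_access_token() are placeholders.
import requests

GROUP_ID = "<legacy-workspace-id>"
DATASET_ID = "<legacy-dataset-id>"
token = get_access_token()  # placeholder: returns an AAD bearer token

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
       f"/datasets/{DATASET_ID}/refreshSchedule")
# The API expects the schedule settings wrapped in a "value" object.
resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"value": {"enabled": False}},
)
resp.raise_for_status()
print(f"Refresh schedule disabled for dataset {DATASET_ID}")
```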

Monitoring Residual Use

Use Fabric’s built-in monitoring to confirm that no activity persists on retired resources; a scripted sweep of the activity log is sketched after this list. Validate:

  • Storage access patterns on Lakehouses or Delta tables
  • SQL queries against deprecated Warehouse endpoints
  • Scheduled pipeline executions (Data Factory or Notebook jobs)
  • Power BI reports still being accessed from legacy sources
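
One way to run this validation is a sweep of the Power BI activity log via the admin REST API, sketched below. It requires Fabric/Power BI admin permissions, the start and end times must fall within a single UTC day, the retired artifact names are placeholders, and the event fields present vary by activity type.

```python
# Sketch: scan one UTC day of the Power BI activity log for access to
# retired artifacts. Requires tenant admin permissions; RETIRED and
# get_access_token() are placeholders.
import requests

RETIRED = {"Legacy Sales Report", "Old Finance Dataset"}  # placeholder names
token = get_access_token()  # placeholder: returns an AAD bearer token
headers = {"Authorization": f"Bearer {token}"}

url = ("https://api.powerbi.com/v1.0/myorg/admin/activityevents"
       "?startDateTime='2025-01-01T00:00:00Z'"
       "&endDateTime='2025-01-01T23:59:59Z'")

# The API returns results page by page via continuationUri.
while url:
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    body = resp.json()
    for event in body.get("activityEventEntities", []):
        # Flag any event touching an artifact that should be dormant.
        if event.get("ItemName") in RETIRED or event.get("DatasetName") in RETIRED:
            print(event.get("Activity"), event.get("UserId"), event.get("ItemName"))
    url = body.get("continuationUri")  # None once all pages are read
```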

Define a Holding Period

Allow a 30–90 day holding window for retired assets and datasets. Retain them as archived backups, or export snapshots to OneLake, for compliance or recovery.
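
A minimal sketch of such a snapshot export from a Fabric Spark notebook, writing a retired Delta table to an archive Lakehouse through its OneLake path; the workspace, Lakehouse, and table names are placeholders.

```python
# Fabric Spark notebook sketch: snapshot a retired Delta table into an
# archive Lakehouse via its OneLake abfss path. Workspace, Lakehouse,
# and table names are placeholders; `spark` is predefined in Fabric.
source_table = "legacy_sales"  # table in the attached Lakehouse
archive_path = (
    "abfss://ArchiveWorkspace@onelake.dfs.fabric.microsoft.com/"
    "Archive.Lakehouse/Tables/legacy_sales_snapshot"
)

# Write a point-in-time copy; overwrite keeps reruns idempotent during
# the 30-90 day holding window.
df = spark.read.table(source_table)
df.write.format("delta").mode("overwrite").save(archive_path)
```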

ℹ️ Always consult your data governance officer regarding minimum retention periods or compliance requirements (e.g., FINMA, GDPR, HIPAA).


Cost Optimization Summary Checklist

  • Dataset refreshes use incremental load or event triggers
  • Legacy data sources are archived and decommissioned
  • Fabric capacity is right-sized and balanced
  • Fabric monitoring and logs show no usage for deprecated assets
  • Users have validated report accuracy post-migration
  • Holding period policies are documented and approved
