Chapter 12 of 12

Performance Optimization

Speed up your Power BI reports — optimize data models, DAX queries, visuals, and use Performance Analyzer.

Meritshot · 39 min read


Why Performance Matters

Report performance directly affects whether people actually use your reports. Research by Power BI engineering teams shows that reports taking longer than 5 seconds to load see a significant drop in user engagement. Slow reports erode trust in the data, encourage workarounds (like exporting data to Excel), and ultimately undermine the value of your BI investment.

The Business Impact of Performance

| Metric | Fast Reports (< 3 seconds) | Slow Reports (> 10 seconds) |
| --- | --- | --- |
| User adoption rate | High | Low — users abandon slow reports |
| Data-driven decisions | Frequent — users explore freely | Rare — users avoid interaction |
| Support tickets | Low | High — "the report is broken" complaints |
| Executive trust | Strong | Eroded — "I'll just use the spreadsheet" |
| Capacity costs | Efficient — queries finish quickly | Wasteful — queries consume resources longer |
| Refresh reliability | Higher — fast refresh within time windows | Lower — refresh may time out |

What Affects Performance

Performance is influenced by several factors across the entire stack:

  1. Data model — Table sizes, column cardinality, data types, relationships
  2. DAX calculations — Measure complexity, iterator usage, filter patterns
  3. Visuals — Number of visuals per page, visual types, cross-filtering
  4. Data source — Query folding, DirectQuery source performance, network latency
  5. Service infrastructure — Capacity size, concurrent users, refresh schedules

This chapter covers optimization strategies for each of these areas.


The Power BI Architecture

Understanding how Power BI processes queries is essential for meaningful optimization.

Storage Engine vs Formula Engine

Power BI's Analysis Services engine has two main components:

| Component | Role | Characteristics |
| --- | --- | --- |
| Storage Engine (SE) | Retrieves data from the in-memory store | Multithreaded, fast, scans compressed columns |
| Formula Engine (FE) | Evaluates DAX expressions | Single-threaded, slower, handles complex logic |

Optimal queries maximize work done by the Storage Engine and minimize work done by the Formula Engine. The Storage Engine is highly optimized for scanning compressed columnar data, while the Formula Engine processes results row by row.

VertiPaq (In-Memory Columnar)

VertiPaq is the in-memory columnar storage engine used by Power BI's Import mode:

  • Data is stored column-by-column, not row-by-row
  • Each column is compressed independently using dictionary encoding and run-length encoding
  • Compression is most effective for low-cardinality columns (fewer unique values)
  • Queries scan only the columns needed, not entire rows
  • Data is stored entirely in RAM for fast access

Key insight: The more unique values a column has (higher cardinality), the larger it is in memory and the slower it is to scan. This principle drives many optimization strategies.
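You can check cardinality directly with a DAX query in DAX Studio or Power BI's DAX query view. A minimal sketch, assuming a Sales table with hypothetical OrderID and Region columns:

```dax
-- Compare distinct-value counts of a high- and a low-cardinality column
-- (table and column names are illustrative)
EVALUATE
ROW (
    "OrderID cardinality", DISTINCTCOUNT ( Sales[OrderID] ),
    "Region cardinality",  DISTINCTCOUNT ( Sales[Region] )
)
```

A column whose distinct count approaches the table's row count (like an order ID) is a prime candidate for removal or replacement with a smaller key.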

How Queries Execute

When a user interacts with a report visual:

  1. The visual generates a DAX query based on the fields, filters, and slicers
  2. The Formula Engine parses the DAX and creates a query plan
  3. The query plan sends Storage Engine queries (xmSQL) to retrieve data
  4. The Storage Engine scans the relevant columns in VertiPaq and returns results
  5. The Formula Engine combines and processes the results
  6. The final result is sent to the visual for rendering

Understanding the Query Plan

A query plan consists of:

  • Logical query plan — What the query needs to compute (high-level)
  • Physical query plan — How the computation will be executed
  • Storage Engine queries — The actual data retrieval operations sent to VertiPaq

Performance issues arise when:

  • Too many Storage Engine queries are generated (measure complexity)
  • The Formula Engine processes too many rows (iterator functions on large tables)
  • Storage Engine queries scan high-cardinality columns (large segments)

Performance Analyzer

Performance Analyzer is a built-in tool in Power BI Desktop that helps you identify performance bottlenecks in your report.

Opening Performance Analyzer

Step 1: Open your report in Power BI Desktop.

Step 2: Navigate to the View tab on the ribbon.

Step 3: Click Performance Analyzer in the Panes group.

Step 4: The Performance Analyzer pane appears on the right side of the canvas.

Recording Performance Data

Step 1: Click Start recording in the Performance Analyzer pane.

Step 2: Interact with your report — change pages, click slicers, apply filters, or click Refresh visuals to record a full page refresh.

Step 3: Performance Analyzer captures timing data for every visual that renders.

Step 4: Click Stop when you have captured enough data.

Interpreting Results

Each visual appears in the Performance Analyzer pane with timing breakdowns:

| Metric | What It Measures | What to Look For |
| --- | --- | --- |
| DAX query | Time to execute the DAX query against the dataset | High values indicate complex measures or large scans |
| Visual display | Time to render the visual in the browser/canvas | High values indicate complex visuals or too many data points |
| Other | Miscellaneous processing (waiting for other visuals, system tasks) | High values indicate congestion from too many visuals |

What the Numbers Mean

| Duration | Assessment | Action |
| --- | --- | --- |
| < 100 ms | Excellent | No action needed |
| 100–500 ms | Good | Acceptable for most scenarios |
| 500–1000 ms | Needs attention | Investigate DAX and model optimization |
| 1000–3000 ms | Poor | Optimize urgently — users will notice |
| > 3000 ms | Critical | Major refactoring needed |

Identifying Bottlenecks

  1. Sort by duration to find the slowest visuals (click the expand arrow on each visual)
  2. Look for visuals where DAX query time dominates — these need DAX or model optimization
  3. Look for visuals where Visual display time dominates — these need visual simplification (fewer data points, simpler visual type)
  4. Look for visuals where Other time dominates — reduce the number of visuals on the page

Exporting Results

Step 1: After recording, click Export in the Performance Analyzer pane.

Step 2: Save the JSON file for detailed analysis.

Step 3: You can analyze this file in Power BI itself or in DAX Studio for deeper investigation.

Copying DAX Queries

Step 1: Expand a visual's entry in Performance Analyzer.

Step 2: Click Copy query next to the DAX query line.

Step 3: Paste the DAX query into DAX Studio or the DAX query view for further analysis and optimization.
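Copied visual queries typically follow a SUMMARIZECOLUMNS-plus-TOPN shape. A simplified sketch of what a bar chart might generate (table, column, and measure names are illustrative; real generated queries contain additional variables and filter arguments):

```dax
DEFINE
    VAR __Core =
        SUMMARIZECOLUMNS ( DimProduct[Category], "Total Sales", [Total Sales] )
EVALUATE
    TOPN ( 1001, __Core, [Total Sales], DESC )
ORDER BY [Total Sales] DESC
```

The TOPN wrapper reflects that visuals cap the number of data points they request, which is why high-cardinality fields still hurt even when only part of the data is displayed.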


Data Model Optimization

The data model is the foundation of report performance. Optimizing it has the highest impact on overall performance.

Remove Unused Columns and Tables

Every column in your model consumes memory and increases model size. Unused columns waste resources.

What to remove:

  • Columns used only during data loading (staging columns)
  • System columns (internal IDs, audit timestamps) that are not used in visuals or measures
  • Duplicate columns across tables
  • Large text columns (descriptions, comments) that are not used for filtering or display
  • Tables that were imported for exploration but are not used in the final model

How to identify unused columns:

Step 1: Use Tabular Editor or DAX Studio to list all columns.

Step 2: Check column references in measures, calculated columns, and relationships.

Step 3: Check visual field usage across all report pages.

Step 4: Remove any column not referenced anywhere.

Before:

Sales table: 25 columns, 10M rows, Model size: 850 MB

After removing 10 unused columns:

Sales table: 15 columns, 10M rows, Model size: 520 MB

Reduce Cardinality

High-cardinality columns (many unique values) compress poorly and increase model size. Strategies to reduce cardinality:

| Strategy | Example | Impact |
| --- | --- | --- |
| Round timestamps to the minute or hour | 08:31:47 → 08:30:00 | Reduces DateTime cardinality dramatically |
| Remove decimal places from non-financial numbers | 3.14159 → 3.14 | Reduces unique values |
| Group low-frequency text values | 500 cities with < 10 orders each → "Other" | Reduces dimension size |
| Use integer keys instead of string keys | "PROD-A-2024-001" → 1001 | Smaller column, faster joins |
| Split date and time into separate columns | "2024-01-15 08:30:00" → Date "2024-01-15" + Time "08:30" | Each column compresses better |

Use Appropriate Data Types

Choosing the right data type affects both model size and query performance:

| Data Type | Size per Value | When to Use |
| --- | --- | --- |
| Whole Number (Int64) | 8 bytes (before compression) | IDs, keys, counts, quantities |
| Decimal Number (Double) | 8 bytes (before compression) | Scientific data, calculations |
| Fixed Decimal (Currency) | 8 bytes (before compression) | Financial amounts — precise to 4 decimal places |
| Text (String) | Variable (2 bytes per character) | Names, categories, descriptions |
| Date/Time | 8 bytes (before compression) | Dates and timestamps |
| TRUE/FALSE (Boolean) | 1 byte (before compression) | Flags, yes/no indicators |

Best practice: Use Whole Number for keys and IDs instead of Text. Use Fixed Decimal for financial data instead of Decimal Number (it's faster for aggregations).

Avoid Calculated Columns — Use Power Query Instead

Calculated columns are evaluated using the Formula Engine and stored in the model. They increase model size and are computed during refresh.

Instead of this calculated column:

FullName = [FirstName] & " " & [LastName]

Use this Power Query step:

= Table.AddColumn(PreviousStep, "FullName", each [FirstName] & " " & [LastName], type text)

Why Power Query is better:

  • The transformation is done during data load, not during query time
  • Power Query transformations can be folded back to the data source (native query)
  • The result is stored as a regular column, which compresses better
  • It keeps the DAX layer clean for measures only

Exception: Calculated columns that depend on model relationships (using RELATED or RELATEDTABLE) cannot be expressed in Power Query without merging queries. These are acceptable as calculated columns but should be used sparingly.
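For example, a column that looks up a value across a relationship must stay in DAX. A sketch with hypothetical column names, assuming Product sits on the one side of a relationship with Sales:

```dax
-- Calculated column on the Sales table; RELATED follows the
-- many-to-one relationship from Sales to Product
Unit Margin = Sales[UnitPrice] - RELATED ( Product[StandardCost] )
```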

Disable Auto Date/Time

Power BI automatically creates hidden date/time tables for every date column in your model. This is useful for quick date intelligence but wasteful if you have your own date table.

To disable Auto Date/Time:

Step 1: Go to File > Options and settings > Options.

Step 2: Under Current File, click Data Load.

Step 3: Uncheck Auto date/time for new files (under Time Intelligence).

Step 4: Click OK.

Impact: If you have 10 date columns, disabling Auto Date/Time removes 10 hidden tables, each with rows for every date in the range. This can save significant memory.
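If you disable Auto Date/Time, add your own date table and mark it as a date table in the model. A minimal calculated-table sketch (the date range and extra columns are assumptions; adjust them to your data):

```dax
Date =
ADDCOLUMNS (
    CALENDAR ( DATE ( 2020, 1, 1 ), DATE ( 2025, 12, 31 ) ),
    "Year", YEAR ( [Date] ),
    "Month", FORMAT ( [Date], "YYYY-MM" )
)
```

One shared date table replaces every hidden auto-generated table, regardless of how many date columns the model contains.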

Optimize Relationships

| Practice | Details |
| --- | --- |
| Use integer keys | Integer-to-integer relationships are fastest |
| Avoid inactive relationships | Remove relationships you do not use |
| Prefer one-to-many over many-to-many | One-to-many is simpler and faster |
| Set correct cross-filter direction | The default single direction is almost always best |
| Ensure referential integrity | For DirectQuery relationships, enable "Assume referential integrity" so inner joins are generated |

Reducing Model Size

Why Model Size Matters

| Impact Area | How Size Affects It |
| --- | --- |
| Memory consumption | Larger models consume more RAM on the capacity |
| Refresh time | Larger models take longer to compress and load |
| Dataset limits | Pro: 1 GB; PPU: 100 GB; Premium: 10–400 GB depending on SKU |
| Sharing performance | Larger models take longer to load when first opened |
| Gateway throughput | Larger datasets mean more data transferred through the gateway |

Identifying Large Tables and Columns

Use DAX Studio with VertiPaq Analyzer to identify the largest components:

Step 1: Open DAX Studio and connect to your Power BI Desktop model.

Step 2: Navigate to Advanced > View Metrics (VertiPaq Analyzer).

Step 3: Review the Tables tab — sorted by size.

Step 4: Drill into columns within large tables — sorted by size.

Step 5: Look for columns with high cardinality (many unique values) and large dictionary size.

Common Size Offenders

| Column Type | Why It's Large | What to Do |
| --- | --- | --- |
| High-cardinality text (e.g., descriptions, comments) | Many unique values, poor compression | Remove if not needed; summarize if needed |
| GUIDs and unique IDs | Every value is unique (worst-case cardinality) | Replace with integer surrogate keys |
| Precise timestamps | Seconds/milliseconds create many unique values | Round to minute or hour |
| Large decimal numbers | Random decimals don't compress well | Round to needed precision |
| Denormalized columns | Repeated data in fact tables | Normalize into dimension tables |

Aggregations (User-Defined)

For very large datasets (hundreds of millions or billions of rows), Power BI supports user-defined aggregations:

  1. Create a summary (aggregated) table alongside your detail table
  2. Define aggregation mappings (SUM, COUNT, MIN, MAX, GROUP BY)
  3. Power BI automatically routes queries to the aggregation table when possible
  4. When a query needs detail-level data, it falls through to the detail table

Example:

Detail table (FactSales): 500M rows, stored in DirectQuery mode

Aggregation table (FactSales_Agg): 100K rows, stored in Import mode

| Aggregation Column | Detail Column | Function |
| --- | --- | --- |
| Agg_Amount | Amount | SUM |
| Agg_Quantity | Quantity | SUM |
| Agg_RowCount | (table) | COUNT |
| ProductKey | ProductKey | GROUP BY |
| DateKey | DateKey | GROUP BY |
| RegionKey | RegionKey | GROUP BY |

Queries that aggregate by Product, Date, and Region hit the small Import table (fast). Queries that need individual transactions fall through to DirectQuery (slower but available).


Star Schema for Performance

Why Star Schema is Faster

The star schema is the recommended data model design for Power BI. It consists of:

  • Fact tables — Narrow tables with numeric measures and foreign keys (transaction data)
  • Dimension tables — Wide tables with descriptive attributes (lookup data)
              DimDate
                  │
DimProduct ── FactSales ── DimRegion
                  │
             DimCustomer

Performance Benefits

| Benefit | Explanation |
| --- | --- |
| Fewer joins | Each dimension connects directly to the fact table — no chain of joins |
| Better compression | Fact tables are narrow (few columns) and contain integers/numbers that compress well |
| Efficient filtering | Filters on dimensions propagate cleanly to the fact table |
| Predictable query plans | The Storage Engine optimizes well for star schemas |
| RLS compatibility | Security filters on dimensions propagate naturally to facts |
| Simpler DAX | Measures are cleaner when written against a well-designed star schema |
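Measures against a star schema tend to stay short because filters flow from dimensions to the fact table through relationships. A sketch (the measure names and the "Accessories" category are illustrative):

```dax
Total Sales = SUM ( FactSales[Amount] )

-- Filtering the dimension automatically filters FactSales
-- through the DimProduct-to-FactSales relationship
Accessory Sales =
CALCULATE ( [Total Sales], DimProduct[Category] = "Accessories" )
```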

Denormalization for Performance

In a star schema, dimensions are denormalized — attributes from related lookup tables are merged into a single dimension table:

Instead of this snowflake structure:

Product → SubCategory → Category → Department

Use this denormalized star structure:

DimProduct (ProductName, SubCategory, Category, Department)
    └── FactSales

Why: Each join adds query complexity. A denormalized dimension eliminates joins and keeps the model simple.

Narrow Fact Tables

Keep fact tables as narrow as possible:

| Include | Exclude |
| --- | --- |
| Foreign keys to dimensions (integer) | Descriptive text (put in dimensions) |
| Numeric measures (amount, quantity, cost) | Calculated fields (use DAX measures) |
| Date keys | Redundant columns |
| Degenerate dimensions (order number) | Columns duplicated from dimensions |

Example — Optimized fact table:

| OrderKey | DateKey | ProductKey | CustomerKey | RegionKey | Amount | Quantity | Cost |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1001 | 20240115 | 42 | 158 | 7 | 3500.00 | 2 | 320.00 |

Eight columns, all integers or decimals. This compresses extremely well.


DAX Optimization

DAX performance has a direct impact on report interactivity. The difference between a well-written and a poorly written measure can be orders of magnitude.

Use Variables to Avoid Repeated Calculations

Variables (VAR) compute a value once and reuse it. Without variables, the same subexpression may be evaluated multiple times.

Before (slower):

Profit Margin =
DIVIDE(
    SUM(Sales[Amount]) - SUM(Sales[Cost]),
    SUM(Sales[Amount])
)

In this example, SUM(Sales[Amount]) is evaluated twice.

After (faster):

Profit Margin =
VAR TotalAmount = SUM(Sales[Amount])
VAR TotalCost = SUM(Sales[Cost])
RETURN
    DIVIDE(TotalAmount - TotalCost, TotalAmount)

Now SUM(Sales[Amount]) is evaluated once and reused.

Avoid FILTER(table) — Use Column Filters

Using FILTER on an entire table is expensive because it iterates row by row through the Formula Engine. Using column filters instead allows the Storage Engine to handle the work.

Before (slower):

East Sales =
CALCULATE(
    SUM(Sales[Amount]),
    FILTER(Sales, Sales[Region] = "East")
)

This forces the Formula Engine to iterate all rows in the Sales table.

After (faster):

East Sales =
CALCULATE(
    SUM(Sales[Amount]),
    Sales[Region] = "East"
)

This passes the filter directly to the Storage Engine, which handles it natively with compressed column scans.

When FILTER Is Necessary

Use FILTER only when you need a condition that involves:

  • Multiple columns in the same row (e.g., FILTER(Sales, Sales[Amount] > Sales[Cost]))
  • A measure result (e.g., FILTER(Products, [Total Sales] > 1000))
  • Complex logic that cannot be expressed as a simple column filter

Even then, filter on the smallest possible table:

Before (slower):

High Value Sales =
CALCULATE(
    SUM(Sales[Amount]),
    FILTER(Sales, Sales[Amount] > 1000)
)

After (faster — iterate distinct column values instead of rows):

High Value Sales =
CALCULATE(
    SUM(Sales[Amount]),
    FILTER(ALL(Sales[Amount]), Sales[Amount] > 1000)
)

Using ALL(Sales[Amount]) iterates only the unique values of the Amount column, not every row in the Sales table.

CALCULATE with Column Filters vs Table Filters

| Pattern | Engine Used | Performance |
| --- | --- | --- |
| CALCULATE(expr, Table[Column] = "value") | Storage Engine | Fast |
| CALCULATE(expr, FILTER(ALL(Table[Column]), condition)) | Storage Engine | Good |
| CALCULATE(expr, FILTER(Table, condition)) | Formula Engine | Slow |

KEEPFILTERS vs REMOVEFILTERS

KEEPFILTERS modifies how CALCULATE interacts with existing filters:

-- Without KEEPFILTERS: replaces the existing Region filter
East Sales =
CALCULATE(SUM(Sales[Amount]), Sales[Region] = "East")

-- With KEEPFILTERS: intersects with the existing Region filter
East Sales =
CALCULATE(SUM(Sales[Amount]), KEEPFILTERS(Sales[Region] = "East"))

Performance tip: KEEPFILTERS is slightly faster in some scenarios because it narrows the data set rather than replacing the filter context.

Avoid Complex Iterators on Large Tables

Iterator functions like SUMX, AVERAGEX, COUNTX, MAXX, and MINX evaluate an expression for each row in the table. On large tables (millions of rows), this can be extremely slow.

Before (slower — iterating millions of rows):

Weighted Average Price =
SUMX(Sales, Sales[Quantity] * Sales[UnitPrice]) / SUM(Sales[Quantity])

After (faster — if a pre-calculated column exists):

Weighted Average Price =
DIVIDE(SUM(Sales[LineTotal]), SUM(Sales[Quantity]))

Where LineTotal is a column calculated in Power Query as Quantity * UnitPrice. This avoids the row-by-row iteration entirely.

SUMMARIZECOLUMNS vs SUMMARIZE

SUMMARIZECOLUMNS is generally faster than SUMMARIZE for generating summary tables:

| Function | Performance | Notes |
| --- | --- | --- |
| SUMMARIZECOLUMNS | Faster | Optimized for query generation; cannot be used inside other iterators |
| SUMMARIZE | Slower | Can be nested inside iterators; older function |
| ADDCOLUMNS + VALUES | Moderate | Alternative pattern; explicit and controllable |

Recommendation: For calculated tables or DAX queries, prefer SUMMARIZECOLUMNS. For use inside measures (where SUMMARIZECOLUMNS is restricted), use ADDCOLUMNS(VALUES(Dim[Column]), ...).
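Inside a measure, the ADDCOLUMNS pattern can look like this (a sketch; the [Total Sales] measure and table names are assumptions):

```dax
-- Average of per-category sales totals, computed inside a measure
Avg Category Sales =
AVERAGEX (
    ADDCOLUMNS (
        VALUES ( DimProduct[Category] ),
        "@CatSales", [Total Sales]
    ),
    [@CatSales]
)
```

The "@" prefix on the extension column is a common convention to avoid ambiguity between model columns and columns added on the fly.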

DAX Optimization Summary Table

| Practice | Before (Slow) | After (Fast) |
| --- | --- | --- |
| Use variables | DIVIDE(SUM(X) - SUM(Y), SUM(X)) | VAR a = SUM(X) VAR b = SUM(Y) RETURN DIVIDE(a - b, a) |
| Column filter | FILTER(Sales, Sales[Region]="East") | Sales[Region] = "East" |
| Reduce iterator scope | SUMX(Sales, ...) on 10M rows | Pre-compute the column in Power Query |
| Avoid nested CALCULATE | CALCULATE(CALCULATE(...)) | Flatten into a single CALCULATE with multiple filters |
| Use DIVIDE | IF(denominator = 0, 0, num/denom) | DIVIDE(num, denom, 0) |
| Use SELECTEDVALUE | IF(HASONEVALUE(T[C]), VALUES(T[C])) | SELECTEDVALUE(T[C]) |
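The nested-CALCULATE row deserves a concrete illustration: both filters can be passed to one CALCULATE, producing a single filter-context transition instead of two (measure and column names are illustrative):

```dax
-- Instead of:
-- CALCULATE ( CALCULATE ( [Total Sales], Sales[Region] = "East" ),
--             Sales[Year] = 2024 )
East 2024 Sales =
CALCULATE (
    [Total Sales],
    Sales[Region] = "East",
    Sales[Year] = 2024
)
```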

Query Folding

Query folding is one of the most important performance features in Power Query. When it works, transformations are translated into native database queries and executed at the source — the data is filtered and transformed before it arrives in Power BI.

What It Is

When you apply transformations in Power Query (filter rows, select columns, group by), Power Query attempts to translate those steps into the native query language of the data source (e.g., SQL for databases, OData for web services).

Without folding: Power BI downloads all data and processes it locally.

With folding: The database processes the filter/transform and sends only the results.

Why It Matters

| Aspect | With Folding | Without Folding |
| --- | --- | --- |
| Data transferred | Only result rows | All rows, then filtered locally |
| Processing location | Source database (optimized) | Power BI (local Power Query engine) |
| Refresh time | Faster | Slower |
| Memory usage | Lower | Higher |
| Scalability | Handles large sources | Limited by local resources |

Checking If Folding Occurs

Step 1: In Power Query Editor, right-click on a step.

Step 2: Look for View Native Query in the context menu.

  • If View Native Query is available and not grayed out — the step is folded
  • If View Native Query is grayed out — the step is not folded (or the source does not support viewing native queries)

Step 3: Click View Native Query to see the generated SQL or native query.

Steps That Break Folding

Once a step breaks folding, all subsequent steps also run locally. Common folding breakers:

| Transformation | Foldable? | Notes |
| --- | --- | --- |
| Filter rows (simple comparison) | Yes | Basic comparisons fold |
| Select/Remove columns | Yes | Column projection folds |
| Sort rows | Yes (usually) | Folds for most databases |
| Group by | Yes (usually) | Folds for most databases |
| Merge queries (join) | Yes (sometimes) | Folds if both tables are from the same source |
| Add custom column (simple) | Sometimes | Simple arithmetic may fold |
| Add custom column (complex M) | No | Custom M functions break folding |
| Pivot/Unpivot | No (usually) | Most sources do not support this natively |
| Change type (some types) | Sometimes | Depends on source capabilities |
| Add index column | No | No native equivalent in most sources |
| Buffer table | No | Explicitly breaks folding |
| Replace values (complex) | Sometimes | Simple replacements may fold |

Maximizing Folding

  1. Apply foldable steps first — Filters and column selections should come before non-foldable transformations
  2. Use the source's native capabilities — If your source supports it, let Power Query fold the work
  3. Avoid unnecessary steps — Each step that breaks folding prevents all subsequent steps from folding
  4. Use database views — If complex transformation logic breaks folding, encapsulate it in a database view
  5. Check folding after each step — Right-click and verify "View Native Query" is available

DirectQuery Performance

DirectQuery mode queries the source database in real-time instead of importing data. This is necessary for very large datasets or real-time data requirements, but it introduces performance challenges.

Challenges with DirectQuery

| Challenge | Why It Occurs |
| --- | --- |
| Every interaction generates a query | Unlike Import mode, there is no local cache — each slicer change hits the source |
| Source database performance | Power BI is only as fast as the source database |
| Network latency | Round-trip time to the database adds up |
| Concurrent users | Multiple users generate multiple queries, stressing the source |
| Limited DAX support | Some DAX patterns generate inefficient SQL |
| No in-memory caching | Each visual refresh fetches data from scratch |

Dual Storage Mode

Dual storage mode allows a table to be stored in both Import and DirectQuery modes simultaneously:

  • When the table is used in a query with other Import tables, the Import copy is used (fast)
  • When the table is used in a query with DirectQuery tables, the DirectQuery mode is used
  • Best applied to dimension tables (small, frequently filtered) while keeping fact tables in DirectQuery

Composite Models

Composite models combine Import and DirectQuery tables in the same dataset:

| Table | Storage Mode | Reason |
| --- | --- | --- |
| FactSales (500M rows) | DirectQuery | Too large to import |
| DimDate | Dual | Small, frequently filtered |
| DimProduct | Dual | Small, frequently filtered |
| DimRegion | Import | Very small, rarely changes |
| FactBudget | Import | Small, separately managed |

Source Database Optimization

Since DirectQuery performance depends on the source database, optimize it:

| Optimization | Description |
| --- | --- |
| Create indexes | Index columns used in filters, joins, and GROUP BY clauses |
| Materialized views | Pre-compute common aggregations at the database level |
| Partitioning | Partition large tables by date for faster scans |
| Statistics updates | Keep database statistics current for optimal query plans |
| Connection pooling | Configure the source to handle concurrent connections efficiently |
| Reduce query complexity | Simplify the generated SQL by simplifying the Power BI model |

Aggregation Tables for DirectQuery

Combine Import-mode aggregation tables with DirectQuery detail tables:

  1. Create a summarized version of the fact table with key dimensions and aggregated measures
  2. Import this summary table (fast, in-memory)
  3. Keep the detail table in DirectQuery (for drill-through or detail-level queries)
  4. Power BI automatically routes queries to the aggregation when possible

Visual Optimization

Even with a perfectly optimized data model and DAX, too many visuals or poorly configured visuals can degrade performance.

Reduce the Number of Visuals Per Page

Each visual on a page generates its own DAX query. More visuals mean more queries, more rendering, and longer page load times.

| Visuals Per Page | Expected Performance | Recommendation |
| --- | --- | --- |
| 1–5 | Excellent | Ideal for focused dashboards |
| 6–8 | Good | Acceptable for detailed analysis pages |
| 9–12 | Moderate | Consider splitting into multiple pages |
| 13–20 | Poor | Users experience noticeable lag |
| 20+ | Critical | Refactor immediately — split into multiple pages |

Avoid High-Cardinality Visuals

Visuals that display many data points are slow to render:

| Visual Configuration | Data Points | Impact |
| --- | --- | --- |
| Card with a single value | 1 | Minimal |
| Bar chart with 10 categories | 10 | Minimal |
| Table with 100 rows, 5 columns | 500 | Low |
| Table with 1,000 rows, 10 columns | 10,000 | Moderate — consider pagination |
| Scatter chart with 10,000 points | 10,000 | High — consider sampling |
| Matrix with 500 rows × 50 columns | 25,000 | Very high — use drillthrough instead |
| Table with 100,000 rows | 100,000+ | Unacceptable — aggregate the data |

Use Bookmarks Instead of Too Many Visuals

Instead of displaying 20 visuals on one page:

  1. Create multiple views using different sets of visuals
  2. Use bookmarks to capture each view state
  3. Add bookmark navigator buttons for users to switch between views
  4. Only the visuals visible in the current bookmark generate queries

This dramatically reduces the number of concurrent queries per page load.

Limit Visual Interactions

By default, every visual on a page cross-filters and cross-highlights every other visual. When a user clicks on a bar in a chart, every other visual re-queries and re-renders.

To disable unnecessary interactions:

Step 1: Select the visual you want to control.

Step 2: Go to Format tab > Edit interactions.

Step 3: For each other visual on the page, choose:

  • Filter — The visual filters the other visual
  • Highlight — The visual highlights matching data in the other visual
  • None — No interaction (best for performance)

Disable interactions between visuals that do not need to interact. This reduces the cascade of re-queries when users click.

The Optimize Ribbon Feature

Power BI Desktop includes an Optimize feature for visuals:

  • Reduce visuals — Identifies visuals that could be consolidated
  • Reduce data — Identifies visuals loading excessive data
  • Performance tips — Suggests improvements based on visual configuration

Access it from the Optimize tab in the ribbon (available in recent versions).


Incremental Refresh

Incremental refresh allows you to refresh only the most recent data instead of the entire dataset, dramatically reducing refresh time for large datasets.

Benefits

| Benefit | Description |
| --- | --- |
| Faster refresh | Only new/changed data is loaded — unchanged data is preserved |
| Reduced source load | Fewer queries and less data transferred from the source |
| Larger datasets | You can maintain years of historical data without refreshing it all |
| Reliability | Shorter refresh windows reduce the chance of timeout failures |
| Cost efficiency | Less capacity usage during refresh |

How It Works

Incremental refresh divides the dataset into partitions based on a date column:

  • Historical partitions — Older data that is never refreshed after initial load
  • Incremental partitions — Recent data that is refreshed on schedule
  • Real-time partition — (Optional) Current data that can use DirectQuery for near real-time

Configuring Incremental Refresh

Step 1: In Power Query, create two parameters:

  • RangeStart — DateTime type, initial value: a past date
  • RangeEnd — DateTime type, initial value: today's date

Step 2: Filter the date column in your table to be between RangeStart and RangeEnd.
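That filter step looks like this in M (the OrderDate column and PreviousStep name are illustrative). Use >= for RangeStart and a strict < for RangeEnd so adjacent partitions never overlap:

```m
= Table.SelectRows(PreviousStep, each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd)
```

This comparison must fold to the source for incremental refresh to work efficiently.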

Step 3: Go to the table's context menu and select Incremental refresh.

Step 4: Configure the policy:

| Setting | Description | Example |
| --- | --- | --- |
| Archive data starting | How far back to keep data | 5 years |
| Incrementally refresh data starting | How much recent data to refresh each time | 10 days |
| Detect data changes | Refresh only if data has changed (uses a max-date column) | LastModifiedDate |
| Only refresh complete days | Skip the current day to avoid partial data | Yes |
| Get the latest data in real time | Add a DirectQuery partition for the current period | Optional |

Step 5: Publish to Power BI Service. The incremental refresh policy is applied during Service-side refresh.

Important Notes

  • Incremental refresh requires the date filter to use query folding — the source must support the RangeStart/RangeEnd filter natively
  • The RangeStart and RangeEnd parameters must not be renamed — Power BI looks for these exact names
  • Incremental refresh is configured in Desktop but only executes in the Service
  • Works with Import mode tables only (or Import mode partitions in composite models)

Report-Level Optimization

Beyond the data model and DAX, the overall report design affects performance.

Reduce Pages

  • Each page with visible content generates queries for all its visuals
  • Users often navigate through pages sequentially, so fewer pages means a faster experience
  • Consolidate related information onto fewer pages using bookmarks and drillthrough

Use Hidden Pages for Drillthrough

Instead of showing detail data on the main page:

  1. Create a dedicated drillthrough page with the detail visual
  2. Mark the page as a drillthrough target
  3. The page is only loaded when a user actively drills through — not during initial page load

Disable Visual Interactions Where Not Needed

As described in the Visual Optimization section, disable cross-filtering between visuals that do not need to interact. This reduces the number of queries generated when users click on visuals.

Limit Cross-Filtering

If a page has 10 visuals and all cross-filter each other, a single click generates up to 9 re-queries (one for each other visual). Reducing interactions to only the necessary ones can cut this dramatically.

Optimize Background Images

If you use background images on report pages:

| Image Format | Size (1920×1080) | Performance Impact |
| --- | --- | --- |
| Uncompressed PNG | 5–10 MB | Slow page load, especially on mobile |
| Optimized PNG | 500 KB – 1 MB | Acceptable |
| JPEG (80% quality) | 200–500 KB | Good — recommended for photos |
| SVG | Variable | Best for logos and simple graphics |
| WebP | 100–300 KB | Best compression, but limited support |

Recommendation: Keep background images under 500 KB. Use JPEG for photographic backgrounds and SVG or optimized PNG for logos and geometric designs.


Monitoring and Troubleshooting

For advanced performance analysis, external tools provide deeper insights than Performance Analyzer.

DAX Studio

DAX Studio is a free, open-source tool for analyzing and optimizing DAX queries.

Connecting to a Dataset:

Step 1: Download and install DAX Studio from daxstudio.org.

Step 2: If connecting to a local model, open Power BI Desktop first.

Step 3: Launch DAX Studio and select the Power BI Desktop model (or a published dataset via XMLA endpoint).

Step 4: Click Connect.

Running DAX Queries

In DAX Studio, you can run any DAX query against the model:

```dax
EVALUATE
SUMMARIZECOLUMNS(
    DimProduct[Category],
    "Total Sales", SUM(FactSales[Amount]),
    "Order Count", COUNTROWS(FactSales)
)
```

This helps you test and optimize measure logic in isolation, without the overhead of visual rendering.
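DAX Studio also lets you prototype a measure change without touching the model, using a query-scoped measure. A sketch, reusing the FactSales and DimProduct tables from the query above (the measure name and threshold are illustrative):

```dax
DEFINE
    -- Query-scoped measure: exists only for this query; the model is not modified
    MEASURE FactSales[High Value Sales] =
        CALCULATE(
            SUM(FactSales[Amount]),
            FactSales[Amount] > 1000
        )
EVALUATE
SUMMARIZECOLUMNS(
    DimProduct[Category],
    "High Value Sales", [High Value Sales]
)
```

This pattern is useful for comparing an optimized rewrite of a measure against the original before committing the change in Power BI Desktop.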

Server Timings

Step 1: In DAX Studio, enable Server Timings from the toolbar.

Step 2: Run your DAX query.

Step 3: The Server Timings pane shows:

| Metric | What It Means |
|---|---|
| Total | Total query execution time |
| SE (Storage Engine) | Time spent scanning VertiPaq data |
| FE (Formula Engine) | Time spent evaluating DAX expressions |
| SE Queries | Number of Storage Engine queries generated |
| SE Cache | Whether results came from cache (hot) or cold storage |

Interpreting Server Timings

| Observation | Diagnosis | Action |
|---|---|---|
| High FE time, low SE time | Complex DAX causing Formula Engine work | Simplify DAX, use variables, avoid iterators |
| High SE time, low FE time | Large data scans | Reduce cardinality, add aggregations |
| Many SE queries | Complex measure generating many sub-queries | Simplify measure logic, reduce visual complexity |
| SE Cache = Yes | Results served from cache | Good — this is optimal for repeated queries |
| SE Cache = No | Cold query — data scanned from disk/memory | First query after refresh; expected behavior |
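To see these patterns in practice, compare two hypothetical queries against the FactSales model from the earlier example (the Quantity column is assumed for illustration):

```dax
-- Variant 1: a simple column predicate. VertiPaq can typically resolve this
-- almost entirely in the Storage Engine, so FE time stays low.
EVALUATE
CALCULATETABLE(
    SUMMARIZECOLUMNS(DimProduct[Category], "Sales", SUM(FactSales[Amount])),
    FactSales[Amount] > 1000
)

-- Variant 2: a row-by-row condition inside FILTER over the fact table.
-- The expression cannot be expressed as a simple column predicate, so more
-- of the work tends to shift to the single-threaded Formula Engine.
EVALUATE
SUMMARIZECOLUMNS(
    DimProduct[Category],
    "Sales",
        SUMX(
            FILTER(FactSales, FactSales[Amount] * FactSales[Quantity] > 1000),
            FactSales[Amount]
        )
)
```

Running both with Server Timings enabled should show a noticeably different FE/SE split, which is exactly the signal the table above helps you interpret.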

VertiPaq Analyzer

VertiPaq Analyzer (accessible via DAX Studio > Advanced > View Metrics) provides detailed information about the data model:

| Information | Why It's Useful |
|---|---|
| Table sizes (rows, columns, bytes) | Identify the largest tables |
| Column sizes and cardinality | Find columns that consume the most memory |
| Relationship sizes | Identify costly relationships |
| Partition information | Understand data distribution |
| Encoding type | Check if columns use optimal encoding |
| Dictionary size | Large dictionaries indicate high cardinality |

Using VertiPaq Analyzer Output

Step 1: Export VertiPaq Analyzer data to a .vpax file.

Step 2: Open the file in DAX Optimizer (online tool) or analyze it in Power BI itself.

Step 3: Focus on:

  • Columns consuming more than 10% of the total model size
  • Columns with cardinality greater than 1 million (consider reducing)
  • Tables with more than 20 columns (consider removing unused columns)

Optimization Checklist

Use this comprehensive checklist when optimizing any Power BI report:

Data Model Checklist

| Item | Status | Action |
|---|---|---|
| Remove unused columns | [ ] | Delete columns not used in visuals, measures, or relationships |
| Remove unused tables | [ ] | Delete tables not referenced in the model |
| Use integer keys | [ ] | Replace text-based keys with integer surrogates |
| Optimize data types | [ ] | Use Whole Number for IDs; Fixed Decimal for currency |
| Disable Auto Date/Time | [ ] | Turn off if you have a dedicated date table |
| Star schema design | [ ] | Ensure fact tables connect to dimension tables |
| Narrow fact tables | [ ] | Move text attributes to dimension tables |
| Reduce cardinality | [ ] | Round timestamps, group rare values, use integer keys |
| Remove calculated columns | [ ] | Replace with Power Query steps where possible |

DAX Checklist

| Item | Status | Action |
|---|---|---|
| Use variables | [ ] | Replace repeated subexpressions with VARs |
| Column filters in CALCULATE | [ ] | Use Table[Column] = "value" instead of FILTER(Table, ...) |
| Avoid FILTER on entire tables | [ ] | Use FILTER(ALL(Table[Column]), ...) instead |
| Minimize iterators on large tables | [ ] | Pre-compute columns in Power Query to avoid SUMX on millions of rows |
| Use DIVIDE function | [ ] | Replace manual division with DIVIDE(num, denom, alt) |
| Use SELECTEDVALUE | [ ] | Replace IF(HASONEVALUE(...), VALUES(...)) pattern |
| Avoid nested CALCULATE | [ ] | Flatten to a single CALCULATE with multiple filters |
| Keep measures simple | [ ] | Break complex measures into smaller sub-measures using variables |
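As an illustration of the column-filter items in this checklist, here is a typical before-and-after rewrite, using a hypothetical Sales table:

```dax
-- Slower: FILTER iterates the entire expanded Sales table row by row
Large Orders (slow) =
CALCULATE(
    COUNTROWS(Sales),
    FILTER(Sales, Sales[Amount] >= 500)
)

-- Faster: a column filter, which the engine expands internally to
-- FILTER(ALL(Sales[Amount]), Sales[Amount] >= 500), scanning one column only
Large Orders (fast) =
CALCULATE(
    COUNTROWS(Sales),
    Sales[Amount] >= 500
)
```

Both versions return the same result; the difference is that the second restricts the filter to a single column rather than materializing whole table rows.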

Visual Checklist

| Item | Status | Action |
|---|---|---|
| Visuals per page < 10 | [ ] | Split busy pages, use bookmarks for alternative views |
| No high-cardinality tables | [ ] | Limit table/matrix visuals to paginated data or drillthrough |
| Interactions disabled where not needed | [ ] | Set interaction to "None" between unrelated visuals |
| Drillthrough for detail pages | [ ] | Move detail tables to hidden drillthrough pages |
| Background images optimized | [ ] | Keep under 500 KB; use JPEG or SVG |
| Minimal cross-filtering | [ ] | Only enable cross-filter between visuals that meaningfully interact |

Refresh and Data Source Checklist

| Item | Status | Action |
|---|---|---|
| Query folding verified | [ ] | Right-click steps in Power Query to check "View Native Query" |
| Incremental refresh configured | [ ] | Set up for large tables with date columns |
| Gateway performance adequate | [ ] | Monitor gateway machine CPU and memory |
| Data source indexed | [ ] | Ensure database indexes exist for filtered and joined columns |
| Scheduled refresh within limits | [ ] | Pro: 8/day; Premium: 48/day; keep refresh windows reasonable |
| Credentials and connectivity tested | [ ] | Verify data source credentials are valid and gateway is online |

Advanced Checklist

| Item | Status | Action |
|---|---|---|
| Performance Analyzer run | [ ] | Record and analyze all visuals; address items over 1 second |
| DAX Studio Server Timings reviewed | [ ] | Check FE vs SE time; optimize high-FE measures |
| VertiPaq Analyzer reviewed | [ ] | Identify largest columns and tables; reduce where possible |
| Aggregation tables considered | [ ] | For datasets over 100M rows, implement user-defined aggregations |
| Composite model evaluated | [ ] | Use Import for small tables, DirectQuery for very large tables |

Practice Exercises

Exercise 1 — Performance Analyzer Investigation

  1. Open a Power BI report with at least five visuals on one page
  2. Open Performance Analyzer (View tab > Performance Analyzer)
  3. Click Start recording and then Refresh visuals
  4. Identify the three slowest visuals by total duration
  5. For each slow visual, determine whether the bottleneck is DAX query time, visual display time, or other
  6. Document your findings and propose optimizations for each

Exercise 2 — Data Model Optimization

  1. Open a Power BI Desktop file (or create one with sample data)
  2. List all columns in the model and identify at least five unused columns
  3. Remove the unused columns
  4. Check for any columns that could use a more efficient data type (e.g., text IDs that could be integers)
  5. Disable Auto Date/Time if a dedicated date table exists
  6. Compare the model size before and after (compare the saved .pbix file size on disk, or use DAX Studio's VertiPaq Analyzer for the in-memory size)

Exercise 3 — DAX Optimization

Take the following measures and optimize them. Test before and after in Performance Analyzer.

Measure 1 (uses FILTER on entire table):

```dax
High Value Orders =
CALCULATE(
    COUNTROWS(Sales),
    FILTER(Sales, Sales[Amount] > 1000)
)
```

Measure 2 (repeated calculations without variables):

```dax
YoY Growth =
DIVIDE(
    SUM(Sales[Amount]) - CALCULATE(SUM(Sales[Amount]), SAMEPERIODLASTYEAR(DimDate[Date])),
    CALCULATE(SUM(Sales[Amount]), SAMEPERIODLASTYEAR(DimDate[Date]))
)
```

Measure 3 (uses IF instead of DIVIDE):

```dax
Avg Price =
IF(
    SUM(Sales[Quantity]) = 0,
    BLANK(),
    SUM(Sales[Amount]) / SUM(Sales[Quantity])
)
```

Rewrite each measure using the optimization techniques from this chapter and compare performance.

Exercise 4 — Query Folding Verification

  1. In Power Query, connect to a database source (or use an OData feed)
  2. Apply the following steps in order: Filter rows, Select columns, Group by, Add a custom column
  3. After each step, right-click and check if "View Native Query" is available
  4. Identify which step breaks query folding
  5. Rearrange the steps to maximize folding (move foldable steps before non-foldable ones)

Exercise 5 — Visual Optimization

  1. Create a report page with 15 visuals (charts, tables, cards)
  2. Record performance with Performance Analyzer
  3. Note the total page load time
  4. Apply the following optimizations:
     a. Remove 5 visuals and move their data to a drillthrough page
     b. Disable interactions between at least 3 pairs of visuals
     c. Replace a high-cardinality table (showing 1000+ rows) with a chart or top-N table
  5. Record performance again and compare the improvement

Exercise 6 — Incremental Refresh Configuration

  1. Open a Power BI Desktop file with a large table that has a date column
  2. Create the RangeStart and RangeEnd parameters in Power Query
  3. Filter the date column using these parameters
  4. Configure incremental refresh to archive 2 years of data and refresh the last 7 days
  5. Publish to Power BI Service (Premium or PPU workspace required)
  6. Verify the refresh policy is applied by checking the dataset settings

Exercise 7 — DAX Studio Deep Dive

  1. Install DAX Studio (daxstudio.org)
  2. Connect to a Power BI Desktop model
  3. Enable Server Timings
  4. Run a simple query: EVALUATE SUMMARIZECOLUMNS(DimProduct[Category], "Sales", SUM(FactSales[Amount]))
  5. Note the SE and FE times
  6. Run a complex query with FILTER on a large table and compare SE/FE times
  7. Open VertiPaq Analyzer and identify the three largest columns in the model
  8. Document which columns you would target for optimization

Summary

Performance optimization is not a one-time task but an ongoing discipline that should be part of every Power BI project. In this chapter, you learned:

  • Performance directly impacts adoption — Users abandon slow reports, so optimization is a business necessity, not just a technical nicety
  • The Power BI engine consists of the Storage Engine (fast, multithreaded, columnar scans) and the Formula Engine (slower, single-threaded, row-by-row processing) — optimize to maximize Storage Engine usage
  • Performance Analyzer is the built-in tool for identifying bottlenecks, showing DAX query time, visual display time, and other processing time for each visual
  • Data model optimization has the highest impact — remove unused columns, reduce cardinality, use integer keys, disable Auto Date/Time, and design star schemas with narrow fact tables
  • Model size reduction through removing high-cardinality text columns, using aggregation tables, and normalizing repeated data into dimension tables
  • Star schema design is faster because it reduces joins, improves compression, and enables predictable query plans
  • DAX optimization focuses on using variables, column filters instead of FILTER(table), avoiding complex iterators on large tables, and using efficient function patterns
  • Query folding translates Power Query transformations into native database queries, dramatically reducing data transfer and processing time
  • DirectQuery performance depends on the source database — use dual storage mode, composite models, aggregation tables, and source-level optimization
  • Visual optimization targets the number of visuals per page (under 10 is ideal), high-cardinality visuals, unnecessary cross-filtering, and background image sizes
  • Incremental refresh refreshes only recent data, making large dataset refreshes faster and more reliable
  • DAX Studio and VertiPaq Analyzer are essential external tools for deep performance analysis, providing Server Timings and detailed model metrics

Congratulations!

You have completed the Power BI tutorial series. Over these chapters, you have journeyed from the fundamentals of data import and modeling through advanced DAX, visualizations, Power BI Service, Row-Level Security, and now Performance Optimization. You now have a comprehensive foundation for building professional-grade Power BI solutions.

Next Steps

To continue your Power BI journey, consider these paths:

  • Microsoft Certification PL-300 — The Microsoft Power BI Data Analyst certification validates your skills and is widely recognized by employers. Study topics align closely with what you have learned in this series.
  • Advanced DAX — Explore topics like calculation groups, field parameters, dynamic format strings, and complex time intelligence patterns.
  • Power BI Embedded — Learn to embed Power BI reports in custom web applications using the JavaScript API and REST APIs.
  • Microsoft Fabric — The next evolution of Microsoft's analytics platform, integrating Power BI with data engineering, data science, and real-time analytics in a unified lakehouse architecture.
  • Community Resources — Join the Power BI Community (community.powerbi.com), follow the Power BI Blog, subscribe to Guy in a Cube and SQLBI YouTube channels, and attend local Power BI user group meetings.
  • Practice — The most effective way to solidify your skills is to build real projects. Find a dataset that interests you, build a complete solution from data import through published report, and iterate based on feedback.

Your Power BI journey is just beginning. The skills you have built here will serve as a strong foundation for whatever you build next.