A mid-market company with forty KPIs on a flat list faces a structural problem that no amount of better visualisation can fix. When every indicator sits at the same level — revenue alongside machine utilisation alongside customer satisfaction alongside days payable outstanding — there is no way to distinguish what is strategic from what is operational, no way to trace a board-level concern down to its root cause, and no way for a department head to understand how their numbers connect to the company’s objectives. The indicators exist. The structure does not.
This article explains how to organise KPIs into a hierarchy that connects strategic objectives to operational execution, and how cascading — the discipline of deriving lower-level KPIs from higher-level ones — turns a list of indicators into a coherent performance architecture.
What hierarchy and cascading mean in practice
A KPI hierarchy is the structured arrangement of indicators across organisational levels: strategic at the top, tactical in the middle, operational at the base. Each level serves a different audience and answers a different question.
Cascading is the process of deriving child KPIs from parent KPIs. It is the logical link that ensures a department-level indicator contributes to a business-unit objective, which in turn contributes to a strategic goal. Without cascading, a hierarchy is merely a categorised list — labels without logic.
The purpose of both is threefold:
- Alignment. Every team can see how their work connects to the organisation’s priorities.
- Accountability. Each KPI has a clear owner at the appropriate organisational level.
- Traceability. When a strategic indicator moves, the hierarchy shows which operational factors contributed.
Why flat lists fail
Research from BCG and MIT found that 60% of managers believe they need better KPIs. In many cases, “better” means “better structured” — not more numerous. The typical mid-market organisation has accumulated indicators over years, each added for a reasonable purpose at the time, but none connected to the others in a deliberate hierarchy.
The consequences are predictable:
Paralysis at the executive level. When forty indicators sit side by side with no hierarchy, the monthly review becomes a data tour rather than a decision meeting. Executives skim all indicators and interrogate none. Research from the KPI Institute shows that 68% of organisations see positive performance improvements after introducing structured KPI tracking — the structure, not the tracking alone, is what produces the result.
No drill-down path. A strategic KPI — say, gross margin — drops 150 basis points. Without a hierarchy, the executive team can see the problem but cannot trace it. Is the issue in pricing, in input costs, in product mix, or in a specific business unit? A flat list of indicators provides no investigative path. A hierarchy does: gross margin cascades to business unit margins, which cascade to product-line margins, which cascade to cost drivers and volume metrics. The drill-down follows the hierarchy.
Disconnection between strategy and operations. The board sets strategic objectives. Departments set operational targets. Without cascading, these exist in separate worlds. The operations team may be meeting all their indicators while the company misses its strategic goals — because nobody verified that operational KPIs actually contribute to strategic outcomes. Data from the Institute of Global Controlling (IGC) in the Czech Republic shows that only 11% of companies manage controlling via process KPIs, suggesting that most organisations stop measuring at the tactical level and never cascade to operational execution.
The three levels of a KPI hierarchy
Strategic KPIs (board and executive level)
Three to five indicators that measure progress toward the organisation’s most important objectives. These are the numbers the board reviews quarterly and the CEO monitors monthly. Examples: revenue growth rate, group operating margin, return on capital employed, customer retention rate, cash conversion ratio.
Strategic KPIs are predominantly lagging indicators — they confirm outcomes. Their value lies not in predicting but in confirming whether the strategy is producing results.
| Property | Strategic KPIs |
|---|---|
| Count | 3–5 maximum |
| Audience | Board, CEO, executive committee |
| Review cadence | Monthly or quarterly |
| Nature | Predominantly lagging |
| Ownership | CEO or executive sponsor |
Tactical KPIs (business unit and function level)
These indicators sit one level below strategic KPIs and are owned by business unit heads or functional leaders. They bridge the gap between “are we on track overall?” and “what specifically needs attention?” Each tactical KPI should have a clear parent — a strategic KPI it contributes to.
Examples: business unit revenue, departmental cost-to-income ratio, segment gross margin, pipeline coverage ratio.
Tactical KPIs begin to include leading indicators alongside lagging ones. Pipeline coverage, for instance, is a leading indicator for revenue — it predicts future performance rather than confirming past results.
Operational KPIs (team and process level)
The base of the hierarchy. These indicators measure the activities and processes that ultimately determine tactical and strategic outcomes. They are owned by team leads or process owners and reviewed weekly or daily.
Examples: production yield rate, order processing time, invoice accuracy rate, customer response time, stock availability percentage.
Operational KPIs are predominantly leading indicators. They provide the earliest signals of emerging problems — signals that will eventually surface in tactical and strategic indicators if not addressed. The IGC finding that only 11% of organisations use process KPIs confirms that this is where the hierarchy is thinnest in most mid-market companies. The gap between tactical measurement and operational measurement is where visibility breaks down.
How cascading works
Cascading is not simply assigning KPIs to lower organisational levels. It is a logical derivation: each child KPI must demonstrably contribute to its parent. The relationship can take several forms:
Decomposition. A strategic KPI is broken into its component parts. Group revenue decomposes into business unit revenues. Group operating margin decomposes into segment margins. Each component is owned by a different leader, but the components aggregate back to the parent.
Driver identification. A tactical KPI is linked to the operational drivers that influence it. Segment gross margin is driven by pricing, input costs, product mix, and volume. Each driver has a corresponding operational KPI. When the margin moves, the hierarchy indicates which driver changed.
Contribution mapping. Some KPIs do not decompose neatly. Customer retention, for instance, is influenced by service quality, product reliability, pricing competitiveness, and relationship management. Cascading in this case means identifying the two or three operational indicators with the strongest influence and monitoring them as leading signals.
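The three relationship forms above can be sketched as a small data model. This is an illustrative sketch, not a prescribed implementation, and the KPI names are hypothetical; each child records how it relates to its parent:

```python
from dataclasses import dataclass, field

@dataclass
class KPI:
    name: str
    level: str           # "strategic", "tactical", or "operational"
    relation: str = ""   # link to parent: "decomposition", "driver", or "contribution"
    children: list = field(default_factory=list)

    def add(self, child: "KPI") -> "KPI":
        """Attach a child KPI and return it for chaining."""
        self.children.append(child)
        return child

# Hypothetical hierarchy: group margin decomposes into a segment margin,
# which is linked to its operational drivers.
group_margin = KPI("Group operating margin", "strategic")
segment = group_margin.add(KPI("Segment gross margin", "tactical", "decomposition"))
price = segment.add(KPI("Average selling price", "operational", "driver"))
cost = segment.add(KPI("Input cost per unit", "operational", "driver"))

def path_to(target: KPI, root: KPI) -> list:
    """Return the drill-down path from the root to the target KPI, or []."""
    if root is target:
        return [root.name]
    for child in root.children:
        sub = path_to(target, child)
        if sub:
            return [root.name] + sub
    return []
```

With this structure, the investigative path described earlier falls out of the data: `path_to(price, group_margin)` returns the chain from the strategic KPI down to the operational driver.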
Aggregation logic must be explicit
For a hierarchy to function, the rules for rolling operational data up to strategic totals must be defined and consistent. Revenue aggregation is straightforward — sum the parts. Margin aggregation requires weighted averages. Customer satisfaction may require index construction. If the aggregation logic is ambiguous or inconsistent, the hierarchy produces numbers that do not reconcile, and trust erodes.
A practical test: can you take the operational KPIs, apply the aggregation rules, and reproduce the strategic KPI? If not, the hierarchy has a gap.
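That reconciliation test can be automated. The sketch below uses made-up business-unit figures to show the two aggregation rules named above: revenue sums directly, while margin requires a revenue-weighted average.

```python
# Hypothetical business-unit data: revenue and gross margin per unit.
units = [
    {"name": "BU North",  "revenue": 40_000_000, "margin": 0.32},
    {"name": "BU South",  "revenue": 25_000_000, "margin": 0.27},
    {"name": "BU Export", "revenue": 15_000_000, "margin": 0.21},
]

# Revenue aggregates by simple summation.
group_revenue = sum(u["revenue"] for u in units)

# Margin aggregates as a revenue-weighted average, not a simple mean.
group_margin = sum(u["revenue"] * u["margin"] for u in units) / group_revenue

def reconciles(reported_margin: float, tolerance: float = 0.001) -> bool:
    """The practical test: do the parts reproduce the strategic KPI?"""
    return abs(group_margin - reported_margin) <= tolerance
```

Note that the unweighted mean of the three margins (about 26.7%) would fail the reconciliation check against the weighted figure (about 28.4%) — exactly the kind of ambiguity that erodes trust when aggregation rules are left implicit.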
Accountability must follow the hierarchy
Each level of the hierarchy needs a named owner — not a department, but a person. The KPI framework should specify who is accountable for each indicator, what authority they have to take corrective action, and to whom they escalate when a threshold is breached.
This accountability mapping follows RACI principles: who is responsible for the KPI’s performance, who is accountable for decisions about it, who is consulted when it moves, and who is informed of changes. Without this mapping, cascading produces a tidy organisational chart of indicators but no mechanism for response.
Common mistakes in KPI hierarchy design
Decorative structure without genuine connection
The most common failure. The organisation creates a three-level hierarchy on paper, assigns KPIs to each level, but never verifies the logical links. Strategic and operational KPIs coexist in a diagram without any demonstrated relationship. If you remove an operational KPI, does the tactical KPI above it become less explainable? If not, the link is decorative.
Too many levels
Three levels — strategic, tactical, operational — are sufficient for most mid-market organisations. Adding sub-levels (strategic, sub-strategic, tactical, sub-tactical, operational, sub-operational) creates complexity without value. Each additional level adds a reporting layer, an accountability layer, and a review cycle. The goal is the minimum hierarchy that provides alignment and traceability.
Cascading without ownership
Assigning KPIs to levels without assigning owners produces a structure that nobody maintains. The hierarchy exists in a presentation slide but not in practice. Each KPI at each level needs a named owner who reviews it at the agreed cadence and is accountable for explaining movement.
Confusing the KPI hierarchy with the organisational chart
A KPI hierarchy reflects performance logic, not reporting lines. Revenue per customer may be owned by the commercial director, but the operational KPIs that influence it — order accuracy, delivery time, service response — may sit in operations, logistics, and customer service respectively. The hierarchy follows the performance driver chain, not the management structure.
Balanced Scorecard adoption without genuine cascading
The Balanced Scorecard remains the most commonly referenced cascading framework, particularly in Central European markets. It provides four useful perspectives (financial, customer, internal process, learning and growth) for categorising indicators. However, many organisations adopt the four-perspective structure without implementing genuine cascading logic — they create categorised lists rather than connected hierarchies. The perspectives are useful as a completeness check. They are not a substitute for the logical derivation that makes cascading work.
Vendor-defined hierarchy
Purchasing a BI platform and using its default template to define the KPI hierarchy reverses the correct sequence. The hierarchy must be designed from strategy and performance logic; the BI platform then visualises and enables drill-down through that hierarchy. The platform can model hierarchies. It cannot design them.
Building a hierarchy: practical sequence
Start with strategic objectives. What three to five outcomes must the organisation achieve? These become the roots of the hierarchy.
Identify the performance drivers for each objective. What must change for this outcome to materialise? Drivers become tactical KPIs.
For each driver, define the operational indicators that provide early signals. These become operational KPIs — the leading indicators that predict movement in tactical and strategic measures.
Define aggregation logic. How does each operational KPI contribute to its tactical parent? How does each tactical KPI contribute to its strategic parent? Write the rules explicitly.
Assign ownership at every level. Name the individual responsible for each KPI. Define their review cadence and escalation protocol.
Test the hierarchy end-to-end. Pick a strategic KPI and trace it down to operational level. Can you explain how a change at the operational level would propagate upward? Pick an operational KPI and trace it upward. Can you explain which strategic objective it serves?
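The end-to-end test can be made mechanical. A minimal sketch, using hypothetical KPI and owner names, checks that every non-strategic KPI has a valid parent, that every KPI has a named owner, and that any operational KPI can be traced up to a strategic root:

```python
# Hypothetical hierarchy as a flat registry: each KPI names its parent and owner.
kpis = {
    "Group operating margin": {"level": "strategic",   "parent": None,
                               "owner": "CEO"},
    "Plant margin":           {"level": "tactical",    "parent": "Group operating margin",
                               "owner": "Plant director"},
    "Production yield rate":  {"level": "operational", "parent": "Plant margin",
                               "owner": "Line manager"},
}

def hierarchy_gaps(registry: dict) -> list:
    """Return a list of problems; an empty list means the hierarchy passes."""
    problems = []
    for name, info in registry.items():
        if not info.get("owner"):
            problems.append(f"{name}: no named owner")
        if info["level"] != "strategic":
            parent = info.get("parent")
            if parent is None or parent not in registry:
                problems.append(f"{name}: no valid parent")
    return problems

def trace_to_strategy(name: str, registry: dict) -> list:
    """Walk upward from any KPI to its strategic root."""
    path = [name]
    while registry[path[-1]]["parent"] is not None:
        path.append(registry[path[-1]]["parent"])
    return path
```

Running `hierarchy_gaps` before each annual structure review catches the two most common defects named in this article: cascading without ownership, and decorative links with no real parent.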
Sector examples
The hierarchy structure is universal; the specific KPIs vary by sector:
- Manufacturing: Company margin (strategic) cascades to plant-level margin (tactical), which cascades to production line yield, machine utilisation, and quality reject rates (operational). The hierarchy follows the physical structure: company, plant, line, machine.
- Professional services: Revenue per partner (strategic) cascades to practice revenue and utilisation (tactical), which cascades to project margin, billable hours, and client retention at the team level (operational). The hierarchy follows the organisational structure: firm, practice, team.
- Retail: Like-for-like sales growth (strategic) cascades to regional performance (tactical), which cascades to store-level conversion rate, average basket value, and stock availability (operational). The hierarchy follows the geographic structure: company, region, store.
Frequently asked questions
How deep should a KPI hierarchy go? Three levels are sufficient for most mid-market organisations: strategic, tactical, operational. The test is whether each level adds genuine insight and accountability. If a level exists only because symmetry demands it, remove it.
Can one operational KPI serve multiple tactical parents? Yes. Order accuracy, for example, may influence both customer satisfaction (retention driver) and cost efficiency (rework reduction). The hierarchy should acknowledge these cross-links. What matters is that the operational KPI has a clear primary owner regardless of how many parents it serves.
How often should the hierarchy be reviewed? The hierarchy itself — the structure and logic — should be reviewed annually or when strategy changes materially. The KPIs within the hierarchy should be reviewed at their defined cadences. Do not confuse reviewing the hierarchy structure with reviewing KPI performance.
What is the relationship between KPI hierarchy and reporting structure? The KPI hierarchy determines what appears on dashboards and reports at each organisational level. Strategic KPIs populate the executive dashboard. Tactical KPIs populate business unit reports. Operational KPIs populate team dashboards. The hierarchy is the content architecture; the reporting structure is the delivery mechanism. See Management Reporting Framework for the reporting side.
Does cascading work with the Balanced Scorecard? The Balanced Scorecard’s four perspectives (financial, customer, process, learning) are useful for ensuring completeness. Cascading within BSC requires the same logical derivation described in this article — parent-child relationships, aggregation logic, and ownership. Organisations that adopt BSC perspectives without implementing genuine cascading logic end up with categorised lists, not hierarchies.
Related Reading
- KPI Framework for Financial Reporting
- Designing Effective KPIs
- Metrics vs KPIs — Understanding the Difference
- Financial Dashboards for Executive Decisions
- Data Ownership Framework
Sources
- BCG and MIT — research on performance management effectiveness: 60% of managers believe they need better KPIs.
- Institute of Global Controlling (CZ) — Only 11% of companies manage controlling via process KPIs.
- KPI Institute — 68% of organisations report positive performance improvements after structured KPI tracking.
Martin Duben is the founder of Onetribe, advising mid-market companies across Central Europe on financial reporting, data governance, and performance management. He works with CFOs to build reporting structures that connect measurement to decision-making.