Most mid-market companies do not forecast — they reforecast by updating last year’s budget with current actuals, producing updated numbers rather than updated insight. A financial forecasting framework establishes forecasting as a distinct discipline from budgeting: budgets set targets through political commitment, forecasts predict outcomes through analytical rigour. The distinction is foundational — a forecast that consistently matches the budget is not a forecast but the budget restated. Driver-based methodology is the single most impactful upgrade from financial extrapolation, with Aberdeen reporting a 14% improvement in revenue forecast accuracy. Process design — inputs, methodology, cadence, governance — matters more than granularity or tooling. The practical starting point: material line items, a clear cadence, and explicit assumptions, iterated from there rather than waiting for a perfect model.
How confident is the leadership team in the numbers they use to make decisions about the next twelve months? In most mid-market companies, the honest answer is: not very. The annual budget was approved in November. By March, the assumptions behind it have shifted. By June, the budget bears little resemblance to operational reality. Yet it remains the only forward-looking financial view the company has.
The gap is not a budget problem — budgets do what budgets are designed to do. The gap is the absence of a forecasting discipline. Most mid-market companies do not forecast. They reforecast — which means they update last year’s numbers with this year’s actuals, paste in new assumptions for the remaining months, and call the result a forecast. The methodology does not change. The assumptions are not validated. The process produces updated numbers, not updated insight.
A financial forecasting framework establishes forecasting as a distinct capability with its own purpose, methodology, cadence, and governance — separate from the annual budget and designed to answer a different question.
Budgets Set Targets. Forecasts Predict Outcomes.
This distinction is foundational and widely misunderstood.
A budget is a political and managerial commitment. It reflects what the organisation has agreed to achieve — revenue targets, cost limits, headcount plans. Budgets are negotiated, approved, and governed. They incorporate stretch targets, sandbagged costs, and compromise assumptions. A budget that is “wrong” may still have served its purpose if it aligned the organisation around shared targets.
A forecast is an analytical prediction. It reflects what the organisation expects will actually happen — based on current data, operational trends, and validated assumptions. A forecast should be unbiased. If the forecast consistently matches the budget, it is not a forecast — it is the budget restated. A forecast that predicts a shortfall is doing its job; a budget that predicts a shortfall has failed its purpose.
| Dimension | Budget | Forecast |
|---|---|---|
| Purpose | Target-setting, resource allocation, accountability | Prediction, early warning, decision readiness |
| Nature | Political commitment | Analytical prediction |
| Bias | Intentional (stretch targets, buffers) | None (unbiased prediction) |
| Update cadence | Annual | Monthly or quarterly |
| What “wrong” means | Targets were unrealistic | Predictions were inaccurate |
| Primary audience | Board, department heads | Management team, operational leaders |
Companies that treat the budget as the forecast — or that produce a “reforecast” by adjusting budget numbers — conflate two instruments with fundamentally different purposes. The result is a single document that serves neither purpose well: too politically adjusted to predict accurately, too infrequently updated to provide early warning.
Forecast Types and Horizons
Different decisions require different forecast horizons and different levels of precision:
Short-term forecasts (4–13 weeks)
Primary object: Cash flow. Purpose: Ensuring the company can meet its near-term obligations — payroll, supplier payments, debt service, tax. Methodology: Bottom-up from known commitments, receivables ageing, and payables schedules. Precision requirement: High. Cash forecasts must be accurate to ±3–5% because the consequences of error are immediate — missed payments, unnecessary borrowing, covenant breaches.
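The bottom-up mechanics can be sketched in a few lines. The figures, week numbers, and function name below are illustrative assumptions, not a prescribed model:

```python
# A bottom-up 13-week cash forecast: opening balance plus known weekly
# inflows (receivables due) minus known outflows (payroll, suppliers,
# debt service, tax). All figures are illustrative, in pounds.
def weekly_cash_forecast(opening_balance, inflows, outflows, weeks=13):
    """Return projected closing balances, one per week."""
    balances = []
    balance = opening_balance
    for week in range(weeks):
        balance += inflows.get(week, 0) - outflows.get(week, 0)
        balances.append(balance)
    return balances

# Hypothetical inputs: receivables ageing mapped to expected collection
# week, payables and payroll mapped to their due weeks.
inflows = {0: 120_000, 2: 85_000, 5: 140_000, 9: 95_000}
outflows = {0: 60_000, 1: 45_000, 4: 110_000, 8: 70_000, 12: 50_000}

balances = weekly_cash_forecast(250_000, inflows, outflows)
trough = min(balances)
print(f"13-week trough: £{trough:,} in week {balances.index(trough) + 1}")
```

The useful output is not the final balance but the trough: the lowest projected position tells the team whether near-term obligations are covered or borrowing is needed.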
Medium-term forecasts (3–12 months)
Primary object: P&L and working capital. Purpose: Informing operational decisions — hiring, investment, capacity, pricing. Methodology: Driver-based — projecting revenue from pipeline and conversion, costs from headcount and volume, working capital from payment terms and inventory cycles. Precision requirement: Moderate. ±5–10% accuracy is acceptable because decisions at this horizon tolerate more uncertainty.
Long-term forecasts (1–5 years)
Primary object: Strategic financial trajectory. Purpose: Informing strategic decisions — market entry, major investment, capital structure, M&A. Methodology: Scenario-based — multiple versions of the future with different market, competitive, and operational assumptions. Precision requirement: Low for point estimates. The value is in the range and direction, not in specific numbers. ±15–25% is typical and appropriate.
Most mid-market companies have only one forward-looking view: the annual budget, which attempts to cover all three horizons in a single document. A forecasting framework separates these needs and matches methodology to purpose.
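The scenario-based approach for the long horizon can be illustrated with a deliberately simple sketch: one driver model run under several labelled assumption sets. The revenue base and growth rates are hypothetical:

```python
# One simple driver model run under several labelled assumption sets.
# The revenue base and growth rates are hypothetical.
def project_revenue(base_revenue, growth_rate, years=5):
    """Compound a starting revenue over the forecast horizon."""
    return [round(base_revenue * (1 + growth_rate) ** y)
            for y in range(1, years + 1)]

scenarios = {"downside": -0.02, "base": 0.06, "upside": 0.12}

for name, growth in scenarios.items():
    path = project_revenue(10_000_000, growth)
    print(f"{name:>8}: year-5 revenue £{path[-1]:,}")
```

The point estimate matters less than the spread: here the year-5 range spans roughly £9 million to £17.6 million, which is the information a market-entry or capital-structure decision actually needs.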
The Forecast Process Architecture
A structured forecast follows a repeatable process:
Inputs
Operational data. Pipeline value, production volumes, headcount actuals, capacity utilisation, order backlog. These are the raw inputs that drive the forecast — and they must come from the functions that own them, not from finance extrapolating financial trends.
External data. Market indicators, commodity prices, exchange rates, customer-reported plans. External inputs frame the context within which operational drivers operate.
Assumptions. Every forecast contains assumptions that are neither data nor calculation — conversion rates, growth rates, win probabilities, cost escalation rates. These must be explicit, documented, and owned. An assumption register — listing each assumption, its source, its owner, and its last validation date — is the single most undervalued component of the forecast process.
Methodology
The methodology determines how inputs become projections. Four approaches exist on a spectrum:
| Methodology | How it works | Accuracy potential | Effort level |
|---|---|---|---|
| Financial extrapolation | Apply growth rates to historical financials | Low — repeats past, ignores change | Low |
| Trend-based projection | Statistical analysis of historical patterns | Low to moderate — identifies patterns, misses inflections | Low to moderate |
| Driver-based forecasting | Project operational drivers, calculate financial outcomes | Moderate to high — captures operational reality | Moderate |
| Scenario-enriched | Driver-based with multiple assumption sets | High — captures uncertainty, enables decisions | Moderate to high |
Most mid-market companies operate at the first level — financial extrapolation. The annual budget applies a growth rate to last year’s revenue, an inflation rate to last year’s costs, and calls the result a plan. Moving from financial extrapolation to driver-based forecasting is the single most impactful upgrade. Aberdeen research reports a 14% improvement in revenue forecast accuracy specifically from adopting driver-based methodology.
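The difference between the two methodologies can be made concrete. A minimal sketch, in which all figures and function names are illustrative assumptions:

```python
# Financial extrapolation: apply a growth rate to last year's revenue.
def extrapolated_revenue(last_year_revenue, growth_rate):
    return last_year_revenue * (1 + growth_rate)

# Driver-based: build revenue from operational drivers, each with an owner.
def driver_based_revenue(pipeline_value, conversion_rate,
                         contracted_recurring, expected_expansion):
    return (pipeline_value * conversion_rate
            + contracted_recurring + expected_expansion)

naive = extrapolated_revenue(10_000_000, 0.08)
driven = driver_based_revenue(
    pipeline_value=4_000_000,        # owned and updated by sales
    conversion_rate=0.30,            # validated against pipeline history
    contracted_recurring=8_000_000,  # signed contracts
    expected_expansion=1_200_000,    # upsell to existing customers
)
print(f"Extrapolated: £{naive:,.0f}  Driver-based: £{driven:,.0f}")
```

Both produce a number, but only the second can be interrogated driver by driver when actuals diverge from the forecast.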
Cadence
The forecast must be updated regularly enough to remain useful, but not so frequently that it consumes all available capacity:
- Quarterly is the minimum for meaningful forecasting. It provides four opportunities per year to incorporate new information and adjust expectations.
- Monthly is the standard for mature mid-market companies. It aligns with the management meeting rhythm and provides twelve feedback cycles per year.
- Weekly is appropriate only for short-term cash forecasting, not for full P&L and balance sheet views.
The cadence must match the decision cadence. If management makes resource decisions monthly, a quarterly forecast arrives too late to inform them.
Governance
Forecast governance answers four questions:
Who forecasts? Each driver has an owner. Sales forecasts pipeline. Operations forecasts production. HR forecasts headcount. Finance builds the model that converts driver inputs to financial outputs. Finance does not forecast — finance calculates.
Who validates? Driver assumptions are validated by the functions that own them. Financial model integrity is validated by finance. The CFO or head of finance validates the consolidated view.
When does the forecast trigger action? A forecast that predicts a cash shortfall in six months but triggers no response has failed. The governance structure must define escalation thresholds — what level of deviation from plan requires management discussion, board notification, or contingency activation.
How is accuracy tracked? Forecast accuracy must be measured at every cycle — comparing prior forecasts to actuals, calculating accuracy metrics, and diagnosing root causes of error. Without this feedback loop, the forecast process cannot improve.
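The escalation-threshold idea can be sketched as a simple mapping from deviation to response. The threshold levels here are hypothetical; each company calibrates its own:

```python
# Map percentage deviation from plan to a governance response.
# The threshold levels are hypothetical assumptions, not a standard.
def escalation_level(forecast, plan):
    deviation = abs(forecast - plan) / plan
    if deviation >= 0.15:
        return "activate contingency plan"
    if deviation >= 0.10:
        return "notify board"
    if deviation >= 0.05:
        return "management discussion"
    return "no action"

# A forecast roughly 6.7% below plan triggers management discussion.
print(escalation_level(forecast=11_200_000, plan=12_000_000))
```

Codifying the thresholds, even this crudely, forces the governance question to be answered in advance rather than negotiated after the forecast lands.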
Outputs
The forecast produces forward-looking financial statements — P&L, balance sheet, and cash flow — plus commentary that explains the key assumptions, the major changes from the prior forecast, and the risks and opportunities that could move outcomes outside the forecast range.
The commentary matters as much as the numbers. A forecast without commentary is a set of assumptions that nobody has interrogated. A forecast with commentary is a narrative about where the business is heading and what might change.
The Reforecast Trap
The most common mid-market forecasting practice is not forecasting at all — it is reforecasting. The distinction matters:
Reforecasting takes the annual budget, replaces year-to-date months with actuals, and adjusts remaining months. The methodology does not change. The assumptions are not re-examined. The driver model (if one exists) is not updated. The result is a number that is more current but not more insightful.
Forecasting starts from current operational reality — pipeline, capacity, order book, headcount — and projects forward using validated assumptions and explicit methodology. It may arrive at a number close to the reforecast, or it may arrive somewhere very different. The difference is that the forecast number can be interrogated: “Why £10.4 million? Because the pipeline is £4 million at 30% conversion (£1.2 million), plus £8 million in contracted recurring revenue, plus £1.2 million in expected expansion.”
The reforecast answers: “What do updated numbers look like?” The forecast answers: “What do we expect to happen, and why?”
Building the Assumptions Register
Every forecast depends on assumptions. Most forecasts bury them — embedded in cell formulae, implied in growth rates, assumed by convention. When the forecast proves wrong, nobody can identify which assumption failed because nobody documented the assumptions in the first place.
An assumptions register makes every material assumption explicit:
| Assumption | Value | Source | Owner | Last validated | Sensitivity |
|---|---|---|---|---|---|
| Revenue conversion rate | 28% | Sales pipeline analysis | Head of Sales | March 2026 | ±5% = ±£400K revenue |
| Average deal size | £95,000 | 12-month trailing average | Head of Sales | March 2026 | ±10% = ±£350K revenue |
| Headcount additions | +8 FTE by Q3 | Approved hiring plan | HR Director | February 2026 | Each FTE = £65K annual cost |
| Raw material cost increase | +4% year-on-year | Supplier contracts | Procurement | January 2026 | ±2% = ±£180K COGS |
This is not overhead — it is the mechanism that makes the forecast transparent, interrogable, and improvable. When the forecast is wrong, the register identifies which assumption failed. When stakeholders question the forecast, the register provides the evidence base.
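The register translates naturally into structured data, which makes staleness checks automatic. A sketch using entries from the table above; the field names and the 90-day validation window are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date

# The assumptions register as structured data. Field names and the
# 90-day validation window are illustrative assumptions.
@dataclass
class Assumption:
    name: str
    value: float
    source: str
    owner: str
    last_validated: date

    def is_stale(self, today, max_age_days=90):
        """Flag assumptions not validated within the review window."""
        return (today - self.last_validated).days > max_age_days

register = [
    Assumption("Revenue conversion rate", 0.28, "Sales pipeline analysis",
               "Head of Sales", date(2026, 3, 1)),
    Assumption("Raw material cost increase", 0.04, "Supplier contracts",
               "Procurement", date(2026, 1, 15)),
]

stale = [a.name for a in register if a.is_stale(today=date(2026, 6, 1))]
print("Needs revalidation:", stale)
```

A check like this turns “last validated” from a documentation field into an automatic prompt at the start of each forecast cycle.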
Common Pitfalls
Treating the budget as the forecast
The budget is a target. The forecast is a prediction. When the same document serves both purposes, it serves neither well. Targets contain intentional stretch; predictions should be unbiased. A forecast that always matches the budget is not a forecast.
Forecasting only revenue
Revenue is the most visible forecast object, but it is not the most consequential. Cash flow forecasting failures cause liquidity crises. Cost forecasting failures cause margin erosion. Working capital forecasting failures cause covenant breaches. A forecasting framework covers all three financial statements, not just the top line.
Updating numbers without updating assumptions
The most common reforecast failure. Revenue is revised downward, but nobody examines whether the pipeline has changed, whether conversion rates have shifted, or whether a major customer is at risk. The numbers change; the understanding does not.
Forecasting in isolation
When finance produces the forecast without operational input, the result is a financial extrapolation — mathematically consistent but operationally disconnected. Sales knows the pipeline is weakening. Operations knows capacity is constrained. Marketing knows the campaign is underperforming. None of this reaches the forecast because the process does not include them.
Over-engineering the first forecast
A 500-line forecast model that takes three months to build and requires a full-time analyst to maintain is worse than a 50-line model that can be built in a week and maintained alongside other responsibilities. The 80/20 principle applies: five to ten drivers explain 80% of financial variance. Start there. Iterate.
Frequently Asked Questions
Do we need a separate forecasting process, or can we just update the budget more often? A more frequently updated budget is still a budget — targets adjusted for actuals. A forecast is a fundamentally different exercise: predicting outcomes from operational drivers rather than adjusting targets. The two processes can share a model, but they serve different purposes and should be governed differently.
How much time does maintaining a forecast process take? For a quarterly, driver-based forecast: two to three days of finance time per cycle, plus one to two hours from each driver owner. For a monthly process: one to two days of finance time. The investment is modest relative to the annual budget process, which typically consumes six to twelve weeks.
Should the forecast replace the annual budget? Not immediately, and possibly not ever. The annual budget serves governance, accountability, and compensation functions that the forecast does not naturally replace. Most mature organisations run both — the budget as the target, the rolling forecast as the expectation. The hybrid model is the pragmatic answer for most mid-market companies.
What is the minimum viable forecast? A monthly or quarterly view of revenue, total costs, and cash flow — built from five to ten key drivers, updated by driver owners, and reviewed by the management team. This can be built in a spreadsheet in one to two weeks and maintained in one to two days per cycle.
How do we know if our forecasting process is improving? Measure forecast accuracy at every cycle. Track MAPE and bias over time. If accuracy is improving and bias is trending toward zero, the process is working. If not, diagnose root causes and adjust.
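The two metrics are straightforward to compute. A minimal sketch, with illustrative quarterly figures:

```python
# MAPE (mean absolute percentage error) and bias (mean signed error).
def mape(forecasts, actuals):
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

def bias(forecasts, actuals):
    """Positive bias means systematic over-forecasting."""
    return sum((f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

# Illustrative: four quarters of revenue forecast vs actual (£000s).
forecasts = [2_400, 2_550, 2_700, 2_900]
actuals = [2_300, 2_600, 2_650, 2_750]

print(f"MAPE: {mape(forecasts, actuals):.1%}  Bias: {bias(forecasts, actuals):+.1%}")
```

MAPE measures how far off the forecasts are; bias shows whether the errors lean one way. A low MAPE with persistent positive bias still signals a process problem: systematic over-forecasting.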
Related Reading
- Driver-Based Forecasting — the methodology that produces the largest accuracy improvement
- Forecast Accuracy — Measurement and Improvement — the measurement discipline that closes the improvement loop
- Rolling Forecast — How to Implement Continuous Planning — the cadence framework for sustained forecasting
- How to Build an Annual Budget That Works — the budgeting framework that the forecast complements
- Variance Analysis — A Practical Guide — the backward-looking analysis that validates forecast assumptions
- Glossary: Financial Planning | Forecast Accuracy | Driver-Based Planning
Sources
- McKinsey — Forecasting Best Practices — rolling forecast adoption as the single best predictor of CFO satisfaction with planning
- Aberdeen — Driver-Based Planning Research — 14% improvement in revenue forecast accuracy with driver-based methodology
- AFP — Rolling Forecast Adoption Survey — 42% adoption rate; mid-market adoption significantly lower
- KPMG — Forecast Accuracy and Valuation — companies with less than 5% forecast deviation achieve 12% higher market valuation
- Deloitte UK — Planning, Budgeting and Forecasting Survey — 75% of annual budgets disconnected from reality by mid-year
Martin Duben is managing director at Onetribe, where he works with mid-market finance teams on planning, forecasting, and performance analysis. He has spent over fifteen years helping companies establish forecasting as a structured discipline distinct from the annual budgeting cycle.