From a spreadsheet nobody trusts to a forecast the board acts on. Diagnostic, integration and build-out — delivered alongside your team, not behind a curtain.
Most cashflow projects fail at the data layer long before the model. Subsidiaries produce 13-week numbers under pressure, each with a different risk appetite. Spreadsheets multiply. The board sees a number it doesn't challenge because it can't trace it. Even a 10% accuracy uplift is worth real money at scale — but you only get there by fixing the data and the process, not just adding a model on top.
Step 1 · Diagnostic
Two to three weeks, fixed scope. A hands-on diagnostic of accuracy, data, coverage and governance. Delivered as a written report with a prioritised action list — not a slide deck and a proposal for another engagement.
Goal: establish the baseline and tell you honestly whether ML is what you need next, or whether a process fix would move the number further, faster.
Current vs. actual variance over the last 12 weeks, by entity, by category. Where the model (or the spreadsheet) is consistently wrong — and where it's accidentally right.
Bank feeds, AR/AP, payroll, treasury movements. What's machine-readable, what's still manual, what's tagged to a chart of accounts a model can use.
Which entities are forecast, which are inferred, which are guessed. How aggressive vs. conservative the underlying assumptions are — and whether that's policy or personality.
Who owns the number, who reviews, who challenges. The path from a subsidiary submission to a board-pack figure, and where breaks happen.
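The accuracy baseline above is simple to compute once forecast and actual sit side by side. A minimal pandas sketch — entity names, categories and figures are illustrative assumptions, not client data:

```python
import pandas as pd

# Hypothetical weekly forecast-vs-actual history; the schema is an assumption.
history = pd.DataFrame({
    "entity":   ["DE", "DE", "UK", "UK"],
    "category": ["AR", "AP", "AR", "AP"],
    "forecast": [100.0, 80.0, 50.0, 40.0],
    "actual":   [110.0, 80.0, 45.0, 60.0],
})

# Signed variance exposes consistent bias; absolute percentage error sizes it.
history["variance"] = history["actual"] - history["forecast"]
history["ape"] = history["variance"].abs() / history["actual"].abs()

baseline = (
    history.groupby(["entity", "category"])["ape"]
    .mean()
    .rename("mape")
    .reset_index()
)
print(baseline)
```

Grouping by entity and category is what separates "consistently wrong" (a persistent bias in one cell) from "accidentally right" (errors that cancel in the total).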
Step 2 · Integration
The boring 80% of the work is where the accuracy gain actually comes from. We stand up the data spine, tag the history, engineer the features and validate the baseline so the model has a chance.
Bank APIs, ERP, TMS and payroll routed into the treasury data spine. No more exports, no more monthly refreshes — the forecast sits on a live substrate.
The 80% of the work nobody owns. Dedupe, standardise, tag to a stable chart of accounts, resolve entity / currency / account breaks, make the history defensible.
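In practice, the dedupe-and-tag step is a few deterministic operations repeated at scale. A minimal sketch, assuming a raw extract with free-text labels — the column names, labels and account codes are all hypothetical:

```python
import pandas as pd

# Hypothetical raw transaction extract; duplicates and free-text labels are typical.
raw = pd.DataFrame({
    "txn_id": ["t1", "t1", "t2"],
    "label":  ["Salaries Jan", "Salaries Jan", "Rent HQ"],
    "amount": [-90_000.0, -90_000.0, -12_000.0],
})

# Illustrative mapping from free-text labels to a stable chart of accounts.
coa_map = {"Salaries Jan": "6000-PAYROLL", "Rent HQ": "6100-RENT"}

clean = raw.drop_duplicates(subset="txn_id").copy()
clean["account"] = clean["label"].map(coa_map)
```

The real work is in building and maintaining the mapping, not in the code — which is why this step needs an owner.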
The variables a model can actually reason about — day-of-week, month-end seasonality, counterparty payment patterns, large-and-unusual flags. Separates what ML should forecast from what judgement should.
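Those features are cheap to derive once the history is clean. A sketch with pandas datetime accessors — the dates, amounts and the 10× median threshold for "large and unusual" are illustrative assumptions:

```python
import pandas as pd

# Hypothetical daily cash movements; names are illustrative, not a real schema.
flows = pd.DataFrame({
    "date":   pd.to_datetime(["2024-01-30", "2024-01-31", "2024-02-01"]),
    "amount": [1_200.0, 250_000.0, 900.0],
})

flows["day_of_week"] = flows["date"].dt.dayofweek        # 0 = Monday
flows["is_month_end"] = flows["date"].dt.is_month_end    # payroll and rent cluster here

# Flag large-and-unusual flows so judgement, not the model, handles them.
threshold = flows["amount"].abs().median() * 10
flows["is_unusual"] = flows["amount"].abs() > threshold
```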
Feature scaling so one outlier doesn't dominate the model. A chronological training / validation / testing split, with explicit metrics — MAPE for the forecast amounts, precision and recall for the large-and-unusual flags. A baseline everyone trusts.
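Validating the baseline is the part that keeps everyone honest. A minimal NumPy sketch — the series is invented, and the split is chronological because shuffling a time series leaks the future into training:

```python
import numpy as np

# Hypothetical weekly actuals vs. the incumbent forecast; figures are illustrative.
actuals  = np.array([100.0, 120.0, 110.0, 130.0, 125.0, 140.0])
forecast = np.array([ 95.0, 100.0, 120.0, 110.0, 130.0, 125.0])

# Chronological split: fit on the past, score only on the holdout.
train, test = slice(0, 4), slice(4, 6)

def mape(a, f):
    return float(np.mean(np.abs((a - f) / a)))

# A model would be fit on `train`; here we only score the incumbent on `test`.
baseline_mape = mape(actuals[test], forecast[test])
```

Any candidate model has to beat `baseline_mape` on the same holdout before it earns a place in the stack.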
ML handles the regular, explainable flows. Judgement handles the one-offs — M&A, dividends, bond coupons. The forecasting stack knows which is which, and so does the team.
Step 3 · Build-out
Rolling 13-week direct cashflow forecast per entity, with ML-driven category-level estimates and judgement overlays for the one-offs (M&A, dividends, bond coupons). Live liquidity buffer tracking against policy.
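The split between model and judgement can stay explicit all the way down to the weekly total. A toy sketch — categories and amounts are invented for illustration:

```python
# Hypothetical week: ML forecasts the regular categories, overlays hold the one-offs.
ml_forecast = {"AR": 5_000_000.0, "AP": -3_200_000.0, "payroll": -1_100_000.0}
judgement_overlays = {"dividend": -2_000_000.0}  # one-off, deliberately outside the model

# Keeping the two sets separate means the team can always see
# which part of the number came from where.
week_total = sum(ml_forecast.values()) + sum(judgement_overlays.values())
```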
Indirect forecast driven by P&L assumptions, working-capital swings and financing decisions. Used to stress-test covenant headroom and shape the financing roadmap.
AI-drafted variance explanations that tie the forecast back to the numbers, in your tone of voice. The board pack writes the first draft of itself.
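An LLM would draft the prose; what grounds it is a deterministic skeleton that ties every sentence back to the numbers. A hypothetical sketch of that skeleton — function name, wording and thresholds are assumptions:

```python
# Hypothetical first-draft variance note; an LLM would rewrite this in house style.
def draft_variance_note(entity, category, forecast, actual):
    variance = actual - forecast
    direction = "above" if variance > 0 else "below"
    pct = abs(variance) / abs(forecast) * 100
    return (f"{entity} {category}: actuals came in {pct:.1f}% {direction} "
            f"forecast ({actual:,.0f} vs {forecast:,.0f}).")

print(draft_variance_note("UK", "AP", 40_000, 60_000))
```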
Runbooks, notebooks, model cards and training so the team can interrogate the forecast themselves — no black box, no permanent consultant dependency.
Start a conversation
Book a 30-minute diagnostic call. We'll tell you within the hour whether we can help, and where the biggest wins likely sit.