I gave an AI (specifically, Claude Opus 4.7 from Anthropic, accessed 4/18/2026) an assignment. Review my past Data Science for Decision Makers (DSDM) articles to understand my preferred topics, points of view, and writing style. Pick a topic I’d pick, one the audience would actually use, and develop a point of view. Write the article; then think like an editor and revise the article; then think like a DSDM reader, asking if the article is clear and if it is useful, and make final revisions.
I gave instructions up front. I made no edits. None. So the article below, which I stand behind by the way, has something useful to say about forecasting. But as you read it, perhaps you will get a sense of where we stand in terms of AI-generated written content.
And in next month’s column, perhaps you will wonder if I wrote it at all.
The Permission Forecast
Picture your last planning review.
A 47-slide deck. A hero number on slide 12 — let’s call it $34 million in incremental revenue. Assumptions footnoted in eight-point type. A confidence interval that is technically present but has been sanded smooth enough that nobody asks what “conservative” means. The analytics lead says the model is robust. Finance says the model is reasonable. You nod. Everyone nods. The number becomes the plan.
Eighteen months later, the number is wrong. This does not surprise anyone in particular. There is a post-mortem. Phrases like “macro conditions” and “execution challenges” appear. Someone suggests tightening the forecasting methodology. Everyone nods again. The ritual repeats.
What nobody says out loud: the forecast did exactly what it was built to do.
It just wasn’t built to forecast.
Most Forecasts Are Not Predictions
Most forecasts that reach an executive’s desk are political documents wearing predictive clothing. They use the vocabulary of prediction — ranges, models, sensitivity analyses — because predictive vocabulary signals rigor, and rigor signals that the number has been earned rather than chosen. But the actual job of the document is almost never to answer the question what will happen?
The job is to answer a different question. Sometimes several.
This is not a moral failing on the part of your analytics team. They are responding rationally to how forecasts get used in your organization. If you have ever watched a team produce a forecast, watched finance cut it by fifteen percent in the first round of review and by another ten in the second, and then watched the surviving number become the baseline, your team has learned that the opening bid matters. They are not going to hand you their real best estimate. They are going to hand you the number that survives the process.
The three most common roles a forecast is actually playing have very little to do with prediction.
Three Roles a Forecast Is Actually Playing
1. The Permission Slip. The forecast built after the decision has been made. The business wants to launch the new channel, acquire the competitor, or expand into the new territory. The analysis arrives to justify the ask. The number is not derived; it is reverse-engineered. It hits precisely the threshold at which the investment looks attractive, and not a dollar higher — because any dollar higher raises expectations, and any dollar lower kills the project. The Permission Slip is not trying to predict anything. It is trying to get approved.
You recognize a Permission Slip by how little the number moves when you challenge the assumptions. The assumptions are downstream of the conclusion; they are the scaffolding, not the foundation.
2. The Alibi Generator. The forecast designed to make whoever produced it defensible regardless of outcome. The range is wide. The floor is conservative enough that anything short of catastrophe counts as a win. The ceiling is aspirational enough that if things go well, someone can claim credit for “seeing the upside.” Hedging language is present throughout: directional, based on current conditions, subject to macro assumptions. The Alibi Generator is a forecast optimized for the post-mortem, not the strategy.
You recognize an Alibi Generator by how often, in the follow-up meeting, someone produces a version of the original deck that already contained the excuse. Last I checked, “subject to macro assumptions” was not a business model.
3. The Anchor Bid. The forecast submitted as the opening move in a budget negotiation, knowing the final number will land somewhere south of it. Both sides understand the game. The analytics team inflates the ask by roughly the expected cut. Finance cuts by roughly that amount. Everyone agrees the process has been rigorous. Everyone goes home. Nothing has been forecast. A negotiation has occurred.
You recognize an Anchor Bid by the suspicious regularity with which the “cut” version of the forecast lands right at the boundary of what the team actually needs.
You Are Reviewing the Wrong Thing
None of these three forecasts are doing anything wrong, relative to what they are built to do. The Permission Slip gets the launch approved. The Alibi Generator keeps the forecaster employed. The Anchor Bid yields a workable budget. The machinery is functioning.
The problem is that you, the executive, are then asked to treat all three documents as predictions. You review them for predictive accuracy. You “hold people accountable” when the predictions are wrong. You rotate in new vendors or new tools to “improve forecasting discipline.” Each of these responses reinforces the political game rather than fixing it — because a more sophisticated Permission Slip is still a Permission Slip. A more rigorous-looking Alibi Generator still isn’t forecasting anything. An Anchor Bid with tighter math is still a bid.
The cost is not forecast accuracy. The cost is that the organization loses track of what it actually believes is going to happen.
When your team “beats” a forecast it produced, that is not a prediction. That is a performance.
What a Real Forecast Actually Looks Like
A forecast doing actual forecasting work has three markers.
First, the team is willing to name the conditions under which its view would change. Not generically (“if the market shifts”) but specifically: If week-three retention for the new segment comes in below 38%, we lose confidence in the revenue trajectory. A forecast that lists no conditions under which its view would change is not a forecast. It is a mood.
Second, the range reflects real uncertainty, not polite uncertainty. A $30-$35 million range almost always means the team believes the number is $32 million and is pretending to be humble. A $20-$45 million range means the team does not actually know — which, depending on what they are forecasting, may be exactly the right answer. (A short simulation sketch after the third marker shows why honest ranges run this wide.)
Third, the team is willing to separate the forecast from the recommendation. We think the revenue will be X. We still recommend the project. Or: We think the revenue will be X. We do not recommend the project. When the forecast and the recommendation are fused into a single number — one that happens to make the decision look correct — you are not reading a forecast. You are reading a vote.
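To make the second marker concrete, here is a minimal simulation sketch. Every number in it is made up: three hypothetical revenue drivers (reachable customers, conversion rate, value per customer), chosen only so the point estimate lands near the $32 million figure above. The point is that once even modest uncertainty in a few multiplicative assumptions is allowed to compound, the width of the range comes from the arithmetic, not from what the room will tolerate.

# A minimal sketch, not anyone's actual model. All inputs below are hypothetical,
# chosen only to show how assumption uncertainty compounds into a wide range.
import numpy as np

rng = np.random.default_rng(seed=7)
n = 100_000  # simulation draws

# Three made-up revenue drivers, each with its own (made-up) uncertainty.
customers = rng.normal(400_000, 60_000, n)                   # reachable customers
conversion = rng.beta(8, 92, n)                               # roughly 8% conversion, uncertain
value_per_customer = rng.lognormal(np.log(1_000), 0.25, n)    # dollars per converted customer

revenue = customers * conversion * value_per_customer

point = np.median(revenue)
low, high = np.percentile(revenue, [10, 90])

print(f"Point estimate (median):  ${point / 1e6:,.1f}M")
print(f"'Polite' band (+/- 8%):   ${0.92 * point / 1e6:,.1f}M to ${1.08 * point / 1e6:,.1f}M")
print(f"Honest 10th-90th range:   ${low / 1e6:,.1f}M to ${high / 1e6:,.1f}M")

Run it and the 10th-90th band comes out several times wider than the polite plus-or-minus-eight-percent corridor. The specific distributions do not matter much; what matters is that the width of the range is derived from the stated assumptions rather than negotiated after the fact.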
Where This Goes Wrong
Three failure modes. Each common. Each fixable.
Confusing accuracy with honesty. Organizations obsess over whether last year’s forecast was close to the actual outcome. This is the wrong question when the forecast was not trying to be close in the first place. A Permission Slip that “hit the number” did not predict well; it argued well, and then the team worked to make the argument true. Those are different things.
Rewarding forecasts that never miss. (This pattern plays out constantly.) A marketing analytics team hits its forecast, within three points, for eleven consecutive quarters. The CEO treats this as a sign of forecasting excellence. It is nothing of the sort. The team has learned — rationally — that missing the number has career consequences, and beating the number by too much causes next year’s budget to rise. So they optimize for landing inside the band. Their forecasts are not predictions. They are a governor on the engine. A forecast that never misses is not a forecast. It is a ceiling.
Treating the planning deck as the decision document. Most organizations fuse the “what will happen” document and the “what should we do” document into a single artifact. This saves meetings and destroys clarity. A decision document argues for a position. A forecast document estimates a range of outcomes. When one artifact has to do both, the forecast is always the one that loses — because the decision is already what the room cares about.
Your Monday Morning Mandate
The next time a forecast arrives on your desk, do three things.
Ask which role it is actually playing. Not accusatorially. Diagnostically. Is this a prediction, a proposal, or a position? If the person presenting cannot answer, you have your answer. If they can, the conversation has already become more useful than it would have been otherwise.
Separate the decision document from the forecast document. Require two artifacts where there was one. A recommendation memo that says what the team thinks the company should do, and why. A forecast deliverable that estimates the outcome and names the conditions that would change that estimate. When these are separated, each becomes honest. When they are fused, neither is.
Stop grading forecasts on how close they were to actuals. Start grading them on how much they changed your decision. A forecast that came in within three percent of the outcome, but that you were going to approve regardless of what it said, did not do any forecasting work. A forecast that was off by twenty percent, but that caused you to shift capital, staffing, or sequencing, did exactly what a forecast is supposed to do. The value of a forecast is measured in the decisions it moves.
Your analytics team is not bad at forecasting. They are excellent at giving you what you reward. If the forecasts you have been getting look like performances, the question is not whether your team is undertrained. The question is which meetings, which rituals, and which incentives are quietly teaching them that performance is what the executive wants.
Fix that, and the forecasts get more honest the next cycle.
Do not fix it, and you will spend the next decade holding people accountable for predictions they were never making.
Michael Bagalman is VP of Business Intelligence & Data Science at Starz and Professor of Practice at the University of Oklahoma. He has spent more than 25 years building and leading data and decision-making capabilities at organizations including AT&T, Sony, Publicis, and Deutsch. He writes the Data Science for Decision Makers column at All Things Insights and publishes Data Science Rabbit Hole on Medium. Bagalman holds degrees from Harvard and Princeton. Learn more at MichaelBagalman.com.