
The "Black Box" Problem - Why the Reality of AI Decisioning is Often IRL Disappointment

Josh Webb

In our last post we argued that there is a huge gap between the old paradigm of "personalization" and the new expectation of "relevance" - increasingly demanded by consumers, but which brands are failing to deliver at scale. In this post we move on to "AI Decisioning" and its many promises, and pitfalls, in relation to this gap.

Tech Wars: A New Hope

So where to from personalization? AI, of course, is the next generation. A new wave of "AI-powered" predictive products has entered the game in the past few years. But despite the fanfare and the promise of 1:1 personalization, the truth is that many of these have shown mixed results at best.

What most vendors won't tell you in their "AI Decisioning" brochures:

Predicting what a customer will do is not the same as influencing what they will do.

The current landscape of AI marketing tools - including the "AI Studios" and "Smart Hubs" offered by major incumbents - relies on two flawed approaches. The promises are large, so these are easy traps to fall into.

The Propensity Trap

Traditional predictive models ask: "Who is most likely to buy?" The model scans your data, finds your most loyal customers (those who buy most often), and tells you to send them a coupon.

The Result: You send a discount to customers who were going to buy at full price.

The Illusion: The campaign looks like a massive success (high conversion rate!), but you have actually destroyed margin by targeting "Sure Things" instead of "Persuadables."
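The difference between the two ranking criteria can be sketched in a few lines. This is a minimal, hypothetical illustration: the segment names and probabilities are invented for the example, not drawn from any real dataset or vendor product.

```python
# Illustrative purchase probabilities per segment, with and without a discount.
# All names and numbers are hypothetical, chosen to make the trap visible.
SEGMENTS = {
    "sure_thing":  {"p_no_offer": 0.90, "p_offer": 0.90},  # buys regardless
    "persuadable": {"p_no_offer": 0.10, "p_offer": 0.45},  # moved by the offer
    "lost_cause":  {"p_no_offer": 0.02, "p_offer": 0.02},  # won't buy either way
}

def uplift(name):
    """Incremental purchases caused by the offer (the causal quantity)."""
    s = SEGMENTS[name]
    return s["p_offer"] - s["p_no_offer"]

# A propensity model ranks customers by historical purchase rate...
best_by_propensity = max(SEGMENTS, key=lambda n: SEGMENTS[n]["p_no_offer"])
# ...while an uplift model ranks them by incremental effect.
best_by_uplift = max(SEGMENTS, key=uplift)

print(best_by_propensity)  # "sure_thing": 90% conversion, 0% incremental
print(best_by_uplift)      # "persuadable": 45% conversion, 35% incremental
```

The propensity model's pick reports the best conversion rate while causing exactly zero extra purchases - the "massive success" illusion in code.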

The Bandit Trap

Newer "Agentic" tools and "Contextual Bandits" optimize on correlations.

The Result: The AI learns that sending "Promotion X" results in a desirable outcome (e.g. the customer purchases). It optimizes by sending more of these offers, to more customers.

The Flaw: Correlation is not causation. These models cannot distinguish between an action that created a purchase and one that simply coincided with it. If the attributed cause turns out to be incorrect, budget is wasted on promotions to customers who would have purchased anyway.
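A toy simulation makes the flaw concrete. The reward signal below credits any purchase that follows a promotion, whether or not the promotion caused it - the segment names, probabilities, and seed are all hypothetical assumptions for illustration.

```python
import random

# Two illustrative customer segments (hypothetical numbers):
LOYAL = {"base_p": 0.90, "lift": 0.00}  # would have bought anyway
SWING = {"base_p": 0.10, "lift": 0.40}  # genuinely influenced by the promo

rng = random.Random(42)

def observed_reward(segment, trials=20_000):
    """Purchase rate the bandit *sees* after sending the promotion.

    It cannot separate base_p (organic purchases) from lift (caused ones).
    """
    p = segment["base_p"] + segment["lift"]
    return sum(rng.random() < p for _ in range(trials)) / trials

reward_loyal = observed_reward(LOYAL)  # ~0.90, none of it caused by the promo
reward_swing = observed_reward(SWING)  # ~0.50, most of it caused by the promo

# A correlation-optimizer concludes the promo "works best" on loyal customers
# and sends them more offers, even though its true incremental effect is zero.
```

The bandit's observed reward is highest exactly where the causal effect is zero, so optimizing the observed signal steers budget toward the customers who needed no offer at all.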


So, even in theory, these tools are less than ideal. But this is not even the biggest problem.

We need to talk about confounding!

Predictive models promise to tell you what will happen in the future, based on the past. This may offer some utility, even if it can't tell you why. But there is another problem which applies to almost any model claiming to discover the "next best action" - and it's logically inescapable.

Predictive models trained on historical data invariably degrade as soon as they go through any retraining. Their predictive accuracy gradually declines due to what statisticians call "confounding effects" - as though their crystal ball gets increasingly cloudy over time. And this isn't a situation where you can assume the model is still "good enough": the predictions become critically inaccurate, and the situation is inescapable.

The reason this happens is that the models have no way to know whether the "next-best-action" they suggest actually causes the desired effect, or whether the effect would have happened anyway. When the model is retrained on the new data it collects, it confuses correlation with causation and enters a downward spiral of performance. And you have no way of knowing why.
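The feedback loop described above can be sketched as a few retraining cycles. The dynamics and numbers here are assumed purely for illustration - no real platform is being modeled - but the lock-in behavior is the point.

```python
# Hypothetical per-segment dynamics (illustrative values):
true_base = {"A": 0.80, "B": 0.10}  # organic purchase rate
true_lift = {"A": 0.00, "B": 0.30}  # actual causal effect of the promotion

target, history = "A", []
for _ in range(3):  # each iteration = one retraining cycle
    # What the system logs: conversions per segment, organic purchases included.
    observed = {seg: true_base[seg] + (true_lift[seg] if seg == target else 0.0)
                for seg in true_base}
    history.append(target)
    # "Retraining": credit the promotion wherever observed conversion is highest.
    target = max(observed, key=observed.get)

# The policy locks onto segment A in every cycle: its own logs keep
# "confirming" that promoting A works (A buys a lot regardless), while B -
# the only segment the promotion actually moves - is never explored.
```

Because the model only ever retrains on outcomes it influenced, each cycle reinforces the confounded conclusion, which is the downward spiral the text describes.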

This isn't a minor technical quibble. It's an inherent limitation of what most still think of as the "state of the art" technology applied in AI Decisioning. If you have a system determining "next best action" today, it is almost certainly degrading - and this is the fundamental reason even the most expensive AI platforms of 2025 struggle to deliver consistent, provable value.
