AI might just force some systems to get more honest

29 November, 2025. Alyssa Jade McDonald-Baertl

Not long ago, a senior official in a small island nation told me their public finance system worked “perfectly well, as long as nobody asked too many questions.” It was meant as a joke, but only partly. The truth was that the system ran on cultural obligations, personal relationships, and bureaucratic improvisation. Then our project introduced a simple machine learning model. It was nothing heroic, just a quiet little anomaly detector trained on past spending patterns. Almost immediately, that comfortable line unraveled.

The model flagged transactions that moved through people who did not appear anywhere on the organisational chart. These individuals were not approvers or signatories, yet every meaningful decision touched them. The machine framed this as a set of statistical irregularities. The story beneath it was cultural. Legitimacy lived in places the bureaucracy had never acknowledged. Citizen town hall meetings had hinted at this. The difference was that the data finally offered proof.
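For readers who want the mechanics, the detector was no more exotic than something like the sketch below, built on scikit-learn’s IsolationForest. The file name, the feature columns, and the contamination rate are my illustrative assumptions here, not the project’s actual schema.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical features derived from historical spending records.
transactions = pd.read_csv("spending_history.csv")  # assumed export
features = transactions[["amount", "days_to_approval", "n_intermediaries"]]

# Train on past patterns; flag the rare transactions that do not fit them.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(features)

# predict() returns -1 for anomalies and 1 for inliers.
transactions["flagged"] = model.predict(features) == -1
print(transactions.loc[transactions["flagged"]])
```

Nothing in that code knows anything about org charts. It only knows which transactions look unlike the rest, and the unlisted intermediaries happened to be exactly where the unlikeness concentrated.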

This keeps happening. When even modest AI enters a system, it can reveal the habits people would rather not examine. There is seldom malice involved. The technology simply has no loyalty to the story an organisation tells about itself.

Take a coastal community where fishers explained their world through stories of winds, tides, and migration routes. We fed their observations into a hybrid model: an LLM translated their accounts into structured variables, a supervised learner tested predictive accuracy, and a human reviewer checked that the mathematics did not erase the meaning. The forecasts outperformed expensive ocean sensors (yes, that was the pleasing part!). The uncomfortable part was that the model uncovered patterns the community had never recognised, whole gaps in knowledge that had sat unnoticed for years. Once again, the technology did not disrupt anything. It simply told the truth.
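The shape of that pipeline is easy to sketch. Everything below is a toy: the extract_variables stub stands in for the LLM step, and the data, variable names, and target are invented for illustration, not the project’s code.

```python
import random
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

def extract_variables(narrative: str) -> dict:
    # Stand-in for the LLM step: in practice, a prompted model returning
    # JSON with structured fields parsed from a fisher's account.
    rng = random.Random(narrative)
    return {
        "wind_dir_deg": rng.uniform(0, 360),
        "tide_phase": rng.uniform(0, 1),
        "moon_day": rng.randrange(30),
    }

narratives = [f"account {i}" for i in range(40)]      # toy stand-in data
records = [extract_variables(n) for n in narratives]
X = [[r["wind_dir_deg"], r["tide_phase"], r["moon_day"]] for r in records]
y = [row[1] * 10 + row[2] for row in X]               # toy target: catch size

# The supervised step: test whether the extracted variables predict outcomes.
model = GradientBoostingRegressor(random_state=0)
print("mean CV R^2:", cross_val_score(model, X, y, cv=5).mean())
```

The human review sits between those two steps, and it is not optional: it is the only part of the pipeline that can notice when a structured variable has quietly lost the meaning the fisher gave it.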

The same honesty is emerging in supply chains. A European manufacturer, keen to reassure customers of its sustainability leadership, deployed a basic traceability system. The AI tools involved were unremarkable: document classifiers, risk scoring algorithms, and an LLM trained to flag inconsistencies in environmental claims. We will know more in a few months, but it is almost certain that unreliable data will surface. Not fraud, ‘just’ inconsistency: duplication and missing information. I am sure that our customers are building similar tools to judge suppliers like us.
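The checks themselves are almost embarrassingly simple. A sketch of the duplication and missing-information tests might look like this, with a hypothetical file and hypothetical field names standing in for the real claim records:

```python
import pandas as pd

claims = pd.read_csv("supplier_claims.csv")  # assumed export of claim records

# Duplication: the same certificate cited across multiple claims.
dupes = claims[claims.duplicated(subset=["certificate_id"], keep=False)]

# Missing information: claims lacking a certificate or an audit date.
missing = claims[claims[["certificate_id", "audit_date"]].isna().any(axis=1)]

print(f"{len(dupes)} duplicated certificates, {len(missing)} incomplete claims")
```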

Next year I am beginning another project. The goal is to rebuild governance in a long-standing organisation by using process mining to map fifty years of decision flows (almost one hundred board meetings). We plan to compare these mapped pathways with the organisation’s official policies using an LLM. I have no idea exactly what we will discover, yet my guess is that we will see duplicated work, decisions travelling by routes no one intended, and possibly authority sitting in places the policy has not yet caught up with. Whatever we find, it will be the most honest view of the organisation’s inner workings it has ever had.
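At its core, the process-mining step is a directly-follows analysis: which step actually came after which, across every decision we can reconstruct. A toy sketch, with invented traces and an invented policy route:

```python
from collections import Counter

# Each trace: the ordered steps one decision actually took (toy data).
traces = [
    ["proposal", "committee", "board", "sign_off"],
    ["proposal", "chair", "sign_off"],            # skips the committee entirely
    ["proposal", "committee", "chair", "sign_off"],
]
policy = ["proposal", "committee", "board", "sign_off"]  # the official route

# Count how often each step directly follows another across all traces.
dfg = Counter((a, b) for t in traces for a, b in zip(t, t[1:]))

# Edges the policy never defined are the informal pathways.
policy_edges = set(zip(policy, policy[1:]))
informal = {edge: n for edge, n in dfg.items() if edge not in policy_edges}
print("informal pathways:", informal)
```

The edges that appear in the logs but nowhere in the policy are exactly the informal pathways this essay keeps circling back to; the LLM’s job in our project is simply to do that comparison against fifty years of prose policy rather than a four-item list.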

For that reason, I keep returning to the same conclusion. In the year ahead, AI will certainly transform organisations. But the more interesting story is that AI will reveal them. I do feel that what leaders call AI failure may turn out to be governance failure (or cultural opacity, or data disorder that has finally become impossible to ignore).

AI may become part of a corporate integrity test…

  • If your system is coherent, the technology strengthens it.
  • If your system is fractured, the technology makes the fracture lines visible.
  • If your organisation depends on informal pathways, AI will trace them with clinical accuracy.
  • If your environmental claims rely on narrative rather than proof, AI will expose the weakness.
  • If your circularity ambitions rely on inconsistent data, AI will show that inconsistency long before your customers do.

For my own organisations and for our clients, the advice is simple.

  1. Clean the data before imagining new capability.
  2. Map how decisions actually happen.
  3. Treat traceability as protection for revenue, not corporate virtue.
  4. See circularity as logistics, not branding.
  5. Prepare for a world where transparency is no longer optional (it is simply the cost of staying in the market).

AI is not the threat. AI is the mirror.