By Joe Norburn, CEO, TCC and Recordsure
You don’t have to look far to see how quickly AI has settled into financial services. In some firms it remains tied to specific use cases, but in others it is already shaping core processes – sometimes quietly in the background, sometimes at a scale that is becoming hard to ignore.
Much of that momentum has been driven by commercial priorities rather than regulation. Early gains in efficiency, speed and insight have encouraged firms to push ahead, even as questions around data quality and risk management start to surface. Once a firm is on the AI journey, stepping back is rarely realistic, even as the implications are still being worked through.
The regulatory response is evolving alongside this shift. There is clear UK Government support for AI at a policy level, reflected in initiatives such as the AI Opportunities Action Plan. At the same time, regulators are becoming more explicit about how AI is – and should be – used in practice within financial services.
In its 2026–27 annual work programme, the FCA sets out its ambition to become a more data‑led regulator, including plans to integrate AI into supervisory workflows to detect harm more effectively, review firm submissions and support faster decision making. Parliamentary committees are also paying closer attention to AI risk management, particularly in a sector where failures rarely stay contained.
Together, this points to a more balanced position. Firms are expected to keep moving, but they are also expected to demonstrate control – even where the boundaries of that control are still being tested in practice.
Adoption hasn’t waited for certainty
Across financial services, AI is now embedded in day‑to‑day activities, from fraud detection and onboarding to credit assessment and customer interaction. The Treasury Select Committee’s recent inquiry reflects just how widespread that adoption has become, especially among larger institutions.
What stands out is not that firms are using AI, but how uneven governance approaches remain. Many are applying control frameworks originally designed for more deterministic systems – where decision paths could be traced, explained and challenged relatively easily. As AI becomes more embedded, particularly in complex or automated processes, that kind of traceability becomes harder to achieve, even where controls exist.
That does not make existing frameworks irrelevant. Consumer Duty, SM&CR and operational resilience remain central. But it does shift the emphasis from whether those regimes apply, to how they are applied when decisions are less visible, and outcomes are shaped by increasingly complex systems.
This raises a more immediate question. If firms are already making judgements about AI risk and control in practice, how are regulators responding?
Regulators are engaging, and expectations are sharpening
There has been extensive commentary on how regulators are approaching AI. The FCA’s engagement with industry has been deliberate and increasingly aligned with its wider aim of becoming a smarter, more responsive regulator. The Mills Review is a good example, examining whether existing regimes remain fit for purpose as AI becomes more embedded, rather than seeking to redraw the rulebook entirely.
Supervisory approaches are also evolving. Greater use of testing environments and closer dialogue with firms point to a more hands‑on model, while initiatives such as the Critical Third Parties regime reflect a growing focus on risks across the wider technology ecosystem.
The overall direction of travel is becoming clearer. Regulators are not asking firms to pause innovation, but they are increasingly focused on how outcomes are evidenced, how risks are identified and mitigated, and how accountability is maintained once AI is operating at scale.
Where the pressure really starts to show
The more difficult questions rarely arise at the point of adoption. They tend to surface later, once systems are live, scaled and embedded into business‑critical processes.
Questions around fairness, oversight and accountability move quickly from theory into day‑to‑day reality. Firms need to demonstrate how outcomes are monitored when decisions are made in milliseconds, what effective oversight looks like when models are complex, and where accountability sits when multiple teams, suppliers and systems are involved.
It’s not just about identifying the risk; it’s about demonstrating, consistently and credibly, that it is being managed.
This is where the real gap sits. As firms move beyond experimentation and try to operationalise AI, a more fundamental constraint becomes clear. In most cases, the limiting factor is not AI capability, but the mismatch between general‑purpose generative AI (GenAI) tools and the quality, consistency and reliability of the underlying data.
Mainstream GenAI and LLM‑based tools are highly effective at text extraction, summarisation and surface‑level pattern recognition. They can turn large volumes of content into something more digestible. However, they are not designed to support regulated decision making. They do not inherently understand what financial advice data represents, how values relate to one another, or why one data point should be trusted over another.
Critically, these models often struggle to provide the explainability, traceability and auditability that regulators expect. They can silently resolve conflicts, obscure data lineage, and produce outputs that sound confident even when they are incomplete or wrong. As a result, firms frequently end up increasing human oversight rather than reducing it – spending more time validating outputs, resolving inconsistencies and evidencing compliance.
By contrast, purpose‑built AI models for analytics and prediction are designed around structured, trusted data. They are trained to understand how advice data is created, how it changes over time, and when it must be corrected rather than inferred. This enables predictive analysis, consistent MI, and defensible insights that can be traced back to source and explained to regulators.
The more reliable, explainable and auditable the data foundation becomes, the more safely AI can be applied. In regulated environments, value does not come from applying GenAI and LLM models to unstructured data, but from combining selective GenAI capabilities with predictive AI operating on trusted, regulator‑ready data. That is what allows automation to scale – and risk to come down.
Inside firms, the conversation is shifting
Earlier discussions around AI focused heavily on opportunity. Those conversations have not disappeared, but they now sit alongside more practical concerns about control, governance and accountability.
Risk and compliance teams are becoming more deeply involved, while senior managers are being asked more searching questions about systems they may not have built themselves. In many cases, AI is exposing weaknesses that were already present – unclear data ownership, inconsistent documentation, or fragile oversight models – and making them harder to defend once decisions are made at speed and scale.
There is still a window to act
The environment has not fully settled, and regulators continue to shape their approach based on what they are seeing in practice. That creates a window.
Firms that invest now in strengthening data foundations, governance and evidencing mechanisms are likely to be better positioned as expectations become more defined. In practice, that tends to matter far more than trying to predict exactly where regulation will land.