By Dr Yichuan Zhang, CEO and co-founder of Boltzbit

The need to separate the layers
Almost three quarters (72%) of businesses have now adopted generative AI in at least one business function. The dominant deployment pattern feeds proprietary data into a third-party model via retrieval-augmented generation (RAG) or fine-tuning. Both approaches are useful. But neither constitutes durable ownership.
RAG gives a model access to documents at query time. It reads internal data the way a contractor reads a brief, without accumulating institutional knowledge. Fine-tuning encodes domain context at a point in time, then freezes. However, as business conditions shift, that snapshot drifts further from operational reality.
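The contrast can be made concrete. In RAG, knowledge lives outside the model: documents are fetched at query time and pasted into the prompt, and the model's weights never change. A minimal sketch, with a deliberately naive keyword retriever standing in for a real vector store (all function names and documents here are illustrative, not a specific vendor's API):

```python
# Sketch of query-time retrieval (RAG): documents are scored against
# the query and the best matches are injected into the prompt.
# The model itself never updates; the knowledge stays external.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (a stand-in
    for real semantic search over an embedding index)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Q3 refund policy: refunds within 30 days of purchase.",
    "Office locations: London, Berlin, Madrid.",
    "Refund exceptions: digital goods are non-refundable.",
]
prompt = build_prompt("What is the refund policy?", docs)
```

The point of the sketch is the architecture, not the retriever: the model reads the brief fresh every time, so nothing it learns from one query survives to the next. Fine-tuning is the opposite trade: the same documents would be baked into the weights once, then left to drift.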
What neither approach produces is a model that has internalised the business over time. The patterns, anomalies and judgment calls that accumulate through operational experience never make it into the model.
The answer is to separate the two. The base model (whoever provides it) should be treated as a commodity layer. What the business must own is the learning layer: both the training mechanism and the accumulated intelligence it produces.
Out of control
Vendor control over the model layer has increasingly come into focus. OpenAI has deprecated multiple model versions in rapid succession, giving organisations windows of just three to six months to migrate dependent workflows. When GPT-5 was introduced in 2025, OpenAI removed several older models from ChatGPT simultaneously, causing widespread workflow disruption.
For individual developers, these transitions are an inconvenience. For enterprises with critical workflows built on specific model versions, they represent unplanned operational risk with a timeline out of their control.
Policy risk is real
Beyond deprecation, providers also retain the right to change their acceptable use terms, and those changes can affect use cases regardless of the operational investment made.
The structural fix is the same: bring the training infrastructure inside the organisation's environment. When the system that continuously learns from operational data and writes the resulting domain expertise into model weights is self-hosted, the base model becomes interchangeable.
A provider policy change that makes a specific model untenable simply becomes a substitution decision. The business retains the training pipeline, the accumulated intelligence it has produced, and can swap the foundation it runs on. That is what model sovereignty looks like in practice, and it is an architecture decision, not a procurement one.
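In code terms, treating the base model as a commodity means routing every call through an internal interface, so that a deprecation or policy change becomes a configuration swap rather than a rewrite. A minimal sketch of that pattern, assuming a registry-based design; the class and provider names are hypothetical, not real SDK calls:

```python
# Sketch of a commodity model layer: business code depends only on an
# internal contract, so swapping the foundation is a config change.
# Provider classes below are stand-ins, not real vendor SDKs.

from abc import ABC, abstractmethod

class BaseModel(ABC):
    """Internal contract every interchangeable foundation must satisfy."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAModel(BaseModel):
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"  # placeholder for a third-party API call

class SelfHostedModel(BaseModel):
    def complete(self, prompt: str) -> str:
        return f"[self-hosted] {prompt}"  # placeholder for local inference

REGISTRY: dict[str, type[BaseModel]] = {
    "vendor-a": VendorAModel,
    "self-hosted": SelfHostedModel,
}

def get_model(name: str) -> BaseModel:
    """The substitution decision: one config key selects the foundation."""
    return REGISTRY[name]()

# A deprecated vendor model becomes a registry swap, not a migration project.
answer = get_model("self-hosted").complete("Summarise Q3 anomalies")
```

The learning layer (the training pipeline and the weights it produces) sits behind the same boundary, which is what keeps the accumulated intelligence portable when the foundation underneath it changes.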
Where the knowledge sits
European enterprise leaders prefer hosting AI models on their own infrastructure, citing data privacy, reduced vendor breach risk and customised security protocols as primary motivations. That preference is sound. But infrastructure hosting alone does not resolve the dependency problem if the learning layer still belongs to the provider.
Gartner identifies open GenAI models as reshaping the enterprise landscape. They offer greater flexibility and freedom from vendor lock-in, with the ability to customise, fine-tune, and deploy on an organisation’s own terms. But switching to an open model doesn’t by itself create ownership.
It changes the infrastructure while leaving the same gap in the learning layer. The domain expertise an organisation builds through AI deployment needs to be encoded in the infrastructure it governs, not in a relationship with a provider it can’t control.
The real question for European businesses is whether the institutional knowledge their AI systems accumulate, and are trained on, sits inside the organisation's environment or inside someone else's.
A question of ownership
Most European organisations recognise vendor dependency as a risk. Far fewer have taken concrete steps to address it at the architecture level. That gap is widest precisely where it matters most. The decisions made in year one of an AI deployment compound into entrenched dependencies by year three.
The practical questions for any executive team reviewing AI governance are:
- What workflows are now critically dependent on a specific third-party model version?
- What is the migration cost and timeline if that version is deprecated or the provider’s terms change?
- Where does the domain expertise accumulated through our AI deployments reside, and does the business own it?
These are standard vendor risk questions. It is time to apply them to AI as well.
Only about one-third of organisations currently report scaling AI across the enterprise. That means the large majority still have time to make architectural decisions before operational dependency becomes too embedded to change.
The organisations that use that window to ask ownership questions, not just capability questions, will be in a materially different position when the next provider policy change or model deprecation cycle arrives.






































