Richard Foster-Fletcher is a recognised voice on responsible artificial intelligence and organisational judgment. As Chair of MKAI (Milton Keynes Artificial Intelligence), he advises governments, corporations, and education leaders on the cultural and strategic consequences of AI adoption.
Named among the UK’s Top 20 AI Researchers & Entrepreneurs (Favikon, 2025) and a LinkedIn Top Voice in AI, Richard combines technical understanding with a deep interest in how AI reshapes leadership, decision-making, and the future of work.
Through his advisory work and his widely read newsletter What Still Matters, Richard explores how AI systems narrow executive thinking, alter organisational culture, and challenge traditional ideas of expertise. His focus is on ensuring AI strengthens human capability rather than replacing it.
In this exclusive conversation with Champions Speakers Agency, he reflects on job displacement, global AI equity, and why responsible innovation now depends as much on moral clarity as it does on technological skill.
Q: With so much hype surrounding AI’s potential, what realistic outcomes should businesses anticipate when it comes to the future of work and employment?
The most significant effect of AI on employment is not replacement, but compression. AI tools establish a high-performance floor, making basic competence a commodity. This narrows the gap between average and elite performers.
The results are now visible. Junior analysts produce competent financial models with AI assistants, a task that once required months of training. Writers who built careers on clean, grammatical prose find that skill is now automated. The baseline for acceptable work has risen, while the ceiling for exceptional work remains unchanged. This compression allows organisations to access “good enough” work instantly yet makes genuinely excellent work harder to identify.
This dynamic makes entire categories of jobs economically unjustifiable. The middle-skill, middle-wage roles that formed traditional career paths are the first casualty. Organisations can now combine AI tools with senior review to achieve similar outputs, creating a “missing middle” in their structures. While entry-level staff are still needed for tasks requiring human judgment, and senior leaders for strategic direction, the connective tissue between them is dissolving.
The strategic consequence of this is not a leaner organisation, but a depleted one. These mid-level roles were the training grounds where junior employees developed into senior experts. By automating these intermediate tasks, organisations lose the apprenticeship model that generates future capability. This is the paradox of AI-driven efficiency: automating today’s routine work inadvertently eliminates tomorrow’s expertise.
This is not a future problem. It is a present reality masked by short-term productivity metrics. The timeline is uneven, with knowledge work transforming faster than anticipated. The legal sector offers a clear signal: document review is automated, while courtroom advocacy is not.
The focus on efficiency gains hides a critical second-order effect. When every company adopts similar AI tools, they converge on similar outputs and conclusions. This drive for standardisation delivers short-term gains but destroys the distinctive capabilities that ensure long-term survival. The pursuit of operational efficiency is actively dismantling the mechanisms that produce unique expertise and strategic differentiation.
For leaders, this compression invalidates traditional proxies for talent. Speed, polish, and volume of output are no longer reliable signals of expertise. The primary leadership challenge is shifting from managing the production of work to cultivating the capacity for judgment. This requires developing new metrics and cultural norms that can distinguish between high-quality imitation and genuine insight. The critical task is to recalibrate how the organisation measures and rewards value when competence is automated and easily accessible.
Q: How do you see the adoption of AI technologies playing out in emerging global markets, and what challenges might these regions face when aligning with Western-built platforms?
The adoption of AI in emerging markets is not following the Silicon Valley model. It is defined by a central paradox: infrastructural leapfrogging is creating a new form of platform dependency. Nations are bypassing traditional computing and moving directly to AI-powered mobile services. This appears to be rapid modernisation, but it is in fact a form of strategic capture. These markets are building their digital futures on platforms where they cannot influence governance, modify core functions, or inspect the underlying algorithms. This is not the organic development of local capacity seen in the PC revolution; it is dependence on foreign infrastructure by design.
This dependency is not just technical; it is conceptual. AI models trained on Western data are not culturally neutral. They export assumptions about causation, agency, and social relations that are embedded in the linguistic and economic patterns of their training data. When a user in an emerging market interacts with these systems, they receive not just a translated answer, but a Western worldview expressed through local vocabulary. The same is true for economic models. Platforms optimised for formal employment and documented transactions create friction in markets where informal economies dominate. This is not a simple user interface problem; it is a fundamental misalignment of operating logic.
This misalignment creates a new and subtle form of value transfer. Emerging markets generate vast quantities of behavioural data, which flows to servers in the United States, Europe, or China. This raw data, which could be used to cultivate local AI capabilities, is instead used to train foreign models. The finished product is then licensed back to the very markets that provided the data. This dynamic represents a modern form of resource extraction, where the raw material is human experience and the refinery is a foreign AI lab. The regulatory responses in these markets, often adapted from Western frameworks like GDPR, are frequently ill-suited to local conditions and fail to address this core issue of data sovereignty.
For leaders in these markets, the most critical challenge is not just technical adoption but managing this deep-seated misalignment. The most valuable professionals are not just data scientists, but “context bridges”: individuals with the skill to identify where the logic of the AI platform diverges from local reality. Their role is to spot the embedded assumptions and mitigate the risks of applying a foreign model to a domestic problem. In this environment, the most durable competitive advantage will come not from using AI, but from understanding its limitations and resisting its homogenising force.
Q: As AI becomes integral to organisational strategy, how do you foresee frameworks for inclusivity and digital ethics evolving across sectors?
Current frameworks for digital ethics treat AI as a compliance problem, creating a theatre of “documented ethics” that fails to address the core challenges. This approach, focused on audits and process compliance, is fundamentally mismatched to the technology it seeks to govern. It establishes review boards and conducts assessments, but the core dynamics of algorithmic decision-making persist unchanged.
The foundational error of these frameworks is treating bias as a technical flaw to be engineered away. AI systems do not merely reflect existing patterns; they crystallise and amplify them through thousands of micro-decisions per second. The cumulative effect creates disparate outcomes even if each individual decision appears defensible. This is compounded by a demand for deterministic guarantees from fundamentally probabilistic systems. We seek certainty where none exists, creating governance that is misaligned with the technology’s nature.
The most subtle risk to inclusivity is cognitive convergence. Organisations may achieve demographic diversity whilst adopting AI tools that systematically favour a single mode of reasoning. When entire sectors use similar models trained on comparable data, they could begin to develop homogenous approaches to problem-solving. The result is surface-level diversity undermined by a deeper, more insidious standardisation of thought. The tool does not just augment work; it reshapes the thinking of the user to fit its own operational logic.
This is a systemic issue. The business models of the technology sector, prioritising speed and scale, are in direct tension with deliberate ethical review. Smaller organisations, lacking the resources for bespoke governance, are forced to adopt third-party tools without visibility into their logic. This leaves individuals with theoretical rights but no practical remedy when an algorithm denies them an opportunity. The frameworks document potential harms but provide no effective mechanism for contestation.
A more robust approach begins from a different premise. It would prioritise maintaining human capacity for judgment over attempting to automate ethical decisions. It would focus on making bias visible and contestable, rather than pursuing the illusion of its elimination. Crucially, it would ensure a diversity of AI models and approaches to prevent the convergence towards a single, optimised worldview. This requires leaders to accept a degree of complexity and inefficiency that runs counter to the prevailing preference for optimisation. It is a strategic choice to preserve judgment in an age of automation.