By Hans Petter Dalen, IBM EMEA AI Governance Platform

The meteoric rise of generative artificial intelligence (AI) has been extraordinary. It is easy to see why. While traditional AI is programmed by data scientists to perform one specific task, generative AI has democratised the technology like never before. By leveraging foundation models such as large language models (LLMs), it can fulfil a far wider range of tasks across a spectrum of business and creative functions.
At the same time, generative AI has prompted legitimate questions about transparency, data privacy, security, and governance. Business leaders who are already leveraging the competitive capabilities of generative AI, or who soon will be or are considering it, must now navigate the evolving regulatory landscape, the need for core guiding principles for AI usage – including the importance of human oversight – and the role data will play in a wider conversation about the technology.
The regulatory landscape
The EU AI Act is currently on course to become the world’s first comprehensive AI legal framework. First proposed by the European Commission in April 2021, it was drafted with a ‘risk-based’ approach to balance innovation with responsibility and trust. Now in the final stages of negotiation, the EU AI Act focuses on high-risk uses of AI and, for those uses, on the importance of high-quality data sets for training AI models, transparency of data usage, and explainable outcomes. This risk-based approach is significant because AI regulation needs to be flexible and evolve as the technology does. Different industries and companies will have different AI use cases and associated risks – to avoid stifling innovation, blanket regulation should not be applied to all generative AI.
As governments worldwide consider their own AI rulebooks, at IBM we believe that responsible AI regulation should be based on three core principles.
The first is the need to regulate AI risk, not algorithms. This reflects the ‘risk-based’ approach currently being employed by the EU and recognises that not all AI carries the same level of risk. Regulation should therefore address the context in which AI is deployed, ensuring that high-risk uses are regulated more closely.
The second principle is accountability. Governments have a significant role to play, but it is also vital that AI creators and deployers are held accountable for the context in which they develop and deploy the technology. This will be essential in preventing organisations from claiming immunity in cases related to discrimination, bias, and fraudulent activity.
Third, we should not create a licensing regime for AI, which would hinder open innovation and risk creating a form of regulatory capture. Instead, we advocate a vibrant, open AI ecosystem that promotes competition, skilling, and security, and that will help ensure AI models are shaped by diverse and inclusive voices. Together, these principles would strike the right balance between innovation and accountability, ensuring that AI can flourish in a transparent, open, and fair environment.
The importance of human oversight
Another important consideration for business leaders will be the need to create an effective AI governance strategy based on transparency and fairness. Any responsible governance strategy requires a significant amount of human oversight. Generative AI, like traditional AI, is organic: it learns from its users and can ‘drift’ away from its original behaviour, or produce ‘hallucinations’, in which an LLM generates false information. Failure to correct such errors means the AI learns from inaccurate data, which becomes further ingrained in its model.
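To make ‘drift’ concrete: one common monitoring technique is the population stability index (PSI), which compares the distribution of a model’s recent scores against a baseline. The sketch below is purely illustrative – the data, threshold, and function names are assumptions for this example, not part of any IBM product.

```python
import math
from typing import Sequence

def population_stability_index(expected: Sequence[float],
                               actual: Sequence[float],
                               bins: int = 10,
                               lo: float = 0.0,
                               hi: float = 1.0) -> float:
    """Compare two score distributions; a larger PSI means more drift."""
    width = (hi - lo) / bins

    def proportions(values: Sequence[float]) -> list:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # a small floor avoids log(0) for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# hypothetical scores: a validation baseline vs. recent production outputs
baseline = [0.10, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60]
recent   = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95]

psi = population_stability_index(baseline, recent)
# a common rule of thumb flags PSI above 0.2 for human investigation
drifted = psi > 0.2
```

A check like this does not replace human oversight; it is a trigger for it, routing a drifting model to a reviewer before inaccurate outputs become ingrained.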
It is therefore vital that suppliers of foundation models are transparent about their model, so companies deploying the technology can explain the outcomes – a critical step to ensure the trusted use of generative AI.
IBM’s watsonx.governance, expected to launch by the end of the year, has this principle at its core. An end-to-end toolkit encompassing both data and AI governance, it enables responsible, transparent, and explainable AI workflows. It will allow organisations to direct, manage, and monitor AI activities, and will strengthen their ability to mitigate risk, manage regulatory requirements, and address ethical concerns through software automation. As AI becomes further embedded into daily workflows, this proactive governance will be essential for driving responsible decisions across both the private and public sectors.
Failure to monitor incorrect outcomes is not just a matter of regulatory compliance; it also carries far wider financial and reputational risks for a business. There are some exciting real-world use cases in sectors such as insurance, banking, and mortgage lending – but inaccurate outcomes in these areas could have highly detrimental impacts on customers. Developing a governance strategy underpinned by human oversight and transparency enables businesses to avoid these risks, and business leaders should start training and educating their workforce in these areas now to prepare for generative AI adoption.
Data, data, data
As the policy landscape evolves, business leaders are faced with the dual task of harnessing the benefits of generative AI to remain competitive whilst protecting their organisations from the financial, reputational, and regulatory risks that ungoverned use of generative AI can bring.
Business leaders face several challenges with generative AI, and one of the biggest questions is about its commercial viability. Leaders across all sectors have displayed huge enthusiasm for the technology, but adoption within organisations first requires adequate preparation and proper investment.
The technology community is already exploring ways to accelerate this process. We are seeing a vibrant community of developers creating foundation models, pre-training them, and open-sourcing them. In time, this collaborative approach could accelerate adoption and drive up ROI for businesses.
The next step for regulation
Generative AI and the regulatory landscape surrounding it will continue to evolve in unison. Business leaders must keep pace with these developments and, as the EU AI Act is on course to do, take a risk-based approach to AI that ensures governance at every level. Together, these steps will enable companies to deploy trusted, responsible, and accountable AI throughout the entire AI lifecycle, allowing businesses and society to reap the benefits of trustworthy AI.