The fast-moving fields of artificial intelligence (AI) and machine learning (ML) are having a transformative effect on business operations, particularly natural language tools like ChatGPT. This groundbreaking technology has developed at an incredible pace over the last year and there are seemingly endless possibilities ahead.By Phil Calvin, Chief Product Officer at Delinea

However, whilst these technologies have tremendous potential as game-changers they also come with their share of complexities to unpick, particularly the cybersecurity implications.

ChatGPT has been ‘pre-trained’ based on large data sets, which could include confidential data. As such, businesses must ensure they can protect the security and confidentiality of any data passing through the tool’s APIs, especially if it contains sensitive or classified information. 

Furthermore, while the power of natural language applications is undeniably impressive, the reliability and accuracy of AI-generated responses is not foolproof. Human oversight of all responses generated through these applications is vital, particularly for any responses that are integrated into critical operations. 

A greater understanding of how these tools function is essential in order to ensure that businesses can maximise their potential without compromising the security of their data. Without the right policies and processes in place, use of these tools could also put businesses in breach of GDPR. 

The security issues of OpenAI 

OpenAI, the creator of ChatGPT and one of the forerunners of natural language AI tools, is facing ever greater scrutiny from industry regulators concerned about the privacy and security of data. 

The company has confirmed it suffered a data breach in March when a bug allowed some users to see payment related information. The incident is a reminder that, as impressive as ChatGPT’s capabilities are, it is still a piece of software that has vulnerabilities. Following the incident, OpenAI took swift and decisive actions and also launched a bug bounty programme offering as much as $20,000 for “exceptional discoveries.”

In terms of the security protections in place, OpenAI implements rigorous access controls, limiting system and data access to authorised personnel only. Moreover, data transmission between the company’s systems and customers is safeguarded using industry-standard encryption protocols. Additionally, OpenAI has robust network security measures in place, including firewalls and intrusion detection and prevention systems.

From a regulatory perspective, OpenAI generally operates in compliance with industry standards and regulations, such as the General Data Protection Regulation (GDPR). However, the recent breach, coupled with the breakneck pace of the technology’s development, has led to several concerns. 

European data regulators are working together as part of a taskforce to investigate the tool further and in June the EU parliament passed the AI act – the world’s first legal framework for AI that would categorise tools based on their risk ranging from ‘unacceptable‘ to ‘minimal’ risk. 

Organisations making use of ChatGPT and other similar natural language tools will need to weigh up potential security and privacy issues like these against business benefits such as increased productivity and automation. 

Tools and Practices for Secure AI/ML Use 

Incorporating the use of AI platforms into normal business operations, requires a strategic, well-thought out approach. 

Where possible, all natural language outputs should have stringent human review. Human oversight provides an additional security check, and is particularly invaluable in high-stakes activities and code generation.  

Along with the human review, “prompt engineering” can be a helpful tool to manage the topic and tone of the output text. This involves providing context, such as a few high-quality examples of the desired behaviour, to steer the outputs in the right direction.

It’s also recommended that organisations put their application through “red-teaming,” in which an independent group attempts to find any weak spots in an organisation’s security in order to make recommendations for improvement.  This approach not only assesses how the system performs under normal conditions but also how it responds to targeted attempts to manipulate or break it.

Implementing user guidelines to guard against misuse 

As businesses’ use of AI/ML technology increases, so too do the opportunities for misuse, so putting in safeguards for users should be part of the security considerations.  

Organisations should generally ask for users to register and log-in to access services. If the use case permits, linking the service to an existing account, such as Gmail or LinkedIn, adds an extra layer of security. For higher-stakes operations, asking for additional authentication, such as a credit card or ID verification, can significantly minimise risks.

Imposing restrictions on the amount of text a user can input prevents misuse through “prompt injection”. This is where a user can potentially hijack a language model’s output with specific prompts that confuse its programming. Equally important is to limit the number of output tokens – the amount of text it can produce at a time – narrowing the range of potential misuse.

When possible, opt for validated dropdown fields for user inputs, such as a pre-defined list, instead of open-ended text fields. Likewise, wherever feasible, guide user queries towards pre-existing materials or responses rather than generating new content from scratch. This not only enhances security but also improves efficiency and accuracy.

Finally, provide a straightforward mechanism for users to report issues or concerns to your security team, and ensure this feedback is monitored and responded to promptly. This feedback loop can be instrumental in continually improving the security and functionality of AI/ML applications.

While the potential of AI/ML technologies like ChatGPT is remarkable, their deployment is not without risks. Any organisations considering the use of such tools must consider the security, privacy and compliance issues and weigh these against the benefits. 

The application of safe practices and robust usage guidelines should be considered mandatory as organisations rely more heavily on natural language tools and entrust them with more valuable tasks. 

By understanding and addressing these risks, businesses can safely harness the power of AI/ML, opening up unprecedented opportunities for growth and innovation. Now more than ever, as we embrace this digital revolution, the adage holds: with great power comes great responsibility.