Sebastian Gierlinger, VP of Engineering at enterprise tech company Storyblok, explains how tech leaders can better protect themselves from the cybersecurity threats created by AI.

The cybersecurity threat landscape is constantly evolving. Each day, new threat variants emerge, and hackers, along with the technologies they use, become more sophisticated and targeted. At the same time, the widespread adoption of IoT devices and remote working technologies has opened up new opportunities for cybercriminals.

The proliferation of AI, especially generative AI, is fundamentally transforming how companies use software, simultaneously expanding software’s attack surface and making it more susceptible to vulnerabilities. Add all this together and it’s easy to see why cyber-attacks continue to rise rapidly worldwide. According to Cybersecurity Ventures, the global cost of cybercrime is expected to reach $10.5 trillion annually by 2025, up from $3 trillion in 2015.

Indicative of this escalating issue, a Metomic survey of 400 Chief Information Security Officers from businesses in the UK and the US found that 72% believe AI solutions will lead to security breaches. Conversely, 80% said they intend to implement AI tools to defend against AI. This is another reminder of both the promise and the threat of AI. On one hand, AI can be used to create unprecedented security arrangements and enable cybersecurity experts to go on the offensive against hackers. On the other, AI will enable industrial-scale automated attacks of incredible sophistication.

For tech companies caught between these two edges, the big questions are how worried they should be and what they can do to protect themselves.

The challenge of identifying the threat

To begin, it’s important to consider the challenge of identifying the AI risk. One of the most pressing issues we have is that because generative AI can craft realistic and personalised phishing emails and mimic the writing style of colleagues or trusted contacts, it can be incredibly hard to establish whether it has been used in a cybersecurity incident.

As a result, we don’t yet know the scale of generative AI-driven attacks or their effectiveness. If we cannot yet quantify the problem, it becomes difficult to know just how concerned we should be.

For tech startups, that means the best course of action is to focus on mitigating threats more generally. All the evidence suggests that existing cybersecurity measures and solutions, underpinned by best-practice data governance procedures, are up to the task of countering the current threat from AI.

The greater cybersecurity risk

Somewhat ironically, the biggest existential threat to organisations isn’t necessarily AI being used in a diabolically brilliant way; it’s their own very human employees using it carelessly or failing to follow existing security procedures.

For example, employees sharing sensitive business information with services such as ChatGPT risks that data being retrieved at a later date, which could lead to leaks of confidential information and subsequent hacks. Reducing this threat means having proper data protection systems in place and better education for generative AI users on the risks involved. Education extends to helping employees understand the current capabilities of AI – particularly when countering phishing and social engineering attacks.
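To make the data protection point concrete, here is a minimal sketch of one guardrail such a system might apply: scrubbing obviously sensitive patterns from a prompt before it leaves the company for an external generative AI service. The patterns and the `redact_prompt` helper are illustrative assumptions for this article, not a production-grade data loss prevention tool.

```python
import re

# Illustrative patterns only: a real deployment would rely on a proper
# data loss prevention (DLP) classifier, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before the text
    is sent outside the company to an external AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
    print(redact_prompt(raw))
    # Prints: Contact [REDACTED-EMAIL], card [REDACTED-CARD].
```

In practice, a filter like this would sit alongside access controls, audit logging, and vendor agreements on data retention, rather than being a safeguard on its own.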

Recently, a finance officer at a major company paid out $25 million to fraudsters after being tricked by a deepfake conference call mimicking the company’s CFO. So far, so scary. However, read into the incident and you find that it was not ultra-sophisticated from an AI perspective – it was only one small step above a scam from a few years ago that tricked the finance departments at scores of businesses (many of them startups) into sending money to fake client accounts by mimicking the email address of their CEO.

In both instances, if basic security and compliance checks, or even common sense, had been followed, the scam would quickly have been uncovered. Teaching your employees how AI can be used to generate the voice or appearance of other people, and how to spot these scams, is as important as having a robust security infrastructure.

Compliance considerations

The good news in all of this is that the EU is currently at the forefront of creating new information technology legislation geared towards elevating standards and ensuring a more ethical digital space. Most pertinently, earlier this year the European Parliament approved the EU Cyber Resilience Act (CRA), aimed at bolstering cybersecurity across various sectors.

At this point, some tech businesses may not be convinced the effort is necessary. However, it’s important that we view this as a big opportunity to make the web, and future developments in this space, much more secure – a huge chance to fix a lot of problems from the past. As digital services are part of our everyday lives, we should demand best security practices from them, just as we do with other omnipresent things like food or cars, where a basic level of safety is expected and required by law.

As a general rule, those companies that comply with these regulations will keep, if not grow, their market share. That makes this a great time for tech companies to go back to the drawing board and reassess their approach to cybersecurity. Starting the process now means that when the final version of the legislation comes into force, your business will already have the knowledge and procedures in place to make compliance a much smoother and cheaper exercise. It also goes without saying that developing a more robust cyber approach will confer commercial benefits.

Even with all the media hype, the reality is that we do not yet fully understand the cybersecurity implications of widespread generative AI use. However, that is not to say that tech businesses can afford to lose sight of the conversation or continue to operate outdated cybersecurity practices, processes, and architecture. As with all major technological shifts, it is those that keep pace and continue to follow best practices as the market evolves that will remain better protected and future-ready in the long run.