Dr Mary Aiken is one of the world’s leading forensic cyberpsychologists and a trailblazer in the study of human behaviour in the digital age. Widely recognised as one of the most influential technology speakers globally, she has advised governments, international institutions, and law enforcement agencies on the impact of technology on human psychology and cybercrime.
Drawing on an academic and professional background that spans cybersecurity, criminology, and behavioural science, Dr Aiken has produced groundbreaking research that has shaped international policy and inspired the CBS primetime show CSI: Cyber. A Fellow of the Royal Society of Medicine and an expert advisor to Europol’s European Cybercrime Centre, she offers insights that are helping to redefine online safety and resilience in an increasingly digitised world.
In this exclusive interview, Dr Aiken explores the psychological dimensions of AI, the human factors driving cyber risk, and why understanding behaviour is essential to building safer digital futures.
Q: As artificial intelligence continues to evolve, how should policymakers, technologists, and business leaders recalibrate their strategies to manage the psychological and societal risks associated with AI?
Dr Mary Aiken: “When it comes to technologies such as AI, we’ve seen a lot of false dawns and a lot of moral panics. With the introduction of tools like ChatGPT, everyone got very excited about chatbots. But in fact, chatbots have been around for a long time. Eliza was the first chatbot, developed in the 1960s, and she was modelled on what’s called Rogerian psychology, meaning she was very good at eliciting information.
“She would say things like, ‘How are you?’ and follow up with, ‘Tell me about your day.’ The inventor of Eliza was actually horrified when colleagues in the research lab began opening up to the chatbot and confessing all sorts of things. The program was shut down very quickly.
“I had the pleasure of working on another chatbot, Jabberwacky, back in the 1990s. A colleague of mine built this stunning technology. So, what we’re seeing is continuous evolution in this space over time.
“As for the panic around AI replicating human intelligence and making people redundant—honestly, I’m a behavioural scientist, and we barely understand how the human brain works. The idea that we can build something to replicate or replace what we don’t yet understand is a flawed premise.
“We need to properly understand what AI is and, importantly, what it can actually deliver for us.
“Personally, I prefer to conceptualise AI as IA, Intelligence Augmentation, rather than Artificial Intelligence. IA is based on the work of J. C. R. Licklider, who in 1960 wrote a brilliant paper titled Man-Computer Symbiosis. He described the symbiotic, interdependent relationship between humans and machines.
“IA puts the human at the centre of the process, and I think that’s the mindset we need to adopt when it comes to harnessing the best from machine learning or AI.
“Of course, there are going to be extraordinary developments in this space. I’m particularly fascinated by quantum computing combined with ML and AI. That may be the point at which we come close to mimicking human intelligence.”
Q: Cyberpsychology offers a distinct lens through which to interpret digital behaviour. How does this scientific approach shape the way you engage with cross-sector audiences on risk and resilience?
Dr Mary Aiken: “In cyberpsychology, we have certain effects—take, for example, the online disinhibition effect. It shows that people behave online in ways they wouldn’t in the real world, and that’s a critical behavioural driver.
“There’s also the power of online anonymity. It can be a good thing, but it’s also like a superpower of invisibility—and with that comes great responsibility. Unfortunately, humans don’t always use it well.
“There are positive attributes too, such as online altruism—think of crowdsourced fundraising. But the key principle is that human behaviour mutates or changes online, and it’s essential to understand the impact of those changes.
“As a cyberpsychologist, particularly through my work on the professional speaking circuit, I address a broad range of sectors: technology, cybersecurity, infosec, financial services, education, e-commerce, and healthcare.
“All of these sectors benefit from insights into how technology affects human behaviour—both from a user perspective and an operator perspective.
“I’ve worked in many research areas, including cyberchondria—a form of hypochondria that manifests online. We’ve all experienced it: a headache that could be anything from eye strain to a hangover, but after a quick online search, you’re suddenly reading about brain tumours and experiencing anxiety.
“Another current research focus is cyber fraud. In the UK, we’ve seen legislation like the Online Safety Act aiming to address cyber fraud and criminal behaviour online.
“In this field, I’ve led several information campaigns based on one of my core areas of expertise: cyber behavioural profiling. While many campaigns say ‘Don’t click on the link,’ I go deeper. I analyse the language and psychological cues in phishing messages: what buttons they’re trying to press, what emotional levers they’re pulling to get people to hand over sensitive information.
“My talks cover a broad range—from human factors in cybersecurity to cyber behavioural profiling, to the psychology of AI.”
Q: You’ve described ‘human factors’ as critical to cybersecurity. What aspects of human behaviour are still being overlooked in today’s digital defence strategies?
Dr Mary Aiken: “First, let’s define cyberspace. Cyberpsychologists like me have been discussing it for around two decades. In fact, in 2016, NATO formally recognised cyberspace as an operational domain—acknowledging that future conflicts will take place not only on land, sea, and air, but also across computer networks.
“The U.S. military defines cyberspace as comprising three layers. First, the physical network layer: hardware, cables, and infrastructure. Then, the logical network layer, which facilitates communication across systems. And finally, the cyber persona layer: that’s us, the humans.
“Cybersecurity has historically been very good at addressing the first two layers: the physical and logical networks. However, we know that the vast majority of cyberattacks are facilitated by social engineering. And social engineering is not about technology—it’s about psychology.
“We’re now seeing the emergence of a new sector under the cybersecurity umbrella: the online safety technology sector, or SafetyTech. I’m one of the founding members of this sector in the UK. Our mission is to develop technological solutions to technology-facilitated harms, including criminal and abusive online behaviour.
“To summarise, we must factor the human into the cybersecurity equation—from the perspectives of users, employees, and attackers.
“When we think about cyber attackers—from state-sponsored or state-condoned actors, to hacktivists, activists, organised cybercriminals, and sophisticated threat actors—we need solutions that are not only technically secure but also psychologically resilient.
“We want our data systems and networks to be robust, resilient, and secure—but just as importantly, we need the people who operate those systems to be psychologically robust, resilient, safe, and secure. That’s how we achieve 360-degree resilience.”
Q: Having briefed institutions including the UN, NATO, and Interpol, what lasting impact do you hope your speaking engagements have on how society navigates cybersecurity and digital wellbeing?
Dr Mary Aiken: “As one of the world’s leading experts in cyberpsychology, I’ve had the pleasure of speaking at global events—from the White House to NATO, from the United Nations to Interpol.
“I’ve spoken at conferences spanning cybersecurity, infosec, health tech, fintech, regtech, edtech, as well as policy and policing summits. The breadth and depth of these engagements point to the global relevance and utility of cyberpsychology.
“My role is to equip audiences with the psychological understanding and tools they need to address problems arising at the intersection of humans and technology. I help people create tech solutions to tech-facilitated harm—ranging from damaging behaviours to serious cybercrime.
“From a cyberpsychology perspective, my goal is to make people more informed and more confident in their use of technology—so they can harness it more effectively and ethically. And above all, my aim is to help people work together—across sectors, industries, and nations—to foster a safer, more secure cyberspace for us all.”
This exclusive interview with Dr Mary Aiken was conducted by Mark Matthews.