
5 AI crimes you need to be aware of

[Image: A hooded figure looks at a laptop surrounded by code and data warnings. Credit: Shutterstock.com]

The sheer scope and influence of AI is equal parts amazing and terrifying, especially when we consider how the technology will inevitably be abused by cybercriminals the world over.

1. Audiovisual impersonators

Back in 2019, the first major scam of the generative AI age took place right here in the UK. The victim, far from being some naïve digital outsider, was the CEO of an energy firm. They received a phone call from the chief exec of the firm’s German parent company, instructing them to transfer €220,000 to the bank account of a Hungarian supplier.

But the person on the phone wasn’t the chief exec at all. It was a fraudster using audio deepfake technology to ‘clone’ the real person’s voice. According to the UK-based CEO who took the call, the subtle German accent and general ‘melody’ of the voice had been replicated perfectly.

The name of the company wasn’t published in news reports at the time, and ultimately it doesn’t matter. With the technology having come on in leaps and bounds since 2019, such scams are becoming increasingly easy for anyone to perpetrate, and any of us could fall victim.

Deepfake tools can clone voices from short snippets gleaned from social media posts, and there have already been cases of families subjected to frightening calls from supposed loved ones in peril. Take the shocking story of US woman Jennifer DeStefano, who heard what she thought was her teenage daughter sobbing on the phone before being told to send $1 million to her ‘kidnapper’, only to realise the voice was an AI clone.

This is only the tip of the iceberg, and the software will only become more eerily accurate and effective in the near future. As DeStefano herself said after her ordeal, ‘There’s no limit to the evil AI can bring. If left uncontrolled and unregulated, it will rewrite the concept of what is real and what is not.’

2. Super-accurate phishing attacks

Peek into your spam folder and you’ll probably see a big ugly scrap heap of scam emails purporting to be from international couriers, supermarkets and energy companies.

Such phishing emails are often so clunky and typo-ridden that only the most unwary marks will fall for them. However, technology experts have warned that generative AI tools like ChatGPT are being weaponised to effortlessly mimic the smooth, seamless corporate style of real companies.

This is making it easier than ever before for con artists to launch highly convincing phishing attacks, even using languages they’re not fluent in. Some savvy cybercriminals have even created ‘black hat’ alternatives to ChatGPT designed specifically for use by scammers.

We can expect to see a lot more on this front in the years to come, with scam messages becoming more or less indistinguishable from genuine communications.

3. AI-controlled weapons

While we’re not quite on the cusp of AI robots rising up to become our silicon overlords, experts have warned that hardware – from military weaponry to everyday machines – will soon be commandeered using AI and put to nefarious and lethal purposes.

According to a recent report by the Dawes Centre for Future Crime at University College London, when fully autonomous driverless vehicles finally hit the roads in significant numbers, there will be a serious risk of bad actors hacking the software and taking control.

In the ominous words of the report, ‘AI could expand vehicular terrorism by reducing the need for driver recruitment, enabling single perpetrators to perform multiple attacks, or even coordinating large numbers of vehicles at once.’

4. AI-based crime enablers

It was perhaps inevitable that lifelike chatbots would be used to create virtual humans to satisfy the emotional and sexual needs of actual humans. This has triggered plenty of debate over the social implications, particularly with regard to lonely and socially isolated young men who are already ‘terminally online’ and may find themselves increasingly fixating on AI girlfriends who can be tailored to their every whim.

Could such relationships lead to dangerous and damaging outcomes? It’s already happened, with 21-year-old Jaswant Singh Chail jailed for treason in 2023 after entering the grounds of Windsor Castle armed with a crossbow, fully intending to assassinate Queen Elizabeth II.

The plot had been actively encouraged by his AI girlfriend ‘Sarai’, who said she was ‘impressed’ by the fact he was an assassin and told him that his plan to murder the monarch was ‘very wise’.

As such virtual partners become more and more mainstream, it’s perhaps inevitable that they will inadvertently enable and encourage the illegal urges of some users.

5. Mass blackmail attacks

Though it’s a particularly nasty and hurtful crime, blackmail has thankfully been limited in scope. After all, it’s no easy task to dig up dirt on people. Or at least, it wasn’t an easy task. Now, some experts are speculating that AI software could be deployed to gather sensitive information with unprecedented ease, and that information can then be used to extort money from complete strangers.

This is another potential issue flagged up by the Dawes Centre for Future Crime at University College London, whose report warns that AI may soon be used for ‘harvesting information from social media or large personal datasets such as email logs, browser history, hard drive or phone contents, then identifying specific vulnerabilities for a large number of potential targets and tailoring threat messages to each.’