People in the Channel Islands already use artificial intelligence for everything from writing emails to creating images of themselves as action figures.

But a series of local and international news stories has shone a light on the darker side of AI in recent months.

Disgraced former Guernsey politician and pastor Jonathan Le Tocq was sentenced to nine years in prison this week on 15 criminal charges, including creating more than 2,400 indecent images of children – many of which he made using photographs of women and children he knew and readily accessible AI technology.

With the dangers of AI becoming increasingly unavoidable, Express looks at some of the biggest risks and asks: ‘What can you do to protect yourself and your family?’

Billionaire Elon Musk. He's a white man in a grey jacket and open-buttoned white shirt.
Pictured: Controversial billionaire Elon Musk has come under fire recently for his company’s AI being used to generate sexualised images of children and celebrities.

X or XXX?

Controversial multi-billionaire Elon Musk – owner of X (formerly Twitter) – has recently come under fire for his company’s own AI.

The technology – called Grok – has been used to create millions of sexualised images of real people – including children.

In a disturbing viral trend, users have been uploading photos of strangers and celebrities before digitally stripping them to their underwear in suggestive poses.

In one case, a selfie uploaded by a schoolgirl was turned into one of her wearing a bikini, without her or her parents’ consent.

The Guardian reported that Grok was used to generate more than three million sexualised images in just 11 days, including over 24,000 of children.

‘Lifelong trauma’

A young woman in jeans undoes her own bra next to a bed.
Some AI technology can digitally undress strangers, celebrities and even children.

After the backlash, Grok’s image generation has been made a paid-only feature, and rules may be tightened in the future.

Many common AI models, such as ChatGPT and Microsoft Copilot, have safety features built in, preventing people from creating explicit images.

However, other specialist AIs still allow users to create similar images – including extreme pornography – so the problem isn’t likely to go away.

Brent Homan, Data Protection Commissioner for Guernsey, said there was a “real risk” of technology being used to “harm individuals”.

He said victims could experience trauma and reputational damage that could follow them “potentially throughout life”.

A laptop with an AI-generated image of Elvis in St Peter Port harbour, using chat GPT. The prompt reads: Please create a photorealistic image of Elvis Presley singing on a yacht in St Peter Port harbour, Guernsey in 16:9 aspect ratio. The image generated is of Elvis Presley. He sings into a vintage microphone on the deck of a yacht in St Peter Port harbour, Guernsey, wearing a white, gold-studded jumpsuit and red scarf. The harbour water, moored boats, and the hillside town with stone buildings and towers are visible behind him under a bright blue sky.
Pictured: AI can be used to generate photorealistic images and videos of events that never happened in seconds. Image contains AI-generated content. (Bailiwick Express/ChatGPT)

Deepfakes

As recently as 2023, the quality of AI video was so poor that hilariously surreal videos of Will Smith’s face melting as he ate pizza went viral.

But less than three years later, the technology has evolved so fast it’s often hard – if not impossible – to tell the difference between genuine videos and AI-generated ones, known as deepfakes.

Last year hundreds of people were targeted by a deepfake video impersonating Guernsey’s Chief Minister, Lindsay de Sausmarez, which showed her giving fake investment advice on social media.

Even the White House has been accused of digitally changing images, so is it just celebrities and politicians the issue affects?

An old lady looks at her mobile phone with her head in her hands.
Pictured: The latest technology allows scammers to use AI-generated voices and video of people’s relatives to trick them into sending money.

Unfortunately not, according to Mr Homan.

He said the technology had developed to the point that it could imitate people’s friends and relatives on phone or video calls.

Mr Homan said: “I was given a demonstration by a high-tech company in Korea, who made me say a sentence like ‘I like fuzzy dogs’ or something.

“After 30 seconds, they played it back to me and it was my voice saying to my wife: ‘Hey, I’m in real trouble, I’ve lost all my money.'”

“The hair went up on my arms,” he added.

Targeted scams

A person reads a text message, with a link, which says: “Hello! your package with tracking code is waiting for you to set delivery preference”.
Pictured: Scams are becoming more convincing and personalised.

Another area where AI is having a huge impact is scam emails and social media posts.

While scammers have been using emails to try to trick people for decades, the advent of AI has made scam emails more convincing, polished and harder to detect.

Scammers use the technology to help create realistic and – in many cases – highly personalised emails which are harder to spot as fakes than the previous generation of scams.

Mr Homan said: “We used to think that scams were something that happened to someone else and they would only target vulnerable people.

“Today, there’s a scam tailored for each individual. We’re all potentially susceptible.”

A digitally-pixelated image of a woman.

Hallucinations

We all know AI can come up with very plausible answers to almost any question you throw at it.

The key word here is ‘plausible’.

Because of the way AI works, it sometimes comes up with ‘facts’ that seem very reasonable – but are entirely made up.

Despite being digital, these hallucinations – as they’re known – can create big real-world problems if you don’t spot them.

A 60-year-old American man ended up in hospital after he asked ChatGPT how to replace salt in his diet and was told to try sodium bromide – a chemical used to clean swimming pools and hot tubs.

In another example, two lawyers got in trouble for citing fictional case law when they used ChatGPT to help them prepare a case against an airline.

A question of ethics

As well as problems that can directly affect us, there are downsides to AI that impact the wider world.

One huge, and often overlooked, issue is the environmental impact, with AI soon expected to use more energy every year than the Netherlands.

Meanwhile, AI has come under fire from creatives for copyright infringement, taking work away from real people, and making everything look the same.

A young mother kisses her teenage daughter, who is eating toast and playing with an iPad.

So how do you stay safe?

Mr Homan said AI had had a “tectonic” effect on society with many benefits, like improving healthcare and “democratising education”.

But like “all powerful technologies from the automobile to the splitting of the atom” some people would use it for harm.

Avoid ‘Sharenting’

It was important for people to “think before you share”, especially with photos of children, he said.

It was natural for parents to want to share “cute videos” of their children with friends, but some people overshare – a phenomenon known as “sharenting”.

“Everyone loves to take pictures of their family, but we’re living in a different time. You have to think, ‘How could this impact my children’s lives?’

A man in a grey suit and white shirt.
Pictured: Guernsey’s Data Protection Commissioner, Brent Homan, says people should “think before they share” photos online.

“Some things you think are cute are best kept for grandma and grandpa.”

Mr Homan said many social networks came with “parental privacy controls” which could reduce the chances of someone malicious downloading – and in turn digitally altering – photos of them or their children.

“It won’t bring the risk to zero, but it can reduce it,” he said.

There were also specialist apps to share photos with close family only, he said, so they weren’t publicly available.

‘Be sceptical’

Mr Homan said it was often “difficult, if not impossible” to tell a real image from a fake one.

“More than ever, we cannot just blindly trust what we see – we need to verify.”

Mr Homan said there were sites, such as Snopes.com and BBC Verify, which fact-checked widely-circulated videos and claims on the public’s behalf.

Verify… and don’t get emotional

One way to tell if an email, text or social media post was a scam was “if it triggers an emotional response”, Mr Homan said.

Messages that make it seem like you need to “act now” were “designed to create anxiety so you react without thinking”, he said.

So what should you do if you’re worried an email or text could be a scam?

“If it claims it’s from the government, you call the government on a government-approved line,” Mr Homan said. “If it’s from the bank, you call the bank.”

“Don’t call the number in the email [you were sent]. Go to a secondary source [such as their website].”

A female 999 operator.
Pictured: Guernsey Police are working with Home Affairs to modernise the law around AI.

‘Don’t hesitate to contact the police’

Mr Homan said it was “important” to contact the police if a law had been broken, adding that the island’s police force was “quite vigilant” on the issue.

Speaking after Le Tocq was sentenced for his crimes committed using AI, Superintendent Liam Johnson said Guernsey Police was “working hard to ensure it is ready to tackle the emerging criminality that the development of AI has brought with it”.

He said police were working with Home Affairs to update the law so it was “fit for purpose”.

Supt Johnson said: “Any form of sexual offence involving children is treated with the utmost seriousness.

“We will relentlessly pursue offenders and ensure they are put before the courts to face justice.”

Seek support

Jenny Murphy, from Guernsey’s Victim Support and Witness Service, said anyone who had been the victim of “image-based abuse” or other crimes was “entitled to support”.

“This type of offending can have a lasting and deeply distressing impact on victims, and it is never acceptable.”

“No one has to face the impact of this alone,” she added.