People in the Channel Islands already use artificial intelligence for everything from writing emails to creating images of themselves as action figures.
But a series of local and international news stories has shone a light on the darker side of AI in recent months.
Disgraced former Guernsey politician and pastor Jonathan Le Tocq was sentenced to nine years in prison this week on 15 criminal charges, including creating more than 2,400 indecent images of children – many of which he made using photographs of women and children he knew and readily accessible AI technology.
With the dangers of AI becoming increasingly unavoidable, Express looks at some of the biggest risks and asks: ‘What can you do to protect yourself and your family?’

X or XXX?
Controversial multi-billionaire Elon Musk – owner of X (formerly Twitter) – has recently come under fire for his company’s own AI.
The technology – called Grok – has been used to create millions of sexualised images of real people – including children.
In a disturbing viral trend, users have been uploading photos of strangers and celebrities before digitally stripping them to their underwear in suggestive poses.
In one case, a selfie uploaded by a schoolgirl was turned into an image of her in a bikini, without her or her parents’ consent.
The Guardian reported that Grok was used to generate more than three million sexualised images in just 11 days, including over 24,000 of children.
‘Lifelong trauma’

After the backlash, Grok’s image generation was made a paid-only feature, and its rules may be tightened further in future.
Many common AI models, such as ChatGPT and Microsoft Copilot, have safety features built in, preventing people from creating explicit images.
However, other specialist AIs still allow users to create similar images – including extreme pornography – so the problem isn’t likely to go away.
Brent Homan, Data Protection Commissioner for Guernsey, said there was a “real risk” of technology being used to “harm individuals”.
He said victims could experience trauma and reputational damage that could follow them “potentially throughout life”.

Deepfakes
As recently as 2023, the quality of AI video was so poor that hilariously surreal clips of Will Smith’s face melting as he ate pizza went viral.
But less than three years later, the technology has evolved so fast it’s often hard – if not impossible – to tell the difference between genuine videos and AI-generated ones, known as deepfakes.
Last year hundreds of people were targeted by a deepfake video impersonating Guernsey’s Chief Minister, Lindsay de Sausmarez, which showed her giving fake investment advice on social media.
Even the White House has been accused of digitally altering images, so is it just celebrities and politicians the issue affects?

Unfortunately not, according to Mr Homan.
He said the technology had developed to the point that it could imitate people’s friends and relatives on phone or video calls.
Mr Homan said: “I was given a demonstration by a high-tech company in Korea, who made me say a sentence like ‘I like fuzzy dogs’ or something.
“After 30 seconds, they played it back to me and it was my voice saying to my wife: ‘Hey, I’m in real trouble, I’ve lost all my money.'”
“The hair went up on my arms,” he added.
Targeted scams

Another area where AI is having a huge impact is scam emails and social media posts.
While scammers have used emails to try to trick people for decades, the advent of AI has made their messages more convincing and more polished.
Scammers now use the technology to create realistic and, in many cases, highly personalised emails which are much harder to spot as fakes than the previous generation of scams.
Mr Homan said: “We used to think that scams were something that happened to someone else and they would only target vulnerable people.
“Today, there’s a scam tailored for each individual. We’re all potentially susceptible.”

Hallucinations
We all know AI can come up with very plausible answers to almost any question you throw at it.
The key word here is ‘plausible’.
Because of the way AI works, it sometimes comes up with ‘facts’ that seem very reasonable – but are entirely made up.
Despite being digital, these hallucinations – as they’re known – can create big real-world problems if you don’t spot them.
A 60-year-old American man ended up in hospital after he asked ChatGPT how to replace salt in his diet and was told to try sodium bromide – a chemical used to clean swimming pools and hot tubs.
In another example, two lawyers got in trouble for citing fictional case law when they used ChatGPT to help them prepare a case against an airline.
A question of ethics
As well as problems that can directly affect us, there are downsides to AI that impact the wider world.
One huge, and often overlooked, issue is the environmental impact, with AI soon expected to use more energy every year than the Netherlands.
Meanwhile, AI has come under fire from creatives for copyright infringement, taking work away from real people, and making everything look the same.

So how do you stay safe?
Mr Homan said AI had had a “tectonic” effect on society with many benefits, like improving healthcare and “democratising education”.
But, like “all powerful technologies from the automobile to the splitting of the atom”, some people would use it for harm, he said.
Avoid ‘Sharenting’
It was important for people to “think before you share”, especially with photos of children, he said.
It was natural for parents to want to share “cute videos” of their children with friends, but some people overshare – a phenomenon known as “sharenting”.
Mr Homan said: “Everyone loves to take pictures of their family, but we’re living in a different time. You have to think, ‘How could this impact my children’s lives?’

“Some things you think are cute are best kept for grandma and grandpa.”
Mr Homan said many social networks came with “parental privacy controls” which could reduce the chances of someone malicious downloading – and in turn digitally altering – photos of them or their children.
“It won’t bring the risk to zero, but it can reduce it,” he said.
There were also specialist apps to share photos with close family only, he said, so they weren’t publicly available.
‘Be sceptical’
Mr Homan said it was often “difficult, if not impossible” to tell a real image from a fake one.
“More than ever, we cannot just blindly trust what we see – we need to verify.”
Mr Homan said there were sites, such as Snopes.com and BBC Verify, which fact-checked widely-circulated videos and claims on the public’s behalf.
Verify… and don’t get emotional
One way to tell if an email, text or social media post was a scam was “if it triggers an emotional response”, Mr Homan said.
Messages that make it seem like you need to “act now” were “designed to create anxiety so you react without thinking”, he said.
So what should you do if you’re worried an email or text could be a scam?
“If it claims it’s from the government, you call the government on a government-approved line,” Mr Homan said. “If it’s from the bank, you call the bank.”
“Don’t call the number in the email [you were sent]. Go to a secondary source [such as their website].”

‘Don’t hesitate to contact the police’
Mr Homan said it was “important” to contact the police if a law had been broken, adding that the island’s police force was “quite vigilant” on the issue.
Speaking after Le Tocq was sentenced for his crimes committed using AI, Superintendent Liam Johnson said Guernsey Police was “working hard to ensure it is ready to tackle the emerging criminality that the development of AI has brought with it”.
He said police were working with Home Affairs to update the law so it was “fit for purpose”.
Supt Johnson said: “Any form of sexual offence involving children is treated with the utmost seriousness.
“We will relentlessly pursue offenders and ensure they are put before the courts to face justice.”
Seek support
Jenny Murphy, from Guernsey’s Victim Support and Witness Service, said anyone who had been the victim of “image-based abuse” or other crimes was “entitled to support”.
“This type of offending can have a lasting and deeply distressing impact on victims, and it is never acceptable.”
“No one has to face the impact of this alone,” she added.