People who use AI are now less likely to trust it than ever, a new study has revealed.

Despite 66% of people surveyed from around the world admitting to using AI regularly, only 46% were willing to trust AI systems. 

These were the key findings of ‘Trust, attitudes and use of artificial intelligence: A global study 2025’, led by Professor Nicole Gillespie, Chair of Trust at Melbourne Business School at the University of Melbourne, and Dr Steve Lockey, Research Fellow at Melbourne Business School, in collaboration with KPMG. 

It surveyed over 48,000 people across 47 countries between November 2024 and January 2025. 

The study indicates that people have become “less trusting” and “more worried” about AI as its adoption has increased, compared to a previous study conducted prior to the release of ChatGPT in 2022. 

It also showed that UK respondents were less willing to trust AI (42%) than the global average. 

“To some extent AI has been part of the business landscape for years, but the rapid rise and widespread public access to more advanced systems, notably generative AI, has brought AI’s promise and its associated risks into sharp focus,” said David Watterson, Associate Director at KPMG in the Crown Dependencies. 

An untameable beast at work? 

The study stated that the "age of working with AI" had arrived, with almost three in five employees worldwide (58%) intentionally using AI, and 31% using it daily or weekly.

Despite this, only 60% of organisations globally provided "responsible AI training", and just 34% reported having an "organisational policy or guidance on the use of generative AI tools".

In the UK, it seemed the proliferation of AI had produced a two-pronged result. 

Pictured: David Watterson, Associate Director at KPMG in the Crown Dependencies.

Over half (53%) of UK respondents said they had seen increased efficiency, quality of work and innovation as a result of AI, and 45% reported increased revenue-generating activity.

However, 54% of UK employees said they had made mistakes in their work due to AI, 58% “relied on AI output” without checking its accuracy, and 38% admitted to using AI “inappropriately” at work. 

“The use of AI at work is creating complex risks for organisations, and a ‘governance gap’ is emerging,” said Mr Watterson. “Complacent use could be due to governance of responsible AI trailing behind.” 

Uncharted territory 

Though 73% of people reported personally experiencing or observing the benefits of AI, only 43% believed current AI regulations were adequate.

This number was even lower in the UK, with just 33% of respondents satisfied with current safeguards. 

The risks noted by the study included loss of human interaction, cybersecurity risks, proliferation of misinformation and disinformation, inaccurate outcomes, and deskilling. 

“Loss of human interaction and connection” was the top risk among UK respondents at 55%. 

The impact AI could have politically was of particular concern to UK respondents, with 54% saying they were worried that elections could be manipulated by AI-generated content or bots. 

And 91% wanted laws and action to combat AI-generated misinformation. 

“Although this study didn’t include respondents from the Crown Dependencies, organisations in our jurisdictions are navigating similar challenges,” shared Mr Watterson. 

“Forward-looking organisations’ approaches to adopting innovation are far from reckless. Taking a measured and strategic stance, they are carefully evaluating whether investment in AI and automation will deliver business benefits. 

“They seek to balance innovation with operational needs, regulatory compliance, and long-term business objectives and are proactively addressing these challenges and turning potential risks into strategic opportunities,” he added.