A major study on AI trust by KPMG and the University of Melbourne highlights low public trust in, and literacy about, AI in Australia. While AI is increasingly used at work, many employees report using it in ways that may breach workplace policies. There is strong public demand for stricter AI regulation, and Australians express more wariness and less optimism about AI's benefits than their global counterparts.
2025 is shaping up to be a pivotal year for artificial intelligence (AI) and how the public perceives it. The recent study by KPMG and the University of Melbourne reveals troubling insights about trust and regulation. Amid the rapid adoption of AI, organizations appear to be rushing ahead without the necessary frameworks for transparency and accountability in place. Add low AI literacy and weak governance to the mix, and the risks become clearer than ever.
The survey collected responses from over 48,000 people across 47 countries. Interestingly, the research indicates that while AI use is rising, trust in it varies widely. Australians, for instance, exhibit notable wariness about AI. Around 65% of Australians say their employers have started using AI, yet fewer than half of employees feel confident they are using it in line with company guidelines.
Not only are nearly half of workers using AI in ways that might breach policy, but a whopping 57% of them admit they often don't check the accuracy of AI's outputs. This lack of diligence is leading to genuine mistakes at work, with about 59% of employees admitting to errors caused by AI. There's a significant gap between the potential benefits and the reality of the risks; only 30% say their employer even has a clear policy on the use of generative AI.
The appetite for AI regulation is on the rise—77% of Australians agree that we need more regulation surrounding AI. Yet, only 30% feel that the existing laws and safeguards are enough to keep them safe. Interestingly, a massive 80% expect not just national but international oversight in AI governance. When asked about trust, a compelling 83% of respondents indicated they’d have higher trust in AI if there were assurances around governance standards and monitoring accuracy.
It's apparent that Australians aren't quite singing the praises of AI like some other nations. We rank low globally on optimism and excitement about the technology. Only 30% of Australians believe the benefits of AI outweigh the risks, the weakest result of any country surveyed.
The issue extends to education, too. Just 24% of Australians have received any AI training, well below the 39% global average, leaving many feeling underprepared and concerned about their AI skills. The survey also found that over 60% of Australians rate their own understanding of AI as low.
As Professor Nicole Gillespie aptly put it, “An essential foundation for building trust and unlocking AI’s benefits is developing literacy through accessible training and public education.” The findings from this research point to an unsettling relationship between trust, regulation, and understanding of AI. While many appreciate its capabilities, there’s a deep-rooted caution surrounding safety and societal implications. Navigating this landscape will demand careful management as we carve out the future use of AI in our world.
In conclusion, the KPMG and University of Melbourne study paints a concerning picture of trust in AI, revealing a complex interplay between adoption, governance, and public perception. Australians are showing clear caution amid the rapid integration of AI into workplaces. There's strong support for increased regulation, and low AI literacy points to an urgent need for better training and public education. As we move through 2025, unlocking AI's potential while building trust remains vital.
Original Source: kpmg.com