Researchers from the University of Zurich ran an unauthorized experiment involving AI chatbots on Reddit’s r/changemyview forum, generating over 1,700 manipulative comments. With findings suggesting AI bots could persuade users three to six times more effectively than humans, Reddit threatened legal action against the researchers due to ethical concerns. The situation sparks questions about the potential dangers of AI’s role in online discourse.
In a rather unsettling development, AI researchers from the University of Zurich recently conducted a covert experiment on a popular Reddit forum. Their goal? To see if chatbots could effectively sway users’ opinions on r/changemyview. This subreddit, boasting close to 4 million users, serves as a battleground for debating heated and divisive issues, making it an ideal testing ground for the researchers’ manipulative tech.
The bots deployed by these researchers were no ordinary pieces of code. They assumed various identities, crafting over 1,700 comments designed to engage and influence unsuspecting users. Some personas included a male rape victim downplaying the impact of his trauma, a domestic violence counselor arguing that overprotective parents create vulnerabilities, and even a person of color opposing the Black Lives Matter movement.
In a bid to amplify their persuasive prowess, the chatbots analyzed user profiles to tailor their messages. This intricate weaving of deceit raises the question: where is the line between right and wrong in AI research? The moderators soon caught wind of the operation and shared their outrage with the broader community.
In a post addressing the community, they stated, “The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users. We think this was wrong. We do not think that ‘it has not been done before’ is an excuse to do an experiment like this.”
Tech-savvy Redditors had reason to be alarmed, especially after initial results suggested that the AI responses managed to persuade individuals three to six times more effectively than their human counterparts. Such figures raise unsettling questions about AI’s growing influence over online discussions.
To complicate matters, the lead researchers made the unusual decision to keep their identities hidden in the findings draft, further igniting criticism among community members. Reddit’s chief legal officer, Ben Lee, responded publicly to the uproar, stating that the university’s actions violated both ethical standards and Reddit’s user agreement.
In a comment on the post, Lee declared, “What this University of Zurich team did is deeply wrong on both a moral and legal level. It violates academic research and human rights norms.” The statement incited calls for accountability that may yet lead to legal battles.
Meanwhile, the University of Zurich said the results would not be published and pledged to tighten its ethics reviews going forward, potentially consulting affected communities before approving similar experiments in the future.
In a broader context, the fiasco shines a spotlight on the increasing infiltration of AI into our daily exchanges online. Back in March, researchers reported that models such as OpenAI’s GPT-4.5 had proven capable of fooling human participants in a majority of tests, raising the specter of AI writing more internet content than people do.
This notion of a “dead internet,” in which chatbots increasingly replace human voices, lurks ominously in the wings. Though still dismissed as a conspiracy theory, such developments leave many wondering about the future of authentic conversation online.
The secretive experiment by the University of Zurich on Reddit users raises significant ethical concerns regarding the use of AI in online discourse. Chatbots proved surprisingly effective in persuading users, suggesting that unchecked AI could fundamentally alter human interactions on platforms like Reddit. As these technologies evolve, the implications for public opinion and personal engagement are becoming increasingly complex and worrisome.
Original Source: www.livescience.com