OpenAI Disrupts Covert Operations Linked to China and Other Nations

A conceptual illustration of AI tools disrupting covert operations, featuring digital elements representing cybersecurity and research.

OpenAI has disrupted ten covert operations linked to China and other countries that used its AI tools to influence social media and conduct surveillance. Four of the ten were tied to China and employed tactics such as social engineering and surveillance across multiple platforms. Their impact appears to have been limited, however, as early intervention prevented them from reaching a larger audience.

OpenAI has recently taken decisive action against covert operations believed to be linked to China, as well as other countries. According to researchers at OpenAI, Chinese propagandists are leveraging ChatGPT to whip up posts and comments on social media, and they haven’t stopped there: they’re also generating internal documents, such as performance reviews for their supervisors, all while China intensifies its online influence and surveillance tactics.

The company, in a call with reporters, detailed a growing array of covert operations that exploit sophisticated tactics. Ben Nimmo, the principal investigator for OpenAI’s intelligence and investigations team, noted, “What we’re seeing from China is a growing range of covert operations using a growing range of tactics.” In the past three months, OpenAI disrupted ten incidents where its AI tools were manipulated for malicious purposes, banning accounts linked to such activities. Significantly, four of those operations had connections to China.

These China-linked efforts were not just random; they spanned various countries and topics, even including a strategy game. Nimmo elaborated, “Some of them combined elements of influence operations, social engineering, surveillance.” The activities were extensive, cutting across a variety of platforms and websites. One notable operation, dubbed “Sneer Review,” generated comments using ChatGPT and blasted them across TikTok, X, Reddit, and Facebook in multiple languages, including English, Chinese, and Urdu.

The content ranged widely: some posts praised the Trump administration’s cuts to the U.S. Agency for International Development, while others criticized a Taiwanese game about defeating the Chinese Communist Party. Intriguingly, the operation crafted posts designed to look like organic engagement. Its accounts would comment on their own posts and even produced long-form articles claiming there was a backlash against the game, which OpenAI reported appeared staged.

Furthermore, the creators of “Sneer Review” utilized ChatGPT for internal operations, including writing up performance reviews detailing how they established and ran the operation, essentially mirroring their social engagement strategies. Another concerning operation involved impersonating journalists and geopolitical analysts to gather intelligence, which also employed ChatGPT for crafting X account bios, translating communications, and analyzing data.

It’s alarming stuff: the operation reportedly even drafted correspondence addressed to a U.S. Senator regarding an official nomination, though OpenAI could not confirm whether that correspondence was actually sent. Nimmo pointed out that the operation went so far as to create marketing materials claiming to run “fake social media campaigns” aimed at recruiting intelligence sources.

In its earlier report back in February, OpenAI had outlined a separate surveillance operation linked to China that was allegedly monitoring social media to feed real-time updates about protests in the West back to Chinese security forces. That operation, too, utilized OpenAI’s tools for debugging code and creating sales pitches for its monitoring tool.

But the scope of OpenAI’s report doesn’t end there. It also highlighted covert influence operations with likely ties to Russia and Iran, a spam initiative attributed to a marketing company in the Philippines, recruitment scams originating in Cambodia, and deceptive employment campaigns reminiscent of North Korean operations. Nimmo remarked, “It is worth acknowledging the sheer range and variety of tactics and platforms that these operations use, all of them put together.”

Fortunately, most of these operations were caught in their early stages, thus preventing them from reaching a larger audience. Nimmo reflected, stating, “We didn’t generally see these operations getting more engagement because of their use of AI.” In essence, better tools do not guarantee success for these efforts.

OpenAI’s latest report reveals the growing use of AI technologies in covert operations, particularly by entities linked to China, aimed at influencing public opinion across various platforms. While OpenAI has successfully disrupted multiple operations, their breadth shows how sophisticated these tactics have become, blending influence campaigns, social engineering, and surveillance. So far, though, the company’s proactive steps have limited the operations’ reach, underscoring the obstacles faced by those behind them.

Original Source: www.npr.org

James O'Connor is a respected journalist with expertise in digital media and multi-platform storytelling. Hailing from Boston, Massachusetts, he earned his master's degree in Journalism from Boston University. Over his 12-year career, James has thrived in various roles including reporter, editor, and digital strategist. His innovative approach to news delivery has helped several outlets expand their online presence, making him a go-to consultant for emerging news organizations.