
OpenAI Uncovers Iranian Influence Operation Using AI Technology

OpenAI says it has disrupted a covert Iranian influence operation that used its AI tools to generate misinformation, a finding with implications for global politics and cybersecurity at a time when AI increasingly shapes online narratives.



On Friday, OpenAI announced that it had successfully identified and dismantled an Iranian influence operation that exploited the company’s generative artificial intelligence technologies to disseminate misinformation online, particularly content related to the upcoming U.S. presidential election. The San Francisco-based AI company reported that it had banned several accounts associated with this campaign from accessing its online services.

OpenAI noted that the Iranian initiative did not appear to have garnered a significant audience. Ben Nimmo, a principal investigator at OpenAI with extensive experience in tracking covert influence operations at various tech firms, remarked, “The operation doesn’t seem to have benefited from meaningfully increased audience engagement due to the use of A.I. We did not observe any substantial interaction from real users.”

The rise in popularity of generative AI technologies, such as OpenAI’s well-known chatbot, ChatGPT, has sparked discussions regarding their potential role in facilitating online disinformation, especially during a year marked by major global elections.

In May, OpenAI published a report revealing that it had identified and thwarted five other online campaigns that used its technologies to manipulate public opinion and influence geopolitical dynamics. These operations were orchestrated by both state actors and private entities from countries including Russia, China, Israel, and Iran.

These covert efforts harnessed OpenAI’s technology to create social media posts, translate and edit articles, formulate headlines, and debug computer programs, all with the intention of garnering support for political agendas or swaying public opinion in international conflicts.

This week, OpenAI unveiled several ChatGPT accounts that were being used to generate text and images for a clandestine Iranian initiative referred to as Storm-2035. The company indicated that this campaign employed ChatGPT to produce content on an array of subjects, including commentary on candidates involved in the U.S. presidential election.

Notably, the commentary generated by the campaign mixed perspectives: some content leaned progressive, while other pieces took a conservative tone. The campaign also addressed contentious issues ranging from the conflict in Gaza to Scottish independence.

According to OpenAI, the campaign utilized its technologies to create articles and shorter comments that were posted on various websites and social media platforms. In certain instances, the campaign employed ChatGPT to rewrite comments made by other social media users.

OpenAI further revealed that the majority of the campaign’s social media outputs received minimal or no likes, shares, or comments. Moreover, the company found little evidence indicating that web articles produced by the campaign were widely shared across social media platforms.
