OpenAI, the organization behind ChatGPT, announced last Friday that it had identified and subsequently disabled “accounts associated with an Iranian influence campaign.” These accounts reportedly utilized ChatGPT to “create content spanning various subjects, including the upcoming U.S. presidential elections,” as stated in an official OpenAI announcement.
The entity involved is referred to as Storm-2035. According to Axios, the group is known for attempting to sway elections by setting up fraudulent news websites and spreading their content through social media platforms. In its statement, OpenAI said the group leveraged ChatGPT to “produce content surrounding several topics—including analysis on candidates from both political parties in the U.S. presidential race—which they then circulated via social media accounts and dedicated websites.”
The generated material addressed both major-party candidates, Vice President Kamala Harris and former President Donald Trump, as well as issues such as Israel’s military actions in Gaza, the rights of Latinx communities in the U.S. (in both English and Spanish), political dynamics in Venezuela, and Scottish independence. Storm-2035 also produced content about fashion and beauty, which OpenAI suspects was an effort to “seem more credible” while cultivating a follower base.
Despite these efforts, OpenAI said the operation failed to attract any substantial audience engagement.
“Most of the posts we analyzed received minimal or no engagement in terms of likes or shares,” OpenAI said. “We also found little evidence indicating that articles published online gained traction across social media channels.”
“Regardless of this operation’s lack of significant audience engagement, we remain committed to addressing any attempts at employing our services for foreign influence purposes,” the company added.