
OpenAI Claims To Halt Deceptive Usage Of AI In Indian Elections 2024

The company banned a cluster of accounts operated from Israel, which were used to create and edit content for an influence operation targeting audiences in Canada, the United States, Israel, and later, India

OpenAI, the creator of ChatGPT, reported that it took action within 24 hours to stop deceptive uses of AI in operations targeting the Indian elections. As a result of this quick response, the operation saw no significant increase in audience engagement. In a detailed report on its website, OpenAI revealed that STOIC, a political campaign management firm based in Israel, created content related to the Indian elections as well as the Gaza conflict.

Minister of State for Electronics & Technology Rajeev Chandrasekhar responded to the report, stating, “It is absolutely clear and obvious that @BJP4India was and is the target of influence operations.” He emphasised the need for a thorough investigation into these activities, pointing out that such operations pose a serious threat to democracy. Chandrasekhar also criticised the timing of the disclosure, suggesting it should have been made earlier.

OpenAI explained that it disrupted the deceptive activity carried out through STOIC’s accounts, rather than taking action against the company itself. “In May, the network began generating comments that focused on India, criticised the ruling BJP party, and praised the opposition Congress party,” the report said. “In May, we disrupted some activity focused on the Indian elections less than 24 hours after it began.”

The company banned a cluster of accounts operated from Israel, which were used to create and edit content for an influence operation targeting audiences in Canada, the United States, Israel, and later, India. The content was shared on X (formerly Twitter), Facebook, Instagram, websites, and YouTube.

OpenAI noted that this operation, nicknamed Zero Zeno, utilised its models to generate articles and comments on various issues, including the Russia-Ukraine war, the Gaza conflict, the Indian elections, and global politics. These were then posted across multiple platforms.

OpenAI stated its commitment to developing safe and beneficial AI, including efforts to detect and disrupt covert influence operations (IO) aimed at manipulating public opinion or influencing political outcomes. “Our investigations into suspected covert influence operations (IO) are part of a broader strategy to meet our goal of safe AI deployment,” OpenAI said.

In the past three months, OpenAI has disrupted five covert IOs that attempted to use its models for deceptive activities. “As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” the report noted.

OpenAI emphasised its multi-pronged approach to combating platform abuse, which includes monitoring and disrupting threat actors, including state-aligned groups. The company invests in technology and teams to identify and disrupt such actors and collaborates with others in the AI ecosystem to highlight and address potential misuses of AI.
