OpenAI Rules Out Use in Elections and Voter Suppression

    In a decisive move to combat election misinformation, OpenAI has declared a strict stance: it opposes the use of its generative AI tools in election campaigns and voter suppression tactics.

    This announcement comes as an important step toward ensuring the integrity of the many major elections scheduled for 2024.

    Also read: OpenAI negotiates news content licenses with CNN, Fox, Time

    Combating abuse with innovation and policy

    OpenAI has launched a strategy to prevent its technology from being misused to manipulate election results. The company has created a specialized team focused on election-related concerns, integrating expertise from various departments, including legal, engineering, and policy. The team's primary objective is to identify and mitigate potential abuses of AI in elections.

    “We bring together the expertise of our security systems, threat intelligence, legal, engineering, and policy teams to create a cross-functional effort focused on election operations.”

    The threat of misinformation in politics is not new, but the emergence of AI technology poses unprecedented challenges. Recognizing this, OpenAI is taking proactive measures. The company plans to employ a combination of approaches, including red teaming, user engagement, and safety guardrails. Specifically, the company's image generation tool, DALL-E, has been updated to no longer create images depicting real people, including political candidates.

    “DALL-E has guardrails in place to deny requests to generate images of real people, including candidates.”

    OpenAI is continually revising its User Policy to keep pace with the evolving landscape of AI technology and its potential abuses. The updated safety policy specifically restricts the development of AI applications for political campaigning and lobbying purposes. Additionally, measures are being taken to prevent the creation of chatbots that mimic real people or organizations.

    Strengthening transparency and accountability

    A key element of OpenAI's strategy is the introduction of a provenance classifier for DALL-E. This tool, currently in beta testing, can detect images generated by DALL-E. The company aims to increase transparency in AI-generated content by making it available to journalists, platforms, and researchers.

    “We will soon enable our first group of testers, including journalists, platforms, and researchers, to get feedback.”

    OpenAI is also integrating real-time news reporting into ChatGPT. This integration aims to provide users with accurate and timely information and increase transparency of information sources provided by AI.

    OpenAI is also working with the National Association of Secretaries of State to ensure its technology does not impede election participation. This collaboration involves directing ChatGPT users to trusted voting information websites.

    Rivals follow suit in AI competition

    OpenAI's announcement sets a precedent in the AI industry, with competitors such as Google LLC and Meta Platforms Inc. also implementing measures to combat misinformation spread through their technology. This joint effort by industry leaders signifies increased awareness of, and responsibility for, the potential impact of AI on democratic processes.

    But is this enough? Charles King of Pund-IT Inc. raises an important issue by questioning whether these measures are timely or merely reactive. He argues that concerns about AI-generated misinformation have been around for years, and that OpenAI's recent announcements could be seen as too little, too late. This perspective prompts a deeper consideration of the role and responsibility of AI developers in the political landscape.

    “Thus, this announcement suggests that, at best, OpenAI was asleep at the switch. At worst, it suggests a ritualistic hand-washing exercise that OpenAI can point to when generative AI impacts this year's global elections.”

