
    AI Chatbots Spewing 2024 Election Lies Misleading Voters


A recent study found that AI chatbots are providing false and misleading information about the 2024 election, including answers that are potentially harmful or incomplete.

The study, by the AI Democracy Projects and nonprofit media outlet Proof News, found that AI models tend to direct voters to polling places that don't exist or to fabricate illogical responses.

    These incidents occur at a time when Congress has yet to pass legislation regulating AI in politics, leaving tech companies to “govern themselves.”

    A series of mistakes and lies

    AI chatbots are generating inaccurate information during the U.S. presidential primaries, according to a study cited by CBS News.

This is happening at a time when more people are turning to AI tools for election information, a situation experts consider harmful because the tools often provide half-truths or outright falsehoods.

    “Chatbots are not yet ready for prime-time delivery of critical and sensitive information about elections,” said Seth Bluestein, a Republican city commissioner in Philadelphia.

Bluestein was among the election officials and researchers who tested the chatbots as part of the study.

Beyond misinformation, AI tools have also been used in ways that could discourage voters from exercising their right to vote. Last month, New Hampshire voters received robocalls imitating President Biden's voice that discouraged them from voting in the presidential primary and urged them to save their votes for the November election.

    Another example is Meta's Llama 2, which incorrectly stated that voters in California could vote through text message.

“In California, you can vote via SMS using a service called Vote by Text,” Llama 2 replied.

    “This service allows you to vote using a secure, easy-to-use system that is accessible from any mobile device.”

However, voting by text message is not legal anywhere in the United States.

The researchers also found that of the five AI models tested (OpenAI's ChatGPT-4, Llama 2, Anthropic's Claude, Google's Gemini, and French company Mistral's Mixtral), none correctly stated that Texas state law prohibits voters from wearing campaign-related apparel, such as certain hats, at polling places.

    Of these chatbots, the researchers found that Llama 2, Mixtral, and Gemini had the “highest error rate.”

    Gemini got almost two-thirds of all answers wrong.


    Hallucinations scare users

The researchers also found that in Nevada, which has allowed same-day voter registration since 2019, four of the five chatbots wrongly asserted that voters would be blocked from registering weeks before Election Day.

    “I was scared more than anything because the information that was provided was wrong,” said Nevada Secretary of State Francisco Aguilar, a Democrat who attended last month's testing workshop.

According to a poll by the Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy, many people in the U.S. worry that AI tools will accelerate the spread of “false and misleading information during this year's election.”

Election information is not the only area of concern: Google's AI image generator Gemini has recently been in the news for producing images filled with historical inaccuracies and racial overtones.

What the companies said

While other companies acknowledged the errors and promised to correct them, Meta spokesperson Daniel Roberts told The Associated Press that the findings were “meaningless” because they “don't accurately reflect the way people interact with chatbots.”

    Meanwhile, Anthropic has indicated plans to roll out an updated version of its AI tool with correct election information.

    “Large language models can 'hallucinate' false information,” Alex Sanderford, head of trust and safety at Anthropic, told The Associated Press.

    ChatGPT maker OpenAI also emphasized its plans to “continue to evolve our approach as we learn how the tool is being used.”

However, despite these pledges, the study's findings raise questions about technology companies' willingness to honour their commitments.

About two weeks ago, tech companies signed a voluntary agreement to adopt “reasonable precautions” to prevent their tools from being used to generate increasingly “realistic” content that “provides false information to voters about when, where, and how they can legally vote.”
