
    US Regulator Probes OpenAI’s ChatGPT for Spreading Lies


The U.S. Federal Trade Commission (FTC) has launched an investigation into OpenAI over possible violations of consumer protection laws, alleging that the company’s AI chatbot ChatGPT spread false information and breached data privacy rules.

The Washington Post reported that the competition watchdog sent OpenAI a 20-page letter requesting more information about the company’s business operations, including its privacy policies, AI technology, and data security measures and processes.

The letter is the latest move by regulators to scrutinize the potential risks of generative AI, a type of artificial intelligence that can create realistic and compelling text, images, and videos. ChatGPT was released last November to rave reviews, sparking an AI “arms race.”

    Also read: Google’s Bard AI chatbot can now read images and have conversations, expanding to the EU

    ChatGPT accused of harming users

According to the report, the FTC is investigating whether ChatGPT harmed people by providing incorrect answers to questions. The agency also wants to know whether OpenAI “engaged in unfair or deceptive privacy or data security practices” that caused “reputational damage” to users.

The FTC also asked OpenAI about the safeguards it has put in place to prevent its artificial intelligence models from “generating false, misleading, or derogatory statements about real individuals.”

OpenAI founder and CEO Sam Altman expressed disappointment that he first learned about the FTC investigation after it was leaked to The Washington Post. Writing on Twitter, Altman said the leak “doesn’t help build trust,” but added that the firm would work with the FTC.

“It is very important to us that our technology is safe and consumer-friendly, and we are confident that we are compliant with the law,” he said. “We protect the privacy of our users and design our systems to learn about the world, not about private individuals.”

Altman also pointed to OpenAI’s latest technology, GPT-4. He said the model “builds on years of safety research,” with the company taking more than six months from initial training to release to make it safer and better aligned.

“We are transparent about the limitations of our technology, especially when we fall short,” he said.

As of this writing, the FTC has yet to issue an official comment.

    Further legal issues for OpenAI

    The FTC investigation isn’t the only legal issue OpenAI has to worry about. As MetaNews previously reported, OpenAI has been sued for $3 billion in a class action lawsuit accusing ChatGPT creators of stealing user data.

According to the complaint, filed in California federal court on June 28, OpenAI allegedly used “stolen personal information” to “train and develop” products such as ChatGPT 3.5, ChatGPT 4, Dall-E, and Vall-E.

Last week, comedian Sarah Silverman and two other authors filed a lawsuit against OpenAI and Meta, alleging that the companies’ AI systems were trained on copyrighted material from their books without permission.

The authors allege that the companies used a “shadow library” of copyrighted material to train their AI systems, which they say constitutes copyright infringement.

Regulatory concerns

The rapid development of AI has raised concerns about the technology’s potential risks, such as bias, discrimination, and invasion of privacy. As a result, regulators around the world are starting to pay close attention to the emerging industry.

Governments are considering how existing regulations, such as those governing copyright and data privacy, can be applied to AI. They are also considering what new rules may be needed. The two main areas of focus are the data that feeds AI models and the content that AI models produce.

    In the United States, Senate Majority Leader Chuck Schumer has called for a “comprehensive law” to safeguard against AI, The Washington Post reported. He also pledged to hold a series of forums later this year aimed at “laying a new foundation for AI policy.”

    Pope Francis recently published guidelines on the responsible development of AI. China and Europe are also tightening and fine-tuning regulations on artificial intelligence.
