The U.S. is beginning to consider AI regulation, following China and Europe. AI has become the tech buzzword of the year, and the industry's giants are racing to embrace it.
Since its launch in November, ChatGPT has become the fastest-growing consumer app in history, surpassing 100 million monthly active users.
2/ 🤔 The United States is researching the possibility of rules to regulate AI
-NTIA is studying potential rules for AI regulation.
-Seeking information on AI accountability mechanisms.
-NTIA advises the White House on communications and information policy.
—NextBigWhat (@nextbigwhat) April 12, 2023
Following the success of ChatGPT, industry leaders such as Google, Microsoft, Baidu and Alibaba began developing similar products.
But AI's rapid, burgeoning growth has also caught the attention of regulators.
The National Telecommunications and Information Administration, an agency under the Department of Commerce that advises the White House on telecommunications and information policy, is seeking feedback on the need for AI “accountability mechanisms.”
Clearly, regulatory interest in this area has grown in step with the number of ChatGPT users.
US wants trustworthy and safe AI
Authorities are exploring possible measures that can be implemented to ensure AI systems are trustworthy, legal, ethical, effective and secure.
“Responsible AI systems can bring enormous benefits, but only if the potential consequences and harms are addressed. For these systems to reach their full potential, companies and consumers must be able to trust them,” said Alan Davidson, NTIA administrator.
Last week, President Joe Biden said it remains to be seen whether AI is dangerous.
“Tech companies have a responsibility to make sure their products are safe before they go public,” Biden said.
The agency, citing “increasing regulatory interest” in AI, will draft a report examining “efforts to ensure that AI systems work as claimed and without harm.”
The report is intended to inform the Biden administration’s push for a cohesive federal approach to AI-related risks and opportunities.
You’re right. The moment you say, “We know you undervalue us and would replace us with AI if you could, so here’s how to regulate it,” you’re admitting that something is deeply broken. So much for a negotiation conducted in good faith.
— Mark Harris (@MarkHarrisNYC) March 22, 2023
Meanwhile, the Center for Artificial Intelligence and Digital Policy has urged the U.S. Federal Trade Commission to halt the release of OpenAI’s GPT-4, calling it “biased, deceptive and a risk to privacy and public safety.”
Public favors oversight over a pause
More than 1,000 tech leaders have signed an open letter calling for a pause on large-scale AI development and training until developers can better understand how these systems work.
Elon Musk and Steve Wozniak were among the more than 1,377 signatories who voiced concern about the pace of AI development.
Other signatories include AI experts from Google and Meta, renowned computer science professor Stuart Russell, and Turing Award winner Yoshua Bengio.
The letter also includes technology company CEOs and leading scientists.
But the public seems to prefer the government studying AI development over pausing it.
“Glad they’re starting on this! I don’t think pausing development is a good idea, but it’s really good that the government is looking for ways to deal with this phenomenon,” one Redditor wrote.
Most of the US government couldn’t even spell AI.
— Douglas Karr (@douglaskarr) April 5, 2023
Another Redditor commented on the challenge of regulating AI in a timely manner: “The pace at which laws are discussed, debated and written… if you’ve heard the phrase ‘like molasses in January’…”
The same user likened trying to regulate AI to “trying to catch water in a sieve,” suggesting that new advances may outpace regulations that are slow to draft and implement.
The European Union has already announced plans to limit the spread of targeted political advertising based on personal characteristics.
Meanwhile, the UK has released proposals to regulate AI, with a focus on ensuring transparency and accountability in the use of such technology.
“We failed to properly regulate social media from the beginning, and it has all but swallowed us whole,” tweeted Brennan Gilmore.
How the US shapes the future of AI remains to be seen.