Artificial General Intelligence (AGI) investor Arram Sabeti warns that the AI arms race is reaching “dangerous” territory and that humans should fear superintelligent AI.
“AGI is scary,” Sabeti wrote in a long Twitter thread as the corporate AI race reaches a fever pitch. “It confuses me how people can underestimate the risks.”
“I have invested in two AGI companies and am friends with dozens of researchers working at DeepMind, OpenAI, Anthropic and Google Brain. Almost everyone is concerned.”
Also read: GPT-4 may already be as smart as humans in math, medicine and law – Microsoft
48% of professionals are concerned about AGI
AGI is the holy grail of artificial intelligence: the point at which AI-powered systems like ChatGPT can perform the intelligent tasks that humans can, and perhaps more. Microsoft recently claimed in a report that GPT-4 may have already reached AGI.
Autonomous machines approaching or surpassing human-level intelligence are a nightmare scenario, according to Sabeti, the founder of the US food company ZeroCater. He was also the lead investor in the $6.5 million seed round of Fathom, a startup building an AI note-taker for Zoom.
Drawing an analogy to a hypothetical nuclear reactor that could produce free power, Sabeti warned of the risks that could arise from AI smarter than its human creators, and said we should fear such an eventuality.

He said that while people may be excited about such prospects, imagine if “half of nuclear engineers believe there is at least a 10% chance of a ‘very bad’ catastrophe, and safety experts think it’s over 30%.” “AGI is such a situation,” Sabeti warned.
AGI is scary. It confuses me how people can underestimate the risks.
I have invested in two AGI companies and am friends with dozens of researchers working at DeepMind, OpenAI, Anthropic and Google Brain. Almost everyone is worried.
— Arram Sabeti (@arram) April 2, 2023
To back up his claims, the entrepreneur cited data from the 2022 Expert Survey on Advancements in AI. In that survey, 48% of the 738 machine learning researchers polled said there is at least a 10% chance that advanced AI will produce very bad outcomes, such as human extinction.
About 69% of respondents said society should prioritize AI safety research more than it does now. But they generally believe AGI is still 30+ years away.
Sabeti also pointed to another study, published on the Effective Altruism Forum under the title “Existential Risk from AI.” A poll of 44 people working in AI safety found that, on average, they put the odds of a catastrophic outcome at about 30%, with some estimates well over 50%.
Tech Leaders Discuss AI Risks
Sabeti said he had lost confidence in the current AI development trajectory and called for government intervention.
“My confidence in large AI labs is waning over time,” he said.

“While it seems clear that we will see superintelligence in our lifetimes, it is not at all clear that we have reason to believe it will go well.”
He said timelines are accelerating, though when AGI will actually arrive remains the most uncertain part. Machine learning pioneer Geoffrey Hinton recently said he cannot rule out AGI within the next five years, and that it is not inconceivable that AI could wipe out humanity.
This is an absolutely incredible video.
Hinton: “That’s a problem, isn’t it? We have to seriously think about how to control it.”
Reporter: “Can you do it?”
Hinton: “I don’t know. We haven’t been there yet. But we can try!”
Reporter: “Isn’t that a little worrying?”
Hinton: “Yeah!” pic.twitter.com/S6GJk3eWXW
— Jonathan Mannhart 🔎 (@JMannhart) March 31, 2023
As MetaNews reported this week, Microsoft researchers claim that super-smart AI may already exist, citing GPT-4’s performance on difficult human-level intelligence tasks in math, medicine, and law. Other technology leaders have also been outspoken about the existential risks of AI.
OpenAI CEO Sam Altman has acknowledged the risk that AGI could kill everyone, saying it is “very important to admit it.”
Twitter owner and billionaire Elon Musk has said, “With artificial intelligence, we are summoning the demon. Mark my words, AI is far more dangerous than nukes.”
The late physicist Stephen Hawking warned: “The development of full artificial intelligence could spell the end of the human race.”
AI alignment researcher Paul Christiano has said: “Without AI alignment, AI systems could well cause irreversible catastrophes like the extinction of humanity.”
While AI adoption is on the rise and its benefits are clear, concerns about the potential dangers of hyper-intelligent AI persist. Given AI’s impact on humanity, it is important for organizations to ensure that AI systems are designed safely.