    AI is Far Worse Than Nuclear War, Says Prominent Researcher

    Artificial general intelligence (AGI) researcher Eliezer Yudkowsky says advanced AI is far more dangerous than a nuclear bomb and could kill everyone on Earth. But some of his peers believe the risks are exaggerated.

    The warning follows an open letter, recently signed by figures such as Apple co-founder Steve Wozniak, billionaire Elon Musk, and Gary Marcus, that called for the world's large-scale language AI training runs to be suspended for six months.

    “If someone builds an AI that is too powerful, in the current situation, we expect all members of the human race and all biological life on Earth to die shortly thereafter,” Yudkowsky warned in a recent article published by Time magazine.

    AI shutdown

    Yudkowsky is an American computer scientist best known for his work on “friendly AI,” a term that refers specifically to AI that produces “beneficial results rather than harmful results.” He has spent some 20 years researching AGI (roughly, AI capable of human-level reasoning) and is considered a pioneer in the field.

    In his article, Yudkowsky argues that artificial intelligence risks cannot be managed through regulation alone. He believes that the development of AI is an existential threat to humanity, and that the only way to deal with the threat is to shut it down completely.

    “Shut down all large GPU clusters (large computer farms where the most powerful AI systems are refined). Shut down all large training runs,” he suggested.

    “Put a cap on the computing power anyone can use to train AI systems…” No government or military in the world should be exempt from these stringent rules, said the co-founder of the Machine Intelligence Research Institute.

    One of the central issues Yudkowsky raises is what he describes as “the alignment problem.” Essentially, this refers to the difficulty of ensuring that an AI system’s goals and objectives match those of its human creators.

    Critics say AI systems risk developing their own goals and objectives that conflict with those of their creators, leading to disastrous consequences. Developing AI without solving the alignment problem is like building a skyscraper on an unstable foundation, he said.

    A heated nuclear debate

    Yudkowsky fears the unintended dangers of rapidly scaling up the development of super-smart technology without taking proper safety precautions. He proposes the creation of professional organizations and agencies dedicated to addressing these safety concerns.

    But the determinists’ argument is nothing new. Many experts have warned about the dangers of AI for years. In 2018, a report from the RAND Corporation warned that the development of AI could increase the risk of nuclear war.

    The report said the integration of AI, machine learning and big data analytics could dramatically improve the military’s ability to locate, track, target and destroy an enemy’s nuclear deterrent.

    Others are also joining the discussion. Former Google product leader Bilawal Sidhu says AI can either be treated like nuclear technology and locked down, or left open. He argues that open-source AI would allow good actors to police the bad ones, minimizing the harm the technology can do.

    In a series of tweets, Sidhu likened the potential of AI technology to that of nuclear power, adding that data is the new crude oil of the digital world.

    “The big data era has digitized everything, creating a treasure trove of both open and closed data. Data is the new oil, and with AI it is easier than ever to extract this crude resource,” he wrote.

    “Unlike in the past, when governments wielded this power exclusively, it is now also in the hands of individuals. For better or worse, anyone can use it. People are already showing off wild capabilities.”

    More AI capabilities

    Even geolocation, once the preserve of sophisticated government spy agencies, can now be done by private individuals. As an example, Sidhu cites an AI artist who used Instagram and public camera feeds to track down top influencers.

    What is known as “pattern-of-life analysis” has traditionally been the domain of intelligence agencies, but it too can now be done by individuals. Sidhu says the potential becomes terrifying when a far more capable large language model like GPT-4, the latest in the GPT family, is added to the mix.

    Given this terrifying prospect, it’s no wonder that OpenAI, the company behind the GPT technology, has been cautious about opening up multimodality, Sidhu says. Combining “detection and tracking algorithms” for images and video with the “reasoning” capabilities of a GPT-4-class model parsing social feeds creates a powerful Big Brother.

    He suggested that regulation should focus less on narrower model types such as image generators, arguing that they have far fewer downsides than general-purpose models like GPT-4.

    “But it’s not all rainbows and sunshine. VFX and Photoshop always come up, but image models enable disinformation at an immense scale,” he tweeted.

    Sidhu cited how even clearly labeled “VFX videos” have fooled millions of people, and said he could hardly imagine such technology falling into the hands of what he called “villains.”

    In an earlier Newsweek article, former US Secretary of State Henry Kissinger argued that AI is just as consequential as nuclear weapons but “less predictable.” Kissinger believes AI risks can be managed through international cooperation and regulation.
