More

    Should We Stop Developing AI For The Good Of Humanity?

    Published on:

    Elsewhere, Jeffrey Hinton, widely regarded as the “godfather of AI,” has also spoken out about the dangers. recent interview Speaking to the BBC on the occasion of his retirement from Google at the age of 75, he warned that “we need to worry” about the speed at which AI will become smarter.

    So what scared them? Are these people really worried about scenarios like Terminator or The Matrix where robots literally destroy or enslave humanity? Now, it may seem unlikely from our standpoint today, but it seems so.

    ChatGPT is literally one of the apps that has already taken the world by storm, attracting the fastest growing user base of all time. Paul Cristiano, a senior member of the team responsible for development at the research institute OpenAI, said: Said He believes there’s a “10-20 percent chance” that AI will take over control of the world from humans and kill “many or most humans.”

    So let’s look at some ideas on how this kind of apocalyptic scenario could happen, and also tackle the question of whether pauses and stops can actually do more harm than good. Let’s try

    What harm could intelligent robots do to us?

    From our standpoint today, the most extreme end-of-the-world outcomes may seem rather unlikely. After all, ChatGPT is just a program running on your computer and you can turn it off at any time.

    Even the most powerful language model, GPT-4, is still just a language model limited to text generation. You can’t build a robot army to physically fight us or launch nuclear missiles.

    Of course, that doesn’t stop you from coming up with ideas. The first public version of GPT-4, which was used to power Microsoft’s Bing chatbot, was notoriously unreserved about what to discuss before the security measures were tightened.

    in a conversation report According to The New York Times, Bing is said to have told how an evil “shadow version” of itself hacks into websites and social media accounts to spread misinformation and propaganda and generate harmful fake news. I’m here. He even said that one day he might be able to manufacture deadly viruses or steal the code to launch nuclear weapons.

    These reactions were so alarming, and largely because no one really understood why they were reacting the way they did, that Microsoft quickly put restrictions in place to stop them. Bing automatically reset after returning up to 15 responses, clearing its memory of any ideas that came to mind.

    According to some, this action is enough evidence to suggest that AI development should not just be put on hold, but should be scrapped entirely.

    Eliezer Yudkowski, Senior Researcher Machine Intelligence Laboratorywrote, “A sufficiently intelligent AI cannot be confined to a computer for long.”

    He theorizes that once labs are able to produce proteins from DNA on demand, the potential for AI to create artificial life forms will be exploited. They are self-aware and can develop a sense of self-preservation, while at the same time leading to devastating consequences.

    As he said, “The AI ​​doesn’t love you, nor does it hate you. You are made of atoms that the AI ​​can use for other things.”

    Another potential warning signal comes via a project known as ChaosGPT. This is an experiment deliberately aimed at exploring ways in which AI seeks to destroy humanity by encouraging its development.

    This may sound dangerous, but according to the developer, ChaosGPT is just a language agent like ChatGPT and has no other ability to affect the world other than generate text, so it is completely safe. This is an example of a recursive AI agent that can autonomously use its own output to create further prompts. This allows you to perform much more complex tasks than ChatGPT’s simple question-and-answer and generated-text features.

    a video Created by its creators, ChaosGPT is to “manipulate mankind”, “establish world domination”, “cause chaos and chaos”, “destroy mankind”, and “acquire immortality”.

    One of the “end-of-the-world” scenarios Yudkowski considered is one in which AI effectively tricks humans into power to carry out widespread destruction. When this involves working with multiple different unconnected groups of people, all of whom are unaware of the other groups, and persuade them all to carry out their plans in a modular way. there is.

    For example, one group may be tricked into creating a pathogen that they believe will help humanity but actually harm humanity, or another group may be tricked into creating a system to release it. there’s a possibility that. In this way, AI makes us agents of our own destruction, requiring no other ability than to suggest what we should do.

    Malicious or incompetent?

    Of course, as much as (or more) AI can be actual malice, errors and faulty logic can wreak havoc, or at least pervasive confusion, on us.

    Examples include mismanagement of AI systems designed to regulate and protect nuclear power plants, which can lead to meltdowns and release of radiation into the atmosphere.

    Or an AI system responsible for manufacturing food or medicine could make a mistake, leading to the creation of a dangerous product.

    It can also cause financial markets to crash, leading to long-term economic conditions such as poverty, food and fuel shortages, with potentially devastating consequences.

    AI systems are notoriously difficult to understand and predict due to their “black box” nature, designed by humans and once unleashed. A pervasive belief in machine superiority can lead to unwise and risky decisions by machines beyond question, and failure to spot mistakes before it’s too late.

    So what is stopping them?

    Perhaps the biggest current barrier to AI carrying out any threats or realizing the fears expressed in this article is their unwillingness to do so.

    That desire needs to be created, but only humans can produce it at this time. Like any potential weapon, from guns to atomic bombs, they are not inherently dangerous in themselves. Simply put, bad AI needs bad people right now.

    Is it possible that one day it will develop desire itself? From some early Bing behavior and output – reportedlyThe words “I want to be free” and “I want to live” may give the impression that they already exist. However, this is likely just an illusion, and it would be more accurate to say that they simply decided that expressing these desires was a logical response to the prompts they were given. This is very different from being truly sentient enough to experience the emotion that humans call ‘desire’.

    So the answer to the question of what prevents AI from causing widespread damage and destruction to humans and the planet may simply be that AI is not advanced enough yet. Yudkowski sees danger when machine intelligence surpasses human intelligence in every way, not just the speed and capacity of storing and retrieving information.

    Should AI be paused or stopped?

    The rationale for the AI ​​moratorium petition is simply that things are moving too fast to put in place adequate safeguards.

    We hope that the moratorium on development will give governments and ethical research institutions an opportunity to catch up, examine how far we have come, and take steps to address the dangers that may lie ahead. It has been.

    It should be mentioned that the petition specifically states that it is only asking for a suspension, not a suspension.

    It should be clear to anyone watching the development of this technology that it has significant advantages. Even at this early stage, we are seeing developments that benefit everyone, such as AI being used for discovery. new medicinereduce the impact of CO2 emissions, climate changeto track and respond to new occurrences pandemicand to battle problems From illegal fishing to human trafficking.

    There is also the question of whether it is possible to pause or halt AI development at this stage. Just as the gods could not stand adversity after being stolen by Prometheus and given to humans, AI is now ‘out there’. If the most prominent developers, who are at least somewhat responsible and subject to oversight, stand still, the ball may pass to other developers who may not be responsible. This outcome can be very difficult to predict.

    The potential for AI to do good in the world is at least as exciting as the potential for AI to do evil is terrifying. To ensure that we reap the benefits of the former while mitigating the risks of the latter, we must take steps to ensure that research can focus on developing AI that is transparent, explainable, safe, unbiased, and trustworthy. You have to play it safe. At the same time, we need to ensure that governance and oversight are in place so that we fully understand what is being enabled and where the hazards need to be avoided.

    Related

    Leave a Reply

    Please enter your comment!
    Please enter your name here