How should society address the risks of artificial intelligence? Some argue that restrictions should be applied with a light touch because the potential benefits outweigh the immediate dangers.
“The broad, sweeping proposal that we should stop working [on AI] is a bit misguided,” said Chris Gibson, co-founder and CEO of Recursion Pharmaceuticals, in a recent interview with ZDNET.
“I think it’s very important that people continue to embrace the opportunities that exist in machine learning,” Gibson said.
Related article: The 5 biggest risks of generative AI, according to experts
Gibson’s company is working with big pharmaceutical companies to use AI in drug discovery.
Gibson was responding to an open letter published in March and signed by Elon Musk, AI scholar Yoshua Bengio, and many others, calling for a moratorium on AI research while its dangers are investigated.
The petition calls for a pause to what it describes as an “out-of-control race” for AI supremacy that is creating systems their creators cannot “understand, predict and reliably control.”
Gibson singled out concerns he considers unrealistic, such as the possibility that machine-learning programs could become sentient, a scenario that scholars who have examined the question likewise regard as far-fetched.
“The work we are doing at Recursion is very interesting, training billion-parameter models that are really, really exciting in the context of biology,” Gibson told ZDNET. “But they are not sentient and they are not going to be sentient. They are far from it.”
One of Gibson’s main concerns is preserving the ability of his company and others to advance work such as drug discovery. Recursion, which partners with the likes of Bayer and Genentech, currently has five drug candidates in the clinical stages of its drug-development pipeline. The company has amassed more than 13 petabytes of information in PhenoMap, its database of “putative relationships” between molecules.
Also: “OpenAI is product development, not AI research,” says LeCun, chief AI scientist at Meta
“I think models that are held in isolation to answer really specific questions are very important for human progress,” Gibson said. Companies like his, he added, don’t want to take a six-month or year-long hiatus given how much opportunity there is to move forward.
Gibson’s company, which is publicly traded, announced in July that it had received a $50 million investment from Nvidia, whose GPU chips dominate AI processing.
Gibson gave credit to those who are concerned about AI and those demanding a halt. “There are really smart people on both sides of this issue,” he said, noting that Recursion’s co-founders stepped away from day-to-day operations years ago over concerns about the ethical challenges of AI.
Recursion advisor Yoshua Bengio is one of the letter’s signatories.
“Yoshua is great, so that gives me a little pause,” Gibson said. “But I think there are really important arguments on both sides of this debate.”
Also: The big puzzle of body and disease is starting to succumb to AI, says Recursion CEO
He said the split between positions for and against a pause “suggests caution,” adding, “But I don’t think all training and all inference of ML and AI algorithms should be suspended for a period of time.”
Gibson’s team followed up with ZDNET, noting that Bengio, in a blog post on AI risk, has distinguished between AI threats and socially useful applications such as healthcare.
Gibson joins critics including Yann LeCun, chief AI scientist at Meta, who has voiced opposition to the initiative of his friend and former collaborator Bengio.
Gibson acknowledged that some notions of risk, however improbable, need careful consideration. One is the doomsday scenario outlined by organizations such as the Future of Humanity Institute.
“In the field of AI, there are those who think that if you ask an ML or AI algorithm to maximize some kind of utility function, say, to make the world as beautiful and peaceful as possible, the algorithm could conclude, not entirely wrongly, that humans are responsible for most of the lack of beauty and lack of peace,” Gibson said.
Also: ChatGPT: What The New York Times and others are getting wrong
As a result, such a program might “do something truly terrifying.” That prospect is “probably far-fetched,” he said. “But the impact is so great that it’s important to think about it. The chances of a plane crashing when we take to the skies are slim, but the cost is so high that we definitely heed the warning.”
Gibson also said, “There are some obvious things we can all agree on today,” such as not allowing programs to control weapons of mass destruction.
“Do I support giving AI and ML algorithms access to our nuclear launch systems? Absolutely not,” he said.
On a more mundane level, Gibson believes that the problem of bias in algorithms must be addressed. “You have to be very careful with your dataset and make sure you don’t have any biases in the utility function your algorithm optimizes.”
Also: Philosopher David Chalmers says AI could have 20% chance of being sentient within 10 years
“There are more and more biases creeping into the results of these algorithms that are becoming part of our lives,” Gibson said.
In Gibson’s view, the most basic concerns should be obvious to everyone. “I think a good example is that giving an algorithm unrestricted access to the internet is risky,” he said. “Therefore, there may be some regulation of that in the near future.”
On regulation, he said, “Part of being in a high-functioning society is putting all the options on the table and having important discussions around them. We have to be careful not to extend that into broad regulation of all ML or all AI.”
A pressing concern for AI ethics is the current trend for companies such as OpenAI and Google to increasingly hide the inner workings of their programs. Gibson said he opposes any regulation that would require programs to be open-sourced. “But I think it’s very important for most companies to share some of their work with society in different ways to keep everyone moving forward,” he added.
Related article: Stability.ai Founder Explains Why Open Source Is Essential to Alleviating AI Fears
Recursion has open-sourced many of its datasets and “doesn’t rule out the possibility of open-sourcing some of its models in the future,” he said.
Ultimately, the larger issues of regulation and control come down to the will of a country’s people. A key question is how voters can be educated about AI. On that point, Gibson wasn’t optimistic.
Education is important, but “my general sense is that people don’t seem to care much about educating themselves these days,” he said.
“People who are interested in educating themselves tend to care about these things, and most other people in the world don’t, which is very unfortunate,” he said.