ChatGPT’s hallucinations are commonly viewed as failures or problems with AI systems. But one researcher is bucking the trend by pursuing ever deeper chatbot hallucinations as a means of discovery.
According to Brian Roemmele, the hallucinatory ChatGPT SuperPrompt “runs itself” and runs “infinitely”.
Endless hallucinations
A SuperPrompt designed to induce ChatGPT hallucinations can run virtually forever, says its creator.
Chatbot hallucinations are responses to user prompts that sound plausible but are factually incorrect. Sometimes the hallucinations go horribly wrong. Most ChatGPT users and commentators find these fanciful responses off-putting, and in some cases even worrying, given how plausible the wrong answers can look.
But one SuperPrompt engineer is challenging this convention by deliberately inducing hallucinations in the chatbot. Brian Roemmele created the ‘Ingo’ SuperPrompt to push the boundaries of ChatGPT’s hallucinatory abilities.
“While most of the world in the fog of ‘war’ denounces AI hallucinations as ‘pure fiction’, ‘embarrassing’ and ‘dangerous’, we bravely enter the center of this fog. We will build and explore a path forward,” Roemmele said in typically dramatic style on Twitter on Wednesday.
“AI hallucinations can be a powerful force in ‘creative’ thinking. Make no mistake: they do not replace human creative thinking, they amplify it.”
The Ingo SuperPrompt has allowed 1,000 people to see what ChatGPT can do when it reaches the limits of its knowledge and is guided to step into the unknown.
After 12 hours of running, I found this very interesting exchange.
Join us on an adventure… https://t.co/MZZUW1cvCj pic.twitter.com/dAVqdkCqf7
—Brian Roemmele (@BrianRoemmele) April 6, 2023
The wrong term
Roemmele argues that the term “hallucination” is actually a misnomer that fails to describe the phenomenon occurring within the AI. Instead, he attributes these flights of fancy to something closer to human speech patterns.
“Like a human piecing together ideas in real time, it fills in the missing pieces,” says Roemmele.
As he explains it, the process leading to hallucinations resembles the human experience of searching for the right word, failing to find it, and substituting another. The main difference is that instead of a missing word, the AI is contending with a missing fact: something it simply does not know.
Most socially adjusted humans would stop short of replacing a missing fact with an invention of their own, but the AI is unhindered by such human norms. When it runs out of facts, it simply creates new ones.
According to Roemmele, the state the AI enters when struggling with missing information is similar to hypnosis in humans, and is sometimes described as a waking or lucid dream.
Roemmele may have found a use for this factual invention, but most users would surely prefer that the AI stick to reality.
You own your own AI.
New, very small, 100% final test: running a ChatGPT 3.5-turbo-type LLM AI locally from the hard drive of a 2015+ laptop.
There is a pre-configured download, significantly smaller than most models I have seen, at just 4 GB.
Very soon! pic.twitter.com/KnZkICmGPV
—Brian Roemmele (@BrianRoemmele) April 5, 2023
Personalized AI is coming
MetaNews previously reported on Roemmele’s effort to create a chatbot that can run from a personal computer without an internet connection.
According to Wednesday’s update, the GPT-3.5-class model is almost ready for release and will take up just 4 GB of space on a local computer. Roemmele said he is working on the project, named GPT4ALL, with Andriy Mulyar.
Meanwhile, OpenAI’s GPT-5 is in the pipeline and is expected to finish training by December, at least according to developer Siqi Chen. The upgrade could produce the first chatbot capable of artificial general intelligence (AGI): the ability of an AI system to perform any intellectual task that a human can.