
    A team of scientists from the Swiss Federal Institute of Technology Lausanne has developed a new machine learning algorithm called Cebra (pronounced “zebra”) that converts brain signals into video, essentially allowing thoughts to be turned into footage.

    The artificial intelligence tool was tested on rodents, predicting and reconstructing what they see by mapping neural activity to specific frames of a video. The study was published in the scientific journal Nature on May 3.

    “Cebra outperforms other algorithms in reconstructing synthetic data, which is important for comparing algorithms,” according to a report by Neuroscience News.

    “Its strength also lies in its ability to combine data across modalities, such as movie features and brain data, which helps limit nuisances such as changes in the data that depend on how it was collected,” the report added.


    Cebra’s 95% Accuracy

    The study from the Swiss university, also known as the École Polytechnique Fédérale de Lausanne (EPFL), came shortly after scientists at the University of Texas reported using AI to read people’s minds and convert their thoughts to text in real time.

    For their study, the EPFL researchers had Cebra learn real-time brain activity in mice as they watched a movie, as well as arm movements in primates. Some of the brain activity was measured directly with electrode probes inserted into the visual cortex region of the brain.

    The rest was acquired using optical probes in transgenic mice, whose neurons are designed to glow green whenever they are activated or receive data. Cebra used this data to learn the brain signals associated with specific frames of the movie.

    “Then we can take a new mouse whose neural data we haven’t seen and run this algorithm to predict which frames the mouse is actually seeing in the movie,” the study’s lead researcher, Mackenzie Mathis, explained in a video posted on YouTube.
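
    The pipeline Mathis describes can be sketched with the team’s open-source CEBRA Python package (pip install cebra). The data shapes, hyperparameters, and the k-nearest-neighbour frame decoder below are illustrative assumptions for a minimal example, not the study’s exact setup:

        # Minimal sketch: embed neural activity with CEBRA, then decode which
        # movie frame was on screen. All data and parameters are hypothetical.
        import numpy as np
        import cebra
        from sklearn.neighbors import KNeighborsClassifier

        # Hypothetical recording: 9,000 time bins x 120 neurons, gathered while
        # a mouse watched a 900-frame movie ten times; frame_ids labels each bin.
        neural_train = np.random.rand(9000, 120)
        frame_ids = np.tile(np.arange(900), 10)

        # Fit a label-conditioned CEBRA embedding so that time bins showing the
        # same frame land close together in the latent space.
        model = cebra.CEBRA(
            model_architecture="offset10-model",
            output_dimension=8,
            batch_size=512,
            max_iterations=2000,
        )
        model.fit(neural_train, frame_ids.astype(float))

        # Decode held-out activity: embed it, then predict the frame with a
        # simple nearest-neighbour classifier trained on the embedding.
        embedding = model.transform(neural_train)
        decoder = KNeighborsClassifier(n_neighbors=10).fit(embedding, frame_ids)

        neural_test = np.random.rand(900, 120)   # unseen activity, one pass
        predicted_frames = decoder.predict(model.transform(neural_test))

    Generalizing to a genuinely new mouse, as in the quote, additionally requires aligning embeddings across animals; the single-recording split above is a simplification.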

    The researchers were then able to convert this data into a video of their own, the EPFL assistant professor added. Her team used open-source data collected from mouse brains via electrophysiological recordings.

    “Instead of predicting each pixel, we predict the frame. The chance level is 1/900, so I think 95%+ accuracy is very exciting. Pixel-level decoding is what we plan to do next,” Mathis later told MailOnline.
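
    To put those numbers in context: guessing one frame out of 900 at random succeeds about 0.11% of the time, so 95% accuracy is roughly 850 times better than chance. A quick illustrative check of that baseline:

        # Chance level for picking 1 frame out of 900, versus random guessing.
        import numpy as np

        rng = np.random.default_rng(0)
        true_frames = np.arange(900)
        random_guesses = rng.integers(0, 900, size=900)

        chance = 1 / 900                           # ~0.0011, about 0.11%
        baseline = (random_guesses == true_frames).mean()
        print(f"chance: {chance:.4%}  random baseline: {baseline:.4%}")
        print(f"0.95 / chance = {0.95 / chance:.0f}x better than guessing")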

    AI disrupts industry

    As seen in the video, mice were shown an old black-and-white movie clip (probably from the mid-20th century) of a man running to a car to open its trunk. A second, nearly identical screen shows the same scene as Cebra reconstructs it from the mouse’s brain activity.

    According to Mathis, the AI tool was able to do this using less than 1% of the neurons in the mouse visual cortex, a region that contains roughly 500,000 neurons.

    “We wanted to show how little data, in terms of both movie clips and neural data, is needed,” she said.

    “Best of all, the algorithm can run in real time, so it takes less than a second for the model to predict an entire video clip.”

    The open question is whether what someone sees can be reconstructed from brain signals alone. According to the study, we still don’t have an answer. However, the EPFL researchers “have taken a step in that direction by introducing new algorithms for building artificial neural networks that capture the dynamics of the brain with great accuracy.”

    In the United States, scientists at the University of Texas at Austin used AI to read people’s brain scans and recreate entire stories from the scans alone; that study was also published recently.

    In that study, participants sat in a brain-scanning machine known as an fMRI while they listened to, watched, and imagined stories.

    However, concerns have been raised about accuracy, as the AI can easily be fooled if the subject decides to think about something other than the story they are listening to.
