Mark Zuckerberg’s Hype for Meta AI Is as Empty as the Metaverse

    Mark Zuckerberg's tens-of-billions-of-dollars bet on the "Metaverse" has only invited mockery of the idea of an immersive virtual-reality social network at every turn. After all, his promise to add legs to users' digital avatars (which previously appeared as floating torsos) may be what most people remember about this poorly conceived project, if they think about it at all.

    But while the Metaverse failed to launch, enthusiasm for artificial intelligence quickly grew. 2023 was full of speculation about the future of tools including OpenAI's text-based ChatGPT and generative image models like Midjourney and Stable Diffusion, not to mention people exploiting the same technology to spread misinformation. Meta itself has moved away from Zuckerberg's awkward demonstrations, pivoting from his low-resolution VR tourist selfie in front of the Eiffel Tower to the eerie announcement that it had licensed the voices of Kendall Jenner, MrBeast, Snoop Dogg, and Paris Hilton for the company's new corps of AI "assistants."

    On Thursday, Zuckerberg further stoked the hype around Meta's AI play in a video update he shared on both Instagram and Threads. Looking a little sleep-deprived, the CEO said the company is "bringing the two AI research efforts closer together" in support of its long-term goals of "building general intelligence, responsibly open-sourcing it, and making it available to everyone" in daily life. The reorganization merges the company's Fundamental AI Research (FAIR) division with its GenAI product team to speed user access to AI features; as Zuckerberg noted, it also requires a massive investment in graphics processing unit (GPU) chips to provide the computing power for complex AI models. He also said that Meta is currently training Llama 3, the latest version of its large generative language model. (And, in an interview with The Verge, he acknowledged aggressively recruiting researchers and engineers to tackle all of this.)

    But does this latest effort in Meta's mission to catch up on AI really mean anything? Experts are skeptical of Zuckerberg's utopian promise that open-sourcing "artificial general intelligence" (that is, making the model's code publicly available for modification and redistribution) will contribute to the greater good, and they doubt whether Meta can actually achieve such a breakthrough. For now, AGI remains purely theoretical: an autonomous system capable of self-learning and surpassing human intelligence.

    "Let's be honest, the 'general intelligence' bit is about as fuzzy as the 'metaverse,'" David Thiel, big data architect and chief engineer at the Stanford Internet Observatory, tells Rolling Stone. He feels the open-source pledge is somewhat disingenuous because it "gives them the claim that they're being as transparent as possible about the technology." However, Thiel says, "the model they release to the public is going to be a fraction of what they actually use internally."

    Sarah Myers West, managing director of the AI Now Institute, a research nonprofit, says Zuckerberg's announcement "reads like an obvious PR tactic to garner goodwill while obscuring a potentially privacy-violating sprint to stay competitive in the AI game." She, too, finds the arguments about Meta's goals and ethics unconvincing, seeing the play here as driven by profit rather than the public good. "Meta is really pushing the boundaries of what 'open source' means in the context of AI, beyond the point where those words have any meaning (one could argue that the same is true for conversations about AGI)," she says. "So far, despite this extensive marketing and lobbying effort, the AI models released by Meta have offered very little insight or transparency into key aspects of how the systems are built."

    "I think a lot depends on Meta, or on Mark, determining what 'responsibly' in 'responsibly open source' means," says Nate Sharadin, a professor at the University of Hong Kong and a fellow at the Center for AI Safety. Language models like Llama (advertised as open source but criticized by some researchers as quite restrictive) could be used in harmful ways, Sharadin says, but that risk is mitigated because the model itself lacks "reasoning, planning, memory" and associated cognitive attributes. Yet these are exactly the capabilities the next generation of AI models is expected to have, and "certainly what we would expect from 'fully general' intelligence," he says. "I don't know on what basis Meta thinks that a fully general intelligent model can be responsibly open-sourced."

    Commenting on what this hypothetical AGI might look like, Vincent Conitzer, director of the Foundations of Cooperative AI Lab at Carnegie Mellon University and head of technical AI engagement at Oxford University's Institute for Ethics in AI, speculates that Meta could start with something like Llama and expand from there. Similar to Google's Gemini, released in December, "they're looking at large language models and possibly going in a multimodal direction, meaning making these systems capable of handling images, audio, and video," he says. (Competitor ChatGPT can also "see, hear, and speak.") Conitzer adds that while there are risks to open-sourcing such technology, the alternative of developing these models only "behind the closed doors of profit-seeking companies" poses its own problems.

    "As a society, we seem to have a lot to worry about, but we don't really have a good handle on what exactly we should be most concerned about, what we want, or what tools we have to steer things in the right direction," he says. "We really need to take action on this, because meanwhile the technology is advancing rapidly and its deployment into the world keeps increasing."

    Another issue, of course, is privacy, an area where Meta has a checkered history. "They have access to a lot of sensitive information about us, and we don't know what they're doing with it as they invest in building models like Llama 2 and 3," says West. "Meta has proven time and time again that it can't be trusted with user data, even before data breaches occur, and that's a problem endemic to LLMs. I don't know why we would turn a blind eye when they throw 'open source' and 'AGI' into the mix." Sharadin says the company's privacy policy, which covers its AI development terms, "allows them to collect a vast range of user data for the purpose of 'providing and improving our Meta products.'" Users can raise objections through the "How we use your Facebook information" setting (a little-known and rarely used form), but "there is no way to verify that data has been removed from the training corpus," he says.

    Conitzer observes that we may face a future in which AI systems like Meta's have "more detailed models of individuals than ever before," one that requires us to completely rethink our approach to online privacy. "Maybe in the past I shared some things publicly, and I didn't think those things could harm me," he says. "But little did I know that the AI would draw connections between the different things I posted, and things other people posted, and learn something about me that I actually didn't want to put out into the world."

    So Zuckerberg's enthusiasm for Meta's latest strategy in the increasingly ferocious AI wars, having entirely displaced his rhapsodizing about the glories of the Metaverse, seems to be an omen of even more invasive surveillance. And even if Meta were able to create this mythical "general intelligence," it's not at all clear what kind of AGI product would come of it. As the story of the Metaverse proved, large-scale pivots by tech giants don't always lead to truly meaningful innovation.

    And if the AI bubble bursts, Zuckerberg will no doubt move on to chase whatever hot trend comes next.


