    Meta infrastructure focus key to AI and metaverse ambitions

    Social media and technology giant Meta is keen to push further into AI and the metaverse, building a solid hardware and software infrastructure for long-term success.

    Over the past two weeks, Meta and its Meta AI division have revealed a number of plans for their products and internal infrastructure, including an AI supercomputer, data center and AI coding assistant platform.

    Facebook’s parent company also revealed for the first time that it has developed an AI chip, the Meta Training and Inference Accelerator (MTIA).

    pivot

    R “Ray” Wang, founder and principal analyst at Constellation Research, said Meta’s focus on infrastructure bodes well for the company’s growth and longevity.

    “Meta is in the right place now,” he said.

    Wang said the investment will allow the company to move away from the intensive focus it has placed in recent years on the metaverse world of virtual and augmented reality applications and steer more toward AI.

    The turnaround comes after Meta poured roughly $36 billion into its Reality Labs division to build the metaverse, an investment that has so far yielded little return.

    However, Meta has been using AI recommenders and other systems for almost 20 years, so its commitment to AI technology is nothing new.

    For example, Facebook’s news feed, which has long leveraged AI, launched in 2005. In 2016, Meta also open-sourced PyTorch, a machine learning framework for deep neural networks and deep learning research that underpins Facebook’s AI workloads. Last December, Meta released PyTorch 2.0.

    “This is just an evolution for us. The pace of innovation is picking up really fast,” Meta’s vice president of engineering, Aparna Ramani, said during a streamed panel discussion at Meta’s @Scale conference on May 18.

    Meta’s current focus on using automation and AI to create efficiencies is “future-proof and smart,” Wang said.

    Wang said even Meta’s recent job cuts were the right move for the company’s future, adding that while the company had grown somewhat bloated on the talent front, it can now focus on retaining the right people.

    Chief Executive Mark Zuckerberg said in March that the company planned to cut about 11,000 jobs by May. Some of those positions were eliminated in April, with more layoffs expected to take place next week.

    “Now they have to prioritize what to do with the network,” Wang said.

    This means Meta can continue to work quietly on the metaverse, out of the public eye, while building a solid infrastructure to support both its AI and metaverse efforts.

    “AI is the foundation of the metaverse, so we can do both at the same time,” Wang said. “We need to harden the infrastructure of the metaverse.”

    custom AI chip

    Building that foundation starts with silicon.

    MTIA is Meta’s in-house custom accelerator chip, designed to help the tech giant improve the performance and efficiency of each workload while working alongside GPUs.

    Meta aims to use MTIA to improve the user experience of its Facebook, Instagram and WhatsApp applications.

    According to Meta, the accelerator can help deliver more accurate and engaging predictions, increase watch time and improve click-through rates.
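To give a concrete sense of the kind of workload such an accelerator targets, recommendation serving ultimately boils down to scoring each candidate item, for example with a logistic model, and ranking by predicted click-through rate. The sketch below is purely illustrative: the feature names and weights are invented, and production ranking systems use far larger neural networks than this toy logistic regression.

```python
import math

def predict_ctr(weights, features, bias=0.0):
    """Score one candidate ad/post: logistic regression over its features."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability of a click

# Hypothetical weights learned offline; feature names are invented for illustration.
weights = {"past_clicks": 0.8, "topic_match": 1.2, "ad_fatigue": -0.5}

candidates = {
    "ad_a": {"past_clicks": 1.0, "topic_match": 1.0, "ad_fatigue": 0.0},
    "ad_b": {"past_clicks": 0.2, "topic_match": 0.1, "ad_fatigue": 1.0},
}

# Rank candidates by predicted click-through rate, highest first.
ranked = sorted(candidates, key=lambda c: predict_ctr(weights, candidates[c]),
                reverse=True)
print(ranked)  # ad_a outranks ad_b
```

At Meta’s scale this scoring step runs billions of times a day, which is why moving it onto purpose-built silicon rather than general-purpose CPUs or GPUs can pay off.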

    MTIA meets the needs of developer workloads that CPUs and GPUs can’t, Meta said. Additionally, its software stack is integrated with PyTorch.

    Gartner analyst Chirag Dekate said MTIA is a way for Meta to move into the next era of specialization.

    GPUs are flexible, but powering modern generative AI techniques and large language models requires more computing power than ever before. As a result, technology giants have started designing their own specialized accelerators, such as Google’s TPU and now Meta’s MTIA, to handle these much larger models.
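A back-of-the-envelope calculation shows why these larger models outgrow general-purpose hardware. A widely used rule of thumb from the scaling-law literature (an approximation, not an exact figure) puts training compute at roughly 6 floating-point operations per parameter per training token:

```python
def train_flops(n_params, n_tokens, flops_per_param_token=6):
    """Approximate training compute: ~6 FLOPs per parameter per token
    (forward plus backward pass), a common scaling-law rule of thumb."""
    return flops_per_param_token * n_params * n_tokens

# Illustrative sizes: a 65B-parameter model (LLaMA's largest) on ~1.4T tokens.
flops = train_flops(65e9, 1.4e12)
print(f"{flops:.2e} FLOPs")  # -> 5.46e+23 FLOPs
```

Hundreds of sextillions of operations per training run is the scale that pushes companies toward purpose-built accelerators rather than relying on general-purpose chips alone.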

    “They use some of these neural networks to identify commonalities across workload combinations and create purpose-specific cases,” Dekate added.

    Meta’s new AI silicon chips also aim to be more AI-native, he said.

    “It’s not yesterday’s technology,” Dekate continued. “It’s about innovating tomorrow’s model platforms, model products, and model ecosystems.”

    For example, Meta’s metaverse strategy calls for a highly immersive experience and ecosystem. This could include not only VR/AR headsets, but also avatar worlds with more and better language options and more realistic movement. However, the current infrastructure makes it difficult to add advertising platforms to the metaverse ecosystem.

    Meta will therefore likely evolve its hardware strategy, developing chip families that enable faster training and inference for generative AI and multimodal models, which in turn will let the company create better metaverse experiences, Dekate said.

    “These experiences require stitching together visual models, audio models, and NLP [natural language processing] techniques,” he said.

    “We are not just solving generative AI techniques,” Dekate added. “It’s about building a larger AI-native ecosystem that uses many of these technologies as building blocks, especially as Meta sharpens its vision toward the metaverse.”

    looking to the future

    However, building a custom chip is an expensive undertaking that only deep-pocketed companies like Meta, Google and AWS can afford.

    “The scale of AI in their organization is huge, and more importantly, they understand exactly what issues need to be addressed not only today, but also in an AI-first future,” Dekate said.

    These issues include how to optimize Meta’s language models and its platforms (Facebook, Instagram, WhatsApp and others) for targeted advertising. As a technology company with such broad social reach, Meta must use video, audio and images to deliver the right ads to the right demographics, while ensuring its language models can scale to many of the world’s languages.

    Meta is using what it has learned from these platforms to create future large-scale immersive platforms, including one for the metaverse, Dekate said.

    Part of this strategy includes next-generation data centers. Meta said the new data center will have an AI-optimized design that supports water-cooled AI hardware and high-performance AI networks.

    Meta also revealed that it has completed the second phase of its AI supercomputer, the Research SuperCluster (RSC), which lets the company train large-scale AI models such as the LLaMA large language model.

    Earlier this year, Meta made LLaMA available as an open source model, a direction Microsoft, Google and OpenAI, the creator of ChatGPT, have shied away from because of the risk of such models being misused.

    “By open sourcing LLaMA, Meta hopes to accelerate innovation,” said Cambrian AI analyst Karl Freund.

    Despite criticism of the decision to open source the technology, Meta’s handling of LLaMA shows how the company hopes to rise to the top of the AI industry.

    “Meta uses AI in all of its products and wants to be the leader in creating new LLMs,” Freund said, adding that the company plans not only to develop products for internal use, but also to build large-scale AI models and release them as open source to drive industry-wide adoption of Meta’s technology.

    In a statement provided to the media, Zuckerberg said the company has spent years building advanced infrastructure for AI, that the effort reflects a long-term commitment, and that Meta will continue to advance the technology so it can be used more effectively.

    Esther Ajao is a news writer covering artificial intelligence software and systems.
