    Meta Introduces Audio2Photoreal For Metaverse Interactions

    Meta has introduced another AI concept to the Metaverse industry. According to a recent tweet from AI educator and developer Allen T., the company has released a new framework called Audio2Photoreal.

    Audio2Photoreal is a framework for generating full-body photorealistic avatars whose gestures are naturally driven by the speaker's voice. These avatars are brought to life by combining voice audio with human gestural movement.

    Given raw speech audio from an individual, the model generates corresponding photorealistic gestures. The system consists of two generative models for the avatar: one producing facial expression codes and one producing body pose.
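    To make that two-model pipeline concrete, here is a minimal Python sketch. The class names, feature extraction, and output dimensions are all assumptions for illustration and do not reflect the actual Audio2Photoreal code or interfaces.

```python
import numpy as np

class FaceExpressionModel:
    """Stand-in for the face model: maps per-frame audio features to expression codes."""
    def __call__(self, audio_features: np.ndarray) -> np.ndarray:
        # placeholder: one 256-dimensional expression code per audio frame
        return np.zeros((audio_features.shape[0], 256))

class BodyPoseModel:
    """Stand-in for the body model: maps per-frame audio features to pose vectors."""
    def __call__(self, audio_features: np.ndarray) -> np.ndarray:
        # placeholder: one 104-dimensional body-pose vector per audio frame
        return np.zeros((audio_features.shape[0], 104))

def drive_avatar(raw_audio: np.ndarray, sample_rate: int = 48_000):
    # toy "feature extraction": slice the waveform into 30-fps frames
    frame_len = sample_rate // 30
    n_frames = len(raw_audio) // frame_len
    audio_features = raw_audio[: n_frames * frame_len].reshape(n_frames, frame_len)

    face_codes = FaceExpressionModel()(audio_features)
    body_poses = BodyPoseModel()(audio_features)
    # downstream, a renderer turns these codes and poses into a photorealistic avatar
    return face_codes, body_poses

face, body = drive_avatar(np.random.randn(2 * 48_000))  # two seconds of audio
print(face.shape, body.shape)                           # (60, 256) (60, 104)
```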

    A clip uploaded by Allen T. shows the system animating different parts of the individual, such as the mouth, hands, and face.

    The published demos include multiple generated samples, two-person conversations, a generated female avatar sample, and a guide pose used to drive the diffusion model. Allen T. added that this development will make the Metaverse fun. Judging by the comments on the post, the wider tech community seems excited. User @EverettWorld tweeted, “If the Metaverse is like this, I'll join in too!”

    But another user, @AIandDesign, fumed that they no longer really trust Meta, pointing back to the Cambridge Analytica scandal and calling the company harmful to humanity. The user added:

    “This is all so cool. I wish it wasn't Meta. I don't really trust them anymore. I'm done with Cambridge Analytica and I'm completely done with Meta. They're harmful to humanity. Literally. I'm on FB, but it's only for my family.”

    The technology behind the Audio2Photoreal concept

    The research behind Audio2Photoreal is available on arXiv, a curated research-sharing platform that allows scientists to publish their work before peer review.

    The avatar's body movements are synthesized using a diffusion model conditioned on the speech, while facial movements are produced by a separate audio-conditioned diffusion model built from the audio input.

    The body and face follow very different dynamics, however: the face is strongly correlated with the input audio, while the body is only weakly correlated with the voice.
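    To illustrate what “a diffusion model conditioned on audio” means in practice, below is a minimal, self-contained PyTorch sketch of an audio-conditioned denoiser and a DDPM-style reverse sampling loop. The network, feature dimensions, and noise schedule are assumptions chosen for brevity; this is not the paper's actual architecture, only the general conditioning idea.

```python
import torch
import torch.nn as nn

class AudioConditionedDenoiser(nn.Module):
    """Predicts the noise added to a body-pose vector, given audio features and a timestep."""
    def __init__(self, pose_dim: int = 104, audio_dim: int = 80, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + audio_dim + 1, hidden),  # +1 for a scalar timestep input
            nn.SiLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, noisy_pose, audio_feat, t):
        t_emb = t.float().unsqueeze(-1) / 1000.0          # crude timestep encoding
        return self.net(torch.cat([noisy_pose, audio_feat, t_emb], dim=-1))

@torch.no_grad()
def reverse_step(model, x_t, audio_feat, t, betas):
    """One DDPM-style reverse step: remove a little of the predicted noise."""
    beta = betas[t]
    alpha = 1.0 - beta
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]
    eps = model(x_t, audio_feat, torch.full((x_t.shape[0],), t))
    mean = (x_t - beta / torch.sqrt(1.0 - alpha_bar) * eps) / torch.sqrt(alpha)
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(beta) * noise

model = AudioConditionedDenoiser()
betas = torch.linspace(1e-4, 0.02, 1000)   # noise schedule
pose = torch.randn(1, 104)                 # start from pure noise
audio = torch.randn(1, 80)                 # per-frame audio features (e.g. mel bins)
for t in reversed(range(1000)):
    pose = reverse_step(model, pose, audio, t, betas)
print(pose.shape)                          # a pose sample conditioned on the audio
```

    Because the audio features enter the denoiser at every step, the sampled motion stays tied to the voice while the diffusion process still allows varied, natural-looking gestures, which is why the weaker audio-to-body correlation can be handled by the same family of models.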

    Meta's Audio2Photoreal enables photorealistic avatars with audio

    Importance of Audio2Photoreal in the Metaverse

    Meta's involvement in the Metaverse aims to make the ecosystem more realistic. These Audio2Photoreal avatars can use audio to reflect an individual's facial expressions and body gestures.

    This creates a connection similar to the one formed when individuals have a face-to-face conversation. Each avatar carries the person's unique physical characteristics, such as height, skin and hair color, and body shape. Working in the Metaverse ecosystem also becomes more flexible, since it does not require a webcam, video feed, or high-quality smartphone camera.

    Facebook, X, and Instagram sue Ohio to halt social media law

    In another recent development, NetChoice, a trade group representing social media platforms such as Facebook, Instagram, and X, filed a lawsuit on January 5 challenging Ohio's new social media law.

    The group's 34-page lawsuit seeks to block Ohio's social media parental notification law, which was scheduled to take effect on January 15 and applies only to accounts created after that date.

    The law requires platforms to obtain parental consent for users under 16, but the lawsuit argues it would “impose significant obstacles to the ability of some minors to speak on these websites.”
