Sora AI Produces Eye-Popping Videos Instantly


    Sora, an impressive new generative video model created by OpenAI, can take a short text description and turn it into a minute-long, complex, high-resolution film clip.

    OpenAI, the company behind the ChatGPT chatbot and the still-image generator DALL-E, is one of many companies vying to build instant video generation. Others include startups like Runway and tech giants like Google and Meta Platforms Inc., the owner of Facebook and Instagram.

    This technology has the potential to completely replace less skilled digital artists while speeding up the work of skilled filmmakers.



    OpenAI named its new system “Sora,” the Japanese word for sky. The technology's development team, including researchers Tim Brooks and Bill Peebles, chose the name because it “evokes endless creative possibilities.”

    The company also said it has not yet released Sora to the public because it is still investigating the risks associated with the system. Instead, OpenAI is sharing the technology with a select group of academics and other external researchers who act as a “red team,” probing the system for potential exploits and harms.

    According to Dr. Brooks, the goal here is to provide a preview of what's to come so people can see the technology's capabilities and get feedback.

    OpenAI tags videos

    OpenAI already tags videos created by its system with a watermark indicating that they were generated by artificial intelligence (AI). However, the company acknowledges that these watermarks can be removed and can also be difficult to identify.

    According to OpenAI, the company is teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems requiring real-world interaction.

    OpenAI is also giving access to a number of visual artists, designers, and filmmakers to gather feedback on how to evolve the model to best serve creative professionals.

    The company collaborates with people outside OpenAI to collect feedback and shares its research progress early to help the public understand what AI will be capable of.

    Sora's development

    OpenAI declined to say how many videos the system learned from or where they came from, saying only that the training data includes both publicly available videos and videos licensed from copyright holders.

    The company has been sued several times for using copyrighted content, and, perhaps to maintain an edge over its competitors, it discloses nothing about the data used to train its technology.

    The model also has a deep understanding of language, allowing it to accurately interpret prompts and generate compelling characters that vividly convey emotion. Sora can create multiple shots within a single generated video while keeping the visual style and characters consistent.

    OpenAI shared prompts and the videos generated from them on its X handle, sparking a range of reactions from X users.

    Weaknesses of the model

    OpenAI says the current model has weaknesses. It may struggle to accurately simulate the physics of a complex scene, and it may fail to understand specific instances of cause and effect. For example, if a person bites into a cookie, the cookie may show no bite mark afterward.

    The model may also confuse the spatial details of a prompt, such as mixing up left and right, and may struggle to precisely depict events that unfold over time, such as following a specific camera trajectory.
