AI startup Runway announced Tuesday that it will grant all users access to its Gen-1 artificial intelligence video generator, which lets users easily create videos from images and text prompts. Until now, the tool had been available only to a small number of users.
Runway, a web-based, machine-learning-powered video editor, says Gen-1 can now be used by all content creators to “generate new videos from existing videos with images or text prompts.” The company first announced Gen-1, its first AI video editing model, in February.
The software generates new videos from existing uploaded footage and adds effects driven by images or text prompts. The final video can retain aspects of the original while taking on a different style.
The wait is over.
Gen-1 is currently https://t.co/ekldoIshdw pic.twitter.com/Wm2YVOvm26
— Runway (@runwayml) March 27, 2023
What you can do with Gen-1
Runway is at the forefront of AI-powered creative tools, offering over 30 tools that help users ideate, generate, and edit content. The New York-based company helped create the popular Stable Diffusion AI image generator last year.
According to the company, Gen-1 can transform realistically shot scenes into animated renderings while preserving the “original scene proportions and motion.” Users can also edit videos by isolating subjects and changing them with simple text instructions.
Gen-1 comes with five different modes for styling your video:
Stylization: Transfer the style of an image or prompt to every frame of your video.
Storyboard: Turn your mockups into fully stylized and animated renders.
Mask: Isolate subjects in your video and change them with simple text prompts.
Render: Apply an input image or prompt to turn textureless renders into realistic output.
Customization: Customize Gen-1 with additional options to create exactly what you want.
Get started with Gen-1
If you want to create a video from existing footage, images, and text prompts using Gen-1, here is how. First, sign up for Runway: go to the company’s website, create an account, and log in.
Next, open “Gen-1: Video to Video” under AI Magic Tools, then select or upload your input video. You can upload multiple videos and images at once. From there, you can choose from three methods:
Driving image: Select or upload a reference photo to transfer its style to your input video.
Text prompt: Add a text prompt describing the style to apply to the uploaded video. You can also add multiple text prompts to generate multiple videos.
Preset: Select one of the available preset styles to apply to your input video.
After adding your text prompt, click the Generate button. Gen-1 uses your image and text inputs to produce the desired video. This process may take several minutes, depending on how many images and text prompts you uploaded.
Videos can be downloaded individually or in bulk, and edited after download using the platform’s built-in video editing tools. Add music, sound effects, and text overlays to make your videos more engaging.
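Runway runs as a web app, so the steps above are performed in the browser. As a rough illustration of the workflow only, the sketch below models the generation step as a function that assembles a job description from an input video and one of the three input methods. The tool identifier, field names, and function are hypothetical assumptions for illustration, not Runway’s actual API.

```python
# Hypothetical sketch of the Gen-1 video-to-video workflow described above.
# All identifiers and field names here are illustrative assumptions,
# not Runway's real API.

def build_gen1_job(input_video, method, value, num_outputs=1):
    """Assemble a Gen-1 style-transfer job description.

    method mirrors the three input options described above:
    "driving_image", "text_prompt", or "preset".
    """
    allowed = {"driving_image", "text_prompt", "preset"}
    if method not in allowed:
        raise ValueError(f"method must be one of {sorted(allowed)}")
    return {
        "tool": "gen-1-video-to-video",  # illustrative identifier
        "input_video": input_video,
        method: value,
        # Multiple text prompts would yield multiple output videos.
        "num_outputs": num_outputs,
    }

# Example: restyle an uploaded clip using a text prompt.
job = build_gen1_job("skate_clip.mp4", "text_prompt", "claymation style")
```

The point of the sketch is simply that each generation pairs one input video with exactly one styling method, and that adding more prompts multiplies the outputs.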
Turn your initial thoughts into finished movies, with Gen-1.
Available now https://t.co/ekldoIshdw pic.twitter.com/vlnO73BGYT
— Runway (@runwayml) March 28, 2023
Gen-1 includes a free tier for creating up to three video projects. Beyond that, Runway charges $12/month for the Standard plan and $28/month for the Pro subscription.
Runway’s AI development
Runway was founded by artists with the ambition of bringing “AI’s limitless creative possibilities” to anyone, anywhere, about anything. As MetaNews reported, the company last week announced the release of Gen-2, a more advanced AI model than Gen-1.
Also Read: AI Threatens Democracy, Experts Warn
Gen-2 is a multimodal AI system that can generate novel videos from text, images, or video clips. It has two modes: a text-to-video mode that lets you provide text prompts to create videos in any style, and a “text + image to video” mode that generates videos from combined text and image prompts.
Gen-2 combines all the features of its predecessor. It builds on generative diffusion model research by Runway research scientist Patrick Esser, published last month on Cornell University’s arXiv preprint server.