Runway unveils new hyper-realistic AI video model Gen-3 Alpha

New York-based Runway ML, also known as Runway, was one of the first startups to focus on high-quality, realistic generative AI video creation models.

But since debuting its Gen-1 model in February 2023 and Gen-2 in June 2023, the company has seen its star eclipsed by other highly realistic AI video generators, namely OpenAI’s yet-to-be-released Sora model and Luma AI’s Dream Machine model, released last week.

That changes today, however, as Runway returns to the generative AI video wars in a big way: it announced Gen-3 Alpha, which it describes in a blog post as “the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training” and “a step toward building general world models,” or AI models that can “represent and simulate a wide range of situations and interactions as they occur in the real world.” Watch sample videos created by Runway using Gen-3 Alpha below in this article:

Gen-3 Alpha allows users to create detailed, highly realistic 10-second video clips with precise control over a variety of emotional expressions and camera movements.


According to an email from a Runway spokesperson to VentureBeat, “This initial rollout will support 5 and 10 second generations with significantly faster generation times. It takes 45 seconds to generate a 5-second clip and 90 seconds to generate a 10-second clip.”

No exact release date has been given for the model yet; Runway is only showing demo videos on its website and its account on X, and it’s unclear whether Gen-3 Alpha will be available through Runway’s free tier or will require a paid subscription to access (which starts at $15 a month or $144 per year).

After this article was published, VentureBeat interviewed Runway co-founder and chief technology officer (CTO) Anastasis Germanidis, who confirmed that the new Gen-3 Alpha model would be available to paying Runway subscribers in “days” at the earliest, and that the free tier would also get access to the model at some point to be announced in the future.

A Runway spokesperson echoed that statement, emailing VentureBeat to say, “Gen-3 Alpha will be available in the coming days to paid Runway subscribers, our Creative Partners program, and Enterprise users.”

On LinkedIn, one Runway user, Gabe Michael, said he expects to gain access later this week.

On X, Germanidis wrote that Gen-3 Alpha “will be available soon in the Runway product and will power all the existing modes you’re used to (text-to-video, image-to-video, video-to-video) and some new ones that are only possible now with a more capable base model.”

Germanidis also wrote that since the release of Gen-2 in 2023, Runway has learned that “video diffusion models are nowhere near saturating performance gains from scaling, and that these models, when taught the task of predicting video, produce really strong representations of the visual world.”

Diffusion is the process by which an AI model is trained to reconstruct visuals (still or moving) of concepts from pixelated “noise,” having learned those concepts from annotated image/video and text pairs.
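For readers unfamiliar with the technique, here is a minimal, hypothetical PyTorch sketch of the denoising objective at the heart of diffusion models, shown for still images; the `model` network, the noise schedule, and the tensor shapes below are illustrative assumptions, not details of Gen-3 Alpha, whose code Runway has not released.

```python
import torch

T = 1000                                   # number of noise steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # noise schedule (assumed values)
alpha_bar = torch.cumprod(1.0 - betas, 0)  # cumulative fraction of signal kept

def training_step(model, x0):
    """One diffusion training step: the model learns to predict the noise
    mixed into a batch of clean samples x0 of shape (B, C, H, W)."""
    t = torch.randint(0, T, (x0.shape[0],))           # random timestep per sample
    noise = torch.randn_like(x0)                      # the "pixelated noise"
    a = alpha_bar[t].view(-1, 1, 1, 1)                # broadcast over image dims
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise       # forward (noising) process
    pred = model(xt, t)                               # hypothetical denoiser network
    return torch.nn.functional.mse_loss(pred, noise)  # standard denoising loss
```

At generation time, the trained network runs this process in reverse, starting from pure random noise and iteratively denoising it into a coherent image or video frame.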

Runway says in a blog post that Gen-3 Alpha was “trained jointly on videos and images” and “was the result of a cross-disciplinary team of research scientists, engineers, and artists,” though the specific datasets have not been disclosed, in keeping with the trend among most other leading AI media generators, which also don’t disclose exactly what data their models were trained on, nor whether any of it was obtained through paid licensing agreements or simply scraped from the web.

Critics argue that AI model makers should pay the original creators of their training data through licensing agreements, and some have even filed copyright infringement lawsuits to that effect, but AI companies maintain that they are legally allowed to train on any publicly available data.

A Runway spokesperson emailed VentureBeat the following when asked what training data was used in Gen-3 Alpha: “We have an internal research team that oversees all of our training, and we use curated internal datasets to train our models.”

Interestingly, Runway also notes that it is already “collaborating and partnering with leading entertainment and media organizations to create custom versions of Gen-3,” which “allows for more stylistically controlled and consistent characters, and targets specific artistic and narrative requirements, among other features.”

No specific organizations are mentioned, but filmmakers behind acclaimed and award-winning films such as Everything Everywhere All at Once and The People’s Joker have previously revealed that they used Runway to create effects for parts of their films.

Runway includes a form in its Gen-3 Alpha announcement inviting other organizations interested in getting their own versions of the new model to sign up. No price has been released for how much it costs to train a custom version of the model.

Meanwhile, it’s clear that Runway is not giving up the fight to be a dominant player in the fast-growing AI video creation space.
