Runway has introduced an alpha version of its Gen-3 artificial intelligence model for generating videos from text prompts and static images.
According to the announcement, the neural network excels at creating expressive human characters with a wide range of movements, gestures and emotions. Gen-3 Alpha is also trained to accurately identify key frames in a video and create smooth transitions between them.
"Gen-3 Alpha is the first model from the upcoming series, trained on a new infrastructure created for large—scale multimodal training. This is a significant improvement in accuracy, consistency and movement compared to Gen-2, as well as a step towards creating "Common Models of the World," Runway said in a statement.The Gen-3 Alpha can create videos of five and ten seconds in high resolution. The generation time is 45 and 90 seconds, respectively. This was stated by the co-founder and technical director of the company Anastasis Germanidis in an interview with TechCrunch.
There is no exact date for the public release of Gen-3. The alpha version "will soon be available in the Runway product line with support for all existing modes (text-to-video, image-to-video and video-to-video) and some new ones," Germanidis noted.
Recall that in February, OpenAI introduced Sora, a generative AI model for converting text to video. In May, screenwriter and director Paul Trillo used it to generate a music video.
Meanwhile, Google DeepMind is developing AI-based technology for generating soundtracks for videos.