Meta has extended its AI ambitions into the emerging field of AI-generated video with its latest creation, Movie Gen. This advanced video generator can produce fairly realistic video from brief text prompts, and while Meta says it’s designed for both Hollywood filmmakers and casual Instagram users, it’s not yet available to the public. Movie Gen is notable for its ability to create audio alongside video, making it one of the most advanced deepfake tools yet.

In a blog post, Meta showcased several examples of Movie Gen’s capabilities, including a cheerful baby hippo swimming underwater and penguins dressed in loosely “Victorian” attire that isn’t exactly historically accurate. Another clip showed a DJ performing next to a cheetah that appears too absorbed in the music to pose any threat. These videos illustrate the potential of AI-generated video, a field that has gained traction this year with entries like Microsoft’s VASA-1 and OpenAI’s yet-to-be-released Sora.

Movie Gen stands out for its ability not only to generate new video from text but also to edit existing footage, a feature that could prove revolutionary. Users can modify real-world videos or generated clips by adding backgrounds, changing outfits, or even inserting new elements based on simple text descriptions. Meta also demonstrated how images of people can be incorporated into AI-generated videos, creating entirely new narratives.

Beyond video, Meta’s tool integrates a 13-billion-parameter audio generator that can layer sound effects or music onto videos. Although audio generation is currently limited to 45-second clips, the company envisions more robust capabilities in the future, such as longer soundtracks for videos or films.

Despite these exciting advancements, Movie Gen isn’t ready for public use just yet. Meta’s chief product officer, Chris Cox, confirmed that while the technology is impressive, it’s still expensive and time-consuming to run. Meta’s whitepaper on Movie Gen describes a suite of foundation models, the largest of which is a 30-billion-parameter transformer capable of handling up to 73,000 video tokens.

Comparing Meta’s AI video technology to competitors like OpenAI is tricky, since many companies have become secretive about the data used to train their models. Meta, for its part, still shares some information about its AI tools, though the exact data sources for Movie Gen remain unclear. It’s likely that some of the training data came from videos shared on Facebook, and the company has previously used footage from its Meta Ray-Ban smart glasses for AI development.

For now, aspiring creators looking to experiment with AI-generated video will need to turn to alternatives like RunwayML’s Gen 3, which allows limited use before charging users. Meta, however, is positioning Movie Gen as a tool for filmmakers, having worked closely with industry professionals to develop it. Reports suggest that Hollywood studios, including A24, are already engaging with AI companies for future projects, while Meta is in discussions with stars like Judi Dench and Awkwafina for potential collaborations in the AI space.

Topics #AI #Artificial Intelligence #Facebook #Instagram #Mark Zuckerberg #Meta #Meta AI #Movie Gen #Movie Gen AI #Music #news #Social Media #Video Generator #WhatsApp