US company OpenAI is expanding its product line with Sora, an app it says can create high-quality videos from short text prompts or a still photo.
OpenAI, the US company behind the ChatGPT chatbot and Dall-e image generator, on Thursday unveiled an AI application that can produce short realistic videos.
The application, called Sora, can create high-quality videos of up to a minute in length using a short text command called a prompt, OpenAI said. The application can also turn a photo into a video or extend a short video.
The tool isn't yet publicly available. OpenAI CEO Sam Altman on X, formerly known as Twitter, said the company was "offering access to a limited number of creators" in a testing phase.
He also invited users to suggest prompts on X, and then quickly posted results to the same platform. These included videos of two golden retrievers podcasting on a mountain and of a "half duck half dragon (that) flies through a beautiful sunset with a hamster dressed in adventure gear on its back."
Is Sora really different from other AI tools?
Sora is not the first text-to-video AI application on the market. Google, Meta and the startup Runway ML are among the companies that have already demonstrated similar technology.
But the high quality of videos displayed by OpenAI surprised observers while also raising concerns about the ethical and societal implications.
The Microsoft-backed company also has not disclosed what image and video sources were used to train Sora. OpenAI has been sued by The New York Times and several authors for using copyrighted works to train ChatGPT.
Tool to be tested for safety
San Francisco-based OpenAI warned that the "current model has weaknesses," such as confusing left and right or failing to maintain visual continuity throughout the length of a video.
"We'll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology," OpenAI said.
The company added that safety would be key, and that Sora would undergo adversarial testing, known as red-teaming, in which dedicated users deliberately try to make the platform fail, produce inappropriate content or otherwise go rogue.
Source: DW