
OpenAI unveils a new tool that generates instant videos based on written prompts.

OpenAI, the creator of ChatGPT, stated that it is consulting with artists, policymakers, and other stakeholders before making 'Sora' available to the public.

by Jamsheera


On Thursday, the maker of ChatGPT revealed its latest advancement in generative artificial intelligence: a tool capable of instantly producing short videos in response to written prompts.

OpenAI, headquartered in San Francisco, introduced its new text-to-video generator, named Sora. It is not the first effort in this domain: Google, Meta, and the startup Runway ML have showcased similar technologies. Sora stands out, however, for its remarkable video quality.

OpenAI’s achievement garnered significant attention, particularly for the high-quality videos produced. CEO Sam Altman’s call for written prompts from social media users further highlighted the tool’s capabilities. However, along with admiration came concerns regarding the ethical and societal implications of such advanced AI technology.


A freelance photographer from New Hampshire suggested a prompt on X: “An instructional cooking session for homemade gnocchi hosted by a grandmother social media influencer set in a rustic Tuscan country kitchen with cinematic lighting.” Altman swiftly responded with a realistic video that brought the prompt to life.

The tool is not yet available to the public, and OpenAI has provided limited information about how it was developed. The company, which has faced lawsuits from authors and The New York Times over its use of copyrighted texts to train ChatGPT, has not disclosed the sources of the imagery and video used to train Sora.

In a blog post, OpenAI stated that it is collaborating with artists, policymakers, and others before releasing the tool to the public. “We are working with red teamers—domain experts in areas like misinformation, hateful content, and bias—who will be adversarially testing the model,” the company said. “We’re also developing tools to detect misleading content, such as a detection classifier capable of identifying videos generated by Sora.”

