
Launched in early October 2025, Sora 2 allows users to create short videos from text prompts or to combine and edit other users' creations. The app is currently available only on iOS and operates on an invitation-only basis, but it has quickly surpassed both Google's Gemini and ChatGPT to top Apple's free app rankings in the US.
According to OpenAI, Sora 2 is built on a new-generation video and audio model with high fidelity, producing near-realistic scenery and synchronized sound. However, just days after its launch, numerous controversial videos appeared on the social network X (formerly Twitter), including a fake clip of CEO Sam Altman committing illegal acts.
These videos raise concerns that AI video generation could become a large-scale deepfake tool, threatening the reputations of individuals and businesses as well as information security. Experts warn that without a clear review and labeling mechanism, the line between creativity and the distortion of truth will increasingly blur.
In response, OpenAI said it has established layers of safety controls that let users decide how their likenesses are used on the platform. "We are working hard to listen to feedback and improve every day," Bill Peebles, head of the Sora project, wrote on X.
However, international lawyers say OpenAI will still face legal challenges similar to those TikTok and Meta have confronted: who is the real owner of AI-generated content? And do users have the right to delete videos that are used illegally?
Despite the controversy, Sora 2 has opened a new phase of digital media, in which videos can be created instantly from ideas. But that very capability raises a bigger question: is society ready to face a world where real and fake are just one prompt apart?