OpenAI’s “Sora 2” Abruptly Shifts Copyright Policy from “Opt-Out” to “Opt-In,” But Hollywood’s Core Concerns Remain


A firestorm of controversy surrounding OpenAI’s latest AI video generation model, “Sora 2,” has forced a significant shift in its copyright policy. The company initially presented an “opt-out” system, requiring copyright holders to request the removal of their work, a move decried by critics as “the logic of a thief.” In response to the backlash, OpenAI has reversed course, announcing a switch to an “opt-in” model where rights holders must grant permission for use. However, this policy change fails to address the fundamental problem of AI training data, leaving the entertainment industry, particularly Hollywood, with deepening concerns.


“The Logic of a Thief”: Criticism Mounts Against Opt-Out System


The initial policy unveiled alongside Sora 2’s announcement was a shock to copyright holders. It stipulated that if creators did not want their work used within the Sora application, they would have to proactively file a request to opt out.

This approach drew sharp criticism from legal experts. Ray Sealey, an attorney at KHIKS law firm, remarked, “It’s equivalent to a thief claiming, ‘I have the right to steal everything in your house because you never explicitly told me to stop.’” Simon Pullman, an attorney at Pryor Cashman, echoed this sentiment: “They are effectively declaring, ‘The moment you create your work, we have the right to use it, unless you actively refuse.’” He described this as a typical maneuver by tech companies: establish a new normal before legislation can catch up.


An Abrupt Pivot to “Opt-In” Amid Backlash


As social media was flooded with videos created by Sora 2—from custom “South Park” episodes to clips featuring Pikachu in “Saving Private Ryan”—OpenAI CEO Sam Altman announced a change in direction.

In a blog post, Altman stated, “We are hearing from many rights holders who are excited about this new form of ‘interactive fan fiction’ and believe that new engagement brings a lot of value. However, they want the right to specify how their characters are used (including not at all).” He then clarified that for the generation of existing characters, the policy would be changed to an “opt-in” model, requiring explicit permission from rights holders.

This new approach will be similar to how personal “likeness” is handled. A new feature in Sora 2 allows users to insert themselves into AI-generated videos, but they can revoke permission for others to use their likeness at any time.


Policy Change Fails to Quell Fundamental Doubts Over “Training Data”


Despite the pivot to an opt-in system, many argue that the fundamental problem remains unresolved. The core issue lies in the distinction between “output” and “input.”

Even if a rights holder does not opt in, thereby preventing their characters from being “output” (generated) by Sora 2, this does not rule out the possibility that their work has already been used as “input” (training data) for the Sora model. In other words, while Sora may be blocked from generating Darth Vader, the model may already have learned from Darth Vader and be drawing on that knowledge to generate other content. And it is virtually impossible for rights holders to audit OpenAI’s training dataset to verify that their work was not used.

Bryn Mooser, CEO of the AI film studio Asteria, calls the policy change a “sleight of hand.” “Just because you can put a filter on it, it doesn’t mean it’s not in the dataset,” he asserts, arguing that studios should be questioning the use of their IP in training data, not merely its appearance in generated output.


Is OpenAI’s True Target Social Media, Not Hollywood?


Mooser further analyzes the Sora 2 release as “a statement that OpenAI doesn’t care about Hollywood.” He suggests that the content generated by Sora 2 is likely aimed at the mass production of viral content and memes for social media, rather than serving as a high-quality tool for film and television production.

“This is a sign that they don’t care about Hollywood’s needs, the concerns of artists being replaced by AI, or copyright issues,” Mooser says. For companies like Asteria, which are attempting to integrate AI into the filmmaking process ethically, OpenAI’s moves threaten to tarnish the image of AI for the entire industry.

The entertainment industry now stands at a crossroads. Rather than simply waiting for legal rulings, guilds and rights holder organizations are being called upon to establish their own industry standards and to confront this new wave of technology with serious discussion and decisive action.