- Viral Launch: OpenAI’s new AI-powered video app Sora (iOS, invite-only) debuted at the end of September 2025 and instantly topped download charts [1]. Within days it reached #1 on the App Store, with over 1 million downloads reported by OpenAI [2]. Users type text prompts (and can insert “cameos” of real people) to generate hyper-realistic short videos.
- App Store Scam Wave: Scammers rushed to exploit Sora’s hype. More than a dozen fake “Sora” or “Sora 2” apps appeared on Apple’s App Store soon after launch [3]. These impostors collectively drew ~300,000 installs and over $160,000 in revenue within days [4] [5]. (For context, a study finds fraudulent iOS apps jumped 300% in 2025, and 600% on Android, driven by AI-powered scams [6].) Apple has since pulled many clones, but a few still slip through review [7]. Tech analysts warn this reflects a broader “scam economy” where viral success instantly attracts copycats [8] [9].
- Hollywood Backlash: When OpenAI unveiled Sora 2 (Sept. 30) with a “Cameo” feature that can insert real faces and characters into videos, the entertainment industry erupted. Major studios, agencies (WME, CAA, UTA) and unions (SAG-AFTRA) demanded OpenAI stop using actors’ or characters’ likenesses without permission or pay [10] [11]. SAG-AFTRA President Sean Astin warned that Sora’s “opt-out” approach “threatens the economic foundation of our entire industry” [12]. Warner Bros. and others echoed that decades of copyright law give rights-holders control regardless of “opt-out” policies [13]. OpenAI CEO Sam Altman responded on his blog that the company will offer more granular controls and revenue-sharing for rights-holders [14] [15]. Hollywood lawyers predict a battle between “two titans,” Silicon Valley and Hollywood, over AI’s future in media [16] [17].
- Privacy & Deepfake Concerns: Users and experts have raised alarms about personal deepfakes. Washington Post tech columnist Geoffrey Fowler found friends using Sora to “deepfake” him into embarrassing videos without his approval [18] [19], and the experts he interviewed stress the need for consent. Eva Galperin (EFF) notes that such videos “are much more dangerous for some populations…Fundamentally, what OpenAI is misunderstanding is power dynamics” [20]. UC Berkeley’s Hany Farid adds: “We should have a lot more control over our identity,” but companies fear stricter controls will hurt virality [21]. And as New York Times tech columnist Brian Chen warns, with ultra-realistic video, “the idea that video could serve as an objective record of reality” may be over; we will have to view video with the skepticism we now reserve for text [22].
- Expert Opinions: Industry figures underscore the stakes. MPA chief Charles Rivkin urged OpenAI to “take immediate and decisive action,” citing copyright law [23]. Media attorney Kraig Baker (Davis Wright Tremaine) says the flood of casual AI video content will stress publicity and likeness laws, especially for deceased figures [24]. Comedian Trevor Noah warned Sora could be “disastrous” if people’s likenesses are used without consent [25]. On the flip side, OpenAI insists “you are in control of your likeness end-to-end with Sora” (touting its opt-in model) [26], though critics note the fine print has real limits.
With that backdrop, let’s delve into the full story:
What is Sora and Why It’s a Big Deal
Sora is OpenAI’s new invite-only app (on iOS, with web access) that turns text prompts, or short selfie videos (“cameos”), into vivid short video clips using generative AI. Launched in late September 2025, it quickly became a sensation: OpenAI’s own data shows over a million installs within days [27], and it raced to the top of Apple’s free app chart. The hype was fueled by demo clips of people and cartoon characters in absurd or action-packed scenarios (for example, SpongeBob speaking from the Oval Office) [28]. Users praised the uncanny realism: faces, expressions and even voices looked convincingly human [29] [30].
Sora follows OpenAI’s earlier releases (ChatGPT, DALL·E) and represents a leap in AI-generated media. On September 30, Sam Altman announced Sora 2, adding a “Cameo” mode in which anyone can record a short selfie video and then put themselves (or others) into generated videos [31]. The result: fans could see themselves in outlandish AI videos. But it also means Sora 2 by design blurs fiction and reality: type “me ordering tacos in a dance club,” and Sora will generate a video of you doing exactly that.
Fake Sora Apps Flood the App Store
As soon as Sora news broke, the App Store was flooded with impostor apps. TechCrunch reported “over a dozen” Sora-branded apps sprang up immediately after the official launch [32]. Many were existing AI or video apps that had sat in the store for months under other names and were hastily rebranded overnight as “Sora,” “Sora Video Generator,” or even “Sora 2 Pro.” They made it through Apple’s review despite using OpenAI’s trademarked name [33].
These copycats saw huge numbers: app-intelligence firm Appfigures counted roughly 300,000 installs of fake Sora apps (and $160,000 in revenue) in just the first few days [34]. One clone alone (“Sora 2 – Video Generator AI”) attracted over 50,000 installs by aggressively targeting the keyword “Sora.” (For reference, OpenAI says Sora’s official app hit 1 million downloads in the same span [35].) According to eMarketer, Apple swiftly began pulling the worst offenders, but a few “soft scams” remain live. For example, an app called “PetReels – Sora for Pets” managed a few hundred installs, and a pseudo-named “Vi-sora” coasted on the buzz to thousands [36].
Experts say this isn’t surprising; it’s part of a pattern in which every breakout tech trend draws instant scam copycats. After ChatGPT’s launch in 2022, similar “fleeceware” apps flooded stores, and 2024’s crypto surges spawned fake wallet apps [37]. According to one gadget-news analysis, fraudulent apps on iOS jumped 300% in 2025 (and 600% on Android), in large part because AI tools make cloning easy [38] [39]. The Sora clones exploited that vulnerability: one technologist warns that AI lets novices churn out fully polished apps (descriptions, icons, etc.) that can fool app reviewers and users alike [40] [41].
Consumer alert: To avoid fakes, experts suggest installing Sora only from official channels (e.g. links on OpenAI’s site) and checking the developer’s name carefully [42] [43]. Right now, genuine Sora is invite-only; any app promising instant access or paid downloads is suspect. This scam wave underscores a tough lesson: viral AI success can generate big money for fraudsters before platforms catch on [44] [45].
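For technically minded readers, that advice boils down to a check no listing should pass while riding the brand. The toy heuristic below is purely illustrative (the inputs are hypothetical, and real vetting means following links from OpenAI’s own site rather than searching the store):

```python
# Toy heuristic for the advice above: flag store listings that use the
# "Sora" name without OpenAI as the listed developer. Illustrative only.
def looks_like_fake_sora(app_name: str, developer: str) -> bool:
    rides_the_name = "sora" in app_name.lower()
    official_dev = developer.strip().lower() == "openai"
    return rides_the_name and not official_dev

print(looks_like_fake_sora("Sora 2 Pro - Video Generator", "PixelApps Ltd"))  # True
print(looks_like_fake_sora("Sora by OpenAI", "OpenAI"))                       # False
```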
Hollywood Strikes Back: Likeness Rights and Opt-Out
Meanwhile, in Hollywood, Sora 2 set off alarms. Until then, Sora could only create generic fictional people; with the cameo feature, it could mimic actual celebrities and characters by default, whether the studios liked it or not. Within hours of Sora 2’s debut, the Motion Picture Assn. (MPA) and talent agencies issued a rare joint rebuke. MPA chairman Charles Rivkin stated bluntly that “OpenAI needs to take immediate and decisive action to address this issue,” citing long-established copyright laws that protect likenesses [46]. SAG-AFTRA’s Sean Astin said the opt-out-by-default approach “threatens the economic foundation of our entire industry” [47].
In practice, Sora 2 initially drew on every public image and performance it could find, unless a rights-holder proactively opted out in OpenAI’s system. (One insider says OpenAI quietly told studios before launch to submit any characters or actors that shouldn’t appear [48].) Major agencies didn’t accept that. Within days, WME (which represents stars like Oprah Winfrey and Michael B. Jordan) notified OpenAI that all of its clients must be opted out immediately [49]. CAA and UTA joined in, demanding that actors have control and compensation. Warner Bros. Discovery publicly noted that “decades of enforceable copyright law” already give creators rights; nobody should have to opt out to block infringement [50].
Even tech defenders acknowledge the clash. OpenAI’s Varun Shetty (VP of Media Partnerships) tried a positive spin, noting fans are “creating original videos and excited about interacting with their favorite characters, which we see as an opportunity for rights-holders to connect with fans” [51]. But studios see huge risk. Lawyers warn that the next round of AI video tools (from Google, Meta, etc.) will deeply test copyright and publicity rules, and Seattle attorney Anthony Glukhov notes this fight pits Silicon Valley’s “move fast” ethos against Hollywood’s aversion to losing control of its IP [52].
OpenAI has responded that Sora 2 already has limits (it doesn’t intentionally generate famous figures except via the cameo feature) and that new guardrails are coming. Sam Altman blogged that OpenAI will add “granular controls” for rights-holders and even create revenue-sharing for the use of copyrighted content [53] [54]. The company also published a Sora 2 safety guide stating, “Only you decide who can use your cameo, and you can revoke access at any time” [55]. (In reality, as critics note, you control only the access, not the videos once they’re made, a limitation journalist Geoffrey Fowler highlighted when OpenAI admitted it “shipped it anyway” despite knowing consent was an issue [56].)
Talent and unions have vowed to monitor and enforce. SAG-AFTRA is especially vigilant after a recent scare in which an AI-generated performer (“Tilly Norwood”) was nearly signed by a talent agency, unnerving actors just as the Sora saga unfolded. One industry attorney sums it up: “The question is less about if the studios will try to assert themselves, but when and how” [57]. In short, a legal battle seems inevitable. For now, the standoff has already cooled some Sora enthusiasm in Hollywood, with many logos and likenesses now blocked by default.
Consent and Deepfake Ethics
Beyond corporate rights, Sora 2 has raised privacy and consent alarms for ordinary people. In contrast to the original, text-only Sora, Sora 2 explicitly encourages you to share your own face to enable cameos. But that system has flaws. As The Washington Post reported, you “opt in” by recording a selfie video and choosing who can use your cameo (friends, everyone, or only you) [58]. Once enabled, anyone you’ve allowed can generate videos of you doing anything just by typing prompts. The app notifies you only after a video of you has been made [59]; by then it may have been shared widely.
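For readers who think in code, that consent gap is easy to make concrete. The sketch below is a hypothetical model of the flow the Post describes, not OpenAI’s actual implementation: an audience setting gates who may generate a video, but nothing constrains what the prompt depicts, and the owner hears about the clip only after it exists.

```python
from dataclasses import dataclass, field
from enum import Enum

class Audience(Enum):
    ONLY_ME = "only_me"    # no one else may use the cameo
    FRIENDS = "friends"    # an approved circle only
    EVERYONE = "everyone"  # any user of the app

@dataclass
class Cameo:
    owner: str
    audience: Audience = Audience.ONLY_ME
    approved: set = field(default_factory=set)  # used when audience == FRIENDS

def can_generate(requester: str, cameo: Cameo) -> bool:
    """Gate checked BEFORE generation: is this requester allowed at all?"""
    if requester == cameo.owner or cameo.audience is Audience.EVERYONE:
        return True
    if cameo.audience is Audience.FRIENDS:
        return requester in cameo.approved
    return False

def generate_video(requester: str, cameo: Cameo, prompt: str) -> str:
    if not can_generate(requester, cameo):
        raise PermissionError("cameo not shared with this user")
    video = f"[AI clip of {cameo.owner}: {prompt}]"  # stand-in for the model call
    # The consent gap: the owner is told only AFTER the clip exists,
    # and nothing above constrains WHAT the prompt depicts.
    print(f"notify {cameo.owner}: {requester} made a video with your cameo")
    return video

# A friend allowed via the FRIENDS setting can depict the owner saying anything:
me = Cameo(owner="geoff", audience=Audience.FRIENDS, approved={"pal"})
print(generate_video("pal", me, "telling an off-color joke"))
```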
Fowler tested this and found it entertaining at first: friends made funny cameo videos of him on TV shows and in cooking disasters [60]. But soon a video appeared of him telling an off-color joke he never said, plausible enough to unsettle him [61]. He realized the danger: if AI can realistically fake him committing crimes or hateful acts, viewers will wonder whether it’s real. Fowler called experts like the EFF’s Eva Galperin, who cautions that casual deepfakes can be far more harmful to vulnerable people. Galperin put it bluntly: “Videos of friends shoplifting at Target are much more dangerous for some populations than others…What OpenAI is misunderstanding is power dynamics.” [62] In other words, handing out deepfake power ignores the fact that not everyone is equally protected if that power is misused.
AI researchers echo the sentiment. Hany Farid (UC Berkeley) says: “We should have a lot more control over our identity,” but he points out that every additional safeguard tends to make a social app less viral [63]. Sora’s defaults are indeed quite open: new users automatically share cameos with all “mutuals” (followers who follow back) [64], which assumes a friendly relationship that can change over time. As Galperin notes, people often trust too readily, and friendships can turn hostile.
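That default is worth pausing on: “mutuals” is just a set intersection computed from the follow graph, not an explicit grant of consent. A hypothetical one-liner makes the point:

```python
def mutuals(followers: set[str], following: set[str]) -> set[str]:
    """Accounts on both sides of the follow graph: they follow you, you follow them."""
    return followers & following

# Under the reported default, everyone in this intersection can generate
# cameo videos of you the moment they follow you back.
print(mutuals({"alice", "bob", "carol"}, {"bob", "carol", "dave"}))  # {'bob', 'carol'}
```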
OpenAI has since added cameo “guidelines” settings (you can list topics or portrayals that are off-limits) and reiterated that users can delete their cameo entirely. Sora content is also watermarked as AI-generated [65]. However, observers point out that watermarks can be cropped out with a minute of editing. And as Fowler notes, ordinary users (unlike celebrities such as Sam Altman or iJustine) won’t have time to audit every video featuring them. Even iJustine, who opened her cameo to everyone, had to actively monitor and delete offensive clips made about her [66].
Experts say this technology is forcing a cultural shift: we may have to rebuild a culture of explicit consent. One young creator argues people should ask permission before posting a friend’s cameo video publicly [67]. Fowler himself adopted a strict stance, setting his Sora cameo to “only me,” meaning only he can generate videos of himself. That cripples the social fun of the app, but he considers it a statement on digital trust: if you can’t truly have control, maybe don’t participate.
As NYT columnist Brian Chen has put it, Sora’s realism may mean “the end of visual fact”: video can no longer be assumed to be reliable evidence of reality [68]. Soon, videos posted to social media will be viewed with as much suspicion as AI-written text. In the meantime, technologists emphasize caution: verify content and be wary of AI face-swaps.
The Road Ahead: Policy and Protection
All these issues point to one thing: the rules for AI video are still being written. OpenAI is racing to add safety features, and startups like Loti are getting attention. Loti CEO Luke Arrigoni says interest is exploding because people want tools to actively manage their digital likeness, not just react after the fact; he reports his AI-identity-management service has grown 30x recently [69]. Tech and legal experts suggest new laws are likely needed. Denmark’s legislation (cited by Trevor Noah), for example, recently gave individuals explicit control over their digital image, and in the U.S. some believe right-of-publicity laws will have to evolve for AI.
One thing is clear: users, creators and platforms must work harder to set consent norms. Outcry around Sora 2 has already pressured OpenAI to tweak its policies. In future AI tools, we may see mandatory pre-approval mechanisms or more granular sharing controls built in from the start. Meanwhile, consumers should treat social media video posts skeptically and think twice about giving apps access to their face data.
Bottom Line: OpenAI’s Sora has showcased the incredible power of AI video, but also the seams where law, ethics and safety strain. Scam apps and deepfake makers have exploited its popularity; Hollywood and rights-holders are pushing back hard; and everyday people are just starting to grapple with what it means to “own” their face online. As one expert put it, our society will likely have to view all videos with a new, questioning eye [70].
Sources: Recent news and analyses on Sora from TechCrunch, Washington Post, LA Times, GeekWire, 9to5Mac, eMarketer, and others (see citations). These detail Sora’s launch stats, fake-app scams [71] [72], Hollywood objections [73] [74], and privacy/deepfake concerns with expert commentary [75] [76] [77], among other developments. All quoted experts and figures are cited accordingly.
References
1. techcrunch.com, 2. techcrunch.com, 3. techcrunch.com, 4. techcrunch.com, 5. www.emarketer.com, 6. 9to5mac.com, 7. techcrunch.com, 8. www.emarketer.com, 9. apple.gadgethacks.com, 10. www.latimes.com, 11. www.latimes.com, 12. www.latimes.com, 13. www.latimes.com, 14. www.latimes.com, 15. www.geekwire.com, 16. www.latimes.com, 17. www.geekwire.com, 18. www.washingtonpost.com, 19. www.washingtonpost.com, 20. www.washingtonpost.com, 21. www.washingtonpost.com, 22. www.geekwire.com, 23. www.latimes.com, 24. www.geekwire.com, 25. www.geekwire.com, 26. www.washingtonpost.com, 27. techcrunch.com, 28. www.latimes.com, 29. www.washingtonpost.com, 30. www.washingtonpost.com, 31. www.latimes.com, 32. techcrunch.com, 33. techcrunch.com, 34. www.emarketer.com, 35. techcrunch.com, 36. techcrunch.com, 37. www.emarketer.com, 38. apple.gadgethacks.com, 39. 9to5mac.com, 40. apple.gadgethacks.com, 41. 9to5mac.com, 42. apple.gadgethacks.com, 43. www.emarketer.com, 44. www.emarketer.com, 45. apple.gadgethacks.com, 46. www.latimes.com, 47. www.latimes.com, 48. www.latimes.com, 49. www.latimes.com, 50. www.latimes.com, 51. www.latimes.com, 52. www.latimes.com, 53. www.latimes.com, 54. www.geekwire.com, 55. www.geekwire.com, 56. www.washingtonpost.com, 57. www.latimes.com, 58. www.washingtonpost.com, 59. www.washingtonpost.com, 60. www.washingtonpost.com, 61. www.washingtonpost.com, 62. www.washingtonpost.com, 63. www.washingtonpost.com, 64. www.washingtonpost.com, 65. www.washingtonpost.com, 66. www.washingtonpost.com, 67. www.washingtonpost.com, 68. www.geekwire.com, 69. www.geekwire.com, 70. www.geekwire.com, 71. techcrunch.com, 72. www.emarketer.com, 73. www.latimes.com, 74. www.latimes.com, 75. www.washingtonpost.com, 76. www.washingtonpost.com, 77. www.geekwire.com