SAN FRANCISCO, Jan 30, 2026, 07:14 PST
- Google began rolling out Project Genie, an interactive world-building prototype, to AI Ultra subscribers in the United States.
- The tool uses DeepMind’s Genie 3 world model with Nano Banana Pro and Gemini to turn text or images into explorable scenes.
- Sessions are capped at 60 seconds as Google tests controls, realism and content safeguards.
Google on Thursday began rolling out Project Genie, a Google Labs prototype that lets U.S. Google AI Ultra subscribers create and explore interactive worlds from text prompts or images, the company said. It is limited to users 18 and older.
The launch matters because Google is pushing “world models” out of the lab and into a paid, consumer-facing test. The company is betting that feedback from users will help it harden a technology it sees as useful well beyond games.
World models are AI systems that try to simulate how an environment changes as you move through it — a step toward agents that can plan actions inside a shifting scene. DeepMind has argued that kind of capability is important for AGI, shorthand for a system meant to handle many tasks rather than one narrow job.
Project Genie is powered by DeepMind’s Genie 3 and tied into Google’s Nano Banana Pro image generator and Gemini, according to Google. It starts with what the company calls “world sketching,” where users describe a setting and a main character or use an uploaded image as a base.
Users can set how they want to move — walking, riding, flying or driving — and pick a first- or third-person view before generating the world. Google also lets users remix other creations and download videos of their explorations.
Google capped world generation and navigation at 60 seconds. A Google spokesperson told The Register that the company found "high quality and consistent" output was easier to maintain at that length while keeping the tool available to more users.
In a hands-on, The Verge said the generated worlds ran at roughly 720p and about 24 frames per second, with movement controlled using familiar keyboard keys. It also flagged input lag that made some sessions feel closer to cloud gaming than a local game.
Diego Rivas, a product manager at Google DeepMind, said the release is meant “to learn about new use cases” for the technology, including visualizing scenes for filmmaking and interactive educational media. He also pointed to simpler tests, like using a photo of a child’s toy to seed a world.
Shlomi Fruchter, a DeepMind research director, cautioned that Project Genie is "not an end-to-end product" people should expect to use every day. The company has described it as a research prototype and said several features it showed off alongside Genie 3 last year are not yet included.
The rollout comes as more companies pitch their own approaches to world models, with startups like Fei-Fei Li's World Labs and AI video firm Runway also working in the area, TechCrunch reported.
But the early release still shows rough edges. Google said generated worlds may not follow prompts closely, may break real-world physics, and can leave characters less responsive, while The Verge reported the tool blocked some prompts tied to licensed characters and tightened limits around third-party content. Rivas said Genie 3 was "trained primarily on publicly available data from the web," and Google is watching feedback closely as it widens access.
Google said it plans to expand Project Genie beyond U.S. Ultra subscribers over time, without giving dates. For now, it is treating the rollout as a controlled trial — short sessions, strict limits, and a lot of data on what people actually try to build.