San Francisco, February 4, 2026, 02:04 (PST)
- Code in a Google app beta points to Gemini “screen automation” that can place orders and book rides in some Android apps
- Warning text says users stay responsible for actions and can stop the agent mid-task
- Privacy notices say screenshots may be reviewed by trained reviewers if “Keep Activity” is enabled
Google is moving closer to letting its Gemini assistant carry out tasks inside Android apps — including placing orders and booking rides — after new code surfaced in a beta version of the Google app, tech site 9to5Google reported. (9to5Google)
That would push Gemini beyond chat and into “agent” territory, where the software can act on a user’s behalf. The timing matters: companies are scrambling to make AI assistants do chores, not just answer questions, and phones are where those chores stack up.
If Google can make Gemini tap through third-party apps, it could tighten its grip on Android’s front door. It also drags the company into harder territory — mistakes that cost money, awkward edge cases, and the privacy cost of letting an assistant “see” what’s on screen.
Android Authority said strings in the Google app 17.4 beta describe a Labs feature called “Get tasks done with Gemini,” internally codenamed “bonobo,” that uses “screen automation” in certain apps. The text warns: “Gemini can make mistakes. You’re responsible for what it does on your behalf, so supervise it closely,” adding that users can stop it and take over manually. (Android Authority)
Screen automation is basically UI control: the assistant reads what is on the display, then taps and types through the same screens a person would. That makes it broader than a clean per-app integration, since it can reach any app a person could operate, and potentially messier, since it has to interpret whatever the screen happens to show.
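None of the reports say how Gemini drives the UI under the hood, but Android already exposes the building blocks for this kind of control through its accessibility APIs, and a minimal sketch makes the mechanism concrete. Everything below is illustrative: the service name and the "Place order" label are invented, and this shows the generic Android technique, not Google's implementation.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.os.Bundle
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Hypothetical service name; a real one must be declared in the app manifest
// with the BIND_ACCESSIBILITY_SERVICE permission and enabled by the user.
class ScreenAgentService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        // Read the foreground screen as a tree of nodes, the same
        // structured view a screen reader gets.
        val root = rootInActiveWindow ?: return

        // Find a control by its visible label and tap it programmatically.
        // "Place order" is an invented label, purely for illustration.
        root.findAccessibilityNodeInfosByText("Place order")
            .firstOrNull { it.isClickable }
            ?.performAction(AccessibilityNodeInfo.ACTION_CLICK)
    }

    // Helper showing the other half of "clicks and types": filling a
    // text field the way a typing user would.
    private fun typeInto(field: AccessibilityNodeInfo, text: String) {
        val args = Bundle().apply {
            putCharSequence(
                AccessibilityNodeInfo.ACTION_ARGUMENT_SET_TEXT_CHARSEQUENCE,
                text
            )
        }
        field.performAction(AccessibilityNodeInfo.ACTION_SET_TEXT, args)
    }

    override fun onInterrupt() {
        // Required override; the system can interrupt the service at any time.
    }
}
```

The brittleness is visible in the sketch: the code finds a button by its on-screen label, so a renamed button or a redesigned checkout flow silently breaks the automation. That fragility is one reason the beta text quoted above tells users to supervise the agent closely.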
Android Police reported the beta text also carries a privacy notice: “When Gemini interacts with an app, screenshots are reviewed by trained reviewers and used to improve Google services if Keep Activity is on.” It advises users not to enter login or payment information into Gemini chats and to avoid screen automation for emergencies or sensitive tasks. (Android Police)
SamMobile said the practical pitch is simple: tell Gemini to order an Uber or get food delivered, and it completes the steps in supported apps on a Galaxy phone. The site said support may start with only a limited set of apps, even if the capability expands later. (SamMobile)
The feature appears tied to Android 16 QPR3 — shorthand for a “Quarterly Platform Release,” Google’s regular feature drop cadence for Android after a major version ships. Google has not said which apps would be supported, or when a Labs experiment like this might graduate to a broad release.
Separately, a Hindustan Times report said the same beta build references a “Likeness” feature codenamed “wasabi,” linked to 3D avatars used in Google Meet calls on Android XR. The text suggests users could manage a likeness through prompts, and says the likeness “can only be used” by that user. (Hindustan Times)
The push lands in a crowded field. Apple has been positioning Siri upgrades as part of its Apple Intelligence effort, while Samsung and others are racing to bundle their own AI features on-device. Google needs Gemini to feel native — and useful — before rivals define what an “AI phone” should do.
But the risk profile is different when an assistant starts clicking “Buy” instead of drafting text. The biggest unknown is reliability: a stray tap, a misread screen, or a confused checkout flow could mean wrong orders, unintended charges, or users handing over sensitive data. And because all of this is gleaned from work-in-progress code, the feature may never ship, or may arrive in a narrower form.