SAN FRANCISCO, Jan 19, 2026, 03:34 (PST)
Google’s AI Overview feature in Search has been giving a bizarre answer to a simple question, claiming 2027 isn’t next year and pointing users to 2028 instead, according to screenshots shared by the tech site Futurism. The site also noted that the AI sometimes got the current year wrong, and Reddit users reported the glitch had been around for more than a week. [1]
This slip matters because Google aims to embed AI-generated summaries in the Search experience, putting answers front and center above the usual list of web links. In a May 2024 product announcement, Google said that AI Overviews, powered by a Gemini model tailored for Search, were rolling out to all U.S. users. The company described them as a tool to “take the legwork out of searching.” [2]
The glitch comes as publishers ramp up complaints that AI summaries are siphoning their clicks. On Jan. 13, Google submitted a court filing asking a judge to toss out a lawsuit from Penske Media Corp, publisher of Rolling Stone, Billboard, and Variety. Google argued that its AI-generated summaries are integrated into Search and still direct users to publishers’ pages through search results, according to a Reuters report. [3]
Filmogaz, another tech outlet, framed the calendar error as one example in a series of slip-ups by consumer chatbots. The outlet noted that OpenAI’s ChatGPT and Anthropic’s Claude both stumbled at first but then corrected their answers, whereas Google’s Gemini 3 got it right off the bat. [4]
Earlier this month, Elon Musk, founder of xAI, chimed in with a quick jab. NDTV Profit reported that Musk responded on X with “Room for improvement” after a user shared a screenshot of Google’s AI Overview mistakenly saying the next year isn’t 2027. [5]
Google has previously pulled back on the feature after strange and wrong answers surfaced online. In a May 2024 update, the company acknowledged that some AI Overviews were “odd, inaccurate or unhelpful” and said it had implemented more than a dozen technical fixes. It also reported content policy violations in fewer than one in every 7 million unique queries that included an AI Overview. [6]
The wrong year is a straightforward, easy-to-spot error that critics argue can erode trust in AI-generated answers—especially when those answers appear first in search results and sound authoritative.
The episode also illustrates why “hallucination” has taken hold as a term in AI discussions: shorthand for a system producing information that sounds confident but isn’t grounded in fact.
Whether users dismiss the error as a meme or read it as a serious warning will depend on how often such mistakes surface, and on how quickly Google can keep a wrong answer from becoming the top result.