SAN FRANCISCO, Jan 19, 2026, 03:34 (PST)
Google’s AI Overview feature in Search has been spitting out a wrong answer to a basic question, telling users that 2027 is not next year and pointing to 2028 instead, according to screenshots highlighted by tech site Futurism. The outlet said the response also appeared to misstate the current year in some cases, and that Reddit posts suggested the error had been showing up for more than a week. (Futurism)
The slip matters because Google is trying to make AI-generated summaries a routine part of how people use Search, with answers placed above the familiar list of web links. In a May 2024 product post, Google said AI Overviews run on a Gemini model customized for Search and were rolling out to all U.S. users, with the company pitching them as a way to “take the legwork out of searching.” (Blog)
It also lands amid growing pressure from publishers who say the summaries siphon off clicks. In a Jan. 13 court filing, Google asked a judge to dismiss a lawsuit brought by Penske Media Corp, publisher of Rolling Stone, Billboard and Variety, arguing that its AI Overviews are part of Search and still let users reach publishers’ pages through search results, a Reuters report said. (Reuters)
Another tech site, Filmogaz, cast the calendar slip as part of a broader pattern for consumer chatbots, saying OpenAI’s ChatGPT and Anthropic’s Claude initially made similar mistakes before correcting themselves, while Google’s Gemini 3 answered the question correctly. (Filmogaz)
The episode had already drawn drive-by commentary from Elon Musk earlier this month. NDTV Profit reported that Musk, who founded xAI, replied “Room for improvement” on X after a user posted a screenshot showing Google’s AI Overview giving an incorrect response about what year comes next. (NDTV Profit)
But Google has had to rein in the feature before, after early examples of bizarre and inaccurate answers spread online. In a May 2024 update, the company acknowledged that “odd, inaccurate or unhelpful” AI Overviews did appear, said it had rolled out more than a dozen technical improvements, and added that it found content policy violations in “less than one in every 7 million unique queries” where AI Overviews showed up. (Blog)
The year mix-up is the kind of simple, checkable mistake that critics say can undercut trust in machine-written answers, especially when they sit at the top of a search page and carry the tone of authority.
It also shows why “hallucination” has become a common word in the AI debate: blunt shorthand for when a system produces information that sounds confident but isn’t grounded in fact.
Whether users shrug it off as a meme or treat it as a warning may depend on how often errors like this surface, and how quickly Google can stop a wrong answer from becoming the first thing people see.