The Super Bowl is one of the most watched sporting events in the world and one of the best advertising opportunities around. For Google, it was a chance to showcase how its flagship AI, Gemini, can be useful. Except Gemini screwed up. It took blatantly wrong information from a single source and presented it as reliable.
Google has since deleted that part of the ad, but it’s yet another reminder that AI is not as reliable as big tech wants us to believe.
![ai-generated image of three question marks made from cheese on a basalt cutting board](https://cdn.zmescience.com/wp-content/uploads/2025/02/andandand0017_question_marks_made_of_cheese_-ar_169_-styliz_84911c6a-0779-46a8-9a6c-ace40a818922_1.png)
Not gouda enough
If you thought AI slop was something only your grandma would fall for, well, think again: Google fell for it too.
A week ago, the search giant launched a new Super Bowl ad campaign promoting “Google Workspace with Gemini,” essentially an AI upgrade to existing tools like Docs and Sheets. The campaign, called “50 stories from 50 states,” will run bespoke Gemini ads tailored for local markets, showing how Google’s AI can help businesses. The cheesy bit happened in the Wisconsin ad.
The ad featured the owner of the Wisconsin-based Cheese Mart asking Gemini for “a description of Smoked Gouda that would appeal to cheese lovers.” The pitch: if you have a shop, Google wants you to use AI instead of writing your own descriptions or hiring a copywriter. Gemini’s response was shown in the video and is still up on the Cheese Mart website.
But the answer is blatantly wrong. It stated that Gouda is “one of the most popular cheeses in the world, accounting for 50 to 60 percent of the world’s cheese consumption.”
![a screenshot showing before and after of the cheese stat](https://cdn.zmescience.com/wp-content/uploads/2025/02/Untitled26-1024x260.png)
There’s no way that’s true
Google did eventually delete that part of the ad, but only after social media users and journalists started poking fun at it. The company probably didn’t even notice the error before then.
Gouda is a popular cheese and, in fairness, there’s no reliable global cheese consumption chart (though now we’d really like one). Gouda is no doubt among the most popular cheeses, but no single cheese accounts for half of global consumption. Given how popular mozzarella and cheddar are, how varied cheese trends are around the world, and the sheer volume of Indian paneer and the fresh cheeses favored across South America, Gouda simply doesn’t have that kind of reach.
Sorry, gouda fans, but the cheese math ain’t adding up.
The Verge, which first reported the story, even asked Andrew Novakovic, a professor of agricultural economics at Cornell University, who confirmed that Gouda is “assuredly not the most widely consumed” cheese, although he did note that it could be the single most popular variant.
It’s not a hallucination, it’s just plagiarism
Google’s President of Cloud Applications, Jerry Dischler, wants you to know that this is not a hallucination. Dischler posted a note on social media saying that Gemini’s claim is grounded in “data.” In this case, the data is just a few websites that repeat the stat without citing any source.
Most likely, the stat came from cheese.com, which seems to be the highest-ranking cheese website on Google. The site doesn’t say where the figure comes from.
![a screenshot from cheese.com showing the claimed gouda stat](https://cdn.zmescience.com/wp-content/uploads/2025/02/Untitled27-1024x791.png)
So it’s not a case of the AI making stuff up, it’s a case of it taking information that others worked to produce and repeating it without verification. This is a feature, not a bug: rather than critically analyzing and verifying facts, AIs like Gemini simply amplify whatever information they find, regardless of its accuracy. This isn’t just an isolated mistake; it’s an inherent flaw in the way Google’s AI models are trained to prioritize high-ranking sources over factual integrity.
It’s also a reminder that these models don’t create information; they take human-made information and tweak it (or, as in this case, simply rewrite it). Right now, they function more like high-speed copy-paste machines crossed with word processors than true creative engines.
This is a funny AI cheese clip, but it’s also a symptom of a much bigger problem. When AI confidently presents misinformation in authoritative contexts, it reinforces false narratives, misleads users, and undermines trust in information systems. If left unchecked, this could have serious consequences, not just for marketing fluff like cheese descriptions, but for critical areas like health, science, and politics.
Granted, the Gemini writing assistant does note that it’s a “creative writing aid, and not intended to be factual,” though, of course, it does so in small print that no one reads.
The Gouda error may be trivial, but it’s a warning sign of AI’s propensity for misinformation. If Google can’t even get cheese facts right, why should we trust its AI with anything more important?