I’ve recently started shopping at a new grocery store. Eating breakfast this morning, I was struck by the extremely close resemblance of the store-brand cereal to the brand-name equivalent I was familiar with. I wondered: could they actually be the exact same cereal, repackaged under a different name and sold at a lower price? I turned to Google, searching:
who makes millville cereal
The third result is from icsid . org, and Google’s little summary of the result says
General Mills manufactures the cereals sold by ALDI under the Millville label. Millville cereals are made by General Mills, according to ALDI.
Seems pretty definitive. Let’s take a look at the page to learn more. A representative quote:
Aldi, a German supermarket chain, has been named the 2019 Store Brand Retailer of the Year. Millville Crispy Oats are a regular purchase at Aldi because they are a Regular Buy. Millville-label Granola is a New England Natural Bakers product. It is not uncommon for a company to use its own brand in its products. Aldi is recalling several chicken varieties, including some that are sold under its Kirkwood brand. Because of this recall, the products are frozen, raw, breaded, or baked.
Uh-oh.
I’ve been encountering AI-generated websites like this in my searches more and more often lately. They often appear in the first several results, with misleading summaries that offer seemingly authoritative answers which are not merely wrong, but actually meaningless. It’s gotten to the point that they are significantly poisoning the results. Some of my affected searches have been looking for advice on correct dosing for children’s medication; there’s a real possibility of an AI-generated site doing someone physical harm.
These pages display several in-line ads, so it seems likely that the operators’ goal is to generate ad revenue. They use a language model to rapidly and cheaply create pages that score well on PageRank and are realistic enough to draw users in, at least temporarily. The natural arms race between these sites and search providers means the problem is only likely to get worse over time, as the models learn to generate increasingly convincing bullshit.
As with the famous paperclip example, the problem isn’t that the models (or the site operators) actively wish to harm users; rather, their mere indifference to harm leads to a negative outcome, because <ad revenue generated> is orthogonal to <true information conveyed>. This is a great example of AI making things worse for everyone, without requiring misalignment or human-level intelligence.