For posterity: English Wikipedia is deletionist, so your burden of proof is entirely backwards. I know this because I quit English WP over it; the sibling replies are from current editors who have fully internalized it. English WP’s notability bar is very high and is not moved by the sheer quantity of sources; it has also suffered many cranks over the years, and we should not legitimize cranks merely because they publish on arXiv.
Here’s some food for thought; ha ha, only serious. What if none of this is new?
If this is a dealbreaker today, then it should have been a dealbreaker over a decade ago, when Google first rolled out Knowledge panels, which were also often inaccurate and unhelpful.
If this isn’t acceptable from Google, then it shouldn’t be acceptable from DuckDuckGo, whose page-one results include the same AI summary and panels, nor from any other search engine. If summaries are unacceptable from Gemini, which has handily topped the leaderboards for weeks, then they are unacceptable from models by any other vendor, including Alibaba, High-Flyer, Meta, Microsoft, or Twitter.
If fake, hallucinated, confabulated, or synthetic search results are ruining the Web today, then they were ruining the Web over two decades ago and have not lessened since. The economic incentives and the actors have shifted slightly, but harvesting fraudulent clicks still underlies the presentation.
If machine learning isn’t acceptable for collating search results, then search engines shouldn’t exist at all. The issue is sheer data: ever since about 1991, even before the Web took off, there has been too much data on the Internet to search exhaustively and quickly. The problem is recursive: when a user queries a popular search engine, their results are assembled by multiple different backend searchers, each using a different technique to learn what is relevant, because no single search strategy works at scale for most users asking most things.
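To make “multiple different searchers” concrete, here is a minimal sketch of one common blending technique, reciprocal rank fusion; this is not any particular engine’s code, and the strategy names and documents are invented for illustration:

    # Toy reciprocal-rank-fusion (RRF) blend of ranked lists produced by
    # several hypothetical retrieval strategies. Everything here is made up.
    from collections import defaultdict

    def rrf_merge(ranked_lists, k=60):
        """Fuse several ranked lists of document IDs into one ordering.

        A document's fused score is the sum of 1 / (k + rank) over every
        list in which it appears; k damps the influence of top ranks.
        """
        scores = defaultdict(float)
        for ranking in ranked_lists:
            for rank, doc_id in enumerate(ranking, start=1):
                scores[doc_id] += 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    # Hypothetical outputs from three different strategies for one query.
    keyword_hits    = ["doc_a", "doc_b", "doc_c"]
    link_graph_hits = ["doc_b", "doc_d", "doc_a"]
    embedding_hits  = ["doc_c", "doc_b", "doc_e"]

    print(rrf_merge([keyword_hits, link_graph_hits, embedding_hits]))
    # doc_b comes out on top: it places well in all three lists.

No single list “wins”; the blend rewards documents that several independent strategies agree on, which is the whole point of running more than one searcher per query.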
I’m not saying this to defend Google but to steer y’all away from uncanny-valley reactionism. The search-engine business model was always odious, but we were willing to tolerate it because it was very inaccurate and easy to game, like a silly automaton which obeys simple rules. Now we are approaching the ability to conduct automated reference interviews and suddenly we have an “oops, all AI!” moment as if it weren’t always generative AI from the beginning.