A purported leak of 2,500 pages of internal documentation from Google sheds light on how Search, the most powerful arbiter of the internet, operates.

The leaked documents touch on topics like what kind of data Google collects and uses, which sites Google elevates for sensitive topics like elections, how Google handles small websites, and more. Some information in the documents appears to be in conflict with public statements by Google representatives, according to Fishkin and King.

  • flappy@lemm.ee

Can’t wait for self-hosted web search to become better.

    • jonne@infosec.pub

      You mean hosting your own crawler/indexer? That doesn’t really sound like a thing you could do cost-effectively.

      • interdimensionalmeme@lemmy.ml

No problem, we crowdsource the crawling, torrent-style.

We outsourced that to Google for reasonable performance reasons, but they shit the bed, so now there’s no choice but to do it ourselves.
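Something like: every peer hashes hostnames and crawls only its own slice of the URL space, then shares the resulting index with the swarm. A toy sketch of that partitioning (purely hypothetical, not any existing project’s scheme; the peer IDs and swarm size are made up):

```python
import hashlib

def responsible_peer(host: str, num_peers: int) -> int:
    """Map a hostname to the swarm peer responsible for crawling it."""
    digest = hashlib.sha256(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_peers

# Hypothetical peer 3 of a 16-peer swarm crawls only its own slice:
MY_ID, SWARM_SIZE = 3, 16

def is_mine(host: str) -> bool:
    return responsible_peer(host, SWARM_SIZE) == MY_ID

print(is_mine("example.com"))
```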

            • wanderingmagus@lemm.ee

              Veilid is a peer-to-peer network and application framework released by the Cult of the Dead Cow on August 11, 2023, at DEF CON 31.[1][2][3][4] Described by its authors as “like Tor, but for apps”,[5] it is written in Rust, and runs on Linux, macOS, Windows, Android, iOS,[6] and in-browser WASM.[7] VeilidChat is a secure messaging application built on Veilid.[1][4]

              Veilid borrows from both the Tor anonymising router and the InterPlanetary File System (IPFS), to offer encrypted and anonymous peer-to-peer connection using a 256-bit public key as the only visible ID. Even details such as IP addresses are hidden.[4]

              Source: https://en.wikipedia.org/wiki/Veilid

      • zutto@lemmy.fedi.zutto.fi

Surprisingly, it’s very doable: it requires basic technical knowledge and relatively minimal computing resources (it runs in the background on your computer).

https://yacy.net/ (GitHub: https://github.com/yacy/yacy_search_server)

I have a Tampermonkey script that tells YaCy to crawl any website I visit, and it keeps up a relatively good index of the visited sites for personal use. Combine YaCy with ~300 GB of Kiwix databases, add SearXNG as a frontend, and you have a pretty strong self-hosted search engine.

Of course you need to supplement your searches with other search engines, as YaCy does not crawl the whole web, just what you tell it to.

I encourage anyone who’s even slightly interested in this stuff to try YaCy. It’s an ancient piece of software, but it still works very well and is not an abandoned project yet!

I personally use YaCy mostly in private mode, but it does have the distributed network there as well. [Link: YaCy’s current freeworld status.]

        • jonne@infosec.pub

          Yeah, I guess the P2P component sort of solves part of the issue I was imagining by distributing indexes and crawling. I was thinking that people were trying to run all of Google on a raspberry pi at home.

        • Finadil@lemmy.world

          This is interesting, have you had it index reddit? I’m just wondering how much storage space the database takes up.

          • zutto@lemmy.fedi.zutto.fi

            Hi!

Great question! I don’t crawl Reddit myself, but this applies to other large sites as well. Reddit has at this very moment banned the IP range where I host my YaCy (Hetzner). I just looked in my index, and I do have 257k pages indexed from Reddit via a Teddit instance I used to run; that’s from before the Reddit API enshittification. Going to delete those right now.

And the way crawling works is that you define a crawl depth, which limits how much content is crawled from the site (see the sketch after the list below):

• crawl depth 0 = only the page you send YaCy to, nothing more
• crawl depth 1 = that page plus every page it links to
• crawl depth 2 = everything at depth 1, plus every page those pages link to
• …and so on for depth n
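To make those depth numbers concrete, here is a toy breadth-first crawler with a depth cap (a generic Python illustration of the concept, not YaCy’s actual code):

```python
import re
from urllib.request import urlopen

HREF = re.compile(r'href="(https?://[^"]+)"')  # naive absolute-link extractor

def crawl(start_url: str, max_depth: int) -> set[str]:
    """Collect URLs breadth-first, stopping max_depth hops from the start."""
    seen, frontier = {start_url}, [start_url]
    for _ in range(max_depth):
        next_frontier = []
        for url in frontier:
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except OSError:
                continue  # skip unreachable pages
            for link in HREF.findall(html):
                if link not in seen:
                    seen.add(link)
                    next_frontier.append(link)
        frontier = next_frontier
    return seen  # depth 0 = start page only, depth 1 = plus its links, ...

print(len(crawl("https://example.com/", 1)))
```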

I have my Tampermonkey scripts set to a crawl depth of only 1 at the moment (just set them to 2, actually; kinda curious how much more I’ll be crawling). I manually crawled some local news sites out of curiosity at the beginning. My database is currently relatively small, only around ~86.38 gigabytes according to YaCy, storing approximately 2.6 million documents in YaCy’s Solr index.

[Screenshots: YaCy memory & disk usage; YaCy Solr index size.]

YaCy has tons of options for crawling, so you can customize how much it crawls and even filter out overly large sites by setting a maximum number of documents when you start a crawl.

[Screenshot: YaCy’s interface for starting a crawl.]

The Tampermonkey script I’ve been talking about in these posts is a very simple one: https://github.com/JeremyRand/YaCyIndexerGreasemonkey
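For the curious, a script like that essentially boils down to firing one authenticated HTTP request at your local YaCy instance per page you visit. A rough Python equivalent of that single request (the port, endpoint name, parameters, and credentials below are my assumptions about a stock local install, not taken from that repo):

```python
import requests
from requests.auth import HTTPDigestAuth

# Assumed defaults for a local YaCy instance; adjust to your setup.
YACY = "http://localhost:8090"
AUTH = HTTPDigestAuth("admin", "yacy")

def send_to_crawler(url: str, depth: int = 1) -> None:
    """Ask the local YaCy instance to crawl `url`, `depth` hops out."""
    requests.get(
        f"{YACY}/Crawler_p.html",  # assumed crawl-start endpoint
        params={
            "crawlingstart": "1",
            "crawlingMode": "url",
            "crawlingURL": url,
            "crawlingDepth": str(depth),
        },
        auth=AUTH,
        timeout=10,
    )

send_to_crawler("https://example.com/some/page")
```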

Hit me up if you guys have more questions! I’m by no means an expert on YaCy, but I will do my best to answer.

      • brbposting@sh.itjust.works

        Right!

        Before his company was able to block more of Microsoft’s own tracking scripts, DuckDuckGo CEO and founder Gabriel Weinberg explained in a Reddit reply why firms like his weren’t going the full DIY route:

        “… [W]e source most of our traditional links and images privately from Bing … Really only two companies (Google and Microsoft) have a high-quality global web link index (because I believe it costs upwards of a billion dollars a year to do), and so literally every other global search engine needs to bootstrap with one or both of them to provide a mainstream search product. The same is true for maps btw – only the biggest companies can similarly afford to put satellites up and send ground cars to take streetview pictures of every neighborhood.”

Source: Ars Technica

    • Jako301@feddit.de

How is that even supposed to work? These search engines by definition need massive databases to search through. Either you need your own crawler and indexer, which is more than just inefficient, or you are limited to a relatively short list of curated static results.

      • FaceDeer@fedia.io

        Google actually was good, so there’s probably some good information in this documentation. If nothing else we can perhaps figure out what “went wrong.”

Edit: I’ve been reading the blog post by the person the leak appears to have mainly been shared with, and there’s a lot of in-depth analysis being done there, but I’m not seeing a link to the actual documents. It’s a huge article, though; I might be overlooking it.

      • brbposting@sh.itjust.works

        What it looks like beyond Google and Bing

        It would be much harder to know what exists beyond “GBY” (Google, Bing, Yandex) and how it all works without the work of Rohan “Seirdy” Kumar. For three years, Kumar has been updating a heavily annotated list of search engines with their own indexes. It is 7,000 words, but only a portion of it deals with engines offering general indexing, in the English language. You can read Kumar’s evaluation methodology for a better understanding of how he compared and assessed sites.

        What stands out? Mojeek (“it’s not bad… I’d live”) and Stract (“a useful supplement to more major engines”) are two of Kumar’s favorites. Right Dao has “very fast, good results,” in part because its crawler starts off from Wikipedia. Yep reaches farther out, showing results that link to and back from sites related to your query and also promises to share ad revenue with creators. All of them show promise, but you get the sense that they’re a second car, or a third bicycle, rather than a primary transport.

        There are far smaller-scoped engines in other sections of Kumar’s post. If you’re wondering where that one other search engine you’ve heard about is, it’s probably in the “Semi-independent indexes” section, because it uses a GBY index when its own results are not strong enough. Here, you’ll find cryptocurrency-friendly, controversy-courting-founder-having Brave, a few engines that either “resell” GBY results or stuff affiliate links into them, and “the most interesting entry,” according to Kumar, Kagi.

        Kagi requires an account and uses its own index, Teclis, in combination with Google, Bing, Yandex, Mojeek, and others, including, notably, Brave. Kagi’s founder has strong opinions on the AI-based future of search and responding to harmful searches in ways that are not “scalable.” How much of that does or does not bother you will vary, but it’s worth noting that Kagi also suffers when the GBY triumvirate is restricted.

        Ars Technica this week: Bing outage shows just how little competition Google search really has

        The referenced search engine comparison by Rohan “Seirdy” Kumar

        • Mojeek Search Engine@lemmy.ml

Can’t emphasise too much that this piece is a very necessary read for anyone who wants to know about search; not just because it says good things about us, but because of the depth of research that has been put in here. Most times you encounter an article about indexes, it’s just taking whatever a (meta)search engine says about itself, not even looking at privacy policies for “relationships with Microsoft” etc. or doing any comparative work.

        • Fish [Indiana]@midwest.social

          I’ve been using Kagi and really like it so far. It’s not good for local stuff, but afaik only Google and Bing have the resources and userbase for things like maps and reviews. It’s designed to be an ad-free ‘premium’ search engine and only earns revenue from users paying for membership.

          • NebLem@lemmy.world

OpenStreetMap’s platform is the only real way to compete with Google and Apple, which is why Microsoft, even though it has Bing Maps, has licensed resources like satellite imagery to OSM for mapping. It’s awesome in bigger population areas, but there’s still a lot to map in rural places outside the EU.

Reviews are harder. Right now the leading open platform afaik is Open Reviews (aka Mangrove Reviews), which has tie-ins to OSM projects like MapComplete. OsmAnd and OrganicMaps have open tickets to hook into that ecosystem. You’re right about the userbase problem, though; I think it (or a successor) needs ActivityPub federation to really take off. That being said, there are several active non-Google, non-free alternatives like Yelp and TripAdvisor, as well as niche sites for things like camping, parks, and schools.