Did they determine this by comparing what DNA fragments they’ve managed to recover, or by physical skeletal structure similarities, or what?

I’m no expert in the field, but I just don’t see it.

  • originalucifer@moist.catsweat.com · 7 months ago

    i think a better way to look at it is: the chicken and the t-rex share a common ancestor

    an animal existed that branched into 2 different paths, one towards birds and the other towards big ol lizard things (conjecture, am stupid)

      • originalucifer@moist.catsweat.com · 7 months ago

        we have an obscene amount of fossils sittin around in drawers collecting dust. i cant wait til we can run all that crap through 3-d scanners, feed it into some detection LLM, and vastly expand our knowledge at a rate we are not currently capable of (toy sketch at the end of this comment).

        i read a lot of ‘random scientist finds some random fossil in a drawer, disproving some accepted fact’ stories
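
        a rough sketch of the kind of thing i mean, purely illustrative: the data, labels, and shape features below are all invented, and despite me saying ‘LLM’ this is just a plain classifier, not a language model.

        ```python
        # Purely illustrative: classify toy 3-d "scans" with crude shape descriptors.
        # All data, labels, and features here are invented; no real fossil
        # pipeline works this way.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def shape_features(points: np.ndarray) -> np.ndarray:
            """Reduce an (N, 3) point cloud to a few crude shape descriptors."""
            centered = points - points.mean(axis=0)
            # Principal-axis spreads capture elongation vs. flatness.
            eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
            extent = points.max(axis=0) - points.min(axis=0)
            return np.concatenate([eigvals, extent])

        rng = np.random.default_rng(0)
        # Toy stand-ins for scans: elongated "long bone" clouds vs. flat "plate" clouds.
        long_bones = [rng.normal(scale=[5.0, 1.0, 1.0], size=(200, 3)) for _ in range(20)]
        flat_plates = [rng.normal(scale=[3.0, 3.0, 0.3], size=(200, 3)) for _ in range(20)]

        X = np.array([shape_features(p) for p in long_bones + flat_plates])
        y = np.array([0] * 20 + [1] * 20)  # 0 = long bone, 1 = plate

        clf = RandomForestClassifier(random_state=0).fit(X, y)
        new_scan = rng.normal(scale=[5.0, 1.0, 1.0], size=(200, 3))
        print(clf.predict([shape_features(new_scan)]))  # expect [0]
        ```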

          • 𝒎𝒂𝒏𝒊𝒆𝒍@sopuli.xyz · 7 months ago

            That’s because most of what we hear about “AI” revolves around content “creation” controversies, but these models are successfully used to analyze wide data sets for scientific purposes, like predicting new protein folds, diagnosing cancer, and reading ancient burned scrolls via X-ray scans.

            • And all of those things are then analyzed and verified before anything is done with them. No reputable scientist is taking those results and dumping them straight into a paper; the deep-learning engines are pointing scientists in the right direction, taking the haystack and making it a handful. Protein folding is a little different, because the results can be directly verified programmatically (I think; I’m not an organic chemist, or biologist, or whoever is doing this research).

              The output of an LLM can make a great outline. It can also be wildly, and confidently, wrong.

            • over_clox@lemmy.world (OP) · 7 months ago

              Yeah, about that…

              Different AI models are developing in different ways. Some are learning from legit sources, curated by reputable scholars and professors and such.

              But other AI models are learning from less than reputable sites, such as Reddit…

              Google is learning from Reddit. This tech journey is gonna be fun…

            • Oh, believe me, I don’t. At all. I’ve been working in the software engineering sector since the mid ’90s; I’m quite aware of the rapid pace of change. I was briefly considering a focus on AI when getting my degree, back in the early ’90s.

              But this specifically mentions LLMs, and the fundamental way LLMs function is not going to lead to self-aware AI, or any sort of system that can self-evaluate for accuracy or “truthiness.” It’s going to take an advance in neural-net science, maybe in combination with LLMs; but LLMs by themselves will only ever be dumb machines that generate predictive text based on, I don’t know, Bayesian probabilities or whatever. (Toy illustration below.)
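
              A toy word-level bigram model makes the “predictive text” point concrete. Purely illustrative: real LLMs are transformer networks over subword tokens, not bigram counters, but the objective is the same idea of predicting a plausible next token, with no mechanism for checking whether the output is true.

              ```python
              # Toy "predictive text": a word-level bigram model. It picks the next
              # word purely from co-occurrence counts in its training text; nothing
              # in the mechanism checks whether the output is accurate.
              import random
              from collections import Counter, defaultdict

              corpus = "the chicken and the t-rex share a common ancestor".split()

              counts: defaultdict[str, Counter] = defaultdict(Counter)
              for prev, nxt in zip(corpus, corpus[1:]):
                  counts[prev][nxt] += 1

              def next_word(word: str) -> str:
                  """Sample a continuation in proportion to how often it followed `word`."""
                  options = counts[word]
                  return random.choices(list(options), weights=list(options.values()))[0]

              word = "the"
              for _ in range(5):
                  print(word, end=" ")
                  word = next_word(word)
              ```

              Sampling from those counts produces fluent-looking continuations of the training text, and at no point does anything evaluate accuracy. Scaled up enormously, that is exactly the failure mode: confident text with no built-in truth check.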

              • originalucifer@moist.catsweat.com · 7 months ago

                ha, i never meant full-on AGI or the singularity. i just meant a visual model good enough to classify what it sees in a very specific context. i never mentioned or meant to refer to ‘ai’

        • General_Effort@lemmy.world · 7 months ago

          Yes, better tools to analyze data will yield great results. Even a good push to scan all those finds and make all the data available would probably allow amazing new discoveries. The catch is that people like to hoard that data and milk it for their own careers and fame.

          That said… LLM stands for Large Language Model; by definition, a language model isn’t the kind of thing that analyzes 3-dimensional shapes. The newer AIs, like Gemini or GPT-4o, also use vision and audio but are often still called (multimodal) LLMs. That’s justifiable, as they still seem to have language at the core, but the label is getting increasingly dubious.