The probe homes in on one of Tesla’s most eyebrow-raising decisions regarding its driver-assistance package: the insistence on relying exclusively on camera sensors instead of the LiDAR and radar used by its competitors, which CEO Elon Musk has long derided as a “crutch.”

In 2022, the company went all-in on cameras, ditching ultrasonic sensors in its vehicles altogether. That decision could prove to be a major mistake as Tesla struggles to catch up with its competition, having promised robust self-driving capabilities to owners whose cars may lack the necessary sensor hardware.

  • elgordino@fedia.io · 19 days ago

    May? This has been obvious for ages. There are Waymo taxis doing a reasonable job now thanks to, at least in part, having appropriate sensors. The Tesla approach of just video is never going to cut it, especially in more hazardous weather conditions.

        • vxx@lemmy.world · 18 days ago

          There have been two software recalls this year: one in February, because two cars hit the same truck within a couple of minutes, and one in June, because a car hit a pole at low speed.

          Well, that and they were honking at each other in the parking lot.

    • Rizo@sh.itjust.works · 18 days ago

      I still wonder if the “attack surface” of a camera-based system could include a “Looney Tunes” approach… painting a tunnel on a wall… 🤔 Does anybody know?

      • CommanderCloon@lemmy.ml · 18 days ago

        I’m not very informed on the subject, but I would assume they use multiple cameras and parallax for depth perception, which would most likely prevent that issue.
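For what it’s worth, the parallax idea is standard: with two cameras a known baseline apart, depth follows from the pinhole relation Z = f·B/d. A minimal sketch with hypothetical numbers (not Tesla’s actual camera geometry):

```python
# Hypothetical stereo-parallax sketch: an object's horizontal shift
# between two camera images (disparity) is inversely proportional to
# its distance, given focal length and the baseline between cameras.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or matching failed")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, cameras 0.3 m apart, 15 px of disparity
print(depth_from_disparity(1000.0, 0.3, 15.0))  # 20.0 (metres)
```

A painted wall defeats texture-matching less easily than a single camera, since a flat mural produces uniform disparity rather than the depth profile of a real tunnel.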

      • Bezier@suppo.fi · 19 days ago

        I can honestly believe that he thought it was gonna work. He knows fuck all about anything, but he still makes his bad decisions with confidence and thinks he’s always right.

  • Alex@lemmy.ml · 19 days ago

    I can see the argument that visible light should be enough, given that we humans drive with just two eyes and a few mirrors. However, that argument probably misses the millions of years of evolution our neural networks went through while hunting and tracking threats, which happens to make predicting where other cars might be mostly fine.

    I have a feeling regulators aren’t going to be happy with a claim of driving better than the average human. FSD should be aiming to be at least 10x better than the best human drivers and we’re a long way off from that.

    • DarkSurferZA@lemmy.world · 19 days ago

      This is one of the lines Elon Musk uses a lot when he says humans drive with their eyes, but it’s untrue. We actually have a wide array of sensory systems that help us drive. Firstly, we use our ears, eyes and body motion to drive. Secondly, unlike a fixed camera mounted on a car, our heads are in constant motion. This means we cover blind spots better than a fixed camera, and we can tell whether it’s a small deer really close by or a large deer really far away. Our brains take multiple 3D images and stitch them together to determine size, distance and speed.

      The best way to explain the driving-with-your-eyes fallacy is to look at FPV RC cars and see how much sensory information you are robbed of while trying to pilot the vehicle.

      • magic_lobster_party@fedia.io · 19 days ago

        Not only are our heads in constant motion. Our eyes are also always in motion. We’re constantly, quickly and accurately shifting our attention to different points in our vision.

        • Alex@lemmy.ml · 19 days ago

          That’s mostly accounting for the resolution and motion sensitivity in different parts of the eye. With enough cameras, a car should be able to “see” more than we could at any one time.

          • DarkSurferZA@lemmy.world · 19 days ago

            No, not really true.

            The way AI systems have been implemented in cars, they produce a flat image, run it through some fancy AI, and arrive at a conclusion. But what if one camera sees a child and, for whatever reason, the other sees a clear road? The AI is not trained to process vision the way we do, where we use all our various senses, including the conflicting info we get from each eye, to arrive at a conclusion. It just merges first and then processes. It should process each sensor’s input, then reprocess to arrive at a conclusion.
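The “process each sensor, then reconcile” idea is what’s called late fusion. A toy sketch of the safety-critical version (all names and thresholds here are hypothetical, not any vendor’s actual pipeline):

```python
# Hypothetical late-fusion sketch: each camera is processed
# independently, and the reconciliation step treats "any camera saw a
# hazard" as an obstacle rather than averaging conflicting views away.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def per_sensor_then_reconcile(per_camera: list[list[Detection]],
                              hazard_labels: set[str],
                              threshold: float = 0.5) -> bool:
    """Return True (brake) if ANY camera reports a hazard above threshold."""
    return any(d.label in hazard_labels and d.confidence >= threshold
               for cam in per_camera for d in cam)

cam_a = [Detection("child", 0.9)]
cam_b = [Detection("clear_road", 0.95)]  # conflicting view of the same scene
print(per_sensor_then_reconcile([cam_a, cam_b], {"child"}))  # True: brake
```

An early-fusion system that merged the raw frames first could, in the worst case, blend the conflict away before any detector ever ran.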

          • FrederikNJS@lemm.ee · 19 days ago

            To some extent you are correct, but note that the cameras in Teslas are not installed in pairs, so they don’t have stereo depth perception. And since the cars have neither lidar nor radar, they have no alternate way to measure depth and distance.

            • NotMyOldRedditName@lemmy.world · 19 days ago

              The cameras have overlaps which can be used to measure depth and distance.

              There are multiple front cameras

              The side pillar camera has overlap with the side rear facing

              The 2 side rear facing each have overlap with the rear.

              Edit: I imagine the weakest depth/distance perception with the current setup would be the side pillar cameras. But they could probably also do some calculations based on how fast an object passes from the front camera’s view to the rear’s.
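The overlap-based depth estimate described above amounts to triangulation. A 2D toy version (hypothetical angles and baseline, not the real camera layout):

```python
import math

# Hypothetical 2D triangulation sketch: two cameras with overlapping
# fields of view locate an object from its bearing in each image, given
# the known baseline between the two camera mounts on the car body.
def triangulate(baseline_m: float, angle_left: float, angle_right: float):
    """Cameras at (0, 0) and (baseline_m, 0); angles are the interior
    angles each bearing ray makes with the baseline (radians)."""
    third = math.pi - angle_left - angle_right  # angle at the object
    # Law of sines: baseline / sin(third) = dist_from_left / sin(angle_right)
    dist_from_left = baseline_m * math.sin(angle_right) / math.sin(third)
    x = dist_from_left * math.cos(angle_left)
    y = dist_from_left * math.sin(angle_left)
    return x, y

# Symmetric 60°/60° bearings, 1 m baseline
print(triangulate(1.0, math.pi / 3, math.pi / 3))  # ≈ (0.5, 0.866)
```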

      • NotMyOldRedditName@lemmy.world · 19 days ago

        Nothing you said there can’t be done by cameras, other than sound, and the car has a microphone inside. We just might not have the capabilities yet and need to keep improving them.

        All it really means is maybe the car needs more cameras and more microphones.

        Determining distance with images from multiple angles over time can provide accurate distances and velocity
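As a toy illustration of distance-over-time yielding velocity (hypothetical numbers, and assuming per-frame depth estimates already exist from stereo or a learned depth model):

```python
# Hypothetical sketch: successive distance estimates to the same object
# give closing velocity by finite difference over the sample window.
def closing_speed(distances_m: list[float], dt_s: float) -> float:
    """Average closing speed (positive = approaching) across the samples."""
    if len(distances_m) < 2:
        raise ValueError("need at least two samples")
    return (distances_m[0] - distances_m[-1]) / (dt_s * (len(distances_m) - 1))

# Object at 50 m, 48 m, 46 m in samples 0.5 s apart
print(closing_speed([50.0, 48.0, 46.0], 0.5))  # 4.0 (m/s, closing)
```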

        • Pup Biru@aussie.zone · 19 days ago

          you’re not wrong, but also that’s a fantasy with current technology. meanwhile, cars are dangerous heavy hard boxes travelling around at high speed while we “get the technology right”, and that’s unacceptable

    • Doom@ttrpg.network · 19 days ago

      Self driving cars should be flawless and personally, as someone who does not have any professional experience in tech, I do not understand why you’d ever rely on human senses to act as sensors for a machine. We can’t even see all the colors, why would you not give this thing like four different versions of detecting things? If I have a self driving car I want a fuckin radar on my windshield that knows where every object within 40 feet is.

      • originalucifer@moist.catsweat.com · 19 days ago

        ha, naw. they only need to be better than humans. if the computers kill 10% fewer people, it’s a win. unless you want those 10% dead for some reason…

        • Doom@ttrpg.network · 19 days ago

          If that’s the metric we’re really aiming for, then zero cars would be key, right? All trains and bikes and shit. Why are we even bothering with this tech?

          • originalucifer@moist.catsweat.com · 19 days ago

            chasing the perfect at the expense of the good is not how progress is ever made. it’s short-sighted, and honestly… stupid.

            if we only ever attempted to create perfect things, nothing would ever get created.

              • originalucifer@moist.catsweat.com · 19 days ago

                i agree public transport is where it’s at.

                reality and logistics show we will need both here in the u.s. due to societal and resource constraints.

        • DarkSurferZA@lemmy.world · 19 days ago

          Nope, not a legit argument. Augmented human driving is way safer than autonomous cars. Fun fact: the half-assed approach to autonomy used in Teslas is pretty shit, even by human standards.

          Augmented driver-assistance systems such as accident avoidance, preemptive braking and seatbelt pre-tensioning still outperform current-generation “fully” autonomous cars. The fact that we let billionaires develop their tech, for profit, in production (on the road), at a direct cost in human life, should always be a problem.

          If he really wants to change the world, pay for the damn R&D, then deploy to the roads.

          • originalucifer@moist.catsweat.com · 19 days ago

            luckily we can do more than one thing at a time implementing where each makes the most sense.

            everyone keeps saying “no! it’s this one thing or don’t bother!” no, that’s not how progress is made.

            im not arguing for tesla, tesla is objectively garbage.

      • NotMyOldRedditName@lemmy.world · 19 days ago

        Except the radar doesn’t know where every object is. It can’t detect stopped objects while traveling at high speeds.

        You know, the things people keep having accidents with, with or without L2 semi-autonomous software.

        • Pup Biru@aussie.zone · 19 days ago

          the large majority of current self-driving cars have radar, lidar, ultrasonic, and cameras. their detection sets overlap and complement each other, so they can see a wide array of things that the others can’t. focusing on one sensor and saying “it doesn’t see X” is a very poor argument when the others see those things just fine
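The “overlapping, complementary sensors” idea is commonly implemented as confidence-weighted fusion. A toy sketch with made-up numbers (real systems use Kalman-style filters, not a plain weighted average):

```python
# Hypothetical fusion sketch: each sensor reports a range estimate and a
# confidence that depends on conditions (fog degrades cameras and lidar,
# high speed degrades radar's stopped-object detection, etc.). A
# knocked-out sensor simply carries zero weight.
def fuse_ranges(readings: list[tuple[float, float]]) -> float:
    """readings = [(range_m, confidence 0..1), ...]; confidence-weighted average."""
    total_w = sum(w for _, w in readings)
    if total_w == 0:
        raise RuntimeError("no usable sensors")
    return sum(r * w for r, w in readings) / total_w

camera = (41.0, 0.25)  # low confidence in fog
radar  = (40.0, 0.75)  # largely unaffected by fog
lidar  = (0.0, 0.0)    # knocked out: contributes nothing
print(fuse_ranges([camera, radar, lidar]))  # 40.25 (metres)
```

This is the fallback behavior the comment describes: losing one sensor reweights the others instead of blinding the system.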

          • NotMyOldRedditName@lemmy.world · 19 days ago

            The point is that when vision is the only fail-safe, reliable sensor, vision MUST work for the vehicle to be truly autonomous.

            You can’t rely on radar without vision or lidar, because radar can’t see stopped vehicles at high speed. This is a deadly serious problem.

            You can’t rely on lidar in rain/fog/snow/dust because the light bounces off of the particles and gives bad data, plus it can’t tell you anything about what the object is or might intend to do, only that it’s there.

            Only vision can do all of those; it’s just a matter of the number of cameras, camera quality, and AI processing capability.

            If vision can do all those things perfectly, maybe you don’t need those other sensors after all?

            if vision can’t do it, then we won’t have a truly autonomous future.

            The other sensors are a crutch because the vision problem is so hard.

            • Pup Biru@aussie.zone · 19 days ago

              i don’t think anyone is relying solely on radar - that’s the point. every sensor we have is fallible in some way (and so, btw, are our eyes - they can’t see through things, but radar can in some cases!)

              even if you CAN rely solely on vision, why hamstring yourself? with a whole sensor package, the algorithms know when certain sensors are useless - that’s what the training is for… knock 1 out, the others see that it’s in X condition and works around it

              if you only have a single sensor (like cameras) then if something happens you have 0 sensors… our eyes are MUCH better at vision than cameras - just the dynamic range alone, let alone the “resolution”… and that’s not even getting into, as others have said, the fact that our brains have had millions of years of evolution to process images.

              the technology for vision only just isn’t there yet. that’s just straight up fact. can it be? perhaps, but “perhaps in the future” is not “we should do this now”. that’s called a beta test, and you’re playing with human lives not just UI bugs - and there’s no good reason… just add extra sensors

              • NotMyOldRedditName@lemmy.world · 18 days ago

                even if you CAN rely solely on vision, why hamstring yourself?

                Their stance is that by using lidar OEMs are hamstringing themselves on solving vision because they are so reliant on it. They spend less time and resources perfecting vision so they never truly solve the problem. From their perspective you got it backwards.

                and there’s no good reason… just add extra sensors

                The more sensors you deal with, the more your attention gets divided. You aren’t laser focused on one thing.

                The extra sensors also cost a lot of money. You can’t put Waymo’s sensor package onto millions of consumer cars when the suite costs tens of thousands of dollars (and originally well over $100k).

                By focusing on vision where the system can be put onto millions of cars, you can get massive amounts of extra training data and training data is going to be a huge part of solving this problem.

                You might not like the reasons, or their stance, but it’s not such an unreasonable position to take. Mobileye even cancelled its next-gen lidar project after seeing improvements in vision and radar. What happens when they keep seeing improvements in vision and radar is no longer needed either?

                I don’t know if you’ve ever used AP, but all the crazy headlines you see about it are idiots in cars being idiots. As an L2 vision-only system it works very well. If people wanna blame Elon for convincing people to be idiots, sure, you can do that, but that has nothing to do with the actual technological approach they’re taking. They’re two different things.

                • Pup Biru@aussie.zone · 18 days ago

                  Their stance is that by using lidar OEMs are hamstringing themselves on solving vision because they are so reliant on it.

                  i get that… but… vision is kinda shit. why not use all the tools at your disposal? literally, “x-ray vision” is something we see as a superpower because it’d be so useful - radar gives us some of that

                  vision is an approximation of things like lidar. can you get a depth map out of vision? sure, but why not just measure it directly, so you’re not introducing error from your model literally hallucinating

                  The more sensors you deal with, the more your attention gets divided. You aren’t laser focused on one thing.

                  kinda, but also the last 20% takes 80% of the effort… solving a lot of easy problems with more information will lead to a better short-term outcome, and then when you’re getting good results you can go from 80% to 85%, then 85 to 90, etc, across your whole sensor suite

                  The extra sensors also cost a lot of money

                  are they though? you can buy hobbyist ultrasonic sensors for literally a couple of bucks, and lidar for a few hundred - sure, that’s not the grade you’d use for cars, but at some point it’s an economies-of-scale problem. they’re not actually that expensive for a commodity “good enough” sensor package

                  You might not like the reasons, or their stance

                  correct - i understand them, but as an engineer it’s just wrong when you’re talking about one of the most dangerous activities that humanity collectively engages in (driving)

                  What happens when they keep seeing improvements in vision and now radar isn’t needed?

                  i think this could be the sticking point - i don’t think any extra sensors are needed, just like i don’t think seatbelts or air bags etc are needed… but… they’re helpful and improve the safety of people in and around the car

                  all the crazy headlines you see about it are idiots in cars being idiots

                  agree, and i totally think driverless is the way to go - humans are far worse drivers than machines are right now, even without any further improvement

                  … however, better isn’t perfect, and when it comes to safety simply ignoring tools because of some belief that eventually it’ll be fine is misguided at best, and negligent at worst

                  If people wanna blame Elon for convincing people to be idiots, sure, you can do that

                  absolutely that too! their systems aren’t “drives itself no problemo” and that’s how they’re marketing it

    • MajorHavoc@programming.dev · 19 days ago

      Lol. Yep.

      Anyone want to jump in and confidently vouch that Musk’s moral code won’t allow him to harvest human brains from living humans, shove them in a bottle, and use them as a fake AI in his fleet?