Statistically they’re still less prone to accidents than human drivers.
I never quite understood why so many people seem to be against autonomous vehicles, especially on Lemmy. It’s unreasonable to demand perfection before any of these are used on public roads. In my view the bar to reach is human-level driving, and after that it seems quite obvious that from a safety point of view it’s the better choice.
This is just such a bad take, and it’s so disappointing to see it parroted all over the web. So many things are just completely inaccurate about these “statistics”, and it’s probably why it “seems” so many are against autonomous vehicles.
These are self-reported statistics coming from the very companies that have extremely vested interests in making themselves look good.
These statistics are for vehicles currently being used in an extremely small number of (geo-fenced) locations, picked for being the easiest to navigate while still letting the company say “hey, we totally work in a big city with lots of people”.
These cars don’t even go onto highways or areas where accidents are more likely.
These cars drive so defensively they literally shut down so as to avoid causing any accidents (hey, who cares if we block traffic and cause jams because we get to juice our numbers).
They always use total human-driven miles, which makes for a complete apples-to-oranges comparison. Their miles aren’t being driven
In bad weather
On dangerous, windy, old, unpaved, or otherwise poor road conditions
In rural areas where there are deer/etc that wander into the road and cause accidents
They also don’t adjust for or report any median numbers, and I’m not interested in them driving better than the “average” driver when that average includes DUIs, crashes caused by neglect or improper maintenance, reckless drivers, elderly drivers, and the fast-and-furious types crashing their vehicles on some hill-climb driving course.
And that’s all just off the top of my head.
So no, I would absolutely not say they are “less prone to accidents than human drivers”. And that’s just the statistics, to say nothing about the legal questions that will come up. Especially given just how averse companies seem to be to admitting fault for anything.
Sure, mile for mile they are less likely to crash. But when crashes happen they are generally more serious, because higher speeds are involved, and if Tesla has shown anything, it’s that edge cases (like vehicles on the side of the road, emergency or otherwise) are a much more complicated thing for autonomous vehicles to navigate. It’s much harder (and more dangerous) to just slam on the brakes and put on your hazards on a highway than on a side street if the car gets confused.
Well, I do use a car that is able to drive (almost) autonomously on a highway, so I know the tech to drive on highways has existed for several years.
All the difficult stuff – slow traffic, parked cars, crossings, pedestrians… – does not exist on highways.
The only problem that still remains is the problem you mention: what to do in case of trouble?
Of course you have to stop on a highway to prevent an accident or in case of an emergency. That’s exactly what humans do. But then humans get out of the car, set up warning signs, get help &c. Cars cannot do this. The result is reported in this article.
Avoiding dangerous scenarios is the definition of driving safely.
This technology is still an area under active development and nobody (not even Elon!) is claiming this stuff is ready to replace a human in every possible scenario. Are you actually suggesting they should be testing the cars in scenarios that they know wouldn’t be safe with the current technology? Why the fuck would they do that?
So no, I would absolutely not say they are “less prone to accidents than human drivers”.
OK… if you won’t accept the companies’ reported data, whose data will you accept? Do you have a more reliable source that contradicts what the companies themselves have published?
to say nothing about the legality that will come up
No, that’s a non-issue. When a human driver runs over a pedestrian/etc and causes a serious injury, if it’s a civilised country and a sensible driver, then an insurance company will pay the bill. This happens about a million times a week worldwide and insurance is a well established system that people are, for the most part, happy with.
Autonomous vehicles are also covered by insurance. In fact it’s another area where they’re better than humans - because humans frequently fail to pay their insurance bill or even deliberately drive after they have been ordered by a judge not to drive (which obviously voids their insurance policy).
There have been debates over who will pay the insurance premium, but that seems pretty silly to me. Obviously the human who ordered the car to drive them somewhere will have to pay for all costs involved in the drive. And part of that will be insurance.
I honestly can’t tell if that’s a passive-aggressive swipe at me or not; but just in case it was: stats mean very little w/o context. I believe the quote was “Lies, damned lies, and statistics”. I simply pointed out a few errors with the foundation of these “statistics”. I didn’t need to quote my own statistics because, as I was pointing out, this is a completely apples-to-oranges comparison. The AV companies want to preach about how many miles they go w/o accident while comparing themselves to an average they know doesn’t match their own circumstances. Basically they are taking their best-case scenario and comparing it against average/worst-case scenario stats.
I’d give more weight to the stats if they were completely transparent, worked with a neutral 3rd party, and gave them access to all their video/data/etc to generate (at the very least) proper stats relative to their environment. Sure, I’ll believe waymo/cruise’s numbers way more easily than those by tesla; but I still take them with a grain of salt. Because again, they have a HUGE incentive to tweak their numbers to put themselves in the very best light.
No, I see your point, and I agree. These companies are almost guaranteed to cherry-pick those stats, so only a fool would take that as hard evidence. However, I don’t think these stats flat-out lie either. If they show a self-driving car is three times less prone to accidents, I doubt the truth is that humans, in fact, are twice as good. I believe it’s safe to assume that these stats at least point us in the right direction, and that seems to correlate with the little personal experience I have as well. If these systems really sucked as much as the most hardcore AV-skeptics make it seem, I doubt we’d be seeing any of these in use on public roads because the issues would be apparent.
However, the point I’m trying to highlight here is that I make a claim about AV safety and then provide some stats to back me up. People then come telling me that’s nonsense and list a bunch of personal reasons why they feel so, but provide no concrete evidence except maybe links to articles about individual accidents. That’s just not the kind of data that’s going to change my mind.
I never quite understood why so many people seem to be against autonomous vehicles.
People aren’t against autonomous vehicles, but against them getting let loose on public roads with zero checks or transparency. We basically learn what they are and aren’t capable of one crash at a time, when all of that should have been figured out years ago in the lab.
The fact that they can put a safety driver in them to absorb any blame is another scandal.
Statistically they’re still less prone to accidents than human drivers.
That’s only due to them not driving in the same condition as humans. Let them drive in fog and suddenly they can’t even see clearly visible emergency vehicles.
None of this would be a problem if those companies would be transparent about what those vehicles are capable of and how they react in unusual situations. All of which they should have tested a million times over in simulation already.
With Tesla the complaint is that the statistics are almost all highway miles so it doesn’t represent the most challenging conditions which is driving in the city. Cruise then exclusively drives in a city and yet this isn’t good enough either. The AV-sceptics are really hard to please…
You’ll always be able to find individual incidents where these systems fail. They’re never going to be foolproof and the more of them that are out there the more news like this you’re going to see. If we reported about human-caused crashes with the same enthusiasm that would be all the news you’re hearing from then on and letting humans drive would seem like the most scandalous thing imaginable.
I don’t care about the situations they work in; I care about the situations they will fail in. That’s what matters, and that’s what no company will tell you. As said, we learn about the capabilities of self-driving cars one crash at a time, and that’s just unacceptable when you could have figured all of that out years ago in simulation.
So far none of the self-driving incidents I have seen were some kind of unforeseen freak situation; it was always some rare but standard thing: fog, a pedestrian crossing the road, a road blocked by a previous crash, etc.
Humans get into accidents all the time. Is that not unacceptable for you?
I feel like people apply standards to self driving cars that they don’t to human driven ones. It’s unreasonable to expect a self driving system never to fail. It’s unreasonable to imagine you can just let it practice in simulation until it’s perfect. This is what happens when you narrowly focus on one aspect of self driving cars (individual accidents) - you miss the big picture.
I feel like people apply standards to self driving cars that they don’t to human driven ones.
Human drivers need to pass a driving test; self-driving cars do not. Human drivers also have a baseline of common sense that self-driving cars lack, so they really would need more testing than humans, not less.
It’s unreasonable to expect a self driving system never to fail.
I don’t expect them to never fail, I just want to know when they fail and how badly.
It’s unreasonable to imagine you can just let it practice in simulation until it’s perfect.
What’s unreasonable about that?
individual accidents
They are only “individual” because there aren’t very many self-driving cars and because not every failure ends up deadly.
Tesla on FSD could easily pass the driving test that’s required of humans. That’s a nonsensical standard. Most people with a fresh license are horribly incompetent drivers.
I don’t expect them to never fail, I just want to know when they fail and how badly.
“Over 6.1 million miles (21 months of driving) in Arizona, Waymo’s vehicles were involved in 47 collisions and near-misses, none of which resulted in injuries”
How many human drivers have done millions of miles of driving before they were allowed to drive unsupervised? Your assertion that these systems are untested is just wrong.
“These crashes included rear-enders, vehicle swipes, and even one incident when a Waymo vehicle was T-boned at an intersection by another car at nearly 40 mph. The company said that no one was seriously injured and “nearly all” of the collisions were the fault of the other driver.”
According to insurance companies, human driven cars have 1.24 injuries per million miles travelled. So, if Waymo was “as good as a typical human driver” then there would have been several injuries. They had zero serious injuries.
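The arithmetic behind that comparison is simple enough to sketch. Using the two figures quoted above (1.24 injuries per million miles for human drivers, 6.1 million Waymo miles), the human baseline would predict several injuries over that mileage:

```python
# Back-of-the-envelope check: how many injuries would the quoted human
# baseline rate predict over Waymo's reported mileage?
human_injury_rate = 1.24      # injuries per million miles (insurance figure quoted above)
waymo_miles_millions = 6.1    # Waymo's reported miles, in millions

expected_injuries = human_injury_rate * waymo_miles_millions
print(f"Expected injuries at the human baseline rate: {expected_injuries:.1f}")
```

So roughly seven or eight injuries would be "typical" for a human fleet over the same distance, versus the zero serious injuries Waymo reported.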
The data (at least from reputable companies like Waymo) is absolutely available and in excruciating detail. Go look it up.
As already said, I want to know where they fail, preferably in the simulator, not on actual roads. Having vehicles drive in circles on carefully selected roads and racking up a lot of miles is no big accomplishment, and not comparable to humans, who have to drive on all roads under all conditions.
As a software developer, that’s not how testing works. QA is always trying to come up with weird edge cases to test, but once it’s out in the wild with thousands (or more) of real-world users, there’s always going to be something nobody ever tried to test.
For example, there was a crash where an unmarked truck with exactly the same color as the sky was 90° sideways on the highway. This is just something you wouldn’t think of in lab conditions.
there’s always going to be something nobody ever tried to test.
That’s not what is happening. We don’t see weird edge cases, we see self driving cars blocking emergency vehicles and driving through barriers.
For example, there was a crash where an unmarked truck with exactly the same color as the sky was 90° sideways on the highway.
The sky is blue and the truck was white. Testing the dynamic range of the camera system is absolutely something you do in a lab situation. And a thing blocking the road isn’t exactly unforeseen either.
I don’t expect self driving cars to be perfect and handle everything, but I expect the manufacturers to be transparent about their abilities and they aren’t. Furthermore I expect the self driving system to have a way to react to unforeseen situations, crashing in fog is not acceptable when the fact that there was fog was plainly obvious.
And a thing blocking the road isn’t exactly unforeseen either.
Tesla’s system intentionally assumes “a thing blocking the road” is a sensor error.
They have said if they don’t do that, about every hour or so you’d drive past a building and it would slam on the brakes and stop in the middle of the road for no reason (and then, probably, a car would crash into you from behind).
The good sensors used by companies like Waymo don’t have that problem. They are very accurate.
Let them drive in fog and suddenly they can’t even see clearly visible emergency vehicles.
That article you linked isn’t about a self-driving car. It’s about Tesla “autopilot”, which constantly checks that a human is actively holding onto the steering wheel and depends on the human checking the road ahead for hazards so they can take over instantly. If the human sees flashing lights they are supposed to do so.
The fully autonomous cars that don’t need a human behind the wheel have much better sensors which can see through fog.
That article you linked isn’t about a self-driving car.
Just because Tesla is worse than others doesn’t make it not self-driving. The “wiggle the steering wheel” feature is little more than a way to shift blame to the driver instead of the crappy self-driving software.
so they can take over instantly.
Humans fundamentally can’t do that. If you sit a human in a self-driving car doing nothing for hours, they won’t be able to react in a split second when it is needed. Sharing driving in that way does not work.
The fully autonomous cars that don’t need a human behind the wheel have much better sensors which can see through fog.
Is anybody actively testing them in bad weather conditions? Or are we just blindly trusting claims from the manufacturers yet again?
Just because Tesla is worse than others doesn’t make it not self-driving.
The fact that Tesla requires a human driver to take over constantly makes it not self-driving.
so they can take over instantly.
Humans fundamentally can’t do that. If you sit a human in a self-driving car doing nothing for hours, they won’t be able to react in a split second when it is needed.
The Human isn’t supposed to be “doing nothing”. The human is supposed to be driving the car. Autopilot is simply keeping the car in the correct lane for you, and also adjusting the speed to match the car ahead.
Tesla’s system won’t even stop at an intersection if you need to give way (for example, a stop sign. Or a red traffic light). There’s plenty of stuff the human needs to be doing other than turning the steering wheel. If there is a vehicle stopped in the middle of the road Tesla’s system will drive straight into it at full speed without even touching the brakes. That’s not something that “might happen” it’s something that will happen, and has happened, any time a stationary vehicle is parked on the road. It can detect the car ahead of you slowing down. It cannot detect a stopped vehicle.
They’ve promised to ship a more capable system “soon” for over a decade. I don’t see any evidence that it’s actually close to shipping though. The autonomous systems by other manufacturers are significantly more advanced. They shouldn’t be compared to Tesla at all.
Is anybody actively testing them in bad weather conditions?
Yes. Tens of millions of miles of testing, and they pay especially close attention to any situations where the sensors could potentially fail. Waymo says their biggest challenge is mud (splashed up from other cars) covering the sensors. But the cars are able to detect this, and the mud can be wiped off. It’s a solvable problem.
Unlike Tesla, most of the other manufacturers consider this a research project and are focusing all of their efforts on making the technology better/safer/etc. They’re not making empty promises and they’re being cautious.
On top of the millions of miles of actual testing, they also record all the sensor data for those miles and use it to run updated versions of the algorithm in exactly the same scenario. So the millions of miles have, in fact, been driven thousands and thousands of times over for each iteration of their software.
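A minimal sketch of what that replay-style regression testing could look like. Everything here is hypothetical (these are not Waymo’s actual tools or names): the idea is just that recorded drives are replayed against each new software build, and any scenario where the new build performs worse than a safety threshold gets flagged before the software ever reaches a real road.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One recorded drive segment: sensor data plus the known-safe outcome."""
    name: str
    sensor_frames: list          # recorded sensor data (toy model: obstacle distances in metres)
    safe_min_gap_m: float        # closest approach to any obstacle that still counts as safe

def plan_min_gap(frames) -> float:
    """Stand-in for the driving stack: the closest approach (metres) the
    planner makes to an obstacle when replaying these frames. A real system
    would run the full perception/planning pipeline here."""
    return min(frames)

def replay_regression(scenarios) -> list:
    """Replay every recorded scenario against the current build and collect
    the ones where the software gets closer to an obstacle than allowed."""
    failures = []
    for sc in scenarios:
        gap = plan_min_gap(sc.sensor_frames)
        if gap < sc.safe_min_gap_m:
            failures.append((sc.name, gap))
    return failures

# Hypothetical library of recorded drives, re-run on every software update.
library = [
    Scenario("fog_on_highway", [12.0, 9.5, 8.0], safe_min_gap_m=5.0),
    Scenario("stopped_truck",  [30.0, 4.0, 3.5], safe_min_gap_m=5.0),
]
print(replay_regression(library))  # flags the "stopped_truck" scenario
```

The point of the pattern is the one made above: each software iteration is exercised against the entire recorded history, so a scenario only has to be encountered on a real road once to be tested forever after.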
You don’t understand why people on Lemmy, an alternative platform not controlled by corporations, might not want to get in a car literally controlled by a corporation?
I can easily see a future where your car locks you in and drives you to a police station if you do something “bad”.
As to their safety, I don’t think there are enough AVs to really judge this yet; of course Cruise’s website will claim Cruise AVs cause fewer accidents.
I can imagine in the future there will be gridlock in front of the police station with AV cars full of black people when the cops send out an APB with the description of a black suspect.
We’ve seen plenty of racist AI programs in the past because the programmers, intentionally or not, added their own bias into the training data.
The AIs are not racist themselves; it’s a side effect of the full technology stack: cameras have lower dynamic resolution for darker colors, images get encoded with a gamma that leaves less information in darker areas, and AIs that work fine with images of light-skinned faces don’t get the same amount of information from images of dark-skinned faces, leading to higher uncertainty and more false positives.
The bias starts with cameras themselves; security cameras in particular should have an even higher dynamic range than the human eye, but instead they’re often a cheap afterthought, and then go figure out what have they recorded.
You’re putting words in my mouth. I wasn’t talking about people on Lemmy not wanting to get into one of these vehicles.
The people here don’t seem to want anyone getting into these vehicles. Many here are advocating for an all-out ban on self-driving cars and demand that they be polished to near perfection on closed roads before being allowed for public use, even when the little statistics we already have mostly seem to indicate these are at worst as good as human drivers.
If it’s about Teslas, the complaint is often the lack of LiDAR and radar, and when it’s about Cruise, which has both, it’s then apparently about corruption. In both cases the reaction tends to be mostly emotional, and that’s why every time one provides statistics to back up the claims about safety it just gets called marketing bullshit.
Honestly? I don’t want anyone to use AVs because I fear they will become popular enough that eventually I’ll be required to use one.
I honestly haven’t done enough research on AV safety to feel comfortable claiming anything concrete about it. I personally don’t feel comfortable with it yet since the technology is very new and I essentially need to trust it with my life. Maybe in a few years I’ll be more convinced.
I hear you. I love driving and I have zero interest in buying a self-driving vehicle. However, I can still stand outside my own preferences and look at it objectively enough to see that it’s just a matter of time until AI gets so good at it that it could be considered irresponsible to let a human drive. I don’t like it, but that’s progress.
Travelling in a community whose public roads require 100% AVs will probably be the safest implementation of driving, period. But if you don’t trust the tech, then just don’t live or travel in that community.
I suspect we’ll see an AV only lane on the hwy soon, and people will realize how much faster you can get through traffic without tailgaters and lane weavers constantly causing micro inefficiencies at best, and wrecks at worst.
When vehicle-to-vehicle communication improves, and gets standardized, it will be interesting to see “AV road trains” of them going almost bumper to bumper, speeding up and slowing down all at the same time.
Autonomous driving isn’t necessarily controlled by a corporation any more than your PC is. Sure, the earliest computers were all built and run by corporations and governments, but today we all enjoy (the choice of) computing autonomy because of those innovations.
I can be pro AV and EV without being pro corporate control over the industries. It’s a fallacy to conflate the two.
The fact is that letting humans drive in a world with AVs is like letting humans manually manage database entries in a world with MySQL. And the biggest difficulty is that we’re trying to live in a world where both humans and computers are “working out of the same database at the same time”. That’s a much more difficult problem to solve than just having robots do it all.
I still have a gas powered manual that I love driving, but I welcome the advancement in EV/AV technology, and am ready to adopt it as soon as sufficient open standards and repairability can be offered for them.
Autonomous driving isn’t necessarily controlled by a corporation any more than your PC is.
That’s just outright wrong.
Modern cars communicate with their manufacturer, and we don’t have any means to control this communication.
I can disconnect my PC from the internet, I cannot disconnect my car. I can install whatever OS and apps pleases me on my PC, I cannot do anything about the software on my car’s computer.
So, while I can take full control of my PC if it pleases me, I cannot take any control of my car.
With all due respect, you’re still not understanding what I’m saying.
If you traveled back 50+ years to when computers took up several hundred sq ft, one might try to make the same argument as you: “don’t rent time on IBM’s mainframe, they can see everything you’re computing and could sell it to your competitor! Computers are always owned by the corporate elite, therefore computers are bad and the technology should be avoided!” But fast forward to today, and now you can own your own PC and do everything you want to with it without anyone else involved. The tech progressed. It wasn’t wrong to not trust corporate owned computing, but the future of a tech itself is completely independent from the corporations who develop them.
For a more recent example, nearly 1 year ago, ChatGPT was released to the world. It was the first time most people had any experience with an LLM. And everything you sent to the bot was given to a proprietary, for-profit algorithm to further their corporate interests. One might have been tempted to say that LLMs and AI would always serve the corporate elite, and we should avoid the technology like the plague. But fast forward to now, not even one year later, and people have replicated the tech in open source projects which you can run locally on your own hardware. Even Meta (the epitome of corporate control) has open sourced LLaMA to run for your own purposes without them seeing any of it (granted, the licenses restrict what you can do commercially).
The story is the same for virtually any new technology, so my point is, to denounce all of AVs because today corporations own it is demonstrably shortsighted. Again, I’m not interested in the proprietary solutions available right now, but once the tech develops and we start seeing some open standards and repairability enter the picture, I’ll be all for it.
nearly 1 year ago, ChatGPT was released to the world. It was the first time most people had any experience with an LLM. And everything you sent to the bot was given to a proprietary, for profit algorithm to further their corporate interests
You might want to pick another example, because OpenAI was originally founded as a non-profit organisation, and in order to avoid going bankrupt they became a “limited” profit organisation, which allowed them to source funding from more sources… but doesn’t really allow them to ever become a big greedy tech company. All they’re able to do is offer some potential return to the people who are giving them hundreds of billions of dollars with no guarantee they’ll ever get it back.
I’m not sure your idea of 70s and 80s IT infrastructure is historically accurate.
50 years ago it was technically impossible to rent time on a mainframe/server owned by a third party without having physical access to the hardware.
You, or to be more accurate, your company would buy a mainframe and hire a mathematician turned programmer to write the software you need.
Even if – later in the course of IT development – you/your company did not develop your own software but bought proprietary software, this software was technically not able to “call back home” until internet connections became standard.
So no, computers did not start with “the corporate elite” controlling them.
Computerized cars, on the other hand, have been controlled by their manufacturers since they were introduced. There is no open source alternative.
Open standards for computerized cars would be great, but I’m very pessimistic they will evolve unless publicly funded and/or enforced.
Well, you shouldn’t trust it, and the car company tells you this. It’s not foolproof and not something to be blindly relied on. It’s a system that assists driving but doesn’t replace the driver. Not in its current form, at least, though they may be getting close.
Most people consider cruise control a quite useful feature, though it still requires you to pay attention that you stay in your lane and don’t run into a slower vehicle in front of you. You can then keep adding features such as radar for adaptive cruise control and lane assist, and this further decreases the stuff you need to pay attention to, but you still need to sit there behind the wheel watching the road. These self-driving systems in their current form are no different. They’re just further along the spectrum towards self-driving. Some day we will reach the point where you sitting in the driver’s seat just introduces noise to the system, so better to go take a nap on the back seat. We’re not there yet, however. This is still just super-sophisticated cruise control.
It’s kind of like with chess engines. First humans are better at it than computers. Then computer + human is better than just the computer and then at some point the human is no longer needed and computer will from there on always be better.
Well, Cruise is offering a full self-driving taxi service where they don’t mandate that you as a passenger pay attention to the traffic and take control if needed, so the “they don’t trust it, so why should you” argument doesn’t apply there.
With Tesla, however, this is the case, but despite their rather aggressive marketing they still make it very clear that this is not finished yet; you are allowed to use it, but you’re still the driver and its safe use is your responsibility. That’s the case with the beta version of any software: you get it early, which is what early adopters like, but you’re expected to encounter bugs, and this is the trade-off you have to accept.
The discussed incident does not involve driving assist systems, driverless autonomous taxis are already on the streets:
A number of Cruise driverless vehicles were stopped in the middle of the streets of the Sunset District after Outside Lands in Golden Gate Park on Aug. 11, 2023.
But do they really? If so, why’s there the saying “if you want to murder someone, do it in a car”?
I do think self-driving cars should be held to a higher standard than humans, but I believe the fundamental disagreement is in precisely how much higher.
While zero incidents is naturally what they should be aiming for, it’s more of a goal for continuous improvement, like it is for air travel.
What liability can/should we place on companies that provide autonomous drivers that will ultimately lead to safer travel for everyone?
Well, the laws for sure aren’t perfect, but people are responsible for the accidents they cause. Obviously there are plenty of exceptions, like rich people, but if we’re talking about the ideal real-life scenario, there are consequences for causing an accident. Whether those consequences are appropriate or not is for another discussion.
While zero incidents is naturally what they should be aiming for, it’s more of a goal for continuous improvement, like it is for air travel.
As far as I know, proper self driving (not “autopilot”) AVs are pretty close to zero incidents if you only count crashes where they are at fault.
When another car runs a red light and smashes into the side of an autonomous vehicle at 40mph… it wasn’t the AV’s fault. Those crashes should not be counted and as far as I know they currently are in most stats.
What liability can/should we place on companies that provide autonomous drivers that will ultimately lead to safer travel for everyone?
I’m fine with exactly the same liability as human drivers have. Unlike humans, who are motivated to drive dangerously for fun or get home when they’re high on drugs or continue driving through the night without sleep to avoid paying for a hotel, autonomous vehicles have zero motivation to take risks.
In the absence of that motivation, the simple fact that insurance against accidents is expensive is more than enough to encourage these companies to continue to invest in making their cars safer. Because the safer the cars, the lower their insurance premiums will be.
Globally insurance against car accidents is approaching half a trillion dollars per year and increasing over time. With money like that on the line, why not spend a lazy hundred billion dollars or so on better safety? It won’t actually cost anything - it will save money.
the safer the cars, the lower their insurance premiums will be.
Globally insurance against car accidents is approaching half a trillion dollars per year
That… almost makes it sound like the main opposition to autonomous cars would be insurance companies: they can’t earn more by raising premiums if there are no accidents and a competing insurance company can offer much cheaper insurance.
The bar is much higher than it is for human drivers because we downplay our own shortcomings and think that we have less risk than the average driver.
Humans can be good drivers, sure. But we have serious attention deficits. This means it doesn’t take a big distraction before we blow a red light or fail to observe a pedestrian.
Hell, lot of humans fail to observe and yield to emergency vehicles as well.
None of that is newsworthy, but an autonomous vehicle failing to yield is.
My personal opinion is that the Cruise vehicles are as ready for operational use as Tesla’s FSD, i.e. they should not be allowed.
Obviously corporations will push to be allowed so they can start making money, but this is probably also the biggest threat to a self-driving future.
Regulated so strongly that humans end up being the ones in the driver seat for another few decades - with the cost in human lives which that involves.
By definition nearly half of us are better than average drivers. Given that driving well is a matter of survival, I’ll take my own driving ability over any autonomous vehicle until they’re safer than 99% of drivers.
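Strictly speaking, half of us are better than the *median* driver; the *mean* can be much worse than most drivers when a small group takes outsized risks. A toy sketch (entirely made-up numbers, just to illustrate the skew):

```python
# Hypothetical accident rates: 90 careful drivers, 10 high-risk drivers.
# The risky minority inflates the mean, so most drivers beat the "average".
crashes_per_million_miles = [0.5] * 90 + [10.0] * 10

mean = sum(crashes_per_million_miles) / len(crashes_per_million_miles)
median = sorted(crashes_per_million_miles)[len(crashes_per_million_miles) // 2]
better_than_mean = sum(r < mean for r in crashes_per_million_miles)

print(f"mean risk:   {mean:.2f}")    # 1.45 - dragged up by the risky 10%
print(f"median risk: {median:.2f}")  # 0.50
print(f"drivers safer than the mean: {better_than_mean} of 100")  # 90
```

Which is why matching the "average driver" is a weaker bar than it sounds: in a skewed distribution, most drivers already clear it.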
But how much better would it need to be? 99.9%, or 99.9999999999999999999999%, or just 99.01%?
A lot of people will have qualms as long as the chance of dying is higher than zero.
People have very poor understanding of statistics and will cancel holidays because someone in the vicinity of where they’re going got bitten by a shark (the current 10 year average of unprovoked shark bites is 74 per year).
Similarly we can expect people to go “I would never get into a self-driving car” when the news inevitably reports on a deadly accident even if the car was hit by a falling rock.
And then there’s the other question:
Since 50% of drivers are worse than the average - would you feel comfortable with those being replaced by self driving cars that were (proven to be) better than the average?
Given that I have no way of communicating with the driverless car, and communication is often important to driving, I’d rather have the kinda-bad human driver. I can compensate for their bad driving when I spot it and give them room. Or sometimes I can even convey information that helps them be safer while they’re not paying attention. I’ve definitely stopped crashes that didn’t involve me by using my horn.
There’s no amount of discussion or frantic hand waving that will alter the course of an automated vehicle.
Once I was driving down what had become a narrow street with high snow banks when I came across an older woman stuck between the banks repeatedly backing into the door of her neighbor’s car as she tried to get out of her driveway. After watching her do this for a couple of minutes I offered to get her car straightened out for her. She was ecstatic and about 30 seconds later we were both able to go about our days.
I don’t think drivers are supposed to communicate like that… but it raises a better question: how is a cop directing traffic supposed to communicate with a driverless car?
If there is no mechanism in place, that’s a huge oversight… while if there is one, why didn’t they use it in this case?
I’m not gonna join in the discussion, but if you cite numbers, please don’t link to the advertising website of the company itself. They have a strong interest in cherry picking the data to make positive claims.
These companies are the only ones with access to those stats. Nobody else has it. The alternative here is to not cite stats at all. If you think the stats are wrong you can go find alternative source and post it here.
If they do not give researchers access to the data, then I can guarantee you they are cherry picking their results. A research paper in a reputable journal would be easy publicity and create a lot of trust in the public.
They can’t come quick enough for me.
I can go to work after a night out without fear I might still be over the limit. I won’t have to drive my wife everywhere. Old people will not be prisoners in their own homes. No more nobheads driving about with exhausts that sound like a shoot-out with the cops. No more arseholes speeding about and cutting you up. No more hit and runs. Traffic accident numbers falling through the floor. In fact it could even get to a point where the only accidents are the fault of pedestrians/cyclists not looking where they are going.
All of these are solved by better public transport, safe bike routes, and more walkable city designs. All of which we can do now, rather than relying on some new shiny tech so that we can keep car companies’ profits up.
The day I can get in a car and not be simultaneously afraid of my own shortcomings and the fact that there are strangers driving massive projectiles around me is a day I will truly celebrate. The fact is that automobiles are weapons, and I don’t want to be the one wielding one when a single mistake can cost an entire family their lives, although I would like to be there to slam on the brakes and prevent it if needed.
When the light turns green, the entire row of cars can start moving at the same time, like in motorsports. Perhaps you don’t even need traffic lights, because they can all just drive into the intersection at the same time and keep barely missing each other but never crash, thanks to the superior reaction times and processing speeds of computers. You could also let your car taxi other people around when you don’t need it.
What if we tied that entire row of cars together as one unit so we could save cost on putting high end computers in each car? Give them their own dedicated lane because we will never have 100% fully autonomous cars on the road unless we make human drivers illegal.
It sure would be nice if the bar was the rational one of “better” but people aren’t rational. It’s literally never going to be good enough, because even if it were perfect it still can’t be used.
I think one of the big issues psychologically about self-driving cars that people find really hard to come to terms with is the fact that even with the best systems, accidents are bound to happen and without a driver there’s no one to blame and we hate that.
I remember something about Mercedes taking liability when self driving is active, although I don’t know if that still holds. Still, this seems like something that can be approached with proper legislation, assuming we can get past the lobbying BS in the US (though the EU will probably make the right call much sooner).
Yep, I’m pretty confident they won’t be autonomously driving on EU roads legally until they conform to pretty strict legislation which I’m pretty sure will include the liability of the company.
Nice of Mercedes to do the right thing without being forced to, that’s surprisingly rare.
I believe they’re already allowed in Germany actually, although their autonomous driving feature is very limited in where it can be activated. Hopefully other vehicle manufacturers follow suit and take liability when doing autonomous driving (as opposed to “assisted driving”, which many vehicles currently have).
Accidents are less likely on highways. Most accidents occur in urban settings. Most deadly accidents occur outside of cities, off-highway.
Sure, mile for mile they are less likely. But when they happen they are generally more serious, as higher speeds are involved, and if Tesla has shown anything, it’s that it’s a much more complicated process for autonomous vehicles to navigate and deal with edge cases (like vehicles on the side of the road, emergency or otherwise). It’s much harder (and more dangerous) to just slam on the brakes and put on your hazards on a highway than on a side street if the car gets confused.
I could see accidents being more likely for autonomous cars on highways though
Why? Driving on highways is the easiest kind of driving?
For humans, but not necessarily for camera-based autonomous cars? They also can’t just stop on a highway to prevent accidents.
Well, I do use a car that is able to drive (almost) autonomously on a highway, so I know the tech to drive on highways has existed for several years.
All the difficult stuff – slow traffic, parking cars, crossings, pedestrians… – does not exist on highways.
The only problem that still remains is the problem you mention: what to do in case of trouble?
Of course you have to stop on a highway to prevent an accident or in case of an emergency. That’s exactly what humans do. But then humans get out of the car, set up warning signs, get help &c. Cars cannot do this. The result is reported in this article.
Avoiding dangerous scenarios is the definition of driving safely.
This technology is still an area under active development and nobody (not even Elon!) is claiming this stuff is ready to replace a human in every possible scenario. Are you actually suggesting they should be testing the cars in scenarios that they know wouldn’t be safe with the current technology? Why the fuck would they do that?
OK… if you won’t accept the company’s reported data, whose data will you accept? Do you have a more reliable source that contradicts what the companies themselves have published?
No, that’s a non-issue. When a human driver runs over a pedestrian/etc and causes a serious injury, if it’s a civilised country and a sensible driver, then an insurance company will pay the bill. This happens about a million times a week worldwide, and insurance is a well-established system that people are, for the most part, happy with.
Autonomous vehicles are also covered by insurance. In fact it’s another area where they’re better than humans - because humans frequently fail to pay their insurance bill or even deliberately drive after they have been ordered by a judge not to drive (which obviously voids their insurance policy).
There have been debates over who will pay the insurance premium, but that seems pretty silly to me. Obviously the human who ordered the car to drive them somewhere will have to pay for all costs involved in the drive. And part of that will be insurance.
Well hey - at least I provided some statistics to back me up. That’s not the case with the people refuting those stats.
I honestly can’t tell if that’s a passive-aggressive swipe at me or not; but just in case it was: stats mean very little w/o context. I believe the quote was “Lies, damned lies, and statistics”. I simply pointed out a few errors with the foundation of these “statistics”. I didn’t need to quote my own statistics because, as I was pointing out, this is a completely apples to oranges comparison. The AV companies want at the same time to preach about how many miles they go w/o accident while comparing themselves to an average they know doesn’t match their own circumstances. Basically they are taking their best case scenario and comparing it against average/worst case scenario stats.
I’d give more weight to the stats if they were completely transparent, worked with a neutral 3rd party, and gave them access to all their video/data/etc to generate (at the very least) proper stats relative to their environment. Sure, I’ll believe Waymo’s/Cruise’s numbers far more readily than Tesla’s, but I still take them with a grain of salt. Because again, they have a HUGE incentive to tweak their numbers to put themselves in the very best light.
No, I see your point, and I agree. These companies are almost guaranteed to cherry-pick those stats, so only a fool would take that as hard evidence. However, I don’t think these stats flat-out lie either. If they show a self-driving car is three times less prone to accidents, I doubt the truth is that humans, in fact, are twice as good. I believe it’s safe to assume that these stats at least point us in the right direction, and that seems to correlate with the little personal experience I have as well. If these systems really sucked as much as the most hardcore AV-skeptics make it seem, I doubt we’d be seeing any of these in use on public roads because the issues would be apparent.
However, the point I’m trying to highlight here is that I make a claim about AV safety and then provide some stats to back me up. People then come telling me that’s nonsense and list a bunch of personal reasons why they feel so, but provide no concrete evidence except maybe links to articles about individual accidents. That’s just not the kind of data that’s going to change my mind.
People aren’t against autonomous vehicles, but against them getting let loose on public roads with zero checks or transparency. We basically learn what they are and aren’t capable of one crash at a time, when all of that should have been figured out years ago in the lab.
The fact that they can put a safety driver in them to absorb any blame is another scandal.
That’s only due to them not driving in the same condition as humans. Let them drive in fog and suddenly they can’t even see clearly visible emergency vehicles.
None of this would be a problem if those companies would be transparent about what those vehicles are capable of and how they react in unusual situations. All of which they should have tested a million times over in simulation already.
With Tesla the complaint is that the statistics are almost all highway miles, so they don’t represent the most challenging conditions, which is city driving. Cruise then exclusively drives in a city, and yet this isn’t good enough either. The AV-sceptics are really hard to please…
You’ll always be able to find individual incidents where these systems fail. They’re never going to be foolproof and the more of them that are out there the more news like this you’re going to see. If we reported about human-caused crashes with the same enthusiasm that would be all the news you’re hearing from then on and letting humans drive would seem like the most scandalous thing imaginable.
I do not care about situations that they work in, I care about what situations they will fail at. That’s what matters and that’s what no company will tell you. As said, we learn about the capabilities of self driving cars one crash at a time, and that’s just unacceptable when you could figure all of that out years ago in simulation.
So far none of the self-driving incidences I have seen were some kind of unforeseen freak situation, it was always some rare, but standard thing, fog, pedestrian crossing the road, road blocked by previous crash, etc.
Humans get into accidents all the time. Is that not unacceptable for you?
I feel like people apply standards to self-driving cars that they don’t to human-driven ones. It’s unreasonable to expect a self-driving system never to fail. It’s unreasonable to imagine you can just let it practice in simulation until it’s perfect. This is what happens when you narrowly focus on one aspect of self-driving cars (individual accidents) - you miss the big picture.
Human drivers need to pass driving test, self-driving cars do not. Human drivers also have a baseline of common sense that self-driving cars do not have, so they really would need more testing than humans, not less.
I don’t expect them to never fail, I just want to know when they fail and how badly.
What’s unreasonable about that?
They are only “individual” because there aren’t very many self-driving cars and because not every fail ends up deadly.
Tesla on FSD could easily pass the driving test that’s required for humans. That’s a nonsensical standard. Most people with fresh license are horribly incompetent drivers.
So why don’t we check it? Right now we are blindly trusting the claims of companies.
What are these claims we’re blindly trusting exactly? Do you have any direct quotes?
“Over 6.1 million miles (21 months of driving) in Arizona, Waymo’s vehicles were involved in 47 collisions and near-misses, none of which resulted in injuries”
How many human drivers have done millions of miles of driving before they were allowed to drive unsupervised? Your assertion that these systems are untested is just wrong.
“These crashes included rear-enders, vehicle swipes, and even one incident when a Waymo vehicle was T-boned at an intersection by another car at nearly 40 mph. The company said that no one was seriously injured and “nearly all” of the collisions were the fault of the other driver.”
According to insurance companies, human driven cars have 1.24 injuries per million miles travelled. So, if Waymo was “as good as a typical human driver” then there would have been several injuries. They had zero serious injuries.
The data (at least from reputable companies like Waymo) is absolutely available and in excruciating detail. Go look it up.
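Taking the figures in this thread at face value (6.1 million miles, 1.24 injuries per million human-driven miles, zero serious injuries for Waymo), a rough back-of-envelope Poisson check shows why that comparison is at least suggestive:

```python
import math

# Numbers quoted in the comments above; treat them as assumptions, not verified data.
miles_millions = 6.1   # Waymo's reported Arizona mileage
human_rate = 1.24      # injuries per million miles for human drivers (insurance figure)

# If Waymo injured people at the human baseline rate, we'd expect this many injuries:
expected = miles_millions * human_rate          # ~7.6 injuries

# Probability of seeing zero injuries purely by luck, modelling injuries as Poisson:
p_zero = math.exp(-expected)

print(f"expected injuries at human rate: {expected:.1f}")
print(f"chance of zero injuries by luck: {p_zero:.5f}")  # ~0.0005
```

Of course this inherits every flaw of the inputs (self-reported miles, geo-fenced easy conditions, apples-to-oranges baseline), so it shows the gap is statistically real for *those* miles, not that the fleet generalises.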
As already said, I want to know where they fail, preferably in the simulator, not on actual roads. Having vehicles drive in circles on carefully selected roads while racking up a lot of miles is no big accomplishment, and not comparable with humans, who have to drive on all roads under all conditions.
As a software developer, that’s not how testing works. QA is always trying to come up with weird edge cases to test, but once it’s out in the wild with thousands (or more) of real-world users, there’s always going to be something nobody ever tried to test.
For example, there was a crash where an unmarked truck with exactly the same color as the sky was 90° sideways on the highway. This is just something you wouldn’t think of in lab conditions.
That’s not what is happening. We don’t see weird edge cases, we see self driving cars blocking emergency vehicles and driving through barriers.
The sky is blue and the truck was white. Testing the dynamic range of the camera system is absolutely something you do in a lab situation. And a thing blocking the road isn’t exactly unforeseen either.
Or how about railroad crossings: Tesla can’t even tell the difference between a truck and a train. Trucks blipping in and out of existence, even changing direction, totally normal for Tesla too.
I don’t expect self driving cars to be perfect and handle everything, but I expect the manufacturers to be transparent about their abilities and they aren’t. Furthermore I expect the self driving system to have a way to react to unforeseen situations, crashing in fog is not acceptable when the fact that there was fog was plainly obvious.
Tesla’s system intentionally assumes “a thing blocking the road” is a sensor error.
They have said if they don’t do that, about every hour or so you’d drive past a building and it would slam on the brakes and stop in the middle of the road for no reason (and then, probably, a car would crash into you from behind).
The good sensors used by companies like Waymo don’t have that problem. They are very accurate.
That article you linked isn’t about self driving car. It’s about Tesla “autopilot” which constantly checks if a human is actively holding onto the steering wheel and depends on the human checking the road ahead for hazards so they can take over instantly. If the human sees flashing lights they are supposed to do so.
The fully autonomous cars that don’t need a human behind the wheel have much better sensors which can see through fog.
Just because Tesla is worse than others doesn’t make it not self-driving. The “wiggle the steering wheel” feature is little more than a way to shift blame to the driver instead of the crappy self-driving software.
Humans fundamentally can’t do that. If you sit a human in a self-driving car doing nothing for hours, they won’t be able to react in a split second when it is needed. Sharing driving in that way does not work.
Is anybody actively testing them in bad weather conditions? Or are we just blindly trusting claims from the manufacturers yet again?
The fact that Tesla requires a human driver to take over constantly makes it not self-driving.
The Human isn’t supposed to be “doing nothing”. The human is supposed to be driving the car. Autopilot is simply keeping the car in the correct lane for you, and also adjusting the speed to match the car ahead.
Tesla’s system won’t even stop at an intersection if you need to give way (for example, a stop sign. Or a red traffic light). There’s plenty of stuff the human needs to be doing other than turning the steering wheel. If there is a vehicle stopped in the middle of the road Tesla’s system will drive straight into it at full speed without even touching the brakes. That’s not something that “might happen” it’s something that will happen, and has happened, any time a stationary vehicle is parked on the road. It can detect the car ahead of you slowing down. It cannot detect a stopped vehicle.
They’ve promised to ship a more capable system “soon” for over a decade. I don’t see any evidence that it’s actually close to shipping though. The autonomous systems by other manufacturers are significantly more advanced. They shouldn’t be compared to Tesla at all.
Yes. Tens of millions of miles of testing, and they pay especially close attention to any situations where the sensors could potentially fail. Waymo says their biggest challenge is mud (splashed up from other cars) covering the sensors. But the cars are able to detect this, and the mud can be wiped off. It’s a solvable problem.
Unlike Tesla, most of the other manufacturers consider this a research project and are focusing all of their efforts on making the technology better/safer/etc. They’re not making empty promises and they’re being cautious.
On top of the millions of miles of actual testing, they also record all the sensor data for those miles and use it to run updated versions of the algorithm in exactly the same scenario. So the millions of miles have, in fact, been driven thousands and thousands of times over for each iteration of their software.
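That replay idea can be sketched in miniature: logged scenarios get re-run through each new version of the driving policy, so a regression shows up in the log replay before the code ever touches a road. Everything here (the scenario format, the two policies) is invented purely for illustration:

```python
# Hypothetical sketch of regression testing against recorded scenarios.

def policy_v1(scenario):
    # Older policy: only brakes for moving obstacles (misses stopped vehicles).
    return "brake" if scenario["obstacle"] and scenario["obstacle_moving"] else "cruise"

def policy_v2(scenario):
    # Newer policy: brakes for any obstacle, moving or stopped.
    return "brake" if scenario["obstacle"] else "cruise"

# Recorded scenarios, each labelled with the action a safe driver actually took.
log = [
    {"obstacle": True,  "obstacle_moving": True,  "safe_action": "brake"},
    {"obstacle": True,  "obstacle_moving": False, "safe_action": "brake"},  # stopped truck
    {"obstacle": False, "obstacle_moving": False, "safe_action": "cruise"},
]

def replay(policy):
    """Count how many logged scenarios the policy handles safely."""
    return sum(policy(s) == s["safe_action"] for s in log)

print(f"v1 safe in {replay(policy_v1)}/{len(log)} scenarios")  # 2/3
print(f"v2 safe in {replay(policy_v2)}/{len(log)} scenarios")  # 3/3
```

The real systems replay raw sensor data rather than labelled dictionaries, but the principle is the same: the same recorded miles get driven again for every software iteration.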
You don’t understand why people on Lemmy, an alternative platform not controlled by corporations, might not want to get in a car literally controlled by a corporation?
I can easily see a future where your car locks you in and drives you to a police station if you do something “bad”.
As to their safety, I don’t think there are enough AVs to really judge this yet; of course Cruise’s website will claim Cruise AVs cause less accidents.
I can imagine in the future there will be gridlock in front of the police station, with AV cars full of black people, when the cops send out an APB with the description of a black suspect.
We’ve seen plenty of racist AI programs in the past because the programmers, intentionally or not, added their own bias into the training data.
Any dataset sourced from human activity (eg internet text as in Chat GPT) will always contain the current societal bias.
The AIs are not racist themselves, it’s a side effect of the full technology stack: cameras have lower dynamic resolution for darker colors, images get encoded with a gamma that leaves less information in darker areas, AIs that work fine with images of light skinned faces, don’t get the same amount of information from images of dark skinned faces, leading to higher uncertainty and more false positives.
The bias starts with cameras themselves; security cameras in particular should have an even higher dynamic range than the human eye, but instead they’re often a cheap afterthought, and then go figure out what have they recorded.
You’re putting words in my mouth. I wasn’t talking about people on Lemmy not wanting to get into one of these vehicles.
The people here don’t seem to want anyone getting into these vehicles. Many here are advocating for all-out ban on self-driving cars and demand that they’re polished to near perfection on closed roads before being allowed for public use even when the little statistics we already have mostly seem to indicate these are at worst as good as human drivers.
If it’s about Teslas, the complaint is often the lack of LiDAR and radar, and when it’s about Cruise, which has both, it’s then apparently about corruption. In both cases the reaction tends to be mostly emotional, and that’s why every time one provides statistics to back up the claims about safety, it just gets called marketing bullshit.
Honestly? I don’t want anyone to use AVs because I fear they will become popular enough that eventually I’ll be required to use one.
I honestly haven’t done enough research on AV safety to feel comfortable claiming anything concrete about it. I personally don’t feel comfortable with it yet since the technology is very new and I essentially need to trust it with my life. Maybe in a few years I’ll be more convinced.
I hear you. I love driving and I have zero interest in buying a self-driving vehicle. However, I can still stand outside my own preferences and look at it objectively enough to see that it’s just a matter of time until AI gets so good at it that it could be considered irresponsible to let a human drive. I don’t like it, but that’s progress.
Travelling in a community whose public roads require 100% AVs will probably be the safest implementation of driving, period. But if you don’t trust the tech, then just don’t live or travel in that community.
I suspect we’ll see an AV only lane on the hwy soon, and people will realize how much faster you can get through traffic without tailgaters and lane weavers constantly causing micro inefficiencies at best, and wrecks at worst.
When vehicle-to-vehicle communication improves, and gets standardized, it will be interesting to see “AV road trains” of them going almost bumper to bumper, speeding up and slowing down all at the same time.
Autonomous driving isn’t necessarily controlled by a corporation any more than your PC is. Sure, the earliest computers were all built and run by corporations and governments, but today we all enjoy (the choice of) computing autonomy because of those innovations.
I can be pro AV and EV without being pro corporate control over the industries. It’s a fallacy to conflate the two.
The fact is that letting humans drive in a world with AVs is like letting humans manually manage database entries in a world with MySQL. And the biggest difficulty is that we’re trying to live in a world where both humans and computers are “working out of the same database at the same time”. That’s a much more difficult problem to solve than just having robots do it all.
I still have a gas powered manual that I love driving, but I welcome the advancement in EV/AV technology, and am ready to adopt it as soon as sufficient open standards and repairability can be offered for them.
That’s just outright wrong.
Modern cars communicate with their manufacturer, and we don’t have any means to control this communication.
I can disconnect my PC from the internet, I cannot disconnect my car. I can install whatever OS and apps pleases me on my PC, I cannot do anything about the software on my car’s computer.
So, while I can take full control of my PC if it pleases me, I cannot take any control of my car.
With all due respect, you’re still not understanding what I’m saying.
If you traveled back 50+ years to when computers took up several hundred sq ft, one might try to make the same argument as you: “don’t rent time on IBM’s mainframe, they can see everything you’re computing and could sell it to your competitor! Computers are always owned by the corporate elite, therefore computers are bad and the technology should be avoided!” But fast forward to today, and now you can own your own PC and do everything you want to with it without anyone else involved. The tech progressed. It wasn’t wrong to not trust corporate owned computing, but the future of a tech itself is completely independent from the corporations who develop them.
For a more recent example, nearly 1 year ago, ChatGPT was released to the world. It was the first time most people had any experience with an LLM. And everything you sent to the bot was given to a proprietary, for-profit algorithm to further their corporate interests. One might have been tempted to say that LLMs and AI would always serve the corporate elite, and we should avoid the technology like the plague. But fast forward to now, not even one year later, and people have replicated the tech in open source projects which you can run locally on your own hardware. Even Meta (the epitome of corporate control) has open sourced LLaMA to run for your own purposes without them seeing any of it (granted, the licenses limit what you can do commercially).
The story is the same for virtually any new technology, so my point is, to denounce all of AVs because today corporations own it is demonstrably shortsighted. Again, I’m not interested in the proprietary solutions available right now, but once the tech develops and we start seeing some open standards and repairability enter the picture, I’ll be all for it.
You might want to pick another example, because OpenAI was originally founded as a non-profit organisation, and in order to avoid going bankrupt they became a “capped profit” organisation, which allowed them to source funding from more sources… but doesn’t really allow them to ever become a big greedy tech company. All they’re able to do is offer some potential return to the people who are giving them hundreds of billions of dollars with no guarantee they’ll ever get it back.
Maybe reread my post. I specifically picked ChatGPT as an example of proprietary corporate control over LLM tech.
I’m not sure your idea of 70s and 80s IT infrastructure is historically accurate.
50 years ago it was technically impossible to rent time on a mainframe/server owned by a third party without having physical access to the hardware.
You, or to be more accurate, your company would buy a mainframe and hire a mathematician turned programmer to write the software you need.
Even if – later in the course of IT development – you/your company did not develop your own software but bought proprietary software this software was technically not able to “call back home” until internet connection became standard.
So no, computers did not start with “the corporate elite” controlling them.
Computerized cars, on the other hand, have been controlled by their manufacturers since they were introduced. There is no open source alternative.
Open standards for computerized cars would be great — but I’m very pessimistic they will evolve unless publicly funded and/or enforced.
Fine by me, as long as the companies making the cars take all responsibility for accidents. Which, you know, the human drivers do.
But the car companies want to sell you their shitty autonomous driving software and make you be responsible.
If they don’t trust it enough, why should I?
Well, you shouldn’t trust it, and the car company tells you this. It’s not foolproof and not something to be blindly relied on. It’s a system that assists driving but doesn’t replace the driver. Not in its current form, at least, though they may be getting close.
Then what’s the discussion even about? I don’t want autonomous cars on the street because even their creators don’t trust them to make it.
Most people consider cruise control a quite useful feature, though it still requires you to pay attention so that you stay in your lane and don’t run into a slower vehicle in front of you. You can then keep adding features such as radar for adaptive cruise control and lane assist, and this further decreases the stuff you need to pay attention to, but you still need to sit there behind the wheel watching the road. These self-driving systems in their current form are no different. They’re just further along the spectrum towards self-driving. Some day we will reach the point where you sitting in the driver’s seat just introduces noise to the system, so better to go take a nap on the back seat. We’re not there yet, however. This is still just super-sophisticated cruise control.
It’s kind of like with chess engines. First humans are better at it than computers. Then computer + human is better than just the computer and then at some point the human is no longer needed and computer will from there on always be better.
I don’t feel like this is what we were talking about - at least I was talking about cars that drive alone.
Well, Cruise is offering a fully self-driving taxi service where they don’t mandate that you as a passenger pay attention to traffic and take control if needed, so it’s not fair to say “they don’t trust it, so why should you”.
With Tesla however this is the case but despite their rather aggresive marketing they still make it very clear that this is not finished yet and you are allowed to use it but you’re still the driver and the safe use of it is on your responsibility. That’s the case with the beta version of any software; you get it early which is what early adopters like but you’re expected to encounter bugs and this is the trade-off you have to accept.
Is the company legally liable for the actions of the self driving car? If no, then they don’t trust the vehicles.
What charges would apply against a human that delayed an emergency vehicle and caused someone to die?
There are several court cases ongoing about this stuff, and I’d be surprised if these companies didn’t have any liability.
The discussed incident does not involve driver-assist systems; driverless autonomous taxis are already on the streets:
But do they really? If so, why’s there the saying “if you want to murder someone, do it in a car”?
I do think self-driving cars should be held to a higher standard than humans, but I believe the fundamental disagreement is in precisely how much higher.
While zero incidents is naturally what they should be aiming for, it’s more of a goal for continuous improvement, like it is for air travel.
What liability can/should we place on companies that provide autonomous drivers that will ultimately lead to safer travel for everyone?
Well, the laws for sure aren’t perfect, but people are responsible for the accidents they cause. Obviously there are plenty of exceptions, like rich people, but if we’re talking about the ideal real-life scenario, there are consequences for causing an accident. Whether those consequences are appropriate or not is for another discussion.
As far as I know, proper self driving (not “autopilot”) AVs are pretty close to zero incidents if you only count crashes where they are at fault.
When another car runs a red light and smashes into the side of an autonomous vehicle at 40mph… it wasn’t the AV’s fault. Those crashes should not be counted, and as far as I know they currently are in most stats.
I’m fine with exactly the same liability as human drivers have. Unlike humans, who are motivated to drive dangerously for fun or get home when they’re high on drugs or continue driving through the night without sleep to avoid paying for a hotel, autonomous vehicles have zero motivation to take risks.
In the absence of that motivation, the simple fact that insurance against accidents is expensive is more than enough to encourage these companies to continue to invest in making their cars safer. Because the safer the cars, the lower their insurance premiums will be.
Globally insurance against car accidents is approaching half a trillion dollars per year and increasing over time. With money like that on the line, why not spend a lazy hundred billion dollars or so on better safety? It won’t actually cost anything - it will save money.
That… almost makes it sound like the main opposition to autonomous cars would come from insurance companies: they can’t earn more by raising premiums if there are no accidents and a competing insurance company can offer much cheaper coverage.
I saw a video years ago discussing this topic.
How good is “good enough” for self-driving cars?
The bar is much higher than it is for human drivers because we downplay our own shortcomings and think that we have less risk than the average driver.
Humans can be good drivers, sure. But we have serious attention deficits. This means it doesn’t take a big distraction before we blow a red light or fail to observe a pedestrian.
Hell, a lot of humans fail to observe and yield to emergency vehicles as well.
None of that is newsworthy, but an autonomous vehicle failing to yield is.
My personal opinion is that the Cruise vehicles are about as ready for operational use as Tesla’s FSD, i.e. they should not be allowed.
Obviously corporations will push to be allowed so they can start making money, but this is probably also the biggest threat to a self-driving future.
Regulated so strongly that humans end up being the ones in the driver’s seat for another few decades, with the cost in human lives that involves.
By definition nearly half of us are better than average drivers. Given that driving well is a matter of survival, I’ll take my own driving ability over any autonomous vehicle until they’re safer than 99% of drivers.
I mean, that’s an obvious one.
But how much better would it need to be? 99.9%? 99.9999999999999999999999%? Or just 99.01%?
A lot of people will have qualms as long as the chance of dying is higher than zero.
People have very poor understanding of statistics and will cancel holidays because someone in the vicinity of where they’re going got bitten by a shark (the current 10 year average of unprovoked shark bites is 74 per year).
Similarly we can expect people to go “I would never get into a self-driving car” when the news inevitably reports on a deadly accident even if the car was hit by a falling rock.
And then there’s the other question:
Since 50% of drivers are worse than the average - would you feel comfortable with those being replaced by self driving cars that were (proven to be) better than the average?
Given that I have no way of communicating with a driverless car, and communication is often important to driving, I’d rather deal with the kinda-bad human driver. I can compensate for their bad driving when I spot it and give them room. Or sometimes I can even convey information that helps them be safer while they’re not paying attention. I’ve definitely stopped crashes that didn’t involve me by using my horn.
There’s no amount of discussion or frantic hand waving that will alter the course of an automated vehicle.
I think you’re optimistic about communicating with the worst percentile of drivers, but I can’t argue with your reasoning.
Once I was driving down what had become a narrow street with high snow banks when I came across an older woman stuck between the banks repeatedly backing into the door of her neighbor’s car as she tried to get out of her driveway. After watching her do this for a couple of minutes I offered to get her car straightened out for her. She was ecstatic and about 30 seconds later we were both able to go about our days.
Sounds like other people might have been better off if you left her there (minus her neighbor) 🙈
I don’t think drivers are supposed to communicate like that… but it raises a better question: how is a cop directing traffic supposed to communicate with a driverless car?
If there’s no mechanism in place, that’s a huge oversight… while if there is one, why didn’t they use it in this case?
I’m not gonna join in the discussion, but if you cite numbers, please don’t link to the advertising website of the company itself. They have a strong interest in cherry picking the data to make positive claims.
These companies are the only ones with access to those stats. Nobody else has them. The alternative here is to not cite stats at all. If you think the stats are wrong, you can go find an alternative source and post it here.
If they do not give researchers access to the data, then I can guarantee you they are cherry picking their results. A research paper in a reputable journal would be easy publicity and create a lot of trust in the public.
They can’t come quickly enough for me. I can go to work after a night out without fear that I might still be over the limit. I won’t have to drive my wife everywhere. Old people will not be prisoners in their own homes. No more knobheads driving about with exhausts that sound like a shootout with the cops. No more arseholes speeding about and cutting you up. No more hit-and-runs. Traffic accident numbers falling through the floor. In fact it could even get to the point where the only accidents are the fault of pedestrians/cyclists not looking where they are going.
All of these are solved by better public transport, safe bike routes, and more walkable city design. All of which we can do now, rather than relying on some new shiny tech so that we can keep car companies’ profits up.
The day I can get in a car and not be simultaneously afraid of my own shortcomings and the fact that there are strangers driving massive projectiles around me is a day I will truly celebrate. The fact is that automobiles are weapons, and I don’t want to be the one wielding it when a single mistake can cost an entire family their lives, although I would like to be there to slam on the brakes and prevent it if needed.
The possibilities really are endless.
When the light turns green, the entire row of cars can start moving at the same time, like in motorsports. Perhaps you wouldn’t even need traffic lights, because they could all just drive into the intersection at the same time and keep barely missing each other but never crash, thanks to the superior reaction times and processing speeds of computers. You could also let your car taxi other people around when you don’t need it.
What if we tied that entire row of cars together as one unit so we could save cost on putting high end computers in each car? Give them their own dedicated lane because we will never have 100% fully autonomous cars on the road unless we make human drivers illegal.
I’ll call my invention a train.
I think you might need lights for pedestrians at crossings.
I did wonder if ambulances would need sirens but again, pedestrians!
Just ban pedestrians. Problem solved.
deleted by creator
Even better!
Which way shall we choose?
🤔difficult choice.
deleted by creator
For me it’s because they’re controlled by a few evil companies. I’m not against them in concept. Human drivers are the fucking worst.
It sure would be nice if the bar were the rational one of “better”, but people aren’t rational. It’s literally never going to be good enough, because even if it were perfect it still couldn’t be used.
I think one of the big issues psychologically about self-driving cars that people find really hard to come to terms with is the fact that even with the best systems, accidents are bound to happen and without a driver there’s no one to blame and we hate that.
There is - the company. Which they obviously don’t like. I think a huge chunk of people would be fine with them if the companies took responsibility.
I remember something about Mercedes taking liability when self driving is active, although I don’t know if that still holds. Still, this seems like something that can be approached with proper legislation, assuming we can get past the lobbying BS in the US (though the EU will probably make the right call much sooner).
Yep, I’m pretty confident they won’t be legally driving autonomously on EU roads until they conform to strict legislation, which I’m fairly sure will include the liability of the company.
Nice of Mercedes to do the right thing without being forced to, that’s surprisingly rare.
I believe they’re already allowed in Germany actually, although their autonomous driving feature is very limited in where it can be activated. Hopefully other vehicle manufacturers follow suit and take liability when doing autonomous driving (as opposed to “assisted driving”, which many vehicles currently have).