A 2025 Tesla Model 3 in Full Self-Driving mode drives off of a rural road, clips a tree, loses a tire, flips over, and comes to rest on its roof. Luckily, the driver is alive and well, able to post about it on social media.
I just don't see how this technology could possibly be ready to power an autonomous taxi service by the end of next week.
The worst part is that this problem has already been solved by using LIDAR. Vegas had fully self-driving cars that I saw perform flawlessly, because they were manufactured by a company that doesn’t skimp on tech and rip people off.
NHTSA stated that the crashes “involved collisions with clearly visible objects that a competent driver would be expected to avoid.” The agency is continuing its investigation.
It'd probably be better to say that Lidar is the path to solving these problems, or a tool that can help solve them, but not that they're already solved.
Just because you see a car working perfectly doesn't mean it's always working perfectly.
The car made a fatal decision faster than any human could possibly correct it. Tesla’s idea that drivers can “supervise” these systems is, at this point, nothing more than a legal loophole.
What I don't get is how years of this false advertising haven't caused Tesla to go bankrupt already.
To put your number into perspective: if it failed only once every hundred miles, it would kill you multiple times a week at the average commute distance.
That's probably not the actual failure rate, but a 1% failure rate is several thousand times higher than what NASA would consider an abort-risk condition.
Let's say it's only a 0.01% risk; that's still several thousand crashes per year. Even if we could guarantee that all of them would be non-fatal and wouldn't involve any bystanders such as pedestrians, the cost of replacing all of those vehicles every time they crashed, plus fixing the damage to the things they crashed into (lamp posts, shop windows, etc.), would be so high as to exceed any benefit of the technology.
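To make the arithmetic in this sub-thread concrete, here's a rough back-of-envelope sketch. The 40-mile round-trip commute, 10,000 annual miles per car, and 5,000-car fleet are illustrative assumptions, not measured Tesla data:

```python
# Back-of-envelope check on the failure-rate claims above.
# All inputs are assumptions for illustration, not measured data.

AVG_COMMUTE_MILES_PER_DAY = 40   # assumed US round-trip commute
DAYS_PER_WEEK = 5

def expected_failures(per_mile_rate: float, miles: float) -> float:
    """Expected failure count over a distance, treating failures as
    independent events occurring at a fixed per-mile rate."""
    return per_mile_rate * miles

# Claim 1: a 1-in-100-miles failure rate over a typical commuting week.
weekly_miles = AVG_COMMUTE_MILES_PER_DAY * DAYS_PER_WEEK   # 200 miles
print(expected_failures(1 / 100, weekly_miles))            # -> 2.0 per week

# Claim 2: a 0.01% (1-in-10,000-miles) rate, scaled to a modest fleet.
annual_miles_per_car = 10_000    # assumed yearly mileage per car
fleet_size = 5_000               # assumed number of cars using the system
print(expected_failures(1 / 10_000, annual_miles_per_car) * fleet_size)
# -> 5000.0 crashes/year: "several thousand" even for a small fleet
```

Even under these deliberately small assumptions, the expected crash counts line up with the numbers above; a real fleet in the millions would scale them proportionally.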
It wouldn't be as bad if this were prototype technology that was constantly improving, but Tesla has made it very clear they're never going to add lidar scanners, so it's literally never going to get any better; it's always going to be this bad.
Because the US is an insane country where you can straight up just break the law and as long as you're rich enough you don't even get a slap on the wrist. If some small startup had done the same thing they'd have been shut down.
What I don't get is why teslas aren't banned all over the world for being so fundamentally unsafe.
I've argued this point for the past year: there are obvious safety problems with Teslas, even without considering FSD.
Like the blinker controls on the steering wheel, manual door handles that are hard to find in emergencies, and common operations being buried behind menus on the screen instead of having directly accessible buttons. With Autopilot they also tend to brake for no reason, even on the autobahn with a clear road ahead, which can also create dangerous situations.
Tunnels are extra dangerous. Not because an accident is more likely there, but because of the situation if one happens: a crash easily blocks the tunnel, fills it with smoke, and kills hundreds.
I use autopilot all the time on my boat. No way in hell I'd trust it in a car. They all occasionally get suicidal. Mine likes to lull you into a sense of false security, then take a sharp turn into a channel marker or cargo ship at the last second.
Exactly. My car doesn't have AP, but it does have a shed load of sensors, and sometimes it just freaks out about stuff being too close to the car for no discernible reason. Really freaks me out, as I'm like, what do you see bro, we're just driving down the motorway.
They've technically had autopilots for over a century; the first one was on the oil tanker J.A. Moffett in 1920. Though the main purpose is to keep the vessel going dead straight, as otherwise wind and currents turn it, so using modern car terms I think it would be more accurate to say they have lane assist?
Commercial ones can often do waypoint navigation, following a set route on a map, but I don't think that's very common on personal vessels.
Elon Musk decided they absolutely would not use lidar years ago, back when lidar was expensive enough that a decision like that made economic sense to at least try to make work. Nowadays lidar is a lot cheaper, but for whatever reason Musk has drawn a line in the sand and refuses to back down on it.
Unlike many people online these days I don't believe that Musk is some kind of sheer-luck bought-his-way-into-success grifter, he has been genuinely involved in many of the decisions that made his companies grow. But this is one of the downsides of that (Cybertruck is another). He's forced through ideas that turned out to be amazing, but he's also forced through ideas that sucked. He seems to be increasingly having trouble distinguishing them.
This represents the danger of expecting driver override to avoid accidents. If the driver has to be prepared to take control in an accident like this AT ALL TIMES, then the driver is required to be more engaged than they would be if they were just driving manually, because they have to be constantly anticipating not just what other hazards (drivers, pedestrians, …) might be doing; they have to be anticipating in what ways their own vehicle may be trying to kill them.
I've got a car with level 2 automation, and after using it for a few months, I can say that it works really well, but you still need to be engaged to drive the car.
What it is good at...
Maintaining lanes, even in tricky situations with poor paint/markings.
Maintaining speed and distance from the car in front of you.
What it is not good at...
Tricky traffic, congestion, or sudden stops.
Lane changes.
Accounting for cars coming up behind you.
Avoiding road hazards.
I use it mostly like an autopilot. The car takes some of the monotonous workload out of driving, which allows me to move my focus from driving the car to observing traffic, other drivers, and road conditions.
I am never getting into a self driving car. I don't understand why we are investing money into this technology when people can already drive cars on their own, and we should be moving towards robust public transportation systems anyway. A waste of time and resources to... what exactly? Stare at your phone for a few extra minutes a day? Work from home and every city having robust electric transit systems is what the future is supposed to be.
In general I am opposed to machines being in direct control of weapons. I am also definitely of the opinion that there are lots of people who shouldn't be driving.
Ditto! They were about 1 foot from hitting the tree head-on rather than glancing off; it could have easily been fatal. Weirdly small axes of random chance that the world spins on.
They are running proprietary software in the car, and people don't even know what's happening in the background of it. Every electric car needs to be turned into an open-source car so that the car can't be tampered with, there's no surveillance, etc.
Everyone should advocate for that, because the alternative is this, with Tesla. And I know nobody wants this happening to other car manufacturers' cars either.
Why would someone be a passenger in a self-driving vehicle? Do they know that they are test subjects, part of a "car trial" (or whatever it should be called)? Self-driving is not reliable and not necessary. Too much money is invested in something that is low priority to have. There are perfectly fast and safe self-driving solutions already, like high-speed trains.
It's fine, nothing at all wrong with using just camera vision for autonomous driving. Nothing wrong at all. So a few cars run off roads or don't stop for pedestrians or drive off a cliff. So freaking what, that's the price for progress my friend!
I'd like to think this is unnecessary but just in case here's a /s for y'all.
GPS data predicted the road would go straight as far as the horizon. Camera said the tree or shadow was an unexpected 90 degree bend in the road. So the only rational move was to turn 90 degrees, obviously! No notes, no whammies, flawless
The problem with automation is complacency. Especially in something that people already have a very hard time taking seriously like driving where cell phone distraction, conversations, or just zoning out is super common.
It's ready, but you're assuming an entirely general taxi service. It will be carefully constrained, like Waymo was: limited to easy streets and times, probably lower speeds, where there is less chance of problems. It's ready for that.
There’s always a reason. I agree with the author: most likely it misinterpreted a shadow as a solid obstacle. I’m not excusing it but humans do that too, and Tesla will likely ensure it doesn’t come up in their taxi service.
Remember that robotaxi doesn't actually exist yet. I'm pretty sure the plan is to start with Model Ys that have human safety drivers. It's ready for that.
I did a trial to find out for myself, and my reason for it not being ready yet is a bit different. Full Self-Driving performed perfectly under "normal" conditions, and every time it made me nervous it was an edge case. However, it made me realize that driving is all edge cases. It's not ready, and may never be.