I think we all know that HW3 cars can't do self-driving. Elon knew this from the beginning. The idea has always been -- implicitly -- that by the time they solve FSD, Tesla will have so much money it can just upgrade the old cars with new hardware. It just turns out that solving FSD is orders of magnitude more difficult than Elon anticipated. Despite many rewrites of the FSD stack, Tesla still has a long way to go. Unyielding optimism is a double-edged sword: you'll try things other people won't dare to, and there are incredible rewards when you prove the naysayers wrong, but you face harsh criticism when you fail at things others won't even try.
I can immediately tell that this is the perspective of someone whose sole source of information about FSD is the MSM. My Tesla drives me around the city autonomously every day, and you're sitting here telling me it doesn't. FSD is already a solved problem. It's real, you can buy it, and you can use it, right now. And it keeps getting better and better with every new version.
Anecdotes are never a good sample.
It drives well in your city, but we don't know anything about how Tesla has tailored that experience to your city. Did they need to run around mapping it with their own LIDARs and feed that into some learning model to provide it in your city? Because that won't scale. Did they use only camera vision? Because there are also multiple videos showing FSD not driving well at all in some cities.
We cannot tell, and that's the whole problem. Your anecdote that FSD works well for you is an N = 1. There are other reports on HN of people having the same experience as you, and it's great that it works for you all! But it doesn't mean it's been generalised and works in any city. We simply cannot know, and the longer time passes, the less trust the general public has in Tesla.
As another anecdote: I know at least 2 people for whom FSD doesn't work well in their cities, to the point where they're afraid of turning it on even when paying full attention, and both regret paying for it. This is not a good sample either; we just don't have enough data.
> It drives well in your city, but we don't know anything about how Tesla has tailored that experience to your city.
Spot on.
I am keen to see how FSD manages to execute hook turns to accommodate for the trams here in Melbourne, AU.
https://en.m.wikipedia.org/wiki/Hook_turn
What was promised was a robotaxi, not “keep your eyes on the road or FSD will disengage.” As it is, if you let FSD drive and don’t ever intervene, you will have frequent crashes.
I love FSD, but this is absolutely true. It is an excellent hands-free driving system. It is NOT autonomous without supervision.
Seconded. It has also ebbed and flowed. About 3 months ago it was simply awful. It's gotten a lot better recently, but nowhere near good enough that I'll just trust it.
That's good enough for me. I have no intention of using my car as a taxi. I'd love a reasonably priced SFSD in my country.
The thesis makes sense. Humans drive with a 20-watt brain, two cameras (eyes), and two microphones (ears). Tesla has 9 cameras surrounding the car, and I assume some microphones (not sure), plus two big fat neural processors that can fail over in case one fails.
If they crack the human perception algorithm, they'd have the cheapest robotaxis on the road and a multi-trillion-dollar valuation.
However, cracking human perception seems much more complex than what was estimated and foretold.
I'm grateful we have multiple US companies trying different approaches. Competition is a wonderful gift.
> However, cracking human perception seems much more complex than what was estimated and foretold.
Who was estimating that cracking how the human brain works, specifically the visual cortex together with our decision-making, would be the easier path to this endeavour?
It was clear it would be a very complex thing to anyone who actually knows about the human brain; the rest was just wishful thinking.
Does it make sense? Is there any other area where the sheer amount of energy used tells you anything about how hard a problem is to solve? How long before my kettle is self-driving? It's got 3 kW; it should be winning Formula 1 by now.
> Humans drive with a 20 watt brain, two cameras (eyes), two microphones (ears).
And a huge amount of tactile feedback.
Telling me other motorists drive by Braille actually explains quite a lot
HW3 owner here. Tesla allows me to transfer FSD to a new car, and my car is getting into the 5-year-old zone. I'll probably go the transfer route.
I just came back from a long weekend with 30+ hours of driving. FSD did 25+ hours of that. It's definitely valuable even in its current form, though probably not worth what Tesla charges.
I did this. They transferred it and it worked for a month, then they pulled it from the new car and now I have to repurchase. What a joke. Next car will be a Rivian.
Sounds like Tesla baited you into buying two of their cars with the promise of FSD.
Huh? Can you share more details? Why do they force you to do so? What was their justification?
P.S. I was looking at Rivian, but it seems like their delivery of affordable cars is in limbo.
I transferred my fully paid-for FSD to my new Model Y through their amnesty program. It was a huge PITA to do. It worked on the new car for about a month, then the FSD upgrade was randomly removed from the car. I can't get them to add it back for free, as it should have been; instead they want me to re-buy FSD to get it on my new car again. Nightmare.
I don’t know if that clarified.
Seems perfectly clarified to me.
For the price of the two Teslas that OP ended up buying, they could have afforded the much nicer R1.
Can you share how much of that 25 hours wouldn't have been handled by current tech like lane control and radar cruise control on other cars? Was it doing a lot of lane changing or exits/entrances to roads? As a non-tesla owner I'm trying to gauge how much extra value there would be in switching since I do a lot of long trips.
What if you don't want a new Tesla but want the FSD you already bought?
just wait, next year for sure...
The article says "Making a mistake is not a fraud. If Tesla really thought that it could deliver unsupervised self-driving to vehicles equipped with HW3 and, at one point, it figured out that it couldn’t, it’s not fraud even though it used that as a selling point for millions of vehicles for years."
Does that mean I can claim I made a mistake if I do something wrong and get caught?
Fraud requires mens rea. You can stupid your way into plenty of crimes. But usually not fraud.
It may or may not be fraud but Tesla is still liable for their promises. If they said it would do X and it doesn’t, that’s a defective product and Tesla owes compensation.
> but Tesla is still liable for their promises. If they said it would do X and it doesn’t, that’s a defective product and Tesla owes compensation.
Not really. A judge recently ruled that it was 'corporate puffery' in a very similar case.
https://www.theverge.com/2024/10/1/24259588/tesla-lawsuit-au...
key bit:
> ... if Tesla knows that it can’t deliver unsupervised self-driving on HW3, it needs to let owners know right now and stop selling the software package to HW3 owners without a clear plan to make things right. Otherwise, this quickly becomes fraudulent.
They are delivering updates as of now; it's just a question of how "FSD" it will actually be, lol. It seems like the model will be a quantized version of what will run on the HW5 Robotaxi.
> quantized version of what will be on HW5 Robotaxi
HW5 will feature analog optics and computing? How steampunk!
>Let’s be honest. Tech is rarely supported with software updates after 5-7 years. Tesla Hardware 3 is entering that zone. It is becoming obsolete and normally, it wouldn’t be a problem, but Tesla sold a Full Self-Driving capability package for up to $15,000 based on this hardware that it never delivered.
>At the minimum, it will have to reimburse that, but owners can even argue that they bought the car because Elon Musk told them it would become self-driving over time and become an “appreciating asset.”
>This could quickly become a very large liability for Tesla, and the way it handles it is also important.
Anyone who accepts a claim that a vehicle will become an appreciating asset is an idiot.
It might appreciate, once, if Tesla ships FSD and creates a robotaxi market.
However, something in active production, providing a commodity value add, and competing with newer versions of itself is never going to continually appreciate.
Sounds like fraud to me
This is such a weird thing to be upset about. Musk announced this hardware was ready for full self-driving in 2016, so anyone who bought it now has an 8-year-old car that still doesn't self-drive. The author is all upset that this 8-year-old car may never get FSD turned on, but how is that really worse than owning the car for the last 8 years? If I had to pick something to be annoyed about, it would be that they still haven't delivered full self-driving in a period of time most people would consider the full length of time they expected to own the car!
It's also just a really weird attitude that this will turn out to be a massive liability and that they'll need to think about retrofits. The truth is simple: they didn't sell that many of these cars; of those they did sell, the majority won't have bought the self-driving package at its exorbitant price; and most owners will have upgraded to a newer model anyway. So what's the plan? They'll do nothing. They won't admit it won't happen; they'll just say they'll get to it one day and keep offering relatively small incentives to upgrade away from these cars. What are you going to do, sue them? Good luck. You have literally shown that over the last 8 years he can just wave his arms and say "Well, we'll get to it one day" and you'll believe he's fulfilled his obligations, so he'll continue doing that.
> anyone who bought this now has an 8 year old car that still doesn't self-drive
Senior people at Tesla are still repeating the claim, which freshens liability. And Tesla will still sell you FSD on HW3.
Why would anyone spend 15k based on a promise?
Every second wedding out there? /s
> According to most experts, Tesla needs a ~1,000x increase in miles between disengagement to deliver on its unsupervised self-driving promises.
So they need 122,000 miles between disengagements? Really? That's a rather crazy standard.
I don't know anything about the industry, or definitions, or terms used. I'm just "a person in the street in a hurry".
"Unsupervised" sounds like you can take a nap in the back while the car takes you somewhere to me.
What's an appropriate safety standard the self-driving car must meet to make that safe enough? I'd say that the self-driving needs to be [at least] as reliable as a human driver.
If someone said that the average human driver has some kind of failure (loss of attention causing accident, feels unwell and needs to pull over, etc) once every 122,000 miles, that sounds about right to me.
Humans aged 30 to 80 have 350 accidents every 100 million miles [1]. That implies over 280,000 miles between collisions. Intervention every 122,000 miles is the average 17- to 18-year-old.
(The data also suggest widespread self-driving cars should prompt adjusting the unsupervised driving age up to 18 to 20.)
[1] https://www.friedmansimon.com/faqs/how-common-are-car-accide...
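For anyone who wants to check the arithmetic behind those figures, here's a minimal sanity check in Python. It assumes the 350-accidents-per-100-million-miles rate cited in [1] and the ~122,000-mile disengagement threshold discussed upthread; the numbers come from the comments, not from any independent source.

    # Quick sanity check of the accident-rate arithmetic above.
    # Figures are taken from the comment and source [1]; treat them as approximate.
    miles = 100_000_000          # miles of driving in the cited statistic
    accidents_30_to_80 = 350     # accidents over that distance, drivers aged 30-80

    miles_per_accident = miles / accidents_30_to_80
    print(f"Ages 30-80: one accident every ~{miles_per_accident:,.0f} miles")
    # -> ~285,714 miles, i.e. the "over 280,000 miles" figure above

    threshold = 122_000          # miles-between-disengagements target discussed upthread
    print(f"That target is ~{threshold / miles_per_accident:.0%} of the 30-80 average")
    # -> ~43%, so 122,000 miles sits well below the average adult's crash interval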
> Intervention every 122,000 miles is the average 17 to 18-year old.
Your source says a crash every 70,000 miles for 17 year olds, to my understanding. Number of unsafe actions that would warrant a disengagement, but didn't happen to cause a crash, would likely be significantly higher (e.g., the 17-year-old driver speeds 100 times and crashes once).
> Your source says a crash every 70,000 miles for 17 year olds, to my understanding
Correct.
> Number of unsafe actions that would warrant a disengagement, but didn't happen to cause a crash, would likely be significantly higher
Why? When I’m in a Waymo, I’m taking a nap or entirely uninvolved in the driving nearly 100% of the time. I am worse situated than a drunk, high teen because I’m in the back seat and not even pretending or trying to pay attention. And that’s Level 4. If you’re marketing Level 5, every intervention should be treated like an autopilot failure in aviation: more than we do for crashes.
If I’m in a car with a driver where I need to regularly grab the wheel to keep them from colliding, they’re an unsafe driver.
> Why?
What percentage of speeding, for instance, do you think materialises into a crash? Even the most dangerous unsafe actions, like running a red light, won't cause a crash most of the time. So if a 17-year-old crashes every 70,000 miles, they're doing something unsafe far more frequently than that.
> every intervention should be treated like an autopilot failure in aviation: more than we do for crashes
I think the benchmark for now should be whether it's safer than a human driver. We could compare human crashes caused to autonomous crashes caused, or unsafe human actions to unsafe autonomous actions, but I believe you're trying to conflate actual human crashes with unsafe autonomous actions.
> you're trying to conflate actual human crashes with unsafe autonomous actions
For a Level 5 system, yes, a situation where the car requires intervention is equivalent to a crash or avoided crash. Particularly if it won’t have a steering wheel or pedals.
A disengagement may be equivalent to a potential crash (or just a mistake on the part of the human doing the disengaging), but not to an actual crash. That someone intervenes to prevent a dodgy overtake does not mean the car was definitely (or even likely) going to crash had they not intervened, for instance.
1. There is a big difference between interventions and collisions. 2. I make mistakes a lot more often than every 122,000 miles.
I think it should be at least 165,000 miles in the US; that's what Waymo claims, anyway.
I think it should be 100x higher than that to be called Level 5.
... or wait until all HW3 models are recycled. Is this a viable strategy? Sell a feature that never arrives, and the owner puts the product in the trash without ever having used or accessed that feature? This was announced in 2016.
Electrek, as a publication outlet, has basically lost all credibility and has been in a downward spiral for the last few years. I would take any of their reporting on Tesla with a heavy grain of salt.