In the piece — titled “Can You Fool a Self Driving Car?” — Rober found that a Tesla car on Autopilot was fooled by a Wile E. Coyote-style wall painted to look like the road ahead of it, with the electric vehicle plowing right through it instead of stopping.
The footage was damning enough, with slow-motion clips showing the car not only crashing through the styrofoam wall but also hitting a mannequin of a child. The Tesla was also fooled by simulated rain and fog.
Why would a car that expensive not have a LiDAR sensor?
They cost hundreds of dollars!!
/$
Because they commonly use radar instead. The modern sensors that are also used for adaptive cruise control even have heaters to defrost the sensor housing in winter.
Hell, they don’t even have radar anymore, even though a lot of low-end cars have that.
Technically cost savings, but it seems mostly about stubborn insistence on cameras being enough.
The supplier he was using couldn’t supply lidar fast enough, and it was at risk of slowing his manufacturing.
So he worked out a way to not need it, and told everyone this solution was superior.
Read about this somewhere. Iirc, Elon felt cameras were better than LiDAR at a time when that was kinda true, but the technology improved considerably in the interim and he pridefully refuses to admit he needs to adapt.
Elon felt cameras were better than LiDAR at a time when that was kinda true,
that was never true
Found the article! I had breezed through the thing. I was incorrect about the LiDAR/camera thing. Instead it was: ‘Elon even admitted that “very high-resolution radars would be better than pure vision”, but he claimed that “such a radar does not exist”’
He, of course, was incorrect and was proven incorrect, but ‘the problem is that Musk has taken such a strong stance against [LiDARs] for so long that now that they have improved immensely and reduced in prices, he still can’t admit that he was wrong and use them.’
he claimed that “such a radar does not exist”
Lol just like his Nazi forefathers in WWII who refused to believe (more than once!) the British had the advanced radar that they actually did have.
Couldn’t he just use both… Like LiDAR as a contingency
I added a correction in another reply. Basically he stubbornly refuses to believe a powerful enough LiDAR exists. So I suppose he is all-in on “LieDAR” technology instead (yes, I kinda feel bad about this pun too)
He could. In fact Waymos, for instance, do, and they are fully autonomous commercial taxis, while Tesla is still two years out from full self-driving for the tenth year in a row.
I don’t even understand that logic. Use both. Even if one is significantly better than the other, they each have different weaknesses and can mitigate for each other.
It was always just to save money and pad the profit margins
A LiDAR sensor couldn’t add more than a few hundred to a car, surely
And that’s a few hundred less profit, so we can’t have that.
They ditched radar at a time when radar only added probably about $50 a car according to some estimates.
It may technically get a smidge more profitable, but it almost seems like it’s more about hubris, the idea that the tech shouldn’t need more than a human does to do the job as well. Which, even if it were true, is a stupid stance to take when in that scenario you could have better-than-human senses.
And to make him think he’s a smart boy.
He didn’t think they were better. He thought Tesla could get away without the more expensive lidar. Basically, “humans can drive with just vision, so that should be enough for an autonomous vehicle too.” In other words, he did it because lidar is more expensive.
Even if humans can drive with just vision:
- Human vision has superb dynamic range, autofocus and other features that cameras costing thousands of dollars could only dream of (for the most part).
- I don’t want self driving cars to drive like humans. Humans make too many mistakes and are prone to bad decisions (see the need for safety systems in the first place).
- Train and bus transport is better for most people. Driving is a luxury, we’ve forced people that should not be driving to do so in order to keep a job and barely survive.
Human vision is great, human attention is not, and neither is their reaction time. Computers are 100x better at both of those. If you throw lidar into the mix, then a car’s vision is now much better than a human’s.
IMHO self-driving cars have to be statistically 10x better than humans to be widely implemented. If they pass that threshold then I’m fine with them.
I didn’t think it was about the cost. I think he just likes to be contrarian because he thinks it makes him seem smart. He then needs to stick by his stupid decisions.
I’m assuming it’s about cost because it makes sense to me. His goal was to build full self-driving (FSD) into every car and sell the service as a subscription.
If you add another $500 in components then that’s a lot of cost (probably a lot cheaper today, but this was 10 years ago). Cameras are cheap and can be spread around the car with additional non-FSD benefits, whereas lidar has far fewer uses when the cost is not covered. I think he used his “first-principles” argument as a justification to the engineers, as another way for him to say “I don’t want to pay for lidar, make it work with the cheap cameras.”
Why else would management take off the table an obviously extremely useful safety tool?
Because Musk insists that cameras are better and that LiDAR is flawed
That’s not really true.
He uses lidar at SpaceX because he knows it’s the right tool for that specific job.
His stance is not that cameras are better, but that cameras have to be so good for a truly autonomous vehicle that putting effort into both means you’re not going to make your cameras good enough and you’ll rely on lidar instead. That and cost.
If the car can’t process and understand the world via cameras, it’s doomed to fail at a mass scale anyway.
It might be a wrong stance, but it’s not that lidar is flawed.
Tesla even uses lidar to ground truth their cameras
I think the bigger issue is that he is saying redundancy is not important. He thinks cameras could be good enough; well, fine, but failure results in loss of life, so build in redundancy: lidar, radar, anything to fail over to. Whether it’s cutting costs or a belief that one system is good enough, either is despicable.
Ya, no redundancy is a problem for sure.
Because Tesla makes money, with the byproduct of cars.
There was a comedy channel on Youtube aeons ago that would do “if x were honest” videos. Their slogan for Valve was “We used to make games. Now we make money.”
Honest Ads is still around, they’ve just moved off the Cracked channel like how PitchMeetings moved off the ScreenRant channel.
It wasn’t Cracked, it was a channel called Gaming Wildlife, last video on the channel was posted 6 years ago; I think they’re defunct. here’s the video in question.
Ah, sorry, it just sounded very similar. Still recommend the honest ads though.
It amazes me they’re still doing them; feels like Roger has been at it for a decade now.
Lidar and radar systems don’t work internationally because they’re functionally banned in many Asian and European countries. Instead of finishing one system that was almost completely done, they went all-camera and now none of it works right.
Cameras are cheaper…that’s it
It’s a car that’s at least £10k
Cost cutting. Lidar is cheaper now but was relatively expensive, and it increased tech debt and maintenance. Also he legit thought that “human see good, then car see good too”. Tesla is being led by a literal idiot.
inb4 musk bricks his car remotely
I don’t think he’s actually driving it. Might even have been a rental.
He said it was his own car when he was interviewing with Phillip DeFranco. He also said he’s still planning on getting another Tesla when an updated model comes out.
I think it’s Rober’s own, I remember him doing videos in the past testing it out.
They sort of do with the Superchargers already. If someone has a car that’s deemed too damaged, it’s disconnected.
E. Lon Musk. Supah. Geenius.
Of course it disengages self driving modes before an impact. Why would they want to be liable for absolutely anything?
I can’t wait for all this brand loyalty and fan-people culture to end. Why is this even a thing? Like talking about box office results, companies’ financials and stocks… If you’re not an investor of theirs, just stop. It sounds like you’re working for free for them.
Man, these cars don’t have radar? Only eyes, like most animals? Not even as a backup? Not talking about lasers, but radar? Truck drivers, better not paint scenery on the back of your truck.
props to the LiDAR car for trying to drive through that heavy rain - does it just have enough resolution to see through the droplets to determine that there isn’t a solid object within braking distance?
As Electrek points out, Autopilot has a well-documented tendency to disengage right before a crash. Regulators have previously found that the advanced driver assistance software shuts off a fraction of a second before making impact.
This has been known.
They do it so they can evade liability for the crash.
Not sure how that helps in evading liability.
Every Tesla driver would need superhuman reaction speeds to respond in 17 frames, 680 ms (I didn’t check the recording framerate, but 25 fps is the slowest reasonable), less than a second.
It’s not likely to work, but them swapping to human control after it determined a crash is going to happen isn’t accidental.
Anything they can do to mire the proceedings they will do. It’s like how corporations file stupid junk motions to force plaintiffs to give up.
They’re talking about avoiding legal liability, not about actually doing the right thing. And of course you can see how it would help them avoid legal liability. The lawyers will walk into court and honestly say that at the time of the accident the human driver was in control of the vehicle.
And then that creates a discussion about how much time the human driver has to have in order to actually solve the problem, or gray areas about who exactly controls what when, and it complicates the situation enough where maybe Tesla can pay less money for the deaths that they are obviously responsible for.
They’re talking about avoiding legal liability, not about actually doing the right thing. And of course you can see how it would help them avoid legal liability. The lawyers will walk into court and honestly say that at the time of the accident the human driver was in control of the vehicle.
The plaintiff’s lawyers would say the autopilot was engaged, made the decision to run into the wall, and turned off 0.1 seconds before impact. Liability is not going to disappear when there were 4.9 seconds of making dangerous decisions and peacing out in the last 0.1.
The plaintiff’s lawyers would say the autopilot was engaged, made the decision to run into the wall, and turned off 0.1 seconds before impact. Liability is not going to disappear when there were 4.9 seconds of making dangerous decisions and peacing out in the last 0.1.
these strategies aren’t about actually winning the argument, it’s about making it excessively expensive to have the argument in the first place. Every motion requires a response by the counterparty, which requires billable time from the counterparty’s lawyers, and delays the trial. it’s just another variation on “defend, depose, deny”.
Defense lawyers can make a lot of hay with details like that. Nothing that gets the lawsuit dismissed but turning the question into “how much is each party responsible” when it was previously “Tesla drove me into a wall” can help reduce settlement amounts (as these things rarely go to trial).
They can also claim with a straight face, in public, in ads, etc., an autopilot crash rate that is artificially lowered, without it technically being a lie.
Which side has more money for lawyers though?
If the disengage-to-avoid-legal-consequences feature does exist, then you would think there would be some false-positive incidents where it turns off for no apparent reason. I found some with a search, which are attributed to bad software. Owners are discussing new patches fixing some problems and introducing new ones. None of the incidents caused an accident, so maybe the owners never hit the malicious code.
if it randomly turns off for unapparent reasons, people are going to be like ‘oh that’s weird’ and leave it at that. Tesla certainly isn’t going to admit that their code is malicious like that. at least not until the FBI is digging through their memos to show it was. and maybe not even then.
The given reason is simply that it will return control to the driver if it can’t figure out what to do, and all evidence is consistent with that. All self-driving cars have some variation of this. However, yes, it’s suspicious when it disengages right when you need it most. I also don’t know of data to support whether this is a pattern or just a feature of certain well-publicized cases.
Even in those false positives, it’s entirely consistent with the AI being confused, especially since many of these scenarios get addressed by software updates. I’m not trying to deny it, just saying the evidence is not as clear as people here are claiming.
I think Mark (who made the OG video) speculated it might be the ultrasonic parking sensors detecting something and disengaging.
The self-driving equivalent of “Jesus take the wheel!”
If it knows it’s about to crash, then why not just brake?
AEB was originally designed not to prevent a crash, but to slow the car when an unavoidable crash was detected.
It’s since gotten better and can also prevent crashes now, but slowing the speed of the crash was the original important piece. It’s a lot easier to predict an unavoidable crash than to detect a potential crash and stop in time.
Insurance companies offer a discount for having any type of AEB as even just slowing will reduce damages and their cost out of pocket.
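To put rough numbers on why even just slowing helps, here is a minimal sketch; the mass and speeds are my own illustrative assumptions, not figures from the video or the thread. Crash energy scales with the square of speed, so trimming a little speed before impact removes a disproportionately large chunk of the energy the crash has to dissipate.

```python
# Rough illustration: kinetic energy (a proxy for crash severity) scales
# with the square of speed. Mass and speeds below are assumptions.
def kinetic_energy_joules(mass_kg: float, speed_mps: float) -> float:
    return 0.5 * mass_kg * speed_mps ** 2

MPH_TO_MPS = 0.44704
car_mass_kg = 1800  # assumed; roughly a mid-size EV

for mph in (42, 35, 25):
    e_kj = kinetic_energy_joules(car_mass_kg, mph * MPH_TO_MPS) / 1000
    print(f"{mph} mph -> ~{e_kj:.0f} kJ to dissipate in the crash")
```

In this toy calculation, braking from 42 mph down to 25 mph before impact cuts the energy by roughly two thirds.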
Not all AEB systems are created equal though.
Maybe disengaging AP when an unavoidable crash is detected triggers the AEB system? Like maybe for AEB, which should always be running, to take over, AP has to be off?
Brakes require a sufficient stopping distance given the current speed, driving surface conditions, tire condition, and the amount of momentum at play. This is why trains can’t stop quickly despite having brakes (and very good ones at that, with air brakes on every wheel): there’s so much momentum at play.
If autopilot is being criticized for disengaging immediately before the crash, it’s pretty safe to assume it’s too late to stop the vehicle and avoid the collision.
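For a ballpark of the distances involved, here is a back-of-the-envelope sketch; the friction coefficient and reaction time are assumptions picked for illustration, not measurements from the video:

```python
# Back-of-the-envelope stopping distance: braking distance v^2 / (2*mu*g)
# plus the distance covered during a human reaction delay. All constants
# below are assumptions for illustration, not measured values.
G = 9.81           # gravitational acceleration, m/s^2
MU = 0.7           # assumed tyre/road friction coefficient (dry asphalt)
REACTION_S = 0.3   # assumed simple human reaction time, seconds
MPH_TO_MPS = 0.44704

def stopping_distance_m(speed_mph: float) -> float:
    v = speed_mph * MPH_TO_MPS
    braking = v ** 2 / (2 * MU * G)  # distance covered while braking hard
    thinking = v * REACTION_S        # distance covered before braking starts
    return braking + thinking

for mph in (40, 70):
    print(f"{mph} mph: ~{stopping_distance_m(mph):.0f} m to come to a stop")
```

Even with generous assumptions, a handover a fraction of a second before impact leaves nowhere near that much road.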
This autopilot shit needs a regulated audit log in a black box, like what planes or ships have.
In no way should this kind of manipulation be legal.
So, as others have said, it takes time to brake. But also, generally speaking autonomous cars are programmed to dump control back to the human if there’s a situation it can’t see an ‘appropriate’ response to.
what’s happening here is the ‘oh shit, there’s no action that can stop the crash’, because braking takes time (hell, even coming to that decision takes time; activating the whoseitwhatsits that activate the brakes takes time). The normal thought is, if there’s something it can’t figure out on its own, it’s best to let the human take over. It’s supposed to make that decision well before, though.
However, as for why tesla is doing that when there’s not enough time to actually take control?
It’s because liability is a bitch. Given how many teslas are on the road, even a single ruling of “yup it was tesla’s fault” is going to start creating precedent, and that gets very expensive, very fast. especially for something that can’t really be fixed.
for some technical perspective, I pulled up the frame rates on the camera system (I’m not seeing frame rate on the cabin camera specifically, but it seems to either be 36 in older models or 24 in newer.)
14 frames @ 24 fps is about 0.6 seconds; @ 36 fps, it’s about 0.4 seconds. For comparison, average human reaction time to just see a change and click a mouse is about 0.3 seconds. If you add in needing to assess the situation… that’s going to be significantly more time.
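If you want to sanity-check that arithmetic, here is a tiny sketch using the frame counts and rates quoted above, with 0.3 s as the simple reaction time mentioned in the comment:

```python
# Convert the quoted handover window from frames to seconds and compare it
# to a ~0.3 s simple human reaction time (the figure cited in the thread).
def frames_to_seconds(frames: int, fps: float) -> float:
    return frames / fps

REACTION_S = 0.3
for fps in (24, 36):
    window = frames_to_seconds(14, fps)
    print(f"14 frames @ {fps} fps = {window:.2f} s "
          f"({window - REACTION_S:+.2f} s left after simply reacting)")
```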
Any crash within 10 of a disengagement counts as it being on so you can’t just do this.
Where are you seeing that?
There’s nothing I’m seeing as a matter of law or regulation.
In any case, liability (especially civil liability) is an absolute bitch. It’s incredibly messy and will likely never be so cut and dried.
Well it’s not that it was a crash caused by a level 2 system, but that they’ll investigate it.
So you can’t hide the crash by disengaging it just before.
Looks like it’s actually 30 seconds, not 10, or maybe it was 10 seconds once upon a time and they changed it to 30?
The General Order requires that reporting entities file incident reports for crashes involving ADS-equipped vehicles that occur on publicly accessible roads in the United States and its territories. Crashes involving an ADS-equipped vehicle are reportable if the ADS was in use at any time within 30 seconds of the crash and the crash resulted in property damage or injury
https://www.nhtsa.gov/sites/nhtsa.gov/files/2022-06/ADAS-L2-SGO-Report-June-2022.pdf
I get the impression it disengages so that Tesla can legally say “self driving wasn’t active when it crashed” to the media.
Thanks for that.
The thing is, though, the NHTSA generally doesn’t make a determination on criminal or civil liability. They’ll make the report about what happened, keep it to the facts, and let the courts sort out who’s at fault. They might not even actually investigate a crash unless it comes to it. It’s just saying “when your car crashes, you need to tell us about it,” and they kinda assume manufacturers comply.
Which Tesla doesn’t want to do, and it’s one of the reasons Musk/DOGE is going after them.
I knew they wouldn’t necessarily investigate it, that’s always their discretion, but I had no idea there was no actual bite to the rule if they didn’t comply. That’s stupid.
10 what?
Oops haha, 10 seconds.
That makes so little sense… It detects it’s about to crash then gives up and lets you sort it?
That’s like the opposite of my Audi, which does detect that I’m about to hit something and gives me either a warning or just actively hits the brakes if I don’t have time to handle it.
If this is true, this is so fucking evil it’s kinda amazing it could have reached anywhere near prod.
Even your Audi is going to dump to human control if it can’t figure out what the appropriate response is. Granted, your Audi is probably smart enough to be like “yeah don’t hit the fucking wall,” but eh… it was put together by people that actually know what they’re doing, and care about safety.
Tesla isn’t doing this for safety or because it’s the best response. The cars are doing this because they don’t want to pay out for wrongful death lawsuits.
If this is true, this is so fucking evil it’s kinda amazing it could have reached anywhere near prod.
It’s musk. he’s fucking vile, and this isn’t even close to the worst thing he’s doing. or has done.
The point is that they can say “Autopilot wasn’t active during the crash.” They can leave out that autopilot was active right up until the moment before, or that autopilot directly contributed to it. They’re just purely leaning into the technical truth that it wasn’t on during the crash. Whether it’s a courtroom defense or their own next published set of data, “Autopilot was not active during any recorded Tesla crashes.”
Lol yeah they’re “furious”
I have no clue what you’re trying to say, but the significant amount of outrage a day or two later that I suddenly saw explode on Twitter was mind boggling to me. Couldn’t tell if it was bots or morons but either way, people are big mad about the video.
My 500$ robot vacuum has LiDAR, meanwhile these 50k pieces of shit don’t 😂
11 years ago Teslas were advanced. Also, how the heck are you getting a new car for only 50k? The self-driving adds some random amount of cost, last I saw between 8-10k.
Holy shit, I knew I’d heard this word before. My Chinese robot vacuum cleaner has more technology than a tesla hahahahaha
Vacuum doesn’t run outdoors and accidentally running into a wall doesn’t generate lawsuits.
But, yes, any self-driving cars should absolutely be required to have lidar. I don’t think you could find any professional in the field that would argue that lidar is the proper tool for this.
…what is your point here, exactly? The stakes might be lower for a vacuum cleaner, sure, but lidar - or a similar time-of-flight system - is the only consistent way of mapping environmental geometry. It doesn’t matter if that’s a dining room full of tables and chairs, or a pedestrian crossing full of children.
I think you’re suffering from not knowing what you don’t know.
I think you’re suffering from not knowing what you don’t know.
and I think you’re suffering from being an arrogant sack of dicks who doesn’t like being called out on their poor communication skills and, through either a lack of self-awareness or an unwarranted overabundance of self-confidence, projects their own flaws on others. But for the more receptive types who want to learn more, here’s Syed Saad ul Hassan’s very well-written 2022 paper on practical applications, titled Lidar Sensor in Autonomous Vehicles, which I found also serves as a neat primer on lidar in general.
Well look at you being an adult and using big words instead of just insulting people. I’m not even going to waste time on people like you; I’m going to block you and move on, and hope that everyone else does the same so you can sit in your own quiet little world wondering why no one likes you.
You’re an idiot.
jesus man, how many alts do you have?
Wow, what’s with all the hostility against him.
It’s maybe because I also know a bit about lidars that his comment was clear to me (“ha, try putting a vacuum lidar in a car and see if it can do anything useful outside at the speeds & range a car needs”).
Is it that much of an issue if someone is a bit snarky when pointing out the false equivalence of “my 500$ vacuum has a lidar, but a tesla doesn’t? harharhar”.
But, yes, any self-driving cars should absolutely be required to have lidar.
So they think self-driving cars should have lidar, like a vacuum cleaner. They agree, and think it’s a good idea, right?
I don’t think you could find any professional in the field that would argue that lidar is the proper tool for this.
…then in the next sentence goes on to say that lidar is not the correct tool. In the space of a paragraph they make two points which directly contradict one another. Hence my response:
What is your point here, exactly?
They could have said “oops, typo!” or something but, no, instead they went full on-condescending:
I think you’re suffering from not knowing what you don’t know.
I stand by my response:
arrogant sack of dicks
And while I’m not naive enough to believe that upvotes and downvotes are any kind of arbiter of objective truth, they at least seem to suggest, in this case, that my interpretation is broadly in line with the majority.
(“ha, try putting a vacuum lidar in a car and see if it can do anything useful outside at the speeds & range a car needs”).
Because no one suggested that.
So someone saying “why does my 500$ vacuum have a lidar but not the car” isn’t suggesting that?
I guess in some technical way you’re right, but it for sure is the implication…
You’re mischaracterizing their point. Nobody is saying take the exact piece of equipment, put it in the vehicle and PRESTO. That’d be like asking why the roomba battery can’t power the car. Because duh.
The point is if such a novelty, inconsequential item that doesn’t have any kind of life safety requirements can employ a class of technology that would prevent adverse effects, why the fuck doesn’t the vehicle? This is a design flaw of Teslas, pure and simple.
Older teslas HAD lidar. They were removed on newer models to cut costs.
They did not. They had radar, which was removed.
But they do, there are literally cars out there with lidar sensors.
The question was why can’t I have a lidar sensor on my car if my $150 vacuum has one. The lidar sensor for a car is more than $150.
You don’t have one because they are expensive at that size and update frequency. Sensors that are capable of outdoor mapping at high speed cost the price of a small car.
The manufacturers suspect and probably rightfully so that people don’t want to pay an extra 10 - 30 grand for an array of sensors.
The technology readily exists; Rober had one in his video that he used to scan a roller coaster. It’s not some conspiracy that you don’t have it on cars, and it’s not like it’s not capable of being done, because Waymo does it all the time.
There’s a reason why Waymo doesn’t use smaller sensors: they use the minimum of what works well. Which is expensive, which people looking at a mid-range car don’t want to take on as extra cost, hence it’s not available.
Good God it’s like you’re going out of the way to intentionally misunderstand the point.
Nobody is saying that the lidar on a car should cost the same as the lidar on a vacuum cleaner. What everyone is saying is that if the company that makes vacuum cleaners thinks it’s important enough to put lidar on, surely the company that makes cars should think it’s important enough to put lidar on too.
Stop being deliberately dense.
You’re either talking to a fanboy or Elon on ket. You ain’t gettin’ through.
Whether lidars are reliable enough to run on autonomous cars has nothing to do with whether they are cost-efficient enough to run on vacuum cleaners, though. The comparison is therefore completely irrelevant. Might as well complain that jet fighters don’t let you share your location on Instagram, because your much cheaper phone does.
I’m not being deliberately dense, it’s just a seriously incomplete analogy. At worst I’m being pedantic. And if that’s the case, I apologize.
I agree with the premise that the cars need lidar, radar, whatever the f*** they can get.
Saying “if a vacuum company can see that a vacuum needs lidar (which is a flawed premise because half the f****** vacuums use vslam/cameras), then why doesn’t my car have lidar” — none of the consumer car companies are using it (yet, anyway). It’s great to get the rabble up and ask why vacuum companies are doing it when car companies aren’t, but when nobody’s doing it there are reasons. Ford, Chevy, BMW, f***, what about Audi, what about Porsche? What about these luxury brands that cost an arm and three fucking legs?
Let’s turn this on its head, why do people think they’re not including it in cars. And let’s discount musk for the moment because we already know he’s a fucking idiot that never had an original idea in his life and answer why it isn’t in any other brand.
Is it just that none of these companies thought about it? Is it a conspiracy? What do people think here. If I’m being so dense tell me why the companies aren’t using it.
It’s a cost-benefit calculation.
- For a vacuum at the speeds they travel and the range it needs to go, LiDAR is cheap, worth doing. Meanwhile computing power is limited.
- my phone is much more expensive than the robot vacuum, and its LiDAR can range to about a room, at speeds humans normally travel. It works great for almost instant autofocus and a passable measurement tool.
- For a car, at the speeds they travel and range it needs to go, LiDAR is expensive, large and ugly. Meanwhile the car already needs substantial computing power
So the question is whether they can achieve self-driving without it: humans rely on vision alone so maybe an ai can. I’m just happy someone is taking a different approach rather than the follow the pack mentality: we’re more likely to get something that works
Stop being deliberately dense.
It’s weaponized incompetence.
I bet they do the same shit with their partner when it comes to dishes, laundry, and the garbage.
https://techcrunch.com/2019/03/06/waymo-to-start-selling-standalone-lidar-sensors/
Waymo’s top-of-range LiDAR cost about $7,500… Insiders say those costs have fallen further thanks to continuous advances by the team. And considering that this short-range LiDAR is cheaper than the top-of-range product, the price is likely under $5,000 a unit.
This article is six years old, so I wouldn’t be surprised if they’re even cheaper now.
Only Tesla does not use radar with their control systems. Every single other manufacturer uses radar control mixed with the camera system. The Tesla system is garbage.
yeah, you’d think they’d at least use radar. That’s cheap AF. It’s like someone there said I have this hill to die on, I bet we can do it all with cameras.
The self driving system uber was working on also went downhill after they went full visual only.
10 - 30 grand
Decent LIDAR sensors have gotten a lot cheaper in the last 5 years or so, here’s one that is used in commercial self-driving taxis: https://www.alibaba.com/product-detail/X01-36020021-Nev-Auto-Parts-for_1601252480285.html
So that one sensor is $700. Waymo has 4 LIDAR sensors (all of which are physically larger and, I would imagine, fancier than the Alibaba ones, but that’s speculation), so just in the scanner hardware itself you’re looking at $2,800. Plus the computer to run it, plus the 6 radar receivers and 13 cameras, and I could absolutely see the price for the end user being around $10k worth of sensors.
But to be clear, I don’t think camera only systems are viable or safe. They should at minimum be forced to use radar in combination with their cameras. In fact I actually trust radar more than lidar because it’s much less susceptible to heavy snow or rain.
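Just to make that tally explicit, here is a minimal sketch; only the $700 lidar unit price comes from the listing above, while the radar, camera, and compute figures are rough placeholder guesses of mine:

```python
# Rough bill-of-materials tally for the hypothetical sensor suite above.
# Only the $700 lidar unit price comes from the thread; everything else
# is an assumed placeholder figure.
sensor_suite = {
    "lidar":   (4, 700),    # qty, unit price in USD (from the Alibaba listing)
    "radar":   (6, 150),    # assumed unit price
    "camera":  (13, 50),    # assumed unit price
    "compute": (1, 2000),   # assumed price for the driving computer
}

total = sum(qty * unit_price for qty, unit_price in sensor_suite.values())
print(f"Rough hardware cost: ${total:,}")  # retail markup would push this higher
```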
Shit, that’s pretty decent. That looks like a ready-fit car part; I wonder what vehicle it’s for. Kind of sucks that it only faces one direction, but at that price four of them would not be a big deal.
You’re bending over backwards to miss the point huh
So be clear about the point.
Prices tend to come down on these things simply because the car industry widely adopts them. For example, accelerometers became cheap because they were needed for air bags. LIDAR might not come down as much as those have, but it won’t be tens of thousands of dollars.
The price of lidar sensors has dropped by like 50 times since Musk decided to cut costs by eliminating them from their cars.
Yeah, looks like it, Chinese sensors are down to 700 a pop. Even if it’s a few grand, it’s decent; looks like Chevy offers it on 7 models.
Meep meep!
What do you expect when the company is run by a loony toon?
Please do not besmirch the good name of Looney Tunes.
Tesla? No thank you, I’ve got ACME.
Beep beep
The video if you haven’t seen it.
Nah bro that’s a short. THIS is the video:
Of course they are, they’re cultists. The Elon boot licker and MAGA Venn diagram is a circle.
What would definitely help with the discussion is if Mark Rober the scientist had left a fucking crumb of scientific approach in his video. He didn’t really explain how he was testing it, just slammed the car into things for views. That and a collaboration with a company that makes lidar made the video open to every possible criticism, and it’s a shame.
Discovery channel level of dumbed down „science”.
Okay, but what would you like him to elaborate on, other than showing you that the Tesla is fooled by a road runner type mural?
How much more info other than just “car didn’t stop” do you need to be convinced this is a problem?
Did he enable the autopilot? When? What were his inputs to the car? Is it FSD? What car is that?
You can make every car hit a wall; that is the obvious part. But since he claims (truthfully, I have no doubt) that the car hit it on its own, I would like to know what made it do it.
You didn’t watch the video, did you? He addressed that after the first test and said all further tests would be done with self-driving on.
He said after the first of the 5 tests that every Tesla test has autopilot on, because some features are only enabled then.
I have no doubt the car will crash.
But I do feel there is something strange about the car disengaging the auto pilot (cruise control) just before the crash. How can the car know it’s crashing while simultaneously not knowing it’s crashing?
I drive a Model 3 myself, and there is so much bad shit about the autopilot and rain sensors. But I have never experienced, or heard of anyone else experiencing, a false positive where the car disengages the autopilot under any conditions the way shown in the video, with no sound or visual cue. Considering how bad the sensors on the car are, it’s strange that they’re state of the art every time an accident happens. There is dissonance between the claims.
Mark shouldn’t have made so many cuts in the upload. He locks the car at 39 mph in the video, but crashes at 42 mph. He should have kept it clean and honest.
I want to see more of these experiments in the future. But Mark’s video is pretty much a commercial for the lidar manufacturer. And commercials shouldn’t be trusted.
I want to see more of these experiments in the future. But Mark’s video is pretty much a commercial for the lidar manufacturer. And commercials shouldn’t be trusted.
This. If the video presented more facts and wasn’t paid for by the competition, it would be trustworthy. Otherwise it’s just clickbait (very effective, judging by the fact we’re discussing it).
Actually, his methodology was very clearly explained. Did you watch the whole video? He might have gushed a bit less about LiDAR, but otoh laymen don’t know about it, so it stands to reason he had to explain the basics in detail.
Found the Tesla owner!
😋
So Tesla owners have a monopoly on caring about the process of an experiment?
A logical conclusion from that is that anyone who’s not a Tesla owner is incapable of critical thought?
How is this a win?
What did you not like about his process?
I have no doubt the car will crash.
But I do feel there is something strange about the car disengaging the auto pilot (cruise control) just before the crash. How can the car know it’s crashing while simultaneously not knowing it’s crashing?
I drive a Model 3 myself, and there is so much bad shit about the autopilot and rain sensors. But I have never experienced, or heard of anyone else experiencing, a false positive where the car disengages the autopilot under any conditions the way shown in the video, with no sound or visual cue. Considering how bad the sensors on the car are, it’s strange that they’re state of the art every time an accident happens. There is dissonance between the claims.
Mark shouldn’t have made so many cuts in the upload. He locks the car at 39 mph in the video, but crashes at 42 mph. He should have kept it clean and honest.
I want to see more of these experiments in the future. But Mark’s video is pretty much a commercial for the lidar manufacturer. And commercials shouldn’t be trusted.
I fucking hate tesla and elon musk. Also I fucking hate people calling unverifiable shit science
You’re upset that made up people in your head called this video a research project or something? Because the closest thing I could find to what you’re complaining about is his YouTube channel’s description where it says “friend of science”.
He never claimed to be a scientist, doesn’t claim to be doing scientific research. In his own words, he’s just doing some tests on his own car. That’s it.
Well, it was published, up to you to do a peer review I guess!
Also, this doesn’t need science; it blatantly shows that things do in fact not function as intended.
Just FYI, they used AEB in one car and cruise control in another. Far from even. I think it was a fail from the start, considering they couldn’t get AEB to even fire on the Tesla driving without cruise control. Insane.
Where is a robust description of the experiment? Or am I supposed to look frame by frame at the screen in the car to deduce the testing conditions?
All he had to do was tell us clearly what was enabled on each car and what his inputs were. That would head off all the Tesla fanbois’ comments about him cheating. Maybe he didn’t, for „engagement”.