Last Updated on: 11th February 2025, 11:58 am
Driving a car is a complex process. Sometimes the people who design and build roads do stupid things, like putting a light pole in the middle of a travel lane. Really? Yes, really. Case in point is the experience of Tesla Cybertruck owner John Challinger, a software developer in Florida, who posted on social media at 6:11 am on February 9, 2025:
Soooooo my @Tesla @cybertruck crashed into a curb and then a light post on v13.2.4.
Thank you @Tesla for engineering the best passive safety in the world. I walked away without a scratch.
It failed to merge out of a lane that was ending (there was no one on my left) and made no attempt to slow down or turn until it had already hit the curb.
Big fail on my part, obviously. Don’t make the same mistake I did. Pay attention. It can happen. I follow Tesla and FSD pretty closely and haven’t heard of any accident on V13 at all before this happened. It is easy to get complacent now - don’t.
@Tesla_AI how do I make sure you have the data you need from this incident? Service center etc has been less than responsive on this. I do have the dashcam footage. I want to get it out there as a PSA that it can happen, even on v13, but I’m hesitant because I don’t want the attention and I don’t want to give the bears/haters any material.
Spread my message and help save others from the same fate or far worse.

For context, here is an image of the light pole from Google Maps, as posted by PC Magazine. It must be one of the stupidest places to put a light pole in the history of street lighting, but there it was, and the Cybertruck clearly failed to see it. People always talk about “edge cases” when discussing autonomous driving situations. Cases don’t get much more edgy than this act of outrageous ignorance, but there it is, stuck out in the middle of what is supposed to be a travel lane for all the world to see, except for a Cybertruck running a recent version of Full Self Driving.
For his part, Challinger is pretty laid back about the whole thing and blames himself for not paying attention and getting “complacent.” In a prior post from January, Challinger wrote about his habit of losing focus with FSD enabled: “Sometimes I decide to go somewhere and turn on Tesla FSD and then I forget where I decided to go and then it starts turning into Taco Bell or whatever and I’m like wtf is it doing and then I’m like oh right Taco Bell.”
Tesla’s system is supposed to warn drivers repeatedly if they are not paying attention. Per the Tesla owner’s manual, the car should issue a series of escalating warnings if the driver is inattentive, and the driver will also be asked to put their hands on the steering wheel. If the driver repeatedly ignores these prompts, FSD is disabled for the rest of the drive. “I don’t expect (FSD) to be infallible but I definitely didn’t have utility pole in my face while driving slowly on an empty road on my bingo card,” Challinger said after the collision.
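For readers curious how that kind of escalating attention watchdog can be structured, here is a minimal sketch in Python. The thresholds, grace period, strike count, and all names here are assumptions made up for illustration; Tesla’s actual implementation is proprietary and not public.

```python
from dataclasses import dataclass

# Hypothetical thresholds (seconds of inattention) and actions.
# Tesla's real values and behavior are not public; these are
# illustrative guesses only.
ESCALATION = [
    (3.0, "flash a visual alert on the screen"),
    (6.0, "sound a chime and request hands on the wheel"),
    (10.0, "sound a continuous alarm"),
]
GRACE_AFTER_ALARM = 5.0  # assumed time before an ignored alarm counts as a strike
MAX_STRIKES = 3          # assumed strikes before FSD locks out for the drive


@dataclass
class AttentionMonitor:
    """Minimal sketch of an escalating driver-attention watchdog."""
    inattentive_s: float = 0.0
    level: int = 0        # index of the next warning to issue
    strikes: int = 0
    fsd_enabled: bool = True

    def update(self, attentive: bool, dt: float) -> None:
        """Call once per control cycle with the driver camera's estimate."""
        if not self.fsd_enabled:
            return
        if attentive:
            # Driver re-engaged: reset the timer and pending warnings.
            self.inattentive_s = 0.0
            self.level = 0
            return
        self.inattentive_s += dt
        # Escalate through each warning level as time thresholds are crossed.
        while (self.level < len(ESCALATION)
               and self.inattentive_s >= ESCALATION[self.level][0]):
            print(f"WARNING: {ESCALATION[self.level][1]}")
            self.level += 1
        # If the final alarm has been ignored past the grace period,
        # record a strike; too many strikes disable FSD for the drive.
        if (self.level == len(ESCALATION)
                and self.inattentive_s >= ESCALATION[-1][0] + GRACE_AFTER_ALARM):
            self.strikes += 1
            self.inattentive_s = 0.0
            self.level = 0
            if self.strikes >= MAX_STRIKES:
                self.fsd_enabled = False
                print("FSD disabled for the remainder of this drive.")
```

The design point the manual describes is visible in the sketch: warnings escalate the longer inattention persists, and ignored alarms accumulate until the system refuses to keep driving.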
The Cybertruck only got the ability to run FSD in September, nine months after the vehicle’s launch. Given its unique size, shape, and software, it required tweaks to the FSD used by other Tesla vehicles. The Cybertruck that crashed was running a relatively recent version of FSD (version 13.2.4), which Tesla released in January. It mostly focused on “bug fixes,” according to Not a Tesla App, but the release notes also mention an improved system for “collision avoidance.” It looks as if more improvements may be needed.
Tesla & Automation Bias
The most dangerous part of any automobile is the nut behind the wheel, my old Irish grandfather liked to say. Despite numerous protestations by Elon Musk, putting too much trust and faith in computer systems is a problem. Musk doesn’t believe it is. He thinks putting warnings in an owner’s manual that few take the time to read is sufficient, but scientists have a name for this over-reliance. They call it “automation bias.”
According to Wikipedia, the tendency toward over-reliance on automated aids is known as “automation misuse,” which occurs when a user fails to properly monitor an automated system, or when the automated system is used when it should not be. Automation bias is directly related to misuse of automation through too much trust in the abilities of the system. It can lead to a lack of monitoring of the automated system or blind agreement with an automation suggestion, which in turn can lead to errors of omission and errors of commission. Errors of commission occur when users follow an automated directive without taking other sources of information into account. Errors of omission occur when automated devices fail to detect or indicate problems and users do not notice because they are not properly monitoring the system.
Errors of commission occur for three reasons: overt redirection of attention away from the automated aid, diminished attention to the aid, or active discounting of information that counters the aid’s recommendations. Errors of omission occur when the human decision maker fails to notice an automation failure, either due to low vigilance or over-trust in the system. Training focused on reducing automation bias and related problems has been shown to lower the rate of commission errors, but not of omission errors.
The presence of automated aids “diminishes the likelihood that decision makers will either make the cognitive effort to seek other diagnostic information or process all available information in cognitively complex ways.” It also renders users more likely to conclude their assessment of a situation too hastily after being prompted by an automated aid to take a specific course of action. The three main factors that lead to automation bias are the human tendency to choose the least cognitively demanding approach to decision making, the tendency of humans to view automated aids as having analytical abilities superior to their own, and the tendency of humans to reduce their own effort when sharing tasks, either with another person or with an automated aid.
“Technology over-trust is an error of staggering proportion,” writes Patricia Hardré of the University of Oklahoma in a book on why we sometimes put too much faith in machines. According to the BBC, she argues that people often lack the ability to judge how reliable a particular technology is. This can actually cut both ways. We might dismiss the help of a computer in situations where it would benefit us, or blindly trust such a device, only for it to end up harming us or our livelihoods.
What’s the point? Simply this: Tesla Full Self Driving is not working as it should. Not now, and it never has. And until Musk gets over his childish refusal to incorporate radar and lidar into the automated driving hardware package at Tesla, it never will. End of story.