Why Is Full Self-Driving So Hard?


Why is FSD proving to be so complicated in dynamic environments (e.g. rain, low light, lack of road markings)? Is it an algorithm/computation limitation? Is it the sensor hardware? Is it both? Or is it achievable, but not profitable?


9 Answers

Anonymous 0 Comments

Not an expert in the field, but it is definitely a software problem. We humans are good at telling things apart in context: we can tell the difference between a leaf, a tumbleweed, a basketball, and a kid in basically any scenario. If the first one went across the road, it could safely be ignored. The second one, maybe. The third, it would be preferable not to hit, and the last one is critical. The computer has to make that decision in an instant. While the tech is constantly improving, it's still not reliable enough to put it on the road and trust it to make the right choice 100% of the time.
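
As a toy illustration (all labels and thresholds here are invented), the decision logic itself is trivial to write down; the genuinely hard part is the perception system that has to fill in the label and confidence correctly in rain, glare, and every other scenario:

```python
# A toy sketch of the "what did I just see?" decision. The if/else part is
# easy -- the unsolved problem is a classifier that gets `label` and
# `confidence` right in every lighting and weather condition.

def react_to_object(label: str, confidence: float) -> str:
    """Map a detected object crossing the road to a driving action."""
    if label == "leaf":
        return "ignore"            # safe to drive over
    if label == "tumbleweed":
        return "slow_down"         # probably fine, but be careful
    if label == "basketball":
        return "brake"             # don't hit it -- and a kid may chase it
    if label == "child":
        return "emergency_brake"   # critical, no matter what
    # Unknown object or shaky detection: the only safe default is caution.
    return "brake" if confidence < 0.9 else "slow_down"
```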

Anonymous 0 Comments

Building the human brain is really hard. It's adaptable, it's flexible in its approaches, it can take incomplete data and layer it against (in this case) two decades or more of accumulated experience to make judgements, and more.

Think of all the sensors and math required to accurately throw a ball to someone running across a yard. That’s something that a child can (usually) do, if imperfectly. Now layer on an infinite number of variables, situations, and contexts and it becomes very difficult to do all that processing, in a short time, in a box that can fit under the hood of a car.

Anonymous 0 Comments

Reliability.

I break it down into two points: (1) systems, and (2) sensors.

(1) Systems: any system can fail. Take aviation's fail-safes as an example. When it comes to a computer making a determination, the most common arrangement is two dual-channel autopilots, each controlling only half of the control inputs: half of AP1 moves its share of the flight controls while the other half checks AP2's solutions, and AP2 likewise splits itself between moving controls and checking AP1. As soon as any one of the four channels disagrees, the autopilot that differs from the other three is disconnected. The result is that you are left with only half of the flight control authority, so the plane is half as reactive.

The other, more expensive option is to have three autopilots: one AP flies the plane fully while two are backups. As soon as one disagrees with the other two, it is disconnected. If the two remaining ones then disagree, they both disconnect.

The problem is that to guarantee a single autopilot function, you have to install two enhanced identical systems, or three "simple" ones.

Having just two normal systems would not work: as soon as they disagree, neither has a third reference to tell which of the two is correct.

A single one will simply drive you into a mountain with no warning, because it "thinks" it is right when it is not. That is the whole problem with the recent car accidents: the AP does not know it is wrong, and if it can't recognize that, it can't hand you the controls in time.
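
Here is a minimal sketch of that voting logic, assuming each autopilot's solution can be compared as a single number against a fixed tolerance (a real system votes on many channels at once):

```python
# Redundant-autopilot voting: with three solutions, drop the odd one out;
# with two, any disagreement means there is no tiebreaker, so disconnect.

def vote(solutions, tolerance=0.5):
    """Return (agreed_value, still_engaged) for redundant autopilots."""
    if len(solutions) == 3:
        a, b, c = solutions
        for x, y in ((a, b), (a, c), (b, c)):
            if abs(x - y) <= tolerance:
                return (x + y) / 2, True   # two agree; the third is cut off
        return None, False                 # no majority: all disconnect
    if len(solutions) == 2:
        a, b = solutions
        if abs(a - b) <= tolerance:
            return (a + b) / 2, True
        return None, False                 # two disagree, no third reference
    return None, False                     # one source can't vouch for itself
```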

In all cases, a human pilot/driver needs to be attentive and ready to take control. You can't watch a movie and trust the computer. It's not a video game.

(2) Speaking of video games, here's another really big issue. When you run Flight Simulator, the game knows where your plane is in the simulation, because it's the game itself determining it. So there isn't much calculation involved in autopiloting the plane: you know where it is and where it should go, and you just need simple math to change the control inputs to change course.
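
To make that concrete, here is roughly all the math a simulator autopilot needs once position and heading are known exactly (a toy sketch; the gain and units are invented):

```python
import math

# With exact position handed to you by the simulation, steering toward a
# waypoint is a few lines of trigonometry plus a proportional correction.

def steer_toward(x, y, heading_deg, waypoint_x, waypoint_y, gain=0.5):
    """Return a steering command from a known position to a known waypoint."""
    bearing = math.degrees(math.atan2(waypoint_y - y, waypoint_x - x))
    error = (bearing - heading_deg + 180) % 360 - 180  # wrap to [-180, 180)
    return gain * error                                # proportional control
```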

But in a real plane, the autopilot has no idea where the plane is. You need sensors to tell it: altitude, airspeed, accelerometers, gyroscopes, maps, GPS, radio navigation. All of these are fed into a very complex equation, in the flight management computer, which then feeds your most likely position to the autopilot. The autopilot then controls the plane using those same sensors.
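
A bare-bones sketch of that "very complex equation", boiled down to one dimension: weight each sensor's position estimate by how much you trust it. Real flight management computers use far more elaborate filtering, but the principle is the same:

```python
# Inverse-variance weighting: noisier sensors count for less, and the
# fused estimate is more certain than any single input.

def fuse(estimates):
    """estimates: list of (position, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    position = sum(w * pos for w, (pos, _) in zip(weights, estimates)) / total
    return position, 1.0 / total  # fused position and its (smaller) variance

# e.g. GPS says 100.0 m (variance 1.0), dead reckoning says 101.5 m (variance 4.0)
print(fuse([(100.0, 1.0), (101.5, 4.0)]))  # -> (100.3, 0.8): closer to GPS
```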

When it comes to cars, you have way more obstacles than a plane does, and you don't have many choices of sensors. You can't use full-power radar: forget about cost, it's just going to fry the testicles of any pedestrian in front of you. You can use low-power radar, but then your detection range is similar to your parking sensor's, which is awful. Rule out GPS: it is intermittent by nature, and while its accuracy is good for navigation, that's roughly 1-meter accuracy. Imagine you're driving on a highway, the GPS says you are one meter too far to the left, and it steers the car into the truck in the lane next to you. Radio navigation is good as long as you're doing 100+ mph in the sky; that's not car use.

Sure, you can measure wheel speed and steering angle. You can use LIDAR, a light-based system similar in principle to radar or sonar, or cameras, or both. Now, LIDAR shoots a laser and measures what comes back. That's cool, but of course you don't want to blind pedestrians, so LIDAR is usable only at very low energy, and low energy means lower accuracy and range. Cameras have issues too, as they lose accuracy in the dark. And both systems are light-based: light can be disturbed by droplets on the lenses, rain, fog, and reflections, and light receivers can be temporarily blinded by other strong lights. Relying only on light-based sensors means all your detection and guidance can completely fail from a single cause. In aviation they use different types of sensors to diversify the threats, giving a system with more things that can disturb it, but one that can't completely fail unless radio, light, magnetic, satellite, air-data, and radar sensing are all disturbed at once. In a car, it's very hard to back up light-based sensors with non-light-based ones.
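
The ranging math itself is the easy part; here it is in full. The hard part is getting a clean echo at eye-safe power, through rain, against sunlight:

```python
# LIDAR time-of-flight: light goes out and comes back, so the distance
# is half the round trip.

SPEED_OF_LIGHT = 299_792_458  # m/s

def lidar_distance(round_trip_seconds):
    return SPEED_OF_LIGHT * round_trip_seconds / 2

print(lidar_distance(200e-9))  # a 200 ns echo is an object ~30 m away
```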

So you need, let's say, a complex computer to interpret the sensor data; you need multiple sensors backing each other up, to prevent a single sensor error from killing you; then you need to feed this data to multiple APs that cross-check each other; and then you need a backup human to take control as soon as the multiple APs disagree and disconnect. Remember, disconnecting means the APs telling you "we are no longer sure of our determinations, you'd better take control." The other option is letting the AP go wherever it thinks is good to go, and probably crash.
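
Put together, one control step of that architecture might look like the following sketch (the sensor names and disagreement threshold are invented for illustration):

```python
# Cross-check two independent estimates; the moment they conflict or one
# drops out, the only honest move is to disengage and alert the driver.

def control_step(camera_estimate, lidar_estimate, max_disagreement=0.5):
    """Return (control_target, engaged); disengage on any disagreement."""
    if camera_estimate is None or lidar_estimate is None:
        return None, False  # a sensor dropped out: "you better take control"
    if abs(camera_estimate - lidar_estimate) > max_disagreement:
        return None, False  # no third reference to break the tie
    return (camera_estimate + lidar_estimate) / 2, True
```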

Anonymous 0 Comments

FSD basically combines all the properties that computers are generally terrible at.

Computers & algorithms are deterministic and do best with very clearly defined inputs & outputs. For systems that rely on interaction with the outside world, that means precisely defined, controlled, known conditions. Driving is… the complete opposite.

Driving takes place in a wildly varying dynamic environment the car doesn’t control, with an unknown number/type/size/color of stationary & moving objects, none of which can be counted on to obey any accurate predictive model of behavior, in an extremely wide variety of environmental conditions, with operating rules/laws that change, may not be known, and are context dependent.

Doing this well requires really good pattern recognition, really good and fast heuristics, incredibly high-dynamic-range sensors, a lot of non-deterministic analysis, snap judgement, improvisation, and "driving empathy" (correctly guessing the actions of other entities in the environment from real-time context). These are all skills that humans are really good at, thanks to billions of years of evolution trying to keep us alive in exactly this kind of environment, and that computers are generally terrible at, because they *don't* have good deterministic algorithms for these problems. We can barely teach computers to accurately gauge human facial expressions or beat us at Go, and even that is trivial compared to the amount of environmental modelling a self-driving car needs to do in real time.

Anonymous 0 Comments

It’s not possible due to the economics of blame and insurance.

There will always have to be a human who has personal responsibility for the vehicle; the vehicle manufacturers will not take this on themselves.

Anonymous 0 Comments

At the end of the day, it's because we tolerate less imperfection from machines than from humans, because we can hold imperfect humans accountable for their actions. Humans also struggle to drive in poor conditions (and in normal ones), and traffic accidents happen all the time, but machines causing accidents feels like something we can act to prevent (by banning these vehicles) rather than just punish after the fact.

Anonymous 0 Comments

I think it’s a bit like general AI. There are situations in real-life, and in driving, that require that we think out of the box. Machines are usually not so good at that. They are extremely good in repeatable situations, but generalizing to very exceptional cases is tough.

E.g., [https://www.youtube.com/watch?v=cPMvQphJQiE&t=144s&ab_channel=StanLettink](https://www.youtube.com/watch?v=cPMvQphJQiE&t=144s&ab_channel=StanLettink)

Anonymous 0 Comments

Most robotic and automated systems up to now have been in restricted and controlled environments.

The assembly robot can be programmed with the assurance that it won't encounter other objects. The autopilot is assured it's operating in an environment with lots of distance between objects, tight controls, and predictable, well-trained behavior from the other players.

Self driving vehicles have to work without those assurances. Everything is close, filled with unpredictable humans doing random human stuff, and no external controls on who gets to “play”. It’s an extremely hard problem.

Anonymous 0 Comments

It’s a decision tree problem, mostly.

Basically, any task can be written as a decision tree: what should you do at each juncture, and with what result.

In reality, most tasks aren't written like this, because the number of possible variables quickly dwarfs any ability to write the tree.
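
To see why, here is a hand-written decision tree for one tiny juncture (all categories and thresholds invented). Add weather, speed, oncoming traffic, and road surface, and the branches multiply beyond what anyone can write by hand:

```python
# A toy decision tree for "an object has entered the road".

def object_enters_road(kind, distance_m, speed_mph):
    if kind == "debris":
        return "ignore" if distance_m > 50 else "slow_down"
    if kind == "animal":
        if speed_mph > 50:
            return "brake"        # swerving at speed is its own hazard
        return "brake_and_swerve"
    if kind == "pedestrian":
        return "emergency_brake"  # critical on every branch
    return "slow_down"            # unknown: default to caution
```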

Now consider FSD, even at a basic level. The car needs to determine what road it's on merely to know what speed it's allowed to drive. Some of that comes from GPS, but anyone who has used GPS knows it can get confused about what road you are on. So even before you've done anything, the car has to marry up GPS and a map to get an idea of where it is and how fast it's allowed to go, as in the sketch below. And that's the easy logic.
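
A hedged sketch of that "marry up GPS and a map" step (the map data here is invented): snap the GPS fix to the nearest known road to get the speed limit, and notice how two nearby roads let a small position error flip the answer:

```python
import math

# Toy map: road name -> (reference point (x, y), speed limit in mph).
ROADS = {
    "highway":       ((0.0, 0.0), 70),
    "frontage_road": ((0.0, 8.0), 35),  # runs close alongside the highway
}

def speed_limit(gps_x, gps_y):
    """Snap a GPS fix to the nearest road and return its speed limit."""
    nearest = min(ROADS, key=lambda r: math.dist((gps_x, gps_y), ROADS[r][0]))
    return nearest, ROADS[nearest][1]

print(speed_limit(0.0, 3.9))  # -> ('highway', 70)
print(speed_limit(0.0, 4.1))  # a tiny GPS drift -> ('frontage_road', 35)
```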

Everything else is about the decision tree and conflict resolution: what should I be doing, and what should I be doing when the information I have conflicts with something else I know? The latter is why Tesla removed radar guidance from their cars: the radar picked up bridges over the road, and the car would slow down because the conflict suggested an obstruction.
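
One hedged sketch of that bridge conflict (the 4-meter cutoff and the fallback rule are invented for illustration): a naive fusion rule trusts every radar return, so an overpass reads as a stopped car, while a geometry check can reject returns that sit well above the vehicle's path:

```python
import math

def obstacle_ahead(radar_range_m, elevation_deg, camera_sees_obstacle):
    """Decide whether a radar return ahead is a real obstacle."""
    height_m = radar_range_m * math.sin(math.radians(elevation_deg))
    if height_m > 4.0:               # higher than any vehicle: likely a bridge
        return camera_sees_obstacle  # defer to the camera's opinion
    return True                      # low return: treat as a real obstruction
```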

And these are very basic examples. What should it do at a four-way stop sign? What size of object entering the road should trigger stopping or swerving (this is the one Tesla doesn't like, as their cars have had a tendency to ignore children as obstacles)?

The scale of the complexity is astonishing. Human drivers make enormous numbers of calculations and considerations subconsciously, so even to program the car's systems, you as a human have to work out what you do subconsciously and then work out how to make a computer do it.