- So there is a three-way distinction:
  - Self-driving
  - Driver-assist
  - Remotely-assisted driving
It appears that most or all of the work at present is much closer to the latter two, and that all of the published numbers are really about the latter two, with no disclosure of how much remote centers are contributing to whatever results we see from efforts at putative self-driving.
Cruise yanks their entire fleet, more than a month after rubbing a pedestrian against the street like an avocado with a molcajete.

Cruise issued the recall Tuesday to correct the programming in its entire driverless fleet, saying it will address a "post-collision response" that "could increase the risk of injury."
"Again, in aggregate this is 2-4% of time in driverless mode." Cruise confirms robotaxis rely on human assistance every four to five miles Let's say you're averaging 25mph through San Jose due to lights and such. Your 5 mile trip is going to take 12 minutes. For both of these statements to be true your ONE FUCKING TRIP is going to have a 15-30 second intervention. And yet there's Hacker News, lapping this shit up.
As if an intervention is gonna last only seconds, lol. I'm so glad I'm not the only one gobsmacked and outraged by that fact. I never thought the definition of success for a company like Cruise would be not "is this safe enough for rollout" but "are the downsides linearly scalable," like every other fucking startup.
We'll call it the Barra Rule: "obfuscate and cover up until the outrage becomes self-sustaining."
To me the Theranos comparison is apt in one way and inapt in another. It works insofar as Cruise is clearly selling lies to unwitting investors. Where it breaks down (in my opinion) is that I think self-driving cars are possible in principle, though maybe not with anything like today's technology. Maybe the AI needs to be rebuilt from the ground up, and that could be a multi-decade project. But given that humans drive cars, I think it's reasonable that machines could drive cars.

Theranos, on the other hand, was grift by inspection. The tiny sample volumes she proposed to use couldn't, in some cases, give you a big enough statistical sample to say with any confidence that the measurements were meaningful. Biology is a statistical science, and you need a certain signal-to-noise ratio to say anything smart; below that threshold, you're only measuring noise. That is, there's a lower limit of detection for many metabolites independent of the measurement modality, and my belief (though I'm not an expert in what she was claiming to do) is that she was proposing to more or less break the laws of statistics. I could be wrong on both counts.
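To make the "laws of statistics" point concrete, here's an illustrative sketch. The concentration and volumes are made-up numbers, not Theranos specifics; the point is that for a rare analyte, a small enough sample contains too few copies for the count to mean anything, no matter how good the instrument is:

```python
import math

# Illustrative counting statistics for a rare analyte.
# (All numbers hypothetical, for illustration only.)
# If a marker is present at `conc` copies per microliter, a sample
# of `vol` microliters contains on average N = conc * vol copies,
# and Poisson statistics put the relative uncertainty at ~1/sqrt(N)
# regardless of how perfect the measurement instrument is.

conc_per_uL = 0.5   # hypothetical: one copy per 2 uL of blood

for vol_uL in (5, 50, 5000):          # finger-stick drop vs. venous draw
    n = conc_per_uL * vol_uL          # expected copies in the sample
    rel_err = 1 / math.sqrt(n)        # ~Poisson relative error
    print(f"{vol_uL:>5} uL sample: ~{n:.0f} copies, "
          f"~{rel_err:.0%} counting error")
# A few-microliter sample can't beat this floor: the "signal" is
# mostly sampling noise, independent of measurement modality.
```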
It's an interesting discussion as far as I'm concerned, because we're talking about two specialized, scientific pursuits with broad importance to a public that has only the shakiest understanding of how they work or what they mean.

Theranos argued they could do blood chemistry with samples too small to have any stochastic certainty that they would contain the correct proportions of the markers and antigens necessary to test. That's my understanding of the situation; my wife's father has a couple dozen patents in diabetes testing, and his take on Theranos was "I just don't see how it could work" from the very beginning. Several times he walked me through the problems they've run into again and again and again as the goal of "smallest, least invasive test possible" rubs up against the hard realities of protein chemistry and microfluidics.

The self-driving car contingent argues that they can do advanced autopilot with cognition too small to solve all the problems contained in an average drive. It's not the same, because there is no actual, physical limit to the amount of processing the cars have to do: it's not a "this will never work because physics" problem, it's a "this isn't going to work right now because limits of technology" problem. But I think from an investor and public-safety standpoint, "doesn't work right now" and "won't work ever" end up the same.

I still think Theranos had a way forward. They got my attention because the duopoly of Quest and LabCorp is awful to deal with, and any disruptor simply going "the same, only less dickish" could have made a mint. Had Theranos gone "actually, the Edison is further out than we hoped, so we've decided to build out a network of independent blood chemistry labs for consumers to take control of their health," they'd be a multi-billion-dollar company. If they'd gone with the GoodRx model instead of the flim-flam chicanery model, they'd probably be in every Walgreens.

And I think Google has a way forward, but only Google. I think they were able to convince the world they were working on "self driving cars" when in fact they were building out a map and a strategy for street sweepers, mail delivery bots, meter reader bots and community shuttles the world over. Their approach works great and without any problems whatsoever if:

- Weather is clear
- Route is known
- Environment is predictable

and if you keep it under 15mph and cover it in flashing lights, you're good. Google could roll their service out to the USPS and make their money back. They could roll it out to Waste Management and make their money back. Probably not so much municipal bus service? But a street sweeper is a gimme, particularly if you make it small enough that it can't really fuck up anything bigger than a motorcycle. Is it self-driving cars? Hell to the no. But it's a level of autonomy that works right now, and that replaces salaries with capital equipment and a service contract.

I'll go one further: I'll bet Google could get autonomous semis approved, on the understanding that (A) they operate in the middle of the night, (B) they operate with blinking lights, (C) they operate with an operator on board to take over in and out of cities, and (D) they operate entirely autonomously unless they don't, in which case they don't operate autonomously at all.

But none of that is "take me to work while I smoke a bowl."
Personally, I do think there is an argument to be made that self-driving cars might be in the realm of "this can't work because physics," if we define "physics" in a particular way: this can't work because the physics of compute tech and AI cannot reach the levels of safety we might demand from an actual, fully self-driving vehicle. Because, let's be honest, it depends on (a) Moore's law continuing, and (b) an unattainable amount of data, or some unforeseen leap in ML that allows a model to reason outside of its training data. I came across this, which argues that GPT-2 can't reason beyond its training data. Much like GPT, the dominant idea of AVs is that they would either be able to preload everything they could ever possibly encounter, or that they could reason outside of that dataset through some kind of magi- I mean, AGI. We know through Waymo/Google that the former is pretty much impossible in the near future, and we know through Tesla and GPT that the latter is also impossible in the near future, especially if what that paper argues is a fundamental property of all ML models and AGI turns out to be just a rabbit pulled out of a hat.
Sure, we could quibble about that: it sounds like you're arguing "GPT and similar models of intelligence will never be capable of the improvisation necessary to adequately replace human drivers." I think that's quite possibly "iron law" territory: their approach to intelligence is to find the middle ground between multiple inputs within a known data space, while improvisation depends on the interpretation of an unknown data space. On the other hand, I don't think "self-driving cars are forever impossible," because GPT and other approaches to AGI/LLMs/etc. are not the only possible approaches. To the contrary, I suspect we're starting to see a realization that LLMs and GPT are useful in specific niches of problem-solving, and that if those tools are restricted to those niches, everyone benefits. That will hopefully free up some research time and capital to attack the problems where GPT/LLMs fall down. I'll say this: I haven't seen any approach that I would trust to take the wheel sans a whole bunch of nerfing, and I don't know how you would get there from here. But I don't think that's "iron law" territory, just "no idea how we'd do that" territory.
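To illustrate the "known data space" point, here's a generic ML toy (nothing to do with any particular AV stack): fit a model on one region of inputs and ask it about another, and it confidently produces garbage, because it can only blend what it has already seen.

```python
import numpy as np

# Toy illustration of interpolation vs. improvisation: a model fit
# on a "known data space" does fine inside it and falls apart outside.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 2 * np.pi, 50)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=7)   # fit within [0, 2*pi]

for x in (np.pi / 2, 3 * np.pi / 2, 3 * np.pi):   # last point is out-of-range
    pred, truth = np.polyval(coeffs, x), np.sin(x)
    print(f"x={x:5.2f}  predicted={pred:9.2f}  actual={truth:5.2f}")
# Inside the training range the fit is near-perfect; at x = 3*pi the
# polynomial extrapolates wildly. The model never "improvises" --
# it only interpolates between things it has already seen.
```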
"No idea how we'd do that" is a good way to condense it. But it just seems like every promising foray into improvisation done by computers ultimately turns out to be a fata morgana or a false positive. What hopes I had before this week have been put to bed now. Put differently, how long must we search (and how many billions do we/Google/rich people spend) before we ask ourselves the hard question whether this is doable at all. That may very well turn out to be a failure of imagination on my part, but personally I feel like the position of "this is, for the forseeable future, iron law" is preferential to "it might work some day (we just don't know how, and have no practically attainable idea of how)".
I have seen four, possibly five forays into artificial intelligence in my lifetime. None of them have panned out in a "three laws of robotics" / cogito, ergo sum sort of way, but by sheer process of reduction we keep getting closer to an answer. That answer might be "never." Researchers keep swinging at the ball, though, which leads me to believe they think there's an outside chance they'll hit it out of the park.

By way of contrast: in college I sat through a two-hour lecture from a couple of guys who had modified collective-pitch helicopters with broadcast transmitters, gyros and servos to put a stereoscopic VHS-grade turret on an aerial platform. Their build cost was around $20k, their bird was an easy 50 lbs of nitromethane-powered terror, and it required a tremendous amount of skill between two operators to manipulate. I had Costco pizza for lunch? And now they've got FPV drones for $50 that will fit in your shirt pocket.

There are some problems that can be brute-forced through existing technology and some problems that can't. My personal sense of the state of self-driving cars is that our technology will totally drive a garbage truck without much drama, but getting you to work via the freeway is never going to be safe enough. Is it a leverage problem or a breakthrough problem? I don't know enough to have an opinion... but I know enough that I think GM is being pretty shady about it.