Personally, I do think there is an argument to be made that self-driving cars might be in the realm of "this can't work because physics", if we define "physics" in a particular way: this can't work because the physics of compute tech and AI cannot reach the levels of safety we would demand from an actual, fully self-driving vehicle. Because, let's be honest, it depends on a) Moore's law continuing, and b) either an unattainable amount of data or some unforeseen leap in ML that lets a model reason outside of its training data.

I came across a paper which argues that GPT-2 can't reason beyond its training data. Much like GPT, the dominant idea behind AVs is that they would either be able to preload everything they could ever possibly encounter, or reason outside of that dataset through some kind of magi- I mean AGI. We know through Waymo/Google that the former is pretty much impossible in the near future, and we know through Tesla and GPT that the latter is also impossible in the near future, especially if what that paper describes is a fundamental limitation of all ML models and AGI turns out to be just a rabbit pulled out of a hat.