I suspect you don't have this strong stance because you have less experience with the regulatory apparatus of the US federal government, less experience with mechanical design, and possibly less experience with Silicon Valley "bad boys."

Because he did say that. Check it:

"It is challenging. It's hard to do QA on neural networks. You can examine any single state of them, but not really understand them. You can fix the errors they make, but not know how you fixed it or whether your fix is going to work in other cases. On their own that sounds too scary, but the problem is they are outperforming other algorithms at many of the problems they are being applied to."

That's an admission that neural networks are unknowable, but an assertion that they are better because, you know, neural networks.
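And that admission is the accurate part. Here's a toy sketch of exactly that failure mode, assuming PyTorch and a made-up two-feature classifier (nothing here is anyone's real model or data): you patch the one input QA caught, and the network gives you no way of knowing what else you just changed.

```python
# A toy sketch, assuming PyTorch. The two-feature classifier and the
# synthetic data are made up for illustration; nothing here is anyone's
# real model. The point: you can "fix" a bad input, but the network
# gives you no way to know what that fix did to the inputs around it.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny binary classifier: 2 features in, 2 logits out.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Synthetic ground truth: label is 1 exactly when x + y > 0.
X = torch.randn(512, 2)
y = (X.sum(dim=1) > 0).long()
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# You can "examine any single state" -- every weight is right here --
# but the raw numbers don't explain the decision boundary they encode.
print(model[0].weight)

# QA turns up a failure: hunt near the boundary for a misclassified point.
with torch.no_grad():
    probe = 0.1 * torch.randn(2000, 2)
    probe_y = (probe.sum(dim=1) > 0).long()
    miss = (model(probe).argmax(dim=1) != probe_y).nonzero(as_tuple=True)[0]
assert len(miss) > 0, "toy model got lucky; change the seed"
bad, bad_y = probe[miss[0]].unsqueeze(0), probe_y[miss[0]].unsqueeze(0)

# The "fix": fine-tune on that single failing input until it passes.
for _ in range(50):
    opt.zero_grad()
    loss_fn(model(bad), bad_y).backward()
    opt.step()
print("patched input now classified:", model(bad).argmax(dim=1).item())

# But what did the fix actually change? Points right next to the patched
# one may still fail, and overall accuracy may have quietly regressed,
# and there is no diff anywhere to read to find out why.
with torch.no_grad():
    neighbors = bad + 0.05 * torch.randn(100, 2)
    nbr_y = (neighbors.sum(dim=1) > 0).long()
    nbr_wrong = (model(neighbors).argmax(dim=1) != nbr_y).sum().item()
    train_acc = (model(X).argmax(dim=1) == y).float().mean().item()
print("neighbors still wrong:", nbr_wrong, "/ 100")
print("training accuracy after the point-fix:", train_acc)
```

Run that and the patched point will likely pass while some of its neighbors may keep failing, and overall accuracy can quietly slip. And that's sixteen hidden units, not a driving stack.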
He also said this:

"I will presume the regulators will say, 'We only want to scare away dangerous innovation' but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. It's all there trying to take the car — the 2nd most dangerous legal consumer product — and make it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way."

That's another assertion that the goal of "safety" must not be put ahead of the goal of "innovation" and that, further, "innovation" is not possible without compromising "safety" in some way.

Which is fucking bullshit. Which isn't even worth respecting. Which is making me lose respect for you for not recognizing the fallacy.

How much was safety compromised with the introduction of seat belts? How much was safety compromised with the introduction of crumple zones? How much was safety compromised with the introduction of collapsible steering columns? How much was safety compromised with the introduction of antilock brakes?

Yer li'l buddy Brad literally argued that "safety" is too important for testing and verification. His reason? His excuse? Citation needed, bitch.

And this is why computer scientists shouldn't be allowed to run rampant on all this - they keep saying stupid shit like "trolley problem" without understanding down to their very bones that it's abject bullshit.

So yeah. Your guy is an asshole, he did say that, and you're wrong. Sorry.