Funny you mention perceptrons, since it was Minsky who pointed out that they weren't as general as they seemed at first, making work in neural networks unfashionable for years. When neural networks came back, it was in particular applications. Yeah, they're great at pattern recognition, but no one is saying anymore that we just need to find the right neural network topology, train it, and boom, mind-in-a-box. All the successes you point out were on relatively modest problems. Which is great; better face recognition and question answering are within our reach, so let's do those. Let's not promise things we can't back up; that has not worked out well for us in the past.
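To make that concrete, here's a minimal sketch (assuming numpy; all the names here are mine, for illustration only) of exactly the limitation Minsky and Papert pointed out: a single linear threshold unit can never learn XOR, because XOR isn't linearly separable, while one hidden layer is already enough to represent it.

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])  # XOR labels

    def train_perceptron(X, y, epochs=1000, lr=0.1):
        """Classic perceptron learning rule on a single linear threshold unit."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            errors = 0
            for xi, yi in zip(X, y):
                pred = int(w @ xi + b > 0)
                if pred != yi:              # update weights only on mistakes
                    w += lr * (yi - pred) * xi
                    b += lr * (yi - pred)
                    errors += 1
            if errors == 0:                 # converged: data is linearly separable
                return True
        return False                        # never happens for XOR

    print("single-layer perceptron solves XOR:", train_perceptron(X, y))  # False

    # One hidden layer fixes it: XOR(a, b) = OR(a, b) AND NOT AND(a, b).
    # Weights are hand-wired just to show representability, no training needed.
    def two_layer_xor(x):
        h_or = int(x[0] + x[1] - 0.5 > 0)        # hidden OR unit
        h_and = int(x[0] + x[1] - 1.5 > 0)       # hidden AND unit
        return int(h_or - 2 * h_and - 0.5 > 0)   # output: OR and not AND

    print("two-layer net on XOR:", [two_layer_xor(x) for x in X])  # [0, 1, 1, 0]

The perceptron convergence theorem only guarantees convergence on linearly separable data, which is exactly what XOR violates; that's the gap between the single-layer story and what came later.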
A computational model that you can't implement, and can't prove that no one can implement, is not a very useful computational model.
We cannot now, nor will we ever be able to, devise a learning algorithm that performs well on every problem; that's essentially what the no-free-lunch theorems tell us. What we can do is devise learning algorithms that work well enough on particular problems.
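As a toy illustration of that (pure numpy, with synthetic targets I picked for the purpose, so take it as a sketch rather than anything rigorous): neither learner below dominates; each wins on the problem it happens to match.

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_linear(Xtr, ytr, Xte):
        """Ordinary least squares with a bias column."""
        A = np.c_[Xtr, np.ones(len(Xtr))]
        coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
        return np.c_[Xte, np.ones(len(Xte))] @ coef

    def fit_1nn(Xtr, ytr, Xte):
        """Predict each test point from its single nearest training point."""
        d = np.abs(Xte[:, None, 0] - Xtr[None, :, 0])
        return ytr[d.argmin(axis=1)]

    def mse(pred, true):
        return float(np.mean((pred - true) ** 2))

    Xtr = rng.uniform(-3, 3, (200, 1))
    Xte = rng.uniform(-3, 3, (200, 1))
    targets = {
        "linear target": lambda X: 2 * X[:, 0] + 1,
        "sinusoidal target": lambda X: np.sin(3 * X[:, 0]),
    }
    for name, f in targets.items():
        ytr = f(Xtr) + rng.normal(0, 0.1, 200)  # noisy training labels
        yte = f(Xte)                            # clean test targets
        print(name,
              "| linear MSE:", round(mse(fit_linear(Xtr, ytr, Xte), yte), 3),
              "| 1-NN MSE:", round(mse(fit_1nn(Xtr, ytr, Xte), yte), 3))

The linear model wins handily on the linear target and loses badly on the sinusoid; nearest-neighbor does the reverse. You pick the learner that fits the problem, because no learner fits them all.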
Computing does not work the way empirical science does. Mostly it works the way mathematics works; we proceed by proving things, if only existence proofs in the form of working artifacts. To the extent that it's empirical, it's empirical the way engineering is; we want to see how well our artifacts work.

With regard to his classifications of the human brain, I'm more inclined to think they're not wrong, just not practical to program manually. They're much more complex than we thought, which is why we're turning to things like machine learning.
Darwin? Freud? Einstein? They were all wrong on countless points. But their grand theories created and advanced their fields. 'Grand theories' give science targets. Proving something wrong is just as good as proving something right.