Random guessing might be fast enough, but it is probably not good enough. Random guessing is as bad as it gets.
That is what the NFL theorems tell us. From the paper you linked: for a particular region of the search space, you can do better than random guessing, but you will do worse than random guessing on other regions. Thus, if you want a useful search algorithm, you tune it for the region it will be operating in; you make your TSP algorithm perform well on TSP problems, knowing that if you feed it something else, well, GIGO.

"Random guessing for all values." For one, random guessing might be fast enough. For another, it doesn't have to learn for all values, just for most. Mathematically, the implications are significantly different.
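To make that concrete, here is a minimal sketch (the two landscapes and the one-bit-flip climber are my own illustrative choices, not from the paper): a hill-climber "tuned" for regions where neighboring points have similar fitness clearly beats random guessing there, but on a landscape with no neighborhood structure its advantage evaporates.

```python
import random

L, EVALS, TRIALS = 20, 50, 1000  # bitstring length, evaluation budget, repetitions

def onemax(bits):
    """Structured landscape: fitness = number of 1 bits, so neighboring
    bitstrings (one flip apart) have similar fitness."""
    return sum(bits)

def scrambled(bits):
    """Unstructured landscape: a fixed but effectively random value per
    point, so neighboring bitstrings are uncorrelated."""
    return hash(tuple(bits)) % (L + 1)

def random_search(f, evals):
    """Baseline: sample points uniformly at random, keep the best value seen."""
    return max(f([random.randint(0, 1) for _ in range(L)])
               for _ in range(evals))

def hill_climb(f, evals):
    """Searcher tuned for smooth regions: flip one random bit, keep the
    flip if it doesn't hurt, revert it otherwise."""
    x = [random.randint(0, 1) for _ in range(L)]
    cur = best = f(x)
    for _ in range(evals - 1):
        i = random.randrange(L)
        x[i] ^= 1                  # propose a one-bit move
        new = f(x)
        if new >= cur:
            cur = new              # keep the move
        else:
            x[i] ^= 1              # revert the flip
        best = max(best, new)
    return best

random.seed(1)
for name, f in (("onemax (structured)", onemax),
                ("scrambled (no structure)", scrambled)):
    rs = sum(random_search(f, EVALS) for _ in range(TRIALS)) / TRIALS
    hc = sum(hill_climb(f, EVALS) for _ in range(TRIALS)) / TRIALS
    print(f"{name:26s} random={rs:5.2f}  hill-climb={hc:5.2f}")
```

On the structured landscape the climber exploits the correlation between neighbors; on the scrambled one every move is just another blind sample, so the two methods come out roughly even, which is the point about tuning for the region you actually operate in.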
The NFL doesn’t tell us that. It tells us that we can’t design a single algorithm that outperforms the others on all inputs; averaged over every possible objective function, all algorithms do equally well.
In addition to governing both how a practitioner should design their search algorithm and how well the actual algorithm they use performs, the inner product result can be used to make more general statements about search, results that hold for all P(f)’s. It does this by allowing us to compare the performance of a given search algorithm on different subsets of the set of all objective functions. The result is the no free lunch theorem for search (NFL). It tells us that if any search algorithm performs particularly well on one set of objective functions, it must perform correspondingly poorly on all other objective functions.

This implication is the primary significance of the NFL theorem for search. To illustrate it, choose the first set to be the set of objective functions on which your favorite search algorithm performs better than the purely random search algorithm that chooses the next sample point randomly. Then the NFL for search theorem says that, compared to random search, your favorite search algorithm “loses on as many” objective functions as it wins (if one weights wins/losses by the amount of the win/loss). This is true no matter what performance measure you use.
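The "loses on as many as it wins" claim can be checked exhaustively on a toy space. Here is a sketch (the four-point domain, the two deterministic rules, and the best-value-found measure are all my own assumptions for illustration, not from the paper): enumerate every objective function f: X → {0, 1}, run two different deterministic, non-retracing algorithms on each, and the performance histograms over all functions come out identical.

```python
from itertools import product
from collections import Counter

X = range(4)   # tiny search space
M = 2          # evaluation budget

def run(algo, f, m):
    """Run a deterministic, non-retracing search algorithm for m evaluations.
    `algo` maps the history of (point, value) pairs and the unvisited set to
    the next point to sample. Returns the best value seen."""
    history, unvisited = [], set(X)
    for _ in range(m):
        x = algo(history, unvisited)
        unvisited.remove(x)
        history.append((x, f[x]))
    return max(v for _, v in history)

def scan(history, unvisited):
    """Algorithm 1: sample points in fixed left-to-right order."""
    return min(unvisited)

def adaptive(history, unvisited):
    """Algorithm 2: an adaptive rule -- after seeing a 1, jump to the
    highest unvisited point; otherwise take the lowest."""
    if history and history[-1][1] == 1:
        return max(unvisited)
    return min(unvisited)

# Enumerate every objective function f: X -> {0, 1} and tally each
# algorithm's performance histogram over all of them.
for algo in (scan, adaptive):
    hist = Counter(run(algo, f, M) for f in product((0, 1), repeat=len(X)))
    print(algo.__name__, dict(hist))
```

Both lines print the same histogram, and swapping in any other deterministic, non-retracing rule for `adaptive` leaves it unchanged; that is the NFL statement at miniature scale.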
Seems good enough for evolution. But there's a good argument that Strong AI isn't practical in a reasonable amount of time: human evolution took about 14 galactic years (a galactic year is roughly 230 million years, so call it 3 billion years). Of course, we don't know to what degree sentience is learned versus inherited.

It does kind of bother me that debates about Minsky always seem to devolve into debates about Strong AI. Yes, they failed to accomplish that, and relative to Strong AI, things like natural language processing are modest. But relative to everything else we've achieved in computer science, I think things like Watson, Siri, and Wolfram Alpha are rather significant, and those are only the consumer-visible ones. Machine learning is used by everything from search engines to bioinformatics, and most of it in some way extends Minsky's work, even his brain-oriented work like Society of Mind. That book addresses subjects like learning meaning, language processing, ambiguity, and spatial perception, all of which apply equally to various Weak AI systems.