rob05c  ·  3607 days ago  ·  post: The Society of Mind – Marvin Minsky

    I know that there is no learning algorithm that is better than random guessing, because there is a theorem that tells me so.

That's random guessing over all values, though. For one, random guessing might be fast enough. For another, it doesn't have to learn for all values, just for most. Mathematically, the implications are significantly different.

    We cannot now nor will we ever be able to devise a learning algorithm that performs well for every problem.

The NFL doesn’t tell us that. The NFL tells us we can’t design a single algorithm which performs optimally for all inputs [1] [2]. 'Well' is relative. It doesn’t need to be optimal, it just needs to be fast enough. Just like O(n log n) is fast enough to sort several hundred million integers on your CPU in a second or so, there is a point of computing power for which O(q) is fast enough to generally learn, whatever q is, even if it’s no better than random. Now, if you can prove q = n! or some such, you can demonstrate it’s not achievable with the mass we have to work with in the universe. But nobody’s proved that, and the NFL certainly doesn’t.
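
To put rough numbers on "fast enough", here's a minimal sketch in Python; the input sizes are arbitrary and the timings depend entirely on your hardware:

    import random
    import time

    # Time the built-in O(n log n) sort (Timsort) at a couple of input sizes.
    # Bump the sizes up if you have the RAM and the patience.
    for n in (10**6, 10**7):
        data = [random.randrange(n) for _ in range(n)]
        start = time.perf_counter()
        data.sort()
        print(f"n = {n:,}: sorted in {time.perf_counter() - start:.2f} s")

The exact constants don't matter; the point is only that an n log n curve stays tractable long after a quadratic or exponential one doesn't.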

Furthermore, 'for all values' is misleading. Consider quicksort: it performs better than most other sorting algorithms on average, at the price of a significantly worse worst case (there are analogous NFL-style results for sorting). Likewise, there may (you might even be able to prove there does) exist a general-purpose learning algorithm which performs better than random on 99.99% of problems, at a huge cost on the other 0.01%.
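
As a toy illustration of that trade-off (a sketch, not anyone's production quicksort): a naive quicksort that always takes the first element as its pivot averages O(n log n) comparisons on random input, but degrades to O(n^2) on input that is already sorted.

    import random
    import sys

    sys.setrecursionlimit(20_000)  # the worst case recurses ~n frames deep

    comparisons = 0

    def quicksort(xs):
        """Naive quicksort: first element as pivot, deliberately un-tuned."""
        global comparisons
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        comparisons += len(rest)
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return quicksort(left) + [pivot] + quicksort(right)

    n = 2000
    for name, data in (("random input", random.sample(range(n), n)),
                       ("already sorted", list(range(n)))):
        comparisons = 0
        assert quicksort(data) == sorted(data)
        print(f"{name}: {comparisons:,} comparisons")

Real implementations randomize the pivot (or take a median of three) precisely to push that worst case onto inputs you are unlikely to ever see.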

Unless the human brain is doing something odd with quantum physics, or has some unknown metaphysical component (in other words, unless it is strictly more powerful than a linear bounded automaton, an LBA), there is by definition some level of FLOPS beyond which a computer can learn faster than a human brain, even by random guessing.

I’m not making claims or predictions. I’m not saying we’ll have a sentient computer in the next ten, or hundred, or thousand years. I’m simply saying, it’s theoretically possible (1) given enough computing power and (2) assuming the human brain is an LBA.





user-inactivated  ·  3607 days ago

    That's random guessing over all values, though. For one, random guessing might be fast enough. For another, it doesn't have to learn for all values, just for most. Mathematically, the implications are significantly different.

Random guessing might be fast enough, but it is probably not good enough. Random guessing is as bad as it gets.

    The NFL doesn’t tell us that. The NFL tells us we can’t design a single algorithm which performs optimally for all inputs.

That is what NFLs tell us. From the paper you linked:

    In addition to governing both how a practitioner should design their search algorithm, and how well the actual algorithm they use performs, the inner product result can be used to make more general statements about search, results that hold for all P(f)’s. It does this by allowing us to compare the performance of a given search algorithm on different subsets of the set of all objective functions. The result is the no free lunch theorem for search (NFL). It tells us that if any search algorithm performs particularly well on one set of objective functions, it must perform correspondingly poorly on all other objective functions. This implication is the primary significance of the NFL theorem for search. To illustrate it, choose the first set to be the set of objective functions on which your favorite search algorithm performs better than the purely random search algorithm that chooses the next sample point randomly. Then the NFL for search theorem says that compared to random search, your favorite search algorithm “loses on as many” objective functions as it wins (if one weights wins/losses by the amount of the win/loss). This is true no matter what performance measure you use.

For a particular class of objective functions, you can do better than random guessing; you will then do worse than random guessing on the rest. Thus, if you want a useful search algorithm, you tune it for the problems it will actually be operating on: you make your TSP algorithm perform well on TSP instances, knowing that if you feed it something else, well, GIGO.
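
Here is a tiny, enumerable version of that averaging claim (a Python sketch; the domain, the two search orders, and the performance measure are all arbitrary toy choices): averaged over every possible objective function on a small domain, two fixed, non-repeating search orders need exactly the same number of evaluations to find the maximum.

    from itertools import product

    domain = range(4)                                        # 4 candidate points
    objectives = list(product((0, 1), repeat=len(domain)))   # all 2^4 objective functions

    def evals_to_find_max(order, f):
        """Evaluations a fixed search order needs before it first hits max(f)."""
        best = max(f)
        for i, x in enumerate(order, start=1):
            if f[x] == best:
                return i

    searcher_a = (0, 1, 2, 3)   # plain left-to-right sweep
    searcher_b = (3, 1, 0, 2)   # some other fixed order

    for name, order in (("A", searcher_a), ("B", searcher_b)):
        avg = sum(evals_to_find_max(order, f) for f in objectives) / len(objectives)
        print(f"searcher {name}: {avg:.3f} evaluations on average")

Bias either searcher toward a particular subset of objectives and it pulls ahead there and falls behind on the complement, which is exactly the tuning described above.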

rob05c  ·  3607 days ago

    Random guessing is as bad as it gets.

Seems good enough for evolution. But there's a good argument that Strong AI isn't practical in a reasonable amount of time: human evolution took something like 14 galactic years, and at roughly 225 to 250 million years per galactic year, that's on the order of three billion years. Of course, we don't know to what degree sentience is learned versus inherited.

It does kind of bother me that debates about Minsky always seem to devolve into Strong AI. Yeah, they failed to accomplish that. And relative to Strong AI, things like natural language processing are modest. But relative to everything else we've achieved in computer science, I think things like Watson, Siri, and Wolfram Alpha are rather significant. And those are only the consumer-visible ones. Machine learning is used in everything from search engines to bioinformatics, and most of it in some way extends Minsky's work.

That includes even his brain-oriented work, like Society of Mind: the book addresses numerous subjects, such as learning meaning, language processing, ambiguity, and spatial perception, which are equally applicable to various Weak AI systems.