This is a monumental step for AI.
(Also I love Monte Carlo tree search so so much)
Interesting, I wonder if I can find the unconventional move. Usually all my moves are moves no human champion would ever play.

[Lee] said AlphaGo's early strategy was "excellent" and that he was stunned by one unconventional move it had made that a human never would have played.
There was some interesting discussion on Facebook -- evidently the commentators (themselves professional Go players, I think) decided Lee played a subpar game. Obviously Lee disagreed, which raises the interesting point that not even really good Go players can always tell what's going on in a match.
I want to think that Lee has an advantage in the meta-game, being able to experiment and try different strategies, but this might be a disadvantage from another point of view. Now that the second game is decided, Saturday will be a big game. It starts at 13:00 Seoul time; I might stay up to watch starting Friday night at 23:00 New York time.
I know you jest, but if you really wanted, you should be able to mathematically formulate a novel winning strategy until someone/everyone finds an effective counter to your technique. That could be what is happening here, with machine learning, neural networks, AI, whatever. Dunno if you could ever beat AlphaGo more than a few times, but it would probably cost anyone at least a couple to a few years of 40-hour work weeks writing some slick code. I more or less view this as a team of people playing Go through a computer (albeit indirectly). So it was only a matter of time before an algorithm beat a human master, just like Chess, yep. Edit: Thanks for the link to the more explicit play-by-play.
Surely you jest. I spent hours working on a clumsy brute-force Sudoku algorithm for Project Euler Problem 96, and it needed half an hour to grind through fifty boards. Board #47 was especially annoying and required 440,865 steps:

5 9 _ _ 8 _ _ _ 1
_ 3 _ _ _ _ _ 8 _
_ _ _ _ _ 5 8 _ _
_ 5 _ _ 6 _ _ 2 _
_ _ 4 1 _ _ _ _ _
_ 8 _ _ _ _ _ 3 _
1 _ _ _ 2 _ _ 7 9
_ 2 _ 7 _ _ 4 _ _
_ _ 1 _ _ 7 _ 9 _

Perhaps by "you" you mean a large, talented team of researchers and programmers backed by a corporation with a market cap heading toward a trillion USD. Perhaps our species can maintain some dignity by letting Google take the prize rather than Facebook.
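If anyone wants to play along at home, a clumsy brute-force solver like mine is only a screenful of Python. This is a minimal backtracking sketch, not my actual program, and the helper names are my own:

```python
def solve(board):
    """Solve a 9x9 Sudoku in place; 0 marks an empty cell.
    Returns True once a solution is found."""
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for digit in range(1, 10):
                    if valid(board, r, c, digit):
                        board[r][c] = digit
                        if solve(board):
                            return True
                        board[r][c] = 0  # undo and backtrack
                return False  # no digit fits here: dead end
    return True  # no empty cells left

def valid(board, r, c, digit):
    """Check the row, column, and 3x3 box constraints."""
    if digit in board[r]:
        return False
    if any(board[i][c] == digit for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != digit
               for i in range(3) for j in range(3))
```

It tries digits 1-9 in each empty cell and undoes the move whenever it hits a contradiction, which is exactly why a hard board like #47 burns hundreds of thousands of steps.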
Efficient Sudoku solvers require knowing more than a little about search algorithms, but they're easy if you do. Go has a ridiculously large search space; it would take a lot of insight into Go to hand-roll a heuristic for it. AlphaGo learns the heuristic, so its creators didn't have to be clever; they just had to throw a lot of big iron at training it. After that, you have a program that plays Go well, and you've demonstrated that throwing big iron at a problem is often an effective way to solve it, if you have the big iron to throw around. But you haven't really learned anything about Go or Go players. I'm becoming more sympathetic to Chomsky's complaints about statistical results in computational linguistics by the day.
Looking bad for Team Human in game 3. I would like to take comfort in your denigration of AI, but it seems too easy to apply it to any future advance. AlphaGo beats the world champion? It's just a fancy heuristic for searching a 361!-sized search space. AlphaGogh makes beautiful paintings? Well, there are only so many hues you can put on a pixel of canvas. Cognition itself is computation. I sum, therefore I think.
Denigration wasn't the right word, but I was weary after staying awake through three hours of Korean afternoon. I couldn't find the Hofstadter quote I was looking for either, where he explains how neurons count incoming impulses and fire if they hit a threshold. Adders and logic gates. Maybe I was a bit paranoid after reading an interview with Eliezer Yudkowsky, but I was getting a creeping sense of menace watching Lee struggle. AlphaGo got to be good by watching humans, like human children do, but it goes beyond imitation. A company representative said that AlphaGo estimated a 1-in-10,000 chance that a human player would make one unusual move it played in Game 2. Today humans understand how AlphaGo plays Go, and don't understand well how humans play Go. As long as humans get superior results, the AI is a novelty; a backgammon AI is also a novelty. But it is hard to believe that our creations won't eventually branch out beyond manufacturing and game playing. I recall you thought Yudkowsky was overreacting to AlphaGo's early success. Is there another capability that you would watch out for as a more significant sign? Suppose BetaGo maintains a flock of complex Go-playing programs generated by genetic algorithms, and uses the best ones to beat human champions. Perhaps no one could explain how the winning algorithms work. Would that be meaningfully different from what we call intelligence?
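The Hofstadter picture, as best I remember it: a unit sums weighted incoming impulses and fires when they reach a threshold, and such units are already enough to build logic gates. A toy sketch in Python, with weights and thresholds chosen by me, not quoted from him:

```python
def fires(inputs, weights, threshold):
    """McCulloch-Pitts style unit: sum weighted inputs, fire at threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Each gate is a single threshold unit.
def AND(a, b): return fires([a, b], [1, 1], 2)
def OR(a, b):  return fires([a, b], [1, 1], 1)
def NOT(a):    return fires([a], [-1], 0)

# XOR is not computable by one such unit; it takes two layers of them.
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))
```

A single unit of this kind gives you AND, OR, and NOT; wiring them together gives you everything else, which is the adders-and-logic-gates point.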
AI is mathematics and programming. There are quite a lot of people who wish we'd given machine learning the more descriptive, if boring, name "computational statistics." But programming is unlike math in that we don't get to have foundations, and so we reach for metaphors. Back in the 50s, people were thinking of compilers as an AI application: they didn't have a theory of compilers, and taking a program description in a high-level language and producing an executable program looked a lot like handing a programmer a specification to implement. Needless to say, compilers can be smart, but they aren't intelligent. Still, thinking of programs in terms of cognition is a useful thing to do, because the metaphor can guide us to a solution to problems we only know how to state in terms of what a person does. It's part of the fun, and part of the weakness, of computing that most of our good ideas come from daydreaming.

That's all well and good as long as we don't forget the distinction between what we're imagining and the actual technology. Artificial Intelligence is programming, not the Great Work, and the Great Work isn't a technological question. Yudkowsky is a great popularizer of decision theory, but when he, and every other transhumanist, starts predicting the future and giving you apocalyptic and transcendent pictures of where AI is going, they've taken off their technologist caps and are playing prophets and alchemists, and they don't even have the awesome illustrations.

Let me riff on your last question: Suppose GammaGo is as complicated as it needs to be, but as easily comprehensible as the textbook alpha-beta-pruning tic-tac-toe program. Is that meaningfully different from what we call intelligence? Of course. It's a program that plays Go, and that's it. So is it incomprehensibility that makes the Go-playing program you're imagining look intelligent?
As Wittgenstein said, there are no surprises in logic; either the program you're picturing is comprehensible, because it is a program and thus an application of logic, or it's science fiction. It is easy to drift into science fiction when thinking about AI, especially if, like Yudkowsky, you do much more daydreaming about AI than writing programs.
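For concreteness, the textbook tic-tac-toe program I have in mind is about this size. A sketch in negamax form with alpha-beta pruning, details mine rather than from any particular textbook:

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None.
    The board is a flat list of 9 cells: 'X', 'O', or ' '."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player, alpha=-2, beta=2):
    """Score the position for `player`: +1 win, 0 draw, -1 loss,
    pruning branches that cannot change the result."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if ' ' not in board:
        return 0  # board full, no winner: draw
    other = 'O' if player == 'X' else 'X'
    best = -2
    for i, cell in enumerate(board):
        if cell == ' ':
            board[i] = player
            score = -negamax(board, other, -beta, -alpha)
            board[i] = ' '
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # this branch is already refuted
    return best
```

Every line is inspectable, and the whole thing fits in your head; that's what I mean by comprehensible.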