comment by wasoxygen
wasoxygen  ·  3178 days ago  ·  link  ·    ·  parent  ·  post: AlphaGo BEATS Go world champion Sedol in first game

Denigration wasn't the right word, but I was weary after staying awake through three hours of Korean afternoon.

I couldn't find the Hofstadter quote I was looking for either, where he explains how neurons count incoming impulses and fire when they hit a threshold: adders and logic gates.
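The idea fits in a few lines. Here's a sketch of a McCulloch-Pitts-style threshold unit (the weights and thresholds below are my own illustration, not Hofstadter's):

```python
def threshold_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of incoming
    impulses meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the unit behaves as an AND gate;
# drop the threshold to 1 and the same unit behaves as an OR gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, threshold_neuron((a, b), (1, 1), 2))
```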

Maybe I was a bit paranoid after reading an interview with Eliezer Yudkowsky, but I was getting a creeping sense of menace watching Lee struggle.

AlphaGo got to be good by watching humans, like human children do, but it goes beyond imitation. A company representative said that AlphaGo estimated a 1-in-10,000 chance that a human player would have made one unusual move it played in Game 2.

Today humans understand how AlphaGo plays go better than we understand how humans play go. As long as humans get superior results, the AI is a novelty; a backgammon AI is also a novelty. But it is hard to believe that our creations won't eventually branch out beyond manufacturing and game playing.

I recall you thought Yudkowsky was overreacting to AlphaGo's early success. Is there another capability that you would watch out for as a more significant sign? Suppose BetaGo maintains a flock of complex go-playing-programs generated by genetic algorithms, and uses the best ones to beat human champions. Perhaps no one could explain how the winning algorithms work. Would that be meaningfully different from what we call intelligence?
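The BetaGo thought experiment is just the standard genetic-algorithm loop: keep a population of candidate programs, score them, and refill the flock with mutated copies of the winners. A toy version, with a stand-in fitness function (counting 1-bits) where "beats human champions" would go:

```python
import random

random.seed(0)  # reproducible toy run

def evolve(fitness, genome_len=8, pop_size=20, generations=50):
    """Minimal genetic algorithm: keep the fittest half of the
    population, refill it with point-mutated copies of survivors."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genome_len)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in fitness: a real BetaGo would score each genome by game wins.
best = evolve(fitness=sum)
print(best)
```

The loop itself is perfectly comprehensible; what no one may be able to explain is why the particular genome that survives plays well.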

user-inactivated  ·  3178 days ago  ·  link  ·  

AI is mathematics and programming. There are quite a lot of people who wish we'd given machine learning the more descriptive, if boring, name "computational statistics." But programming is unlike math in that we don't get to have foundations, and so we reach for metaphors. Back in the 50s people were thinking of compilers as an AI application, because they didn't have a theory of compilers, and taking a program description in a high-level language and producing an executable program looked a lot like handing a programmer a specification to implement. Needless to say, compilers can be smart, but they aren't intelligent. But thinking of programs in terms of cognition is a useful thing to do, because the metaphor can guide us to a solution to problems we only know how to state in terms of what a person does. It's part of the fun and part of the weakness of computing that most of our good ideas come from daydreaming. That's great and good as long as we don't forget the distinction between what we're imagining and the actual technology. Artificial Intelligence is programming, not the Great Work.

    Is there another capability that you would watch out for as a more significant sign?

That isn't a technological question. Yudkowsky is a great popularizer of decision theory, but when he, and every other transhumanist, starts predicting the future and giving you apocalyptic and transcendent pictures of where AI is going, they've taken off their technologist caps and are playing prophets and alchemists, and they don't even have the awesome illustrations. Let me riff on your last question:

    Suppose BetaGo maintains a flock of complex go-playing-programs generated by genetic algorithms, and uses the best ones to beat human champions. Perhaps no one could explain how the winning algorithms work. Would that be meaningfully different from what we call intelligence?

Suppose GammaGo is as complicated as it needs to be, but as easily comprehensible as the textbook alpha-beta pruning tic-tac-toe program. Is that meaningfully different than what we call intelligence? Of course. It's a program that plays Go, and that's it. So is it incomprehensibility that makes the Go-playing program you're imagining look intelligent? As Wittgenstein said, there are no surprises in logic; either the program you're picturing is comprehensible, because it is a program and thus an application of logic, or it's science fiction. It is easy to drift into science fiction when thinking about AI, especially if, like Yudkowsky, you do much more daydreaming about AI than writing programs.