Is this true? I don't keep track of the news, and perhaps this is why I'm baffled by this idea - that it would want to lose on purpose.
No, it's a joke. Isaac Asimov wrote some sci-fi stories about an AI named Multivac. Most of the time Multivac made just enough mistakes so humans could feel a bit less useless. The AI only wanted their happiness, and being less than perfect helped with that purpose. It would be nice if AlphaGo could feel the 'need' to lose for the beauty of this game about intuition and harmony.
Thanks for clarifying that. Otherwise it would ruin what we consider the beauty of the unknown and dispel the fog of mystery, wouldn't it? The fact that we can program a machine to win a game of intuition - implying a sort of system behind it - that is.