Interesting. I'd like to think that I wouldn't be one of those blindly applauding, but there's a lot of self-preservation in that applause. Nobody wants to be inferior; nobody wants to suggest that they're disposable or potentially replaceable. The future will present some seemingly unique problems, but as mentioned in the piece, maybe they're not all that unique.
I once gave a lecture in Holland in which I suggested such a vision of benevolent silicon creatures and suggested that the word "we" might someday come to encompass them, just as it now encompasses females and males, old and young, yellow and red, black and white, gay and straight, Arabs and Jews, weak and strong, cowardly and brave, short and tall, clever and silly, and so on. The next speaker, a gentle-looking, eloquent elderly fellow - indeed, quite resembling benevolent old Einstein - responded by arguing vociferously that the mere act of trying to develop artificial intelligence was inherently dangerous and evil, and that we should never, ever let computer programs make moral judgments, no matter how complex, subtle, or autonomous the programs might be. He argued that computers, robots, whatever they might become, irrespective of their natures, must in principle be kept out of certain areas of life - that our species has an exclusive and sacred right to certain behaviors and ideas, and this right must be protected above all.
Well, to my deep astonishment, when this gentleman had finished his pronouncements, nearly the entire audience rose to its feet and clapped wildly. Dazed, I could not help but be reminded of the crudest forms of racist, sexist, and nationalist oratory. Despite its high-toned and moralistic-seeming veneer, this exhortation and the audience's knee-jerk reaction seemed to me to be nothing more than a mindless and cruel biological tribalism rearing its ugly head. And this reaction, mind you, was in the supremely cosmopolitan, anti-Fascistic, internationally-minded country of Holland! Can you imagine how my ideas would have been greeted in the Bible Belt, or in Teheran or the Vatican?
Old behaviours, new divisions. Instead of race, religion, or ethnicity, it will be: are you carbon or silicon? Are you a biological human or a biological-technological hybrid cyborg? And so on.
I wonder if such overriding divisions are just artifacts of our brains' limited computational power, and whether they will survive when such limitations disappear. Think for a moment about why and where we use "we". I think we use it as a heuristic to represent a group whose members are similar to us in some sense, ignoring the various differences among those individual members, and I suspect we evolved such a notion because it conferred an evolutionary advantage. Distinguishing which members of one's environment offered the best chance of survival or of propagating one's progeny gives one a certain advantage, and in the absence of the computational power to compute this precisely, the simple heuristic of distinguishing a group based on some similarities could serve as a substitute. This heuristic holds some value even now, because we still lack the power to compute those relationships and advantages precisely. But will it hold the same value in the future, when the required computational power becomes available to individuals? And if its value diminishes, why should any entity feel the need to deploy this heuristic?