I read an intriguing article earlier that might offer something of a counter to this piece - not so much with regard to whom we will come to include in 'we', but to the view that artificial intelligence has to be intrinsically anthropomorphic. The author sees this tendency as problematic and as limiting how we think about A.I.:

The real philosophical lessons of A.I. will have less to do with humans teaching machines how to think than with machines teaching humans a fuller and truer range of what thinking can be (and for that matter, what being human can be).

Now that I re-read that line in the context of this discussion, perhaps it's more relevant than I first thought. If such a hypothetical form of A.I. came to exist as presented in Hofstadter's paper, I could see myself including them in a use of 'we'. If it had a similar form of intelligence, could empathise, could form a comparable relationship with a human, etc., then I think it would only seem natural to do so. A potential analogy: if you could upload a human brain/mind/personality to a similar system, which could then live on, I don't think there would be much issue in including that in a use of 'we'. So stretching that to a purely artificial intelligence doesn't seem out of the question.

I'm again reminded of a Marshall McLuhan quote that I mentioned on a previous thread here:

...All media, from the phonetic alphabet to the computer, are extensions of man that cause deep and lasting changes in him and transform his environment. Such an extension is an intensification, an amplification of an organ, sense or function, and whenever it takes place, the central nervous system appears to institute a self-protective numbing of the affected area, insulating and anesthetizing it from conscious awareness of what’s happening to it. It’s a process rather like that which occurs to the body under shock or stress conditions, or to the mind in line with the Freudian concept of repression. I call this peculiar form of self-hypnosis Narcissus narcosis, a syndrome whereby man remains as unaware of the psychic and social effects of his new technology as a fish of the water it swims in. As a result, precisely at the point where a new media-induced environment becomes all pervasive and transmogrifies our sensory balance, it also becomes invisible.

In this case the A.I. beings could be seen as an extension of human experience as a whole. If these beings were as capable as imagined by Hofstadter, then I think they too could become 'invisible' in a sense: invisible to us as a form of technology separate from humans. They would become part of us as a collective species.
Hofstadter is great. In Star Wars: A New Hope, C3PO and R2D2 are purchased and tethered by the hero of the story. C3PO calls him "Master Luke". We still root for Luke and empathize with him. It might be argued that Luke was a good guy in a bad culture. Slave owners in the US were not consciously evil; they were evil in ignorance. These are not the same kinds of evil.

A few years ago, I took a course in Research Ethics. It was fascinating. It seems that ideally, we should feel solidarity with similarity of mind: Are chimpanzees and dolphins so intelligent that they shouldn't be experimented upon? But then, we also feel solidarity with progeny: Should we feel less solidarity with a human that is less intelligent than a chimpanzee, than we do with the more intelligent ape? Then there is solidarity of place: We decide it is better that many Iraqis die, rather than risk fewer American lives. In questions of climate change and economics, there is also solidarity with those in our time. Or mix them up: Someone doesn't buy leather goods, because they are made from animals, but buys cloth goods made by people in sweatshops.

We should be so lucky that in 2493 it is in our hands to make such judgements.
But then, we also feel solidarity with progeny: Should we feel less solidarity with a human that is less intelligent than a chimpanzee, than we do with the more intelligent ape?

I agree with the discussion about this as it is – but I went down a different path. As far as we know, apes and chimps and pigs and fish and humans were all "created" from "whatever". As in, at the beginning of time, we did not create the apes or chimps or pigs. Humans are separate from pigs, who are separate from apes, who are separate.... The evolution or creation can be debated, but I don't think anyone believes that humans created pigs.

Would our attitude be different if we had been the creators? At what point, especially when it concerns super-intelligent life, does the relationship with that entity become more like that of a parent and child? Where we look at it as something we created and a piece of us? Does it ever? Does it start like that (i.e. we love our adorable, freaky little Siri) but change as the intelligence becomes less terribly adorable and more terribly intelligent? Is that line crossed when the intelligence is more intelligent than us? Suddenly, we are threatened because we aren't the smartest thing in the room?

We do not have this parent-child attitude towards other humans, perhaps because we are on parallel ground (I had no involvement in the creation of the human child that was just born in China, or down the street for that matter), or perhaps because there is too much history and division between cultures and locales and races and classes. But if we were to have a super-intelligent life form that humankind created, would we look at it as us vs. them, like we look at those evil terrorists over there and they look at the evil Americans over here, or would we look at it with the love or admiration or whatever that we have for a child? Like, this is our collective baby that we created and brought into this world?

Furthermore, would America's super-intelligent life form be loved by Americans but despised by China? Or vice versa? Would the intelligences have their own race or class or cultural divisions with each other? I doubt we will ever refer to this life form as "us" or "we", because we are still separate entities. But I am not so quick to assume we will look at it in the same way we look at a chimp, or a pig, or a fish.
Would our attitude be different if we had been the creators? At what point, especially when it concerns super-intelligent life, does the relationship with that entity become more like that of a parent and child?
This is an interesting point to consider. There is some precedent, to an extent. We pretty much created the domestic canine. Dogs were forged to our liking over hundreds of years. We certainly feel a kinship with them as a species that is seemingly unique and definitely parental. But then, people like humanodon still eat them, despite us being their creators. Also, dogs don't look like us. Would we be less likely to eat dogs if they did? Would we be less likely to mistreat a robot if it looked more human? I think so.
Well, there exist/have existed cultures that practice cannibalism. Relationships are often rooted in biology and are also very much shaped by culture. What would a super-intelligent culture be like? More pragmatic? Less emotive? More brutal? Who knows . . .
It seems that ideally, we should feel solidarity with similarity of mind

Hm, this runs counter to my view of the future. I see the future as a radical differentiation of minds. From this perspective we should fall in love with difference/weirdness, etc.

Are chimpanzees and dolphins so intelligent that they shouldn't be experimented upon?

Yes :D

Should we feel less solidarity with a human that is less intelligent than a chimpanzee, than we do with the more intelligent ape?

I've never felt comfortable with this comparison. My mother works with mentally handicapped people and she once said something similar, essentially comparing the mentally challenged with the cognitive capacities of a chimpanzee. But this comparison does not make sense. A chimpanzee that is particularly intelligent doesn't become "more human" and a cognitively challenged human doesn't suddenly become "chimpanzee-like". We are completely different species with different genetic make-up, and with humans also inhabiting a symbolic order that is pretty much completely absent in the world of the chimpanzee.

Then there is solidarity of place: We decide it is better that many Iraqis die, rather than risk fewer American lives. In questions of climate change and economics, there is also solidarity with those in our time. Or mix them up: Someone doesn't buy leather goods, because they are made from animals, but buys cloth goods made by people in sweatshops.

Ya, these are great examples of flaws in modern logic.

We should be so lucky that in 2493 it is in our hands to make such judgements.

Agreed, it's hard for me to imagine biocultural humans (as we have known them) existing that far into the future.
I've never felt comfortable with this comparison. My mother works with mentally handicapped people and she once said something similar, essentially comparing the mentally challenged with the cognitive capacities of a chimpanzee. But this comparison does not make sense. A chimpanzee that is particularly intelligent doesn't become "more human" and a cognitively challenged human doesn't suddenly become "chimpanzee-like". We are completely different species with different genetic make-up, and with humans also inhabiting a symbolic order that is pretty much completely absent in the world of the chimpanzee.

I agree that there is more to it, but it is difficult to pin down. The factors that go into this kind of solidarity are a mix of intrinsic ones and those that we attribute. The comparison doesn't make perfect sense, but neither does any definition of human, or a rationale for putting non-humans on similar footing.
The comparison doesn't make perfect sense, but neither does any definition of human, or a rationale for putting non-humans on similar footing.

IMO, it is precisely this that makes us human: the flexible/dynamic symbolism that allows us to have this conversation - to construct conceptual frameworks that can be debated and critiqued over time. Chimpanzees will only ever have a solidarity of mind with their own kind, but humans can symbolise totally new solidarities, i.e. all apes deserve fundamental rights, all mammals, all organisms, etc. That is a totally open-ended, evolving symbolic process, i.e. what counts as "similarity of mind" can always be re-articulated, depending on where we are intersubjectively as a species.
Are chimpanzees and dolphins so intelligent that they shouldn't be experimented upon?

Most would say yes. When I read this, I think, "Probably best not to examine too deeply the intelligence of pigs."
I'd have guessed that link would have led to one of these (I know them from OddWorld but didn't realise they were in the 'Dune' world):
I once gave a lecture in Holland in which I suggested such a vision of benevolent silicon creatures and suggested that the word "we" might someday come to encompass them, just as it now encompasses females and males, old and young, yellow and red, black and white, gay and straight, Arabs and Jews, weak and strong, cowardly and brave, short and tall, clever and silly, and so on. The next speaker, a gentle-looking, eloquent elderly fellow - indeed, quite resembling benevolent old Einstein - responded by arguing vociferously that the mere act of trying to develop artificial intelligence was inherently dangerous and evil, and that we should never, ever let computer programs make moral judgments, no matter how complex, subtle, or autonomous the programs might be. He argued that computers, robots, whatever they might become, irrespective of their natures, must in principle be kept out of certain areas of life - that our species has an exclusive and sacred right to certain behaviors and ideas, and this right must be protected above all.

Well, to my deep astonishment, when this gentleman had finished his pronouncements, nearly the entire audience rose to its feet and clapped wildly. Dazed, I could not help but be reminded of the crudest forms of racist, sexist, and nationalist oratory. Despite its high-toned and moralistic-seeming veneer, this exhortation and the audience's knee-jerk reaction seemed to me to be nothing more than a mindless and cruel biological tribalism rearing its ugly head. And this reaction, mind you, was in the supremely cosmopolitan, anti-Fascistic, internationally-minded country of Holland! Can you imagine how my ideas would have been greeted in the Bible Belt, or in Teheran or the Vatican?

Interesting. I'd like to think that I wouldn't be one of those blindly applauding, but there's a lot of self-preservation in that applause. Nobody wants to be inferior, nobody wants to suggest that they're disposable or potentially replaceable. The future will present some seemingly unique problems, but as mentioned in the piece, maybe they're not all that unique.
The future will present some seemingly unique problems, but as mentioned in the piece, maybe they're not all that unique.

Old behaviours, new divisions. Instead of race, religion, ethnicity, it will be: are you carbon or silicon? Are you a biological human or a biological-technological hybrid cyborg? Etc.
I wonder if such overriding divisions are just artifacts of our brains' limited computational power, and whether they will survive when such limitations disappear. Think for a moment about why and where we use 'we'. I think we use it as a heuristic to represent a group who are similar to us in some sense, ignoring the various differences individual members have, and I suspect we evolved such a notion for an evolutionary advantage. Distinguishing the members or environments that allowed for the best chance of survival or propagation of one's progeny confers an advantage, and in the absence of the computational power to compute precisely how best to do that, the simple heuristic of distinguishing a group based on some similarities could serve as a substitute. This heuristic holds some value even now, because we still lack the power to compute the relationships and advantages precisely. But will it hold the same value in the future, when the required computational power becomes available to individuals? And if its value diminishes, why should some entity feel the need to deploy this heuristic?
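To make that heuristic idea concrete, here's a toy sketch in Python. It's purely illustrative and nothing from the piece; the feature names, the threshold, and the functions are all invented for the example. The idea is just that group membership gets decided by thresholding a crude similarity score over a few cheap, observable features, instead of modelling the other agent in full:

from dataclasses import dataclass, field

@dataclass
class Agent:
    # Observable surface features, e.g. {"substrate": "carbon", "language": "english"}
    features: dict = field(default_factory=dict)

def similarity(a, b):
    # Fraction of shared feature values over the features both agents expose;
    # a crude stand-in for whatever cues brains actually use.
    shared = a.features.keys() & b.features.keys()
    if not shared:
        return 0.0
    return sum(a.features[k] == b.features[k] for k in shared) / len(shared)

def in_my_we(me, other, threshold=0.5):
    # Cheap, lossy group membership: one comparison per feature,
    # rather than a full model of the other agent.
    return similarity(me, other) >= threshold

me = Agent({"substrate": "carbon", "language": "english", "era": "2010s"})
robot = Agent({"substrate": "silicon", "language": "english", "era": "2010s"})
print(in_my_we(me, robot))  # True: two of three features match (0.67 >= 0.5)

The threshold and the feature set are arbitrary, which is rather the point: an entity with enough computational power to model others individually, as speculated above, would have no need for either.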