"So far, the only specs we have are the biological ones, and we don't even understand how they produce consciousness."

I might be described as a materialist, because I think we do. IMO it's impossible to separate decision-making from consciousness. IMHO human-level consciousness differs only by degrees of sophistication, but there are multiple paths from here to there. In other words, we have already achieved machine consciousness; it's just unsophisticated and very non-human. I do agree with the author that, in addition to sophistication, a 'Homo-complete' type of consciousness would require a human-complete type of learning, including our limitations.
But, "sentience" or "consciousness" isn't defined as "decision-making." By definition, we mean something else when we speak of that. It's a kind of awareness of self and others, of experience, of incorporeal processes, and of will which doesn't necessarily involve decisions at all. While it's tempting for me to ascribe willfulness to my laptop computer when it's buggy, I can still see that it possesses no traits which approach sentience. Even the decisions it makes are inane, pre-programmed, and designed to do nothing more than manipulate symbols for someone else to interpret. Computers are a medium, not a mind. We have a long way to go before we can produce an artificial intelligence, let alone an artificial consciousness, and, so far, there's little reason to believe that computing will lead to either one.IMO it's impossible to separate decision-making from consciousness.
"But, "sentience" or "consciousness" isn't defined as "decision-making.""

I think at its root, it is. IMHO biological consciousness can be observed along a spectrum that runs from a simple one-celled paramecium searching for food, to an ant, to a bird, to a chimpanzee, to us. But when I say 'decision-making' I don't necessarily mean on the level of what motivates the organism as a whole. It can also be part of the processes that distinguish a tree from the space behind it, or that recognize the face of someone you know versus a stranger. These are biological algorithms that enable us to construct a complex consciousness.

Watson beat Ken Jennings at Jeopardy. That was a display of artificial intelligence, and also a type of consciousness. Watson is artificial, but it was able to parse answers, retrieve facts, and construct the correct questions, and that made it able to win the game. Ken Jennings, although he has many other capacities, was doing a very similar thing. In fact, if you gave him some very specific neural injuries, you could demonstrate it experimentally: mess with Broca's area, and he could retrieve the information into his inner dialogue, but could no longer parse it into language (a lobotomy-like injury could make him emotionally flat). Watson's missteps seemed weird and thoughtless, but that is because it had no ability to create or draw on cultural knowledge or any type of experiential knowledge. Watson's knowledge was of a very specific type, but it was enough to win.

I don't know whether there was a stochastic element to Watson's decision-making, but some artificial intelligences do employ one. Many also employ machine learning, like a translator learning your accent. At any rate, sophisticated programs that can evolve based on input are going to be integrated together to make some dynamic artificial intelligences. Imagine 20 Watsons with a range of specialties working in parallel, but together as one entity. That's much like how our brain works. A mouse's brain and my brain are very similar at the cellular level, but at the macro level, our consciousnesses are quite different. Watson is a cricket. But soon we will start integrating many Watsons together into another type of architecture. It won't be too long before we have a Watson mouse. And then, a Watson Watson.
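To make the "many Watsons working as one entity" picture concrete, here is a minimal Python sketch. It is purely illustrative: the Specialist class and coordinate function are made up for this example, and this is not how IBM's DeepQA was actually built. It only shows the shape of the idea, that several narrow modules can each propose an answer with a confidence score, and a coordinator picks among them, optionally with a stochastic element.

```python
import random

# Toy sketch of an ensemble of narrow specialists acting as one entity.
# Illustrative only: names are invented; this is not IBM's DeepQA.

class Specialist:
    """A narrow module that only knows about one domain."""

    def __init__(self, name, knowledge):
        self.name = name
        self.knowledge = knowledge  # keyword -> (candidate answer, confidence)

    def propose(self, clue):
        """Return this specialist's best (answer, confidence) for a clue, or None."""
        best = None
        for keyword, (answer, confidence) in self.knowledge.items():
            if keyword in clue.lower() and (best is None or confidence > best[1]):
                best = (answer, confidence)
        return best


def coordinate(specialists, clue, temperature=0.0):
    """Collect candidates from every specialist and pick one answer.

    With temperature > 0 the choice has a stochastic element, sampled in
    proportion to confidence; otherwise the highest-confidence answer wins.
    """
    candidates = [c for s in specialists if (c := s.propose(clue)) is not None]
    if not candidates:
        return None
    if temperature > 0:
        weights = [conf ** (1.0 / temperature) for _, conf in candidates]
        return random.choices(candidates, weights=weights, k=1)[0][0]
    return max(candidates, key=lambda c: c[1])[0]


if __name__ == "__main__":
    panel = [
        Specialist("history", {"president": ("Who is Lincoln?", 0.9)}),
        Specialist("science", {"element": ("What is hydrogen?", 0.8)}),
    ]
    print(coordinate(panel, "This president delivered the Gettysburg Address"))
    # -> "Who is Lincoln?"
```

The point of the sketch is only that a pile of narrow, confidence-scored modules can sit behind a single interface; the interesting part is how many such modules you integrate and how they are wired together.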
I really loved this article and your conversation. I think you have both made really interesting points. However, I have started to believe that whether A.I.'s are actually conscious will be somewhat irrelevant. In the 2030s A.I. will have the capability to behave in very human ways. I agree with mk that if you had a machine like Watson that could "draw on cultural knowledge and any type of experiential knowledge" it would be hard to tell it apart from a human. Perhaps the only way you will know is that it will be far more knowledgeable than you are.

And when A.I.'s start behaving in very complex ways, we will develop relationships with them, and we may at first ask ourselves whether they are "actually conscious," but we will never get an actual answer. If the A.I. tells us it is conscious, we will just have to take its word for it - just as you have to take my word that I am conscious and have an internal subjective experience. You will never actually know that I do have an internal subjective experience - you just decide to reason that "well, I have an internal subjective experience, so I'm going to assume that everyone else does as well." This is essentially what we will end up deciding about A.I.

And mk - again, I agree with you - it will not be too long before we have a Watson Watson. Watson will be in your smart phone by 2020. Watson is already collaborating with doctors. Watson will be the world's most useful doctor in the 2020s, and he will be accessible to most people in the world. And I'm actually starting to think that Kurzweil's 2029 prediction is going to be a year or two late.
"You will never actually know that I do have an internal subjective experience - you just decide to reason that "well, I have an internal subjective experience, so I'm going to assume that everyone else does as well." This is essentially what we will end up deciding about A.I."

I agree. In fact, the more that you try to define what this internal subjective experience is, IMO, the more you are forced to conclude that there is no special essence to it. What is the internal subjective experience of an ant, mouse, or dog? We are willing to grant a mouse consciousness, but do we also grant it an internal subjective experience? How about a dog? I don't see any reason why the ISE isn't a function of the complexity of consciousness. Or more specifically, the product of an intelligence that employs symbolism to understand and operate within its environment. And how can you see other things in a symbolic way without applying the same to yourself?