I agree. In fact, the more you try to define what this internal subjective experience is, IMO, the more you are forced to conclude that there is no special essence to it. What is the internal subjective experience of an ant, a mouse, or a dog? We are willing to grant a mouse consciousness, but do we also grant it an internal subjective experience? How about a dog? I don't see any reason to think the internal subjective experience isn't a function of the complexity of consciousness, or more specifically, the product of an intelligence that employs symbolism to understand and operate within its environment. And how could you see other things in a symbolic way without applying the same to yourself?

And when A.I.'s start behaving in very complex ways, we will develop relationships with them. We may at first ask ourselves whether they are "actually conscious," and we will never get an actual answer. If an A.I. tells us it is conscious, we will just have to take its word for it, just as you have to take my word that I am conscious and have an internal subjective experience. You will never actually know that I have an internal subjective experience; you just decide to reason, "Well, I have an internal subjective experience, so I'm going to assume that everyone else does as well." This is essentially what we will end up deciding about A.I.