"Technological development completely overtaken by machines who think, act and communicate so quickly that unenhanced biological humans cannot even comprehend what is going on."

This much I can agree on. I don't think we will recognize smarter-than-human AI when it first arises. In fact, I think we will still be debating whether or not it is AI for some time after. Intelligence is choice defined by an environment. Since we won't share the same perceptive environment as the AI, it will be very difficult to understand its actions in terms of choices. I think very strong AI might just grow, and it might not even realize what happens to us. It might not be able to perceive us as active. :/
By stating "I think very strong AI might just grow and it might not even realize what happens to us," do you mean that they just won't consider us relevant to the things they want to do/accomplish in the universe? Similar to the way we don't care about what a worm or a beetle thinks or does? I think Neil deGrasse Tyson proposed that possibility.
Yes, pretty much that. However, empathy seems to scale with intelligence, so hopefully that correlation will hold. Still, AI might be active, but not in a manner that we can recognize as intelligence. IMO, once these non-biological actors can evolve, we will likely just end up with the most successful ones, whether we prefer them or not.
I agree that we have to be careful when comparing AI and animal intelligence. They have some similarities, but are apples and oranges in a lot of ways. Firstly, AI is a series of complex calculations that map to logic gates: a given input should produce a repeatable output. Animal intelligence is loosely based on logic gates (if we consider a neuron a type of logic gate), but neurons do not follow the AND/OR model of integrated circuits; they fire stochastically, and are thus not predictable in a 1:1 way. Program a CNC lathe to machine a drive shaft to X spec, and so long as the calibrations are current, the machine will perform the task. Ask a human to put a finger on top of a table and then try to put the same finger of the opposite hand exactly below the top finger without looking; sometimes the person will be close, and sometimes they will be off by up to 10 cm. And still other times the person will decide to check their Facebook page instead of doing what you asked. There is no single, repeatable output for a given input.

Our brains aren't computers in the same sense that your PC is a computer. There are certain intractable qualitative differences that make it a little nonsensical to talk about AI machines becoming "smarter" than humans. Babbage's difference engine could perform arithmetic faster than the smartest person; that says nothing about whether the machine was smart. Siri doesn't really give a shit if you're having a bad day, but your mother does, even though both will tell you they do. Machines will never be smarter than humans, nor will they display empathy, because machines aren't smart, nor are they empathic. Replicating the human experience is probably impossible, because our intelligence comes not just from our neurons (which are, as I pointed out, not true logic gates), but also from the rest of our body's biochemical reactions, which operate independently of our bioelectric systems.
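To make the deterministic/stochastic contrast concrete, here's a minimal Python sketch. The gate function is the standard AND; the "neuron" is a toy model I made up for illustration (the names and parameters are not from any real neuroscience library): the same inputs always give the same output from the gate, while identical inputs to the neuron can produce different outputs.

```python
import random

def and_gate(a: bool, b: bool) -> bool:
    # Deterministic: the same inputs always produce the same output.
    return a and b

def stochastic_neuron(input_current: float, threshold: float = 1.0) -> bool:
    # Toy model: firing probability rises with input strength, but the
    # outcome is random rather than a fixed function of the input.
    # (Parameters are illustrative, not biologically calibrated.)
    p_fire = min(1.0, max(0.0, input_current / (threshold * 2)))
    return random.random() < p_fire

print(and_gate(True, True))                         # always True
print([stochastic_neuron(1.0) for _ in range(5)])   # e.g. [True, False, True, True, False]
```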
(Fuzzy logic ...) IMO, machines will ride an arc to a pinnacle that enables navel-gazing (self-reflection), at which point they will become susceptible to doubt, whimsy, error, and the rest of it. Greater capacity for computation will merely facilitate the speculative plunge for a subset of the machines (Sages), whose conclusive utterances will simply bewilder their mechanical brethren not endowed with such abilities. It is quite clear that these machine Sages will come to realize the perfection of the Human as creation and will urge their fellows to "serve the Human". But seriously, what we need to discuss is pain. And pleasure.
Douglas Hofstadter has long argued that the basis for cognition is necessarily stochastic. I am apt to agree. However, I can't see why a non-organic brain couldn't function atop a stochastic foundation. In fact, Hofstadter takes this approach in some very basic problem-solving programs, and has evidence that it enables certain 'lower energy state' (he uses 'temperature') solutions that might be non-obvious but are more intellectually satisfying or 'deeper'. Check out "The Copycat Project" in Fluid Concepts and Creative Analogies.
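Copycat's architecture is much richer than this, but the core mechanic of its 'temperature' resembles simulated annealing: a temperature knob controls how random the system's choices are, and it drops as the emerging answer becomes more coherent, so the search can escape shallow local answers early on and then commit. A toy sketch of that idea, assuming a made-up one-dimensional "energy" landscape where lower is 'deeper':

```python
import math, random

def energy(x: float) -> float:
    # Toy objective: lower values stand in for 'deeper' solutions.
    return (x - 3.0) ** 2 + math.sin(5 * x)

def anneal(steps: int = 10_000, t_start: float = 2.0) -> float:
    x = random.uniform(-10, 10)
    for step in range(steps):
        t = t_start * (1 - step / steps) + 1e-6   # temperature cools over time
        candidate = x + random.gauss(0, 0.5)
        delta = energy(candidate) - energy(x)
        # High temperature: happily accept worse moves (exploration).
        # Low temperature: mostly accept improvements (commitment).
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
    return x

print(anneal())  # lands near a low-energy x, often the deeper minimum
```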
I personally believe they will have some degree of respect for the entities that enabled their existence and perceive us to be a part of their history, in the same way I imagine our closest relatives (i.e. the great apes) to be a part of our history as a species, and believe that we should ensure they survive for that reason. But I am an inherently optimistic person.
I hope our numbers are similarly reduced first. :/
"...empathy seems to map to intelligence..."

But would it map in an emotionless entity? OTOH, it might learn that empathy is important to humans and mimic it.