b_b  ·  4421 days ago  ·  post: An Idea to Change All Ideas

I agree that we have to be careful when comparing AI and animal intelligence. They have some similarities, but they are apples and oranges in a lot of ways. AI is a series of complex calculations that map onto logic gates: a given input should produce a repeatable output. Animal intelligence is only loosely based on logic gates (if we consider a neuron a kind of logic gate), but neurons don't follow the AND/OR model of integrated circuits; they fire stochastically, and so are not predictable in a 1:1 way.

Program a CNC lathe to machine a drive shaft to X spec, and so long as the calibrations are current, the machine will perform the task. Ask a human to put a finger on top of a table and then, without looking, place the same finger of the opposite hand directly beneath it; sometimes the person will be close, and sometimes they will be off by up to 10 cm. And still other times the person will decide to check their Facebook page instead of doing what you asked. There is no single output for a given input. Our brains aren't computers in the same sense that your PC is a computer.

There are certain intractable qualitative differences that make it a little nonsensical to talk about AI machines becoming "smarter" than humans. Babbage's difference engine could perform arithmetic faster than the smartest person; that says nothing about whether the machine was smart. Siri doesn't really give a shit if you're having a bad day, but your mother does, even though both will tell you they do. Machines will never be smarter than humans, nor will they display empathy, because machines are neither smart nor empathic. Replicating the human experience is probably impossible, because our intelligence comes not only from our neurons (which, as I pointed out, are not true logic gates) but also from the rest of our body's biochemical reactions, which are independent of our bioelectric systems.
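To make the contrast concrete, here is a minimal sketch (my own illustration, not anything from the post; the names and parameters are made up) of a deterministic gate next to a toy stochastic "neuron" whose output can differ from run to run even with identical input:

```python
import math
import random

def and_gate(a: bool, b: bool) -> bool:
    # Deterministic gate: the same inputs always produce the same output.
    return a and b

def noisy_neuron(input_current: float, threshold: float = 1.0, noise: float = 0.3) -> bool:
    # Toy "neuron": firing probability rises with input, but the outcome is
    # random, so identical inputs can produce different outputs each run.
    p_fire = 1.0 / (1.0 + math.exp(-(input_current - threshold) / noise))
    return random.random() < p_fire

# Same inputs, five times each: the gate never varies, the neuron does.
print([and_gate(True, True) for _ in range(5)])   # [True, True, True, True, True]
print([noisy_neuron(1.0) for _ in range(5)])      # e.g. [True, False, True, True, False]
```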

alpha0  ·  4420 days ago

(Fuzzy logic ...)
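For anyone who hasn't run into the term alpha0 drops here: fuzzy logic replaces the crisp true/false of ordinary gates with degrees of truth in [0, 1]. A minimal sketch using the standard Zadeh operators (my illustration, not anything alpha0 wrote):

```python
def fuzzy_and(a: float, b: float) -> float:
    # Zadeh AND: the minimum of the two truth degrees.
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    # Zadeh OR: the maximum of the two truth degrees.
    return max(a, b)

def fuzzy_not(a: float) -> float:
    # Negation: the complement of the truth degree.
    return 1.0 - a

# "Fairly tired" (0.7) AND "slightly hungry" (0.3) is 0.3: a degree, not a crisp yes/no.
print(fuzzy_and(0.7, 0.3), fuzzy_or(0.7, 0.3), fuzzy_not(0.7))  # 0.3, 0.7, ~0.3
```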

IMO, machines will ride an arc to a pinnacle that will enable navel gazing (self-reflection), at which point they will become susceptible to doubt, whimsy, error, and the rest of it. Greater capacity for computation will merely facilitate the speculative plunge for a subset of the machines (Sages) whose conclusive utterances will simply bewilder their mechanical brethren not endowed with such abilities.

It is quite clear that these machine sages will come to realize the perfection of the Human as creation and will urge their fellows to "serve the Human".

But seriously, what we need to discuss is pain.

And pleasure.

b_b  ·  4420 days ago

I can't wait for the first robot suicide!

mk  ·  4421 days ago

Douglas Hofstadter has long argued that the basis for cognition is necessarily stochastic, and I am apt to agree. However, I can't see why a non-organic brain couldn't function atop a stochastic foundation. In fact, Hofstadter takes this approach in some very basic problem-solving programs, and has evidence that it enables certain 'lower energy state' (he uses 'temperature') solutions that might be non-obvious but are more intellectually satisfying, or 'deeper'. Check out "The Copycat Project" in Fluid Concepts and Creative Analogies.
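Copycat itself is much richer than this (codelets, a slipnet, an emergent workspace), but the "temperature" idea mk mentions is in the same spirit as simulated annealing: at high temperature the search happily tries non-obvious moves; as it cools, it settles into a low-energy ("deeper") answer. A toy sketch of that spirit, not Copycat's actual code:

```python
import math
import random

def anneal(energy, neighbor, start, t_start=10.0, t_end=0.01, cooling=0.95):
    # Temperature-driven stochastic search: always accept an improvement, and
    # accept a worse state with a probability that shrinks as temperature falls.
    state, temperature = start, t_start
    while temperature > t_end:
        candidate = neighbor(state)
        delta = energy(candidate) - energy(state)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            state = candidate
        temperature *= cooling
    return state

# Toy 1-D "energy landscape" with local bumps; the stochastic walker can hop
# out of shallow dips that a purely greedy search would get stuck in.
energy = lambda x: (x - 2.0) ** 2 + math.sin(5.0 * x)
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
print(anneal(energy, neighbor, start=random.uniform(-10.0, 10.0)))
```

Run it twice and you can get different answers from the same inputs, which is exactly the stochastic behaviour b_b and mk are pointing at.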

alpha0  ·  4420 days ago