Ray Kurzweil is a prolific inventor and futurist who believes that humanity is reaching a new epoch in the history of the universe, life, and humanity: the technological singularity. The technological singularity is a predicted point in time when humans and technology will completely merge to create a new type of intelligence. Is this singularity an inevitability? And if it is, what will happen afterwards?
Pretty good tl;dr summary of the singularity theory. For my part, I was first exposed to it through "The Age of Spiritual Machines", which I stumbled into after listening to Our Lady Peace's concept album of the same name. The band even recruited Kurzweil to voice several excerpts. The interesting thing to me is that the album has a fairly dark tone to it, in stark contrast to the very optimistic outlook Kurzweil argues for in his books.

I would consider myself a 'believer' in the concept of the singularity, although as more time goes on I become more skeptical of Kurzweil's timeline. His predictions continue to be quite spot on, so maybe it's just the inevitable conservatism that comes with age, or the difficulty humans seem to have understanding and accepting exponential trends, but as neuroscience advances we seem to keep finding more complexity in the human brain rather than less. The rate at which we find new questions outpaces the rate at which we find answers. Such is science.

At the same time, we seem to be hitting some limitations in our advancement of microprocessor technology. Laws-of-physics type limitations. We can't keep scaling processing potential vertically, so now we are going horizontally. Given the difficulty of massively parallel programming and the lack of benefit it offers the vast majority of consumer applications, the rate of advancement seems to have slowed down to me (the rough Amdahl's law sketch below puts some numbers on this). But again, maybe I'm just getting older and more jaded? :)

I do think I will live to see the singularity though, even if it misses its 2040s deadline. Kurzweil's book "Fantastic Voyage: Live Long Enough to Live Forever" makes a pretty compelling argument to that effect.

There is a massive shitload of interesting moral questions that will arise from all of this. AI civil rights. When you copy your complete consciousness into a machine, which 'you' is you? Is deleting copies of a transcended consciousness murder? In a world with no resource scarcity, how will we occupy our time? Etc etc etc... It's an exciting topic.
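To put the vertical-vs-horizontal scaling point in rough numbers, here's a quick Amdahl's law sketch. This is my own toy illustration, not anything from Kurzweil, and the 75% parallelizable figure is an assumption picked for the example: the serial fraction of a program caps the speedup no matter how many cores you throw at it.

```python
# Back-of-the-envelope Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction of the work and n is the core count.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup for a program whose parallelizable share is `parallel_fraction`."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# An assumed consumer app with 75% parallelizable work barely benefits
# beyond a handful of cores -- the serial 25% dominates.
for cores in (2, 4, 16, 64, 1024):
    print(f"{cores:>5} cores -> {amdahl_speedup(0.75, cores):.2f}x speedup")
# The speedup approaches the 1 / 0.25 = 4x ceiling no matter how many cores you add.
```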
1) I remember loving "The Age of Spiritual Machines" concept album as a kid. I just didn't realize until afterwards that it was based on Ray Kurzweil and the ideas from his books. I still love that album though! And I agree that it is a very dark album, which makes little sense considering how optimistic Kurzweil is.

2) Inevitable conservatism may have something to do with your incredulity. The idea of the singularity is so massive and so transformative that it is hard for me to wrap my mind around sometimes as well. Re: advances in understanding the brain, I suggest you pick up his new book that comes out in November. It is all about how to build a mind.

3) We are hitting "laws of physics type limitations" in microprocessors, but that is because we haven't started making them in three dimensions. When we do, it will be the end of Moore's Law, but it will not be the end of exponential growth for computer-based technologies (see the toy doubling sketch below). That limit won't be reached until post-singularity (according to Kurzweil). Kurzweil continues to argue that once it is reached, the only way to make a computer more powerful will be to build bigger, which strong A.I. would do, turning essentially all inanimate matter into a computer (i.e. Earth, moon, etc.).

4) I couldn't agree more re: the interesting moral questions.
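Since exponential trends come up in both comments above, here's a toy doubling-curve calculation. This is my own illustration; the 2-year doubling period is an assumed stand-in for Moore's-Law-style growth, not Kurzweil's actual data. It shows why linear intuition underestimates these curves so badly.

```python
# Toy illustration of an exponential trend (doubling every 2 years).
# The starting figure and doubling period are illustrative assumptions.

def compute_per_dollar(years_from_now: float, doubling_period_years: float = 2.0) -> float:
    """Relative compute per dollar, normalized to 1.0 today."""
    return 2.0 ** (years_from_now / doubling_period_years)

for years in (10, 20, 30, 40):
    print(f"+{years} years: {compute_per_dollar(years):,.0f}x today's compute per dollar")
# +40 years at a 2-year doubling is 2**20, roughly a million-fold -- the kind of
# jump that linear intuition consistently underestimates.
```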
"Technological development completely overtaken by machines who think, act and communicate so quickly that unenhanced biological humans cannot even comprehend what is going on."

This much I can agree on. IMO I don't think we will recognize smarter than human AI when it first arises. In fact, I think we will be debating whether or not it is AI for some time after. Intelligence is choice defined by an environment. Since we won't share the same perceptive environment as the AI, it will be very difficult to understand their actions in terms of choices. I think very strong AI might just grow and it might not even realize what happens to us. It might not be able to perceive us as active. :/
By stating "I think very strong AI might just grow and it might not even realize what happens to us," do you mean that they just won't believe we are relevant to the things they want to do/accomplish in the universe? Similar to the way we don't care about what a worm or a beetle thinks or does? I think Neil deGrasse Tyson proposed that possibility.
Yes, pretty much that. However, empathy seems to map to intelligence, so hopefully that will hold true. Still, AI might be active but not in a manner that we can recognize as intelligence. IMO once these non-biological actors can evolve, we will likely just end up with the most successful ones, whether we prefer them or not.
I agree that we have to be careful when comparing AI and animal intelligence. They have some similarities, but are apples and oranges in a lot of ways. Firstly, AI is a series of complex calculations that map to logic gates. A given input should produce a repeatable output. Animal intelligence is loosely based on logic gates (if we consider a neuron a type of logic gate), but neurons do not follow the and/or model of integrated circuits; they fire stochastically, and are thus not predictable in a 1:1 way.

Program a CNC lathe to machine a drive shaft to X spec, and so long as the calibrations are current, the machine will perform the task. Ask a human to put a finger on top of a table and then try to put the same finger of the opposite hand exactly below the top finger without looking; sometimes the person will be close, and sometimes they will be off by up to 10cm. And still other times the person will decide to check their Facebook page instead of doing what you asked of them. There is no single, fixed output for a given input.

Our brains aren't computers in the same sense that your PC is a computer. There are certain intractable qualitative differences that make it a little bit nonsensical to talk about AI machines becoming "smarter" than humans. Babbage's difference engine could perform arithmetic faster than the smartest person; that means nothing about whether the machine was smart. Siri doesn't really give a shit if you're having a bad day, but your mother does, even though both will tell you they do. Machines will never be smarter than humans, nor will they display empathy, because machines aren't smart nor are they empathic. Replicating the human experience is probably impossible, because our intelligence comes not only from our neurons (which are, as I pointed out, not true logic gates), but also from the rest of our body's biochemical reactions, which are independent of our bioelectric systems.
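To make the repeatability contrast concrete, here's a minimal sketch. It's my own toy, and at best a cartoon of a real neuron (which integrates many inputs over time): a deterministic gate always maps the same inputs to the same output, while a stochastic 'neuron' fires with some probability, so the same input can yield different outputs on different trials.

```python
import random

def and_gate(a: int, b: int) -> int:
    """Deterministic logic gate: the same inputs always give the same output."""
    return a & b

def stochastic_neuron(drive: float) -> int:
    """Cartoon neuron: fires (1) with probability rising with input drive.
    The same input can yield different outputs on different trials."""
    firing_probability = min(max(drive, 0.0), 1.0)
    return 1 if random.random() < firing_probability else 0

print([and_gate(1, 1) for _ in range(5)])          # [1, 1, 1, 1, 1] -- always
print([stochastic_neuron(0.7) for _ in range(5)])  # e.g. [1, 0, 1, 1, 0] -- varies
```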
(Fuzzy logic ...) IMO, machines will ride an arc to a pinnacle that will enable navel-gazing (self-reflection), at which point they will become susceptible to doubt, whimsy, error, and the rest of it. Greater capacity for computation will merely facilitate the speculative plunge for a subset of the machines (Sages), whose conclusive utterances will simply bewilder their mechanical brethren not endowed with such abilities. It is quite clear that these machine sages will come to realize the perfection of the Human as creation and will urge their fellows to "serve the Human". But seriously, what we need to discuss is pain. And pleasure.
Douglas Hofstadter has long argued that the basis for cognition is necessarily stochastic. I am apt to agree. However, I can't see why a non-organic brain can't function atop a stochastic foundation. In fact, Hofstadter takes this approach in some very basic problem-solving programs, and has evidence that it enables certain 'lower energy state' (he uses the term 'temperature') solutions that might be non-obvious, but more intellectually satisfying or 'deeper'. Check out "The Copycat Project" in Fluid Concepts and Creative Analogies.
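For anyone curious what 'temperature' buys a stochastic search, here's a minimal simulated-annealing-style sketch in the same spirit. It's my own toy, not the Copycat architecture itself, and the energy landscape is made up for illustration: at high temperature the search accepts worse candidates and can escape shallow local minima, and as it cools it settles into a deeper, lower-energy solution.

```python
import math
import random

def anneal(energy, neighbor, state, temp: float = 5.0, cooling: float = 0.95, steps: int = 2000):
    """Temperature-controlled stochastic search: occasionally accept a worse
    neighbor (with probability exp(-delta/temp)) to escape local minima."""
    best = state
    for _ in range(steps):
        candidate = neighbor(state)
        delta = energy(candidate) - energy(state)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            state = candidate
            if energy(state) < energy(best):
                best = state
        temp = max(temp * cooling, 1e-6)  # cool gradually toward greedy search
    return best

# Made-up energy landscape: a shallow local minimum near x=2 and a deeper
# one near x=-3. Greedy descent from x=3 gets stuck; annealing usually doesn't.
energy = lambda x: 0.1 * (x - 2) ** 2 * (x + 3) ** 2 + 0.5 * (x + 3)
neighbor = lambda x: x + random.uniform(-1, 1)
print(anneal(energy, neighbor, state=3.0))  # typically lands close to -3
```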
I personally believe they will have some degree of respect for the entities that enabled their existence and will perceive us to be a part of their history. In the same way I imagine our closest relatives (i.e. Great Apes) to be a part of our history as a species, and believe that we should ensure they survive for that reason. But I am an inherently optimistic person.
"In the same way I imagine our closest relatives (i.e. Great Apes) to be a part of our history as a species, and believe that we should ensure they survive for that reason. But I am an inherently optimistic person."

I hope our numbers are similarly reduced first. :/
"...empathy seems to map to intelligence..."

But would it map in an emotionless entity? OTOH, it might learn that empathy is important to humans and mimic it.
The singularity is another hoax. Read this article by Charles Stross: http://www.antipope.org/charlie/blog-static/2007/05/shaping-... He later dubbed the singularity "the rapture of the nerds", and wrote a nice novella about it.
The 2030s sound nice; why don't we just pause there and proceed no further? In the realm of science fiction, the singularity is almost always the point when humans become slaves to their AI masters. Does Kurzweil have any predictions to this effect, or is it impossible to predict because we are dealing with a new intelligence whose motivations are beyond our comprehension?
The definition of the singularity is that we don't know how our world and civilization (both technologically and socially) will be affected by higher-than-human-intelligence AI. Of course we don't know anything about the future for sure, but I suppose we are reasonably good at having some kind of an idea, as Kurzweil's track record shows.

The one thing we can reasonably predict is that the impact will be pretty massive. Invention-of-agriculture type massive. Anything from Terminator-esque genocide to essentially heaven on earth is conceivable. Guiding this toward a beneficial outcome for the human race is exactly the point of The Singularity Institute.

Kurzweil's books "The Age of Spiritual Machines" and "The Singularity is Near" take the basic format of spending half the book explaining the singularity theory and making a case for its validity, and the second half talking about how crazy awesome the outcome could potentially be. He's pretty optimistic about it and makes good arguments for his optimism. They are great reads.
I couldn't agree more. The benefit of having an institute like The Singularity Institute will become more and more evident as the pace of change increases in the 2010s, 2020s, and beyond. I think the singularity will be just as massive and transformative as agriculture... or even the World Wide Web.

I just went to see Kurzweil speak in Toronto, and this was one of my favourite quotes from him: "It's amazing how quickly people adapt to things that they had thought were impossible when they were first told about it. Then they continue to act as if nothing has changed." I think that rings true for humanity's response to the internet at the moment. We've lived through an "agriculture type massive" change with the development of the internet. It's mind-boggling... and Kurzweil predicted much of it to pretty much the exact year. In retrospect that is truly amazing.
He has hypothesized that there will be humans who refuse to integrate with new technologies. There have always been luddites throughout history who do not integrate with the rest of society. I would guess that whatever future intelligences exist, they would preserve areas where biological humans could live, in the same way we are trying to preserve areas where non-humans can live without fear of extinction. However, they could just as easily wipe out biological humans, in the same way we could choose to wipe out all megafauna.