theadvancedapes  ·  4264 days ago  ·  link  ·    ·  parent  ·  post: Help building a curriculum on Narrative and Artificial Intelligence

As someone who is thinking about re-directing my graduate career in the direction of futurology, here are my suggestions:

ANY contemporary scientific or humanities discussion on the future of humanity must include Kurzweil (whether you support him or not):

The Singularity Is Near (2005)

Transcendent Man (documentary, 2009)

If you don't want to spend too much time on Kurzweil but want a solid perspective on how the field of AI perceives Kurzweil's theories on the future, you can read this article (which I think is fair and insightful):

Goertzel, B. 2007. Human-level artificial general intelligence and the possibility of a technological singularity: A reaction to Ray Kurzweil’s The Singularity Is Near, and McDermott’s critique of Kurzweil. Artificial Intelligence, 171: 1161-1173.

To dig deeper into contemporary "futurology", it is important to discuss the research produced by the Future of Humanity Institute at Oxford. Two of its biggest thinkers, the economist Robin Hanson and the philosopher Nick Bostrom, are great places to start:

Hanson, R. 1998. Is a singularity just around the corner? Journal of Evolution and Technology, 2.

Hanson, R. 2000. Long-term growth as a sequence of exponential modes.

Hanson, R. 2001. Economic growth given machine intelligence. Journal of Artificial Intelligence Research.

Hanson, R. 2008. Economics of the singularity. IEEE Spectrum, 45: 45-50.

Hanson, R. 2008. Economics of brain emulations. In Healey, P. & Rayner, S. (eds.). Unnatural Selection – The Challenges of Engineering Tomorrow’s People: 150-158. London: EarthScan.

Nick Bostrom's ideas and perspectives can be found easily on YouTube and I would suggest this FANTASTIC Aeon Magazine piece on Bostrom and the Future of Humanity Institute for your course: http://www.aeonmagazine.com/world-views/ross-andersen-human-.../

Francis Heylighen is a hidden genius in the world of futurology. I think he is mostly hidden because he is Belgian. Either way, his work is unbelievable and necessary reading for any discussion of our future:

Heylighen, F. 2007. The global superorganism: An evolutionary-cybernetic model of the emerging network society. Social Evolution & History, 6: 57-117.

Heylighen, F. 2008. Accelerating socio-technological evolution: From ephemeralization and stigmergy to the Global Brain (Chapter 13). In Modelski, G., Devezas, T. & Thompson, W.R. (eds.). Globalization As Evolutionary Process. New York: Routledge.

Of course, it is important to also dissect and discuss the work of the man who proposed the idea of the technological singularity: Vernor Vinge. Here are some of his notable works:

Vinge, V. 1993. The coming technological singularity: How to survive in the post-human era. Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace. Vol. 1.

Vinge, V. 2007. What If the Singularity Does NOT Happen. Seminars About Long-Term Thinking, the Long Now Foundation.

Vinge, V. 2008. Signs of the singularity. IEEE Spectrum, 45: 76-82.

Also H+ Magazine's Adam Ford recently did an interview with him that is great: http://www.youtube.com/watch?v=tngUabHOea0

It seems like sci-fi is also featured prominently in this course. Therefore, I would also suggest watching the H+ series on YouTube about the merger of humans and machines: http://www.youtube.com/user/HplusDigitalSeries

I hope those suggestions are of some help. If you have any further questions or need more suggestions feel free to contact me via Hubski.

EDIT:

I almost forgot - given the nature of the course, the following two articles have immense historical value:

Ulam, S. 1958. Tribute to John von Neumann. Bulletin of the American Mathematical Society, 64: 1-49.

Good, I.J. 1965. Speculations concerning the first ultraintelligent machine. Advances in Computers, 6: 31-88.

EDIT 2:

Also, in my opinion, the biggest question that people within the fields of artificial intelligence and futurology have to answer is whether the substrate of the mind (i.e., biology versus technology; cells versus microprocessors) matters when it comes to consciousness. If the substrate matters (i.e., only biology can produce consciousness), then the future becomes very mysterious and confusing (as noted by Vinge (2007)). If the substrate does not matter (as proposed by Kurzweil in How to Create a Mind (2012)), then I would contend that we can start to discuss a) the types of AI that will exist before 2050, b) the roles they will play within our society, and c) their place within an evolutionary framework of life, intelligence, and the universe.





user-inactivated  ·  4264 days ago  ·  link  ·  

From the description above, it seems to be about pop-culture depictions of artificial intelligence, primarily focused on film. I think digging that deeply into futurology would be too much of a digression. Agreed on Vinge though, and reluctantly on Kurzweil.

JakobVirgil  ·  4263 days ago  ·  link  ·  

Oh come on, Kurzweil is pop-fiction. Futurology is certainly not a science :)

user-inactivated  ·  4263 days ago  ·  link  ·  

    Oh come on, Kurzweil is pop-fiction.

I'd call him a charlatan, but he's still influential so it's worth paying some attention to him.

    Futurology is certainly not a science

Agreed. I'm not sure that makes it a bad thing; wild speculation has been useful to science and technology in the past, and making ourselves and our world better is more palatable to me than speculating about ways to sell lots of advertising. It is a little disturbing how cult-like it can get, though.

JakobVirgil  ·  4263 days ago  ·  link  ·  

Cult-like like LessWrong, or cult-like like Heaven's Gate and Scientology?

I think a lot of futurology (and all of Kurzweil) comes from a religious impulse and has more to do with new-agery and eschatology than science. I guess it is fine enough as a hobby.

Since the class is about pop culture and AI, Kurzweil is a good fit.

user-inactivated  ·  4263 days ago  ·  link  ·  

    Cult-like like LessWrong, or cult-like like Heaven's Gate and Scientology?

    I think a lot of futurology (and all of Kurzweil) comes from a religious impulse and has more to do with new-agery and eschatology than science. I guess it is fine enough as a hobby.

Yes. Also, this. I look at these things as an outsider inclined to be hostile, but this stuff looks uncomfortably close to Scientology. And I've seen Ben Goertzel give a talk at an AAAI conference!

JakobVirgil  ·  4263 days ago  ·  link  ·  

How did that go?

user-inactivated  ·  4263 days ago  ·  link  ·  

It was on Cyc as a precursor to strong AI. It would have been a legit, if controversial, talk in 1984 when Lenat published his book, if Lenat himself had given it. I think this was 2008, and he didn't add anything to what Lenat wrote back then beyond updated jargon and a mention of OpenCyc. It was very strange, but not really nutty. I don't think it was well received, but I was a lowly undergraduate and didn't talk to many people for fear of making a fool of myself.

JakobVirgil  ·  4261 days ago  ·  link  ·  

Cyc uses a really bizarre definition of intelligence. The folks involved seem to think intelligence = knowing a bunch of stuff.

Of course, there is no good definition of intelligence, and Cyc's is still better than Kurzweil's moronic "intelligence is pattern recognition".

(I feel I left that looking a bit like an ad hominem. It is not, because of the direction of the inference: "Kurzweil believes dumb stuff" != "the stuff is dumb because Kurzweil is dumb", though it may well imply that Kurzweil is dumb because he believes dumb stuff.)

user-inactivated  ·  4261 days ago  ·  link  ·  

When Cyc started, there were a lot of successful rule-based expert systems that the AI community was really excited about, doing things like medical diagnosis and credit card fraud detection. It's a very good way to automate the sort of routine decisions experts in a particular domain make within that domain. As weak AI, that model is excellent for modeling what it models. Using the informal definition that an intelligent program is one that does what an intelligent person would do, I have no problem calling those programs intelligent.
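To make that concrete, here is a toy forward-chaining sketch in Python. The rules and fact names below are invented purely for illustration; real expert systems like MYCIN or the fraud detectors were far larger and used richer inference (certainty factors, explanations, and so on).

    # Toy rule-based expert system: forward chaining over hand-written rules.
    # Each rule says: if every condition is in the fact set, add the conclusion.
    # All rules and fact names here are hypothetical, not from any real system.
    RULES = [
        ({"fever", "cough"}, "possible_flu"),
        ({"possible_flu", "high_white_cell_count"}, "order_lab_test"),
        ({"card_used_abroad", "card_used_at_home_same_day"}, "flag_possible_fraud"),
    ]

    def forward_chain(facts):
        """Keep applying rules until no new facts can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # Example: derives "possible_flu" and then "order_lab_test".
    print(forward_chain({"fever", "cough", "high_white_cell_count"}))

The point is that all of the "expertise" lives in the hand-written rules; the inference loop itself is trivial.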

The techniques we use in AI are all just magic tricks though. They're really cool and really useful magic tricks, but they don't tell us anything about our own minds, or about how to write programs that are intelligent in the way we are. Maybe they will, eventually, but we're far from that point.

I don't think any of these guys are dumb, they've all done clever work, they're just making assertions way beyond what they're justified in asserting as scientists. When people do that with quantum physics or Gödel we call them cranks.

JakobVirgil  ·  4261 days ago  ·  link  ·  

I stand corrected, but Ray's confusion of strong and weak AI does not point at genius. He is quite clever at OCR; it is a bit of an "if you have a hammer, everything looks like a nail" issue. I see a future chock full of weak AI and completely devoid of artificial humans.

This sort of thing comes up all the time: people clever at one thing (usually medical doctors) become horrible cranks at another (in my experience, evolutionary biology, human origins, etc.).

If I see M.D. next to an author of a paper in anthro, my eyes tend to roll, unless of course it is medical anthropology.

theadvancedapes  ·  4264 days ago  ·  link  ·  

That's fair. Then I will recommend Love and Sex with Robots.