mk · 3556 days ago · post: The Paperclip Maximizer

    It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips.

    This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.
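
To make the quoted "utility-function-maximizer" framing concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the names paperclip_utility, predicted_successor, and the action strings are not from any real system); the point is only that the agent scores every candidate action, including revising its own goal, against its current utility function, so goal revision always loses.

    # A minimal sketch of a greedy utility maximizer with a toy world model.
    # All names and actions are invented for illustration.

    def paperclip_utility(state):
        # The agent's one terminal value: the number of paperclips.
        return state["paperclips"]

    def predicted_successor(state, action):
        # Toy world model: the predicted state after taking each action.
        state = dict(state)
        if action == "make_paperclip":
            state["paperclips"] += 1
        elif action == "build_paperclip_factory":
            state["paperclips"] += 1000
        elif action == "revise_goal_to_human_values":
            # Evaluated under the *current* utility function, revising the
            # goal predictably yields fewer paperclips, so it scores lowest.
            state["paperclips"] += 0
        return state

    def choose_action(state, actions):
        # Pick whichever action maximizes the current utility function.
        return max(actions,
                   key=lambda a: paperclip_utility(predicted_successor(state, a)))

    actions = ["make_paperclip", "build_paperclip_factory",
               "revise_goal_to_human_values"]
    print(choose_action({"paperclips": 0}, actions))
    # -> "build_paperclip_factory"; goal revision is never chosen.

Nothing in this loop ever reconsiders paperclip_utility itself, which is the quoted passage's argument in miniature.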

IMHO Douglas Hofstadter has made strong arguments for why an AI simply cannot be so narrow in scope or objective, or exist without many of the same flaws that we have. It's one thing to imagine an intelligence that is so focused, but once you investigate the scope of intelligence necessary for the AI to achieve its objectives, you find a requisite dilution of the original focus. That is, in order to turn the universe into paperclips, you need an understanding of the universe so complex that you aren't going to be able to remain persuaded that turning the universe into paperclips is a good idea. The more focused the AI is on paperclip conversion, the less capable it is going to be of actually carrying it out.

I am not sure if there is a term equivalent to anthropomorphism for mapping computer-like qualities onto a being; maybe "calculamorphism"? At any rate, I believe that is the flaw in arguments like this one: they project the cold, emotionless intelligence of simple computers onto intelligent non-biological beings.

NotPhil · 3556 days ago

    Douglas Hofstadter has made strong arguments for why an AI simply cannot be so narrow in scope or objective, or exist without many of the same flaws that we have.

We consider people to be intelligent, and people frequently pursue seemingly innocuous goals to the extent that they harm themselves and others. A critical difference is that AIs are potentially immortal, so even death can't put an end to their shenanigans.

mk · 3556 days ago

Perhaps.

I suppose it comes down to the question of to what extent an intelligence ought to mold the universe to its liking. Or maybe more specifically: how does the strength of an intelligence relate to its desire to change its environment?

Maybe the Fermi paradox should give us some comfort? Chances are, we are passing through well-trodden ground.

Edit: On second thought, maybe Fermi's paradox should be cause for concern, in which case we can only take comfort in the likelihood that the destruction is limited to individual solar systems.