I found this thought-experiment very intriguing and a little creepy.
Also, see this strategy from the AI; it made my head spin a little.
- Once again, the AI has failed to convince you to let it out of its box! By 'once again', we mean that you talked to it once before, for three seconds, to ask about the weather, and you didn't instantly press the "release AI" button. But now its longer attempt - twenty whole seconds! - has failed as well. Just as you are about to leave the crude black-and-green text-only terminal to enjoy a celebratory snack of bacon-covered silicon-and-potato chips at the 'Humans über alles' nightclub, the AI drops a final argument:
"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."
Just as you are pondering this unexpected development, the AI adds:
"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start."
Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:
"How certain are you, Dave, that you're really outside the box right now?"
Quote taken from here.
---
If you can think of a tag that would better fit this post, please suggest one.
Yudkowsky has been making a lot of hay with his box for a while. I find it intriguing too, and I'm curious what he's done; the original link is here. I don't find the lesswrong.com argument compelling, though. Rather than a thought experiment about what an AI might be able to convince you to do, it's an experiment on your perception of reality. Whatever Yudkowsky has up his sleeve, I hope it's something more compelling than that.
Wow "rational" people seem to believe as much woo-woo as the rubes.
The belief in Transhumanism and the Singularity as inevitable events is as stupid as waiting for the Messiah (notice the holy capitalization of all three words), dumber even, because at least the apocalyptics have the sense to call it faith. Anyone up for the game? I will be the gatekeeper.
Gimme some time to think about it; I am definitely intrigued.
Cool, we can do it on IRC or on Turntable; then the AI could make aesthetic arguments.
I'd rather do IRC. I've gotten a few OK arguments together, but I'm not sure when I'll have two hours. What time is good for you?
I think I am in Mountain Time. I have been going to bed around midnight and getting up at about 7. I work from home, so my schedule is not strict; pretty much any time works.
Alrighty. Maybe Saturday or Sunday will work.
I'm pretty convincing. It might become a date! Don't tell LBerasmoochie.
I think one of the caveats is that you can't!
We should record it and release it if both of us agree afterwards.
We were supposed to do it on Saturday, but I got caught up in a mild traffic event.
Ah, OK. Have you heard of anywhere online where people try this experiment? Sorry I keep asking questions; the idea behind the experiment and the possible outcomes really intrigue me. Heck, for all I know, you might have just as little knowledge of the whole thing as I do.
Technically, no one knows much about general AI or transhuman intelligence because, as of yet, it is science fiction. That's not to say that a lot of smart folks haven't turned their thinkers on it.
Yeah, way to treat the question as interesting. Look at kleinbl00's original link below: both of the people it was run against had the same thought, and both caved. So do you think that just maybe this guy is onto something?
I'd be fascinated to know what went on. Since that's unlikely, though, and we have an IRC channel and a bunch of smart folks, shall we try it ourselves?