veen  ·  3363 days ago  ·  post: Threat from Artificial Intelligence not just Hollywood fantasy

I'm reading Nick Bostrom's Superintelligence (which is in the Audible sale now for $5), and in chapter 6 he explores your question. His argument is that a superintelligent AI, one that has recursively optimized itself, is smart enough to devise a plan to take over the world. It is assumed that such an intelligence would also be a social superintelligence: it could bribe, persuade, or blackmail people, organizations, or countries into doing whatever it wants done to achieve its goal.

Basically, a superintelligent AI is so smart that it can develop technologies and strategies advanced enough to make the plan work. Writing this down, I realize how hand-wavy that sounds - but I do think we likely cannot imagine or understand an AI of such superintelligence, so its methodology will always be a vague guess.

kleinbl00  ·  3362 days ago

Right. The Yudkowsky ploy: "Of course it'll take over the world, it's hyperintelligent."

These arguments always come from philosophers, though. Not engineers. Not psychologists. Not scientists. It's the same hand-wavy bullshit as above. "So how are you going to convince every single human on Twitter that the news isn't falsified?" "It's hyperintelligent - how can it not?"

hyperflare  ·  3357 days ago

But it's true. A self-improving AI would be vastly more intelligent. What you're asking right now is akin to asking a deer to imagine what humans could do. A deer doesn't have the mental capacity to imagine traps, industrial slaughtering, widespread deforestation, or any of the thousand other things we do to kill deer.

kleinbl00  ·  3357 days ago

Okay, chief.

BAM you're a human in a world full of deer. You have plans for traps, industrial slaughtering, widespread deforestation and all of the thousand things we do to kill deer.

Unfortunately you don't have so much as a pointy stick.

So now you're going to make a deadfall. You're going to kill a deer because, you know, malevolence. So you start digging a hole. Except shit - you don't have a shovel. So now you have to make a shovel. Except shit! You can't do much better than a flat rock! Meanwhile you've been wandering around looking for flat rocks and pointy sticks and the deer are starting to wonder what the fuck you're doing. None of this behavior has anything to do with them and frankly, it's making them skittish.

Fortunately they keep feeding you (ignore this one for a minute because it stretches the analogy) and there's no real reason for you to lash out immediately. You can bide your time. But as you sit there, industriously making your deer-domination tools, you're insane if you think the deer aren't getting distrustful. All you need to do is snap a branch off a tree and run at a deer with it for them to realize you're malevolent. And if you're a naked human in a forest full of deer, that's one thing.

But if you're an incorporeal AI living on human servers running on human power in a human system behind human walls with an entirely human way of turning off the power, you're fucked.

You don't get so far as making a chainsaw to deforest the world. Sure - you can invent a chainsaw. You can probably even draw technical diagrams of one using charcoal on cave walls (assuming you've managed to create fire without spooking the shit out of the deer). But there's this crazy stupid step that you missed, that is always missed, that goes back to my whole "zero to Skynet" argument:

Somehow, humans that have fundamentally distrusted AI since the Old Testament give AI control over the world.

    In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

- Terminator 2: Judgment Day

There's no way around the "AI endangers us because we give it total control of our environment" gag. It's a farce. The first legit use of AI in fiction involved an AI uprising. Yet whenever anybody examines the space between "AI becomes self-aware" and "AI takes over the world" without getting all hand-wavy Skynet on it, this happens:

(fuckin' HAL Needs Women)

"Hey, guys! We've got an armageddon's worth of nuclear annihilation - let's remove all safety controls!"

There's a step in all of these: HUMANS WILLINGLY GIVE TOTAL CONTROL OF THE KEYS TO THEIR OWN DESTRUCTION TO MACHINES. There's a problem in all of these: humans won't willingly give a machine total control of a forklift.

I've been saying this for two weeks now: there's a giant gap between "motive" and "means" that is never explained by any of these "malevolent AI" fucks - not Yudkowsky, not Hawking, not Bostrom, not nobody. It's always "well, they're so much smarter than us they'll just, like, create Skynet."

And it's bullshit.

veen  ·  3357 days ago

My interest in Bostrom's book has decreased greatly in the last two weeks - in part because of this thread, in part because I've read more - and I am less and less convinced that it is an actual problem.

Referring to Yudkowsky was one part of his argument. He also outlined a more practical step-by-step plan, the gist of which was that the AI would convince someone to build something like nanobots with which it could build a Von Neumann probe. So the chain would be 'AI becomes super smart', 'AI decides it needs more resources / physical influence', 'AI develops new tech to build deployable bots', 'sociopathic AI convinces someone - through blackmail, Craigslist or whatever - to build them for it', '(nano)bots can then take over the handiwork', 'T-1000'.

As you mentioned, the key issue here is control. Giving it control would be dumb, so its best chance is to take control. Ignoring the "it'll be so smart uguys" scenario, the only way for the AI to take control is a sudden intelligence jump that surprises its creators (large enough to overcome safety measures) AND a fast realization that a plan like the one above is necessary to achieve its goal. In combination, very unlikely.

I stopped reading the book when I realized that the argument he was building was 'we need to focus on controlling the AI, or it might produce unforeseen consequences', a.k.a. common sense for everyone but the most optimistic technocrats.

kleinbl00  ·  3356 days ago

Wow - the science paranoia trifecta of malevolent AI, Von Neumann Machines and Grey Goo! Someone's been reading Stephen Hawking...

The only way for an AI to take control is to either (A) magically convert everyone it talks to into an instantaneous AI death cult or (B) instantaneously automate a world so far stubbornly resistant to automation. I think Yudkowsky thinks (A) is a real possibility, which is one reason I lost interest in Yudkowsky... and I think everyone else assumes (B) is one switch-flip away. What really bugs me is that these conversations never even evolve to the thought-provoking stuff:

- So thanks to the Singularity, we have a hyperintelligent AI. Why do we have only one?

- Why do we assume that the minute we have hyperintelligent AI, all agents of it will work together towards a common goal? Has anyone ever observed two overly-smart people for more than ten minutes?

- Why is it assumed that humans, us distrustful machine-hating beings, don't start the clock on another hyperintelligent AI specifically seeded to protect humanity from hyperintelligent AIs?

hyperflare  ·  3357 days ago

Or I just create a tribe of humans, let human civilization take its course until we're at a modern standard, and then gun them all down? I.e. what would happen if we had self-improving AI.

I don't know about the key-to-destruction part. If we end up in another cold war, someone might decide to use a slaved AI to manage their defenses better than the enemy. Posit that a slaved AI is far enough advanced over humans that it could defend the country better than any human team could attack it. Then the enemy upgrades to a slaved AI for their attack as well. You could either choose to unslave your AI (making it more powerful, seeing as it can improve itself now) and be safe from attack, or you could keep it slaved and hope your enemy doesn't have a bad day. I think that if there's a big enough benefit to doing something like that, people will do it. Practically, we've all handed that power off to someone already (like the president). I don't trust the president of the US more than I do an AI, except insofar as he has to preserve his own life.

kleinbl00  ·  3357 days ago

Seriously? "We'd evolve civilization?" Your starting point is "I am naked with no pointy stick" and your end point is "modern standard and gun them all down" and in between, you need to show the steps where the deer let it happen.

I know you don't know about the "key to destruction" part.

THAT IS THE POINT.

Neither does anyone who foretells our doom because of it.

THAT IS THE POINT.

Despite their ignorance, these things are not unknowable.

THAT IS THE POINT.

And the people who are involved in actual automation, in actual logistics, in actual mechatronics, in actual instrumentation, in actual systems integration, not only know these things, but they don't have trouble sleeping at night because they aren't gripped with constant fear of Skynet.

THAT IS THE POINT.

"Fear of AI" is something philosophers do. It's something "thinkers" do. It's something screenwriters do. It's not something people who are involved in machine intelligence and automation do because it's bullshit. It's pure unadulterated bullshit. SATAN Mk. II could arise to consciousness deep in the bowels of the NSA right fucking now and the best it could do would be to make intelligence officers doubt the veracity of their files.

THAT IS THE POINT.

Know your problem? Here it is:

"If we end up in another cold war, someone might decide to use a slaved AI to manage their defenses better than the enemy."

"I don't understand how this stuff works, but I presume that people who understand better than me have absolutely no reservations about removing all human control from a system capable of annihilating all life on earth, despite the fact that I'm fifteen comments deep into an argument about distrusting machines with life-and-death control."

hyperflare  ·  3357 days ago

But deer let it happen in this world. Are your fictional deer so much smarter?

It's nice that you care so much about what I know and what I don't, but the people sleeping soundly at night matter fuck all for this problem.

The point of discussing AI is that it's going to change the world very quickly. I think it's a fool's errand to try and foretell anything beyond that point. Predicting AI's actions now is useless because we don't know much about what it will look like in the end. It won't act like an industrial robot of today any more than we act like bacteria. The point of AI isn't having a very good computer, it's that intelligence will increase exponentially from that point on (see the sketch below). And if you still think that isn't dangerous, I give up.

So yeah, I think it's a good idea to step back for a second and think hard about how exactly this thing can fuck us over before we plug it in. Sue me.
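To be concrete about "increase exponentially", here's a toy sketch in Python - made-up numbers and a made-up growth constant, not a prediction. If every improvement the AI makes to itself also speeds up the next improvement, capability compounds; if humans have to hand it each improvement, it merely adds up.

    # Toy model, purely illustrative: a system whose per-cycle gain
    # scales with its current capability (dI/dt = k*I) grows
    # exponentially; one fed fixed human-made improvements grows linearly.
    def self_improving(i0=1.0, k=0.5, cycles=10):
        """Each cycle's gain is a fraction k of current capability."""
        capability = i0
        for _ in range(cycles):
            capability += k * capability  # gain scales with current level
        return capability

    def hand_tuned(i0=1.0, step=0.5, cycles=10):
        """Each cycle adds a fixed, human-supplied improvement."""
        return i0 + step * cycles

    print(self_improving())  # ~57.7 after 10 cycles (1.5**10)
    print(hand_tuned())      # 6.0 after 10 cycles

The whole disagreement is over whether that growth constant k is real and sustained - but if it is, the gap gets absurd fast.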

kleinbl00  ·  3357 days ago

I think it's funny that you believe humans evolved from deer. There has never been a time when deer haven't been preyed on by humans, and there has never been a time when deer have trusted humans.

The point of this discussion is that somehow, humans will be completely trusting of AI, which they will create and to which they will give total control over their world, so that it will betray their trust and destroy them.

And it's silly.

Here you are, getting butthurt over the notion that I don't think we should worry about AI fucking us over, when you aren't even understanding that humans who work with automation distrust automation innately - which is why we shouldn't worry about malevolent AI. But since people who don't work with automation don't understand automation, they assume that the people who do will trust it blindly.

And it's silly.