kleinbl00  ·  3364 days ago  ·  post: Threat from Artificial Intelligence not just Hollywood fantasy

    Uh, what did you expect? A step-by-step guide to world domination by the hyperintelligent AI I keep in my calculator?

That's exactly what I expected. That's what I asked for. Those were the bounds of the discussion, the ground rules of the thought experiment. IF: hyperintelligent AI AND: world domination is the goal THEN: how, exactly, would it be accomplished?

And your response, like all your responses above, says "I don't know, but I'm sure it would happen."

And that's my beef. It won't. Waving your hands and insisting it must be so is not the same as an actual, practical methodology for nefarious machine behavior. Here, look:

    I actually doubt that I'd be able to infect a computer in a nuclear plant with something as primitive as a USB stick, but on the off chance that it did work, it'd be easy to fake read-outs to the controllers (via the SCADA). Alternatively, you could just uncouple the turbines and watch them blow. Suddenly your power has nowhere to go and you need to initiate emergency shutdown. Even if I can't do anything, I can deceive the controllers. If I do this to all plants, it will succeed in at least a few.

Have you ever been in a powerplant? Or a large factory? Or a pump station? Or a refinery? Or any large facility of the type typically suggested for targeting? They're full of giant mechanical shut-off valves and giant hand-actuated breakers and giant fully manual safety interlocks, because there's no advantage to automation on the scale you need to cause turmoil. The attack on Natanz worked because the centrifuges there were specific devices with a specific job that can't be done without computer control. They were hacked, and they failed as a result. The plant didn't blow up, the grid didn't go dark, the T-1000 didn't stalk Tehran.
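
To make that concrete, here's a toy sketch of the difference (made-up classes, not anyone's actual plant code): the read-out lives on the network; the valve does not.

```python
# Toy model only: the read-out is networked and can be lied to;
# the valve has no network interface and only moves when someone
# standing in front of it cranks the handle.

class SteamValve:
    """A big mechanical shut-off valve. No network connection at all."""
    def __init__(self):
        self.is_open = True

    def hand_crank(self, open_it: bool):
        # The only way this state changes: a person on site turns it.
        self.is_open = open_it


class ScadaDisplay:
    """What the control room sees. This is the part an attacker can spoof."""
    def __init__(self, valve: SteamValve):
        self._valve = valve
        self._spoofed = None

    def inject_false_data(self, fake_state: bool):
        # A compromised SCADA box can fake the read-out...
        self._spoofed = fake_state

    def reported_state(self) -> bool:
        return self._spoofed if self._spoofed is not None else self._valve.is_open


valve = SteamValve()
display = ScadaDisplay(valve)

display.inject_false_data(False)   # the screen now says "valve closed"
print(display.reported_state())    # False -- the operators are being lied to
print(valve.is_open)               # True  -- the actual steel hasn't moved an inch
```

Spoof the screen all you like; until somebody walks out and turns the handle, nothing downstream changes.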

I can tell that you think you understand the fundamentals here, but you don't. "It'd be easy to fake read-outs to the controllers (via the SCADA)" is absolutely true... but if you want to do real damage, you need to co-opt this guy:

Repeat for every point you've made, frankly. "All the things, at once" compounds the problem; it doesn't make it easier. The world is not autonomous. The world is highly instrumented. Not the same thing. It's easy to think it's the same thing, because Hollywood is big on that idea.

But it's Hollywood.

The "T-1000 with a shotgun" is not a natural conclusion. It's a flight of fancy. The fear of malevolent AI is rooted in a fundamental misunderstanding of how many people actually keep your world running.

Click this link. Count the cars. That's how many people it takes to keep the poop flowing in West Seattle.

Now click this link. Count the cars. That's how many people it takes to turn oil into gas for about a million and a half people.

Now click this link. Count the cars. That's how many people it takes to keep a 500MW coal-fired powerplant running.

These aren't jobs programs. This isn't welfare. These are trained professionals keeping your toilet flushing, keeping your lights on, and keeping gas in your tank.

Keeping your malevolent AI from picking up the shotgun.

hyperflare  ·  3353 days ago

But it took only ten people to crash planes into the WTC. It's always easier to break things than it is to keep them running. And the AI can still co-opt people. After all, all these people working in your water plant will be working with computers. In fact, all these cars give me another angle of attack: modify the traffic lights in such a way as to cause massive traffic jams. Problem solved. And yes, that's possible.

kleinbl00  ·  3353 days ago

That's not an answer. Nor is it a response. Nor is it worthy of this discussion. How is your AI going to fly planes into the WTC? Are we going to give up on human pilots?

And yes - aren't you clever. Those people are working with computers. You see that GIANT FUCKING VALVE THO? Do you have any idea how many GIANT FUCKING NON-AUTOMATED THINGS there are in your life?

This is the point I've been making over and over again, and you keep coming back to "waves hands, Skynet."

It's disrespectful.

Whoop de doo. Traffic signals can be fucked with. I don't know if you, like, drive, but emergency vehicles override traffic signals. That's hardwired. Flash the relay with the code and the light turns green for them. Through-hole level circuitry, friend. No Skynet necessary. So again, your massive AI conspiracy accomplishes "annoyance" and you refuse to see it because - and we'll hammer this one home further - you don't understand how your world works.
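
If it helps, here's the shape of that preemption logic as a toy sketch (invented names, not any real controller's firmware): the preemption input is a dumb local wire, and it wins over whatever the networked side asks for.

```python
# Toy model of a signal cabinet with hardwired emergency preemption.
# Invented names and logic, for illustration only -- not real traffic firmware.

class SignalCabinet:
    def __init__(self):
        self.network_command = "RED"   # whatever the (remotely hackable) central system requests
        self.preempt_wire = False      # local, hardwired input from the preemption receiver

    def set_network_command(self, phase: str):
        # The part a remote attacker could conceivably mess with.
        self.network_command = phase

    def trigger_preemption(self):
        # The part they can't touch: a relay closed locally, inside the cabinet.
        self.preempt_wire = True

    def output_phase(self) -> str:
        # The hardwired input wins, no matter what the network says.
        return "GREEN_FOR_EMERGENCY" if self.preempt_wire else self.network_command


cabinet = SignalCabinet()
cabinet.set_network_command("RED")   # "hacked" central system holds the light red
cabinet.trigger_preemption()         # fire truck rolls up, receiver closes the relay
print(cabinet.output_phase())        # GREEN_FOR_EMERGENCY
```

That's the whole argument in fifteen lines: the override lives below the layer anything networked can reach.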

hyperflare  ·  3353 days ago

If I'm annoying you that much, you can just stop replying or something.

The WTC part was an example. You're talking about how many people and non-automated things we need to keep things running. How many people service an airplane? How many people regulate air traffic? How many people keep buildings the size of the WTC running? And yet all it took to destroy the "system" was five determined people and a vulnerable spot.

    Are we going to give up on human pilots?

You mean like we are in the process of giving up on human drivers?

    You see that GIANT FUCKING VALVE THO?

My point the entire time has been that the AI (or a garden-variety hacker, since that is the level of AI you're allowing here) could just convince one gullible person in the whole facility to turn that valve. Like, fake an email from their boss telling them to do it. I've been saying this over and over: the AI can get people to do things for it.

German emergency vehicles don't have that override. I also don't know how you propose to solve gridlock with emergency vehicles. The purpose isn't to stop emergency vehicles; it's to stop the masses of people necessary to keep things running.

In the end, I can say computers matter and you can say they don't, but let's take the long view. Computers are going to get more abundant.

I also think that, statistically speaking, an AI with its resources and intelligence and computing power will find at least one way to end the world, sooner or later. The only thing stopping humans from doing the same is either a lack of desire or the inability to amass enough power.

kleinbl00  ·  3353 days ago

You're annoying me (you're annoying the shit out of me) because you keep dodging the question:

HOW IS THE AI GOING TO TAKE OVER THE WORLD?

Your answer has been nineteen flavors of "take it on faith."

NO.

ANSWER THE QUESTION.

Everyone's response - all the people who are so hopped up on malevolent AIs taking over the world - is always "take it on faith."

NO. ANSWER THE QUESTION.

Let's take your WTC example. Yep. It took ten guys to crash planes into the WTC. But you know what? It only took one guy to say "here's how you'd crash a plane into the WTC." His name was Tom Clancy, and the book he predicted it in was a NYT bestseller. See, it's a whole lot easier to imagine the attack than to carry out the attack, yet the best imagining of an AI attack we've got so far is "well, they could hack traffic signals."

Except that wouldn't even work in the US. Which you didn't even know. Because thesis or not, you haven't really thought about this. Worse, you're refusing to.

So I guess where we're at is that you're insisting we're one Singularity away from Skynet tanks crushing skulls and I'm pointing out that we're one Singularity away from traffic jams in your hood but not mine because apparently the US takes traffic more seriously than Germany. I really don't know how many more ways you can say "use your imagination, don't make me prove a point" and I can say "you can't prove it because it can't be proven."

So I'll say this:

11 days ago, my point was:

    In popular conception, the distance between "machines that think" and "A T-1000 with a shotgun" is about 1/2 a faith-leap. The basic assumption is the minute we've achieved "artificial intelligence" (which nobody bothers to define), it will Skynet the fuck out of everything and it'll be all over for America and apple pie.

That point hasn't been changed. It hasn't even been challenged. Your counter-arguments have been getting more and more feeble. I'm really not sure what either of us gets out of continuing.

hyperflare  ·  3353 days ago

I don't think it's that easy to have a plan for ending the world. It's not something you can seriously expect someone to come up with alone in a few minutes. It's complicated. But just because it's too complicated for me doesn't mean it's impossible.

Reiterating my point from above: your 200 police cars, which can override a traffic light for a few minutes, won't help you fix 2,000 red traffic lights. You can still hack the traffic lights. You will still have traffic jams. But whatever.

The biggest danger from AI is that it accidentally destroys the world because it wasn't taught to value it. (Popular scenarios involve using the Earth as raw material for a Dyson sphere, or something like that.) Obviously those scenarios make very different assumptions about the state of the world. That's what I'm worried about. I'm not worried about the singularity happening and creating Skynet, because who the fuck would create a malevolent AI?

Again, I would like to reiterate that in order to imagine how a superintelligent AI would destroy the world, we'd have to ask one itself! Right now our discussion is just "how would hyperflare fuck up the world if he could hack anything," which is so far off the mark it's not even sad anymore.

kleinbl00  ·  3353 days ago

Dude. 55 million people lost power for half a day. The world did not come to an end. Yet you're still harping on traffic lights.

Natural disasters destroy infrastructure regularly. Bad things happen all the time. Hell - the Soviets had their entire nuclear arsenal on automatic and humans still managed to prevent nuclear war.

There's still this assumption in your thinking that AI will have the ability to destroy the world. You've done nothing to back that up. You keep waving it away with platitudes like "just because it's too complicated for me doesn't mean it's impossible."

Nobody's saying it's impossible. But "possible" is a long fucking walk from "probable" and three connecting flights and a layover from "inevitable." And your best efforts - EVERYONE'S BEST EFFORTS - have gotten us no closer than "they could produce false data."

False data doesn't end the world. It lowers productivity. "Lower productivity" isn't "the threat from artificial intelligence."

thundara  ·  3364 days ago

    I can tell that you think you understand the fundamentals here, but you don't. "It'd be easy to fake read-outs to the controllers (via the SCADA)" is absolutely true... but if you want to do real damage, you need to co-opt this guy:

Noted, IF I want world destruction THEN I should stop caring about intelligence AND shift gears to zombie-viruses.