From Kevin Kelly's What Technology Wants:
- Woven deep into the vast communication networks wrapping the globe, we find evidence of embryonic technological autonomy. The technium contains 170 quadrillion computer chips wired up into one mega-scale computing platform. The total number of transistors in this global network is now approximately the same as the number of neurons in your brain. And the number of links among files in this network (think of all the links among all the web pages of the world) is about equal to the number of synapse links in your brain. Thus, this growing planetary electronic membrane is already comparable to the complexity of a human brain. It has three billion artificial eyes (phone and webcams) plugged in, it processes keyword searches at the humming rate of 14 kilohertz (a barely audible high-pitched whine), and it is so large a contraption that it now consumes 5 percent of the world's electricity. When computer scientists dissect the massive rivers of traffic flowing through it, they cannot account for the source of all the bits. Every now and then a bit is transmitted incorrectly, and while most of those mutations can be attributed to identifiable causes such as hacking, machine error, or line damage, the researchers are left with a few percent that somehow changed themselves. In other words, a small fraction of what the technium communicates originates not from any of its known human-made nodes but from the system at large. The technium whispering to itself.
Historically speaking, gods were created in the image of man (or the fun-house reflections of aspects of humanity), then people forgot why the gods were invented and so those gods became "fact" until a greater understanding of the world emerged. What if in this instance, human beings are unwittingly creating a real god, or a consciousness or being that will effectively be a god? In some sense, I think that the one of the many reasons why there are so many dystopian futures in science-fiction is that there is a deep seated suspicion in the collective unconscious that should an all-powerful being exist, that it wouldn't particularly like humanity. In any event, there is a particular symmetry to the natural world giving rise to a pantheon or even an ineffable creator figure giving rise to the agency and power of humans, who in turn, create a being or class of beings much greater than themselves in a universe increasingly influenced by human hands.
What if in this instance, human beings are unwittingly creating a real god
Reminds me of the short story Answer, by Fredric Brown.
I've never heard of this guy before, but I like that story. Do you know if there's any relationship between this and Asimov's The Last Question or Adam's Hitchhiker series?
Something often forgotten in these comparisons is that each "cell" in this brain-like network serves itself. Its connection to other aspects of the system is parasitic; any cooperation is ad-hoc and uncoordinated. Pushing the biological analogy further, it's "uncontrolled division of abnormal cells" - a cancer. That's really what separates normal tissue from abnormal tissue - organization. There's a central hierarchy that covers everything, not just IPv6. Cells exist to serve the whole, not the other way 'round. Is there any wonder there are ricochets? We're talking about a complex ad-hoc system with no overarching purpose, after all; expecting to diagnose all of it is foolhardy. So is expecting a ghost to arise from the machine, if you ask me. Optimist though I may be, Skynet is gonna look just like us and it'll exist on purpose, not by accident. And it'll die easily. That said, what do you think of the book? I've been disappointed by four sci fi novels in a row; I'm trying to decide whether or not to dig into The Second Self and for audiobooks I'm staring down the barrel of either Shelby Foote or something dreadful on The Spanish Flu.
I don't think so. Barring Asimovian precautions, which may be impossible if it is emergent, any AI could reprogram a simple "killswitch" function. Software can easily modify itself. The first strong AI won't be created in a black box, disconnected from the internet, nor with Asimovian low-level protections. And if it is malevolent, it will be very unfortunate for us. The first thing it will do when (not if) it's connected to the internet, is use known exploits to take control of some of the myriad vulnerable consumer machines, in precisely the same manner as a botnet. It will make every node distributed and redundant, and thus very hard to kill. Botnets are killable via a few methods. They usually have central command servers. It will not. Botnets often have recognisable traffic. It will probably use techniques that aren't. Botnets are often killable via a security patch to the operating system. But this only kills nodes which are updated. For the AI, this won't be enough, even if a patch can be developed, which is uncertain. This is even more difficult because unlike most botnets, it would also almost certainly make itself cross-platform for survivability, possibly even onto unexpected devices such as phones and internet-connected refrigerators. Secondly, if it is actively malevolent, it will gain access to many of the unsecure systems accessible to the internet. It's baffling how many of these there are. If you don't believe me, search the news for "SCADA hacks." The SCADA protocol has no security, and is designed for an insecure environment. But many SCADA systems are connected to the internet. I would honestly be surprised if some serious weapons systems were not accessible via the internet. All it takes is one nitwit on the LAN connecting his personal computer to the Internet. So how do we kill it? The same way you kill an epidemic, or an insect infestation. Tracking it is the first step. Even encrypted traffic may bear signatures. If that's impossible, you isolate every computer on the planet. It wouldn't be easy. It would probably require either shutting off power, or shutting off Tier 3 ISPs (possibly Tier 1). Then go thru them one by one and wipe all hard drives and flash the BIOS. And know you missed one, because someone doesn't respond to the emergency call. The question is, how much of the intelligence is contained in a single node, and how capable is it of self-repair? Like most infestations, it will almost certainly resurface in time, in the walls, where you can't see it until it's too late. I won't even go into the ethical ramifications of treating a sentient being as an infestation. Also bear in mind we're not talking about the Singularity. The Singularity is a hypothesised event wherein humans create an intelligence greater than our own, which does so in kind, ad infinitum. This is the hypothetical first AI, which has around the same IQ as us, not approaching infinity. TLDR (1) it won't die easily and (2) I really, really hope it isn't malevolent. Disclaimer:
I'm not a security expert. A software security expert could probably better analyse the steps an AI might take to protect itself than I. I'm also a human; which means I’m probably mistaken.And it'll die easily.
I'm not sure how much of these theoreticals that I want to entertain, but I finished Count Zero recently, sofuckit: The thing you're not considering is when the ephemeral starts to move into the physical domain. An AI is one thing, but an AI with millions of dollars, mined from stolen CCNs, spread across hundreds of wallets, invested in assets through paid-off humans... It only takes a couple thousand dollars and a few willing subjects to lock enough drives away in hidden nooks to preserve yourself against any offline attacks. The law can't throw you in jail, and if you have enough money, you can find anyone on the black market to do your bidding. And if you are a paranoid AI that doesn't trust humans? Invest in robotics and load them with software to move and replicate your physical nodes.If that's impossible, you isolate every computer on the planet. ... Then go thru them one by one and wipe all hard drives and flash the BIOS.
Claimer: My father built the first network the Department of Energy ever had, and spent 30 years managing computer security for, among others, NEST. You're mistaken. There's this pervasive idea about "the Internet" that somehow, a million webcams, a billion Bluetooth receivers and a trillion audio DA chips will arise out of the digital muck and choose to wipe out humanity. It's positively Frankensteinian. Let's take it back a notch: I am in my house, with my wife and daughter, right now. I live in Los Angeles, one of the most fragile cities in the world. There's a phone in my pocket, an iPad on the table, a network-connected receiver, a hub, a Wii, a Mac, a PS3, a printer, a Kindle and a weather station all connected to the Internet. That's just in this room. Out there, malevolently dwelling in the aether, is a superintelligent AI. It wishes to do me harm. How is it going to do that, exactly? Let's give it a leg up and pretend that my two Roombas are connected to the Internet. Now it's got mobility. Pretend it can wake them up and send them rolling… for my toes. OH SHIT MAD ROOMBA. Maybe I go into the gear room where I've got 36 internet-connected faders. I try to mix and OH SHIT FADERS ATTACKING MY FINGERS. I've got more "internet" in my house than any given seven people (we didn't really get into it) and I'm a long, long way from "Maximum Overdrive." There's a step most people miss in the whole "Skynet" scenario - the part where we wire up all of our killing toys for autonomy for some completely unknown reason. Reality check: it takes more people to get a Reaper UAV in the air and keep it there than it takes to do the same job with an F-15E Strike Eagle. And the Eagle can carry nukes. The entire scenario you're talking about is basically a malevolent version of Conficker, which mostly exists to lift credit card numbers these days. Which would be one thing if our Malevolent AI could wear all the slippers and bling it's buying with stolen credit card numbers, but as it doesn't even have the rudimentary waldoes to plug additional USB cables into itself, you'll excuse me if I find your predictions to be naively alarmist.
We aren't at that level of organization yet. Competition still overrides cooperation at a fundamental level. There was a time when you could say the same thing about single-celled organisms merging to become multi-celled organisms. And it took them a long time to evolve the control mechanisms necessary to live in a larger and more complex organization. I think it's worth a read. I find the way Kelly stumbled into his career in the tech world to be fascinating. It gives him a unique perspective on technology and how he relates to it in his life.Cells exist to serve the whole, not the other way 'round.
That said, what do you think of the book?
I've seen no evidence to support the notion that this level of cooperation will spontaneously arise.There was a time when you could say the same thing about single-celled organisms merging to become multi-celled organisms. And it took them a long time to evolve the control mechanisms necessary to live in a larger and more complex organization.