Response to Tyler Cowen

If you create something with superior intelligence, that operates at faster speed, that can make copies of itself, what happens by default?
MR Tries The Safe Uncertainty Fallacy, with Tyler Cowen and Scott Alexander in the comments.
Like the top comment, I immediately took issue with not characterizing the onset of the internet as a "truly radical technical change". But I guess you could make the case that the internet arrived gradually, whereas ChatGPT and others have kinda just exploded into existence. I also thought it was funny that many of the technocrats advocating a six-month pause on AI development coincidentally seem to have found themselves invested in the wrong companies. And yeah, the author is 100% correct that China will leverage any pause in American AI development against the US. I'm still not sure how the transition to a widespread AI society can be smooth without some form of universal basic income. There aren't enough physical labor jobs to absorb everyone, domestically at least, and robotics is working to make those obsolete soon too.
This is some of the most thunderously ignorant writing I have read. Zero to Godwin in 293 words.

Worthy of note: the Chinese, the Hindus, and the Turks all discovered block printing before the Europeans, but didn't use it because their societies didn't need it. Block printing took off in Europe because (A) there weren't enough scribes to go around thanks to the Black Death and (B) there was a violent cultural revolution overthrowing the predominant religion and social hierarchy. So it's not a great place to go "Gutenberg therefore Hitler," even if you disregard the five centuries betwixt the two.

"Yeah, so I'm going to use your labor-saving device to overthrow God as we understand him." Pretty sure they had a real idea. Best guess is about a year elapsed between Gutenberg printing indulgences and Gutenberg printing a Bible in German.

Not a one of those oil barons who practiced ruthless and cutthroat business practices had a clue why they were doing it; they found it eerily coincidental that they got rich, though! This is the exact opposite of what we usually say about inventors. When we use the term "visionaries," it's not because they're clueless.

...so we're done with the printing press? We've moved on? Who's going to tell the publishing industry?

Adjusted Gross Income? Oh, you mean artificial general intelligence. Maybe you should link to that. Or at least define it. Although you did just spend several hundred words on "Gutenberg" without getting to "Bible."

navel gazing intensifies

______________________________

This whole argument is a vampires vs. zombies discussion: which fictional threat do you fear more? I guess it's fun to engage with? I guess it freaks everyone out that the computers none of us had when we were kids are about to have better UI? I guess people can't imagine a chatbot that doesn't suck, therefore anything that talks must be alive?

One of my favorite things about the pundit-sphere is that the same people quick to point out what a shitshow Southwest Airlines is because of their ancient, shitty computers think Sam Altman is building Skynet in his basement so he can crash all the airlines or some shit. It's like none of them have ever changed a printer cartridge.

In other words, virtually all of us have been living in a bubble "outside of history."
I am reminded of the advent of the printing press, after Gutenberg. Of course the press brought an immense amount of good, enabling the scientific and industrial revolutions, among many other benefits. But it also produced the writings of Lenin and Hitler, and Mao's Red Book.
The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring.
No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring.
No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly).
How well did people predict the final impacts of the printing press?
So when people predict a high degree of existential risk from AGI…
Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns. (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.)