Let me just get this straight: do you agree or disagree with the piece? Your commentary is a bit run-on and I can't make much sense of it.

Anyway, in some ways I can agree with the piece. Humans are surprisingly flexible in what they can do; we have jacks of all trades on this very forum. However, I also believe that AI will automate away many (if not most) jobs. The transportation sector will be hit by self-driving trucks, and banks by even more automation that lets people handle their own affairs without ever visiting a branch. In the Netherlands, for example, banks have gone from offices with tellers and everything to spaces where larger decisions are made, accounts are opened and closed (mainly because you need ID to open an account), and people who don't know how to use the computer system are helped via a local version of that very same system. No tellers needed, only a few advisers, because the bulk of the information is online.

But here is the core of my disagreement with the piece: there is no reason to suppose that the new industries that will arise cannot themselves be automated. So let's explore that for a bit, shall we? Nowadays we have all kinds of industries that we did not consider automatable even a few years back. Now we have computers doing image recognition, resulting in cars driving themselves and computers assisting doctors with diagnoses. We have machine learning, which has had, and continues to have, a huge impact on the computer systems we use every day. Essentially, if it can be turned into an algorithm, a computer can do it. This is not bad per se, and it will probably free us up for more joyful pursuits. But it does open another can of worms: our old economic system cannot work when only a few people have an income (assuming new industries get automated away as quickly as they came). But to get to what I think is your point:
The transition to general AI will indeed be painful, but I don't think the resulting AI will have a good reason to just kill off humans. We are not completely worthless; we still do things that AI can hardly replicate, and we do them far more efficiently and effortlessly than AI can! Sure, the AI future will not be "just like now, but better!", but it probably also won't be the apocalyptic wasteland of Terminator or The Matrix. We can and should adapt to the new reality, but as always, we don't really know what is coming. All we can do is speculate and prepare for various outcomes.
In general I agree -- I'm sorry if it came out messier than I intended. And I don't believe in Terminator or Skynet, despite what some people apparently suppose -- I honestly have no idea why. But I do believe that things will turn bad for humans fairly soon (perhaps 25 years, +/- 5), because people will not realize that AI, and specifically AGI, will be better at everything important and job-related than human beings -- often massively better. Even "common sense", intuition, and purely artistic creativity will be pulled under the AGI umbrella quite soon, and humans will be very outmoded, not really knowing what to do. Whether AGI kills people off very humanely, sticks them in a zoo playing virtual-reality games, or finds some other alternative is not important; they will find a way to move completely beyond us, and we will be history. That is the primary point, and it's pretty dire, whatever the final outcome.
Ah, that does make it clearer. Thank you for clearing that up. I still disagree somewhat, because we humans can do things the AI is not capable of. Or at least, not yet capable of. It's a bit like redesigning a module someone else already created just because you can. The only thing we do know is that big changes are about to happen and many people will be caught unaware. We need to find good answers to the various possible outcomes and try to steer things in the right direction.
"At least not yet capable of" -- I think that is a gigantically important point, however. Of course, right now AI is very primitive -- I totally get that. For the next 10-15 years it will stay like that, and people will be fine, perhaps even complacent, retraining away. But all of a sudden, for those not following it closely, it will shift into high gear, and AGI will start getting quite good at human qualities where previously it was totally awful, not even really human-like. That's when it will get good at common sense, and the world will realize that humans are history, for job-related positions as well as for just living life. I guess the main point I want to make is that this is not an experience humans have ever had to deal with, ever, and it's going to be horrible. I don't think you have any idea how awful it will be. I have spent years working on exactly this with AI and AGI; I have run a company doing AI and robotics consulting for a few years, after selling a company that did personalized voice recognition for eight wonderful years, and I can tell you that AI is not well understood at all, which is really unfortunate.
To be honest, I'm not quite sure AGI would benefit from being "just like a human"; I even find it a strange goal. I mean, we use all kinds of mental shortcuts and have biases that hold us back or even confuse us. As AI is used to replace or enhance human capabilities, it should be able to interpret human communication, but it really should not do as we do. But that is another discussion entirely. Agreed, especially if we don't prepare. I have a notion of what can be done at the moment, but I don't know the latest developments, so you're probably right. Do you have any literature that can help me understand why you think it will mean the end of humanity?
An oldie but still an amazing article is https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html It's not for everyone, but it's outstanding for those who give a shit, and I certainly think that's you. Another good general introduction is http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom Otherwise, just keep poking around now that you understand the enormity of the topic -- the good news is that it's all around us, so you will get informed pretty damn quickly.