The quite hilarious thing is that Quartz (or anybody else, for that matter) is NEVER going to say "Let's be afraid of AI" -- they are always going to say some version of "Don't be afraid of AI, be ready" or some claptrap like that. Only I will be brutally honest with you -- it's the greatest crisis ever, a true apocalypse, and you better damn well be afraid! But sadly, you've gotten so used to claptrap media and ridiculous promises that you don't even realize they are happening now -- you're immune to them, and you just assume that everything will be ok. NO IT WON'T. Better buckle your chinstrap, because it's going to be hell before the end, the greatest nightmare in all of history. And 99.9% of you are not prepared at all, just waiting for the next iPhone with gleeful anticipation. Thankfully, a few out there are listening very intently. They realize the global big picture -- the next race will be wonderfully artificial and fabulously intelligent and creative and gifted beyond any measure, and you illogical, biological, incredibly wasteful human beings are not wanted. It's happening right now, all around you, and you do not care, secure in the so-called knowledge that you will be needed and wanted anyway for utterly insane reasons, when actually the exact opposite is true.
For such batshit ideas, your writing is so smooth that I've chosen to believe you're a kick-ass troll.
I concur with ideasware in resenting the phrase "attuned to Hubski." There shouldn't be a unanimous voice here. That said, that's not what I'm getting at. I think some of ideasware's ideas are far from rational, in the tone of "we're all going to die if we don't start a world-collapsing revolution." Just look at some of his other posts. It's difficult to take seriously.
I'm very curious, what is "attuned to Hubski"? I'm really wondering what I've missed. I seem to be the only one who's in the dark... But if you say "well, you really don't get how it's critical to go back and forth without rancor", I have to disagree... I think I've been very tolerant, but nonetheless I want to express my point with force and vigor, and have a good debate, and hopefully convince some other people. Is that so screwy?
I can't answer for the site, and I can't answer for most people who post. I can only answer for myself. If we follow the analogy of 'Hubski is a bar', I think that you're still somewhat of a stranger. Yes, I know you have worked in tech for a long time. Yes, resume stuff is important, but in and of itself it only gives me a stereotypical sense of who you are. I'm more interested in the things that set you apart.

Crisis-posting/fear-posting typically isn't something I seek out anymore. I am learning that the ways we choose to engage with information affect the tone of discussion and the way we process and digest that information, and Chicken-Little-esque framings no longer interest me in any active way. I am perfectly happy to talk about different potential development paths for A.(G.)I., potential risks and benefits, current strategies to develop machine learning in functional ways, and related topics. I'm not really going to chime in with more 'Shit's fucked and here's why, let's sit and agree vigorously about it.'

Let's try to pursue the discussion about responsible development and application of technology. Less Juicero; more solar panels and easier access to education for destitute people, things like that. And we can also have the discussion about machine learning and the ethical and unethical collection and application of data (Big or otherwise). There are definitely existential risks that we need to be taking more seriously, and in order to get to a point where serious action is taken, we have to steer the Overton window so that this kind of thing is part of the realm of discourse. One can try to make radical shifts, usually facing lots of pushback, or else one can exert slow, steady pressure and maybe see some progress.
Very strange, honestly. I have NEVER said the stuff that apparently you believe I said. "Shit's fucked and here's why, let's sit and agree vigorously about it" -- that is the exact wrong thing to say, and I will say so 10 times out of 10. I am basically on the same page as you -- let's deliberate reasonably, and come to a rational conclusion. So -- I don't quite know what to say. I think you have been misinformed, pretty badly I'll say. If and when you disagree with me, I will try to change your mind, without rancor or bitterness. If you agree with me, so much the better -- then we're on the same page, and can try to lead to action.
You didn't use the words I used, but let's be honest and agree that your headlines and tone of discussion are somewhat apocalyptic. This orientation/tone makes discussion difficult and unpleasant.
Having a good debate isn't screwy. That's not what's going on here. I think it's the tone, coupled with the actual statements you make, that makes you look so screwy. Your argument here is that there is an "us" and a "them" in the world: "us" apparently being gifted with the knowledge that "a true apocalypse" is being covered up by false promises from the media, and "them" (everyone else) being slaves to the media (or the government, or rich people, or whatever you feel like throwing out in that moment). What makes me dismiss your ideas as nonsense immediately is the easily identifiable fuckery present when someone thinks they are part of a minority with the key to enlightenment -- and that everyone else is stupid. That's a terrible way to go about things. I went through that phase in 6th grade when I discovered atheism. I used to blast paragraphs and paragraphs of bullshit nonsensical "philosophy" on Facebook and more or less provoke people to the point of isolating myself as a total nut. And I was one. Don't be that guy. I wish I cared enough to put in a better effort to explain the problem here eloquently.
My goodness. I'm 56, a programmer at heart, and quite an excellent one at one time too. For 20 years I've been either a CTO or a CEO. For 8 years I was CEO of MeMeMe -- a voice recognition service in the cloud that focused on each individual, making it work far better for their individual voice. I raised $3 million all by myself, got Costco and Lowe's and buy.com and many other large customers nationwide to implement it for mobile devices (both Android and iPhone), and sold it for quite a bit more. Now I'm the CEO of ideasware, and have been completely into AI, robotics, and nanotechnology for several years. I am not a wacko or a nutjob, and the fact that you claim that for completely unfounded reasons is a genuine mystery to me.
I know I'm being a giant dick in the way I'm trying to explain this to you. I promise, you sound batshit, because of the claims you make and the tone in which you write. I'm retreating from this argument because I think it's stupid. Please read this; it's about the logical fallacy of appealing to accomplishment, and it shows why everything you just said is irrelevant to the discussion.
Well, I have been posting articles specifically about AI for several years on Medium and Reddit and Facebook... But that seems not to have done the trick, even though I also post commentary saying exactly what I mean with every article. I don't know what to do -- I'm stumped as well. Of course a few people listen to me -- I am a moderator of "Robotics for Good" on Facebook, and a moderator of r/singularity on Reddit -- so it's not as though I've gotten zero attention -- but unless I reach a truly wide audience, I have done no good. A few awake persons -- when it's the greatest crisis ever -- does simply zero good. Do you have any idea what I could do better? I'm asking you honestly.
I guess it depends on whether you see a solution, or if the outcome for us is inevitable. If AI is in actuality such a threat, I can't see how it can be prevented. Personally, I am not convinced that the threat is as close as many seem to think. I've yet to see a worrisome example.
But when you see one "worrisome example" it will already be too late. That's the very nature of AGI -- that's what you're obviously missing. I can't describe how much I am scared by your ridiculous conclusion, yet you seem to believe it's perfectly ok. It's artificial intelligence, not ordinary human intelligence -- and once it's achieved for the very first time, it will only get stronger, not weaker. You silly humans seem to think it's just fine and dandy -- when exactly the opposite is true.
Well, "Superintelligence" by Nick Bostrom is a fairly recent book (which I've read and liked), but I suggest that right now, long articles are superior to books. A recent 3-part article is http://www.lawandfuturetechnology.com/2017/05/military-ai-arms-race-will-ai-lower-threshold-going-war/ Just google "military AI arms race" and you will come up with lots of articles that will freeze your blood, if you have any feeling for how ominous it really is.
So, this is part of the concern, is it not? The idea being that a sufficiently advanced AGI system would be capable of, and in some senses obligated to, flying under the radar until such time as it could guarantee its own survival, assuming it has a sense of self-preservation similar to a human's. Loosely speaking, part of the 'threat' is that we won't see a worrisome example until the cost of a failure is astronomical.