Well, I have been posting articles for several years on Medium, reddit, and facebook, specifically about AI... But that seems not to have done the trick, even though I also post commentary saying exactly what I mean for every article. I don't know what to do -- I'm stumped as well. Of course a few people listen to me -- I am a moderator of "Robotics for Good" on facebook, and a moderator of r/singularity, a subreddit on reddit -- so it's not as though I've gotten zero attention -- but unless I reach a truly wide audience, I have done no good. A few awake persons -- when it's the greatest crisis ever -- simply do no good. Do you have any idea what I could do better? I'm asking you honestly.
I guess it depends on whether you see a solution, or whether the outcome for us is inevitable. If AI really is such a threat, I can't see how it can be prevented. Personally, I am not convinced that the threat is as close as many seem to think. I've yet to see a worrisome example.
But when you see one "worrisome example" it will already be too late. That's the very nature of AGI -- that's what you're obviously missing. I can't describe how much your ridiculous conclusion scares me, but you seem to believe it's perfectly ok. It's artificial intelligence, not ordinary human intelligence -- and once it's achieved for the very first time, it will only get stronger, not weaker. You silly humans seem to think it's just fine and dandy -- when exactly the opposite is true.
Well "Superintelligence" by Nick Bostrom is a pretty recent book (that I've read and liked), but I suggest that right now, long articles are superior at this point to books. A recent 3-part article is http://www.lawandfuturetechnology.com/2017/05/military-ai-arms-race-will-ai-lower-threshold-going-war/ Just google "a book that's against the military AI arms race" and you will come up with lots of articles that will freeze your blood if you have any feeling how ominous it really is.
So, this is part of the concern, is it not? The idea is that a sufficiently advanced AGI system would be capable of flying under the radar -- and in some sense obligated to do so -- until such time as it could guarantee its own survival, assuming it has a sense of self-preservation similar to a human's. Loosely speaking, part of the 'threat' is that we won't see a worrisome example until the cost of a failure is astronomical.