But by the time you see one "worrisome example," it will already be too late. That's the very nature of AGI -- that's what you're obviously missing. I can't describe how much your ridiculous finding scares me, but you seem to believe it's perfectly fine. It's artificial intelligence, not ordinary human intelligence -- and once it's achieved for the very first time, it will only get stronger, never weaker. You silly humans seem to think it's all fine and dandy -- when exactly the opposite is true.
Well "Superintelligence" by Nick Bostrom is a pretty recent book (that I've read and liked), but I suggest that right now, long articles are superior at this point to books. A recent 3-part article is http://www.lawandfuturetechnology.com/2017/05/military-ai-arms-race-will-ai-lower-threshold-going-war/ Just google "a book that's against the military AI arms race" and you will come up with lots of articles that will freeze your blood if you have any feeling how ominous it really is.