To be honest, I'm not sure AGI would benefit from being "just like a human". I even find it a strange goal. I mean, we use all kinds of mental shortcuts and have biases that hold us back or even confuse us. As AI is used to replace or enhance human capabilities, AI should be able to interpret human communication, but it really should not reason as we do. But that is another discussion entirely. Agreed, especially if we don't prepare. Yeah, I have a notion of what can be done at the moment, but I don't know the latest developments, so you're probably right. Do you have any literature that could help me understand why you think it'll mean the end of humanity? I guess the main point I want to make is that this is not an experience humans have ever had to deal with, and it's going to be horrible.
I don't think you have any idea how awful it will be. I have spent years working on exactly that with AI and AGI: I've run an AI and robotics consulting company for a few years, after selling a company doing personalized voice recognition that I ran for 8 wonderful years, and I can tell you that AI is not well understood at all, which is really unfortunate.