Of course -- that's not even an issue, although I suppose the great mass of people are not quite sure what the difference is between ANI and AGI. But I take it as fact that Ben Goertzel's position is the correct one: "The creation and study of synthetic intelligences with sufficiently broad (e.g. human-level) scope and strong generalization capability, is at bottom qualitatively different from the creation and study of synthetic intelligences with significantly narrower scope and weaker generalization capability."
That said, the next 20-30 years are going to be a fabulous, difficult, and epoch-changing era. At the end of it -- assuming we have not been driven back to the actual Stone Age by a narrow-AI military arms race going awry, which it very well might, make no mistake -- AGI will be a reality, and with it the very real possibility that humans will then be history: we will be humanely killed, and it will be AGI that goes forward into undreamed-of events that we humans are not prepared for in any way.