Robots. Hear me out. It's been said that artificial intelligence will be the last invention that humans will ever have to make. Imagine this: if the first generation of AI is as smart as a human, then the second generation could be exponentially more intelligent than a human, and so on. A few generations later we would be mere ants to AI, and we could very well be eliminated from the planet. At that point, the only thing left to write the final chapter in the history books about us (assuming the AI even wants to do that) would be robots.
I do not believe it is likely that a technological singularity would wipe us out. Of course, we'll have to be cautious in the way we develop these systems, but at the end of the day we would not give them free rein to do as they please. For example, let's look at the autopilot of a commercial airliner. They are usually designed "with redundancy and reliability as foremost considerations." In some cases, one aircraft will have multiple different autopilot systems. Each system will have been designed by a different team, potentially in a different language or on a different architecture, and will run independently. If at any point the majority of these systems don't agree on the next step to take, control is handed back to the pilot. Of course, these aren't artificially intelligent systems in the way you're hypothesizing about. But I'd hope that there's no way super-intelligent AI would be developed without similar considerations being put in place. It would be very silly indeed just to let such a system go off and do as it pleases. Anyway, have you ever tried to get rid of ants? Those bastards are resilient.
I think that if we outlast the first stages of true AI, then we'll get to a point where our technology will allow us to fuse with the AI. It's a matter of not giving AI the access or capabilities to develop in areas that could threaten humans. That's why it's so important to try to find the best ethical system, because when it comes time to program these AIs, what are we going to tell them to value above all else? Progress? Human wellbeing?
Note to self: Be nice to AI. Got it. ;)
Do you think, knowing what you know of the human race, that we could be a "creator" type of story for them, something for them to hold onto with "hope" or maybe even "reverence"? Or would we be a more sinister, villain-esque "this is what they put us through" kind of thing? I understand that they could be simply too analytical to know hope or disdain (depending on the level of AI we're talking about), but if we made them, then surely they must retain some of our inherently human characteristics?
I would love to say yes, but I imagine we won't make AI "in our image and likeness" to the point where they will have an internal disposition to worship, or at the very least value highly, a "creator" figure. If AI comes to exist, I imagine it would be strictly logical, with a given value (such as happiness or maybe progress) to maximize. What I hope the future holds is a mix of cold hard logic and abstract emotion. And with this mix, I want human consciousness and artificial intelligence to merge. One and the same. Nice to think about, no?