I do not believe it is likely that a technological singularity would wipe us out. Of course, we'll have to be cautious in the way we develop these systems, but at the end of the day we would not give them free rein to do as they please. For example, look at the autopilot of a commercial airliner. They are usually designed "with redundancy and reliability as foremost considerations." In some cases, one aircraft will have multiple different autopilot systems. Each system will have been designed by a different team, potentially in a different language or on a different architecture, and will run independently. If at any time a majority of these systems don't agree on the next step to take, control is handed back to the pilot. Of course, these aren't artificially intelligent systems in the sense you're hypothesizing about, but I'd hope there's no way super-intelligent AI would be developed without similar safeguards in place. It would be very silly indeed to just let such a system go off and do as it pleases. Anyway, have you ever tried to get rid of ants? Those bastards are resilient.
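Just to make the voting idea concrete, here's a rough sketch in Python of how that kind of majority-vote arbiter might work. The function names and command strings are made up for illustration, and real avionics logic is far more involved, but the core idea is this simple:

```python
from collections import Counter

def arbitrate(commands, pilot_override):
    """Return the command agreed on by a strict majority of independent
    autopilot channels; otherwise hand control back to the pilot."""
    tally = Counter(commands)
    command, votes = tally.most_common(1)[0]
    # Require a strict majority, not just a plurality: with three
    # channels, at least two must agree before the output is trusted.
    if votes > len(commands) // 2:
        return command
    return pilot_override()

# Example: three independently developed channels, one of them faulty.
channels = ["pitch_up_2deg", "pitch_up_2deg", "pitch_down_5deg"]
print(arbitrate(channels, pilot_override=lambda: "manual_control"))
# -> "pitch_up_2deg": the faulty channel is outvoted.
```

The point is that no single system, buggy or malicious, gets to act alone, and disagreement defaults to human control rather than to any one machine's judgment.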
I think that if we outlast the first stages of true AI, then we'll get to a point where our technology will allow us to fuse with the AI. It's a matter of not giving AI the access or capabilities to develop in areas that could threaten humans. That's why it's so important to try to find the best ethical system, because when it comes time to program these AIs, what are we going to tell them to value above all else? Progress? Human well-being?