Me, I'd put good money on "not accurate at all". At least, not as folks generally understand "self-aware". There have been beat-ups about "self-aware" strong AI since the ELIZA program in the 1960s. As a teen growing up I read Joseph Weizenbaum's "Computer Power and Human Reason" (the Wikipedia article doesn't do it justice). A lot of his argument was that weak AI, i.e. building intelligence along classical programming lines with codified rules and algorithms, is reasonably straightforward; but writing an algorithm to simulate understanding of a set of logic rules is quite different from strong AI, where you are trying to build something more like human consciousness. From some digging, it seems the core formalism here is the Deontic Cognitive Event Calculus, which looks very cool as a way of codifying and modelling logic, self-awareness and the like. But there's a huge gap, in my mind at least, between being able to model and simulate self-awareness, and actually being self-aware as most people would understand it.
Just wanted to point out that modern AI (at least the machine learning subfield) is not based on hard-coded logic rules. Instead, it's based on systems that observe data, automatically learn functions that model that data, and generalize to data they haven't seen. We've come a long way since ELIZA, but we're still far from strong AI.
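To make that concrete, here's a minimal sketch in plain Python. Everything in it is made up for illustration (the hidden rule y = 2x + 1, the learning rate, the iteration count); the point is just that the program is never told the rule, it estimates one from noisy samples and then predicts at an input it never saw:

```python
# Learning from data rather than hard-coding a rule: fit a line to noisy
# samples of an unknown function by gradient descent, then check that it
# generalizes to an input that wasn't in the training data.

import random

random.seed(0)

# Training data: noisy observations of some unknown process
# (secretly y = 2x + 1, but the learner is never told that).
train = [(x, 2 * x + 1 + random.gauss(0, 0.1))
         for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

w, b = 0.0, 0.0   # model parameters: learned, not coded
lr = 0.05         # learning rate

for _ in range(2000):
    for x, y in train:
        err = (w * x + b) - y   # prediction error on this sample
        w -= lr * err * x       # gradient step for the slope
        b -= lr * err           # gradient step for the intercept

# Generalization: predict at an input the model never observed.
print(f"learned model: y = {w:.2f}x + {b:.2f}")
print(f"prediction at x=3.0: {w * 3.0 + b:.2f}  (true value: 7.0)")
```

Obviously a toy next to a real deep network, but the loop is the same shape: observe data, adjust parameters to reduce error, repeat.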
If machine learning interests you: This website has a number of interactive toys that demonstrate deep neural networks. This one is my favorite. Udacity also has some fantastic free courses on machine learning. In general, if you're interested in AI, you'll have to look past the media. Books and journal papers are your best resources, and there are a few good video courses.
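And if you want a taste of what those interactive neural-network toys are doing under the hood, here's a rough numpy sketch (the layer sizes, learning rate, and iteration count are all arbitrary choices on my part, not taken from any of those sites): a tiny two-layer network learning XOR, a function no single linear rule can represent.

```python
# A tiny two-layer neural network trained on XOR with plain backpropagation.

import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)     # hidden activations
    out = sigmoid(h @ W2 + b2)   # network output

    # Backward pass: gradients of squared error through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))   # approaches [[0], [1], [1], [0]]
```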