Sigh. Hypothesis testing just never seems to be a concept these social psychology types grasp (and of course I say that as someone who almost never reads anything in that realm beyond whatever makes the splashy headlines, so shame on me). If you're testing the hypothesis that Disney princess exposure is correlated with negative gender stereotypes, and you find that not to be the case for this cohort, with these limitations, then that's the most you can say about it. Period. End of story. No exceptions. If you find that the data hint at the exact opposite of your hypothesis, then you say, "Hmm, this is potentially interesting; can we phrase a question and power a new study to test this new hypothesis?" Anything else is statistical cheating, and scientifically it's worthless garbage.

Part of me regrets not studying statistics more deeply. The trouble is that it's a really, really, really boring discipline, so it's hard to get excited about the coursework. It's so important to understand, though, if you want to say anything worthwhile about populations. Newspapers should have statistics consultants on speed dial.
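
For the curious, "power a study" isn't hand-waving; it's arithmetic you do before collecting data. Here's a minimal back-of-the-envelope sketch in Python, where the correlation of 0.2, the 0.05 significance level, and the 80% power are all illustrative numbers I picked, not anything taken from the actual study:

```python
import numpy as np
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size needed to detect a correlation of size r
    in a two-sided test, using the Fisher z-transformation."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to the desired power
    fisher_z = np.arctanh(r)            # Fisher z-transform of the hypothesized correlation
    return int(np.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3))

# A "small" correlation of 0.2 needs roughly 194 participants to detect
# with 80% power at alpha = 0.05.
print(n_for_correlation(0.2))
```

The point is that the new, opposite-direction hypothesis gets its own pre-specified effect size and its own sample-size calculation before anyone collects new data; you don't get to recycle the surprise from the first dataset as if it were confirmation.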