ignore all social science. on the rare occasion that a hypothesis has merit, it will be very obvious.
see also: https://www.reddit.com/r/gwern/comments/deqvft/how_often_do_researchers_not_read_the_papers_they/
I think a lot of hard science is pretty bad, too. Maybe it isn't social-science bad, but I'd say the majority of papers I review in biology are pure shit. Faked or fraudulent data notwithstanding, I think this mainly comes down to one very serious problem: the lack of pre-registration.

If you want to run a clinical study, for example, you have to write down a priori the literal one thing you are going to measure. If you don't find a "significant" effect on that one pre-specified outcome measure, then your study "failed" (in the eyes of the FDA). Full stop. For anyone who wants to do work in the US, clinicaltrials.gov is the registry that must be used (and a lot of studies in other countries use it, too). Unfortunately, there is no such registry for animal or in vitro work.

What I do is manufacture experimental drugs for brain injuries, and I work with a number of other scientists who run animal disease models. Every time I initiate one of these studies with them, I say, "We should write a protocol and try to publish it, so that we have a de facto pre-registration." I have never had anyone say yes. Among the few who've even heard of doing that, it's seen as a waste of time at best.

The trouble, of course, is what a p-value actually means: roughly, the probability of seeing a result at least as extreme as yours purely by chance, assuming there is no real effect. If you consider 5% odds acceptable, as is customary, and you measure a bunch of outcomes, you're almost guaranteed to have at least one land in the "random chance" bin (see the sketch below). If, however, you've pre-specified what it is you're going to measure, the test count stays at one, so the nominal 5% error rate actually holds (i.e., what are the chances I observed X given that I was only looking for X?).

I've never been involved with any type of social science research, so I don't know what their tradition of specifying outcome measures is, but I get the sense it's pretty dismal, given that this dude can read an abstract and tell you with better-than-even odds whether the study is going to replicate.
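To make the "measure a bunch of stuff" arithmetic concrete: if every outcome is pure noise and each is tested independently at the usual 5% threshold, the chance of at least one spurious "significant" hit is 1 - 0.95^k, which is already about 64% for 20 outcomes. A minimal sketch in Python (the 20-outcome count and the independence assumption are mine, purely for illustration):

```python
# Sketch of the multiple-comparisons problem described above.
# Assumes every outcome is pure noise (no real effect) and the tests are
# independent; under the null, p-values are uniform on [0, 1].
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05          # conventional significance threshold
n_outcomes = 20       # hypothetical number of measures in an unregistered study
n_studies = 100_000   # number of simulated studies

# Closed form: P(at least one false positive) = 1 - (1 - alpha)^k
print(f"analytic:  {1 - (1 - alpha) ** n_outcomes:.3f}")  # ~0.642

# Simulation: draw one p-value per outcome, then count the studies in
# which at least one outcome crosses the threshold by chance alone.
p_values = rng.uniform(size=(n_studies, n_outcomes))
any_false_positive = (p_values < alpha).any(axis=1).mean()
print(f"simulated: {any_false_positive:.3f}")              # ~0.64
```

Pre-specifying a single outcome collapses k to 1, which is exactly why the 5% figure means what it claims to mean in a registered trial.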
I got this close to getting into academia, landing 2nd out of 50+ applicants for a cushy PhD position. The day they told me was also the day I finally concluded that academia isn't for me, largely because of the problems he's describing: a system of misaligned incentives that traps people in a crushing rat race that's more about appeasing funders and grinding out grueling overtime than about the romanticized ideal of uncovering hidden truths about the world. Didn't stop me from doing science as a hobby, though. My academic paper got accepted last Friday! It's a methodological paper in a technical field, so it avoids a lot of the traps. My incentives are different because of that: I actually want people to replicate and improve my approach, because I know it's not perfect.