The argument put forth is that it conclusively proves that antibodies to COVID-19 don't last very long. But when I read the study, what it actually says is that only 4% of healthcare workers had IgG antibodies. More precisely: 89% of the 1400-odd verified COVID-19 patients they tested had antibodies, while 4% of the 2800-odd unknown-COVID-status healthcare workers did.
What this seems to say to me is "most healthcare workers in the study didn't catch COVID."
Am I reading this wrong?
I think you're correct. I had to read it a few times to get the gist of it, too, and I think they are just flat wrong in the conclusions they draw from their data. Their conclusion that long-lasting antibodies don't form hinges on a big assumption in the Discussion, for which they provide no evidence: they claim (without citation, mind you) that other people have calculated a dramatic undercount of infections among healthcare workers, and that therefore only ~15 or 20% of healthcare workers who were infected actually developed antibodies. They further assume, based on that, that only severely infected people robustly develop IgG (which is responsible for long-term immunity). If I were reviewing this paper I would print a copy just so I could throw it in the trash and piss on it.

The only way we'll figure out whether long-lasting immunity builds up is to query the people who have confirmed infections. And in this study, basically all the people with confirmed infections had IgG 3 months later (assuming there's some false negative rate, the "true" positive rate is surely well over 90%, although as far as I can tell they don't report what that rate is for their test). The proper control would be confirmed COVID patients who were not hospitalized (or low-symptom if possible, though sample sizes could be tough to come by).

Anyway, I think this is a case of believing-is-seeing, not objectively assessing data. The assumptions are economist-level bad, and the conclusions are therefore very hard to believe (even if they're right, this paper sure as shit doesn't say so). It is worth noting that the Discussion is 6 pages long, whereas the Results section is 1.5 pages. That typically says the author has a point of view more so than a valid data-driven conclusion. It is also an example of The Scientist trying to generate clicks.
They know better than to link to non-peer-reviewed garbage.

"In Zhongnan Hospital of Wuhan University, 2.88% (118/4099) healthcare workers were diagnosed with COVID-19 before March 16, 2020. With a moderate estimation, the true infection rate would be ten times that had been confirmed, i.e., >25% of those healthcare providers without diagnosed COVID-19 had been infected. However, only 4% of those infected healthcare workers without confirmed COVID-19 still had IgG antibodies to SARS-CoV-2."
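Spelling out the arithmetic chain in that quoted passage makes the leap visible. A minimal sketch, using only the numbers the paper gives; the 10x multiplier is the paper's own uncited "moderate estimation":

```python
# The paper's implied chain, spelled out. Every input comes from the
# quoted passage; the 10x undercount multiplier is their assumption.

diagnosed = 118
total_hcw = 4099
confirmed_rate = diagnosed / total_hcw            # ~2.88% confirmed infected
undercount_multiplier = 10                        # their "moderate estimation"
assumed_infected_rate = confirmed_rate * undercount_multiplier  # ~28.8%
observed_igg_rate = 0.04                          # 4% of undiagnosed HCWs had IgG

# What their assumption implies: only ~1 in 7 "infected" workers kept IgG.
implied_persistence = observed_igg_rate / assumed_infected_rate

print(f"Confirmed infection rate: {confirmed_rate:.2%}")
print(f"Assumed true infection rate: {assumed_infected_rate:.1%}")
print(f"Implied IgG persistence: {implied_persistence:.1%}")
```

The ~14% persistence figure only exists if you take the 10x undercount on faith; drop that assumption and the "antibodies fade" conclusion goes with it.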
That's what I thought, thanks. I have a couple of different ulterior motives here. On the one hand, I would very much like documented proof of antibody development, and documented proof that antibody titers can be used to indicate immunity. On the other hand, I was miserable and scared for all of March and part of April and my antibody test came back negative, so my ego wants to know that I had COVID even though there's no proof I did.

It seems to me that a useful way to slice this data is to say "89.8% of confirmed COVID patients had detectable IgG." That's scary enough: it means that 10% didn't. Seems like that would be a useful place to commence further study. It also seems to me that saying "4% of healthcare providers had detectable IgG" is useful too, because that would tend to indicate that, fuckin' hell, whatever protocols they were following were like 95% effective. Seems like that would be a useful place to commence further study as well.

But saying "our actual number is 4% but our estimate is ~25% so we're going to warp our hypothesis to match our estimate" is fucking dumb.
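The two framings above reduce to simple arithmetic. A rough sketch using the thread's approximate numbers; the 95% assay sensitivity is a made-up placeholder, since the paper doesn't appear to report one:

```python
# Rough arithmetic behind the two framings, using the thread's numbers.

confirmed_igg_rate = 0.898   # 89.8% of confirmed patients had IgG
hcw_igg_rate = 0.04          # 4% of unknown-status healthcare workers did

# Framing 1: ~10% of confirmed patients had no detectable IgG.
print(f"Confirmed patients without detectable IgG: {1 - confirmed_igg_rate:.1%}")
print(f"Healthcare-worker seroprevalence: {hcw_igg_rate:.0%}")

# Framing 2: if the assay misses some true positives, the real
# seroconversion rate is higher still. Sensitivity is a placeholder;
# the paper doesn't report one.
assumed_sensitivity = 0.95
adjusted = min(confirmed_igg_rate / assumed_sensitivity, 1.0)
print(f"Sensitivity-adjusted seroconversion: {adjusted:.1%}")
```

Either framing is a starting point for further study; neither supports the paper's "antibodies don't last" headline on its own.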
mk and I used to talk a lot about how journals should discourage discussion sections to the greatest extent possible. Restate your results, put them in context, and tell us the limitations of the study. Don't pontificate or just make shit up. Speculating wildly for the sake of drumming up interest is an unfortunate side effect of the self-promotion that scientists have to engage in to get noticed, and duping journalists into covering it is too easy sometimes, because they need content, too. I don't know offhand how that 90% number compares to other diseases (or, again, how reliable their test is), so even that is difficult to judge. That's the kind of thing they should be telling us in their discussion.
I suspect there's an answer there? But I also suspect that COVID-19 is simultaneously more infectious and less harmful than we're commonly assuming it is, based on the LANL/DARPA studies. Which calls into question our entire disease model, and right now we're in a space where we cling to any new factoid like a rope from a coast guard cutter.

When I was trying to figure out how to kill COVID with UV light, I dug deep into what dicks coronaviruses are for the pork industry, and how they tend to provoke really lackluster immune responses in general, which makes them a pain in the ass to vaccinate against. Knowing that a case of COVID gives you a one-in-five chance of getting COVID again two months from now, a one-in-two chance four months from now, and no remaining immunity six months from now is data. It's shitty, dispiriting, expensive, society-changing data, but it's data. I'd like to see that data, or at least people researching that data.

I'm sure the studies are being done. I doubt the results will be heartening, and I sincerely hope they're more robust than this.