While I don't think a Git-Wiki style approach would work all that well, I do think the current model is outdated and rather inefficient. What would be a better way to connect scientific knowledge?
There's a bit in here to dissect. I don't agree with much of what the author says, even though I agree with his goals.

Lazy counter: software is engineered from first principles, while scientific knowledge is discovered and refined in ways that are usually surprising. Software is built to do one thing (hopefully); science aims to describe all of the intricacies of a phenomenon. Its knowledge bases are often redundant, with many experiments confirming the same major findings, but also incomplete, since it's hard to get grants for filling in every detail about something.

This is false or true depending on the journal. Corrections usually lead to the paper's main text being updated, with an extra page at the end noting the differences. Studies usually run an experiment to replicate what is in the literature if their own conclusions depend on it. Nevertheless, this is a real concern, especially among clinical trials. There are systematic reviews, which should filter retracted studies out, but they don't get re-filtered after the systematic review has been published.

What the author suggests, though, are annotations of the form "my study would be false if this other study were false," which is (1) complex and full of many-to-many relationships, (2) hard to get scientists to do across the board, (3) hard to do logically, and (4) not something anyone thinks about when writing their paper the first time around.

This is another half-true claim: corrections, revisions, and comments are all posted, just not in the same neat, structured form as git. The author presents the git metaphors as solutions for science, but I sincerely doubt that having a million forks of each paper would solve any of the current challenges facing science. It's like arguing that having every paper in LaTeX would solve the problem of the public-academia divide. It just reformulates the same issues in a new language.
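To make the dependency-annotation idea concrete: in principle it amounts to a directed graph of papers, with retraction status propagated to everything downstream. A minimal sketch, assuming a toy in-memory graph; all names and structure here are my own illustration, not anything the author or any existing service specifies:

```python
# Hypothetical sketch: propagate a retraction through a citation-dependency graph.
# A "depends_on" edge means "my conclusions would be false if the cited study were false".
from collections import deque

def affected_by_retraction(depends_on: dict, retracted: set) -> set:
    """Return every paper whose conclusions are transitively undermined."""
    # Invert the graph: cited paper -> set of papers that depend on it.
    dependents = {}
    for paper, cites in depends_on.items():
        for cited in cites:
            dependents.setdefault(cited, set()).add(paper)
    # Breadth-first propagation outward from the retracted papers.
    flagged, queue = set(retracted), deque(retracted)
    while queue:
        current = queue.popleft()
        for dep in dependents.get(current, ()):
            if dep not in flagged:
                flagged.add(dep)
                queue.append(dep)
    return flagged - retracted  # only the downstream casualties

# Toy example: C depends on B, B depends on A; retracting A flags B and C.
graph = {"B": {"A"}, "C": {"B"}, "D": set()}
print(sorted(affected_by_retraction(graph, {"A"})))  # ['B', 'C']
```

The traversal itself is trivial; the hard parts the comment points at, (1) and (2) above, all live in building `depends_on` in the first place, which requires authors to declare, accurately and retroactively, which citations are load-bearing.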
Make no mistake, I'm all for open, reproducible science; I just don't think this guy is thinking about it from the right angle.

Just as the software industry has moved from a "waterfall" process to an "agile" process (from monolithic releases shipped from warehouses of mass-produced disks to over-the-air differential updates), so must academic publishing move from its current read-only model and embrace a process as dynamic, up-to-date, and collaborative as science itself.
It is typical, for example, that even when the journal editors and the authors fully retract a paper, the paper continues to be available at the journal's website, amazingly, without any indication that a retraction exists elsewhere.
A subtler question is how ("caveat lector"?) to flag studies that depend on the discredited study, let alone studies that depend on those studies.
An academic publisher worth its salt would also accommodate another pillar of modern software development: revision control. Code repositories, like wikis, are living documents, open not only for scrutiny, censure, and approbation, but for modification.