Not a quibble - if you have a dataset and you explore it, you're likely to find interesting correlations. That's what large-dataset survey experiments are about: you collect all the data you can, then you comb through it afterward to see what you can see. It's a useful and valid method of scientific discovery. The failure of "big data" happens when that scientific step - the part where you go "hmmm, I see a correlation between A and B, let's investigate further" - gets bypassed in favor of expediency. Correlation does not imply causation, and it's important to determine whether you actually have causation at hand. Silver's argument is that much "big data" analysis presumes causation every time a correlation shows up, and this article is full of examples. For instance, an AFC win in the Super Bowl has an 80% correlation with a stock market decline. The scientific method calls for investigating whether the Miami Dolphins actually have the power to bring down the DJIA. "Big data" simply says "eighty percent! Woo hoo!"

However, assuming one is analyzing data they don't know much about other than that it has integrity, here is where I diverge. In that scenario, untold and surprising discoveries can occur - discoveries that would not have been found if the analysis had been conducted under the curse of knowledge (read: analyzed within the constructs of what you know, or assume to be absolute).
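
To make the first point concrete, here's a minimal sketch in Python (my own illustration, not anything from the article or from Silver; the variable counts and the 0.5 threshold are arbitrary assumptions): generate a pile of unrelated random series, compare every pair, and count how many look "strongly" correlated despite having no causal link whatsoever.

    # Spurious correlations from pure data-combing: 200 unrelated indicators,
    # 30 observations each, no causal structure at all.
    import numpy as np

    rng = np.random.default_rng(0)
    n_series, n_obs = 200, 30
    data = rng.normal(size=(n_series, n_obs))

    corr = np.corrcoef(data)                       # pairwise Pearson correlations
    upper = corr[np.triu_indices(n_series, k=1)]   # each pair counted once

    strong = np.abs(upper) > 0.5                   # naive "interesting" threshold
    print(f"{strong.sum()} of {upper.size} pairs exceed |r| > 0.5 by chance alone")

Run it and you'll see a healthy handful of pairs clear the bar even though every series is noise - which is exactly why "eighty percent!" on its own tells you nothing until you do the follow-up investigation.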