- A belief in the unquestionability of data leads directly to a belief in the truth of data-derived assertions. And if data contains truth, then it will, without moral intervention, produce better outcomes.
There's always an underlying model; the only question is whether or not the analysts know what that model is. If you don't understand why your results are what they are, your analysis has no predictive power!
This is what gets me with people who believe that AI will lead to scientific results beyond the ability of humans to comprehend. What's the point? Humans have a hard enough time listening to other humans, even if their arguments are well-reasoned and easy to understand! How are we going to get society to blindly follow what machines say?
I agree on the easier bit: getting people to listen to the machines. We already use, and have aggressively used, incorrectly predictive world models to refuse to hire folks based on the mental models we hold about their race, gender, culture, religion, body odor, etc. The question, to me, is how do we recognize these errors in big data early on? Early enough that they don't wreck the day of some whole subset of people. Definitely food for thought.
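One cheap early-warning check along those lines: compare the model's selection rate across groups before it's deployed. A minimal sketch, not anyone's actual pipeline; the function names and data are hypothetical, and the 0.8 threshold is just the common "four-fifths rule" heuristic:

```python
# Cheap early-warning check: compare a hiring model's positive-prediction
# rate across demographic groups. The 0.8 threshold is the common
# "four-fifths rule" heuristic; all names and data here are hypothetical.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(predictions, groups, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    times the best-treated group's rate."""
    rates = selection_rates(predictions, groups)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical predictions: 1 = "invite to interview".
preds  = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(selection_rates(preds, groups))        # {'A': 0.8, 'B': 0.2}
print(disparate_impact_flags(preds, groups)) # {'A': False, 'B': True}
```

It won't catch every error, of course, but it's the kind of check you can run on day one rather than after the wreckage.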
It might have enough predictive power to be accepted. It might be predicting the wrong thing, following correlations it shouldn't, but if it looks good, people won't bother trying to understand. I agree it won't lead to new science, but I certainly believe people will follow any stupid thing The Numbers say as long as it's presented with a veneer of competence.
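That failure mode is easy to reproduce: a model keyed on a spurious correlate scores well on data drawn like its training set, then drops to chance the moment the correlation shifts. A toy sketch with entirely made-up data:

```python
# Toy reproduction of "looks good but tracks the wrong thing": a model
# keyed on a spurious correlate scores well on data drawn like its
# training set, then falls to chance when the correlation shifts.
# All data here is synthetic and hypothetical.
import random

random.seed(0)

def sample(spurious_agreement):
    """One (features, label) pair. `label` is the ground truth; `proxy`
    merely agrees with it `spurious_agreement` of the time."""
    label = random.random() < 0.5
    proxy = label if random.random() < spurious_agreement else not label
    return {"proxy": proxy}, label

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

def proxy_model(x):
    # A "model" that latched onto the proxy instead of any real cause.
    return x["proxy"]

train_like = [sample(0.95) for _ in range(10_000)]  # proxy usually agrees
shifted    = [sample(0.50) for _ in range(10_000)]  # correlation gone

print(accuracy(proxy_model, train_like))  # ~0.95: looks competent
print(accuracy(proxy_model, shifted))     # ~0.50: a coin flip
```

Nothing in the first number tells you the model is wrong; only knowing what the underlying model actually keyed on does.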
I think it's a moving goalpost when it comes to our ability to comprehend things. As little as 150 years ago you would likely have been hard-pressed to find a genius capable of catching up with and understanding the results of path integrals (in a reasonable amount of time); right now it's something that's taught even to econ majors in an entirely different, unexpected context. Sorry, I'm one of those "IQ isn't a factor, it's about the grokking time" optimists. ;)