There's always an underlying model; the only question is whether or not the analysts know what that model is. If you don't understand why your results are what they are, your analysis has no predictive power!
This is what gets me about people who believe that AI will lead to scientific results beyond the ability of humans to comprehend. What's the point? Humans have a hard enough time listening to other humans, even when their arguments are well-reasoned and easy to understand! How are we going to get society to blindly follow what machines say?