It's interesting; a through-line of the past years of my work is that it slowly went from "here's a fun analysis to help you forward!" to "I'm handcrafting the model that is gonna determine what you need to do". I'm staying far away from anything that even remotely resembles ML, and I've developed the skill to properly communicate what my results do and don't say, so it doesn't come close to any case she's describing. The problem, however, is that for the large and complex challenges of our time there generally aren't ways of dealing with them that avoid the biases and prejudices of those who create the models. Now, "college rankings" isn't among those challenges, but something like climate change totally is. From my vantage point, there's been an increase in the number of people who realize that a) we can't trust these models at face value, but b) we do still need large and automated solutions, so c) we need to be much more careful about how we use them, d) without throwing the baby out with the bathwater. There's still an awful long way to go, though. I wonder if the Facebook kerfuffle is maybe going to make this stuff more prominent, more front'n'center.
I wish you'd pushed me harder to read this book. I'm not finding any new information in it, but her style of communication is extremely novice-friendly. I've already inflicted it on three people and I'm only on Chapter 5. Perhaps it's because I've read a bunch of Selingo, as well as Graeber and a few others, that my window into the book is slightly different. For me, the takeaway has been that large problems are more likely to attract algorithmic solutions, and that the more privileged you are, the less likely you are to encounter someone else's math-shaped policy. I take Ms. O'Neil's point to be that algorithms aren't inherently evil, but they also aren't inherently magical. I grew up alongside computers. "OK Cupid" when I was in 9th grade was "Answer these 20 multiple-choice questions, hand the sheet to 4-H, pay a dollar, and get your top five dating recommendations." It took rented server time to collate 1200 Myers-Briggs responses, and the results came out on dot-matrix. So I guess it comes more naturally to me to note that normie humans have long been skeptical about the wisdom of black-box data, and have inherently known that it gets used for oppression. I think Facebook is teaching us all, in real time, how easily data is weaponized. That doesn't mean all data is a weapon; it means that if you don't demand your counterparty show his work, you're gonna get screwed.
There's a pretty good documentary called Coded Bias that introduces a group called the Algorithmic Justice League, which is working to raise public awareness about the impacts of AI and fight for equitable and accountable AI, as opposed to what we have currently.