I agree that the easier part is getting people to listen to the machines. We already use, and have used aggressively, incorrectly predictive world models when refusing to hire people based on the mental models we hold about their race, gender, culture, religion, body odor, etc. The question to me is: how do we recognize these errors in big data early on? Early enough that they don't wreck the day of some whole subset of people. Definitely food for thought.