Last night I was at a talk on ethics and bias hosted by Women in Big Data. One of the presenters mentioned the Harvard Implicit Association Test. I hadn't heard of this test before, but I thought it would be interesting to learn what my inherent biases are, especially as I move into a research project on the topic of ethics.
There are a number of tests on the site. I've taken two so far, one on gender, the other on race. The test asks you to very quickly classify words or images into groups. The goal is to uncover your subconscious response in associating certain categories, for example, Female with Humanities and Male with Science (both of which I have a slight bias towards). The results also reveal where your score lands relative to others who have taken the test. In both cases, I don't align with the majority, which doesn't necessarily surprise me.
By Katrina Ingram
Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com © 2019 Ethically Aligned AI Inc. All rights reserved.