I'm in NYC for the AI, Ethics and Society conference. Tonight, our opening keynote speaker was Charlton McIlwain who recently published a new book - Black Software: The Internet and Racial Justice from the Afronet to Black Lives Matter.
There are many reasons why Professor McIlwain wrote this book, as he outlines in this Slate article. In part, he was inspired by the story of Roy Wilkins, a civil rights activist, who wrote a particularly prescient opinion piece for the LA Times back in 1967. This piece was called "Computerize the Race Problem?" and Wilkins wondered if the "unprejudiced magic" of the machine might be used to address issues of interracial justice and peace, instead of just optimizing cows (really!).
Tonight's keynote unpacked aspects of the Black experience within the predominantly White world of computer technology. It was during the 1960s that computers and computing science were being established. At the same time, civil rights activists, like Roy Wilkins, were fighting for equality. Black people (and other minority groups) were actively excluded from the conversations taking place that would shape this important technology. Instead, Black people became the targets of the technology - a problem that needed to be solved. Of course, it wasn't necessarily framed in such blatant racial terms. Instead, it was framed as solving societal problems like lowering crime rates and keeping people safe. Who doesn't want that? It just so "happened" that Black people (and other minority groups) were the problems.
Fast forward fifty years and we are living with the results of the power imbalance and bias encoded within these technologies. We have Algorithms of Oppression, as Safiya Noble has documented. We're still trying to lower crime rates, now with facial recognition technology. What would have been different if Roy Wilkins, and people like him, had been invited to participate in shaping the technology? Could our current world be one where technology enabled the inclusive future that Wilkins aspired to?
It's an interesting question to ask. I think about where we are heading with AI and my hopeful self really wants this to be true. I'd love to think that we will develop a super-ethical AI that rights the wrongs of our world. However, given how the technology works (based on historically biased data) and who is in charge of shaping the agenda, I'm less hopeful. I think we sit at the same intersection now with AI as we did in the 1960s with computer hardware and software. I oscillate between a future of "yes-we-can" dreams and despair.
The Q &amp; A session was led by AI, Ethics and Society Conference Co-chair Anne Washington, who is an Assistant Professor of Data Policy at NYU and has also worked in industry as a data scientist. Her experiences working in the technology sector have informed her research into how we might move forward to enable much needed changes. Many of the questions from the audience centered around the "go forward". Professor McIlwain suggested that we need more voices to be included, and that we need to slow down the rush to develop and deploy the technology to make room for this approach. This won't be easy, and perhaps that's why he is, admittedly, skeptical about how progress will be made.
It was a thought-provoking talk that provided another lens on the issues I've been thinking about this past year. One "aha" moment for me came from calling out the problem-solution paradigm in which we frame the use of technology. This type of thinking is so normalized, especially in the business circles that I'm used to travelling in, that I don't often stop to question how it frames a certain dynamic.
Tonight's talk was a great start to the conference - looking forward to a full day tomorrow!
Note: I had to look up if I should capitalize Black and White or make them lowercase. That led to this interesting article which basically said there isn't consensus on this issue.