It's been a super full day exploring the landscape surrounding the many societal questions relating to artificial intelligence and ethics. Law Professor Frank Pasquale kicked off the morning keynote with some reflections on the ground we've covered so far in our approach to this topic. He talked about two waves of ethical assessment in AI - the first focused on issues of bias, fairness, transparency and explainability. This approach is one of "mend it" - fix the technology to address these ethical concerns. While those issues are still central to the discussion, we're also seeing a second wave that questions the use of the technology altogether. This is a more radical "end it" approach, and it's being debated because of the many documented dangers in the deployment of AI that have driven the "subordination of the disadvantaged, rationalized by an algorithm". It asks whether using AI in certain cases is the right approach at all.
Ironically, the Legalweek branding on the podium was left over from the conference that ended yesterday, but it somehow felt appropriate for Professor Pasquale's talk!
Predatory, creepy and subordinating
Professor Pasquale shared examples from finance and healthcare, two high-stakes fields for AI applications. He pointed out that there are risks not only when algorithms don't work, but also when they do. For example, if hospital rating systems in the US got it right and actually agreed (they currently don't), the result might be an influx of people to certain hospitals, creating a reinforcement mechanism: those hospitals see higher patronage, get better with increased use, and possibly drive poorer outcomes at, and the closure of, other hospitals. That could disadvantage people who don't have close access to a hospital, possibly leading to higher survival rates for people in wealthier areas with more hospital resources. Rating systems have the potential to drive Darwinian outcomes. On the finance side, desperate people might be willing to trade data access for loans, social media data might be used to allocate risk in credit scores, or the way you fill in a form (with shaky mouse usage, for example) might indicate a medical condition like Parkinson's, and that knowledge could be used against you. He categorizes the risks in financial inclusion into three camps - predatory (addressable by usury laws), creepy (addressable by data limits/erasure requirements) and subordinating (addressable by legally disallowing certain actions).
What gets measured...
Facial recognition was singled out as the "lead paint" of today - a known risk backed by powerful interests who are invested in keeping the technology in use. The recent Clearview controversy is a case in point. At the end of the talk, moderator Gina Neff coined Pasquale's law - "if you measure it, it will manage", a spin on Peter Drucker's "what gets measured gets managed".
This idea of measurement was a prevalent theme throughout the rest of the day. Some speakers sought to measure concepts like morality, diversity and inclusion, turning these into computations to be encoded within AI systems. Others were focused on trying to ensure more equity in existing systems by tweaking the math being used to be more "fair". A few presenters questioned the notion of measurement itself. One speaker, Derek Leben, aptly highlighted the challenges in determining fairness because it stems from a person's ethical viewpoint, leading me to wonder if we should transparently label AI like we label food (this algorithm was developed by consequentialists!). At least we would know what went into the product.
Solidarity - lessons from Jakarta
People generally seemed to agree that more voices and representation in addressing the issues would be a good thing, but there were few concrete examples of how this could or should be accomplished. One that stood out was the experience of gig economy drivers in Jakarta. They co-opted the WhatsApp platform, as well as established other online and offline spaces, to form a community of solidarity. It wasn't a union (they didn't want that), but it was a way to organize, make their voices heard and have an impact. This story in particular really spoke to me as a hopeful sign of what can be done at a grassroots level. We need more of that.
Googlers onboard
I spoke with a guy named Larry (not Page) who works for Google. I was curious to know what the climate was like inside the company with respect to ethical discussions. He shared that in his experience Googlers have a sense of responsibility towards the ethics of AI and that they want to find solutions. It's certainly in their best interest as a company to do this, but I also get the sense that there is a personal level of commitment from some of these technologists. That's encouraging. It doesn't mean we shouldn't be critical of Google or its ever-growing monopolistic position, but it is heartening to know that these issues are being taken seriously and being resourced.
Overall, it's still early days for AI and ethics. We are all on a journey to figure this out.
More to come tomorrow!
By Katrina Ingram
Sign up for our newsletter to have new blog posts and other updates delivered to you each month!
Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com © 2020 Ethically Aligned AI Inc. All rights reserved.