It's Focus Group Time! Three Tools for Developing Ethical AI

Updated: Mar 23, 2022

My first job after finishing my undergrad in the mid-90s was Research Coordinator for Metroline Research. I helped manage the Vancouver focus group facility (which I see no longer exists). It was a fancy space: one-way mirrors, hidden cameras and microphones to record the groups, and a tricked-out viewing room for the clients where the mini-bar was always fully stocked.

Fast forward two decades and I'm about to conduct my first online focus group.

I'm using the group to gather feedback on a set of ethical tools that I think will be helpful to AI researchers. I decided to include a focus group as part of my research because a group reaction to these tools is important to gauge. So much of applied ethics is about cultural norms. Think about all the times someone goes to a conference and brings a great idea or tool back to the office, but it doesn't get adopted because it's not a fit. That's what I mean by trying to gauge a group response. While this setting is a little artificial, those are the constraints of research, unless you go full-on ethnography, but that's another story.

There are hundreds of different AI ethics tools that have been developed in the past couple of years. I reviewed the one-on-one interview data to get a baseline feel for the ethical issues raised as primary concerns. That input helped me select three ethical tools that I think might be useful for AI researchers in the course of their work.

I selected tools I believed would be relevant to all participants, that worked across a range of different types of projects (i.e., image data vs. text), that addressed three distinct ethical concerns, and that did not require lengthy technical explanations. The tools we will review are:

Datasheets for Datasets: A tool that documents data provenance based on the idea of electronic component datasheets (Gebru, Morgenstern, Vecchione, Vaughan, Wallach, Daumé III & Crawford, 2018).

Principles for Accountable Algorithms and Social Impact Statement: A guide for writing an algorithmic social impact statement in order to provide transparency (Diakopoulos, Friedler, Arenas, Barocas, Hay, Howe, Jagadish, Unsworth, Venkatasubramanian, Wilson, Yu & Zevenbergen, n.d.).

AI Blindspot: A general-purpose set of flashcards used to foster discussion of the whole AI workflow (Calderon, Taber, Qu & Wen, 2019).

Many of these tools were originally written up as papers or sets of ideas, so I spent time reworking them into formats that were easier to use. I'm excited to see how this turns out! While there is no mini-bar, I do have virtual muffins from a pretty infamous AI dataset...

Conducting a virtual group has the benefit of easy recording (just hit a button), but I know I'm losing contextual cues from body language and eye contact between participants. My facilitation skills will also be tested by juggling the documents under review on screen against paying attention to what is being said in real time (which is why we record the group). However, in our physically distanced times, this was the only way I could pass research ethics. One thing that works in my favour is that these are highly tech-savvy people, so there is a comfort level with the technology. As with everything in my research project, I'm treating this whole exercise as an experiment! Away we go...

By Katrina Ingram _______

Sign up for our newsletter to have new blog posts and other updates delivered to you each month!

Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. © 2020 Ethically Aligned AI Inc. All rights reserved.
