
Experiential Ethics: Practical skills for navigating our AI present and future


The Experiential Ethics program at MIT is a collaborative, discussion-based summer course that allows students to explore BIG social questions centered on science and technology through a series of experiential learning activities.


One of those activities is a virtual field trip, and I was pleased to be asked to participate as a guest speaker. The students did some background research about my work and then we had an "ask me anything" session. There were some incredibly thoughtful questions. In the spirit of sharing the learning more widely, I thought I’d answer some of them for the blog.


Ethics in Industry

  • You’ve had a lot of business-driven roles before founding Ethically Aligned AI. Considering you were in a position to influence the ethical standpoints of your previous companies, how has that experience shaped how Ethically Aligned AI focuses on current issues, and how do you expand your ethical perspective beyond the business perspective?

  • What are the greatest challenges in bringing ethical theory into industry? To what extent can a for-profit organization be prepared to make ethical choices that are financially negative? What are the biggest ethical challenges or blind spots that well-intended members of industry often fall into?

I'll answer these two questions together. First, I have a lot of empathy for anyone trying to run a business. I appreciate and understand the challenges of driving revenue, meeting obligations to clients and staff, and balancing fiscal realities. Not-for-profits are not immune to business challenges either, as they also need to find sustainable sources of funding.


I don’t think business goals need to be antithetical to ethical objectives.

I think it’s possible to run a company in ethical ways, but it takes a commitment to frame success beyond financial bottom-line metrics or growth at all costs. That requires considering social implications, the public good and personal values.


As for blind spots and challenges, it is easy to justify and rationalize unethical choices in the name of their being 'just a business decision'. We see this all the time: offshoring work to countries with poor labour practices as a cost-management measure, turning a blind eye to environmental externalities, or taking money to scale a business without fully appreciating the possible compromises.


To what extent can business leaders think about impacts beyond the business? Small businesses typically have more control and founder value alignment, assuming the founder is behaving ethically and not like Sam Bankman-Fried. Public companies are legally beholden to a wider range of actors, market forces and shareholders. While things are changing with a greater focus on ESG metrics and corporate social responsibility, I still think it’s tougher for very large companies competing at a global level to ensure they are ethically aligned.

  • How do you persuade the companies you work with that use AI (or the sponsors that you have) that they would benefit from prioritizing ethics in the design of their technology? Sometimes it seems like companies are only interested in pursuing things that will bring them economic gain. Do you try to promote Ethically Aligned AI as something that is also good for business?

We can think about this in terms of risk mitigation. Companies don’t want to risk their reputations or face financial and possibly legal consequences, and ethical practices can help avoid or mitigate those risks. On the upside, there are positive benefits from ethics too, such as being seen as the company that does the right thing. It’s like a Patagonia effect: being known as the socially responsible company has real benefits. I wrote a business-focused case study about this with a team of ethicists that might be of interest, to further illustrate how this can work.


AI and Regulation

  • How has your experience working in the up and coming cannabis industry shaped your perception of AI as a new technology?

  • What are your thoughts on the role of government regulation in promoting ethical AI practices? Should there be more regulatory oversight, and if so, what form should it take?

From day one, the cannabis industry faced a huge amount of regulatory oversight from Health Canada and from provincial regulators such as the AGLC in Alberta. That is to be expected when going from a Schedule II narcotic to a widely available legal substance that was named an essential service during the pandemic.


Contrast this with AI, which has little to no regulation. Yet we know there are harms from the technology that are on par with, or some might say more troublesome than, those from marijuana. I find the comparison between the two incredibly interesting - how we regulate, what we regulate and what is seen as risky - because it reveals our societal values and priorities.


In terms of what regulation we need, I think we really need to better regulate the use of data and rethink our notions of data privacy.

All of our privacy laws center on this idea that we can adequately remove personally identifiable information to anonymize data and that just isn’t the case anymore.

Law professor Teresa Scassa makes a case for regulating human-derived data, and Elizabeth Renieris makes a similar case for centering human rights in her book Beyond Data. I’m in agreement with both of them. In addition, I think we need AI-specific regulations, and we are seeing countries moving in that direction with the EU AI Act and Canada’s AIDA (Bill C-27).


Will AI take my job?

  • What do you think is the ideal balance between AI and skilled labor so that we can use the technology to help us but not take away any of the important components of our lives?

The word ‘ideal’ frames the question to suggest there is some singular best way. I’m not sure there is one, or what it might be if there were. I think people are going to need to navigate what this looks like, and it will be context specific.

We’re going to need to define what we believe to be the important components of our lives - the things we will choose NOT to outsource to automated systems. This will likely be both a personal and a societal exercise.

The labour piece also involves what an organization is willing to pay for, so the values of the business come into play. For example, one organization I worked for in the past valued the work of musicians and committed to always paying them. Another place in the same industry wanted musicians to work for free, for the "exposure," instead of offering appropriate compensation. Those are two sets of values at play within a workforce context.

  • In your blog post on the relation between AI & Jobs you talk about how AI will change people’s views on the attainment of a high level of education. How do you think AI will change the way subjects are taught in academic institutions? Will there be a shift in what is taught and what is considered necessary knowledge?

Yes - it’s already happening.

We can track the academic response to ChatGPT from “hard pass” to “well, maybe” to “let’s try and work with the technology”. That shift in perspective took place over just a few months.

I do think it’s important to balance openness with skepticism, which isn’t always an easy dynamic to navigate. My approach has been to co-create a set of norms around AI tool use with the class I’m currently teaching on AI Ethics this summer. We'll see how it goes.


I also really love this perspective from Harvard professor Rebecca Nesson, who was part of a panel discussion on AI in education. She said that there should be something different (she implied better) about a student who has taken her class and used ChatGPT/generative AI versus a student who just used ChatGPT/generative AI but didn't take the class. In other words, educators really need to think about the ways in which their work is adding value. In turn, this idea of adding value (or not) will affect the demand for education and post-secondary degrees.

  • Where can people develop skills on how to use AI well? Do you envision this being a natural skill that people pick up or will it need to be taught in schools?

I think it will be both. The technology is being woven into many things, so people will encounter it in their lives and learn from use. However, I also think it is important to teach people how to use the technology intentionally and appropriately. I’m doing some work on the concept of automation bias - the idea that we can over-trust technology and believe it to be more objective than humans. Skills related to discernment become very important in a heavily automated world.

  • Is there any advice that you would give students who may be interested in working in ethical AI? How would you break into that industry & what makes you right for the job/task?

We need a wide range of people doing this work with a diverse set of skills. A lot of the early work has been on education and awareness. As we move forward, we need people to implement ethical design and deployment practices within organizations (in both technical and non-technical roles), as well as policy makers and auditors. There are also opportunities for entrepreneurs who want to build tools or deliver services in this area.


Every company thinking about or already using AI will need AI ethics related roles. So many opportunities!

You can develop your skills by learning about these issues, volunteering to help organizations make ethical technology choices (e.g. helping draft generative AI guidance for a not-for-profit), joining like-minded communities of AI ethicists (For Humanity, All Tech is Human, Women in AI Ethics), raising your hand to become the AI Ethics “go to” person for whatever organization you are interning with, and then parlaying those experiences into a role in the space that aligns with your interests and skill sets.


Many thanks to all the students for your great questions and to the organizing team at MIT for the invitation to take part!


By Katrina Ingram, CEO, Ethically Aligned AI

________

Sign up for our newsletter to have new blog posts and other updates delivered to you each month! Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com


© 2023 Ethically Aligned AI Inc. All rights reserved.


