
Office Hours: Here’s what startups want to know about AI ethics

Updated: Mar 23, 2022


What's on the mind of startups when it comes to AI ethics?


I’ve been hosting office hours with Startup Edmonton. These are confidential one-on-one sessions where startup founders and leaders can bring their questions or concerns and receive personalized feedback. AI ethics is a new area, so I thought it would be interesting to share some of the types of questions being asked, as well as my responses.


Does our organization need to be concerned about AI ethics?

I think it’s useful for all startups to examine their business model and technology solutions in light of ethics. However, it’s CRITICAL to do this for any organization whose work directly involves humans or the use of human data. As I told one startup founder, if you are building an agriculture-focused solution using only plant data, I’m less concerned about that than if you are building a solution for job recruitment.


High-stakes areas include healthcare, education, law enforcement/criminal justice, financial services, social services, human resources, marketing - anything that involves the use of human data. If your solution is making some kind of prediction about humans, it warrants ethical consideration too, even if that prediction relates to something that seems low stakes. Perhaps counterintuitively, a lot of “AI4Good” initiatives can be higher stakes, as they tend to focus on vulnerable communities. For any company involved in these areas, having clear ethical principles and practices in place is essential.


What is our liability when it comes to the use of personal data?

Liability is an interesting word. There are obvious legal connotations, and for that, I always suggest consulting a lawyer to discuss specific issues. It’s also a good idea to be familiar with general privacy laws such as PIPEDA in Canada or the EU’s GDPR, which is setting the bar globally. There are also provincial-level regulations and domain-specific regulations for particular areas, such as healthcare or finance. Anyone using personal data should have some familiarity with privacy protection regulations.


However, when it comes to ethics, we can also think of liability in terms of moral obligations. In that context, if we are using secondary data, we should understand where it comes from and how the dataset was preprocessed. If we are collecting the data directly, we need to ensure we are doing so in a transparent, ethical manner. We can apply the principles of necessity and proportionality to our data collection and use. For example, are we collecting or using only what we need (minimizing data collection)? We might also ask how we are being transparent about what the data is used for. What consent has been given or not given? How will our organization handle questions or concerns from data subjects, including possible removal of their data?
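To make the data-minimization idea concrete, here is a minimal sketch in Python. It imagines a hypothetical job-matching service that discards every field it doesn’t need before a record is ever stored, and attaches a note about the consent given at collection time. The field names and the ALLOWED_FIELDS list are illustrative assumptions, not drawn from any specific law or framework.

```python
from datetime import datetime, timezone

# Hypothetical example: the only fields our imagined service actually needs.
# Everything else is dropped before the record is ever stored.
ALLOWED_FIELDS = {"skills", "years_experience", "preferred_region"}

def minimize_record(raw: dict, consent_purpose: str) -> dict:
    """Keep only necessary fields and attach a record of consent.

    A toy illustration of necessity/proportionality: collect and
    retain only what the stated purpose requires.
    """
    minimized = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    minimized["consent"] = {
        "purpose": consent_purpose,
        "granted_at": datetime.now(timezone.utc).isoformat(),
    }
    return minimized

# Usage: fields like name and date_of_birth are discarded, not stored.
raw_submission = {
    "name": "A. Person",
    "date_of_birth": "1990-01-01",
    "skills": ["python", "writing"],
    "years_experience": 7,
    "preferred_region": "AB",
}
print(minimize_record(raw_submission, consent_purpose="job matching"))
```

The point of the sketch is that minimization is a design decision made before storage, not a cleanup step afterward.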


There is also the question of security. How is the data being secured and where is it stored? For example, if we use cloud-based storage with servers that are in another country and not subject to local regulations, how does that impact our ability to protect data in line with local laws?
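One common technical mitigation, sketched below, is to encrypt records on our own infrastructure before they ever reach a foreign-hosted server, so the cloud provider only ever holds ciphertext. This is a minimal sketch using the Fernet recipe from the third-party cryptography package; the record contents are made up, and key management (where the key lives, who can use it) is the genuinely hard part and is out of scope here.

```python
# Minimal sketch: encrypt locally before uploading to cloud storage,
# so servers outside our jurisdiction hold only ciphertext.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, keep this in a local key vault
fernet = Fernet(key)

record = b'{"user_id": 42, "notes": "sensitive personal data"}'
ciphertext = fernet.encrypt(record)   # safe to send to the cloud provider
# ... upload ciphertext to remote storage ...
plaintext = fernet.decrypt(ciphertext)  # only possible with the local key
assert plaintext == record
```

Encryption doesn’t resolve the legal question of jurisdiction, but it does change what a foreign provider can actually access.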



The other concept to consider in this question is “personal data” - what is personal data? Is a tweet personal data? From an ethics perspective, even data that is not afforded current legal protections might raise ethical issues. This is part of the debate around Clearview AI’s use of publicly posted images to train its models. Canadian privacy commissioners issued a report explaining why this is problematic and recommended that Clearview not only cease operations in Canada but also delete the images it used. Clearview has done the former, but not the latter. These are just some of the questions any organization using data needs to address; others would require context-specific deliberation.


How can we ensure our solution is ethical and isn’t causing harm?

All of the startup founders I’ve had a chance to connect with are well-intentioned people who want to build a solution that makes the world better. That intention itself, however, can be a blind spot. We need to envision what might happen if our solution is used maliciously by bad actors, and to take appropriate steps to ensure this doesn’t happen. Understanding these harms may require feedback from stakeholders who are not part of the core team, who are not invested in the solution, and whose backgrounds are diverse enough to bring a very different perspective to the discussion.


In addition to understanding bad actors, startups also need to consider who might be harmed if their technology works exactly as promised. To go back to the example of Clearview AI, its technology can harm communities that might be over-policed as a result of the technology being used exactly as intended. Thinking broadly about stakeholders and how to engage different perspectives is part of the risk mitigation work that startups can do to identify and address potential harms early in their development process.


There’s also one question that hasn’t come up yet, but that I think every startup founder should ask themselves…


Whose money will we take and how might that shape our organization?

This question is fundamental, as it sets the tone for what you will or won’t do to meet the explicit or implicit obligations that come with funding. It is perhaps one of the most important ethical questions you will need to address. Ethical compromises are a slippery slope, so what will the boundaries be for your organization? As a startup founder myself, I think about this question a lot! Conversely, investors also face risks if the companies they invest in are not addressing ethics, and in particular algorithmic risk - but that’s a topic for another post.


Having an AI ethics coach can help startup leaders take an objective approach to thinking through these types of questions. It’s like having a personal trainer who will make you do one more sit-up. It may not feel good in the moment, but you’ll be happy with the long-term results. I’ll be offering more office hours with Startup Edmonton in the future, or you can find me on Mentorcruise.


If you want to learn more and build some foundational skills in AI ethics, check out our new AI ethics micro-credential with PowerED by Athabasca University. It’s online learning in bite-sized pieces to fit into a busy schedule. Alberta-based companies may also be eligible for funding to offset some of the costs of this training.


By Katrina Ingram, CEO, Ethically Aligned AI


Sign up for our newsletter to have new blog posts and other updates delivered to you each month!

Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com © 2022 Ethically Aligned AI Inc. All rights reserved.
