The Digital Law Innovation Society is a student-led group at the University of Alberta that works at the intersection of technology and the law. This post is part two of our highlights from their recent conference, Myths & Reality: AI and the Law.
AI Regulation in Canada
Where does Canada stand when it comes to AI regulation? Teresa Scassa, Canada Research Chair in Information Law and Policy and professor at the University of Ottawa, shared her insights. The bottom line: we have work to do!
Part of the barrier that Scassa sees to AI regulation in Canada relates to our story, or narrative, about AI. AI is primarily thought about in terms of its economic potential rather than more holistically through a human rights or societal impact lens. If we look at the agencies and departments that are tasked with advancing AI initiatives at the federal government level, we see groups that focus on research and development, economic investment and growth. What's missing are departments that focus on oversight, governance and civil society concerns. This is something we'll need to shift if we are to embrace a more holistic approach to AI.
There is, however, some low-hanging fruit that we could tackle, namely modernizing existing data protection laws and creating frameworks for the ethical adoption of AI in the public sector. Scassa also identified the need to ensure capacity and jurisdiction for oversight agencies, the need for frameworks and regulation to enforce post-market oversight of high-risk AI, and the need to address accountability issues so that the burden of risk does not fall disproportionately on individuals.
It strikes me that in order to make the more substantive changes Scassa has identified, we first need the political will to do so. This means addressing the first issue she raised, about our framing of AI primarily through an economic lens. Often, those who call for more balanced or holistic perspectives on AI are accused of "slowing down progress" by those who see technology primarily in economic terms. This is a narrative that we need to reframe.
AI, Law and Humanity
Amy Salzyn, Lauryn Kerr and Var Shankar are three people working at the intersection of AI, law and humanity. Salzyn is an associate professor at the University of Ottawa, Kerr is with the Civil Resolution Tribunal and Shankar is with the Responsible AI Institute. I was pleased to moderate this panel and learn more about their respective organizations, what they are seeing when it comes to AI and the law from their respective vantage points, and what they see as the challenges and opportunities for a way forward.
Access to justice was a thread woven through these presentations. We can think about this from the perspective of using AI tools to enable greater access, which is part of what the Civil Resolution Tribunal is doing; of ensuring people have the "functional literacy" and tools they need, as Salzyn discussed; or of ensuring that those creating AI systems do so in ways that are fair and equitable, as Shankar covered in his talk.
I felt very aligned with these panelists and the work they are doing. At one point, Shankar commented “it takes a village” to do this work of ensuring we have responsible AI. These panelists are very much a part of my professional village.
It was an incredibly jam-packed day of presentations, and this two-part blog post features only a few of the many speakers that DLIS hosted. As with all multi-track conferences, it's not possible to attend everything. It was clear that we need more of these opportunities to come together to talk about AI and the law. We're at the early stages of this conversation, especially here in Canada.
By Katrina Ingram, CEO, Ethically Aligned AI
Sign up for our newsletter to have new blog posts and other updates delivered to you each month!
Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com © 2022 Ethically Aligned AI Inc. All rights reserved.