Responsibility, Liability and Consequences: lessons from the Sacklers

It’s the week before Christmas and there is a battle taking place in my city. A court is deciding the fate of hundreds of unhoused people as the police and city administration attempt to move forward with the removal of encampments, while those opposed have filed legal and ethical appeals. Sadly, this is a problem that many cities are facing, and it involves a complex set of social issues, including the fallout of the opioid crisis.

In another legal battle, the Sackler family - the billionaires behind Purdue Pharma and OxyContin - are awaiting a US Supreme Court decision on whether their immunity deal will be upheld or overturned. The family has already faced social and reputational consequences, including having their name stripped from philanthropic spaces and being the subject of scathing books and documentaries. However, they have managed to avoid criminal charges, have retained their personal fortunes and have negotiated a way to evade further liability. This last point - the avoidance of future claims that was negotiated as part of the Purdue Pharma bankruptcy - is the piece that is before the Supreme Court. This article provides a more in-depth legal analysis.

I think most people would agree that the Sacklers bear a lot of responsibility for the opioid crisis, even though the family continues to claim they did nothing wrong. Yet the consequences they have faced are not proportional to their role in this crisis. The billions they made have far outweighed the impacts on this family, while the families of those addicted to opioids have been torn apart.

A controversial legal manoeuvre

This case represents a tough situation for the Supreme Court because disallowing the legal tool of immunity (aka third-party releases) also has consequences. It can result in uncertain, long, drawn-out legal processes in which no victims are compensated. This is why the deal was cut in the first place. As this CNN piece notes:

“Proponents of third-party releases say they’re the quickest and fairest way for victims to receive compensation for harm done by a company or other organization. Those who oppose the provision say it’s a way for potentially liable parties to skirt legal scrutiny, possibly weakening consumer protections.”

This controversial legal tool has also been used by the Boy Scouts and the Catholic Church to avoid further lawsuits.

Which brings me to AI risks.

Risk for whom?

Much of the conversation around AI guardrails is tied to the idea of assessing risk. There is a lot of latitude for organizations to set their own ‘risk appetite’ - the level of risk they are willing to accept. Yet it seems obvious, when we look at the case of Purdue Pharma, that allowing companies to apply a cost-benefit analysis - to weigh the upside against the downside and then act based on a ‘risk appetite’ - isn’t a good way to prevent societal catastrophes. A risk-based self-assessment is always going to be biased towards those who have the power to make the final decision. In so many cases, these are the folks who also have the most to gain and will be impacted the least by taking the risk.

The consequences are not borne by those who are empowered to make these risk-based decisions. Instead, other people pay, sometimes with their lives. 

What about regulation?

Regulation is absolutely necessary, but it can be insufficient. The pharmaceutical industry is one of the most heavily regulated industries. The Stanford-Lancet Commission, which was established to investigate the opioid crisis, noted in its recommendations that:

“The profit motives of actors inside and outside of the health care system will repeatedly generate harmful over-provision of addictive pharmaceuticals unless regulatory systems are fundamentally reformed.”

Why would the profit motives of actors involved with AI be any different? Perhaps the risks are not as clear-cut as opioid drugs leading to addiction. Yet, as difficult as it might be to fathom, the societal risks may be even more extensive and far-reaching. I’m not talking about rogue AI, but the real risks of mass automation and job displacement, deskilling, over-reliance on automation for critical infrastructure, mis- or disinformation that threatens democracy, and all of the ensuing social upheavals.

The Sackler case has implications for how blame and consequences are assigned - implications that are relevant to the corporate calculus we are applying to AI risks. It may not remain quite so simple for powerful actors to make billions, cut a deal and negotiate away the future consequences. Or maybe it will remain that simple.

By Katrina Ingram, CEO, Ethically Aligned AI


Sign up for our newsletter to have new blog posts and other updates delivered to you each month! Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at

© 2023 Ethically Aligned AI Inc. All rights reserved.
