
Culture is key to Responsible AI strategy



We’ve likely all heard the saying “culture eats strategy for breakfast” - a quote often attributed to management guru Peter Drucker. Drucker’s point is that no matter how solid, well thought out or effective your strategy might be in theory, your company’s culture - the shared beliefs, habits, values, attitudes and norms embodied and enacted by the people in your organization - will ultimately determine whether that strategy actually works.


I see a lot of focus on developing strategies for Responsible AI. However, if we agree with Drucker, we should also be thinking about how culture will either support or kibosh our ethical AI efforts. This is an under-appreciated area, yet it can be the determining factor for how AI ethics works in practice.


The Case of Crisis Text Line

The case of Crisis Text Line (CTL) reveals the role culture plays in determining how AI and data ethics play out in an organization. In early 2022, Politico published an investigation into a data sharing incident entitled “Suicide hotline shares data with for-profit spinoff, raising ethical questions”. To summarize the situation…

  • CTL gathered data from people using its mental health support services to triage cases and serve its clientele. It also shared this data in aggregated form with researchers and partners. It set up a data ethics committee and data sharing guidance to vet requests. The guidance stated CTL would not use data for commercial purposes.

  • CTL changed the guidance once an opportunity presented itself to start a sister company called Loris.ai. Loris would use the data to train chatbots that could handle difficult customer service calls for large private corporations. CTL’s data was particularly well suited to handling crisis situations, given its origin as a mental health hotline.

The video goes into greater detail about the case.


The Role of Culture

The ways in which culture led to CTL’s data sharing crisis are instructive. As I researched this case, it became clear that CTL’s Silicon Valley tech-inspired approach to addressing mental health was a core factor in what ultimately led to the creation of Loris.ai and the decision to use incredibly sensitive personal data for commercial gain. The story is salient because it illustrates that well-intentioned people (and I believe CTL had well-intentioned management and board members) can make poor decisions when the logic behind those decisions is normalized by organizational culture.


What exactly does Silicon Valley culture look like? It’s embodied in ideas about scaling rapidly, treating data as a resource for enabling growth, techno-solutioning, framing everything as a problem to be solved and taking a ‘winner takes all’ approach. If you work for a startup or in the tech sector, you might be wondering - what’s the problem with this logic? That question itself exemplifies a culturally informed response, because this set of attitudes and behaviours is normalized for those working in the technology space.


If you want an entertaining crash course on how this works, I highly recommend watching HBO’s Silicon Valley. The show tracks the ethically fraught journey of Richard Hendricks and his co-founders as they navigate the world of tech-entrepreneurship.


Data as a resource

Notably, the use of data to enable organizational goals was foundational to CTL from its inception. While CTL was a not-for-profit, it very much saw itself as a technology startup. This was evident in the way it described the role of data in various press releases when the organization launched, as well as in the Politico piece:

“Data science and AI are at the heart of the organization — ensuring, it says, that those in the highest-stakes situations wait no more than 30 seconds before they start messaging with one of its thousands of volunteer counselors. It says it combs the data it collects for insights that can help identify the neediest cases or zero in on people’s troubles, in much the same way that Amazon, Facebook and Google mine trends from likes and searches.”


Organizations that see data as a resource, divorced from the data subjects (aka people), could easily find themselves making similar ethical decisions about how and for what purposes data should be used.

Funding Matters

Another aspect that shaped CTL’s culture is the way in which the organization was established and funded. During its early years, it raised close to $24M in funding from a veritable “who’s who” of foundations created from the fortunes of tech founders (e.g. Omidyar, the Bill and Melinda Gates Foundation). The typical funding trajectory for a not-for-profit tends to be more grassroots, and most NPOs don’t start with tens of millions of dollars. That early success created a large footprint and, one might imagine, a large operating budget. How to remain sustainable year over year was a question the board and management grappled with in light of the level of funding required.


In a blog post, board member danah boyd explains that Loris.ai and the data sharing agreement were deemed acceptable because they represented a means of ongoing, sustainable funding at the scale necessary to keep a large, global organization running. In boyd’s words:

“We were also struggling, as all non-profits do, with how to be sustainable. Non-profit fundraising is excruciating and fraught. We were grateful for all of the philanthropic organizations who made starting the organization possible, but sustaining philanthropic funding is challenging and has significant burdens…. Funding in the mental health space is always scarce. And yet, as a board, we always had a fiduciary responsibility to think about sustainability.”


There is an “ends justify the means” logic at play here. It becomes easier to convince oneself that, in order to continue doing good things, it might be ethically permissible to do some questionable things along the way.

Ethics beyond data or AI

I originally reviewed this case in early 2022, shortly after the Politico story was published. However, I was reminded of it by a recent episode of Mystery AI Hype Theater 3000 (an excellent podcast!) in which the CTL issue was discussed alongside a different story - the National Eating Disorders Association’s (NEDA) use of a chatbot gone wrong.


Hannah Zeavin, a guest on the podcast and an expert in the history of how mental health support has been delivered through various technologies, from telephone hotlines to digital means, shared additional context about the CTL case that raised new issues. In many instances, simply connecting to a mental health crisis line can activate a process of non-consensual intervention, such as calling the police to conduct a wellness check, a practice called active rescue. In this Slate piece, Zeavin provides more context about how this works, as well as the role of algorithms and data in triaging the people who contact crisis lines. One of the questions Zeavin raises on the podcast is whether we should consider these issues around the delivery of crisis services simply from a data perspective or from a broader healthcare perspective.


A healthcare-informed culture would take a different approach to these issues. Healthcare cultures are patient-centric, not data- or tech-centric. That perspective would lead to different choices about the role of data and how it should be used in the delivery of mental health services. A patient-centric culture would be far less likely to see data as a resource to be used for commercial gain, regardless of how the proceeds would be spent. There would also be greater legal protections in place for healthcare data.


Operationalizing ethical principles for AI isn’t just about having mechanisms like committees or policies in place. It’s about creating a culture that will ultimately abide by these governance mechanisms when tough decisions need to be made.


By Katrina Ingram, CEO, Ethically Aligned AI

________


Sign up for our newsletter to have new blog posts and other updates delivered to you each month! Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com






