I recently learned that companies can outsource corporate whistleblowing to a piece of software, confirming that there really is an app for everything.
“Whistleblowing software is a type of issue management solution that provides employees with anonymous options to report issues related to fraud, harassment, theft, embezzlement, corruption, and so on, giving organizations the ability to uncover these workplace problems.” (G2.com)
This issue arose out of a talk this week co-hosted by three university-affiliated institutes, including the University of Alberta’s AI4Society, on the topic of whistleblowers and self-regulation in the technology space. The discussion centered on whether companies can effectively self-regulate, and how enhanced legal protections in Europe, along with technology solutions, might afford whistleblowers greater protection.
The panel included two legal experts from Portugal, one of whom is working as a data protection officer in the financial services sector, and a business professor from the University of Alberta whose research is focused on organizational scandals.
My immediate question with respect to whistleblower software is – where does this information go? Is this information externally reported to authorities, such as regulators, or does it remain internal to the company?
Imagine you’re an employee at Enron, trying to blow the whistle on fraudulent accounting practices, or an employee at Volkswagen who knows something isn’t quite right about emissions reporting. In both these cases, if the information remained internal, even if it was communicated to the CEO, the highest office in the organization, the likelihood that the information would result in any positive change would have been non-existent. The truly big scandals may be sanctioned by organizational culture and business practices. ROI for unethical business practices can be a powerful incentive.
At the same time, whistleblowers take an enormous amount of personal risk. As U of A professor Tim Hannigan shared, the cultural norms of an organization have a strong impact on how individuals are treated and whistleblowers, who speak out against a group, are typically ostracized. What are the privacy implications for this software? Does someone using a whistleblowing app risk being identified?
Let’s assume the whistleblower goes external – to the media or to regulators – even then, it may be challenging to gain traction. Typically, the issue needs to be impactful enough to cause a scandal and fuel sustained public outrage, at which point, regulators may step in and issue fines, private lawsuits may be launched, or the company's stock price may be impacted. These "financial pain events" are what can cause a company to take action.
Yet, in a post-Trump era, the goalposts for what qualifies as a scandal have shifted. At the same time, we see a warp-speed media cycle where public outrage comes and goes in short order. There are few media organizations left with the resources to provide deep investigative analysis of the Watergate level. Thus, many unethical issues, outside of the most egregious acts of lawlessness, may not get much attention – not enough to elicit real change.
In light of all of this, why would someone risk being a whistleblower? There are issues of personal conscience, of course, and that’s important. However, doing the right thing can come at a high personal cost. While it’s easy to self-righteously say people need to speak up, it places an awful lot of risk on an individual: their livelihood, their family, and their physical safety. Even with increased legal protections and new technologies for reporting, as one of the legal experts bluntly put it – “snitches get stitches”. Retribution for speaking up and speaking out against an organization doesn’t seem to be something we can solve with an app.
More about whistleblowing and the GDPR - https://edps.europa.eu/data-protection/data-protection/reference-library/whistleblowing_en
Signal is an encrypted messaging app that can be used to organize and communicate with greater protection - https://signalfoundation.org/
Ethics Dialogue was co-organized by the Center for Digital Ethics and Policy (CDEP) at Loyola University Chicago, the AI4Society Signature Area at the University of Alberta and the Weizenbaum Institute for the Networked Society in Berlin.
Rita de Sousa Costa - TMT Associate at PLMJ, Portugal
Inês de Castro Ruivo - PLMJ TMT, Telecommunications, Media and Technology
Tim Hannigan - University of Alberta Assistant Professor - Department of Strategy, Entrepreneurship and Management