
Sentient chatbots, pay equity and focusing on what matters



The internet’s been abuzz for the past few weeks with news of Google engineer Blake Lemoine, who claims that the chatbot he’s been working on is sentient. LaMDA, a large language model trained on dialogue, has convinced Lemoine that it shows signs of self-awareness and perhaps even a soul. Here’s the transcript Lemoine published if you want to take a look and form your own conclusions. My take aligns with the many who have written about the ELIZA effect and how these systems can be remarkably good at mimicking what might be perceived as feelings.


One of the first things I decided when I started working in the field of AI ethics is that I was going to focus on issues impacting people right now.

Yes, there are the possible “existential risks” of AI: the singularity, dystopian killer robots and all that comes with Artificial General Intelligence. I’m not saying these aren’t important to consider. However, I’ve found that a focus on these future issues distracts people from acknowledging and addressing the current harms perpetuated by today’s narrow AI systems. While today’s systems are boring in comparison to The Terminator, they have real and immediate impacts on people.


A week prior to the sentient chatbot hubbub, a group of former Google employees won an important victory with the settlement of a $118M pay equity lawsuit. The case, filed in 2017, covered 15,500 women in 236 job titles who claimed that Google had been systemically underpaying women. The potential gap was estimated at as much as $17,000/year in total compensation, including a mix of base pay, bonuses and stock.


Employment practices that led to and reinforced the pay inequities included:

  • Tying starting pay to an employee’s prior role, which reinforces existing gender pay biases

  • Disproportionately hiring women into the lower levels of a pay band for a job family, despite experience and education equivalent to that of their male counterparts

The independent analysis that supported these claims showed only a 1 in 100 likelihood of these disparities occurring by chance rather than by direct discrimination.


Heidi Lamar, one of the plaintiffs, was employed as a preschool teacher at Google’s children’s centre. She was paid $18.51/hr versus the $21/hr paid to a male counterpart, even though she held a master’s in childhood education and he did not. Lamar’s story caught my attention because it highlights the wide range of roles that support the creation of technology. Not everyone who works for Google is a software engineer.


Pay inequity is a REAL harm. It’s impacting people TODAY.

While this settlement is a victory, technology companies - some of the best-resourced companies in the world - need to do better. Increasingly, AI systems are implicated in amplifying workplace inequity through discriminatory hiring practices, employee monitoring and behavioural nudging. Again, these are real harms attached to real people.


Let’s put our focus on the issues that matter most.


By Katrina Ingram, CEO, Ethically Aligned AI


________


Sign up for our newsletter to have new blog posts and other updates delivered to you each month!


Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com


© 2022 Ethically Aligned AI Inc. All rights reserved.



