
“Glassholes” and toilet paper: Consumer attitudes about AI

Updated: Mar 23, 2022



Back in 2013, Google launched Google Glass. It was hailed at the time as a technology set to revolutionize how we interact with the world and usher in an era of ubiquitous computing. Google co-founder Sergey Brin described the vision for Google Glass in this TED talk.

Google Glass failed to find consumer acceptance. In fact, it created a backlash. The term “glasshole” came to define those who used the product, and Urban Dictionary equates the term with “stalker” or “creeper”.

Many articles have been written about why Google Glass failed. Some say it didn’t have a clear product-market fit, that the glasses looked “dorky”, and that the world clearly wasn’t ready to embrace always-on surveillance. Others claim it wasn’t so much a product failure as a victory for those concerned about privacy and human rights.

Product failures happen all the time. Google Glass reminds us that they can happen even to the best-resourced companies. Understanding consumer behaviour is essential for companies that want to avoid market rejection, which brings me to AI.


Consumer attitudes and AI

I’ve just finished reading two papers on consumer attitudes towards the adoption of AI. In both cases, the findings show that companies have some work to do if they wish to gain consumer acceptance and adoption of AI solutions. While the two papers are different, they share a similar subtext: people want to be treated like people. They want to be treated with empathy, dignity and a sense that they matter as individuals. To the degree that AI systems take these human aspects into account (or at least are positioned as such), they receive a more positive response from consumers.


Human or Robot?

The first paper, Human or Robot? Consumer Responses to Radical Cognitive Enhancement Products, explores consumer attitudes towards the use of technology that enhances mental abilities. The researchers found that consumers who used technology to enhance their cognitive capabilities beyond normal levels were seen as less human (dehumanized); however, if the reasons for using the technology were seen as “prosocial” (helping others), this protected against the negative impression (Castelo, Schmitt & Sarvary, 2019). The importance of prosocial positioning was illustrated through a Facebook advertising campaign, which demonstrated greater acceptance when the reasons for adopting the technology were to benefit others rather than to serve selfish intentions.

This study was interesting to consider in light of how Google Glass was positioned during its launch. Having Sergey Brin (not exactly Mr. Empathetic) demo the product to an elite crowd at TED while playing a video of how useful it can be as a hands-free device during skydiving probably wasn’t the best positioning strategy! Glass 2.0 has seen some success as an enterprise product used to enable workers in the manufacturing space. You could argue that this new use is seen as more prosocial and helpful (i.e., it helps workers do their jobs).


Consumers don’t trust medical AI

The title of the second paper, Resistance to Medical Artificial Intelligence, pretty much sums up the findings. The paper outlined a series of studies which found that people think AI is fine for addressing averages (or average people), yet, because they see themselves as “special snowflakes”, they don’t trust AI to address their unique medical situation, a phenomenon the paper calls “uniqueness neglect” (Longoni, Bonezzi & Morewedge, 2019, p. 631). The impact of uniqueness neglect was profound: “Participants were resistant to medical AI even when the performance of AI providers was explicitly specified to be superior to that of human providers” (Longoni et al., 2019, p. 636). The simple logic of who is better, an AI or a human doctor, was not enough to counter consumer perceptions, and even led to negative utility when factors like accuracy and cost were controlled (Longoni et al., 2019, p. 638).

If AI can provide personalized healthcare, this concern over “uniqueness neglect” could be addressed. However, the authors also note that consumer behaviour is complex and that this is just one of many factors that may impact consumer acceptance of AI in medicine (Longoni et al., 2019).

This study reminded me of a tweet by Geoffrey Hinton asking people whether, if they had cancer, they would prefer to be treated by a more accurate AI doctor that was an unexplainable “black box” or by a less accurate human doctor. The responses and justifications make for an interesting read.


Doctors’ perceptions of using AI

In addition to medical consumers’ reluctance, the literature indicates that while statistical models outperform doctors, doctors tend to prefer to rely on their own judgement and may be seen as less competent by colleagues if they use decision support systems (Longoni et al., 2019). However, the studies about doctors’ preferences cited in this paper are older, ranging from 1996 to 2013. It would be interesting to see whether attitudes have changed more recently, given both improvements in AI tool sets and a new generation of doctors in the workforce who may be more inclined to use decision support systems and other AI technology.


Consumer fears and irrationality (aka the part about toilet paper)

I started writing this post two weeks ago, before COVID-19 had set in motion a wave of panic buying. All of a sudden, there were media and social media reports of empty shelves where toilet paper used to be stocked. I was reminded of the Seinfeld episode where Elaine is told there’s not a square to spare! Panic begets more panic: consumers rushed out to stock up, further depleting supplies.

We’ve also seen reports of people hoarding hand sanitizer and other cleaning supplies, hoping to resell them and profit off those desperate enough to pay exponentially more for these items. Online platforms, which definitely want to be on the right side of both the moral outrage and the possible legal consequences aimed at those opportunists, have cracked down on sellers. These platforms are very likely using AI algorithms to police keywords.

This past week has reminded me just how much context counts when it comes to consumer behaviour and what is considered ethical during a crisis.

Given that, I wonder how consumer attitudes towards adopting AI or genetic technologies and becoming transhuman might be different in a post-COVID-19 world. Having confronted how frail and vulnerable we really are as humans, is there more willingness now to take steps to radically bolster ourselves, physically or mentally, beyond what is considered normal? To use technology to become superhuman? I’m especially curious to understand how younger generations might look at these issues.

I'm also interested in ways we can empower doctors and other clinicians to have the best possible tools to do their jobs, which I believe will increasingly include AI systems. I'm very thankful for the many healthcare professionals who are undertaking heroic efforts and putting themselves at risk in order to save lives during this pandemic.

Thank you to Dr. Noah Castelo, Assistant Professor of Marketing at the University of Alberta and lead author of the Human or Robot? paper, for taking the time to speak with me and for pointing me towards these resources.


By Katrina Ingram


Sign up for our newsletter to have new blog posts and other updates delivered to you each month!

Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com © 2020 Ethically Aligned AI Inc. All rights reserved.

________________

References

Castelo, N., Schmitt, B., & Sarvary, M. (2019). Human or Robot? Consumer Responses to Radical Cognitive Enhancement Products. Journal of the Association for Consumer Research, 4(3), 217–230. https://doi.org/10.1086/703462

Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to Medical Artificial Intelligence. The Journal of Consumer Research, 46(4), 629–650. https://doi.org/10.1093/jcr/ucz013



