It’s easier to sell a dream than it is reality. This headline kicks off an in-depth piece in Canadian Business by Edmonton-based writer Omar Mouallem examining the boom and bust of Canada’s cannabis industry. This is a story I know well because I was part of it. During the first year of legalization, while working on my master’s research in AI ethics, I was also working at a startup in the cannabis industry. This quote in the article caught my attention…
“In the capital markets, it’s easier to sell a dream than it is to sell reality,” says independent analyst Scott Willis. “You get a higher valuation for your company if you just promise things that haven’t happened yet—because nobody can fact-check you.”
Doesn’t this sound like it could also apply to our current AI narrative?
Media headlines regularly amplify the idea that AI is outperforming humans at one task or another. One hype-inducing headline about AI “mastering language” recently became the focus of linguistics professor Emily Bender’s critical inspection. In a detailed response to the New York Times article, she points out the many ways in which the original piece failed to question assumptions made by the subject of the interview (OpenAI), how it framed alternate perspectives (including quotes from her) as those of “skeptics” and how it strayed at points into straight-up hero-worship and idolization of OpenAI’s leadership. This framing of AI as being in the realm of the magical and mystical, and on the cusp of superintelligence, is part of what is fueling the dream and attracting an abundance of investment.
There’s a level of economic fear of missing out fueling global investment and deployment of AI. Investors benefit from the hype-fueled story that surrounds AI, but they are also attracted to its allure and afraid of the ramifications of not getting on board now. In other words, many of those selling the dream actually believe it. A recent piece in TechRepublic digs into this further, citing a 30X increase in AI patents between 2015 and 2021, even as some of the technology’s basic shortcomings remain unaddressed. VC investor and start-up founder David Beyer offers a cautionary perspective:
“Too many businesses now are pitching AI almost as though it’s batteries included [which may] potentially lead to over-investment in things that over-promise. Then when they under-deliver, it has a deflationary effect on people’s attitudes toward the space.”
Part of the reason it’s been challenging to regulate AI is that governments are entangled in advancing this economic story as a pathway to prosperity. As legal scholar Teresa Scassa noted in a recent talk, “AI has been identified as a key factor in economic growth (globally).” This is part of what makes it challenging to foster the political will to apply regulatory brakes, lest we slow down innovation and growth.
Perhaps some of that more balanced analysis will also be driven by economic considerations. IEEE recently published a piece that called into question the ROI and economic sustainability of techniques like deep learning. Current methods rely on ever-increasing computing power and the ongoing relevance of Moore’s Law, but will this hold indefinitely? Some are questioning this logic while others seem to be doubling down on it.
The cannabis story is not an exact parallel for the current AI narrative. The pathway to legal cannabis was subject to a lot of regulation and, in the opinion of the industry, this is part of why it failed to thrive. Cannabis also faced unfair competition from illegal growers and sellers. The AI technology space, in contrast, is a wild west with little regulatory oversight, which feels like a misplaced risk given its relative impact on society at large.
However, there are some comparable elements in these two different sectors. We have super-charged narratives of economic growth driving massive amounts of capital and creating a lot of hype and expectations. What happens when the sector over-promises and under-delivers? We’ve seen how that story ends.
For more on Emily Bender, check out her academic work and this paper in particular: “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”
David Beyer’s interview “The Dirty Secret of Machine Learning” provides some interesting insight into how AI automation relates to various labour markets and the relationship between labour costs and incentives to automate. I don’t agree with everything he says in this talk; for example, we have VERY different perspectives on Elon Musk! I also think it’s interesting to see how things have shifted since 2017, when this talk took place, and where we actually are today in terms of transportation and radiology with respect to AI.
By Katrina Ingram, CEO, Ethically Aligned AI
Sign up for our newsletter to have new blog posts and other updates delivered to you each month!
Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com
© 2022 Ethically Aligned AI Inc. All rights reserved.