DeepMind and OpenAI are two companies working on generative AI “chatbot” technologies. Most people will have heard of OpenAI’s ChatGPT, which was released to the world in November 2022 and has been making headlines ever since. Sparrow is a very similar project from DeepMind, which published a paper about its work in September 2022 - months ahead of OpenAI.
The reason why you know about one and not the other relates to ethical choices.*
A TIME magazine interview with Demis Hassabis, CEO of DeepMind, says that DeepMind's internal ethics board debated whether to publish "a blueprint for a faster engine" that would more efficiently train a model. They did not want that blueprint to become a guide for unscrupulous actors. However, DeepMind went ahead with the publication in the spring of 2022 with the rationale that if they didn't do it, someone else would. It seems this "someone else will do it, so it might as well be me" mentality was also embraced by DeepMind's competitor, OpenAI, in deciding to release ChatGPT to the public. The TIME piece also hints at "freeloaders" and "non-contributors" - those who take others' research for their own gain. Hassabis declines to name names, but one has to wonder - is that a dig at OpenAI?
DeepMind vs OpenAI
DeepMind is a UK-based company that was co-founded in 2010 by Demis Hassabis and funded in part by Peter Thiel and Elon Musk. It has an academic sensibility and positions itself as a research-focused company committed to the pursuit of artificial general intelligence. DeepMind has made headlines for many AI breakthroughs in games, notably AlphaGo, and also in health sciences research, notably AlphaFold. In 2014, it was acquired by Google.
OpenAI launched in December 2015 and was also funded by Thiel and Musk. Some articles have suggested it was a response to DeepMind’s acquisition by Google.
OpenAI was formed as a not-for-profit. It held the same goal of pursuing artificial general intelligence, but the founders felt that this important work should be unsullied by capitalism, not controlled by any one entity, and shared openly (hence the name - OpenAI). Given those involved in founding OpenAI, it’s hard not to see the irony here. But things changed pretty quickly when it became apparent that they needed A LOT of resources - billions of dollars in computing power - to pursue their artificial general intelligence dream. That’s when the not-for-profit restructured into a “capped-profit” company and landed a $1B investment from Microsoft.
An early 2020 piece in MIT Technology Review by Karen Hao hints at what they were working on that required this kind of investment: “One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources.”
Sam Altman took the helm of OpenAI as CEO in 2019. Altman could be considered the ultimate Silicon Valley insider. He’s a serial entrepreneur who inherited the helm of Y Combinator from its founder Paul Graham. While both Altman and Hassabis are tech-bros, white males and part of the privileged AI elite, their styles are very different. Given the backgrounds of DeepMind’s and OpenAI’s respective CEOs - a British academic and an American entrepreneur - it’s not surprising that OpenAI and ChatGPT are currently upstaging DeepMind and Sparrow.
DeepMind took a more measured approach by releasing a paper and not launching a beta version of its product to the public. OpenAI took the more traditional “move fast and break things” Silicon Valley approach and has garnered all the attention as a result.** From an ethics perspective, it’s pretty disappointing but not surprising.
OpenAI was supposed to value ethics as well. But, back to Karen Hao’s piece: “There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.”
A better mousetrap
Sparrow might be the better (though still ethically fraught) version of ChatGPT. Sparrow apparently addresses these ethical issues:
It provides sources - Sparrow cites the source material that informs its generated responses
Its methodology is more transparent - DeepMind published this paper. OpenAI published a less comprehensive blog post.
It has better rules - Sparrow apparently has more rules built into it as guardrails
AI Coffee Break with Letitia has published an in-depth look at Sparrow vs ChatGPT - well worth a watch.
Chatbots, the only big thing in AI?
The two companies in the Western world with the most resources devoted to artificial general intelligence research have landed on almost identical projects as the focus of their work.
Technically, we continue to narrow the field of AI by focusing on similar approaches - doubling down on ever larger models fueled by massive amounts of questionably acquired data, remixed and served back to us as intelligence.
This narrowing of the field should be of concern to those who care about AI research. It means there are many other paths not being taken and ideas not being explored as much of the funding accrues to this one pathway. This appears to be a moment of closure, a phase in the social construction of technology where the problem is seen as being solved and we lose design flexibility in favour of stability.
Stability, however, is also what is needed for commercialization, and that may be the actual driver behind both of these large-scale chatbot efforts. Both DeepMind and OpenAI answer to publicly traded overlords - namely Google and Microsoft. The latter has recently committed a reported $10B to OpenAI, a deal which also serves to shore up dominance for the Microsoft Azure platform. That old expression about making money during a gold rush by being in the “picks and shovels” business comes to mind. Cloud infrastructure is the necessary backbone for these large-scale models. There's also chatter about how ChatGPT might help resurrect Bing. Meanwhile, DeepMind was becoming a bit of a money pit for Google, so perhaps there was pressure to churn out something commercially viable.
At the moment, the first mover advantage appears to be with ChatGPT, OpenAI and Microsoft. Yet, who will win the war of the chatbots, never mind the bigger quest for artificial general intelligence, remains to be seen. Can any of this be done ethically? That’s also an open question.
By Katrina Ingram, CEO, Ethically Aligned AI
*I've been critical of both DeepMind and Google in other presentations and blog posts. However, in this particular context, I feel they did make a better choice.
** This blog post by Alberto Romero, an analyst at CambrianAI, suggests that OpenAI may have used the window between DeepMind’s publication of their research and the launch of ChatGPT to quickly train this system, upstaging DeepMind. I can’t find other evidence to support this but I think it’s an interesting theory.
I'm also aware this whole situation isn't exactly an example to help sell ethics since the less ethical choice has been rewarded, at least in the short run. Despite that, I firmly believe ethical behaviour does pay off in the longer run!
Sign up for our newsletter to have new blog posts and other updates delivered to you each month!
Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com
© 2023 Ethically Aligned AI Inc. All rights reserved.