
In Too Deep: Are we missing the bigger picture for AI?

Updated: Mar 23, 2022


If I were writing an equation to summarize the general state of artificial intelligence today, it might look like this: AI = ML = DNN

Artificial intelligence (AI) is increasingly framed as, or equated with, machine learning (ML), which in turn is being equated with deep neural networks (DNN). That’s the story driving media headlines, investment and popular opinion. Yet, while the accomplishments made in this area are impressive and an important piece of the AI story, they’re not the whole story. In fact, the focus on machine learning techniques, especially deep learning, may be holding back progress in the overall field of artificial intelligence. It’s also causing some fundamental problems with the AI systems we are deploying right now.

That’s the premise outlined in Gary Marcus’ new book, co-written with Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust. Marcus, a psychologist working in the fields of cognitive science and artificial intelligence, is a protégé of Steven Pinker and a proponent of combining symbolic approaches with deep learning techniques. He argues that the way forward to trustworthy AI requires more focus on instilling common-sense reasoning into artificial intelligence.

Before delving into the book, a little history to set the stage…

Back in the early days of artificial intelligence, the focus was on symbolic reasoning. Pioneers of the field, like Marvin Minsky, were interested in determining how machines could learn common-sense reasoning. A psychologist named Frank Rosenblatt took a different approach. In his work for the US Navy, Rosenblatt developed the Perceptron, “a machine capable of perceiving, recognizing and identifying its surroundings without any human training or control”. (Lefkowitz) In hindsight, he paved the way for neural network techniques in machine learning. But there were problems with the Perceptron, which Minsky and his colleague Seymour Papert took great pains to point out in their critique, later published as the book Perceptrons. Their analysis presented a scathing review of Rosenblatt’s work, in which the authors rebuilt the Perceptron machine for the purpose of illustrating its limitations. (Olazaran) As a result, hundreds of thousands of dollars in funding dried up. Sadly, Rosenblatt died in a freak boating accident, and the idea of neural networks lay dormant for decades. Minsky’s work had essentially shut down support for neural networks by casting them as impossible. (Olazaran) Stalwarts like Geoffrey Hinton and his protégé Yann LeCun carried on in the spirit of Rosenblatt, but it wasn’t until major breakthroughs in computing power enabled the “deep” part of neural networks to become a reality that things really took off.

If the Minsky vs Rosenblatt rivalry sounds a little “high-school”, it may in fact have started there: Minsky was a grade behind Rosenblatt at the Bronx High School of Science. (Lefkowitz) It’s also interesting that, in this scenario, Minsky was the mathematician and Rosenblatt was the psychologist. This early history seems to have set the stage for the rivalry that continues in the field today. Marcus goes into a book’s worth of reasons why the field of AI needs a “reboot”, but here are some of the big themes:

It’s reductionist.

Artificial intelligence is a pretty big playing field. Machine learning (and especially deep learning) is where all the recent action has taken place, and it has garnered an enormous share of the investment and attention. However, it doesn’t represent the entire field of AI, and Marcus is concerned about unfunded or under-funded areas that he believes are critical to advancing the field and someday reaching artificial general intelligence.

It lacks common sense.

Current AI systems based on deep learning techniques don’t provide a means for reasoning. They’re largely based on statistical probabilities, which demonstrate correlation but not causation. The book cites some crazy examples of this, like the correlation between the number of people dying by becoming entangled in their bed sheets and the per capita consumption of cheese (p. 140). Check out this site for more “spurious correlations”. Marcus also goes into a number of examples around reading and the current inability of deep-learning systems to make even basic inferential decisions from children’s stories and other simple texts.
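To make the correlation-versus-causation point concrete, here’s a minimal Python sketch. The numbers are made up purely for illustration, but they show how two unrelated quantities that both happen to trend upward over time end up almost perfectly correlated:

# Two made-up series with no causal link, both trending upward over time.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2010)
cheese_per_capita = 29.8 + 0.3 * (years - 2000) + rng.normal(0, 0.1, years.size)   # hypothetical lbs
bedsheet_deaths = 327.0 + 15.0 * (years - 2000) + rng.normal(0, 5.0, years.size)   # hypothetical count

# The Pearson correlation comes out close to 1.0, yet neither quantity causes the other;
# a purely statistical learner has no way of knowing that.
r = np.corrcoef(cheese_per_capita, bedsheet_deaths)[0, 1]
print(round(r, 3))

A system that only finds patterns will happily treat a coincidence like this as signal, which is exactly the gap that common-sense reasoning is meant to fill.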

It’s data hungry.

Current techniques need vast data sets, in some cases millions of examples, in order to perform within acceptable margins of error. This has downstream effects, spawning concerns over data privacy, historically biased data sets that amplify inequities, and the environmental impact of the enormous amount of computing power needed to process these data sets. There are also the hidden costs of sorting and labeling data, which is typically done by off-shore, low-wage, gig economy workers.

It’s brittle.

A recent Nature article* echoes Marcus’s concerns about how easily neural networks can be broken. With the shift of a few pixels, the addition of some stickers to a sign, or simply rotating an object into another position, deep neural networks can make inexplicable (and, from a human perspective, really dumb) mistakes. This could have catastrophic effects in fields such as medicine, where extra noise in an image might result in a misdiagnosis, or in an autonomous vehicle system that misreads a stop sign and keeps driving through an intersection.
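For a rough sense of how a tiny, targeted change can flip a decision, here’s a toy Python sketch. It uses a simple linear classifier with made-up random weights rather than a real deep network, so it only gestures at the idea behind adversarial perturbations, but the mechanics are similar:

# Toy adversarial perturbation on a linear classifier (not a real deep network).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)    # stand-in for the learned weights of a 28x28 "image" classifier
x = rng.normal(size=784)    # stand-in for an input image

score = x @ w               # the sign of this score decides the class
# Nudge every "pixel" by the same tiny amount, each in the direction that most reduces |score|.
epsilon = 1.5 * abs(score) / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w) * np.sign(score)

print("per-pixel change :", round(float(epsilon), 4))   # a very small number
print("original class   :", int(score > 0))
print("perturbed class  :", int(x_adv @ w > 0))         # flipped, despite a nearly identical input

Deep networks are far more complex than this, but the same style of small, carefully chosen perturbation is what the Nature article describes fooling them.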

It’s not explainable.

Part of the power of deep neural networks is that they learn on their own in ways that even their creators can’t fully grasp. It’s a black box. That doesn’t instill a lot of confidence and trust in using AI in high stakes environments. It also poses problems from a legal standpoint in assigning responsibility and accountability. Recently, even deep learning pioneers like Yoshua Bengio have spoken about this problem and the need to work on understanding the “why”.

So, what should happen next? There seems to be acknowledgement from the deep learning camp that the problems around reasoning need to be addressed. Is there room for an “and” conversation within AI, for deep learning “and” other techniques along the lines of symbolic reasoning to coexist? More importantly, is there an appetite to fund those other areas? It feels reminiscent of the Minsky vs Rosenblatt situation.

*The title of this article was changed from “Deep Trouble for Deep Learning” in the print (PDF) edition to “Why deep-learning AIs are so easy to fool” in the version currently displayed online. I wonder what was at play in making this change - did someone with clout object to the first framing of the story?

If you don’t have time to read the book and you like podcasts, here’s a short HBR podcast with Gary Marcus.


By Katrina Ingram



Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com © 2019 Ethically Aligned AI Inc. All rights reserved.

________________

Heaven, D. (2019). Why deep-learning AIs are so easy to fool. Nature, 574(7777), 163–166. https://doi.org/10.1038/d41586-019-03013-5

Knight, W. (2019, October 8). An AI pioneer wants his algorithms to understand the “why”. Wired. Retrieved from https://www.wired.com/story/ai-pioneer-algorithms-understand-why/

Lefkowitz, M. (2019, September 25). Professor’s perceptron paved the way for AI – 60 years too soon. Cornell Chronicle. Retrieved November 3, 2019, from https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soon

Marcus, G. and Davis, E. (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. New York, NY: Penguin Random House.

Olazaran, M. (1996). A Sociological Study of the Official History of the Perceptrons Controversy. Social Studies of Science, 26(3), 611-659. Retrieved from http://www.jstor.org/stable/285702


