The "Godfathers of AI" and 2018 ACM Turing Award winners Geoffrey Hinton, Yann LeCun, and Yoshua Bengio shared a stage in New York on Sunday night at an event organized by the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020). The trio of researchers have made deep neural networks a critical component of computing, and in individual talks and a panel discussion they shared their views on the current challenges facing deep learning and where it should be heading.
Introduced in the mid-1980s, deep learning gained traction in the AI community in the early 2000s. The year 2012 saw the publication of the CVPR paper Multi-column Deep Neural Networks for Image Classification, which showed how max-pooling CNNs on GPUs could dramatically improve performance on many vision benchmarks, while a similar system introduced months later by Hinton and a University of Toronto team won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. These events are regarded by many as the beginning of a deep learning revolution that has transformed AI.
Deep learning has been applied to speech recognition, image classification, content understanding, self-driving, and much more. And according to LeCun, now Chief AI Scientist at Facebook, the current services offered by Facebook, Instagram, Google, and YouTube are all built around deep learning.
Deep learning does, however, have its detractors. Johns Hopkins University professor and computer vision pioneer Alan Yuille warned last year that deep learning's potential in computer vision has hit a bottleneck.
"We read a lot about the limitations of deep learning today, but most of those are actually limitations of supervised learning," LeCun explained in his talk. Supervised learning typically refers to learning with labelled data. LeCun told the New York audience that unsupervised learning without labels (or "self-supervised learning," as he prefers to call it) may be a game changer that ushers in AI's next revolution.
"This is an argument that Geoff [Hinton] has been making for decades. I was skeptical for a long time but changed my mind," said LeCun.
There are two approaches to object recognition. There's the good old-fashioned parts-based approach, with sensible modular representations, but this typically imposes a lot of hand engineering. And then there are convolutional neural nets (CNNs), which learn everything end to end. CNNs get a huge win by wiring in the fact that if a feature is good in one place, it's good somewhere else. But their approach to object recognition is very different from human perception.
This informed the first part of Hinton's talk, which he personally directed at LeCun: "It's about the problems with CNNs and why they're rubbish."
"CNNs are designed to cope with translations, but they're not so good at dealing with other effects of changing viewpoints, such as rotation and scaling. One obvious approach is to use 4D or 6D maps instead of 2D maps, but that is very expensive. And so CNNs are typically trained on many different viewpoints in order for them to be able to generalize across viewpoints. That's not very efficient," Hinton explained. "We'd like neural nets to generalize to new viewpoints effortlessly. If they learned to recognize something, then you make it 10 times as big and you rotate it 60 degrees, it shouldn't cause them any problem at all. We know computer graphics is like that, and we'd like to make neural nets more like that."
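Hinton's point about translation versus other viewpoint changes can be seen in a few lines of code. The minimal sketch below uses a hand-rolled cross-correlation as a stand-in for one convolutional layer: shifting the input shifts the feature map correspondingly, while rotating the input does not simply rotate it.

```python
import numpy as np

def correlate2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a conv layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 6x6 "image" with one bright pixel, and an asymmetric 3x3 filter.
image = np.zeros((6, 6))
image[1, 1] = 1.0
kernel = np.arange(9, dtype=float).reshape(3, 3)

# Translate the input by one pixel: the feature map translates with it.
shifted = np.roll(np.roll(image, 1, axis=0), 1, axis=1)
out, out_shifted = correlate2d(image, kernel), correlate2d(shifted, kernel)
translation_equivariant = np.allclose(out_shifted[1:, 1:], out[:-1, :-1])

# Rotate the input 90 degrees: the response is NOT a rotation of the
# original response, so the net must be shown rotated examples to cope.
out_rotated = correlate2d(np.rot90(image), kernel)
rotation_equivariant = np.allclose(out_rotated, np.rot90(out))
```

Here `translation_equivariant` comes out true and `rotation_equivariant` false, which is exactly the asymmetry Hinton is criticizing: convolution wires in generalization across positions but not across rotations or scales.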
Hinton believes the answer is capsules. A capsule is a group of neurons that learns to represent a familiar shape or part. Hinton says the idea is to build more structure into neural networks and hope that the extra structure helps them generalize better. Capsules are an attempt to correct the things that are wrong with CNNs.
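The linear-algebra intuition behind capsules can be illustrated with a toy example (this is only the intuition, not Hinton's actual Stacked Capsule Auto-encoder architecture): if a capsule stores a part's pose as a matrix, a viewpoint change is just a matrix multiply, and the part-whole geometric relation the network needs to recognize is exactly invariant to it.

```python
import numpy as np

def pose(theta, scale=1.0):
    """A 2D pose encoded as a scaled rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return scale * np.array([[c, -s], [s, c]])

whole = pose(0.3)                # pose of a whole object in the image
part_to_whole = pose(0.1, 0.5)   # fixed geometric relation: part -> whole
part = whole @ part_to_whole     # pose of the part, in the image

# Change viewpoint: rotate 60 degrees and scale 10x, as in Hinton's example.
view = pose(np.deg2rad(60), 10.0)
new_whole, new_part = view @ whole, view @ part

# The part-whole relation is unchanged under the new viewpoint:
recovered = np.linalg.inv(new_whole) @ new_part
invariant = np.allclose(recovered, part_to_whole)
```

Because poses compose linearly, a system that has learned the relation once needs no retraining for new viewpoints, which is the computer-graphics-like behavior Hinton wants from neural nets.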
The capsules Hinton introduced are Stacked Capsule Auto-encoders, which first appeared at NeurIPS 2019 and differ in many ways from the previous capsule versions presented at ICLR 2018 and NIPS 2017, which had used discriminative learning. Hinton said even at the time he knew this was a bad idea: "I always knew unsupervised learning was the right thing to do, so it was bad faith to do the previous models." The 2019 capsules use unsupervised learning.
LeCun noted that although supervised learning has proven successful in, for example, speech recognition and content understanding, it still requires large amounts of labelled samples. Reinforcement learning works great for games and in simulations, but since it requires too many trials, it's not really applicable in the real world.
The first challenge LeCun discussed was how models can be expected to learn more with fewer labels, fewer samples or fewer trials.
LeCun now supports the unsupervised learning (self-supervised learning) solution Hinton first proposed some 15 years ago. "Basically it's the idea of learning to represent the world before learning a task, and this is what babies do," LeCun explained, suggesting that figuring out how humans learn so quickly and efficiently may be the key that unlocks self-supervised learning's full potential going forward.
Self-supervised learning is largely responsible for the success of natural language processing (NLP) over the last year and a half or so. The idea is to show a system a piece of text, image, or video input, and train a model to predict the piece that's missing, for example to predict missing words in a text, which is what transformers and BERT-like language systems were built to do.
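A stripped-down version of that objective: delete a token and ask a predictor to restore it from context. The "model" below is just a bigram count table (real systems learn BERT-style transformer predictors instead), but the supervisory signal, the data predicting itself, is the same.

```python
from collections import Counter

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Self-supervision: the labels come from the corpus itself. Here we
# "train" a next-word predictor from simple bigram counts.
bigrams = Counter(zip(corpus, corpus[1:]))

def fill_in(prev):
    """Predict the masked word that follows `prev`."""
    candidates = {w: n for (p, w), n in bigrams.items() if p == prev}
    return max(candidates, key=candidates.get)

# Mask the word after "sat" in "the dog sat [MASK] the rug" and restore it.
prediction = fill_in("sat")
```

No human ever labelled this corpus; the missing word is its own training target, which is why such objectives scale to raw text.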
But the success of transformers and BERT et al. has not transferred to the image domain, because it turns out to be much more difficult to represent uncertainty in predictions over images or video than over text, since images and video are not discrete. It's practical to produce distributions over all the words in a dictionary, but it's hard to represent distributions over all possible video frames. And this is, in LeCun's view, the main technical problem we have to solve if we want to apply self-supervised learning to a wider variety of modalities such as video.
LeCun proposed that one solution may lie in latent variable energy-based models: "An energy-based model is kind of like a probabilistic model, except you don't normalize. And one way to train an energy-based model is to give low energy to samples that you observe and high energy to samples you do not observe."
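That training recipe can be sketched directly. The toy below is hypothetical (a tabular energy over binned 1D values, not a model from the talk): push energy down at observed samples, push it up at contrastive negatives, and the energy minimum lands where the data lives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data clusters near x = 2.0; the energy is a table over bins.
data = rng.normal(2.0, 0.3, size=1000)
edges = np.linspace(-5, 5, 21)          # 20 bins of width 0.5
energy = np.zeros(20)

def bin_of(x):
    return int(np.clip(np.digitize(x, edges) - 1, 0, 19))

lr = 0.01
for x_pos in data:
    x_neg = rng.uniform(-5, 5)          # a sample we did NOT observe
    energy[bin_of(x_pos)] -= lr         # low energy for observed samples
    energy[bin_of(x_neg)] += lr         # high energy for everything else

# The lowest-energy bin now sits where the data is concentrated.
mode = edges[np.argmin(energy)]
```

Note that no normalization happens anywhere; only relative energies matter, which is precisely what LeCun means by dropping the probabilistic model's requirement to normalize.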
In his talk, LeCun touched on two other challenges, opining that nobody currently seems to have a good answer to either of them, and saying he remains open to and looks forward to any possible ideas.
Yoshua Bengio, meanwhile, has shifted his focus to consciousness. Taking cues from cognitive neuroscience, he believes the time is ripe for ML to explore consciousness, which he says could bring new priors to help with systematic generalization. Ultimately, Bengio hopes such a research direction could allow deep learning to expand from "System 1" to "System 2," referring to a dichotomy introduced by Daniel Kahneman in his book Thinking, Fast and Slow. System 1 represents what current deep learning is very good at: intuitive, fast, automatic processing anchored in sensory perception. System 2, meanwhile, represents processing that is rational, sequential, slow, logical, conscious, and expressible with language.
Before diving into the valuable lessons that can be learned from consciousness, Bengio briefed the audience on cognitive neuroscience: "In the previous century, working on consciousness was seen as kind of taboo in many sciences, for all kinds of reasons. But fortunately this has changed, particularly in cognitive neuroscience, with the Global Workspace Theory of Baars and the recent work in this century by Dehaene, which really established these theories to explain a lot of the objective neuroscience observations."
Bengio likened conscious processing to a bottleneck and asked why that would be meaningful: "Why is it that the brain would have this kind of bottleneck, where information has to go through it, just a few elements to be broadcast to the rest of the brain? Why would we have a short-term memory that only contains six or seven elements? It doesn't make sense."
Bengio said the bottom line is to "get the magic out of consciousness," and proposed the consciousness prior, a new prior for learning representations of the high-level concepts human beings manipulate with language. The consciousness prior is inspired by cognitive neuroscience theories of consciousness, and it can be combined with other priors to help disentangle abstract factors from each other. "What this is saying is that at that level of representation, our knowledge is represented in this very sparse graph, where each of the dependencies, these factors, involves two, three, four or five entities, and that's it."
Consciousness can also provide inspiration for how to build models. "Agents are at a particular time at a particular place, and they do something and they have an effect," Bengio explained. "Eventually that effect could have consequences all over the universe, but it takes time. And so if we can build models of the world where we have the right abstractions, where we can pin down those changes to just one or a few variables, then we will be able to adapt to those changes, because we don't need as much data, as much observation, in order to figure out what has changed."
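Bengio's sparse-graph picture can be made concrete with a hypothetical toy knowledge base (my illustration, not code from the talk): each factor relates only a handful of named high-level variables, so an intervention that changes one mechanism leaves every other piece of knowledge intact.

```python
# Hypothetical sketch: high-level knowledge as a sparse factor graph.
# Each factor touches only a few variables, as in Bengio's description;
# most variable pairs share no factor at all.
factors = {
    "rain_wets_ground": ("rain", "ground_wet"),
    "sprinkler_wets_ground": ("sprinkler", "ground_wet"),
    "wet_ground_is_slippery": ("ground_wet", "slippery"),
    "dark_clouds_mean_rain": ("dark_clouds", "rain"),
}

variables = {v for scope in factors.values() for v in scope}

def touched_by_change(var):
    """Factors that must be re-learned if the mechanism for `var` changes."""
    return [name for name, scope in factors.items() if var in scope]

# Changing how sprinklers behave touches a single factor; the knowledge
# about rain, clouds, and slipperiness needs no new data at all.
affected = touched_by_change("sprinkler")
```

Because the change is pinned down to one factor, an agent adapting to it needs only enough observations to re-fit that factor, which is the data-efficiency argument Bengio is making.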
So what's required if deep learning is going to reach human-level intelligence? Bengio pointed to the missing pieces of the puzzle he has suggested previously.
In a panel discussion, Hinton, LeCun, and Bengio were asked how they reconcile their research approaches with colleagues committed to more traditional methods. Hinton had been conspicuously absent from some AAAI conferences, and hinted at why in responding: "The last time I submitted a paper to AAAI, I got the worst review I ever got. And it was mean. It said, 'Hinton has been working on this idea for seven years [vector representations] and nobody's interested. Time to move on.'"
Hinton spoke of his efforts to find common ground and move on: "Right now we're in a position where we should just say, let's forget the past and let's see if we can take the idea of doing gradient descent in great big systems of parameters. And let's see if we can take that idea, because that's really all we've discovered so far that really works. The fact that that works is amazing. And let's see if we can learn to do reasoning like that."
Author: Fangyu Cai & Yuan Yuan | Editor: Michael Sarazen
AAAI 2020 | What's Next for Deep Learning? Hinton, LeCun, and Bengio Share Their Visions - Synced