Scientists think about 40% of happiness is genetic while the rest comes down to 3 main components – Insider

Some people seem to be born with a happier, carefree disposition than others, and research indicates that yes some of your sense of well-being may be in your genes. But only partly.

Your genes make up an estimated 40% of your ability to be happy, says psychotherapist Susan Zinn of Susan Zinn Therapy in Santa Monica, California.

But that doesn't mean that if you weren't born with certain genes, you're destined to be unhappy. Zinn says that "it's completely possible to rewire our brains for happiness," because the other 60% of happiness comes down to lifestyle and other environmental factors.

Learn more about how your genetic makeup contributes to your life satisfaction and how you can increase feelings of happiness and well-being regardless of what your genetic sequence might say about you.

According to Zinn, happiness is typically determined by three main components.

Research indicates that we can inherit many traits including optimism, self-esteem, and happiness. So by that logic, yes, there are genes that may predispose you to a happier disposition.

For example, a 2011 study found promising evidence that people with a certain variant of the serotonin transporter gene, a region known as 5-HTTLPR, reported higher life satisfaction.

And a landmark study in 2016 that formally linked happiness to genetics involved the DNA of nearly 300,000 people. The researchers pinpointed three specific genetic variants associated with well-being. But they also found that these genetic variations weren't the only factor. An interplay of genetics and environment also contributed to happiness.

Despite your genetic makeup, there are ways you can learn to be happier, even in difficult times. Other traits, such as resilience, can be cultivated over time.

"You have a choice," Zinn says. "It's no different than deciding what to wear or what food to order. When it comes to happiness, there's a lot we can do about it."

One way to achieve a happier state is to let go of a quest for perfectionism that focuses only on the end goal of success, Zinn says. Linking happiness with perfectionism and success is common in American culture, but it leads you to concentrate on the summit of what you want to achieve rather than the journey of what happens along the way.

Here are some other practical ways to choose happiness:

Although research suggests that happiness is inherited to some extent, you're not limited by your DNA. The ability to feel happy takes practice and can be achieved with the right mindset.

Volunteering, exercise, nature, and attention to gratitude practices are just a few things you can do to increase your sense of life satisfaction, well-being, purpose, and ultimately, happiness.


Podcast: Polymerase chain reaction, the 'transformative' tool that sparked a genetics revolution – Genetic Literacy Project

Geneticist Dr. Kat Arney revisits the story and the characters behind one of the most transformative and ubiquitous techniques in modern molecular biology: the polymerase chain reaction (PCR), on the latest episode of the Genetics Unzipped podcast from the Genetics Society.

Anyone who has worked with DNA in the laboratory is undoubtedly familiar with PCR. Invented in 1985, PCR is an indispensable molecular biology tool that can replicate any stretch of DNA, copying it billions of times in a matter of hours, providing enough DNA to use for applications like forensics, genetic testing, ancient DNA analysis or medical diagnostics.
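The "billions of copies in a matter of hours" figure follows from simple doubling arithmetic: under ideal conditions, each thermal cycle doubles the number of DNA copies. A minimal illustrative sketch (the function name is mine, not from the article):

```python
# Illustrative sketch: ideal PCR doubles the DNA copy number each
# thermal cycle, so the copy count grows as initial_copies * 2**cycles.

def pcr_copies(initial_copies: int, cycles: int) -> int:
    """Number of DNA copies after a given number of ideal PCR cycles."""
    return initial_copies * 2 ** cycles

# Starting from a single template molecule, a typical 30-cycle run
# (a couple of hours on a thermocycler) yields over a billion copies.
print(pcr_copies(1, 30))  # 1073741824, i.e. ~1.07 billion
```

In practice amplification efficiency is below 100% and plateaus in later cycles, so real yields fall short of this ideal exponential, but the doubling model explains why so few cycles produce so much DNA.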

It's hard to overstate the transformation that PCR brought to the world of molecular biology and biomedical research. Suddenly, researchers could amplify and study DNA in a way that had been simply impossible before, kickstarting the genetics revolution that's still going strong today.

So where did this revolutionary technology come from? Officially, PCR was invented in 1985 by a colorful character named Kary Mullis, who won a Nobel Prize for the discovery. But, as we'll see, all the components of PCR were in place by the early 1980s thanks to the work of scientists like Arthur Kornberg and Har Gobind Khorana; it just took a creative leap to assemble them into one blockbusting technique.

Then, the discovery of Thermus aquaticus in the hot springs of Yellowstone National Park by Thomas Brock in the 1960s, the isolation of the thermostable Taq polymerase from that bacterium in 1976 by Alice Chien and John Trela from the University of Cincinnati, and the subsequent invention of automatic thermocyclers paved the way for the simple, one-step PCR process that has transformed laboratories across the world.

Full show notes, transcript, music credits and references online at GeneticsUnzipped.com.

Genetics Unzipped is the podcast from the UK Genetics Society, presented by award-winning science communicator and biologist Kat Arney and produced by First Create the Media. Follow Kat on Twitter @Kat_Arney, Genetics Unzipped @geneticsunzip, and the Genetics Society at @GenSocUK

Listen to Genetics Unzipped on Apple podcasts (iTunes), Spotify, or wherever you get your podcasts.


Largest Study To-Date Focused on Undiagnosed Genetic Disease Patients Reveals That Bionano’s Optical Genome Mapping Technology Can Diagnose…

SAN DIEGO, Nov. 05, 2020 (GLOBE NEWSWIRE) -- Bionano Genomics, Inc. (Nasdaq: BNGO) announced the publication of a study led by scientists and clinicians from the Institute for Human Genetics and the Benioff Children's Hospital at the University of California, San Francisco (UCSF) that evaluated the ability of Bionano's optical genome mapping technology and another genome analysis method to diagnose children with genetic conditions who previously went undiagnosed by the standard of care methods alone. Of the 50 children in the study, the optical genome mapping results were sufficient to definitively diagnose 6 patients (or 12%) and, for another 10 patients (or 20%), the Bionano data revealed candidate pathogenic variants. Upon further analysis, it is expected that an additional 3 patients could be diagnosed with the Bionano data, bringing the total of definitively diagnosed patients to 9 (or 18%).
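The reported percentages follow directly from the 50-patient cohort size; a quick sketch checking the arithmetic (the function name is mine, not from the study):

```python
# Illustrative check of the study's diagnostic-yield figures: of 50
# previously undiagnosed children, 6 received a definitive diagnosis,
# 10 had candidate pathogenic variants, and 3 more are expected to be
# confirmed on further analysis.

def yield_pct(n_patients: int, cohort: int = 50) -> float:
    """Diagnostic yield as a percentage of the cohort."""
    return 100 * n_patients / cohort

print(yield_pct(6))      # 12.0 -> definitive diagnoses
print(yield_pct(10))     # 20.0 -> candidate pathogenic variants
print(yield_pct(6 + 3))  # 18.0 -> expected total definitive diagnoses
```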

Erik Holmlin, Ph.D., CEO of Bionano Genomics, commented: "Increasing the number of patients who receive a definitive molecular diagnosis is the driving force behind much of the development of new diagnostic technologies. Every major change in medical guidelines connected to introducing novel methods has been driven by the ability of new methods to diagnose more patients than the previously existing standard of care. This study by the UCSF team shows that Bionano's optical genome mapping can potentially bring another such leap to the clinic by diagnosing many more patients than what existing chromosomal microarray (CMA) and whole exome sequencing (WES) can. Several studies released this year have shown that Saphyr can detect all clinically relevant variants identified by karyotyping, microarray and FISH in both leukemias and genetic disease cases. This UCSF study now shows in the largest cohort analyzed to date that Bionano's optical genome mapping diagnoses more patients than the traditional methods. We believe the increase in diagnosis over conventional methods can be a significant factor in Saphyr gaining widespread adoption as a clinical tool for genetic disease diagnosis and next-generation cytogenomics."

As described in the publication, the UCSF team performed full genome analysis by combining optical genome mapping with Bionano technology and linked-read sequencing on 50 undiagnosed patients with a variety of rare genetic diseases and their parents to determine if this full genome analysis method could help solve cases that had not been diagnosed with previous testing. Of the 50 cases, 42 were previously analyzed by CMA, the first tier medical test for genetic disease cases, and 23 had previously been analyzed with commercial trio whole exome sequencing, and no pathogenic or likely pathogenic variants were identified by these methods.

Bionano's optical genome mapping technology identified a number of pathogenic variants unidentified by CMA and undetectable by WES, including duplications and deletions that were too small to be identified by CMA, or occurred in regions of the genome not typically covered by CMA or WES. Of the additional 7 patients with variations considered to be candidates for pathogenic variants, the findings included deletions, duplications, and inversions. Before concluding that these variants are sufficient to diagnose the patients, further analysis is required, since these variants had not previously been reported in patients with similar disease.

The publication is available at: https://www.medrxiv.org/content/10.1101/2020.10.22.20216531v1
A recording of the webinar is available at: https://bionanogenomics.com/webinars/optical-mapping-in-rare-genetic-disease-diagnosis/

About Bionano Genomics

Bionano is a genome analysis company providing tools and services based on its Saphyr system to scientists and clinicians conducting genetic research and patient testing, and providing diagnostic testing for those with autism spectrum disorder (ASD) and other neurodevelopmental disabilities through its Lineagen business. Bionano's Saphyr system is a platform for ultra-sensitive and ultra-specific structural variation detection that enables researchers and clinicians to accelerate the search for new diagnostics and therapeutic targets and to streamline the study of changes in chromosomes, which is known as cytogenetics. The Saphyr system comprises an instrument, chip consumables, reagents and a suite of data analysis tools; Bionano also offers genome analysis services that provide access to data generated by the Saphyr system for researchers who prefer not to adopt the Saphyr system in their labs. Lineagen has been providing genetic testing services to families and their healthcare providers for over nine years and has performed over 65,000 tests for those with neurodevelopmental concerns. For more information, visit www.bionanogenomics.com or http://www.lineagen.com.

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Words such as "may," "will," "expect," "plan," "anticipate," "estimate," "intend" and similar expressions (as well as other words or expressions referencing future events, conditions or circumstances) convey uncertainty of future events or outcomes and are intended to identify these forward-looking statements. Forward-looking statements include statements regarding our intentions, beliefs, projections, outlook, analyses or current expectations concerning, among other things: the contribution of Bionano's technology to the diagnosis of more genetic disease patients when compared to traditional standard of care methods; the capabilities of Bionano's technology in comparison to other genome analysis technologies; our expectations regarding the adoption of Saphyr as a clinical tool for genetic disease diagnosis and next-generation cytogenomics; and Bionano's strategic plans. Each of these forward-looking statements involves risks and uncertainties. Actual results or developments may differ materially from those projected or implied in these forward-looking statements.

Factors that may cause such a difference include the risks and uncertainties associated with: the impact of the COVID-19 pandemic on our business and the global economy; general market conditions; changes in the competitive landscape and the introduction of competitive products; changes in our strategic and commercial plans; our ability to obtain sufficient financing to fund our strategic plans and commercialization efforts; the ability of medical and research institutions to obtain funding to support adoption or continued use of our technologies; the loss of key members of management and our commercial team; and the risks and uncertainties associated with our business and financial condition in general, including the risks and uncertainties described in our filings with the Securities and Exchange Commission, including, without limitation, our Annual Report on Form 10-K for the year ended December 31, 2019 and in other filings subsequently made by us with the Securities and Exchange Commission. All forward-looking statements contained in this press release speak only as of the date on which they were made and are based on management's assumptions and estimates as of such date. We do not undertake any obligation to publicly update any forward-looking statements, whether as a result of the receipt of new information, the occurrence of future events or otherwise.

CONTACTS

Company Contact:
Erik Holmlin, CEO
Bionano Genomics, Inc.
+1 (858) 888-7610
eholmlin@bionanogenomics.com

Investor Relations Contact:
Ashley R. Robinson
LifeSci Advisors, LLC
+1 (617) 430-7577
arr@lifesciadvisors.com

Media Contact:
Darren Opland, PhD
LifeSci Communications
+1 (617) 733-7668
darren@lifescicomms.com


Finding the solution: Animal physiology lab makes most of hybrid format – Illinois State University News

In a lab designed to illustrate how living organisms operate, it only makes sense there'd be an entire week dedicated to respiration systems. In previous Biological Sciences (BSC) 283: Animal Physiology courses, students would breathe into devices themselves to measure lung capacity and simulate different chronic conditions such as asthma.

But as the Centers for Disease Control and Prevention (CDC) has warned the public since the beginning of the coronavirus (COVID-19) pandemic, one of the fastest ways the novel virus travels is through the air. So, naturally, that type of assignment in a lab classroom was out of the question.

Illinois State students and faculty proved once again they could pivot to make the best of the situation.

Realizing quickly that the original experiment wouldn't work, the class used crayfish instead, placing them in both low- and high-saline solutions. A compound called soda lime would absorb the carbon dioxide the creatures breathed out, and students then made an airtight seal on the containers so that the only opening was a tube containing a bubble of water. The resulting vacuum would pull the bubble toward the container, creating an alternative way to measure respiration.

Simple enough, right? Well, it still mirrored the original concept and purpose of the lab.

"It was possible to do it this way, but it definitely required some adjustments," said graduate teaching assistant Shana Border.

While every class has been altered in some way due to safety and health protocols, lab classes that are most effective in a hands-on way have had to be extra creative.

In a night lab of BSC 283, students have had to work with each other in a hybrid format. Some are in the lab, and others are communicating via Zoom. They are paired off in groups of four, with two students on-site and the other two working virtually.

"It's been an interesting transition, especially since half of our class is online and making sure they are able to see and learn as best as they can," said junior molecular and cellular biology major Teague Williamson, who has been at the lab primarily in person. "But you make adjustments to the format."

The class has conducted experiments on muscle contractions, respiratory function, and how signals get transmitted along nerves, just to name a few. Students have used crickets, cockroaches, earthworms, computer modeling, and their own bodies to complete the tasks, all while using the hybrid format.


When it was clear early on that this would be the way the class was structured, the groups quickly got together and determined roles. The two working remotely for the night would be either the note-takers or directing the experiment, while the other two would do the hands-on work.

The labs instituted a poll where every week students can rate each other as group mates. That opened up lines of communication quickly and also provided an accountability factor. While challenging, the student scientists have risen to the occasion.

"The whole situation is obviously stressful, but all of my students immerse themselves into it and give it their all," Border said. "They always do whatever they are willing to do to make this work.

"They've built some really strong rapports."

Border also noted how students have been particularly proactive with safety measures, whether that's properly distancing in the lab or making the decision to work virtually if they may have been exposed to the virus.

Dr. Wolfgang Stein teaches the course and relies on graduate teaching assistants like Border to lead the labs. It's been a group effort to safely and effectively navigate through the course, but students and faculty have made the necessary adjustments to make their learning just as meaningful.


Zombie Physiology, According to The Walking Dead | CBR – Comic Book Resources

The Walking Dead provides information on how zombies act, behave and evolve.

The zombie genre has been going strong since the 1930s, and while there's the occasional deviation in how they're portrayed on screen, audiences are most familiar with these famed dead creatures as brainless and slow but deadly in hordes. The zombies in AMC's The Walking Dead bear all the signs of the typical zombie: mindless killing machines with the singular goal of devouring the flesh of anyone who crosses their path. The zombies in The Walking Dead may be relatively weak, but over the course of 10 seasons, the show has introduced some interesting concepts to their physiology.

While the origins of the zombie outbreak in The Walking Dead are unknown, every character in the show is infected with the pathogen that causes the dead to come to life. The pathogen doesn't kill its hosts -- rather, it remains dormant, and outwardly the host appears normal and healthy. The pathogen only becomes active when the host dies, reviving some parts of the brain and cerebellum in the process, which causes the host to transform into a zombie. As long as the host remains alive and avoids bites or scratches from the dead, the pathogen will remain dormant until the moment of their demise.


The zombies from The Walking Dead have such a powerful sense of smell that they can detect scents from miles away and can differentiate between the living and the dead. In both the television series and the comics the show is adapted from, human characters can disguise their scents by covering themselves in gore, undead flesh or anything that smells of decay. Over time, the zombies' eyesight deteriorates, but their heightened sense of smell is their greatest asset and proves the most dangerous to Rick and his group of survivors.

The undead are inhumanly strong and possess enough strength to tear apart a human or animal with relative ease, ripping apart limbs with little effort. While a zombie's strength depends on how long it has been reanimated, they can produce enough force to overpower even the strongest of humans, making them incredibly dangerous in combat. However, as the zombies decay, their strength wanes, so you'd have a better chance of survival if you encounter an older zombie.


Being dead with limited brain activity and supposedly no pain receptors, zombies from The Walking Dead feel no pain -- or at least they don't react to pain. They can absorb all manner of physical damage even though their bodies are no less durable -- and in fact, sometimes even weaker -- than that of a living human. Zombies can survive the worst of injuries, from losing limbs to impalement. Shots to the head, decapitation and spinal cord severing are the only things that can kill or weaken a zombie. As long as their brain is intact, zombies can function normally, even if they've lost their heads.


Anatomage Launches Interactive Physiology Content and Other Updates to the Anatomage eBook – PRNewswire

First introduced in July, the Anatomage eBook provides instructional guidance on anatomy and physiology topics using 3D anatomical images of real human cadavers. With the anatomy portion launched immediately afterward, the Anatomage eBook has quickly become a powerful solution for online learning environments. With today's launch of the physiology section, the Anatomage eBook is anticipated to become an indispensable tool for both distance and in-person A&P courses.

The physiology section features the fundamental physiological concepts that are typically taught in high-school and college-level human physiology courses. Following the interactive format of the Anatomage eBook, the physiology content comes with illustrative and animated visuals that allow students to visualize the human body's physiological mechanisms. Interactive physiological illustrations are also available for manipulation, offering a highly detailed look at crucial physiological and pathological functions.

Aside from the Physiology content, users of the Anatomage eBook will be able to view and interact with images of prosected cadavers. Originating from actual human cadavers, the prosection images exhibit the most accurate anatomical visualization that allows students to appreciate the human body's integrated nature.

As part of the updates, manipulating real-patient pathology CT cases is made possible. The Anatomage eBook now includes 12 interactive case activities that provide a high-resolution, three-dimensional view for comparative anatomy, giving practical information to prepare students for their clinical professions.

With these additions, the Anatomage eBook further expands its capabilities as a market leader in premium online anatomy and physiology learning technology tailored for both in-person and virtual education.

About Anatomage eBook

Anatomage eBook offers the most accurate representation of real human anatomy that allows students to conceptualize the complicated anatomy and physiology concepts effectively. Utilizing medically accurate anatomy images and intuitive descriptions, the Anatomage eBook visually walks users through major anatomy and physiology concepts for each of the 11 human body systems across 39 chapters. For more information, visit here.

About Anatomage

A market leader in medical imaging technology, Anatomage enables an ecosystem of 3D anatomy hardware and software, allowing users to visualize anatomy at the highest level of accuracy. Through its highly innovative products, Anatomage is transforming standard anatomy learning, medical diagnosis, and treatment planning.

Media Contact:
Jack Choi, CEO
Anatomage Inc.
Phone: 1-408-885-1474
Email: [emailprotected]
www.anatomage.com

SOURCE Anatomage



The Evolving Role of Ion Channels in Shaping Successful Drug Discovery, Upcoming Webinar Hosted by Xtalks – PR Web


TORONTO (PRWEB) November 05, 2020

There are over 200 ion channels in the human body, all playing a pivotal role in normal physiology. As such, they are important targets for drug therapies that modulate ion channels in critical pathways, or correct aberrant ion channel function. To date, there are over 150 marketed drugs that target ion channels. Many of these drugs are anaesthetics, anti-epileptics or are active in the cardiovascular system.

The importance of ion channels in the pharmaceutical industry is evolving. As knowledge of ion channel physiology and how to target ion channels evolves, therapeutic opportunities are becoming more diverse, extending to renal and respiratory disease, inflammation, cancer, pain and depression. How the pharmaceutical industry tests and explores ion channels is also evolving with high-throughput platforms and hiPSC models.

Targeting ion channels selectively has always been challenging. New, more specific modalities including antibodies, aptamers, peptides and knotbodies are also being explored. Finally, given the importance of ion channels in normal physiology, unwanted activity at ion channels in the heart or CNS can cause serious adverse effects and should be avoided. In this respect, screening for effects on ion channels is a key, rapidly developing area of drug discovery.

Consideration of these evolving areas in ion channel drug discovery is critical to the successful development of new medicines.

Join Dr. Michael Morton, Director, ApconiX Ltd in a live webinar on Thursday, November 19, 2020 at 11am EST (4pm GMT/UK).

For more information, or to register for this event, visit The Evolving Role of Ion Channels in Shaping Successful Drug Discovery.

ABOUT XTALKS

Xtalks, powered by Honeycomb Worldwide Inc., is a leading provider of educational webinars to the global life science, food and medical device community. Every year, thousands of industry practitioners (from life science, food and medical device companies, private & academic research institutions, healthcare centers, etc.) turn to Xtalks for access to quality content. Xtalks helps Life Science professionals stay current with industry developments, trends and regulations. Xtalks webinars also provide perspectives on key issues from top industry thought leaders and service providers.

To learn more about Xtalks, visit http://xtalks.com
For information about hosting a webinar, visit http://xtalks.com/why-host-a-webinar/



NIH Grant aims to enhance scientific models of aging focused on creating better intervention tools for age-related decline – Newswise

Newswise San Antonio, Texas (November 5, 2020) The Southwest National Primate Research Center (SNPRC) at Texas Biomedical Research Institute and the University of Texas Health Science Center at San Antonio received a $1.3 million collaborative grant to continue the San Antonio Marmoset Aging Program (SA MAP) and further define the hallmarks of aging in a nonhuman primate (monkey) model. Developing the marmoset model will allow for eventual testing of interventions in additional model systems that could slow or change age-related decline in humans.

The National Institutes of Health (NIH) National Institute on Aging awarded the grant to develop new tools for the characterization of aging to Corinna Ross, Ph.D., Associate Professor at Texas Biomed and Associate Director of Research at the Southwest National Primate Research Center, and Adam Salmon, Ph.D., Associate Professor, Barshop Institute, UT Health San Antonio. Drs. Ross and Salmon will co-lead the team of scientists within SA MAP, leveraging their expertise and resources to gain knowledge behind the molecular and physiological functions behind age-related diseases.

"SA MAP has developed several tools over the years to characterize aging in marmosets," explained Dr. Ross. "While the nine hallmarks of aging have been identified, we only have a few tools to measure these hallmarks. With this study, we hope to pinpoint biomarkers of cellular aging in marmosets so that these biomarkers can eventually serve as targets for interventions, and marmosets can become an effective model for testing these interventions."

Marmoset models are widely used in biomedical research but are most commonly used in aging studies partly due to their small size and relatively short life span of 20 years. As a non-human primate, marmosets closely resemble humans genetically, enabling them to serve as a valuable tool to test pharmacological or drug interventions.

To date, laboratory rodents and invertebrates have largely been the models to study the hallmarks of aging. However, marmosets display a wide spectrum of age-related issues similar to humans and are susceptible to diseases that occur in humans but not in rodents. Mechanisms behind the root causes of the hallmarks of aging at the cellular and molecular levels have yet to be explored in the nonhuman primate.

"This model could potentially provide a window of opportunity to move aging research to the next level and assist in developing the clinical approaches that target the hallmarks of aging and their interconnection to one another," said Dr. Salmon.

The nine hallmarks of aging include:

1. Genomic instability: high frequency of genetic mutations within a genome
2. Telomere attrition: the gradual loss of the protective ends of chromosomes
3. Epigenetic alterations: changes in the chemical structure of DNA
4. Loss of proteostasis: development of nonnative protein aggregates in tissues
5. Deregulated nutrient sensing: the body's inability to take in key nutrients effectively
6. Mitochondrial dysfunction: disruption in mitochondria's ability to regulate cellular pathways in the body
7. Cellular senescence: the regular cell cycle is interrupted because cells become resistant to growth-promoting stimuli
8. Stem cell exhaustion: a deficiency of stem cells due to aging; stem cells can turn into any cell type and are needed to repair systems in the body
9. Altered intercellular communication: alteration in the signaling between cells, which happens as a result of aging

"The NIH is really focused on interdisciplinary, collaborative research," Dr. Ross added. "We have assembled a team that blends expertise in marmoset physiology and behavior, aging interventions and molecular mechanisms to address some of the remaining questions in aging through cutting-edge research. We're at the forefront of using marmosets for geriatric research and are very excited to explore the use of marmosets to test pharmaceutical interventions."

The Southwest National Primate Research Center at Texas Biomed (SNPRC) houses one of two marmoset colonies at a National Primate Center, and is home to 400 marmosets with the largest geriatric marmoset colony in the country. Recently, the NIH awarded SNPRC a grant to double the size of its marmoset colony to support ongoing and future neuroscience research.

Research is being supported by the National Institute on Aging of the National Institutes of Health under Award Number U34AG068482. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Research at SNPRC is also supported by the Office of Research Infrastructure Programs, National Institutes of Health P51 OD011133.

###


Neuroscientists receive ERC Synergy Grant to explore the neural basis of cognition – News-Medical.Net

Reviewed by Emily Henderson, B.Sc. Nov 5, 2020

A Norwegian-Israeli team of neuroscientists has been awarded an ERC Synergy Grant to explore the biological basis of spatial operations in the brain.

Humans have long wondered about the origins and workings of the mind. How does living matter generate memories, thoughts, imagination, the ability to plan? How these high-level functions are created from activity in brain cells remains one of the greatest mysteries of life. Current advances in neuroscience may finally unravel the secret of how higher cognitive functions emerge from the brain.

With a Synergy Grant funded by the European Research Council (ERC), investigators at the Kavli Institute for Systems Neuroscience at the Norwegian University of Science and Technology and the Edmond and Lily Safra Center for Brain Sciences and the Racah Institute of Physics at the Hebrew University of Jerusalem aim to explore the neural basis of cognition through focused study of one well-defined cognitive function - the ability to map our own location in space.

"There is an excitement, a sense of revolution in systems neuroscience today," says Edvard Moser, Founding Director of the Kavli Institute for Systems Neuroscience and Co-Director of Centre for Neural Computation.

After decades of studying single cells, wondering what kind of joint dynamics they take part in, neuroscientists are currently experiencing a total transformation of their field of study. A breakthrough in technology has made this possible.

"At the Kavli Institute for Systems Neuroscience, we are now replacing the old single-cell recording systems with high-site-count Neuropixels silicon probes and portable 2-photon microscopes," he said.

These tools, developed within the last year or two, allow Kavli researchers to record and visualize simultaneous activity from thousands of neurons interacting with each other during cognitive operations.

The technological advancement is not just a linear summation of information from individual brain cells. By enabling studies of how large populations of neurons work together, it brings our inquiries to another functional level, where we can ask how cells collaborate rather than looking for the properties of individual cells.

"Our brains generate a broad spectrum of higher cognitive functions that make up our intellectual capabilities. These brain functions emerge from the interactions between thousands of cells interconnected in large neural networks. This is the level of granularity from which we are now recording."

Edvard Moser, Founding Director of the Kavli Institute for Systems Neuroscience and Co-Director of Centre for Neural Computation

However, experimental measurements alone are not enough. Experiments must be guided by theoretical models of how neural networks create their outputs, which can in turn be tested experimentally. It is a matter of testing whether the map fits the landscape and of understanding the landscape through the map.

Some of the most promising theories in neuroscience during the last 40-50 years are called continuous attractor network (CAN) theories. Attractor network theories predict how neural networks in the brain operate through specific connections between cells in the network.

"CAN theories evolved at the Hebrew University in Jerusalem, and there is still no place on earth that better understands and moves these theories forward," Moser said. "Yoram Burak is a member of the computational neuroscience community at the Hebrew University and he is, in my opinion, the strongest theoretician of his generation in this field."

The ERC-funded research project KiloNeurons builds on the synergy created by merging approaches from theoretical physics with neurobiology and psychology. Pairing the most promising theory with the best-mapped higher cognitive functions provides a unique opportunity to explore how the brain works.


"Our goal is to uncover how a cognitive brain function is generated through interactions between thousands of cells in the cortex," Burak says. Attractor network theories propose that activity patterns in the brain are formed through specific connections within neural networks. In the case of spatial orientation, attractor networks result in activity patterns that enable a sense of location and direction.

"Our point of departure is the higher brain function that provides us with a sense of location and supports navigation. CAN theories are highly developed for the brain systems that we use to find our way; we know the elements and properties of these systems, such as the grid cell; and the behavior of wayfinding is easy to measure. The project has all elements in place for breakthrough mechanistic insight to be realized," Moser said.

"Understanding attractor networks is important for any neuroscientist who wants to understand how activity patterns are generated in the brain. Attractor networks operate throughout the brain in many different systems, so demonstrating their existence, and finding out how they operate, is key to a broad understanding of cognition," he said.

It will also help us uncover what goes wrong when cognitive functions are compromised in neurological conditions, as in Alzheimer's disease, or in psychiatric syndromes - which will be a step towards exploring the potential for new therapies.
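The attractor-network idea described above can be made concrete with a minimal simulation. The sketch below is not a model from the KiloNeurons project; it is a toy ring attractor with assumed parameters (`J0`, `J1`, a tanh rate function): neurons arranged on a ring excite similarly tuned neighbors and inhibit distant ones, and the network settles into a localized "bump" of activity whose position can store a variable such as heading direction.

```python
import numpy as np

# Toy ring attractor (assumed parameters, for illustration only). Local
# excitation plus broad inhibition lets the network sustain a localized
# "bump" of activity, the hallmark of a continuous attractor network.

rng = np.random.default_rng(0)

N = 100
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

J0, J1 = -2.0, 6.0   # uniform inhibition and tuning-dependent excitation (assumed)
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def simulate(steps=500, dt=0.1, tau=1.0, I_ext=1.0):
    r = 0.1 * rng.random(N)                 # small random initial rates
    for _ in range(steps):
        drive = W @ r + I_ext               # recurrent input + uniform drive
        r = r + dt / tau * (-r + np.tanh(np.maximum(drive, 0.0)))
    return r

r = simulate()
print(f"bump peak at theta = {theta[np.argmax(r)]:.2f} rad; "
      f"rates span [{r.min():.3f}, {r.max():.3f}]")
```

Because the connectivity depends only on the difference in tuning, the bump can sit at any position on the ring, which is exactly what makes such a network suitable for encoding a continuous variable like direction.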


What is Computation’s Role in Neuroscience? – Stanford University News

In this Director's Conversation, HAI Denning Co-Director Fei-Fei Li's guest is William Newsome, the Harman Family Provostial Professor of Neurobiology at the Stanford University School of Medicine and the Vincent V.C. Woo Director of the Wu Tsai Neurosciences Institute.

Here Li and Newsome discuss the role of computation in neuroscience, the challenges computational neuroscientists can address, whether understanding the brain at a molecular level can lead to better neural networks, AI's motivation spectrum, and the complicated definition of consciousness when it comes to both natural and artificial intelligence.

Full transcript:

Fei-Fei Li: Welcome to HAI's Directors' Conversations, where we discuss advances in AI with leaders in the field and around the world. Today with me is Professor Bill Newsome. I'm very, very excited to have this conversation with Bill. We have been having conversations throughout our careers, and I've been such a great admirer of Bill's scholarship and leadership. He's the Professor of Neurobiology at Stanford School of Medicine and the Director of the Wu Tsai Neurosciences Institute at Stanford. Bill has made significant contributions to our understanding of neural mechanisms underlying visual perception and simple forms of decision-making. As head of the Wu Tsai Institute, he's focused on multidisciplinary research that helps us understand the brain, provides new treatments for brain disorders, and promotes brain health.

So welcome, Bill. I'm very much looking forward to this conversation.

Bill Newsome: Great to be here, Fei-Fei. Nice to see you again.

Li: Let's start with just defining and talking about the intersection of AI and neuroscience. What do you see as the role of computation in your field and in the Wu Tsai Institute's work?

Newsome: Well, that's a great question. Computation is extremely important in the field of neuroscience today. There are two or three different ways I could answer that question, but let me try this one on you. We actually have a subfield called computational neuroscience. We've hired faculty in this area here at Stanford, and we hope to hire more. People sometimes ask me, "What is that?" And I would put it this way. Computation in neuroscience has about three different areas that are really, really important. The first area is theory: it's people who actually try to theorize and extract general principles about how the brain is computing, how the brain is representing, how the brain produces action.

The second area I would call neural network kinds of people: modelers, people who really understand how to do deep convolutional networks and understand how to do recurrent neural networks, and actually model simple toy problems that we know the nervous system solves. If we can figure out how these networks solve these problems, then we can get some insight maybe into new hypotheses about the nervous system.

And then a third way that computation is really influencing neuroscience is through high-end data analytics. Like many areas of science, we are getting larger, richer, more sophisticated, and sometimes much more obscure datasets now out of the brain than we've ever been able to get in human history. And to actually understand how to deal with those data, how to treat them, how to avoid statistical pitfalls is extremely important.

I think all three of those areas of computation are really important for neuroscience. I think different computational neuroscientists may excel in two, or occasionally even all three of those, but we need more people like this in neuroscience today, not fewer, because the challenges are greater than they've ever been.

Li: What are the biggest challenges that you feel are in need of this kind of computational neuroscientist?

Newsome: Well, let me give you some real-world examples; maybe give you one where computation has actually played a leading role and other approaches to neuroscience are lagging behind, and one where computation needs to step in and create a role in order to create understanding.

So one area that I would give to you is something in the nervous system called integration, and it's familiar to anyone who's taken calculus. It's literally counting up events that happen, literally integrating some time series and saying how much you have at the end. This turns out to be a really important problem in the nervous system in many areas, including decision making, but a very simple one is just moving your eyes. So we know when you move your eyes from this point to this point, certain neurons give a little burst in a map of eye movement space in the brain, and when they get the burst, the eyes go out there. The amazing thing is they stay there after they get there, even though the burst is gone.

And the theorizing, the theory of computation is about integration: How could you take neural signals of the burst, integrate some value that holds the eyes in position until an animal is ready to move the eyes again? We knew some things about the physiology, we knew some things about the computation, we knew some things about the connectivity of brain structures that produce eye movements. What we really lacked was actually anatomy, in this case.

It turned out that several different computational theories that embodied physical principles could account for this, but to know which one is actually working in the brain, we needed the microanatomy of how cells are really hooked up to each other. Thats an example where computational theory actually led the way and motivated some anatomical questions.
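The integrator idea described above can be sketched in a few lines. This is an illustrative toy, not any of the competing models Newsome refers to: a single rate unit with an assumed recurrent feedback gain `w`. When the feedback gain cancels the intrinsic leak (`w = 1` here), a brief command burst is integrated into persistent activity that outlasts the input, the kind of signal that could hold the eyes in position.

```python
import numpy as np

# Illustrative toy: a single-unit neural integrator. With recurrent gain
# w = 1 the leak is cancelled, so a brief burst is integrated into
# persistent activity; with w = 0 the activity decays after the burst.

dt, tau = 1.0, 20.0              # time step and (assumed) neural time constant, ms
T = 500
burst = np.zeros(T)
burst[50:60] = 1.0               # a brief 10 ms command burst

def run(w):
    """Leaky rate unit with recurrent gain w: tau * dr/dt = -r + w*r + burst."""
    r = np.zeros(T)
    for t in range(1, T):
        r[t] = r[t - 1] + dt / tau * (-r[t - 1] + w * r[t - 1] + burst[t - 1])
    return r

leaky = run(w=0.0)       # no feedback: activity decays away after the burst
integrator = run(w=1.0)  # tuned feedback: the burst is integrated and held

print(f"final activity: leaky={leaky[-1]:.6f}, integrator={integrator[-1]:.6f}")
```

The sketch also hints at why the anatomy mattered: several wiring schemes (tuned feedback among them) can produce the same persistent output, and only the microanatomy can say which one the brain actually uses.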

But one that many listeners to this will resonate with today is the example of deep convolutional networks that are approaching human performance in visual categor-

Li: Surpassing in some cases.

Newsome: Surpassing in some cases, absolutely. People's jobs are insecure here because these networks are getting so good at visual categorization. And this presents a really interesting problem, because you're going to train a deep convolutional network that can do these things that seem almost magical, and we know almost everything there is to know about them, right? We know exactly the connections between the layers of the trained network, we know the signals that are passing, we can see the dynamics that exist, we can measure their performance. But there is still this deep angst, an intellectual angst, I'd say, among neuroscientists, and I think among some people in the AI community, that we still don't understand how that's happened.

What are the algorithmic principles by which you take an array of pixels and you turn it into faces, and distinguish among faces? And somehow, the deep physical and computational principles are not there yet. We don't understand how these things are working. We understand the learning algorithm, and maybe that's as deep as it'll get at some point. But here's a situation where I think computation needs to step in and teach us this, both for the artificial networks, and then for the real networks in our brains that recognize faces.

Li: Bill, I want to elaborate on that because neural networks, especially in visual recognition, are near and dear to my heart. On the one hand it's phenomenal, right? We have these hundred-layered, sometimes even thousand-layered convolutional neural network or recurrent network algorithms that are just very complex and can perform phenomenally well. When it comes to object recognition, some of these networks do surpass human capability. But in the meantime, if you look under the hood of these algorithms, while they're humongous, they're also extremely contrived compared to the brain.

I'll just take one example of the neuron-to-neuron communication. The way that it's realized in today's neural network algorithms is a single scalar value, whereas the synaptic communication in the brain, as we learn more, and your colleagues will tell us, is far more complex. The neural signaling is not just one kind of neural signaling. I would love to hear more about this.

Also, on a little more system level, our brain is this organic organ that has evolved for at least the past 500 million years; the mammalian brain is about 100 million years old, and it has different parts, and different modules, and all that. And today's neural networks are nowhere near that kind of complexity and architecture.

So on one hand these humongous deep learning models are doing phenomenally well. On the other hand, they're also very contrived compared to the brain. And I'm just very intrigued: from your perspective, as we learn more about these computational realities of the brain at the molecular level, at the synaptic level and the system level, do you see that we're going to have different insights into how to build these neural networks?

Newsome: I hope so, Fei-Fei. I think this is one of the deepest intellectual questions that computationally minded neuroscientists argue about, and that's to what extent are AI and what I call NI, natural intelligence, going to converge at some point and really be useful dialogue partners? And to what extent are they simply going to be ships passing in the night, or parallel universes? Because there are these dramatic differences, as you point out.

One individual neuron, and our brain contains about 100 billion of them, is incredibly complex: incredibly complex shapes and incredibly complex biophysics, and different types of neurons in our brain have different types of biophysics. They're profoundly non-linear, and they are hooked together at synapses in ways that form circuits, and understanding and mapping those circuits is a big fundamental problem in neuroscience.

But something that should give all of us great pause is that there are these substances that are released locally in the brain called neuromodulator substances, and they actually diffuse to thousands of synapses in the space around them in the brain, and they can completely change that circuitry. This is beautiful, beautiful work by Eve Marder, who spent her career studying this neuromodulation. You take one group of neurons that are hooked up in a particular way, spritz on this neuromodulator, and suddenly they're a different circuit, literally.

Li: Yeah, that's fascinating. We don't have that computational mechanism at all in our deep learning architecture.

Newsome: And another feature of brain architecture, that you and I have talked about offline together, is that brain architecture is almost universally recurrent. So area A of the brain has a projection to area B. You can kind of imagine that as one layer in the deep convolutional network to another layer. But inevitably, B projects back to A. And you can't understand the activity of either area without understanding both, and the non-linear actions, the dynamical interactions that occur to produce a state that involves multiple layers simultaneously.

Many of us think today that understanding those dynamical states that are distributed across networks is going to be the secret to understanding a lot of brain computation.

I know that recurrence is starting to be built into some of these DCNs now. I don't know where exactly that field sits, but that certainly is one of the ways you get dynamics.

Dynamics are, again, another universal feature of brain operation. They reflect the dynamics in the world around them and in the input, but also the dynamics in the output. You've got to have dynamical output in order to drive muscles to move arms from one place to the other, right? So the brain is much richer, in terms of dynamics.

Another thing about the brain is it operates on impressively low power.

Li: I know, I was going to say the 20-watt problem. That's dimmer than any lightbulb we have. We hear about these impressive neural networks like GPT-3, or neural architecture search, or BERT, and these algorithms are all burning GPUs much more massively.

So how do you think about that?

Newsome: Well, I don't think about it very much, except that our contrived devices are very, very inefficient and very wasteful.

We have a colleague at Stanford, Kwabena Boahen, who studies neuromorphic engineering, trying to build analog circuits that compute in a much more brain-like way. And his analog circuits certainly are much, much more efficient in power usage than digital computers. But they haven't achieved nearly the level of impressive performance on the kinds of cognitive-like tasks that DCNs have achieved so far. So there's a gap here that needs to be crossed.

Li: Yeah, I think this is a very interesting area of research. You mentioned the word cognitive, and I want to elaborate on that because I know we started talking about computational neuroscience, but cognitive neuroscience is part of neuroscience, and also of the field of vision where I sit.

First of all, half of my PhD was cognitive neuroscience. Second of all, in the past 30 years, I would give cognitive neuroscience a lot of credit in the field of vision for showing the AI world what problems to work on, especially the phenomenal work coming from the '70s and '80s in psychophysics by people like Irv Biederman and Molly Potter, and then getting to neurophysiology and cognitive neurophysiology, like Nancy Kanwisher and Simon Thorpe, showing us the phenomenal problem of object recognition, which eventually led to the blossoming of computer vision object recognition research in the late '90s and the first 10 years of the 21st century.

So I want to hear from you: do you still see a role for cognitive neuroscience in, I guess, two sides of this: one is in today's AI, on which I have an opinion, but also AI coming back to help?

Newsome: I am not nearly as well versed or trained in cognitive neuroscience as you were. That was your graduate training. I think in a very simple-minded way about cognitive neuroscience, that may make our colleagues, may make you shudder, Fei-Fei, I'm not sure. I was trained as a sensory neuroscientist, trained in the visual system, the fundamentals of Hubel and Wiesel, and the receptive-field properties in the retina. And then the first processing in the brain, and then the cortex.

I was sort of getting into the brain, back in the 1970s and 1980s, thinking about signals coming from the periphery. We all called ourselves sensory neuroscientists, but there was another whole group of neuroscientists who were coming the opposite direction. They were having animals make movements: a right eye movement, like we've already talked about, or arm movements, and they're looking at the neurons that provide input to those movements, and then they're tracing their inputs back into the brain. And this was a motor science kind of effort.

And the sensory side and the motor side have enjoyed listening to each other talk, but they didn't really talk to each other very much. But they had to meet eventually. And I think one part of my career was playing a part in hooking those two things up. And we did it by studying simple forms of decision making. So giving animals sensory stimuli, that was my comfort zone, asking animals to make a decision about what they were seeing, and then make an operant movement. And if they got it correct, they got a reward.

Well, how did the sensory signals that are the basis of a decision get hooked up to steering the movement? And there, you're squarely in cognition land. Some people refer to that as the watershed between sensory systems in the brain and motor systems in the brain. How do you render decisions?

You can think about sensory representations in the brain as kind of being like evidence, providing evidence about what's out there in the world. But then you can think about these cognitive structures in the brain that have to actually make a decision, render a decision, and instruct movements. You can't move your eyes to the right and to the left at the same time. Not going to happen. Sometimes you simply have to make decisions.

That's how I kind of got into cognitive neuroscience. And I think it's one of the most interesting fields in all of neuroscience right now. I am hoping that AI and computational theory ... well, I know that computational theory is making contributions because some of the integration problems, integration of evidence from noisy stimuli, those kinds of theories, those kinds of theoretical models have deeply informed my own work in decision making. So computational theory is certainly making contributions.
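The evidence-integration models alluded to here are commonly formalized as drift-diffusion models. Below is a hedged sketch, not a model from Newsome's own work, with entirely assumed parameter values (`drift`, `noise`, `bound`): noisy momentary evidence is accumulated until it hits one of two bounds, and the bound that is hit determines the choice.

```python
import numpy as np

# Sketch of evidence integration: a drift-diffusion model. All parameter
# values are arbitrary assumptions for illustration.

rng = np.random.default_rng(1)

def ddm_trial(drift=0.1, noise=1.0, bound=5.0, dt=0.1, max_steps=100_000):
    """Accumulate noisy momentary evidence until a bound is hit.
    Returns (choice in {+1, -1}, decision time)."""
    x, steps = 0.0, 0
    while abs(x) < bound and steps < max_steps:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        steps += 1
    return (1 if x > 0 else -1), steps * dt

# With a weak positive drift, choice +1 is "correct" on most trials
choices = [ddm_trial()[0] for _ in range(500)]
accuracy = np.mean([c == 1 for c in choices])
print(f"accuracy over 500 simulated trials: {accuracy:.2f}")
```

Weaker drift or lower bounds trade accuracy for speed, which is the basic speed-accuracy tradeoff these models are used to capture.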

I sometimes wonder about the other way around: What is it that we are learning from vision and neuroscience that could inform AI? And you and I have had conversations about that as well.

Li: Right. So I'll give you an example of a group of us, Stanford neuroscience people like Dan Yamins and Nick Haber; they are the young generation of researchers who are actually taking developmental cognitive inspiration into the computational modeling of deep learning frameworks. They are building these learning agents that you can think of as learning babies, as a metaphor, where the AI agent is trying to follow the rules of the cognitive development of early humans, in terms of curiosity, exploration and so on, and learn to build a model of the world and also improve its own dynamic model of how to interact with this world.

I think the arrow coming from cognitive developmental science actually is coming to AI to inspire new computational algorithms that transcend the more traditional, say, supervised deep learning models.

Newsome: One example where neuroscience has really led the way for artificial intelligence and for convolutional networks and artificial vision is the deep understanding of the early steps of vision in the mammalian brain, where receptive-field structures filtering for spatial and temporal frequencies have particular locations in space; the multiscale nature of that; assembling those units in ways that extract oriented Gabor filters. That oriented filter is typical of the early stages of cortical processing in all mammals. And that is now baked into artificial vision.

That was the first thing. You don't even bother to train a DCN on those steps. You just start with that front end, and that front end came honestly from neuroscience, from the classic work of Hubel and Wiesel, as you know, coming through some principled psychophysics and statistical analysis input from people like David Field. I think if I had to point to one thing that neuroscience has given to AI, it would be the front end of a lot of vision systems.
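The Gabor front end described here is easy to write down. A sketch, with arbitrary filter parameters (size, wavelength, sigma, phase chosen purely for illustration): an oriented Gabor is a sinusoidal carrier windowed by a Gaussian envelope, and a small bank of them at different orientations acts like the fixed first layer Newsome describes.

```python
import numpy as np

def gabor(size=21, wavelength=8.0, theta=0.0, sigma=4.0, phase=np.pi / 2):
    """Oriented Gabor filter: a sinusoidal carrier under a Gaussian envelope,
    the classic model of V1 simple-cell receptive fields. All parameter
    values here are arbitrary choices for illustration."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate to orientation theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    g = envelope * carrier
    return g - g.mean()          # zero mean: responds to contrast, not brightness

# A small bank of filters at four orientations, a hand-built front end
bank = [gabor(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]

# A vertical step edge drives the filter tuned to its orientation the hardest
img = np.zeros((21, 21))
img[:, 11:] = 1.0
responses = [abs((f * img).sum()) for f in bank]
print("responses by orientation:", np.round(responses, 3))
```

Stacking such banks at several scales gives the multiscale, orientation-tuned front end that early convolutional networks used to hard-code rather than learn.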

Li: That's a really big thing, so absolutely.

Newsome: Fei-Fei, let me just say that the other challenge there, and I think Yamins and the young generation who are working on vision would acknowledge this, I think everyone acknowledges this, really is that even though the artificial visual systems can surpass human performance in some cases after they're trained, the learning process is so different for humans than for the artificial systems.

The artificial systems need tens of thousands of examples to get really, really good, and they have to be labeled examples, and they have to be labeled by human beings, or whatever your gold standard is. Whereas I have this little 5-year-old daughter at home, and by the time she was two or three, she had looked at a dozen examples of elephants, and she could recognize elephants anywhere. She could recognize line drawings, photographs, different angles, different sizes, different environments. And she can play Where's Waldo in the common children's magazine. And this is profoundly different.

So here's an example where human cognitive neuroscience and the study of visual development in young humans and young animals, I think, presents a real challenge for artificial vision, artificial intelligence.

Li: Yeah, I actually wanted to emphasize a point you just made because it truly, using your word, is profound, because the way humans learn biologically, the way your NI, natural intelligence, system learns, is so different. I still remember 20 years ago, my first paper in AI was called One-Shot Learning of Object Categories, but to this day, we do not have a truly effective framework to do one-shot learning the way that humans can, or few-shot learning. And beyond just training-example-based learning, there is unsupervised learning, there is the flexibility and the capability to generalize, and this is really quite a frontier of the overall field of intelligence, whether it's human intelligence or artificial intelligence.

Newsome: Yeah, I think both AI and NI have to be appropriately humble right now about this. We're almost equally ignorant about exactly how that happens.

Li: In a way, I almost think it has a social impact for those of us who are scientists. We need to share with the public about the limitations, because the hype talk of AI today, of machine overlords and all that, is built upon a lack of knowledge of the limitations of AI systems, and also of the phenomenal capability of human intelligence.

Bill, I want to switch topic a little bit because I think what you are doing at Wu Tsai goes beyond some of these more lower level modeling. One of the most important charter mission of Wu Tsai is neuro disorder and healthcare-related. Here, Im going to say something that I hope that you can even disagree on: should we view AI and machine learning more like a tool for our researchers and doctors, clinicians to use this modern tool of data-driven methodology to help discover mechanism of diseases and treatments? Are there any examples of work at Wu Tsai like that? Just in general, how do you view AI through that lens of studying neuro disorders?

Newsome: Yeah, that's a really good question. Is AI really more of a tool to enable us to get on with the business of doing serious biology, or do the actual processes and algorithms and architectural structure of AI lend understanding to their correspondence inside the brain?

And I think the answer is both. So let me just give you a little Bill's-eye view of neuro disease. There are some neurological diseases that have psychiatric comorbidities, where the biggest problem is simply that cells in the nervous system, somewhere in the nervous system, start dying, for reasons we don't know yet. Parkinson's disease is an example, where it's a particular class of cells, the dopaminergic cells, that start dying, and we don't know why. And in Alzheimer's disease, cells start dying all over the brain. There are some areas that are particularly sensitive, but by the time an Alzheimer's patient shows up in the clinic complaining of symptoms, they've already lost probably billions of nerve cells; certainly hundreds of millions of nerve cells by the time they become symptomatic.

And those diseases, I think, are going to be solved ultimately at a molecular and cellular level. Something is going wrong in the life of cells, and whether that's in the metabolic regime, whether it's in the cleanup regime, keeping the cell whole and safe and free of pollutants, whatever it is, the secrets to that are going to be in cell biology, and AI can certainly help us tremendously just by providing tools to assemble all the data that we're acquiring at that genetic and molecular level inside of cells.

On the other hand, there are neural diseases that smack of a more systems type of pathology; the problems are probably not lying in single cells. So you take some of the symptoms of Parkinson's disease, for example, the tremor and things like this; they can actually be rectified by putting stimulating electrodes inside the brain and doing a process called deep brain stimulation. Any of the listeners who aren't familiar with this can just Google deep brain stimulation, or go to YouTube, and you can see amazing videos of remission of symptoms with this. It's not a cure for Parkinson's, but it's a treatment for the symptoms.

And there are things like depression, which themselves don't kill people; it's not like it's a progressive degenerative disease. In depression, people come in, they come out. It's a dynamic kind of process. It smacks of a state system inside the brain, and that system can go through multiple states, some of which are depressed, some of which we would characterize as a more normal or positive kind of outlook.

And that kind of dynamics of complex systems, I think, is going to be part and parcel of the AI computational neuroscience thrust: understanding how these densely interconnected networks, based on certain inputs, can assume different states and fluctuate between them. That, I think, could give some insight into the actual disease itself.

So I think it depends on which disease you're talking about, on whether AI is primarily going to be a tool or whether it might actually suggest some intellectual insights into the sources and explanations for some of them.

Li: That speaks to the broadness of machine learning and AI's utility in this big area. Where we sit at HAI, we already see a lot of budding collaborations between the School of Medicine, the Wu Tsai Institute, and HAI researchers, where all of these topics are touched. I know there are reinforcement learning algorithms in neurostimulation for trauma patients. Or there are computer vision algorithms to help neuro-recovery, in terms of physical rehabilitation. And also all the way down to drug discovery, or those areas. So I'm very excited to see this is also a budding area of collaboration between AI and neuroscience.

Newsome: I think that this is going to grow; that interface is going to grow. I think ultimately we'll diagnose depression much better through rapid real-time analysis of the language that people use, and the adjectives that they use, than through expensive interaction with physicians. I don't think the algorithms are going to replace physicians, but they'll be very useful.

Can I bug you about something I'm wondering about?

Li: Sure.

Newsome: In some of the first discussions that led up to the formation of HAI that I was privileged to sit in upon, we raised a question: when you're a biologist, you think about a human or animal performing a task, doing a discrimination task, or making a choice between this action and that action. And the question that comes up is motivation. What is the organism motivated to do at the time?

And this gets very complicated. In all kinds of social situations with humans, we worry about what's fair, and we may do things that are against our economic interest because we are striking out for fairness. There are these values, there are these motivations, there are these incentives. And I wonder: what is the motivation in an artificial agent? To the extent that I know anything about motivation in artificial agents, it's minimizing some cost function. Is that all there is to understand about incentives and motivation?

Are all these complex feelings that we have just reduced to cost functions, or is there a whole world there that AI needs to discover, one that it hasn't even scratched the surface of yet?

Li: This is a beautiful question. When you talk about motivation, I was thinking: what kind of mathematical reward or objective functions can I write? And I come up with very simple ones. In the game of Go, I maximize the area that my own color occupies. Or in the self-driving car, I will have a bunch of quantifiable objectives, that is: stay in the lane, don't hit an obstacle, keep to the speed, and so on. So yes, the short answer is that motivation is a loaded word for humans, but when it comes to today's AI algorithms, it is reduced to mathematical reward functions, sometimes as simple as a single number, or what we call scalar functions. Or a little more complex, a bunch of numbers, and so on. And that's the extent.
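The scalar reward functions Li describes can be sketched in a few lines. Everything below (the function names, the board representation, the penalty weights) is an illustrative assumption for this sketch, not a real library or any system Li mentions:

```python
# Minimal sketch of the scalar reward functions described above.
# All names and weights here are illustrative assumptions, not a real API.

def go_reward(board, my_color):
    """Go-style reward: the fraction of the board my color occupies."""
    mine = sum(1 for stone in board if stone == my_color)
    return mine / len(board)  # a single scalar in [0, 1]

def driving_reward(in_lane, hit_obstacle, speed, speed_limit):
    """Driving reward: several quantifiable objectives folded into one number."""
    reward = 0.0
    reward += 1.0 if in_lane else -1.0        # stay in the lane
    reward += -10.0 if hit_obstacle else 0.0  # don't hit an obstacle
    reward += -abs(speed - speed_limit) / speed_limit  # keep to the speed
    return reward  # the agent simply acts to maximize this scalar

board = ["black", "white", "black", "empty"]
print(go_reward(board, "black"))            # 0.5
print(driving_reward(True, False, 30, 30))  # 1.0
```

The point of the sketch is the flattening Li describes: however rich the situation, the agent's "motivation" is reduced to one number (or a small vector of numbers) to be maximized.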

This clearly creates an issue with communication with the public, because on one hand, people are claiming incredible performance from vision and language models, especially those confusing language applications where you feel the agent actually is talking to you, but under the hood it's just an agent optimizing for patterns similar to those it has seen.

So we don't have a deep answer to this at all. My question, to flip this, is: both as a neuroscientist and as a more objective observer of AI, do you see this as a fundamentally insurmountable gap, a hiatus, between artificial intelligence and natural intelligence that would potentially touch on philosophical issues like awareness and consciousness? Or do you think this is a continuum of computation? At some point, when computation is more and more sophisticated, would things like motivation, awareness, or even consciousness emerge?

Newsome: Well, let me answer that in a couple of ways. First, I don't think that it's a fundamental divide. I don't think that there's anything magical inside our brains associated with the molecules carbon and oxygen, hydrogen and nitrogen. I do this thought experiment sometimes with groups where I say, I've got 100 billion neurons in my brain, but imagine I could pull one of them out and replace it with a little silicon, or name your substance, neuron that mimicked all the actions of that natural neuron perfectly. It received its inputs, it gave its outputs to the downstream neurons, it could even modulate those connections with some neuromodulator substance, it could even sense some: would I still be Bill with this one artificial neuron inside my brain along with the 100 billion natural ones? And I think the answer would be yes. I don't think there'd be anything fundamentally different about my consciousness and about my feelings. And then you just say, Well, what if it's two?

Li: Right.

Newsome: What if it's three? And you get up to 100 billion after a while, and my deep feeling is that if those functional interactions are well mimicked by some artificial substance, we will have a conscious entity there. I think it may well be that the entity needs to be hooked up to the outside world through a body, because so much of our learning and our feeling comes through experience. So I think robotics is a big part of the answer here. I don't like the idea of disembodied conscious brains inside a silicon computer somewhere. I'm deeply skeptical of that.

Li: It's like the movie Her.

Newsome: Yeah. I don't think the divide's fundamental; I don't think it's magic. But the only place I know that consciousness exists, and that these intense feelings take place, is in the brains of humans, certainly in a lot of other mammals, maybe all other animals, and probably in birds and others as well.

But I think that between neuroscience as it's constituted now and artificial intelligence as it's constituted now, there may be a fundamental divide just because they start with different presumptions, different goals of the kind that we've discussed here for the last half hour.

Does that make any sense to you, or am I just babbling here?

Li: No, no, it's making some sense to me, but let me try to share my points of agreement and disagreement. My point of agreement is that, like you said, where we are, the deep learning algorithms, and also our understanding of the brain, are still so rudimentary. And from the AI point of view, the gap between what today's AI, or foreseeable AI, can do and what natural intelligence does, from computation to emotion to consciousness, is just so wide that I really don't see how the current architecture and mathematical guiding principles can get us there. What I don't have an answer for is when you say 100% of your neurons are replaced.

Newsome: But perfectly mimicking the functional relationships of the originals.

Li: First of all, I don't know what perfectly mimicking means in that case, because we're in a counterfactual scenario. Maybe we can perfectly mimic up to this point of your life where your neurons are replaced, but what about all the future? Is that really Bill? It's almost a philosophical question that I don't know how to answer. But I think this consciousness question is at the core of some neuroscience researchers' pursuits, as well as a very intriguing question for AI as a field.

Newsome: So consciousness, I call it the C word, and mostly I don't utter the C word. But it is maybe the single most real, as Descartes thought, and interesting feature of our internal mental lives, so it's certainly worth thinking about, both from a neuroscience point of view and an artificial intelligence point of view.

A lot of the muddiness about that word comes because we use it to mean so many different things. We use it to mean a pathological state: somebody's unconscious rather than conscious. We use it to mean a natural state called sleep, when people are asleep and not conscious. Or we use it to mean: I'm conscious of this TV screen in front of me and I'm not conscious of the shoes on my feet at this particular point in time. Or we can use it at a much higher level: I am conscious of the fact that I exist, that I'm going to die, that I have a limited time on this planet and I need to find as much meaning in those years as possible.

And so you have to sort of home in on what you're really trying to understand with the word. I think the one that's most common is simply what we're conscious of at any moment, what we're aware of-

Li: Awareness.

Newsome: Phenomenal awareness, as the philosophers call it. Many of your listeners will be familiar with David Chalmers and his notion of the hard problem of consciousness and the easy problems of consciousness. If you're not, it's definitely worth getting familiar with them. Chalmers says that there are some things that neuroscientists are going to solve. We're going to solve the easy problems. We're going to solve attention, we're going to solve memory, we're going to solve visual perception, we're going to solve visual coordination: all these features of conscious beings we're going to solve, because we can see in principle the outlines of an answer to them, even though we're far from having any details.

But what he says is the hard problem is this: why should some biological machinery, hooked together in a particular way, have any internal feelings at all that go along with it, feelings that we're conscious of? Conscious of being happy, conscious of being sad, conscious of seeing red, or conscious of seeing green. Why is there that phenomenal experience?

And one of the things I've learned as a neurobiologist is that I can ask questions up to a certain point in animals. I can electrically stimulate different parts of the brain, and I can elicit very sophisticated kinds of behavioral responses, and yet I do not know what that animal is actually feeling at the moment. There's this first-person experience of our beings, and presumably of other animals' beings, and it is very difficult to know how we would describe that in any kind of objective terms, any kind of math that you could-

Li: The qualia experience.

Newsome: Qualia, exactly. And that's the hard problem. I'll tell you, most neuroscientists, the large majority of neuroscientists, would deny that there is a hard problem of consciousness. It's almost an ideology, honestly, because neuroscientists believe in the supremacy of their field. It's a very deep commitment to the belief that once we get a mature neuroscience, 500 years from now, however long it takes, there will be nothing about the brain or the mind left to explain.

And people who take the hard problem of consciousness seriously say: it may be that an intrinsically third-person science cannot account for what is intrinsically first-person experience. That there just may be a category mismatch. And so I give credibility to that, but I'm an unusual neuroscientist in giving credibility to that.

Li: Yeah, you're a very open-minded neuroscientist. I remember, as a physics student at Princeton, some physicists said that humanity is incapable of understanding the universe to its deepest depth because we are part of it, and it's hard to study, from within something, the totality of that thing.

But just to be a little more concrete on the consciousness note for AI, one of the narrower definitions of consciousness is awareness; not even this deep awareness, but contextual awareness. And one of my favorite quotes about AI comes from the '70s, and it goes like this (keep in mind, this is the '70s). It says: the definition of today's AI is that a computer can make a perfect chess move without realizing the room is on fire. Of course, we can change the words chess move to a different game, like Go or anything else, but today's AI algorithms, not to mention achieving a deeper level of awareness, do not even have that contextual awareness. And this is five decades later, so we have a long way to go.

Read more from the original source:
What is Computation's Role in Neuroscience? - Stanford University News