Category Archives: Neuroscience

MRI Analysis Tries To Make Sense of the Senses – Technology Networks

If we cross a road with our smartphone in view, a car horn or engine noise will startle us. In everyday life we can easily combine information from different senses and shift our attention from one sensory input to another, for example from seeing to hearing. But how does the brain decide which of the two senses it will focus attention on when the two interact? And are these mechanisms reflected in the structure of the brain?

To answer these questions, scientists at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and the Computational Neuroscience and Cognitive Robotics Centre at the University of Birmingham measured how sensory stimuli are processed in the brain. In contrast to previous studies, they did not restrict their observations to the surface of the cerebral cortex. For the first time, they also measured the sensory signals at different depths in the cortex. The researchers' findings suggest that our brains conduct the multi-sensory flow of information via distinct circuits right down to the smallest windings of this highly folded brain structure.

While the participants in their study were lying in a magnetic resonance imaging (MRI) scanner, the scientists showed them visual symbols on a screen while simultaneously playing sounds. Beforehand, the participants had been asked to focus their attention explicitly on either the audible or the visible aspect of the stimuli. The neurophysicists Robert Turner, Robert Trampel and Rémi Gau then analyzed at which exact points the sensory stimuli were being processed. Two challenges needed to be overcome. "The cerebral cortex is only two to three millimeters thick. So we needed a very high spatial resolution (of less than one millimeter) during data acquisition," explains Trampel, who co-directed the study at the MPI CBS. "Also, due to the dense folding of the cerebral cortex, we had to digitally smooth it and break it down into different layers in order to be able to precisely locate the signals. This was all done on a computer, of course."

The results showed that when participants heard a sound, visual areas of their brains were largely switched off. This happened regardless of whether they focused on the audible or visible aspect of the stimuli. However, if they strongly attended to the auditory input, brain activity decreased, particularly in the regions representing the center of the visual field. Thus, it seems that sound can strongly draw our attention away from what we're looking at.

In auditory brain regions the researchers also observed, for the first time, that the activity pattern across different cortical layers changed when participants were presented with only sounds. The situation was different when participants were only given something to look at: in that case there was no change. Gau sums up: "So when we have to process different sensory impressions at the same time, different neuron circuits become active, depending on what we focus our attention on. We have now been able to make these interactions visible through novel computerized experiments."

Reference: Gau et al. (2020) Resolving multisensory and attentional influences across cortical depth in sensory cortices. eLife. DOI: https://doi.org/10.7554/eLife.46856

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.

Original post:
MRI Analysis Tries To Make Sense of the Senses - Technology Networks

Your brain isn’t the same in virtual reality as it is in the real world – Massive Science

Virtual Reality (VR) is not just for video games. Researchers use it in studies of brains from all kinds of animals: bees, fish, rats and, of course, humans. Sadly, this does not mean that the bees have a tiny VR headset. Instead, the setup often consists of either normal computer screens surrounding the subject, or a special cylindrical screen. This has become a powerful tool in neuroscience, because it has many advantages for researchers that allow them to answer new questions about the brain.

For one, the subject does not have to physically move for the world around them to change. This makes it easier to study the brain. Techniques such as functional magnetic resonance imaging (fMRI) can only be used on stationary subjects. With VR, researchers can ask people to navigate through a virtual world by pressing keys, while their head remains in the same place, which allows the researchers to image their brain.

VR has become a powerful tool in neuroscience.

FDA

The researchers can also control a virtual environment much more precisely than they can control the real world. They can put objects in the exact places they want, and they can even manipulate the environment during an experiment. For example, neuroscientists from Harvard University were able to change the effort the zebrafish had to put in to travel the same distance in VR, which causes zebrafish to change how strongly they move their tails. Using this experiment, researchers determined which parts of the zebrafish brain are responsible for controlling their swimming behavior. They could have never performed such a manipulation in the real world.

If you've ever experienced VR, you know that it is still quite far from the real world. And this has consequences for how your brain responds to it.

One of the issues with VR is the limited number of senses it works on. Often the environment is only projected on a screen, giving visual input, without the subject getting any other inputs, such as touch or smell. For example, mice rely heavily on their whiskers when exploring an environment. In VR, their whiskers won't give them any input, because they won't be able to feel when they approach a wall or an object.

VR cannot replicate how mice rely on their whiskers to navigate.

Adapted from Pixabay by Dori Grijseels

Another issue is the lack of proprioception, the feedback you get from your body about the position of your limbs. Pressing a button to walk forward is not the same as actually moving your legs and walking around. Similarly, subjects won't have any input from their vestibular system, which is responsible for balance and spatial orientation. This is also the reason some people get motion sickness when they are wearing VR headsets.

When VR is used for animal studies, the animals are often "head-fixed," meaning they cannot turn their heads. This is needed to be able to use a microscope to look at the cells in their brains. However, it poses a problem, specifically for navigation, as animals use a special type of cell, called a "head direction cell," in navigation tasks. These cells track the orientation of an animal's head. And when the mouse can't move its head, the head direction cells can't do their job.

This is especially the case for cells in the hippocampus. That is the part of your brain that is responsible for navigation, and so it relies heavily on inputs that give you information about your location and your direction.

Neurons talk to each other through electrical signals called action potentials, or spikes. The number of spikes per second, called the "firing frequency," is an important measure of how much information is being sent between neurons. A 2015 study found that, in VR, the firing frequency of neurons in a mouse is reduced by over two thirds, meaning that the cells don't send as much information.
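To make the numbers concrete, here is a toy sketch (with invented spike trains, not the study's data) of how a firing frequency is computed and what a drop of over two thirds looks like:

```python
# Toy sketch: firing frequency is just spike count divided by recording time.
def firing_frequency(spike_times, duration_s):
    """Mean firing rate in spikes per second (Hz)."""
    return len(spike_times) / duration_s

# Hypothetical spike trains over a 10-second recording.
real_world_spikes = [0.1 * i for i in range(60)]  # 60 spikes in 10 s
vr_spikes = [0.5 * i for i in range(18)]          # 18 spikes in 10 s

rate_real = firing_frequency(real_world_spikes, 10.0)  # 6.0 Hz
rate_vr = firing_frequency(vr_spikes, 10.0)            # 1.8 Hz
reduction = 1 - rate_vr / rate_real                    # 0.7, i.e. over two thirds
```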

The same study also showed that the cells are less reliable. They specifically looked at place cells, cells that respond to a particular location in the environment and are incredibly important for navigation. In the real world, these cells send spikes about 80% of the times that the animal is in a particular location. However, in VR, this is reduced to about 30%, so when an animal visits a location ten times, the cells will send spikes during only three of those visits. This means the animals are not as sure about their exact location.
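The reliability measure described above can be sketched as a simple fraction; the visit counts below are invented for illustration:

```python
# Toy sketch: reliability as the fraction of visits with at least one spike.
def reliability(spikes_per_visit):
    """Fraction of visits on which the cell fired at all."""
    active_visits = sum(1 for n in spikes_per_visit if n > 0)
    return active_visits / len(spikes_per_visit)

# Hypothetical spike counts on ten visits to the same place field.
real_world_visits = [3, 1, 2, 4, 0, 2, 0, 3, 1, 2]  # fires on 8 of 10 visits
vr_visits = [0, 2, 0, 0, 1, 0, 0, 3, 0, 0]          # fires on 3 of 10 visits
```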

Another important feature of brain activity is brainwaves, or neural oscillations. These represent the overall activity of all the neurons in your brain, which goes up and down at a regular interval. Theta oscillations, brainwaves at a frequency of 4-7 Hz, play an important part in navigation. Interestingly, scientists found that rats have a lower theta oscillation frequency in VR compared to the real world. This effect on oscillations is not limited to navigation tasks, but was also found for humans who played golf in the real world and in VR. It is most likely caused by the lack of vestibular input, but scientists are still unsure of the consequences of such changes in frequency.
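A minimal sketch of how a dominant oscillation frequency can be read off a signal with an FFT. The 8 Hz and 7 Hz signals below are simulated stand-ins for illustration, not the rat recordings:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest spectral peak, ignoring the DC component."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]

fs = 250.0                # sampling rate (Hz)
t = np.arange(2500) / fs  # 10 seconds of samples
real_world_lfp = np.sin(2 * np.pi * 8.0 * t)  # pretend 8 Hz theta rhythm
vr_lfp = np.sin(2 * np.pi * 7.0 * t)          # pretend slower 7 Hz theta rhythm
```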

We know that we should be critical when interpreting results from neuroscience studies that use VR. Although VR is a great tool, it is far from perfect, and it affects the way our brain acts. We should not readily accept conclusions from VR studies, without first considering how the use of VR in that study may have affected those conclusions. Hopefully, as our methods get more sophisticated, the differences in brain activity between VR and the real world will also become smaller.

Excerpt from:
Your brain isn't the same in virtual reality as it is in the real world - Massive Science

Dell Children’s Medical Center to spend more than $300 million over next 3 years to expand Mueller campus – Community Impact Newspaper

The upcoming $113 million Dell Children's Specialty Pavilion will open in spring 2021 with cardiovascular, neuroscience and cancer programs, according to the pediatric hospital. (Rendering courtesy Dell Children's Specialty Pavilion)

The Dell Children's Medical Center campus in Mueller is set to break ground on an expansion plan following the announcement of a significant investment over the next three years.

The pediatric hospital on Feb. 10 announced a $300 million investment in capital, equipment and programming over the next three years, made possible due to a substantial investment by Ascension, as well as a $30 million matching grant from the Michael & Susan Dell Foundation, according to a company news release.

"The time is now to continue expanding complex pediatric care in Central Texas," said Christopher Born, the president of Dell Children's Medical Center, in the Feb. 10 news release.

Dell Children's will use $113 million of the investment funds to construct its new pediatric outpatient facility, which will house cardiovascular, neuroscience and cancer programs, as previously reported by Community Impact Newspaper.

The four-story, 161,000-square-foot facility, named Dell Children's Specialty Pavilion, is slated to break ground soon and open its doors to patients in spring 2021.

Investment dollars will also go to provide backing for a new partnership with Dell Medical School at The University of Texas to develop a maternal fetal medicine program that will add a delivery unit and neonatal intensive care unit expansion at Dell Children's Medical Center, according to the news release.

Dell Children's Medical Center announced it will additionally add more cardiac ICU beds at its main hospital, allowing for the expansion of its pediatric heart program to include heart transplant surgery.

Read more:
Dell Children's Medical Center to spend more than $300 million over next 3 years to expand Mueller campus - Community Impact Newspaper

Cheap Diuretic Pill Could Help With Autism Symptoms, New Findings Suggest – Technology Networks

It is possible to improve symptoms in autistic children with a cheap generic drug, our latest study shows. The drug, bumetanide, is widely used to treat high blood pressure and swelling, and it costs no more than £10 for a month's supply of pills.

Autism is a neurodevelopmental disorder which is more common in boys than girls. According to the World Health Organization, 1%-2% of people have the condition.

Autism can be diagnosed as early as two years old or even at 18 months. Children with moderate or severe autism can find social situations difficult. They may not make eye contact with their parents or take part in cooperative play and conversation. They may also show repetitive behaviour and have an intense interest in objects. This behaviour not only affects engagement in family activities but can also make it harder for them to make friends at school.

We were motivated to test bumetanide as a result of background findings which suggested that the drug changed important brain chemicals in mouse models of autism, and also by some studies, including in autistic teenagers, showing that bumetanide may have beneficial effects.

Our research group, an international collaboration between researchers at several institutions in China and the University of Cambridge, wanted to focus on young children with moderate and severe autism and to test whether bumetanide could improve their symptoms. We also wanted to understand the mechanism by which the drug achieved this. Understanding how bumetanide worked could lead to future drug development to treat moderate and severe autism.

There were 81 children with moderate to severe autism in our study: 42 in the bumetanide group, who received 0.5mg of bumetanide twice a day for three months, and 39 children in the control group, who received no treatment. The children were three to six years of age.

Some of the children had their brains scanned using magnetic resonance spectroscopy (MRS): 38 in the bumetanide group and 17 in the control group. MRS is a non-invasive way of measuring chemicals in the brain. For our study, we measured brain chemicals called GABA and glutamate, which are important for learning and brain plasticity (the brain's ability to change and adapt as a result of experience).

In the bumetanide group, autism symptoms improved as measured by the childhood autism rating scale (CARS) and also by a doctor's overall impression. The doctors who were assessing symptom change were blind to treatment; that is, they were unaware of who was receiving bumetanide. Improvements in symptoms were associated with changes in the brain's GABA/glutamate ratio and, in particular, with decreases in GABA.

Looking specifically at what improved on the rating scale, we found decreases in repetitive behaviour and decreased interest in objects. These reductions in unsociable behaviour allow more time for increases in social behaviour.

One of the mothers of a four-year-old boy, living in a rural area outside Shanghai, said that her child, who was in the bumetanide group, became better at making eye contact with family members and relatives and was able to take part in more family activities.

We also found that the drug is safe for young autistic children and has no significant side-effects. Bumetanide could improve the quality of life and wellbeing of autistic children. Existing treatments are predominantly behavioural, including Applied Behaviour Analysis or ABA. Most families, particularly those in rural areas, will have limited or no access to these treatments, which are generally only available in specialised centres. The use of bumetanide would mean that there would even be a treatment for autistic children living in rural areas.

This study is important and exciting because bumetanide can improve social learning and reduce autism symptoms when the brains of these children are still developing. We now know that human brains are still in development until late adolescence and early adulthood. Further research is now needed to confirm the effectiveness of bumetanide in treating autism.

Barbara Jacquelyn Sahakian, Professor of Clinical Neuropsychology, University of Cambridge, and Christelle Langley, Postdoctoral Research Associate, Cognitive Neuroscience, University of Cambridge

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read more here:
Cheap Diuretic Pill Could Help With Autism Symptoms, New Findings Suggest - Technology Networks

AAAI 2020 | What's Next for Deep Learning? Hinton, LeCun, and Bengio Share Their Visions – Synced

This is an updated version.

The Godfathers of AI and 2018 ACM Turing Award winners Geoffrey Hinton, Yann LeCun, and Yoshua Bengio shared a stage in New York on Sunday night at an event organized by the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020). The trio of researchers have made deep neural networks a critical component of computing, and in individual talks and a panel discussion they discussed their views on current challenges facing deep learning and where it should be heading.

Introduced in the mid-1980s, deep learning gained traction in the AI community in the early 2000s. The year 2012 saw the publication of the CVPR paper "Multi-column Deep Neural Networks for Image Classification," which showed how max-pooling CNNs on GPUs could dramatically improve performance on many vision benchmarks, while a similar system introduced months later by Hinton and a University of Toronto team won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. These events are regarded by many as the beginning of a deep learning revolution that has transformed AI.

Deep learning has been applied to speech recognition, image classification, content understanding, self-driving, and much more. And according to LeCun, who is now Chief AI Scientist at Facebook, the current services offered by Facebook, Instagram, Google, and YouTube are all built around deep learning.

Deep learning does, however, have its detractors. Johns Hopkins University professor and computer vision pioneer Alan Yuille warned last year that deep learning's potential in computer vision has hit a bottleneck.

"We read a lot about the limitations of deep learning today, but most of those are actually limitations of supervised learning," LeCun explained in his talk. Supervised learning typically refers to learning with labelled data. LeCun told the New York audience that unsupervised learning without labels (or "self-supervised learning," as he prefers to call it) may be a game changer that ushers in AI's next revolution.

"This is an argument that Geoff [Hinton] has been making for decades. I was skeptical for a long time but changed my mind," said LeCun.

There are two approaches to object recognition. There's the good old-fashioned part-based approach, with sensible modular representations, but this typically imposes a lot of hand engineering. And then there are convolutional neural networks (CNNs), which learn everything end to end. CNNs get a huge win by wiring in the fact that if a feature is good in one place, it's good somewhere else. But their approach to object recognition is very different from human perception.

This informed the first part of Hinton's talk, which he personally directed at LeCun: it's about the problems with CNNs and why they're rubbish.

CNNs are designed to cope with translations, but they're not so good at dealing with other effects of changing viewpoints, such as rotation and scaling. One obvious approach is to use 4D or 6D maps instead of 2D maps, but that is very expensive. And so CNNs are typically trained on many different viewpoints in order for them to be able to generalize across viewpoints. "That's not very efficient," Hinton explained. "We'd like neural nets to generalize to new viewpoints effortlessly. If it learned to recognize something, then you make it 10 times as big and you rotate it 60 degrees, it shouldn't cause them any problem at all. We know computer graphics is like that and we'd like to make neural nets more like that."

Hinton believes the answer is capsules. A capsule is a group of neurons that learns to represent a familiar shape or part. Hinton says the idea is to build more structure into neural networks and hope that the extra structure helps them generalize better. Capsules are an attempt to correct the things that are wrong with CNNs.

The capsules Hinton introduced are Stacked Capsule Autoencoders, which first appeared at NeurIPS 2019 and differ in many ways from the previous capsule versions from ICLR 2018 and NIPS 2017, which had used discriminative learning. Hinton said even at the time he knew this was a bad idea: "I always knew unsupervised learning was the right thing to do, so it was bad faith to do the previous models." The 2019 capsules use unsupervised learning.

LeCun noted that although supervised learning has proven successful in, for example, speech recognition and content understanding, it still requires a large amount of labelled samples. Reinforcement learning works great for games and in simulations, but since it requires too many trials, it's not really applicable in the real world.

The first challenge LeCun discussed was how models can be expected to learn more with fewer labels, fewer samples or fewer trials.

LeCun now supports the unsupervised learning (self-supervised learning) solution Hinton first proposed some 15 years ago. "Basically it's the idea of learning to represent the world before learning a task, and this is what babies do," LeCun explained, suggesting that really figuring out how humans learn so quickly and efficiently may be the key that unlocks self-supervised learning's full potential going forward.

Self-supervised learning is largely responsible for the success of natural language processing (NLP) over the last year and a half or so. The idea is to show a system a piece of text, image, or video input, and train a model to predict the piece that's missing, for example to predict missing words in a text, which is what transformers and BERT-like language systems were built to do.
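As a rough illustration of the masked-prediction idea, here is a toy counting model (nothing like the neural networks BERT actually uses; the three-sentence corpus is invented) that fills in a masked word from the words around it:

```python
from collections import Counter

# A tiny invented corpus for illustration.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat sat on the rug",
]

# Count which word appears between each (previous word, next word) pair.
context_counts = {}
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        ctx = (words[i - 1], words[i + 1])
        context_counts.setdefault(ctx, Counter())[words[i]] += 1

def predict_masked(prev_word, next_word):
    """Most frequent filler seen in the context '<prev> [MASK] <next>'."""
    return context_counts[(prev_word, next_word)].most_common(1)[0][0]

filled = predict_masked("the", "sat")  # "cat" (seen twice) beats "dog" (once)
```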

But the success of Transformers, BERT, et al. has not transferred into the image domain, because it turns out to be much more difficult to represent uncertainty in predictions on images or video than in text, since images are not discrete. It's practical to produce distributions over all the words in a dictionary, but it's hard to represent distributions over all possible video frames. And this is, in LeCun's view, the main technical problem we have to solve if we want to apply self-supervised learning to a wider variety of modalities like video.

LeCun proposed one solution may lie in latent-variable energy-based models: "An energy-based model is kind of like a probabilistic model, except you don't normalize. And one way to train the energy-based model is to give low energy to samples that you observe and high energy to samples you do not observe."
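A minimal sketch of that training rule, using a toy one-parameter quadratic energy of my own invention (real energy-based models are far richer): push energy down on an observed sample and up on an unobserved one, with a hinge so the push-up stops once the unobserved sample's energy clears a margin:

```python
def energy(x, mu):
    """Toy quadratic energy: low energy means 'compatible with the model'."""
    return (x - mu) ** 2

def contrastive_step(mu, x_obs, x_neg, margin=4.0, lr=0.05):
    # Loss: E(x_obs) + max(0, margin - E(x_neg)); take one gradient step on mu.
    grad = -2 * (x_obs - mu)          # descend the observed sample's energy
    if energy(x_neg, mu) < margin:
        grad += 2 * (x_neg - mu)      # ascend the unobserved sample's energy
    return mu - lr * grad

mu = 0.0
for _ in range(300):
    mu = contrastive_step(mu, x_obs=3.0, x_neg=-1.0)

# After training, the observed value has low energy and the unobserved one high.
```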

In his talk, LeCun touched on two other challenges:

LeCun opined that nobody currently seems to have a good answer to either of these two challenges, and said he remains open to and looks forward to any possible ideas.

Yoshua Bengio, meanwhile, has shifted his focus to consciousness. After cognitive neuroscience, he believes, the time is ripe for ML to explore consciousness, which he says could bring new priors to help systematic and good generalization. Ultimately, Bengio hopes such a research direction could allow deep learning to expand from System 1 to System 2, referring to a dichotomy introduced by Daniel Kahneman in his book Thinking, Fast and Slow. System 1 represents what current deep learning is very good at: intuitive, fast, automatic processing anchored in sensory perception. System 2, meanwhile, represents rational, sequential, slow, logical, conscious processing that is expressible with language.

Before he dived into the valuable lessons that can be learned from consciousness, Bengio briefed the audience on cognitive neuroscience: "It used to be seen in the previous century that working on consciousness was kind of taboo in many sciences, for all kinds of reasons. But fortunately, this has changed, and particularly in cognitive neuroscience. In particular, the Global Workspace Theory by Baars and the recent work in this century based on Dehaene, which really established these theories to explain a lot of the objective neuroscience observations."

Bengio likened conscious processing to a bottleneck and asked: "Why would this (bottleneck) be meaningful? Why is it that the brain would have this kind of bottleneck, where information has to go through this bottleneck, just a few elements to be broadcast to the rest of the brain? Why would we have a short-term memory that only contains like six or seven elements? It doesn't make sense."

Bengio said the bottom line is to "get the magic out of consciousness" and proposed the consciousness prior, a new prior for learning representations of high-level concepts of the kind human beings manipulate with language. The consciousness prior is inspired by cognitive neuroscience theories of consciousness. This prior can be combined with other priors in order to help disentangle abstract factors from each other. "What this is saying is that at that level of representation, our knowledge is represented in this very sparse graph, where each of the dependencies, these factors, involve two, three, four or five entities, and that's it."

Consciousness can also provide inspiration on how to build models. Bengio explained: "Agents are at a particular time at a particular place and they do something and they have an effect. And eventually that effect could have constant consequences all over the universe, but it takes time. And so if we can build models of the world where we have the right abstractions, where we can pin down those changes to just one or a few variables, then we will be able to adapt to those changes, because we don't need as much data, as much observation, in order to figure out what has changed."

So what's required if deep learning is going to reach human-level intelligence? Bengio referenced his previous suggestions, that missing pieces of the puzzle include:

In a panel discussion, Hinton, LeCun and Bengio were asked how they reconcile their research approaches with colleagues committed to more traditional methods. Hinton had been conspicuously absent from some AAAI conferences, and hinted at why in responding: "The last time I submitted a paper to AAAI, I got the worst review I ever got. And it was mean. It said 'Hinton has been working on this idea for seven years [vector representations] and nobody's interested. Time to move on.'"

Hinton spoke of his efforts to find common ground and move on: "Right now we're in a position where we should just say, let's forget the past and let's see if we can take the idea of doing gradient descent in great big system parameters. And let's see if we can take that idea, because that's really all we've discovered so far. That really works. The fact that that works is amazing. And let's see if we can learn to do reasoning like that."

Author: Fangyu Cai & Yuan Yuan | Editor: Michael Sarazen


Read more:
AAAI 2020 | Whats Next for Deep Learning? Hinton, LeCun, and Bengio Share Their Visions - Synced

Unionized HealthPartners Workers OK Strike February 07, 2020 – Twin Cities Business Magazine

About 1,800 unionized HealthPartners workers are slated to strike later this month if they're unable to reach an agreement with the health care system.

On Thursday, 95 percent of SEIU Healthcare Minnesota workers voted to authorize a seven-day strike, which would begin Feb. 19. The union filed a 10-day strike notice on Friday morning, said Kate Lynch, VP of SEIU Healthcare Minnesota.

"It feels like it's profits over patients and employees," Lynch said outside HealthPartners Neuroscience Center in St. Paul. She added that workers are willing to go back to the table at any time.

SEIU and HealthPartners last met to negotiate on Jan. 31; the marathon session spilled into the early morning of the following day. HealthPartners leaders have proposed increases to workers' health insurance premiums and co-pays. SEIU, which represents nurses, dental hygienists, physician assistants, and other frontline workers at more than 30 HealthPartners locations, has rejected the health system's proposal.

The union's contract with HealthPartners expired Feb. 1.

Health insurance premiums and copays have remained the same for SEIU members for more than a decade, union officials said.

For their part, HealthPartners leaders maintain that their proposal is fair and reasonable. In a statement, they said the strike vote is disappointing.

"We remain committed to reaching an agreement on a new contract that is fair to all," HealthPartners officials said in a statement.

A federal mediator will need to call both parties back to the table, according to HealthPartners.

The health system didn't say whether it had a contingency plan in place if the strike goes through.

"We can't really tell you what kind of care you're going to get when we're not there," Lynch said when asked how the union would address patients' concerns about the strike.

Read the original:
Unionized HealthPartners Workers OK Strike February 07, 2020 - Twin Cities Business Magazine

The science behind learning soft skills and hard skills on Brains Byte Back – The Sociable

On this podcast we learn the difference between soft skills and hard skills, why they are important, and how we can sharpen our skills.

Learning a new skill can be hard, especially if it is not something we are naturally good at. However, there is research that can help us understand what parts of the brain need to be activated in order to learn, and what we need to do to activate them.

Listen to this podcast below and on Spotify, Anchor, Apple Podcasts, Breaker, Google Podcasts, Overcast, and Radio Public.

Joining us on the show is Todd Maddox, an expert in the area of neuroscience, with more than 200 peer-reviewed research reports and more than 12,000 citations under his belt. He is also the founder and CEO of Cognitive Design & Statistical Consulting and has a Ph.D. from the University of California, Santa Barbara.

And for our "Neuron to Something" piece, we have the results of a new survey suggesting that the public wouldn't trust companies to scan social media posts for signs of depression.

Here is the original post:
The science behind learning soft skills and hard skills on Brains Byte Back - The Sociable

The Global Neuroscience Antibodies and Assays Market is expected to grow by USD 1.36 bn during 2020-2024, progressing at a CAGR of 8% during the…

NEW YORK, Feb. 3, 2020 /PRNewswire/ --

Global Neuroscience Antibodies and Assays Market 2020-2024: The analyst has been monitoring the global neuroscience antibodies and assays market, which is poised to grow by USD 1.36 bn during 2020-2024, progressing at a CAGR of 8% during the forecast period. Our report on the global neuroscience antibodies and assays market provides a holistic analysis, market size and forecast, trends, growth drivers, and challenges, as well as vendor analysis covering around 25 vendors.

Read the full report: https://www.reportlinker.com/p05843269/?utm_source=PRN

The report offers an up-to-date analysis of the current global market scenario, the latest trends and drivers, and the overall market environment. The market is driven by technological advances. In addition, advances in neuroscience instruments are anticipated to boost the growth of the global neuroscience antibodies and assays market as well.

Market Segmentation
The global neuroscience antibodies and assays market is segmented as below:
Product
- Consumables
- Instruments

Geographic segmentation
- Asia
- Europe
- North America
- ROW

Key trends for global neuroscience antibodies and assays market growth
This study identifies advances in neuroscience instruments as the prime reason driving the global neuroscience antibodies and assays market growth during the next few years.

Prominent vendors in the global neuroscience antibodies and assays market
We provide a detailed analysis of around 25 vendors operating in the global neuroscience antibodies and assays market, including Abcam Plc, Bio-Rad Laboratories Inc., Cell Signaling Technology Inc., F. Hoffmann-La Roche Ltd., GenScript Biotech Corp., Merck KGaA, Rockland Immunochemicals Inc., Santa Cruz Biotechnology Inc., Tecan Group Ltd. and Thermo Fisher Scientific Inc. The study was conducted using an objective combination of primary and secondary information, including inputs from key participants in the industry. The report contains a comprehensive market and vendor landscape in addition to an analysis of the key vendors.

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

__________________________ Contact Clare: clare@reportlinker.com US: (339)-368-6001 Intl: +1 339-368-6001

View original content: http://www.prnewswire.com/news-releases/the-global-neuroscience-antibodies-and-assays-market-is-expected-to-grow-by-usd-1-36-bn-during-2020-2024--progressing-at-a-cagr-of-8-during-the-forecast-period-300997485.html

SOURCE Reportlinker


Vanderbilt bonds with Nashville in the public school classroom – The Vanderbilt Hustler

Service organizations drive Vanderbilt students to become tutors and teachers in Nashville's public schools.

Kleio Jiang | February 7, 2020

Hundreds of volunteers in more than a dozen organizations on campus are dedicated to delivering interactive lessons to partner schools, in subjects ranging from English to Mathematics to Neuroscience. These organizations create their own syllabuses for one-on-one, interactive and even live broadcasting lessons that reach as far as schools in the most rural parts of Nashville. Much of this work, however, goes on behind the scenes.

The biggest service organization on campus, Vanderbilt Student Volunteers for Science (VSVS), has been gathering undergraduate, graduate and medical students since 1994. These volunteers not only collaborate with Metro Nashville public schools, but also reach out to local science fairs, robotics teams and remote rural schools, not to mention Vanderbilt Children's Hospital. VSVS has a specialized team that develops lessons specifically oriented around science, but public school students usually gain much more than that. VSVS encourages an interactive style of teaching, and the majority of questions posed by students are about college life.

"These lessons give students not only a passion for science, but also the confidence to pursue higher education," VSVS Co-President Meghana Bhimreddy said.

But volunteering also benefits the volunteers themselves, many of whom now recognize the hardships teachers nationwide face in developing new teaching skills.

"[One Vanderbilt student] brought a shiny pink karaoke mic to encourage class participation," Bhimreddy said.

Another service organization, Interaxon, has been devoted to making neuroscience knowledge more accessible to its partner schools since 2011. Interaxon works with three Nashville public schools and designs its programs based on feedback from the schools' teachers, developing a syllabus that complements their students' needs.

Just like most service events, the relationship between volunteers and students is a mutually beneficial one. Through the numerous questions about neuroscience that the students ask in childish yet insightful ways, volunteers are forced to come up with creative answers to satisfy their boundless curiosity.

"This allows volunteers to sharpen their own understanding of neuroscience," Interaxon School Director Puja Jagasia said.

In this way, volunteering at public schools is a unique experience that stands out from other service programs. Sometimes, volunteers even learn more from their students than the students learn from the volunteers.

"Interaxon opened my lens on giving back to the community," Jagasia said. "It is so far the most rewarding experience I've had at Vanderbilt."


Inadequate Myelination of Neurons Tied to Autism: Study – The Scientist

Insufficient myelination, likely caused by a lack of mature oligodendrocytes, is linked to autism spectrum disorder, according to a study in mice and postmortem human brains published yesterday (February 3) in Nature Neuroscience.

Myelin, the fatty substance that sheaths and insulates the axons of neurons, is responsible for aiding the quick delivery of signals throughout the brain. Too little myelin leaves the cells vulnerable to damage (as with multiple sclerosis), while too much can muddle the message. Oligodendrocytes (OL) are the cells that control myelination. Previous research has shown that myelin is typically thinner in those with autism spectrum disorder (ASD), while the current study explores the source of the problem.

While studying mouse brains for genetic mutations that cause Pitt-Hopkins syndrome, an autism-related genetic disorder, the team noticed irregular myelination and inconsistent expression of Tcf4, a gene that regulates OL activity.

Turning their attention to human cadavers, the researchers found deficiencies in myelin sheathing in brains from people with autism compared to controls, echoing what was found in the mice. A genetic analysis revealed that the homologous gene, TCF4, also contained varied mutations in regulatory regions. Compared with the controls, the ASD brains showed a noticeable lack of mature OL, an overabundance of immature cells, and insufficient myelination.

"This makes us think that the cells that are myelinating are doing it properly, it's just that there are not a lot of them," coauthor Joseph Bohlen told Spectrum when his then-unpublished findings were presented at the Society for Neuroscience meeting in Chicago in October.

Future research will focus on the creation of brain organoids with irregular myelination and on testing compounds that could target OL and increase myelin production. The authors' hope is that if children with autism are identified early, a treatment could mitigate some of their symptoms.

Lisa Winter is the social media editor for The Scientist. Email her at lwinter@the-scientist.com or connect on Twitter @Lisa831.
