Cancer cure, fascination for biology, neuroscience – what drew these women to science – ThePrint


New Delhi: On 22 December 2015, the United Nations General Assembly declared 11 February as the International Day of Women and Girls in Science as part of its resolution Transforming our world: the 2030 Agenda for Sustainable Development.

Meant to promote complete and equal access for women and girls to the sciences, it is celebrated by UNESCO and UN-Women in collaboration with other institutions.

Currently, less than 30 per cent of researchers worldwide are women, and according to UNESCO data from 2014-2016, only around 30 per cent of female students select STEM-related fields in higher education. The world over, female enrolment is particularly low, with only 5 per cent opting for natural science, maths or statistics, 3 per cent for information and communication technology, and 8 per cent for manufacturing and construction.

In India, the Ministry of Science and Technology has a special Women Scientist Scheme, which provides fellowships and research grants to enable women to re-enter the field and serve as a launch pad into research. Aside from this, the Ministry also started the Vigyan Jyoti programme in October 2018 to encourage girls from rural areas to opt for any subject in science, engineering and technology. Through the programme, students met scientists from NASA and even got a scholarship of Rs 5,000 after completion of the programme.

On International Day of Women and Girls in Science, ThePrint speaks to women and girls in the field of science about their choices, struggles, and journey.

Neharika Ann Mann, who is about to take her class 12 board exams, opted for science in class 11, taking physics, chemistry, biology and maths as her main subjects. However, for this 17-year-old, the dream of entering the world of science began very early on.

She said, "It was in class 3 that a close friend's relative died of cancer. It was then that I decided that I would find a cure for cancer. That thought has evolved, and I have now decided to eventually study pharmacology, which is a branch of medicine concerned with the uses, effects and modes of action of drugs."

Neharika plans to apply to Delhi University and, because she was advised not to specialise too soon, wants to study biochemistry first before specialising.

With no one else from her family working in the field of science, Neharika explains that it is the very fact that the stream enables her to think out of the box that attracts her most to the world of science.

She said, "In ICSE science, there is a lot of pressure and we often feel as though we have to mug up theories. However, the reality is that you need to keep finding things that interest you and ask questions that a science textbook will not answer."


Manya Singh was fascinated by biology in school. Physics and chemistry did not excite her as much because she could relate more to biology as she could observe many aspects of it in her surroundings.

After school, Singh studied botany at Ramjas College, Delhi University. She tells ThePrint, "It was during my undergraduate years that I studied flora and fauna even further, and when I zeroed in on my interest in ecology and also where I felt the urge to do fieldwork."

Singh then enrolled for a Master's in ecology and environmental sciences at Nalanda University, which is where she ultimately focused on climate change and conservation studies. Unwilling to be restricted to the classroom, she decided to work in the field as well. Her fieldwork took her to Gujarat, where she worked at the state's forest department, focusing on agro-forestry for commercial use. She then moved to Dehradun, where she is currently based. At the Centre for Ecology, Development and Research, she works with mountain communities across the state, focusing on springwater and glacial conservation.

While in the field, she noticed the skewed gender ratio. She noted that in research positions or during her Master's the gender ratio was fairly equal, "but in the field it's only men. There is an astounding lack of women out there in the field, be it as project leads or in state forest departments or ministries such as for water resources. There are barely any women. It is what I have to encounter and witness every day."

Singh also led the climate strike in Dehradun and hopes to eventually apply for a PhD on methods of water conservation.

Vidita Vaidya is a neuroscientist and professor at the Tata Institute of Fundamental Research (TIFR) in Mumbai, a national centre of the Government of India under the umbrella of the Department of Atomic Energy. Vaidya's primary areas of interest are neuroscience and molecular psychiatry.

Having received her doctoral degree in neuroscience from Yale and completed postdoctoral work at the Karolinska Institute in Sweden and at Oxford, Vaidya joined TIFR in 2000 as a principal investigator.

She studies parts of the brain that regulate emotion and monitors how these mechanisms are influenced by life experiences. Vaidya also investigates how changes in the brain form the basis of psychiatric disorders like depression and how early life experiences contribute to alterations in behaviour.

Speaking to ThePrint, Vaidya explains that it is hard for young women who want to become faculty members at science institutions in India.

She underscored the need for more diversity in such institutions and explained how cutthroat and ruthless the scientific community can be with its high levels of competition.

Vaidya is, however, quick to acknowledge government schemes that encourage more women to enter the field, such as creche facilities and the progressive maternity leave policy. She noted, "I am where I am because I have a supportive family, be it my family structure, in-laws or spouse, and therefore do not face the standard challenges."

With the same goal of achieving success in their respective fields, these women are determined to change the world of science in their own way.




Neuromodulation Is the Secret Sauce for This Adaptive, Fast-Learning AI – Singularity Hub

As obstinate and frustrating as we are sometimes, humans in general are pretty flexible when it comes to learning, especially compared to AI.

Our ability to adapt is deeply rooted within our brain's chemical base code. Although modern AI and neurocomputation have largely focused on loosely recreating the brain's electrical signals, chemicals are actually the prima donna of brain-wide neural transmission.

Chemical neurotransmitters not only allow most signals to jump from one neuron to the next, they also feed back and fine-tune a neuron's electrical signals to ensure they're functioning properly in the right contexts. This process, traditionally dubbed neuromodulation, has been front and center in neuroscience research for many decades. More recently, the idea has expanded to also include the process of directly changing electrical activity through electrode stimulation rather than chemicals.

Neural chemicals are the targets of most of our current medicinal drugs that re-jigger brain functions and states, such as anti-depressants or anxiolytics. Neuromodulation is also an immensely powerful way for the brain to adapt flexibly, which is why it's perhaps surprising that the mechanism has rarely been explicitly incorporated into AI methods that mimic the brain.

This week, a team from the University of Liège in Belgium went old school. Using neuromodulation as inspiration, they designed a new deep learning model that explicitly adopts the mechanism to better learn adaptive behaviors. When challenged on a difficult navigational task, the team found that neuromodulation allowed the artificial neural net to better adjust to unexpected changes.

"For the first time, cognitive mechanisms identified in neuroscience are finding algorithmic applications in a multi-tasking context. This research opens perspectives in the exploitation in AI of neuromodulation, a key mechanism in the functioning of the human brain," said study author Dr. Damien Ernst.

Neuromodulation often appears in the same breath as another jargon-y word: neuroplasticity. Simply put, both mean that the brain has mechanisms to adapt; that is, neural networks are flexible, or plastic.

Cellular neuromodulation is perhaps the grandfather of all learning theories in the brain. Famed Canadian psychologist and father of neural networks Dr. Donald Hebb popularized the theory in the mid-20th century; it is now often summarized as "neurons that fire together, wire together." On a high level, Hebbian learning describes how individual neurons flexibly change their activity levels so that they better hook up into neural circuits, which underlie most of the brain's computations.
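The Hebbian rule can be sketched numerically. This is an illustrative toy (the learning rate and activity values are arbitrary assumptions), not code from the study:

```python
import numpy as np

# Hebbian update: a connection strengthens in proportion to the
# correlation of its two neurons' activity
# ("neurons that fire together, wire together").
def hebbian_update(w, pre, post, lr=0.1):
    """Return weights grown by the outer product of post- and pre-synaptic activity."""
    return w + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # activity of three input neurons
post = np.array([1.0, 0.5])       # activity of two output neurons
w = np.zeros((2, 3))              # initial synaptic weights
w = hebbian_update(w, pre, post)
# only connections between co-active pairs grow; the silent input's
# weights stay at zero
```

Run repeatedly over correlated activity patterns, updates like this carve stable circuits out of an initially unstructured network.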

However, neuromodulation goes a step further. Here, neurochemicals such as dopamine don't necessarily directly help wire up neural connections. Rather, they fine-tune how likely a neuron is to activate and link up with its neighbor. These so-called neuromodulators are similar to a temperature dial: depending on context, they either alert a neuron that it needs to calm down, so that it only activates when receiving a larger input, or hype it up, so that it jumps into action after a smaller stimulus.

"Cellular neuromodulation provides the ability to continuously tune neuron input/output behaviors to shape their response to external stimuli in different contexts," the authors wrote. This level of adaptability especially comes into play when we try things that need continuous adjustments, such as how our feet strike uneven ground when running, or complex multitasking navigational tasks.
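The "temperature dial" analogy can be made concrete with a toy activation function; all numbers here are illustrative assumptions, not values from the paper:

```python
import math

# The same stimulus drives the neuron differently depending on the
# neuromodulatory state: the modulator shifts the activation threshold
# (and could also scale the gain) without touching the input itself.
def firing_rate(net_input, threshold, gain=1.0):
    """Sigmoid firing rate: how readily the neuron responds to an input."""
    return 1.0 / (1.0 + math.exp(-gain * (net_input - threshold)))

x = 0.5                                # identical stimulus in both states
calm = firing_rate(x, threshold=1.0)   # "calmed down": needs a larger input
hyped = firing_rate(x, threshold=0.0)  # "hyped up": fires on a smaller input
# hyped > calm even though the stimulus never changed
```

Note that nothing about the input or the connection weights changed between the two calls; only the neuron's readiness to respond did.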

To be very clear, neuromodulation isn't directly changing synaptic weights. (Ugh, what?)

Stay with me. You might know that a neural network, either biological or artificial, is a bunch of neurons connected to each other with different strengths. How readily one neuron changes a neighboring neuron's activity, or how strongly they're linked, is often called the synaptic weight.

Deep learning algorithms are made up of multiple layers of neurons linked to each other through adjustable weights. Traditionally, tweaking the strengths of these connections, or synaptic weights, is how a deep neural net learns (for those interested, the biological equivalent is dubbed synaptic plasticity).
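As a minimal sketch of "learning by tweaking weights", here is one gradient-descent step on a single linear neuron; the numbers are arbitrary and this is a generic illustration, not the article's model:

```python
import numpy as np

# One gradient-descent step: nudge the synaptic weights so the neuron's
# output moves towards a target, shrinking the squared error.
w = np.array([0.5, -0.2])    # synaptic weights
x = np.array([1.0, 2.0])     # input activity
target, lr = 1.0, 0.1

pred = w @ x                 # initial output: 0.1
grad = (pred - target) * x   # gradient of 0.5 * (pred - target)**2 wrt w
w = w - lr * grad            # adjust the weights against the gradient
# the neuron's output after the update (0.55) is closer to the target
```

Stacking many such neurons into layers and applying the same idea via backpropagation is, in essence, how a deep net learns.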

However, neuromodulation doesn't act directly on weights. Rather, it alters how likely a neuron or network is to be capable of changing its connections; that is, its flexibility.

Neuromodulation is a meta-level of control, so it's perhaps not surprising that the new algorithm is actually composed of two separate neural networks.

The first is a traditional deep neural net, dubbed the main network. It processes input patterns and uses a custom method of activation: how likely a neuron in this network is to spark to life depends on the second network, the neuromodulatory network. Here, the neurons don't process input from the environment. Rather, they deal with feedback and context to dynamically control the properties of the main network.

Especially important, said the authors, is that the modulatory network scales in size with the number of neurons in the main one, rather than with the number of their connections. It's what makes their network, dubbed the NMN, different, they said, because this setup "allows us to extend more easily to very large networks."
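A hypothetical sketch of this two-network setup (not the authors' code; the layer sizes, the tanh activation, and the per-neuron gain/bias parameterization are all assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Main network: processes the input. Modulatory network: maps context to
# one (gain, bias) pair per main-network neuron, so its output grows with
# the neuron count rather than the connection count.
n_in, n_hidden, n_ctx = 4, 8, 3
W_main = rng.normal(size=(n_hidden, n_in))      # main-network weights
W_mod = rng.normal(size=(2 * n_hidden, n_ctx))  # modulatory weights

def forward(x, context):
    mod = W_mod @ context                 # 2 modulation parameters per neuron
    gain, bias = mod[:n_hidden], mod[n_hidden:]
    z = W_main @ x                        # ordinary weighted input
    return np.tanh(gain * z + bias)       # activation shaped by the modulator

x = rng.normal(size=n_in)
out_a = forward(x, np.array([1.0, 0.0, 0.0]))  # same input, context A
out_b = forward(x, np.array([0.0, 1.0, 0.0]))  # same input, context B
# out_a and out_b differ: context, not input, changed the behavior
```

The key property is visible in the shapes: `W_mod` has `2 * n_hidden` output rows, proportional to the neuron count, not the `n_hidden * n_in` connection count of the main network.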

To gauge the adaptability of their new AI, the team pitted the NMN against traditional deep learning algorithms in a scenario using reinforcement learning, that is, learning through wins and mistakes.

In two navigational tasks, the AI had to learn to move towards several targets through trial and error alone. It's somewhat analogous to you trying to play hide-and-seek while blindfolded in a completely new venue. The first task is relatively simple: you move towards a single goal, and you can take off your blindfold to check where you are after every step. The second is more difficult, in that you have to reach one of two marks. The closer you get to the actual goal, the higher the reward (candy in real life, and a digital analogue for AI). If you stumble onto the other, you get punished (the AI equivalent of a slap on the hand).
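A reward scheme of this kind might look like the following one-dimensional toy; the function, radii, and penalty scale are invented for illustration and are not the paper's actual reward:

```python
# Reward shaping for a two-target task: reaching the true goal pays off,
# stumbling onto the decoy is punished, and otherwise a small shaping
# term makes "closer to the goal" slightly better.
def reward(pos, goal, decoy, hit_radius=0.1):
    if abs(pos - goal) < hit_radius:
        return 1.0                      # reached the rewarded target ("candy")
    if abs(pos - decoy) < hit_radius:
        return -1.0                     # hit the wrong mark ("slap on the hand")
    return -0.01 * abs(pos - goal)      # gentle gradient towards the goal
```

An agent maximizing this signal through trial and error learns both to approach the goal and to steer clear of the decoy.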

Remarkably, NMNs learned both faster and better than traditional reinforcement learning deep neural nets. Regardless of how they started, NMNs were more likely to figure out the optimal route towards their target in much less time.

Over the course of learning, NMNs not only used their neuromodulatory network to change the main one, they also adapted the modulatory network itself (talk about meta!). It means that as the AI learned, it didn't just flexibly adapt its learning; it also changed how it influences its own behavior.

In this way, the neuromodulatory network is a bit like a library of self-help books: you don't just solve a particular problem, you also learn how to solve problems. The more information the AI got, the faster and better it fine-tuned its own strategy to optimize learning, even when feedback wasn't perfect. The NMN also didn't like to give up: even when already performing well, the AI kept adapting to further improve itself.

"Results show that neuromodulation is capable of adapting an agent to different tasks and that neuromodulation-based approaches provide a promising way of improving adaptation of artificial systems," the authors said.

The study is just the latest in a push to incorporate more biological learning mechanisms into deep learning. We're at the beginning: neuroscientists, for example, are increasingly recognizing the role of non-neuronal brain cells in modulating learning, memory, and forgetting. Although computational neuroscientists have begun incorporating these findings into models of biological brains, so far AI researchers have largely brushed them aside.

It's difficult to know which brain mechanisms are necessary substrates for intelligence and which are evolutionary leftovers, but one thing is clear: neuroscience is increasingly providing AI with ideas outside its usual box.

Image Credit: Image by Gerd Altmann from Pixabay


MRI Analysis Tries To Make Sense of the Senses – Technology Networks

If we cross a road with our smartphone in view, a car horn or engine noise will startle us. In everyday life we can easily combine information from different senses and shift our attention from one sensory input to another, for example from seeing to hearing. But how does the brain decide which of the two senses it will focus attention on when the two interact? And are these mechanisms reflected in the structure of the brain?

To answer these questions, scientists at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and the Computational Neuroscience and Cognitive Robotics Centre at the University of Birmingham measured how sensory stimuli are processed in the brain. In contrast to previous studies, they did not restrict their observations to the surface of the cerebral cortex. For the first time, they also measured the sensory signals at different depths in the cortex. The researchers' findings suggest that our brains conduct the multi-sensory flow of information via distinct circuits, right down to the smallest windings of this highly folded brain structure.

While the participants in their study were lying in a magnetic resonance imaging (MRI) scanner, the scientists showed them visual symbols on a screen while simultaneously playing sounds. Beforehand, the participants had been asked to explicitly focus their attention on either the audible or visible aspect of the stimuli. The neurophysicists Robert Turner, Robert Trampel and Rémi Gau then analyzed at which exact points the sensory stimuli were being processed. Two challenges needed to be overcome. "The cerebral cortex is only two to three millimeters thick. So we needed a very high spatial resolution (of less than one millimeter) during data acquisition," explains Trampel, who co-directed the study at the MPI CBS. "Also, due to the dense folding of the cerebral cortex, we had to digitally smooth it and break it down into different layers, in order to be able to precisely locate the signals. This was all done on a computer, of course."

The results showed that when participants heard a sound, visual areas of their brains were largely switched off. This happened regardless of whether they focused on the audible or visible aspect of the stimuli. However, if they strongly attended to the auditory input, brain activity decreased, particularly in the regions representing the center of the visual field. Thus, it seems that sound can strongly draw our attention away from what we're looking at.

In auditory brain regions the researchers also observed, for the first time, that the activity pattern across different cortical layers changed when participants were presented with only sounds. The situation was different when participants only perceived "something to the eye": in that case there was no change. Gau sums up: "So when we have to process different sensory impressions at the same time, different neuron circuits become active, depending on what we focus our attention on. We have now been able to make these interactions visible through novel computerized experiments."

Reference: Gau et al. (2020) Resolving multisensory and attentional influences across cortical depth in sensory cortices. eLife. DOI: https://doi.org/10.7554/eLife.46856

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.


Neuroscience Antibodies & Assays Market Increasing Demand with Leading Player, Comprehensive Analysis, Forecast 2026 – Jewish Life News

The Neuroscience Antibodies & Assays Market report 2020-2026 provides a comprehensive analysis of the current market for neuroscience antibodies and assays. It determines the market size of Neuroscience Antibodies & Assays and also determines the factors that control market growth. The report begins with a basic overview of the Neuroscience Antibodies & Assays industry and then goes into the details of the Neuroscience Antibodies & Assays market.

Neuroscience Antibodies & Assays Market was valued at USD 2.42 Billion in 2018 and is projected to reach USD 5.14 Billion by 2026, growing at a CAGR of 9.7% from 2019 to 2026.
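As a quick sanity check of these figures (assuming the 9.7% CAGR compounds over the eight years from 2019 to 2026):

```python
# Does USD 2.42B growing at 9.7% per year for 8 years reach USD 5.14B?
base, target, years = 2.42, 5.14, 8
projected = base * (1 + 0.097) ** years            # ~5.08B
implied_cagr = (target / base) ** (1 / years) - 1  # ~9.9%
print(f"projected: {projected:.2f}B, implied CAGR: {implied_cagr:.1%}")
```

The compounded projection (~USD 5.08B) and the implied growth rate (~9.9%) agree with the quoted figures to within rounding, so the numbers are mutually consistent.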

The Neuroscience Antibodies & Assays market report provides detailed information on key factors, opportunities, challenges and industry trends, and their impact on the market. The report also includes company data and details of operations. It further contains information about the pricing strategy, brand strategy and target customers of the Neuroscience Antibodies & Assays market, and provides the distributor/dealer list offered by the company. This research report also covers the main competition and market development, with forecasts and expected growth rates for the coming years. Market data and analysis of the main factors that drive and influence growth come from a combination of primary and secondary sources.

Get | Download Sample Copy @https://www.verifiedmarketresearch.com/download-sample/?rid=28342&utm_source=JLN&utm_medium=002

[Note: our free sample report provides a brief introduction to the report, including the table of contents, list of tables and figures, competitive landscape and geographic segmentation, as well as innovations and future developments based on research methods.]

The report profiles the top manufacturers, covering company profile, sales volume, product specifications, revenue (million/billion USD) and market share.

Global Neuroscience Antibodies & Assays Market Competitive Insights

The competitive analysis serves as a bridge between manufacturers and other participants that are available on the Neuroscience Antibodies & Assays Market. The report includes a comparative study of Top market players with company profiles of competitive companies, Neuroscience Antibodies & Assays Market product innovations and cost structure, production sites and processes, sales details of past years and technologies used by them. The Neuroscience Antibodies & Assays Market report also explains the main strategies of competitors, their SWOT analysis and how the competition will react to changes in marketing techniques. In this report, the best market research techniques were used to provide the latest knowledge about Neuroscience Antibodies & Assays Market to competitors in the market.

Global Neuroscience Antibodies & Assays Market Segmentation information

The report provides important insights into the various market segments presented to simplify the assessment of the global Neuroscience Antibodies & Assays Market. These market segments are based on several relevant factors, including Neuroscience Antibodies & Assays Market product type or services, end users or applications, and regions. The report also includes a detailed analysis of the regional potential of the Neuroscience Antibodies & Assays Market, which includes the difference between production values and demand volumes, as well as the presence of market participants and the growth of each region over the given forecast period.

Ask For Discount (Exclusive Offer) @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=28342&utm_source=JLN&utm_medium=002

Neuroscience Antibodies & Assays Market: Regional Analysis

As part of regional analysis, important regions such as North America, Europe, the MEA, Latin America, and Asia Pacific have been studied. The regional Neuroscience Antibodies & Assays markets are analyzed based on share, growth rate, size, production, consumption, revenue, sales, and other crucial factors. The report also provides country-level analysis of the Neuroscience Antibodies & Assays industry.

Table of Contents

Introduction: The report starts off with an executive summary, including top highlights of the research study on the Neuroscience Antibodies & Assays industry.

Market Segmentation: This section provides detailed analysis of type and application segments of the Neuroscience Antibodies & Assays industry and shows the progress of each segment with the help of easy-to-understand statistics and graphical presentations.

Regional Analysis: All major regions and countries are covered in the report on the Neuroscience Antibodies & Assays industry.

Market Dynamics: The report offers deep insights into the dynamics of the Neuroscience Antibodies & Assays industry, including challenges, restraints, trends, opportunities, and drivers.

Competition: Here, the report provides company profiling of leading players competing in the Neuroscience Antibodies & Assays industry.

Forecasts: This section is filled with global and regional forecasts, CAGR and size estimations for the Neuroscience Antibodies & Assays industry and its segments, and production, revenue, consumption, sales, and other forecasts.

Recommendations: The authors of the report have provided practical suggestions and reliable recommendations to help players to achieve a position of strength in the Neuroscience Antibodies & Assays industry.

Research Methodology: The report provides clear information on the research approach, tools, and methodology and data sources used for the research study on the Neuroscience Antibodies & Assays industry.

What will you find out from the global Neuroscience Antibodies & Assays Market Report?

The report contains statistical analyses of the current and future status of the global Neuroscience Antibodies & Assays Market with a forecast to 2026. It contains detailed information on manufacturers, raw material suppliers and buyers, with their trade outlook for 2020-2026. It informs you about the most important drivers, technologies and trends that will shape the global Neuroscience Antibodies & Assays Market in the near future. It adds an exclusive market segmentation, broken down by product type, end user and region. Finally, it offers strategic perspectives on market dynamics, the current production process and applications.

Complete Report is Available @ https://www.verifiedmarketresearch.com/product/Neuroscience-Antibodies-&-Assays-Market/?utm_source=JLN&utm_medium=002

About Us:

Verified Market Research partners with clients to provide insight into strategic and growth analytics: data that helps achieve business goals and targets. Our core values include trust, integrity, and authenticity for our clients.

Our research studies help our clients to make superior data-driven decisions, capitalize on future opportunities, optimize efficiency and keep them competitive by working as their partner to deliver the right information without compromise.

Contact Us:

Mr. Edwyne Fernandes
Call: +1 (650) 781 4080
Email: [emailprotected]


Your brain isn’t the same in virtual reality as it is in the real world – Massive Science

Virtual Reality (VR) is not just for video games. Researchers use it in studies of brains from all kinds of animals: bees, fish, rats and, of course, humans. Sadly, this does not mean that the bees get a tiny VR headset. Instead, the setup often consists of either normal computer screens surrounding the subject, or a special cylindrical screen. This has become a powerful tool in neuroscience, because it has many advantages for researchers that allow them to answer new questions about the brain.

For one, the subject does not have to physically move for the world around them to change. This makes it easier to study the brain. Techniques such as functional magnetic resonance imaging (fMRI) can only be used on stationary subjects. With VR, researchers can ask people to navigate through a virtual world by pressing keys, while their head remains in the same place, which allows the researchers to image their brain.

VR has become a powerful tool in neuroscience. (Image: FDA)

The researchers can also control a virtual environment much more precisely than they can control the real world. They can put objects in the exact places they want, and they can even manipulate the environment during an experiment. For example, neuroscientists from Harvard University were able to change the effort a zebrafish had to put in to travel the same distance in VR, which causes zebrafish to change how strongly they move their tails. Using this experiment, researchers determined which parts of the zebrafish brain are responsible for controlling their swimming behavior. They could never have performed such a manipulation in the real world.

If you've ever experienced VR, you know that it is still quite far from the real world. And this has consequences for how your brain responds to it.

One of the issues with VR is the limited number of senses it works on. Often the environment is only projected on a screen, giving visual input, without the subject getting any other inputs, such as touch or smell. For example, mice rely heavily on their whiskers when exploring an environment. In VR, their whiskers won't give them any input, because they won't be able to feel when they approach a wall or an object.

VR cannot replicate how mice rely on their whiskers to navigate. (Image: adapted from Pixabay by Dori Grijseels)

Another issue is the lack of proprioception, the feedback you get from your body about the position of your limbs. Pressing a button to walk forward is not the same as actually moving your legs and walking around. Similarly, subjects won't have any input from their vestibular system, which is responsible for balance and spatial orientation. This is also the reason some people get motion sickness when they are wearing VR headsets.

When VR is used for animal studies, the animals are often "headfixed," meaning they cannot turn their head. This is needed to be able to use a microscope to look at the cells in their brain. However, it poses a problem, specifically for navigation, as animals use a special type of cell, called a "head direction cell," in navigation tasks. These cells track the orientation of the head of an animal. And when the mouse can't move its head, the head direction cells can't do their job.

This is especially the case for cells in the hippocampus, the part of your brain that is responsible for navigation, which therefore relies heavily on inputs that give you information about your location and your direction.

Neurons talk to each other through electrical signals called action potentials, or spikes. The number of spikes per second, called the "firing frequency," is an important measure of how much information is being sent between neurons. A 2015 study found that, in VR, the firing frequency of neurons in a mouse is reduced by over two thirds, meaning that the cells don't send as much information.

The same study also showed that the cells are less reliable. The researchers specifically looked at place cells, cells that respond to a particular location in the environment and are incredibly important for navigation. In the real world, these cells send spikes on about 80% of the occasions that the animal is in a particular location. However, in VR, this is reduced to about 30%: when an animal visits a location ten times, the cells will send spikes during only three of those visits. This means the animals are not as sure about their exact location.
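These reliability figures translate directly into expected visit counts, as a small simulation (illustrative only, not the study's analysis) shows:

```python
import random

random.seed(42)

# A place cell that fires on 80% of visits to its location (real world)
# versus 30% of visits (VR): how many of 10 visits does it mark, on average?
def visits_with_spikes(p_fire, n_visits=10, n_trials=10_000):
    """Average number of visits (out of n_visits) on which the cell fires."""
    total = sum(
        sum(random.random() < p_fire for _ in range(n_visits))
        for _ in range(n_trials)
    )
    return total / n_trials

real = visits_with_spikes(0.8)  # ~8 of 10 visits marked
vr = visits_with_spikes(0.3)    # ~3 of 10 visits marked
```

A downstream circuit reading out this cell therefore gets far fewer, noisier position reports in VR, which is one concrete way the animal's location estimate degrades.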

Another important feature of brain activity is brainwaves, or neural oscillations. These represent the overall activity of all the neurons in your brain, which rises and falls at a regular interval. Theta oscillations, brainwaves at a frequency of 4-7 Hz, play an important part in navigation. Interestingly, scientists found that rats have a lower theta oscillation frequency in VR than in the real world. This effect on oscillations is not limited to navigation tasks; it was also found in humans who played golf in the real world and in VR. It is most likely caused by the lack of vestibular input, but scientists are still unsure of the consequences of such changes in frequency.
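To make the theta band concrete, here is a toy analysis: synthesize a 6 Hz oscillation (inside the 4-7 Hz theta band) with added noise and recover its frequency with an FFT. The sample rate, duration, and noise level are arbitrary assumptions:

```python
import numpy as np

# Build 4 seconds of a noisy 6 Hz "theta" signal sampled at 250 Hz.
fs = 250.0                       # sample rate (Hz)
t = np.arange(0, 4, 1 / fs)      # time axis, 1000 samples
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 6.0 * t) + 0.3 * rng.normal(size=t.size)

# The FFT peak recovers the oscillation frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[spectrum.argmax()]  # dominant frequency, ~6 Hz
```

This is essentially how a frequency shift like the VR-versus-real-world difference would show up in recorded data: the spectral peak of the oscillation moves.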

We know that we should be critical when interpreting results from neuroscience studies that use VR. Although VR is a great tool, it is far from perfect, and it affects the way our brain acts. We should not readily accept conclusions from VR studies without first considering how the use of VR in a study may have affected those conclusions. Hopefully, as our methods get more sophisticated, the differences in brain activity between VR and the real world will also become smaller.


Dell Children’s Medical Center to spend more than $300 million over next 3 years to expand Mueller campus – Community Impact Newspaper

The upcoming $113 million Dell Children's Specialty Pavilion will open in spring 2021 with cardiovascular, neuroscience and cancer programs, according to the pediatric hospital. (Rendering courtesy Dell Children's Specialty Pavilion)

The Dell Children's Medical Center campus in Mueller is set to break ground on an expansion plan following the announcement of a significant investment over the next three years.

The pediatric hospital Feb. 10 announced a $300 million investment in capital, equipment and programming over the next three years, made possible due to a substantial investment by Ascension, as well as a $30 million matching grant from the Michael & Susan Dell Foundation, according to a company news release.

"The time is now to continue expanding complex pediatric care in Central Texas," said Christopher Born, the president of Dell Children's Medical Center, in the Feb. 10 news release.

Dell Children's will use $113 million of the investment funds to construct its new pediatric outpatient facility, which will house cardiovascular, neuroscience and cancer programs, as previously reported by Community Impact Newspaper.

The four-story, 161,000-square-foot facility, named the Dell Children's Specialty Pavilion, is slated to break ground soon and open its doors to patients in spring 2021.

Investment dollars will also go to provide backing for a new partnership with Dell Medical School at The University of Texas to develop a maternal fetal medicine program that will add a delivery unit and neonatal intensive care unit expansion at Dell Children's Medical Center, according to the news release.

Dell Children's Medical Center announced it will additionally add more cardiac ICU beds at its main hospital, allowing for the expansion of its pediatric heart program to include heart transplant surgery.

Read more:
Dell Children's Medical Center to spend more than $300 million over next 3 years to expand Mueller campus - Community Impact Newspaper

Cheap Diuretic Pill Could Help With Autism Symptoms, New Findings Suggest – Technology Networks

It is possible to improve symptoms in autistic children with a cheap generic drug, our latest study shows. The drug, bumetanide, is widely used to treat high blood pressure and swelling, and it costs no more than £10 for a month's supply of pills.

Autism is a neurodevelopmental disorder which is more common in boys than girls. According to the World Health Organization, 1%-2% of people have the condition.

Autism can be diagnosed as early as two years old or even at 18 months. Children with moderate or severe autism can find social situations difficult. They may not make eye contact with their parents or take part in cooperative play and conversation. They may also show repetitive behaviour and have an intense interest in objects. This behaviour not only affects engagement in family activities but can also make it harder for them to make friends at school.

We were motivated to test bumetanide as a result of background findings which suggested that the drug changed important brain chemicals in mouse models of autism, and also by some studies, including in autistic teenagers, showing that bumetanide may have beneficial effects.

Our research group, an international collaboration between researchers at several institutions in China and the University of Cambridge, wanted to focus on young children with moderate and severe autism and to test whether bumetanide could improve their symptoms. We also wanted to understand the mechanism by which the drug achieved this. Understanding how bumetanide worked could lead to future drug development to treat moderate and severe autism.

There were 81 children with moderate to severe autism in our study: 42 in the bumetanide group, who received 0.5mg of bumetanide twice a day for three months, and 39 children in the control group, who received no treatment. The children were three to six years of age.

Some of the children had their brains scanned using magnetic resonance spectroscopy (MRS): 38 in the bumetanide group and 17 in the control group. MRS is a non-invasive way of measuring chemicals in the brain. For our study, we measured brain chemicals called GABA and glutamate, which are important for learning and brain plasticity (the brain's ability to change and adapt as a result of experience).

In the bumetanide group, autism symptoms improved as measured by the childhood autism rating scale (CARS) and also by a doctor's overall impression. The doctors who were assessing symptom change were blind to treatment; that is, they were unaware of who was receiving bumetanide. Improvements in symptoms were associated with changes in the brain's GABA/glutamate ratio and, in particular, with decreases in GABA.

Looking specifically at what improved on the rating scale, we found decreases in repetitive behaviour and decreased interest in objects. These reductions in unsociable behaviour allow more time for increases in social behaviour.

One of the mothers of a four-year-old boy, living in a rural area outside Shanghai, said that her child, who was in the bumetanide group, became better at making eye contact with family members and relatives and was able to take part in more family activities.

We also found that the drug is safe for young autistic children and has no significant side-effects. Bumetanide could improve the quality of life and wellbeing of autistic children. Existing treatments are predominantly behavioural, including Applied Behaviour Analysis or ABA. Most families, particularly those in rural areas, will have limited or no access to these treatments, which are generally only available in specialised centres. The use of bumetanide would mean that there would even be a treatment for autistic children living in rural areas.

This study is important and exciting because bumetanide can improve social learning and reduce autism symptoms when the brains of these children are still developing. We now know that human brains are still in development until late adolescence and early adulthood. Further research is now needed to confirm the effectiveness of bumetanide in treating autism.

Barbara Jacquelyn Sahakian, Professor of Clinical Neuropsychology, University of Cambridge, and Christelle Langley, Postdoctoral Research Associate, Cognitive Neuroscience, University of Cambridge

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read more here:
Cheap Diuretic Pill Could Help With Autism Symptoms, New Findings Suggest - Technology Networks

AAAI 2020 | What's Next for Deep Learning? Hinton, LeCun, and Bengio Share Their Visions – Synced

This is an updated version.

The "Godfathers of AI" and 2018 ACM Turing Award winners Geoffrey Hinton, Yann LeCun, and Yoshua Bengio shared a stage in New York on Sunday night at an event organized by the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020). The trio of researchers have made deep neural networks a critical component of computing, and in individual talks and a panel discussion they shared their views on the current challenges facing deep learning and where it should be heading.

Introduced in the mid-1980s, deep learning gained traction in the AI community in the early 2000s. The year 2012 saw the publication of the CVPR paper Multi-column Deep Neural Networks for Image Classification, which showed how max-pooling CNNs on GPUs could dramatically improve performance on many vision benchmarks, while a similar system introduced months later by Hinton and a University of Toronto team won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. These events are regarded by many as the beginning of a deep learning revolution that has transformed AI.

Deep learning has been applied to speech recognition, image classification, content understanding, self-driving, and much more. And according to LeCun, who is now Chief AI Scientist at Facebook, the current services offered by Facebook, Instagram, Google, and YouTube are all built around deep learning.

Deep learning does, however, have its detractors. Johns Hopkins University professor and computer vision pioneer Alan Yuille warned last year that deep learning's potential in computer vision has hit a bottleneck.

"We read a lot about the limitations of deep learning today, but most of those are actually limitations of supervised learning," LeCun explained in his talk. Supervised learning typically refers to learning with labelled data. LeCun told the New York audience that unsupervised learning without labels, or self-supervised learning as he prefers to call it, may be a game changer that ushers in AI's next revolution.

"This is an argument that Geoff [Hinton] has been making for decades. I was skeptical for a long time but changed my mind," said LeCun.

There are two approaches to object recognition. There's the good old-fashioned parts-based approach, with sensible modular representations, but this typically imposes a lot of hand engineering. And then there are convolutional neural networks (CNNs), which learn everything end to end. CNNs get a huge win by wiring in the fact that if a feature is good in one place, it's good somewhere else. But their approach to object recognition is very different from human perception.

This informed the first part of Hinton's talk, which he personally directed at LeCun: "It's about the problems with CNNs and why they're rubbish."

CNNs are designed to cope with translations, but they're not so good at dealing with other effects of changing viewpoints such as rotation and scaling. One obvious approach is to use 4D or 6D maps instead of 2D maps, but that is very expensive. And so CNNs are typically trained on many different viewpoints in order for them to be able to generalize across viewpoints. "That's not very efficient," Hinton explained. "We'd like neural nets to generalize to new viewpoints effortlessly. If it learned to recognize something, then you make it 10 times as big and you rotate it 60 degrees, it shouldn't cause them any problem at all. We know computer graphics is like that and we'd like to make neural nets more like that."
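The translation-versus-rotation asymmetry Hinton describes can be made concrete with a toy convolution. The sketch below is illustrative only (plain Python, not from the talk): shifting the input shifts the convolution's output in lockstep, but rotating the input does not simply rotate the output, because the kernel itself stays fixed.

```python
def conv2d_valid(img, kernel):
    """'Valid' 2D cross-correlation using plain Python lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(img) - kh + 1
    out_w = len(img[0]) - kw + 1
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def shift_right(img):
    """Shift an image one pixel to the right, padding the left with zeros."""
    return [[0] + row[:-1] for row in img]

def rot90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

# A vertical-edge detector and an image with a single bright pixel.
kernel = [[1, -1], [1, -1]]
img = [[0] * 5 for _ in range(5)]
img[2][1] = 1.0

out = conv2d_valid(img, kernel)

# Translation equivariance: shifting the input shifts the output the same way.
assert conv2d_valid(shift_right(img), kernel) == shift_right(out)

# No such guarantee for rotation: rotating the input 90 degrees does not
# produce the rotated output, since the (unrotated) kernel now sees a
# horizontal edge where it was tuned for a vertical one.
assert conv2d_valid(rot90(img), kernel) != rot90(out)
```

This built-in shift symmetry is the "huge win" CNNs get for translation, and its absence for rotation and scaling is why they need to see many viewpoints during training.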

Hinton believes the answer is capsules. A capsule is a group of neurons that learns to represent a familiar shape or part. Hinton says the idea is to build more structure into neural networks and hope that the extra structure helps them generalize better. Capsules are an attempt to correct the things that are wrong with CNNs.

The capsules Hinton introduced are Stacked Capsule Autoencoders, which first appeared at NeurIPS 2019 and differ in many ways from the previous capsule versions from ICLR 2018 and NIPS 2017, which used discriminative learning. Hinton said even at the time he knew this was a bad idea: "I always knew unsupervised learning was the right thing to do, so it was bad faith to do the previous models." The 2019 capsules use unsupervised learning.

LeCun noted that although supervised learning has proven successful in, for example, speech recognition and content understanding, it still requires a large amount of labelled samples. Reinforcement learning works great for games and in simulations, but since it requires too many trials, it's not really applicable in the real world.

The first challenge LeCun discussed was how models can be expected to learn more with fewer labels, fewer samples or fewer trials.

LeCun now supports the unsupervised learning (self-supervised learning) solution Hinton first proposed some 15 years ago. "Basically it's the idea of learning to represent the world before learning a task, and this is what babies do," LeCun explained, suggesting that really figuring out how humans learn so quickly and efficiently may be the key that unlocks self-supervised learning's full potential going forward.

Self-supervised learning is largely responsible for the success of natural language processing (NLP) over the last year and a half or so. The idea is to show a system a piece of text, image, or video input, and train a model to predict the piece that's missing, for example predicting missing words in a text, which is what transformers and BERT-like language systems were built to do.
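The "predict the missing piece" idea can be sketched with a toy fill-in-the-blank model. This is emphatically not BERT (no transformer, no learned embeddings), just count statistics over a tiny made-up corpus, but it shows where self-supervision's training labels come from: the data itself, with no human annotation.

```python
from collections import Counter, defaultdict

# Toy corpus. The "labels" are just the words themselves, so no manual
# annotation is needed: that is the core trick of self-supervised learning.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Learn, for each (left, right) context pair, which middle words occur there.
context_counts = defaultdict(Counter)
for left, middle, right in zip(corpus, corpus[1:], corpus[2:]):
    context_counts[(left, right)][middle] += 1

def predict_missing(left, right):
    """Guess the masked word between `left` and `right` from corpus counts."""
    counts = context_counts.get((left, right))
    return counts.most_common(1)[0][0] if counts else None

# Mask a word in "... sat ___ the ..." and recover it from context.
print(predict_missing("sat", "the"))  # -> "on"
```

A real masked language model replaces these lookup tables with a neural network that generalizes to contexts it has never seen, but the training signal is constructed the same way: hide part of the input and predict it.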

But the success of transformers and BERT et al. has not transferred to the image domain, because it turns out to be much more difficult to represent uncertainty in predictions over images or video than over text, since images and video are not discrete. It's practical to produce distributions over all the words in a dictionary, but it's hard to represent distributions over all possible video frames. And this is, in LeCun's view, the main technical problem we have to solve if we want to apply self-supervised learning to a wider variety of modalities like video.

LeCun proposed that one solution may lie in latent-variable energy-based models: "An energy-based model is kind of like a probabilistic model, except you don't normalize. And one way to train the energy-based model is to give low energy to samples that you observe and high energy to samples you do not observe."
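That training recipe can be sketched in a few lines. Everything below is my own minimal illustration, not from the talk: the energy function is an assumed one-parameter quadratic E(x) = (x - m)^2, and the hinge-style margin on negative samples is a standard contrastive device added so the push on negatives eventually stops.

```python
import random

# A one-parameter energy-based model: E(x) = (x - m)^2.
# Low energy near m means the model "prefers" points near m. No normalizing
# constant is ever computed, which is the point LeCun makes about EBMs.
def energy(x, m):
    return (x - m) ** 2

rng = random.Random(0)
data = [rng.gauss(3.0, 0.5) for _ in range(200)]  # observations cluster near 3

m, lr, margin = 0.0, 0.05, 1.0
for _ in range(2000):
    x_pos = rng.choice(data)            # an observed sample
    x_neg = rng.uniform(-10.0, 10.0)    # an unobserved ("negative") sample
    # Push the energy of the observed sample DOWN (gradient descent on E;
    # dE/dm = -2 * (x - m)).
    m -= lr * (-2 * (x_pos - m))
    # Push the energy of the negative sample UP, but only while it is still
    # below the margin (gradient ascent on E).
    if energy(x_neg, m) < margin:
        m += lr * (-2 * (x_neg - m))

print(f"learned m = {m:.2f}")  # lands near 3.0, the centre of the observed data
```

After training, the learned energy is low where the data lives and high elsewhere, which is all an energy-based model needs; turning that energy into calibrated probabilities would require the normalization the approach deliberately skips.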

In his talk, LeCun touched on two other challenges:

LeCun opined that nobody currently seems to have a good answer to either of these challenges, and said he remains open to, and looks forward to, any possible ideas.

Yoshua Bengio, meanwhile, has shifted his focus to consciousness. Following cognitive neuroscience, he believes the time is ripe for ML to explore consciousness, which he says could bring new priors to help systematic generalization. Ultimately, Bengio hopes such a research direction could allow DL to expand from System 1 to System 2, referring to a dichotomy introduced by Daniel Kahneman in his book Thinking, Fast and Slow. System 1 represents what current deep learning is very good at: intuitive, fast, automatic processing anchored in sensory perception. System 2, meanwhile, represents processing that is rational, sequential, slow, logical, conscious, and expressible with language.

Before he dived into the valuable lessons that can be learned from consciousness, Bengio briefed the audience on cognitive neuroscience. In the previous century, he said, working on consciousness was seen as kind of taboo in many sciences for all kinds of reasons. Fortunately this has changed, particularly in cognitive neuroscience, with the Global Workspace Theory by Baars and recent work this century by Dehaene establishing theories that explain a lot of the objective neuroscience observations.

Bengio likened conscious processing to a bottleneck and asked why such a bottleneck would be meaningful: "Why is it that the brain would have this kind of bottleneck, where information has to go through this bottleneck, just a few elements to be broadcast to the rest of the brain? Why would we have a short-term memory that only contains like six or seven elements? It doesn't make sense."

Bengio said the bottom line is to "get the magic out of consciousness," and proposed the consciousness prior, a new prior for learning representations of high-level concepts of the kind human beings manipulate with language. The consciousness prior is inspired by cognitive neuroscience theories of consciousness, and can be combined with other priors to help disentangle abstract factors from each other. "What this is saying is that at that level of representation, our knowledge is represented in this very sparse graph, where each of the dependencies, these factors, involve two, three, four or five entities, and that's it."

Consciousness can also provide inspiration on how to build models, Bengio explained: "Agents are at a particular time at a particular place and they do something and they have an effect. And eventually that effect could have constant consequences all over the universe, but it takes time. And so if we can build models of the world where we have the right abstractions, where we can pin down those changes to just one or a few variables, then we will be able to adapt to those changes, because we don't need as much data, as much observation, in order to figure out what has changed."

So whats required if deep learning is going to reach human-level intelligence? Bengio referenced his previous suggestions, that missing pieces of the puzzle include:

In a panel discussion, Hinton, LeCun and Bengio were asked how they reconcile their research approaches with those of colleagues committed to more traditional methods. Hinton had been conspicuously absent from some AAAI conferences, and hinted at why in responding: "The last time I submitted a paper to AAAI, I got the worst review I ever got. And it was mean. It said 'Hinton has been working on this idea for seven years [vector representations] and nobody's interested. Time to move on.'"

Hinton spoke of his efforts to find common ground and move on: "Right now we're in a position where we should just say, let's forget the past and let's see if we can take the idea of doing gradient descent in great big systems of parameters. And let's see if we can take that idea, because that's really all we've discovered so far, that really works. The fact that that works is amazing. And let's see if we can learn to do reasoning like that."

Author: Fangyu Cai & Yuan Yuan | Editor: Michael Sarazen


Read more:
AAAI 2020 | Whats Next for Deep Learning? Hinton, LeCun, and Bengio Share Their Visions - Synced

Unionized HealthPartners Workers OK Strike February 07, 2020 – Twin Cities Business Magazine

About 1,800 unionized HealthPartners workers are slated to strike later this month if they're unable to reach an agreement with the health care system.

On Thursday, 95 percent of SEIU Healthcare Minnesota workers voted to authorize a seven-day strike, which would begin Feb. 19. The union filed a 10-day strike notice on Friday morning, said Kate Lynch, VP of SEIU Healthcare Minnesota.

"It feels like it's profits over patients and employees," Lynch said outside the HealthPartners Neuroscience Center in St. Paul. She added that workers are willing to go back to the table at any time.

SEIU and HealthPartners last met to negotiate on Jan. 31; the marathon session spilled into the early morning of the following day. HealthPartners leaders have proposed increases to workers' health insurance premiums and co-pays. SEIU, which represents nurses, dental hygienists, physician assistants, and other frontline workers at more than 30 HealthPartners locations, has rejected the health system's proposal.

The unions contract with HealthPartners expired Feb. 1.

Health insurance premiums and co-pays have remained the same for SEIU members for more than a decade, union officials said.

For their part, HealthPartners leaders maintain that their proposal is fair and reasonable. In a statement, they said the strike vote is disappointing.

"We remain committed to reaching an agreement on a new contract that is fair to all," HealthPartners officials said in a statement.

A federal mediator will need to call both parties back to the table, according to HealthPartners.

The health system didn't say whether it had a contingency plan in place if the strike goes through.

"We can't really tell you what kind of care you're going to get when we're not there," Lynch said when asked how the union would address patients' concerns about the strike.

Read the original:
Unionized HealthPartners Workers OK Strike February 07, 2020 - Twin Cities Business Magazine

The science behind learning soft skills and hard skills on Brains Byte Back – The Sociable

On this podcast we learn the difference between soft skills and hard skills, why they are important, and how we can sharpen our skills.

Learning a new skill can be hard, especially if it is not something we are naturally good at. However, there is research that can help us understand what parts of the brain need to be activated in order to learn, and what we need to do to activate them.

Listen to this podcast below and on Spotify, Anchor, Apple Podcasts, Breaker, Google Podcasts, Overcast, and Radio Public.

Joining us on the show is Todd Maddox, an expert in the area of neuroscience with more than 200 peer-reviewed research reports and more than 12,000 citations under his belt. He is also the founder and CEO of Cognitive Design & Statistical Consulting and has a Ph.D. from the University of California, Santa Barbara.

And for our Neuron to Something segment, we have the results of a new survey which suggests that the public wouldn't trust companies to scan social media posts for signs of depression.

Here is the original post:
The science behind learning soft skills and hard skills on Brains Byte Back - The Sociable