A new way of predicting which kids will succeed in school: Look at their genes – NBC News

This article about the polygenic score was produced in partnership with The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. This is part 3 of the series Gifted Education's Race Problem.

Many factors boost a child's chance of success in school, like having wealthy parents who can afford tutors. But recent research has raised another possibility, one that is discomforting to many: the idea that scientists might someday be able to spot the genetic markers associated with academic performance.

To do this, researchers are turning to a relatively new genetic approach called the polygenic score, which assesses a person's likelihood for a specific future based on a combination of genetic variables. It's a research technique that some scientists are using to assess obesity or cancer risk, for instance. Now, researchers are exploring this approach in non-medical contexts, like academic or athletic success.

The scientists studying genetic markers in education are trying to untangle how nature and nurture together explain school performance. In principle, genetic screening might enable teachers to tailor their approach to groups of students. Educators might then more effectively instruct kids together in one classroom, rather than separating students into accelerated and low-level courses, which can deprive Black and brown children and children from low-income families of academic opportunities.

But some researchers fear this gene screening work could be misapplied and used to further racist or eugenic thinking, even though race is a social, not a genetic, classification. There's an ugly history of proponents of eugenics, who believe in reshaping humanity by breeding superior traits and removing inferior traits, justifying their thinking with genetics. And there are debunked racist theories that have endeavored to falsely connect race and intelligence.

For now, the science is almost entirely based on data collected from people with European ancestry, which limits the conclusions that can be drawn from it, so researchers feel that they've at least temporarily sidestepped the issue.

But that doesn't mean they aren't worried about it, and about the other ways this research could exacerbate inequities in education. Screening is expensive, for instance, increasing the odds that privileged students will qualify for extra enrichment or support before their less privileged peers.

Indeed, the idea of predicting students' academic performance based on their genes comes with such a raft of ethical questions and unknowns that scientists in the field are urging caution. "Polygenic scores are a potentially useful new scientific tool. At the same time, there are clear reasons to be concerned," Stanford University social scientist Ben Domingue said. "We're going to have the capacity, with a vial of spit, to be able to predict all these different things."

Scientists and ethicists are also concerned about commercializing this work while the research is still evolving. Already, several companies sell reports to consumers that incorporate polygenic scores for health or various physical characteristics, despite the fact that the scores are not perfect forecasters of a person's future.

Researchers in the field want to see more critical discussion of how their work could be applied in educational settings. "If we don't pay attention now, systems will be created, constructed around us, responding to our genetic difference," said Sophie von Stumm, a psychologist at the University of York, in the United Kingdom, who studies genetics and education. "It's high time to have this discussion. Honestly, we're late to the party."

Related: College graduation may be partly determined by your genes, genome study of siblings finds

The polygenic score that could help predict academic performance aims to assess genetic markers related to educational attainment. In other words, it combines hundreds of common genetic variants that are linked to the number of years a person stays in school. In 2016, this score could explain about 5 percent of the variation in the level of education completed.
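Mechanically, a polygenic score of this kind is just a weighted sum: each genetic variant's allele count is multiplied by an effect size estimated in a genome-wide association study, and the products are added up. A minimal sketch, with variant IDs, weights and genotype counts invented purely for illustration:

```python
# Illustrative sketch of the polygenic-score arithmetic (not a real model).
# All variant IDs and numbers below are made up for demonstration.

# Per-allele effect sizes estimated by a GWAS, keyed by variant ID.
gwas_weights = {"rs0001": 0.020, "rs0002": -0.011, "rs0003": 0.005}

# One person's genotype: the count of effect alleles (0, 1, or 2) per variant.
genotypes = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

def polygenic_score(weights, genotype):
    """Sum of allele counts multiplied by their estimated effect sizes."""
    return sum(weights[v] * genotype.get(v, 0) for v in weights)

print(round(polygenic_score(gwas_weights, genotypes), 3))  # 0.045
```

Real scores combine hundreds or thousands of such variants, but the arithmetic stays the same; what improves with more data is the quality of the estimated weights.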

In 2018, researchers studied data from more than a million people across countries and found they could strengthen the polygenic score to explain 11 percent of the variation in educational attainment. That value puts the score on par with factors like a mother's level of educational attainment, which explains 15 percent of variation, and household income, which explains about 7 percent.

"There are genes that affect educational attainment; that is for certain now," said Aysu Okbay, an economist at Vrije Universiteit in the Netherlands who contributed to the 2016 and 2018 studies.

The score's ability to explain variation in years of schooling could improve with more data. Rough estimates indicate about 80 percent of the variation in educational attainment comes from environmental factors; the rest is genetic. With enough data, some scientists believe, the polygenic score could get close to explaining 20 percent of the difference in people's level of education.

If so, the score would be an incredibly powerful single factor for making predictions about an individual's academic future, even though the combined environmental variables still eclipse the role of genes. "It's really not a puny predictor at this point," Domingue said.

In February, Domingue and his colleagues found that the polygenic score could help identify which groups of high schoolers had been placed into advanced math classes. The score could also point to students most likely to stick with advanced math courses across all four years of high school.

But polygenic scores also come laced with caveats. So many, in fact, that Okbay and her colleagues published a massive list of public FAQs, covering how the study was designed and whether the research could lead to stigmatization of people with certain genes, to help readers interpret their research.


Paige Harden, a clinical psychologist at the University of Texas at Austin and a co-author on the math study, likens the polygenic score to a credit score. Neither the polygenic score nor the credit score can really forecast what will happen to a particular person. Instead, they provide a rough sense of how people with that score will, on average, fare. The score is better at gauging a group's overall performance than an individual's performance.
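The credit-score analogy can be made concrete with a toy simulation (the numbers here are invented for illustration, not taken from the studies): give a score a weak effect on an outcome, roughly in line with a score that explains about a tenth of the variance, and group averages separate cleanly even though individual outcomes overlap heavily.

```python
import random

random.seed(0)

# Toy simulation: the score has a weak effect (~10% of variance explained),
# swamped by an independent "everything else" term.
n = 100_000
scores = [random.gauss(0, 1) for _ in range(n)]
outcomes = [0.33 * s + random.gauss(0, 1) for s in scores]

mean = lambda xs: sum(xs) / len(xs)
top = [o for s, o in zip(scores, outcomes) if s > 1]      # high scorers
bottom = [o for s, o in zip(scores, outcomes) if s < -1]  # low scorers

# Group averages are reliably ordered by score...
print(mean(top) > mean(bottom))  # True

# ...yet a sizable fraction of individual low scorers still outperform
# the high-score group's average outcome.
beat = sum(o > mean(top) for o in bottom) / len(bottom)
print(0.1 < beat < 0.5)  # True
```

The same pattern is why the researchers describe the score as a group-level tool: averaged over thousands of people the signal is clear, but for any single person the "everything else" term dominates.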

Harden and others acknowledge that it's still a mystery how the genetic variants behind the score contribute to how far a person gets in school. "We don't know what the mechanisms are," Okbay said. "We don't know whether it's causal or not."

Some research suggests the genes associated with education are related to the nervous system and the brain, raising the possibility that they're connected to cognitive functions: things like strong memory, creativity and perseverance that serve people well in school.

But the relationship could be nuanced. Domingue pointed out that there could be genetic factors that make a person more likely to be a supportive parent, which, in turn, would correlate with better school performance in their children. Because the child and parent share DNA, the polygenic score could capture gene variants in the child that appear to explain educational performance but actually reflect the parent's behavior.

There is also an enormous shortcoming in the datasets used for this research: Virtually all are built with DNA from people of European ancestry. Although there are biobanks in the works in Asia and Africa that could address this omission, for the time being, the scores are essentially only applicable to people of European descent. "You're basically developing a tool that's only useful for one segment of the population," Harden said.

Related: Gifted classes may not help talented students move ahead faster

Given all of these limitations, most scientists believe it would be unlikely, and inappropriate, for educators to use polygenic scores to determine student placement in specific classes or schools. "Will someone be mad enough to track or stream on the basis of genetic predispositions?" von Stumm said. "Fortunately, I think we're far from that."

There could be other ways of using this genetic information. Once genetic variants are better understood and enough data is in hand, for example, it might be possible to identify children with a predisposition to learning disabilities and intervene early. In May, von Stumm and her colleagues published a paper exploring whether a toddler's polygenic score for educational attainment could identify children at risk for language or literacy delays later in life. The conclusion: We're not there yet.

Critics caution that there is too much to establish ethically and scientifically before we confront those scenarios. "Someday we'll understand the genetic contribution to educational success, or to life success, but it will be our grandchildren who understand it. It won't be us," bioethicist Arthur Caplan at NYU Langone Health said.

And even if we understood this information, it's not clear how best to use the scores in schools. Last year, Stanford's Domingue and two colleagues wrote about a hypothetical case study: What happens when a parent tries to use genetic data, like a polygenic score, to make the case that their child deserves additional classroom support?

"I don't know that I have good answers to that," he said. But the scenario hints at another serious concern: inequality. Not everyone will be able to afford genetic screening, even when there are meaningful scores for people across ancestries.

Still, researchers are already using the polygenic score to explore long-standing conundrums, like why children with very similar advantages follow different trajectories in life.

"We are all subject to a big genetic lottery that corresponds to an environmental lottery," von Stumm said. She added that research into the links between genetics and academic attainment could push people to examine fairness in meritocratic societies, given that some people may carry genetic strengths that give them a slight but significant academic advantage that, in turn, improves other aspects of their lives.

Measuring a person's genetic advantage (or disadvantage) also allows scientists to control for it in their studies. That is, they can better study factors that society can change, such as spending on special programs, compulsory education and school interventions, without having their results biased by a sample of students who are genetically advantaged or disadvantaged.

And researchers can use the polygenic score to assess whether a school has failed students with high potential or if an intervention helped retain children who were otherwise likely to drop out. In the math paper published in February, Domingue, Harden, and their colleagues found that some schools better supported high school students with low polygenic scores than others, ensuring those kids stayed in school.

Harden hopes to see the science applied in ways that emphasize social justice and provide resources to programs that need them: "That's how I think we should be using the polygenic scores, if we use them at all."



An evolutionary jolt helped cattle to spread across Africa. Now genetics must make them more productive – The Conversation Africa

African cattle breeds are astonishingly diverse, and often quite beautiful. They range from the dark-red Ankole of southern Uganda, with their massive heat-dissipating horns, to the Boran, which thrive in the dusty plains of northern Kenya, to Ethiopia's sturdy Mursi cattle, with their prominent shoulder humps and hanging dewlaps. The Kuri that graze on the grasses of Lake Chad are adept swimmers; the Red Fulani can trudge vast distances along the margins of the Sahara; and the famously disease-resistant Sheko inhabit the tsetse fly-infested forests of southwest Ethiopia.

All billion or so cattle today descend from ancient aurochs, an extinct species of wild cattle that once inhabited large swaths of Eurasia. Cattle were domesticated on at least two distinct occasions approximately 10,000 years ago, during the Neolithic era: once in South Asia, giving rise to the zebu, or humped, cattle, and once in the Middle East, giving rise to the taurine, or humpless, cattle.

In Africa, the oldest archaeological evidence of domestic cattle dates back to between 6000 and 5000 BC in western Egypt. These taurine cattle, initially confined to the Saharan-Sahelian belt, eventually reached isolated pockets of land in West and East Africa.

Africa's cattle today have adapted to the climate, forage conditions, diseases and pests prevalent in their habitat. The individuals best adapted to their environments were more likely to survive and reproduce. They were also more favoured by people. Over time this led to different breeds and species.

Today there are an estimated 800 million livestock keepers across the continent. Cattle provide nutritious, calorie-dense food, much-needed income, and nitrogen-rich manure for replenishing soils. There are few regions of Africa where cattle do not play a central role, both economically and culturally.

But it was not always this way. My colleagues and I from the International Livestock Research Institute (ILRI) recently published a paper detailing how African cattle acquired their adaptive capacities.

Sifting through the DNA of 16 indigenous African breeds, we discovered a thousand-year-old event in which the world's two main subspecies of cattle, the taurine and the zebu, mixed. This allowed African cattle, after spending thousands of years confined to certain regions of Africa, to diversify and spread across the continent.

Our findings help to explain how African cattle spread throughout the continent. But since they were selected and bred for resilience, African cattle never became as productive, in terms of meat or milk, as breeds in more temperate climates. Our hope is that, by studying the history hidden in indigenous cattle genomes, we can help guide efforts to breed for productivity without losing the breeds native resilience and sustainability.

Our new genome sequencing work revealed that, about a thousand years ago, pastoralist herders in the Horn of Africa began breeding the Asian zebu cattle with local taurine breeds.

The zebu offered traits that allowed cattle to survive in hot, dry climates. The taurine traits provided cattle with the ability to endure humid climates, where vector-borne diseases that affect cattle, like trypanosomiasis (or sleeping sickness), are common.

This event, which we dubbed an "evolutionary jolt," allowed African cattle, after spending thousands of years confined to a shifting patchwork of sub-regions in Africa, to spread across the continent and flourish into the breeds we see today.

But this resilience came at a cost. African cattle are often not as productive, in terms of growth rates, meat or milk, as their European and American cousins. Canadian Holsteins, for example, can deliver 30 litres of milk per day, several times what most African breeds are capable of. Traditional Ethiopian Boran, by contrast, produce only four to six litres of milk per day.

Today scientists at ILRI, in partnership with governmental institutions in Tanzania and Ethiopia, are again trying to deliver an evolutionary jolt to Africas cattle. This time, however, they want to speed up the evolutionary clock by identifying genetic markers that signal both adaptability and productivity. Screening embryos for these markers could help scientists replicate in the lab the slow work of evolution by favouring the traits that most benefit farmers.

Earlier efforts to improve cattle productivity on the continent focused on importing cattle breeds from elsewhere, without adequately recognising African breeds' unique resilience. Nearly all these attempts have failed or resulted in crossbreeds with both adaptability and productivity diluted.

This time, we are focusing on sustainable productivity: productivity that builds on, rather than disregards, the resilience of indigenous African breeds.

But while we have new tools and shortcuts that enable scientists to analyse vast swaths of genetic data and decide which breeds could work well together, there are some lessons we should still draw from the first evolutionary jolt.

The first is that we shouldn't be overly concerned about crossbreeding. Out of a sense of national pride and a desire to conserve indigenous African cattle breeds, there is at times a tendency to treat them as iconic, untouchable manuscripts.

This ignores the long tradition of crossbreeding practised by African livestock farmers and pastoralists; they were (and still are) constantly mixing and matching breeds to select the animals best suited to their needs.

Another lesson is that, as scientists experiment and cross-breed, it is vitally important to remember that the local breeds have adaptations, not all of them immediately obvious (a tolerance for episodic drought, for example), that have enabled their success. It is important that we do not lose those adaptive traits in the randomness of crossbreeding.

This will take innovative crossbreeding programs that incorporate scientists, government ministries, private partners and farmers to ensure the conservation of genetic information across the long life cycle of cattle generations.

And finally, it's essential to include the practical, accumulated experience of pastoralists in these processes.

David Aronson, Senior Communications Advisor with ILRI, contributed to the writing of this article


Bionano Genomics’ Saphyr System Shown to be Indispensable for the Analysis of Certain Genetic Disease Causing Variants – GlobeNewswire

SAN DIEGO, Oct. 15, 2020 (GLOBE NEWSWIRE) -- Bionano Genomics, Inc. (Nasdaq: BNGO) announced that a study led by scientists and clinicians from the Institute for Human Genetics at the UCSF School of Medicine and the Department of Pediatrics at the University of Colorado School of Medicine, and published in bioRxiv, used Bionano's proprietary genome imaging technology to identify novel disease-causing variants in patients with three different genetic diseases and in a diverse control dataset of 154 individuals. The study found that Bionano's Saphyr System was able to comprehensively analyze complex genome structures called segmental duplications and helped identify several novel structural variations associated with each disease-causing locus, increasing the understanding of these diseases.

Segmental duplications are large segments of repetitive sequence, tens to hundreds of thousands of base pairs in size. Short-read and long-read sequencing technologies cannot span these large segments of the genome. Only Bionano's optical mapping technology can image single molecules long enough to span the segmental duplications. These repetitive sequences can interact with each other when sperm or eggs are created, and their rearrangement can cause severe genetic disease. Some of the most common such diseases are microdeletions at 7q11.23, also known as Williams-Beuren syndrome (WBS), 15q13.3 microdeletion syndrome, 16p12.2 microdeletion syndrome and 22q11.2 deletion syndrome, also known as DiGeorge syndrome.
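The span argument can be illustrated with a back-of-the-envelope check (a toy model, not Bionano's actual analysis): a molecule can be uniquely anchored across a repeat only if it covers the whole repeat plus some unique flanking sequence on each side. The lengths below are rough, assumed figures chosen for illustration.

```python
# Toy model: can a molecule of a given length uniquely anchor across a
# repeat region? It must cover the repeat plus unique flank on both sides.
def spans_repeat(molecule_bp: int, repeat_bp: int, flank_bp: int = 5_000) -> bool:
    return molecule_bp >= repeat_bp + 2 * flank_bp

SEG_DUP = 200_000  # a large segmental duplication, on the order of 10^5 bp

print(spans_repeat(150, SEG_DUP))       # short sequencing read: False
print(spans_repeat(20_000, SEG_DUP))    # typical long read: False
print(spans_repeat(500_000, SEG_DUP))   # very long imaged molecule: True
```

Reads that fall entirely inside the repeat look identical wherever they came from, which is why only molecules much longer than the duplication can resolve its structure.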

The study provides a population-level analysis of segmental duplications in 154 people and in patients with WBS, 15q13.3, and 16p12.2 microdeletion syndromes. Several novel SVs were detected for each locus, and the exact disease-causing rearrangement was determined with much higher accuracy than was formerly possible without Saphyr. As previously announced, a publication in the journal Nature on July 22, 2020 also discussed the unique contribution of Bionano's optical mapping technology to understanding the genetic causes of DiGeorge syndrome.

Erik Holmlin, Ph.D., CEO of Bionano Genomics, commented, "The microdeletion and microduplication syndromes are common genetic disorders, yet the exact genomic structures that cause them have been difficult or impossible to characterize with current sequencing-based methods. Even though microdeletion syndromes are commonly represented by hallmark features, in many cases a wide variability in clinical features is observed. Being able to understand and measure the subtle structural differences in microdeletions among different patients could allow for better clinical or therapeutic management. An increasing number of studies have relied on Bionano's Saphyr system to characterize disease-causing structural variants that could not be correctly analyzed with other molecular techniques. We will continue to make our technology available to researchers everywhere who want to greatly expand the capabilities of their genomic analysis."

The publication is available at https://www.biorxiv.org/content/10.1101/2020.04.30.071449v1.full

About Bionano Genomics
Bionano is a genome analysis company providing tools and services based on its Saphyr system to scientists and clinicians conducting genetic research and patient testing, and providing diagnostic testing for those with autism spectrum disorder (ASD) and other neurodevelopmental disabilities through its Lineagen business. Bionano's Saphyr system is a platform for ultra-sensitive and ultra-specific structural variation detection that enables researchers and clinicians to accelerate the search for new diagnostics and therapeutic targets and to streamline the study of changes in chromosomes, which is known as cytogenetics. The Saphyr system comprises an instrument, chip consumables, reagents and a suite of data analysis tools; Bionano also offers genome analysis services to provide access to Saphyr-generated data for researchers who prefer not to adopt the Saphyr system in their labs. Lineagen has been providing genetic testing services to families and their healthcare providers for over nine years and has performed over 65,000 tests for those with neurodevelopmental concerns. For more information, visit www.bionanogenomics.com or http://www.lineagen.com.

Forward-Looking Statements
This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Words such as "may," "will," "expect," "plan," "anticipate," "estimate," "intend" and similar expressions (as well as other words or expressions referencing future events, conditions or circumstances) convey uncertainty of future events or outcomes and are intended to identify these forward-looking statements. Forward-looking statements include statements regarding our intentions, beliefs, projections, outlook, analyses or current expectations concerning, among other things: the contribution of Bionano's technology to the analysis or understanding of microdeletion syndromes and future development of better clinical or therapeutic management for such diseases; the effectiveness and utility of Bionano's technology in clinical settings; Saphyr's capabilities in comparison to other genome analysis technologies; the benefits of Bionano's optical mapping technology and its ability to facilitate genomic analysis in future studies; and Bionano's strategic plans. Each of these forward-looking statements involves risks and uncertainties. Actual results or developments may differ materially from those projected or implied in these forward-looking statements.
Factors that may cause such a difference include the risks and uncertainties associated with: the impact of the COVID-19 pandemic on our business and the global economy; general market conditions; changes in the competitive landscape and the introduction of competitive products; changes in our strategic and commercial plans; our ability to obtain sufficient financing to fund our strategic plans and commercialization efforts; the ability of medical and research institutions to obtain funding to support adoption or continued use of our technologies; the loss of key members of management and our commercial team; and the risks and uncertainties associated with our business and financial condition in general, including the risks and uncertainties described in our filings with the Securities and Exchange Commission, including, without limitation, our Annual Report on Form 10-K for the year ended December 31, 2019 and in other filings subsequently made by us with the Securities and Exchange Commission. All forward-looking statements contained in this press release speak only as of the date on which they were made and are based on management's assumptions and estimates as of such date. We do not undertake any obligation to publicly update any forward-looking statements, whether as a result of the receipt of new information, the occurrence of future events or otherwise.

CONTACTS
Company Contact:
Erik Holmlin, CEO
Bionano Genomics, Inc.
+1 (858) 888-7610
eholmlin@bionanogenomics.com

Investor Relations Contact:
Ashley R. Robinson
LifeSci Advisors, LLC
+1 (617) 430-7577
arr@lifesciadvisors.com

Media Contact:
Darren Opland, PhD
LifeSci Communications
+1 (617) 733-7668
darren@lifescicomms.com


Proving The Value Of Preventive Genomics – Bio-IT World

By Deborah Borfitz

October 15, 2020 | The Bio-IT World Conference & Expo closed out with a plenary keynote presentation on preventive genomics by Robert Green, M.D., professor of medicine at Harvard Medical School and a physician-scientist who directs the G2P Research Program at Brigham and Women's Hospital and the Broad Institute. Data-sharing difficulties were a recurring theme at this year's conference but, as the COVID-19 Host Genetics Initiative has demonstrated, it is possible to combine genomic data to rapidly explore markers of disease, he says. But far more daily deaths are caused by cancer and cardiovascular disease, not the pandemic virus, and 59 of the causal genes are already known and actionable.

Genomic information is rarely incorporated into clinical care, partly because labs, not care providers, are doing most of the testing and doctors are unclear if the benefits outweigh the costs and risks, says Green. The clinical value of DNA sequencing is also unproven, although it's the central feature of personalized medicine programs that have been popping up around the country.

Green presented lessons learned from MedSeq, which explored the impacts of incorporating genomic sequencing into everyday medicine for people with and without a suspected genetic cardiac disease, and BabySeq, which tested methods for integrating sequencing into the care of newborns. Both are randomized trials funded by the National Institutes of Health.

MedSeq involved primary care physicians taking comprehensive family histories on participants with or without the addition of one-page genomic reports and following their outcomes. Reports from preventive genomic testing focused on defined, disease-specific variants with the highest clinical actionability, says Green, as distinct from indication-based testing looking at a wider universe of variants known or suspected of being pathogenic.

Notably, Green says, neither doctors nor patients experienced test-related anxiety, even when a monogenetic risk variant was discovered. In 100 individuals, 20% were found to carry a dominant mutation for a monogenetic condition. In fact, among the top four genetic mutations, sequencing often discovered ongoing disease that the healthcare system had missed.

Participating doctors, after only six hours of training, did not make any errors in communicating the results, adds Green. Healthcare spending six months post-disclosure was higher, but not extraordinarily so. Two years later, 22% of the reported variants had been reclassified (e.g., a variant of uncertain significance now likely benign, or a likely pathogenic variant now pathogenic).

In the smaller BabySeq Project, 11% of participants were identified as having monogenetic disease risk, Green says. As with MedSeq, a substantial number with genetic mutations already had phenotypic evidence of disease previously missed by their healthcare providers.

BabySeq additionally revealed no difference in bonding or vulnerability, says Green. "Catastrophic distress is not an obstacle [to sequencing], as has often been suggested." The falling cost of genomic sequencing and interpretation should further improve the benefit-to-cost ratio.

Exactly how often does sequencing reveal something important? Here are the stats from Green: 91% of the time for recessive mutations, 80% for atypical responses to medications, 15% for dominant mutations, and 50% for elevated polygenic risk specific to at least one condition, such as diabetes or cancer.

Polarizing Topic

The Mass General Brigham Biobank, which looked for the 59 genes linked to disease, has identified such mutations in over 350 of the roughly 36,000 people it has sequenced. In 75% of those cases, the mutations were linked to either cardiovascular disease or cancer, and the individuals had no idea they were carrying mutations, says Green.

A significant number did not even want to know of their risk, he adds. A similarly high number met National Comprehensive Cancer Network criteria for genetic testing but had never before been tested.

The Preventive Genomics Clinic at Brigham and Women's Hospital, staffed by genetics experts and counselors, offers individuals a menu of testing options (whole genome sequencing as well as smaller panels) and also gives patients the option of being seen via telemedicine. The heart-touching stories shared on its website include a man nudged by discovered mutations to finally get a colonoscopy, which revealed two cancerous lesions that were subsequently removed, and another man with worsening heart disease who learned the underlying cause was Fabry disease, a rare but treatable condition.

Genomics is a notoriously polarizing subject, Green says. The challenge in convincing the skeptics is that genomics crosses multiple therapeutic domains and testing needs to be repeated over an individual's lifetime.

"The exceptionalism of genomics is sometimes misplaced," he later adds, referring to the disproportionate amount of fear about misuse of genetic information relative to psychological or infectious disease data. It's perfectly possible, he says, for large groups to share genomic data that is not identifiable. "It's not foolproof, but it's [technically] feasible."

Federal genetic privacy laws prevent genetics-based discrimination by employers and health insurers, Green says. In July, Florida became the first state in the nation to enact a DNA privacy law that also prohibits life, disability and long-term care insurance companies from using genetic tests for coverage purposes.



The 23andMe Genetic Kit Is an Insanely Cool Gift You May Just Want to Give Yourself – It’s on Sale For Prime Day! – Yahoo News

Amazon Prime Day is here, and have you seen these deals? They're bigger and better than ever, and we can't say we mind all that much. For two days only, you can score everything from fitness deals to beauty buys and kitchen gadgets. But we'd be remiss if we didn't mention that it's October (seriously, how?) and gift-giving season is almost here. If you've got someone in your life who could use a cool present, consider this 23andMe Health + Ancestry Service ($99, originally $199).

This genetic-testing kit is beloved by millions and will give you a unique and in-depth look into your genetics. You can save $100 if you buy it today, which is major. Whether you want to check some people off your gifting list or are curious for yourself, now's the time to buy this insanely cool kit.

Here is the original post:
The 23andMe Genetic Kit Is an Insanely Cool Gift You May Just Want to Give Yourself - It's on Sale For Prime Day! - Yahoo News

Novel findings on mole growth could pave way for skin cancer treatments – News-Medical.net

Reviewed by Emily Henderson, B.Sc. Oct 13, 2020

Moles stop growing when they reach a certain size due to normal interactions between cells, despite having cancer-associated gene mutations, says a new study published today in eLife.

The findings in mice could help scientists develop new ways to prevent skin cancer growth that take advantage of the normal mechanisms that control cell growth in the body.

Mutations that activate the protein made by the BRAF gene are believed to contribute to the development of skin cancer. However, recent studies have shown that these mutations do not often cause skin cancer, but instead result in the formation of completely harmless pigmented moles on the skin. In fact, 90% of moles have these cancer-linked mutations but never go on to form tumors.

"Exploring why moles stop growing might lead us to a better understanding of what goes wrong in skin cancer."

Roland Ruiz-Vega, lead author, postdoctoral researcher at the University of California, Irvine, US

Scientists believe that stress caused by rapid cell growth may stop the growth of moles through a process called oncogene-induced senescence (OIS), but this has not been proven. To test the idea, Ruiz-Vega and colleagues studied mice with BRAF mutations that develop numerous moles.

The team first focused on assessing 'senescence', a set of changes in cells usually associated with aging. Using a technique called single-cell RNA sequencing to compare mole cells with normal skin cells, they found that moles are growth-arrested, but no more senescent than normal skin cells. The cells also did not have any apparent differences in gene expression (where a gene is activated to create a necessary protein) that would support the idea of OIS controlling their growth.

Additionally, computer modelling of mole growth did not support the idea of OIS. In fact, the models suggested that mole cells communicate with each other when moles reach a certain size and stop growing. The same kind of communication also takes place in many normal tissues to enable them to achieve and maintain a correct size.
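The contrast the modelling draws, unchecked division versus division slowed by collective feedback, can be illustrated with a minimal toy simulation. This is not the authors' model; the growth rate and carrying capacity below are purely illustrative assumptions.

```python
# Toy comparison of two hypotheses for why a mole stops growing.
# All parameters are illustrative, not taken from the eLife study.

def grow(n0, steps, rate, carrying_capacity=None):
    """Simulate a cell count over discrete time steps.

    With carrying_capacity=None, cells divide at a fixed rate (no feedback).
    Otherwise the division rate falls as the population approaches the
    carrying capacity, mimicking size-dependent cell-to-cell feedback.
    """
    n = float(n0)
    history = [n]
    for _ in range(steps):
        if carrying_capacity is None:
            effective_rate = rate  # unchecked growth
        else:
            # feedback: division slows as the mole approaches its final size
            effective_rate = rate * (1 - n / carrying_capacity)
        n += effective_rate * n
        history.append(n)
    return history

unchecked = grow(10, 50, rate=0.2)
feedback = grow(10, 50, rate=0.2, carrying_capacity=1000)

print(round(unchecked[-1]))  # keeps growing without bound
print(round(feedback[-1]))   # plateaus near the carrying capacity
```

The feedback model arrests growth at a characteristic size without invoking any stress response, which is the qualitative behavior the study's modelling favored over oncogene-induced senescence.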

"Our results suggest that moles stop growing as a result of normal cell-to-cell communication, not as a response to stress from cancer genes, potentially changing the way we think about skin cancer," explains senior author Arthur Lander, Director of the Center for Complex Biological Systems, and Donald Bren Professor of Developmental and Cell Biology, at the University of California, Irvine. "This work paves the way for further research into the mechanisms that control skin cell growth, with the aim of better understanding what goes wrong to cause skin cancer and ultimately developing new treatments to help prevent the disease."

Journal reference:

Ruiz-Vega, R., et al. (2020) Dynamics of nevus development implicate cell cooperation in the growth arrest of transformed melanocytes. eLife. doi.org/10.7554/eLife.61026.

Read more from the original source:
Novel findings on mole growth could pave way for skin cancer treatments - News-Medical.net

Pushing the Boundaries of Translational Molecular Imaging – Technology Networks

For the promise of personalized medicine to be realized, a thorough understanding of the molecular underpinnings of health and disease is required. Advances in analytical technologies such as mass spectrometry (MS) have certainly strengthened our knowledge of cell biology, permitting a deeper look at how, and when, cells go awry in clinical specimens when compared to healthy cells. Over recent years it has become increasingly clear that for these molecular insights to be translated into the clinical space and impact patient care, spatial context is necessary.

To provide spatially resolved molecular analyses of clinical specimens in a high-throughput and sensitive manner, matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI) and liquid chromatography-mass spectrometry (LC-MS) have been coupled together. However, the complexity of such experiments has required separate instrumentation. That is, until now.

Ron Heeren is a distinguished professor and the scientific director of the Maastricht MultiModal Molecular Imaging Institute at the University of Maastricht. His research interests include the energetics of macromolecular systems, conformational studies of non-covalently bound protein complexes, and translational imaging research, to name just a few examples.

Prof. Heeren's research group recently embarked on a study to bring together the spatial molecular information that is provided by MALDI-MSI with the microproteomic characterization generated by LC-MS on the same tissue specimen, on a single instrument.
Heeren and colleagues were successful in this feat, and their work is published in the journal Proteomics.1 The paper forms the basis of this interview, in which Technology Networks discusses the motivations and logistics behind the research with Heeren, in addition to reviewing the current state of play of the spatial omics research field.

Molly Campbell (MC): Why is it important to connect different types of omics data via the spatial context, specifically in clinical research?

Ron Heeren (RH): In clinical research it is all about context. It is important to understand that a biopsy can be very heterogeneous and does not always show which particular cell (out of thousands) is actually derailed or diseased. Being able to put molecular signals, undiluted, in the context of where they come from is incredibly important. A lot of scientists work with blood-borne diagnostics, which is great, but also means that if you have a very tiny tumor or disease area in your body, your biomarker profile is going to be very diluted. Additionally, it is next to impossible to understand the full complexity of a disease from a single blood sample. For us, it is very important to understand molecular signals and cells in their spatial context, directly in the tissue.

There are many different ways of doing this. We like molecular imaging, because not only does it show us a specific molecule or a set of molecules, but it also shows their spatial distribution and spatial organization. Understanding the spatial organization of molecules in the context of disease is everything for us. But generating images themselves is sometimes not enough; you also want to dive into the depth of the -ome, whether it is the proteome, metabolome, or the lipidome.

Being able to look at the spatial context and organization and combine that with in-depth omics screening in the spatial context, essentially provides you with everything that you need in one go. The ability to do this on a single instrument, where we get the same type of data, the same spatial resolution and the same molecular resolution is crucial.

Molecular pathology needs to be completed in a clinically relevant timeframe. In the past, we might have conducted an imaging experiment, then proceeded with our business and prepared the images. Later, we would extract some cells and the proteins from them, and do a six-hour protein analysis experiment to understand cellular signalling in great detail. But if a patient is on the table of a surgeon, we want that information now. We need that information as soon as possible.

Data integration is a crucial aspect of this research: getting all this data in context, together. Only then can you really understand the specifics of the progression of a disease. Once you understand that, you can come up with a more targeted, or perhaps personalized, precise treatment.

MC: What have been some of the key challenges in this space over recent years?

RH: I already talked about one challenge, and that's throughput. A couple of years ago, Bruker introduced the rapifleX, which really sped up our work and allowed us to translate our molecular diagnostic imaging into a clinical context. But it did not have the omics part of spatial analysis. Now that we have the timsTOF fleX, which combines both imaging and the omics analysis, that particular challenge has been addressed.

If I look at tissue from a biopsy, or a resected piece of tissue, I can make tissue sections and I can image these sections. Five years ago, we would have been very happy to obtain a 50-micron high-throughput image. But that does not give us the required information for one single derailed cell; it may only provide us with a group of 25 cells where something is going wrong.
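The arithmetic behind that "group of 25 cells" estimate is straightforward if we assume a typical cell diameter of roughly 10 microns (an illustrative assumption, not a figure given in the interview):

```python
# Rough estimate of how many cells fit inside one imaging pixel.
# The ~10 micron cell diameter is an illustrative assumption.
pixel_um = 50  # pixel size of the older high-throughput images
cell_um = 10   # assumed typical cell diameter

cells_per_pixel = (pixel_um / cell_um) ** 2  # cells per pixel, by area
print(cells_per_pixel)  # 25.0
```

By the same logic, reaching single-cell resolution requires shrinking the pixel to the scale of one cell diameter.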

One of the challenges here was to go down to spatial resolutions that allow us to analyze individual cells, and that is essentially what we've recently been doing in close collaboration with Bruker. We have created a way to integrate single-cell profiling into our imaging workflow.

Throughput and spatial resolution are challenges that we have tackled, and these are all related to sensitivity. Let's face it, if you have poorly sensitive instruments then it's going to be very difficult to conduct research in high-throughput because you will miss a lot of subtle molecular detail that you want to see.

MC: Please can you talk to us about your recent study, in which you were able to conduct lipid-based MSI and LC-MS on a single instrument? Why hasn't this been possible before? What were your motivations for conducting this work?

RH: One of the challenges that we faced was identifying the right cells in a piece of tissue to subject to a proteome analysis. We want to take an in-depth look at the proteome in a number of derailed cells, such as cancer cells, and sometimes the cells have changed on a molecular level but not on a morphological level. If they have not changed morphologically, a simple optical imaging experiment will not allow you to see what the derailed cells are.

So, we wanted to use a molecular imaging approach, MALDI-MSI, to help us find the right cells for the LC-MS analysis. This adds a layer of molecular information on top of the morphological images.

In the past, we would have conducted these experiments separately; we would run a separate MALDI imaging experiment, figure out where everything is, and then cut out a certain area and conduct proteomics analysis. Now with the timsTOF fleX, we can make a lipid image, use lipids that are specific for, let's say, a specific cancer, go to our laser capture microdissection microscope, cut out the cells that have been identified with lipid MSI, extract the proteins and run them on the timsTOF fleX with the PASEF approach. This approach allows us to extract more than 4,000 different proteins from only 2,000 cells. This was not possible in the past; it was very difficult because you had to continuously look at different data from different instruments, make pieces of software that would translate one result to the other, and then, in the end, manually connect the dots.

Now, with the spatial omics pipeline, we essentially have all these elements based on data that is taken from the same tissue on the same instrument. That improves throughput, it improves interpretability and it improves our capabilities to identify the relevant molecules for a specific disease. On top of that, it helps us to connect the dots between the different omics levels. For instance, we do lipidome-based imaging, and we have a proteome panel for a certain area, and we can connect those dots. We can figure out which proteins are involved in these different lipid expression patterns locally, in the context of an entire cell and of an entire tissue.

MC: You applied the method to study a breast cancer sample. Can you discuss some of the key results?

RH: First, on the lipid side, we found out that a very specific set of lipids is related to hypoxia and is indicative of early molecular changes in breast cancer. This allowed us to identify cells that the pathologist was not able to see. On top of that, the protein analysis found proteins involved in this lipid synthesis pathway, which corroborated that result. We were able to see the interplay between proteins and lipids locally in these breast cancer samples that were taken from patients. With that, we essentially have a new diagnostic approach to come up with improved treatments for our patients.

MC: Are there any data handling challenges associated with combining MSI and LC-MS on the same instrument?

RH: Yes. The challenge, of course, is that the imaging experiments produce tonnes of data, especially at the level of detail that we are able to go into now. And the experiments do this in a relatively limited amount of time. In other words, our data pipeline is not only solidly filled, it is almost bursting, to the extent that we actually have a challenge in keeping up with the experiments. In the past the throughput of the instrument was the limiting factor, but right now the data handling is essentially the limiting factor.

Fortunately, with help from the team at ScILS, there are now tools in place that help us deal with this data and make it manageable, from the classification perspective and from the interpretation perspective. When we do these imaging experiments on, for instance, metabolites, we use MetaboScape or Lipostar for molecular identification. These types of tools are crucial.

I think these will be the biggest challenges that the field as a whole faces in the future, because we can produce data galore but if we don't have the tools to interpret them, then it's lots of data but little information. We are working with several different partners in the field to solve that problem.

It is also an important problem because of the clinical context. We conduct this work in close collaboration with surgeons and pathologists. These pathologists are not mass spectrometrists, so how will they understand what we are trying to tell them? We need new tools to implement our findings in the workflows and in the systems used by pathologists. This will help to increase the acceptance of these new technologies as novel diagnostic tools in a surgical setting.

It's crucial for us to come up with ways to take care of what we call the translational side of our spatial omics approach, and that's also where data handling, data reduction and data visualization in an intuitive environment are very, very important.

MC: How do you envision the development of this workflow will influence other omics research groups? What advice would you give them if they are considering adopting the workflow?

RH: One thing that we found out from a device perspective is that looking at the problem from different angles is very, very important. Just looking at proteins gives you one view, looking at metabolites gives you another view, and the same for lipids. If you put everything together in a spatial context with imaging, you have another view of the same problem. Really, the integrative aspect of multi-level omics and imaging is what is crucial. That is what really reveals the complexity of health and disease.

The advice that I would give to people starting in this field is to make sure you cover your bases. Get good mass spectrometers that are capable of delivering detailed information at all these different levels, and set up the right workflows and protocols. Also make sure you have the right software tools to take care of data integration.

MC: What will be your next steps in this research space?

RH: To provide an idea of where this is going now, a lot of the work we are doing is in pushing the spatial limits. We have just started a new collaboration with our surgical colleagues, and we are applying this method for organoid screening. We're developing ways to look at omics and imaging on patient-derived organoids to assess what the best treatment protocol for a patient is, based on cells that have been taken out of a tumor, grown in the lab and treated with different drugs. We'd like to use this workflow to understand what the effects of the drugs are.

We are also working on osteoarthritis in collaboration with our orthopedics department, where they are pursuing empirical regenerative therapies. How do they really work at the molecular level? This combination of imaging and omics really shows the orthopedic department how their regenerative therapies work. All this information brought together gives us the insight as to what the best therapy for these patients is. We will keep on pushing the boundaries of translational molecular imaging to provide further insights into the molecular heterogeneity of health and disease.

Ron Heeren was speaking with Molly Campbell, Science Writer for Technology Networks.

Professor Ron Heeren. Credit: Harry Heuts.

Reference:

1. Dewez F, Oejten J, Henkel C, et al. MS imaging-guided microproteomics for spatial omics on a single instrument. PROTEOMICS. 2020;1900369. doi:10.1002/pmic.201900369

Read more:
Pushing the Boundaries of Translational Molecular Imaging - Technology Networks

Scientists reveal important role for ‘workhorse’ of cell division – The Institute of Cancer Research

Image: Cell division. Source: Wikimedia, CC SA-3.0

A new study shows how a crucial protein, which acts as trigger for cell division, helps release another key protein from the cells control centre.

When cell division goes wrong, mistakes can be made in separating the chromosomes - packages of DNA which carry the cell's genetic information - between the two dividing cells. This can lead to cancerous growth.

The crucial cell cycle enzyme, Cyclin B1-Cdk1, is the trigger for cell division to begin. It is also referred to as the workhorse of cell division, responsible for coordinating other proteins involved in the complex process.

Cyclin B1 is found in increased amounts in some cancer types. The new study offers a greater understanding of its movements and interactions during cell division, and could in future lead to more specific ways to alter the activity of Cyclin B1-Cdk1 and other proteins it interacts with, in cancer treatment.

Scientists at The Institute of Cancer Research, London, found that Cyclin B1 helps release a protein called MAD1, which plays a key role in cell division, from the cell's control centre when it starts to divide.

When a cell divides, its chromosomes must be separated to form two identical daughter cells from a single parent cell. This happens in a matter of minutes and the contents of the cell are completely reorganised in the process.

In the study, published in the Journal of Cell Biology and funded by Cancer Research UK, researchers analysed the partners of Cyclin B1 and identified MAD1 as the protein with which it most strongly interacts.

They identified the precise area of the MAD1 protein which interacts with Cyclin B1, using gene editing to remove this part of the protein and showing that this prevented the two proteins from interacting.

This interaction allows MAD1 to perform its own crucial activities in the dividing process: MAD1 plays a key role in making sure each daughter cell receives an equal and identical set of chromosomes.

Using a gene editing tool known as CRISPR to alter its structure, researchers looked closely at the function of MAD1 and showed that the interaction between MAD1 and Cyclin B1 importantly ensures Cyclin B1 is in the right place at the right time during cell division.

Preventing Cyclin B1 interacting with MAD1 disrupted the process of cell division and led to mistakes being made in parcelling out the chromosomes that were inherited by the two daughter cells.

MAD1 can function abnormally in some types of cancer, causing errors where cells divide unevenly, and one is left with more DNA than the other. A greater understanding of its functions in the process of cell division could lead to more precise ways to alter the activity of MAD1 in cancer treatment.

Scientists at the ICR worked for 15 years towards these findings, which are fundamental to the wider understanding of cell biology and the intricate process of cell division. Many key processes involving Cyclin B1 were already known, but this study is the first to show the precise site on MAD1 which interacts with Cyclin B1, and the importance of this interaction in releasing MAD1 from the cell's control centre to enable subsequent events to take place.

Study leader Professor Jon Pines, Head of the Division of Cancer Biology at the ICR, said:

"In this study, we show that Cyclin B1, a component of the workhorse of cell division, plays an important role in helping another crucial regulator, MAD1, to be released from the cell's control centre. This is a fundamental example of how cell division is regulated in space and time.

"Our findings help us to understand the inter-dependent roles of these proteins, both of which can function abnormally in cancer cells. Greater knowledge of the way in which a cell's machinery is reorganised during the division process could lead us to new ways to tackle cancer."

Read the original post:
Scientists reveal important role for 'workhorse' of cell division - The Institute of Cancer Research

$4.5 Million NIH Grant to Penn to Build a Molecular Model of the Female Reproductive System – UPENN Almanac

The Penn Center for Multi-Scale Molecular Mapping of the Female Reproductive System, supported by the National Institutes of Health, will define a cellular atlas uncovering the complex interactions of cells that determine reproductive health.

Junhyong Kim, chair and Patricia M. Williams Professor of Biology in the School of Arts and Sciences, and Kate O'Neill, assistant professor, department of obstetrics and gynecology in the Perelman School of Medicine, have been awarded a Human BioMolecular Atlas Program (HuBMAP) grant supported by the Eunice Kennedy Shriver National Institute of Child Health & Human Development and the Common Fund of the National Institutes of Health. Drs. Kim and O'Neill are leading a multi-disciplinary team to create a comprehensive resource for women's health by documenting the molecular characteristics of individual cells in the female reproductive system. This four-year, $4.5 million grant from the NIH, along with support from the University of Pennsylvania School of Arts and Sciences, the Perelman School of Medicine, and the Center for Research on Reproduction and Women's Health, will fund creation of the Penn Center for Multi-Scale Molecular Mapping of the Female Reproductive System.

The female reproductive system is composed of the uterus, fallopian tubes, and the ovaries. Together these organs are critical for the establishment of pregnancy, fetal development, and parturition and are central to common, costly, and debilitating disorders, including polycystic ovary syndrome, endometriosis, fibroids, and gynecologic cancers. Moreover, in addition to fertility, a functioning reproductive system is interrelated with overall health.

The new center, leveraging Penn's leadership in single cell biology and reproductive biology, is an interdisciplinary effort requiring experts in gynecology, organ transplant, pathology, genomics, informatics, biomedical imaging and radiology. Co-investigators and collaborators involved in this project include: Kurt Barnhart, William Shippen Jr. Professor of Obstetrics and Gynecology; Ronny Drapkin, Franklin Payne Associate Professor of Pathology in Obstetrics and Gynecology; Jim Eberwine, Elmer Holmes Bobst Professor of Systems Pharmacology and Translational Therapeutics; Michael Feldman, professor of pathology and laboratory medicine; James Gee, associate professor of radiologic science and computer and information science; Nawar Latif, assistant professor of obstetrics and gynecology; Alison Pouch, assistant professor of radiology and bioengineering; Lauren Schwartz, assistant professor of clinical pathology and laboratory medicine; Abraham Shaked, Eldridge L. Eliason Professor of Surgery; all from the Perelman School of Medicine; Brian Gregory, associate professor and graduate chair of biology from the School of Arts and Sciences; and Arjun Raj, professor of bioengineering from the School of Engineering and Applied Science.

This important endeavor is made possible by collaboration with the Gift of Life Donation Program and by the generosity of donor families and Penn patients who participate in groundbreaking scientific research.

Go here to see the original:
$4.5 Million NIH Grant to Penn to Build a Molecular Model of the Female Reproductive System - UPENN Almanac

The coronavirus may live longer on some surfaces than previously believed. Here’s what that means – Salon

Scientists already know that the novel coronavirus, which causes the disease COVID-19, is primarily transmitted through airborne particles known as aerosols that are either inhaled or ingested. One lingering question, though, has been how long the virus can survive on surfaces after landing there. A new study has a potentially troubling answer, namely that the virus can stay on surfaces like banknotes, glass and stainless steel for up to four weeks.

The study found that fomites, or objects that are likely to carry infection, can contain live specimens of the novel coronavirus for weeks. These include "high contact surfaces such as touchscreens on mobile phones, bank ATMs, airport check-in kiosks and supermarket self-serve kiosks all acting as fomites for the transmission of viruses," a group of scientists from Australia's national science agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), wrote in their study. In the paper, published in the Virology Journal on Monday, the researchers argued that the SARS-CoV-2 virus "remains viable" for 28 days or longer when it dries on "non-porous" surfaces, meaning that it would be possible to get infected with the novel coronavirus in a room with a conventional temperature and humidity level (68°F and 50% humidity).

They note that the SARS virus, a related coronavirus, also managed to regain its infectiousness after remaining dried up on plastic for 28 days at room temperature.
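To give a rough quantitative sense of what "remains viable for 28 days" can mean, surface survival of viruses is often modeled as first-order (exponential) decay characterized by a D-value, the time for a tenfold drop in viable virus. The sketch below uses a hypothetical D-value of 9 days, which is an illustrative assumption, not a figure from the study:

```python
# Hedged sketch: first-order decay of viable virus on a surface.
# The 9-day D-value is a hypothetical illustration, not the study's estimate.

def surviving_fraction(days, d_value_days):
    """Fraction of the initial inoculum still viable after `days`,
    assuming each D-value period reduces viable virus tenfold."""
    return 10 ** (-days / d_value_days)

print(surviving_fraction(9, 9))   # after one D-value, 10% remains
print(surviving_fraction(28, 9))  # after 28 days, well under 0.1% remains
```

Whether that small residual fraction can still cause infection depends on how much virus was deposited to begin with, which is why a long "viable" window does not by itself establish a high transmission risk.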

That has implications for the kinds of things that people touch every day and trade between each other, particularly currency. As the authors explain, money is regularly passed between large groups of people, and banknotes made of paper and polymer can both carry live specimens of the novel coronavirus. "The persistence of virus on both paper and polymer currency is of particular significance, considering the frequency of circulation and the potential for transfer of viable virus both between individuals and geographic locations," the authors explain.

Debbie Eagles, deputy director of the Australian Centre for Disease Preparedness and one of the paper's co-authors, told CNET that their study reinforces the existing consensus that there is a "need for good practices such as regular handwashing and cleaning surfaces."

Prior to this paper, scientists did not always believe that the novel coronavirus could survive on surfaces for very long. A study from the New England Journal of Medicine in March speculated that the virus could survive for up to 72 hours on plastic, for up to 48 hours on stainless steel and for up to 24 hours on cardboard. Carolyn Machamer, a professor of cell biology at the Johns Hopkins School of Medicine, explained to the university's tech hub that "you are more likely to catch the infection through the air if you are next to someone infected than off of a surface. Cleaning surfaces with disinfectant or soap is very effective because once the oily surface coat of the virus is disabled, there is no way the virus can infect a host cell." A Chinese doctor, Wang Zhou, MD, expressed a similar view in March, writing that viruses can survive "for several hours on smooth surfaces" and "if the temperature and humidity permit, they can survive for several days."

Dr. Mark McKinlay, the director of the Center for Vaccine Equity at The Task Force for Global Health and who is working closely with the CDC in its response to the virus, told Salon in May that breathing in the virus is still much more of a concern than transmitting the virus through touch.

"This new CDC guidance is clarifying that it is not as easy to become infected by the SARS-CoV-2 coronavirus from hard surfaces as it is to become infected via person-to-person contact, via respiratory droplets," McKinlay explained."That's why the guidance on social distancing is so important to follow. However, it does not mean that the virus is never spread through contact with surfaces, just that it is not the predominant route of transmission."

Still, even if breathing another's infected air appears to be the most probable path of infection, there is evidence of people acquiring the coronavirus by touching objects that other coronavirus-positive people have touched. In New Zealand, which has so few cases of COVID-19 that contact tracers are able to precisely follow the path of infections, public health experts traced two recent infections to an elevator lift button and a trash can. "This particular [trash] bin had a lid that required you to lift the lid," the island nation's director of public health, Dr. Caroline McElnay, said at a press conference.

The recent evidence regarding the coronavirus' life on surfaces suggests that those with compromised immune systems, or who are specifically concerned about transmission, may want to be diligent about wiping down oft-touched public surfaces and currency, or about avoiding touching these in the first place.

Read more:
The coronavirus may live longer on some surfaces than previously believed. Here's what that means - Salon