Tag Archives: environment

The genetic lottery: Are our lives determined at birth? – New Zealand Herald

A controversial new book suggests that our success or failure in life is hard-coded in our genes at conception. By Danyl McLauchlan.

It's deeply unfair. Shortly after we're conceived, our genetic material (long sequences of chemical codes arranged in a double-helical structure called DNA, tightly bundled into dense thread-like structures called chromosomes) is uncoiled and scanned by complex factories of molecular machinery.

These factories use our genes as blueprints for turning a tiny, fertilised egg into a fully grown human, assembling proteins into cells, cells into organs, organs into anatomical systems (digestive, muscular, cardiovascular, nervous) that allow us to eat, walk, breathe and think. But we have no control over which genes we get, or the type of person they turn us into.

Each of us is genetically unique. We inherit our DNA from our parents, but in each sperm or egg the genetic sequences are recombined, shuffled around, mixed up. Which is why each of us resembles the other members of our family, but none of us is identical to them (even identical twins have minor genetic differences). If two people were able to produce kids carrying every possible combination of their genotypes, they'd have 70 trillion children.

We like to tell ourselves that we're all equal, despite our vast, randomly generated genetic diversity; that life is about the choices we make or the world we're born into. These assumptions carry over into our politics. On the right, success or failure is considered meritocratic: people should have equality of opportunity but then take personal responsibility for themselves and work hard to get ahead. Left-wing politics focuses on social or economic injustice: income inequality, exploitation, discrimination. But in the first decades of the 21st century, new findings in the field of behavioural genetics call the premises behind both political projects into question.

Kathryn Paige Harden is a psychologist and behavioural geneticist at the University of Texas. In 2021, she became one of the most controversial scientists in the world when she published her first book, The Genetic Lottery. In it, Harden argues that genes matter. A lot. Social scientists have long known that family income is a strong predictor of educational attainment: if you grow up in a wealthy household, you're more likely to get a degree and a well-paid job. But what Harden is saying is that genetics are just as decisive: that an important part of our success or failure is hard-coded at birth.

The sum total of all your DNA across all of your chromosomes is known as your genome. The first human genome was sequenced back in 2003, a 13-year project that cost more than a billion dollars. Today, a whole genome sequence costs about $1000 and takes 24 hours to produce. However, most labs doing behavioural research use a cheaper technique that looks for a known collection of genetic markers. This costs about $100.

There's a gene on your fourth-largest chromosome, called the huntingtin gene. It tells your cells how to create a protein that plays a role in building subcellular structures, especially in the brain. If you have a specific mutation in this gene, you're doomed to develop Huntington's chorea, a terrible neurodegenerative condition that strikes in adulthood. (When biology and medical students first learn about the gene, they worry that they might have this mutation, but if you're a member of one of the rare families that are stricken by the disease, you'll already be very aware of it.)

For a long time, genetics researchers thought that all genes worked like the huntingtin gene, in the sense that each coded for a specific protein and a mutation caused a specific disorder. So they went in search of other monocausal genes; they looked for drug-addiction genes, depression genes, height genes, cancer genes, gay genes, criminal genes, and schizophrenia genes. Rather embarrassingly, they often announced that they'd found them.

But in 2007, the first large-scale genome-wide association study (GWAS) was published, and it revealed that most genes and gene variants were totally unlike the huntingtin model, and that none of these "depression genes" or "criminal genes" had any scientific validity.

GWAS is a suite of statistical tools: it works by comparing huge numbers of individual genomes (the first studies used 10,000 people; now they're into the millions) to look for differences in life outcomes. Which individuals have heart disease or cancer? How tall are they? What's their highest educational qualification? What's their household income? Researchers then use high-performance computers and sophisticated algorithms to find genes that correlate with those outcomes.
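To make the statistical idea concrete, here is a minimal, illustrative sketch of a GWAS-style scan on simulated data; it is not any study's actual pipeline (real analyses adjust for covariates, ancestry and relatedness, among much else).

```python
# Minimal sketch of a GWAS-style association scan (illustrative only).
# Assumes a matrix of allele counts (0/1/2) per person per variant and a
# continuous outcome such as years of education (all data simulated here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_variants = 5_000, 1_000
genotypes = rng.integers(0, 3, size=(n_people, n_variants))  # allele counts
outcome = rng.normal(size=n_people)                          # simulated trait

p_values = []
for j in range(n_variants):
    # Regress the outcome on one variant at a time (the core association test)
    result = stats.linregress(genotypes[:, j], outcome)
    p_values.append(result.pvalue)

# The huge number of tests requires a stringent threshold; 5e-8 is the
# conventional genome-wide significance cut-off.
hits = [j for j, p in enumerate(p_values) if p < 5e-8]
print(f"{len(hits)} variants pass the genome-wide significance threshold")
```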

The results show us that most gene effects are tiny: a variation in a single gene usually has a minimal impact, and almost all genetic effects are "polygenic", meaning they're the combination of many genes working together.

Instead of a single gene for height, there are about 700 gene variants involved, influencing everything from growth hormones to bone length. GWAS reveals that most genetic diseases or inherited traits are staggeringly complex. Even something as seemingly simple as hair colour is influenced by more than 100 different genes interacting with each other.

Because GWAS is such a powerful technique, it has been taken up by researchers across the life and social sciences. And they're uncovering the genetic origins of thousands of diseases and conditions.

You can look at the health outcomes, hair colour or height of the people in your study and correlate them to which variants they have. And you can calculate a polygenic score in which you add up all the effects of all the gene variants and estimate the likelihood that an individual will have the trait you're investigating: that they'll be short or tall, have red hair or be prone to heart disease.
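In its simplest form, a polygenic score is just a weighted sum: each variant's allele count multiplied by the effect size estimated in the GWAS, added up across variants. A toy sketch with made-up weights (not any published score):

```python
import numpy as np

# Hypothetical per-variant effect sizes from a GWAS (illustrative values only)
effect_sizes = np.array([0.02, -0.01, 0.05, 0.00, 0.03])

# One person's allele counts (0, 1 or 2 copies of the tested allele) at those variants
allele_counts = np.array([2, 0, 1, 1, 2])

# The polygenic score is the weighted sum of allele counts
polygenic_score = float(np.dot(effect_sizes, allele_counts))
print(f"Polygenic score: {polygenic_score:.3f}")
```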

At the heart of Harden's argument in The Genetic Lottery is the claim that academic success in modern educational systems is innate: that it's less to do with determination or grit and more like tallness or hair colour. "There are specific types of cognitive skills that are richly rewarded in modern educational systems: the verbal and visuospatial reasoning abilities that are tested by standardised cognitive tests," says Harden. And gene variations and combinations that correlate to those abilities show up in the GWAS results. "Beyond that, we see genes associated with personality traits, such as delay of gratification and openness to new experience, are also associated with going further in school."

When Harden was 22, her boyfriend at the time was a history PhD student, and for her birthday he gave her a copy of Daniel Kevles' horrifying historical study, In the Name of Eugenics. "Not the most romantic present I've ever gotten," she admits, "but certainly one of the most durably influential." So, she gets why people are so sensitive to this conversation: statistics and genetics share a very sinister past. People are right to be apprehensive.

But first, she counters, the way eugenicists and white supremacists talk about race is scientifically incoherent. Humans are a very promiscuous species; none of us are descended from one single group of people. Recent research estimates that the most recent common ancestor of every person currently alive probably lived in East Asia a few thousand years ago. We have superficial differences (facial features such as the colour of our skin, eyes and hair) based on where the majority of our recent ancestors are from, but we're all fairly recent relatives. "Ironically," Harden says, "genetic data help us see why modern 'race science' is actually pseudoscience."

Second, it's very hard to make racial comparisons with GWAS. This is partly because the biological markers don't line up properly across different ethnic groups and these mismatches confound the analysis. But it's also because the genomes currently available for GWAS analysis are mostly from white people. "I think it's helpful to step back and think about where the data in large-scale genetic studies of education are coming from," says Harden. "The biggest sources are European-ancestry participants from [genetic testing company] 23andMe and the UK Biobank. This is a very particular segment of the American and British populations: people who are likely to be racially identified as white and who went to school in the UK and the US in the latter half of the 20th century. The genetic data does give us some clues about what sorts of traits are rewarded by the educational system for this segment of the population."

Third, Harden asks, if there's a cluster of genes that reward educational attainment, at least among people with recent European ancestry, and those people get good qualifications and access to well-paid, high-status jobs, how is that fair? None of us choose the genes we're born with. It is, quite literally, a lottery. "There is no measure of so-called 'merit' that is somehow free of genetic influence or untethered from biology," she says.

Instead of accepting the outcome of genetic meritocracy, in the book she challenges the assumption that a meritocratic society is moral. Shouldn't we be asking ourselves why our education system and labour markets allocate success and status to such a narrow set of attributes and punish others?

When it comes to genetic discrimination and how to address it, Harden says two things that school systems seem to be selecting against are ADHD symptoms and early fertility. "Making schools more inclusive and supportive of children who feel the need to move their body constantly, and of teenagers and young adults who have care responsibilities, would change the pattern of what genes are associated with getting more education."

In 1973, psychologists at the University of Otago began studying the lives of 1037 babies born between April 1, 1972 and March 31, 1973 at Dunedin's Queen Mary Maternity Hospital. In what is known as the Dunedin Study, researchers survey the participants at regular intervals, conducting interviews, physical tests, blood tests and even dental examinations. The Dunedin Study has been running for nearly 50 years and is one of the most respected longitudinal studies in the world.

In 2016, the journal Psychological Science published a paper in which the genomes of 918 non-Māori Dunedin Study participants were subjected to GWAS analysis for educational attainment. And they reported a number of key findings: educational attainment polygenic scores also predicted adult economic outcomes, such as how wealthy the subjects became; genes and environments were correlated in that children with higher polygenic scores were born into better-off homes; children with higher polygenic scores were more upwardly mobile than children with lower scores; polygenic scores predicted behaviour across the life course, from early acquisition of speech and reading skills through to geographic mobility and mate choice and on to financial planning for retirement; and polygenic-score associations were mediated by psychological characteristics, including intelligence, self-control and interpersonal skill.

What they found was that children with gene-variant combinations that correlated with educational attainment were more likely to say their first words at younger ages, learn to read at younger ages and have higher aspirations as high school students. All of which sounds like a massive validation of Harden's thesis. But Professor Richie Poulton, a co-author of the paper and director of the Dunedin Study since 2000, cautions strongly against linking these findings to Harden's conclusions.

"A key point, often missed," he says, "is the genetic effects were small. A lot of hot air has been expended without acknowledging this very critical and basic fact. [Genes] are not huge influences by themselves: it's nature-nurture interplay that accounts for the most important outcomes. That's where the real gold (versus fool's gold) lies in understanding how people's lives turn out."

The same point is made by the University of Auckland's Sir Peter Gluckman, an internationally recognised expert in child development. For him, the problem with Harden's approach is that "there's no discussion of developmental plasticity. There's no doubt that genes influence behaviour. We know that genetic associations with educational achievement are very real.

"But we also know a lot of those mechanisms are very indirect. And we know that environmental influences, starting from before birth and acting right through childhood have the biggest outcomes. Take the famous experiment on the cat."

Gluckman is referring to the Harvard "pirate kitten experiment", an influential experiment in the 1960s that biology lecturers still like to shock their undergraduate students with. It involved suturing a kitten's eye closed for the first three months of its life. When the sutures were removed, the kitten was blind in that eye because the animal's brain didn't develop the ability to process data from it. "Yes, we have genes that determine eye growth," says Gluckman, "but if we don't use the eye properly in the first few months after birth, then those genes don't work properly. And at the end of the day, what we can affect is the environment."

Poulton agrees that it's far too early to talk about policy solutions based on behavioural genetics. "The understanding of how you would do this and what you would focus on is far too primitive. But you can focus on some of the 'environmental' factors that mediate the genetic effects … Self-regulation abilities play a role here, and trying to strengthen those skills among all young people would have benefits."

Harden acknowledges that environmental factors are hugely important. And she believes that the effect sizes for educational attainment polygenic scores will only increase as the datasets grow and the genomic information becomes more fine-grained. But she doesn't believe you can talk about proper educational intervention without discussing genetics. She points out that most current educational interventions have almost no effect on student outcomes, no matter how well funded they are. "Not talking about genetics means sticking with the status quo," she says.

Read the original here:
The genetic lottery: Are our lives determined at birth? - New Zealand Herald

Social cognitive abilities are associated with objective isolation but not perceived loneliness – PsyPost

New research provides evidence that social isolation is associated with reduced social perception and emotion recognition skills. The findings, published in the Journal of Research in Personality, suggest that social cognitive capacity predicts objective isolation but not feelings of loneliness.

"Loneliness has been increasingly recognized as a major societal problem; population studies have shown that it has a higher impact on mortality rates than hypertension and obesity," explained study author Łukasz Okruszek, the head of the Social Neuroscience Lab at the Polish Academy of Sciences.

"Importantly, it has been emphasized that the feeling of loneliness is driven mostly by one's perception of social relationships rather than by objective qualities of social relationships per se. The same relationship (e.g. marriage) may be perceived as either loving and caring or detached and unaffectionate, depending on one's personal experiences, attitudes, and needs."

"Thus, while loneliness can be linked to objective social isolation, the former does not implicate the latter," Okruszek explained. "People may often report feeling lonely even despite maintaining numerous social ties. Given the important role that cognitive processes play in our appraisals of social relationships, we decided to examine the association between both subjective and objective social isolation and cognitive processes that underlie processing and interpretation of social information."

In the study, 252 individuals (aged 18–50) with no history of psychiatric or neurological disorders completed assessments of subjective loneliness and objective social isolation. Objective social isolation was measured by asking the participants the number of relatives with whom they were in regular contact, could seek help from, and could confide in. Subjective loneliness, on the other hand, was measured by asking the participants the extent to which they agreed with statements such as "No one really knows me well" and "I feel isolated from others".

The participants also completed several validated tests of social cognitive capacity, such as the ability to recognize others' emotional states and infer someone else's state of mind.

The researchers found that those with a higher level of objective social isolation tended to exhibit worse social cognitive capacity. However, this was not the case for subjective feelings of loneliness.

"Contrary to our hypotheses, we observed that social perception and emotion recognition were associated with objective social isolation, but not loneliness," Okruszek told PsyPost. "In contrast, a tendency to attribute hostile intentions in ambiguous social situations (a hostility bias) was associated with both objective social isolation and loneliness. This finding suggests that social cognitive biases may be among the targets for interventions that are aimed at reducing loneliness."

But more research is needed on the longitudinal associations between social cognitive abilities and social isolation.

"While we have shown which cognitive mechanisms are linked with loneliness and objective social isolation, the trajectories linking these findings with health outcomes observed in lonely and isolated individuals are still to be explored," Okruszek said. "Previous studies have found that structural and functional abnormalities may be observed in lonely individuals in key brain structures that are involved in the processing of social information."

"In addition, the feeling of loneliness may negatively impact heart rate variability, which can serve as an indicator of the ability to regulate activity in response to unknown and potentially threatening stimuli in the environment. Thus, the goal of our further studies is to examine the relationship between cognitive mechanisms, activity of brain networks during social information processing and physiological (reduced heart rate variability) markers in lonely individuals."

"As noted above, loneliness is a major public health challenge, and its prevalence and importance is even more pronounced given the global pandemic, the consequences of which will likely be felt for years if not decades," Okruszek added. "We believe it is critically important to understand how loneliness influences health and quality of life, and hope that this work, along with that of others, will ultimately benefit society."

The study, "Owner of a lonely mind? Social cognitive capacity is associated with objective, but not perceived social isolation in healthy individuals", was authored by Ł. Okruszek, A. Piejka, M. Krawczyk, A. Schudy, M. Wisniewska, K. Zurek, and A. Pinkham.

Read this article:
Social cognitive abilities are associated with objective isolation but not perceived loneliness - PsyPost

The Genetic Lottery is a bust for both genetics and policy – Massive Science

The last decade has seen genetics and evolution grapple with their history, one composed of figures who laid the foundations of their field while also promoting vile racist, sexist, and eugenicist beliefs.

In her new book, The Genetic Lottery, Kathryn Paige Harden, professor of psychology at University of Texas at Austin, attempts the seemingly impossible task of showing that, despite a history of abuse, behavioral genetics is not only scientifically valuable but is an asset to the social justice movement.

In this attempt, she fails twice. For the first half of the book, Harden tries to transform the disappointment of behavioral genetics in the years following the Human Genome Project into a success that proves that genes are a major and important cause of social inequality, like educational attainment or income levels. In the second half, she tries to show that this information is not a justification for inequality, rather it is a tool to use in our efforts to make society more equitable and cannot be ignored if we wish to be successful. To say the least, this section too falls short. Harden refuses to engage with the history and trajectory of her field, and ultimately the science fails to uphold the idea that not considering genetic differences hinders our attempts to create a more equitable world.

In the book Misbehaving Science, sociologist Aaron Panofsky documents the history and progression of behavioral genetics, from its formal inception in the 1960s. Throughout its history behavioral genetics has responded to criticism in a variety of ways.

In 1969, the educational psychologist Arthur Jensen used behavioral genetics methods to argue that IQ gaps between white and Black Americans had genetic origins and, therefore, could not be remedied by educators or social policy. As criticism from mainstream geneticists and evolutionary biologists tied Jensen and behavioral geneticists to each other, the field attempted to hold a middle ground between Jensen's racist conclusions and the belief that human behavioral genetics was fundamentally flawed. However, in this attempt to preserve their field from criticism, behavioral geneticists progressively defended the importance of race science research and adopted some core premises about the influence of genetic differences on the racial IQ gap.

In the following decades, Jensen and like-minded researchers like J. Philippe Rushton, Richard Lynn, and Linda Gottfredson received funding from the Pioneer Fund, an organization explicitly dedicated to race betterment. All the while, they were integrated into editorial boards of journals that published behavioral genetics work and treated as colleagues. Even mainstream behavioral genetics work like the Minnesota Study of Twins Reared Apart and the Texas Adoption Project would receive funding from the noxious Fund.

In attempts to justify their field against continued criticism, behavioral geneticists themselves used twin study results to argue social interventions would be ineffective. As Panofsky wrote:

This history, including behavioral genetics' own role in generating, promoting, and defending scientific racism and determinist views of genetics, is completely absent from Harden's book. This history matters; it is the source of the isolation of behavioral genetics from mainstream genetics research. This isolation has produced the intellectually and ideologically stagnant lineage that Harden operates in.

These biases are most pronounced in the early chapters walking readers through the science, which often leads to an incomplete, misleading, or mistaken account of genetic research and behavior. Harden presents an argument for the major causal role of genetic differences, drawing on results that span decades, from twin studies to recent developments like genome-wide association studies (GWAS), polygenic scores (a single value combining the individual estimated effects of genome-wide variations on a phenotype), and genomic analyses of siblings. Unfortunately, Harden often presents these results in such a misleading way that it obscures how damaging they actually are to her own core thesis.

For example, Harden extols sibling analyses as unassailable evidence of independent, direct genetic causation, free of the biases found in other methods. While it's true that polygenic scores from sibling analyses resolve substantial problems that sometimes create inaccurate associations between DNA and a phenotype, Harden fails to mention several key differences between these sibling-based methods and other genomic or twin-based methods. It is rarely stated clearly that these family methods produce much smaller estimates of genetic effect, often nearly half the size of those from population-based methods, making the 13% of variance explained by current education polygenic scores a likely overestimate. Harden also fails to mention that a commonly used method does not fully eliminate the problems from population structure, or that estimates from siblings can still include confounding effects that create correlations between genes and environment.

Even worse, Harden moves between the less biased, but smaller, results from sibling methods and the more biased but larger estimates from population-based polygenic scores without being clear that this is what she is doing. This happens frequently when discussing research claiming that educational polygenic scores substantially explain differences in income. The result is that Harden obscures the fact that more reliable techniques produce lower estimated genetic effects. Readers may be wrongly led to believe genetic effects are both large and reliable when in reality they are more often one or the other.

Harden's failure to engage with critics of behavioral genetics, often from the political left, veers between simple omissions and outright misrepresentation. This treatment is in stark contrast to how she treats biological determinists on the political right. The work of Charles Murray, the co-author of The Bell Curve, which claimed that differences in IQ scores between the rich and poor were genetic, and whose research aligns neatly with Harden's, is described as mostly true, and his political implications are lightly challenged. The most prominent critic of behavioral genetics, Richard Lewontin, gets much rougher treatment.

In one of the three cases in which Harden bothers to mention Lewontin's decades-long engagement with behavioral genetics, she gets it wrong, claiming that Lewontin merely said that heritability is useless because it is specific to a particular population at a particular time. In reality, Lewontin showed why the statistical foundation of heritability analyses means it is unable to truly separate genetic and environmental effects. Contra Harden's characterization of her opponents, Lewontin recognized genetic factors as a cause of phenotypes; however, he stressed their effects cannot be independent of environmental factors and the dynamics of development.

Harden implies that giving people access to equal resources increases inequality and genetic influence. Lewontin explained why the outcome of equalizing environments depends precisely on which environment you equalize. As a toy example, a cactus and a rose bush respond differently to varying amounts of water. Giving both plants the same small volume of water is good for the cactus's health and bad for the rose's; giving both a larger volume of water is bad for the cactus and good for the rose. Equalized environments, regardless of quality, can reduce or increase inequality, and can reduce or increase the impact of genotypic differences, depending on the environment and the norm of reaction for a trait and set of genotypes. Heritability analyses cannot provide insight into this distribution or into the nature of genotype-environment interactions. These detailed, quantitative, and analytic arguments are entirely ignored by Harden.
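A toy numerical version of this point (purely illustrative numbers, not data from any study): with crossing norms of reaction, which genotype fares better, and by how much, depends entirely on which environment everyone is equalized into.

```python
# Toy norms of reaction for two genotypes across a range of "water" levels.
# Illustrative numbers only; the point is that equalizing the environment at
# different levels flips which genotype does better.
health_cactus = {1: 9, 2: 8, 3: 6, 4: 3, 5: 1}   # thrives when dry
health_rose   = {1: 2, 2: 4, 3: 6, 4: 8, 5: 9}   # thrives when wet

for water_level in (1, 5):
    diff = health_cactus[water_level] - health_rose[water_level]
    print(f"Everyone gets water level {water_level}: cactus - rose = {diff:+d}")

# Equalizing at level 1 favours the cactus; equalizing at level 5 favours the
# rose. The "genetic" difference is not a fixed property of the genotypes.
```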

In her story, people on the political left are ideologically driven to oppose behavioral genetics because they believe it invalidates their desire to ameliorate inequality. In the powerful book-length criticism of behavioral genetics, Not in Our Genes, Lewontin, with neuroscientist Steven Rose and psychologist Leon Kamin, all socialists, defy Harden's characterization of her critics from the left, writing:

They further write:

Not in Our Genes criticizes biological determinism for oversimplifying the processes that create diversity in the natural world, and for the ways that biological determinism is employed for political and ideological reasons by people like Arthur Jensen, Daniel Patrick Moynihan, or Hans Eysenck to undermine movements for social and economic equality on the basis of biological data. Lewontin, Kamin, and Rose did not oppose biological determinism simply on ideological grounds. They knew there was no true threat to egalitarian beliefs posed by biological data if one properly understands biology in a non-determinist way. Instead, they wanted to move beyond just a scientific critique and provide a social analysis of why the mistakes of biological determinism are made, persist, and gain in popularity. They write:

This lack of meaningful engagement with critics is not just poor scholarship; it weakens Harden's case. Problems arise with Harden's discussion of heritability, for example, which would be remedied by a genuine engagement with critics from mainstream genetics and evolutionary biology. Harden takes a hardline position that heritability is a measure of genetic causation within a sampled population; however, despite her attempt over two chapters to build this case, she is still fundamentally mistaken about the concept.

Early work in plant breeding and genetics can help shed light on the source of this confusion. The pre-eminent statistical geneticist, Oscar Kempthorne, in a 1978 critique of behavioral genetics, wrote that the methods employed by the field can tell us nothing about causation because all they really represent is simply a linear association between genetics and phenotypes, without any further ability to connect the two to each other.

The extent to which correlations can be interpreted as causation depends on properly controlling for confounding variables. In the context of heritability, this means that genetics and environment need to be independent of each other, but this cannot be the case without direct experimental manipulation. In fields like plant breeding, it is possible to experimentally randomize which environments a plant genotype experiences, and genetically identical plants can be put in different environments for extra control, so these inferences are safer to make. In human genetics, however, this is not possible, even with the sibling and twin methods Harden focuses on. The processes that complicate causal interpretation of heritability estimates have been discussed ad nauseam by other behavioral geneticists, which is why Harden is one of the few who comes to her conclusions.

One final glaring omission worth noting occurs in Harden's chapter on race and the findings of behavioral genetics. Here, Harden does an admirable job trying to prevent the misapplication of behavioral genetics to questions of racial differences. Surprisingly absent, though, is the fact that across a variety of studies, genetic variation is much larger within races than between races. This finding undermines core perceptions about the biological nature and significance of race. It also has important implications for our assumptions about the role of genetics in phenotypic differences between races, namely that they will be small to nonexistent. One could speculate the omission is because the finding was from none other than Richard Lewontin. This case is particularly problematic because in randomized controlled trials, biology classes emphasizing Lewontin's findings have shown very strong evidence of reducing racial essentialism, prejudice, and stereotyping. Few science education interventions against racism and prejudice have such strong evidence in their favor.

Above all, Harden desperately wants to impart one idea in the first part of the book: genes cause social inequality. Here she argues for causation in terms of difference makers in counterfactual scenarios. In other words, X causes Y if the probability of Y occurring would be different were X not to happen. As Harden notes, experimental science adopts a similar, and in some ways stronger, interventionist theory of causation, based around experimental interventions. Here X is said to cause Y if there is a regular response of Y to an intervention on X.

Under the interventionist theory, Harden's account of genetic causation runs into trouble. First, it requires us to be able to isolate a specific property on which we can intervene. This is possible in cases of simple genetic disorders with clear biological mechanisms and short pathways from gene to trait, like sickle cell anemia or Tay-Sachs. However, this doesn't work for behaviorally and culturally mediated traits involving large numbers of genes with small effects and diffuse associations between genetic and non-genetic factors. There is simply no method to isolate and intervene on the effects of specific genetic variants that holds environmental factors constant in a way we would normally recognize as an experimental intervention. This still applies to the sibling analyses that Harden tries to portray as randomization experiments. Contrary to one of Harden's more bizarre claims, meiosis does not approximate a randomized experiment: all it does is randomize genotypes with respect to siblings; it does not randomize the environments experienced by those genotypes. Our broad array of social and cultural institutions still acts in a confounding way. Instead, we just have a polygenic score, which is more a statistical construct than a tangible property in the world.

Second, for Harden's causal claims to hold weight, genetic and environmental factors must be distinct components that are independently disruptable. This reflects what the philosopher John Stuart Mill called the principle of the composition of causes, which states that the joint effect of several causes is identical with the sum of their separate effects. At its core, Harden's argument assumes that genetic and environmental influences on human behavior are independent and separable. To say the absolute least, this is a highly dubious assumption. Based on the arguments from critics like Lewontin and the work of research programs like developmental systems theory, there is very good reason to think that biological systems are not modular, especially in the case of educational attainment. Genetic and environmental influences interact throughout development; the interactions are dynamic, reciprocal, and highly contingent. It simply isn't plausible to estimate the independent effect of one or the other because they directly influence each other.

A further weakness of Harden's book is that just because genes make a difference to a phenotype, it does not mean that genes are even relevant to the analysis of that phenotype. In reality, Lewis's account of causation, on which X is a cause if a different outcome would have occurred in the absence of X, can be a pretty low bar, and the causes it identifies may not be very relevant. An obviously absurd example: the argument could be made that the sun caused me to wake up this morning, since it is the origin of the trophic cascade that nourished my body enough to continue necessary biological functions. Under Lewis's account, the sun is a cause of my waking up, but it's hardly a relevant or informative cause compared to my alarm clock or to the bus I need to catch at 8:35am.

In Biology as Ideology, Lewontin discusses the causes of the disease tuberculosis. He notes that in medical textbooks the tubercle bacillus, which produces the disease in those it infects, is given as the cause of tuberculosis. Lewontin writes that this biological explanation is focused on the individual level and treats the biological sphere as independent from external causes related to the environment or social structure. While we can surely talk about the role of the tubercle bacillus in causing the disease, we can also talk about the social conditions of unregulated industrial capitalism and their role in causing outbreaks and deaths by tuberculosis, and we can gain far more insight by analyzing the causes of tuberculosis in that way.

This distinction of whether a cause is relevant for particular social and scientific issues becomes a problem for Harden in the climax of her book where she tries to convince the reader that genetic information is a crucial tool for addressing social inequality.

One example given by Harden is that children who perform well but attend poor schools achieve less, and that poor people with higher education end up making less money than rich people in the same fields. These findings are neither novel nor do they require the use of potentially misleading genetic data. While Harden tries to defuse right-wing arguments about the shortcomings of social science research, this isn't a given. As research Harden herself presents shows, results from behavioral genetics bolster the far right, who regularly share this research to promote their beliefs and challenge egalitarian policies. Instead of engaging with this bad-faith criticism from the right, we can simply disregard it, just as Harden disregards their co-option of her field of research.

Finally, Harden expresses a general concern that social science and psychological studies are plagued by genetic confounding, that is, that the correlations they observe are actually due to unconsidered genetic factors that relate an individual to their outcome (i.e. low income doesn't cause poor health; genes cause both low income and poor health). Here Harden is heavy on these complaints, equating research that does not include genetic information with robbing taxpayers, but light on evidence that this genetic confounding is a widespread problem, or that it can only be addressed with behavioral genetic research.

Surprisingly, all these examples abandon the earlier bluster about genes being crucial causal factors in our lives and instead treat genetic data as one of many methods for causal inference about environmental interventions. We no longer care about heritability estimates; instead, we use twins as an experimental design. In some cases this is fine; however, using individuals who have similar genotypes, environmental characteristics, and phenotypes does not mean that genes are significant causes, it's just a good experimental design. Here, some of Harden's arguments about social science research are accurate. Observational and correlation-based studies are weak for a number of reasons, not simply because they ignore genetic differences. The goal should be strengthening causal inference in the social sciences, and we have some idea of how to do that from other fields. To strengthen the ability to identify causes, epidemiologists employ direct experiments, like randomized controlled trials; exploit natural experiments that can approximate experimental randomization, such as studies that observe changes in outcomes shortly after changes in government policy are enacted; or use designs that statistically match people on background demographic information like income, neighborhood quality, family education, etc.
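As a purely illustrative sketch of the last of these designs (not code from any study mentioned here), matching each exposed person to an unexposed person with a similar background before comparing outcomes looks roughly like this; real studies use propensity scores, calipers and balance diagnostics rather than a single covariate.

```python
# Minimal covariate-matching sketch on simulated data (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n = 200
income = rng.normal(50, 15, n)                  # background covariate (simulated)
treated = rng.integers(0, 2, n).astype(bool)    # e.g. exposed to a policy change
outcome = 0.1 * income + 2.0 * treated + rng.normal(0, 1, n)

# For each treated person, find the untreated person with the closest income
controls_income = income[~treated]
controls_outcome = outcome[~treated]
effects = []
for inc, out in zip(income[treated], outcome[treated]):
    j = np.argmin(np.abs(controls_income - inc))   # nearest neighbour on income
    effects.append(out - controls_outcome[j])

print(f"Matched estimate of the treatment effect: {np.mean(effects):.2f}")
```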

In fact, there are principled reasons to think genetic data has little to no benefit above and beyond the kinds of data we can collect from non-genetic social science experiments. Eric Turkheimer, Harden's doctoral advisor, has articulated the "phenotypic null hypothesis", which states that for many behavioral traits the genetic variance identified in behavioral genetics studies is not an independent mechanism of individual differences, and instead reflects deeply intertwined developmental processes that are best understood and studied at the level of the phenotype. This certainly appears to hold for the traits Harden talks about. Even with GWAS and polygenic scores, we are given no coherent biological mechanism beyond … something to do with the brain; the scores interact with and are correlated with the environment, and they are contextual and modifiable. Harden laments the focus on mechanisms, but identifying specific causal mechanisms would be precisely how education polygenic scores could actually be helpful. For example, in medicine, GWAS have helped identify potential drug targets by identifying biological mechanisms of disease, and can double the likelihood of a drug making it through clinical trials.

However, this situation doesn't exist for things like education. Instead, we can understand the role of correlated traits like ADHD, or the effect of interventions, purely at the phenotypic level, by seeing how educational performance and attainment themselves change under interventions in well-designed experiments. In fact, several polygenic scores, from educational attainment to schizophrenia, and even diseases like cardiovascular disease, have been shown to have virtually no predictive power beyond common clinical or phenotypic measures, meaning we do not predict those particular phenotypes any more accurately even with robust polygenic scores. So why not focus our efforts on phenotypes instead of genotypes in cases like education, income, and health, where we have some ability to do randomized experiments and a wealth of quasi-natural experiments?
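One common way such "no added predictive power" claims are assessed (a hypothetical sketch on simulated data, not the analysis from any study cited here) is to compare how well an outcome is predicted with and without the polygenic score on top of an ordinary phenotypic predictor:

```python
# Sketch: incremental predictive value of a polygenic score (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n = 2_000
parental_education = rng.normal(size=n)                 # phenotypic predictor
pgs = 0.5 * parental_education + rng.normal(size=n)     # score correlated with it
outcome = parental_education + 0.1 * pgs + rng.normal(size=n)

X_base = parental_education.reshape(-1, 1)
X_full = np.column_stack([parental_education, pgs])

baseline = LinearRegression().fit(X_base, outcome)
full = LinearRegression().fit(X_full, outcome)

r2_base = r2_score(outcome, baseline.predict(X_base))
r2_full = r2_score(outcome, full.predict(X_full))
print(f"R^2 without PGS: {r2_base:.3f}, with PGS: {r2_full:.3f}")
```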

There are existing studies that attempt some kind of true experimental manipulation related to education. Despite what Harden or the charter-school-supporting billionaire John Arnold say, we do have some idea of what can improve schools. Research indicates that de-tracking education, that is, ending the separation of students by academic ability and having all students engage with a challenging curriculum, regularly improves the performance of students with lower ability and does not hinder students with higher ability.

Experiments have shown large benefits in pass rates and grades when courses are structured around a more pedagogically informed curriculum that actively engages students. De-tracking and active learning have the added advantage of greatly narrowing racial gaps in educational performance. To achieve these goals it is likely that teachers will need to be better trained and compensated, and teacher-to-student ratios would need to change. These changes would likely be related to school funding, teacher salary and quality, and school resources, even if those factors are not sufficient to improve educational outcomes in every situation.

Simply identifying that other methods can improve the social sciences doesn't mean we shouldn't use every tool in our toolbox, as Harden says. However, there are convincing reasons we ought not to rely on genetic data for this kind of research. One reason is that polygenic scores are not very good controls for experiments testing the effect of an environmental intervention. Research has found that the pervasive interplay of genes and environment weakens their ability to control for genetic confounding or to identify the efficacy of environmental interventions. Since polygenic scores can reflect contingent social biases without our knowing, it is possible, and likely, that by relying on them to identify effective interventions we are in fact reifying ingrained social and economic biases further into our systems.

One final concern is how this research is interpreted by people, were it to be widely adopted. Researchers found in online experiments that the very act of classifying someone based on their educational polygenic score led to stigmas and self-fulfilling prophecies. Those with high scores were perceived to have more potential and competence while those with low scores were perceived in the opposite way. Not only does this research suggest genetic data leads to essentialist beliefs that can re-entrench existing inequalities, but this kind of dependency can also create even more confounding influences that complicate the application of genetic data for social science questions.

Finally, we reach the last issue with The Genetic Lottery: we don't need the concept of genetic luck to pursue egalitarian policies. Harden regularly remarks that the alternative is to perceive people's outcomes as their individual responsibility: either something is the result of genes they have no control over, or it is their fault for not working hard enough. However, progressive politics revolves around structural and systemic factors that are outside of people's control and contribute to their outcomes. There is already a recognition of moral luck, or the idea that people's outcomes are not their fault, but due to the situations they find themselves in. This engagement with progressive motivations and philosophy is absent from Harden's analysis.

In Harden's penultimate chapter she contrasts eugenic, genome-blind, and anti-eugenic approaches to policy. What results is a strawman of genome-blind policy approaches and, often, anti-eugenic policies that are hard to distinguish from eugenic ones. For example, what is the difference between Harden's description of the eugenic policy "Classify people into social roles or positions based on their genetics" and the anti-eugenic policy "Use genetic data to maximize the real capabilities of people to achieve social roles and positions"? While the genome-blind position is described as "Pretend that all people have an equal likelihood of achieving all social roles or positions after taking into account their environment", all we really need to do to achieve our progressive goals is ensure that people's ability to succeed and thrive in life is not conditioned upon their origin, preferences, or abilities. There's simply no need to use genetic data on people at all.

In another case, involving healthcare, Harden suggests the genome-blind approach is to keep our system the same while prohibiting the use of genetic information, while the anti-eugenic approach is to create systems where everyone is included, regardless of the outcome of the genetic lottery. However, the system Harden describes is not one of universal social programs that ensure healthcare, housing, or education regardless of economic situation. Rather, it is a system that resembles means-testing social welfare with genetic data. Of course, universal social programs achieve exactly the anti-eugenic goal while still being genome-blind! Harden's complete disregard for the actual rationale and form of progressive policies when crafting these genome-blind caricatures is inexcusable from someone who claims to be progressive.

For a progressive who supports universal healthcare, a living wage for all, housing as a human right, or free education, it does not matter that people are different, and it does not matter what causes those differences. The fact that some people need healthcare to survive is the reason it should be available for free, whether the need stems from an inherited or an acquired disease. It is acknowledged that people have different preferences and strengths, which ultimately results in them living different lives. The fact that for some people this means the difference between a living wage and poverty is what progressives take issue with, and it doesn't matter what the cause of these differences is, simply that we address them.

Ultimately, Harden tries to sell us on research that we don't need, that is based on faulty premises, and that is incapable of delivering on what she promises. Her failure to engage with the history of her own field, her scientific critics, or the actual content of progressive political goals leaves this book in a very poor place. In a way, The Genetic Lottery represents the fact that behavioral genetics has nowhere left to go after the tenets of genetic determinism and biological reductionism were shown to be untenable. If one wants to gain an understanding of modern genetics, or to learn how we may strengthen progressive causes, they should look elsewhere.

Follow this link:
The Genetic Lottery is a bust for both genetics and policy - Massive Science

What the First Two Laws of Thermodynamics Are and Why They Matter – Interesting Engineering

Thermodynamics is the branch of physics that studies the relationship between heat and other forms of energy. It's especially focused on energy transfer and conversion and has a lot to contribute to the fields of chemical and mechanical engineering, physical chemistry, and biochemistry.

The term thermodynamics was likely first coined by the mathematical physicist William Thomson, also known as Lord Kelvin, in his paper On the Dynamical Theory of Heat (1854).

Modern thermodynamics is based on four laws: the zeroth law (which concerns thermal equilibrium), the first law (the conservation of energy), the second law (which concerns entropy), and the third law (which concerns the behaviour of systems as they approach absolute zero).

In this article, we'll be focusing on the first and second laws of thermodynamics.

The first law of thermodynamics is also known as the law of conservation of energy. Given that energy can't be created or destroyed, the total energy of an isolated system will always be constant; energy can only be converted into another form or transferred somewhere else within the system.

The formula of the first law of thermodynamics is ΔU = Q − W, where ΔU is the change in the internal energy U of the system, Q is the net heat transferred into the system (the sum of all the heat transfers of the system), and W is the net work done by the system (the sum of all work performed on or by the system).
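A quick worked example with made-up numbers: if a gas absorbs 500 J of heat and does 200 J of work on its surroundings, its internal energy rises by 300 J.

```python
# First law of thermodynamics: delta_U = Q - W
# Q: net heat added to the system (J); W: net work done BY the system (J).
def internal_energy_change(heat_in_joules: float, work_by_system_joules: float) -> float:
    return heat_in_joules - work_by_system_joules

# Example: 500 J of heat absorbed, 200 J of work done by the gas
print(internal_energy_change(500.0, 200.0))  # 300.0 J increase in internal energy
```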

The second law introduces the concept of entropy in thermodynamics. Entropy is a physical property that measures the amount of thermal energy in a system that is unavailable for doing useful work. The energy that can't do work turns into heat, and the heat increases the molecular disorder of the system. Entropy can also be thought of as a measurement of that disorder.

The second law of thermodynamics states that entropy is always increasing. This is because, in any isolated system, there is always a certain amount of energy that is not available to do work. Consequently, heat will always be produced and this naturally increases the disorder (or entropy) of the system.

The change in entropy (ΔS) equates to the heat transferred (Q) divided by the temperature (T). This is why the second law of thermodynamics can be expressed with the formula ΔS = Q / T.
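A similarly small illustration, assuming a transfer at roughly constant temperature (the case where ΔS = Q/T applies directly): adding 1000 J of heat to a reservoir at 300 K raises its entropy by about 3.3 J/K.

```python
# Entropy change for heat transferred at (approximately) constant temperature:
# delta_S = Q / T, with Q in joules and T in kelvin.
def entropy_change(heat_joules: float, temperature_kelvin: float) -> float:
    return heat_joules / temperature_kelvin

print(f"{entropy_change(1000.0, 300.0):.2f} J/K")  # ~3.33 J/K
```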

As stated above, the first law of thermodynamics closely relates to the law of conservation of energy, which was first expressed by Julius Robert Mayer in 1842. Mayer realized that a chemical reaction produces heat and work and that work can then produce a definite amount of heat. Although this is essentially a statement of the conservation of energy, Mayer was not part of the scientific establishment, and his work was ignored for some years.

Instead, German physicist Rudolf Clausius, Irish mathematician William Thomson (Lord Kelvin), and Scottish mechanical engineer William Rankine would have a greater role in developing the science of thermodynamics and adapting the conservation of energy to thermodynamic processes, starting in around 1850.

The second law of thermodynamics has its origin in the work of French mechanical engineer Nicolas Léonard Sadi Carnot, who studied steam engines. He is often considered the father of thermodynamics due to his book Reflections on the Motive Power of Fire (1824), which presented a theoretical discussion of the perfect (but unattainable) heat engine. "Motive power" is what we'd call work nowadays, and "fire" refers to heat.

In this book, Sadi Carnot wrote an early statement of the second law of thermodynamics, which was reformulated by Rudolf Clausius more than forty years later. Other scientists also contributed to defining the law: the aforementioned Lord Kelvin (1851), German physicist Max Planck (1897), and Greek mathematician Constantin Carathéodory (1909).

According to thermal science researcher Jayaraman Srinivasan, the discovery of the first and second laws of thermodynamics was revolutionary in the physics of the 19th Century.

The third law of thermodynamics was developed by German chemist Walther Nernst at the beginning of the 20th century. Nernst demonstrated that the maximum work obtainable from a process could be calculated from the heat evolved at temperatures close to absolute zero. The zeroth law had been studied since the 1870s but was defined as a separate law during the 1900s.

The first and second laws of thermodynamics are independent of each other because the law of entropy is not directly derived or deduced from the law of conservation of energy or vice versa.

But at the same time, the two laws complement each other because, while the first law of thermodynamics includes the transfer or transformation of energy, the second law of thermodynamics talks about the directionality of physical changes: how isolated or closed systems move from lower to higher entropy due to the energy that can't be used for work.

In other words, the second law of thermodynamics takes into account the fact that the energy transformation described in the first law of thermodynamics always releases some extra, useless energy that can't be converted into work.

The laws of physics explain how natural phenomena and machines work. These explanations not only satisfy our curiosity but also allow us to predict phenomena. In fact, they are instrumental in allowing us to build functional machinery.

As a branch of physics, thermodynamics is no exception to this. If you know how much energy in a system can be used for work, and how much will turn into heat (and there's always a certain amount of useless energy in a system), you can predict how much heat a given machine will produce under different conditions. Then, you can decide what to do with that heat.

Heat is a form of energy, and if you know that energy can't be destroyed but only transformed, you could find a way to turn that thermal energy into mechanical energy, which is what, in fact, heat engines do.

Given this basic application of the first and second laws of thermodynamics, you can probably imagine how useful they can be in the engineering field. But they can also have applications in chemistry, cosmology (entropy predicts the eventual heat death of the universe), atmospheric sciences, biology (plants convert radiant energy into chemical energy during photosynthesis), and many other fields. Hence the importance of thermodynamics.

To break the first law of thermodynamics, we'd have to create a "perpetual motion" machine that worked continuously without the input of any kind of power. That doesn't exist yet. All the machines that we know receive energy from a source (thermal, mechanical, electrical, chemical, etc.) and transform it into another form of energy. For example, steam engines convert thermal energy into mechanical energy.

To break the first law of thermodynamics, life itself would have to be reimagined. Living things also exist in concordance with the law of conservation of energy. Plants use photosynthesis to make food (chemical energy for their use) and animals and humans eat to survive.

Eating is basically extracting energy from food and converting it into chemical energy (stored as glucose), which is what actually gives us energy. We turn that chemical energy into mechanical energy when we move, and into thermal energy when we regulate our body's temperature, etc.

But things may be a bit different in the quantum world. In 2002, chemical physicists of the Australian National University in Canberra demonstrated that the second law of thermodynamics can be briefly violated at the atomic scale. The scientists put latex beads in water and trapped them with a precise laser beam. Regularly measuring the movement of the beads and the entropy of the system, they observed that the change in entropy was negative over time intervals of a few tenths of a second.

More recently, researchers, including some working on Google's quantum processor Sycamore, created "time crystals", an out-of-equilibrium phase of matter that cycles indefinitely between two energy states without losing any energy to the environment. These systems never reach thermal equilibrium. They form a quantum system that does not appear to increase its entropy, which totally violates the second law of thermodynamics.

This is a real-life demonstration of Maxwell's demon, a thought experiment to break the second law of thermodynamics.

Proposed by Scottish physicist James Clerk Maxwell in 1867, the experiment consisted of putting a demon in the middle of two chambers of gas. The demon controlled a massless door that allowed the chambers to exchange gas molecules. Because the demon opened and closed the door selectively, only fast-moving molecules passed through in one direction, and only slow-moving molecules passed through in the other. This way, one chamber heated up and the other cooled down, diminishing the total entropy of the two gases without involving work.
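A toy simulation (purely illustrative, in arbitrary units) shows the effect the demon relies on: sorting molecules by speed alone yields a hotter and a colder chamber, while the paradox is that the idealized demon does this without performing work on the gas.

```python
# Toy Maxwell's demon: sort molecules by speed into two chambers (illustrative).
import numpy as np

rng = np.random.default_rng(3)
speeds = rng.rayleigh(scale=1.0, size=10_000)   # toy molecular speed distribution

median_speed = np.median(speeds)
hot_chamber = speeds[speeds > median_speed]     # fast molecules let through one way
cold_chamber = speeds[speeds <= median_speed]   # slow molecules let through the other

# Mean kinetic energy per molecule is a stand-in for temperature (unit mass)
print(f"Hot chamber mean energy:  {np.mean(0.5 * hot_chamber**2):.2f}")
print(f"Cold chamber mean energy: {np.mean(0.5 * cold_chamber**2):.2f}")
```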

Although we still don't know exactly how to use time crystals, they are already considered a revolutionary discovery in condensed matter physics. Time crystals could, at the very least, significantly improve quantum computing technology.

But there's something about the concept of perpetual motion without any energy input that inevitably leads futuristic minds to imagine perpetual-motion quantum devices requiring no additional power: an unplugged refrigerator that can still cool your food down or, more science-fictional, a supercomputer sustaining the simulation we could be living in.

Read the original post:
What the First Two Laws of Thermodynamics Are and Why They Matter - Interesting Engineering

Moët Hennessy pledges to reduce its carbon footprint and invest in sustainability – decanter.com

Moët Hennessy, the wine and spirits division of leading luxury goods group Moët Hennessy Louis Vuitton (LVMH), has issued a pledge to reduce its carbon footprint by adopting the 1.5°C target, as stipulated under the Paris Agreement and confirmed by the Science Based Targets initiative (SBTi).

As part of the pledge, Moët Hennessy has committed to reducing its greenhouse gas emissions by 50% in absolute terms by 2030 (compared with 2019 figures) by focusing on four core areas: reducing the carbon impact of its raw materials, developing eco-conscious packaging, leveraging renewable energy and promoting low-carbon transportation.

"We believe we have an important responsibility alongside the wines and spirits industry to significantly reduce our carbon footprint throughout the value chain, while developing biodiversity in our regions," said Moët Hennessy CEO Philippe Schaus. "We have set ambitious goals that we are committed to following regularly and integrating into Moët Hennessy's overall strategy."

The announcement follows a series of sustainable viticulture initiatives released as part of Moët Hennessy's Living Soils Living Together programme. Last week, the business inaugurated its new Robert-Jean de Vogüé Research Centre near its Mont Aigu winery in Oiry, Champagne. The centre, which cost the group some €20m (£17m), is dedicated to advancing knowledge of future environmental and production challenges, tackling climate change and developing sustainable winemaking practices.

"Faced with the two main environmental challenges of climate change and the loss of biodiversity, Moët Hennessy has structured all of its actions in its Living Soils Living Together programme, our social and environmental commitment," Moët Hennessy chief sustainability officer Sandrine Sommers told Decanter. "We have set ourselves ambitious goals for 2030… To support this we need research, innovations and new solutions, and our new [centre] is crucial to meet the challenges of the viticulture of tomorrow."

Conceived by architect Giovanni Pace, the new research centre covers an area of 4,000m², and the building itself is designed to showcase Moët Hennessy's commitment to sustainability. It is made from materials that provide natural insulation, which helps reduce energy consumption, and is embedded in a gently sloping earthen embankment so that it blends gracefully with the surrounding landscape.

Its research efforts will focus on four key areas. A microbiology and biotechnology hub will be dedicated to better understanding the impact of microorganisms on vineyards and on the fermentation process, while a further hub will focus on plant physiology, to mitigate the impact of climate change on vines and grapes and tackle the challenges brought on by warming temperatures. The plant physiology team will benefit from innovative infrastructure such as climate chambers, which are capable of simulating climate changes in Moët Hennessy vineyards across the globe.

A process engineering hub will be dedicated to analysing the winemaking process, from pressing to bottling, with the aim of optimising it and promoting recyclability. Lastly, the sensory analysis and formulation hub will focus on studying the organoleptic profile of Moët Hennessy products throughout the different stages of the manufacturing process to maximise wine quality.

"The research centre will be a hub for sharing knowledge both between [our] houses and with public sector researchers and will also embrace collaboration with other external structures," said Schaus. Indeed, over 10 partners will collaborate with the centre, including the Comité Interprofessionnel du Vin de Champagne (CIVC), the University of Reims Champagne-Ardenne, and France's National Research Institute for Agriculture, Food and Environment (INRAE de Colmar).

The new centre is named after the former Moët Hennessy president and innovator Robert-Jean de Vogüé, who contributed to the creation of the CIVC in 1941. "Robert-Jean de Vogüé always thought a quarter-hour ahead," Schaus said. "A great figure in the world of wine, he left an indelible impression on his era with his innovative spirit and activities with Maison Moët & Chandon in France and around the world."

See the article here:
Moët Hennessy pledges to reduce its carbon footprint and invest in sustainability - decanter.com

Epigenetics, the misunderstood science that could shed new light on ageing – The Guardian

A little over a decade ago, a clutch of scientific studies was published that seemed to show that survivors of atrocities or disasters such as the Holocaust and the Dutch famine of 1944-45 had passed on the biological scars of those traumatic experiences to their children.

The studies caused a sensation, earning their own BBC Horizon documentary and the cover of Time (I also wrote about them, for New Scientist), and no wonder. The mind-blowing implications were that DNA wasn't the only mode of biological inheritance, and that traits acquired by a person in their lifetime could be heritable. Since we receive our full complement of genes at conception and it remains essentially unchanged until our death, this information was thought to be transmitted via chemical tags on genes, called epigenetic marks, that dial those genes' output up or down. The phenomenon, known as transgenerational epigenetic inheritance, caught the public imagination, in part because it seemed to release us from the tyranny of DNA. Genetic determinism was dead.

A decade on, the case for transgenerational epigenetic inheritance in humans has crumbled. Scientists know that it happens in plants, and weakly in some mammals. They can't rule it out in people, because it's difficult to rule anything out in science, but there is no convincing evidence for it to date and no known physiological mechanism by which it could work. One well-documented finding alone seems to present a towering obstacle to it: except in very rare genetic disorders, all epigenetic marks are erased from the genetic material of a human egg and sperm soon after their nuclei fuse during fertilisation. "The [epigenetic] patterns are established anew in each generation," says geneticist Bernhard Horsthemke of the University of Duisburg-Essen in Germany.

Even at the time, sceptics pointed out that it was fiendishly difficult to disentangle the genetic, epigenetic and environmental contributions to inherited traits. For one thing, a person shares her mother's environment from the womb on, so that person's epigenome could come to resemble her mother's without any information being transmitted via the germline, or reproductive cells. In the past decade, the threads have become even more tangled, because it turns out that epigenetic marks are themselves largely under genetic control. Some genes influence the degree to which other genes are annotated, and this shows up in twin studies, where certain epigenetic patterns have been found to be more similar in identical twins than in non-identical ones.

This has led researchers to think of the epigenome less as the language in which the environment commands the genes, and more as a way in which the genes adjust themselves to respond better to an unpredictable environment. "Epigenetics is often presented as being in opposition to genetics, but actually the two things are intertwined," says Jonathan Mill, an epigeneticist at the University of Exeter. The relationship between them is still being worked out, but for geneticist Adrian Bird of the University of Edinburgh, the role of the environment in shaping the epigenome has been exaggerated. "In fact, cells go to quite a lot of trouble to insulate themselves from environmental insult," he says.

Whatever that relationship turns out to be, the study of epigenetics seems to reinforce the case that it's not nature versus nurture, but nature plus nurture (so genetic determinism is still dead). And whatever the contribution of the epigenome, it doesn't seem to translate across generations.

All the aforementioned researchers rue the fact that transgenerational epigenetic inheritance is still what most people think of when they hear the word epigenetics, because the past decade has also seen exciting advances in the field, in terms of the light it has shed on human health and disease. The marks that accumulate on somatic cells (that is, all the body's cells except the reproductive ones) turn out to be very informative about these, and new technologies have made it easier to read them.

Different people define epigenetics differently, which is another reason why the field is misunderstood. Some define it as modifications to chromatin, the package that contains DNA inside the nuclei of human cells, while others include modifications to RNA. DNA is modified by the addition of chemical groups. Methylation, when a methyl group is added, is the form of DNA modification that has been studied most, but DNA can also be tagged with hydroxymethyl groups, and proteins in the chromatin complex can be modified too.

Researchers can generate genome-wide maps of DNA methylation and use these to track biological ageing, which, as everyone knows, is not the same as chronological ageing. The first such epigenetic clocks were established for blood, and showed strong associations with other measures of blood ageing such as blood pressure and lipid levels. But the epigenetic signature of ageing is different in different tissues, so these couldn't tell you much about, say, the brain or liver. The past five years have seen the description of many more tissue-specific epigenetic clocks.
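In practice, such clocks are typically built by regressing age on methylation levels at many CpG sites. The sketch below uses simulated data and a generic elastic-net model, so it illustrates the general idea rather than reproducing any published clock:

    # Minimal sketch of an "epigenetic clock": penalised regression of age on
    # CpG methylation beta values. All data here are simulated for illustration.
    import numpy as np
    from sklearn.linear_model import ElasticNetCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_cpgs = 300, 2000

    ages = rng.uniform(20, 90, n_samples)                  # chronological ages
    betas = rng.uniform(0.05, 0.95, (n_samples, n_cpgs))   # background methylation
    informative = rng.choice(n_cpgs, 50, replace=False)    # 50 hypothetical age-related sites
    betas[:, informative] += 0.005 * (ages[:, None] - 55)  # methylation drifts with age
    betas = np.clip(betas, 0, 1)

    X_train, X_test, y_train, y_test = train_test_split(betas, ages, random_state=0)

    clock = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X_train, y_train)
    predicted = clock.predict(X_test)                      # "epigenetic age"
    age_acceleration = predicted - y_test                  # positive = biologically "older"
    print(f"median absolute error: {np.median(np.abs(age_acceleration)):.1f} years")

A tissue-specific clock would simply be the same kind of model trained on methylation profiles from that tissue.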

Mill's group is working on a brain clock, for example, that he hopes will correlate with other indicators of ageing in the cortex. He has already identified what he believes to be an epigenetic signature of neurodegenerative disease. "We're able to show robust differences in DNA methylation between individuals with and without dementia, that are very strongly related to the amount of pathology they have in their brains," Mill says. It's not yet possible to say whether those differences are a cause or consequence of the pathology, but they provide information about the mechanisms and genes that are disrupted in the disease process, which could guide the development of novel diagnostic tests and treatments. If a signal could be found in the blood, say, that correlated with the brain signal they've detected, it could form the basis of a predictive blood test for dementia.

While Bird and others argue that the epigenome is predominantly under genetic control, some researchers are interested in the trace that certain environmental insults leave there. Smoking, for example, has a clear epigenetic signature. "I could tell you quite accurately, based on their DNA methylation profile, if someone was a smoker or not, and probably how much they smoked and how long they had smoked for," says Mill.

James Flanagan of Imperial College London is among those who are exploiting this aspect of the epigenome to try to understand how lifestyle factors such as smoking, alcohol and obesity shape cancer risk. Indeed, cancer is the area where there is most excitement in terms of the clinical application of epigenetics. One idea, Flanagan says, is that once informed of their risk, a person could make lifestyle adjustments to reduce it.

Drugs that remodel the epigenome have been used therapeutically in those already diagnosed with cancer, though they tend to have bad side-effects because their epigenetic impact is so broad. Other widely prescribed drugs that have few side-effects might turn out to work at least partly via the epigenome too. Based on the striking observation that breast cancer risk is more than halved in diabetes patients who have taken the diabetes drug metformin for a long time, Flanagan's group is investigating whether this protective effect is mediated by altered epigenetic patterns.

Meanwhile, the US-based company Grail, which has just been bought, controversially, by DNA sequencing giant Illumina, has come up with a test for more than 50 cancers that detects altered methylation patterns in DNA circulating freely in the blood.

Based on publicly available data on its false-positive and false-negative rates, the Grail test looks very promising, says Tomasz K Wojdacz, who studies clinical epigenetics at the Pomeranian Medical University in Szczecin, Poland. But more data is needed, and it is being collected now in a major clinical trial in the NHS. The idea is that the test would be used to screen populations, identifying individuals at risk who would then be guided towards more classical diagnostic procedures such as tissue-specific biopsies. It could be a gamechanger in cancer, Wojdacz thinks, but it also raises ethical dilemmas that will have to be addressed before it is rolled out. "Imagine that someone got a positive result but further investigations revealed nothing," he says. "You can't put that kind of psychological burden on a patient."

The jury is out on whether it's possible to wind back the epigenetic clock. This question is the subject of serious inquiry, but many researchers worry that as a wave of epigenetic cosmetics hits the market, people are parting with their money on the basis of scientifically unsupported claims. "Science has only scratched the surface of the epigenome," says Flanagan. "The speed at which these things happen and the speed at which they might change back is not known." It might be the fate of every young science to be misunderstood. That's still true of epigenetics, but it could be about to change.

Until recently, sequencing the epigenome was a relatively slow and expensive affair. To identify all the methyl tags on the genome, for example, would require two distinct sequencing efforts and a chemical manipulation in between. In the past few years, however, it has become possible to sequence the genome and its methylation pattern simultaneously, halving the cost and doubling the speed.

Oxford Nanopore Technologies, the British company responsible for much of the tracking of the global spread of Covid-19 variants, which floated on the London Stock Exchange last week, offers such a technology. It works by pushing DNA through a nanoscale hole while a current passes either side. DNA consists of four bases, or letters (A, C, G and T), and because each one has a unique shape, it distorts the current in the nanopore in a unique and measurable way. A methylated base has its own distinctive shape, meaning it can be detected as a fifth letter.

The US firm Illumina, which leads the global DNA sequencing market, offers a different technique, and chemist Shankar Balasubramanian of the University of Cambridge has said that his company, Cambridge Epigenetix, will soon announce its own epigenetic sequencing technology, one that could add a sixth letter in the form of hydroxymethyl tags.

Protein modifications still have to be sequenced separately, but some people include RNA modifications in their definition of epigenetics, and at least some of these technologies can detect those too, meaning they have the power to generate enormous amounts of new information about how our genetic material is modified in our lifetime. That's why Ewan Birney, who co-directs the European Bioinformatics Institute in Hinxton, Cambridgeshire, and who is a consultant to Oxford Nanopore, says that epigenetic sequencing stands poised to revolutionise science: "We're opening up an entirely new world."

The rest is here:
Epigenetics, the misunderstood science that could shed new light on ageing - The Guardian

The neuroscience of advanced scientific concepts | npj Science of Learning – Nature.com

This study identified the content of the neural representations in the minds of physicists considering some of the classical and post-classical physics concepts that characterize their understanding of the universe. In this discussion, we focus on the representations of post-classical concepts, which are the most recent and most abstract and have not been previously studied psychologically. The neural representations of both the post-classical and classical concepts were underpinned by four underlying neurosemantic dimensions, such that these two types of concepts were located at opposite ends of the dimensions. The neural representations of classical concepts tended to be underpinned by underlying dimensions of measurability of magnitude, association with a mathematical formulation, having a concrete, non-speculative basis, and in some cases, periodicity. By contrast, the post-classical concepts were located at the other ends of these dimensions, stated initially here in terms of what they are not (e.g. they are not periodic and not concrete). Below we discuss what they are.

The main new finding is the underlying neural dimension of representation pertaining to the concepts' presence (in the case of the classical concepts) or absence (in the case of the post-classical concepts) of a concrete, non-speculative basis. The semantic characterization of this new dimension is supported by two sources of converging evidence. First, the brain imaging measurement of each concept's location on this underlying dimension (i.e. the concept's factor scores) converged with the behavioral ratings of the concepts' degree of association with this dimension (as we have interpreted it) by an independent group of physicists. (This type of convergence occurred for the other three dimensions as well.) Second, the two types of concepts have very distinguishable neural signatures: a classifier can very accurately distinguish the mean of the post-classical concepts' signatures from the mean of the classical concepts' signatures within each participant, with a grand mean accuracy of 0.93, p < 0.001.
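A minimal sketch of this kind of within-participant classification is given below. The activation patterns are simulated and the classifier (logistic regression with leave-one-out cross-validation) is a generic stand-in, not the authors' actual pipeline:

    # Sketch: classify simulated voxel-activation patterns as "classical" vs
    # "post-classical" within one participant, leave-one-concept-out CV.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(1)
    n_concepts_per_class, n_voxels = 15, 200

    classical = rng.normal(0.0, 1.0, (n_concepts_per_class, n_voxels))
    post_classical = rng.normal(0.4, 1.0, (n_concepts_per_class, n_voxels))  # shifted mean signature

    X = np.vstack([classical, post_classical])
    y = np.array([0] * n_concepts_per_class + [1] * n_concepts_per_class)

    accuracy = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                               cv=LeaveOneOut()).mean()
    print(f"leave-one-out accuracy: {accuracy:.2f}")

In the real study the accuracy is computed per participant and then averaged, which is what the grand mean of 0.93 refers to.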

As physicists ventured into conceptually new territory in the 20th century and developed new post-classical concepts, their brains organized the new concepts with respect to a new dimension that had not played a role in the representation of classical concepts.

To describe what mental processes might characterize the post-classical end of this new dimension, it is useful to consider what attributes of the post-classical concepts could have led to their being neurally organized as they are, and what cognitive and neural processes might operate on these attributes. As noted previously, the post-classical concepts tend to lack measurability and are less likely to be strongly associated with a mathematical formulation or with periodicity; these attributes are often simply absent from post-classical concepts.

More informative than the absent attributes are four types of cognitive processes evoked by the post-classical concepts: (1) Reasoning about intangibles, taking into account their separation from direct experience and their lack of direct observability; (2) Assessing consilience with other, firmer knowledge; (3) Causal reasoning about relations that are not apparent or observable; and (4) Knowledge management of a large knowledge organization consisting of a multi-level structure of other concepts.

In addition to enabling the decoding of the content of the participants' thoughts, whether they were thinking of dark matter or tachyon for example, the brain activation patterns are also informative about the concomitant psychological processes that operate on the concepts; in particular, the four processes listed above are postulated to co-occur specifically with the post-classical concepts. The occurrence of these processes was inferred from those locations of the voxel clusters associated with (having high loadings on) the classical/post-classical factor, specifically the factor locations where the activation levels increased for the post-classical concepts. (These voxel clusters are shown in Fig. 4, and their centroids are included in Table 2.) Inferring a psychological process based on previous studies that observed activation in that location is called reverse inference. This can be an uncertain inferential method because many different processes or tasks can evoke activation at the same location. What distinguishes the current study are several sources of independent converging evidence, in conjunction with the brain locations associated with a factor (and not simply observed activation), indicating a particular process.

The factor clusters are encircled and numbered for ease of reference in the text and their centroids are included in Table 2. These locations correspond to the four classes of processes evoked by the post-classical concepts.

First, a statistically reliable decoding model predicted the activation levels for each concept in the factor locations, based on independent ratings of the concepts with respect to the postulated dimension/factor. The activation levels of the voxels in the factor locations were systematically modulated by the stimulus set, with the post-classical concepts, a specific subset of the stimuli, eliciting the highest activation levels in these locations, resulting in the highest factor scores for this factor. Thus these brain locations were associated with an activation-modulating factor, not with a stimulus or a task. Second, the processes are consistent with the properties participants reported having associated with the post-classical concepts. These properties provide converging evidence for these four types of processes occurring. For example, the concept of multiverse evoked properties related to assessing consilience, such as "a hypothetical way to explain away constants". Another example is that tachyons and quasars were attributed with properties related to reasoning about intangibles, such as "quasi-stellar objects". Third, the processes attributed to the factor locations were based not simply on an occasional previous finding, but on the large-scale meta-analysis (the Neurosynth database, Yarkoni et al.10) using the association-based test feature. The association between the location and the process was based on the cluster centroid locations; particularly relevant citations are included in the factor descriptions. Each of the four processes is described in more detail below.

The nature of many of the post-classical concepts entails the consideration of alternative possible worlds. The post-classical factor location in the right temporal area (shown in cluster 5 in Fig. 4) has been associated with hypothetical or speculative reasoning in previous studies. In a hypothetical reasoning task, the left supramarginal factor location (shown in cluster 8) was activated during the generation of novel labels for abstract objects11. Additionally, the right temporal factor location (shown in cluster 5) was activated during the assessment of confidence in probabilistic judgments12.

Another facet of post-classical concepts is that they require the unknown or non-observable to be brought into consilience with what is already known. The right middle frontal cluster (shown in cluster 2) has been shown to be part of a network for integrating evidence that disconfirms a belief13. This consilience process resembles the comprehension of an unfolding narrative, where a new segment of the narrative must be brought into coherence with the parts that preceded it. When readers of a narrative judge the coherence of a new segment of text, the dorsomedial prefrontal cortex location (shown in cluster 6) is activated14. This location is associated with a post-classical factor location, as shown in Fig. 4. Thus understanding the coherence of an unfolding narrative text might involve some of the same psychological and neural consilience-seeking processes as thinking about concepts like multiverse.

Thinking about many of the post-classical concepts requires the generation of novel types of causal inferences to link two events. In particular, the inherent role of the temporal relations in specifying causality between events is especially complex with respect to post-classical concepts. The temporal ordering itself of events is frame-dependent in some situations, despite causality being absolutely preserved, leading to counter-intuitive (though not counter-factual) conclusions. For example, in relativity theory the concept of simultaneity entails two spatially separated events that may occur at the same time for a particular observer but which may not be simultaneous for a second observer, and even the temporal ordering of the events may not be fixed for the second observer. Because the temporal order of events is not absolute, causal reasoning in post-classical terms must eschew inferencing on this basis, but must instead rely on new rules (laws) that lead to consilience with observations that indeed can be directly perceived.

Another example, this one from quantum physics, concerns a particle such as an electron that may be conceived to pass through a small aperture at some speed. Its subsequent momentum becomes indeterminate in such a way that the arrival location of the particle at a distant detector can only be described in probabilistic terms, according to new rules (laws) that are very definite but not intuitive. The perfectly calculable non-local wave function of the particle-like object is said to collapse upon arrival in the standard Copenhagen interpretation of quantum physics. Increasingly elaborate probing of physical systems with one or several particles, interacting alone or in groups with their environment, has for decades elucidated and validated the non-intuitive new rules about limits and alternatives to classical causality in the quantum world. The fact that new rules regarding causal reasoning are needed in such situations was described as "the heart of quantum mechanics" and as containing "the only mystery" by Richard Feynman15.

Generating causal inferences to interconnect a sequence of events in a narrative text evokes activation in right temporal and right frontal locations (shown in clusters 3 and 4), which are post-classical factor locations16,17,18, as shown in Fig. 4. Causal reasoning accompanying perceptual events also activates a right middle frontal location (shown in cluster 3) and a right superior parietal location (shown in cluster 1)19. Notably, the right parietal activation is the homolog of a left parietal cluster associated with causal visualization1 found in undergraduates' physics conceptualizations, suggesting that post-classical concepts may recruit right-hemisphere homologs of regions evoked by classical concepts. Additionally, a factor location in the left supramarginal gyrus (shown in cluster 8) is activated in causal assessment tasks such as determining whether the causality of a social event was person-based (being a hard worker) or situation-based (danger)20.

Although we have treated post-classical concepts such as multiverse as a single concept, it is far more complex than velocity. Multiverse entails the consideration of the uncertainty of its existence, the consilience of its probability of existence with measurements of matter in the universe, and the consideration of scientific evidence relevant to a multiverse. Thinking about large, multi-concept units of knowledge, such as the schema for executing a complex multi-step procedure, evokes activation in medial frontal regions (shown in cluster 6)21,22. Reading and comprehending the description of such procedures (read, think about, answer questions, listen to, etc.) requires the reader to cognitively organize diverse types of information in a common knowledge structure. Readers who were trained to self-explain expository biological texts activated an anterior prefrontal cortex region (shown in cluster 7 in Fig. 4) during the construction of text models and strategic processing of internal representations23.

This underlying cognitive function of knowledge management associated with the post-classical dimension may generate and utilize a structure to manage a complex array of varied information that is essential to the concept. This type of function has been referred to as a Managerial Knowledge Unit22. As applied to a post-classical concept such as a tachyon, this knowledge management function would contain links to information to evaluate the possibility of the existence of tachyons, hypothetical particles that would travel faster than light-speed in vacuum. The concept invokes a structured network of simpler concepts (mass, velocity, light, etc.) that compose it. This constitutes a knowledge unit larger than a single concept.

Although the discussion has so far focused on the most novel dimension (the classical vs. post-classical), all four dimensions together compose the neural representation of each concept, which indicates where on each dimension a given concept is located (assessed by the concept's factor scores). The bar graphs of Fig. 5 show how the concepts at the extremes of the dimensions can appear at either extreme on several dimensions. These four dimensions are:

the classical vs. post-classical dimension, as described above, which is characterized by contrasting the intangible but consilient nature of post-classical concepts versus the quantifiable, visualizable, otherwise observable nature of classical concepts.

the measurability of a magnitude associated with a concept, that is, the degree to which it has some well-defined extent in space, time, or material properties versus the absence of this property.

the periodicity or oscillation that describes the behavior of many systems over time, versus the absence of periodicity as an important element.

the degree to which a concept is associated with a mathematical formulation that formalizes the rules and principles of the behavior of matter and energy versus being less specified by such formalizations.

A concept may have a high factor score for more than one factor; for example, potential energy appears as measurable, mathematical, and on the classical end of the post-classical dimension. In contrast, multiverse appears as non-measurable, non-periodic, and post-classical.

The locations of the clusters of voxels with high loadings on each of the factors are shown in Fig. 6.

Colors differentiate the factors and greater color transparency indicates greater depth. Sample concepts from the two ends of the dimensions are listed. The post-classical factor locations include those whose activations were high for post-classical concepts (their locations are shown in Fig. 4) as well as those locations whose activations were high for classical concepts.

Classical concepts with high factor scores on the measurability factor, such as frequency, wavelength, acceleration, and torque, are all concepts that are often measured, using devices such as oscilloscopes and torque wrenches, whereas post-classical concepts such as duality and dark matter have an uncertainty of boundedness and no defined magnitude resulting in factor scores at the other end of the dimension. This factor is associated with parietal and precuneus clusters that are often found to be activated when people have to assess or compare magnitudes of various types of objects or numbers24,25,26, a superior frontal cluster that exhibits higher activation when people are comparing the magnitudes of fractions as opposed to decimals27, and an occipital-parietal cluster (dorsolateral extrastriate V3A) that activates when estimating the arrival time of a moving object28. Additional brain locations associated with this factor include left supramarginal and inferior parietal regions that are activated during the processing of numerical magnitudes;26 and left intraparietal sulcus and superior parietal regions activated during the processing of spatial information29. This factor was not observed in a previous study that included only classical concepts and hence the factor would not have differentiated among the concepts1.

The mathematical formulation factor is salient for concepts that are clearly associated with a mathematical formalization. The three concepts that are most strongly associated with this factor, commutator, Lagrangian, and Hamiltonian, are mathematical functions or operators. Cluster locations that are associated with this factor include: parietal regions that tend to activate in tasks involving mathematical representations30,31 and right frontal regions related to difficult mental calculations32,33. The parietal regions associated with the factor, which extend into the precuneus, activate in arithmetic tasks34. While most if not all physics concepts entail some degree of mathematical formulation, post-classical concepts such as quasar, while being measurable, are typically not associated with an algebraic formulation.

The periodicity factor is salient for many of the classical concepts, particularly those related to waves: wave function, light, radio waves, and gamma rays. This factor is associated with right hemisphere clusters and a left inferior frontal cluster, locations that resemble those of a similarly described factor in a neurosemantic analysis of physics concepts in college students1. This factor was also associated with a right hemisphere cluster in the inferior frontal gyrus and bilateral precuneus.

For all four underlying semantic dimensions, the brain activation-based orderings of the physics concepts with respect to their dimensions were correlated with the ratings of those concepts along those dimensions by independent physics faculty. This correlation makes it possible for a linear regression model to predict the activation pattern that will be evoked by future concepts in physicists' brains. When a new physics concept becomes commonplace (such as a new particle category, say, magnetic monopoles), it should be possible to predict the brain activation that will be the neural signature of the magnetic monopole concept, based on how that concept is rated along the four underlying dimensions.
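A minimal sketch of such a forward model, using simulated ratings and activation patterns and an ordinary least-squares regression rather than the authors' exact procedure, might look like this:

    # Sketch: predict a concept's voxel-activation pattern from its ratings on
    # four semantic dimensions, using multi-output least squares on simulated data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n_concepts, n_dimensions, n_voxels = 28, 4, 300

    ratings = rng.uniform(1, 7, (n_concepts, n_dimensions))        # faculty ratings per dimension
    weights = rng.normal(0, 1, (n_dimensions, n_voxels))           # hypothetical voxel-wise weights
    activations = ratings @ weights + rng.normal(0, 0.5, (n_concepts, n_voxels))

    model = LinearRegression().fit(ratings, activations)           # one regression per voxel

    # Predict the signature of a "new" concept from hypothetical ratings alone
    new_concept_ratings = np.array([[2.0, 6.5, 1.5, 3.0]])
    predicted_pattern = model.predict(new_concept_ratings)
    print(predicted_pattern.shape)  # (1, n_voxels)

The predicted pattern plays the role of the anticipated neural signature for a concept that has been rated but not yet scanned.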

The neurosemantic conceptual space defined by the four underlying dimensions includes regions that are currently sparsely populated by existing concepts, but these regions may well be the site of some yet-to-be theorized concepts. It is also possible that as future concepts are developed, additional dimensions of neural representation may emerge, expanding the conceptual space that underpins the concepts in the current study.

Read the original here:
The neuroscience of advanced scientific concepts | npj Science of Learning - Nature.com

Dr. Ardem Patapoutian awarded Nobel Prize in Physiology or Medicine – Armenian Weekly


Lebanese-Armenian scientist Ardem Patapoutian is one of the two winners of the Nobel Prize in Physiology or Medicine for their discoveries of receptors for touch, heat and bodily movement.

Dr. Patapoutian, a professor of neuroscience at the Scripps Research Institute in La Jolla, California and a Howard Hughes Medical Institute investigator, discovered a new class of sensors that respond to mechanical stimuli in the skin and internal organs.

He was honored alongside David Julius, a UC San Francisco professor of physiology, who identified a sensor in the nerve endings of the skin that responds to heat.

"Our ability to sense heat, cold and touch is essential for survival and underpins our interaction with the world around us," the Nobel Assembly wrote in a statement announcing the accolades. "The laureates identified critical missing links in our understanding of the complex interplay between our senses and the environment."

Dr. Julius and his team used a key ingredient of chili peppers to identify the gene that makes skin cells capsaicin sensitive. This discovery was a major breakthrough that led the way for scientists to find additional temperature-sensing receptors.

Dr. Patapoutian and his team used a micropipette to poke individual cells and find the sensors that respond to mechanical stimuli such as touch and pressure. In further research, these sensory channels have been shown to regulate physiological processes including blood pressure, respiration and urinary bladder control.

In 1944, Joseph Erlanger and Herbert Gasser received the Nobel Prize in Physiology or Medicine for their discovery of the different types of sensory nerve fibers that react to distinct stimuli, such as painful touch. While scientists have since proven that people perceive changes in their surroundings through highly specialized neurons, a key question long remained unanswered: how are temperature and mechanical stimuli converted into electrical impulses in the nervous system?

"This really unlocks one of the secrets of nature," secretary-general of the Nobel Assembly Thomas Perlmann said in announcing the winners. "It's actually something that is crucial for our survival, so it's a very important and profound discovery."

The pair's findings also have astonishing medical implications, as they are already being used to develop treatments for a wide range of disease conditions, such as chronic back pain, arthritis and migraines.

Dr. Patapoutian said that his phone was on "do not disturb" when he received the call from Perlmann, who eventually reached his 92-year-old father's landline. He then called his son at around 2:30 a.m. California time to deliver the news.

Shortly after, the committee released a photo of a delighted Dr. Patapoutian watching the Nobel Prize press conference from his bed with his son Luca. "A day to be thankful," tweeted Dr. Patapoutian. "This country gave me a chance with a great education and support for basic research."

Dr. Patapoutian, who was born to an Armenian family in Beirut, Lebanon in 1967, came to the United States in 1986. "I fell in love with doing basic research. That changed the trajectory of my career," he said in an interview with the New York Times. "In Lebanon, I didn't even know about scientists as a career."

The Nobel Prize laureates will each receive a gold medal and 10 million Swedish kronor ($1.14 million).

Lillian Avedian is a staff writer for the Armenian Weekly. Her writing has also been published in the Los Angeles Review of Books, Hetq and the Daily Californian. She is pursuing master's degrees in Journalism and Near Eastern Studies at New York University. A human rights journalist and feminist poet, Lillian released her first poetry collection, Journey to Tatev, with Girls on Key Press in spring 2021.

More here:
Dr. Ardem Patapoutian awarded Nobel Prize in Physiology or Medicine - Armenian Weekly

2 US scientists win Nobel Prize in medicine for showing how we react to heat, touch – Fox17

Two American scientists have won the Nobel Prize in physiology or medicine for their discovery of receptors for temperature and touch.

The Nobel Assembly at Karolinska Institutet announced Monday morning that it's awarding the honor to David Julius and Ardem Patapoutian.


The Nobel Prize organization says Julius and Patapoutian solved how nerve impulses are initiated so that temperature and pressure can be perceived.

Julius utilized capsaicin, a pungent compound from chili peppers that induces a burning sensation, to identify a sensor in the nerve endings of the skin that responds to heat, according to the organization.

And Patapoutian reportedly used pressure-sensitive cells to discover a novel class of sensors that respond to mechanical stimuli in the skin and internal organs.

These discoveries launched research activities that officials say led to a rapid increase in our understanding of how the human nervous system senses heat, cold, and mechanical stimuli.

"The laureates identified critical missing links in our understanding of the complex interplay between our senses and the environment," said the organization.

Julius, 65, is a physiologist who works as a professor at the University of California, San Francisco, while Patapoutian is a molecular biologist and neuroscientist at Scripps Research in La Jolla, California.

See original here:
2 US scientists win Nobel Prize in medicine for showing how we react to heat, touch - Fox17

Environment and Human Behavior | Applied Social Psychology …

What is the relationship between the environment and human behavior? Environmental psychologists study this question in particular, by seeking to understand how the physical environment affects our behavior and well-being, and how our behavior affects the environment (Schneider, Gruman, and Coutts, 2012). For example, pollution, a component of the physical environment, can certainly affect our well-being and health. Ozone pollution can have unfavorable effects on humans, including shortness of breath, coughing, damage to the airways, damage to the lungs, and making the lungs more susceptible to infection (U.S. Environmental Protection Agency [EPA], 2016). Meanwhile, our choice to recycle affects the quality of our environment. Recycling and using recycled products saves a substantial amount of energy, since it takes less energy to recycle products than to create new materials entirely. In turn, recycling helps battle climate change, one of the biggest threats our planet faces.

If humans can have direct effects on the environment, are we responsible for climate change? A lot of hard evidence suggests yes. Every once in a while, our planet warms from natural causes. This can occur from events like volcanic activity or a change in solar output. However, recent evidence shows climate change is occurring too drastically to be explained solely through natural means. Humans have made remarkable advancements in technology by creating more automobiles, machines, factories, etc. But this revolution is not all positive. We have seen a rapid increase in greenhouse gas emissions over the last century. Sources of greenhouse gases include automobiles, planes, factory farming and agriculture, electricity, and industrial production. The issue with greenhouse gases is that they absorb and emit heat. Abundant greenhouse gases in our atmosphere include carbon dioxide, methane, nitrous oxide, and fluorinated gases (EPA, 2017). When there are large quantities of greenhouse gases in the atmosphere, the planet gets gradually warmer.

What happens as a result of climate change? Believe it or not, we are already experiencing some very damaging effects of climate change: heat waves, floods, droughts, wildfires, and loss of sea ice, just to name a few (National Aeronautics and Space Administration [NASA], 2017). Scientists predict we will begin to experience even more harmful effects of climate change in the future. At the current rate, the Arctic sea ice is expected to disappear entirely by the end of the century. The effects we are already seeing are also expected to intensify. An even greater problem is the fact that plants and animals are unable to adapt to the quickly changing environment and are dying off. As a result of climate change, animals' habitats are becoming uninhabitable. We are seeing a rapid loss of species, which will inevitably affect the natural flow of the biosphere and the individual ecosystems it is composed of.

What can we do to slow down the effects of climate change? The first and most simple response is that we need to recognize climate change is a real threat to our planet, and even to our existence. Given the recent political shift that has occurred in the United States, climate change and environmental issues do not appear to be a prime concern to some individuals. The blunt truth is we do not have time to wait. Climate change has already started to take its toll on the planet, and ignoring it is no help to anyone. As I stated above, human behavior has the potential to make dramatic changes to the environment. Practicing beneficial behaviors such as engaging in environmental activism, recycling, conserving energy, decreasing water use, and decreasing the frequency of automobile use are all useful measures to take regarding this issue. You can also research ways to reduce your carbon footprint. As a vegan, I always advise people to cut down on meat, dairy, and egg consumption, given the large toll agriculture takes on water and the environment in general. If we collectively work to battle this giant threat to our environment, we may be able to slow, and even reverse, the effects of climate change.

References

National Aeronautics and Space Administration (NASA). (2017, January 31). Consequences of Climate Change. Retrieved February 2, 2017, from http://www.nasa.gov

Schneider, F. W., Gruman, J. A., & Coutts, L. M. (2012). Applied social psychology: understanding and addressing social and practical problems. Los Angeles: Sage.

U.S. Environmental Protection Agency (EPA). (2016, March 4). Health Effects of Ozone Pollution. Retrieved February 2, 2017, from http://www.epa.gov

U.S. Environmental Protection Agency (EPA). (2017, January 20). Overview of Greenhouse Gases. Retrieved February 2, 2017, from http://www.epa.gov


See the article here:
Environment and Human Behavior | Applied Social Psychology ...