Category Archives: Human Behavior

AI-Powered Test Automation: Embracing the Future of Software Testing – TechDecisions

Last year, the global pandemic caused a major shift in how businesses operate, introducing new challenges around remote work and accelerating digital transformation at an unprecedented pace.

Organizations are now in a race against time to build high-quality software to propel their digital transformation initiatives forward. However, ensuring optimal software quality in a fast-paced, hyper-connected and complex world is not an easy task.

While traditional test automation has given test teams a smarter, quicker means of delivering high software quality, AI-powered tools can take those capabilities to the next level.

Traditional test automation delivers tools to control test execution and compare test results against expected outcomes.

While such tools can test and deliver results automatically, they still need human supervision. Without it, traditional test automation tools can't identify which tests to run, so they end up running every test or a predetermined set of tests.

When powered by AI, a test automation tool can review the current test status, recent code changes, code coverage and other associated metrics to intelligently decide which tests to run and then trigger them automatically.
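
To make the idea concrete, here is a minimal sketch of change-aware test selection in Python. It is illustrative only: the file names, the coverage map and the simple intersection rule are assumptions standing in for the richer signals (test status, code churn, historical results) a real AI-powered tool would learn from.

# Minimal sketch of change-aware test selection (all names are hypothetical).
changed_files = {"src/cart.py", "src/payment.py"}  # e.g. parsed from the latest commit

# Coverage map from a previous instrumented run: test name -> source files it exercises.
coverage_map = {
    "test_checkout_total": {"src/cart.py", "src/payment.py"},
    "test_login_redirect": {"src/auth.py"},
    "test_cart_badge": {"src/cart.py", "src/ui/badge.py"},
}

# Trigger only the tests whose covered files intersect the changed files.
selected = sorted(t for t, files in coverage_map.items() if files & changed_files)
print(selected)  # ['test_cart_badge', 'test_checkout_total']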

AI enables test automation to move beyond simple rule-based automation, using algorithms trained efficiently on large data sets.

Through the application of reasoning, problem-solving and machine learning, an AI-powered test automation tool can mimic human behavior and reduce the direct involvement of software testers in mundane tasks.

AI is changing software testing in many ways. It is removing many limitations in traditional test automation and delivering more value to testers and developers alike.

It enables organizations to test faster and better while reducing costs and human dependencies. AI has imparted an incredible positive impact on most software testing use cases, including:

Unit Testing: Testers can use RPA tools (an application of AI) to reduce flaky test cases while conducting unit testing. Such tools can also help with the maintenance of unit test scripts.

API Testing: AI-powered test automation tools can convert manual UI tests into automated API tests. This lowers the requirement for specialized testing skills and enables organizations to build a more sustainable API testing strategy.

UI Testing: Delivers more accuracy than manual testing. Parameters such as GUI size differences and color combinations are hard to detect manually but can be identified easily with AI.

Regression Testing: Enables test teams to run the entire test suite in a timely manner on every change, however minor it may be. AI can prioritize and re-target regression tests to cover high-risk areas with short run times.

Image-Based Testing: The visual validations involved in image-based testing can be simplified with the ML capability of AI. Automated visual validation tools make image-based testing a breeze.

Like any new technology, there is a lot of hype around AI-powered software testing. The utilization of AI in various testing scenarios is delivering significant improvements and making intelligent test automation a reality. AI-powered test automation is helping organizations reimagine software testing and delivering real business benefits. Some of its key benefits for organizations include:

1. Auto-Generation of Test Scripts

AI-powered test automation helps teams auto-generate test code that performs all the required functions, such as clicking buttons, filling forms, logging in to apps and more.
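
For a sense of what such generated code can look like, below is a hypothetical sketch of a login test using the Playwright Python API. The URL, selectors and credentials are placeholders, not output from any specific tool.

from playwright.sync_api import sync_playwright

def test_login_flow():
    # A generated script typically records the same steps a tester would perform by hand:
    # navigate, fill the form, click, then assert on the result.
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/login")       # placeholder URL
        page.fill("#username", "demo_user")          # placeholder selectors and values
        page.fill("#password", "demo_pass")
        page.click("button[type=submit]")
        assert "Dashboard" in page.inner_text("h1")  # expected post-login heading
        browser.close()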

There will be complex test cases for which AI-powered test automation tools can't generate code, but they can reliably auto-generate more than 80% of the required code, significantly enhancing the productivity of testing teams.

Furthermore, AI helps with auto-maintenance of these scripts, ensuring continuous quality while reducing the burden on human testers.

2. Optimization of Testing Process

AI is the force behind the product recommendations on Amazon or the shows Netflix suggests. An AI-enabled recommendation engine allows marketers to provide relevant product recommendations to customers in real-time.

The same approach can be applied to simplify software testing. Based on risk information, AI can suggest the tests with the highest probability of finding bugs, removing the guesswork from testing and empowering teams to home in on the actual risk areas.
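
As a hedged sketch of how such a suggestion engine might rank tests, the Python example below fits a simple classifier to invented historical run data and orders candidate tests by predicted failure probability. The features, numbers and test names are assumptions for illustration; a production tool would draw on far richer risk signals.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history: one row per past run of a test.
# Features: [lines changed in covered code, historical failure rate, runtime in seconds]
X = np.array([
    [120, 0.30, 40],
    [5, 0.02, 35],
    [80, 0.25, 50],
    [2, 0.01, 30],
    [60, 0.10, 45],
    [1, 0.00, 25],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = the test failed on that run

model = LogisticRegression().fit(X, y)

# Rank today's candidate tests and run the riskiest (most likely to find a bug) first.
candidates = {
    "test_checkout_total": [90, 0.20, 48],
    "test_login_redirect": [3, 0.01, 28],
}
ranked = sorted(candidates,
                key=lambda name: model.predict_proba([candidates[name]])[0][1],
                reverse=True)
print(ranked)  # riskiest tests first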

3. Measurement of Release Impact

AI-powered test automation tools can predict how an upcoming software release will impact end users. By leveraging neural networks and analyzing test history and data from current test runs, the tool can predict whether customer satisfaction will move up or down. Equipped with such information, organizations can adjust accordingly and ensure that their customers remain satisfied with the user experience.
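
A toy version of that prediction step might look like the following, where a small neural network is fit on per-release test metrics and past satisfaction scores. Every number and feature choice here is invented purely to illustrate the shape of the approach, not how any particular tool works.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical per-release features: [failed tests, regression pass rate, avg page load (s)]
releases = np.array([
    [2, 0.99, 1.2],
    [10, 0.93, 1.8],
    [1, 1.00, 1.1],
    [15, 0.90, 2.3],
    [5, 0.97, 1.4],
])
satisfaction = np.array([4.6, 3.9, 4.7, 3.4, 4.3])  # e.g. past survey scores out of 5

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(releases, satisfaction)

upcoming = np.array([[8, 0.94, 1.9]])  # metrics gathered from the current test runs
print(model.predict(upcoming))  # predicted satisfaction for the upcoming release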

4. Delivers a Competitive Edge

AI-powered test automation tools help organizations gain a competitive edge. Various AI capabilities such as ML and neural networks can be used to understand how various technical factors are impacting the user experience and business outcomes.

For example, AI can detect whether a new implementation is negatively impacting the load times and could lower conversion rates upon release.

By delivering predictions on how releases will affect the business, AI-powered tools empower organizations to make course corrections to have a positive impact.

5. Enables Productivity and Cost Gains

A recent study discovered that testers spend 17% of their time dealing with false positives and another 14% on additional test maintenance tasks. An AI-powered tool with its auto-generation and auto-maintenance capabilities can help test teams save valuable time and effort and put it toward tackling complex requirements.

It can also help organizations optimize testing costs by reducing their dependence on humans for mundane testing tasks.

It's quite clear that AI-powered test automation is not a passing fad. Such tools are enabling organizations to understand and adapt better to ever-changing customer expectations. Rather than taking a wait-and-watch approach, it's time to embrace the innovation that AI has unleashed in test automation.

Here is the original post:
AI-Powered Test Automation: Embracing the Future of Software Testing - TechDecisions

Optic nerve firing may spark growth of vision-threatening childhood tumor – National Institutes of Health

News Release

Tuesday, June 1, 2021

NIH-funded pre-clinical study supports key role of neural activity in brain cancers.

In a study of mice, researchers showed how the act of seeing light may trigger the formation of vision-harming tumors in young children who are born with neurofibromatosis type 1 (NF1) cancer predisposition syndrome. The research team, funded by the National Institutes of Health, focused on tumors that grow within the optic nerve, which relays visual signals from the eyes to the brain. They discovered that the neural activity which underlies these signals can both ignite and feed the tumors. Tumor growth was prevented or slowed by raising young mice in the dark or treating them with an experimental cancer drug during a critical period of cancer development.

"Brain cancers recruit the resources they need from the environment they are in," said Michelle Monje, M.D., Ph.D., associate professor of neurology at Stanford University, Palo Alto, California, and co-senior author of the study published in Nature. "To fight brain cancers, you have to know your enemies. We hope that understanding how brain tumors weaponize neural activity will ultimately help us save lives and reduce suffering for many patients and their loved ones."

The study was a joint project between Dr. Monje's team and scientists in the laboratory of David H. Gutmann, M.D., Ph.D., the Donald O. Schnuck Family Professor and the director of the Neurofibromatosis Center at the Washington University School of Medicine in St. Louis.

In 2015, Dr. Monje's team showed for the first time that stimulation of neural activity in mice can speed the growth of existing malignant brain tumors and that this enhancement may be controlled by the secretion of a protein called neuroligin-3. In this new study, the researchers hoped to test out these ideas during earlier stages of tumor development.

"Over the years, cancer researchers have become more and more focused on the role of the tumor microenvironment in cancer development and growth. Until recently, neuronal activity has not been considered, as most studies have focused on immune and vascular cell interactions," said Jane Fountain, Ph.D., program director at the NIH's National Institute of Neurological Disorders and Stroke (NINDS), which partially funded the study. "This study is one of the first to show a definitive role for neurons in influencing tumor initiation. It's both scary and exciting to see that controlling neuronal activity can have such a profound influence on tumor growth."

Specifically, the researchers chose to study optic nerve gliomas in mice. Gliomas are formed from newborn cells that usually become a type of brain cell called glia. The tumors examined in this study are reminiscent of those found in about 15-20% of children who are born with a genetic mutation that causes NF1. About half of these children develop vision problems.

Dr. Gutmann helped discover the disease-causing mutation linked to NF1 and its encoded protein, neurofibromin, while working in a lab at the University of Michigan, Ann Arbor, which was then led by the current NIH director, Francis S. Collins, M.D., Ph.D. Since then, the Gutmann team's pioneering work on NF1, and particularly NF1 brain tumors, has greatly shaped the medical research community's understanding of low-grade glioma formation and progression.

"Based on multiple lines of converging evidence, we knew that these optic nerve gliomas arose from neural precursor cells. However, the tumor cells required help from surrounding non-cancerous cells in the optic nerve to form gliomas," said Dr. Gutmann, who was also a senior author of this study. "While we had previously shown that immune cells, like T-cells and microglia, provide growth factors essential for tumor growth, the big question was: What role did neurons and neural activity play in optic glioma initiation and progression?"

To address this, the researchers performed experiments on mice engineered by the Gutmann laboratory to generate tumors that genetically resembled human NF1-associated optic gliomas. Typically, optic nerve gliomas appear in these mice between six and sixteen weeks of age.

Initial experiments suggested that optic nerve activity drives the formation of the tumors. Artificially stimulating neural activity during the critical ages of tumor development enhanced cancer cell growth, resulting in bigger optic nerve tumors. In contrast, raising the mice in the dark during that same time completely prevented new tumors from forming.

Interestingly, the exact timing of the dark period also appeared to be important. For instance, two out of nine mice developed tumors when they were raised in the dark beginning at twelve weeks of age.

"These results suggest there is a temporal window during childhood development when genetic susceptibility and visual system activity critically intersect. If a susceptible neural precursor cell receives the key signals at a vulnerable time, then it will become cancerous. Otherwise, no tumors form," said Yuan Pan, Ph.D., a post-doctoral fellow at Stanford and the lead author. "We needed to understand how this happens at a molecular level."

Further experiments supported the idea that neuroligin-3 may be a key player in this process. For instance, the scientists found high levels of neuroligin-3 gene activity in both mouse and human gliomas. Conversely, silencing the neuroligin-3 gene prevented tumors from developing in the neurofibromatosis mice.

Traditionally, neuroligin-3 proteins are thought to act like tie rods that physically brace neurons together at communication points called synapses. In this study, the researchers found that the protein may work differently. The optic nerves of neurofibromatosis mice raised under normal light conditions had higher levels of a short, free-floating version of neuroligin-3 than the nerves of mice raised in the dark.

"Previously our lab showed that neural activity causes shedding of neuroligin-3 and that this shedding hastens malignant brain tumor growth. Here our results suggest that neuroligin-3 shedding is the link between neural activity and optic nerve glioma formation. Visual activity causes shedding and shedding, in turn, transforms susceptible cells into gliomas," said Dr. Monje.

Finally, the researchers showed that an experimental drug may be effective at combating gliomas. The drug is designed to block the activity of ADAM10, a protein that is important for neuroligin-3 shedding. Treating the neurofibromatosis mutant mice with the drug during the critical period of six to sixteen weeks after birth prevented the development of tumors. Treatment delayed to twelve weeks did not prevent tumor formation but reduced the growth of the optic gliomas.

"These results show that understanding the relationship between neural activity and tumor growth provides promising avenues for novel treatments of NF1 optic gliomas," said Jill Morris, Ph.D., program director, NINDS.

Dr. Monje's team is currently testing neuroligin-3-targeting drugs and light exposure modifications that may in the future help treat patients with this form of cancer.

This work was supported by grants from the NIH (NS092597, NS111132, NS097211, CA165962, EY026877, EY029137, CA233164); the Department of Defense (W81XWH-15-1-0131, W81XWH-19-1-0260); Brantley's Project supported by Ian's Friends Foundation; Gilbert Family Foundation; Robert J. Kleberg, Jr. and Helen C. Kleberg Foundation; Cancer Research UK; Unravel Pediatric Cancer; McKenna Claire Foundation; Kyle O'Connell Foundation; Virginia and D. K. Ludwig Fund for Cancer Research; Waxman Family Research Fund; Stanford Maternal and Child Health Research Institute; Stanford Bio-X Institute; Will Irwin Research Fund; Research to Prevent Blindness, Inc.; Schnuck Markets Inc.; and Alex's Lemonade Stand Foundation.

This news release describes a basic research finding. Basic research increases our understanding of human behavior and biology, which is foundational to advancing new and better ways to prevent, diagnose, and treat disease. Science is an unpredictable and incremental process: each research advance builds on past discoveries, often in unexpected ways. Most clinical advances would not be possible without the knowledge of fundamental basic research. To learn more about basic research at NIH, visit https://www.nih.gov/news-events/basic-research-digital-media-kit.

NINDS is the nation's leading funder of research on the brain and nervous system. The mission of NINDS is to seek fundamental knowledge about the brain and nervous system and to use that knowledge to reduce the burden of neurological disease.

About the National Institutes of Health (NIH):NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

NIH...Turning Discovery Into Health

Pan, Y. et al. NF1 mutation drives neuronal activity-dependent initiation of optic glioma. Nature, May 26, 2021. DOI: 10.1038/s41586-021-03580-6

###

Here is the original post:
Optic nerve firing may spark growth of vision-threatening childhood tumor - National Institutes of Health

40 years of AIDS taught us epidemiologic humility. We need to apply that lesson in fighting Covid-19 – STAT

Forty years later, I can still recall my visceral reaction to reading an article in the June 5, 1981, issue of Morbidity and Mortality Weekly Report (MMWR), which opened with this sentence: "In the period October 1980-May 1981, 5 young men, all active homosexuals, were treated for biopsy-confirmed Pneumocystis carinii pneumonia at 3 different hospitals in Los Angeles, California."

I was an infectious disease fellow at Harvard Medical School at the time, trying to keep abreast of epidemic trends from the U.S. Centers for Disease Control, which published the weekly bulletin.

One of my first thoughts was that I couldn't believe the MMWR had actually referred to gay men, albeit in the purple prose of the era. It was completely unexpected, since I could not recall the bulletin (or the CDC, for that matter) ever having discussed sexual and gender minority people before. I was then volunteering once a week at the Fenway Community Health Center, which at the time was a small neighborhood health clinic not far from Boston's Fenway Park, used mostly by gay and bisexual men and transgender women. There, with only limited diagnostics and a fairly rudimentary therapeutic armamentarium, I treated the most challenging presentations of sexually transmitted infections, such as recurrent warts and ulcers.

When I read the MMWR report, which many herald as the first report of AIDS, I was struck by how distinctive the cases were: the clinical conditions differed between the men, yet their problems suggested they were severely immunosuppressed without any identifiable cause such as chemotherapy. As a nascent infectious disease specialist, I had a lot of questions, not least of which was how being gay was associated with becoming ill.

The subsequent weeks and months blurred into years of misinformation, false leads, and agonizing deaths. In the earliest days, competing hypotheses for the cause of what was then known as gay-related immune deficiency (GRID) were proposed. The "burnout" hypothesis suggested that the diversity of illnesses was not due to a single pathogen, but that people who had numerous sexual partners and/or who used many different kinds of drugs were overwhelming their immune systems. Researchers also focused on party drugs such as volatile nitrites (known as "poppers"), which produced a sense of euphoria and increased sexual pleasure, in an attempt to demonstrate that these drugs were particularly toxic to the immune system.

As reports emerged of individuals who had never used drugs and/or had few sex partners getting sick because they were sexual partners of individuals who became ill and died, researchers hypothesized that the disease was caused by a transmissible organism. The question then arose as to whether the microbe was a more virulent form of a common existing pathogen, such as ubiquitous herpes simplex, or if AIDS was caused by a new one. The lack of clarity about what was causing AIDS, and the lack of a diagnostic tool that could determine who was sick and who wasn't, fueled hysteria.

In the midst of this uncertainty, the silence of the Reagan administration was palpable, especially when compared to the attention given to the limited number of people who had become sick and died of Legionnaires' disease or toxic shock syndrome, two other public epidemics from the 1980s. The implicit message from the administration was that because AIDS seemed to be confined to groups of individuals who didn't matter to society, the less said, the better.

Given what was then known about who was at greatest risk of AIDS and how they might have acquired the infection, people with AIDS also had to contend with high levels of stigma and discrimination. The press routinely referred to AIDS as the "4H disease" because it affected Haitians, homosexuals, hemophiliacs, and heroin users.

One of the first patients with AIDS I took care of was a young college student who developed lesions of Kaposi sarcoma that covered his extremities and his face, made his lymph glands swell, and caused fevers, chills, and sweats. As he became sicker and frailer, his parents accompanied him to his medical appointments. His father, a school superintendent, asked perceptive questions about his son's condition. But his son was terrified that any suggestion on my part that he had AIDS would out him to his parents and alienate them from him just when he needed them the most. So I would answer the father's questions only by saying that his son had a very serious malignancy, and couldn't discuss what was truly going on.

The focus of my research ever since has been HIV and AIDS, primarily how to reduce transmission of the virus. One of the primary lessons I've learned, however, has little to do with biology: it is how social forces can amplify the transmission of hitherto obscure pathogens. It is clear to me that we will not succeed against SARS-CoV-2 unless we apply the following lessons from the AIDS epidemic:

Science matters. Support for getting people trained to be able to do science matters. Promotion of scientific literacy matters. Science is the creation of new knowledge. There is no such thing as "alternative facts." As scientific knowledge expands, so does our understanding of the facts.

Discrimination is toxic. The failure to address the upstream causes of discrimination at the outset of an infectious disease outbreak will make things much worse than they otherwise would be. Homophobia, transphobia, sexism, and racism fueled the HIV epidemic. Racism and economic inequality are fueling the Covid-19 pandemic. The disproportionate impact of Covid-19 infections and health outcomes among people of color in the United States is testament to the urgent need to reduce and eliminate racial and linguistic inequities in scientific research, medical treatment, and disease prevention.

We are all in this together. We live in a global village and share a global gene pool. The HIV epidemic began in Central Africa, and disseminated because of urbanization and increased global mobility. SARS-CoV-2 apparently first appeared in China. But no country owns any virus or other pathogen, since the patterns of dissemination of these wild organisms depend on human behavior, in addition to intrinsic properties of the pathogen.

AIDS taught us epidemiologic humility: There is only so much we can do. But we can do a lot. Former President George W. Bush's President's Emergency Plan for AIDS Relief (PEPFAR) saved millions of lives and is one of the most successful global public health interventions in history. As we scale up to vaccinate increasing numbers of individuals against SARS-CoV-2 in the U.S., Americans must understand that the pandemic is not over here until it's over everywhere.

Kenneth H. Mayer is an infectious disease physician, medical research director of Fenway Health, co-director of The Fenway Institute, attending physician in the Division of Infectious Diseases at Beth Israel Deaconess Medical Center, professor of medicine at Harvard Medical School, and professor of global health and population at the Harvard T.H. Chan School of Public Health.

View post:
40 years of AIDS taught us epidemiologic humility. We need to apply that lesson in fighting Covid-19 - STAT

The Origin of COVID-19 and Preventing the Next Pandemic – War on the Rocks

Did COVID-19 originate with bats or scientists? Most experts continue to contend that the most likely origin of SARS-CoV-2 (the novel coronavirus that causes COVID-19) is a natural zoonotic spillover event between an animal reservoir (most likely bats) and humans. But over the last year of the pandemic, another theory has gained momentum: The SARS-CoV-2 virus may have resulted from an accident in a laboratory in China where scientists were working with closely related viruses. In the wake of the World Health Organization-led mission to Wuhan to examine the origins of the pandemic, proponents of the lab-leak theory have charged the investigative team with conflicts of interest, and suggested that the team's efforts failed to rule out the possibility of a lab release. Some have gone on to claim that scientists have maintained a conspiracy of silence about the possibility of a lab release in order to protect their funding or avoid a backlash from their government.

The desire to identify the origins of the novel coronavirus is perfectly understandable. COVID-19 has killed millions of people and upended everyday life. There's an intuitive sense that finding out how the pandemic began might help to prevent another one from occurring. The Biden administration is redoubling efforts to determine the origins of COVID-19 after the intelligence community indicated that it had insufficient information to make a determination.

However, while answering the question of where the novel coronavirus came from is important, many of the most important policy decisions the United States needs to make to prevent future pandemics do not depend on viral origins. Very little about pandemic response or preparedness for future pandemics turns on the particulars of how this one started. Laboratory biosafety was already an issue before the pandemic, and the origins of this particular virus don't change the need for reform to prevent these rare but potentially catastrophic events. Regardless of how COVID-19 began, U.S. policy priorities should focus on both identifying and preventing the spread of zoonotic pathogens and bolstering safety and security in high-containment laboratories.

Preparing for the Next Pandemic

Whatever the origins of this pandemic, the United States has its work cut out to prepare for the next one. Let's assume, for the sake of argument, that the lab hypothesis is true. Efforts to prepare for natural spillover events do not then become less important. Since the 1940s, roughly 350 emerging infectious diseases have been identified. Of these, nearly three-quarters have zoonotic origins. Our understanding of how such diseases emerge is incomplete, but we know that there are a number of human behaviors that are likely contributing to this pattern: increasing demand for animal protein, factory farming and other agricultural intensification measures, wildlife trade, urbanization, extraction industries, changes in the food supply chain, and pet ownership, as well as increases in temperature, humidity, and other factors related to global climate change. Zoonotic crossover events are not limited to China, or even to Asia. Emerging infectious diseases have appeared all over the world: Zika in Latin America, Ebola virus disease in sub-Saharan Africa, H1N1 in bird reservoirs as disparate as Vietnam and Mexico, and henipaviruses in Australia. Coronaviruses have reservoirs in China, yes, but also in Africa, the Americas, the Middle East, and Southeast Asia.

If investigators are able to conclusively prove that the COVID-19 pandemic originated in a laboratory conducting research into coronaviruses, humanity will continue to confront the risk that a future spillover will result in another pandemic that is equally or more devastating. Fortunately, there are steps that the scientific community can take to manage this risk, including using predictive surveillance and developing other zoonotic risk-assessment tools. Early detection of such pathogens can help experts to isolate and contain them so that they do not spread widely. We can also promote behavioral change in high-risk populations and fund research into universal vaccines for zoonotic frequent-flyers like coronaviruses.

Let's say the converse is true, however. If evidence is found that satisfies even the most ardent lab-leak proponent that COVID-19 originated in an animal population, does that obviate the need to address laboratory biosafety and biosecurity? Absolutely not. Even as COVID-19 emerged, questions arose about the role of high-containment labs around the world. As the number of these labs increases, the risk of a consequential accident also increases.

Policymakers have debated biological safety in high-containment labs for most of this century. Biosafety, biosecurity, and awareness-raising among life scientists are ongoing topics of discussion at the Biological Weapons Convention. Biosafety is a major focus in the Global Health Security Agenda. The World Health Organization has maintained a guide for the responsible conduct of life sciences research with dual-use potential for more than a decade. In short, biosafety and biosecurity receive significant policymaker attention at the highest levels of international organizations, but that awareness doesn't necessarily translate into national-level action to manage biological risk and ensure protection from accidents. Even the states that have been most vocal in driving discussion of biosafety and biosecurity in international spheres have struggled with their own biorisk management. The United States has had a number of high-profile laboratory incidents over the years, involving anthrax, highly pathogenic avian influenza, and smallpox, even as it has continued to develop and expand its high-containment lab capacity, already the largest in the world.

Transparency and Biosecurity

Critics might claim that lab releases in the United States can be investigated transparently, while the potential COVID-19 release in China cannot. Indeed, China has put severe restrictions on research into the origins of the virus and prohibited scientists from speaking with journalists. During the World Health Organization-led investigation, members of the team were prevented from accessing patient data and other important research. After Australia pressed for an independent inquiry into the origins of the pandemic, China responded with threats and economic retaliation.

However, opacity surrounding public health is not a problem that is limited to authoritarian societies like China. Globally, biosafety norms are poorly implemented, and reviews of biosafety and biosecurity are often conducted in secret. Even in the United States, there is no coordinated approach to laboratory biosafety or to reporting laboratory accidents. As a result, public awareness of biosafety incidents often relies on local engagement between towns and specific labs, or comes from journalists filing Freedom of Information requests. The U.S. Government Accountability Office has consistently criticized U.S. biological security and safety for decades, but even recent developments in regulating the funding of potentially high-consequence gain-of-function research have been criticized as lacking transparency around the makeup of the review board, the decision-making procedure, and notification of funded experiments. If this is the case for the United States, it is easy to imagine that other countries with less experience with biosafety and security might see it as politically advantageous to remain mum about incidents or problems. Clearly, more work is needed around the world to make sure that all countries have biorisk management policies and appropriate oversight measures in place, and that they're open about the problems they encounter and their efforts to solve them.

Global norms and incentives are where the rubber hits the road for pandemic preparedness. It's reasonable (in fact, vital) to seek new ways to prevent laboratory accidents in the future. The world's chief solution to this pandemic was the development of vaccines, a process driven by life sciences research, much of which took place in high-containment labs. Consequently, many political leaders may well choose to invest in more high-level biological research in the near future. If the solution to a lab release is more laboratory science, it makes sense to ensure that that science is carried out in a safe and secure manner. There is room for all countries to do better, and the United States should consider revitalizing its approach to promoting biosafety and biosecurity in the wake of the pandemic regardless of its origins.

As a final point, if the lab release hypothesis is true, we really shouldn't be surprised. An analysis in 2016 of gain-of-function research by Gryphon Scientific operated on the assumption that, eventually, a laboratory release of a potential pandemic pathogen would occur, a small number of those releases would lead to a local cluster, and a small number of those clusters would seed a global pandemic. In other words, if COVID-19 did result from a lab release in China, it might simply have been bad luck, on top of whatever biosafety lapses China may have had, which is all the more reason why, in addition to strengthening laboratory safety and security, the international community should do everything it can to develop appropriate infrastructure to handle a future pandemic.

Looking Ahead

There is one important scenario in which it would be absolutely vital to know the origins of COVID-19 in order to decide what to do next. If, as some scientists and politicians have suggested, the pandemic stemmed from a deliberate attempt to develop a biological warfare agent, this would have serious implications for the Biological Weapons Convention and the broader norm against the use of disease as a weapon. If a state party had violated its commitment to the treaty by developing biological weapons, the international community would need to determine how to hold that government accountable for its non-compliance, a process with which states parties to the treaty have struggled in the past. Even treaties that have extensive verification provisions have grappled with what to do when a state party has demonstrably violated a treaty's prohibitions. While some might criticize the Biological Weapons Convention for lacking a mechanism to verify compliance, such mechanisms don't solve the knotty political problem of what to do when flagrant violations take place. Moreover, the deliberate use of biological weapons could inspire copycat behavior by others, leading to the weakening of the norm against the use of disease as a weapon. Fortunately, to our knowledge no serious analysis of COVID-19's origins, even from those who support a laboratory release hypothesis, has concluded that anyone deliberately introduced the SARS-CoV-2 virus to the global population.

While it's important to discover the origins of the pandemic, there's a danger in taking these efforts too far. Some have argued that conclusively demonstrating the pandemic's origins in a lab release might help nations seeking to encourage China to pay financial reparations for the global economic cost of the virus to make their case. This could be a problematic approach. Not only is there no legal precedent under international law to hold a country liable for a pandemic, but in the long run this might be an unwise road for the United States, given its own history of laboratory accidents and safety lapses. Insisting that China bears responsibility for the pandemic and should be expected to pay compensation to other countries or the families of coronavirus victims could backfire in the future if the United States finds itself attempting to mitigate the consequences from a laboratory accident. Furthermore, legal efforts to blame China could fuel additional xenophobia against Asian-Americans, or even undermine U.S. foreign policy interests.

Meanwhile, the focus on where the virus came from should not divert attention from what's even more important: preparing for the next pandemic. Political finger-pointing might make it far more difficult for researchers to collaborate internationally on pandemic preparedness efforts. Experts are already noting the possible implications for the National Institutes of Health and other research institutions of the growing tension between the United States and China, exacerbated by the allegations and skepticism around the virus's origins. This pandemic is far from over, despite the rollout of vaccines in the United States, and new potential pandemic diseases are already testing global health efforts elsewhere in the world. American experts therefore need to keep a laser-like focus on the real enemy: the causative agents of disease.

There will be far more blame to share if the international community becomes so fixated on the circumstances surrounding this unique case that it's unable to see the big picture and predict or prepare for the next pandemic. There's work that can be done in that respect while maintaining agnosticism about the origins of COVID-19. Regardless of the source, we need to be better prepared to respond to the next virus.

Amanda Moodie is a policy fellow at the National Defense University's Center for the Study of Weapons of Mass Destruction (WMD Center) in Washington, D.C. Her policy support at the center focuses on the international legal regimes that regulate the proliferation of chemical and biological weapons. She regularly serves as a member of the U.S. delegation to meetings of the states parties of the Biological Weapons Convention.

Nicholas G. Evans is an assistant professor in the Department of Philosophy at the University of Massachusetts Lowell, where he teaches biomedical ethics and security studies. He has been published in the British Medical Journal, Nonproliferation Review, and eLife. His book, The Ethics of Neuroscience and National Security, was released by Routledge in May 2021.

The views expressed in this paper are those of the authors and are not an official policy or position of the National Defense University, the Department of Defense, or the U.S. government.

Image: Xinhua (Photo by Fei Maohua)

Read the original here:
The Origin of COVID-19 and Preventing the Next Pandemic - War on the Rocks

COVID-19 Vaccination Hesitancy Finds Echo in Cancer Care – OncLive

Unfortunately, not all people recognize this success or the need for vaccines that prevent infections with these dangerous pathogens. The history of vaccination hesitancy is long. In much earlier times, there was legitimate concern for both the safety and efficacy of proposed vaccine products due to the lack of a rigorous clinical trials process and formal regulatory review by experts in vaccine science, epidemiology, and statistical analysis. More recently, however, with our far greater understanding of the biology of infectious diseases, the establishment of robust and well-validated clinical trial strategies for evaluating vaccine products and scrupulous review by both governmental health agencies and external experts, the utility of a given vaccine is clear before it is approved. Mandatory reporting of adverse effects following use in the real world ensures that rare or longer-term concerns are appropriately evaluated.

Consider, for example, findings from a recent review of 57 vaccines that the FDA approved from January 1, 1996, to December 31, 2015. More than 90% of the vaccines were supported by data from randomized controlled trials, each involving a median of more than 4100 study participants.4 The authors noted that the postsurveillance mechanism worked well, with a total of 58 safety-related modifications added to the FDA-approved vaccine labels involving approximately half of the vaccines (n = 25). Most of the changes related to additional warnings that stemmed from extended experience with the vaccine. A total of 8 contraindications were added to the labels, and only 1 vaccine product was withdrawn from the market due to safety-related issues.

Restrictions on the patient populations who should be eligible to receive the vaccine were the most common change mandated by the FDA, with additional notification regarding potential allergic reactions being the second most common issue arising from follow-up review. The investigators concluded: "Over a 20-year period, vaccines were found to be remarkably safe. A large proportion of safety issues were identified through existing postmarketing surveillance programs and were of limited clinical significance. These findings confirm the robustness of the vaccine approval system and postmarketing surveillance."4

Following the introduction of several COVID-19 vaccines, there were reports of a rare blood-clotting disorder associated with at least 2 of the products in noninvestigative real-world use. The events were quickly evaluated, with public health agencies making recommendations for the future delivery of these vaccines. Although the COVID-19 vaccine rollout is unprecedented in speed and scope, the process of postapproval surveillance has been shown to be robust and should serve as a source of reassurance to the public regarding the effectiveness of the initial and follow-up review process.

Unfortunately, this is an oversimplified view of the entire spectrum of the vaccination process. In a most provocative commentary, Naomi Oreskes, PhD, a professor of the history of science at Harvard University, noted that we should perhaps reassess the nature of the difficulty associated with developing and implementing an effective vaccination strategy.5 The author highlights the fact that we have traditionally considered problems to be hard when they are associated with major technological challenges or an understanding of highly complex theories (eg, quantum physics). The development of several highly effective COVID-19 vaccines and their release for noninvestigative administration less than 1 year following the identification of the molecular structure of the causative virus is nothing short of remarkable, yet we have struggled until recently to implement a nationwide vaccine distribution strategy. What good is a vaccine that remains in a vial rather than being injected into the arm of an individual susceptible to a COVID-19 infection?

Oreskes concludes: "We call the physical sciences hard because they deal with issues that are mostly independent of the vagaries of human nature; they offer laws that (at least in the right circumstances) yield exact answers. But physics and chemistry will never tell us how to design an effective vaccination program, in part because they do not help us comprehend human behavior. The social sciences rarely yield exact answers. But that does not make them easy."5

Although the COVID-19 vaccines must be regarded as truly remarkable scientific success stories,6 we are faced with the reality of human behavior, and we are learning that overcoming obstacles to existing and firmly entrenched beliefs, reinforced by social media sources and conspiracy theories, will be hard.7,8

We should recognize that this conclusion also pertains to the administration of vaccines that have been documented to be both safe and highly effective in the prevention of cancer. We now have conclusive evidence that vaccination against the human papillomavirus (HPV) can substantially reduce the risk of developing invasive cervical cancer.9 However, recent self-reported data reveal that, among 12,644 women and men aged 18 to 21 years in the United States, only 55% of women and 34% of men had received at least 1 dose of the HPV vaccine.10 Clearly, we have a long way to go to solve this hard problem of increasing the delivery of this critically important cancer prevention strategy.

Here is the original post:
COVID-19 Vaccination Hesitancy Finds Echo in Cancer Care - OncLive

Odds and Ends: Knowing the Difference Between Sports Fandom and Toxicity – SportsRaid

The NBA Playoffs are here, so you know this is the time when weird, out-there news takes over the cycle in conjunction with the actual games. You don't know why this happens. It just does for whatever reason. The latest discussion happening in the discourse is in response to fandom and how fans act during games.

So recently, Atlanta Hawks All-Star point guard Trae Young has joined the ranks of Reggie Miller, Michael Jordan, Scottie Pippen, Isiah Thomas, and Paul Pierce as the latest addition to the "Knicks Killers," supervillains who were created for the sole purpose of ruining the New York Knickerbockers' chances of postseason success. To be honest, that's quite an honor! Young has even acknowledged it! His Game 1 performance was a spectacle that showcased a young star player embracing his role as the bad guy against a rival team, and it was fun (if you're a Hawks fan)!

What was not fun was the ensuing vitriol and mistreatment Young and his family have received after the game. Shouting "f*** Trae Young" and making fun of his height and hair are over-the-top jeering that would not be out of place in an NBA Playoff atmosphere, but the fan who tried to spit at him crossed the line, which is a disgusting display of human behavior.

"I definitely didn't see it, but there's no place for that, man," Knicks All-Star Julius Randle said to reporters after the incident. "I don't care if it's our home crowd or not, there's no place for that. We've got to protect the players. That's disrespectful. Yeah, it's our fans and I love our fans, but you see a guy on the street, you wouldn't spit on him. You wouldn't disrespect somebody like that. I don't care what arena it's in, whose fan base it is, there's absolutely no place for disrespecting anybody in any capacity and especially spitting on him. That's just ridiculous."

It has been happening throughout much of the NBA postseason. Fans have been welcomed back to the arenas after being away for more than a year due to the COVID-19 pandemic. However, in true entitled, idiotic fashion, fans have been abusing players by hurling personal attacks on players' families and friends, throwing bottles, or attempting to disrupt a game by running on the court, endangering the players on the floor.

Sheesh, NBA fans can't have nothing.

Look, I get it. People have been locked up in their homes for over a year. It has been a struggle trying to re-experience social norms, and nobody knows when this pandemic is going to end. Sure, vaccines are readily available and the numbers of those infected by the virus have decreased over time, but that does not give people the right to mistreat others, especially those who are out there providing entertainment in the world of sports.

The idiocy of those attempting to commit actual harm to NBA athletes and those mocking athletes with nonsensical chants should not be perceived as the same. If you chant "Julius Randle is overrated!" then that is not exactly a harmful insult, considering the season Randle had, and trust, he has heard worse. However, if you throw insults at Randle's wife and son, then you are crossing the line. You have made the matter personal and put someone's well-being in danger.

Fans have the right to jeer at athletes. It is all part of the fun within the playoffs. However, people should be mindful of how they carry themselves. That type of behavior is unacceptable, and the NBA should continue to hold those fans accountable for their actions. It is impossible to police human behavior, considering that nobody, not even the arena personnel, can keep those types of people in check. I mean, it is not like you can ban 15,000 people from a game if they all chant "Kyrie sucks!" In any case, people need to learn to relax and enjoy the game.

To quote the great Kevin Wayne Durant: "Have some respect for the human beings and have some respect for yourself. Your mother wouldn't be proud of you throwing water bottles at basketball players or spitting on players or tossing popcorn. So grow the f up and enjoy the game, it's bigger than you."

*additional content from Sporting News, House of Highlights, ESPN, NBA on TNT, Bleacher Report, NY Post

Continue reading here:
Odds and Ends: Knowing the Difference Between Sports Fandom and Toxicity - SportsRaid

Why does the coronavirus change? – Khmer Times

Variants of viruses occur when there is a change, or mutation, to the virus's genes. Ray says it is the nature of RNA viruses such as the coronavirus to evolve and change gradually. "Geographic separation tends to result in genetically distinct variants," he says.

Mutations in viruses, including the coronavirus causing the COVID-19 pandemic, are neither new nor unexpected. Bollinger explains: "All RNA viruses mutate over time, some more than others. For example, flu viruses change often, which is why doctors recommend that you get a new flu vaccine every year."

Is there a new coronavirus mutation?

"We are seeing multiple variants of the SARS-CoV-2 coronavirus that are different from the version first detected in China," Ray says.

He notes that one mutated version of the coronavirus was detected in southeastern England in September 2020. That variant, now known as B.1.1.7, quickly became the most common version of the coronavirus in the United Kingdom, accounting for about 60% of new COVID-19 cases in December. It is now the predominant form of the coronavirus in some countries.

Different variants have emerged in Brazil, California and other areas. A variant called B.1.351, which first appeared in South Africa, may have the ability to re-infect people who have recovered from earlier versions of the coronavirus. It might also be somewhat resistant to some of the coronavirus vaccines in development. Still, other vaccines currently being tested appear to offer protection from severe disease in people infected with B.1.351.

B.1.351: A Coronavirus Variant of Concern?

One of the main concerns about the coronavirus variants is whether the mutations could affect treatment and prevention.

The variant known as B.1.351, which was identified in South Africa, is getting a closer look from researchers, whose early data show that the COVID-19 vaccine from Oxford-AstraZeneca provided minimal protection from that version of the coronavirus. Those who became sick from the B.1.351 coronavirus variant after receiving the Oxford-AstraZeneca vaccine experienced mild or moderate illness.

The B.1.351 variant has not been shown to cause more severe illness than earlier versions. But there is a chance that it could give people who survived the original coronavirus another round of mild or moderate COVID-19.

Researchers studying placebo (non-vaccine) recipients in the South African COVID-19 vaccine trial by Novavax compared subgroups of participants who did or did not have antibodies indicating prior COVID-19. Those who did have the antibodies most likely were infected with older variants of SARS-CoV-2. They found that having recovered from COVID-19 did not protect against being sickened again at a time when the B.1.351 variant was spreading there.

Will the COVID-19 vaccine work on the new variants?

Ray says, "There is new evidence from laboratory studies that some immune responses driven by current vaccines could be less effective against some of the new strains. The immune response involves many components, and a reduction in one does not mean that the vaccines will not offer protection."

"People who have received the vaccines should watch for changes in guidance from the CDC [Centers for Disease Control and Prevention], and continue with coronavirus safety precautions to reduce the risk of infection, such as mask wearing, physical distancing and hand hygiene."

"We deal with mutations every year for flu virus, and will keep an eye on this coronavirus and track it," says Bollinger. "If there would ever be a major mutation, the vaccine development process can accommodate changes, if necessary," he explains.

How are the new coronavirus variants different?

"There are 17 genetic changes in the B.1.1.7 variant from England," Bollinger says. "There's some preliminary evidence that this variant is more contagious." Scientists noticed a surge of cases in areas where the new strain appeared.

He notes that some of the mutations in the B.1.1.7 version seem to affect the coronavirus's spike protein, which covers the outer coating of SARS-CoV-2 and gives the virus its characteristic spiny appearance. These proteins help the virus attach to human cells in the nose, lungs and other areas of the body.

"Researchers have preliminary evidence that some of the new variants, including B.1.1.7, seem to bind more tightly to our cells," Bollinger says. "This appears to make some of these new strains stickier due to changes in the spike protein." Studies are underway to understand more about whether any of the variants are more easily transmitted.

Are coronavirus variants more dangerous?

Bollinger says that some of these mutations may enable the coronavirus to spread faster from person to person, and more infections can result in more people getting very sick or dying. In addition, there is preliminary evidence from Britain that some variants could be associated with more severe disease. "Therefore, it is very important for us to expand the number of genetic sequencing studies to keep track of these variants," he says.

Bollinger explains that it may be more advantageous for a respiratory virus to evolve so that it spreads more easily. On the other hand, mutations that make a virus more deadly may not give the virus an opportunity to spread efficiently. "If we get too sick or die quickly from a particular virus, the virus has less opportunity to infect others. However, more infections from a faster-spreading variant will lead to more deaths," he notes.

Could a new COVID-19 variant affect children more frequently than earlier strains?

Ray says that although experts in areas where the new strain is appearing have found an increased number of cases in children, the data show that kids are being infected by old variants as well as the new ones. "There is no convincing evidence that any of the variants have a special propensity to infect or cause disease in children. We need to be vigilant in monitoring such shifts, but we can only speculate at this point," he says.

Will there be more new coronavirus variants?

Yes. As long as the coronavirus spreads through the population, mutations will continue to happen.

"New variants of the SARS-CoV-2 virus are detected every week," Ray says. "Most come and go: some persist but don't become more common; some increase in the population for a while, and then fizzle out. When a change in the infection pattern first pops up, it can be very hard to tell what's driving the trend: changes to the virus, or changes in human behavior. It is worrisome that similar changes to the spike protein are arising independently on multiple continents."

Are there additional COVID-19 precautions for the new coronavirus mutations?

Bollinger says that as of now, none of the new coronavirus variants call for any new prevention strategies. "We need to continue doing what we're doing," he says.

Ray concurs: "There is no demonstration yet that these variants are biologically different in ways that would require any change in current recommendations meant to limit spread of COVID-19," he says. "Nonetheless, we must continue to be vigilant for such phenomena."

Ray stresses that human behavior is important. The more people who are infected, the more chances there are for a mutation to occur. Limiting the spread of the virus through maintaining COVID-19 safeguards (mask wearing, physical distancing and practicing hand hygiene) gives the virus fewer chances to change. It also reduces the spread of more infectious variants, if they do occur.

"We need to re-emphasize basic public health measures, including masking, physical distancing, good ventilation indoors and limiting gatherings of people in close proximity with poor ventilation. We give the virus an advantage to evolve when we congregate in more confined spaces," he says.

Regarding coronavirus variants, how concerned should we be?

"Most of the genetic changes we see in this virus are like the scars people accumulate over a lifetime: incidental marks of the road, most of which have no great significance or functional role," Ray says. "When the evidence is strong enough that a viral genetic change is causing a change in the behavior of the virus, we gain new insight regarding how this virus works."

"As far as these variants are concerned, we don't need to overreact," Bollinger says. "But, as with any virus, changes are something to be watched, to ensure that testing, treatment and vaccines are still effective." The scientists will continue to examine new versions of this coronavirus's genetic sequence as it evolves.

"In the meantime, we need to continue all of our efforts to prevent viral transmission and to vaccinate as many people as possible, and as soon as we can." - Hopkins Medicine

Originally posted here:
Why does the coronavirus change? - Khmer Times

NSBORO Roundup: Masks not required outdoors; Studying the Holocaust; Start Time Reminder; and No to School Choice – mysouthborough

I'm rounding up some of the school-related news I missed sharing recently. Some stories are from the media and others from school announcements.

Outdoor Mask Requirement lifted for elementary schools – NSBORO District:

When the district announced on May 19th that masks would still be required during recess at NSBORO elementary schools, some parents objected. At the time, the Medical Advisory Team noted that they would continue to look at the data weekly and make adjustments. It appears that they did just that.

The District's website includes the following message to parents of PreK-5 students about a new policy that went into effect yesterday:

Beginning on June 1, 2021, and in alignment with Governor Baker's shifting the state's mask mandate to a mask advisory, and the Department of Elementary and Secondary Education's updated mask guidelines, the Public Schools of Northborough and Southborough will no longer require that masks be worn by students when outdoors during recess, physical education or outdoor classroom environments, even when social distance cannot be maintained. Masks are required on the bus at all times and inside the school buildings, except when eating.

In alignment with the Center for Disease Control (CDC) guidelines*, The Public Schools of Northborough and Southborough strongly encourages all non-vaccinated persons to wear masks outdoors when they are with individuals from outside their household and unable to maintain social distance. The Medical Advisory Team supports mask wearing for non-vaccinated students. (read more)

Northborough HS Class Takes Deep Dive Into What Led To The Holocaust – CBS Boston:

Local broadcast news covered a Social Studies elective at the high school. (Although the headline ignores the Regional part of Algonquin's name.)

Most kids learn about the Holocaust in school, but Algonquin Regional High School in Northborough is taking it one step further with lessons on human behavior.

I think it's really helped me to understand why my family was killed, said Jordan Chastanet, a senior at Algonquin.

The course is called Holocaust & Human Behavior. The elective, which is offered to juniors and seniors, is more than just a class. It's personal.

My grandmother was actually a Holocaust survivor, and two of her sisters and her escaped from the Warsaw ghetto. And I've heard so much about my history, said Chastanet. (read and view more)

(You can also find the full course description in Algonquin's Program of Studies.)

Start Time Update: Planning for the 2021-2022 School Year – NSBORO District:

Recent updates from the Superintendent remind parents and students to prepare for changing start times for schools this fall.

The initiative to allow Algonquin students to sleep later resulted in a radically revised 2-tiered bus schedule. Most students' school times will start and end later, except for Trottier Middle School. (The 6th-8th graders will actually start and end 13 minutes earlier.) The May newsletter update includes a table of Southborough start times.

Northborough & Southborough School Districts opt out of school choice – Community Advocate:

It looks like there were no surprises in the school committee votes on School Choice. As usual, all three NSBORO districts chose not to accept students from outside the district:

The Northborough-Southborough Regional School Committee unanimously voted to opt-out of the inter-district school choice program at their meeting May 19.

This decision aligns with the vote of both the Northborough K-8 and Southborough K-8 School Committee.

Although there are some advantages to school choice, such as a means to generate revenue and allow for flexibility in enrollment declines, Superintendent Greg Martineau cited disadvantages.

He explained the $5,000 in tuition the district would receive for each student would be far less than the district's per-pupil expenditure of $18,621.13, based on fiscal year (FY) 2020.

The fall reopening of schools was also a concern for Martineau.

There's no need to add another variable in terms of what the fall may look like, he said. (read more)

Go here to read the rest:
NSBORO Roundup: Masks not required outdoors; Studying the Holocaust; Start Time Reminder; and No to School Choice - mysouthborough

Early human impacts and ecosystem reorganization in southern-central Africa – Science Advances

Abstract

Modern Homo sapiens engage in substantial ecosystem modification, but it is difficult to detect the origins or early consequences of these behaviors. Archaeological, geochronological, geomorphological, and paleoenvironmental data from northern Malawi document a changing relationship between forager presence, ecosystem organization, and alluvial fan formation in the Late Pleistocene. Dense concentrations of Middle Stone Age artifacts and alluvial fan systems formed after ca. 92 thousand years ago, within a paleoecological context with no analog in the preceding half-million-year record. Archaeological data and principal coordinates analysis indicate that early anthropogenic fire relaxed seasonal constraints on ignitions, influencing vegetation composition and erosion. This operated in tandem with climate-driven changes in precipitation to culminate in an ecological transition to an early, pre-agricultural anthropogenic landscape.

Modern humans act as powerful agents of ecosystem transformation. They have extensively and intentionally modified their environments for tens of millennia, leading to much debate about when and how the first human-dominated ecosystems arose (1). A growing body of archaeological and ethnographic evidence shows substantial, recursive interactions between foragers and their environments that suggest that these behaviors were fundamental to the evolution of our species (24). Fossil and genetic data indicate that Homo sapiens were present in Africa by ~315 thousand years (ka) ago, and archaeological data show notable increases in the complexity of behavior that took place across the continent within the past ~300- to 200-ka span at the end of the Middle Pleistocene (Chibanian) (5). Since our emergence as a species, humans have come to rely on technological innovation, seasonal scheduling, and complex social cooperation to thrive. These attributes have enabled us to exploit previously uninhabited or extreme environments and resources, so that today humans are the only pan-global animal species (6). Fire has played a key role in this transformation (7).

Biological models suggest that adaptations for cooked food extend back at least 2 million years, but regular archaeological evidence for controlled use of fire does not appear until the end of the Middle Pleistocene (8). A marine core with a dust record drawn from a wide swath of the African continent shows that over the past million years, peaks in elemental carbon occurred after ~400 ka, predominately during shifts from interglacial to glacial conditions, but also during the Holocene (9). This suggests that fire was less common in sub-Saharan Africa before ~400 ka and that by the Holocene, there was a substantial anthropogenic contribution (9). Fire is a tool that has been used by pastoralists to open and maintain grasslands throughout the Holocene (10). However, detecting the contexts and ecological impacts of fire use by Pleistocene early hunter-gatherers is more complex (11).

Fire is known both ethnographically and archaeologically as an engineering tool for resource manipulation, including improvement of subsistence returns or modification of raw materials, with these activities often associated with communal planning and requiring substantial ecological knowledge (2, 12, 13). Landscape-scale fires allow hunter-gatherers to drive game, control pests, and enhance productivity of habitat (2). On-site fire facilitates cooking, warmth, predator defense, and social cohesion (14). However, there is substantial ambiguity regarding the extent to which fires by hunter-gatherers can reconfigure components of a landscape, such as ecological community structure and geomorphology (15, 16).

Understanding the development of human-induced ecological change is problematic without well-dated archaeological and geomorphic data from multiple sites, paired with continuous environmental records. Long lacustrine sedimentary records from the southern African Rift Valley, coupled with the antiquity of the archaeological record in this region, make it a place where anthropogenically induced ecological impacts may be investigated into the Pleistocene. Here, we report the archaeology and geomorphology of an extensively dated Stone Age landscape in southern-central Africa. We then link it to paleoenvironmental data spanning >600 ka to identify the earliest coupled evidence for human behavior and ecosystem transformation in the context of anthropogenic fire.

We provide previously unreported age constraints for the Chitimwe Beds of the Karonga District that lie at the northern end of Lake Malawi in the southern portion of the African Rift Valley (Fig. 1) (17). These beds are composed of lateritic alluvial fan and stream deposits that cover ~83 km2, containing millions of stone artifacts, but do not have preserved organic remains such as bone (Supplementary Text) (18). Our optically stimulated luminescence (OSL) data from terrestrial records (Fig. 2 and tables S1 to S3) revise the age of the Chitimwe Beds to the Late Pleistocene, with the oldest age for both alluvial fan activation and burial of Stone Age sites ca. 92 ka (18, 19). The alluvial and fluvial Chitimwe Beds overlie Plio-Pleistocene Chiwondo Beds of lacustrine and fluvial origin in a low, angular unconformity (17). These sedimentary packages are in fault-bounded wedges along the lake margin. Their configuration indicates interactions between lake level fluctuations and active faulting extending into the Pliocene (17). Although tectonism may have affected regional relief and piedmont slopes over an extended time, fault activity in this region likely slowed since the Middle Pleistocene (20). After ~800 ka until shortly after 100 ka, the hydrology of Lake Malawi became primarily climate driven (21). Therefore, neither is likely the sole explanation for Late Pleistocene alluvial fan formation (22).

Fig. 1. (A) Location of sites in Africa (star) relative to modern precipitation; blue is wetter and red is more arid (73); boxed area at left shows location of the MAL05-2A and MAL05-1B/1C cores (purple dots) in Lake Malawi and surrounding region, with the Karonga District highlighted as a green outline and location of Luchamange Beds as a white box. (B) Northern basin of Lake Malawi showing the hillshaded topography, remnant Chitimwe Beds (brown patches), and Malawi Earlier-Middle Stone Age Project (MEMSAP) excavation locations (yellow dots), relative to the MAL05-2A core; CHA, Chaminade; MGD, Mwanganda's Village; NGA, Ngara; SS, Sadala South; VIN, Vinthukutu; WW, White Whale.

Fig. 2. OSL central age (red lines) and error ranges at 1σ (25% gray) for all OSL ages associated with in situ artifact occurrences in Karonga. Ages are shown against the past 125 ka of data for (A) Kernel density estimate of all OSL ages from alluvial fan deposits indicating sedimentation/alluvial fan accumulation (teal), and lake level reconstructions based on eigenvalues of a principal components analysis (PCA) of aquatic fossils and authigenic minerals from the MAL05-1B/1C core (21) (blue). (B) Counts of macrocharcoal per gram normalized by sedimentation rate, from the MAL05-1B/1C core (black, one value near 7000 off scale with asterisk) and MAL05-2A core (gray). (C) Margalef's index of species richness (Dmg) from fossil pollen of the MAL05-1B/1C core. (D) Percentages of fossil pollen from Asteraceae, miombo woodland, and Olea, and (E) percentages of fossil pollen from Poaceae and Podocarpus. All pollen data are from the MAL05-1B/1C core. Numbers at the top refer to individual OSL samples detailed in tables S1 to S3. Differences in data availability and resolution are due to different sampling intervals and material availability in the core. Figure S9 shows the two macrocharcoal records converted to z scores.

Landscape stability after (Chitimwe) fan formation is indicated by formation of laterites and pedogenic carbonates, which cap fan deposits across the study region (Supplementary Text and table S4). The formation of alluvial fans in the Late Pleistocene of the Lake Malawi basin is not restricted to the Karonga region. About 320 km to the southeast in Mozambique, terrestrial cosmogenic nuclide depth profiles of 26Al and 10Be constrain formation of the alluvial, lateritic Luchamange Beds to 119 to 27 ka (23). This broad age constraint is consistent with our OSL chronology for the western Lake Malawi basin and indicates regional alluvial fan expansion in the Late Pleistocene. This is supported by data from lake core records, which suggest a higher sedimentation rate accompanied by increased terrigenous input after ca. 240 ka, with particularly high values at ca. 130 and 85 ka (Supplementary Text) (21).

The earliest evidence for human occupation in the region is tied to the Chitimwe sedimentary deposits identified at ~92 ± 7 ka. This result is based on 605 m3 of excavated sediment from 14 archaeological excavations with subcentimeter spatial control, and 147 m3 of sediment from 46 archaeological test pits with 20-cm vertical and 2-m horizontal control (Supplementary Text and figs. S1 to S3). In addition, we have surveyed 147.5 linear km, emplaced 40 geological test pits, and analyzed over 38,000 artifacts from 60 of these localities (tables S5 and S6) (18). These extensive surveys and excavations show that while hominins, including early modern humans, may have inhabited the region before ~92 ka, depositional aggradation associated with rising and then stabilized Lake Malawi levels did not preserve archaeological evidence until formation of the Chitimwe Beds.

The archaeological data support the inference that during the Late Quaternary, fan expansion and human activities in northern Malawi were substantial, and artifacts were of a type associated elsewhere in Africa with early modern humans. The majority of artifacts were created from quartzite or quartz river cobbles and featured radial, Levallois, platform, and casual core reduction (fig. S4). Morphologically diagnostic artifacts can be predominantly attributable to Levallois-type technologies characteristic of the Middle Stone Age (MSA), known to date from at least ~315 ka in Africa (24). The uppermost Chitimwe Beds, which continue into the Early Holocene, contain sparsely distributed Later Stone Age occurrences, found in association with terminal Pleistocene and Holocene hunter-gatherers across Africa. In contrast, stone tool traditions typically associated with the Early and Middle Pleistocene, such as large cutting tools, are rare. Where these do occur, they are found within MSA-bearing deposits dated to the Late Pleistocene, not an earlier phase of sedimentation (table S4) (18). Although sites are present from ~92 ka, the most well-represented period of both human activity and alluvial fan deposition occurs after ~70 ka, well defined by a cluster of OSL ages (Fig. 2). We confirm this pattern with 25 published and 50 previously unpublished OSL ages (Fig. 2 and tables S1 to S3). These show that of a total of 75 age determinations, 70 were recovered from sediment that postdates ~70 ka. The 40 ages associated with in situ MSA artifacts are shown in Fig. 2, relative to major paleoenvironmental indicators published from the MAL05-1B/1C central basin lake core (25) and previously unpublished charcoal from the MAL05-2A northern basin lake core (adjacent to the fans that produced the OSL ages).

Climate and environmental conditions coeval with MSA human occupation at Lake Malawi were reconstructed using freshly generated data from phytoliths and soil micromorphology from archaeological excavations and published data from fossil pollen, macrocharcoal, aquatic fossils, and authigenic minerals from the Lake Malawi Drilling Project cores (21). The latter two proxies are the primary basis of the reconstruction of relative lake depth dating back over 1200 ka (21) and are matched to pollen and macrocharcoal samples taken from the same places in the core that span the past ~636 ka (25). The longest cores (MAL05-1B and MAL05-1C; 381 and 90 m, respectively) were collected ~100 km southeast of the archaeological project area. A shorter core (MAL05-2A; 41 m) was collected ~25 km east, offshore from the North Rukuru River (Fig. 1). The MAL05-2A core reflects terrigenous paleoenvironmental conditions of the Karonga region, whereas the MAL05-1B/1C cores did not receive direct riverine input from Karonga and thus are more reflective of regional conditions.

Sedimentation rates recorded in the MAL05-1B/1C composite drill core began to increase starting ~240 ka from a long-term average of 0.24 to 0.88 m/ka (fig. S5). The initial increase is associated with changes in orbitally modulated insolation, which drive high amplitude changes in lake level during this interval (25). However, when orbital eccentricity decreased and climate stabilized after 85 ka, sedimentation rates remained high (0.68 m/ka). This is concurrent with the terrestrial OSL record, which shows extensive evidence for alluvial fan expansion after ~92 ka, and congruent with magnetic susceptibility data that show a positive relationship between erosion and fire after 85 ka (Supplementary Text and table S7). Given the error ranges of available geochronological controls, it is not possible to tell whether this set of relationships evolved slowly from a progression of recursive processes or occurred in rapid bursts as tipping points were reached. On the basis of geophysical models of basin evolution, rift extension and associated subsidence have slowed since the Middle Pleistocene (20) and, therefore, are not the primary cause of extensive fan formation processes we have dated to mainly after 92 ka.

Climate has been the dominant control of lake level since the Middle Pleistocene (26). Specifically, uplift in the northern basin closed an existing outlet ca. 800 ka, allowing the lake to deepen until reaching the sill elevation of the modern outlet (21). This outlet, located at the southern end of the lake, provides an upper limit on lake levels during wet intervals (including the present day), but allows the basin to close as lake levels drop during periods of aridity (27). Lake level reconstructions show alternating wet-dry cycles over the past 636 ka. On the basis of evidence from fossil pollen, periods of extreme aridity (>95% decrease in total water volume) linked to lows in summer insolation resulted in the expansion of semidesert vegetation with trees restricted to permanent waterways (27). These (lake) lowstands were associated with pollen spectra showing high proportions of grass (80% or more) and xerophytic herbs (Amaranthaceae) at the expense of tree taxa and low overall species richness (25). In contrast, when the lake was near the modern level, vegetation with strong affinities to Afromontane forest typically expanded to the lakeshore [~500 m above sea level (masl)]. Today, Afromontane forests only occur in small, discontinuous patches above ~1500 masl (25, 28).

The most recent period of extreme aridity occurred from 104 to 86 ka, after which open miombo woodland with substantial grass and herbaceous components became widespread, despite recovery of the lake level to high-stand conditions (27, 28). Afromontane forest taxa, most notably Podocarpus, never recovered after 85 ka to values similar to previous periods of high lake levels (10.7 ± 7.6% after 85 ka versus 29.8 ± 11.8% during analogous lake level before 85 ka). Margalef's index (Dmg) also shows that the past 85 ka has been marked by species richness 43% lower than during previous sustained periods of high lake level (2.3 ± 0.20 versus 4.6 ± 1.21, respectively), for example, in the high lake period between ca. 420 and 345 ka (Supplementary Text and figs. S5 and S6) (25). Pollen samples from the period ca. 88 to 78 ka also contain high percentages of Asteraceae pollen, which can be indicative of vegetation disturbance and is within the error range of the oldest date for human occupation of the area.
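
Margalef's index used here is conventionally computed as Dmg = (S - 1) / ln(N), where S is the number of taxa and N is the total number of individuals (here, pollen grains) counted in a sample. A minimal R sketch with a hypothetical count, not the published core data:

# Margalef's richness index: Dmg = (S - 1) / ln(N),
# where S = taxon richness and N = total individuals counted.
margalef <- function(counts) {
  counts <- counts[counts > 0]   # drop taxa absent from the sample
  S <- length(counts)            # number of taxa observed
  N <- sum(counts)               # total grains counted
  (S - 1) / log(N)
}

# Hypothetical pollen counts for a single sample (not published data)
counts <- c(Poaceae = 240, Podocarpus = 35, Olea = 12, miombo = 60, Asteraceae = 18)
margalef(counts)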

We use a climate anomaly approach (29) to analyze paleoecological and paleoclimatic data from the drill cores before and after 85 ka and test the hypothesis that the ecological relations among vegetation, species richness, and precipitation became decoupled from predictions derived from the presumably purely climate-driven baseline pattern of the preceding ~550 ka. This transformed ecological system was influenced by both lake infilling precipitation conditions and fire occurrence, as reflected in a species-poor and novel vegetation assemblage. Only some forest elements recovered after the last arid period, and these included fire-tolerant components of the Afromontane forest such as Olea, and hardy components of tropical seasonal forest such as Celtis (Supplementary Text and fig. S5) (25). To test this hypothesis, we model lake level derived from ostracode and authigenic mineral proxies as the independent variable (21) versus dependent variables such as charcoal and pollen that could have been affected by increased fire frequency (25).

To examine how similar or dissimilar the assemblages were to one another at different times, we conducted a principal coordinates analysis (PCoA) using pollen from Podocarpus (evergreen tree), Poaceae (grasses), Olea (a fire-tolerant component of Afromontane forest), and miombo (the dominant woodland component today). By mapping the PCoA on top of an interpolated surface that represents lake level at the time each assemblage was formed, we examine how pollen assemblages changed relative to precipitation and how this relationship changed after 85 ka (Fig. 3 and fig. S7). Before 85 ka, samples dominated by Poaceae cluster toward drier conditions, while samples dominated by Podocarpus cluster toward wetter conditions. In contrast, samples dating to after 85 ka cluster away from the majority of pre-85-ka samples and have a different average value, showing that their composition is unusual for similar precipitation conditions. Their position in the PCoA reflects the influence of Olea and miombo, both of which are favored under more fire-prone conditions. Of the post-85-ka samples, Podocarpus is only abundant in three successive samples, which occurred just after the onset of this interval between 78 and 79 ka. This suggests that after initial rainfall increase, forests appear to make a brief recovery before eventual collapse.

Fig. 3. Each dot represents a single pollen sample at a given point in time, using the age model in the Supplementary Text and fig. S8. Vectors show the direction and gradient of change, with longer vectors representing a stronger trend. The underlying surface represents lake levels as a proxy for precipitation; darker blue is higher. A mean value for the PCoA eigenvalues is provided for the post-85-ka data (red diamond) and all pre-85-ka data from analogous lake levels (yellow diamond). Analogous lake levels are between 0.130σ and 0.198σ around the mean eigenvalue of the lake level PCA using the entire 636 ka of data.

To investigate the relations among the pollen, lake levels, and charcoal, we used a nonparametric multivariate analysis of variance (NP-MANOVA) to compare the total environment (represented by a data matrix of pollen, lake levels, and charcoal), before and after the transition at 85 ka. We found that variation and covariation found in this data matrix are statistically significantly different before and after 85 ka (Table 1).

Table 1. NP-MANOVA comparing the paleoenvironmental data matrix before and after 85 ka. DF, degrees of freedom.

Our terrestrial paleoenvironmental data from phytoliths and soils on the western lake margins agree with interpretations based on proxies from the lake. These show that despite high lake levels, the landscape had transitioned to one dominated by open canopy woodland and wooded grassland, much as today (25). All localities analyzed for phytoliths on the western margin of the basin date to after ~45 ka and show substantial arboreal cover that reflects wet conditions. However, they suggest that much of that cover is in the form of open woodlands in cohort with bambusoid and panicoid grasses. On the basis of phytolith data, fire-intolerant palms (Arecaceae) were present exclusively by the lake shoreline and rare or absent from inland archaeological sites (table S8) (30).

In general, wet but open conditions in the later part of the Pleistocene are also inferred from terrestrial paleosols (19). Lagoonal clay and palustrine-pedogenic carbonates from the vicinity of the Mwanganda's Village archaeological site date between 40 and 28 cal ka BP (calibrated kiloanni before present) (table S4). Carbonate soil horizons within the Chitimwe Beds are typically nodular calcretes (Bkm) and argillic and carbonate (Btk) horizons, which indicate locations of relative landform stability with slow sedimentation derived from distal alluvial fan progradation by ca. 29 cal ka BP (Supplementary Text). Eroded, indurated laterite soils (petroplinthites) formed on remnants of paleofans are indicative of open landscape conditions (31) with strongly seasonal precipitation (32), illustrating the ongoing legacy of these conditions on the landscape.

Support for the role of fire in this transformation comes from the paired macrocharcoal records from the drill cores, which from the central basin (MAL05-1B/1C) show an overall increase in charcoal influx starting ca. 175 ka. Substantial peaks follow between ca. 135 and 175 ka and 85 and 100 ka, after which time lake levels recover but forest trees and species richness do not (Supplementary Text, Fig. 2, and fig. S5). The relationship between charcoal influx and magnetic susceptibility of lake sediments can also show patterns in long-term fire history (33). Using data from Lyons et al. (34), ongoing erosion of burned landscapes after 85 ka is implied at Lake Malawi by a positive correlation (Spearman's Rs = 0.2542 and P = 0.0002; table S7), whereas older sediments show an inverse relationship (Rs = −0.2509 and P < 0.0001). In the northern basin, the shorter MAL05-2A core has its deepest chronological anchor point with the Youngest Toba Tuff at ~74 to 75 ka (35). Although it lacks the longer-term perspective, it receives input directly from the catchment from which the archaeological data derive. The north basin charcoal record shows a steady increase in terrigenous charcoal input since the Toba crypto-tephra marker, over the period where archaeological evidence is most prevalent (Fig. 2B).
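
The rank correlations cited above can be reproduced with cor.test() in R; the table below is an illustrative placeholder rather than the Lyons et al. (34) dataset:

# Hypothetical core table: age (ka), charcoal influx, magnetic susceptibility
core <- data.frame(
  age_ka   = seq(5, 160, by = 5),
  charcoal = runif(32, 0, 50),
  mag_susc = runif(32, 0, 10)
)

post85 <- subset(core, age_ka <= 85)   # sediments younger than 85 ka
pre85  <- subset(core, age_ka >  85)   # older sediments

# Spearman's rank correlation, the statistic behind the Rs values cited above
cor.test(post85$charcoal, post85$mag_susc, method = "spearman")
cor.test(pre85$charcoal,  pre85$mag_susc,  method = "spearman")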

Evidence for anthropogenic fire may reflect intentional use at the landscape scale, widespread populations creating more or larger on-site ignitions, alteration of fuel availability through harvesting of the understory, or a combination of these activities. Modern hunter-gatherers use fire to actively modify foraging returns (2). Their activities increase prey abundances, maintain mosaic landscapes, and increase pyrodiversity and succession stage heterogeneity (13). Fire is also important for on-site activities such as heat, cooking, defense, and socialization (14). Even small differences in the deployment of fire outside of natural lightning strikes can alter patterns of forest succession, fuel availability, and seasonality of ignitions. Reductions in arboreal cover and woody understory have the most potential to enhance erosion, while loss of species diversity in this region is tightly tied to loss of Afromontane forest communities (25).

Human control of fire is well established in the archaeological record from before the start of the MSA (15), but its use as a landscape management tool has only so far been documented in a few Paleolithic contexts. These include in Australia ca. 40 ka (36), Highland New Guinea ca. 45 ka (37), and ca. 50 ka at Niah Cave in lowland Borneo (38). In the Americas, anthropogenic ignitions have been implicated as major factors in the reconfiguration of faunal and floral communities as humans first entered these ecosystems, especially within the past 20 ka (16). These conclusions are necessarily based on correlative evidence, but the argument for a cause-and-effect relationship is strengthened where there is direct overlap of archaeological, geochronological, geomorphic, and paleoenvironmental data. Although marine core data offshore of Africa have previously provided evidence of altered fire regimes over the past ~400 ka (9), here, we provide evidence of anthropogenic impacts that draw from correlated archaeological, paleoenvironmental, and geomorphic datasets.

Identifying anthropogenic fire in the paleoenvironmental record requires evidence of temporal or spatial changes in fire activity and vegetation, demonstration that these changes are not predicted by climate parameters alone, and temporal/spatial coincidence between fire regime changes and changes in the human record (29). Here, the first evidence for extensive MSA occupation and alluvial fan formation in the Lake Malawi basin occurred alongside a major reorganization of regional vegetation that began ca. 85 ka. Charcoal abundances in the MAL05-1B/1C core are reflective of regional trends in charcoal production and sedimentation that show substantial differences after ca. 150 ka when compared to the rest of the 636-ka record (figs. S5, S9, and S10). This transition shows an important contribution of fire for shaping ecosystem composition that cannot be explained by climate alone. In natural fire regimes, lightning ignitions typically occur at the end of the dry season (39). Anthropogenic fires, however, may be ignited at any time if fuels are sufficiently dry. On a site scale, humans can alter fire regimes continuously through collection of firewood from the understory. The net result of anthropogenic fire of any kind is that it has the potential to result in more consumption of woody vegetation, continuously throughout the year, and at a variety of scales.

In South Africa, fire was used in the heat treatment of stone for tool manufacture as early as 164 ka (12) and as a tool for cooking starchy tubers as early as 170 ka (40), taking advantage of resources that thrived in ancient fire-prone landscapes (41). Landscape fires reduce arboreal cover and are crucial tools in maintaining grassland and forest patch environments, which are defining elements of anthropogenically mediated ecosystems (13). If modification of vegetation or prey behavior was the intent of increased anthropogenic burning, then this behavior represents an increase in the complexity with which early modern humans controlled and deployed fire in comparison to earlier hominins and shows a transformed interdependency in our relationship with fire (7). Our analysis offers an additional avenue for understanding how human use of fire changed in the Late Pleistocene and what impacts these changes had on their landscapes and environments.

The expansion of alluvial fans during the Late Quaternary in the Karonga region may be attributable to changes in seasonal burning cycles under higher-than-average rainfall conditions, which resulted in enhanced hillslope erosion. The mechanism through which this occurred was likely by driving watershed-scale responses from fire-induced disturbance with enhanced and sustained denudation in the upper portions of the watersheds, and alluvial fan expansion in the piedmont environments adjacent to Lake Malawi. These responses likely included changes in soil properties to decrease infiltration rates, diminished surface roughness, and enhanced runoff as high precipitation conditions combined with reduced arboreal cover (42). Sediment availability is enhanced initially by the stripping of cover material and over longer time scales potentially by loss of soil strength from heating and from decreased root strength. The stripping of topsoil increased sediment flux, which was accommodated by fan aggradation downstream and accelerated laterite formation on the fans.

Many factors can control the landscape response to changing fire regime, and most of them operate at short time scales (42–44). The signal we associate here is manifest at the thousand-year time scale. Analytical and landscape evolution models have shown notable denudation rate changes over thousand-year time scales with recurrent wildfire-induced vegetation disturbances (45, 46). A lack of regional fossil records contemporaneous with the observed changes in charcoal and vegetation records impedes reconstruction of the impacts of human behavior and environmental change on herbivore community composition. However, large grazing herbivores that inhabit landscapes that are more open play a role in maintaining them and in keeping woody vegetation from encroaching (47). Evidence of change across different components of the environment should not be expected to be simultaneous, but rather viewed as a series of cumulative effects that may have occurred over a prolonged period (11). Using a climate anomaly approach (29), we attribute human activity as a key driver in shaping the landscape of northern Malawi over the course of the Late Pleistocene. However, these impacts may be built on an earlier, less visible legacy of human-environment interactions. Charcoal peaks that appear in the paleoenvironmental record before the earliest archaeological dates may include an anthropogenic component that did not result in the same ecological regime change that is documented later in time and that did not involve sedimentation sufficient to confidently indicate human occupation.

Short sediment cores, such as that from the adjacent Lake Masoko basin in Tanzania, or shorter cores within Lake Malawi itself, show changes in the relative pollen abundances of grass to woodland taxa that have been attributed to natural climate variability over the past 45 ka (48–50). However, it is only with the longer perspective of the >600-ka pollen record of Lake Malawi, accompanied by the extensively dated archaeological landscape next to it, that it is possible to understand the longer-term associations between climate, vegetation, charcoal, and human activity. Although humans were likely present in the northern Lake Malawi basin before 85 ka, the density of archaeological sites after ca. 85 ka, and especially after 70 ka, indicates that the region was attractive for human occupation after the last major arid period ended. At this time, novel or more intensive/frequent usage of fire by humans apparently combined with natural climate shifts to restructure a >550-ka ecological relationship, ultimately generating an early preagricultural anthropogenic landscape (Fig. 4). Unlike during earlier time periods, the depositional nature of this landscape preserved MSA sites as a function of the recursive relationship between environment (resource distributions), human behavior (activity patterns), and fan activation (sedimentation/site burial).

Fig. 4. (A) ca. 400 ka: No detectable human presence. Wet conditions similar to today with high lake level. Diverse, non-fire-tolerant arboreal cover. (B) ca. 100 ka: No archaeological record, but human presence possibly detected by charcoal influx. Extremely arid conditions occur in a desiccated watershed. Commonly exposed bedrock, limited surface sediment. (C) ca. 85 to 60 ka: Lake level is increasing with higher precipitation. Human presence archaeologically detectable after 92 ka and concentrated after 70 ka. Burning of uplands and alluvial fan expansion ensue. Less diverse, fire-tolerant vegetation regime emerges. (D) ca. 40 to 20 ka: Ambient charcoal input in the northern basin increases. Alluvial fan formation continues but begins to abate toward the end of this period. Lake levels remain high and stable relative to the preceding 636-ka record.

The Anthropocene represents the accumulation of niche construction behaviors that have developed over millennia, at a scale unique to modern H. sapiens (1, 51). In the modern context, anthropogenic landscapes persist and have intensified following the introduction of agriculture, but they are extensions, not disconnections, of patterns established during the Pleistocene (52). Data from northern Malawi show that periods of ecological transition can be prolonged, complex, and iterative. Transformations of this scale reflect complex ecological knowledge by early modern humans and illustrate their transition to the globally dominant species we are today.

Site survey and recording of artifact and cobble characteristics on survey tracts followed protocols described in Thompson et al. (53). Test pit emplacement and main site excavation, including micromorphology and phytolith sampling, followed protocols described in Thompson et al. (18) and Wright et al. (19). Our Geographic Information System (GIS) maps based on Malawi geological survey maps of the region show a clear association between the Chitimwe Beds and archaeological sites (fig. S1). Placement of geologic and archaeological test pits in the Karonga region was spaced to capture the broadest representative sample possible (fig. S2). Geomorphic, geochronometric, and archaeological investigations of Karonga involved four main field approaches: pedestrian survey, archaeological test pitting, geological test pitting, and detailed site excavations. Together, these techniques allowed major exposures of the Chitimwe Beds to be sampled in the northern, central, and southern parts of Karonga (fig. S3).

Site survey and recording of artifact and cobble characteristics on pedestrian survey tracts followed protocols described in Thompson et al. (53). This approach had two main goals. The first was to identify localities where artifacts were actively eroding, and then place archaeological test pits upslope at those localities to recover artifacts in situ from buried contexts. The second goal was to formally document the distribution of artifacts, their characteristics, and their relationship to nearby sources of lithic raw material (53). For this work, a crew comprising three people walked at 2- to 3-m spacing for a combined total of 147.5 linear km, transecting across most of the mapped Chitimwe Beds (table S6).

Work concentrated first on the Chitimwe Beds to maximize the sample of observed artifacts, and second on long linear transects from the lakeshore to the uplands that crosscut different sedimentary units. This confirmed the key observation that artifacts located between the western highlands and the lakeshore are exclusively associated with the Chitimwe Beds or more recent Late Pleistocene and Holocene deposits. Artifacts found in other deposits are ex situ and have been relocated from elsewhere on the landscape, as revealed by their abundances, sizes, and degree of weathering.

Archaeological test pit emplacement and main site excavation, including micromorphology and phytolith sampling, followed protocols described in Thompson et al. (18, 54) and Wright et al. (19, 55). The primary aim was to understand the subsurface distribution of artifacts and fan deposits across the larger landscape. Artifacts are typically deeply buried within the Chitimwe Beds in all places except at the margins, where erosion has begun to remove the top part of the deposit. During informal survey, two people walked across Chitimwe Beds that appear as mapped features on Government of Malawi geological maps. As these people encountered the shoulders of Chitimwe Bed deposits, they began to walk along the margins where they could observe artifacts eroding from the deposits. By placing excavations slightly (3 to 8 m) upslope from actively eroding artifacts, excavations could reveal their in situ locations relative to their containing sediments, without the necessity of laterally extensive excavations. Test pits were emplaced so that they would be 200- to 300-m distant from the next-nearest pit and thus capture the variation across Chitimwe Bed deposits and the artifacts they contained. In some cases, test pits revealed localities that later became the sites of full excavations.

All test pits began as 1 × 2 m squares, oriented north-south, and excavated in 20-cm arbitrary units, unless there was a noticeable change in sediment color, texture, or inclusions. Sedimentologic and pedologic attributes were recorded for all excavated sediment, which was passed uniformly through a 5-mm dry sieve. If deposit depth continued beyond 0.8 to 1 m, then excavation ceased in one of the two square meters and continued in the other, thus creating a step so that the deeper layers could be safely accessed. Excavation then continued until bedrock was reached, at least 40 cm of archaeologically sterile sediment had been reached below a concentration of artifacts, or excavation became too unsafe (deep) to proceed. In some cases, deposit depth required extension of the test pit into a third square meter, with two steps into the trench.

Geologic test pits had previously revealed that the Chitimwe Beds often appear on geologic maps because of a distinctive reddish color, although they include a wide range of stream, river, and alluvial fan deposits and do not always present as red in color (19). Geologic test pits were excavated as simple pits designed to clean off mixed upper sediments to reveal the subsurface stratigraphy of deposits. This was necessary because the Chitimwe Beds erode as parabolic hillslopes with slumped sediments coating the slope and do not typically form clear natural sections or cuts. These excavations thus occurred either at the tops of Chitimwe Beds, where there was an inferred subsurface contact between the Chitimwe Beds and the underlying Pliocene Chiwondo Beds, or where river terrace deposits required dating (55).

Full archaeological excavations proceeded at localities that promised large assemblages of in situ stone artifacts, typically based on test pits or where artifacts could be seen eroding in large quantities from a slope. Artifacts from the main excavations were recovered from sedimentary units that were excavated separately in 1 × 1 m squares. Units were excavated in spits of 10 cm, or 5 cm if artifact densities were high. All stone artifacts, fossil bone, and ochre were piece plotted at each main excavation, with no size cutoff. The sieve size was 5 mm. Artifacts were assigned unique barcoded plotted find numbers if they were recovered during excavation, and find numbers within the same series were assigned to sieved finds. Artifacts were labeled with permanent ink, placed in a bag with their specimen label, and bagged together with other artifacts from the same context. After analysis, all artifacts were stored at the Cultural and Museum Centre, Karonga.

All excavation was conducted according to natural layers. These were subdivided into spits, with spit thickness dependent on artifact density (e.g., spit thickness would be high if artifact density was low). Context data (e.g., sediment attributes, context relationships, and observations about disturbances and artifact densities) were recorded in an Access database. All coordinate data (e.g., piece-plotted finds, context elevations, square corners, and samples) are based on Universal Transverse Mercator (UTM) coordinates (WGS 1984, Zone 36S). At main sites, all points were recorded using a Nikon Nivo C-series 5 total station that was established within a local grid oriented as closely as possible to UTM north. The location of the northwest corner of each excavated site and the volume of sediment removed for each are given in table S5.

Profiles of sedimentologic and pedologic features were documented from all excavation units using the U.S. Department of Agriculture classification scheme (56). Sedimentologic units were designated on the basis of grain sizes, angularity, and bedding features. Anomalous inclusions and disturbances relative to the sediment unit were noted. Soil development was determined on the basis of subsurface accumulation of sesquioxides or carbonates in the subsoils. Subaerial weathering (e.g., redox, residual Mn nodule formation) was also commonly documented.

Collection points for OSL samples were determined on the basis of an estimation of which facies were likely to yield the most reliable estimation of sediment burial age. At sample locations, trenches were made to expose authigenic sediment layers. All samples for OSL dating were collected by inserting light-tight steel tubes (approximately 4 cm in diameter and 25 cm in length) into the sediment profiles.

OSL dating measures the size of the population of trapped electrons within crystals such as quartz or feldspar arising from exposure to ionizing radiation. The bulk of this radiation originates from the decay of radioactive isotopes within the environment with a minor additional component in the tropical latitudes coming in the form of cosmic radiation. Trapped electrons are released upon exposure of the crystals to light, which occurs either during transport (the zeroing event) or in the laboratory, where illumination occurs beneath a sensor (for example, a photomultiplier tube or charge-coupled device camera) that can detect photons emitted when the electrons return to their ground state. Quartz grains measuring between 150 and 250 μm were isolated through sieving, acid treatments and density separations, and analyzed either as small aliquots (<100 grains) mounted to the surface of aluminum disks or as single grains held within 300 by 300 μm wells drilled into an aluminum disc. Burial doses were typically estimated using single aliquot regeneration methods (57). In addition to assessment of the radiation dose received by grains, OSL dating also requires estimation of the dose rate through measurements using gamma spectrometry or neutron activation analysis of radionuclide concentrations within the sediments from which the sample was collected, along with determination of a cosmic dose rate by reference to the sample location and burial depth. Final age determination is achieved by dividing the burial dose by the dose rate. However, statistical modeling is required to determine an appropriate burial dose to use when there is a variation in the doses measured for individual grains or groups of grains. Burial doses were calculated here using the Central Age Model, in the case of single aliquot dating, or the finite mixture model in the case of single grain dating (58).
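
As a rough illustration of the final step (age = burial dose divided by dose rate), the R sketch below uses hypothetical dose values and simple quadrature error propagation; in practice the burial dose comes from the Central Age Model or finite mixture model rather than a single number:

# Final OSL age = equivalent (burial) dose / environmental dose rate.
De       <- 210    # equivalent dose (Gy), hypothetical
De_err   <- 12     # 1-sigma uncertainty on the dose (Gy)
doserate <- 2.3    # dose rate (Gy/ka): radionuclides + cosmic component
dr_err   <- 0.15   # 1-sigma uncertainty on the dose rate (Gy/ka)

age_ka  <- De / doserate
# First-order error propagation for a ratio: relative errors in quadrature
age_err <- age_ka * sqrt((De_err / De)^2 + (dr_err / doserate)^2)

round(c(age_ka = age_ka, error_ka = age_err), 1)   # roughly 91 +/- 8 ka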

Three separate laboratories performed OSL analysis for this study. Detailed individual methods for each laboratory are presented below. In general, we applied OSL dating using regenerative-dose methods to small aliquots (tens of grains) rather than using single grain analysis. This is because small aliquots of samples had low recuperation ratios (<2%) during regenerative growth experiments and the OSL signals were not saturated at the levels of natural signals. Interlaboratory consistency of age determinations, consistent harmony of results within and between stratigraphic sections tested, and parity with geomorphic interpretations of 14C ages from carbonates were the primary basis of this assessment. Single grain protocols were evaluated or performed at each laboratory, but independently determined to be inappropriate for use in this study. Detailed methods and analytical protocols followed by individual laboratories are provided in Supplementary Materials and Methods.

Lithic artifacts recovered from controlled excavations (BRU-I; CHA-I, CHA-II, and CHA-III; MGD-I, MGD-II, and MGD-III; and SS-I) were analyzed and described according to metric and qualitative characteristics. Weight and maximum dimension were measured for each artifact (weight was measured to 0.1 g using a digital scale; all dimensions measured to 0.01 mm with Mitutoyo digital calipers). All artifacts were also classified according to raw material (quartz, quartzite, chert, and other), grain size (fine, medium, and coarse), grain size homogeneity, color, cortex type and coverage, weathering/edge rounding, and technological class (complete or fragmentary core or flake, flake piece/angular shatter, hammerstone, manuport, and other).

Cores were measured along their maximum length; maximum width; width at 15, 50, and 85% of length; maximum thickness; and thickness at 15, 50, and 85% of length. Measurements were also taken to assess the volumetric attributes of hemispherically organized (radial and Levallois) cores. Both complete and broken cores were classified according to reduction method (single or multiplatform, radial, Levallois, and other), and flake scars were counted at both ≥15 mm and at ≥20% of core length. Cores with five or fewer scars ≥15 mm were classified as casual. Cortex coverage was recorded for the total core surface, and on hemispherically organized cores, the relative cortex coverage was recorded for each side.

Flakes were measured along their maximum length; maximum width; width at 15, 50, and 85% of length; maximum thickness; and thickness at 15, 50, and 85% of length. Fragmentary flakes were described according to the portion preserved (proximal, medial, distal, split right, and split left). Elongation was calculated by dividing maximum length by maximum width. Platform width, thickness, and exterior platform angle were measured on complete flakes and proximal flake fragments, and platforms were classified according to degree of preparation. Cortex coverage and location were recorded on all flakes and fragments. Distal edges were classified according to termination type (feather, hinge, and overshot). On complete flakes, the number and orientation of prior flake scars were recorded. When encountered, retouch location and invasiveness were recorded following the protocol established by Clarkson (59). Refitting programs were initiated for most of the excavated assemblages to assess reduction methods and site depositional integrity.

Lithic artifacts recovered from test pits (CS-TP1-21, SS-TP1-16, and NGA-TP1-8) were described according to a simpler scheme than those from controlled excavations. For each artifact, the following characteristics were recorded: raw material, grain size, cortex coverage, size class, weathering/edge damage, technological component, and preserved portion of fragmentary pieces. Descriptive notes were recorded for diagnostic features of flakes and cores.

Intact blocks of sediment were cut from profiles exposed in excavations and geological trenches. These blocks were stabilized in the field, using either plaster-of-Paris bandages or toilet paper and packaging tape, and transported to the Geoarchaeology Laboratory at the University of Tübingen, Germany. There, the samples were dried at 40°C for at least 24 hours. They were then indurated under vacuum, using a mixture of unpromoted polyester resin and styrene at a ratio of 7:3. Methylethylketone peroxide was used as the catalyst, with resin-styrene mixture (3 to 5 ml/liter). Once the resin mixture had gelled, the samples were heated at 40°C for at least 24 hours to completely harden the mixture. The hardened samples were cut with a tile saw into chips measuring 6 × 9 cm, which were glued to a glass slide and ground to a thickness of 30 μm. The resulting thin sections were scanned using a flatbed scanner and analyzed under the naked eye and under magnification (50× to 200×) using plane-polarized light, cross-polarized light, oblique incident light, and blue-light fluorescence. Terminology and descriptions of the thin sections follow guidelines published by Stoops (60) and Courty et al. (61). Pedogenic carbonate nodules, collected from a depth of >80 cm, were sliced in half so that one half could be impregnated and studied in thin section (4.5 × 2.6 cm), using standard stereoscopic and petrographic microscopes, as well as cathodoluminescence (CL) microscopy. Much care was taken in identifying the type of carbonate, as pedogenic carbonates form in connection with a stable ground surface, while groundwater carbonates form independently of a ground surface or soil.

Samples were drilled from the cut faces of pedogenic carbonate nodules, which were halved to be used for various analyses. The thin sections were studied by F.S. with standard stereo and petrographic microscopes of the working group for geoarchaeology and with a CL microscope at the working group for experimental mineralogy, both in Tübingen, Germany. Subsamples for radiocarbon dating were drilled with a precision drill from designated areas of ca. 3 mm in diameter in the opposing half of the nodule, avoiding zones with late recrystallization, abundant mineral inclusions, or great variability in calcite crystal sizes. The same protocol could not be followed for samples MEM-5038, MEM-5035, and MEM-5055 A, which were selected from loose sediment samples and too small to be cut in half for thin sectioning. However, corresponding micromorphological samples of the adjacent sediment, including carbonate nodules, were studied in thin section.

We submitted samples for 14C dating to the Center for Applied Isotope Studies (CAIS), at the University of Georgia, Athens, USA. The carbonate samples were reacted with 100% phosphoric acid in evacuated reaction vessels to produce CO2. CO2 samples were cryogenically purified from the other reaction products and catalytically converted to graphite. Graphite 14C/13C ratios were measured using a 0.5-MeV accelerator mass spectrometer. The sample ratios were compared to the ratio measured from the oxalic acid I standard (NBS SRM 4990). Carrara marble (IAEA C1) was used as the background, and travertine (IAEA C2) was used as a secondary standard. The results are presented as percent modern carbon, and the quoted uncalibrated dates are given in radiocarbon years before 1950 (years BP), using the 14C half-life of 5568 years. The error is quoted as 1σ and reflects both statistical and experimental errors. The dates have been corrected for isotope fractionation based on the isotope-ratio mass spectrometry-measured δ13C values reported by C. Wissing at the laboratory for Biogeology in Tübingen, Germany, except in the case of UGAMS-35944r, which was measured at CAIS. Sample 6887B was analyzed in duplicate. A second subsample was drilled from the nodule for this purpose (UGAMS-35944r) from the sampling region indicated on the cut surface. All samples were corrected for atmospheric variation in 14C to 2σ using the southern hemisphere application of the INTCAL20 calibration curve (table S4) (62).
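
The conversion from percent modern carbon to an uncalibrated radiocarbon age follows from the Libby half-life of 5568 years (mean life of about 8033 years); a minimal R sketch with a hypothetical measurement, not one of the samples reported here, is:

# Uncalibrated (conventional) radiocarbon age from percent modern carbon,
# using the Libby half-life of 5568 years (mean life 5568 / ln 2 ~ 8033 yr).
pmc    <- 2.5                        # percent modern carbon (hypothetical)
age_bp <- -8033 * log(pmc / 100)     # radiocarbon years before 1950 (BP)
round(age_bp)                        # about 29,600 14C years BP

Conversion of such an age to calendar years then requires a calibration curve, as was done for the dates reported in table S4.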

Each sample (sediment, 0.7 g) was mixed with a 0.1% preboiled solution of sodium hexametaphosphate [Na6(PO3)6] and sonicated (5 min). Orbital shaking took place overnight at 200 rpm. After clay dispersal, 3 N hydrochloric (HCl) and nitric (HNO3) acids plus hydrogen peroxide (H2O2) were added. Then, sodium polytungstate (3Na2WO4·9WO3·H2O) (Poly-Gee) at specific gravity 2.4 (preboiled) separated out phytoliths. This was followed by rinsing and centrifugation of samples at 3000 rpm for 5 min. Aliquots (15 μl) were mounted on boiled microscope slides with Entellan New (cover slip, 20 × 40 mm = inspected area). Systematic microscopy was performed at 40× (Olympus BX41, Motic BA410E). Classification nomenclature followed the International Code for Phytolith Nomenclature (63). The referential baseline included modern plants from several African ecoregions (64) and local soils (65), as well as archaeological localities in the Malawi basin (19, 66).

The OSL data from the landscape and paleoecological data from the Lake Malawi 1B/1C core were subjected to statistical analyses to examine how they changed before and after ~85 ka. Kernel density estimates (KDEs) of sedimentation were constructed following protocols developed in Vermeesch (67) and Kappler et al. (68) from 72 luminescence ages interpreted as originating from alluvial fan deposits (tables S1 to S3). KDEs provide reliable distributions of age occurrences when standard errors (SEs) overlap or the analytical imprecision of the true age is high (67). For the present analysis, each age was replotted 10,000 times along a normal distribution using the rnorm command in R based on the laboratory-generated mean and 1σ SE. The KDE was created in the kde1d package in R (69). Bandwidth was set to default, with data-derived parameters developed by Sheather and Jones (70).
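
A minimal R sketch of that resampling-and-KDE procedure, substituting hypothetical ages and errors for the values in tables S1 to S3, might look like:

# Resample each OSL age 10,000 times from a normal distribution defined by
# its central age and 1-sigma SE, then fit a kernel density estimate.
library(kde1d)

ages_ka <- c(92, 85, 70, 68, 55, 41, 29)   # hypothetical central ages (ka)
se_ka   <- c(7, 6, 5, 5, 4, 3, 2)          # hypothetical 1-sigma errors (ka)

draws <- unlist(Map(rnorm, n = 10000, mean = ages_ka, sd = se_ka))
fit   <- kde1d(draws)                      # default bandwidth selection
plot(fit)                                  # density of alluvial fan ages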

To characterize the biotic environment, we used proportions of pollen from Poaceae, Podocarpus, miombo, and Olea. We used lake levels to characterize the abiotic environment. Over the ~636 ka span of the MAL05-1B/1C core for which pollen data are available, there have been several periods when lake level was equivalent to modern conditions. We have defined these analogous conditions by downsampling the published lake level data (21) to fit the pollen sample intervals (25), and then calculating the statistical mean of the principal components analysis (PCA) eigenvalue for all lake level proxies over the past 74 ka to represent modern-like lake conditions. The pollen sampling intervals effectively make this the statistical mean of lake levels between 21.4 and 56.2 ka [0.130σ and 0.198σ (25)] and enable us to compare recent vegetation composition to its composition during older, analogous precipitation regimes.
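
One way to sketch that matching step in R, with placeholder series standing in for the published lake level and pollen records, is:

# Interpolate the lake-level PCA series onto the (sparser) pollen sample ages,
# then average the recent window used to define "modern-like" lake levels.
lake <- data.frame(age_ka = seq(0, 636, by = 0.5),
                   pc1    = rnorm(1273))       # placeholder lake-level PC1
pollen_ages <- seq(2, 636, by = 4)             # placeholder pollen sample ages

pc1_at_pollen <- spline(lake$age_ka, lake$pc1, xout = pollen_ages)$y

modern_mean <- mean(pc1_at_pollen[pollen_ages <= 74])
modern_mean                                    # mean eigenvalue, past ~74 ka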

To evaluate whether there were differences in the regional environmental structure before and after 85 ka, we conducted a nonparametric multivariate analysis of variance (NP-MANOVA) (71). However, vegetation and lake level proxies are inherently different data types [pollen proportions (25) and the first principal component of all lake level proxies (21), respectively]. To conduct the MANOVA, these data must be of the same type. Pollen, lake level proxies, and macrocharcoal were also sampled at different densities and intervals in the cores. To adjust the data so that each pollen sample and its age could be matched to a single charcoal and lake level sample, we conducted a series of transformations. Because the pollen data were the most sparsely sampled, we used a spline to fit and downsample the lake level and charcoal data to match them. To make the pollen and lake level data equivalent, we conducted a principal coordinates analysis (PCoA) using R software (72). PCoA is similar to the widely known PCA in that it decomposes a data matrix to obtain eigenvalues and their corresponding eigenvectors. The difference is that while PCA decomposes the variance-covariance matrix, PCoA solves for the eigenvalues of a distance matrix of the original data. To create the distance matrix, we used the χ2 distance, which is appropriate for proportion data such as pollen. The PCoA results in a set of scores, representing the original data, which can be plotted similarly to PCA. In our case, these scores are not only useful for graphic illustration; because they are normalized and Euclidean, they are equivalent in form to the lake level data while maintaining all of the information contained in the original pollen dataset. This procedure allowed us to use the PCoA pollen scores in conjunction with the lake level variable in the NP-MANOVA to test whether there was a difference in environment before and after 85 ka. For the Supplementary Materials statistics, biplots of species richness and lake level were constructed using the ggplot2 package in R. Box-and-whisker plots (quartiles) used the boxplot command in base R.
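The sketch below illustrates the PCoA and permutation-test logic with hypothetical objects (pollen_props, lake_pc1, period); the χ2 distance is computed by hand in one common form, cmdscale provides the PCoA, and vegan's adonis2 (PERMANOVA) stands in for the NP-MANOVA of (71).

library(vegan)

# one common form of the chi-square distance for proportion data
chisq_dist <- function(P) {
  P  <- P / rowSums(P)                 # row profiles
  cm <- colSums(P) / sum(P)            # column masses (average profile)
  dist(sweep(P, 2, sqrt(cm), "/"))     # Euclidean distance on rescaled profiles
}

# pollen_props: matrix of pollen proportions (rows = samples matched to lake level samples)
# lake_pc1:     first principal component of the lake level proxies at the same samples
# period:       factor marking each sample as before or after ~85 ka
scores <- cmdscale(chisq_dist(pollen_props), k = 2)   # PCoA of the pollen distance matrix

env <- cbind(scores, lake_pc1)         # pollen scores and lake level in one Euclidean space
adonis2(dist(env) ~ period, data = data.frame(period), permutations = 999)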

D. Delvaux, Peri-Tethys Memoir: Peri-Tethyan Rift/Wrench Basins and Passive Margins, P. A. Ziegler, W. Cavazza, A. H. F. Robertson, S. Crasquin-Soleau, Eds. (Memoirs of the National Museum of Natural History, Paris, 2001), pp. 545–567.

N. van Breemen, P. Buurman, Soil Formation, N. van Breemen, P. Buurman, Eds. (Springer Netherlands, Dordrecht, 1998), pp. 291–312.

C. Whitlock, C. Larsen, Charcoal as a fire proxy, in Tracking Environmental Change Using Lake Sediments: Terrestrial, Algal, and Siliceous Indicators, J. P. Smol, H. J. B. Birks, W. M. Last, R. S. Bradley, K. Alverson, Eds. (Springer Netherlands, 2001), pp. 75–97.

P. J. Schoeneberger, D. A. Wysocki, E. C. Benham; Soil Survey Staff, Field Book for Describing and Sampling Soils, Version 3.0 (Natural Resources Conservation Service, National Soil Survey Center, 2012).

G. Stoops, Guidelines for Analysis and Description of Soil and Regolith Thin Sections (Soil Science Society of America, Inc., 2003).

M. A. Courty, P. Goldberg, R. Macphail, Soils and Micromorphology in Archaeology (Cambridge Manuals in Archaeology, Cambridge Univ. Press, 1989).

R Core Team, R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, 2020).

W. R. Van Schmus, Natural radioactivity in crust and mantle, in Global Earth Physics: A Handbook of Physical Constants, T. J. Ahrens, Ed. (American Geophysical Union, 1995), pp. 283–291.

M. J. Aitken, An Introduction to Optical Dating: The Dating of Quaternary Sediments by the Use of Photon-Stimulated Luminescence (Oxford Univ. Press, 1998).

A. M. Alonso-Zarza, V. P. Wright, Calcretes, in Carbonates in Continental Settings: Facies, Environments and Processes, Developments in Sedimentology, A. M. Alonso-Zarza, L. H. Tanner, Eds. (Elsevier, 2010), vol. 61, pp. 225–267.

G. M. Ashley, D. M. Deocampo, J. Kahmann-Robinson, S. G. Driese, Groundwater-fed wetland sediments and paleosols: It's all about water table, in New Frontiers in Paleopedology and Terrestrial Paleoclimatology, S. G. Driese, L. C. Nordt, Eds. (SEPM Society for Sedimentary Geology, 2013), vol. 104, pp. 47–61.

M. N. Machette, Calcic soils of the southwestern United States, in Soils and Quaternary Geology of the Southwestern United States: Geological Society of America Special Paper, D. L. Weide, M. L. Faber, Eds. (Geological Society of America, 1985), vol. 203, pp. 1–21.

A. M. Alonso-Zarza, V. P. Wright, Palustrine carbonates, in Carbonates in Continental Settings: Facies, Environments and Processes, Developments in Sedimentology, A. M. Alonso-Zarza, L. H. Tanner, Eds. (Elsevier, 2010), vol. 61, pp. 103–131.

A. S. Goudie, Calcrete, in Chemical Sediments and Geomorphology, A. S. Goudie, K. Pye, Eds. (Academic Press, 1983), pp. 93–131.

R. J. Schaetzl, S. Anderson, Soil Genesis and Geomorphology (Cambridge Univ. Press, 2005).

V. P. Wright, Calcrete, in Geochemical Sediments and Landscapes, D. J. Nash, S. J. McLaren, Eds. (Blackwell Publishing, 2007), pp. 10–45.

G. Taylor, R. A. Eggleton, Regolith Geology and Geomorphology (John Wiley & Sons, Chichester, 2001).

V. P. Wright, in Soil Micromorphology: A Basic and Applied Science, L. A. Douglas, Ed. (Elsevier, 1990), pp. 401–407.

M. J. Vepraskas, L. P. Wilding, L. R. Drees, Aquic conditions for Soil Taxonomy: Concepts, soil morphology and micromorphology, in Developments in Soil Science, A. J. Ringrose-Voase, G. S. Humphreys, Eds. (Elsevier, 1993), vol. 22, pp. 117–131.

M. J. McFarlane, Laterite and Landscape (Academic Press, 1976).

J. E. Delvigne, Atlas of Micromorphology of Mineral Alteration and Weathering (Mineralogical Association of Canada, 1998).

I. Kovda, A. R. Mermut, Vertic features, in Interpretation of Micromorphological Features of Soils and Regoliths, G. Stoops, V. Marcelino, F. Mees, Eds. (Elsevier, 2010), pp. 109–127.

Y. Tardy, Petrology of Laterites and Tropical Soils (A. A. Balkema Publishers, 1997).

K. Faegri, J. Iversen, P. E. Kaland, K. Krzywinski, Textbook of Pollen Analysis (Blackburn Press, 4th ed., 1989).

R. Bonnefille, G. Riollet, Pollens de Savanes d'Afrique Orientale (Éditions du Centre national de la recherche scientifique, Paris, 1980).

E. C. Grimm, Tilia and Tiliagraph (Illinois State Museum, 1991).

B. M. Campbell, The Miombo in Transition: Woodlands and Welfare in Africa (Center for International Forestry Research, Bogor, Indonesia, 1996).

The rest is here:
Early human impacts and ecosystem reorganization in southern-central Africa - Science Advances

What Do Mountain Lions Think Of Humans? Find Out At The (Virtual) Pub! – kclu.org

The Santa Barbara Natural History Museum is hosting an online event that explores how human presence impacts mountain lion behavior.

Mountain lions are mysterious and reclusive animals, and the Science Pub From Home run by the Santa Barbara Natural History Museum is exploring how human behavior impacts mountain lion behavior.

The speaker is UC Santa Cruz Professor Chris Wilmers, who studies mountain lions.

He told KCLU that mountain lions "don't particularly care" for humans.

"For the most part [they] try to avoid us," he said.

"Humans make it incredibly hard to be a mountain lion. We put up barriers like roads which make it very hard for them to move form part of their habitat to another."

Details about how to join the conversation are here.

Original post:
What Do Mountain Lions Think Of Humans? Find Out At The (Virtual) Pub! - kclu.org