
Treatment of Atopic Dermatitis and Psoriasis in People Who Are … – Managed Healthcare Executive

Women who are pregnant don't have to stop all of their treatments. Some can be safely treated for their psoriasis or eczema during pregnancy, according to a presentation today at the annual meeting of the American Academy of Dermatology.

Women who are pregnant and who have psoriasis or eczema often have limited options. The evidence about the effect that the systemic medications might have on pregnant women and their fetuses is limited.

But women deserve more than just topical treatment and moisturizers, and doctors may be more restrictive than necessary, Elizabeth Kiracofe, M.D., a dermatologist in private practice in Chicago, said during a presentation today at the annual meeting of the American Academy of Dermatology in New Orleans.

"We are more restrictive with both systemics and topical medications in our patients who have atopic dermatitis and psoriasis in a way that is not scientifically backed," she said. "We may be doing a disservice to patients by being too restrictive in our prescribing patterns. We may have concerns, and there are uncertainties, but these patients deserve to be treated because there's also risk of nontreatment. These patients have a chronic immune-mediated inflammatory disease and they're pregnant. Having an uncontrolled chronic immune-mediated inflammatory disease is also not healthy in pregnancy."

Inadequate control of disease can lead to flares and infections and even septicemia, Kiracofe said. Additionally, she said, depression is more common in women with psoriasis than in healthy women without psoriasis. And 21% of pregnant women who have psoriasis suffer from depression, compared with 10% of non-psoriasis patients.

In the United States, 4.6% of women have atopic dermatitis, and 3.2% of women have psoriasis. About 75% of women are diagnosed before the age of 40. At the same time, the representation of women in psoriasis studies has decreased. In a recent JAMA study, investigators found that the share of clinical studies with at least 45% women in the enrolled population decreased from 87.2% in the 2010 to 2015 timeframe to 29.5% in the 2015 to 2020 timeframe.

Kiracofe acknowledged that the treatment choices for pregnant patients with atopic dermatitis are complicated. There are no large clinical studies on the possible effects and side effects of biologics on conception, pregnancy and lactation.

"When treating pregnant women, it's important to use the science," Kiracofe said. "We can't base decisions on the label alone; we need to know the pathophysiology, we need to know how the body works. And we need to have a shared decision-making process and talk with our patients because that is a really important conversation."

An important piece of the puzzle in the use of biologics in psoriasis and atopic dermatitis during pregnancy is the degree of immune suppression the infant experiences within the first few months of life.

"It's important to think about the impact of using biologics in the second and third trimesters. That's a really big frameshift when thinking about medication interacting with our patients. We're taught from medical school to think about first-trimester fetal malformations. But for these types of systemic medications, we actually want to be thinking about the second and third trimesters," she said.

Kiracofe described how the placental Fc receptor "grabs" medications; the receptor is instrumental in allowing them to cross the placenta. Cimzia (certolizumab) is the only biologic with a confirmed safety profile during pregnancy and lactation, with no increased mortality rate for the fetus, Kiracofe told the audience. Cimzia is a TNF-alpha inhibitor that doesn't bind to the placental Fc receptor.

Cimzia was approved in September 2013 by the FDA as a treatment of active psoriatic arthritis and in May 2018 as a treatment of moderate-to-severe psoriasis in adults. It is also approved for treating rheumatoid arthritis, Crohn's disease, ankylosing spondylitis, and non-radiographic axial spondyloarthritis.


What Human Behavior Tells Us About How To Get Hybrid Work Right – Forbes

Reactionary Rulemaking


I once transferred to a school that had a strange and in-your-face rule: No Climbing Trees.

And not just when on school grounds; as long as you attended, it was No Climbing Trees Ever.

This rule had two unintended psychological effects.

First, I now wanted to climb a lot more trees than I had my whole life. And second, I hated the administration.

The rule came about (I investigated) because a student once broke his foot climbing a tree. To prevent future headaches, the powers-that-be made a blanket rule.

Now, as far as I'm aware, there aren't a lot of peer-reviewed studies on the benefits of climbing trees, but this rule simultaneously prohibited something potentially positive (clean fun/exploration/nature/exercise) and created animosity, making it potentially more difficult for the school to get my cooperation on other, weightier matters.

Lately, when I consult with organizations about teamwork, or read articles about the difficulties of hybrid work, like this one in last week's New York Times, I have thought about the No Climbing Trees era of my young life.

Remote work was a well-meaning reaction to the dire circumstances of the COVID-19 pandemic. There are clear benefits to working from anywhere, many of which, we've discovered, are still benefits in a non-pandemic situation.

But now that the pandemic has largely stabilized, many businesses want the benefits of in-person work back.

It's like for two years leaders said, "Go ahead! Climb trees! It's good exercise!" But now, they would like us to not be in trees and be in an office sometimes.

Some organizations have reacted with a blanket rule: No Remote Work Anymore.

And given that working remotely has even more benefits than climbing trees, it's easy to see why the human reactions of "I never wanted to work from home so badly as now" and "I hate this company" have come on so strongly.

Seeing some companies step in this cow pie, many leaders have opted for Hybrid Work policies. The rationale is that we can have the best of both worlds, and workers will be happier. And in the right circumstances, this is exactly the case. But from a psychology standpoint, many leaders who attempt this are actually tromping on the same turd.

Most Hybrid Work policies I've seen are the equivalent of "No Climbing Trees, Except Sometimes" or "Only Climb Trees 2 Days A Week Or Else." With these kinds of rules, even if workers understand the benefits to the company of some in-person work, nanny-style rules tend to make workers feel like they're being restricted because of someone else's incompetence.

And this reinforces a negative trust loop in their employers.

After all, the ticker tape in the back of your head will say, if your boss truly trusted you, she wouldn't need to put in rules that make you feel like a kid. She'd assume you'll use good judgment and decide when it's best to work remotely and when in-person will benefit your work and your team.

The solution for leaders who don't want to think too hard, or who see clear benefits to only one type of work, is to make a 100% Remote or a 100% In-Person policy and leave it at that. Some people will not like it. But you won't have to relitigate the issue. And perhaps the cost in some areas of your team (in terms of productivity, resentment) is worth it to you.

This is like the Abstinence method of addiction recovery. Never again. No room for misinterpretation. For better or worse. And I think this is a bargain worth making in some cases. If you never ever ever slip up (or never have an important reason to bend the rule).

The worst solution, I believe, is to create hard-and-fast rules that people can easily find good reasons to violate.

A hard rule of two days a week in-person might seem like a perk (flexibility!), but it can be like an alcoholic deciding only to drink two drinks a week. It works, until you have a lapse, which is easy to do in the absence of training or well-established habits. After a lapse, as the psychology of the abstinence violation effect shows us, it's very easy for people to get into an "all is lost" mentality. It's your husband's birthday, so you have that third drink. Oops. Then you say, "screw it." You're drinking all weekend. Future You can pick up the pieces.

When this happens at work (e.g. someone has a good reason to come in only one day this week), the boss is left to decide how to deal with the violationor else they risk the rest of the team concluding the rule has no teeth.

Or perhaps worse, they're seen as playing favorites.

And suddenly, the well-meaning hybrid work policy has become more of a headache than it might be worth.

The solution for leaders who want the benefits of both Remote and In-Person is to do what successful anti-substance-abuse training programs do. Instead of telling students to "just say no" and leaving them to fend for themselves when high-pressure situations arise (plus risking an abstinence violation effect), as Scientific American reports, effective anti-drug programs "involve substantial amounts of interaction between instructors and students. They teach students the social skills they need to refuse drugs and give them opportunities to practice these skills with other students."

Instead of banning all tree climbingor not banning it and letting untrained tree-climbers risk their safetyit's more effective to coach people how to safely navigate the ins and outs of climbing trees, how to make wise climbing decisions, or how to use a buddy system to stay safe when you go out exploring.

Likewise, instead of just banning Remote work, or just placing headache-inducing Hybrid rules in place, effective leaders need to spend time with their team members training on how to manage their work in tricky situations.

They need to train them in social and team skills that help them identify how best to pull through for their teammatesand how to help each other navigate work-life integration.

They need to train teams to develop benevolence-based trust and charity-based conflict resolution skills.

They need to ditch easy-but-backfiring rules for more empowering principles.

All of this takes more time and thinking than posting a "No Climbing Trees Ever" sign. But for those of us who are convinced of the benefits of remote work flexibility, it's time and thinking well spent.

Shane Snow is a bestselling author, keynote speaker, and CEO.

Journalist covering innovation and human behavior, media-tech entrepreneur, and the bestselling author of three books, including Dream Teams: Working Together Without Falling Apart.


What dogs do when humans are not around, according to experts – Salon

The human-dog bond is ancient: we have co-evolved together since before writing even existed. Our long cohabitation with dogs has granted both species a unique insight into the other's feelings: dogs, for instance, know when you are looking into their eyes, unlike wolves and other animals. And dogs can understand human language to some extent: one Guinness-worthy dog knows over 1,000 nouns.

Yet for all our mutual insights, we can't truly see inside the mind of a dog nor can we know for sure what they're thinking, or what they do when we're not looking. And while cameras that watch our pets can reveal what they are doing, it's harder to know what they're thinking in private. What can dog owners know for sure?


First, we know that they do indeed miss their humans. MRI tests of dogs' brains confirm that dogs associate the sounds and smells of their preferred humans with positive rewards. Because dogs are intelligent and perceptive about their environment, they quickly figure out patterns that indicate a human is about to leave (picking up keys, walking toward the door) and clearly communicate feelings of distress when that happens. When secretly recorded, dogs who are alone in their homes often spend time at the door where their preferred human left, quite likely hoping they will soon return.

Yet if your heart aches at the thought that your dog does nothing but emotionally suffer while you are gone, rest at ease. There is plenty of research on domestic canine behavior and we know that, in addition to missing you, dogs routinely take naps.

"Previous research has demonstrated that dogs mostly spend their time resting when the owner is gone," Dr. Erica N. Feuerbacher, an associate professor at Virginia Tech's Department of Animal & Poultry Science, told Salon by email. When they are not peacefully snoozing, dogs may also engage in what is known as "vigilant behavior," performing their self-assigned duty of guarding your home, "likely when they hear or see something outside, like a car or someone walking down the sidewalk."

When they are neither tired nor on alert, dogs may occupy themselves with play. This is why humans may return home to find their property damaged.


"Of course some dogs engage in behaviors that are probably less desirable to their owners, like counter surfing or getting into the trash or vocalizing," Feuerbacher explained. "Some dogs do develop separation anxiety, which is a severe behavioral issue; other dogs are simply bored or take advantage of the owner not being there to explore places (like the counter) where they are usually forbidden from. But if they find something good up there to eat, that behavior will continue to happen."

It is important to remember that dogs, like humans, have quirks specific to their individual personalities. As such, their solitary behavior can be hard to predict.

"What dogs do when we are not around also depends on the individual, age, location and even the quality of relationship we share with them," Dr. Monique Udell, an associate professor who specializes in human-animal bonding at Oregon State University, told Salon by email. Puppies, for instance, are more likely to get into mischief because they are biologically programmed to spend more of their time in activities like exploring and teething. Younger dogs can also experience more frequent bathroom problems, similar to older dogs.

"Puppies, whose bodies are still developing, as well as older dogs who may be experiencing health problems or cognitive decline, are often less likely to be able to avoid urinating or defecating when left alone for longer periods of alone time," Udell pointed out. "Dogs with separation anxiety experience greater than normal distress when left alone, and may panic or try to escape, which can result in injury or damage to property." Like Feuerbacher, however, Udell emphasized that dogs spend most of their solitary time sleeping, and that this is healthy as long as the rest of their environment is sufficiently stimulating.


"One important thing concerned humans can do, is make sure that the time they do spend with their dogs is quality time," Udell explained. "Dogs with secure attachment bonds to their owner are also less likely to display separation anxiety when their owner is away. Owners who have high expectations of their dogs (engage in positive reinforcement training, have consistent rules) and are highly responsive to their dog's needs (provide attention, recognize and respond when their dog is scared or sick) are more likely to raise secure dogs."

While dogs need their rest and therefore benefit from some time away from their humans, that does not mean all dogs will naturally accept that isolation. Fortunately, as Feuerbacher tells Salon, there are ways to train dogs to be as okay with temporary separation from you as you are from them.

"First, owners should work on their dog tolerating being left alone," Feuerbacher explained. "Dogs are social animals, so the owner leaving can be upsetting to the dog. You can do this by practicing lots of short departures, like running out to check the mail and coming back in, gardening for a few minutes and coming back in, taking a quick trip to the grocery store. This is especially useful when you bring a new dog home."

It can also be helpful to leave dogs with toys and other enrichment items: bones, stuffed animals, chew devices, and so on. Finally, if their humans will be gone for a while, owners should either paper train their dogs or ask someone to take the dog out for a walk periodically. It is cruel to expect a dog to hold in its excrement for too long. After all, while "The Secret Life of Pets" is not scientifically accurate, the essential point of the story (that dogs lead rich lives separate from their humans, and should be respected as such) is certainly true.

"While [the movie] might be fictional, I hope it does help folks recognize that their animals lead very rich lives, with their own interests, like smelling certain smells, or getting to visit a dog friend," Feuerbacher told Salon. "This also comes into play when we are interacting with our dogs. We might want them to sit or do some other behavior we want, but it's worth remembering they have their own interests (such as smelling a certain patch of grass!) that don't align with what I want them to do."


Theresa Welles named new director of Penn State Psychological … – Pennsylvania State University

Welles spent five years at Anxiety Specialists of Atlanta, where she started as a staff psychologist before being promoted to clinical director. During her tenure as director, the practice, which specializes in anxiety spectrum disorders, obsessive compulsive disorder (OCD), and post-traumatic stress disorder (PTSD), doubled in size.

She's proud of her time there and grateful to have worked with such a talented group of clinicians, and for all that she learned while navigating the unique challenges created by the COVID-19 pandemic.

"While the pandemic was challenging, some of the innovations that came out of it, like telemedicine, have been amazing, and were not only great for some patients but to some extent better from a business perspective as well," Welles said. "And I think it's important to identify and get stakeholders behind some of the incredible technology out there that streamlines important documentation such as billing, clinical notes, and scheduling, and allows more efficient use of staff, faculty, and student time, which ultimately benefits the patients."

"We are truly at a crossroads when it comes to mental health," she added. "The stigma in many ways has been lifted, and the need for mental health services has been highlighted everywhere, from professional sports to the entertainment industry, and younger generations are much more accepting of psychological therapy than ever before. People appreciate us, but we're still under-resourced. So, being a part of a program that is contributing so much in terms of research and training is incredibly exciting, and aligns with my lifelong commitment to advocate for mental health parity in the medical industry."

Welles has been fascinated by human behavior for as long as she can remember. While an undergraduate, she took a psychology course that inspired her to pursue it as a career. Among other things, she loved that the field was a combination of science, medicine, and philosophy, as one of her professors described it while she was pursuing her doctorate in counseling psychology and school psychology at Florida State University.

Earlier in her career, Welles served as module lead of behavioral health at Kaiser Permanente Gwinnett Comprehensive Medical Center, an experience that led her to highly value the importance of collaborative care, which she believes is the future of behavioral health medicine. And her experiences in higher education, including serving as the assistant director of counseling and psychological services at 40,000-student Kennesaw State University in the Atlanta metro area, have prepared her well for the challenges of her new position.

"I'm a very analytical person, and can be introverted at times, but I really value connecting with other people and find helping people, in any way, incredibly rewarding," Welles said. "And I'm a huge believer in the importance of a liberal arts education and am excited our program is housed in that college at Penn State. Psychology is the science of human behavior; its entire focus is on the human experience, making it generalizable to just about any chosen career, whether you're a teacher, an anthropologist, a lawyer, an artist, or a stockbroker."

Welles and her husband have four children, including one who will begin attending Penn State later this year. Though it was difficult leaving her patients and colleagues in Atlanta, she's incredibly excited about this new professional and personal chapter in her life.

"I'm really grateful to be working with such an incredible group of people," she said. "Being able to be part of the clinical psychology program's future, and continuing to support a psychological clinic of excellence where science and practice come together: that's very exciting to me. I can't wait to get started."


Droughts bring disease: Here are four ways they do it – Phys.org


Countries in the Horn of Africa have been hit by a multiyear drought. Ethiopia, Kenya, Somalia and Uganda are expected to continue getting below-normal rainfall in 2023. Excluding Uganda, 36.4 million people are affected and 21.7 million are in need of food assistance.

Climate change projections show changes in temperature and rainfall extremes, especially without emissions reductions. Some parts of Africa are projected to become wetter and others drier. Prolonged dry spells, particularly in semi-arid and arid regions, may have serious impacts, particularly if people aren't prepared.

Droughts can have wide-ranging implications for the affected population. The decreased availability of wateroften accompanied by high temperaturescan increase the risk of contamination, cause dehydration and result in an inability to wash and maintain hygiene practices.

Droughts can have an impact on non-resistant crops and livestock, causing malnutrition and food insecurity. The economic implications of agricultural losses can go on to affect mental health, gender-based violence and poverty.

The changes to the environment and human behavior caused by drought can also lead to higher exposure to disease-causing organisms, increasing the risk of infections and disease outbreaks. Diseases that are spread through food, water, insects and other animals can all break out during times of drought, and these outbreaks often overlap. Understanding and managing the known risk factors for these outbreaks, and how drought can exacerbate them, is important in preventing infectious disease mortality during drought.

During droughts there can be changes in what kinds of food are accessible, as less water is available to produce and process it. Food insecurity can lead to malnutrition, which has an impact on immunity. Certain foods may become less available and it may not be possible to reduce food contamination via traditional methods of acidification such as lemon juice, curdled milk, tamarind and vinegar.

Food insecurity can lead to an increased reliance on roadside food vendors. Food vendors are often linked to food-borne disease outbreaks as hygiene standards can vary widely and are often poorly regulated. Cooking fuel, particularly wood, may be in short supply, so food may be eaten cold, raw or without re-heating, increasing the chances of contamination.

Food-borne diseases linked to droughts include cholera, dysentery, salmonella and hepatitis A and E. But any food-borne pathogen can be a risk during times of water scarcity.

The impact of drought on water availability also affects water-borne pathogens. It can change the environment and human behavior in ways that increase transmission risks, similar to food-borne diseases.

During times of limited water resources, a pathogen can become more concentrated in the environment, particularly when higher temperatures suit its growth.

Risky water use behaviors may increase. People might use water sources they would normally avoid, and reduce hand-washing.

Water-borne diseases linked to droughts include cholera, dysentery, typhoid and rotavirus.

Breeding sites for vectors such as mosquitoes may be reduced during drought because there is less groundwater for females to lay their eggs in. But new areas may be created. Droughts can lead to an increase in stored potable water, due to stockpiling or the delivery of water aid to households from the government or NGOs. If water containers are open, this can create ideal vector breeding grounds. Open containers may also move the vector breeding ground, and therefore the vector, closer to the household.

Changes in temperature and water can affect egg and larval survival and intermediate or animal host transmission, helping the pathogen to survive longer. Higher temperature can affect vector behavior, mainly biting frequency and timing of feeding, altering transmission.

Vector-borne diseases linked to droughts include West Nile virus, St Louis encephalitis, Rift Valley fever, chikungunya and dengue.

Zoonotic diseases are those that can be transmitted from animals to humans. Water scarcity increases the pressure on water sources, and so water is used for several purposes and may be shared by livestock, wildlife and people. Interactions between humans, livestock and wildlife increase, expanding the opportunity for contact and disease transmission. Food supply issues and agricultural losses may also increase reliance on bushmeat for food and income, which can be a risk for zoonotic disease spillover.

Recent examples of zoonotic disease spillover include Nipah virus, Ebola and monkeypox (recently renamed mpox).

At an individual level, education around disease risks is important. This will allow people to make informed choices to protect their health to the best of their abilities. Household water should be covered. And personal and food hygiene should be maintained as much as possible.

To prevent drought-related disease outbreaks, pre-existing vulnerability (poverty, access to water, education) needs to be addressed. It is not the drought that causes the outbreak, but instead how society deals with these dry conditions.

Better water resource management is needed at a regional and international level, to treat large water sources as a common resource for all. Authorities need to act to provide drought assistance. This includes safe water to prevent the use of poor quality water sources, and agricultural and food aid to mitigate dehydration and malnutrition.


Animal personalities can trip up science, but there's a solution – The Hindu

Several years ago, Christian Rutz started to wonder whether he was giving his crows enough credit. Rutz, a biologist at the University of St. Andrews in Scotland, and his team were capturing wild New Caledonian crows and challenging them with puzzles made from natural materials before releasing them again. In one test, birds faced a log drilled with holes that contained hidden food, and could get the food out by bending a plant stem into a hook. If a bird didn't try within 90 minutes, the researchers removed it from the dataset.

But, Rutz says, he soon began to realize he was not, in fact, studying the skills of New Caledonian crows. He was studying the skills of only a subset of New Caledonian crows: those that quickly approached a weird log they'd never seen before, maybe because they were especially brave, or reckless.

The team changed their protocol. They began giving the more hesitant birds an extra day or two to get used to their surroundings, then trying the puzzle again. "It turns out that many of these retested birds suddenly start engaging," Rutz says. "They just needed a little bit of extra time."

Scientists are increasingly realizing that animals, like people, are individuals. They have distinct tendencies, habits and life experiences that may affect how they perform in an experiment. That means, some researchers argue, that much published research on animal behavior may be biased. Studies claiming to show something about a species as a whole (that green sea turtles migrate a certain distance, say, or how chaffinches respond to the song of a rival) may say more about individual animals that were captured or housed in a certain way, or that share certain genetic features. That's a problem for researchers who seek to understand how animals sense their environments, gain new knowledge and live their lives.

"The samples we draw are quite often severely biased," Rutz says. "This is something that has been in the air in the community for quite a long time."

In 2020, Rutz and his colleague Michael Webster, also at the University of St. Andrews, proposed a way to address this problem. They called it STRANGE.

Why STRANGE? In 2010, an article in Behavioral and Brain Sciences suggested that the people studied in much of the published psychology literature are WEIRD, drawn from Western, Educated, Industrialized, Rich and Democratic societies, and are among the least representative populations one could find for generalizing about humans. Researchers might draw sweeping conclusions about the human mind when really they've studied only the minds of, say, undergraduates at the University of Minnesota.

A decade later, Rutz and Webster, drawing inspiration from WEIRD, published a paper in the journal Nature called "How STRANGE are your study animals?"

They proposed that their fellow behavior researchers consider several factors about their study animals, which they termed Social background, Trappability and self-selection, Rearing history, Acclimation and habituation, Natural changes in responsiveness, Genetic makeup, and Experience.

"I first began thinking about these kinds of biases when we were using mesh minnow traps to collect fish for experiments," Webster says. He suspected, and then confirmed in the lab, that more active sticklebacks were more likely to swim into these traps. "We now try to use nets instead," Webster says, to catch a wider variety of fish.

That's Trappability. Other factors that might make an animal more trappable than its peers, besides its activity level, include a bold temperament, a lack of experience or simply being hungrier for bait.

Other research has shown that pheasants housed in groups of five performed better on a learning task (figuring out which hole contained food) than those housed in groups of only three; that's Social background. Jumping spiders raised in captivity were less interested in prey than wild spiders (Rearing history), and honeybees learned best in the morning (Natural changes in responsiveness). And so on.

It might be impossible to remove every bias from a group of study animals, Rutz says. But he and Webster want to encourage other scientists to think through STRANGE factors with every experiment, and to be transparent about how those factors might have affected their results.

"We used to assume that we could do an experiment the way we do chemistry, by controlling a variable and not changing anything else," says Holly Root-Gutteridge, a postdoctoral researcher at the University of Lincoln in the United Kingdom who studies dog behavior. But research has been uncovering individual patterns of behavior (scientists sometimes call it personality) in all kinds of animals, from monkeys to hermit crabs.

"Just because we haven't previously given animals the credit for their individuality or distinctiveness doesn't mean that they don't have it," Root-Gutteridge says.

This failure of human imagination, or empathy, mars some classic experiments, Root-Gutteridge and coauthors noted in a 2022 paper focused on animal welfare issues. For example, experiments by psychologist Harry Harlow in the 1950s involved baby rhesus macaques and fake mothers made from wire. They allegedly gave insight into how human infants form attachments. But given that these monkeys were torn from their mothers and kept unnaturally isolated, are the results really generalizable, the authors ask? Or do Harlow's findings apply only to his uniquely traumatized animals?

"All this individual-based behavior, I think this is very much a trend in behavioral sciences," says Wolfgang Goymann, a behavioral ecologist at the Max Planck Institute for Biological Intelligence and editor-in-chief of Ethology. The journal officially adopted the STRANGE framework in early 2021, after Rutz, who is one of the journal's editors, suggested it to the board.

Goymann didn't want to create new hoops for already overloaded scientists to jump through. Instead, the journal simply encourages authors to include a few sentences in their methods and discussion sections, Goymann says, addressing how STRANGE factors might bias their results (or how they've accounted for those factors).

"We want people to think about how representative their study actually is," Goymann says.

Several other journals have recently adopted the STRANGE framework, and since their 2020 paper Rutz and Webster have run workshops, discussion groups and symposia at conferences. "It's grown into something that is bigger than we can run in our spare time," Rutz says. "We are excited about it, really excited, but we had no idea it would take off in the way it did."

His hope is that widespread adoption of STRANGE will lead to findings in animal behavior that are more reliable. The problem of studies that can't be replicated has lately received much attention in certain other sciences, human psychology in particular.

Psychologist Brian Nosek, executive director of the Center for Open Science in Charlottesville, Virginia and a coauthor of the 2022 paper "Replicability, Robustness, and Reproducibility in Psychological Science" in the Annual Review of Psychology, says animal researchers face similar challenges to those who focus on human behavior. "If my goal is to estimate human interest in surfing and I conduct my survey on a California beach, I am not likely to get an estimate that generalizes to humanity," Nosek says. "When you conduct a replication of my survey in Iowa, you may not replicate my finding."

The ideal approach, Nosek says, would be to gather a study sample that's truly representative, but that can be difficult and expensive. "The next best alternative is to measure and be explicit about how the sampling strategy may be biased," he says.

That's just what Rutz hopes STRANGE will achieve. If researchers are more transparent and thoughtful about the individual characteristics of the animals they're studying, he says, others might be better able to replicate their work and be sure the lessons they're taking away from their study animals are meaningful, and not quirks of experimental setups. "That's the ultimate goal."

In his own crow experiments, he doesnt know whether giving shyer birds extra time has changed his overarching results. But it did give him a larger sample size, which can mean more statistically robust results. And, he says, if studies are better designed, it could mean that fewer animals need to be caught in the wild or tested in the lab to reach firm conclusions. Overall, he hopes that STRANGE will be a win for animal welfare.

In other words, what's good for science could also be good for the animals: "seeing them not as robots," Goymann says, "but as individual beings that also have a value in themselves."

Read more:
Animal personalities can trip up science, but there's a solution - The Hindu

Annual Shaw Biology Lecture to feature New York Times best … – University of Southern Indiana

The University of Southern Indiana will host its 9th annual Shaw Biology Lecture at 7 p.m. Monday, April 17 in Mitchell Auditorium, located in the Nursing and Health Professions Building. Frans de Waal, New York Times bestselling author, will present "Politics, Cognition, Morality: You Name It, Our Fellow Primates Have It All." The presentation is open to the public at no charge.

De Waal is a C.H. Candler Professor Emeritus of Psychology at Emory University, and is former Director of Living Links, a division of the Yerkes National Primate Research Center, established for primate studies to shed light on human behavioral evolution. A Dutch/American biologist, de Waal is known for his work on the behavior and social intelligence of primates.

In 2011, Discover Magazine named him among the 47 (All Time) Great Minds of Science, and in 2019, Prospect Magazine ranked him fourth on its list of the World's Top Thinkers. His scientific work has been published in hundreds of articles in journals such as Science and Nature, and in specialized volumes on animal behavior. His dozen popular books, translated into over 20 languages, have made him one of the world's most visible primatologists.

De Waal's bestsellers include "Are We Smart Enough to Know How Smart Animals Are?" and "Mama's Last Hug." His latest book is titled "Different: Gender Through the Eyes of a Primatologist." Following his presentation, de Waal will be available for a book signing.

The Shaw Lecture Series is funded by a USI Foundation endowment with support by the USI Biology Department and the Pott College of Science, Engineering, and Education.

For questions, contact Dr. Marlene Shaw, Professor Emerita of Biology, at mshaw@usi.edu.

Read more from the original source:
Annual Shaw Biology Lecture to feature New York Times best ... - University of Southern Indiana

Use This Powerful Theory to Be a Better Leader – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

Adept and nimble leadership is essential in today's fast-paced and ever-changing business world. Those in such positions are responsible for setting the tone, driving innovation and inspiring others to achieve. This is a heady mix of tasks, but how to perfect them? One powerful way is by leveraging Rene Girard's mimetic theory.

Girard, a French historian, literary critic and philosopher, developed a theory of human behavior that emphasizes the role of imitation and desire in social interactions. His concepts were based on the idea that, from a very young age, human beings are fundamentally imitative creatures, and that our desires and behaviors are largely shaped by the desires and behaviors of those around us. The resulting theory has gained a significant amount of attention in recent years, particularly among business leaders and entrepreneurs, not least because it provides a powerful framework for understanding both employee and consumer behavior.

The process plays out simply: When we see someone else achieve or acquire something we desire, we are more likely to imitate their behavior in the hopes of doing the same. And leaders might be well advised to apply this insight in the process of motivating and inspiring teams.

Related: To Be Heard and To be Admired

In a sense, we are always in competition with others, trying to outdo them in our pursuit of shared desires. However, this competition can often lead to conflict and rivalry, especially in a business setting where individuals may have different goals and aspirations. Mimetic theory helps leaders understand this, and ideally to find ways of channeling it positively, such as promoting healthy competition and collaboration in which team members work together to achieve shared goals. In such a culture of camaraderie and innovation, employees can feel valued, engaged and motivated to achieve their full potential.

To leverage Girard's theory, leaders can choose from several strategies (or apply them all):

Lead by example and demonstrate the behaviors and attitudes that they want others to emulate in an organization.

Identify shared desires and goals, and align those with the goals of the organization as a whole.

Create a culture of collaboration that values teamwork, open communication and shared ownership.

Encourage innovation and creativity by creating an environment that values pioneering ideas.

Related: 9 Ways Your Company Can Encourage Innovation

To put these strategies into action, follow these steps:

1: Evaluate the current company culture and identify areas for improvement.

2: Set goals and objectives that align with the company's vision and mission.

3: Communicate this new approach to employees and provide training and resources to support their success.

4: Monitor progress and make adjustments as needed.

To illustrate a few key aspects of mimetic theory, consider the example of Microsoft. In 2014, the company's new CEO, Satya Nadella, adopted a "growth mindset" that emphasized collaboration, creativity and innovation. He encouraged employees to work together to achieve shared goals and provided platforms for them to exchange ideas. Under Nadella's leadership, Microsoft's stock price nearly tripled, and the company's market capitalization grew to more than $2 trillion.

An example of a different kind can be found in F. Scott Fitzgerald's classic novel, The Great Gatsby. The character of Jay Gatsby, who supposedly embodies the American Dream, becomes the object of desire for many other characters in the novel, including narrator Nick Carraway and Gatsby's former lover, Daisy Buchanan. They imitate his behaviors and embrace similar desires, hoping to achieve the same success and happiness. Ultimately, however, the desire for imitation and competition leads to conflict and tragedy, which highlights the dangerous potential of unchecked mimetic desire. Business leaders can learn from this, too, by finding ways to channel desire positively, fostering healthy competition and collaboration.

Related: Entrepreneurship and Eudaimonia: The Pursuit Of Lasting Happiness

Girard's theory offers a roadmap for understanding the power of imitation, and so achieving success. With the right strategies, leaders can leverage it to help their teams achieve greatness and take companies to the next level.

See original here:
Use This Powerful Theory to Be a Better Leader - Entrepreneur

Students share perspectives on new design and data science majors – The Stanford Daily

In September, Stanford announced two major changes to its undergraduate education offerings: the former product design major was rebranded to the new design major, and the former data science minor would now be offered as both a B.A. and B.S. degree.

Current and prospective students from the programs shared their thoughts with The Daily.

New Design Major

The design major now belongs under the d.school's interdisciplinary programs (IDPs) and is categorized as a Bachelor of Science (B.S.) degree in Design. Previously, the product design major resulted in the conferral of a B.S. in Engineering. However, students may still choose to complete the product design engineering subplan if they matriculated before the 2022-2023 academic year.

The design major now has three methods tracks: Physical Design and Manufacturing, AI and Digital User Experience, and Human Behavior and Multi-stakeholder Research. From there, students also select one Domain Focus area from among Climate and Environment, Living Matter, Healthcare and Health Technology Innovation, Oceans and Global Development, and Poverty. While not possible in the 2022-23 academic year, students will be able to propose their own Domain Focus area as an honors option in the future.

Sydney Yeh '26 said that the major is "a great way to use my creative skills, apply it to technology and move with the current times."

She also believes that the shift from product design to broader design offerings is beneficial. "[While] people are pretty split [on this issue], I think it's a good change because there's more variety in what you can specialize in," Yeh said. "Before, it was mostly physical design and designing products."

Yeh intends to pursue the digital design track, as she is interested in designing apps and interfaces. She says the design major effectively weaves together her interests in art and computer science. "Originally, I was going to combine art and CS and design my own major, but found that the design major fits my goals," Yeh said.

Hannah Kang '26, another prospective design major, echoed Yeh's sentiments about combining interests in computer science and art. "[The major allows me] to integrate the art aspect and the STEM aspect that I know for sure that Stanford is excelling in," Kang said.

Kang also expressed her appreciation for the CS requirements of the design major, saying, "I'm trying to take more CS classes so that I can have at least the most fundamental CS knowledge [and can] seek ways to use my engineering skills to create something."

Sosi Day '25, a design major on the human behavior track, praised the collaborative and multidisciplinary aspects of design. "There's a lot of communal learning," she said. "It's also very creative, and it engages a lot of different parts of my brain. A lot of it is artistic, but there's also problem-solving skills involved."

Day said that as someone who seeks to apply design thinking to issues beyond manufacturing, the change in major has been a positive one for her. "I never considered doing a product design major last year, but now that they've added two new tracks, it's changed my mind," she said.

New Data Science Major

The new data science major was also announced this year. Whereas previously, students could only minor in data science, undergraduates now have the option of majoring on either the B.S. or B.A. track.

Professor Chiara Sabatti, associate director of Data Science's B.S. track, said that the B.A. has similar foundational requirements to the B.S. but has a concentration of interest in applying data science methods to solve problems in the social sciences.

According to Sabatti, the B.S. track is closely aligned with the former mathematical and computational science (MCS) major, which was phased out this year. She explained that the change to a data science major with broader offerings was made to more closely match MCS graduates' career paths, saying that "[the changes] are in response to the needs of the students and the demands of society."

Professor Emmanuel Candes, the Barnum-Simons Chair of math and statistics, said that the formal name change from MCS to data science occurred last spring, though the process of changing the curriculum and developing the B.S. and B.A. paths began in 2019.

Candes echoed Sabatti's reflections about students' career paths, saying, "we realized that more and more of our graduates [of Mathematical and Computational Science] were entering the workforce as data scientists, and it seems like the [new] name represents more of a reality."

The major program has shifted to accommodate this growing interest in data, according to Sabatti.

"The structure of the program has changed to make sure that we prepare students for this sustained interest in data science," Sabatti said. "For example, there's some extra requirements in computing, because the data sets that people need to work with require substantial use of computational devices, [and] there's some extra classes on inference and how you actually extract information from this data."

Similar to the new design major, many prospective data science majors say the interdisciplinary offerings of the major are enticing.

"I like [data science] because it's an intersection between technical fields and humanities-focused fields," said Caroline Wei '26, a prospective B.A. data science major on the Technology and Society pathway. "What makes data science so powerful is it gives you the option to draw conclusions about society and present that to the rest of the world."

Similarly, Savannah Voth '26, another prospective data science major, described the mix of humanities and technical skills she feels the major helps her build. "The data science B.A. allows me to use quantitative skills and apply it to the humanities and social sciences," she said.

Voth expressed some concerns regarding the ability to connect required coursework with data science more directly.

"One issue is that the requirements include classes in statistics and classes in areas you want to apply data science to, but there aren't as many opportunities to connect them," Voth said. "It would be cool if for each pathway, there was at least one class that is about data science applied to that topic."

Despite this concern, Voth praised the openness of the major's coursework. "I like how [the requirements] are very flexible and you can choose which area to focus on through the pathways."

Wei highlighted the effectiveness of the core requirements in building skills and perspectives, saying, "The ethics [requirement] is relevant since you have to know how to handle data in an ethical way, the compsci core combines the major aspects of technical fields ... and the social science core helps you see why those technical skills are important."

Go here to read the rest:
Students share perspectives on new design and data science majors - The Stanford Daily

Miscalibration of Trust in Human Machine Teaming – War On The Rocks

A recent Pew survey found that 82 percent of Americans are more wary than excited, or equally wary and excited, about the use of artificial intelligence (AI). This sentiment is not surprising: tales of rogue or dangerous AI abound in pop culture. Movies from 2001: A Space Odyssey to The Terminator warn of the dire consequences of trusting AI. Yet, at the same time, more people than ever before are regularly using AI-enabled devices, from recommender systems in search engines to voice assistants in their smartphones and automobiles.

Despite this mistrust, AI is becoming increasingly ubiquitous, especially in defense. It plays a role in everything from predictive maintenance to autonomous weapons. Militaries around the globe are significantly investing in AI to gain a competitive advantage, and the United States and its allies are in a race with their adversaries for the technology. As a result, many defense leaders are concerned with ensuring these technologies are trustworthy. Given how widespread the use of AI is becoming, it is imperative that Western militaries build systems that operators can trust and rely on.

Enhancing understanding of human trust dynamics is crucial to the effective use of AI in military operational scenarios, typically referred to in the defense domain as human-machine teaming. To achieve trust and full cooperation with AI teammates, militaries need to learn to ensure that human factors are considered in system design and implementation. If they do not, military AI use could be subject to the same disastrous and deadly errors that the private sector has experienced. To avoid this, militaries should ensure that personnel training educates operators both on the human and AI sides of human-machine teaming, that human-machine teaming operational designs actively account for the human side of the team, and that AI is implemented in a phased approach.

Building Trust

To effectively build human-machine teams, one should first understand how humans build trust, specifically in technology and AI. AI here refers to models with the ability to learn from data, a subset called machine learning. Thus far, almost all efforts to develop trustworthy AI focus on addressing technology challenges, such as improving AI transparency and explainability. The human side of the human-machine interaction has received little attention. Dismissing the human factor, however, risks limiting the positive impacts that purely technology-focused improvements could have.

Operators list many reasons why they do not trust AI to complete tasks for them, which is unsurprising given the generally wary cultural attitude toward the technology outlined in the Pew survey above. However, research shows that humans often do the opposite with new software technologies. People trust websites with their personal information and use smart devices that actively gather that information. They even engage in reckless activity in automated vehicles not recommended by the manufacturer, which can pose a risk to one's life.

Research shows that humans struggle to accurately calculate appropriate levels of trust in the technology they use. Humans, therefore, will not always act as expected when using AI-enabled technology often they may put too much faith in their AI teammates. This can result in unexpected accidents or outcomes. Humans, for example, have a propensity toward automation bias, which is the tendency to favor information shared by automated systems over information shared by non-automated systems. The risk of this occurring with AI, a notorious black-box technology with frequently misunderstood capabilities, is even higher.

Humans often engage in increasingly risky behavior with new technology they believe to be safe, a phenomenon known as behavioral adaptation. This is a well-documented occurrence in automobile safety research. A study conducted by University of Chicago economist Sam Peltzman found no decreased death rate from automobile accidents after the implementation of safety measures. He theorized this was because drivers, feeling safer as the result of the new regulations and safety technology, took more risks while driving than they would have before the advent of measures made to keep them safe. For example, drivers who have anti-lock brakes were found to drive faster and closer behind other vehicles than those who did not. Even using adaptive cruise control, which maintains a distance from the car in front of you, leads to an increase in risk-taking behavior, such as looking at a phone while driving. While it was later determined that the correlation between increased safety countermeasures and risk-taking behavior was not necessarily as binary as Peltzman initially concluded, the theory and the concept of behavioral adaptation itself have gained a renewed focus in recent years to explain risk-taking behavior in situations as diverse as American football and the COVID-19 pandemic. Any human-machine teaming should be designed with this research and knowledge in mind.

Accounting for the Human Element in Design

Any effective human-AI team should be designed to account for human behavior that could negatively affect the team's outcomes. There has been extensive research into accidents involving AI-enabled self-driving cars, which has led some to question whether human drivers can be trusted with self-driving technology. A majority of these auto crashes using driver assistance or self-driving technology have occurred as a result of Tesla's Autopilot system in particular, leading to a recent recall. While the incidents are not exclusively a product of excessive trust in the AI-controlled vehicles, videos of these crashes indicate that this outsized trust plays a critical role. Some videos showed drivers asleep at the wheel, while others pulled off stunts like putting a dog in the driver's seat.

Tesla says its Autopilot system is meant to be used by drivers who are also keeping their eyes on the road. However, studies show that once Autopilot is engaged, humans tend to pay significantly less attention. There have been documented examples of deadly crashes with no one in the driver's seat or while the human driver was looking at their cell phone. Drivers made risky decisions they would not have made in a normal car because they believed the AI system was good enough to go unmonitored, despite what the company says or the myriad examples to the contrary. A report published as part of the National Highway Traffic Safety Administration's ongoing investigation into these accidents recommends that important design considerations include the ways in which a driver may interact with the system or the foreseeable ranges of driver behavior, whether intended or unintended, while such a system is in operation.

The military should take precautions when integrating AI to avoid a similar miscalibration of trust. One such precaution could be to monitor the performance not only of the AI but also of the operators working with it. In the automobile industry, video monitoring to ensure drivers are paying attention while the automated driving function is engaged is an increasingly popular approach. Video monitoring may not be an appropriate measure for all military applications, but the concept of monitoring human performance should be considered in design.

A recent Proceedings article framed this dual monitoring in the context of military aviation training. Continuous monitoring of the health of the AI system is like aircraft pre-flight and in-flight system monitoring. Likewise, aircrew are continuously evaluated on their day-to-day performance. Just as aircrew are required to undergo ongoing training on all aspects of an aircraft's employment throughout the year, so too should AI operators be continuously trained and monitored. This would not only ensure that military AI systems are working as designed and that the humans paired with those systems are not inducing error, but also build trust in the human-machine team.

Education on Both Sides of the Trust Dynamic

Personnel should also be educated about the capabilities and limitations of both the machine and human teammates in any human-machine teaming situation. Civilian and military experts alike widely agree that a foundational pillar of effective human-machine teaming is going to be the appropriate training of military personnel. This training should include education on both the AI systems capabilities and limitations, incorporating a feedback loop from the operator back into the AI software.

Military aviation is deeply rooted in a culture of safety through extensive training and proficiency through repetition, and this military aviation safety culture could provide a venue for necessary AI education. Aviators learn not just to interpret the information displayed in the cockpit but also to trust that information. This is a real-life demonstration of research showing that humans will more accurately perceive risks when they are educated on how likely they are to occur.

Education specifically relating to how humans themselves establish and maintain trust through behavioral adaptation can also help operators become more self-aware of their own, potentially damaging, behavior. Road safety research and other fields have repeatedly shown that this kind of awareness training helps to mitigate negative outcomes. Humans are able to self-correct when they realize they're engaging in undesirable behavior. In a human-machine teaming context, this would allow the operator to react to a fault or failure in that trusted system while retaining the benefit of increased situational awareness. Therefore, implementing AI early in training will give future military operators confidence in AI systems, and through repetition the trust relationship will be solidified. Moreover, a better understanding not only of the machine's capabilities but also of its constraints will decrease the likelihood of the operator incorrectly inflating their own level of trust in the system.

A Phased Approach

Additionally, a phased approach should be taken when incorporating AI to better account for the human element of human-machine teaming. Often, new commercial software or technology is rushed to market to outpace the competition and ends up failing when in operation. This often costs a company more than if they had delayed rollout to fully vet the product.

In the rush to build military AI applications for a competitive advantage, militaries risk pushing AI technology too far, too fast, to gain a perceived advantage. A civilian sector example of this is the Boeing 737 Max software flaws, which resulted in two deadly crashes. In October 2018, Lion Air Flight 610 crashed, killing all 189 people on board, after the pilots struggled to control rapid and un-commanded descents. A few months later, Ethiopian Airlines Flight 302 crashed, killing everyone on board, after pilots similarly struggled to control the aircraft. While the flight-control software that caused these crashes is not an example of true AI, these fatal mistakes are still a cautionary tale. Misplaced trust in the software at multiple levels resulted in the deaths of hundreds.

The accident investigation for both flights found that an erroneous input from an angle of attack sensor to the flight computer caused a cascading and catastrophic failure. These sensors measure the angle of the wing relative to the airflow and give an indication of lift, the ability of the aircraft to stay in the air. In this case, the erroneous input caused the Maneuvering Characteristics Augmentation System, an automated flight control system, to put the plane into repeated dives because it thought it needed to gain lift quickly. These two crashes resulted in the grounding of the entire 737 Max fleet worldwide for 20 months, costing Boeing over $20 billion.
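The core design flaw described above, an automated system acting on a single sensor input without cross-checking it, can be sketched in a few lines of Python. This is an illustrative toy, not Boeing's actual MCAS logic; the function names and thresholds are hypothetical.

```python
# Illustrative sketch of the single-sensor failure mode: a control function
# that acts on one unvalidated reading versus one that cross-checks
# redundant sensors. All names and threshold values here are hypothetical.

def pitch_command_single_sensor(aoa_reading, stall_threshold=15.0):
    """Command nose-down trim whenever the lone sensor reads above threshold."""
    return "nose_down" if aoa_reading > stall_threshold else "hold"

def pitch_command_cross_checked(aoa_left, aoa_right,
                                stall_threshold=15.0, max_disagreement=5.0):
    """Refuse to act on sensors that disagree; flag the fault to the crew."""
    if abs(aoa_left - aoa_right) > max_disagreement:
        return "alert_crew"  # disagreement implies a faulty sensor
    average = (aoa_left + aoa_right) / 2
    return "nose_down" if average > stall_threshold else "hold"

# One faulty sensor reads 40 degrees while the aircraft is actually level:
print(pitch_command_single_sensor(40.0))       # acts on the bad data
print(pitch_command_cross_checked(40.0, 2.0))  # detects the disagreement
```

The first function dives the aircraft on bad data; the second degrades gracefully by handing the decision back to the humans, which is the kind of design-for-fallibility the investigation recommendations point toward.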

This was all caused by a design decision and a resultant software change, assumed to be safe. Boeing, in a desire to stay ahead of their competition, updated a widely used aircraft, the base model 737. Moving the engine location on the wing of the 737 Max helped the plane gain fuel efficiency but significantly changed its flight characteristics. These changes should have required Boeing to market it as a completely new airframe, which would have meant significant training requirements for pilots to remain in compliance with the Federal Aviation Administration. This would have cost significant time and money. To avoid this, the flight-control software was programmed to make the aircraft fly like an older model 737. While flight-control software is not new, this novel use allowed Boeing to market the 737 Max as an update to an existing aircraft, not a new airframe. There were some issues noted during testing, but Boeing trusted the software due to previous flight control system reliability and pushed the Federal Aviation Administration for certification. Hidden in the software, however, was erroneous code that caused the cascading issues seen on the Ethiopian and Lion Air flights. Had Boeing not put so much trust in the software, or had the regulator not placed similar trust in Boeing's certification of the software, these incidents could have been avoided.

The military should take this as a lesson. Any AI should be phased in gradually to ensure that too much trust is not placed in the software. In other words, when implementing AI, militaries need to consider cautionary tales such as the 737 Max. Rather than rushing an AI system into operation to achieve a perceived advantage, it should be carefully implemented into training and other events before full certification to ensure operator familiarity and transparency into any potential issues with the software or system. This is currently being demonstrated by the U.S. Air Force's 350th Spectrum Warfare Wing, which is tasked with integrating cognitive electromagnetic warfare into its existing aircraft electromagnetic warfare mission. The Air Force has described the ultimate goal of cognitive electromagnetic warfare as establishing a distributed, collaborative system which can make real-time or near-real-time adjustments to counter advanced adversary threats. The 350th, the unit tasked with developing and implementing this system, is taking a measured approach to implementation to ensure that warfighters have the capabilities they need now while also developing algorithms and processes to ensure the future success of AI in the electromagnetic warfare force. The goal is to first use machine learning to speed up the aircraft software reprogramming process, which can sometimes take up to several years. The use of machine learning and automation will significantly shorten this timeline while also familiarizing engineers and operators with the processes necessary to implement AI in any future cognitive electromagnetic warfare system.

Conclusion

To effectively integrate AI into operations, more effort needs to be devoted not only to optimizing software performance but also to monitoring and training human teammates. No matter how capable an AI system is, if human operators miscalibrate their trust in the system, they will be unable to effectively capitalize on AI's technological advances and may make critical errors in design or operation. In fact, one of the strongest and most repeated recommendations to come out of the Federal Aviation Administration's joint investigation of the 737 Max accidents was that human behavior experts needed to play a central role in research and development, testing, and certification. Likewise, research has shown that in automated vehicle accidents, operators did not monitor the system effectively. This means that operators need to be monitored as well. Militaries should account for the growing body of evidence that human trust in technology and software is often miscalibrated. By incorporating human factors into AI system design, building relevant training, and using a carefully phased approach, the military can establish a culture of human-machine teaming that avoids the failures seen in the civilian sector.

John Christianson is an active-duty U.S. Air Force colonel and current military fellow at the Center for Strategic and International Studies. He is an F-15E weapons systems officer and served as a safety officer while on an exchange tour with the U.S. Navy. He will next serve as vice commander of the 350th Spectrum Warfare Wing.

Di Cooke is a visiting fellow in the International Security Program at the Center for Strategic and International Studies, exploring the intersection of AI and the defense domain. She has been involved in policy-relevant research and work at the intersection of technology and security across academia, government, and industry. Prior to her current role, she was seconded to the U.K. Ministry of Defence from the University of Cambridge to inform the U.K.'s approach to operationalizing defense AI and to ensure alignment with its AI Ethical Principles.

Courtney Stiles Herdt is an active-duty U.S. Navy commander and current military fellow at the Center for Strategic and International Studies. He is an MH-60R pilot and recently completed a command tour at HSM-74 as part of the Eisenhower Carrier Strike Group. Previously, he served in numerous squadron and staff tours, as an aviation safety and operations officer, and in various political-military posts around Europe and the Western Hemisphere, where he handled foreign military sales of equipment that utilized human-machine teaming.

The opinions expressed are those of the authors and do not represent the official position of the U.S. Air Force, U.S. Navy, or the Department of Defense.

Image: U.S. Navy photo by John F. Williams

Originally published as "Miscalibration of Trust in Human Machine Teaming" at War on the Rocks.