Category Archives: Human Behavior

Use This Powerful Theory to Be a Better Leader – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

Adept and nimble leadership is essential in today's fast-paced and ever-changing business world. Those in such positions are responsible for setting the tone, driving innovation and inspiring others to achieve. This is a heady mix of tasks, but how to perfect them? One powerful way is by leveraging Rene Girard's mimetic theory.

Girard, a French historian, literary critic and philosopher, developed a theory of human behavior that emphasizes the role of imitation and desire in social interactions. His concepts were based on the idea that, from a very young age, human beings are fundamentally imitative creatures, and that our desires and behaviors are largely shaped by the desires and behaviors of those around us. The resulting theory has gained a significant amount of attention in recent years, particularly among business leaders and entrepreneurs, not least because it provides a powerful framework for understanding both employee and consumer behavior.

The process plays out simply: When we see someone else achieve or acquire something we desire, we are more likely to imitate their behavior in the hopes of doing the same. Leaders would be well advised to apply this insight when motivating and inspiring teams.


In a sense, we are always in competition with others, trying to outdo them in our pursuit of shared desires. However, this competition can often lead to conflict and rivalry, especially in a business setting where individuals may have different goals and aspirations. Mimetic theory helps leaders understand this, and ideally to find ways of channeling it positively, such as promoting healthy competition and collaboration in which team members work together to achieve shared goals. In such a culture of camaraderie and innovation, employees can feel valued, engaged and motivated to achieve their full potential.

To leverage Girard's theory, leaders can choose from several strategies (or apply them all):

Lead by example and demonstrate the behaviors and attitudes that they want others to emulate in an organization.

Identify shared desires and goals, and align those with the goals of the organization as a whole.

Create a culture of collaboration that values teamwork, open communication and shared ownership.

Encourage innovation and creativity by creating an environment that values pioneering ideas.


To put these strategies into action, follow these steps:

1: Evaluate the current company culture and identify areas for improvement.

2: Set goals and objectives that align with the company's vision and mission.

3: Communicate this new approach to employees and provide training and resources to support their success.

4: Monitor progress and make adjustments as needed.

To illustrate a few key aspects of mimetic theory, consider the example of Microsoft. In 2014, the company's new CEO, Satya Nadella, adopted a "growth mindset" that emphasized collaboration, creativity and innovation. He encouraged employees to work together to achieve shared goals and provided platforms for them to exchange ideas. Under Nadella's leadership, Microsoft's stock price nearly tripled, and the company's market capitalization grew to more than $2 trillion.

An example of a different kind can be found in F. Scott Fitzgerald's classic novel, The Great Gatsby. The character of Jay Gatsby, who supposedly embodies the American Dream, becomes the object of desire for many other characters in the novel, including narrator Nick Carraway and Gatsby's former lover, Daisy Buchanan. They imitate his behaviors and embrace similar desires, hoping to achieve the same success and happiness. Ultimately, however, the desire for imitation and competition leads to conflict and tragedy, which highlights the dangerous potential of unchecked mimetic desire. Business leaders can learn from this, too, by finding ways to channel desire positively, fostering healthy competition and collaboration.


Girard's theory offers a roadmap for understanding the power of imitation, and thus for achieving success. With the right strategies, leaders can leverage it to help their teams achieve greatness and take their companies to the next level.


What are bio-computers? How can they help us dive deep into the human brain? – Jagran Josh

Science never stops evolving, and this time it has come up with a novel research area known as organoid intelligence. Can science and technology read the human mind? Let's find out.

Johns Hopkins University scientists recently put forward a plan for a novel area of research known as organoid intelligence. This field of study intends to create biocomputers that blend brain cell cultures grown in laboratories with input and output devices and real-world sensors. The aim is to harness the brain's processing power and dive deep into the biological basis of cognition, learning, and a myriad of neurological disorders.

Humans have always been inquisitive about the human mind, yet unlike other parts of the body, it has never been easy to study. Earlier, methods like ablation were used on animals, especially rats, to study brain structures that are similar in rats and humans. Using such techniques on animals, and sometimes harming them, in order to improve our understanding of human behavior has always been controversial. Moreover, while the rat brain was an easier and more accessible option for studying the human brain, one cannot ignore the massive differences in structure and function between the rat brain and the human brain. Later came advanced methods like EEG, MEG, and fMRI for studying the human brain directly.

Now the technology has matured enough that 3D brain cultures could be the next big thing. Modern-day scientists are designing brain organoids: 3D cultures of brain tissue grown in laboratories from human stem cells, often called "mini-brains." These organoids retain many structural and functional features of a developing human brain. Who thought mankind would be able to create a miniature human brain in the 21st century?

Human behavior is driven by internal or external stimulation. The human brain relies on sensory inputs such as vision, smell, and touch, which is part of what makes it such a complex yet incredible organ. A field of science still in its infancy cannot compete with nature: brain organoids not only lack the sensory inputs of a normal human brain but also have no blood circulation.

Bio-computers would be designed and created by combining brain organoids with modern computing methods. The organoids would be coupled to machine-learning systems and grown inside flexible structures fitted with arrays of electrodes, which one can visualize as similar to those used to take electroencephalogram (EEG) readings.

Such structures would record and study the firing patterns of neurons, and they would also deliver electrical stimuli to mimic sensory input. Machine-learning techniques would then be used to analyze the resulting data on behavior and biology.

Not long ago, scientists grew human neurons on top of a microelectrode array that could not only record from the neurons but also stimulate them. With the help of positive or negative electrical feedback derived from the sensors, the neurons could be trained to generate the activity pattern they would produce if they were playing a simple game resembling table tennis.
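To make that closed loop concrete, here is a minimal Python sketch of the record-act-feedback cycle described above. It is purely illustrative: the channel count, the toy Pong-like game, the reward scheme, and the weight-update rule are all assumptions for demonstration, not the parameters of any published experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a microelectrode array: a weight vector maps recorded
# "firing rates" on 32 channels to a paddle position in a Pong-like game.
n_channels = 32
weights = rng.normal(0.0, 0.1, n_channels)

def read_activity():
    """Pretend to sample firing rates from the electrode array."""
    return rng.poisson(5, n_channels).astype(float)

def feedback(hit):
    """Structured (predictable) stimulus after a hit, noisy stimulus after a miss,
    mirroring the positive/negative electrical feedback described above."""
    return 1.0 if hit else rng.normal(0.0, 1.0)

ball_y = 0.5
hits = 0
for trial in range(1000):
    rates = read_activity()
    centered = rates - rates.mean()
    paddle_y = 1.0 / (1.0 + np.exp(-weights @ centered))   # squash to [0, 1]
    hit = abs(paddle_y - ball_y) < 0.2                      # did the paddle intercept?
    hits += hit
    # Crude "plasticity": nudge weights toward activity that preceded a hit
    # and away from activity that preceded a miss.
    weights += 0.001 * feedback(hit) * centered
    ball_y = rng.uniform(0.0, 1.0)                          # next ball position

print(f"hits: {hits} / 1000")
```

In the real experiments, the feedback is delivered as patterned electrical stimulation rather than a scalar, and the learning happens in the tissue itself rather than in a weight vector, but the loop structure of record, act, and stimulate is the same.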


Miscalibration of Trust in Human Machine Teaming – War On The Rocks

A recent Pew survey found that 82 percent of Americans are more or equally wary than excited about the use of artificial intelligence (AI). This sentiment is not surprising: tales of rogue or dangerous AI abound in pop culture. Movies from 2001: A Space Odyssey to The Terminator warn of the dire consequences of trusting AI. Yet, at the same time, more people than ever before are regularly using AI-enabled devices, from recommender systems in search engines to voice assistants in their smartphones and automobiles.

Despite this mistrust, AI is becoming increasingly ubiquitous, especially in defense. It plays a role in everything from predictive maintenance to autonomous weapons. Militaries around the globe are significantly investing in AI to gain a competitive advantage, and the United States and its allies are in a race with their adversaries for the technology. As a result, many defense leaders are concerned with ensuring these technologies are trustworthy. Given how widespread the use of AI is becoming, it is imperative that Western militaries build systems that operators can trust and rely on.

Enhancing understanding of human trust dynamics is crucial to the effective use of AI in military operational scenarios, typically referred to in the defense domain as human-machine teaming. To achieve trust and full cooperation with AI teammates, militaries need to ensure that human factors are considered in system design and implementation. If they do not, military AI use could be subject to the same disastrous and deadly errors that the private sector has experienced. To avoid this, militaries should ensure that personnel training educates operators on both the human and AI sides of human-machine teaming, that human-machine teaming operational designs actively account for the human side of the team, and that AI is implemented in a phased approach.

Building Trust

To effectively build human-machine teams, one should first understand how humans build trust, specifically in technology and AI. AI here refers to models with the ability to learn from data, a subset called machine learning. Thus far, almost all efforts to develop trustworthy AI focus on addressing technology challenges, such as improving AI transparency and explainability. The human side of the human-machine interaction has received little attention. Dismissing the human factor, however, risks limiting the positive impacts that purely technology-focused improvements could have.

Operators list many reasons why they do not trust AI to complete tasks for them, which is unsurprising given the generally distrustful cultural attitude toward the technology outlined in the Pew survey above. However, research shows that humans often do the opposite with new software technologies. People trust websites with their personal information and use smart devices that actively gather that information. They even engage in reckless activity in automated vehicles that is not recommended by the manufacturer and that can pose a risk to one's life.

Research shows that humans struggle to accurately calibrate appropriate levels of trust in the technology they use. Humans, therefore, will not always act as expected when using AI-enabled technology; often they may put too much faith in their AI teammates. This can result in unexpected accidents or outcomes. Humans, for example, have a propensity toward automation bias, the tendency to favor information shared by automated systems over information shared by non-automated systems. The risk of this occurring with AI, a notorious black-box technology with frequently misunderstood capabilities, is even higher.

Humans often engage in increasingly risky behavior with new technology they believe to be safe, a phenomenon known as behavioral adaptation. This is a well-documented occurrence in automobile safety research. A study conducted by University of Chicago economist Sam Peltzman found no decreased death rate from automobile accidents after the implementation of safety measures. He theorized this was because drivers, feeling safer as a result of the new regulations and safety technology, took more risks while driving than they would have before the advent of measures made to keep them safe. For example, drivers with anti-lock brakes were found to drive faster and follow other vehicles more closely than those without. Even using adaptive cruise control, which maintains a set distance from the car in front of you, leads to an increase in risk-taking behavior, such as looking at a phone while driving. While it was later determined that the correlation between increased safety countermeasures and risk-taking behavior was not necessarily as binary as Peltzman initially concluded, the theory and the concept of behavioral adaptation itself have gained renewed focus in recent years to explain risk-taking behavior in situations as diverse as American football and the COVID-19 pandemic. Any human-machine teaming should be designed with this research and knowledge in mind.

Accounting for the Human Element in Design

Any effective human-AI team should be designed to account for human behavior that could negatively affect the team's outcomes. There has been extensive research into accidents involving AI-enabled self-driving cars, which have led some to question whether human drivers can be trusted with self-driving technology. A majority of these crashes involving driver assistance or self-driving technology have occurred as a result of Tesla's Autopilot system in particular, leading to a recent recall. While the incidents are not exclusively a product of excessive trust in the AI-controlled vehicles, videos of these crashes indicate that this outsized trust plays a critical role. Some videos showed drivers asleep at the wheel, while others pulled off stunts like putting a dog in the driver's seat.

Tesla says its Autopilot program is meant to be used by drivers who are also keeping their eyes on the road. However, studies show that once the autopilot is engaged, humans tend to pay significantly less attention. There have been documented examples of deadly crashes with no one in the driver's seat or while the human driver was looking at their cell phone. Drivers made risky decisions they would not have made in a normal car because they believed the AI system was good enough to go unmonitored, despite what the company says or the myriad of examples to the contrary. A report published as part of the National Highway Traffic Safety Administration's ongoing investigation into these accidents recommends that important design considerations include the ways in which a driver may interact with the system and the foreseeable ranges of driver behavior, whether intended or unintended, while such a system is in operation.

The military should take precautions when integrating AI to avoid a similar mis-calibration of trust. One such precaution could be to monitor the performance not only of the AI, but also of the operators working with it. In the automobile industry, video monitoring to ensure drivers are paying attention while the automated driving function is engaged is an increasingly popular approach. Video monitoring may not be an appropriate measure for all military applications, but the concept of monitoring human performance should be considered in design.

A recent Proceedings article framed this dual monitoring in the context of military aviation training. Continuous monitoring of the health of the AI system is like aircraft pre-flight and in-flight system monitoring. Likewise, aircrew are continuously evaluated in their day-to-day performance. Just as aircrew are required to undergo ongoing training on all aspects of an aircraft's employment throughout the year, so too should AI operators be continuously trained and monitored. This would not only ensure that military AI systems were working as designed and that the humans paired with those systems were not inducing error, but also build trust in the human-machine team.

Education on Both Sides of the Trust Dynamic

Personnel should also be educated about the capabilities and limitations of both the machine and human teammates in any human-machine teaming situation. Civilian and military experts alike widely agree that a foundational pillar of effective human-machine teaming is going to be the appropriate training of military personnel. This training should include education on the AI system's capabilities and limitations, incorporating a feedback loop from the operator back into the AI software.

Military aviation is deeply rooted in a culture of safety built through extensive training and proficiency through repetition, and this safety culture could provide a venue for the necessary AI education. Aviators learn not just to interpret the information displayed in the cockpit but also to trust that information. This is a real-life demonstration of research showing that humans perceive risks more accurately when they are educated on how likely those risks are to occur.

Education specifically relating to how humans themselves establish and maintain trust through behavioral adaptation can also help operators become more self-aware of their own, potentially damaging, behavior. Road safety research and other fields have repeatedly shown that this kind of awareness training helps to mitigate negative outcomes. Humans are able to self-correct when they realize they're engaging in undesirable behavior. In a human-machine teaming context, this would allow the operator to react to a fault or failure in a trusted system while retaining the benefit of increased situational awareness. Therefore, implementing AI early in training will give future military operators confidence in AI systems, and through repetition the trust relationship will be solidified. Moreover, a better understanding not only of the machine's capabilities but also of its constraints will decrease the likelihood that the operator incorrectly inflates their trust in the system.

A Phased Approach

Additionally, a phased approach should be taken when incorporating AI to better account for the human element of human-machine teaming. Often, new commercial software or technology is rushed to market to outpace the competition and ends up failing when in operation. This often costs a company more than if they had delayed rollout to fully vet the product.

In the rush to build military AI applications, militaries risk pushing the technology too far, too fast, to gain a perceived advantage. A civilian-sector example of this is the Boeing 737 Max software flaws, which resulted in two deadly crashes. In October 2018, Lion Air Flight 610 crashed, killing all 189 people on board, after the pilots struggled to control rapid and un-commanded descents. A few months later, Ethiopian Airlines Flight 302 crashed, killing everyone on board, after pilots similarly struggled to control the aircraft. While the flight-control software that caused these crashes is not an example of true AI, these fatal mistakes are still a cautionary tale. Misplaced trust in the software at multiple levels resulted in the deaths of hundreds.

The accident investigation for both flights found that erroneous inputs from an angle-of-attack sensor to the flight computer caused a cascading and catastrophic failure. These sensors measure the angle of the wing relative to the airflow and give an indication of lift, the ability of the aircraft to stay in the air. In this case, the erroneous input caused the Maneuvering Characteristics Augmentation System, an automated flight-control system, to put the plane into repeated dives because it thought it needed to gain lift quickly. These two crashes resulted in the grounding of the entire 737 Max fleet worldwide for 20 months, costing Boeing over $20 billion.
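To see why a single bad sensor can drive repeated nose-down commands, consider the toy comparison below. This is not Boeing's actual MCAS logic; the threshold, the disagreement limit, and the sensor readings are invented for illustration, and the point is only the structural difference between trusting one input and cross-checking redundant ones.

```python
# Toy comparison of single-sensor vs. cross-checked trim logic.
# All thresholds and readings are invented for the example.

STALL_AOA_DEG = 15.0        # assumed angle-of-attack threshold

def naive_trim_command(aoa_deg):
    """Acts on one sensor: a stuck-high reading keeps commanding nose-down."""
    return "nose_down" if aoa_deg > STALL_AOA_DEG else "hold"

def cross_checked_trim_command(aoa_left, aoa_right, max_disagree=5.0):
    """Refuses to act when redundant sensors disagree; hands control back to the crew."""
    if abs(aoa_left - aoa_right) > max_disagree:
        return "disengage_and_alert"
    mean_aoa = (aoa_left + aoa_right) / 2.0
    return "nose_down" if mean_aoa > STALL_AOA_DEG else "hold"

# Left sensor failed and stuck at 74 degrees; right sensor reads about 5 degrees.
print(naive_trim_command(74.0))               # -> nose_down, over and over
print(cross_checked_trim_command(74.0, 5.0))  # -> disengage_and_alert
```

The design lesson, mirrored in the human-machine teaming argument of this article, is that a system acting on a single unverified input inherits that input's failures; cross-checking and graceful disengagement give the human teammate a chance to intervene.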

This was all caused by a design decision and a resultant software change that was assumed to be safe. Boeing, in a desire to stay ahead of its competition, updated a widely used aircraft, the base-model 737. Moving the engine location on the wing of the 737 Max helped the plane gain fuel efficiency but significantly changed its flight characteristics. These changes should have required Boeing to market it as a completely new airframe, which would have meant significant training requirements for pilots to remain in compliance with the Federal Aviation Administration. This would have cost significant time and money. To avoid this, the flight-control software was programmed to make the aircraft fly like an older-model 737. While flight-control software is not new, this novel use allowed Boeing to market the 737 Max as an update to an existing aircraft, not a new airframe. There were some issues noted during testing, but Boeing trusted the software due to previous flight-control system reliability and pushed the Federal Aviation Administration for certification. Hidden in the software, however, was erroneous code that caused the cascading issues seen on the Ethiopian and Lion Air flights. Had Boeing not put so much trust in the software, or had the regulator not similarly trusted Boeing's certification of the software, these incidents could have been avoided.

The military should take this as a lesson. Any AI should be phased in gradually to ensure that too much trust is not placed in the software. In other words, when implementing AI, militaries need to consider cautionary tales such as the 737 Max. Rather than rushing an AI system into operation to achieve a perceived advantage, it should be carefully implemented into training and other events before full certification to ensure operator familiarity and transparency into any potential issues with the software or system. This is currently being demonstrated by the U.S. Air Force's 350th Spectrum Warfare Wing, which is tasked with integrating cognitive electromagnetic warfare into its existing aircraft electromagnetic warfare mission. The Air Force has described the ultimate goal of cognitive electromagnetic warfare as establishing a distributed, collaborative system which can make real-time or near-real-time adjustments to counter advanced adversary threats. The 350th is taking a measured approach to implementation to ensure that warfighters have the capabilities they need now while also developing the algorithms and processes needed to ensure the future success of AI in the electromagnetic warfare force. The goal is to first use machine learning to speed up the aircraft software reprogramming process, which can sometimes take up to several years. The use of machine learning and automation will significantly shorten this timeline while also familiarizing engineers and operators with the processes necessary to implement AI in any future cognitive electromagnetic warfare system.

Conclusion

To effectively integrate AI into operations, there needs to be more effort devoted not only to optimizing software performance but also to monitoring and training the human teammates. No matter how capable an AI system is, if human operators mis-calibrate their trust in the system, they will be unable to effectively capitalize on AI's technological advances and may make critical errors in design or operation. In fact, one of the strongest and most repeated recommendations to come out of the Federal Aviation Administration's joint investigation of the 737 Max accidents was that human behavior experts needed to play a central role in research and development, testing, and certification. Likewise, research has shown that in all automated vehicle accidents, operators did not monitor the system effectively. This means that operators need to be monitored as well. Militaries should account for the growing body of evidence that human trust in technology and software is often mis-calibrated. By incorporating human factors into AI system design, building relevant training, and using a carefully phased approach, the military can establish a culture of human-machine teaming that is free of the failures seen in the civilian sector.

John Christianson is an active-duty U.S. Air Force colonel and current military fellow at the Center for Strategic and International Studies. He is an F-15E weapons systems officer and served as a safety officer while on an exchange tour with the U.S. Navy. He will next serve as vice commander of the 350th Spectrum Warfare Wing.

Di Cooke is a visiting fellow at the International Security Program at the Center for Strategic and International Studies, exploring the intersection of AI and the defense domain. She has been involved in policy-relevant research and work at the intersection of technology and security across academia, government, and industry. Prior to her current role, she was seconded to the U.K. Ministry of Defence from the University of Cambridge to inform the U.K. Defence AI operationalization approach and ensure alignment with its AI Ethical Principles.

Courtney Stiles Herdt is an active-duty U.S. Navy commander and current military fellow at the Center for Strategic and International Studies. He is an MH-60R pilot and just finished a command tour at HSM-74 as part of the Eisenhower Carrier Strike Group. Previously, he has served in numerous squadron and staff tours, as an aviation safety and operations officer, and in various political-military posts around Europe and the western hemisphere discussing foreign military sales of equipment that utilized human-machine teaming.

The opinions expressed are those of the authors and do not represent the official position of the U.S. Air Force, the U.S. Navy, or the Department of Defense.

Image: U.S. Navy photo by John F. Williams


Students share perspectives on new design and data science majors – The Stanford Daily

In September, Stanford announced two major changes to its undergraduate education offerings: the former product design major was rebranded to the new design major, and the former data science minor would now be offered as both a B.A. and B.S. degree.

Current and prospective students from the programs shared their thoughts with The Daily.

New Design Major

The design major now belongs under the d.school's interdisciplinary programs (IDPs) and is categorized as a Bachelor of Science (B.S.) degree in Design. Previously, the product design major resulted in the conferral of a B.S. in Engineering. However, students may still choose to complete the product design engineering subplan if they matriculated before the 2022-2023 academic year.

The design major now has three methods tracks: Physical Design and Manufacturing, AI and Digital User Experience, and Human Behavior and Multi-stakeholder Research. From there, students also select one Domain Focus area, choosing from Climate and Environment, Living Matter, Healthcare and Health Technology Innovation, Oceans and Global Development, and Poverty. While not possible in the 2022-23 academic year, students will be able to propose their own Domain Focus area as an honors option in the future.

Sydney Yeh '26 said that the major is "a great way to use my creative skills, apply it to technology and move with the current times."

She also believes that the shift from product design to broader design offerings is beneficial. "[While] people are pretty split [on this issue], I think it's a good change because there's more variety in what you can specialize in," Yeh said. "Before, it was mostly physical design and designing products."

Yeh intends to pursue the digital design track, as she is interested in designing apps and interfaces. She says the design major effectively weaves together her interests in art and computer science. "Originally, I was going to combine art and CS and design my own major, but found that the design major fits my goals," Yeh said.

Hannah Kang '26, another prospective design major, echoed Yeh's sentiments about combining interests in computer science and art. "[The major allows me] to integrate the art aspect and the STEM aspect that I know for sure that Stanford is excelling in," Kang said.

Kang also expressed her appreciation for the CS requirements of the design major, saying, "I'm trying to take more CS classes so that I can have at least the most fundamental CS knowledge [and can] seek ways to use my engineering skills to create something."

Sosi Day '25, a design major on the human behavior track, praised the collaborative and multidisciplinary aspects of design. "There's a lot of communal learning," she said. "It's also very creative, and it engages a lot of different parts of my brain. A lot of it is artistic, but there's also problem solving skills involved."

Day said that as someone who seeks to apply design thinking to other issues beyond manufacturing, the change in major has been a positive one for her. "I never considered doing a product design major last year, but now that they've added two new tracks, it's changed my mind," she said.

New Data Science Major

The new data science major was also announced this year. Whereas previously, students could only minor in data science, undergraduates now have the option of majoring on either the B.S. or B.A. track.

Professor Chiara Sabatti, associate director of Data Science's B.S. track, said that the B.A. has similar foundational requirements to the B.S., but has a concentration of interest in applying data science methods to solve problems in the social sciences.

According to Sabatti, the B.S. track is closely aligned with the former mathematical and computational science (MCS) major, which was phased out this year. She explained that the change to a data science major with broader offerings was meant to match MCS graduates' career paths more closely, saying that "[the changes] are in response to the needs of the students and the demands of society."

Professor Emmanuel Candes, the Barnum-Simons Chair of math and statistics, said that the formal name change from MCS to data science occurred last spring, though the process of changing the curriculum and developing the B.S. and B.A. paths began in 2019.

Candes echoed Sabatti's reflections about students' career paths, saying, "we realized that more and more of our graduates [of Mathematical and Computational Science] were entering the workforce as data scientists, and it seems like the [new] name represents more of a reality."

The major program has shifted to accommodate this growing interest in data, according to Sabatti.

"The structure of the program has changed to make sure that we prepare students for this sustained interest in data science," Sabatti said. "For example, there's some extra requirements in computing, because the data sets that people need to work with require substantial use of computational devices, [and] there's some extra classes on inference and how you actually extract information from this data."

Similar to the new design major, many prospective data science majors say the interdisciplinary offerings of the major are enticing.

"I like [data science] because it's an intersection between technical fields and humanities-focused fields," said Caroline Wei '26, a prospective B.A. data science major on the Technology and Society pathway. "What makes data science so powerful is it gives you the option to draw conclusions about society and present that to the rest of the world."

Similarly, Savannah Voth '26, another prospective data science major, described the humanities and technical skills she feels the major helps her build. "The data science B.A. allows me to use quantitative skills and apply it to the humanities and social sciences," she said.

Voth expressed some concerns regarding the ability to connect required coursework with data science more directly.

"One issue is that the requirements include classes in statistics and classes in areas you want to apply data science to, but there aren't as many opportunities to connect them," Voth said. "It would be cool if for each pathway, there was at least one class that is about data science applied to that topic."

Despite this concern, Voth praised the openness of the major's coursework. "I like how [the requirements] are very flexible and you can choose which area to focus on through the pathways."

Wei highlighted the effectiveness of the core requirements in building skills and perspectives, saying, "The ethics [requirement] is relevant since you have to know how to handle data in an ethical way, the compsci core combines the major aspects of technical fields ... and the social science core helps you see why those technical skills are important."


How disgust-related avoidance behaviors help animals survive – Phys.org

Image: Overlooked species in risk perception research, and how disease avoidance and disgust may be used in different contexts of conservation and wildlife management. Credit: Dr. Cecile Sarabian

Animals risk getting sick every day, just like humans, but how do they deal with that risk? An international team led by Dr. Cecile Sarabian from the University of Hong Kong (HKU) examines the use of disgust-related avoidance behaviors amongst animals and their role in survival strategy.

The feeling of disgust is an important protective mechanism that has evolved to protect us from disease risks. Triggered by sensory cues, such as the sight of an infected wound, disgust releases a set of behavioral, cognitive and/or physiological responses that enable animals to avoid pathogens and toxins.

An international team, led by Dr. Cecile Sarabian from the University of Hong Kong, has turned its attention to the emotion's role in animal disease avoidance, an area of study typically neglected. The team developed a framework to test disgust and its associated disease avoidance behaviors across species, social systems and habitats.

Characteristics such as whether a species lives in groups or alone are important when analyzing its response to disease. The paper, published in the Journal of Animal Ecology, highlights the positives and negatives of experiencing disgust to avoid disease.

Over 30 species use disease avoidance strategies in the wild, according to previous reports; however, the authors provided predictions for seven additional, previously overlooked species. These include the common octopus, a species native to Hong Kong, and the red-eared slider, an invasive species.

Species exhibit varying levels of disease avoidance behavior depending on their social systems and ecological niches. Solitary species can be less vulnerable to socially transmitted diseases, and thus less adapted to recognize and avoid that risk. Group-living species, by contrast, are more exposed but also more likely to recognize and avoid sick animals.

However, species living in colonies, like rabbits or penguins, may be more likely to tolerate infected mates. Because colony members depend on each other to survive, collective immunity can be less costly than having to isolate. This model could also apply to human diseases, for instance during the COVID-19 pandemic.

Furthermore, the authors suggest five practical applications of disgust-related avoidance behaviors in wildlife management and conservation, including endangered species rehabilitation, crop damage and urban pests. For example, disgust-related behaviors could be used to modulate the space use and food consumption of crop-damaging species, such as by creating an environment that is unappealing to pests.

"Given the escalation of conflicts between humans and wildlife, the translation of such knowledge on disease risk perception and avoidance into relevant conservation and wildlife management strategies is urgent," says Dr. Sarabian.

More information: Cécile Sarabian et al, Disgust in animals and the application of disease avoidance to wildlife management and conservation, Journal of Animal Ecology (2023). DOI: 10.1111/1365-2656.13903



Microbiomes Connected More than Ever to Psychological Well Being – Greenwich Sentinel

New research has shown that the microbiome, the vast communities of microbes in our digestive tract, can affect our emotions and cognition. Studies have suggested that the microbiome plays a role in influencing moods and the state of psychiatric disorders, as well as information processing. However, the mechanisms behind how the microbiome interacts with the brain have remained elusive.

Recent research has built on earlier studies that demonstrate the microbiome's involvement in responses to stress. Focusing on fear and how it fades over time, researchers have identified differences in cell wiring, brain activity, and gene expression in mice with depleted microbiomes. The study also identified four metabolic compounds with neurological effects that were far less common in the blood serum, cerebrospinal fluid, and stool of the mice with impaired microbiomes.

The researchers were intrigued by the concept that microbes inhabiting our bodies could affect our feelings and actions. The study's lead author and a postdoctoral associate at Weill Cornell Medicine, Coco Chu, set out to examine these interactions in detail with the help of psychiatrists, microbiologists, immunologists, and scientists from other fields.

The research has pinpointed a brief window after birth when restoring the microbiome could still prevent adult behavioral deficits. The microbiome appeared to be critical in the first few weeks after birth, which fits into the larger idea that circuits governing fear sensitivity are impressionable during early life.

The research on microbial effects on the nervous system is a young field, and there is even uncertainty around what the effects are. Previous experiments reached inconsistent or contradictory conclusions about whether microbiome changes helped animals to unlearn fear responses.

The findings from the recent study have given extra weight to the specific mechanism causing the behavior observed, pointing to the possibility of predicting who is most vulnerable to disorders like post-traumatic stress disorder.

Although the interactions of the brain and the gut microbiome differ in humans and mice, the study has identified potential interventions targeting the microbiome that might be most effective in infancy and childhood when the microbiome is still developing, and early programming takes place in the brain.

The study's findings could have significant implications for the future of potential therapies and deepen scientific knowledge around the mechanisms that influence core human behaviors.


‘Cultural Misogyny’ and Why Men’s Aggression To Women Is So … – FlaglerLive.com

As the country watches Scott Morrison grapple with the sex scandals rocking our federal parliament, it is worth wondering what has really changed since former Prime Minister Julia Gillard's now-famous 2012 misogyny speech.

The power of that speech is undeniable, and it resonates loudly today.

Gillard spoke to the imbalance of power between men and women and the under-representation of women in positions of authority. Her speech raised serious concerns about how some politicians saw women's roles in contemporary Australia.

Fast forward to Scott Morrison attempting to address the most recent shocking allegations of lewd behavior by some coalition staff, the allegation being that a group of government staffers had shared images and videos of themselves undertaking lewd acts in Parliament House, including in the office of a female federal MP.

These stories raise the question of why some men participate in sexually denigrating women, both those in authority and those in subordinate positions in hierarchical organisations. And why is male aggression towards women so often expressed through sex rather than through other means?

As a criminologist, I interpret men's sexually aggressive behavior, whether it is desecrating a woman's desk by videoing himself masturbating on it or committing a sexual assault, as an activity born of a need for power and control.

When some men feel challenged, or want to dominate someone to compensate for an internal sense of inadequacy, they can feel the need to do so sexually. Often, the targets of their rage about feelings of inadequacy are women.

From lewd comments, to being groped, through to sexual assault, the attacks on women in the workplace continue.

Research suggests heterosexual men who are more socially dominant are also more likely to sexually objectify women. When these men are placed in positions of submission to women at work and their dominance is challenged, the levels of sexual objectification of women go up. This supports the assertion that some men increase their dominance by sexually objectifying women, and this objectification can become physical.

The conversation around how we address this has been building for some time.

In 2017, the #MeToo movement went viral, as women started to share their negative sexual experiences via social media. The discussion initially focused on women being sexually harassed by their bosses in the media and entertainment industry, but it soon became obvious the problem was much wider than that. It permeates every industry in every country.

Sexual harassment and assault are more common than many people might believe, or want to believe. A 2018 study surveyed 2,000 people in the US. It found 81% of women and 43% of men had suffered some form of sexual harassment or assault. Further, 38% of the women surveyed said they have suffered from sexual harassment in the workplace.

The picture is mirrored in Australia. A 2018 Australian Human Rights Commission report found 23% of women said they had been sexually harassed at work in the previous 12 months.

In 2021, we are still having the same debate.

One big question is where these bad male behaviors originate.

Social Learning Theory might help us understand what is going on in relation to some men's need for sexual domination of women. It is based on the premise that individuals develop notions of gender and the associated behaviors by watching others and mimicking them. This learning is then reinforced vicariously through the experiences of others.

Combine this learnt behavior with cognitive development theory, which suggests gender-related behavior is an adoption of a gender identity through an intellectual process, and we can see how misogynistic behaviors can be identified, remembered, and mimicked by subsequent generations of males.

This could be termed cultural misogyny.

How do we change the dynamic?

The only way to shift the framing around appropriate behaviour in the workplace, and society more generally, is to continue to break down gender stereotypes. Women need to be elevated to positions of power to reduce male domination in all aspects of life. We must challenge the undermining of women's and girls' autonomy and value when boys exhibit it, to break the chain of passing on these negative attitudes.

We are only now beginning to hear the breadth of stories from women speaking out about their own negative experiences.

As a woman in academia, a very hierarchical structure, I have been sexually harassed, and I just accepted it as part of my working world. My experience was with a very senior member of a previous university, and I would never have considered challenging him or reporting it, as I was very well aware of the power he had over me and my career. I even considered changing organizations to avoid the unwanted behaviors.

The brave women who are now speaking up have changed the way I view my own experience. The more we raise our voices, support each other and encourage change in the attitudes around us, the more we will all benefit.

Xanthe Mallett is a Forensic Criminologist at the University of Newcastle.


Why Humans Are Built for Connection, Love and Friendship – WHYY

All the bad news and stories of bad, even horrific, human behavior can overwhelm us, leading to a very pessimistic outlook on humanity. It overshadows the examples of people doing the right thing, acting generously, with kindness and empathy.

Social scientist and physician Nicholas Christakis says it's actually our tendency toward goodness that has been a big driver in our evolution. Christakis runs the Human Nature Lab at Yale and was once a hospice physician, work that has informed his research.

He says our need for human connection is one of our most defining characteristics, and he's seen it expressed at the bedside of people at the end of life, holding onto loved ones in their final moments. Christakis joins us to talk about our social evolution, why friendship and love are vital to our species' survival, and how he maintains his optimism for humankind.


Humans and Our Alarming Fear of Robots – DISCOVER Magazine

I was standing in line for a tourist attraction in Tokyo when a small robot began addressing the crowd. The robot resembled Rosey from The Jetsons and was meant to amuse people while they waited. It babbled for a while, and then its eyes turned into two pink hearts. "I love everyone," it announced.


"Oh, really?" I responded sarcastically. I couldn't help myself. "Everyone? That's disingenuous."

The Tokyo robot was one of many robots and other forms of artificial intelligence (AI) that have grated on my nerves. I'm not alone in my disdain. Scientists have been studying robot hate for more than 30 years. Research finds that many people view robots as "the other," and robot hatred can lead to sabotage, attacks and even robot bullying.

Robots and AI have a relatively short history in the U.S., but it's one that has long been controversial. With the increase in automation during the 1950s, some people saw mechanization as a way to make life better or easier. Others saw it as a threat. Robots could take over jobs, or the world, for those who read a lot of science fiction.

By the 1990s, information retrieval agents became mainstream, but they weren't always functional and could be more of a nuisance than a help. Microsoft introduced Clippy, a virtual assistant, in 1996, and it became famous for popping up at inopportune moments and asking aggravating questions like, "It looks like you're writing a letter. Would you like help with that?" One study described Clippy as having fatal shortcomings in its ability to determine when users actually needed help.

In the early 2000s, AI became more useful. People turned to online search engines to retrieve information, and global positioning systems (GPS) became widely available. But AI also became more personal. Tech companies introduced chatbots, like Jabberwacky, that interacted and responded to users.

Vocal social agents such as Siri or Alexa are now a part of daily life for many users. Similar to their chatbot predecessors, they are designed to replicate human communication norms, and they learn and repeat our behavior patterns.

For some users, asking Alexa to play 80s music is a convenience. But for others, it can be an opportunity for bad behavior.


Well before people asked Siri or Alexa rude questions, users of early 2000s chatbots also showed a tendency for harassment. This poor human behavior toward robots is an example of robot bullying.

In 2008, a study in Interacting with Computers analyzed how users engaged with Jabberwacky, the online chatterbot that started in 1997 and garnered more than 10 million replies in the following decade.

To analyze conversations, the researchers picked a time sample, meaning they selected a specific day (Nov. 22, 2004) and then analyzed all the interactions (716 conversations) that occurred during the time period.

When analyzing the content of the conversations, the authors found some users were friendly or curious about testing the system and its capabilities. But many were unkind. On the milder side of the AI abuse spectrum, some users liked telling Jabberwacky that it was merely a computer or correcting its grammar.

About 10 percent of interactions, however, involved insulting or offensive language. Another 11 percent were sexually explicit, or as the researchers described it: "Harsh verbal abuse was the norm in these conversations, which were more similar to dirty soliloquies than to hot chats."

The authors concluded that because chatbots lack memory and reasoning, they are a way for people to violate social norms in a seemingly harmless manner. But studies have found other instances in which people perceive robots or AI as threats, leading to anti-robot attacks.

What exactly is robot bullying in the physical sense, such as attacks? Scholars organize anti-robot attacks into several categories: physical attacks, decision-making impairment (i.e., messing with sensors), manipulation, intentional neglect and security breaches. There's also an extremely specific category, the staging of robot attacks for online dissemination, which involves stunts like ordering food delivered by a robot, waiting for the machine to roll up and then kicking it or pulling off the little flag it carries. Attackers then post the video on the internet.


So why would anyone kick a food-delivering robot? Scholars have found there are complex motivations. Since the early 1800s, people have attacked machinery that threatened to displace workers. Some anti-robot disdain continues to stem from the threat that people feel robots pose to their livelihood.

People also view robots as "the other," meaning they are not one of us, yet we're supposed to accept them into our lives. Similarly, people might associate a specific robot with an organization or corporation they dislike.


And because the technology is relatively new, people can be distrustful and cynical. A 2022 study in Personality and Individual Differences measured how high school students felt about AI. Using the Cynical Hostility Towards AI Scale, researchers had 659 participants complete a survey about their feelings toward AI.

The study found that just because a person was cynical toward AI didn't mean they were cynical in general or toward other people. Participants were also more distrustful of AI when they felt it was hostile or had negative intentions.

The belief that a machine can have negative intentions demonstrates the complexity of robot hate. People believe a machine can be programmed to be harmful, yet people understand that robots aren't conscious and don't have the ability to suffer if we're mean to them.

One scholar argued that the fact that robots are not morally considerable was one of the reasons people felt comfortable with robot hate. Our sarcasm doesn't hurt robots' feelings. Food delivery robots aren't traumatized by being kicked. Thus, robots can be a safe place for people (like the Jabberwacky users) to break social norms.

And sometimes... it can feel like robots and AI are just cruising for a bruising. Devices that are programmed to replicate human communication can become sassy with their responses. Researchers are now exploring ways that devices can be better anthropomorphized in order to elicit empathy from users.



Is ChatGPT a disruption to Google? | by Vishnuaravi | Mar, 2023 – DataDrivenInvestor

My thoughts on ChatGPT

Hey, everyone; in this article, I would like to present my thoughts on ChatGPT and everything that's happening in the world of AI. This is not an AI war; this is a war over the data that these companies have been collecting for years and years.

Recently, one of the founders of Gmail said in a talk that within just the next two years, GPT could disrupt how Google works. Now, that disruption might happen in favor of Google or in favor of Microsoft, but disruption is certain.

Google is a company that arrived only in 1998, and nearly 90% of people now prefer its search engine. About 60% of its revenue depends on search, that is, the ads shown to you while you are searching for something. Google, a $2 trillion company, has become a synonym for information itself: if you want to find anything, the phrase is "Google it." This company was able to change the fundamental behavior of humans.

Before Google, people used to remember website names; now, after Google, people don't. You just search and eventually land on a page that will probably give you the best information. This is a fundamental change in human behavior: from remembering the website to simply searching for the website or the information.

So should Google be worried? Yes, about just two things.

The adoption rate of a technology

Adoption rate means how quickly people pick up a new technology. ChatGPT, for example, was able to secure 100 million users in just two months, which is pretty insane. They might not all end up as paying users, but they were curious about it, which is more than enough in this attention-driven economy.

The fundamental change in human behavior

The next most important thing people should watch for is a fundamental change in human behavior. For example, hailing a cab via an app on your phone now just seems normal. Or take the mobile phone itself: you check your pockets probably ten times a day, or more, to see whether your phone is there. That is a fundamental change in human behavior.

Google has been riding on this kind of fundamental behavior change for a long time. For years, our behavior has been really simple: whenever we need some information, we look for it on Google, we get a lot of links in return, we open them in different tabs, and we scrape out whatever information we need. But ChatGPT is changing this fundamental behavior.

Now, wait a minute and just think for a second. Don't think like a techie; think about how your mom and dad will search for information. If my mom needs a recipe, she's not going to turn to Google, because one page might give a result and another might not; that's why these days she looks for all her recipes on YouTube, where she gets a direct answer. Similarly, regular people will eventually stop wanting to google things: if they can get the exact result or recipe directly by asking a bot, why would anybody keep searching on Google?

Now, I do understand that right now this tech is not perfect, and it might give you biased results on politics and similar topics, but that's an edge case, and if this were only about ChatGPT itself, I would not be that concerned. It is a point of concern because ChatGPT is now backed by a big giant, and when investor money pours in, the game is all different. Microsoft earlier invested $1 billion in OpenAI, is now pouring in $10 billion more, and is also providing the infrastructure of its entire Azure system.

This is getting really exciting; in fact, Microsoft's CEO has said this will be a game-changing war, with an impact on the level of the personal computer's arrival and the arrival of mobile technologies. CEOs like Microsoft's don't make statements randomly; they are very serious, and they know the impact of their statements.

ChatGPT means a chat-based AI bot, but that's a little vague. GPT stands for Generative Pre-trained Transformer. In simple words, these guys took billions and billions of data points, labeled them, and trained a model with billions of parameters on petabytes of data.

So ChatGPT is a generative chatbot: it has already learned a great deal from our labels and our information, and it understands context. Context is really important here; you can chat, ask questions, and if you need more on a topic, you can just ask for it. It will remember what you asked and can improvise on top of it. The biggest mistake people make with ChatGPT is thinking like a techie, because techies are driving most of these conversations right now. But think about this chatbot from the perspective of your mom and dad, or somebody who is not that technical: if they get the result directly, you will start to understand that this is revolutionary tech. Simply ask the question, and it will give you answers; no need to go through hundreds of links.
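The "it will remember what you asked" behavior is worth unpacking: the model itself is stateless, so the application keeps the running conversation and resends it with every request. The Python sketch below shows that pattern against the OpenAI chat-completions interface as it existed around the time of writing; the model name, the API-key placeholder, and the system prompt are assumptions for illustration, not a definitive integration.

```python
import openai  # assumes the openai package is installed and an API key is available

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

# The running conversation lives on the client side; "memory" comes from resending it.
messages = [{"role": "system", "content": "You answer cooking questions concisely."}]

def ask(question):
    messages.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # assumed model name
        messages=messages,       # the full history is sent with every call
    )
    answer = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask("Give me a simple dal recipe."))
print(ask("Make it less spicy."))  # "it" resolves only because the history was resent
```

That is also why very long chats eventually have to be trimmed or summarized: the resent history counts against the model's context window.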

This also has a really big educational impact: ask a question, and ChatGPT can answer it in a very condensed format; if you don't like the answer, just ask it to revise it. This is almost like your favorite YouTube channel. Why are you subscribed to that channel? Because there is no fuss around it; it gets straight to the point, and you get the answer or the exact tutorial. That's what YouTube shines at, and ChatGPT is exactly that in text form.

Personally, I don't consider ChatGPT a problem; it's just the next big step in evolution. It can obviously clear exams and write your essays, so you don't need to memorize anything. You need to be more creative in your jobs and work; there is no need for memorization anymore.

Of course, right now the one big problem is that I may get a biased opinion. Since I'm not looking through hundreds of links and hundreds of websites, I don't get the other side of the information; whatever ChatGPT is feeding me, I just assume is true. It's almost like getting into a "WhatsApp university" where people treat one source as the truth. But again, these are edge cases, and they can be improved over time.

I will not sugarcoat it; it will absolutely kill a few jobs, just as when computers came in: many people doing clerical work no longer do it because computers can store data much more efficiently. But that shift also created demand for new roles and new jobs. We no longer need to pass files around, but we need somebody who can enter the data, so data-entry jobs exist. Creative jobs like programming came only after the computer, and at one point everybody was resisting and getting afraid of computers.

Now, with the introduction of AI, some boring or time-consuming jobs will certainly disappear. For example, if I just need a form that collects login information and I want to code it out, ChatGPT can easily give it to me; the way I want it, the taste, the flavors, and the colors will probably need to be tweaked, but that is an easy job compared to writing the whole form as code. So ChatGPT will surely give rise to a new kind of job, a new variety of jobs, but some jobs will become obsolete; there is no need to sugarcoat it. Yes, we are about to lose a few jobs, but there will be many more creative jobs in the same industry. This AI will help us perform the tedious tasks, and we can be more creative.

At one point in time, these companies (Orkut, Yahoo!, and BlackBerry) were too big to fail, but we have all witnessed that they are no longer as relevant as they once were.

At one point in time, we thought that Facebook was the ultimate social network; then Instagram came, and now we are in the era of vertical videos, so things do change. No company is so big that it cannot fail or fall, and when a big giant falls, it makes a lot of noise.

But this is not going to happen with Google, because they are sitting on a huge pile of cash; and yet, as Peter Thiel once said, Google is a giant sitting on an enormous amount of cash while doing comparatively little innovation. Now the pressure is on Google, and under pressure diamonds are created; the same will happen with Google. Under this pressure, they will move more aggressively toward AI. They might win the race or they might lose it, but somebody is kicking their butt, and Google is known for experimenting and creating new things all the time.

So what can we simply do? First of all, get ready to change. Adaptation is really the foundation; survival of the fittest. Get ready to learn and improvise, and try to be a bit more open as well. It's not as if Google is here and is going to remain here forever. Things do change, so try to explore Bing a little more. Let's be open about it, and if the world eventually moves there, we will also move there. So don't be rigid; be more creative, be more open, and explore the wider horizons and opportunities that are available. I am already on Bing and exploring it just as I explore Google. The app is already on my phone, the browser is already there with my tabs set up, and I am exploring Bing quite often, equally and open-mindedly.

So, this is the world's most interesting time to be alive. I remember witnessing the moment when mobile phones were created, which was revolutionary: people who once had no mobile devices now don't go anywhere, even to the bathroom, without their phones. Now we are moving into another phase, the phase of AI, and I think this is the best time to see how the world is transforming.
