Society for Immunotherapy of Cancer to Host Unique 2-Day Workshop Focused on Interrogating the Tumor-Specific Surfaceome for Immune Targeting – PR Web


MILWAUKEE (PRWEB) December 18, 2019

The Society for Immunotherapy of Cancer (SITC) will host an innovative workshop on April 23-24, 2020, in San Diego, which will focus on the identification and biology of cancer cell surface molecules, and implications for cancer immunotherapy drug delivery and targeting.

The SITC Surfaceome Workshop is geared toward academic and industrial researchers from a variety of fields including medical oncology; bioinformatics; cancer biology; genetics/epigenetics and immunology, among others. Organized by prominent members of the immuno-oncology community, including Samir M. Hanash, MD, PhD, from The University of Texas MD Anderson Cancer Center and Avery D. Posey Jr., PhD, from University of Pennsylvania School of Medicine, the workshop will include oral presentations by leading experts in the field, including a keynote by Carl H. June, MD, from the University of Pennsylvania.

"Immunotherapies targeted to tumor cells or the tumor microenvironment, such as bispecific molecules, antibody-drug conjugates, and genetically-engineered lymphocytes, show great promise. Moreover, the technologies to create and develop these treatments are advancing rapidly," said SITC President Mario Sznol, MD. "We initiated this conference to address a potential limitation for application of these novel approaches to a broad group of patients, which is the identification and understanding of tumor-specific cell surface targets."

The program will aim to define the cancer cell surfaceome, describe techniques used to investigate it, and summarize methods to evaluate the normal tissue expression of identified tumor cell surface targets. Discussions will also focus on the application and development of immunotherapies and other cancer therapies for cancer cell surface targets.

This workshop will also provide an intimate opportunity for attendees to discuss their work with experts in the field, develop collaborations and learn about novel studies of the tumor cell surfaceome. Starting in January, individuals are encouraged to submit an abstract for an opportunity to present their research; a select number of oral abstract presentation slots will be available. Those abstracts not selected for oral presentation will also have the opportunity to present as a poster. Abstract submission is open to anyone working in this field. Encore presentations are welcome. Abstract submissions are due by February 28, 2020, at 5:00 p.m. PST.

The SITC Surfaceome Workshop will take place on April 23-24, 2020, at the Hotel Republic San Diego. Registration rates, criteria for abstract submissions and program schedule are available on SITC Cancer Immunotherapy CONNECT.

About SITC: Established in 1984, the Society for Immunotherapy of Cancer (SITC) is a nonprofit organization of medical professionals dedicated to improving cancer patient outcomes by advancing the development, science and application of cancer immunotherapy and tumor immunology. SITC is comprised of influential basic and translational scientists, practitioners, health care professionals, government leaders and industry professionals around the globe. Through educational initiatives that foster scientific exchange and collaboration among leaders in the field, SITC aims to one day make the word "cure" a reality for cancer patients everywhere. Learn more about SITC, our educational offerings and other resources at sitcancer.org and follow us on Twitter, LinkedIn, Facebook and YouTube.


Study finds differences in energy use by immune cells in ME/CFS – National Institutes of Health

News Release

Thursday, December 12, 2019

NIH-funded research suggests changes in the immune system in myalgic encephalomyelitis/chronic fatigue syndrome.

New findings published in the Journal of Clinical Investigation suggest that specific immune T cells from people with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) show disruptions in the way they produce energy. The research was supported by the National Institutes of Health.

"This research gives us additional evidence for the role of the immune system in ME/CFS and may provide important clues to help us understand the mechanisms underlying this devastating disease," said Vicky Whittemore, Ph.D., program director at NIH's National Institute of Neurological Disorders and Stroke (NINDS), which partially funded the study.

ME/CFS is a severe, chronic, and debilitating disease that can cause a range of symptoms including pain, severe exhaustion, cognitive impairment, and post-exertional malaise, the worsening of symptoms after physical or mental activity. Estimates suggest that between 836,000 and 2.5 million people in the United States may be affected by ME/CFS. It is unknown what causes the disease and there are no treatments.

Research by Alexandra Mandarano and collaborators in the laboratory of Maureen Hanson, Ph.D., professor of molecular biology and genetics at Cornell University in Ithaca, New York, examined biochemical reactions involved in energy production, or metabolism, in two specific types of immune cells obtained from 45 healthy controls and 53 people with ME/CFS. Investigators focused on CD4 T cells, which alert other immune cells about invading pathogens, and CD8 T cells, which attack infected cells. Dr. Hanson's team used state-of-the-art methods to look at energy production by the mitochondria within T cells, when the cells were in a resting state and after they had been activated. Mitochondria are biological powerhouses and create most of the energy that drives cells.

Dr. Hanson and her colleagues did not see significant differences in mitochondrial respiration, the cells' primary energy-producing method, between healthy and ME/CFS cells at rest or after activation. However, results suggest that glycolysis, a less efficient method of energy production, may be disrupted in ME/CFS. Compared to healthy cells, CD4 and CD8 cells from people with ME/CFS had decreased levels of glycolysis at rest. In addition, ME/CFS CD8 cells had lower levels of glycolysis after activation.

"Our work demonstrates the importance of looking at particular types of immune cells that have different jobs to do, rather than looking at them all mixed together, which can hide problems specific to particular cells," said Dr. Hanson. "Additional studies focusing on specific cell types will be important to unravel what's gone wrong with immune defenses in ME/CFS."

Dr. Hanson's group also looked at mitochondrial size and membrane potential, which can indicate the health of T cell mitochondria. CD4 cells from healthy controls and people with ME/CFS showed no significant differences in mitochondrial size or function. CD8 cells from people with ME/CFS showed decreased membrane potential compared to healthy cells during both resting and activated states.

Dr. Hanson's team examined associations between cytokines, chemical messengers that send instructions from one cell to another, and T cell metabolism. The findings revealed different, and often opposite, patterns between healthy and ME/CFS cells, suggesting changes in the immune system. In addition, the presence of cytokines that cause inflammation unexpectedly correlated with decreased metabolism in T cells.

This study was supported in part by the NIH's ME/CFS Collaborative Research Network, a consortium supported by multiple institutes and centers at NIH, consisting of three collaborative research centers and a data management coordinating center. The research network was established in 2017 to help advance research on ME/CFS.

"In addition to providing valuable insights into the immunology of ME/CFS, we hope that the results coming out of the collaborative research network will inspire more researchers, particularly those in the early stages of their careers, to work on this disease," said Joseph Breen, Ph.D., section chief, Immunoregulation Section, Basic Immunology Branch, National Institute of Allergy and Infectious Diseases (NIAID), which partially funded the study.

Future research studies will examine metabolism in other subsets of immune cells. In addition, researchers will investigate ways in which changes in metabolism affect the activity of T cells.

This study was supported by NINDS grant U54NS105541, NIAID grant R21AI117595, Simmaron Research, and an anonymous private donor.

NINDS (https://www.ninds.nih.gov/) is the nation's leading funder of research on the brain and nervous system. The mission of NINDS is to seek fundamental knowledge about the brain and nervous system and to use that knowledge to reduce the burden of neurological disease.

About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

NIH...Turning Discovery Into Health

Mandarano et al. "Myalgic encephalomyelitis/chronic fatigue syndrome patients exhibit altered T cell metabolism and cytokine associations." Journal of Clinical Investigation. December 12, 2019.

###


Real-World Outcomes & Technology Company OM1 Closes $50 Million Series C Financing To Make Healthcare More Measured, Precise, And Pre-Emptive -…

BOSTON, Dec. 18, 2019 /PRNewswire/ -- OM1, a real-world outcomes and technology company, today announced $50 million in Series C financing led by Scale Venture Partners, with participation from existing investors, including General Catalyst (GC), Polaris Partners, and 7wire Ventures. In conjunction with the funding, Rory O'Driscoll, Partner at Scale Venture Partners, has joined OM1's Board of Directors.

"Clinical outcomes are the most important metric in healthcare," said Dr. Richard Gliklich, CEO and founder of OM1. "With this funding, OM1 will accelerate our work towards delivering rapid access to real-world outcomes and evidence and with helping our customers apply those data in impactful ways."

Increasingly, healthcare stakeholders, including regulators, payers and providers, are seeking real-world evidence to support outcomes-based decision making. By organizing health information and applying artificial intelligence (AI) technology, OM1 helps customers generate and use real-world evidence more rapidly and effectively to gain regulatory approval, understand the effectiveness, safety and value of treatments, and personalize care.

"AI and data are driving factors in the transformation of many industries," said Driscoll. "OM1 is at the forefront of bridging these two in transformative ways in healthcare, and we are excited to be part of the journey to drive the better development of medicine and delivery of care."

OM1 focuses on specific therapeutic areas, including chronic conditions in immunology, rheumatology, cardiometabolic disorders, musculoskeletal conditions and central nervous system (CNS)/behavioral health. Among the products developed by OM1 are industry-leading therapeutic-focused registries for advancing medical research, such as in rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE), and state-of-the-art AI solutions for measuring and predicting outcomes for patients and populations.

The funding comes on the heels of a high-growth year for OM1 in which the company has seen more than 400% growth in year-over-year sales. OM1 will use the funding to continue the buildout of its data-driven solutions for real-world evidence, value-based care, and predictive medicine.

OM1 was founded in 2015 by Dr. Richard Gliklich, an Executive-in-Residence (XIR) at GC and the former founder of Outcome, a technology and services company focused on real-world research and health outcomes that was acquired in 2011. Dr. Gliklich is also the principal investigator for a major federal effort focused on outcomes measurement and standardization.

For more information, visit www.om1.com.

Contact

Renee Hurley, Head of Marketing, OM1
617-620-9571
rhurley@om1.com

About OM1

OM1 is a leading real-world outcomes and technology company leveraging big clinical data and AI to better understand, compare, and predict patient outcomes. OM1's real world evidence platform, clinical registries and AI technologies enable clients to accelerate research, measure and benchmark health outcomes and to personalize patient care. Learn more at http://www.om1.com.

About Scale Venture Partners

Scale (@scalevp) invests in software companies that are building the intelligent connected world. Investments include: Bill.com, Box (BOX), Cloudhealth, Pantheon, Demandbase, DocuSign (DOCU), ExactTarget (ET), HubSpot (HUBS), JFrog, Lever, OneLogin, and WalkMe. Scale partners with entrepreneurs to support accelerated growth from the first customer to market leadership. Founded in 2000, Scale has over $1 billion under management and is located in Silicon Valley. For more information, visit http://www.scalevp.com.

View original content to download multimedia: http://www.prnewswire.com/news-releases/real-world-outcomes--technology-company-om1-closes-50-million-series-c-financing-to-make-healthcare-more-measured-precise-and-pre-emptive-300976837.html

SOURCE OM1


Eli Lilly Stock Rises as Earnings Guidance Beats Analyst Expectations – Barron’s


Shares of the drugmaker Eli Lilly jumped 1.1% in premarket trading on Tuesday as the company announced 2020 financial guidance that is higher than current Wall Street estimates. The guidance comes a day after the company increased its quarterly dividend by 15% and helps extend a breakout that began in November.

Eli Lilly (ticker: LLY) projected that its operating margin would be 31% on a non-GAAP basis next year. "This is better than what investors we spoke with were expecting and represents a step-up from the 28.6% operating margin in 3Q19," wrote Cantor Fitzgerald analyst Louise Chen in a note out Tuesday.

Lilly said it expected revenue in 2020 of between $23.6 billion and $24.1 billion. As of Tuesday morning, the Wall Street consensus estimate was $21.1 billion, according to FactSet.

The company said it expected non-GAAP earnings per share of between $6.70 and $6.80 in 2020, higher than the Wall Street consensus estimate of $5.95, according to FactSet.

"We expect 2020 to be a year of strong operating and financial performance for Lilly, characterized by revenue growth for our key medicines both in the U.S. and in international markets, ongoing productivity initiatives leading to further margin expansion, continued progress in our clinical pipeline of new medicines, and solid cash flow," said Josh Smiley, the company's chief financial officer, in a statement.

The back story. Shares of Lilly are up 6.2% so far this year. The stock is trailing the S&P 500, which is up 27.3% this year, the S&P 500 Health Care sector index, up 17.5% this year, and the S&P 500 Pharmaceuticals industry group, up 9.5% this year.

What's new. In its announcement Tuesday, Lilly said that it expected its 2020 revenue growth to be driven by sales of products including the diabetes drug Trulicity, the psoriasis drug Taltz, the migraine drug Emgality, and Reyvow, another migraine drug recently approved by the Food and Drug Administration.

Lilly said that if it meets the revenue forecast, it will hit the 7% revenue compound annual growth rate it had previously projected for the 2015-2020 time frame.

The company also increased its dividend on Monday, announcing that the first quarter dividend in 2020 will be 74 cents per share, up from 64.5 cents per share.

"Lilly is in the early phase of an exciting period of prolonged growth for the company, driven by an expanding portfolio of new medicines focused on diabetes, oncology, immunology, and neuroscience," said the company's chairman and CEO, David Ricks.

Looking forward. The company will discuss the new financial guidance on a conference call set to begin at 9 a.m.

Write to Josh Nathan-Kazis at josh.nathan-kazis@barrons.com


A tale of two explanations: Enhancing human trust by explaining robot behavior – Science

Embodied haptic model details

The embodied haptic model leverages low-level haptic signals obtained from the robot's manipulator to make action predictions based on the human poses and forces collected with the tactile glove. This embodied haptic sensing allows the robot to reason about (i) its own haptic feedback by imagining itself as a human demonstrator and (ii) what a human would have done under similar poses and forces. The critical challenge here is to learn a mapping between equivalent robot and human states, which is difficult due to the different embodiments. From the perspective of generalization, manually designed embodiment mappings are not desirable. To learn from human demonstrations on arbitrary robot embodiments, we propose an embodied haptic model general enough to learn between an arbitrary robot embodiment and a human demonstrator.

The embodied haptic model consists of three major components: (i) an autoencoder to encode the human demonstration in a low-dimensional subspace (we refer to the reduced embedding as the human embedding); (ii) an embodiment mapping that maps robot states onto a corresponding human embedding, providing the robot with the ability to imagine itself as a human demonstrator; and (iii) an action predictor that takes a human embedding and the current action being executed as input and predicts the next action to execute, trained using the action labels from human demonstrations. Figure 2B shows the embodied haptic network architecture. Using this network architecture, the robot infers what action a human was likely to execute on the basis of this inferred human state. This embodied action prediction model picks the next action according to

$$a_{t+1} \sim p(a_{t+1} \mid f_t, a_t) \tag{1}$$

where a_{t+1} is the next action, f_t is the robot's current haptic sensing, and a_t is the current action.

The autoencoder network takes an 80-dimensional vector from the human demonstration (26 for the force sensors and 54 for the poses of each link in the human hand) and uses the post-condition vector, i.e., the average of the last N frames (we choose N = 2 to minimize the variance), of each action in the demonstration as input (see the autoencoder portion of Fig. 2B). This input is then reduced to an eight-dimensional human embedding. Given a human demonstration, the autoencoder enables dimensionality reduction to an eight-dimensional representation.

The embodiment mapping maps from the robot's four-dimensional post-condition vector, i.e., the average of the last N frames (different from the human post-condition due to a faster sample rate on the robot gripper compared with the tactile glove; we chose N = 10), to an imagined human embedding (see the embodiment mapping portion of Fig. 2B). This mapping allows the robot to imagine its current haptic state as an equivalent low-dimensional human embedding. The robot's four-dimensional post-condition vector consists of the gripper position (one dimension) and the forces applied by the gripper (three dimensions). The embodiment mapping network uses a 256-dimensional latent representation, and this latent representation is then mapped to the eight-dimensional human embedding.

To train the embodiment mapping network, the robot first executes a series of supervised actions where, if the action produces the correct final state of the action, the robot post-condition vector is saved as input for network training. Next, human demonstrations of equivalent actions are fed through the autoencoder to produce a set of human embeddings. These human embeddings are considered as the ground-truth target outputs for the embodiment mapping network, regardless of the current reconstruction accuracy of the autoencoder network. Then, the robot execution data are fed into the embodiment mapping network, producing an imagined human embodiment. The embodiment mapping network optimizes to reduce the loss between its output from the robot post-condition input and the target output.

For the action predictor, the 8-dimensional human embedding and the 10-dimensional current action are mapped to a 128-dimensional latent representation, and the latent representation is then mapped to a final 10-dimensional action probability vector (i.e., the next action) (see action prediction portion of Fig. 2B). This network is trained using human demonstration data, where a demonstration is fed through the autoencoder to produce a human embedding, and that human embedding and the one-hot vector of the current action execution are fed as the input to the prediction network; the ground truth is the next action executed in the human demonstration.
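To make the three components concrete, below is a minimal PyTorch sketch of the architecture described above. The dimensions (an 80-dimensional human post-condition, an eight-dimensional human embedding, a four-dimensional robot post-condition, 256- and 128-dimensional latent layers, and a 10-dimensional action vector) come from the text; the autoencoder's hidden size, the activation functions, and the exact layer structure are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the three components; not the authors' exact implementation.
import torch
import torch.nn as nn

class HumanAutoencoder(nn.Module):
    """Encodes an 80-d human post-condition into an 8-d human embedding."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(80, 32), nn.ReLU(), nn.Linear(32, 8))
        self.decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 80))

    def forward(self, s_h):
        z_h = self.encoder(s_h)            # 8-d human embedding
        return z_h, self.decoder(z_h)      # embedding and reconstruction

class EmbodimentMapping(nn.Module):
    """Maps the robot's 4-d post-condition to an imagined 8-d human embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 256), nn.ReLU(), nn.Linear(256, 8))

    def forward(self, s_r):
        return self.net(s_r)

class ActionPredictor(nn.Module):
    """Predicts the next action from an embedding and the current one-hot action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(8 + 10, 128), nn.ReLU(), nn.Linear(128, 10))

    def forward(self, z, a_t):
        return self.net(torch.cat([z, a_t], dim=-1))  # logits over 10 actions
```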

The network in Fig. 2B is trained in an end-to-end fashion with three different loss functions in a two-step process: (i) a forward pass through the autoencoder to update the human embedding z_h. After computing the error L_reconstruct between the reconstruction ŝ_h and the ground-truth human data s_h, we back-propagate the gradient and optimize the autoencoder:

$$\mathcal{L}_{\text{reconstruct}}(s_h, \hat{s}_h) = \frac{1}{2}\,(s_h - \hat{s}_h)^2 \tag{2}$$

(ii) A forward pass through the embodiment mapping and the action prediction network. The embodiment mapping is trained by minimizing the difference L_mapping between the embodied robot embedding z_r and the target human embedding z_h; the target human embedding z_h is acquired through a forward pass through the autoencoder using a human demonstration post-condition of the same action label, s_h. We compute the cross-entropy loss L_prediction of the predicted action label â and the ground-truth action label a to optimize this forward pass:

$$\mathcal{L}_{\text{planning}}(\hat{a}, a) = \mathcal{L}_{\text{mapping}} + \lambda\,\mathcal{L}_{\text{prediction}}, \qquad \mathcal{L}_{\text{mapping}} = \frac{1}{2}\,(z_r - z_h)^2, \qquad \mathcal{L}_{\text{prediction}} = H(p(\hat{a}), q(a)) \tag{3}$$

where H is the cross entropy, p is the model prediction distribution, q is the ground-truth distribution, and λ is the balancing parameter between the two losses (see text S2.2 for detailed parameters and network architecture).
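A minimal training-step sketch for Eqs. 2 and 3 follows, using the modules sketched earlier. The optimizer, the value of λ (`lam`), whether the human embedding is detached when used as a target, and taking a single optimizer step over the combined losses are all assumptions; the text specifies only the two-step forward process and the three losses.

```python
import torch.nn.functional as F

lam = 1.0  # balancing parameter lambda between mapping and prediction losses (assumed)

def train_step(autoenc, mapping, predictor, opt, s_h, s_r, a_t_onehot, a_next_idx):
    # (i) autoencoder pass: reconstruct the human post-condition (Eq. 2)
    z_h, s_h_hat = autoenc(s_h)
    loss_reconstruct = 0.5 * ((s_h - s_h_hat) ** 2).sum(dim=-1).mean()

    # (ii) embodiment mapping and action prediction pass (Eq. 3);
    # the human embedding serves as the fixed target for the mapping
    z_r = mapping(s_r)
    loss_mapping = 0.5 * ((z_r - z_h.detach()) ** 2).sum(dim=-1).mean()
    logits = predictor(z_h.detach(), a_t_onehot)
    loss_prediction = F.cross_entropy(logits, a_next_idx)  # a_next_idx: class indices

    loss = loss_reconstruct + loss_mapping + lam * loss_prediction
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```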

A similar embodied haptic model was presented in (23) but with two separate loss functions, which is more difficult to train compared with the single loss function presented here. A clear limitation of the haptic model is the lack of long-term action planning. To address this problem, we discuss the symbolic task planner below and then discuss how we integrated the haptic model with the symbolic planner to jointly find the optimal action.

To encode the long-term temporal structure of the task, we endow the robot with a symbolic action planner that encodes semantic knowledge of the task execution sequence. The symbolic planner uses stochastic context-free grammars to represent tasks, where the terminal nodes (words) are actions and sentences are action sequences. Given an action grammar, the planner finds the optimal action to execute next on the basis of the action history, analogous to predicting the next word given a partial sentence.

The action grammar is induced using labeled human demonstrations, and we assume that the robot has an equivalent action for each human action. Each demonstration forms a sentence, x_i, and the collection of sentences forms a corpus, X. The segmented demonstrations are used to induce a stochastic context-free grammar using the method presented in (21). This method pursues T-AOG fragments to maximize the likelihood of the grammar producing the given corpus. The objective function is the posterior probability of the grammar given the training data X:

$$p(G \mid X) \propto p(G)\, p(X \mid G) = \frac{1}{Z} e^{-\alpha \|G\|} \prod_{x_i \in X} p(x_i \mid G) \tag{4}$$

where G is the grammar, x_i = (a_1, a_2, …, a_m) ∈ X represents a valid sequence of actions of length m from the demonstrator, α is a constant, ‖G‖ is the size of the grammar, and Z is the normalizing factor. Figure 3 shows examples of induced grammars of actions.

During the symbolic planning process, this grammar is used to compute which action is the most likely to open the bottle based on the action sequence executed thus far and the space of possible future actions. A pure symbolic planner picks the optimal action based on the grammar prior:

$$a^{*}_{t+1} = \arg\max_{a_{t+1}} p(a_{t+1} \mid a_{0:t}, G) \tag{5}$$

where a_{t+1} is the next action and a_{0:t} is the action sequence executed thus far. This grammar prior can be obtained as a ratio of two grammar prefix probabilities: p(a_{t+1} | a_{0:t}, G) = p(a_{0:t+1} | G) / p(a_{0:t} | G), where the grammar prefix probability p(a_{0:t} | G) measures the probability that a_{0:t} occurs as a prefix of an action sequence generated by the action grammar G. On the basis of a classic parsing algorithm, the Earley parser (31), and dynamic programming, the grammar prefix probability can be obtained efficiently by the Earley-Stolcke parsing algorithm (32). An example of pure symbolic planning is shown in fig. S4.

However, due to the fixed structure and probabilities encoded in the grammar, always choosing the action sequence with the highest grammar prior is problematic because it provides no flexibility. An alternative pure symbolic planner picks the next action to execute by sampling from the grammar prior:

$$a_{t+1} \sim p(a_{t+1} \mid a_{0:t}, G) \tag{6}$$

In this way, the symbolic planner samples different grammatically correct action sequences and increases the adaptability of the symbolic planner. In the experiments, we choose to sample action sequences from the grammar prior.
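A small sketch of how Eqs. 5 and 6 could be realized is shown below. Here `prefix_prob(sequence, grammar)` is a hypothetical helper standing in for the Earley-Stolcke prefix-probability computation referenced above; it is not an existing library call.

```python
import random

def next_action_distribution(history, grammar, actions, prefix_prob):
    """p(a_{t+1} | a_{0:t}, G) = p(a_{0:t+1} | G) / p(a_{0:t} | G)."""
    denom = prefix_prob(history, grammar)
    return {a: prefix_prob(history + [a], grammar) / denom for a in actions}

def plan_argmax(history, grammar, actions, prefix_prob):
    # Eq. 5: pick the single most likely grammatically valid next action.
    probs = next_action_distribution(history, grammar, actions, prefix_prob)
    return max(probs, key=probs.get)

def plan_sample(history, grammar, actions, prefix_prob):
    # Eq. 6: sample the next action from the grammar prior instead.
    probs = next_action_distribution(history, grammar, actions, prefix_prob)
    acts, weights = zip(*probs.items())
    return random.choices(acts, weights=weights, k=1)[0]
```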

In contrast to the haptic model, this symbolic planner lacks the adaptability to real-time sensor data. However, this planner encodes long-term temporal constraints that are missing from the haptic model, because only grammatically correct sentences have nonzero probabilities. The GEP adopted in this paper naturally combines the benefits of both the haptic model and the symbolic planner (see the next section).

The robot imitates the human demonstrator by combining the symbolic planner and the haptic model. The integrated model finds the next optimal action considering both the action grammar G and the haptic input f_t:

$$a^{*}_{t+1} = \arg\max_{a_{t+1}} p(a_{t+1} \mid a_{0:t}, f_t, G) \tag{7}$$

Conceptually, this can be thought of as a posterior probability that considers both the grammar prior and the haptic signal likelihood. The next optimal action is computed by an improved GEP (22); GEP is an extension of the classic Earley parser (31). In the present work, we further extend the original GEP to make it applicable to multisensory inputs and provide explanations in real time for robot systems, instead of for offline video processing (see details in text S4.1.3).

The computational process of GEP is to find the optimal label sentence according to both a grammar and a classifier output of probabilities of labels for each time step. In our case, the labels are actions, and the classifier output is given by the haptic model. Optimality here means maximizing the joint probability of the action sequence according to the grammar prior and haptic model output while being grammatically correct.

The core idea of the algorithm is to directly and efficiently search for the optimal label sentence in the language defined by the grammar. The grammar constrains the search space to ensure that the sentence is always grammatically correct. Specifically, a heuristic search is performed on the prefix tree expanded according to the grammar, where the path from the root to a node represents a partial sentence (prefix of an action sequence).

GEP is a grammar parser capable of combining the symbolic planner with low-level sensory input (haptic signals in this paper). The search process in GEP starts from the root node of the prefix tree, which is an empty terminal symbol indicating that no terminals are parsed. The search terminates when it reaches a leaf node. In the prefix tree, all leaf nodes are parsing terminals e that represent the end of parse, and all non-leaf nodes represent terminal symbols (i.e., actions). The probability of expanding a non-leaf node is the prefix probability, i.e., how likely the current path is to be a prefix of the complete action sequence. The probability of reaching a leaf node (parsing terminal e) is the parsing probability, i.e., how likely the path up to the last non-leaf node is to be the executed actions followed by the next action. In other words, the parsing probability measures the probability that the last non-leaf node in the path will be the next action to execute. It is important to note that this prefix probability is computed on the basis of both the grammar prior and the haptic prediction; in contrast, in the pure symbolic planner, the prefix probability is computed on the basis of only the grammar prior. An example of the computed prefix and parsing probabilities and output of GEP is given by Fig. 8, and the search process is illustrated in fig. S5. For an algorithmic description of this process, see algorithm S1.

Fig. 8. (A) A classifier is applied to a six-frame signal and outputs a probability matrix as the input. (B) Table of the cached probabilities of the algorithm. For all expanded action sequences, it records the parsing probabilities at each time step and prefix probabilities. (C) Grammar prefix tree with the classifier likelihood. The GEP expands a grammar prefix tree and searches in this tree. It finds the best action sequence when it hits the parsing terminal e. It finally outputs the best label "grasp, pinch, pull" with a probability of 0.033. The probabilities of children nodes do not sum to 1 because grammatically incorrect nodes are eliminated from the search and the probabilities are not renormalized (22).
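The prefix-tree search can be sketched as a best-first search, shown below. `prefix_prob` and `parse_prob` are hypothetical helpers that would combine the grammar prior with the haptic model's per-action likelihoods as described above; the actual GEP caches these quantities with Earley-style dynamic programming rather than recomputing them from scratch.

```python
import heapq
from itertools import count

def gep_search(actions, prefix_prob, parse_prob):
    """Best-first search over the grammar prefix tree.

    prefix_prob(seq): probability that seq is a prefix of the full action sequence.
    parse_prob(seq):  probability that seq ends at parsing terminal e, i.e., that
                      seq is the executed actions plus the next action.
    """
    # Heap entries: (-probability, tie-breaker, sequence, is_terminal). Because a
    # prefix probability upper-bounds the parse probability of every continuation,
    # the first terminal entry popped is the optimal action sequence.
    tie = count()
    frontier = [(-prefix_prob([]), next(tie), [], False)]
    while frontier:
        neg_p, _, seq, is_terminal = heapq.heappop(frontier)
        if is_terminal:
            return seq, -neg_p
        heapq.heappush(frontier, (-parse_prob(seq), next(tie), seq, True))  # stop at e
        for a in actions:                       # or expand with one more action
            p = prefix_prob(seq + [a])
            if p > 0:                           # grammar prunes invalid branches
                heapq.heappush(frontier, (-p, next(tie), seq + [a], False))
    return None, 0.0
```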

The original GEP is designed for offline video processing. Here, we made modifications to enable online planning for a robotic task. The major difference between parsing and planning is the uncertainty about past actions: there is uncertainty about observed actions during parsing. However, during planning, there is no uncertainty about executed actions; the robot directly chooses which actions to execute, thereby removing any ambiguity regarding which action was executed at a previous time step. Hence, we need to prune the impossible parsing results after executing each action; each time after executing an action, we change the probability vector of that action to a one-hot vector. This modification effectively prunes the action sequences that are inconsistent with the actions the robot has executed thus far.

Human participants were recruited from the University of California, Los Angeles (UCLA) Department of Psychology subject pool and were compensated with course credit for their participation. A total of 163 students were recruited, each randomly assigned to one of the five experimental groups. Thirteen participants were removed from the analysis for failing to understand the haptic display panel by not passing a recognition task. Hence, the analysis included 150 participants (mean age of 20.7). The symbolic and haptic explanation panels were generated as described in the Explanation generation section. The text explanation was generated by the authors based on the robot's action plan to provide an alternate text summary of robot behavior. Although such text descriptions were not directly yielded by the model, they could be generated by modern natural language generation methods.

The human experiment included two phases: familiarization and prediction. In the familiarization phase, participants viewed two videos showing a robot interacting with a medicine bottle, with one successful attempt at opening the bottle and one failed attempt in which the bottle was not opened. In addition to the RGB videos showing the robot's executions, different groups viewed the different forms of explanation panels. At the end of familiarization, participants were asked to assess how well they trusted/believed that the robot had the ability to open the medicine bottle (see text S2.5 and fig. S7 for the illustration of the trust rating question).

Next, the prediction phase presented all groups with only RGB videos of a successful robot execution; no group had access to any explanatory panels. Specifically, participants viewed videos segmented by the robot's actions; for segment i, videos start from the beginning of the robot execution up to the ith action. For each segment, participants were asked to predict what action the robot would execute next (see text S2.5 and fig. S8 for an illustration of the action prediction question).

Regardless of group assignment, all RGB videos were the same across all groups; i.e., we showed the same RGB video to all groups while varying only the explanation panels. This experimental design isolates potential effects of execution variations in the different robot execution models presented in the Robot learning section; we sought only to evaluate how well explanation panels foster qualitative trust and enhance prediction accuracy, and we kept robot execution performance constant across groups to remove potential confounding.

For both qualitative trust and prediction accuracy, the null hypothesis is that the explanation panels foster equivalent levels of trust and yield the same prediction accuracy across different groups, and therefore, no difference in trust or prediction accuracy would be observed. The test is a two-tailed independent-samples t test comparing performance between two groups of participants, because we used a between-subjects design in the study, with a commonly used significance level α = 0.05, assuming a t-distribution; the rejection region is P < 0.05.
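As a concrete illustration of this comparison, a minimal sketch using SciPy's independent-samples t test is shown below; the rating arrays are placeholders, not the study's actual data.

```python
from scipy import stats

# Hypothetical trust ratings from two explanation groups (placeholder values).
group_a = [6, 7, 5, 6, 7, 6, 5, 7]
group_b = [4, 5, 4, 3, 5, 4, 5, 3]

# Two-tailed independent-samples t test (SciPy's default is two-sided).
t_stat, p_value = stats.ttest_ind(group_a, group_b)
reject_null = p_value < 0.05   # alpha = 0.05
print(t_stat, p_value, reject_null)
```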


Is Screen Time Really Bad for Kids? – The New York Times

The first iPhone was introduced in 2007; just over a decade later, in 2018, a Pew survey found that 95 percent of teenagers had access to a smartphone, and 45 percent said they were online almost constantly. When researchers began trying to gauge the impact of all this screen time on adolescent mental health, some reported alarming results. One widely publicized 2017 study in the journal Clinical Psychological Science found that the longer adolescents were engaged with screens, the greater their likelihood of having symptoms of depression or of attempting suicide. Conversely, the more time they spent on nonscreen activities, like playing sports or hanging out with friends, the less likely they were to experience those problems. These and other similar findings have helped stoke fears of a generation lost to smartphones.

But other researchers began to worry that such dire conclusions were misrepresenting what the existing data really said. Earlier this year, Amy Orben and Andrew K. Przybylski, at Oxford University, applied an especially comprehensive statistical method to some of the same raw data that the 2017 study and others used. Their results, published this year in Nature Human Behaviour, found only a tenuous relationship between adolescent well-being and the use of digital technology. How can the same sets of numbers spawn such divergent conclusions? It may be because the answer to the question of whether screen time is bad for kids is "It depends." And that means figuring out "On what?"

The first step in evaluating any behavior is to collect lots of health-related information from large numbers of people who engage in it. Such epidemiological surveys, which often involve conducting phone interviews with thousands of randomly selected people, are useful because they can ask a wider range of questions and enroll far more subjects than clinical trials typically can. Getting answers to dozens of questions about people's daily lives (how often they exercise, how many close friends they have) allows researchers to explore potential relationships between a wide range of habits and health outcomes and how they change over time. Since 1975, for instance, the National Institute on Drug Abuse has been funding a survey called Monitoring the Future (M.T.F.), which asks adolescents about drug and alcohol use as well as other things, including, more recently, vaping and digital technology; in 2019, more than 40,000 students from nearly 400 schools responded.

This method of collecting data has drawbacks, though. For starters, people are notoriously bad at self-reporting how often they do something or how they feel. Even if their responses are entirely accurate, that data can't speak to cause and effect. If the most depressed teenagers also use the most digital technology, for example, there's no way to say if the technology use caused their low mood or vice versa, or if other factors were involved.

Gathering data on so many behaviors also means that respondents aren't always asked about topics in detail. This is particularly problematic when studying tech use. In past decades, if researchers asked how much time a person spent with a device (TV, say), they knew basically what happened during that window. But screen time today can range from texting friends to using social media to passively watching videos to memorizing notes for class, all very different experiences with potentially very different effects.

Still, those limitations are the same for everyone who accesses the raw data. What makes one study that draws on that data distinct from another is a series of choices researchers make about how to analyze those numbers. For instance, to examine the relationship between digital-technology use and well-being, a researcher has to define well-being. The M.T.F. survey, as the Nature paper notes, has 13 questions concerning depression, happiness and self-esteem. Any one of those could serve as a measure of well-being, or any combination of two, or all 13.

A researcher must decide on one before running the numbers; testing them all, and then choosing the one that generates the strongest association between depression and screen use, would be bad science. But suppose five ways produce results that are strong enough to be considered meaningful, while five don't. Unconscious bias (or pure luck) could lead a researcher to pick one of the meaningful ways and find a link between screen time and depression without acknowledging the five equally probable outcomes that show no such link. "Even just a couple of years ago, we as researchers still considered statistics kind of like a magnifying glass, something you would hold to the data and you would then see what's inside, and it just helped you extract the truth," Orben, now at the University of Cambridge, says. "We now know that statistics actually can change what you see."

To show how many legitimate outcomes a large data set can generate, Orben and Przybylski used a method called specification curve analysis to look for a relationship between digital-technology use and adolescent well-being in three ongoing surveys of adolescents in the United States and the United Kingdom, including the M.T.F. A specification is any decision about how to analyze the data (how well-being is defined, for example). Researchers doing specification curve analysis don't test a single choice; they test every possible combination of choices that a careful scientist could reasonably make, generating a range of outcomes. For the M.T.F., Orben and Przybylski identified 40,966 combinations that could be used to calculate the relationship between psychological well-being and the use of digital technology.
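A stripped-down sketch of the idea is below: compute the same simple association under every combination of outcome and predictor definitions and keep the whole curve of estimates rather than a single one. The column names and the use of a standardized least-squares slope are illustrative assumptions; they are not the exact model or the 40,966 specifications used in the Nature Human Behaviour paper.

```python
from itertools import product
import numpy as np
import pandas as pd

def specification_curve(df, wellbeing_cols, tech_cols):
    """Estimate the tech/well-being association under every combination of
    outcome and predictor definitions; return the full curve of estimates."""
    results = []
    for wb, tech in product(wellbeing_cols, tech_cols):
        sub = df[[wb, tech]].dropna()
        x = (sub[tech] - sub[tech].mean()) / sub[tech].std()
        y = (sub[wb] - sub[wb].mean()) / sub[wb].std()
        slope = np.polyfit(x, y, 1)[0]          # standardized association
        results.append({"wellbeing": wb, "tech": tech, "estimate": slope})
    return pd.DataFrame(results).sort_values("estimate")

# Hypothetical usage: the median of curve["estimate"] summarizes the whole
# specification space instead of one hand-picked analysis.
# curve = specification_curve(survey_df,
#     wellbeing_cols=["felt_happy", "self_esteem"],
#     tech_cols=["hours_social_media", "hours_tv"])
```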

When they averaged them, they found that digital-technology use has a small negative association with adolescent well-being. But to put that association in context, they used the same method to test the relationship between adolescent well-being and other variables. And in all the data sets, smoking marijuana and being bullied were more closely linked with decreased well-being than tech use was; at the same time, getting enough sleep and regularly eating breakfast were more closely tied to positive feelings than screen time was to negative ones. In fact, the strength of the association screen time had with well-being was similar to neutral factors like wearing glasses or regularly eating potatoes.

Not finding a strong association doesn't mean that screen time is healthy or safe for teenagers. It could come with huge risks that are simply balanced by huge rewards. "The part that people don't appreciate is that digital technology also has significant benefits," says Nick Allen, director of the Center for Digital Mental Health at the University of Oregon. These include helping teenagers connect with others. The real conclusion of the Nature paper is that large surveys may be too blunt an instrument to reveal what those risks and benefits truly are. "What's needed are experiments that break screen time into its component parts and change one of them in order to see what impact that has and why," says Ronald Dahl, director of the Institute of Human Development at the University of California, Berkeley. A screen-related activity may be beneficial or harmful depending on who is doing it, how much they're doing it, when they're doing it and what they're not doing instead. "If we just respond to emotions or fears about screen time, then we actually could be interfering with our ability to understand some of these deeper questions," he says.

Allen notes a vexation: The behavioral data is already being quantified on the granular level researchers need. But tech companies don't routinely share that information with scientists. To deliver the advice the public wants, Orben says, will require "a very difficult ethical conversation" on data sharing. "I don't think we can shy away from it much longer." Till then, parents struggling with how much screen time is O.K. for their children might benefit from trying, as researchers are, to get a more detailed picture of that behavior. "Ask your kids: What are you doing on there? What makes you feel good? What makes you feel bad?" says Michaeline Jensen, of the University of North Carolina, Greensboro. She was an author of a study in August showing that on days when teenagers use more technology, they were no more likely to report problems like depressive symptoms or inattention than on days when they used less. "Even an hour a day, that could be particularly problematic or enriching."


The Giving Season: It's Really a Thing | Homes & Lifestyle – Noozhawk

By Tom Jacobs for UCSB | December 18, 2019 | 1:52 p.m.

It's easy to become cynical about the holiday spirit. For a few weeks every year, we focus on giving to family, friends, charitable organizations. But soon after the new year, most of us return to a self-centered status quo.

Hypocrisy?

Not at all, according to evolutionary anthropologist Michael Gurven. Chair of integrative anthropological sciences at UC Santa Barbara, he argues that giving to others is a fundamental part of human nature, but so is being selective about who we give to, and under what circumstances. Therefore a season of giving makes perfect sense.

"The impulse to connect with others is a human universal, and a major way we do this is by giving and sharing," Gurven said. "When you compare us to our nearest primate relative, chimpanzees, we share a wide range of resources and give freely, not just upon request or in response to begging."

That's especially true at this time of year, when the air is filled with familiar melodies of carols proclaiming peace and goodwill.

In his research, Gurven approaches human behavior from an evolutionary perspective, which posits that our habits and motivations often echo behaviors that allowed our ancient ancestors to survive and thrive. Our impulse to give to others, he argues, very much reflects our biological legacy.

Early and again late in life, Gurven notes, we depend upon others to take care of us. These experiences imprint on us the importance of sharing.

"Even in hunter-gatherer societies, people can't make ends meet until they're in their late teens," he said. "That means the first 18 years of life, you need to receive food from others. That can also be true in your productive prime, say your 30s and 40s, if you have a lot of hungry mouths to feed in your family."

"On the other hand, chimpanzees can feed themselves shortly after weaning. Humans grow and develop slowly, and it takes a long time to become a successful food producer, be it in hunting, farming or gathering, he said. That training period requires subsidies from other individuals.

"Cooperation is not just a curious human attribute its a large part of who we are.

"That said, as philanthropists, we are very selective," Gurven said. "If we gave everything we produced away every day, we'd be destitute. So we are strategic about what we give and who we give it to. If you're primed to give all the time, it could become overwhelming, and then you might not want to give at all."

"So many of us wait until the holidays, the time of year when all of the signals that inspire giving are turned up really high. When you're at a supermarket, the Salvation Army is right outside the door. You can't avoid them."

Gurven believes all those opportunities to give can produce a certain contagion. "Generosity is in the air," he said. "Everyone around you is giving, and we're competitive."

"If you get an appeal in the mail that starts 'Dear Friend' or 'Dear Brother,' the charity is creating a fictive social relationship that might pull on your obligation to give to family or close ties," he said. "When a friend donates to a charitable cause, you might see it on social media; it's virtue signaling to everybody, 'see what I just did,' which could inspire others to do the same thing."

Then there are those holiday white elephant parties, which Gurven notes are opportunities to bring people together and remind them to think about each other.

"Some people act altruistically no matter what," he said. "They have to watch out that they don't get exploited. For the rest of us, context matters, culture matters. The holiday season focuses us. We recognize how important our social networks are, so we spend money on gifts for family and friends."

OK, but why do we take the time and effort to select presents for our loved ones, rather than just giving the gift we can be assured they will like: cold, hard cash?

"When you exchange gifts with people in your social network, (well thought out) gifts have a lot of symbolic value," Gurven explained. "An economist would argue that money is the best gift because you can get anything you want, which should maximize your satisfaction."

"But thats too easy. It doesnt show much about your relationship; it just shows you have a thick wallet. If Im giving you a gift that was both costly to me and shows that Ive been paying careful attention to your likes and dislikes, from your perspective it signals, He must really value me.

"As a result, youre more likely to value our friendship and want to interact in the future. Thats a big deal.

So take care when choosing those gifts, and don't feel bad when your donations drop off in mid-January. Both, Gurven said, are prime examples of human nature.

Tom Jacobs for UCSB.


BWW Review: THE SANTALAND DIARIES at The Whisenhunt At ZACH – Broadway World

After last year's successful return, THE SANTALAND DIARIES is back with J. Robert "Jimmy" Moore starring as Crumpet the Elf under the masterful direction of Nat Miller.

What was once an essay based on David Sedaris' personal experiences working as an elf at Macy's during the Holiday season has developed into a witty and irreverent portrayal of human behavior at this time of the year. Adapted for the stage by Joe Mantello, THE SANTALAND DIARIES follows the misadventures of an out-of-work actor who finds himself applying to be an elf at a major department store in New York City. After getting the job and going through countless hours of training, costuming, and orientation, he becomes Crumpet the Elf. Crumpet turns to humor and cynicism as he tries to juggle not-so-sober Santas, over-the-top parents, vomiting children, and magnificent tantrums.

The Whisenhunt at ZACH provides a perfect setting for this one-man show and Mr. Moore makes use of every inch of that space. He artfully interacts with the audience, drawing from their energy and laughter, to deliver one of the most entertaining performances I have seen in decades. There is an intimacy to the space, designed by J. Aaron Bell, that makes the audience an accomplice to sarcastic storytelling, and as such, we shamelessly laugh as Crumpet impersonates the several characters that visit Santaland. We nod in agreement as he retells the adventures of the day with the most politically incorrect undertones, and we feel little guilt when the elf reaches the end of the season without a shred of Holiday Spirit left. In one of the most memorable moments of the show, Mr. Moore shows us the truth behind Crumpet's cynicism. He would much rather be singing show tunes than getting paid to be one of Santa's helpers, although the latter is what pays the bills.

David Sedaris' hysterical and contemptuous work in THE SANTALAND DIARIES is delivered through the skillful artistic collaboration between J. Robert "Jimmy" Moore and Nat Miller. Be prepared to laugh uncontrollably from beginning to end in a play that, if not already among your Holiday traditions, should be!

THE SANTALAND DIARIES

When: Now playing through December 29, 2019

Where: The Whisenhunt at ZACH | 1510 Toomey Road | Austin, TX | 78704

Tickets: Start at $40 available at ZACH's box office - (512) 476-0541 x1, zachtheatre.org

Duration: 75 minutes with no intermissions

Age Recommendation: Fourteen and up for adult humor


A Film More Talked About Than Seen, 'Sátántangó' To Screen At The MFA – WBUR

The cinephile's Mount Everest, director Béla Tarr's massive, magisterial, 439-minute Sátántangó gets two rare screenings at the Museum of Fine Arts this weekend. This is one of the film events of the year, though admittedly not an undertaking for the faint of heart. Seldom shown and only fleetingly available on home video, the 1994 film was for the longest time a movie more talked about than seen. In hardcore cinema circles Sátántangó was something discussed in hushed, reverent tones, with fans like Susan Sontag saying she'd be glad to watch it once a year for the rest of her life, while folks traveled hundreds of miles to attend infrequent 35mm presentations of the movie in all its muddy, oppressive glory.

Logistics precluded many theatrical showings: for most venues, the 20-odd reels of film proved prohibitively expensive to ship, especially since the seven-and-a-half-hour running time limits exhibitors to a single showtime per day. According to my research, the last time Sátántangó came to the Boston area was a screening at the Harvard Film Archive in March of 2012. But now, in advance of a Blu-ray release slated for 2020, the film returns to celebrate its 25th anniversary in a stunning new 4K digital restoration from Arbelos Films, pristinely preserving every spatter of grime and muck in this doomed Hungarian bog town.

If you're serious about exploring international cinema, at some point or another you've got to reckon with Sátántangó. And boy, is it a work to be reckoned with. At once spellbinding and infuriating, annoying and transcendent, it's a movie that alternates between being mordantly hilarious and intensely, unutterably tragic. Shot in staggering, high-contrast black-and-white long takes that stretch out into eternities, Sátántangó bends your perception of time and turns monotony into an epiphany. The film's opening shot is a full eight uninterrupted minutes of cows meandering their way through a dilapidated village and it's one of the single greatest things I've ever seen.

Based on a novel by László Krasznahorkai, the movie chronicles in minute detail the unravelling of a desperately poor farming community upon the return of a mysterious prodigal son (Mihály Víg) the villagers had long presumed dead. A lot of interpretations like to claim this is all an allegory for the collapse of communism and capitalism's rise in Eastern Europe, but the director is on record rejecting such readings, and personally, I consider the film's insights into human behavior to be more depressingly universal than overtly political. (Those cows aren't the only dumb herd animals we're watching.)

Broken up into 12 discrete segments, the movie's designed to mimic the structure of a tango (six steps forward and six steps back) so the events of the film are constantly doubling back upon themselves. Sometimes it's a good long while before you realize you're watching something you've already seen from another angle; other times chapter titles like "The Perspective from the Front" and "The Perspective from Behind" are more helpful in terms of getting your bearings. According to the director, there are only 150 shots in these entire seven-and-a-half hours, so between the temporal dislocation and durational excesses, the film feels like it's rewiring your brain while you're watching it.

When I was younger and jumpier I never used to have much patience for this kind of thing. Academics call it "slow cinema" and I wanted films to get on with things already. But now that our attention spans have atomized and the world is too much with us, I look at going to the movies as more like a form of meditation, a place where I can get out of my head and have somebody else's dream for a little while. I don't care so much about plot these days and look more for sensation and mood. As arduous an experience as Sátántangó may be, I can see why Susan Sontag wanted to watch it once a year. When it's over you really feel like you've been somewhere.

"I'm always drawn to movies that make time evaporate," says my old friend Matt Prigge, a former film critic for Metro and now an adjunct professor at NYU. Matt's in the five-timers club for this film, and before moving to New York would travel great distances to see Sátántangó on a big screen. "People tend to single out the endless shots trailing people trudging through miserable hellscapes, but it has great variety," he says. "Every chapter offers different ways to approach slow cinema. It's f---ing funny, too. Except when it absolutely isn't."

Indeed, the movie's most notorious sequence finds an abused and neglected young girl attempting to exhibit the only control of which she's capable by torturing and poisoning her pet cat, the endpoint to a cycle of cruelty we've witnessed working its way down the town's hierarchy until, naturally, it is the smallest, most helpless creatures that pay most dearly. (Despite urban legends to the contrary, Tarr insists the scene was shot under a veterinarian's supervision and that kitty went on to live a long and happy life.)

Amid all the moldering rot, drunken boorishness and stupid, venal scheming, Sátántangó also offers us a glimpse of something infinite, a vastness of space and time within this small village that can only be experienced at such an obscene duration. Our story starts with a shot of the dawn slowly seeping in through a window until it eventually lights up in a dark room, only to end some seven hours or so later with a soused hermit boarding up his windows to block out the sun, slurring a repetition of the narration with which the film began. Six steps up and back again, the tango will go on until the cows come home.

Sátántangó screens at the Museum of Fine Arts on Saturday, Dec. 21 and Sunday, Dec. 22.


Soundtrack Review: This Is Us With A Groove That's Pretty Good – Forbes


When This Is Us hit the scene back in 2016, it took the television world by storm, much to the surprise of nearly every single one of NBC's competitors. The show lacked a giant genre premise, a (really) crazy hook and super intense drama, yet somehow managed to connect with a large majority of primetime viewership by simply being a show about the complexities that come with being a human being. Since then, many imitators have come and gone without managing to stake their claim as a real companion to the series. But, that's about to change with Netflix's Soundtrack.

Created by Joshua Safran, Soundtrack follows the lives of various Los Angeles residents as they try to go about their lives as best they can, through a jukebox-musical-like lens that gives the audience a peek into the music living in their heads during the most notable moments of their existence.

Soundtrack is a show about love that also serves as a love letter to the modern age of self-scoring. It's a show that could only exist now, in the age of digital music. It's a show that acknowledges the way many of us consume sound like the way we breathe in today's age. There's seldom a time each of us doesn't have our earbuds jammed into our skulls while scoring our own existence. This is the modern human behavior the series is trying to bring to the screen.

But, while the grand idea behind the series is commendable, it's not without its quirks. The show presents a tone that takes a moment to get used to. It's hyper-real but subdued, while also being of a low-stakes nature but also kind of not. The show carries itself in a very loose nature that one must give themselves over to in order to enjoy. However, that willingness to go on the show's ride is rewarded with a kind of sweetness rarely seen from Netflix's library these days. One could even argue the show's development for network television really gives it an edge over its streaming competition.

Overall, Soundtrack has a lot going for it that will delight fans of this kind of show. The music is modern pop that plays as it should, and the stories carry with them that kind of melodrama that's easy to get sucked into if allowed to be enjoyed. This is one that will play very well during the holiday season this year.

Soundtrack premieres Wednesday, December 18th on Netflix
