Category Archives: Human Behavior

A tale of two explanations: Enhancing human trust by explaining robot behavior – Science

Embodied haptic model details

The embodied haptic model leverages low-level haptic signals obtained from the robot's manipulator to make action predictions based on the human poses and forces collected with the tactile glove. This embodied haptic sensing allows the robot to reason about (i) its own haptic feedback by imagining itself as a human demonstrator and (ii) what a human would have done under similar poses and forces. The critical challenge here is to learn a mapping between equivalent robot and human states, which is difficult due to the different embodiments. From the perspective of generalization, manually designed embodiment mappings are not desirable. To learn from human demonstrations on arbitrary robot embodiments, we propose an embodied haptic model general enough to learn a mapping between an arbitrary robot embodiment and a human demonstrator.

The embodied haptic model consists of three major components: (i) an autoencoder that encodes the human demonstration in a low-dimensional subspace (we refer to the reduced embedding as the human embedding); (ii) an embodiment mapping that maps robot states onto a corresponding human embedding, providing the robot with the ability to imagine itself as a human demonstrator; and (iii) an action predictor that takes a human embedding and the currently executing action as input and predicts the next action to execute, trained using the action labels from human demonstrations. Figure 2B shows the embodied haptic network architecture. Using this network architecture, the robot infers what action a human would likely have executed on the basis of this inferred human state. This embodied action prediction model picks the next action according to a_{t+1} ~ p(a_{t+1} | f_t, a_t) (1), where a_{t+1} is the next action, f_t is the robot's current haptic sensing, and a_t is the current action.

The autoencoder network takes an 80-dimensional vector from the human demonstration (26 dimensions for the force sensors and 54 for the poses of each link in the human hand) and uses the post-condition vector, i.e., the average of the last N frames (we choose N = 2 to minimize the variance), of each action in the demonstration as input (see the autoencoder portion of Fig. 2B). This input is then reduced to an eight-dimensional human embedding; given a human demonstration, the autoencoder thus provides a dimensionality reduction to an eight-dimensional representation.

The embodiment mapping maps from the robot's four-dimensional post-condition vector, i.e., the average of the last N frames (different from the human post-condition due to a faster sample rate on the robot gripper compared with the tactile glove; we chose N = 10), to an imagined human embedding (see the embodiment mapping portion of Fig. 2B). This mapping allows the robot to imagine its current haptic state as an equivalent low-dimensional human embedding. The robot's four-dimensional post-condition vector consists of the gripper position (one dimension) and the forces applied by the gripper (three dimensions). The embodiment mapping network uses a 256-dimensional latent representation, and this latent representation is then mapped to the eight-dimensional human embedding.

To train the embodiment mapping network, the robot first executes a series of supervised actions; if an action produces the correct final state, the robot post-condition vector is saved as input for network training. Next, human demonstrations of equivalent actions are fed through the autoencoder to produce a set of human embeddings. These human embeddings are treated as the ground-truth target outputs for the embodiment mapping network, regardless of the current reconstruction accuracy of the autoencoder network. Then, the robot execution data are fed into the embodiment mapping network, producing an imagined human embedding. The embodiment mapping network is optimized to reduce the loss between its output from the robot post-condition input and the target output.

For the action predictor, the 8-dimensional human embedding and the 10-dimensional current action are mapped to a 128-dimensional latent representation, and the latent representation is then mapped to a final 10-dimensional action probability vector (i.e., the next action) (see the action prediction portion of Fig. 2B). This network is trained using human demonstration data: a demonstration is fed through the autoencoder to produce a human embedding, and that human embedding and the one-hot vector of the currently executing action are fed as the input to the prediction network; the ground truth is the next action executed in the human demonstration.
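
To make the description concrete, the following is a minimal PyTorch sketch of the three components, using the dimensions given above (80-dimensional human post-condition, 8-dimensional human embedding, 4-dimensional robot post-condition, 256- and 128-dimensional latent layers, 10 action classes). The autoencoder's hidden sizes, the activation functions, and all names are illustrative assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn

class EmbodiedHapticModel(nn.Module):
    """Sketch of the autoencoder, embodiment mapping, and action predictor."""
    def __init__(self, n_actions=10):
        super().__init__()
        # (i) autoencoder: 80-d human post-condition -> 8-d human embedding -> reconstruction
        self.encoder = nn.Sequential(nn.Linear(80, 32), nn.ReLU(), nn.Linear(32, 8))
        self.decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 80))
        # (ii) embodiment mapping: 4-d robot post-condition -> 256-d latent -> 8-d embedding
        self.embodiment = nn.Sequential(nn.Linear(4, 256), nn.ReLU(), nn.Linear(256, 8))
        # (iii) action predictor: 8-d embedding + 10-d current action -> 128-d latent -> 10 actions
        self.predictor = nn.Sequential(
            nn.Linear(8 + n_actions, 128), nn.ReLU(), nn.Linear(128, n_actions))

    def forward(self, robot_postcondition, current_action_onehot):
        # the robot "imagines" its haptic state as a human embedding ...
        z_r = self.embodiment(robot_postcondition)
        # ... and predicts the next action from that imagined embedding and the current action
        logits = self.predictor(torch.cat([z_r, current_action_onehot], dim=-1))
        return logits
```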

The network in Fig. 2B is trained in an end-to-end fashion with three different loss functions in a two-step process: (i) a forward pass through the autoencoder to update the human embedding z_h. After computing the error L_reconstruct between the reconstruction ŝ_h and the ground-truth human data s_h, we back-propagate the gradient and optimize the autoencoder: L_reconstruct(s_h, ŝ_h) = 1/2 (s_h - ŝ_h)^2 (2)

(ii) A forward pass through the embodiment mapping and the action prediction network. The embodiment mapping is trained by minimizing the difference L_mapping between the embodied robot embedding z_r and the target human embedding z_h; the target human embedding z_h is acquired through a forward pass through the autoencoder using a human demonstration post-condition of the same action label, s_h. We compute the cross-entropy loss L_prediction of the predicted action label â and the ground-truth action label a to optimize this forward pass: L_planning(a, â) = λ L_mapping + L_prediction, with L_mapping = 1/2 (z_r - z_h)^2 and L_prediction = H(p(â), q(a)) (3), where H is the cross entropy, p is the model prediction distribution, q is the ground-truth distribution, and λ is the balancing parameter between the two losses (see text S2.2 for detailed parameters and network architecture).
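
A minimal sketch of one such two-step update, reusing the EmbodiedHapticModel sketch above; detaching the target embedding and the placeholder value of the balancing weight lam are assumptions, not choices stated in the text.

```python
import torch
import torch.nn.functional as F

def training_step(model, s_h, robot_post, a_cur_onehot, a_next_idx, lam=1.0):
    """One two-step update following Eqs. 2 and 3 (lam is a placeholder value)."""
    # step (i): autoencoder reconstruction loss (Eq. 2)
    z_h = model.encoder(s_h)
    s_h_hat = model.decoder(z_h)
    loss_reconstruct = 0.5 * ((s_h - s_h_hat) ** 2).sum(dim=-1).mean()

    # step (ii): embodiment mapping + action prediction losses (Eq. 3)
    z_r = model.embodiment(robot_post)
    loss_mapping = 0.5 * ((z_r - z_h.detach()) ** 2).sum(dim=-1).mean()
    logits = model.predictor(torch.cat([z_r, a_cur_onehot], dim=-1))
    loss_prediction = F.cross_entropy(logits, a_next_idx)   # H(p, q)
    loss_planning = lam * loss_mapping + loss_prediction
    return loss_reconstruct, loss_planning
```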

A similar embodied haptic model was presented in (23) but with two separate loss functions, which is more difficult to train compared with the single loss function presented here. A clear limitation of the haptic model is the lack of long-term action planning. To address this problem, we discuss the symbolic task planner below and then discuss how we integrated the haptic model with the symbolic planner to jointly find the optimal action.

To encode the long-term temporal structure of the task, we endow the robot with a symbolic action planner that encodes semantic knowledge of the task execution sequence. The symbolic planner uses stochastic context-free grammars to represent tasks, where the terminal nodes (words) are actions and sentences are action sequences. Given an action grammar, the planner finds the optimal action to execute next on the basis of the action history, analogous to predicting the next word given a partial sentence.

The action grammar is induced using labeled human demonstrations, and we assume that the robot has an equivalent action for each human action. Each demonstration forms a sentence, x_i, and the collection of sentences forms a corpus, x_i ∈ X. The segmented demonstrations are used to induce a stochastic context-free grammar using the method presented in (21). This method pursues T-AOG fragments to maximize the likelihood of the grammar producing the given corpus. The objective function is the posterior probability of the grammar given the training data X: p(G|X) ∝ p(G) p(X|G) = (1/Z) e^{-α||G||} ∏_{x_i ∈ X} p(x_i|G) (4), where G is the grammar, x_i = (a_1, a_2, …, a_m) ∈ X represents a valid sequence of actions with length m from the demonstrator, α is a constant, ||G|| is the size of the grammar, and Z is the normalizing factor. Figure 3 shows examples of induced grammars of actions.
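
In log space, Eq. 4 reduces to a size penalty plus the corpus log-likelihood; the small helper below is only a sketch of that scoring, with the corpus log-likelihood assumed to come from a separate parser.

```python
def grammar_log_posterior(grammar_size, corpus_log_likelihood, alpha):
    """log p(G|X) up to the constant -log Z (Eq. 4): a penalty on grammar size
    plus the log-likelihood of the corpus, sum over x_i of log p(x_i | G)."""
    return -alpha * grammar_size + corpus_log_likelihood
```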

During the symbolic planning process, this grammar is used to compute which action is most likely to open the bottle based on the action sequence executed thus far and the space of possible future actions. A pure symbolic planner picks the optimal action based on the grammar prior: a*_{t+1} = argmax_{a_{t+1}} p(a_{t+1} | a_{0:t}, G) (5), where a_{t+1} is the next action and a_{0:t} is the action sequence executed thus far. This grammar prior can be obtained as the ratio of two grammar prefix probabilities, p(a_{t+1} | a_{0:t}, G) = p(a_{0:t+1} | G) / p(a_{0:t} | G), where the grammar prefix probability p(a_{0:t} | G) measures the probability that a_{0:t} occurs as a prefix of an action sequence generated by the action grammar G. On the basis of a classic parsing algorithm, the Earley parser (31), and dynamic programming, the grammar prefix probability can be obtained efficiently by the Earley-Stolcke parsing algorithm (32). An example of pure symbolic planning is shown in fig. S4.
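
As a sketch, the pure symbolic planner of Eq. 5 only needs a prefix-probability function; here prefix_prob is assumed to be supplied by an Earley-Stolcke parser and is treated as a black box.

```python
def grammar_prior(next_action, history, prefix_prob):
    # p(a_{t+1} | a_{0:t}, G) as a ratio of grammar prefix probabilities
    return prefix_prob(history + [next_action]) / prefix_prob(history)

def plan_next_action(candidate_actions, history, prefix_prob):
    # pure symbolic planner (Eq. 5): pick the action with the highest grammar prior
    return max(candidate_actions, key=lambda a: grammar_prior(a, history, prefix_prob))
```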

However, due to the fixed structure and probabilities encoded in the grammar, always choosing the action sequence with the highest grammar prior is problematic because it provides no flexibility. An alternative pure symbolic planner picks the next action to execute by sampling from the grammar prior: a_{t+1} ~ p(a_{t+1} | a_{0:t}, G) (6). In this way, the symbolic planner samples different grammatically correct action sequences and increases the adaptability of the symbolic planner. In the experiments, we choose to sample action sequences from the grammar prior.
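
The sampling variant of Eq. 6 simply replaces the argmax with a draw from the same grammar prior; a sketch reusing the helpers above.

```python
import random

def sample_next_action(candidate_actions, history, prefix_prob):
    # sample a_{t+1} from the grammar prior instead of taking the argmax (Eq. 6)
    weights = [grammar_prior(a, history, prefix_prob) for a in candidate_actions]
    return random.choices(candidate_actions, weights=weights, k=1)[0]
```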

In contrast to the haptic model, this symbolic planner lacks the adaptability to real-time sensor data. However, this planner encodes long-term temporal constraints that are missing from the haptic model, because only grammatically correct sentences have nonzero probabilities. The GEP adopted in this paper naturally combines the benefits of both the haptic model and the symbolic planner (see the next section).

The robot imitates the human demonstrator by combining the symbolic planner and the haptic model. The integrated model finds the next optimal action considering both the action grammar G and the haptic input f_t: a*_{t+1} = argmax_{a_{t+1}} p(a_{t+1} | a_{0:t}, f_t, G) (7). Conceptually, this can be thought of as a posterior probability that considers both the grammar prior and the haptic signal likelihood. The next optimal action is computed by an improved GEP (22); GEP is an extension of the classic Earley parser (31). In the present work, we further extend the original GEP to make it applicable to multisensory inputs and to provide explanations in real time for robot systems, instead of for offline video processing (see details in text S4.1.3).

The computational process of GEP is to find the optimal label sentence according to both a grammar and a classifier output of probabilities of labels for each time step. In our case, the labels are actions, and the classifier output is given by the haptic model. Optimality here means maximizing the joint probability of the action sequence according to the grammar prior and haptic model output while being grammatically correct.

The core idea of the algorithm is to directly and efficiently search for the optimal label sentence in the language defined by the grammar. The grammar constrains the search space to ensure that the sentence is always grammatically correct. Specifically, a heuristic search is performed on the prefix tree expanded according to the grammar, where the path from the root to a node represents a partial sentence (prefix of an action sequence).

GEP is a grammar parser capable of combining the symbolic planner with low-level sensory input (haptic signals in this paper). The search process in the GEP starts from the root node of the prefix tree, which is an empty terminal symbol indicating that no terminals have been parsed. The search terminates when it reaches a leaf node. In the prefix tree, all leaf nodes are parsing terminals e that represent the end of a parse, and all non-leaf nodes represent terminal symbols (i.e., actions). The probability of expanding a non-leaf node is the prefix probability, i.e., how likely the current path is to be a prefix of the action sequence. The probability of reaching a leaf node (parsing terminal e) is the parsing probability, i.e., how likely the current path up to the last non-leaf node is to be the executed actions followed by the next action. In other words, the parsing probability measures the probability that the last non-leaf node in the path will be the next action to execute. It is important to note that this prefix probability is computed on the basis of both the grammar prior and the haptic prediction; in contrast, in the pure symbolic planner, the prefix probability is computed on the basis of only the grammar prior. An example of the computed prefix and parsing probabilities and the output of GEP is given in Fig. 8, and the search process is illustrated in fig. S5. For an algorithmic description of this process, see algorithm S1.
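
The best-first search below conveys the idea of the prefix-tree search, under the assumption that joint prefix and parsing probabilities (combining the grammar prior and the haptic prediction) are available as callables. It omits the incremental parsing bookkeeping of the actual GEP (algorithm S1) and is only an illustrative sketch; because a prefix probability upper-bounds the parsing probabilities of all of its extensions, the search can stop as soon as the best remaining prefix falls below the best parse found so far.

```python
import heapq
from itertools import count

def gep_search(actions, joint_prefix_prob, joint_parse_prob, history=(), max_depth=10):
    """Best-first search over the grammar prefix tree (illustrative sketch only).

    joint_prefix_prob(seq) and joint_parse_prob(seq) stand in for the prefix and
    parsing probabilities computed from both the grammar prior and the haptic
    prediction; grammatically impossible children are assumed to score zero.
    """
    tiebreak = count()
    heap = [(-1.0, next(tiebreak), list(history))]   # (negative prefix prob, _, partial sequence)
    best_prob, best_seq = 0.0, None
    while heap:
        neg_p, _, seq = heapq.heappop(heap)
        if -neg_p <= best_prob:
            break                                    # no remaining prefix can beat the best parse
        parse_p = joint_parse_prob(seq)              # probability of terminating here (leaf node e)
        if parse_p > best_prob:
            best_prob, best_seq = parse_p, seq
        if len(seq) - len(history) < max_depth:
            for a in actions:                        # expand the children of this prefix
                child = seq + [a]
                p = joint_prefix_prob(child)
                if p > best_prob:
                    heapq.heappush(heap, (-p, next(tiebreak), child))
    return best_seq, best_prob
```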

(A) A classifier is applied to a six-frame signal and outputs a probability matrix as the input. (B) Table of the cached probabilities of the algorithm. For all expanded action sequences, it records the parsing probabilities at each time step and the prefix probabilities. (C) Grammar prefix tree with the classifier likelihood. The GEP expands a grammar prefix tree and searches in this tree. It finds the best action sequence when it hits the parsing terminal e. It finally outputs the best label, "grasp, pinch, pull," with a probability of 0.033. The probabilities of child nodes do not sum to 1 because grammatically incorrect nodes are eliminated from the search and the probabilities are not renormalized (22).

The original GEP is designed for offline video processing. Here, we made modifications to enable online planning for a robotic task. The major difference between parsing and planning is the uncertainty about past actions: there is uncertainty about observed actions during parsing. However, during planning, there is no uncertainty about executed actions; the robot directly chooses which actions to execute, thereby removing any ambiguity regarding which action was executed at a previous time step. Hence, we need to prune the impossible parsing results after executing each action; each time after executing an action, we change the probability vector of that action to a one-hot vector. This modification effectively prunes the action sequences that are inconsistent with the actions executed thus far by the robot.
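
As a small sketch of this pruning step (the names and the time-step-by-action layout of the probability matrix are assumptions), committing an executed action amounts to overwriting its row of the classifier output with a one-hot vector.

```python
import numpy as np

def commit_executed_action(prob_matrix, t, executed_action_idx):
    """Replace the probability vector at time step t with a one-hot vector so that
    parses inconsistent with the executed action receive zero probability."""
    pruned = np.array(prob_matrix, dtype=float)
    pruned[t, :] = 0.0
    pruned[t, executed_action_idx] = 1.0
    return pruned
```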

Human participants were recruited from the University of California, Los Angeles (UCLA) Department of Psychology subject pool and were compensated with course credit for their participation. A total of 163 students were recruited, each randomly assigned to one of the five experimental groups. Thirteen participants were removed from the analysis for failing to understand the haptic display panel by not passing a recognition task. Hence, the analysis included 150 participants (mean age of 20.7). The symbolic and haptic explanation panels were generated as described in the Explanation generation section. The text explanation was generated by the authors based on the robot's action plan to provide an alternate text summary of robot behavior. Although such text descriptions were not directly yielded by the model, they could be generated by modern natural language generation methods.

The human experiment included two phases: familiarization and prediction. In the familiarization phase, participants viewed two videos showing a robot interacting with a medicine bottle, with one successful attempt at opening the bottle and one failed attempt. In addition to the RGB videos showing the robot's executions, different groups viewed different forms of explanation panels. At the end of familiarization, participants were asked to assess how well they trusted/believed that the robot had the ability to open the medicine bottle (see text S2.5 and fig. S7 for the illustration of the trust rating question).

Next, the prediction phase presented all groups with only RGB videos of a successful robot execution; no group had access to any explanatory panels. Specifically, participants viewed videos segmented by the robot's actions; for segment i, videos start from the beginning of the robot execution up to the ith action. For each segment, participants were asked to predict what action the robot would execute next (see text S2.5 and fig. S8 for an illustration of the action prediction question).

Regardless of group assignment, all RGB videos were the same across all groups; i.e., we showed the same RGB video to all groups while varying the explanation panels. This experimental design isolates potential effects of execution variations among the different robot execution models presented in the Robot learning section; we sought only to evaluate how well explanation panels foster qualitative trust and enhance prediction accuracy, and keeping robot execution performance constant across groups removes this potential confound.

For both qualitative trust and prediction accuracy, the null hypothesis is that the explanation panels foster equivalent levels of trust and yield the same prediction accuracy across the different groups, and therefore, no difference in trust or prediction accuracy would be observed. The test is a two-tailed independent-samples t test comparing performance between two groups of participants, because we used a between-subjects design in the study, with a commonly used significance level of α = 0.05, assuming a t distribution; the rejection region is P < 0.05.
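
For reference, this corresponds to the standard two-tailed independent-samples t test; a minimal sketch with made-up per-participant scores (not data from the study).

```python
from scipy import stats

# illustrative per-participant prediction accuracies for two explanation groups
group_a = [0.80, 0.75, 0.90, 0.85, 0.70, 0.65]
group_b = [0.60, 0.65, 0.55, 0.70, 0.50, 0.45]

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # two-tailed by default
reject_null = p_value < 0.05                          # significance level alpha = 0.05
```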


The Giving Season: It’s Really a Thing | Homes & Lifestyle – Noozhawk

By Tom Jacobs for UCSB | December 18, 2019 | 1:52 p.m.

It's easy to become cynical about the holiday spirit. For a few weeks every year, we focus on giving to family, friends, charitable organizations. But soon after the new year, most of us return to a self-centered status quo.

Hypocrisy?

Not at all, according to evolutionary anthropologist Michael Gurven. Chair of integrative anthropological sciences at UC Santa Barbara, he argues that giving to others is a fundamental part of human nature, but so is being selective about who we give to, and under what circumstances. Therefore, a season of giving makes perfect sense.

"The impulse to connect with others is a human universal, and a major way we do this is by giving and sharing," Gurven said. "When you compare us to our nearest primate relative, chimpanzees, we share a wide range of resources and give freely, not just upon request or in response to begging."

That's especially true at this time of year, when the air is filled with familiar melodies of carols proclaiming peace and goodwill.

In his research, Gurven approaches human behavior from an evolutionary perspective, which posits that our habits and motivations often echo behaviors that allowed our ancient ancestors to survive and thrive. Our impulse to give to others, he argues, very much reflects our biological legacy.

Early and again late in life, Gurven notes, we depend upon others to take care of us. These experiences imprint on us the importance of sharing.

"Even in hunter-gatherer societies, people can't make ends meet until they're in their late teens," he said. "That means the first 18 years of life, you need to receive food from others. That can also be true in your productive prime, say your 30s and 40s, if you have a lot of hungry mouths to feed in your family."

"On the other hand, chimpanzees can feed themselves shortly after weaning. Humans grow and develop slowly, and it takes a long time to become a successful food producer, be it in hunting, farming or gathering, he said. That training period requires subsidies from other individuals.

"Cooperation is not just a curious human attribute its a large part of who we are.

"That said, as philanthropists, we are very selective," Gurven said. "If we gave everything we produced away every day, we'd be destitute. So we are strategic about what we give and who we give it to. If you're primed to give all the time, it could become overwhelming, and then you might not want to give at all."

"So many of us wait until the holidays, the time of year when all of the signals that inspire giving are turned up really high. When you're at a supermarket, the Salvation Army is right outside the door. You can't avoid them."

Gurven believes all those opportunities to give can produce a certain contagion. "Generosity is in the air," he said. "Everyone around you is giving, and we're competitive."

"If you get an appeal in the mail that starts 'Dear Friend' or 'Dear Brother,' the charity is creating a fictive social relationship that might pull on your obligation to give to family or close ties," he said. "When a friend donates to a charitable cause, you might see it on social media; it's virtue signaling to everybody, see what I just did, which could inspire others to do the same thing."

Then there are those holiday white elephant parties, which Gurven notes are opportunities to bring people together and remind them to think about each other.

"Some people act altruistically no matter what," he said. "They have to watch out that they don't get exploited. For the rest of us, context matters, culture matters. The holiday season focuses us. We recognize how important our social networks are, so we spend money on gifts for family and friends."

OK, but why do we take the time and effort to select presents for our loved ones, rather than just giving the gift we can be assured they will like: cold, hard cash?

"When you exchange gifts with people in your social network, (well thought out) gifts have a lot of symbolic value," Gurven explained. "An economist would argue that money is the best gift because you can get anything you want, which should maximize your satisfaction."

"But thats too easy. It doesnt show much about your relationship; it just shows you have a thick wallet. If Im giving you a gift that was both costly to me and shows that Ive been paying careful attention to your likes and dislikes, from your perspective it signals, He must really value me.

"As a result, youre more likely to value our friendship and want to interact in the future. Thats a big deal.

So take care when choosing those gifts, and don't feel bad when your donations drop off in mid-January. Both, Gurven said, are prime examples of human nature.

Tom Jacobs for UCSB.


Is Screen Time Really Bad for Kids? – The New York Times

The first iPhone was introduced in 2007; just over a decade later, in 2018, a Pew survey found that 95 percent of teenagers had access to a smartphone, and 45 percent said they were online almost constantly. When researchers began trying to gauge the impact of all this screen time on adolescent mental health, some reported alarming results. One widely publicized 2017 study in the journal Clinical Psychological Science found that the longer adolescents were engaged with screens, the greater their likelihood of having symptoms of depression or of attempting suicide. Conversely, the more time they spent on nonscreen activities, like playing sports or hanging out with friends, the less likely they were to experience those problems. These and other similar findings have helped stoke fears of a generation lost to smartphones.

But other researchers began to worry that such dire conclusions were misrepresenting what the existing data really said. Earlier this year, Amy Orben and Andrew K. Przybylski, at Oxford University, applied an especially comprehensive statistical method to some of the same raw data that the 2017 study and others used. Their results, published this year in Nature Human Behavior, found only a tenuous relationship between adolescent well-being and the use of digital technology. How can the same sets of numbers spawn such divergent conclusions? It may be because the answer to the question of whether screen time is bad for kids is "It depends." And that means figuring out "On what?"

The first step in evaluating any behavior is to collect lots of health-related information from large numbers of people who engage in it. Such epidemiological surveys, which often involve conducting phone interviews with thousands of randomly selected people, are useful because they can ask a wider range of questions and enroll far more subjects than clinical trials typically can. Getting answers to dozens of questions about people's daily lives (how often they exercise, how many close friends they have) allows researchers to explore potential relationships between a wide range of habits and health outcomes and how they change over time. Since 1975, for instance, the National Institute on Drug Abuse has been funding a survey called Monitoring the Future (M.T.F.), which asks adolescents about drug and alcohol use as well as other things, including, more recently, vaping and digital technology; in 2019, more than 40,000 students from nearly 400 schools responded.

This method of collecting data has drawbacks, though. For starters, people are notoriously bad at self-reporting how often they do something or how they feel. Even if their responses are entirely accurate, that data can't speak to cause and effect. If the most depressed teenagers also use the most digital technology, for example, there's no way to say if the technology use caused their low mood or vice versa, or if other factors were involved.

Gathering data on so many behaviors also means that respondents aren't always asked about topics in detail. This is particularly problematic when studying tech use. In past decades, if researchers asked how much time a person spent with a device (TV, say), they knew basically what happened during that window. But screen time today can range from texting friends to using social media to passively watching videos to memorizing notes for class, all very different experiences with potentially very different effects.

Still, those limitations are the same for everyone who accesses the raw data. What makes one study that draws on that data distinct from another is a series of choices researchers make about how to analyze those numbers. For instance, to examine the relationship between digital-technology use and well-being, a researcher has to define well-being. The M.T.F. survey, as the Nature paper notes, has 13 questions concerning depression, happiness and self-esteem. Any one of those could serve as a measure of well-being, or any combination of two, or all 13.

A researcher must decide on one before running the numbers; testing them all, and then choosing the one that generates the strongest association between depression and screen use, would be bad science. But suppose five ways produce results that are strong enough to be considered meaningful, while five don't. Unconscious bias (or pure luck) could lead a researcher to pick one of the meaningful ways and find a link between screen time and depression without acknowledging the five equally probable outcomes that show no such link. "Even just a couple of years ago, we as researchers still considered statistics kind of like a magnifying glass, something you would hold to the data and you would then see what's inside, and it just helped you extract the truth," Orben, now at the University of Cambridge, says. "We now know that statistics actually can change what you see."

To show how many legitimate outcomes a large data set can generate, Orben and Przybylski used a method called specification curve analysis to look for a relationship between digital-technology use and adolescent well-being in three ongoing surveys of adolescents in the United States and the United Kingdom, including the M.T.F. A specification is any decision about how to analyze the data (how well-being is defined, for example). Researchers doing specification curve analysis don't test a single choice; they test every possible combination of choices that a careful scientist could reasonably make, generating a range of outcomes. For the M.T.F., Orben and Przybylski identified 40,966 combinations that could be used to calculate the relationship between psychological well-being and the use of digital technology.
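
A toy version of that sweep, assuming the survey responses live in a pandas DataFrame and using a simple correlation in place of the regression models actually fit in the Nature paper; the column names and specification lists are hypothetical.

```python
from itertools import product
import numpy as np
import pandas as pd

def specification_curve(df: pd.DataFrame, wellbeing_cols, tech_cols, subset_flags):
    """Compute one association per combination of analysis choices and return them
    sorted, i.e., the 'curve' of every defensible outcome."""
    results = []
    for wb, tech, flag in product(wellbeing_cols, tech_cols, subset_flags):
        rows = df if flag is None else df[df[flag]]      # optional boolean subset column
        rows = rows.dropna(subset=[wb, tech])
        r = np.corrcoef(rows[wb], rows[tech])[0, 1]      # simple bivariate association
        results.append({"wellbeing": wb, "tech": tech, "subset": flag, "r": r})
    return sorted(results, key=lambda spec: spec["r"])
```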

When they averaged them, they found that digital-technology use has a small negative association with adolescent well-being. But to put that association in context, they used the same method to test the relationship between adolescent well-being and other variables. And in all the data sets, smoking marijuana and being bullied were more closely linked with decreased well-being than tech use was; at the same time, getting enough sleep and regularly eating breakfast were more closely tied to positive feelings than screen time was to negative ones. In fact, the strength of the association screen time had with well-being was similar to neutral factors like wearing glasses or regularly eating potatoes.

Not finding a strong association doesn't mean that screen time is healthy or safe for teenagers. It could come with huge risks that are simply balanced by huge rewards. "The part that people don't appreciate is that digital technology also has significant benefits," says Nick Allen, director of the Center for Digital Mental Health at the University of Oregon. These include helping teenagers connect with others. The real conclusion of the Nature paper is that large surveys may be too blunt an instrument to reveal what those risks and benefits truly are. "What's needed are experiments that break screen time into its component parts and change one of them in order to see what impact that has and why," says Ronald Dahl, director of the Institute of Human Development at the University of California, Berkeley. A screen-related activity may be beneficial or harmful depending on who is doing it, how much they're doing it, when they're doing it and what they're not doing instead. "If we just respond to emotions or fears about screen time, then we actually could be interfering with our ability to understand some of these deeper questions," he says.

Allen notes a vexation: The behavioral data is already being quantified on the granular level researchers need. But tech companies don't routinely share that information with scientists. To deliver the advice the public wants, Orben says, will require a very difficult ethical conversation on data sharing. "I don't think we can shy away from it much longer." Till then, parents struggling with how much screen time is O.K. for their children might benefit from trying, as researchers are, to get a more detailed picture of that behavior. "Ask your kids: What are you doing on there? What makes you feel good? What makes you feel bad?" says Michaeline Jensen, of the University of North Carolina, Greensboro. She was an author of a study in August showing that on days when teenagers use more technology, they were no more likely to report problems like depressive symptoms or inattention than on days when they used less. "Even an hour a day, that could be particularly problematic or enriching."


BWW Review: THE SANTALAND DIARIES at The Whisenhunt At ZACH – Broadway World

After last year's successful return, THE SANTALAND DIARIES is back with J. Robert "Jimmy" Moore starring as Crumpet the Elf under the masterful direction of Nat Miller.

What was once an essay based on David Sedaris' personal experiences working as an elf at Macy's during the Holiday season has developed into a witty and irreverent portrayal of human behavior at this time of the year. Adapted for the stage by Joe Mantello, THE SANTALAND DIARIES follows the misadventures of an out-of-work actor who finds himself applying to be an elf at a major department store in New York City. After getting the job and going through countless hours of training, costuming, and orientation, he becomes Crumpet the Elf. Crumpet turns to humor and cynicism as he tries to juggle not-so-sober Santas, over-the-top parents, vomiting children, and magnificent tantrums.

The Whisenhunt at ZACH provides a perfect setting for this one-man show and Mr. Moore makes use of every inch of that space. He artfully interacts with the audience, drawing from their energy and laughter, to deliver one of the most entertaining performances I have seen in decades. There is an intimacy to the space, designed by J. Aaron Bell, that makes the audience an accomplice to sarcastic storytelling, and as such, we shamelessly laugh as Crumpet impersonates the several characters that visit Santaland. We nod in agreement as he retells the adventures of the day with the most politically incorrect undertones, and we feel little guilt when the elf reaches the end of the season without a shred of Holiday Spirit left. In one of the most memorable moments of the show, Mr. Moore shows us the truth behind Crumpet's cynicism. He would much rather be singing show tunes than getting paid to be one of Santa's helpers, although the latter is what pays the bills.

David Sedaris' hysterical and contemptuous work in THE SANTALAND DIARIES is delivered through the skillful artistic collaboration between J. Robert "Jimmy" Moore and Nat Miller. Be prepared to laugh uncontrollably from beginning to end in a play that, if not already among your Holiday traditions, should be!

THE SANTALAND DIARIES

When: Now playing through December 29, 2019

Where: The Whisenhunt at ZACH | 1510 Toomey Road | Austin, TX | 78704

Tickets: Start at $40 available at ZACH's box office - (512) 476-0541 x1, zachtheatre.org

Duration: 75 minutes with no intermissions

Age Recommendation: Fourteen and up for adult humor


A Film More Talked About Than Seen, ‘Sátántangó’ To Screen At The MFA – WBUR

The cinephile's Mount Everest, director Béla Tarr's massive, magisterial, 439-minute Sátántangó gets two rare screenings at the Museum of Fine Arts this weekend. This is one of the film events of the year, though admittedly not an undertaking for the faint of heart. Seldom shown and only fleetingly available on home video, the 1994 film was for the longest time a movie more talked about than seen. In hardcore cinema circles Sátántangó was something discussed in hushed, reverent tones, with fans like Susan Sontag saying she'd be glad to watch it once a year for the rest of her life, while folks traveled hundreds of miles to attend infrequent 35mm presentations of the movie in all its muddy, oppressive glory.

Logistics precluded many theatrical showings: for most venues, the 20-odd reels of film proved prohibitively expensive to ship, especially since the seven-and-a-half-hour running time limits exhibitors to a single showtime per day. According to my research, the last time Sátántangó came to the Boston area was a screening at the Harvard Film Archive in March of 2012. But now, in advance of a Blu-ray release slated for 2020, the film returns to celebrate its 25th anniversary in a stunning new 4K digital restoration from Arbelos Films, pristinely preserving every spatter of grime and muck in this doomed Hungarian bog town.

If you're serious about exploring international cinema, at some point or another you've got to reckon with Sátántangó. And boy, is it a work to be reckoned with. At once spellbinding and infuriating, annoying and transcendent, it's a movie that alternates between being mordantly hilarious and intensely, unutterably tragic. Shot in staggering, high-contrast black-and-white long takes that stretch out into eternities, Sátántangó bends your perception of time and turns monotony into an epiphany. The film's opening shot is a full eight uninterrupted minutes of cows meandering their way through a dilapidated village, and it's one of the single greatest things I've ever seen.

Based on a novel by László Krasznahorkai, the movie chronicles in minute detail the unravelling of a desperately poor farming community upon the return of a mysterious prodigal son (Mihály Vig) the villagers had long presumed dead. A lot of interpretations like to claim this is all an allegory for the collapse of communism and capitalism's rise in Eastern Europe, but the director is on record rejecting such readings, and personally, I consider the film's insights into human behavior to be more depressingly universal than overtly political. (Those cows aren't the only dumb herd animals we're watching.)

Broken up into 12 discrete segments, the movie's designed to mimic the structure of a tango, six steps forward and six steps back, so the events of the film are constantly doubling back upon themselves. Sometimes it's a good long while before you realize you're watching something you've already seen from another angle, other times chapter titles like "The Perspective from the Front" and "The Perspective from Behind" are more helpful in terms of getting your bearings. According to the director, there are only 150 shots in these entire seven-and-a-half hours, so between the temporal dislocation and durational excesses, the film feels like it's rewiring your brain while you're watching it.

When I was younger and jumpier I never used to have much patience for this kind of thing. Academics call it "slow cinema" and I wanted films to get on with things already. But now that our attention spans have atomized and the world is too much with us, I look at going to the movies as more like a form of meditation, a place where I can get out of my head and have somebody else's dream for a little while. I don't care so much about plot these days and look more for sensation and mood. As arduous an experience as Sátántangó may be, I can see why Susan Sontag wanted to watch it once a year. When it's over you really feel like you've been somewhere.

"I'm always drawn to movies that make time evaporate," says my old friend Matt Prigge, a former film critic for Metro and now an adjunct professor at NYU. Matt's in the five-timers club for this film, and before moving to New York would travel great distances to see Sátántangó on a big screen. "People tend to single out the endless shots trailing people trudging through miserable hellscapes, but it has great variety," he says. "Every chapter offers different ways to approach slow cinema. It's f---ing funny, too. Except when it absolutely isn't."

Indeed, the movie's most notorious sequence finds an abused and neglected young girl attempting to exhibit the only control of which she's capable by torturing and poisoning her pet cat, the endpoint to a cycle of cruelty we've witnessed working its way down the town's hierarchy until naturally it is the smallest, most helpless creatures that pay most dearly. (Despite urban legends to the contrary, Tarr insists the scene was shot under a veterinarian's supervision and that kitty went on to live a long and happy life.)

Amid all the moldering rot, drunken boorishness and stupid, venal scheming, Sátántangó also offers us a glimpse of something infinite, a vastness of space and time within this small village that can only be experienced at such an obscene duration. Our story starts with a shot of the dawn slowly seeping in through a window until it eventually lights up a dark room, only to end some seven hours or so later with a soused hermit boarding up his windows to block out the sun, slurring a repetition of the narration with which the film began. Six steps up and back again, the tango will go on until the cows come home.

Sátántangó screens at the Museum of Fine Arts on Saturday, Dec. 21 and Sunday, Dec. 22.


Customs and Border Protection agents and officers are less trained and more unqualified than ever before – The Outline

Over the past three years, Customs and Border Protection has made several attempts to quell a so-called hiring crisis brought on by President Donald Trump's intention to hire 5,000 new agents, part of a bid to expand CBP and Immigration and Customs Enforcement staff by a whopping 15,000 people. Trump's ask was aimed at increasing frontline positions, the roles dedicated to patrolling borders and ports of entry by ground and air to arrest migrants. The Department of Homeland Security is still struggling to meet that goal despite an increase in Congressional funding and a decrease in skills requirements for new recruits. Now, CBP, one of the nation's largest law enforcement agencies, is facing a crisis in both recruitment and retention. However, data obtained from the agency through an open-records request shows that despite the fact that the agency isn't meeting its hiring goals, the past three years have seen a significant spike in overall frontline positions, people hired specifically to work along U.S. borders to identify and apprehend migrants.

In November 2018, the DHS Inspector General warned that a significant uptick in hiring of border patrol agents that was not matched by an increase in resources for training them would leave new recruits less prepared for their assigned field environments, potentially impeding mission achievability and increasing safety risk to themselves, other law enforcement officers, and anyone within their enforcement authority.

While the data shows erratic fluctuations in frontline staffing over the past five years, former CBP Sector Chief Victor Manjarrez, Jr. told me recruitment at the agency has always been a challenge. Manjarrez, who currently serves as the associate director of the Center for Law & Human Behavior at the University of Texas, El Paso, worked in leadership across several CBP offices under the Bush and Obama administrations. He says the hiring challenges faced by the agency don't stem from public perception or scrutiny, but from the agency's low starting salaries, difficult entrance requirements, and less-than-desirable working locations. "You had to reach an ungodly number of people just to get them to take a test," Manjarrez said of his time leading recruitment for his sector.

The agency has been under intense scrutiny since it began enforcing Trump's tough immigration policies. In 2019, CBP conducted more than one million denials and apprehensions of immigrants seeking to cross into the U.S., refusing entry for over 288,000 people attempting to legally cross through ports of entry and detaining over 850,000 who crossed over illegally. The agency continues to field accusations of cruelty and racism. Still, Manjarrez said that's not necessarily the reason the agency is struggling to staff up.

Becoming a CBP officer or patrol agent takes months. While both are considered frontline roles, border patrol agents do more tracking, and officers do more arresting. For both, CBP says the Southwest border is in need of the most bodies. While these positions require a college degree, the starting salary sits just above $33,000 for officers and $47,000 for agents. On top of that, recruits are sent where they're needed, often rural border towns where typical suburban comforts like nightlife, recreation, and even housing are scant. Manjarrez said that while he was at CBP, a common occurrence would be losing recruits during the long hiring process, simply because they found jobs elsewhere. Manjarrez said that out of the few recruits that would actually show up for the proctored exam, only a fraction would pass. And out of those that passed, even fewer would show up for the next stage of hiring. "It was such a long process," he said.

Border Patrol agents in Nogales, Arizona. CBP on Flickr

D.B., a border patrol agent who spoke to me under the condition of anonymity, told me it took him more than two years from when he first applied to CBP to become an agent. He said that the wait time is a huge deterrent in keeping recruits interested in the positions; CBP cites their average time to hire as 300 days. "Lives change in that period of time," he said. "People get into relationships, get pregnant, offered other jobs."

D.B., who had a lifelong goal of working in law enforcement, also works as a recruiter for the agency. Contrary to Manjarrez's point of view on salary, D.B. says he's satisfied with the pay scale and the benefits he gets from CBP. He says that for his recruits, location, not compensation, is the primary deterrent. "Location of living is why we struggle to fill positions," he says. "No one wants to live in a run-down border town."

CBP has struggled to make service look desirable since long before Trump's aggressive staffing goals and immigration rhetoric, and the desire to get more boots on the border is decades in the making. Under President Bill Clinton, border patrol, called Immigration and Naturalization Services at the time, saw its first major increase in staffing. Clinton's Attorney General, Janet Reno, announced Operation Gatekeeper in 1994; this ushered in a new era of U.S. border patrol, with an uptick not only in staffing but in equipment, including adding 40 seismic sensors to detect movement across the border at all hours.

Now, as thousands of agents and officers hired under Clinton become eligible for retirement, CBP faces major potential frontline losses. "The big disadvantage for ICE and CBP is that they actually hire large numbers, but need those numbers to cover attrition," Manjarrez said. "You have to have people in the pipeline, people that you've already recruited."

However, that long timeline for hiring recruits is something that D.B. and Manjarrez said poses the biggest systemic challenge to creating a steady pipeline. In addition to a proctored exam, extensive background checks, and multiple interviews, CBP recruits are required to take a polygraph, or lie-detector, test. CBP's website says recruits can expect the polygraph test to take up to six hours and that it's required by the Anti-Border Corruption Act of 2010. While the polygraph requirement was introduced with the intention of creating stricter standards for CBP recruits, particularly in hopes of identifying undisclosed drug use or criminal activity, the American Psychological Association maintains that the veracity of polygraph testing is widely questioned.

"The polygraph has been detrimental to our hiring program," D.B. said. "Granted, it helps weed out candidates who shouldn't be in law enforcement; it also in turn is weeding out truly qualified candidates because they have a non-conclusive result."

To ease the burden of its onerous recruitment process, CBP eliminated several requirements for new recruits around the same time Trump took office. In 2017, the agency was granted authority to waive the polygraph test for certain veterans and to conduct expedited hires for certain roles. A year later, only 184 officers and agents were hired as a result of those policies. For pilots, CBP dropped a requirement that recruits complete over 100 flight hours in the year prior to their application. Data showed that while CBP gained zero new air operations agents in 2014 and 2015, 46 were brought on in 2018. The agency also eliminated one of two fitness tests required, which D.B. says was a misguided move. "I am against that move 100 percent," he said. "With dropping the physical fitness test, the amount of injuries at the academy have increased due to trainees not being ready for the strenuous physical demand."

Around the same time CBP started softening its requirements, the agency sought outside help to quench its hiring woes in the wake of Trump's hefty goals, awarding a five-year, $297 million contract to Ireland-based professional services company Accenture to aid with recruitment. The 2017 contract laid out the company's plans to help CBP recruits simply finish the tedious application process, aiming to provide one-on-one counseling to encourage completion, according to a report by Mother Jones.

Accenture didn't hold up its end of the multi-million-dollar deal. By 2018, 10 months and more than $13 million into the contract, Accenture had only processed two new hires, according to the Department of Homeland Security. By December 2018, the DHS Inspector General's office filed a report under the urgent title "CBP Needs to Address Serious Performance Issues on the Accenture Hiring Contract." The report, which gives a scathing performance review of the company, lists each of Accenture's shortcomings in detail. "CBP has paid Accenture approximately $13.6 million for startup costs, security requirements, recruiting, and applicant support," the report reads. "In return, Accenture has processed two accepted job offers."

DHS wasn't the only one who felt unenthralled by the deal. While the Inspector General's office was finalizing its report, Accenture employees had drawn up a petition to end the contract with DHS on the grounds of their work being used to supercharge inhumane and cruel policies, according to Bloomberg. (The outlet did not specify whether signers of the petition were employees who worked on the CBP deal, or Accenture employees from various teams.)

By spring, DHS succumbed to evidence that the contract was a bust and terminated the deal. By that point, only 22 new recruits had joined CBP's ranks with Accenture's help, less than one percent of the 2,357 frontline gains CBP had overall during 2018 alone.

While DHS's failure to reach Trump's hiring goals is ongoing, data from CBP shows that there have been dramatic increases in certain frontline programs that illustrate how the agency is still managing to increase its presence on the border (and the denials and arrests that follow) despite these challenges. The number of officers hired to patrol and work the field in jurisdictions like Laredo, Texas, and Tucson, Arizona, more than doubled between 2016 and 2018. In Tucson and New York City, officer gains in the first three quarters of 2019 have already surpassed 2018's yearly total. In fact, even excluding the last three months of 2019 that haven't been reported yet, this year has seen CBP's highest number of new frontline gains in the past five years. However, these numbers don't take into account attrition and include internal hires and re-assignments. So, while more employees seem to be moving to frontline roles to try and meet Trump's demand, CBP is still struggling to bring in new people.

"The long-range planning in terms of bringing large groups of people has not been well thought out," Manjarrez said. "In the next couple of years, they're going to have massive retirement, and they'll be further in the hole."


Soundtrack Review: This Is Us With A Groove Thats Pretty Good – Forbes


When This Is Us hit the scene back in 2016, it took the television world by storm, much to the surprise of nearly every single one of NBC's competitors. The show lacked a giant genre premise, a (really) crazy hook and super intense drama, yet, somehow, managed to connect with a large majority of primetime viewership by simply being a show about the complexities that come with being a human being. Since then, many imitators have come and gone without managing to stake their claim as a real companion to the series. But, that's about to change with Netflix's Soundtrack.

Created by Joshua Safran, Soundtrack follows the lives of various Los Angeles residents as they try to go about their lives as best they can, through a jukebox-musical-like lens that gives the audience a peek into the music living in their heads during the most notable moments of their existence.

Soundtrack is a show about love that also serves as a love letter to the modern age of self-scoring. It's a show that could only exist now, in the age of digital music. It's a show that acknowledges the way many of us consume sound, like the way we breathe, in today's age. There's seldom a time each of us doesn't have our earbuds jammed into our skulls while scoring our own existence. This is the modern human behavior the series is trying to bring to the screen.

But, while the grand idea behind the series is commendable, it's not without its quirks. The show presents a tone that takes a moment to get used to. It's hyper-real but subdued, while also being of a low-stakes nature but also kind of not. The show carries itself in a very loose manner that one must give oneself over to in order to enjoy it. However, that willingness to go on the show's ride is rewarded with a kind of sweetness rarely seen from Netflix's library these days. One could even argue the show's development for network television really gives it an edge over its streaming competition.

Overall, Soundtrack has a lot going for it that will delight fans of this kind of show. The music is modern pop that plays as it should, and the stories carry with them that kind of melodrama that's easy to get sucked into if allowed to be enjoyed. This is one that will play very well during the holiday season this year.

Soundtrack premieres Wednesday, December 18th on Netflix


Reevaluating human colonization of the Caribbean using chronometric hygiene and Bayesian modeling – Science Advances

Abstract

Human settlement of the Caribbean represents the only example in the Americas of peoples colonizing islands that were not visible from surrounding mainland areas or other islands. Unfortunately, many interpretive models have relied on radiocarbon determinations that do not meet standard criteria for reporting because they lack critical information or sufficient provenience, often leading to specious interpretations. We have collated 2484 radiocarbon determinations, assigned them to classes based on chronometric hygiene criteria, and constructed Bayesian colonization models of the acceptable determinations to examine patterns of initial settlement. Colonization estimates for 26 islands indicate that (i) the region was settled in two major population dispersals that likely originated from South America; (ii) colonists reached islands in the northern Antilles before the southern islands; and (iii) the results support the southward route hypothesis and refute the stepping-stone model.

Radiocarbon (14C) dating is the most frequently used chronometric technique in archaeology given its wide applicability and temporal range that covers the last ca. 50 ka. Preserved carbon-based organic materials such as charcoal, shell, and bone are often key sources of information for determining the onset and duration of cultural events that occurred in the past. Unfortunately, building refined chronologies in many regions has been hampered by a lack of critical evaluation and application of radiocarbon dating. The Caribbean is no exception in this regard.

Initial human colonization of the insular Caribbean, which comprises more than 2.75 million km2 of open water, represents one of the most remarkable, but least understood, population dispersals in human history. In archaeology, the term colonization as it applies to initial human settlement of a landscape has not always been readily defined. For the purposes of this paper, we follow other case studies that define colonization as the earliest reliable (i.e., unambiguous) evidence for human arrival to previously uninhabited landmasses [e.g., (1)]. What sets the Caribbean apart from the rest of the Americas is that these colonization events are the only instances where ancient Amerindian groups would have crossed hundreds or even thousands of kilometers of open sea using watercraft, likely single-hulled canoes, to reach uninhabited islands after losing sight of land, either from surrounding mainland areas or between the islands themselves (2). However, the onset, tempo, and origin of these movements are still debated (3, 4), and persistent problems with how radiocarbon determinations are used and reported have plagued Caribbean archaeology. Many published determinations lack the necessary information essential to adequately examine potential sources of error (e.g., contamination, poor cultural associations, taphonomic issues, or publication of uncorrected marine determinations), all of which can greatly influence archaeological interpretation (5-7).

This lack of rigor in reporting radiocarbon determinations brings into question the temporal efficacy of the region's cultural-historical framework for various phases of settlement and subsequent cultural behaviors. One major outcome has been an ongoing debate regarding how, when, and from where the Caribbean islands were first colonized during both the Archaic (ca. 7000-2500 B.P.) and Ceramic Ages (beginning ca. 2500 B.P.), during which groups are thought to have ventured north from somewhere along the South American mainland. This is highlighted in two competing models: (i) the stepping-stone model, which suggests a general south-to-north settlement from South America through the Lesser Antilles into the Greater Antilles (8), and (ii) the southward route hypothesis, which proposes that the northern Antilles were settled directly from South America followed by progressively southward movement(s) into the Lesser Antilles (Fig. 1) (9).

Fig. 1. Colonists reached islands in the northern Antilles, bypassing islands in the southern Lesser Antilles, refuting a stepping-stone pattern. SS denotes the stepping-stone model, and SRH denotes the southward route hypothesis.

Like other world regions where humans appear to have moved rapidly through landscapes or seascapes, such as the Pacific colonization of Remote Oceania, which took place in stages from different points of origin, or North America, where the coastal migration versus ice-free corridor debate has raged for decades, support for one model or another largely depends on the number, quality, and suitability of radiocarbon determinations used in analysis. For the Caribbean, this not only has relevance for establishing the routes of dispersal but also has important implications for understanding other natural and social variables that would have influenced the movement of peoples in watercraft and possibly encouraged (or discouraged) travel, including prevailing oceanographic conditions (e.g., currents, winds), climatic anomalies (e.g., El Niño), technological capabilities, or natural events (e.g., volcanism) (2, 3).

A common approach to improving the efficacy of large radiocarbon inventories in the event of unreliable or inadequately reported determinations is to apply a chronometric hygiene protocol [e.g., (5, 10, 11); see Materials and Methods]. In this selection process, determinations are assigned to different reliability classes that effectively cull spurious radiocarbon determinations. To resolve many of the issues related to our understanding of the timing and trajectories of Caribbean colonization, we have compiled the largest publicly available database of radiocarbon determinations for the region (n = 2484), applied a chronometric hygiene protocol, and found that only 54% of dates meet current reporting standards. Radiocarbon determinations from 55 islands were obtained through an extensive literature review, including available English, Spanish, and French publications, and were bolstered by contacting more than 100 researchers and radiocarbon laboratories to obtain unpublished or underreported determinations and their associated data. These efforts have more than tripled the number of radiocarbon dates used in the last assessment (5). Bayesian analyses of the resulting acceptable 1348 determinations for 26 Caribbean islands provide the first model-based age estimates for initial human arrival in the Caribbean and help resolve long-standing debates about initial settlement of the region.

Following results of the first chronometric hygiene study done for the Caribbean more than a decade ago (5), we expect that many islands will have younger colonization estimates after the hygiene protocol is applied, a result also seen in other similar studies (11). Hence, we examine competing colonization models using only the most reliable determinations from this enhanced database.

For decades, archaeologists have assumed that the Caribbean was settled in multiple stages and directions. The first, termed Lithic (8, 12, 13), was said to originate in Mesoamerica with dispersal into Cuba and through parts of the Greater Antilles ca. 6000–5000 cal years B.P. The evidence for this is based almost solely on perceived similarities in stone tools, ephemeral archaeological assemblages, and a limited number of radiocarbon dates (3, 13). The second was a northward movement from South America around the same time or slightly earlier, known as the Archaic. While both the Lithic and Archaic Ages are now generally referred to as the Archaic regardless of supposed origin, it is evident that not all islands in the Antilles were settled during this time, for reasons that are still unclear (3). It was not until thousands of years later, ca. 2500 B.P., that an apparently new migratory group known as Saladoid, named after the Saladero site in Venezuela where distinctive pottery was first identified, moved into Puerto Rico and much of the Lesser Antilles. However, Saladoid dates are not all contemporaneous, and some islands remained uninhabited until much later.

Apart from Trinidad, which today is only 10 km from Venezuela and was connected to the mainland by a land bridge during the Late Pleistocene/Early Holocene (14), it was recognized that the oldest radiocarbon dates in the region, both for initial colonization (Lithic/Archaic) and later Saladoid populations, were found in the northern Caribbean (e.g., Cuba, Puerto Rico, St. Martin, and Anguilla). Yet, there had been no substantive attempt to compile or critically examine larger datasets to investigate this model in more detail until Fitzpatrick's study in 2006.

The long-held stepping-stone model, in which groups originating in South America moved northward through the Lesser Antilles and Puerto Rico and then eventually west into the rest of the Greater Antilles, does not discount a possible earlier migration eastward from Mesoamerica into Cuba [e.g., (8)]. In this model, groups were able to move quickly through the Lesser Antilles because of the close proximity and intervisibility of islands once peoples reached Grenada. Chronological support for this model would require that the oldest radiocarbon dates be found in the southern Lesser Antilles, with those in Puerto Rico occurring later in time (presuming a slight lag as movement progressed northward) or, at the very least, contemporaneous if movement was rapid (9). This has been the prevailing model for decades, in part because of the ubiquity of Saladoid pottery found throughout Puerto Rico and the Lesser Antilles and the assumption that its presence was coeval. Although some scholars noted a discrepancy in which dates in the northern Antilles were older than those in the south, the SS model had not been explicitly tested, despite evidence that pottery styles were not always reliable chronological markers (7, 9).

The prevailing stepping-stone model was challenged more than two decades ago when computer simulations of seafaring suggested that migrants voyaging from South America would have had the highest probability of initial landfall in the northern Caribbean due to the consistently strong easterly trade winds blowing through the southern Lesser Antilles and ocean currents that flow in the same direction, making eastward progress difficult, if not impossible (15). Fitzpatrick (5) was the first to examine this problem using quantitative archaeological data. After reviewing more than 600 radiocarbon dates from 36 Caribbean islands, he came to a similar conclusion, showing that the earliest acceptable dates for Saladoid, as well as earlier Archaic settlement, were found in the northern islands, with first settlement of the southern Lesser Antilles, Bahamas, and Jamaica occurring centuries later after a long pause of around 1000 years (5).

As a result of these studies, a second model, termed the southward route hypothesis, suggested that there was instead a direct movement from South America to the northern Caribbean (Puerto Rico and the northern Lesser Antilles) that initially bypassed the southern Lesser Antilles [see (2, 5, 9, 13)]. This model largely rejects a Mesoamerican origin as based on spurious data and assumes that the oldest radiocarbon dates are found in the northern Lesser Antilles and Puerto Rico based on previous chronometric hygiene analysis (5). Giovas and Fitzpatrick (16) further explored this scenario using an ideal free distribution framework. Their results indicated that settlement location was likely influenced by the attractiveness of resources, available land, and seafaring limitations. Together, these factors suggested that dispersals were fluctuating and opportunistic, leading to settlement of the largest and most productive islands first, followed by a gradual southward movement ca. 2000 cal years B.P. Only around 500 years later, ca. 1400 cal years B.P., were Jamaica and the Bahamas occupied for the first time (Fig. 1).

More recently, analyses of paleoenvironmental data from lake cores showing an increase in charcoal particle concentrations and changes in vegetation regimes through time have been used as proxy evidence in support of an even earlier settlement of many islands, in some cases thousands of years before the archaeological evidence (17–19). However, we do not view the results of these paleoenvironmental surveys as convincing evidence of human colonization, as the data used in these analyses are often not clearly from cultural contexts, nor do they contain unequivocal anthropogenic signatures such as pollen or other micro- or macrobotanical remains from introduced cultigens [see also (20–22)]. Nonetheless, the argument has revitalized the notion of a northward stepping-stone population movement, one that is much earlier than archaeological records indicate.

Fitzpatrick's previous chronometric hygiene study more than 10 years ago revealed that 87.6% of the radiocarbon dates available at that time were acceptable (5). In addition, only 21 (58.3%) of 36 islands examined had any archaeological sites with at least three radiocarbon dates; astonishingly, 127 (73.8%) of 172 sites in the dataset had three or fewer dates. While this earlier study was relatively thorough, there were still an unknown number of dates unavailable due to issues of accessibility (e.g., contract-based gray literature) or nonreporting. Fortunately, there has been a considerable increase in published radiocarbon dates over the past decade that has substantially expanded the amount of chronological data available. The greater number of radiocarbon dates for the Caribbean now has the potential to dramatically improve our understanding of the mode and tempo of prehistoric colonization and a host of other issues, such as measuring human impacts on island ecosystems and reconstructing paleoecological and paleoclimatological conditions through time. However, many of the same problems with radiocarbon dating that were prevalent 13 years ago persist today, including the use of unidentified wood from potentially long-lived taxa, unknown marine reservoir corrections, and/or the inclusion of dates from contexts that are not clearly anthropogenic. Because all of these issues require chronometric hygiene before colonization models can be sufficiently reevaluated, the data presented here comprise the largest compendium of radiocarbon determinations yet assembled for the Caribbean, which are used to create the first model-based colonization estimates for 26 islands.

A total of 2484 radiocarbon determinations were compiled from 585 sites on 55 islands (table S1). Dates were assigned to one of four classes using chronometric hygiene protocols (see Materials and Methods for the class criteria). Only 10 dates (0.40%) met the criteria for Class 1 (the most acceptable dates), and 1338 (53.9%) dates met the criteria for Class 2, for a total of 1348 (54.3%) dates considered acceptable for Bayesian analysis. Seventeen islands (31.0%) with radiocarbon dates did not have any Class 1 or 2 dates (Table 1). Despite a tremendous increase in research and publication over the past decade, 433 (74.0%) archaeological sites still have three or fewer radiocarbon determinations, and 237 (40.5%) sites have only a single date representing the entire site. This is a minimal change compared with the earlier study a decade ago, in which 164 (39.4%) sites had a single reported radiocarbon date (5). Surprisingly, only 881 published radiocarbon determinations (35.5%) contained 13C/12C values (δ13C), many of which were only made available after contacting the author or radiocarbon laboratory. These values are important for understanding whether dates were corrected with estimated values or with the δ13C measured in the sample itself, and whether the fractionation was calculated using accelerator mass spectrometry (AMS) or isotope ratio mass spectrometry (IRMS).

Consequently, many islands settled before European contact were excluded from our Bayesian modeling, which only used Classes 1 and 2 dates. For example, while it is clear that Saba has a rich prehistoric record (23), it was not modeled due to the lack of acceptable radiocarbon determinations (two Class 2 dates out of 41 total determinations) based on our chronometric hygiene criteria. Similarly, our chronometric hygiene protocol and Bayesian analyses show that the modeled colonization estimate for Nevis is 1425–1000 cal years B.P. [95% highest posterior density (HPD)], despite the presence of the Hichmans site, which was identified as an earlier Archaic settlement containing an assemblage similar to other Archaic sites on nearby islands (24, 25). Our results suggest a more recent settlement chronology for many islands, similar to other chronometric hygiene studies [e.g., (11)], and highlight important problems with the quality of radiocarbon dates in the region and/or misinterpretation of supposed earlier dates, as many of those previously reported fail to meet criteria for accurate, reliable reporting.

Class 1 dates include those from the Coralie site on Grand Turk (26), a cenote at Manantial de la Aleta on Hispaniola (27), Cave 18 on Mona Island (table S1), and two sites on Puerto Rico: AR-39 (28) and Cag-3 (29) (Table 2). One of the three Class 1 radiocarbon determinations from the Coralie site is the oldest acceptable date from Grand Turk, but three Class 1 dates are not enough to produce a robust colonization estimate. The remaining Class 1 dates from Hispaniola, Puerto Rico, and Mona Island likely do not date to the first colonization of those islands. Together, these 10 dates cannot by themselves be used to evaluate different colonization models. Therefore, we instead generated colonization models using the Class 1 dates combined with the larger, more robust Class 2 dataset.

Table 2 abbreviations: EU, excavation unit; cmbd, centimeters below datum.

Of 55 islands, 26 met the criteria for Bayesian modeling. Nearly all Class 2 determinations from wood samples were from unidentified taxa or potentially long-lived species that can present inbuilt age problems. Therefore, modeled colonization estimates were produced using the Charcoal_Outlier analysis in OxCal, which treats radiocarbon determinations on unidentified wood as having a 100% probability of having as much as 100 years of inbuilt age [(30, 31); see Materials and Methods]. All islands selected for Bayesian modeling possessed nine or more acceptable dates and produced a model agreement (Amodel) of at least 77.9% and an overall agreement (Aoverall) of at least 62.8% (Table 3; see Materials and Methods).

Table 3 note: Puerto Rico was modeled with the 100 oldest determinations (see Materials and Methods).

The oldest modeled dates for Cuba (LE-4283) and Vieques (I-16153) had poor agreement indices, but the model agreement (Amodel) and overall agreement (Aoverall) remained high (Table 3 and tables S2 to S4). The poor agreement indices were likely caused by a gap between the oldest modeled dates and the rest of the Phase, a gap produced by both the chronometric hygiene protocol and a relative dearth of radiocarbon determinations dating to early settlement compared with later periods.

Bayesian modeling of Classes 1 and 2 radiocarbon dates from each island markedly truncates the earliest estimated date of human settlement for six modeled islands. The biggest differences are for Anguilla, Cuba, Hispaniola, and Puerto Rico, which are as much as ca. 2100 to 2300 years younger than previously reported. Although still dating to the Archaic Age (ca. >2500 cal years B.P.), the modeled colonization estimate places human settlement of Puerto Rico and Hispaniola after other islands such as Cuba, Curaçao, St. Martin, and, possibly, Barbados.

The results of our chronometric hygiene and Bayesian modeling both support and offer new perspectives on the pattern of pre-Columbian colonization of the Caribbean islands. Trinidad produced the oldest colonization model estimate of 8420–7285 cal years B.P. (95% HPD). This is expected given that lower sea levels in the Late Pleistocene and Early Holocene either connected or placed Trinidad close enough to the South American mainland to allow for settlement that would not have necessarily required sophisticated (or any) watercraft (14). Consequently, early sites on Trinidad should be considered differently when compared with other islands in the Antilles where long-distance seafaring and more advanced wayfinding skills were likely required to colonize (3, 7). After Trinidad, our results suggest two distinct clusters of colonization estimates modeled from ca. 5800–2500 cal years B.P. and 1800–500 cal years B.P. (Figs. 1 and 2).

The two clusters fit well with generally accepted cultural divisions in the Caribbean. The first cluster, ca. 5800–2500 cal years B.P., suggests two distinct population dispersals into the Caribbean that span the Archaic and the inception of the Ceramic Age. The earliest settled islands in the first cluster of our model, ca. 5800–2500 cal years B.P., are Cuba, Hispaniola, and Puerto Rico in the Greater Antilles; Guadeloupe, St. Martin, Vieques, St. Thomas, Barbuda, Antigua, and Montserrat in the northern Lesser Antilles; Barbados and Grenada in the southern Lesser Antilles; and Aruba, Bonaire, and Curaçao, located relatively close (27, 88, and 65 km, respectively) to mainland South America, along with Tobago, which is 35 km northeast of Trinidad (Fig. 1). Before our chronometric hygiene, the oldest reported radiocarbon dates in the Greater Antilles suggested that Archaic populations reached the area as early as ca. 7400–6900 cal years B.P. (3, 5). Together, these results for earliest settlement are consistent with the southward route hypothesis and suggest that some of the largest and most resource-rich islands in the northern Caribbean were settled first (14). In addition, our analysis places Curaçao in the earliest cluster, which may be explained by its close proximity to mainland South America. Barbados represents an exception and has long been thought to be an interesting case of anomalous early settlement of the southern Lesser Antilles; our results continue to support this notion (3, 32).

These results suggest that after the initial settlement of larger islands in the Greater Antilles and some of the smaller islands close to the mainland during the Archaic period, subsequent Ceramic Age settlement focused again on additional smaller islands close to the mainland and several in the northern Lesser Antilles, including those close to islands previously settled during the Archaic. This is not entirely unexpected, for subsequent population dispersals such as Saladoid are likely to have followed similar trajectories, particularly if there had been a long tradition of ancestral groups traveling between the mainland and the Antilles over the course of centuries or even millennia.

The second cluster of colonization estimates falls between ca. 1800 and 500 cal years B.P. and corresponds to another burst of activity in which several islands in both the northern (St. John, St. Eustatius, Nevis, and Anguilla) and southern (St. Lucia and Carriacou) Lesser Antilles were colonized. Settlement of the Bahamian Archipelago also takes place within this time period on Grand Turk and San Salvador. It is possible that the chronologies reflect multiple groups moving in various directions (northern and southern) simultaneously, an expected outcome as trade and exchange relationships quickly accelerated after Saladoid occupation (4).

Our results place Anguilla within this later cluster, which likely reflects the results of chronometric hygiene and the removal of the oldest dates for the island, given that many of these were reported without provenience and had to be excluded from analysis. The previously accepted earliest radiocarbon determinations from Anguilla were on Lobatus sp. shell tools from surface contexts. However, given the lack of stratigraphic control, those determinations were discarded from our analysis. This does not rule out an earlier settlement of the island, but well-anchored radiocarbon evidence for it is currently lacking.

The research presented here has important implications for examining previous explanatory models of human dispersal into the Caribbean. First, using only the most secure radiocarbon determinations, our results do not support an initial northward stepping-stone pattern, once the dominant scenario and now resurrected by proponents of recently collected paleoenvironmental data (17). Instead, our results suggest that islands in the Greater Antilles, in the northern Lesser Antilles, and located very close to the South American mainland have the earliest reliable radiocarbon determinations and modeled chronologies. These data are consistent with the general predictions of island biogeography, in which the closest and largest islands are colonized first (33, 34), as well as with the southward route hypothesis, whereby the largest and/or most northerly islands in the Antilles were initially colonized, with subsequent settlement proceeding southward through the Lesser Antilles. These results are also supported by previous chronometric hygiene analyses (5), seafaring simulations (32), fine-grained ceramic analysis (35), and predictions of the ideal free distribution model (16).

Despite consistency with previously proposed models, there are some islands that were settled anomalously later than would be expected or not at all. For example, Jamaica has no known Archaic or Saladoid settlements, with the earliest sites containing Ostionoid ceramics (post ca. 1400 B.P.). The Cayman Islands have no evidence for settlement before European arrival, despite several attempts by researchers to locate archaeological sites (3, 36). The disparity in these dates could be attributed to environmental factors, such as rough sea conditions that complicated successful navigation to these islands (37), survey and excavation bias, the obscuring of evidence due to natural and/or cultural processes (e.g., sea level changes, volcanism, commercial development), or other unknown reasons. This demonstrates that the investigation of when and how island regions were colonized must be treated on an island-by-island basis and not generalized across whole regions or archipelagos, as many other variables (e.g., cultural, oceanographic, and geologic) likely influenced population dispersals.

Our analysis, while using the most robust chronological dataset yet compiled for the Caribbean, is still limited by incomplete or unpublished information as well as biased survey coverage for various sites and islands. Suggested colonization estimates are presented using only the most secure chronological data available, but doing so led to the exclusion of more than 1000 radiocarbon determinations. The very nature of chronometric hygiene means that, in addition to removing erroneous assays, some discarded dates likely are representative of cultural activities during that time but do not fulfill the imposed criteria (38, 39). A recent discussion by Dye (40) suggests that these problems of chronometric hygiene and single-phase Bayesian models can potentially be resolved using two-phase models. Dye (40) took this approach for examining Pacific Island colonization and modeled the first phase using radiocarbon dates from precolonization paleoenvironmental data that directly preceded the first evidence for human colonization. This first phase helps to establish a cutoff point for the second, colonization phase of the model and serves, in conjunction with chronometric hygiene, as a step in deciding which chronometric data are most reliable. While robust and reliable precolonization paleoenvironmental data are currently lacking for most Caribbean islands [cf. (17)], the use of two-phase Bayesian models in future studies will likely improve the accuracy and precision of our colonization estimates. Another argument is that temporally diagnostic objects such as pottery could be used in the absence of radiocarbon determinations to fill in gaps created by chronometric hygiene. However, without the inclusion of additional absolute chronometric techniques (e.g., thermoluminescence and uranium-thorium), pottery and other diagnostic artifacts such as typologically distinct lithics serve as good chronological markers only when they are first anchored by reliable absolute dates. For example, Cedrosan Saladoid pottery, thought to occur only in sites older than ca. 2000 years B.P., has been recovered on some islands like Carriacou, where the earliest acceptable dates are much later in time, ca. 1550–1375 cal years B.P. (95% HPD) (with only 4.3% of determinations from the island rejected). One implication of our revised colonization chronologies is that other long-accepted temporal events in Caribbean culture history, such as subdivisions within pottery typologies during the Ceramic Age (e.g., Troumassoid and Ostionoid), are also likely in need of critical reexamination.

Limitations resulting from the chronometric hygiene protocol could also be circumvented in the future with more detailed reporting and calibration of radiocarbon data, including taxonomic identification of samples, laboratory number, and radiocarbon age. More complete reporting would increase the reliability and, thus, the number of acceptable radiocarbon determinations (i.e., Classes 1 and 2) for many sites and islands across the region, an issue that is still pervasive even in more recent syntheses of data for the Archaic [e.g., (41)]. To return to the example of the Hichmans site on Nevis, all nine determinations were designated as Class 3 because they were from unidentified marine shell or were reported without sufficient provenience (24). If this information were published or made available by the author or the radiocarbon laboratory, it could help refine the colonization estimate for Nevis.

The present database will be further refined as additional information becomes available or if portions of the original dated samples were saved and can be redated. A best-practice approach to managing legacy dates is to rerun the radiocarbon sample, if any part of the original sample remains, to improve precision. For other samples, if part of the original specimen remains, it may be possible to identify the taxon and thereby avoid issues such as the old wood problem. Regardless, the results show spatiotemporal patterns consistent with previous chronometric hygiene studies, seafaring simulations, and theoretical models of population ecology. Our supporting evidence for previously proposed hypotheses is also potentially falsifiable with additional archaeological evidence. For example, recently published radiocarbon determinations from Grenada suggest a previously unidentified Archaic component (35). It is quite possible that expanded research programs on other islands could also push back dates of colonization and strengthen existing chronologies.

Interpretations of archaeological sites, assemblages, and other remnants of human behavior hinge on temporal frameworks largely built on radiocarbon determinations. This study, which involved compiling the largest dataset of radiocarbon determinations from more than 50 islands in the Caribbean, subjecting them to a rigorous chronometric hygiene protocol, and constructing Bayesian models to derive probabilistic colonization estimates, demonstrates that only around half of the currently available radiocarbon determinations are acceptable for chronology building. The paltry number of Class 1 determinations (n = 10) is especially concerning, as these are considered by scholars elsewhere to be the only acceptable class of samples to use in archaeological research [e.g., (11)]. This means that only 0.4% of the 2484 available radiocarbon determinations from the Caribbean would be acceptable if the same standards used in other regions were applied here. That many of the radiocarbon determinations in our database were discarded because of a lack of reporting of critical information underscores the importance of transparency when presenting results and conclusions. Given that the average cost of a single radiocarbon determination can be hundreds of dollars, it is not unreasonable to assume that this database represents an investment of around $1 million worth of radiocarbon determinations, largely funded by government agencies, not including the associated costs of obtaining sample material. Many radiocarbon determinations are paid for with taxpayer money, and with recent increased scrutiny of publicly funded research in many parts of the world, archaeologists must take responsibility to ensure that their samples are robust, reported in full, and widely available.

Overall, results from chronometric hygiene and Bayesian analysis of acceptable radiocarbon determinations suggest direct movement from South America to the northern Caribbean (Cuba, Hispaniola, and Puerto Rico and the northern Lesser Antilles) that initially bypassed the southern Lesser Antilles, with the exception of Barbados and possibly Grenada, which have evidence, albeit limited, for Archaic colonization. The later colonization estimates for islands in the southern Lesser Antilles support the southward route hypothesis and the predictions of ideal free distribution and do not support the oft-cited and recently reinvigorated stepping-stone model.

Like many of the current models used by Caribbean scholars to explain past human lifeways, which hinge on secure and reliable radiocarbon determinations, the models presented here will require further quantitative testing and closer scrutiny of the samples used for developing both local and regional chronologies. The analyses presented in this study can also be used to develop testable hypotheses for predicting when islands not included in our analysis were colonized. Overall, this study demonstrates the need for increased rigor in the reporting of radiocarbon determinations to adequately assess their efficacy and maintain chronological control, ensuring that interpretive models are satisfactorily anchored in time and accurately reflect, to the best of our ability, the multitude of cultural behaviors that happened in the past.

A chronometric hygiene protocol was applied to critically assess the reliability of radiocarbon determinations in relation to target events. Careful application of stricter criteria improves confidence that the dated radiocarbon event reliably relates to human activity (5, 10, 11). Dates were placed into four separate classes, the two most acceptable of which were modeled using Bayesian analysis (30). Class 1 dates, which fit the most stringent criteria, are from short-lived terrestrial material (i.e., plant remains or juvenile fauna) identified to taxon, or terrestrial animal bone identified to taxon and dated by AMS, and must include both sufficient provenience information (i.e., not from surface contexts, with evidence of a secure archaeological context) and the processing laboratory name and number. Class 2 dates include charcoal or charred material not identified to taxon, marine shell identified to taxon, and culturally modified shell (e.g., adzes). These dates must also include sufficient provenience information and the processing laboratory number. Class 3 dates lack some component of the above contextual information, or are on marine shell not identified to taxon, bulk sediment or shell samples containing multiple individuals, or human bone apatite dated radiometrically, or have a radiocarbon age of 300 years B.P. or younger. Radiocarbon dates of less than 300 years B.P. were excluded from analysis because their 95% posterior probability ranges would extend into the modern period. Unidentified marine shell was assigned to Class 3 because some samples may belong to long-lived species or have other unresolved issues, such as the inbuilt age associated with mobile and/or carnivorous gastropods that ingest older carbon from limestone substrates. Class 4 dates were rejected because they lacked critical information, were not from a secure cultural context, or were originally published as modern dates and rejected by the original author(s). Radiocarbon dates from paleoenvironmental studies were rejected as Class 4 unless the date was obtained on anthropogenically introduced plant taxa or came from a secure archaeological context, because their association with anthropogenic activity cannot otherwise be demonstrated and, thus, they may date contexts before human arrival.
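To make the class assignments easier to follow, the sketch below encodes a simplified version of the protocol in Python. The field names and the compressed rule set are hypothetical illustrations of the criteria described above, not the screening code actually used to produce table S1, and several edge cases are omitted.

```python
# Illustrative sketch of a chronometric hygiene classifier.
# Field names (material, identified_to_taxon, has_provenience, ...) are
# hypothetical; the rules compress the published criteria and omit edge cases.

def assign_class(date):
    """Assign a radiocarbon determination to hygiene Class 1-4."""
    if date["age_bp"] is None or not date["has_lab_number"] or not date["secure_context"]:
        return 4                                  # missing critical information
    if date["age_bp"] <= 300:
        return 3                                  # too recent to model reliably
    if not date["has_provenience"]:
        return 3
    short_lived_terrestrial = (
        date["material"] in {"plant", "juvenile_fauna", "terrestrial_bone"}
        and date["identified_to_taxon"]
    )
    if short_lived_terrestrial and (date["material"] != "terrestrial_bone" or date["ams"]):
        return 1                                  # most stringent criteria met
    if date["material"] in {"charcoal", "charred_material"}:
        return 2                                  # acceptable, but possible inbuilt age
    if date["material"] in {"marine_shell", "modified_shell"} and date["identified_to_taxon"]:
        return 2
    return 3                                      # unidentified shell, bulk sediment, etc.

# Example: an unidentified charcoal date with full reporting -> Class 2
example = {
    "age_bp": 2450, "material": "charcoal", "identified_to_taxon": False,
    "has_lab_number": True, "has_provenience": True, "secure_context": True,
    "ams": True,
}
print(assign_class(example))  # 2
```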

Terrestrial and marine radiocarbon determinations were calibrated using IntCal13 and Marine13, respectively (30, 42). Radiocarbon determinations on human bone were calibrated using a 50%:50% IntCal13/Marine13 mixed curve with a 12% error to account for the mixed marine and terrestrial diet common in the region. This 50%/50% ratio has been applied in other dietary studies [e.g., (43)], although few published studies address how dietary ratio may influence radiocarbon date calibration. Cook et al. (44) recommend using an error of 10% when groups are not consuming C4 plants; however, we selected a more conservative error of 12% to account for the presence of C4 plants in prehistoric Caribbean diets. Furthermore, marine-based subsistence strategies varied between individuals, across islands or archipelagos, and through time (45, 46). At this stage, it is not possible to develop a template for calibrating human bone other than to say that diets were likely mixed to some degree (47, 48). Future isotopic research on island-specific and temporally specific dietary ratios can be used to refine marine and terrestrial ratios for human bones. In addition, given the paucity of interisland and intraisland local marine carbon offsets for the Caribbean (5, 49), no local marine reservoir correction (ΔR) was applied to marine determinations, although there should be a concerted effort to obtain these values in the future. We did, however, apply the standard global marine reservoir correction to marine dates.
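As a rough numerical illustration of how the 50%:50% curve mixture and its 12% uncertainty propagate into the calibration error, the sketch below applies first-order error propagation to a weighted average of the two curves at a single calendar year. The curve values are invented placeholders, and OxCal handles the mixing internally, so this is a back-of-the-envelope check rather than a reimplementation of the calibration.

```python
import math

def mixed_curve_point(terr_mu, terr_sigma, mar_mu, mar_sigma,
                      p_marine=0.5, p_sigma=0.12):
    """First-order propagation of a two-curve mixture at one calendar year.

    terr_mu/mar_mu are the terrestrial/marine 14C ages (BP) for that year,
    terr_sigma/mar_sigma their 1-sigma curve errors, p_marine the assumed
    marine fraction of the diet, and p_sigma its uncertainty.
    """
    mu = (1.0 - p_marine) * terr_mu + p_marine * mar_mu
    var = ((1.0 - p_marine) * terr_sigma) ** 2 \
        + (p_marine * mar_sigma) ** 2 \
        + ((mar_mu - terr_mu) * p_sigma) ** 2   # uncertainty in the mixing ratio
    return mu, math.sqrt(var)

# Placeholder curve values for a single calendar year (not real IntCal/Marine data):
mu, sigma = mixed_curve_point(terr_mu=2400, terr_sigma=15, mar_mu=2790, mar_sigma=25)
print(round(mu), round(sigma))  # 2595 and ~49 14C yr, dominated by the mixing term
```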

Bayesian statistical models are increasingly used by archaeologists for modeling a range of temporal phenomena, from individual site chronologies to large-scale regional processes, and are particularly useful for radiocarbon datasets because they allow the analyst to incorporate prior information, such as stratigraphy or other known chronological information, into the estimation of probability distributions for groups of radiocarbon determinations. A strength of Bayesian models for archaeological studies is their ability to provide estimated date ranges for undated archaeological contexts, such as the onset, temporal duration, or end of a phenomenon of interest. Three key parameters of any Bayesian model are the prior, the likelihood, and the posterior. In archaeological applications, the prior is any chronological information or observations that are inferred before any radiocarbon data are collected or processed (e.g., stratigraphy), the likelihood is information obtained from the calibrated radiocarbon date range, and the posterior is an estimated calendar date range expressed probabilistically as the highest posterior density (HPD) region based on the relationship between the prior and likelihood (30). An evaluation of how well the model fits the radiocarbon data is expressed quantitatively as an agreement index, with agreement indices over 60% being the commonly accepted threshold for a good fit (50).
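For readers unfamiliar with how the prior, likelihood, and posterior combine in radiocarbon calibration, the toy example below calibrates a single determination on a coarse calendar grid against a fabricated calibration curve and reports an approximate 95% HPD range. It is purely conceptual: the real analyses use IntCal13/Marine13 and OxCal's sampling machinery, the curve here is invented, and the agreement-index bookkeeping is omitted.

```python
import numpy as np

# Toy calendar grid (cal years B.P.) and a fabricated, smoothly varying
# calibration curve: mu_curve(t) is the 14C age expected at calendar year t.
cal_bp = np.arange(2000, 3001)                         # 2000-3000 cal B.P.
mu_curve = 0.95 * cal_bp + 80 * np.sin(cal_bp / 60.0)  # placeholder curve
sigma_curve = 20.0                                     # placeholder curve error

# One radiocarbon determination (conventional 14C age and lab error).
r_age, r_err = 2450.0, 30.0

# Likelihood of each calendar year given the measurement, and a flat prior.
total_var = r_err**2 + sigma_curve**2
likelihood = np.exp(-0.5 * (r_age - mu_curve) ** 2 / total_var)
prior = np.ones_like(cal_bp, dtype=float)              # uniform prior over the grid

# Posterior: normalize prior x likelihood, then report a crude 95% HPD range.
posterior = prior * likelihood
posterior /= posterior.sum()
order = np.argsort(posterior)[::-1]
in_hpd = np.zeros_like(posterior, dtype=bool)
in_hpd[order[np.cumsum(posterior[order]) <= 0.95]] = True
print(cal_bp[in_hpd].min(), "-", cal_bp[in_hpd].max(), "cal B.P. (approx. 95% HPD)")
```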

Following recent Bayesian approaches to island colonization modeling in the Pacific [e.g., (40, 51–53)], here we model the colonization of the Caribbean islands using single-phase Bayesian models in OxCal 4.3.2 (30). This method involves combining radiocarbon dates from multiple strata and sites into a single group, with the goal of providing a simple structural framework for estimating the onset of colonization from the collective dates for an island. Using this approach, all uncalibrated conventional radiocarbon age determinations were grouped into a single unordered phase by island (table S4) using the Sequence, Boundary, and Phase functions in OxCal. The model then calibrates these determinations based on prior information (the other early dates in the Phase), and the modeled range of the start Boundary provides the colonization estimate. Here, we provide both 68% and 95% HPD probabilities for these colonization estimates, and all date ranges were rounded outward to the nearest five years using OxCal's rounding function (54).
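The structure of such a single-phase model can be written out mechanically. The Python sketch below generates OxCal-style model code (Sequence, Boundary, Phase, and R_Date calls) for a hypothetical island; the determinations are invented, and the model code actually used in this study is provided in table S4, so treat this only as an illustration of the model's shape.

```python
# Sketch of how a single-phase OxCal model for one island might be assembled
# programmatically. Sequence, Boundary, Phase, and R_Date are standard OxCal
# commands; the determinations below are invented placeholders.

def single_phase_model(island, dates):
    """Return OxCal-style code for an unordered single-phase colonization model.

    `dates` is a list of (lab_number, c14_age_bp, error) tuples.
    """
    lines = [
        f'Sequence("{island}")',
        "{",
        f'  Boundary("Start {island}");',
        f'  Phase("{island} dates")',
        "  {",
    ]
    for lab, age, err in dates:
        lines.append(f'    R_Date("{lab}", {age}, {err});')
    lines += [
        "  };",
        f'  Boundary("End {island}");',
        "};",
    ]
    return "\n".join(lines)

# Hypothetical determinations for one island:
print(single_phase_model("Island X", [("Lab-0001", 2510, 30), ("Lab-0002", 2445, 25)]))
```

The modeled start Boundary of this phase is what is reported as the colonization estimate; for the Tau_Boundary sensitivity test described below, that opening Boundary would be replaced with a Tau_Boundary call.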

Nearly all Class 2 determinations are from potentially long-lived species or unidentified wood samples and thus present possible inbuilt age problems. To address this issue, we treated each of these radiocarbon determinations as having a 100% probability of including some amount of inbuilt age using an exponential Charcoal_Outlier model (31, 55). The prior assumption in this type of model is that the correct age of the modeled event is younger than the unmodeled calibrated date by some unknown amount of time; the Charcoal_Outlier model is therefore expected to produce somewhat younger age estimates (31). Although Caribbean peoples were likely using dry scrub forest taxa, many of which are slow-growing species, use of these trees for fuelwood likely involved coppicing, which would have sustained forests while providing younger limbs for anthropogenic use. Commonly recovered tree species include lignum vitae (Guaiacum sp.), buttonwood (Conocarpus erectus), caper tree (Capparis sp.), strong bark (Bourreria sp.), wild lime (Zanthoxylum fagara), and mangrove (56). Given this ethnobotanical information, we elected to use a 100-year outlier model.
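To convey what a 100-year cap on inbuilt age implies, the sketch below samples illustrative inbuilt-age offsets from an exponential prior truncated at 100 years and summarizes them. This mirrors the spirit of the charcoal outlier approach rather than its exact parameterization, which is documented in the model code in table S4; the mean offset chosen here is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative prior on inbuilt age: exponential offsets (older wood is
# increasingly unlikely), truncated at the 100-year cap discussed in the text.
# The exact parameterization used in the published models is in table S4.
mean_offset, cap = 30.0, 100.0
draws = rng.exponential(mean_offset, size=100_000)
draws = draws[draws <= cap]

print(f"median inbuilt age ~ {np.median(draws):.0f} yr, "
      f"95th percentile ~ {np.percentile(draws, 95):.0f} yr")
# A dated charcoal sample's event age would be shifted younger by such an offset.
```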

A large proportion of our dataset is composed of radiocarbon determinations on unidentified wood and wood charcoal that likely have unknown inbuilt ages. Thus, the modeled date estimates derived from these samples may be too old. To address this, we modeled each island with unidentified wood samples in three ways: (i) as simple single-phase models with no additional parameters; (ii) treating each radiocarbon determination as having a 100% probability of between 1 and 100 years of inbuilt age using a Charcoal_Outlier model; and (iii) treating each radiocarbon determination as having a 100% probability of between 1 and 1000 years of inbuilt age using a Charcoal_Outlier model (table S4; see Supplementary Materials) (31). Assuming a 100% probability that samples have inbuilt age is intentionally conservative, as not all samples may have considerable inbuilt age.

In another set of sensitivity analyses, Cuba was modeled with and without legacy dates (radiocarbon determinations with large standard errors, e.g., >100 years) because, although imprecise, these samples likely still provide an accurate measurement of the target event when derived from secure archaeological contexts. Bayesian modeling accounts for the imprecision of legacy dates and can still produce acceptable models (54); comparing the Cuba models with and without these dates tests the efficacy of incorporating them.

The third set of sensitivity analyses tested how the model for Puerto Rico improves when built with fewer radiocarbon determinations. Modeling all 445 radiocarbon determinations does not produce an acceptable model, but model agreement increases as fewer dates are included (tables S5 and S6; Supplementary Materials). In addition, the oldest radiocarbon determination in the Phase does not achieve an acceptable agreement index until the model is restricted to the 100 oldest determinations.
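Producing the progressively smaller subsets used in this sensitivity test is straightforward; the sketch below selects the N oldest determinations by conventional radiocarbon age from a hypothetical list (the real determinations are in table S1).

```python
# Hypothetical determinations (lab_number, c14_age_bp, error) for one island;
# the real data are in table S1. Select the N oldest by conventional 14C age.
determinations = [
    ("Lab-0001", 3850, 40),
    ("Lab-0002", 2310, 30),
    ("Lab-0003", 4120, 60),
    ("Lab-0004", 3995, 35),
]

def oldest_subset(dates, n):
    """Return the n determinations with the largest conventional 14C ages."""
    return sorted(dates, key=lambda d: d[1], reverse=True)[:n]

print([lab for lab, _, _ in oldest_subset(determinations, 100)])
# With fewer than n dates available, the full sorted list is returned.
```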

Last, we tested whether islands with many younger dates skew the models and produce younger colonization estimates. To do so, we modeled Trinidad and Puerto Rico using the Tau_Boundary function in OxCal, which exponentially weights the radiocarbon determinations toward one end of the grouping.

Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/5/12/eaar7806/DC1

Supplementary Text

Table S1. Radiocarbon determinations from 55 Caribbean islands with their assigned class value.

Table S2. The 100-year outlier model results and parameters for 26 islands.

Table S3. The 100-year outlier model plots with 95% probability ranges.

Table S4. CQL code for the 100-year outlier models, 1000-year outlier models, and single-phase models.

Table S5. Modeled colonization estimates for Puerto Rico with a decreasing number of dates.

Table S6. Single-phase model results and parameters for Puerto Rico with a decreasing number of dates.

Table S7. Sensitivity analyses results.

Table S8. The 1000-year outlier model results and parameters for 26 islands.

Table S9. The 1000-year outlier model plots with 95% probability ranges.

Table S10. Single-phase model results and parameters for 26 islands.

Table S11. Single-phase model plots with 95% probability ranges.

Table S12. Originally reported sample materials with current taxonomic identification.

Table S13. Radiocarbon laboratory abbreviation, name, and country of operation.

Table S14. Bibliographic information for radiocarbon determinations.

References (57–60)

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.

W. F. Keegan, C. L. Hoffman, The Caribbean Before Columbus (Oxford University Press, 2017).

I. Rouse, Migrations in Prehistory (Yale University Press, 1986).

S. M. Fitzpatrick, M. Kappers, C. M. Giovas, The southward route hypothesis: examining Carriacou's chronological position in Antillean prehistory, in Island Shores and Distant Pasts: Archaeological and Biological Approaches to the Pre-Columbian Settlement of the Caribbean, S. M. Fitzpatrick, A. H. Ross, Eds. (Gainesville, Florida, University Press of Florida, 2010), pp. 163–176.

P. E. Siegel (Ed.), Island Historical Ecology: Socionatural Landscapes of the Eastern and Southern Caribbean (Berghahn Books, 2018).

P. E. Siegel, J. G. Jones, D. M. Pearsall, N. P. Dunning, P. Farrell, N. A. Duncan, J. H. Curtis, Ecosystem engineering during the human occupations of the Lesser Antilles, in Early Settlers of the Insular Caribbean: Dearchaizing the Archaic, C. L. Hofman, A. T. Antczak, Eds. (Leiden, Sidestone Press, 2019), pp. 77–88.

S. M. Wilson, The Prehistory of Nevis, a Small Island in the Lesser Antilles (Yale University Press, 2006).

D. D. Davis, Jolly Beach and the Preceramic Occupation of Antigua, West Indies (Yale University Press, 2000).

L. A. Carlson, thesis, University of Florida (1999).

R. T. Callaghan, Crossing the Guadeloupe passage in the Archaic Age, in Island Shores and Distant Pasts: Archaeological and Biological Approaches to Pre-Columbian Settlement of the Caribbean, S. M. Fitzpatrick, A. H. Ross, Eds. (Gainesville, Florida, University of Florida Press, 2010), pp. 127–147.

R. H. MacArthur, E. O. Wilson, The Theory of Island Biogeography (Princeton University Press, 1967).

W. F. Keegan, J. M. Diamond, Colonization of islands by humans: A biogeographical perspective, in Advances in Archaeological Method and Theory, M. B. Schiffer, Ed. (San Diego, Academic Press, 1987), pp. 49–92, vol. 10.

C. L. Hofman, A. T. Antczak, Eds., Early Settlers of the Insular Caribbean: Dearchaizing the Archaic (Sidestone Press, 2019).

L. A. Carlson, W. F. Keegan, Resource depletion in the prehistoric northern West Indies, in Voyages of Discovery: The Archaeology of Islands, S. M. Fitzpatrick, Ed. (Westport, Connecticut, Praeger, 2004), pp. 85–107.

L. A. Newsom, E. S. Wing, On land and sea: Native American uses of biological resources in the West Indies (University of Alabama Press, 2004).

J. G. Crock, J. Petersen, Inter-island exchange, settlement hierarchy, and a Taíno-related chiefdom on the Anguilla Bank, Northern Lesser Antilles, in Late Ceramic Age Societies in the Eastern Caribbean, A. Delpuech, C. L. Hofman, Eds. (Oxford, British Archaeological Reports, 2004), pp. 139–158.

J. G. Crock, Interisland interaction and the development of chiefdoms in the Eastern Caribbean (University of Pittsburgh, 2000).

J. G. Crock, Archaeological evidence of eastern Taínos: Late Ceramic Age interaction between the Greater Antilles and the northern Lesser Antilles, in Proceedings of the International Congress for Caribbean Archaeology 20, M. C. Tavárez, M. A. García Arévalo, Eds. (Santo Domingo, Departamento de Difusión y Relaciones Públicas del Museo del Hombre Dominicano, 2004), pp. 835–842.

Acknowledgments: We thank the five anonymous reviewers and M. Aldenderfer who provided insightful comments and suggestions that improved the analysis and the manuscript. We also thank the following scholars who provided us with unpublished dates or clarification on published dates: P. Allsworth-Jones, D. Anderson, A. Bain, D. Bates, L. Beckel, D. Bonnissent, A. Bright, M. Buckley, D. Burley, A. Cherkinsky, J. Cherry, R. Colten, I. Conolley, J. Cooper, J. G. Crock, A. Curet, C. Espenshade, A.-M. Faucher, S. Hackenberger, C. Hamann, D. Hamilton, J. Hanna, A. Hastings, V. Harvey, S. P. Horn, M. Kappers, C. Kraan, A. Krus, J. Laffoon, M. Lee, E. Lundberg, Y. N. Storde, J. Oliver, D. Pendergast, W. Pestle, B. Reed, I. Rivera-Collazo, R. Rodríguez-Ramos, M. Roksandic, A. Samson, I. Shearn, P. Sinelli, D. Watters, B. Worthington, the staffs of the University of Arizona Accelerator Mass Spectrometry Laboratory, the Center for Applied Isotope Studies at the University of Georgia, the Leibniz-Labor für Altersbestimmung und Isotopenforschung at Christian-Albrechts-Universität zu Kiel, the SUERC Radiocarbon Dating Laboratory, and the Ångström Laboratory Tandem Laboratory at Uppsala Universitet. J. Miller, A. Poteate, and D. Sailors assisted with the data collection and provided feedback on an early version of the manuscript. A. Anderson, C. Lipo, T. Rieth, T. Dye, and T. Leppard provided valuable comments on earlier drafts. Funding: The authors received no funding for this work. Author contributions: All authors conceived the project. M.F.N., R.J.D., and J.H.S. completed the Bayesian statistical analysis. M.F.N., R.J.D., J.H.S., and S.M.F. wrote the manuscript. R.J.D., S.M.F., M.F.N., and J.H.S. created the figures and tables. M.F.N. prepared the Supplementary Materials, and all authors participated in the data collection and chronometric hygiene analyses. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.

Read more here:
Reevaluating human colonization of the Caribbean using chronometric hygiene and Bayesian modeling - Science Advances

Battle Of The Bulge Remembered – New Haven Independent

Jim Morgia remembers the bitter cold and the three German Tiger tanks his unit destroyed, saving the inhabitants of a nearby town.

Lou Celentano remembers his anti-tank gun firing 125 shells in an hour and the luck of hitting the treads of German tanks, at the front and back of a column, halting an advance.

"I was a little boy then when all this happened," he said, gazing out at his audience, "and how grateful I am to be sitting here listening to good, peaceful people talking."

Those were some of the poignant recollections of the Battle of the Bulge, a final and brutal German offensive of World War Two, whose 75th anniversary was marked by a program of remembrance headlined by Morgia and Celentano, local vets, now in their late 90s, who participated in the battle.

Their standing-room-only audience Monday night, in the downstairs community room of the New Haven Free Public Library Ives Main Branch, comprised many members of Home Haven, one of the city's pioneering aging-in-place organizations, and one of the sponsors of the event.

Their stories, and those related by the children of ex-soldiers no longer with us, including Anton Pritchard and city arts and civic leader Newt Schenck, who spent four months in German prison camps, were formally recorded during the presentations.

They are to be part of the New Haven Story Project, an online repository of tales of New Haven. The library plans to launch the repository in January in partnership with the New Haven Museum, said library staffer Gina Bingham, who is helming the project.

Monday night's program was for World War Two history buffs. They were treated to personal stories of young soldiers trying to survive and writing letters home that conveyed a hyperlocal perspective along with the mysteries of human behavior brought on by crisis.

"It's really a struggle," Pritchard wrote, "between the animal and what's good in men."

Then he related how, in the midst of a German shelling, he passed a burning barn. He noticed three horses inside, still chained to posts. Their hides were already being singed. Pritchard was on a mission. There were more important things to do, he related in the letter, but he stopped and unchained the horses. "Why?" he asked himself. Why that small spontaneous gesture to save three equine lives amidst all the destruction of human lives around him?

Aimlee Laderman related the story of her husband, Ezra Laderman, a 19-year-old Brooklyn-born radio operator with the 69th Infantry Division during the battle. He later went on to become a distinguished American composer and dean of the Yale School of Music.

Rev. Susan Izard, Newt Schenck's daughter, said she had pieced together from her father's letters sent home how he served with a unit surrounded by German troops on Dec. 19, three days after the Battle of the Bulge had begun. Part of a group of young men who had been rushed through Yale so they could be commissioned and serve, he was sent to an officers' prison camp.

He lost 60 pounds and barely survived, but when he was freed and the war was over, he wrote home, "Darling mother . . . liberation is like being reborn."

In the question and answer period that followed the presentations, someone asked Sgt. Celentano (the rank he had risen to) what it was like to come home. He said he was anxious to be discharged but he was a young single guy, and discharges came first to married men. If you had three children, you were discharged before guys with two children. Finally, the unmarried guys.

Then, as he lifted up an artillery shell casing to show the audience, he said, with both candor and a sense of mystery, "I was a little boy then, when all this happened. I managed to forget it. This piece of artillery was in the Battle of the Bulge."

Celentano said he could no longer remember even its caliber. He said he wasn't even sure of the precise reason he had picked it up and lugged it with his gear across Europe and across the Atlantic.

"I guess I knew I was going to be talking to you, so I brought this home."


How about recollections of the Black 761st tank battalion, called the Black Panthers?

The Original Black Panthers Fought in the 761st Tank Battalion During WWII: These African American heroes battled the Nazis but were still second-class citizens in their home country.

In October of 1944, the 761st became the first African American tank battalion to see combat in World War II. And, by the end of the war, the Black Panthers had fought their way further east than nearly every other unit from the United States, receiving 391 decorations for heroism. They fought in France and Belgium, and were one of the first American battalions to meet the Russian Army in Austria. They also broke through Nazi Germany's Siegfried line, allowing General George S. Patton's troops to enter Germany. During the war, the 761st participated in four major Allied campaigns including the Battle of the Bulge.

https://www.history.com/news/761st-tank-battalion-black-panthers-liberators-battle-of-the-bulge

See more here:
Battle Of The Bulge Remembered - New Haven Independent

Remarks by DOJ’s Antitrust Division Head Signal Intensified Scrutiny by US Antitrust Enforcers of Digital Markets and Use of Aggregated Data -…

Recent events and commentary have signaled a broadening of government antitrust scrutiny of the use of aggregated data in digital markets. While the DOJ, FTC, and virtually all state attorneys general are engaged in highly publicized investigations of several Big Tech companies, there have been indications that enforcers also have set their sights on other industries where customer data plays an important competitive role, including recent remarks by Makan Delrahim, Assistant Attorney General for the DOJ's Antitrust Division, at a November 8, 2019 conference at Harvard Law School.

During his discussion of industries where anticompetitive abuse of customer data might occur, Delrahim referred to digital transportation apps, food and restaurant recommendation apps, and the use of image-posting apps in connection with product promotion:

Need a ride? Your current location data can help get a driver to you within minutes. Looking for a new outfit? A recently pinned image can help suggest new staples for that evolving wardrobe. Looking for a place to dine? You get the picture. . . . The aggregation of large quantities of data can [] create avenues for abuse. . . . Such data, for example, can provide windows into the most intimate aspects of human choice and behavior, including personal health, emotional well-being, civic engagement, and financial fitness. It is becoming increasingly apparent that this uniquely personal aspect of consumer data is what makes it commercially valuable, especially for companies that are in the business of directly or indirectly selling predictions about human behavior.

Delrahim noted that many companies with business models premised on collecting and monetizing data, especially companies providing digital services at zero price to consumers, have escaped antitrust scrutiny thus far in the United States, as enforcers probing for anticompetitive effects traditionally have looked for higher-than-competitive prices and high market shares based on sales figures.

Significantly, Delrahim stated that it would be a grave mistake for antitrust assessment of digital markets going forward to focus solely on those traditional indicators of competitive harm. Rather, he said, to assess competitive harms in the digital marketplace, it is necessary to understand first that data itself is part of the price being paid by consumers, and that when a company's market dominance leaves consumers little choice but to turn over their personal data to obtain a service, that in itself could constitute anticompetitive harm in the form of reduced quality and consumer choice. As Delrahim explained:

[D]ata has economic value and some observers have said it is analogous to a new currency. . . . [F]irms can induce users to give up data by offering privacy protections and other measures to increase consumer confidence in the bargain. . . . We can, however, assess market conditions that enable dominant companies to degrade consumer bargaining power over their data. . . . [I]t would be a grave mistake to believe that privacy concerns can never play a role in antitrust analysis. . . . [S]ome consumers appear to hold a revealed preference for privacy. . . . The goal of antitrust law is to ensure that firms compete through superior pricing, innovation, or quality. . . . Price is therefore only one dimension of competition, and non-price factors like innovation and quality are especially important in zero-price markets. Like other features that make a service appealing to a particular consumer, privacy is an important dimension of quality. For example, robust competition can spur companies to offer more or better privacy protections. Without competition, a dominant firm can more easily reduce quality, such as by decreasing privacy protections, without losing a significant number of users. . . . [T]hese non-price dimensions of competition deserve our attention and renewed focus in the digital marketplace.

Delrahim's stated view of customer data as potentially part of the consideration paid by the consumer, if adopted by the courts, would represent a sea change in how modern U.S. antitrust law is applied. Organizations seeking to assess their potential antitrust liability will face novel and difficult questions of how to account for data in determining their market share and the competitive effects of their business practices, which will require careful legal analysis.

In that regard, Delrahim quoted an OECD report concluding that in markets where zero prices are observed, market power is better measured by shares of control over data than by shares of sales or any other traditional measures. European competition enforcers have made similar statements about how to assess digital market power, with Germany's competition authority recently concluding that [t]oday data are a decisive factor in competition and can be the essential factor for establishing the company's dominant position, since the attractiveness and value of the advertising spaces increase with the amount and detail of user data.

While Delrahim couched his discussion of consumer harms from data misuse in terms of economic injuries recognized under federal antitrust law, such as loss of quality and consumer choice, his strongly stated view that values like privacy and other non-price dimensions of competition must be taken into account nevertheless represents a shift from how enforcers and courts have analyzed anticompetitive harms in recent decades, as they have tended to focus on objective competitive measures such as price and output levels.

Deputy Attorney General Jeffrey Rosen likewise signaled a broadening of antitrust enforcers' focus beyond price and output levels in remarks to the ABA on November 18, 2019, quoting Justice Black's opinion in Northern Pacific Railway v. United States, a decision from 1958, in an era when courts still often held that the legitimate concerns of the Sherman Act extended beyond purely economic harms and benefits:

The Sherman Act was designed to be a comprehensive charter of economic liberty aimed at preserving free and unfettered competition as the rule of trade. It rests on the premise that the unrestrained interaction of competitive forces will yield the best allocation of our economic resources, the lowest prices, the highest quality, and the greatest material progress, while at the same time providing an environment conducive to the preservation of our democratic political and social institutions.[1]

Compounding the growing antitrust risks for companies reliant on aggregated data is the fact that there appears to be a bipartisan desire in Washington for stronger data-related antitrust enforcement. In September 2019, the House Judiciary Committee issued document requests demanding emails and other records from some of the [technology and data] industry's top chief executives as they look for evidence of anticompetitive behavior, as the Wall Street Journal recently reported. In October 2019, several Democratic and Republican U.S. Senators introduced legislation that would requir[e] social media giants to give consumers ways to move their personal data to another platform at any time, in order to loosen the grip social media platforms have on their consumers through the long-term collection and storage of their data and give rival platforms a chance at competing, declaring that [c]onsumers should have the flexibility to choose new online platforms without artificial barriers to entry.

While assessing an organization's risk exposure from this apparent shift in antitrust policy would require an individualized legal analysis, U.S. antitrust enforcers have hinted at the types of businesses they currently are focused on. For example, Delrahim alluded to zero-price digital services reliant on consumers submitting personal data, in markets where a new entrant often cannot compete successfully . . . because it lacks access to the same volume and type of data, and he referred specifically to digital apps involving transportation, restaurant recommendation, and image posting. The types of conduct European competition enforcers have challenged in recent years, which we reviewed previously, also may provide insight into the business activities that U.S. enforcers are probing. Notably, some firms facing government scrutiny of planned acquisitions raising data issues have capitalized on the ongoing investigations of top technology companies, persuading regulators that their planned transactions will enable them to compete more effectively against that handful of top technology companies.

It is not clear how courts will resolve the difficult new questions of antitrust law raised by data-driven markets. But what is clear is that the U.S. antitrust enforcement landscape is changing quickly, resulting in significant risks and uncertainties for companies in the digital marketplace, particularly those whose business models are reliant on aggregated customer data. It likely will be prudent for such companies to develop legal strategies for mitigating those risks while this enforcement activity is still in its early stages.

Read the original:
Remarks by DOJ's Antitrust Division Head Signal Intensified Scrutiny by US Antitrust Enforcers of Digital Markets and Use of Aggregated Data -...