Category Archives: Human Behavior

Urban growth and the emergent statistics of cities – Science Advances

INTRODUCTION

Classical approaches to urban theory, in economic geography (1–3) and, more recently, in complex systems (4), often treat cities as spatial equilibria, where a balance of benefits and costs is achieved out of a set of social and economic exchanges, including wages, land rents, and transportation costs (1, 3–5). While these modeling approaches have proven powerful for generating quantitative predictions in agreement with many observed properties of cities (3–5), they leave unresolved two fundamental problems: the problem of statistics and the problem of growth.

Both growth and statistics denote a broad set of phenomena that must be unpacked so that we can fully appreciate what is at stake. By statistics, we mean that in dealing with real cities, we must appreciate the wide variation between individuals and places. This variation has positive manifestations in that cities are extremely diverse in terms of the types of people and lifestyles they support, including a broad set of coexisting cultures, professions, languages, races, and ethnicities (6–9). This interdependent functional diversity is what J. Jacobs famously called "organized complexity" and is at the heart of "the kind of problem a city is" (10). Negative expressions of these same heterogeneities are also familiar, such as ethnic, racial, and economic segregation (11, 12), inequality, and variable access to justice and opportunity. Moreover, it has been observed that these differences between places and people within each city are persistent over time (9, 12) and do not have the fleeting character of noisy fluctuations in statistical physics. Instead, they can pile up over time and lead to patterns of cumulative advantage and disadvantage (9, 13), which are at the root of most challenges of human development. Thus, the problem of statistics in cities deals not only with the existence of structural differences in how the same quantity is distributed across different people and places but also with the temporal persistence and amplification of these effects.

By growth, we mean that (modern) cities are characterized by fast, typically exponential change across many variables. On the one hand, modern cities tend to experience annual population growth rates between a fraction of 1% and 3 to 4%, as we shall see below. Exceptions exist at either end, at least over some periods of time, as different places experience contextually specific factors. However, the principal change in modern cities is the fast pace of their economic growth and technological transformations. Across the world today, we observe rates of urban economic growth that are typically larger than those in their corresponding populations, reaching in some cases 10% a year, with 2 to 4% being typical (14). These growth rates mean that the size of a city's economy doubles every few decades, making it possible to transition from poverty to wealth in one or two generations, as has happened in many places over the last century. With such fast growth rates at play, how is it tenable to model cities as spatial equilibria? Even more importantly, how do different resource growth rates, experienced by different households, neighborhoods, or cities, shape the heterogeneity (inequality) of outcomes for different people? Why are cities not torn apart by differential growth more often?

It turns out that these two problems, of statistics and of growth, are intimately connected and must be tackled together. This is a twist on classical statistical mechanics linking the strength of fluctuations to dissipation around equilibrium (15), which, in the context of exponential growth processes in populations, takes a character that we may call fluctuation-amplification, typical of evolutionary processes (16). The literature of complex systems applied to urban growth, especially from the perspective of geography, has demonstrated the importance of this type of stochastic nonlinear dynamics with strong feedbacks for a number of decades (17, 18).

To continue to make progress on these issues, an analytical approach is necessary that identifies and articulates the essential joint mechanics of scaling, growth, and statistics in cities. Here, we show how this synthesis can be constructed and illustrate its theoretical and empirical implications using stochastic simulations and a particularly long time series of wages and populations of U.S. metropolitan statistical areas (MSAs) covering nearly five decades. The central insight is the realization that the budget constraint used to define functional cities (1–4) is not static or homogeneous across agents but must be managed adaptively to promote growth and avoid instability. This is because what is being controlled are stochastic net resource flows over time, as incomes minus costs, not static quantities such as forces. It is precisely this fluctuating difference that accumulates to drive resource growth for each agent and for cities as populations of agents, with implications for both aggregate growth and inequality.

The manuscript is organized following the scheme of Fig. 1. We first show how scaling analysis isolates the parameters that control urban growth, providing a number of quantitative targets for explanation and prediction. We then introduce standard theory for stochastic geometric growth and outline its properties. This allows us to establish the connection between well-known models of cities as spatial equilibria and processes of stochastic growth for individual agents. This connection disaggregates a city-wide budget condition to the level of agents and requires that they act strategically in their own self-interest to control fluctuations in net incomes, which is naturally handled via adaptive temporal averaging of expenditures. This is the central assumption of this manuscript, which leads to the expectation that net income volatilities are kept finite and small and that temporal averages of incomes and expenditures become statistically dependent. A number of results follow from standard limit theorems: The statistics of resources, incomes, and costs at the agent and group level become asymptotically lognormal, even as these quantities grow exponentially. The self-similarity of growth processes across group sizes also emerges, defining running couplings characterizing the mean growth rate for populations and its associated volatility. Explicitly computing these quantities allows us to identify the circumstances when urban scaling is conserved by the system dynamics. Conversely, we show when these conditions are violated, creating corrections to mean-field scaling exponents when volatilities are scale dependent but small and the breakdown of scaling if they become large. These procedures are illustrated using data on wages for U.S. metropolitan areas. We finish with a discussion of the significance of the results toward a general statistical dynamics of cities and its relation to analogous dynamics of resource flows in other complex systems.

Basic assumptions are shown as blue boxes, and derived results are shown as red boxes; arrows indicate outcomes, while dashed lines represent alternative scenarios. The budget condition, y - c, is the common basic assumption for urban agents, generalizing energy conservation in simpler systems. Recognizing its dynamical, stochastic nature leads to the central assumption of the manuscript that agents must actively control its associated volatility; the simplest way to do this is through the time averaging of expenditures (consumption smoothing). Then, the resource growth rate volatility, σ², becomes finite and small, both at the individual and group levels. This leads to emergent stochastic geometric dynamics of resources both at the individual and population levels, with exponential growth and lognormal statistics observable at long times (right). Averaging over populations derives the growth rate statistics for cities, which determines when dynamics become self-similar across scales and preserve urban scaling (left): First, if under group averaging variations of the growth rate (η) are correlated to those in agent resources (r), inequality will change within the population. Second, if effective growth rates are independent of population size, the dynamics becomes self-similar and urban scaling is preserved over time. Alternatively, if the growth rate volatility (σ²) is population size dependent, corrections to mean-field exponents result: They are calculable via B ≠ 0 and are controlled by the volatility's magnitude. For large σ²(N), the statistics become dominated by fluctuations, and urban scaling breaks down. The existence of strong group volatilities contradicts the assumption of effective control at lower levels. This regime would be unstable, signaling the loss of control over resource flows for most of the population and entailing widespread crises and eventual collapse. See text for detailed notation.

Scaling analysis provides a simple and straightforward way to characterize urban quantities (19) and extract average agglomeration effects at play across cities of different sizes in an urban system. This section also shows that scaling analysis provides an efficient parameterization of growth processes, different from growth accounting in economics (20).

The starting point is to write an extensive positive quantity, Y_i(t), here the total wages paid in city i over a time period t, as

Y_i(N_i, t) = Y_0(t) N_i(t)^β e^{ξ_i(t)}   (1)

where Y_0(t) is time dependent but independent of city population size N_i(t), which also changes over time due to standard demographic processes. The parameter β is the scaling exponent, and the quantity ξ_i(t) is the time-dependent deviation from the average scaling prediction for city i. This expression is exact because any deviation from the average scaling relation, Y_i(N_i, t) = Y_0(t) N_i(t)^β, is absorbed in the residuals ξ_i(t) (see section S1).

Averaging over cities can now be used to isolate a few quantities of interest. To do this, let us first take the logarithm of Eq. 1

ln Y_i(N_i, t) = ln Y_0(t) + β ln N_i(t) + ξ_i(t)   (2)

followed by the average over cities, ⟨ln Y⟩(t) = ln Y_0(t) + β⟨ln N⟩(t). This ensemble average is defined explicitly by ⟨ln Y⟩(t) = (1/N_c) Σ_{i=1}^{N_c} ln Y_i(N_i, t) and ⟨ln N⟩(t) = (1/N_c) Σ_{i=1}^{N_c} ln N_i(t), where N_c is the total number of cities in the urban system, such as the United States. We will refer to the quantities ⟨ln Y⟩(t) and ⟨ln N⟩(t) as centers (21), which are collective coordinates tracking the temporal motion of the entire system of cities (yellow symbols in Fig. 2, A and B), analogous to center-of-mass coordinates in many-body physics.

(A) Total wages for U.S. metropolitan areas, 1969–2016. Each circle is a city in a given year, from blue (1969) to brown (2015). Yellow squares show the urban system's centers (⟨ln N⟩, ⟨ln Y⟩), which account for collective economic and population growth (movement upward and to the right). Urban scaling relations for each year (black lines) are derived through the consideration of a short-term spatial equilibrium (inset), which changes on a very slow time scale. (B) Centered data, obtained from (A) by removing the centers' motion (inset). This allows the decomposition of temporal change into two separate processes: collective growth (centers' motion) and deviations from scaling, ξ_i(t), characteristic of each city i. We see that scaling with a common exponent (global fit β = 1.114, 95% confidence interval = [1.111, 1.117], R² = 0.935) is preserved over time, and net growth is a property of the urban system and not of individual cities. (C) The statistics of deviations, obtained from the residuals of the centered scaling fit of (B). While the distribution is well localized and symmetrical, it is not very well fit by a normal distribution (blue line). Instead, the red dashed line, which follows from theory developed in the paper, produces a much better account of the data. (D) The deviations ξ_i(t) of a few selected cities: Silicon Valley (San Jose–Santa Clara MSA) and Boulder, CO show two of the more exceptional trajectories in wage gains for their city sizes, whereas Las Vegas, NV and Havasu, AZ illustrate wage losses. New York City, Los Angeles, and the exceptionally poor McAllen, TX show no relative change in their positions over nearly 50 years.

By definition, the ensemble average of the deviations is zero, ⟨ξ⟩(t) = 0, so that the second (variance) and higher moments of the ξ become the leading quantities of interest. From these expressions, we can write the deviations ξ_i(t) as

ξ_i(t) = ln[Y_i(N_i, t) / (Y_0(t) N_i(t)^β)] = [ln Y_i(N_i, t) - ⟨ln Y⟩(t)] - β[ln N_i(t) - ⟨ln N⟩(t)]   (3)

The first expression is the most common interpretation of the ξ_i as (multiplicative) residuals from the scaling relation, whereas the second makes their status as deviations from the collective coordinates (centers) explicit. For these reasons, the ξ_i(t) provide a city size-independent measure of city performance and are also known as scale-adjusted metropolitan indicators (SAMIs) (22). Characterizing these three quantities, the two centers and the statistics of the ξ, gives a complete description of growth and deviations in a system of cities and separates collective effects from idiosyncratic events in each city.
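As a concrete illustration of this decomposition, the short sketch below (Python; the panel, column names, and numbers are hypothetical and purely illustrative, not the paper's dataset) computes the yearly centers, a pooled scaling exponent, and the SAMIs of Eq. 3 from a table of city populations and wages.

```python
import numpy as np
import pandas as pd

# Hypothetical panel: one row per city and year, with population N and total wages Y.
# Column names and values are illustrative assumptions, not the paper's data schema.
df = pd.DataFrame({
    "city": ["A", "B", "C", "A", "B", "C"],
    "year": [1969, 1969, 1969, 1970, 1970, 1970],
    "N":    [1.0e5, 1.0e6, 5.0e6, 1.02e5, 1.01e6, 5.1e6],
    "Y":    [3.0e8, 4.0e9, 2.5e10, 3.2e8, 4.3e9, 2.7e10],
})

df["lnN"] = np.log(df["N"])
df["lnY"] = np.log(df["Y"])

# Centers: per-year ensemble means of ln N and ln Y (Eq. 2), tracking collective growth.
centers = df.groupby("year")[["lnN", "lnY"]].mean().rename(columns=lambda c: c + "_center")
df = df.join(centers, on="year")

# Centered coordinates remove the collective motion (as in Fig. 2B).
df["x"] = df["lnN"] - df["lnN_center"]
df["y"] = df["lnY"] - df["lnY_center"]

# Pooled scaling exponent beta: least-squares fit through the origin of the centered data.
beta = np.sum(df["x"] * df["y"]) / np.sum(df["x"] ** 2)

# Scale-adjusted metropolitan indicators (SAMIs), Eq. 3: deviations from scaling.
df["xi"] = df["y"] - beta * df["x"]

print(f"estimated beta = {beta:.3f}")
print(df[["city", "year", "xi"]])
```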

Figure 2 illustrates the meaning of these quantities. Figure 2A shows the total wages, Y_i(t), for U.S. MSAs between 1969 and 2016 (47 years), year by year (colors light blue to brown). The growth trajectory of some specific cities, such as New York City, Los Angeles, Chicago, or Silicon Valley (San Jose–Santa Clara MSA), is easily visualized in this way. The solid lines show the scaling relation for each year (see caption for details). We see how scaling is a good fit to the data each year, reproducing a slowly shifting spatial equilibrium in each instance (inset) (4). We also see how the position of the center (yellow squares) moves from year to year, reflecting the overall growth in population (shifts to the right) and especially in wages (movement upward). These results include both real and nominal growth of wages due to inflation.

Figure 2B shows the result of removing collective growth by moving all data clouds so that their centers coincide at the origin (0,0) (21). We see that removing the centers' motion (inset) results in the reproduction of the same scaling pattern at each time, with additional small and slowly moving deviations, ξ_i(t), changing only slightly from year to year. Figure 2C shows the histogram of these deviations (gray) about the overall best fit scaling relation (Fig. 2B), pooled across all years. We observe that the statistical distribution is well localized and symmetrical about the origin, but that it is not very well fit by a normal distribution (blue line). A better fit is provided by another model (dashed red line), which will be derived below. Last, in Fig. 2D, we see the change in ξ_i(t) over time for some of the most extreme trajectories. Specifically, some cities became substantially richer in relative terms over this period (Silicon Valley and Boulder), some experienced loss in economic status (Las Vegas, NV and Havasu, AZ), and a few others, such as New York, Los Angeles, and McAllen, TX (one of the worst performers, by this measure), have not changed much. These trajectories also show how slow the change in the ξ's is. Particular events that affect different cities at specific times are easily identifiable, such as the dot-com economic boom and bust around 2000, specifically for Silicon Valley and Boulder.

We now seek to connect the macroscopic statistics of entire cities to a microscopic general model of single agents' behavior. We note that the budget condition, which is taken as the starting point of an important set of urban theories (1–4), is a version of the fundamental law of energy conservation. Hence, it must apply to every single agent and to cities as collections of agents.

To see this, let us start by introducing a variable, r(t), denoting the accumulation of the net quantity of y(t) over time t. For example, if y stands for an agent's income, then r becomes its monetary wealth, but we should think of r more generally as resources that can be grown over time and used in turn (reinvested) to generate more y and so on. An important noneconomic example, which applies to premodern human societies and other biological populations, is when r is stored energy and y is an energy income per unit time. We write the dynamics of r, given y, as

dr(t)/dt = y(t) - c(t) = η_r(t) r(t)   (4)

where η_r is the stochastic growth rate of resources. The first equality in Eq. 4 is pure accounting, stating that resources grow by the difference between income and costs, c (i.e., net income), over some time interval, dt. Costs in cities are local and include real estate rents, transportation, and consumption, as well as others, such as health care and losses resulting from crime or poor urban services. In this sense, costs and benefits are also affected by migration decisions about where to live and work. The centerpiece of this equation is the difference between income and costs, y(t) - c(t), which must be balanced by all agents in their specific environments. For urban agents, this difference is the budget condition for the spatial equilibrium that defines a city according to the Alonso model of economic geography and urban scaling theory (1, 3, 4). In the original version, this difference is typically set to zero, although the meaning of incomes and costs is rather flexible and can include savings (23). In urban scaling theory, this difference is nonzero in general (4) and becomes the target of maximization through the self-consistency of infrastructural and social network properties. This implies that a positive difference between incomes and transportation costs is necessary for cities to exist (Fig. 2A, inset) and to generate exponential resource growth. It also implies that the scaling of resources, incomes, and costs has the same population size dependence, characterized by a single common exponent β > 1 for all these quantities.

The second equality in Eq. 4 is a definition of the growth rate η_r, implying that η_r(t) ≡ y(t)/r(t) - c(t)/r(t). Equation 4 is not an arbitrary modeling choice: It is the standard starting point for modeling population growth and human behavior where time, effort, or resources are invested strategically. Among many examples, it is the standard model for city population growth (24), the standard model of financial mathematics and asset pricing (25, 26), and the one-good wealth accumulation model, which is the basic tool in economics to analyze dynamical issues of wealth inequality (23). Stochastic proportional growth and resulting lognormal (and power law) distributions are associated with many forms of human behavior, including the statistics for the time to complete a task, epidemic dynamics, demography, and even the statistics of marriage age [see (27) for a review].

Let us see how this model works in practice. Because the equation is nonlinear (the stochastic term is multiplicative), we have to be careful and use the rules of stochastic (Itô) calculus to integrate its solution in time, leading to

ln[r(t)/r(0)] = (⟨η_r⟩ - σ_r²/2) t + ζ(t)   (5)

where ⟨η_r⟩ and σ_r² are the mean and variance of η_r, respectively, in the usual sense of those obtained over the probability density of η_r. Now, let us define the average effective growth rate γ_r = ⟨η_r⟩ - σ_r²/2. This quantity is fundamental in geometric random growth models and will recur in the discussion below. Keeping track of physical dimensions tells us that ⟨η_r⟩ and γ_r are temporal rates and have dimensions of (time)^-1. Thus, the standard deviation (SD) σ_r (known as the volatility) has dimensions of (time)^-1/2. The stochastic noise ζ(t) is the sum over the integration time, t, or more explicitly, ζ(t) = Σ_{l=1}^{t} δη_r(t_l), with δη_r(t_l) = η_r(t_l) - ⟨η_r⟩. This is a random variable with zero mean. Because it is the sum of stochastic variables, we expect ζ(t) to express universal behavior as a consequence of the central limit theorem (25). In the simplest case, where η_r is statistically independent across time with finite variance, we obtain that ζ = σ_r W(t), where W(t) is a Wiener process, so that ζ is a normal variable with zero mean and variance σ²(t) = σ_r² t. This will later define the property of ergodicity for stochastic growth, which means that for long-time averages of growth rates, the mean dominates the variance. This is not to be confused with the more general property with the same name in statistical physics, which means that all allowable states of a dynamical system will be sampled subject to constraints. The point of the present paper is to show that path dependence for particular cities can coexist with simple emergent statistics for the ensemble of cities.

A number of key general results follow from this solution and associated limiting theorems. First, (i) the central limit behavior of ζ implies that ln[r(t)/r(0)] approaches, in the same limit of long times, a Gaussian variable with time-dependent mean γ_r t and variance σ_r² t. This implies, in turn, that (ii) r(t) is asymptotically distributed as a lognormal variable, a result that will become important later. In turn, (iii) the temporal mean growth rate (1/t) ln[r(t)/r(0)] = γ_r + ζ(t)/t → γ_r, for long times, as a result of the behavior of ζ(t)/t ~ t^-1/2 σ_r → 0. Last, (iv) the characteristic time, t* = σ_r²/(⟨η_r⟩ - σ_r²/2)², marks the interval necessary for net exponential growth to become apparent over the (shorter-term) effect of fluctuations, which average out for longer times.
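A minimal simulation, with illustrative parameter values, can be used to check these properties numerically: trajectories grow at the effective rate γ_r = ⟨η_r⟩ - σ_r²/2, their logarithm is Gaussian with variance σ_r² t (so r is lognormal), and net growth becomes apparent on times longer than t*. This is a sketch under assumed parameters, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

eta_mean, sigma = 0.03, 0.10               # illustrative mean growth rate and volatility (per year)
gamma = eta_mean - sigma**2 / 2            # effective growth rate, gamma_r
t_star = sigma**2 / gamma**2               # time needed for growth to emerge from fluctuations
T, n_agents = 200, 10_000                  # years simulated, number of trajectories

# Integrate the Ito solution of Eq. 4 directly: ln r(t) = gamma*t + sigma*W(t), with dt = 1 year.
dW = rng.normal(0.0, 1.0, size=(n_agents, T))    # annual standardized noise increments
ln_r = np.cumsum(gamma + sigma * dW, axis=1)     # ln[r(t)/r(0)] for each agent

# (iii) the temporal mean growth rate converges to gamma at long times
print("gamma =", round(gamma, 4), " mean of ln r(T)/T =", round(ln_r[:, -1].mean() / T, 4))
# (iv) characteristic time for net growth to dominate fluctuations
print("t* =", round(t_star, 1), "years")
# (i)-(ii) asymptotic lognormality: ln r(T) is Gaussian with mean gamma*T and variance sigma^2*T
print("ln r(T): mean =", round(ln_r[:, -1].mean(), 2), " expected", round(gamma * T, 2))
print("ln r(T): var  =", round(ln_r[:, -1].var(), 2),  " expected", round(sigma**2 * T, 2))
```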

These properties are illustrated in Fig. 3 (A to C), obtained from numerical simulations of Eq. 4, with η_r taken as Gaussian white noise. The asymptotic behavior of all quantities depends on whether the effective growth rate is positive, γ_r > 0, or equivalently ⟨η_r⟩ > σ_r²/2 (Fig. 3, A and C). When this condition holds, there is net growth (Fig. 3A, blue and orange trajectories). Growth of r becomes apparent on a time scale longer than t* (Fig. 3A), which can be very short when the volatility is small. In this regime, the distribution narrows on the scale of the mean as t becomes larger, and predictable exponential growth emerges. However, ⟨η_r⟩ > 0 is not sufficient to guarantee long-term growth; instead, a finite threshold ⟨η_r⟩ > σ_r²/2 must be overcome. Approaching this threshold from a regime with growth, an agent will experience wild fluctuations as t* goes to infinity (Fig. 3C, inset) and will struggle to tell whether growth persists and to estimate its time scale to plan. As a consequence, low volatility and positive average rates are necessary for sustained growth (Fig. 3C). Given these results for individual agents, it becomes critical to establish the conditions for these dynamics to apply also for populations, such as for entire cities (Fig. 3D), and to determine how corresponding growth rates change across scales.

(A) Example of growth trajectories for a simple process of geometric Brownian motion (Eq. 4). The blue trajectory shows typical growth with small fluctuations and positive effective growth rate, the orange line shows a similar situation with larger fluctuations, and the green line shows a trajectory with critical γ_r = 0. The purple and red lines illustrate negative effective growth rate trajectories. The critical growth time, t*, is shown for growing trajectories. (B) An ensemble of trajectories with stochastic growth rates similar to those of U.S. MSAs, starting with the same initial conditions. The yellow line shows the temporal trajectory of the ensemble average, and the black lines show the 95% confidence interval. Note that both the mean and the SD are time dependent (see text). The inset shows the resource distribution at a later time, which becomes asymptotically lognormal (red line). (C) The general properties of stochastic growth imply that a positive growth rate is necessary to overcome temporal decay due to rate fluctuations. If volatility increases, growth will ultimately stop, and decay will ensue. The critical point γ_r = ⟨η_r⟩ - σ_r²/2 = 0 is characterized by large fluctuations with a diverging t* so that agents will not be able to tell whether they are experiencing growth and may be unable to exert effective control (see text). (D) Under general conditions, multiplicative random growth can be self-similar across group sizes, providing a simple theory that applies at all scales, from individual agents to populations and cities (see section S3) (29). However, the key parameters of the theory run across scales and are in general sensitive to group size, temporal averaging, and inequality. These dependences define urban scaling as a dynamical statistical theory beyond the mean-field approximation.

From the general properties of stochastic growth processes, we can conclude that any agent seeking growth must aim at a positive mean growth rate and small volatility. The conundrum is that the volatility and the mean growth rate are, to a large extent, properties of the environment, outside the agent's control. What is under the agent's control, however, are his/her own actions, which we show next can adapt to extrinsic circumstances via processes local in time so as to produce low volatility and stable growth.

It is important to realize that, besides levels of population aggregation, there is also a hierarchy of time scales involved in the process of balancing costs and benefits and observing growth (Fig. 3D). Over the very short term, there will be moments when the agent is resource flow negative, e.g., while shopping. However, judicious choices over time should result in more even positive net flow over the longer term, integrating together periods when incomes are larger than costs (at work) and vice versa (at home, socializing, etc.). This process of balancing costs and benefits over time is necessary in dissipative complex systems because there are always resources lost in any activity or exchange. Balancing costs and benefits over time creates strong correlations between y, c, and r and results in ratios, y/r and c/r, that can become independent of the level of wealth, as we show next.

Consider the basic accounting (Eq. 4) for a single agent, y(t) - c(t) = η_r(t) r(t). As we have seen, dividing by r(t) > 0 gives us the definition of the growth rate η_r(t). Defining the two resulting ratios as b(t) ≡ y(t)/r(t) and a(t) ≡ c(t)/r(t) and averaging over time leads to

(1/t) ∫_0^t dt' [b(t') - a(t')] = ⟨b⟩ - ⟨a⟩ + (1/t) ∫_0^t dt' (η_r(t') - ⟨η_r⟩) → ⟨b⟩ - ⟨a⟩ = ⟨η_r⟩   (6)

This means, in general, that we can also define η_r(t) = ⟨η_r⟩ + δη_r(t), where δη_r(t) is the error (or fluctuation) away from the growth rate's temporal mean, such that (1/t) ∫_0^t dt' δη_r(t') → 0, as we have seen for ζ(t) in the previous section.

What kind of process sets the statistical properties of these fluctuations? On a short-term basis, fluctuations will be large if a and b vary strongly and independently of each other. Then, the amplitude of δη_r will be large over some period of time and, if negative, may deplete stored resources (r → 0), placing the agent at risk of death or bankruptcy. Thus, it is in the vital self-interest of the agent to act so as to minimize, or at least control, fluctuations.

How is this to be achieved? The point is that the variations in expenditures, a(t), should not just be seen as passive costs but rather as strategic dynamical investments under the agent's control. Conversely, the returns on this investment, b(t), are stochastic and will always fluctuate because of environmental factors (Fig. 4A). Thus, a(t) should be chosen to generate a target growth rate and reduce fluctuations, in other words, to achieve stable and predictable growth (Fig. 4, B and C).

(A) Example trajectories for the income-to-resources and costs-to-resources ratios, b(t) (red) and a(t) (blue), respectively. Note that when income is larger than costs, there can be growth, but fluctuations need to be controlled. (B) Control scheme to deliver an average growth rate and tame fluctuations δη_r(t). Costs a(t) become a control variable that, in part, adapts to environmental fluctuations to generate η_r(t) with small, known variance. (C) The dynamics of the resulting error δη_r(t) is now centered around zero and (D) displays a Gaussian distribution (red line) with variance given by the ratio of the environmental variance to control parameters (see text). In this way, adaptive agents' behavior can lead to predictable growth in stochastic environments with a chosen variance.

To demonstrate how this can be achieved, we write the returns as b(t) = ⟨b⟩ + v(t) and the investment as a(t) = ⟨a⟩ + u(t). Here, v(t) are (stochastic) variations in returns, whereas u(t) will play the role of a control variable adjusted by the agent. This leads to

⟨b⟩ - ⟨a⟩ + v(t) - u(t) = ⟨η_r⟩ + δη_r(t), so that δη_r(t) = v(t) - u(t)   (7)

We must now specify how control is implemented to tame the errors. Most general practical controllers are in the Proportional-Integral-Derivative (PID) class (28), which specifies u(t) as a function of the error, δη_r(t), as

u(t) = k_P δη_r(t) + k_I ∫_0^t δη_r(t') dt' + k_D dδη_r/dt   (8)

where k_P, k_I, and k_D are constants (in time) to be chosen by the agent. These three terms allow for different kinds of strategy to reduce fluctuations: k_P is the magnitude of an instantaneous response against the fluctuation, k_I refers to averaging of the error over time (known as smoothing, because averaged errors are smaller and converge to zero), and k_D describes a corrective reaction in the direction of the temporal change in the error. Of these, only smoothing by time integration will prove essential. Note also that u(t) is a simple quantity that can be updated locally in time via the current observed error, δη_r(t), and its addition and subtraction to the integral and difference, which requires remembering only two numbers. The stochastic dynamics of the errors is best captured via the derivative of Eq. 8

du/dt = k_P dδη_r/dt + k_I δη_r(t) + k_D d²δη_r/dt², so that k_D d²δη_r/dt² + (k_P + 1) dδη_r/dt + k_I δη_r = dv/dt   (9)

This equation for the error describes a simple driven oscillator: It is familiar from stochastic calculus when we take dv/dt to be white noise with variance σ_v². The solution is provided in section S2, showing that δη_r converges to a normal distribution with zero mean and variance σ_r² = σ_v²/[2(k_P + 1) k_I] (see Fig. 4, C and D). Making k_I larger has the double effect of accelerating the temporal convergence to a time-independent distribution and narrowing the error variance. The other parameters, k_P and k_D, can be set to zero, leading to very simple control based on the temporal averaging of the fluctuations. The effect of the environmental variance σ_v² is simply to increase the error variance proportionally. Thus, the control process effectively filters out environmental shocks and makes the net-income variance smaller as a function of parameters chosen by the agent. This is a very simple general mechanism that allows agents to cope with environmental uncertainty and generate stable growth by adjusting their expenditures over time. Much more sophisticated strategies are possible that can maximize growth rates if more of the structure of returns, b, is known (28).
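The following sketch simulates the pure smoothing case (k_P = k_D = 0) of Eq. 9 with illustrative parameter values, checking that the stationary error variance approaches σ_v²/[2(k_P + 1)k_I]; it is an assumption-laden illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

k_P, k_I, k_D = 0.0, 0.5, 0.0       # pure smoothing: only the integral (consumption-smoothing) gain
sigma_v = 0.2                        # illustrative environmental noise intensity
dt, n_steps = 0.01, 500_000

# Euler-Maruyama integration of (k_P + 1) d(err)/dt + k_I * err = dv/dt  (Eq. 9 with k_D = 0),
# an Ornstein-Uhlenbeck process for the growth-rate error err = delta eta_r.
err = np.zeros(n_steps)
noise = sigma_v * np.sqrt(dt) * rng.normal(size=n_steps)   # increments of the return noise v(t)
for n in range(n_steps - 1):
    err[n + 1] = err[n] + (noise[n] - k_I * err[n] * dt) / (k_P + 1.0)

burn = n_steps // 10                                       # discard the initial transient
var_pred = sigma_v**2 / (2 * (k_P + 1) * k_I)              # predicted stationary variance
print("simulated error variance :", round(err[burn:].var(), 4))
print("predicted error variance :", round(var_pred, 4))
```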

We see how averaging expenditures over time (known as consumption smoothing in economics) gives a general mechanism whereby agents can make their average resource growth rate take on a target value, up to stochastic fluctuations with variances given by the balance between the unpredictability of the environment and the quality of their control. Effective control generates strong statistical correlations between income and costs over time, which constitute the basis for (a spatial) equilibrium. In this light, variations between agents may persist as the result of differences in their specific experienced environments and/or the quality of their control. Exposing these issues requires the consideration of averages over populations of agents as in Fig. 3D, to which we now turn.

To compute the growth dynamics for a city, we now define the averages over a population of size G, r_G = (1/G) Σ_{j=1}^G r_j, where r_j are individual j's resources, and so on for growth rates, incomes, and costs (see section S6 for a summary of notation). To derive the corresponding dynamics, we take these averages over Eq. 4

dr_G/dt = y_G - c_G = (η r)_G   (10)

where we dropped the r subscripts on the rate, for simplicity, so that in this section η_{r,G} → η_G. The average of the product is

(η r)_G = (1/G) Σ_{j=1}^G η_j r_j = η_G r_G + covar_G(η, r) = [η_G + covar_G(η, r/r_G)] r_G   (11)

The quantity η*_G ≡ η_G + covar_G(η, r/r_G) is the effective stochastic growth rate for the group average resources, r_G. This quantity equals the simple arithmetic group average, η_G, plus a correction due to the fact that growth rate variations may not be statistically independent from variations in resources across individuals. The covariance term is familiar from evolutionary theory in the context of the Price equation (16) and signals selection. For example, if richer individuals experience higher growth rates across the group, then the average growth rate will be higher and vice versa. This flags the important issue that pursuing the highest possible group-level growth rates in a heterogeneous population will increase inequality. Conversely, pursuing growth such that poorer individuals enjoy higher rates leads to more equitable outcomes in distribution but subtracts from the average η*_G because the covariance is negative.

To complete the derivation, we now characterize the mean and stochastic components of η_G. We express the individual growth rate, as in the previous section, as η_j = ⟨η_j⟩ + δη_j, which leads to η_G = (1/G) Σ_{j=1}^G η_j = (1/G) Σ_{j=1}^G (⟨η_j⟩ + δη_j) = ⟨η⟩_G + δη_G, where ⟨η⟩_G is the group mean of individual temporal means and δη_G is a stochastic noise term resulting from the group average of the errors for each individual. The properties of δη_G are inherited from those of each agent and their statistical correlations. The mean remains zero, while the variance is given by σ_G² = (1/G²) Σ_{j,k}^G σ_j σ_k ρ_jk, where σ_j and σ_k are the volatilities for agents j and k and ρ_jk is the correlation matrix between them. (The correlation matrix is symmetric, with -1 ≤ ρ_jk ≤ 1 and with ones along the diagonal, corresponding to each agent's squared volatility.)

In the simplest case, when errors are statistically independent across agents, ρ_jk = 0 for k ≠ j, and if all SDs are the same, σ_k² = σ_r², we have that ⟨η⟩_G = (1/G) Σ_{j=1}^G ⟨η_j⟩ and σ_G² = σ_r²/G. Then, the magnitude of fluctuations is reduced by group size and vanishes in the infinite-G limit. Thus, if errors are independent across individuals, both long times and large-population pooling lead to a convergence to the behavior set by the temporal means. This, curiously, implies that the group average grows faster than the agents' temporal average in general and provides a strong quantitative argument for pooling resources either via government action or risk management instruments, such as insurance (26).
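A quick numerical illustration of this limit and of the fully correlated case discussed in the next paragraph (all values illustrative): independent agent-level errors give a group variance that falls as 1/G, whereas fully correlated errors leave it independent of group size.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_r, G, n_samples = 0.1, 1000, 20_000   # illustrative agent volatility, group size, sample count

# Independent errors across agents: rho_jk = 0 for j != k.
indep = rng.normal(0.0, sigma_r, size=(n_samples, G)).mean(axis=1)

# Fully correlated errors: rho_jk = 1, every agent shares the same shock.
common = rng.normal(0.0, sigma_r, size=(n_samples, 1)) * np.ones((1, G))
corr = common.mean(axis=1)

print("sigma_G^2, independent:", round(indep.var(), 6), " predicted:", sigma_r**2 / G)
print("sigma_G^2, correlated :", round(corr.var(), 6),  " predicted:", sigma_r**2)
```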

The case of nonindependent variables is interesting because the treatment of the last section suggests that it would follow from different agents either experiencing correlated fluctuations and/or generating coordinated institutional control responses, which is likely in many circumstances. When all variables are fully correlated, ρ_jk = 1 and σ_G² = σ_r², the volatility associated with r_G becomes independent of group size. In urban settings, we may expect some correlation between agents as they experience a common spatial and socioeconomic environment of the city. For U.S. MSAs, σ_G² is approximately constant in G (see fig. S3).

The covariance term between individual growth rates and resources adds additional corrections

covar_G(η, r/r_G) = [(1/G) Σ_{j=1}^G (⟨η_j⟩/⟨η⟩_G - 1)(r_j/r_G - 1)] ⟨η⟩_G + [(1/G) Σ_{j=1}^G (δη_j/δη_G - 1)(r_j/r_G - 1)] δη_G = covar_G(⟨η⟩/⟨η⟩_G, r/r_G) ⟨η⟩_G + covar_G(δη/δη_G, r/r_G) δη_G   (12)

With these results in hand, we can now write the time evolution of average group resources as

dr_G/dt = η*_G r_G, with η*_G = ⟨η⟩*_G + δη*_G, ⟨η⟩*_G = [1 + covar_G(⟨η⟩/⟨η⟩_G, r/r_G)] ⟨η⟩_G, δη*_G = [1 + covar_G(δη/δη_G, r/r_G)] δη_G   (13)

We see that the statistical behavior of r_G is set by the dependencies of these quantities (see section S3 for discussion). When ⟨η⟩*_G and σ*_G² are independent of r_G (but may depend on G and t) and δη*_G obeys the conditions of the central limit theorem, the population average resources r_G will follow a multiplicative random growth process (Fig. 3D). This process, similar to Eq. 4, will then integrate to give

ln[r_G(t)/r_G(0)] = (⟨η⟩*_G - σ*_G²/2) t + σ*_G W(t)   (14)

showing that if W(t) converges to a normal variable as the result of the central limit theorem, then the statistics of r_G(t) become lognormal at long times (see section S3 for an example and further discussion of necessary conditions, exceptions, and related results) (29). It is important to stress that growth rates and volatilities now run (i.e., change) with group sizes, G, and time, t, depending on the correlations captured by the several covariance terms (see Fig. 3D).
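To illustrate the role of the central limit theorem here, the sketch below draws deliberately skewed (non-Gaussian) annual growth-rate shocks and shows that the distribution of ln r_G across a synthetic ensemble of cities is nevertheless pushed toward Gaussian (hence lognormal r_G) after a few decades; all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Illustrative group-level parameters (assumed size independent, as observed for U.S. MSAs).
eta_G, sigma_G = 0.05, 0.02          # mean and SD of the annual group growth rate
n_cities, T = 400, 47                # number of synthetic cities, years

# Annual growth-rate shocks drawn from a skewed distribution (centered exponential);
# summing them over T years drives ln r_G toward Gaussian by the central limit theorem.
shocks = rng.exponential(sigma_G, size=(n_cities, T)) - sigma_G
ln_rG = np.sum(eta_G - sigma_G**2 / 2 + shocks, axis=1)   # ln[r_G(T)/r_G(0)] per city

# Skewness shrinks roughly as 1/sqrt(T) relative to the annual shocks.
print("skewness of annual shocks  :", round(stats.skew(shocks.ravel()), 2))
print("skewness of ln r_G at T=47 :", round(stats.skew(ln_rG), 2))
```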

We are now ready to express the quantities in scaling relations as functions of stochastic growth rates. This will provide us with a statistical theory that derives urban scaling beyond mean-field calculations (4). To keep the notation simple, the index i denotes cities. We take each city to be a group with G = N_i and write the simplified notation γ_i ≡ γ_{N_i}, σ_i ≡ σ_{N_i}, and so on. We will also write the averages of these quantities over the ensemble of cities as ⟨γ⟩ ≡ ⟨γ_i⟩ (see section S6 for a summary of notation).

Running scaling exponents and the emergence of scale invariance. Let us see when a power law scaling relation is a conserved quantity of the stochastic growth dynamics. We start with the integral trajectory for total resources, R_i(t), ln[R_i(t)/R_i(0)] = γ_i t + σ_i W(t). This equation is ergodic in the sense of stochastic population dynamics because long-time averages coincide with ensemble averages (30)

⟨((1/t) ln[R_i(t)/R_i(0)] - γ_i)²⟩ = σ_i² ⟨W²(t)⟩/t² = σ_i²/t → 0   (15)

This property of long-time means specifies necessary conditions for scaling to hold over time. To see this, define

B_i ≡ B_i(ln N_i) ≡ dγ_i/d ln N_i, so that γ_i(t) = ∫ B_i d ln N_i ≈ γ(0) + B_i ln N_i + O[(ln N_i)²]   (16)

where γ(0) is independent of time and scale. B_i varies slowly with ln N_i so that B_i is also independent of scale but could depend on time. B_i is analogous to a beta function expressing the change (running) of a coupling with scale in statistical physics (15). Replacing it into Eq. 15 obtains

R_i(t) ≈ R_i(0) e^{γ_i t} ≈ Y_0(t) N_i(t)^{β(t) + B_i t}   (17)

which shows that if B_i is nonzero, then the scaling exponent β(t) + B_i t becomes time dependent in general and is not conserved by the dynamics of growth. Scaling relations will then vary over time, becoming steeper (larger exponent) if B_i > 0, or shallower if B_i < 0. It is also possible that the integral in Eq. 16 yields a more complicated function of ln N_i and time. Under time averaging and control, it is natural for B_i ∝ 1/t, as we have seen, resulting in a time-independent change of scaling exponent.

To see this, consider that the volatility term σ_i²/2 in the effective growth rate is, in general, both time and population scale dependent, while the mean ⟨η_i⟩ is independent of both. This means that, in most circumstances, B_i(ln N_i) = -(1/2) dσ_i²/d ln N_i, which should be small because of the agents' control over fluctuations. Consider the example σ_i²(N_i) = σ_r²/(t N_i^α), B_i = (α/2) σ_i²(N_i), which leads to the exact result β → β - σ_r²/(2 N^α ln N). This shows that the scaling exponent β, while time independent, increases with city size, N. In this case, only at sufficiently large N >> (σ_r²/2)^{1/α} will the value of β coincide with that predicted by mean-field scaling theory (4). This is not an issue if σ_r² is small. Otherwise, for smaller cities, β may become measurably smaller than for larger ones. Because the magnitude of variations away from scaling is urban system and quantity dependent, this may help account for some variations of observed scaling exponents in different nations and for different urban properties (31, 32). It also implies correlations between the behavior of the prefactor, Y_0(t), the variance, and the scaling exponent β, as noted recently in (33).

These results show that strict scaling invariance is predicated on B_i = dγ_i/d ln N_i → 0, which is analogous to a renormalization group fixed point in statistical mechanics (34) applied to the population growth rate. Away from this fixed point, we have now shown how to compute corrections to scaling exponents, which are the result of the scale-dependent statistics of growth rates. Last, note that the scale independence of growth rates for cities is a standard assumption known as Gibrat's law (or law of proportional growth) (24). This assumption is necessary to derive Zipf's law for the statistics of city sizes. Figure S3 illustrates this general analysis with the growth rates and variances for wages in U.S. metropolitan areas since 1969, showing that the effective growth rates are city size independent to an excellent approximation, justifying the observed persistence of scaling with a time-invariant exponent.
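One way to check Gibrat's law on a wages panel is sketched below (Python; the column names follow the earlier hypothetical panel, and the synthetic data are purely illustrative): bin cities by population and verify that the mean and variance of annual log-wage growth rates show no trend with size.

```python
import numpy as np
import pandas as pd

# The panel is assumed to have columns ["city", "year", "N", "Y"], as in the earlier sketch.
def gibrat_check(df, n_bins=5):
    df = df.sort_values(["city", "year"]).copy()
    # Annual log growth rate of wages for each city.
    df["g"] = df.groupby("city")["Y"].transform(lambda y: np.log(y).diff())
    df = df.dropna(subset=["g"])
    # Bin by population size and compare growth-rate statistics across bins.
    df["size_bin"] = pd.qcut(np.log(df["N"]), q=n_bins, labels=False, duplicates="drop")
    stats = df.groupby("size_bin")["g"].agg(mean_rate="mean", volatility_sq="var")
    return stats   # under Gibrat's law, both columns are roughly flat across size bins

# Example usage with a synthetic, Gibrat-compliant panel (all numbers illustrative):
rng = np.random.default_rng(3)
cities, years = 200, 30
N0 = rng.lognormal(12, 1.5, size=cities)          # city populations, held fixed for simplicity
lnY = np.log(3000 * N0)                            # illustrative initial wages
panel = []
for t in range(years):
    lnY = lnY + rng.normal(0.05, 0.03, size=cities)   # size-independent growth rates
    panel += [{"city": c, "year": 1970 + t, "N": N0[c], "Y": np.exp(lnY[c])} for c in range(cities)]
print(gibrat_check(pd.DataFrame(panel)))
```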

Equations of motion for prefactors and scaling residuals. We now translate stochastic growth into equations of motion for both scaling prefactors and residuals; details of the derivations are given in section S4. For the prefactors, we obtain

d ln R_0/dt = d⟨ln R⟩/dt - β d⟨ln N⟩/dt   (18)

which is a function of only the centers' dynamics. Because the centers are averages over all cities, no higher order statistics plays out in these quantities. This dynamics of the scaling prefactor is important because it measures the urban system (nation)-wide per capita baseline growth, a form of endogenous intensive economic growth.

For the residuals, we obtain

dξ_i^r/dt = (γ_i - ⟨γ⟩) - β d/dt (ln N_i - ⟨ln N⟩) + (δη_i - ⟨δη⟩) = (γ_i - ⟨γ⟩) - β(ν_{N_i} - ν_N) + (δη_i - ⟨δη⟩)   (19)

where ν_{N_i} = d ln N_i/dt and ν_N = d⟨ln N⟩/dt. This equation has a number of interesting properties: The most important is that it essentially describes a random walk driven by the δη terms, which set the variance of the ξ_i^r. The two other terms enforce the convergence to the population averages in terms of growth rates of resources and population and guarantee that ⟨ξ^r⟩ = 0 is preserved by the urban system's growth dynamics.

The emergent statistics of urban indicators. The statistics of resources follows from integrating Eq. 19, leading to the general expectation that the statistics of the ξ^r become normal at long times (see section S4). This means, in turn, that cumulative urban indicators (stocks) are expected by the same argument to be lognormal, as we saw more directly above. Flow quantities, such as income or costs, are often more accessible empirically (Fig. 2). Their statistics follow from the analysis of the previous sections, where we wrote Y_i = b_i R_i = (⟨b_i⟩ + v_i) R_i, so that ln Y_i = ln R_i + ln(⟨b_i⟩ + v_i). Substituting the scaling relations for R_i and Y_i, this implies that ξ_i(t) = ξ_i^r(t) + ln R_0 - ln Y_0 + ln b_i. Taking averages over cities obtains the constraint ln Y_0 - ln R_0 = ⟨ln b⟩, which allows us to write

ξ_i(t) = ξ_i^r(t) + ln b_i - ⟨ln b⟩   (20)

This shows that the statistics of income are set by two different processes, the first resulting from the statistics of associated resources and the second due to stochastic returns. The first piece is characterized by the accumulation of variations over time, which entails time averaging and is expected to become approximately normal. The second term is instantaneous and consequently not subject to limit theorems. Hence, it can have more arbitrary statistics.

To see this, we return to the analysis of stochastic returns b_i under agents' adaptive control to obtain the explicit time evolution equation

dξ_i(t) ≈ dξ_i^r(t) + (ln b_i - ⟨ln b⟩) dt + [(σ_i/b_i) dW_i(t) - ⟨σ/b⟩ dW(t)]   (21)

where the force dv_i/dt was taken here to be white noise dW_i (the differential of the Wiener process W_t) with variance σ_i², as above. If dv_i/dt has nonrandom components, the expression is similar but more complicated. Here, dW is the average stochastic force over cities, and we assumed that fluctuations are uncorrelated to population variations in σ and b. This also implies that the quantity ξ_i = ξ_i^r + (ln b_i - ⟨ln b⟩) inherits the property of ergodicity from ξ_i^r. Figure S4 shows the income growth rates for U.S. metropolitan areas over time, including its noise-driven equation of motion and the property for wages where fluctuations away from the mean trajectory of growth fall over time (roughly as 1/t, inset) to become negligible for long times.

Note that, in the limit of strong control, at the individual level and/or as an emergent average within cities when σ_i/b_i << 1, the stochastic terms will be small, and the statistics of income will approximate that of resources as a normal distribution for the ξ_i. In addition, this derivation leads to a set of quantitative expectations that can be checked against the data: Figure 5A shows that the quantity ⟨ξ_t²⟩ - ⟨ξ_0²⟩ ≈ σ²t behaves approximately like the displacement of a one-dimensional (1D) random walk. This is well described by a straight line in time with slope given by the variance, although empirically we also observe shorter periods of acceleration or deceleration relative to the main trend. Figure 5B shows an analogous picture depicting each SAMI, ξ_i, trajectory, starting all cities at ξ_i = 0 in 1969. This demonstrates the spread of the SAMIs over time according to the behavior of a 1D random walk (red line, the same as the straight line in Fig. 5A). Figure 5C shows the volatility and mean growth rates for all cities over the 47 years and corresponding estimates from measurements of dispersion over time (Fig. 5, A and B) and over the ensemble of cities: The observed statistical agreement of these two strategies for measuring the square volatility demonstrates the ergodicity of the statistics of ξ_i(t) once drift has been removed. Last, Fig. 5 (A and D) shows that the income residuals' variance is actually time dependent, spreading very slowly over time as predicted by the derived lognormal part of the distribution. The overall distribution is better described, however, by the sum of two Gaussians, one broad and one narrow, corresponding to the two terms in Eq. 21 (red dashed line in Fig. 2C). It is only because the annual volatilities are so small that this temporal pooling and a deduction of a pure lognormal behavior appeared reasonable for flow variables in earlier work (22, 35).
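A sketch of the ergodicity check behind Fig. 5C, using synthetic SAMI trajectories (all numbers illustrative, not the empirical data): the squared volatility estimated from the temporal growth of the mean squared displacement should agree with the estimate obtained from the ensemble dispersion of annual increments.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical SAMI trajectories xi[i, t]: a pure random walk with annual variance sigma2_true.
n_cities, T, sigma2_true = 380, 47, 0.0011
xi = np.cumsum(rng.normal(0.0, np.sqrt(sigma2_true), size=(n_cities, T)), axis=1)
xi -= xi[:, [0]]                                   # start all cities at xi = 0 (as in Fig. 5B)

# Estimate 1 (temporal): slope of the mean squared displacement <xi_t^2> versus time.
msd = (xi**2).mean(axis=0)
years = np.arange(1, T + 1)
sigma2_time = np.polyfit(years, msd, 1)[0]

# Estimate 2 (ensemble): variance of annual increments pooled across cities and years.
sigma2_ens = np.diff(xi, axis=1).var()

print("true sigma^2      :", sigma2_true)
print("temporal estimate :", round(sigma2_time, 5))
print("ensemble estimate :", round(sigma2_ens, 5))
```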

(A) On the average over cities, the displacement from their initial deviations in 1969 grows linearly (red line) (gradient = 0.00108, 95% confidence interval = [0.00102, 0.00115]; intercept = -2.13279, 95% confidence interval = [-2.25885, -2.00672], R² = 0.93), as expected from pure random diffusion of the growth rates. Note that this is a mean temporal behavior and that there are periods when deviations grow faster or slower. Periods of economic recession are shown in gray. (B) The trajectory of deviations for all cities (different colors) but having set all deviations in 1969 to zero so that all trajectories depart from a common origin. The red line indicates the diffusive behavior, same as in (A), clearly showing that deviations tend to increase in magnitude over time. (C) The prediction of the wage growth volatility for U.S. MSAs by three methods: the fit of (A) and (B) and the averages over time and sets of cities, demonstrating the ergodic character of the statistical dynamics. Shaded areas show the overlapping 95% intervals in these estimates. (D) The distribution of deviations, year by year, using the same color scheme as in Fig. 2 (A and B). We see that, unlike our first approach in Fig. 2C, the width of the distributions is increasing slowly over time (brown most recent) and that the data for wages (a flow) should be fit by a distribution that is well described as the sum of two Gaussians: a universal broad distribution due to resource compounding and a contingent short-term narrow distribution (Eq. 20), which depends on the most recent environmental shocks.

We showed how quantitative urban theory can be taken beyond a stationary approach based on an average budget constraint, characteristic of spatial equilibrium. In its place, we proposed the primacy of stochastic growth processes and agents' strategic behavior as the dynamical statistical theory from which more particular results follow (Fig. 1). This provides a common foundation for nonequilibrium modeling of cities across scales (17, 18) and shows how these processes are associated with urban scaling and agglomeration effects. From this point of view, we see how the budget condition of spatial equilibrium models becomes the emergent property of a much more fundamental process, whereby agents subject to stochastic resource flows (incomes and costs) must develop adaptive strategies to reduce potentially fatal volatility. This point of departure is both necessary for dissipative complex systems and very general, so that it offers a number of connections with the statistical dynamics of other natural and engineered systems (28, 36, 37).

The key advantage of this bottom-up stochastic approach is that it naturally unifies processes of resource flow management (equilibrium), growth, and statistics. Hence, the framework emphasizes the critical role played by growth rate variations in a number of important urban phenomena. Specifically, we showed how the properties of the growth rate volatility are implicated in the (non-)preservation of urban scale invariance and set the boundary between growth and decay regimes, including the time scale for exponential growth to become manifest as fluctuations average out. In particular, we demonstrated how the general property of ergodicity in population dynamics and formal demography (30) is intimately connected, together with a renormalization fixed point condition on the growth rates, to the emergence of mean-field scaling relations (4).

There are a number of important consequences for urban theory that these results clarify and unify. First, they show how spatial equilibrium is, after all, consistent with observed exponential growth in cities, both economic and demographic, which has been an assumption in previous models. Second, they show how to derive macroscopic statistical behavior for cities and urban systems from microscopic strategic choices at the agents' level and provide expressions for how to aggregate growth rates over time and populations. Third, this process exposes issues of inequality of wealth and income and how they are compounded over time (23). Specifically, the quantities discussed here show how policies aimed at maximizing aggregate economic growth may naturally deemphasize the relative growth of the poorest sections of the population. Last, and in many ways the central motivation of the paper, the results derived here demonstrate that the statistics of most urban indicators are not universal in a simple sense. Rather, they are emergent as the consequence of limit theorems under stochastic (exponential) growth. In particular, the statistics of urban indicators that account for incomes and costs are the result of a mixture of a more universal component, inherited from their association to accumulating quantities and corresponding limit theorems, and a nonuniversal part, arising from the short-term hustle (accidents and the quality of the agents' control) in variable stochastic environments. Thus, statistical tests to evaluate the Gaussianity of urban (log-)quantities (22, 35, 38), to be meaningful, must be performed with care and explicitly acknowledge the distinct distributions of different urban indicators.

The models for the budget constraint, the growth of resources, and associated control strategies introduced here are standard starting points in demography, geography, economics, and finance (23–25). They can clearly be made more complex and, where necessary, also more realistic. The concept of resources and incomes is not 1D. Issues of energy, monetary wealth, knowledge, and social capital all contribute to resource growth in human populations. The extent to which these quantities, which can all be accumulated, interact with each other is critical for a general understanding of human development. Models for the dynamics of volatilities and more sophisticated control of fluctuations and maximization of growth rates may also become important. Some inspiration should be derived from population biology and mathematical finance (16, 25), where such models are more developed. Last, data on detailed expenditures, wealth, and other financial and social characteristics are becoming increasingly available for households at finer temporal resolutions (14) and will be critical to test and improve the ideas introduced here and to identify systematic heterogeneities in agents' behavior, e.g., associated with conditions of poverty and uncertainty.

The approach developed here can be applied to other contexts beyond the contemporary United States but requires appropriate contextualization. In all societies, household adaptive management of resources is likely to remain important. However, in more collectivist societies or in those with stronger top-down governance, the aggregate management of benefits and costs will replace, to a larger extent, bottom-up agency. As shown, managing aggregate costs at the societal level can achieve considerable benefits because this strategy minimizes some risk. Its success hinges, however, on the effective investment of resources that generate society-wide benefits and their redistribution and related inequality. In circumstances of low growth, such as in most preindustrial societies, adaptive control of resources and the associated dynamics of volatility provide us with an important window into their (in)stability. This allows us to connect more proximate explanations of collapse, e.g., related to environmental stresses or violence, to the broader collective and political dynamics of societies, expressed as the capacity to manage shocks or disintegrate instead.

Empirically, the U.S. urban system, at least in terms of changes in total wages in MSAs, turns out to be very well behaved: Its growth volatilities are almost always very small, fluctuations converge to limiting statistics quickly, and scaling relations are conserved over time. However, our theoretical results show that these properties pertain only to quantities and systems of cities with small, population size-independent growth rate volatilities. In the United States, over the past nearly 50 years, despite a number of notable events, observed average square volatilities associated with wages and population growth are about one order of magnitude smaller than average growth rates, making their effects almost negligible. It will be interesting in the future to investigate other urban systems and quantities characterized by larger volatility, such as crime or innovation (22, 33, 35), for which the present framework makes a number of testable predictions.

The flip side of the observed constancy and stability of growth rates in American cities is that extant wage disparities become very slow to reverse. The typical square displacement in ξ over nearly five decades (Fig. 5A; σ²t) is just 0.054. Assuming a similar rate of change in the past means that the observed variance in deviations from scaling at the beginning of our dataset (in 1969, about 0.043) would have been the product of the previous 40 years, taking us back to the time of the roaring 1920s and the subsequent Great Depression. Thus, the answer to the question at the beginning of this paper about predicting the magnitude of deviations from scaling in any given year is now recast not so much in terms of parameters of stationary statistics (33). Rather, this variance is the result of accounting for the accumulation of much smaller accidents and variations that make up the stochastic history of cities, which compound short-term noisy growth under partial control of heterogeneous agents over entire urban areas and long time periods of decades (22). This is the quantitative sense in which history matters for cities, and their development becomes path dependent (18, 39, 40).


Revisiting Genealogy And DNA Testing With Libby Copeland On Thursday’s Access Utah – Utah Public Radio


You swab your cheek or spit into a vial, then send it away to a lab somewhere. Weeks later you get a report that might tell you where your ancestors came from or if you carry certain genetic risks. Or the report could reveal a long-buried family secret and upend your entire sense of identity.

In The Lost Family: How DNA Testing Is Upending Who We Are, journalist Libby Copeland investigates what happens when we embark on a vast social experiment with little understanding of the ramifications. Copeland explores the culture of genealogy buffs, the science of DNA, and the business of companies like Ancestry and 23andMe, all while tracing the story of one woman, her unusual results, and a relentless, methodical drive for answers that becomes a thoroughly modern genetic detective story.

Libby Copeland is an award-winning journalist and author who writes from New York about culture, science, and human behavior. As a freelance journalist, she writes for such media outlets as The Atlantic, Slate, New York, Smithsonian, The New York Times, The New Republic, Esquire.com, The Wall Street Journal, Fast Company, and Glamour.

A graduate of the University of Pennsylvania, she was a 2010 media fellow at Stanford University's Hoover Institution. Her article for Esquire.com, "Kate's Still Here," won Hearst Magazines' 2017 Editorial Excellence Award for reported feature or profile. She previously won first prize in the feature specialty category from the Society for Features Journalism (then called AASFE). She lives in Westchester, NY, with her husband and two children.

Go here to see the original:
Revisiting Genealogy And DNA Testing With Libby Copeland On Thursday's Access Utah - Utah Public Radio

So That We May Live – Jewish Exponent

By Rabbi Jason Bonder | Parshat Shoftim

Last year, I was sitting in a learning session led by the Union for Reform Judaism's vice president for Israel and Reform Zionism, Rabbi Josh Weinberg. He made an important observation about a verse in this week's portion, and I wrote it down immediately.

In our portion of Shoftim this week, we encounter a widely quoted commandment: "Tzedek tzedek tirdof" ("Justice, justice, shalt thou follow") (Deuteronomy 16:20). When presenting this verse, Weinberg quibbled, "Many of us quote this so much that we can forget there's more to the verse."

That line is still in my notebook from that session. When I returned to my notes, one year and one pandemic later, that comment leapt off the page. This year, reading further into the verse is crucial. Here is the verse in its entirety:

"Justice, justice, shalt thou follow, that thou mayest live, and inherit the land which the Lord thy God giveth thee."

The coronavirus pandemic and this verse from Torah remind us that justice is not only a slogan or catchy phrase. It is a matter of life and death. If we are fortunate enough to have our basic needs met, and if we live lives with a fair amount of comfort, it may be easy for us to forget the urgency of pursuing justice.

The verse quoted above is from the 1917 Jewish Publication Society translation. I found it interesting that in the society's 1985 translation of this very same verse, the English reads, "Justice, justice, shall you pursue, that you may thrive and occupy the land that the Lord your God is giving you." I am sure there is good reason for the word choice and change. Yet I can't help but wonder if in the translation of the word tichyeh ("live" or "thrive") there is a reflection of a kind of lack of urgency that is so natural to human behavior.

Reading beyond the first words does not mean that the beginning of the verse isn't crucial. In fact, with a renewed sense that the pursuit of justice is imperative, the words carry with them great lessons for today as we navigate the uncharted waters of COVID-19.

The 11th-century commentator Rashi (Rabbi Solomon ben Isaac) learns from this verse that you have to go and pursue justice by seeking out a good court. If you were to take the easy path of taking your dispute or problem to any old court, you may wind up with a ruling that is less than just. In our modern day when the information available to us is truly endless, this teaching becomes quite relevant to the idea of justice.

When we have questions about how to behave safely when it comes to transmission of the virus, we cannot rely on the very first website we find or the email sent to us by a friend. We have to pursue the knowledge of the experts. We must find the very brightest and most accomplished folks who are dedicating themselves to keeping us safe from coronavirus. The 13th-century sage Nahmanides (Rabbi Moses ben Nahman) builds on the commentary of Rashi, saying that you must leave your own place to find justice if you know there is a place where the sages are superior.

The 12th-century scholar Rabbi Abraham ibn Ezra finds great meaning in the repetition of the word justice. He argues that this duplication is there in the Torah to remind us that justice sometimes results in loss and other times in gain. We must pursue justice regardless.

In our trying times, it is so important to ask ourselves: Am I opposed to behaving a certain way because it is inconvenient, or because it is unjust? If wearing a mask makes me feel uncool, or if wearing a mask makes me feel itchy, is that a case of injustice? No. This is what Ibn Ezra warned of all those years ago. Sometimes, in the pursuit of justice, we are inconvenienced.

Ibn Ezra follows up on the previous explanation with another thought. He says that perhaps justice is written twice to remind us that we must pursue justice time and time again, all the days of our lives. This means that each day is another opportunity to do the right thing and protect others, even if we messed up yesterday.

Living through this pandemic has taken its toll, and we are not even close to done. That can be overwhelming. But it can also inspire us to make sure that we are making our society as fair and just as possible when the stakes are so very high.

I conclude with a huge thank you to all of the essential workers who are out there saving and sustaining us. Justice, justice, may we pursue, so that all of us may live and thrive in a more equitable world.

Rabbi Jason Bonder is the associate rabbi at Congregation Beth Or in Maple Glen. The Board of Rabbis of Greater Philadelphia is proud to provide diverse perspectives on Torah commentary for the Jewish Exponent. The opinions expressed in this column are the author's own and do not reflect the view of the Board of Rabbis.

Originally posted here:
So That We May Live - Jewish Exponent

From ‘inflation whipsaw’ to ‘hysteresis’: RBI’s MPC warns of COVID’s after-effects – CNBCTV18

Members of the Reserve Bank of India's Monetary Policy Committee (MPC) deliberated at length about the effects of the COVID-19 pandemic on growth, inflation and human behavior in general at the meeting held on August 4-6, even as the committee decided to hold the benchmark interest rate.

Minutes of the meeting, released today, showed that the MPC unanimously agreed to stand pat on interest rates but also said the RBI should be ready to act to further help the economy when needed.

Between February 2019 and now, the MPC has cut the benchmark repo rate by 2.5 percentage points, from 6.5 percent to 4 percent, and opined that the recent spike in inflation may be temporary.

Members of the MPC, however, cautioned that the task of policymakers would be difficult as they balanced the need to stimulate the economy after the COVID-19 shock while trying to keep inflation in check.

"The prospect of an inflation whipsaw, a phrase used by Markus Brunnermeier at Princeton, is probably the right way to look at inflation going forward, i.e., there are different inflation/deflation pressures that need to be watched carefully," said the MPC's external member Chetan Ghate, who participated in his last monetary policy meeting, along with the other members Ravindra Dholakia and Pami Dua.

Ghate was referring to the Princeton professor's seminar in May, which talked about the dynamics between factors that are driving opposing forces of deflation and inflation, in light of the COVID-19 shock and the money printing exercises convened by governments and central banks globally.

"On the upside, a perfect storm of cost push pressures, accommodative monetary policy, and adverse food supply shocks could lead to a pickup in inflation," Ghate said. "On the downside, the paradox of thrift, i.e., forced saving pressure induced by a de-facto lock-down, could be a potent disinflationary force."

In terms of output losses, the MPC member said that the worst is almost surely behind us.

Ghate also warned that macro policy must ensure that a temporary COVID-type shock to the Indian economy does not become permanent.

"Economists call this hysteresis," he said, referring to the phenomenon in which a temporary shock brings about a permanent change in society.

"In a post-COVID world hysteresis will be driven by human behaviour. Despite the economy opening up, people will still hesitate to go out and spend, Ghate noted.

RBI Governor Shaktikanta Das maintained that while there was headroom for further monetary policy action, at this juncture "it is important to keep our arsenal dry and use it judiciously."

Read the original here:
From 'inflation whipsaw' to 'hysteresis': RBI's MPC warns of COVID's after-effects - CNBCTV18

DC adjusting to life back in the classroom – Defiance Crescent News

Defiance College has one goal for the fall of 2020: to give students what they expect, a full campus experience with face-to-face instruction, vibrant campus life, and robust athletics. The COVID-19 pandemic may have altered the look of campus, but not the underlying spirit of DC. Classes get underway on Tuesday.

Prior to students returning, DC put in place a number of changes to help the campus remain healthy this fall. The college is hopeful that campus life will return to normal soon. At this time, these measures are in place only for the fall semester and will be extended to the spring semester if necessary.

The college has remained in constant contact with the Defiance County Health Department, as well as the Ohio Department of Education, since the beginning of the COVID-19 pandemic. It is the goal of the college to maintain compliance with all local, state and federal health mandates.

"As an educational institution, we are continually guiding the Defiance College community about the importance of abiding by and modeling the safety protocols suggested by leading health experts," stated DC President Richanne C. Mankey. "Students are excited to be here to learn, to enrich their lives, and to grow as individuals. Parents who were on-campus this week to move-in their students saw ourplans at work and told us how they gained confidence in our systems to putpeople and safety first. We know that our ability to succeedlies within the DC communitys willingness to use and layer the most recent health and safety guidelines."

COVID-19 preparedness and response has been handled by the Defiance College Incident Command (IC) team. The IC team is made up of representatives from across the entire DC community and has met continuously since early March. All policies, procedures and changes are brought before the IC team as a way to make sure all of campus is represented in the decisions being made.

Here is a brief sampling of what the IC team has approved: All students were required to submit a negative COVID-19 test before they arrived on campus. Once the test was completed, students were encouraged to self-isolate as much as possible. This was done so all students would be at the lowest possible health risk before returning to campus, thus giving the administration a zero baseline to work from if contact tracing were ever needed.

Part of leading in a pandemic is being prepared. The IC team has a plan in place if a member of the DC student body should test positive for COVID-19. A simplified version of the plan is as follows: the student is asked to self-isolate, contact tracing is conducted, and preparations are put in place to ensure the individual can still participate remotely in classes. Residential students in isolation will have meals brought to their room. The procedure is the same for faculty and staff except they are asked to remain at home and, if their health allows, continue to work remotely.

"There is a level of detail in our preparation that I hope is reassuring," said Mankey."The colleges to do list (to prepare campus for the start of fall semester) has been extensive, and it continues to grow and change as the data changes and decisions of external entities are made that affect the college.What will be expected of all of us may not be considered fun; the expectations are important if we want to move beyond the pandemic as quickly as our human behavior will allow."

Everyone on campus is required to wear a mask while around other people. This includes in classes, offices and in the residence halls. Students are not required to wear a mask while in their rooms. Faculty, staff and students are required to conduct daily health checks before they arrive on campus. Residential students will need to do so before they leave their rooms.

The IC team is encouraging the DC community to layer preventative measures. This means using the word "and" instead of the word "or." Wear a mask over your mouth and nose, and physically distance, and wash your hands with soap and water for 20 seconds, and use hand sanitizer, and disinfect your work space before and after use. Layering means using the measures at the same time rather than using only one measure at a time.

Faculty created an alternative teaching schedule in order to reduce contact. Classes with fewer than 10 students meet face-to-face one day and virtually (at the same scheduled time) another. Classes with more than 10 students are split into two groups. Both groups meet face-to-face and virtually, but not at the same time. All classrooms have been rearranged to increase physical distancing. While in the classroom, professors will wear face masks or face shields and students will wear face masks.

"It has been exciting to see DC come together and evolve in order to keep the campus as healthy as possible," noted vice president for academic affairs Dr. Agnes Caldwell. "One major change to the academic routine is that rooms will be sanitized before and after each session. Faculty have been willing to be flexible and have continued to move quickly to adjust to the academic uncertainties."

All areas on campus have been reconfigured to increase physical distancing. For example: computer labs have every other computer station closed, the dining hall seating has been reduced and an overflow area created, and the Pilgrim Library study areas are at a reduced capacity.

As of Aug. 20, the athletic conference, with guidance from the NCAA Board of Governors, has postponed all high-contact sports until spring of 2021. This includes football, soccer and volleyball. The Heartland Collegiate Athletic Conference (HCAC) also is requiring student-athletes be tested frequently for COVID-19. As the situation evolves, so too may the guidance from the state, county and NCAA. For the most up-to-date information on the changes to Defiance Colleges athletic schedules, visit: http://www.defianceathletics.com.

"Although it is disappointing news for our student-athletes, coaches, staff and campus community, we understand the priority is the health and safety of everyone involved in athletics and on our campus," said Defiance athletic director Derek Woodley. "We are committed to providing a meaningful and engaging experience for our student-athletes and will develop a plan for the affected sports to include practices, workouts, and training opportunities in the fall.

The college IC team is not making COVID-19 decisions in a vacuum. The health and safety of the people in the DC community is a top priority, as is the overall student experience. All COVID-19-related decisions are thoroughly considered and the best option chosen. Sometimes the top decision needs to be altered within hours or days in order to follow best practices. As stated earlier, the goal is to give students what they expect, a full DC experience or, at least, as close to one as is currently possible in a pandemic.

For up-to-date information on the college's response to COVID-19, visit http://www.defiance.edu/covid19fall.

Originally posted here:
DC adjusting to life back in the classroom - Defiance Crescent News

Nature and nurture both contribute to gender inequality in leadership but that doesn’t mean patriarchy is forever – The Conversation US

Gender expectations can make it harder for women to achieve positions of leadership. Mandel Ngan/AFP via Getty Images

Kamala Harris' candidacy as vice president of the United States provoked familiar criticism, based in part on her identity as a woman. Critics find her too angry, too confident, too competitive. But when women do act less competitively, they are seen as less capable of leadership. This is the double-bind women face when aspiring to leadership positions.

To overcome it, we need to understand where it comes from. Why do gender norms privilege men as leaders?

Some psychologists tie the origins of gender norms to aspects of our nature: the greater physical strength of men, and pregnancy and breastfeeding in women. The idea is that in our hunter-gatherer ancestors, physical strength made men more efficient at, and thus more likely to specialize in, tasks like hunting or warfare. Ancestral women specialized in tasks like infant care, which could be compromised by excessive risk-taking or competitiveness. This got the ball rolling, so the argument goes, toward gender norms that women be less competitive than men, including in the pursuit of leadership.

As an evolutionary anthropologist who studies leadership, I think this evolutionary explanation is not especially persuasive on its own. My view is that gender norms are not just influenced by the evolution of our bodies, but also by the evolution of our minds.

Men didn't specialize in tasks like hunting just because of greater muscle mass, but also because men evolved to take risks to show off and to overtly compete more than women. These are only average differences; many women are more overtly competitive than the average man.

Nevertheless, evolved sex differences in behavior contribute to but neither determine nor ethically justify the gender norms that societies create. I suggest that taking an evolutionary perspective can actually help reduce gender inequality in leadership.

Across animal species, males tend to compete more violently and more frequently than females. Many evolutionary biologists theorize this is due to sex differences in parental investment. As females spend time bearing and nursing young, males have access to a smaller remaining pool of potential mates. Facing greater competition over mates, males tend to evolve greater body mass, weaponry such as horns, and physical aggression to prevail against rivals. Females tend to evolve greater selectivity in their use of aggression, in part because injury can impede parenting.

Do human beings fit these trends? A man of average physical strength is stronger than 99% of women. Even in the most egalitarian small-scale societies, studies find that men are likely to be more physically aggressive and more likely to directly compete against others.

Across studies, women are more often observed to engage in indirect competition, such as gossip or social exclusion. Women's willingness to compete may also be more selective. For example, when competition directly benefits their children or when results are not made public, women, on average, can be as competitive as men.

Men may also have evolved greater motivation to compete by forming large, hierarchical coalitions of same-sex peers. Men can be quicker to resolve low-level conflicts, which goes along with valuing relationships based on how much they help with coalition-building. Women's same-sex coalitions tend to be smaller and more egalitarian, enforced through threat of social exclusion.

Historically, these average sex differences influenced the creation of gender norms to which women and men were expected to conform. These norms restricted women's activities beyond the household and increased men's control over politics.

Importantly, different environments can strengthen or weaken sex differences. Evolution is not deterministic when it comes to human behavior. For example, in societies where warfare was frequent or food production was more reliant on men's labor, you're more likely to find cultural emphasis on male competitiveness and coalition-building and restriction of women's opportunities.

Recognizing the influence of evolution on behavior and gender norms isn't just of academic interest. I think it can suggest ways to reduce gender inequality in leadership in the real world.

First, trying to get women and men to behave the same on average, like simply encouraging women to "lean in," is unlikely to have a tremendous effect.

Second, people should call attention to those traits that help elevate many unqualified men to positions of power. These traits include larger body size, and men's greater tendency to self-promote and to exaggerate their competence.

Third, people should scrutinize the extent to which organizations reward men's more than women's preferred forms of competition and cooperation. Organizational goals can suffer when competitive masculinity dominates an organization's culture.

Fourth, organizations that have a more equitable mix of male and female leaders have access to more diverse leadership styles. This is a good thing when it comes to tackling all kinds of challenges. In certain scenarios, leader effectiveness may hinge more on risk-seeking, direct competitiveness and the creation of rigid hierarchies, on average favoring male leaders.

In other contexts, perhaps the majority, leader effectiveness may depend more on risk aversion, less direct forms of competition, and more empathy-driven forms of relationship-building, on average favoring women leaders. This case has been made for the responses of women-led governments to the current coronavirus pandemic, particularly relative to the bravado of presidents like Donald Trump or Jair Bolsonaro.


Finally, people can rely on other human tendencies, including the impulse to emulate the prestigious, to chip away at gender norms that favor men as leaders. The more that existing leaders, male or female, promote women as leaders, the more it normalizes women at the top. A now-famous study in India randomly assigned villages to elect women as chief councilors; girls in those villages subsequently completed more years of formal education and were more likely to aspire to careers outside the home.

Patriarchy is not an inevitable consequence of human nature. Rather, better understanding of the latter is key to ending the double-bind that keeps women out of leadership.

Go here to see the original:
Nature and nurture both contribute to gender inequality in leadership but that doesn't mean patriarchy is forever - The Conversation US

Researchers urge caution over study linking marijuana to autism – Spectrum

Rolling research: Pregnant women are increasingly using marijuana, prompting researchers to examine its effects on babies' development.

FangXiaNuo / iStock

Women who use marijuana while pregnant may be more likely to give birth to an autistic child, according to a study published last week in Nature Medicine1.

The findings generated widespread press coverage, but researchers are calling for a cautious interpretation of the results in part because the association surfaced through an analysis of birth records, not a controlled study.

"This is still a database study and it's not going to answer all the questions," says lead investigator Daniel Corsi, senior research associate at the Ottawa Hospital Research Institute in Canada. "We don't have perfect data."

The findings are provocative, particularly given the large study size, says Stephen Sheinkopf, associate professor of psychiatry and human behavior at Brown University in Providence, Rhode Island, who was not involved in the work.

Women are increasingly using marijuana during pregnancy, especially as more states in the United States and other countries legalize its use2. The trend has raised questions about how the substance affects fetal development.

But scientists need to take care in communicating the new results, Sheinkopf says: "These are going to be viewed not only by the public but also by policymakers."

The researchers tracked diagnoses of neurodevelopmental conditions, including autism, in more than 500,000 children born between 2007 and 2012 in Ontario, Canada. They used a birth registry to identify mothers who used cannabis during pregnancy. At a first trimester check-in, 0.6 percent of the mothers in the registry reported that they had.

Corsi and his colleagues also checked whether any of the children in the registry had been diagnosed with autism after age 18 months, or attention deficit hyperactivity disorder (ADHD), intellectual disability or learning disorders after age 4.

Of the half-million registered children, 7,125 were diagnosed with autism, the team found. And, Corsi says, the prevalence of autism was higher among children born to women who had used marijuana during pregnancy: 2.22 percent, compared with 1.41 percent among children of women who had not.

But marijuana users differed from nonusers in many other ways that could affect pregnancy outcomes: For example, they were far more likely to have a psychiatric condition, and to use other substances, such as alcohol and prescription drugs, during pregnancy.

To control for these potential confounding factors, the researchers pared down the non-user group from nearly 500,000 to around 170,000 to match them to the user group more closely.

The association remained after the matching, Corsi says, with 2.45 percent of cannabis-exposed children receiving an autism diagnosis, compared with 1.46 percent of children who were not exposed. It also stood after controlling for other factors, such as examining women who used cannabis but no other substances.
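As a quick sanity check on those percentages, the sketch below converts the reported prevalences into relative risks. This is illustrative arithmetic only; the study itself relied on matched cohorts and more careful modeling, so these ratios are simply one way to read the quoted numbers, not a re-analysis.

```python
# Turn the prevalences quoted above into crude relative risks.
# Illustrative arithmetic only, not a re-analysis of the study.

exposed_unmatched, unexposed_unmatched = 0.0222, 0.0141   # 2.22% vs. 1.41%
exposed_matched, unexposed_matched = 0.0245, 0.0146       # 2.45% vs. 1.46%

rr_unmatched = exposed_unmatched / unexposed_unmatched
rr_matched = exposed_matched / unexposed_matched

print(f"crude relative risk (full cohort): {rr_unmatched:.2f}")   # ~1.57
print(f"relative risk after matching:      {rr_matched:.2f}")     # ~1.68
```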

"It's compelling that their primary finding of that association with autism was able to be upheld," says Rose Schrott, a doctoral candidate at Duke University in Durham, North Carolina, who was not involved in the research but has studied the effects of marijuana on autism genes3. The findings provide a strong foundation for additional, more tightly controlled studies, such as in animal models, she says.

There are other confounding factors that the retrospective data can't capture, Corsi and others say.

For example, the information on a mother's psychiatric condition captures only her diagnosis, and does not take into account undiagnosed conditions or those in the father or other family members. Also, the socioeconomic data may be skewed because the researchers measured status using census data on the area where the mothers lived, rather than individual household income.

And the data on marijuana use indicates only whether a mother used marijuana at all, not how much or how often, or whether it was for recreational or medicinal purposes (to treat nausea, for example). Demonstrating that more marijuana use leads to a stronger association with autism would strengthen the finding, says Keely Cheslack-Postava, research scientist at the New York State Psychiatric Institute in New York City, who was not involved in the research.

"It's a great use of the data that was there, but I would like to see that kind of evidence in the future to help us really assess if this is a true association," Cheslack-Postava says. As it stands, the study shows that the relationship between marijuana and autism is a question that deserves further examination.

The study may underestimate marijuana use, Sheinkopf says, because mothers may be reluctant to report marijuana use during pregnancy due to stigma or concerns about legal repercussions.

"There's a long history of efforts to harshly criminalize drug use during pregnancy, and this is damaging to mothers and babies because it shunts women from the healthcare system to the legal system in really damaging ways," he says. "We as clinical scientists need to advocate for the findings to be used to improve healthcare and not for the purposes of criminalization of moms."

Future studies could examine cannabis use in a research setting, where privacy may be better protected than it is in a doctor's office. Corsi is also planning studies that use blood or urine samples to precisely measure cannabis levels during pregnancy.

Excerpt from:
Researchers urge caution over study linking marijuana to autism - Spectrum

What Is Needed to Fix California's Coronavirus Testing? – Governing

(TNS) The ability to get tested for the coronavirus, and get test results quickly, has been one of the most unpredictable and frustrating parts of the nation's pandemic response.

While some people are getting test results back within a day, others, as recently as last week, were waiting two weeks or more, well past the window of time when a positive test result can be used to find a sick individual's contacts to trace and contain spread.

There's been some improvement in recent days, with times tightening in some areas. But the fact is that testing availability has fluctuated dramatically during the pandemic, and it may again.

The situation went from one in which it was nearly impossible to get tested at the start of the pandemic, because hospitals and medical clinics did not have enough tests, to a much-improved one by late April through early June, when testing supply stabilized and the Bay Area flattened the curve. During that period, demand for testing was relatively low, and the medical and lab system could collect specimens and process tests in relatively short order.

But by late June, as the summer surge began to take hold in the Bay Area, testing in many parts of the region faltered. As more and more people sought testing, turnaround times for test results stretched to nearly three weeks for some patients, though the most seriously ill patients were typically able to get results within a day.

Last week, turnaround times began improving. Quest Diagnostics, the largest lab provider in many regions, says it is now reporting test results in two to three days. Napa County, which earlier this month was seeing wait times of up to 19 days, is now seeing wait times of two to three days, health officials said.

So what needs to be done to prevent future testing backlogs in the event of another surge? The Chronicle sought input from local health officials and laboratory directors on what it would take to improve testing for the long haul. They zeroed in on a mix of policy, technology and human behavior.

National strategy to distribute testing supplies to labs with the fastest turnaround times: Labs across the country are competing for the same limited supplies of reagents, plastic pipette tips and other parts and chemicals needed to perform coronavirus tests. And there is little transparency about why some labs are getting more supplies, or supplies more quickly, than others, said several health officers and lab directors. "It's become a little bit of a Wild West," said Dr. Ori Tzvieli, deputy health officer for Contra Costa County. "There's not a coordinated strategy. That's been frustrating."

Many academic labs have the ability to turn tests around faster than commercial labs, but they appear to be lower down on the priority list to receive supplies from manufacturers, said Dr. David Lubarsky, CEO of UC Davis Health. Those supplies should be going to labs that can do tests in 24 or 48 hours, he said. It seems commercial labs are getting the lion's share of supplies, "which is like throwing them into the ocean," he said.

California's testing task force is working to build out the supply chain for swabs, collection kits and other supplies, and has issued a survey to local public health departments and academic labs to assess supply limitations to ensure all labs are being used at full capacity, according to the California Department of Public Health.

Reduce reliance on large commercial labs, instead using labs that can get results faster: Health care providers and publicly funded testing sites should be sending specimens to labs that have faster turnaround times, experts said. Because large labs like Quest and Labcorp have long been the standard lab services providers for hospitals and clinics, they were among the first that states, counties and health care providers turned to for coronavirus testing. But they became overwhelmed by the demand.

The state has worked with its testing contractors, Optum and Verily, which operate dozens of state-funded testing sites, to identify additional labs, said a spokeswoman for the California Department of Public Health. Verily, which sends tests to Quest, plans to bring on two additional large labs this month, a company spokeswoman said. Experts stress the need to continue spreading tests around to other labs run by academic institutions or private companies. Some of this is already underway.

In the Bay Area, for instance, labs at San Francisco's Chan Zuckerberg Biohub and UC Berkeley's Innovative Genomics Institute (IGI) are processing tests within 24 to 48 hours for county public health departments and vulnerable populations in the East Bay. San Francisco's city testing program found early success in securing fast turnaround times by contracting with Color Genomics, the Burlingame firm whose lab is turning tests around in 24 to 48 hours. Other counties, like Alameda and Marin, later began contracting with Color as well to get faster results. And Contra Costa County recently approved contracts with additional private labs that have promised turnaround times of two to three days. "Having multiple contracts with multiple labs will allow us to be nimble and flexible regarding which labs we send tests to, and not reliant on a single or a couple labs if they experience testing delays," Contra Costa County Health Officer Dr. Chris Farnitano said during a Tuesday update to the Board of Supervisors.

Develop faster tests for surveillance, and deploy them widely: Antigen tests are gaining traction among researchers as one potential way to test large numbers of people quickly, without gumming up the lab system. The vast majority of coronavirus testing is currently done through PCR testing at labs. This type of test detects the presence of the virus's genetic material (RNA) and has long been considered the standard for testing for respiratory viruses. Antigen tests detect viral proteins through a less involved, faster and cheaper process that can report results on the spot within minutes, rather than sending the specimen to a lab, which can take days. But they are less sensitive than PCR tests. Some epidemiologists say the tradeoff may be worth it because antigen tests would enable far more people to get tested frequently, get results back fast, and likely still catch most cases. Some envision a day when people can get an at-home test, and then test themselves every morning so they know if they are negative and can go to work or school, or positive and should stay home.
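The frequency-versus-sensitivity tradeoff can be illustrated with a small sketch. The sensitivities (85% for an antigen test, 98% for PCR), the five-day detectable window, and the assumption that repeated tests are independent are all ours for illustration; they are not figures from the article, and the sketch ignores PCR turnaround delays, which only strengthen the case for frequent rapid testing.

```python
# Illustrative comparison of frequent low-sensitivity testing vs. infrequent
# high-sensitivity testing. All parameters are assumptions for illustration.

def detection_probability(sensitivity: float, tests_in_window: int) -> float:
    """Probability that at least one test fires during the detectable window,
    treating repeated tests as independent draws (a simplification)."""
    return 1 - (1 - sensitivity) ** tests_in_window

# Assume an infection is detectable for about five days.
daily_antigen = detection_probability(0.85, 5)   # one antigen test per day -> five chances
weekly_pcr = detection_probability(0.98, 1)      # roughly one PCR test falls in the window

print(f"daily antigen testing catches the case with probability {daily_antigen:.3f}")  # ~1.000
print(f"weekly PCR testing catches the case with probability {weekly_pcr:.3f}")        # 0.980
```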

"I don't see another solution at the moment," Dr. Michael Mina, a Harvard epidemiologist and proponent of rapid antigen testing, said last week during UCSF Medical Grand Rounds, a weekly meeting of medical experts. "This is simple technology. ... This is the kind of thing we shouldn't be asking: Do companies have them ready to build at the point? We should be saying, How do we get the federal government to use all their might and resources to start making these? And for a fraction of the cost of the most recent stimulus bill passed for coronavirus response, we can have every American using one of these every single day for a year."

Antigen tests are not yet widely used. In July, federal health officials began shipping millions of the tests to nursing homes across the United States, including more than a dozen in the Bay Area. There are two antigen tests that have received FDA Emergency Use Authorization, made by Becton Dickinson and Quidel, and both can only be done for symptomatic people. If the data on antigen testing turns out to be good and the FDA authorizes their use for asymptomatic people as well, opening the door for daily at-home testing, that could be a game changer, said Nam Tran, who oversees coronavirus testing at UC Davis Medical Center.

Experts also say saliva tests, which similarly are not yet widely used except by some professional sports teams to test players and staff, also hold promise. UC Berkeley's IGI in late July began a research study on saliva tests, administering them to thousands of UC Berkeley students, faculty and staff to see if the test is sensitive and specific enough to use in a clinical setting. If it is, it could greatly increase access to testing, since saliva is easier to collect than having a health care professional do a nasal swab. And it could potentially be done in people's homes, making testing more frequent and accessible.

Rethink who needs to get tested: The early narrative around testing was to test everyone, regardless of the severity of symptoms and even if they did not have symptoms, since many people with the coronavirus are asymptomatic. But that's part of the reason testing demand is overwhelming supply, said Solano County Health Officer Dr. Bela Matyas. Testing should be limited to people when knowing the result will affect their treatment plan, such as hospitalized patients, and for surveillance in nursing homes and prisons where the risk of spread is highest, Matyas said. The so-called "worried well" shouldn't bother getting tested, at least not when the testing system is overwhelmed, and neither should mildly symptomatic people; they should simply assume they have it and self-quarantine for 10 days, he said. That would help clear up the backlog.

Adjust human behavior: Local health officials have identified social gatherings of friends and family, where households mix, as one of the most common ways the coronavirus is spreading. If people practiced social distancing and mask-wearing more consistently in these settings, it would drive down demand for testing by driving down the disease rate in the population. "All of the control is in our own hands if we exercise control," said Matyas. "If we practiced some level of social distancing in our social interactions with family and friends, we'd control the outbreak and thereby control the testing problem. But that's not an easy thing for people to connect the dots on. ... It's a low-tech, cheap solution."

© 2020 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.

Read the original:
What Is Needed to Fix California's Coronavirus Testing? - Governing

Tough, timely and team-driven: 50 years of energy research – Princeton University

Princeton's vital research across the spectrum of environmental issues is today, and will continue to be, pivotal to solving some of humanity's toughest problems. Our impact is built on a long, deep, broad legacy of personal commitment, intellectual leadership, perseverance and innovation. This article is part of a series to present the sweep of Princeton's environmental excellence over the past half-century.

Yueh-Lin (Lynn) Loo's moment of clarity came while sitting at a long wooden conference table at Princeton University's Maeder Hall Auditorium. The director of the Andlinger Center for Energy and the Environment was leading a meeting with a renowned Princeton political scientist, psychologist, economist and esteemed engineering colleagues, who were all gathered to discuss a massive problem: how to provide energy to the world while simultaneously eliminating greenhouse gas emissions.

"I got goosebumps along the back of my neck," said Loo, the Theodora D. '78 and William H. Walton III '74 Professor in Engineering and professor of chemical and biological engineering.

"Seeing so many experts at the table, many of whom had never worked on energy before, showed me that we had built something whose sum was greater than the individual parts," said Loo.

The scene took place in June 2019 at the inaugural workshop of Rapid Switch, an international research collaboration spearheaded by the Andlinger Center. Its focus is accelerating decarbonization efforts globally, region by region and sector by sector.

Fifty years before that meeting in Maeder Hall, in 1969, a similar new collaboration was being forged at the University. A wave of energy and environmental problems was coming to bear in the United States, from the Cuyahoga River fire to the Santa Barbara oil well blowout. Universities were grappling with how to respond.

Princeton University President Robert F. Goheen learned about a young physicist, an assistant professor at Yale University, with a passion for environmental issues. The professor had just published a book, Patient Earth, and was looking to move from the abstract realm of physics to research that would more directly protect the planet. Robert Socolow was recruited to build a multidisciplinary research program on energy and the environment, the first of its kind at the University.

"Princeton's offer could not have been more exciting to me; building an interdisciplinary center was exactly what I wanted to do," said Socolow, now professor of mechanical and aerospace engineering, emeritus. What became the Center for Energy and Environmental Studies (CEES) was formally founded in 1971 in the School of Engineering and Applied Science, as Robert Jahn became its dean. Irvin Glassman, a former professor of mechanical and aerospace engineering, during his tenure as CEES director put the center on the campus map.


According to Socolow, the University's response to the environmental issues of the day was unusually robust. He said other Ivy League institutions were not looking to invest in a tenure-track faculty member with an environmental mission.

During its 30-year span, CEES responded to national issues and provided meaningful, timely research.

Robert Williams, a senior research scientist, emeritus, and founder of the CEES Energy Systems Analysis Group, and Frank von Hippel, professor of public and international affairs, emeritus, and senior research physicist, led the burgeoning research areas of energy systems and energy security. With increased interest in conservation and nuclear power after the 1973 oil crisis, the researchers made recommendations on nuclear security, and demonstrated the economic and environmental benefits of cogeneration in power plants.

The center's research paved the way for the passage of the Public Utility Regulatory Policies Act (PURPA), which promoted energy conservation and deregulated the electric industry in favor of competition among electric producers. The center's research ranged from developing the study of energy use in buildings to evaluating energy use and fuel efficiency for the automobile industry to sustainable global development, with close collaborations with top researchers in Brazil, India and Europe. In 1993, Williams and von Hippel were named MacArthur Genius Fellows. It was the first time two scholars from the same academic unit of any university were recognized at the same time with this honor.


During the same period, Princeton University became a leader in the field of nuclear fusion, the source of energy that powers stars, including the sun. In 1951, Princeton astrophysicist Lyman Spitzer met with the Atomic Energy Commission and proposed a method for controlled fusion on Earth. Realizing that fusion could become an inexhaustible energy source, the Commission greenlighted the project. After being declassified in 1958, the program became the Princeton Plasma Physics Laboratory, a U.S. national lab managed by Princeton University.

"Princeton's always been the leader in the world. Princeton Plasma Physics Laboratory is the most famous lab in fusion and has been since it was declassified in 1958, said Steven Cowley, director of the Princeton Plasma Physics Laboratory (PPPL) and former chief executive of the United Kingdom Atomic Energy Authority.

PPPL was the first in the world to produce substantial amounts of fusion energy, generating 10 million watts of power with a core fusion temperature of 250 million degrees Celsius in 1994.

The draw of fusion energy today is that it could be a nearly limitless source of carbon-free energy that would help the United States and the world lower their carbon footprints by weaning energy systems off of oil, coal and gas. PPPL is currently working to develop strategies to lay the groundwork for commercializing fusion in the latter half of the century, while collaborating on the world's most advanced fusion reactor, the International Thermonuclear Experimental Reactor, or ITER, which is under construction in France.

In the late 20th century, the focus among scientists tracking environmental problems shifted to another issue: carbon dioxide. Fears about whether fossil fuels would run out or remain affordable, about nuclear accidents, and about air and water pollution were slowly overtaken by concern over the greenhouse effect on the global climate.

"We were beginning to understand that everyday human activities could overwhelm the earth," said Socolow.

Robert Williams, of the Energy Systems Analysis Group, recognized that carbon dioxide could be removed from the flues of power plants and stored instead of being released into the atmosphere. Geologists identified that there was adequate geological storage underground, which environmentalists regarded as a safer option compared to ocean storage. Williams and Socolow caught the attention of BP, which was looking to the academic community for support in this area. They teamed up with Stephen Pacala, the Frederick D. Petrie Professor in Ecology and Evolutionary Biology, and their proposal to BP was chosen over proposals from Stanford and MIT.

"Very few people at Princeton thought we could beat Stanford and MIT," said Socolow. "But we presented ourselves as looking at a whole environmental problem, not at narrow parts of it."

BP awarded the multi-million-dollar grant to Princeton, which established the Carbon Mitigation Initiative (CMI) in 2000 as part of the Princeton Environmental Institute. To this day, CMI research focuses on advancing measurements and modeling of atmospheric, ocean, land and ice biogeochemical processes, along with energy technology and integration, to address the carbon and climate change problem.

In 2007, the Intergovernmental Panel on Climate Change (IPCC) issued its fourth assessment report, which presented the scientific consensus around climate change and pointed to human activity as the cause.

For Emily Carter, who was then a Princeton professor of mechanical and aerospace engineering and applied and computational mathematics, it was the first time an IPCC report unequivocally convinced her that human beings are having a profound effect on the climate. At that moment, Carter, a chemist and physicist by training, shifted her entire research program to focus on sustainable energy. Carter made the decision to be very intentional with her work, and to ensure that every grant she wrote was "making use of my expertise to try to get us off of fossil fuels, to work on sustainable energy technologies," said Carter in an interview for the She Roars podcast.

At the same time, University leadership saw the need to redouble efforts to contribute meaningfully to the pressing issues of energy and climate, as it had done for the environment, under president Harold T. Shapiro more than a decade before with the founding of the Princeton Environmental Institute.

Shirley M. Tilghman, president of the University, emeritus, said she knew that the University had to act, and it had to be dramatic and significant.

"If we were a serious research university in the 21st century, we had to have a strong presence in the field of energy research," said Tilghman, who is also a professor of molecular biology and public affairs, emeritus.

Tilghman found the support she needed to launch a new effort in alumnus Gerhard R. Andlinger of the Class of 1952, a businessman and philanthropist, who donated $100 million to establish the Andlinger Center for Energy and the Environment in 2008. Its mission was to create solutions for energy and environmental problems, and Carter would be the founding director.

Carter charged ahead with confidence, bringing to life the vision of a pan-University center dedicated to developing solutions, with a focus not just on engineering, but also on policy and human behavior. She sought out every relevant department to participate and collaborate with the center through grants, partnerships and recruitment efforts. The Andlinger Center brought in joint-appointed faculty members who worked on solutions ranging from low-carbon cements to technologies for improved power delivery to frameworks for environmental decision-making.

Loo, then associate director, succeeded Carter, who became dean of Princeton's engineering school and later provost at UCLA. As associate director, Loo founded the center's corporate partnership program, Princeton E-ffiliates Partnership, which aims to enable transformational innovations and move technologies quickly to market by engaging with industry stakeholders.

Loo strategically focused on external engagement and high-impact projects, and guided the research community to work on what she saw as the practical, unanswered questions of the century. She incorporated the Energy Systems Analysis Group into the center. She challenged researchers to identify pathways to decarbonization that are feasible and effective in all parts of the world, including areas still ramping up access to energy for growing populations.

The center also developed and launched the University's first executive education program, aimed at equipping decision-makers to think critically and creatively about their roles in solving environmental and climate problems and to guide their organizations in support of this. In his opening remarks to the participants in 2018, Princeton University President Christopher L. Eisgruber said that the program, executed in collaboration with the World Economic Forum for its class of Young Global Leaders, exemplifies the University's increased commitment to partnerships that bring together the academy and entrepreneurial sectors to drive impact and progress. The center continues to investigate and assess new ways for countries, communities and companies to thrive while protecting the environment and mitigating climate effects.

Five decades after Socolow's initiative helped lay the foundations for modern environmental research, and a decade since the Andlinger Center's establishment, Loo takes pride in the community the Andlinger Center has built and everyone who continues to join.

With Rapid Switch, Loo hopes to bring all necessary specialties together to expand global energy access and stymie climate change, building on a strong history of collaboration and action in this realm. No individual research group, or even a whole institution, will have all the expertise to solve these complex challenges, "but what I can do is bring people together," Loo said.

Read the original post:
Tough, timely and team-driven: 50 years of energy research - Princeton University

When Several Lines Are Better Than One – Knowledge@Wharton – Knowledge@Wharton

Everyone knows the existential dread that comes along with standing in line for what seems like an eternity. But new research by Wharton operations, information and decisions professor Hummy Song, Guillaume Roels from INSEAD and Mor Armony from New York University's Stern School of Business suggests that knowledge-based industries should rethink how they approach this aspect of customer service. In this article, originally published in INSEAD Knowledge, the researchers write about their findings and how operational design can change organizational culture and improve performance.

We've all been in lines that seem to last forever, especially if we choose our queue at the checkout and the one next to ours is moving faster. You know the existential dread that comes along with standing in a dedicated queue and waiting interminably. To make service of all kinds more efficient, the predominant thinking in operations management is to form a single serpentine queue that feeds different servers: a pooled queue.

Traditional operations management theory has determined that pooling is more efficient. And it may be, if tasks or widgets are the items in the queue and it's machines, not human beings, that are processing them. In a system with dedicated queues, it's possible to have one queue that's empty and another that's full, with no way to rebalance this. If the queue contains customers, naturally they can switch to the empty queue. But when we consider job assignments, for example, these can't just move across queues. So the dedicated queue is viewed as less efficient than a pooled one in terms of throughput and waiting time.

An impactful paper by Hummy Song and her co-authors focused on waiting rooms in emergency departments and found that when a part of the emergency department (ED) at a Kaiser Permanente hospital in California changed from a pooled queue to dedicated queues, patients had shorter wait times and a shorter length of stay. In the pooled setup, patients in the waiting room were assigned to a physician only when one became available. The switch to a dedicated system meant that as soon as patients were triaged, they were assigned to a particular physician and that physician's queue. Interestingly, the researchers found the opposite of traditional efficiency in queueing theory; patients had a shorter stay in the ED when they were in dedicated queues. Physicians anecdotally described how, in the dedicated setup, they felt more responsible for the people assigned to them in the waiting room before they actually saw them as a patient.


It's unusual in operations management to consider people in all their humanity, with their own idiosyncratic biases and behaviors. In "Pooling Queues with Strategic Servers: The Effects of Customer Ownership," forthcoming in Operations Research, we show that efficiency is improved across the system if organizations consider a concept that may be unfamiliar to scholars in this area: customer ownership. Service providers may develop a greater sense of obligation and accountability when they see all the customers in their queue as belonging to them, rather than as an indiscriminate pool of demand.

We modelled this upending of queueing theory using customer ownership as the motivator. We described the split in servers' sense of customer ownership between when customers enter the system and when they are right in front of the server. Our theory is that human servers have human reactions that impact operational effectiveness, like how long someone spends in an ED.

When Does a Person Become a Customer?

When we talk about customer ownership, it's like the sense of responsibility that ED doctors had for people in the waiting room once they were triaged. Other doctors may feel customer ownership when the patient is in front of them. In our model, we stripped out financial incentive notions (imagine call center workers who get a bonus dependent on short wait times, for instance) to consider customer ownership on its own. (In fact, doctors at Kaiser are paid a fixed wage, so they have no financial incentive to see more patients.) Organizational behavior research has documented a sense of organizational ownership, but customer ownership had not been previously analytically modelled, nor had its consequences on process performance been considered.

In the model, we distinguished between the customers who are already in the room and the entire scope of the system. System-wide customer ownership is a combination of the people who are currently being served plus those still in the queue.

Servers either care only about the customer they are currently serving, or care not only about that person but also about future customers. Incorporated in customer ownership is an interesting time dimension: whether servers focus on the present or the future, and how they behave accordingly.

The Type of Task Matters

With a combination of game theory and queueing theory, one of the innovations of this paper is how we model the discretion that servers have in terms of their choice of the pace of work, which seems endogenous in practice.

In some cases, servers have very limited discretion. For instance, if you have to administer a survey of ten yes/no questions, you might have limited flexibility for taking much more or much less than the five minutes the survey was designed to last. But if the task is more knowledge-intensive, like physicians seeing a variety of cases in the ED, it's up to the server to decide how much time is needed. There is a clear distinction between the routine tasks where servers have some discretion and those that are typically more knowledge-intensive, where servers may have more discretion about how much time they need to complete the work effectively.


The type of service matters when choosing an efficient queueing system. With a standard type of task, the traditional theory that pooled queues are the most effective mechanism holds. But if the service provided is knowledge-intensive, it's important to understand that the effect can be flipped.

We modelled the utility of servers and how their notion of customer ownership maximizes it. This paper formalizes what was observed in Song's earlier work and demonstrates that the phenomenon can be justified on rational grounds. Our work is grounded in practice, and we built a theory to explain how it is transferrable to other contexts.
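To see how a modest change in server pace can flip the textbook pooling advantage, here is a minimal sketch using standard M/M/c formulas rather than the paper's game-theoretic model. The arrival rate, the service rates, and the 20% speed-up that dedicated servers get from a stronger sense of customer ownership are all illustrative assumptions of ours.

```python
import math

def mm_c_sojourn(arrival_rate: float, service_rate: float, servers: int) -> float:
    """Mean time in system for an M/M/c queue, via the Erlang C waiting probability."""
    offered_load = arrival_rate / service_rate
    utilization = offered_load / servers
    assert utilization < 1, "queue must be stable"
    partial_sum = sum(offered_load**k / math.factorial(k) for k in range(servers))
    last_term = (offered_load**servers / math.factorial(servers)) / (1 - utilization)
    p_wait = last_term / (partial_sum + last_term)          # probability an arrival must wait
    wait_in_queue = p_wait / (servers * service_rate - arrival_rate)
    return wait_in_queue + 1 / service_rate                  # waiting plus service time

lam = 1.6           # customers per hour arriving to the whole system (assumption)
mu_pooled = 1.0     # service rate per server under a pooled queue (assumption)
mu_dedicated = 1.2  # dedicated servers work ~20% faster out of ownership (assumption)

pooled = mm_c_sojourn(lam, mu_pooled, servers=2)            # one pooled M/M/2 queue
dedicated = mm_c_sojourn(lam / 2, mu_dedicated, servers=1)  # each server owns half the demand

print(f"pooled (M/M/2):        {pooled:.2f} hours in system")    # ~2.78
print(f"dedicated (two M/M/1): {dedicated:.2f} hours in system")  # 2.50
```

Setting mu_dedicated back to 1.0 makes the dedicated system markedly worse (about 5 hours versus roughly 2.78), recovering the traditional pooling result; the flip only appears once servers respond to ownership by working faster.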

Our paper highlights the importance of accounting for human behavior on the part of the server, shifting attention away from the customers and toward the human impact on process performance.

Broader Implications of Customer Ownership

Queues aren't only at the grocer's or the airport. Managers in certain domains may need to consider redesigning their queueing systems, not only when it comes to assigning customers to servers but also when assigning work to team members. Another aspect to consider is the attention that individual contributors in knowledge-intensive services pay to their own task queues. Think emails, assignments and other deliverables. Our paper suggests that in knowledge-intensive services where workers have a lot of discretion about the amount of time spent on a project, queues need to be managed a little bit differently. We find dedicating assignments to certain servers, rather than pooling them, to be more efficient.

Customer ownership is a concept that reflects organizational culture. As such, it can be modified, like other aspects of culture. Operations management often takes organizational culture for granted; our paper shows that operational design can shape it and thus impact performance. In particular, no one had previously pointed to queue configuration, which is an important operational lever, as a way to shape organizational culture. Yet switching to dedicated queues can lead to greater customer ownership.

When we think about queues, we usually think about them from the customers' point of view. But we need to look at the human on the other end of the queue. Including a server's customer ownership in consideration when planning queues will shorten the time for everyone.

Here is the original post:
When Several Lines Are Better Than One - Knowledge@Wharton - Knowledge@Wharton