The NBER Reporter 2007 Number 4: Program and Working Group Meetings
Severe flight-to-quality episodes involve uncertainty about the environment and not simply risk about asset payoffs. The uncertainty is triggered by unusual events and untested financial innovations that lead agents to question their world-view. Caballero and Krishnamurthy present a model of crises and central bank policy that incorporates Knightian uncertainty. Their model can explain crisis regularities, such as market-wide capital immobility, agents' disengagement from risk, and liquidity hoarding. They identify a social cost of these behaviors, and a benefit of a lender-of-last-resort facility. The benefit is particularly high because public and private insurance are complements during uncertainty-driven crises.
Lucas proposes a model to describe the evolution of real GDPs in the world economy that is intended to apply to all open economies. The five parameters of the model are calibrated using the Sachs-Warner definition of openness and time-series and cross-section data on incomes and other variables from the nineteenth and twentieth centuries. The model predicts convergence of income levels and growth rates and has strong but reasonable implications for transition dynamics.
Adams, Einav, and Levin present new evidence on consumer liquidity constraints and the credit market conditions that might give rise to them. Their analysis is based on unique data from a large auto sales company that serves the subprime market. They first document the role of short-term liquidity in driving purchasing behavior, including sharp increases in demand during tax rebate season and a high sensitivity to minimum down payment requirements. They then explore the informational problems facing subprime lenders. They find that default rates rise significantly with loan size, providing a rationale for lenders to impose loan caps because of moral hazard. They also find that borrowers at the highest risk of default demand the largest loans, but the degree of adverse selection is mitigated substantially by effective risk-based pricing.
Amador and Weill present a micro-founded economy with money where agents are uncertain about an aggregate productivity parameter and the monetary aggregate. They show that when agents learn from the distribution of prices, an increase in public information about the monetary aggregate can reduce the information content of the price system and welfare.
A central assumption of open economy macro models with nominal rigidities relates to the currency in which goods are priced, whether there is so-called producer currency pricing or local currency pricing. This has important implications for exchange rate pass-through and optimal exchange rate policy. Using novel transaction-level information on currency and prices for U.S. imports, Gopinath, Itskhoki, and Rigobon show that even conditional on a price change, there is a large difference in the pass-through of the average good priced in dollars (25 percent) versus non-dollars (95 percent). This finding is contrary to the assumption in a large class of models that the currency of pricing is exogenous, and is evidence of an important selection effect that results from endogenous currency choice. The authors describe a model of optimal currency choice in an environment of staggered price setting and show that the empirical evidence strongly supports the model's predictions of the relation between currency choice and pass-through. They further document evidence of significant real rigidities, with the pass-through of dollar pricers increasing above 50 percent in the long run. Finally, they numerically illustrate the currency choice decision in both a Calvo and a menu-cost model with variable mark-ups and imported intermediate inputs and evaluate the ability of these models to match pass-through patterns documented in the data.
Bloom, Sadun, and Van Reenen show that U.S. multinationals operating in the United Kingdom (UK) have higher productivity than non-U.S. multinationals in the UK, primarily because of the higher productivity of their information technologies (IT). Furthermore, establishments that are taken over by U.S. multinationals increase the productivity of their IT, whereas observationally identical establishments taken over by non-U.S. multinationals do not. One explanation for these patterns is that U.S. firms are organized in a way that allows them to use new technologies more efficiently. The authors develop a model of endogenously chosen organizational form and IT to explain these new micro and macro findings.
Acharya and Viswanathan consider collateral as a way to relax rationing in a moral hazard set-up. Rationed firms must liquidate some of their assets. Non-rationed firms purchase assets, but their borrowing capacity also is limited by moral hazard. The equilibrium price exhibits cash-in-the-market pricing and depends on the entire distribution of liquidity shocks in the economy. As the intensity of moral hazard varies, the equilibrium price and the level of collateral requirements will be negatively related. Contrary to models in which collateral requirements are exogenously specified, here collateral has a stabilizing role on prices: for any given intensity of moral hazard problem, asset sales are smaller and equilibrium price, in turn, is higher in the presence of optimal collateral requirements. This price-stabilizing role implies that the ex-ante debt capacity of firms is higher with collateral, which in turn stabilizes prices further, resulting in an important feedback: collateral reduces the proportion of ex-ante rationed firms and thus leads to greater market participation.
Feedback effects from asset prices to firm cash flows have been documented empirically. This finding raises a question: How are asset prices determined if price affects the fundamental value, which in turn affects the price? In such an environment, by buying assets that others are buying, investors insure high future cash flows for the firm and subsequent high returns for themselves. Hence, investors have an incentive to coordinate, which may generate self-fulfilling beliefs and multiple equilibria. Using insights from global games, Ozdenoren and Yuan pin down investors' beliefs, analyze equilibrium prices, and find that strong feedback leads to higher excess volatility.
Diebold and Strasser use market microstructure theory to derive the cross-correlation function between latent returns and market microstructure noise. The cross-correlation at zero displacement is typically negative, and cross-correlations at nonzero displacements are positive and decay geometrically. When market makers are very risk averse, the cross-correlation pattern is inverted. The results may be useful for choosing among different market microstructure models and estimation of noise-robust measures of realized volatility.
At the end of 2006 the New York Stock Exchange (NYSE) introduced its Hybrid market. Hybrid greatly expanded electronic trading and reduced floor trading. Using this change as an instrument for floor activity, Hendershott and Moulton test the benefits of the repeated interactions available on the floor against the costs of on-floor traders being advantaged relative to off-floor traders. They find that measures of trading costs increase as floor activity decreases. In addition, cooperation on the floor decreases and transitory volatility increases. However, Hybrid increases speed of execution. Together these findings support the existence of a tradeoff along these measures of market quality. Hybrid moved the NYSE to a different position on that tradeoff.
Nigmatullin, Tyurin, and Yin use a recent sample of data on Dow Jones (DJIA) stocks traded on the NYSE to study relationships between the state of the limit order book and high-frequency returns, volatility, and other market variables in the framework of heterogeneous vector autoregressions. They show that most of the information contained in the limit order book can be captured by four interpretable common factors related to the book's "shape". They demonstrate that these common factors have significant interactions with price volatility and instantaneous returns, even after controlling for total trade volume and signed order flow. However, the estimated coefficients across stocks exhibit considerable heterogeneity, which should be taken into account for proper inference. The impulse-response analysis reveals significant feedback between the shape of the limit order book and transaction volume, bid-ask spread, return, and short-run volatility.
Chemmanur, He, and Hu use a large sample of transaction-level institutional trading data to study the role of institutional investors in Seasoned Equity Offerings (SEOs), and in particular, to distinguish between a "manipulative trading" role and an "information production" role, as postulated by the theoretical literature on SEOs. The data allow them to explicitly identify institutional SEO share allocations for the first time in the literature. They study whether institutional investors have private information about SEOs and the consequences of that information for SEO share allocation; institutional trading before and after the SEO and realized trading profitability; and the SEO discount. Their results can be summarized as follows: First, institutions are able to identify and obtain more allocations in SEOs with better long-term returns. Second, more pre-offer net buying of the SEO firm's equity by institutional investors is associated with more institutional SEO share allocations, and also more post-offer net buying. Third, institutions flip only a small fraction of their SEO share allocations: 3.2 percent during the first two days post-SEO. However, this lack of flipping does not appear to be costly to institutional investors: there is no significant difference between the extent of underpricing and the realized profitability of institutional SEO share allocation sales. Fourth, institutional investors' post-SEO trading significantly outperforms a naive buy-and-hold trading strategy in SEOs. Further, the profitability of post-offer trading in SEOs where institutions obtained allocations is higher than that of trading in SEOs where they did not obtain allocations. Finally, more pre-offer institutional net buying and larger institutional SEO share allocations are associated with a smaller SEO discount. Overall, these results are consistent with an information production rather than a manipulative trading role for institutional investors in SEOs.
Benigno analyzes a dynamic model of consumption and portfolio decisions in which agents seek robust choices against some misspecification of the model's probability distribution. This near-rational environment can, at the same time, explain imperfect international portfolio diversification and break the link between cross-country consumption correlation and the real exchange rate as usually implied by standard preference specifications. Portfolio decisions imply moment restrictions on asset prices that are useful for extracting information on the degree of near-rationality present in the data.
The dramatic increase in the gross stock of foreign assets and liabilities has revived interest in the portfolio theory of international investment. Evidence on the validity of this theory has always been scarce and inconclusive. Hau and Rey derive testable empirical implications from microeconomic foundations. They then use a new comprehensive dataset on the investment decisions of approximately 2,000 international equity funds domiciled in four different currency areas to revisit the empirical relevance of international portfolio rebalancing. The disaggregated data structure allows them to examine whether foreign exchange and equity risk triggers the predicted rebalancing at the fund level. They find strong support for portfolio rebalancing behavior aimed at reducing both exchange rate and equity risk exposure for most countries.
If the United States persistently earned substantially more on its foreign investments ("U.S. claims") than foreigners earned on their U.S. investments ("U.S. liabilities"), there would be an increased likelihood that the current environment of sizeable global imbalances would evolve in a benign manner. However, Curcuru, Dvorak, and Warnock find that the returns differential of U.S. claims over U.S. liabilities is far smaller than previously reported and, importantly, is near zero for portfolio equity and debt securities. For portfolio securities, they confirm their finding using a separate dataset on the actual foreign equity and bond portfolios of U.S. investors and the U.S. equity and bond portfolios of foreign investors; in the context of equity and bond portfolios, they find no evidence that the United States can count on earning more on its claims than it pays on its liabilities. Finally, they show that their finding of a near zero returns differential is consistent with observed patterns of cumulated current account deficits, the net international investment position, and the net income balance.
Lane and Shambaugh aim to gain a better empirical understanding of the international financial implications of currency movements. To this end, they construct a database of international currency exposures for a large panel of countries over 1990-2004. They show that trade-weighted exchange rate indexes are insufficient for understanding the financial impact of currency movements. Further, they demonstrate that many developing countries hold short foreign-currency positions, leaving them open to negative valuation effects when the domestic currency depreciates. However, they also show that many of these countries have substantially reduced their foreign currency exposure over the last decade. Last, they show that their currency measure has high explanatory power for the valuation term in net foreign asset dynamics: exchange rate valuation shocks are sizable, not quickly reversed, and may entail substantial wealth shocks.
Financial globalization was off to a rocky start in emerging economies hit by Sudden Stops since the mid-1990s. Foreign reserves grew very rapidly during this period, and hence it is often argued that we live in the era of a New Mercantilism in which large stocks of reserves are a war-chest for defense against Sudden Stops. Durdu, Mendoza, and Terrones conduct a quantitative assessment of this argument using a stochastic intertemporal equilibrium framework with incomplete asset markets in which precautionary saving affects foreign assets via three mechanisms: business cycle volatility, financial globalization, and Sudden Stop risk. In this framework, Sudden Stops are an equilibrium outcome produced by an endogenous credit constraint that triggers Irving Fisher's debt-deflation mechanism. Their results show that financial globalization and Sudden Stop risk are plausible explanations of the observed surge in reserves but business cycle volatility is not. In fact, business cycle volatility has declined in the post-globalization period. These results hold whether they use the formulation of intertemporal preferences of the Bewley-Aiyagari-Huggett class of precautionary savings models or the Uzawa-Epstein setup with endogenous time preference.
Farhi and Gabaix propose a new model of exchange rates that yields a theory of the forward premium puzzle. Their explanation combines two ingredients: the possibility of rare economic disasters and an asset view of the exchange rate. Their model is frictionless, has complete markets, and works for an arbitrary number of countries. In the model, rare worldwide disasters can occur and affect each country's productivity. Each country's exposure to disaster risk varies over time according to a mean-reverting process. Risky countries command high risk premia: they feature a depreciated exchange rate and a high interest rate. As their risk premium reverts to the mean, their exchange rate appreciates. Therefore, the currencies of high interest rate countries appreciate on average. This provides an explanation for the forward premium puzzle (a.k.a. uncovered interest rate parity puzzle). They then extend the framework to incorporate two factors: a disaster risk factor and a business cycle factor. They calibrate the model and obtain quantitatively realistic values for the volatility of the exchange rate, the forward premium puzzle regression coefficients, and near-random walk exchange rate dynamics. Finally, they work out a model of the stock market, which allows them to make a series of predictions about the joint behavior of exchange rates, bonds, options, and stocks across countries. The evidence from the options market appears to be supportive of the model.
Is social capital long lasting? Does it affect long-term economic performance? To answer these questions, Guiso, Sapienza, and Zingales test Putnam's conjecture that today's marked differences in social capital between the North and South of Italy are attributable to the culture of independence fostered by the free city-state experience in the North of Italy at the turn of the first millennium. The researchers show that the medieval experience of independence does have an impact on social capital within the North, even when they instrument for the probability of becoming a city state with historical factors (such as the Etruscan origin of the city and the presence of a bishop in the year 1000). More importantly, they show that the difference in social capital between towns that had the characteristics to become independent in the Middle Ages and towns that did not is present only in the North (where most of these towns did become independent), not in the South (where the power of the Norman kingdom prevented them from doing so). Their difference-in-difference estimates suggest that at least 50 percent of the North-South gap in social capital is attributable to the lack of a free city-state experience in the South.
Bombardini and Trebbi investigate the relationship between the size of interest groups, in terms of voter representation, and their campaign contributions to politicians. The authors uncover a robust hump-shaped relationship between the voting share of an interest group and its contributions to a legislator. This pattern is rationalized in a simultaneous bilateral bargaining model where the size of an interest group affects the amount of surplus to be split with the politician (thereby increasing contributions), but is also correlated with the strength of direct voter support that the group can offer instead of money (thereby decreasing contributions). The model yields simple structural equations that are estimated at the district level, with data on individual and PAC donations and local employment by sector. This procedure yields structural estimates of electoral uncertainty and politicians' effectiveness as perceived by the interest groups. The approach also implicitly delivers a novel method for estimating the impact of campaign spending on election outcomes: the authors find that an additional vote costs a politician between 100 and 400 dollars depending on the district.
Banerjee and Pande examine how increased voter ethnicization, defined as a greater preference for the party representing one's ethnic group, affects politician quality. If politics is characterized by incomplete policy commitment, then ethnicization reduces average winner quality for the pro-majority party with the opposite true for the minority party. The effect increases with greater numerical dominance of the majority (and thus social homogeneity). Empirical evidence from a survey on politician corruption that the authors conducted in North India is remarkably consistent with their theoretical predictions.
Knight and Schiff investigate the role of momentum in sequential voting systems, such as the U.S. presidential primary. In particular, they develop and estimate a simple discrete choice econometric model with social learning, which generates momentum effects. In the model, voters are uncertain about candidate quality, and voters in late states attempt to infer the information held by voters in early states from aggregate voting returns. Candidates experience momentum effects when their performance in early states exceeds voter expectations. The magnitude of momentum effects depends on voters' prior beliefs about the quality of candidates, expectations about candidate performance, and the degree of variation in state-level preferences. The empirical application focuses on the 2004 Democratic primary. The authors find that Kerry benefitted substantially from surprising wins in early states and took votes away from Dean, who stumbled in early states after holding strong leads in polling data prior to the primary season. The estimated model demonstrates that social learning is strongest in early states and, by the end of the campaign, returns in other states are virtually ignored by voters in the latest states. Finally, the authors simulate the election under a counterfactual simultaneous primary and find that the primary would have been much closer under such a system because of the absence of momentum effects.
The impact of segregation on Black political efficacy is theoretically ambiguous. On one hand, increased contact among Blacks in more segregated areas may mean that Blacks are better able to coordinate political behavior. On the other hand, lesser contact with non-Blacks may mean that Blacks have less political influence over voters of other races. Ananat and Washington investigate this question empirically. They find that exogenous increases in segregation lead to decreases in Black civic efficacy, as measured by an ability to elect Representatives who vote liberally and, more specifically, in favor of legislation that is favored by Blacks. This tendency for Representatives from more segregated metropolitan statistical areas (MSAs) to vote more conservatively arises in spite of the fact that Blacks in more segregated areas hold more liberal political views than do Blacks in less segregated locales. The authors find that this decrease in efficacy is driven by greater divergence between Black and non-Black political views in the most segregated areas. Because Blacks are a minority in every MSA, increased divergence by race implies that the mean Black voter viewpoint is farther away from the mean voter viewpoint. The authors offer suggestive evidence that this increased divergence is attributable to both lower "contact" and to selection of more conservative non-Blacks into more segregated MSAs. Thus, reduced Black political efficacy may be one reason that Blacks in exogenously more segregated areas experience worse economic outcomes.
Previous theoretical and empirical research on term limits has focused on the problem of accountability, that is, the possibility that term-limited politicians exert less effort than those who are eligible to run for reelection. Alt and his co-authors present a model with both accountability and selection effects, in which term limits not only cause incumbents to shirk but also interfere with voters' ability to reelect high-quality candidates. The authors also show how this model can be extended to address the possibility that incumbent quality increases with experience in office. They evaluate the model by taking advantage of variation in gubernatorial term-limit laws across states and over time. They find evidence suggesting that term limits not only cause politicians to shirk, but also reduce expected incumbent quality by limiting voters' ability to select competent politicians and by limiting incumbents' ability to gain experience in office.
Bernheim and Rangel propose a universal choice-theoretic framework for evaluating economic welfare with the following features: 1) In principle, it encompasses all behavioral models; it is applicable irrespective of the processes generating behavior, or of the positive model used to describe behavior. 2) It subsumes standard welfare economics both as a special case (when standard choice axioms are satisfied) and as a limiting case (when behavioral anomalies are small). 3) Like standard welfare economics, it requires only data on choices. 4) It is easily applied in the context of specific behavioral theories, such as the β-δ model of time inconsistency, for which it has novel normative implications. 5) It generates natural counterparts for the standard tools of applied welfare analysis, including compensating and equivalent variation, consumer surplus, Pareto optimality, and the contract curve, and permits a broad generalization of the first welfare theorem. 6) Though not universally discerning, it lends itself to principled refinements.
Engel, Fischer, and Galetovic note that public-private partnerships (PPPs) cannot be justified on the grounds that they free public funds. When PPPs are desirable because the private sector is more efficient, then the contract that optimally trades off demand risk, user-fee distortions, and the opportunity cost of public funds is characterized by a minimum revenue guarantee and a cap on the firm's revenues. Yet income guarantees and revenue sharing arrangements observed in practice differ fundamentally from those suggested by the optimal contract. The optimal contract can be implemented via a competitive auction with realistic informational requirements, and risk allocation under the optimal contract suggests that PPPs are closer to public provision than to privatization.
Actual and projected population aging threatens the financial viability of Pay-As-You-Go (PAYGO) pension programs in many countries. Reforms that adjust the general level of taxes or benefits in PAYGO programs can address the problem of fiscal sustainability, but would have little effect on incentives for work. A new kind of pension program, called Notional Defined Contribution or Non-financial Defined Contribution (NDC), is intended to address both fiscal stability and labor supply incentives. Sweden has developed and implemented an NDC system and some other countries have followed suit. Germany has recently adopted pension reforms that reflect some of the NDC principles, and France is also considering doing so. NDC plans are intended to mimic the structure and incentives of Defined Contribution plans, but with rates of return that are feasible within the PAYGO framework. Auerbach and Lee analyze generational uncertainty and risk sharing in a context of economic and demographic uncertainty, with the goal of finding more general properties of a variety of PAYGO pension systems. They compare the performance of the Swedish system and variants based on it, the German system, and versions of the U.S. system, modified to ensure fiscal sustainability.
Mills and his co-authors evaluate the first controlled field experiment on Individual Development Accounts (IDAs). Including their own contributions and matching funds, treatment group members in the Tulsa, Oklahoma program could accumulate $6,750 for home purchase or $4,500 for other qualified uses. Almost all treatment group members opened accounts, but many withdrew all funds for unqualified purposes. Among renters at the beginning of the experiment, the IDA increased homeownership rates after four years by 7-11 percentage points and reduced non-retirement financial assets by $700-$1,000. The IDA had almost no discernible effect on other subsidized assets, overall wealth, or poverty rates.
Optimal policy rules, including those regarding income taxation, commodity taxation, public goods, and externalities, are typically derived in models with preferences that are homogeneous. Kaplow reconsiders many central results for the case in which preferences for commodities, public goods, and externalities are heterogeneous. When preference differences are observable, standard second-best results in basic settings are unaffected, except those for the optimal income tax. Optimal marginal income tax rates may be higher or lower on types who derive more utility from various goods, depending on the nature of preference differences and the concavity of the social welfare function. When preference differences are unobservable, all policy rules may change. The determinants of even the direction of optimal rule adjustments are many and subtle.
Fang and Gavazza investigate how the employment-based health insurance system in the United States affects individuals' life-cycle health-care decisions. They take the view that health is a form of human capital that affects workers' productivity on the job and derive the implications of employee turnover for the incentives to undertake health investment. Their model suggests that employee turnover leads to dynamic inefficiencies in health investment and, in particular, that the employment-based health insurance system in the United States might lead to an inefficiently low level of individual health during individuals' working ages. Moreover, they show that under-investment in health is positively related to the turnover rate in the workers' industry and increases medical expenditure in retirement. The authors provide empirical evidence for the predictions of the model using two datasets, the Medical Expenditure Panel Survey (MEPS) and the Health and Retirement Study (HRS). In MEPS, they find that employers in industries with high turnover rates are much less likely to offer health insurance to their workers. When employers offer health insurance, the contracts have higher deductibles, and employers' contributions to insurance premiums are lower in high turnover industries. Moreover, workers in high turnover industries have lower medical expenditures and undertake less preventive care. In HRS, the authors find instead that individuals who were employed in high turnover industries have higher medical expenditures when retired. The magnitude of these estimates suggests a significant degree of intertemporal inefficiency in health investment in the United States as a result of the employment-based health insurance system. The authors also evaluate and cast doubt on alternative explanations.
Shleifer and his co-authors present two new datasets that might be of interest to researchers and describe some basic statistical relationships using these and other datasets. The first dataset computes comparable effective corporate tax rates for 85 countries, using a survey of PriceWaterhouseCoopers local offices designed to estimate all corporate, labor, and "other" taxes that each country levies on "the same" firm. The researchers show that these rates are sharply lower than marginal tax rates around the world. The second dataset, collected from national statistical offices, presents official registration rates by new firms in 62 countries. The authors use these datasets, as well as additional publicly available data, to present cross-country evidence that corporate taxes have a large and significant adverse effect on corporate investment and entrepreneurship. They are also associated with a larger unofficial economy, greater reliance on debt versus equity finance, and possibly lower economic growth. Although the tax data presented here has some advantages over what is already available, it is also limited in a number of ways. It does not consider taxes that do not enter the profit and loss statement (such as the VAT), or where the statutory incidence is not on the business. Nor can the researchers deal with the crucial issues of incidence.
Using a variety of datasets in two countries, Cutler and Lleras-Muney examine the relation between education and health behaviors. With a number of different theories, they are able to account for a good share of the education gradient. Income, health insurance, and family background account for about 30 percent. Their most surprising result is that education seems to influence cognitive ability, and cognitive ability in turn leads to healthier behaviors. They estimate that cognitive ability accounts for about 20 percent of the education gradient. Many economic theories stress the role of tastes in explaining behavioral differences: better educated people will have lower discount rates or risk aversion. None of their proxies for discounting or risk aversion explain any of the gradients in health behaviors. They also find that personality factors, such as a sense of control over oneself or one's life, do not account for much of the education gradient. Finally, they find some evidence that the social environment is healthier for the better educated, and accounts for about 10 percent of the education gradient.
The United States has a higher infant mortality rate than most other developed nations. Electronic medical records (EMR) and other healthcare information technology (IT) improvements could reduce that rate, by standardizing treatment options and improving monitoring. Miller and Tucker use variations in state medical privacy laws that affect the usefulness of healthcare IT to quantify how much healthcare IT improves neonatal outcomes. They find that adoption of healthcare IT by hospitals in a county significantly reduces infant mortality rates in that county, and that the gains are significantly larger for African-Americans than for Whites. They also find that adoption of radiological information systems in particular matters more in counties with above-average ultrasound use.
Although policymakers have increasingly turned to provider report cards as a tool to improve health care quality, existing studies of their effects provide mixed evidence that they influence consumer choices. Bundorf and her co-authors examine the effects of providing consumers with quality information in the context of fertility clinics providing Assisted Reproductive Therapies (ART). They report three main findings. First, clinics with higher birthrates had larger market shares after versus before the adoption of report cards. Second, clinics with a disproportionate share of young, easy-to-treat patients had lower market shares after versus before adoption. This suggests that consumers take into account information on patient mix when evaluating clinic outcomes. Third, report cards had larger effects on consumers and clinics from states with ART insurance coverage mandates. The researchers conclude that quality report cards have potential to influence provider behavior in this setting.
Aizcorbe reported on plans by the Bureau of Economic Analysis (BEA) to construct a set of satellite accounts for health that will provide supplementary measures of health spending. Measures for this large and growing sector of the economy have limitations that have been documented in the academic literature. The problems are particularly acute for the official price indexes that the BEA uses to deflate nominal expenditures into measures of real spending. One problem with these price indexes is that they do not accurately account for the introduction of new treatments that provide the same outcomes, in terms of improved health, but at lower cost. Preliminary results from work by BEA researchers demonstrated that this problem is numerically important for indexes that cover a wide range of conditions. Work is currently underway to explore different strategies and data sources to correct this problem.
Japanese atomic bomb survivors irradiated 8 to 25 weeks after ovulation subsequently suffered reduced IQ (Otake and Schull, 1998). Whether these findings generalize to low doses (less than 10 mGy) has not been established. Almond, Edlund, and Palme exploit the natural experiment generated by the Chernobyl nuclear accident in April 1986, which caused a spike in radiation levels in Sweden. In a comprehensive dataset of 562,637 Swedes born in 1983-88, the researchers find that the cohort in utero during the Chernobyl accident had worse school outcomes than adjacent birth cohorts, and that this deterioration was largest for those exposed approximately 8 to 25 weeks post conception. Moreover, they find larger damage among students born in regions that received more fallout: students from the eight most affected municipalities were 3.6 percentage points less likely to qualify for high school as a result of the fallout. These findings suggest that fetal exposure to ionizing radiation damages cognitive ability at radiation levels previously considered safe.
Health care spending varies widely across markets, yet there is little evidence that higher spending translates into better health outcomes, possibly because of endogeneity bias. The main innovation in Doyle's paper is to compare outcomes of patients who are exposed to health care systems that were not designed for them: patients who are far from home when a health emergency strikes. Considering the universe of emergencies in Florida from 1996 to 2003, he finds that visitors who become ill in high-spending areas have significantly lower mortality rates than similar visitors in lower-spending areas. The results are robust across different types of patients and within groups of destinations that appear to be close demand substitutes.
Gaynor, Kleiner, and Vogt develop and estimate a new model of hospital costs, using a new output index to examine the cost efficiencies brought about by increases in the magnitude of operations (scale economies) and the variety of services offered (scope economies) by hospitals. Using this output index, the researchers estimate a translog cost function with a dataset of 320 California hospitals for the year 2003. Their results show that scale economies are exhausted at a higher level than previously estimated (around 280 beds). There is evidence of scope economies between primary and secondary care, and between tertiary and outpatient care. The cost function specification also allows for the possibility of scope economies across conditions within primary, secondary, and tertiary care. The estimates show scope dis-economies across conditions for more intensive types of treatment (tertiary and secondary care) and scope economies across conditions for less intensive types of treatment (primary care).
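The translog cost function the researchers estimate is a standard flexible functional form; a generic version (notation illustrative, not their exact specification) is:

```latex
\ln C(y, w) = \alpha_0 + \sum_i \alpha_i \ln y_i + \sum_k \beta_k \ln w_k
  + \tfrac{1}{2} \sum_i \sum_j \delta_{ij} \ln y_i \ln y_j
  + \tfrac{1}{2} \sum_k \sum_l \phi_{kl} \ln w_k \ln w_l
  + \sum_i \sum_k \theta_{ik} \ln y_i \ln w_k
```

where the $y_i$ are outputs and the $w_k$ input prices. Scale economies are present where the sum of the output cost elasticities, $\sum_i \partial \ln C / \partial \ln y_i$, is below one, and scope economies where producing two outputs jointly is cheaper than producing them separately.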
Equilibrium models imply that the real value of debt in the hands of the public must equal the expected present-value of surpluses. Empirical models of fiscal policy typically do not impose this condition and often do not even include debt. Absence of debt from empirical models can produce non-invertible representations, obscuring the true present-value relation, even if it holds in the data. Chung and Leeper show that small VAR models of fiscal policy may not be invertible and that expanding the information set to include government debt has quantitatively important implications. Then they impose the present-value condition on an identified VAR and characterize the way in which the present-value support of debt varies across types of fiscal shocks. The role of expected primary surpluses in supporting innovations to debt depends on the nature of the shock. Debt is supported almost entirely by changes in the present-value of surpluses for some fiscal shocks, but for other fiscal shocks surpluses fail to adjust, leaving a large role for expected changes in discount rates. Horizons over which debt innovations are financed are long: on the order of 50 years or more.
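The equilibrium present-value condition can be written, in one standard form (notation illustrative):

```latex
\frac{B_{t-1}}{P_t} \;=\; E_t \sum_{j=0}^{\infty} m_{t,t+j}\, s_{t+j}
```

where $B_{t-1}/P_t$ is the real value of debt held by the public, $s_{t+j}$ are primary surpluses, and $m_{t,t+j}$ is the stochastic discount factor. An innovation to debt must then be matched by some combination of revisions to expected surpluses and revisions to discount rates, which is precisely the decomposition traced across fiscal shocks.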
Can government policies that increase the monopoly power of firms and the militancy of unions increase output? Eggertsson studies this question in a dynamic general equilibrium model with nominal frictions and shows that these policies are expansionary when certain "emergency" conditions apply. He argues that these emergency conditions, zero interest rates and deflation, were satisfied during the Great Depression in the United States. Therefore, the New Deal, which facilitated monopolies and union militancy, was expansionary, according to the model. This conclusion is contrary to the one reached by Cole and Ohanian (2004), who argue that the New Deal was contractionary. The main reason for this divergence is that the current model incorporates nominal frictions so that inflation expectations play a central role in the analysis. The New Deal has a strong effect on inflation expectations in the model, changing excessive deflation to modest inflation, thereby lowering real interest rates and stimulating spending.
A central assumption of open economy macro models with nominal rigidities relates to the currency in which goods are priced, whether there is so-called producer currency pricing or local currency pricing. This has important implications for exchange rate pass-through and optimal exchange rate policy. Using novel transaction-level information on currency and prices for U.S. imports, Gopinath, Itskhoki, and Rigobon show that, even conditional on a price change, there is a large difference in the pass-through of the average good priced in dollars (25 percent) versus non-dollars (95 percent). This finding is contrary to the assumption in a large class of models that the currency of pricing is exogenous, and is evidence of an important selection effect that results from endogenous currency choice. The researchers describe a model of optimal currency choice in an environment of staggered price setting and show that the empirical evidence strongly supports the model's predictions of the relation between currency choice and pass-through. They further document evidence of significant real rigidities, with the pass-through of dollar pricers increasing above 50 percent in the long run. Finally, they numerically illustrate the currency choice decision in both a Calvo and a menu-cost model with variable mark-ups and imported intermediate inputs and evaluate the ability of these models to match pass-through patterns documented in the data.
Christiano and his co-authors explore the dynamic effects of news about a future technology improvement which turns out ex post to be overoptimistic. They find that it is difficult to generate a boom-bust cycle (a period in which stock prices, consumption, investment, and employment all rise and then crash) in response to such a news shock, in a standard real business cycle model. However, a monetized version of the model that stresses sticky wages and an inflation-targeting monetary policy naturally generates a welfare-reducing boom-bust cycle in response to a news shock. They explore the possibility that integrating credit growth into monetary policy may result in improved performance. They also discuss the robustness of their analysis to alternative specifications of the labor market, in which wage-setting frictions do not distort ongoing firm/worker relations.
Suppose the nominal money supply could be cut literally overnight by, say, 20 percent. What would happen to prices, wages, and output, Velde asks? The answer can be found in 1720s France, where just such an experiment was carried out, repeatedly. Prices adjusted instantaneously and fully on one market only, that for foreign exchange. Prices on other markets (such as commodities), as well as prices of manufactured goods and industrial wages, fell slowly, over many months, and not by the full amount of the nominal reduction. Coincidentally or not, the industrial sector (as represented by manufacturing of woolen cloths) experienced a contraction of 30 percent. When the government changed course and increased the nominal money supply overnight by 20 percent, prices responded much more, and the woolen industry rebounded.
In the data, a sizable fraction of price changes are temporary price reductions referred to as sales. Existing models include no role for sales. Hence, when confronted with data in which a large fraction of price changes are sales-related, the models must either exclude sales from the data or leave them in and implicitly treat sales like any other price change. When sales are included, prices change frequently, and standard sticky price models with this high frequency of price changes predict small effects from money shocks. If sales are excluded, prices change much less frequently, and a standard sticky price model with this low frequency of price changes predicts much larger effects of money shocks. Kehoe and Midrigan add a motive for sales in a parsimonious extension of existing sticky price models. They show that the model can account for most of the patterns of sales in the data. Using their model as the data-generating process, they evaluate the existing approaches and find that neither well approximates the real effects of money in their economy, in which sales are explicitly modeled.
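The measurement issue here is mechanical and easy to see in a toy example. The sketch below (hypothetical prices and a deliberately crude sale filter, not the authors' code) shows how excluding temporary V-shaped price cuts lowers the measured frequency of price changes:

```python
# Hypothetical weekly prices: a regular price of 10 with two temporary cuts.
prices = [10, 10, 8, 10, 10, 10, 9, 9, 10, 10]

def change_frequency(series):
    # Fraction of week-to-week transitions in which the price changed.
    changes = sum(1 for a, b in zip(series, series[1:]) if a != b)
    return changes / (len(series) - 1)

def filter_sales(series):
    # Replace one-week V-shaped cuts (price falls, then returns exactly
    # to the prior level) with the regular price -- a crude stand-in for
    # the sales filters used in the empirical literature.
    out = list(series)
    for i in range(1, len(out) - 1):
        if out[i] < out[i - 1] and out[i + 1] == out[i - 1]:
            out[i] = out[i - 1]
    return out

raw_freq = change_frequency(prices)                 # sales counted as changes
filtered_freq = change_frequency(filter_sales(prices))  # one-week sales removed
```

With sales left in, 4 of 9 transitions are price changes; after filtering the one-week sale, only 2 of 9 remain, so the inferred degree of price stickiness roughly doubles, which is exactly why the treatment of sales matters for the predicted real effects of money.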
In two laboratory experiments, Benjamin, Choi, and Strickland examine whether norms associated with one's social identity affect time and risk preferences. When they make ethnic identity salient to Asian-American subjects, the subjects make more patient choices. When they make race salient to black subjects, the non-immigrant blacks (but not the immigrant blacks) make more risk-averse choices. Making gender identity salient causes choices to conform to gender norms that the subject believes are relatively more common. These results provide evidence that identity effects play a role in shaping U.S. demographic patterns in economic behaviors and outcomes.
Alesina and Giuliano study the importance of culture, as measured by the strength of family ties, for economic behavior and attitudes. They define their measure of family ties using individual responses from the World Values Survey regarding the role of the family and the love and respect that children need to have for their parents, for over 70 countries. They show that strong family ties imply more reliance on the family as an economic unit that provides goods and services, and less reliance on the market and the government for social insurance. With strong family ties, home production is higher; labor force participation of women and youngsters, and geographical mobility, are lower. Families are larger (higher fertility and larger family size) with strong family ties, which is consistent with the idea of the family as an important economic unit. The researchers present evidence from cross-country regressions. To assess causality, they look at the behavior of second-generation immigrants in the United States and use a variable based on the grammatical rule of pronoun drop as an instrument for family ties. Their results overall indicate a significant influence of the strength of family ties on economic outcomes.
Al-Najjar studies individuals who use frequentist statistical models to draw secure or robust inferences from i.i.d. data. The main contribution of this paper is a steady-state model in which distinct statistical models are consistent with empirical evidence, even as data increases without bound. Individuals may hold different beliefs and interpret their environment differently even though they know each other's statistical model and base their inferences on identical data. The behavior modeled here is that of rational individuals confronting an environment in which learning is hard, rather than ones beset by cognitive limitations or behavioral biases.
Malmendier and Nagel investigate whether differences in individuals' experiences of macroeconomic shocks affect long-term risk attitudes, as is often suggested for the generation that experienced the Great Depression. Using data from the Survey of Consumer Finances from 1964 to 2004, the authors find that birth cohorts that have experienced high stock market returns throughout their lives report lower risk aversion, are more likely to be stock market participants, and, if they participate, invest a higher fraction of liquid wealth in stocks. They also find that cohorts that have experienced high inflation are less likely to hold bonds. These results are estimated controlling for age, year effects, and a broad set of household characteristics. The estimates indicate that stock market returns and inflation early in life affect risk-taking several decades later. However, more recent returns have a stronger effect, which fades away slowly as time progresses. Thus, the experience of risky asset payoffs over the course of an individual's life affects subsequent risk-taking. These results explain, for example, the relatively low rates of stock market participation among young households in the early 1980s (following the disappointing stock market returns of the 1970s) and the relatively high participation rates of young investors in the late 1990s (following the boom years of the 1990s).
In the past two elections, richer people were more likely to vote Republican while richer states were more likely to vote Democratic. This switch is an aggregation reversal, where an individual relationship, like income and Republicanism, is reversed at some level of aggregation. Aggregation reversals can occur when an independent variable affects an outcome both directly and indirectly through a correlation with beliefs. For example, income increases the desire for low taxes but decreases belief in Republican social causes. If beliefs are learned socially, then aggregation can magnify the connection between the independent variable and beliefs, which can cause an aggregation reversal. Glaeser and Sacerdote estimate the model's parameters for three examples of aggregation reversals and show with these parameters that the model predicts the observed reversals.
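The mechanism is a Simpson's-paradox-style pattern, and a toy simulation makes it concrete. The numbers below are purely hypothetical (not the authors' estimates): within each state, income raises the probability of voting Republican, yet the richer state is less Republican on average because of a state-level shift in beliefs.

```python
# Toy illustration of an aggregation reversal in voting.
states = {
    # state: (incomes of residents, state-level baseline Republican propensity)
    "rich_state": ([6, 8, 10], 0.10),  # socially liberal beliefs dominate
    "poor_state": ([2, 3, 4], 0.40),
}

SLOPE = 0.04  # within-state effect of income on Republican voting

def p_republican(income, baseline):
    # Income raises the desire for low taxes, hence Republicanism, within a state.
    return baseline + SLOPE * income

shares = {}        # state-level Republican vote shares
mean_incomes = {}  # state-level mean incomes
for name, (incomes, baseline) in states.items():
    probs = [p_republican(y, baseline) for y in incomes]
    shares[name] = sum(probs) / len(probs)
    mean_incomes[name] = sum(incomes) / len(incomes)

# Within each state, richer residents are more Republican (SLOPE > 0),
# but across states, the richer state has the lower Republican share.
```

Here the individual-level relationship (income raises Republicanism) is reversed at the state level, because the belief shift correlated with aggregate income outweighs the direct income effect, which is the magnification-through-social-learning channel the model formalizes.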
Many North American college students have trouble satisfying degree requirements in a timely manner. Angrist and his co-authors report on a randomized field experiment involving two strategies designed to improve academic performance among entering full-time undergraduates at a large Canadian university. One treatment group ("services") was offered peer advising and organized study groups. Another ("incentives") was offered substantial merit scholarships for solid, but not necessarily top, first-year grades. A third treatment group combined both interventions. Service take-up rates were much higher for women than for men, and for students offered both services and incentives than for those offered services alone. No program had an effect on men's grades or other measures of academic performance. However, the fall and first-year grades of women in the combined group were significantly higher than those of women in the control group, and women in the combined group earned more course credit and were less likely to be on academic probation than those in the control group. These differentials persisted through the end of the second year, in spite of the fact that incentives were given in the first year only. The results suggest that the study skills acquired in response to a combination of services and incentives can have a lasting effect, and that the combination of services and incentives is more promising than either alone.
Over the past decade there has been a decline in the fraction of papers in top economics journals written by economists from the highest-ranked economics departments. Ellison documents this fact and uses additional data on publications and citations to assess various potential explanations. Several observations are consistent with the hypothesis that the Internet improves the ability of high-profile authors to disseminate their research without going through the traditional peer-review process.
Weinberg and his co-authors study methods for evaluating instruction in higher education. They explore student evaluations of instruction and a variety of alternatives, and develop a simple model to illustrate the biases inherent in student evaluations. Measuring learning using grades in future courses, they show that student evaluations are positively related to current grades but uncorrelated with learning, once current grades are controlled. They also show that the weak relationship between learning and student evaluations arises in part because students are not aware of how much they have learned in a course. In the conclusion of their paper they discuss alternative methods for evaluating teaching.
Today's college enrollees are more likely to work, and work more, than those of the past. October CPS data reveal that since 1970, average labor supply among 18-to-22-year-old full-time, four-year undergraduates has nearly doubled, from 5 hours to 9.6 hours per week. Nearly half of these "traditional" college students work for pay in a given week, and the average working student works 21 hours per week. Borrowing constraints are a plausible culprit, but would have to be much more pervasive than commonly thought to explain rising employment even among wealthy students. Scott-Clayton evaluates the credit constraints hypothesis along with several alternative explanations for the increase in student labor supply, including changes in demographic composition, rising wages, rising returns to work experience, declines in educational quality, institutional crowding, and declining preferences for leisure. Using multiple data sources, she concludes that none of these alternative hypotheses come close to fully explaining the dramatic change over time. When broadly defined to include "fuzzy" constraints on borrowing for discretionary consumption as well as self-imposed constraints on borrowing, credit constraints may be driving the trend even among high-income populations.
The Youth-in-Transition Survey allows identification of a series of decision points where youth already in university decide whether to continue or to exit without graduating. This is a specific aspect of university participation. Johnson finds that there is little evidence that either a higher level of tuition or a change in tuition alters the probability that a Canadian youth, once in university, leaves without obtaining a degree. Thus any policy effort around university persistence for youth should focus on non-tuition factors, and debate around the appropriate level of tuition should not focus on persistence.
Rezende analyzes the effect of an accountability system in the Brazilian college market. For each discipline, colleges were assigned a grade that depended on the scores of their students on the ENC, an annual mandatory exam. Those grades were then disclosed to the public, giving applicants information about college quality. The system also established rewards and penalties based on the colleges' grades. He finds that the ENC had a substantial effect on different measures of college quality, such as faculty education and the proportion of full-time faculty. The detailed information from this unique dataset, and the fact that the ENC started being required for different disciplines in different years, allow him to control for time-specific effects, thus minimizing the bias caused by policy endogeneity. Indeed, he finds strong evidence on the importance of controlling for time-specific effects: estimates of the impact of the ENC on college quality more than double when he does not take those effects into account. The ENC also positively affects the ratio of applicants to vacancies, and decreases faculty and entering-class sizes. The results suggest that its introduction fostered competition and favored colleges entering the market.
Zingales and his co-authors use the Program for International Student Assessment (PISA) to analyze the gender gap in mathematics and reading in 41 countries. They find that, on average, 15-year-old girls underperform boys in mathematics and outperform them in reading. Further, the gender gap in mathematics is negatively correlated with the level of emancipation of women in society, suggesting that, at least in part, the gap is environmentally driven. In emancipated societies, the gender gap in math is not reduced by higher investment in math (more homework and instructional time), by different educational styles, or by increasing women's self-esteem. Women's emancipation affects relative math performance by changing the role models: in emancipated societies, role models are gender-neutral and women are equally influenced by other successful men and women; in less emancipated societies, women's achievement is affected by successful students of the same gender.
Lavy and his co-authors estimate the extent of ability peer effects in the classroom and explore the underlying mechanisms through which these peer effects operate. They identify as high ability students those who are enrolled at least one year ahead of their birth cohort ("skippers") and as low ability students those who are enrolled at least one year behind their birth cohort ("repeaters"). They show that while there are marked differences between the academic performance and behavior of skippers/repeaters and the regular students, the status of skippers and repeaters is mostly determined by first grade; therefore, it is unlikely to have been affected by classroom peers (and to suffer from the reflection problem). Using within-school variation in the proportion of these low and high ability students across cohorts of middle and high school students in Israel, they find that the proportion of high achieving peers in class has no effect on the academic performance of most regular students, but it does positively affect the outcomes of the brightest among the regular students. In contrast, the proportion of low achieving peers has a negative effect on the performance of regular students, especially those located at the lower end of the ability distribution. An exploration of the underlying mechanisms of these peer effects shows that, relative to regular students, repeaters report that teachers are better at treating students individually and at instilling the capacity for individual study. However, a higher proportion of low achieving students results in a deterioration of teachers' pedagogical practices, has detrimental effects on the quality of inter-student relationships and the relationships between teachers and students, and somewhat increases the level of violence and classroom disruptions.
In traditional models of ability signaling, higher ability workers acquire more education and employers use education to statistically discriminate in setting wages. Arcidiacono and his co-authors argue that education plays a much more direct role in the labor market. Using data from the National Longitudinal Survey of Youth (NLSY), they show that college allows individuals to directly reveal their ability to potential employers. Their results suggest that ability is observed nearly perfectly for college graduates but is revealed to the labor market more gradually for high school graduates. Consistent with the notion that ability is directly revealed in the college market, the researchers do not find any racial differences in wages or in returns to ability for college graduates. By contrast, blacks earn 6-10 percent less than whites of comparable ability in the high school market, a difference that might arise as a result of statistical discrimination. The fact that a wage penalty exists for blacks in the high school but not the college labor market also helps to explain why, conditional on ability, blacks are more likely to earn a college degree.
Models of investment and borrowing for education typically treat the family as a unitary decisionmaker. Doing so may conceal the nature of borrowing constraints, because adults with college-age children are likely to be at a life-cycle stage where credit constraints are not important. Brown and his co-authors instead propose a simple model of altruistic parents and a child in which both parties can make investments for education and other purposes and parents can transfer cash to their child. The model implies that educational investment is inefficient for intergenerationally constrained parent-child pairs. The constraint arises because parents of constrained children rationally do not pay the share of college expenses that is assumed by federal financial aid formulas. The model highlights new empirical implications of borrowing constraints for education, which the researchers examine making use of data from the Health and Retirement Study and the NLSY-97. The data are consistent with quantitatively important borrowing constraints for higher education.
De Giorgi and his co-authors investigate whether peers' behavior influences the choice of college major. Using a unique dataset of students at Bocconi University and exploiting the organization of teaching at this institution, they are able to identify the endogenous effect of peers on such decisions through a novel identification strategy that solves the common econometric problems of studies of social interactions. The results show that students are indeed more likely to choose a major when many of their peers make the same choice. The authors estimate that, when it diverts students from majors in which they seem to have a relative ability advantage, this effect leads to lower average grades and graduation marks, a penalty that could cost up to 1,117 USD a year in the labor market.
Using fifth grade test scores from the Chicago Public Schools, Neal and his co-author show that both the introduction of No Child Left Behind (NCLB) in 2002 and the introduction of similar district-level reforms in 1996 generated noteworthy increases in reading and math scores among students in the middle of the achievement distribution. Nonetheless, the least academically advantaged students in Chicago did not score higher in math or reading following the introduction of accountability, and the researchers find only mixed evidence of score gains among the most advantaged students. A large existing literature argues that accountability systems built around standardized tests greatly affect the amount of time that teachers devote to different topics. These results for fifth graders in Chicago, as well as related results for sixth graders after the 1996 reform, suggest that the choice of the proficiency standard in such accountability systems determines the amount of time that teachers devote to students of different ability levels.
Gomes and his co-author revisit the theoretical relationship between financial leverage and stock returns in a dynamic world where both the corporate investment and finance decisions are endogenous. They find that the link between leverage and stock returns is more complex than the static textbook examples suggest and usually depends on the investment opportunities available to the firm. In the presence of financial market imperfections, leverage and investment are generally correlated, so that highly levered firms are also mature firms with relatively more (safe) book assets and fewer (risky) growth opportunities. The researchers use a quantitative version of their model to generate predictions concerning the empirical relationship between leverage and returns. They test these implications in actual data and find support for them.
Campanale and his co-authors provide a thorough characterization of the asset returns implied by a simple general equilibrium production economy with convex investment adjustment costs. When households have Epstein-Zin preferences, there exist plausible parameter values such that the model generates an unconditional mean risk-free rate, mean equity return, and volatility of consumption growth that are in line with historical averages for the U.S. economy. Consistent with the data, the price-dividend ratio is pro-cyclical and stock returns are predictable (increasingly so as the time horizon increases), while dividend growth is not. The model also implies realistic values for the correlation of the risk-free rate with output growth and consumption growth and for the correlation pattern among the risk-free rate, equity return, and equity premium. The risk aversion implied by the model is rather low. Given the work of Rabin (2000), among others, it is not surprising that the Epstein-Zin agent exhibits a much higher risk aversion when faced with substantially larger risks. This shortcoming, however, does not extend to the case in which agents are disappointment averse, in the sense of Gul (1991). When faced with a lottery that has a coefficient of variation one hundred times as large as that implied by this model, a disappointment averse agent displays the same relative risk aversion as an expected utility agent with logarithmic utility!
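Epstein-Zin preferences separate risk aversion from the intertemporal elasticity of substitution; in a standard formulation (notation illustrative), the agent's value function satisfies the recursion:

```latex
V_t = \left[(1-\beta)\,C_t^{\,1-1/\psi}
  + \beta \left(E_t\!\left[V_{t+1}^{\,1-\gamma}\right]\right)^{\frac{1-1/\psi}{1-\gamma}}\right]^{\frac{1}{1-1/\psi}}
```

where $\gamma$ governs risk aversion and $\psi$ the intertemporal elasticity of substitution. When $\gamma = 1/\psi$, the recursion collapses to standard time-separable expected utility, which is why this preference class can match asset-pricing moments without imposing implausible attitudes toward intertemporal substitution.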
Recent empirical studies evaluate the performance of investment strategies using contemporaneously measured loadings to proxy for conditional risk. Boguth and his co-authors demonstrate that such procedures lead to potentially large biases in alpha when payoffs are nonlinear. They combine lagged portfolio and component realized betas with standard instruments to improve performance analysis, and find that conditioning information reduces momentum alphas by 20-40 percent relative to unconditional estimates. Overconditioned alphas are up to 2.5 times larger than appropriately conditioned measures.
Lettau and Wachter propose a dynamic risk-based model capable of jointly explaining the term structure of interest rates, returns on the aggregate market, and the risk and return characteristics of value and growth stocks. Both the term structure of interest rates and returns on value and growth stocks convey information about how the representative investor values cash flows of different maturities. The researchers model how the representative investor perceives risks of these cash flows by specifying a parsimonious stochastic discount factor for the economy. Shocks to dividend growth, the real interest rate, and expected inflation are priced, but shocks to the price of risk are not. Given reasonable assumptions for dividends and inflation, they show that the model can simultaneously account for the behavior of aggregate stock returns, an upward-sloping yield curve, the failure of the expectations hypothesis, and the poor performance of the capital asset pricing model.
Three of the most fundamental changes in the economy since the early 1970s have been the increase in the importance of organizational capital in production, the increase in managerial income inequality, and the increase in payouts to the owners of firms. Lustig and his co-authors suggest that there is a unified explanation for these changes: the arrival and gradual adoption of information technology since the 1970s has stimulated the accumulation of organizational capital in existing firms. Because owners are better diversified than managers, the optimal division of rents from this organizational capital has the owners bearing most of the cash-flow risk. In the model here, the IT revolution benefits the owners and the managers in large successful firms, but not the managers in small firms. The resulting increase in managerial compensation inequality and the increase in payouts to owners compare favorably to those established in the data.
Calvet and his co-authors investigate the dynamics of individual portfolios in a unique dataset containing the disaggregated wealth and income of all households in Sweden. Between 1999 and 2002, the average share of risky assets in the financial portfolio of participants fell moderately, implying little aggregate rebalancing in response to the decline in risky asset prices during this period. The researchers show that these aggregate patterns conceal strong household-level evidence of active rebalancing, which on average offsets about one half of idiosyncratic passive variations in the risky asset share. Sophisticated households with greater education, wealth, and income, and with better diversified portfolios, tend to rebalance more actively. There is some evidence that households rebalance towards a higher risky share as they become richer. The researchers also study the decisions to enter and exit risky financial markets, and patterns of rebalancing for individual assets. They find that households are more likely to fully sell directly held stocks if those stocks have performed well, and more likely to exit direct stockholding if their stock portfolios have performed well; but these relationships reverse for mutual funds. The results are consistent with previous research on the disposition effect among direct stockholders and performance sensitivity among mutual fund investors. When households continue to hold individual assets, however, they rebalance both stocks and mutual funds to offset about one sixth of the passive variations in individual asset shares.
Fehr and his co-authors report on several experiments on the optimal allocation of ownership rights. The experiments confirm the property rights approach by showing that the ownership structure affects relationship-specific investments and that subjects attain the most efficient ownership allocation despite starting from different initial conditions. However, in contrast to the property rights approach, the most efficient ownership structure is joint ownership. These results cannot be explained by the self-interest model, nor by models that assume that all people behave fairly, but they are largely consistent with the theory of inequity aversion that focuses on the interaction between selfish and fair players.
In a controlled laboratory experiment, Ederer and Manso show that the combination of tolerance for early failure and reward for long-term success is effective in motivating innovation. Despite receiving lower average wages, subjects under such an incentive scheme were more innovative and produced higher profits than subjects under fixed-wage and standard pay-for-performance incentive schemes. These results suggest that incentives, as long as appropriately designed, are useful in motivating creativity and innovation.
Bottazzi and her co-authors examine the effect of trust in a micro-economic environment, where it is clearly exogenous. Using hand-collected data on European venture capital, they show that the Eurobarometer measure of trust among nations significantly affects investment decisions. This holds even after they control for investor and company fixed effects, geographic distance, information, and transaction costs. The national identity of venture capital firms' partners also matters for the effect of trust. The researchers also consider the relationship between trust and sophisticated contracts involving contingent control rights. They find that trust and sophisticated contracts are complements, not substitutes.
Kaniel and his co-authors show that dispositional optimism is a stable personality trait with a pronounced effect on job search outcomes. Studying a cohort of MBA students, they find that repeated within-person measurements of dispositional optimism are highly correlated over time and are not explained by major life events, such as classroom performance or job placement outcomes. Dispositional optimism does affect job market perceptions and outcomes. Optimistic students tend to place less weight on the importance of landing a job after graduation. They also believe that their initial starting salaries will be higher than those of their peers, but these beliefs do not materialize. In spite of these negatives, optimists outperform their peers in the job search process in many respects. They are more likely to hold summer internships by the spring of their first year, and they receive full-time job offers faster than their peers. There is no evidence, however, that they settle for lower-quality jobs, which might otherwise account for the faster time to a first job offer.
Using a detailed dataset with assessments of CEO candidates for companies involved in private equity (PE) transactions, including both buyout (LBO) and venture capital (VC) deals, Kaplan and his co-authors study how CEOs' characteristics and abilities relate to hiring decisions, PE investment decisions, and subsequent performance. The candidates are assessed on more than 40 individual characteristics in seven general areas: leadership, personal, intellectual, motivational, interpersonal, technical, and specific. In general, all characteristics and abilities are highly correlated. For both LBO and VC firms, outside CEO candidates are more highly rated than incumbents. Both LBO and VC firms are more likely to hire and invest in more highly rated and talented CEOs, and the investors also value "soft" or team-related skills in the hiring decisions. However, these skills are not necessarily associated with greater success. For LBO deals in particular, "hard" abilities and execution skills predict success. Finally, the researchers find that incumbents are no more likely to succeed than outside CEOs, holding observable talent and ability constant.
Recent neuroeconomics research suggests that incidental affect can influence financial choices by changing activation in brain areas related to emotion processing. Thus, changes in one's affective state due to stimuli unrelated to the financial decision at hand may change people's propensity to take financial risks. Kuhnen and Knutson examine whether this effect operates through preferences, beliefs, or both, and test the generality of the phenomenon. They find that affective stimuli can influence subjects' beliefs about investing as well as their preferences. Specifically, valenced stimuli change risk preferences, while arousing stimuli increase subjects' confidence in probability estimation. However, the affective stimuli no longer influence financial risk taking in one-shot investment decisions if they lack immediate feedback. These findings help to illuminate both how and when incidental affective stimuli can influence financial risk taking.
Davis and Harrigan merge the Melitz (2003) model with a heterogeneous firm variant of the Shapiro-Stiglitz (1984) model of efficiency wages. Their combined model features involuntary unemployment and heterogeneous wages for homogeneous labor. The selection effects of trade liberalization operate according to firm marginal cost, as in Melitz. But here these marginal costs reflect both variation in marginal physical productivity and firm-specific efficiency wages. For this reason, firms vulnerable under trade liberalization have high firm wages relative to firm productivity, possibly for both high- and low-wage jobs, or what workers may perceive as "good" or "bad" jobs. The labor rents associated with "good" jobs distort producer output decisions and tend to raise aggregate unemployment. The firm exit decision effectively treats the wage received by labor as an externality, since exit is driven by high marginal costs, whether because of low physical productivity or high wages. The merged model provides a convenient framework for articulating the efficiency enhancing effects of trade, even as it makes sense of claims that trade may threaten (some) good jobs at good wages.
Helpman and Itskhoki study a two-country two-sector model of international trade in which one sector produces homogeneous products while the other produces differentiated products. The differentiated-product industry has firm heterogeneity, monopolistic competition, search and matching in its labor market, and wage bargaining. Some of the workers searching for jobs end up being unemployed. Countries are similar except for frictions in their labor markets. The researchers study the interaction of labor market rigidities and trade impediments in shaping welfare, trade flows, productivity, price levels, and unemployment rates. They show that both countries gain from trade but that the flexible country, which has lower labor market frictions, gains proportionately more. A flexible labor market confers comparative advantage; the flexible country exports differentiated products on net. A country benefits by lowering frictions in its labor market, but this harms the country's trade partner. And, the simultaneous proportional lowering of labor market frictions in both countries benefits both of them. The model generates rich patterns of unemployment. Specifically, trade integration which benefits both countries may raise their rates of unemployment. Moreover, differences in rates of unemployment do not necessarily reflect differences in labor market rigidities; the rate of unemployment can be higher or lower in the flexible country. Finally, the researchers show that the flexible country has both higher total factor productivity and a lower price level, which operates against the standard Balassa-Samuelson effect.
The welfare effects of trade shocks depend crucially on the nature and magnitude of the costs workers face in moving between sectors. The existing trade literature does not directly address this, assuming perfect mobility or complete immobility, or adopting reduced-form approaches to estimation. Artuc and his co-authors present a model of dynamic labor adjustment that does address these costs directly and, moreover, is consistent with a key empirical fact: that intersectoral gross flows greatly exceed net flows. Using an Euler-type equilibrium condition, they estimate the mean and the variance of workers' switching costs from the U.S. March Current Population Surveys. They estimate high values of both parameters, implying both slow adjustment of the economy and sharp movements in wages in response to a trade shock. Simulations of a trade liberalization indicate that despite the high estimated adjustment cost, in terms of lifetime welfare, the liberalization is Pareto-improving. The explanation for this surprising finding -- which would be missed by a reduced-form approach -- is that the high variance of switching costs relative to their mean ensures high rates of gross flow; this helps spread the liberalization's benefits around.
Antras and Staiger study the trade policy choices of governments in an environment in which some of the trade flows being taxed or subsidized involve the exchange of customized inputs, and the contracts governing these transactions are incomplete. They show that the second-best policies that emerge in this environment entail free trade in final goods but not in intermediate inputs, since import or export subsidies targeted to inputs can alleviate the international hold-up problem. They next show that the Nash equilibrium policy choices of governments do not coincide with internationally efficient choices, and that the Nash policies imply an inefficiently low level of intermediate input trade across countries. The reason is that in their environment trade policy choices serve a dual role: they can enhance investment by suppliers but, because of ex-post bargaining over prices, they can also be used to redistribute profits across countries. The inefficiencies inherent in the Nash policy choices of governments not only result in suboptimal input subsidies, but also in positive distortions in final-good prices, even when countries cannot affect world (untaxed) prices in those goods. As a result, an international trade agreement that brings countries to the efficiency frontier will necessarily increase trade in inputs, but it may require a reduction in final-goods trade. When governments are not motivated by the impact of their policies on ex-post negotiated international input prices, the resulting policy choices are efficient, and hence a modified terms-of-trade interpretation of the purpose of trade agreements can be offered, but only when governments maximize real national income. If governments' preferences are sensitive to political economy (distributional) concerns, the purpose of a trade agreement becomes more complex, and cannot be reduced to solving a simple terms-of-trade problem.
Hanson and Xiang develop a simple empirical method to test two versions of the Melitz (2003) model, one with global fixed export costs and one with bilateral fixed export costs. With global costs, import sales per product variety (relative to domestic sales per variety) are decreasing in variable trade costs, as a result of adjustment occurring along the intensive margin of trade. With bilateral costs, imports per product variety are increasing in fixed trade costs, because of adjustment occurring along the extensive margin. The researchers apply their approach to data on imports of U.S. motion pictures in 44 countries over 1995-2005. Imports per product variety are decreasing in geographic distance, linguistic distance, and other measures of trade costs, consistent with adjustment to these costs occurring along the intensive margin. There is relatively little variation in the number of U.S. movies that countries import but wide variation in the box-office revenues per movie. The data thus appear to reject the bilateral-fixed-export-cost model in favor of the global-fixed-export-cost model.
Moenius, Rauch, and Trindade develop a gravity model of international trade in which border effects, impacts of migrants, and effects of past trading relationships are all based in networks of entrepreneurs. In their model workers leave their former employers to become entrepreneurs, and found new firms by partnering with former colleagues or with workers who left a different employer. In the absence of migration, the first type of partnership creates trade only within a country and the second type creates trade both within and across countries, so that total trade displays country border effects. Migrants, however, can match with former colleagues across country boundaries. Similarly, past trading relationships facilitate search for partners across country boundaries. This model generates a decomposition of bilateral trade into number of partnerships or matches and value per match. Standard gravity model variables are shown to affect number of matches and value per match differently; distance, for example, is predicted to decrease number of matches but leave value per match unaffected. Following Besedes and Prusa (2006), who count "relationships" between the United States and its trading partners by the number of product varieties for which positive trade is observed, the researchers use these "links" and "value per link" as their empirical proxies for number of matches and value per match. Preliminary estimates using OECD data on trade and migration, both within the OECD and between the OECD and non-OECD countries, and U.S. Department of Commerce trade data, support the sharpest predictions of the theory for the impacts on number of links and value per link of distance, migrants, colonial ties, and the interactions between them.
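The decomposition above can be made concrete with a small numerical sketch. The numbers below are made up purely for illustration (they are not the OECD or Commerce data): the point is the identity that log bilateral trade splits into log links plus log value per link, so a distance elasticity can load on the number of links while leaving value per link roughly flat, as the theory predicts.

```python
import math

# Hypothetical country pairs: (distance in km, number of links, value per link).
# Synthetic values chosen so that distance thins out links but not value per link.
pairs = [(500, 120, 2.0), (1000, 80, 2.1), (2000, 50, 1.9), (4000, 30, 2.0)]

def ols_slope(xs, ys):
    """Slope of a simple least-squares regression of ys on xs."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Identity: log(trade) = log(links) + log(value per link).
for dist, links, vpl in pairs:
    assert abs(math.log(links * vpl) - (math.log(links) + math.log(vpl))) < 1e-12

log_dist = [math.log(d) for d, _, _ in pairs]
log_links = [math.log(n) for _, n, _ in pairs]
log_vpl = [math.log(v) for _, _, v in pairs]

print(f"distance elasticity of links:          {ols_slope(log_dist, log_links):.2f}")
print(f"distance elasticity of value per link: {ols_slope(log_dist, log_vpl):.2f}")
```

With these invented numbers the links elasticity is strongly negative while the value-per-link elasticity is near zero, mirroring the pattern the model predicts for the extensive and intensive margins.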
The classical Heckscher-Ohlin-Mundell paradigm states that trade and capital mobility are substitutes, in the sense that trade integration reduces the incentives for capital to flow to capital-scarce countries. Antras and Caballero show that in a world with heterogeneous financial development, the classic conclusion does not hold. In particular, in less financially developed economies (South), trade and capital mobility are complements. Within a dynamic framework, the complementarity carries over to (financial) capital flows. This interaction implies that deepening trade integration in South raises net capital inflows (or reduces net capital outflows). It also implies that, at the global level, protectionism may backfire if the goal is to rebalance capital flows, when these are already heading from South to North. This perspective also has implications for the effects of trade integration on factor prices. In contrast to the Heckscher-Ohlin model, trade liberalization always decreases the wage-rental ratio in South: an anti-Stolper-Samuelson result.
Aizenman and Spiegel identify factors associated with takeoff -- a sustained period of high growth following a period of stagnation. They examine a panel of 241 "stagnation episodes" from 146 countries; 54 percent of these episodes are followed by takeoffs. Countries that experience takeoffs average 2.3 percent annual growth following their stagnation episodes, while those that do not average no growth; 46 percent of the takeoffs are "sustained," that is, lasting eight years or longer. Using probit estimation, the researchers find that de jure trade openness is positively and significantly associated with takeoffs. A single standard deviation increase in de jure trade openness is associated with a 55 percent increase in the probability of a takeoff in their default specification. They also find evidence that capital account openness encourages takeoff responses, although this channel is less robust. Measures of de facto trade openness, as well as a variety of other potential conditioning variables, are found to be poor predictors of takeoffs. The authors also examine the determinants of nations achieving sustained takeoffs. While they fail to find a significant role for openness in determining whether or not takeoffs are sustained, they do find a role for output composition: takeoffs in countries with more commodity-intensive output bundles are less likely to be sustained, while takeoffs in countries that are more service-intensive are more likely to be sustained. This suggests that adverse terms-of-trade shocks prevalent among commodity exporters may play a role in ending long-term high growth episodes.
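A back-of-the-envelope sketch shows how a probit coefficient maps a one-standard-deviation rise in openness into a change in takeoff probability. The coefficients and the standard deviation of openness below are invented for illustration only, not taken from the paper; the point is the mechanics of the calculation, not the authors' estimates.

```python
import math

def phi(x):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical probit: P(takeoff) = Phi(b0 + b1 * openness), openness centered at 0.
b0, b1 = -0.4, 0.9   # assumed coefficients, for illustration only
sd_open = 0.5        # assumed standard deviation of de jure openness

base = phi(b0)                      # probability at mean openness
shifted = phi(b0 + b1 * sd_open)    # after a one-s.d. increase in openness

print(f"baseline P(takeoff):    {base:.3f}")
print(f"after +1 s.d. openness: {shifted:.3f}")
print(f"relative increase:      {shifted / base - 1:.0%}")
```

With these made-up parameters the implied relative increase is on the order of 50 percent, the same order of magnitude as the 55 percent effect the authors report; the actual figure depends entirely on the estimated coefficients.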
There is widespread evidence of excess return predictability in financial markets. For the foreign exchange market, a number of studies have documented that the predictability of excess returns is closely related to the predictability of expectational errors of excess returns. In this paper, Bacchetta and his co-authors investigate the link between the predictability of excess returns and expectational errors in a much broader set of financial markets, using data on survey expectations of market participants in the stock market, the foreign exchange market, and the bond and money markets in various countries. The results are striking. First, in markets where there is significant excess return predictability, expectational errors of excess returns are predictable as well, with the same sign and often even with similar magnitude. This is the case for forex, stock, and bond markets. Second, in the only market where excess returns are generally not predictable, the money market, expectational errors are not predictable either. These findings suggest that an explanation for the predictability of excess returns must be closely linked to an explanation for the predictability of expectational errors.
Hong and his co-author attempt to measure the effect of competition on bias in the context of analyst earnings forecasts, which are known to be excessively optimistic because of conflicts of interest. Their instrument for competition is mergers of brokerage houses, which result in the firing of analysts because of redundancy (for example, one of the two oil analysts is let go) and other reasons, such as culture clash. The researchers use this decrease in analyst coverage for stocks covered by both merging houses before the merger (the treatment sample) to measure the causal effect of competition on bias. They find that the treatment sample simultaneously experiences a decrease in analyst coverage and an increase in optimism bias, the year after the merger, relative to a control group of stocks; this is consistent with competition reducing bias. The implied economic effect from this natural experiment is significantly larger than estimates from OLS regressions that do not correct for the endogeneity of coverage. And, this effect is much more significant for stocks with little initial analyst coverage or competition.
Baker and his co-authors develop and test a catering theory of nominal stock prices. The theory predicts that managers set share prices at lower levels when investors place higher valuations on low-price firms and at higher levels when investors favor high-price firms. Using several measures of time-varying catering incentives based on valuation ratios, split announcement effects, and future returns, the researchers find empirical support for these predictions in both time-series and firm-level data. The cross-sectional relationship between capitalization and nominal share price suggests that managers may be trying to categorize their firms as small firms when investors favor small firms.
The preferred risk habitat hypothesis, introduced here by Dorn and his co-author, is that individual investors select stocks with volatilities commensurate with their risk aversion; more risk-averse individuals pick lower-volatility stocks. In picking individual stocks by volatility, investors overlook the portfolio perspective and return correlations. The data, 1995-2000 holdings of over 20,000 customers of a German broker, are consistent with the predictions of the hypothesis: the portfolios contain highly similar stocks in terms of volatility; when stocks are sold, they are replaced by stocks of similar volatilities; and the more risk-averse customers indeed hold less volatile stocks. Cross-sectionally, the more risk-averse investors also have a stronger tendency to invest in mutual funds. Major improvements in diversification are concentrated during periods when investors add money to their accounts.
Using news data on S&P 500 firms, Tetlock investigates stock market responses to public news stories that may contain stale information. He uses several empirical proxies for news articles with old information, including variables based on past news events, media coverage, analyst coverage, and liquidity. He finds that market reactions to stale news stories partially reverse in the next week. By contrast, reactions to stories with more new information reverse to a much smaller extent, or even continue. Return reversals after stale news stories are much larger in stocks with a high fraction of small trades. These results and others are consistent with the hypothesis that individual investors overreact to stale information, exerting temporary pressure on asset prices.
Korniotis and his co-author show that a portfolio choice framework with cognitive abilities resolves three recent puzzles identified in the retail investor literature: portfolio concentration, excess trading, and local bias. In all three instances, portfolio decisions could be induced by superior information or a psychological bias. Using imputed cognitive ability measures and both investor-level and aggregated stock-level tests, the researchers show that high cognitive ability investors hold concentrated portfolios, trade actively, and prefer to hold local stocks because of an informational advantage. Consequently, they earn higher risk-adjusted returns. In contrast, the decisions of low cognitive ability investors reflect psychological biases (overconfidence and familiarity), which lead to lower risk-adjusted performance. Overall, both behavioral and rational explanations for the three puzzles are supported empirically, but they apply to low and high cognitive ability investor groups, respectively.
Bresnahan and his co-authors model the internal organization of the firm to answer a longstanding question about creative destruction. Incumbent dominant firms, long successful in an existing technology, are often much less successful in a new technological era. Theories of firm heterogeneity explain only part of this phenomenon, by examining how new firms (entrepreneurs or other outsiders to the industry) have the capability to invent technologies existing dominant firms do not. This heterogeneity cannot, however, explain the difficulties of merging a new firm into the incumbent dominant firm, or of creating a division in the incumbent dominant firm to compete in the new era. The researchers show that diseconomies of scope between new and old businesses supplied by the same firm explain the pattern of unsuccessful dominant firms. Critically, the scope diseconomies do not arise from differences between the old and new technologies (which could be accommodated by a firm-within-a-firm organizational model). Instead, scope diseconomies arise because an old business and a new business have optimal relationships to customers or to markets that are inconsistent with one another. This model thus locates the solution to the Schumpeterian puzzle in the organizational economics of the firm.
Bloom and his co-authors collect original data on the degree of decentralization in several thousand firms located in the United States, Europe, and Asia. Specifically, they focus on the autonomy of local plant managers from their corporate headquarters in their decisions over hiring, investment, production, and sales. They find that American and Northern European firms are much more decentralized than those from Southern Europe and Asia, both domestically and as multinationals abroad. Three factors are associated with greater decentralization: first, stronger product market competition, which arguably makes managers' local knowledge more important because of greater time-sensitivity of decisionmaking. Second, higher trust in the plant's region of location (and/or multinational's home country), which may help to sustain effective delegation because of enhanced cooperation. And third, the prevalence of hierarchical religions, such as Catholicism and Islam, which may lead managers to have weaker preferences for autonomous decision making. These factors appear important across countries, across regions within countries, and for multinationals according to their country of ownership. If -- as suggested by the literature -- decentralization is complementary to some forms of information and communication technology, then Catholic countries with lower trust and competition, like France and Italy, may benefit less from an era of rapid technological change than Protestant countries with greater trust and competition, like Sweden and the United States.
Dew-Becker and Gordon's paper is about the strong negative tradeoff between productivity and employment growth. They document this tradeoff in the raw data, and in regressions that control for the two-way causation between productivity and employment growth, and they show that there is a robust negative correlation between productivity and employment growth across countries and time. They simplify the task of explaining intra-EU differences in performance by reducing the dimensionality of the issue from the 15 EU countries to four EU country groups, chosen by geography. They provide a comprehensive analysis of the role of policy and institutional variables in causing changes in productivity and employment per capita growth across these country groups. Using both a calibrated theoretical model and several reduced-form regressions, they document the strong effects of European policies that raised labor costs, such as the tax wedge, employment and product market regulation, unemployment compensation, and union density, in causing employment to fall and productivity to rise before 1995, and for this process to be reversed after 1995. They conclude with policy implications and propose a new framework for thinking about EU policy reforms. Their new policy framework suggests that policy changes be assessed as much on their effects on government budgets as on productivity or employment, because the productivity-employment tradeoff causes some policy changes to have a negligible effect on growth in output per capita.
Moser argues that the ability to keep innovations secret may be a key determinant of patenting. To test this hypothesis, she examines a newly-collected dataset of more than 7,000 American and British innovations at four world's fairs between 1851 and 1915. Exhibition data show that the industry where an innovation is made is the single most important determinant of patenting. Urbanization, high innovative quality, and low costs of patenting also encourage patenting, but these influences are small compared with industry effects. If the effectiveness of secrecy is an important factor in inventors' patenting decisions, then scientific breakthroughs, which facilitate reverse-engineering, should increase inventors' propensity to patent. The discovery of the periodic table in 1869 offers an opportunity to test this idea. Exhibition data show that patenting rates for chemical innovations increased substantially after the introduction of the periodic table, both over time and relative to other industries.