The NBER Reporter Spring 2003: News
NBER Research Associate N. Gregory Mankiw, Director of the National Bureau's Program of Research on Monetary Economics, has been nominated by President Bush to become Chairman of the Council of Economic Advisers. His appointment requires Senate confirmation. Mankiw replaces NBER Research Associate R. Glenn Hubbard, who will be returning to Columbia University.
Mankiw is the Allie S. Freed Professor of Economics at Harvard University. He received his A.B. from Princeton University and his Ph.D. from MIT. His NBER affiliation began in 1985, and he has twice served as Director of the NBER's Monetary Economics Program. Mankiw also edited the NBER volume Monetary Policy, which was published by the University of Chicago Press in 1994.
Four of the recent Chairs of the President's Council of Economic Advisers were also NBER Research Associates at the time of their nominations: Martin Feldstein, appointed by President Ronald Reagan; Michael Boskin, by President George H.W. Bush; Joseph Stiglitz, by President William J. Clinton; and R. Glenn Hubbard, by President George W. Bush.
NBER Research Associates Christina D. Romer and David H. Romer will succeed N. Gregory Mankiw as co-directors of the NBER's Program on Monetary Economics. Mankiw has been nominated by President George W. Bush to chair the Council of Economic Advisers.
Christina Romer is currently the Class of 1957 Professor of Economics at the University of California at Berkeley. She is a specialist in monetary economics and economic history, and has studied changes in American business cycles over time and the causes of the Great Depression. David Romer is the Herman Royer Professor in Political Economy at the University of California at Berkeley. He is a specialist in monetary economics and macroeconomic theory. He has conducted research on New Keynesian microeconomic foundations, inflation, and the determinants of cross-country income differences. Together the Romers have conducted a number of studies on the effects and determinants of American monetary policy. They are co-editors of the NBER volume Reducing Inflation: Motivation and Strategy.
Christina Romer received her B.A. from the College of William and Mary in 1981 and her Ph.D. in Economics from MIT in 1985. She has received the NSF Presidential Young Investigator Award and fellowships from the Sloan Foundation and the Guggenheim Foundation. David Romer received his A.B. from Princeton University in 1980 and his Ph.D. from MIT in 1985. He is the author of the leading graduate textbook in macroeconomics, Advanced Macroeconomics. Both taught at Princeton from 1985 to 1988 before joining the Berkeley economics faculty.
They are married and have three children.
Douglas J. Holtz-Eakin, who has been affiliated with the NBER since 1985, has been chosen to head the Congressional Budget Office. On leave as a professor of economics at Syracuse University, he most recently had been Chief Economist of the President's Council of Economic Advisers (CEA).
In the past, Holtz-Eakin held academic appointments at Columbia University and Princeton University. He also served as Senior Staff Economist of the CEA in 1989-90.
Holtz-Eakin's long-standing interest is in the economics of public policy. His most recent NBER Working Paper (No. 8261) analyzed the distortion resulting from income versus estate taxation; in Working Paper No. 7980 he discussed personal income taxes and the growth of small firms.
Ruth Prince Mack, a member of the NBER's research staff in New York City from the 1940s through the 1960s, died in early March. She was 99 years old.
Mack was the author of a 1954 Technical Paper titled Factors Influencing Consumption: An Experimental Analysis of Shoe Buying. Her other NBER work comprised two Studies in Business Cycles, published in 1956 and 1967, respectively: Consumption and Business Fluctuations: A Case Study of the Shoe, Leather, Hide Sequence and Information, Expectations, and Inventory Fluctuations: A Study of Materials Stock on Hand and on Order.
Cummins and Lewis examine the reaction of the stock prices of U.S. property-casualty insurers to the World Trade Center (WTC) terrorist attack of September 11, 2001. Theories of insurance market equilibrium and theories of long-term contracting predict that large loss events that deplete capital and increase uncertainty will affect weakly capitalized insurers more significantly than stronger firms. The results here are consistent with this prediction. Insurance stock prices generally declined following the WTC attack. However, the stock prices of insurers with strong financial ratings rebounded after the first post-event week, while those of weaker insurers did not, consistent with the flight-to-quality hypothesis.
The broad spectrum of domestic terrorism in the United States traditionally has been underwritten as an insurance risk without the need for elaborate risk management tools. The disparate aims, origins, and domiciles of the various U.S. terrorist groups provide an intrinsic degree of portfolio diversification, and risk accumulations are bounded by the limited damage objectives and capabilities of disaffected U.S. citizens. By comparison with the sizeable database of domestic terrorist acts, the record of foreign attacks is barren: until 9/11, the U.S. mainland had not been attacked by a foreign power since British forces burned the White House in 1814. Against a background of al-Qaeda threats to "destroy" America, insurers are obliged by the Terrorism Risk Insurance Act of 2002 to offer insurance coverage against foreign attacks. Woo outlines steps by which this coverage may be managed in a risk-informed manner, using knowledge of al-Qaeda modus operandi, expressed in a logical form suitable for decisionmaking on pricing and accumulating terrorism risk.
Russell provides estimates of the costs and benefits of the Terrorism Risk Insurance Act of 2002. He estimates the expected costs to the taxpayer of the federal insurance backstop to be around $6 billion. The chief benefit of the Act is the increase in construction jobs made possible by construction loans now protected against terrorism risk by the resuscitation of the private terrorism insurance market. Russell estimates the number of these jobs at around 65,000, substantially less than the administration claim of 300,000. Other benefits are likely to be small, suggesting that (at the rate of $10,000 per job created) the costs of the Act may substantially exceed the benefits.
Lakdawalla and Zanjani investigate the rationale for public intervention in the terrorism insurance market. They argue that government subsidies for terror insurance are aimed, in part, at discouraging self-protection and limiting the negative externalities associated with self-protection. Cautious self-protective behavior by a target can hurt public goods, including national prestige, if it is seen as "giving in" to the terrorists, and may increase the loss probabilities faced by others if it encourages terrorists to substitute toward more vulnerable targets. The authors argue that these externalities distinguish the terrorism insurance market and help to explain why availability problems in this market have engendered much stronger government responses than similar problems in other catastrophe markets.
Cohen tests the predictions of adverse selection models using data from the automobile insurance market. In contrast to what recent research has suggested, she finds that the evidence is consistent with the presence of informational asymmetries in this market: higher insurance coverage is correlated with more accidents. Consistent with the presence of learning by policyholders about their risk type, such a coverage correlation exists only for policyholders who have had three or more years of driving experience prior to joining their insurer. Consistent with the presence of learning by insurers about repeat customers, Cohen finds that, as the experience of the insurer with a group of policyholders increases, the coverage-accidents correlation declines in magnitude and eventually disappears. Finally, consistent with insurers having more information about their repeat customers than would be available to other insurers, she finds that policyholders under-report their past claims when joining the insurer and that policyholders who leave the insurer are disproportionately those with a poor claims history with the insurer.
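The coverage-accidents correlation at the heart of this test is easy to illustrate with a toy simulation (all numbers below are invented for illustration and are unrelated to Cohen's data): a positive correlation between coverage and accidents emerges only among policyholders who have already learned their own risk type.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
risk = rng.uniform(0.05, 0.30, size=n)      # true (privately learned) accident probability
experienced = rng.random(n) < 0.5           # has this driver learned their own risk type?

# Experienced drivers choose coverage based on their true risk; inexperienced
# drivers act on an uninformative signal, so their choice is unrelated to risk.
signal = np.where(experienced, risk, rng.uniform(0.05, 0.30, size=n))
coverage = (signal > 0.175).astype(float)   # 1 = high coverage
accidents = (rng.random(n) < risk).astype(float)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(corr(coverage[experienced], accidents[experienced]))    # clearly positive
print(corr(coverage[~experienced], accidents[~experienced]))  # approximately zero
```

In this stylized setting, the coverage-accidents correlation is an informative test of asymmetric information precisely because it disappears when policyholders do not yet know their own type.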
In practice, an age profile of premiums that decreases with age might result in such high premiums for younger individuals that insurance might be considered unaffordable. Herring and Pauly use medical expenditure data to estimate an optimal competitive age-based premium schedule for a benchmark renewable health insurance policy. They find that the amount of prepayment by younger individuals necessary to cover future claims is mitigated by three factors: high-risk individuals either will recover or die; low-risk expected expense increases with age; and the likelihood of developing a high-risk condition increases with age. The resulting optimal premium path generally increases with age. In addition, the authors find that actual premium paths exhibited by purchasers of individual insurance with guaranteed renewability are close to an optimal schedule.
Davidoff, Brown, and Diamond advance the theory of annuity demand in several new directions. First, they derive sufficient conditions under which complete annuitization is optimal, showing that this well-known result holds true in a more general setting than in Yaari (1965). Specifically, when markets are complete, sufficient conditions need not impose exponential discounting, intertemporal separability, or obedience of expected utility axioms on preferences; nor do annuities need be actuarially fair, or longevity risk the only source of consumption uncertainty. All that is required is that consumers have no bequest motive and that annuities pay a rate of return greater than that of otherwise matching conventional assets, net of administrative costs. Second, the authors show that full annuitization may not be optimal when markets are incomplete. Some annuitization is optimal as long as conventional markets are complete. The incompleteness of markets can lead to zero annuitization, but the conditions on both annuity and bond markets are stringent. Third, the authors extend the simulation literature that calculates the utility gains from annuitization by considering consumers whose utility depends both on present consumption and on "standard-of-living" to which they have become accustomed. The value of annuitization hinges critically on the size of the initial standard-of-living relative to wealth.
Mahul examines optimal catastrophic risk sharing arrangements within a pool whose financial resources may be insufficient to pay all valid claims in full. Under this threat of default on payment, the mutuality principle and the standard allocation of aggregate risk based on individual risk tolerances hold within a sub-pool of agents who decide to stay in the pool after a cataclysmic event. In the context of the insurance market, this constraint precludes obtaining a first-best optimal participating policy with full insurance and a variable premium. The second-best optimal insurance contract provides full (marginal) coverage above an ex post variable deductible. This deductible is such that the total claims paid to the policyholders and the financial resources of the pool are equalized. This innovative contract contrasts with current catastrophe insurance schemes based on pro rated indemnification should the insured losses exceed the capital available.
Boulatov and Jaffee connect the traditional financial and insurance literatures in the context of real option theory. They use the analogy between insurance and investment under uncertainty in order to study the general equilibrium in an insurance industry with heterogeneous competing firms. They show that even under the assumption of rationality (defined in the traditional sense), insurance companies should be treated as strategic agents when catastrophic events are possible. The authors derive the equilibrium investment (insurance policies) using a generalization of the standard model of partially reversible investment under uncertainty.
Doherty and Kleindorfer ask whether market insurance can occur naturally under conditions of ambiguity and, if so, what contractual and market structure it should assume. They find that, far from impeding insurance contracting, ambiguity alone can provide a sufficient basis for risk sharing of catastrophic losses. The gains from risk sharing between the parties cover a wide range of parameters and derive from two mechanisms. The first is differences in ambiguity aversion between the primary insurer and the reinsurer. Greater ambiguity aversion on the part of the former motivates reinsurance. This gain is quite intuitive. If the reinsurer dislikes ambiguity less than the insurers, then a price can be found such that both of them gain from the transfer of risk. The second mechanism comes from the catastrophic nature of the loss. Unless the catastrophic event hits all primary insurers to the same degree, the reinsurance mechanism can be used to diversify risk within the catastrophic loss state.
Heal and Kunreuther extend their earlier analysis of interdependent security issues to a general class of problems involving discrete interdependent risks. There is a threat of an event that can only happen once, and the risk depends on actions taken by others. Any agent's incentive to invest in managing the risk depends on the actions of others. Security problems at airlines and in computer networks come into this category, as do problems of risk management at organizations facing the possibility of bankruptcy, and individuals' choices about whether to be vaccinated against an infectious disease. Surprisingly, the framework also covers certain aspects of investment in R and D. Here, the authors extend their earlier analysis to cover heterogeneous agents and to characterize the tipping phenomenon.
The NBER's new Working Group on Entrepreneurship met in Cambridge on February 1. Josh Lerner, NBER and Harvard University, organized this program:
Joseph Schumpeter argued in Capitalism, Socialism and Democracy that the rise of large firms' investments in in-house R and D spelled the doom of the entrepreneur. Lamoreaux and Sokoloff explore this idea by analyzing the career patterns of three cohorts of inventors from the late nineteenth and early twentieth centuries. They find that, over time, highly productive inventors were increasingly likely to form long-term attachments with firms. In the Northeast, these attachments seem to have taken the form of employment positions within large firms, but in the Midwest inventors were more likely to become principals in firms bearing their names. Entrepreneurship, therefore, was by no means dead, but the increasing capital requirements -- both financial and human -- for effective invention, and the need for inventors to establish a reputation before they could attract support, made it more difficult for creative people to pursue careers as inventors. The relative numbers of highly productive inventors in the population correspondingly decreased, as did patenting rates per capita.
The theory Lazear proposes is that entrepreneurs are jacks-of-all-trades who may not excel in any one skill, but are competent in many. He presents a model of the choice to become an entrepreneur, the primary implication of which is that individuals with balanced skills are more likely than others to become entrepreneurs. The model has implications for the proportion of entrepreneurs by occupation and by income and yields a number of predictions for the distribution of income by entrepreneurial status. Using a dataset of Stanford alumni, Lazear tests the predictions and finds that they hold. In particular, by far the most important determinant of entrepreneurship is having a background in a large number of different roles. Further, income distribution predictions, for example, that there are a disproportionate number of entrepreneurs in the upper tail of the distribution, are borne out.
Bitler, Moskowitz, and Vissing-Jorgensen augment the standard principal-agent model to accommodate an entrepreneurial setting, in which effort, ownership, and firm size are determined endogenously. They test the model's predictions (some novel) using new data on entrepreneurial effort and wealth. Accounting for unobserved firm heterogeneity using instrumental variables, they find that entrepreneurial ownership shares increase with outside wealth, decrease with firm risk, and decrease with firm size. Effort increases with ownership and size, and both ownership and effort increase firm performance. The magnitude of the effects in the cross-section of firms suggests that agency theory is important for explaining the large average ownership shares of entrepreneurs.
Gans, Hsu, and Stern consider the impact of the intellectual property (IP) system on the timing of cooperation/licensing by start-up technology entrepreneurs. While productive efficiency considerations argue in favor of early licensing, delays in the granting of patent rights induce an informational asymmetry between the inventor and potential licensees. Employing a dataset combining information about the timing of patent grants and cooperative licensing, the authors establish three key findings: 1) pre-grant licensing is quite common, occurring in more than 50 percent of their sample; 2) the prevalence of pre-grant licensing varies across different economic environments; and 3) the hazard rate for achieving a cooperative licensing agreement nearly doubles with the granting of formal IP rights. Though additional work remains to be done, these findings suggest that uncertainties in the patent system may delay efficient bargaining by serving as a source of informational asymmetry between start-up innovators and other firms crucial to the commercialization process.
Klepper analyzes the evolution of the geographic distribution of producers in the television receiver and automobile industries. Both industries experienced sharp shakeouts and evolved to be oligopolies, suggestive of increasing returns. The television receiver industry initially was concentrated regionally but evolved to be more dispersed over time. In contrast, the automobile industry initially was dispersed but evolved to be heavily concentrated around one city, Detroit, MI, which initially had no producers. Neither pattern conforms to theories that portray agglomerations as being beneficial to the firms that populate them. Klepper develops and tests an alternative theory to explain the geographic evolution of the two industries. Firms are assumed to differ in terms of their initial competence at the time of entry, which shapes their long-term performance. They acquire their competence from firms in related industries and prior entrants into the new industry. He analyzes the location and performance of entrants in both industries and shows how the differential importance of the two sources of competence plays a critical role in explaining the contrasting evolution of the geographic structure of the two industries.
The NBER's Program on Economic Fluctuations and Growth met in San Francisco on February 7. Fernando Alvarez, NBER and University of Chicago, and Robert King, NBER and Boston University, organized this program:
Ang, Piazzesi, and Wei build a dynamic model for GDP growth and yields that completely characterizes expectations of GDP. The model does not permit arbitrage. Contrary to previous studies, this paper concludes that the short rate has more predictive power than any term spread. The authors confirm this finding by forecasting GDP out-of-sample. The model also recommends the use of lagged GDP and the longest maturity yield to measure slope. Greater efficiency enables the yield-curve model to produce better out-of-sample GDP forecasts than unconstrained ordinary least squares at all horizons.
Athey, Atkeson, and Kehoe analyze monetary policy design in an economy with an agreed-upon social welfare function that depends on the randomly fluctuating state of the economy. The monetary authority has private information about that state. In the model, well-designed rules trade off society's desire to give the monetary authority flexibility to react to its private information against society's need to guard against the standard time-inconsistency problem arising from the temptation to stimulate the economy with unexpected inflation. The authors find that the optimal degree of monetary policy discretion is decreasing in the severity of the time-inconsistency problem. As this problem becomes sufficiently severe, the optimal degree of discretion is zero. They also find that, despite the apparent complexity of this dynamic mechanism design problem, society can implement the optimal policy simply by legislating an inflation cap that specifies the highest allowable inflation rate.
Jovanovic and Rousseau argue that takeovers have played a major role in speeding up the diffusion of new technology. The role of takeovers is similar to that of entry and exit of firms. The authors focus on and compare two periods: 1890-1930, during which electricity and the internal combustion engine spread through the U.S. economy, and 1971-2001, the Information Age.
Veracierto evaluates how well a real business cycle (RBC) model that incorporates search and leisure decisions simultaneously can account for the observed behavior of employment, unemployment, and being out of the labor force. This work contrasts with the previous RBC literature, which analyzed employment or hours fluctuations either by lumping together unemployment and out-of-the-labor-force into a single non-employment state or by assuming fixed labor force participation. Once the three employment states are introduced explicitly, Veracierto finds that the RBC model generates highly counterfactual labor market dynamics.
Chang and Kim investigate the mapping from individual to aggregate labor supply using a general equilibrium heterogeneous-agent model with an incomplete market. They calibrate the nature of heterogeneity among workers using wage data from the Panel Study of Income Dynamics. The gross worker flow between employment and nonemployment, and the cross-sectional earnings and wealth distributions in the model, are comparable to those in the micro data. The authors find that the aggregate labor supply elasticity of such an economy is around one, bigger than micro estimates but smaller than those often assumed in aggregate models.
Khan and Thomas develop an equilibrium business cycle model in which final goods' producers pursue generalized inventory policies with respect to intermediate goods, a consequence of nonconvex factor adjustment costs. Calibrating the model to reproduce the average inventory-to-sales ratio in postwar U.S. data, the authors find that it explains half of the cyclical variability of inventory investment. Moreover, inventory accumulation is strongly procyclical, and production is more volatile than sales, as in the data. The model economy exhibits a business cycle similar to that of a comparable benchmark without inventories, although the authors do observe somewhat higher variability in employment and lower variability in consumption and investment. Thus, equilibrium analysis, which necessarily endogenizes final sales, alters our understanding of the role of inventory accumulation for cyclical movements in GDP. The presence of inventories does not substantially raise the variability of production, because it dampens movements in final sales.
The NBER's Program on Industrial Organization met at the Bureau's California Office on February 7 and 8. Dennis W. Carlton and Austan Goolsbee, both of NBER and University of Chicago, organized the program meeting, at which these papers were discussed:
Greenstein and Mazzeo examine the role of differentiation strategies in the development of markets for local telecommunication services in the late 1990s. The prior literature has used models of interaction among homogenous firms, but this paper is motivated by the claim that entrants differ substantially in their product offerings and business strategies. Exploiting a new, detailed dataset of CLEC (Competitive Local Exchange Carriers) entry into over 700 U.S. cities, the authors take advantage of recent developments in the analysis of entry and competition among differentiated firms. They find strong evidence that CLECs take account of both potential market demand and the business strategies of competitors when making their entry decisions. This suggests that firms' incentives to differentiate their services should shape the policy debate for competitive local telecommunications.
When quality is produced with fixed costs, a high-quality firm can undercut its rivals' prices and may find it profitable to invest more in quality as market size grows. As a result, a market can remain concentrated even as it grows large. By contrast, when quality is produced with variable costs, a wide range of product qualities can coexist in the market because they are offered at different prices. Larger markets will fragment and offer products with a wider range of qualities. Using U.S. urban areas as markets, Perry and Waldfogel examine the relationships between market size and product quality -- and between market size and product concentration -- for two industries that differ in their quality production process. The authors document that in the restaurant industry, where quality is produced largely with variable costs, the range of qualities increases with market size, with each "product" maintaining a small market share. In daily newspapers, where quality is produced with fixed costs, the average quality of products increases with market size, and the market does not fragment as it grows.
Bamberger, Carlton, and Neumann empirically investigate the effect of two recent domestic airline alliances. They find that both alliances benefited consumers: average fares fell by about 5 to 7 percent on the affected city pairs after the creation of the alliances. They also find that total traffic increased 6 percent after the creation of at least one of the alliances. The average fare and traffic effects arise in part because the alliance partners' rivals respond to the increased competition from an alliance. Finally, the authors find that the size of the alliance effect on average fares depends on the pre-alliance level of competition on a city pair, with the effect being larger on those city pairs where the level of competition was initially relatively low.
Petrin and Train describe two methods for correcting an omitted variables problem in discrete choice models: a fixed effects approach and a control function approach. The control function approach is easier to implement and applicable in situations for which the fixed effects approach is not. The authors apply both methods to a cross-section of disaggregated data on customers' choice among television options, including cable, satellite, and antenna. As theory predicts, the estimated price response rises substantially when either correction is applied. All of the estimated parameters and the implied price elasticities are very similar for both methods.
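The control function logic can be sketched in a few lines. In this toy example (synthetic data; not Petrin and Train's dataset or their exact specification), price is correlated with an unobserved quality term, so a naive logit understates the price response; including the first-stage residual as an extra regressor corrects the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
z = rng.normal(size=n)                       # instrument (e.g., a cost shifter)
xi = rng.normal(size=n)                      # unobserved quality, correlated with price
price = 1.0 + 0.8 * z + 0.5 * xi + rng.normal(scale=0.3, size=n)
# True utility: price coefficient is -1.0; omitting xi biases a naive logit.
y = (-1.0 * price + xi + rng.logistic(size=n) > 0).astype(float)

# Stage 1: regress the endogenous price on the instrument; keep the residual.
X1 = np.column_stack([np.ones(n), z])
v = price - X1 @ np.linalg.lstsq(X1, price, rcond=None)[0]   # control function term

def fit_logit(X, y, iters=25):
    """Logit estimated by Newton-Raphson."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1.0 - p))[:, None])             # Hessian of the log-likelihood
        b += np.linalg.solve(H, X.T @ (y - p))               # Newton step
    return b

# Stage 2: logit of the choice on price, with and without the control function.
b_naive = fit_logit(np.column_stack([np.ones(n), price]), y)
b_cf = fit_logit(np.column_stack([np.ones(n), price, v]), y)
print(b_naive[1], b_cf[1])   # the corrected price coefficient is more negative
```

Here the correction moves the estimated price coefficient from well above -1.0 back toward its true value; the fixed effects alternative would instead absorb the omitted quality term with group dummies.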
Two salient features of the competitive structure of the U.S. mutual fund industry are the large number of funds and the sizeable dispersion in the fees funds charge investors, even within narrow asset classes. Differences in portfolio financial performance alone do not seem able to fully explain these features. Hortaçsu and Syverson focus on the retail S&P 500 index funds sector, where they find similar patterns of fund proliferation and price dispersion. This suggests that costly investor search and non-portfolio fund differentiation may play an important role in the mutual fund industry. To quantify the welfare impact of such factors in the market for mutual funds, the authors construct a model of industry equilibrium in which consumers conduct costly search over differentiated products. Using panel data on fund fees and market shares in the retail S&P 500 index fund sector, the authors find that fairly small search costs can explain the considerable price dispersion in the sector. Further, consumers value funds' observable non-portfolio attributes -- such as fund age and the number of other funds in the same fund family -- in largely plausible ways. Finally, the authors investigate the possibility that there are too many funds in the sector from a social welfare standpoint. They quantify the welfare impact of a counterfactual sector structure where entry is restricted to a single fund; they find that restricting entry would yield nontrivial gains from reduced search costs and productivity gains from scale economies. However, these may be counterbalanced by sizeable losses from monopoly market power and reduced product variety.
What is the role of firms and markets in mediating the division of labor? Garicano and Hubbard use confidential microdata from the Census of Services to examine law firms' boundaries. First they examine how the specialization of lawyers and firms increases as lawyers' returns to specialization increase. In fields where lawyers increasingly specialize with market size, the relationship between the share of lawyers who work in a field-specialized firm and market size indicates whether firms or markets more efficiently mediate relationships between lawyers in this and other fields. The authors then examine which pairs of specialists tend to work in the same versus different firms; this provides evidence on the scope of firms that are not field-specialized. They find that whether firms or markets mediate the division of labor varies across fields in a way that corresponds to differences in the value of cross-field referrals, consistent with Garicano and Santos's (2001) proposition that firms facilitate specialization by mediating exchanges of economic opportunities more efficiently than markets.
In 1997 there was an important change in direct-to-consumer (DTC) advertising of ethical drugs. For the first time, the Food and Drug Administration (FDA) permitted brand-specific DTC ads on TV without a "brief summary" of comprehensive risk information. This led to a three-fold growth in DTC advertising expenditure over four years, followed by an intensive debate about the effects of DTC advertising on patient and doctor behaviors. Iizuka and Jin empirically examine the effects of DTC ads on ethical drugs by combining 1996-99 DTC advertising data with the annual National Ambulatory Medical Care Survey (NAMCS). The authors find that DTC advertising leads to a large increase in the number of outpatient drug visits and a moderate increase in the time spent with doctors, but has no effect on doctors' specific choice among prescription drugs within a therapeutic class. Consistent with the proponents' claim, this finding suggests that DTC ads encourage patient visits but do not challenge doctors' authority in the specific choice of prescription drugs. The authors cannot rule out the possibilities, however, that DTC ads may induce doctors to use prescription drugs over alternative treatments, and that doctors may spend extra time clarifying DTC ads if they do not prescribe the most advertised drug(s). The results suggest that the effect of DTC advertising is primarily market-expanding rather than business-stealing, and therefore DTC advertising is a public good for all drugs in the same therapeutic class.
A new NBER Working Group on Personnel Economics met in Cambridge on March 6 and 7. Edward P. Lazear, NBER and Stanford University, organized the meeting, at which these papers were discussed:
Early statements on internal labor markets view firms as consisting of ports-of-entry jobs and other jobs. Workers are hired into the former and promoted to the latter. In the strictest form, external hiring only takes place at certain job levels and thereafter workers are insulated from the forces of market competition. Lazear and Oyer use data from the Swedish Employers' Confederation to determine the existence of ports of entry in firms that represent a large part of the Swedish economy. Although there is a great deal of promotion from within, at every level there remains significant hiring from the outside. The data are more consistent with tournament theory, or with theories of firm-specific human capital, than they are with the more rigid institutional views.
Bowlus and Vilhuber lay out a search model that explicitly takes into account the information flow prior to a mass layoff. Using universal wage data files that allow them to identify individuals working with healthy and displacing firms, both at the time of displacement and at any other time period, the authors test the predictions of the model about re-employment wage differentials. Workers leaving a "distressed" firm have higher re-employment wages than workers who stay with the distressed firm until displacement. This result is robust to the inclusion of controls for worker quality and unobservable firm characteristics.
Huck, Kübler, and Weibull study the interplay between economic incentives and social norms in firms. They outline a simple model of team production, in which workers' efforts are substitutes, and analyze this situation under different social norms. The main focus is on "efficiency norms," that is, norms that arise from workers' desire for, or peer pressure towards, social efficiency for the team as a whole. The authors examine the possibility of multiple equilibria and the effect of economic incentives on the set of equilibria, in particular whether economic incentives that are too strong may knock out efficient equilibria. Partnerships, complementarity in production, stochastic production, and alternative incentive schemes also are considered.
Kwon and Milgrom use data from the private sector in Sweden to look at how institutional settings influence firms' recruitment strategies, and the interaction between occupation boundaries and firm boundaries. The Swedish data encompass entire populations of establishments in the private sector, including characteristics of employees and occupations and information about wages and work hours. The data also cover entire subpopulations of workers in the private sector for a 20-year period. The authors ask: What were the hiring patterns of Swedish private employers in 1978 and 1988? And how did these patterns affect individual white-collar workers' pay? They find that employers commonly hire from both within and outside the firm at all the different ranks in the firm, contradicting simple theories that higher ranks are filled by promotion from within the firm. Smaller firms tend to hire more from outside, and to higher ranks, than larger firms. Large firms tend to fill job slots from different occupations but from within the firm. Filling jobs with outside hires (irrespective of within or outside occupation) is most common for the top ranks. White-collar workers' wages increase with occupation tenure and general labor market tenure, possibly reflecting the accumulation of occupation-specific and general human capital, but wages decrease with firm tenure. Generally, hiring appears restricted as much by occupation-specific and general human capital as by the boundaries of the firm.
Grund compares the wage policies of a German and a U.S. firm, focusing on the relationship between wages and hierarchies. Prior studies examined only one particular firm; in this paper, two plants with the same owners and similar production processes, but in different institutional environments, are examined. Grund finds convex wage profiles over the hierarchy levels of both plants. The U.S. plant shows considerably greater intensity of intra-firm competition, in terms of higher intra-level wage inequality and a higher yearly promotion rate. In contrast, wages are more closely tied to hierarchy levels in the German firm, as the wage regressions show.
Frederiksen and Westergaard-Nielsen study individual job separations and their associated destination states for all individuals in the private sector in Denmark during 1980-95, documenting their magnitude and cyclical patterns. The authors find that individual and workplace characteristics as well as business cycle effects are important in explaining individual behavior. They find that structural and growth policies reduce transitions into unemployment but, in general, have different implications for the economy. Policy interventions targeting displaced workers coming from plant closures are inefficient, the authors argue.
How much do responses to firm opinion surveys vary among workplaces as opposed to among workers? Is there a genuine "workplace effect" in employee opinion surveys? To the extent that systematic differences in attitudes exist across workplaces, do these differences help predict workplace economic outcomes, such as productivity or turnover? Bartel, Freeman, Ichniowski, and Kleiner examine these questions across branches of a large commercial bank in the New York metropolitan area. The bank provided files from its 1994 and 1996 employee opinion surveys under the condition that the authors not use its name in the publication of their results. The sample contains data on 2,245 employees working in 193 New York area bank branches in 1994, and 1,439 employees working in 142 branches in 1996. The smaller sample sizes for 1996 are attributable to closings of 51 branches between 1994 and 1996. The authors supplement the bank data with information from the National Longitudinal Survey of Youth (NLSY) on job satisfaction for the same individual over time. Finally, they combine the employee attitude data with information on the branches' financial performance, characteristics of the branches' local markets, and characteristics of the branch employees, to see whether branch-level attitudes help predict employee turnover and productivity in the two cross sections and over time. They conclude that there is a genuine branch, or workplace, effect on how workers view their workplace. Differences in attitudes are highly positively correlated among branches over time; this suggests that workplace effects are strongly persistent. The NLSY data show a higher correlation of attitudes for the same worker at a given workplace than at different workplaces; this also points to the existence of a genuine workplace effect. Further, branches where workers have more favorable attitudes toward the firm have lower turnover and higher productivity.
Lengermann argues that relying on wages as a proxy for skill may be problematic. Using a newly developed longitudinal dataset linking virtually the entire universe of workers in the state of Illinois to their employers, he decomposes wages into components due, not only to person and firm heterogeneity, but also to the characteristics of their co-workers. Such "co-worker effects" capture the impact of a weighted sum of the characteristics of all workers in a firm on each individual employee's wage. Lengermann relies on the person-specific component of wages to proxy for co-worker "skills." Because these person effects are unknown ex ante, he first obtains them from a preliminary regression that excludes any role for co-workers. His estimates imply that a one-standard-deviation increase in both a firm's average person effect and experience level is associated, on average, with wage increases of 3 percent to 5 percent. Firms that increase the wage premiums they pay workers appear to do so in conjunction with upgrading worker quality. Interestingly, the average effect masks considerable variation in the relative importance of co-workers across industries. After allowing the co-worker parameters to vary across two-digit industries, he finds that industry average co-worker effects explain 26 percent of observed inter-industry wage differentials. Finally, he decomposes the overall distribution of wages into components due to persons, firms, and co-workers. While co-worker effects indeed serve to exacerbate wage inequality, the tendency for high- and low-skilled workers to sort non-randomly into firms plays a considerably more prominent role.
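The two-step logic of this decomposition (recover person effects first, then ask how a firm's average person effect moves wages) can be sketched in a toy simulation. This is a minimal illustration, not Lengermann's actual specification: the sample sizes, the leave-one-out co-worker average, and the 0.4 "true" co-worker effect are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy panel: worker skill (person effect), firm premium, co-worker quality.
n_firms, workers_per_firm = 50, 20
person = rng.normal(0, 1, n_firms * workers_per_firm)                 # theta_i
firm_prem = np.repeat(rng.normal(0, 0.5, n_firms), workers_per_firm)  # psi_j
firm_id = np.repeat(np.arange(n_firms), workers_per_firm)

# Firm-average person effect excluding the worker herself (co-worker quality).
firm_sum = np.bincount(firm_id, weights=person)
coworker_avg = (firm_sum[firm_id] - person) / (workers_per_firm - 1)

# Wage = person effect + firm premium + gamma * co-worker quality + noise.
gamma_true = 0.4
wage = person + firm_prem + gamma_true * coworker_avg + rng.normal(0, 0.1, person.size)

# Step 1 (as in the paper's spirit): person effects would come from a
# preliminary regression with no co-worker term; here we use the known
# `person` directly.  Step 2: regress wages on co-worker quality after
# partialling out own skill and the firm premium.
resid_w = wage - person - firm_prem
gamma_hat = np.linalg.lstsq(coworker_avg[:, None], resid_w, rcond=None)[0][0]
print(f"estimated co-worker effect: {gamma_hat:.2f}")  # close to 0.4
```

The point of the sketch is only that, once own skill and the firm premium are held fixed, the co-worker average carries its own identifiable wage effect.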
Eriksson and Ortega test three theories for why firms introduce job rotation schemes: employee learning, employer learning, and employee motivation. The earlier literature used either information about establishment characteristics or data coming from personnel records of a single firm. In order to improve upon this, the authors use a unique dataset constructed by merging information from a fairly detailed survey directed at Danish private sector firms with linked employer-employee panel data. This allows them to include firm and workforce characteristics as well as firms' human resource management practices as explanatory variables, and hence to carry out a more comprehensive analysis.
The dynamism of the U.S. economy is particularly apparent in the reallocation of workers into, out of, and within the labor market. Knowing the order of magnitude of this reallocation is important for a variety of empirical exercises -- not least the estimation of the empirical matching function. Golan and Lane analyze the dynamics of worker reallocation for the state of Illinois -- both for the entire workforce and for demographic subgroups. They perform their estimation with minimal distributional assumptions. The main results are that even among the more stable groups of workers there is a great deal of reallocation, and that the reallocation behavior of different groups (gender, low/high wage, age) differs significantly.
The NBER's Program on Development of the American Economy met in Cambridge on March 8. Program Director Claudia Goldin of Harvard University organized the meeting. The following papers were discussed:
Economic and social theorists have modeled race and ethnicity as one form of personal identity produced in response to the costliness of adopting and maintaining a specific identity. Bodenhorn and Ruebeck look at the free African-American population in the mid-nineteenth century to investigate the costs and benefits of adopting alternative racial identities. During this period light-skinned African-Americans could, and often did, choose to differentiate themselves from dark-skinned African-Americans. The authors model the choice as an extensive-form game, in which whites choose whether to accept a separate mulatto identity and mixed-race individuals then choose whether to adopt a mulatto identity. Adopting a mulatto identity generates pecuniary gains, but imposes psychic costs. The authors quantify "the complexion gap" and find that mulattoes held significantly more wealth than blacks. Finally, they relate the complexion gap to community factors and find that the benefits of adopting a mulatto identity increased with the absolute size of the mulatto community, but decreased as the mulatto percentage of the African-American population increased at the neighborhood and city level. Thus, mulattoes benefited from white preferences when they represented a modest share of the African-American population. Yet if most African-Americans in a city were light-skinned, they became black in the eyes of whites and received no special treatment.
Costa notes that differentials between blacks and whites in both birth weights and in prematurity and stillbirth rates have persisted over the entire twentieth century. Differences in prematurity rates explain a large proportion of the black-white gap in birth weights, she finds, both among babies delivered by Johns Hopkins physicians in the early twentieth century and among babies in the 1988 National Maternal and Infant Health Survey. In the early twentieth century, untreated syphilis was the primary observable explaining differences in black-white prematurity and stillbirth rates. Today the primary observable explaining differences in prematurity rates is the low marriage rate of black women. Maternal birth weight accounts for 5-8 percent of the gap in black-white birth weights in the recent data, suggesting a role for intergenerational factors. The Johns Hopkins data also illustrate the value of breast-feeding in the early twentieth century: black babies fared better than white babies in terms of mortality and weight gain during the first ten days of life spent in the hospital, largely because they were more likely to be breast-fed.
Moriguchi studies the dynamic evolution of the human resources management (HRM) practices of American corporations during the 1920s and 1930s. It has been claimed that private welfare capitalism -- employers' provision of non-wage benefits, greater employment security, and employee representation to their blue-collar workers -- collapsed during the Great Depression and was replaced by the welfare state and industrial unionism under the New Deal regime. However, the recent literature reveals considerable differences among firms. Using data from 14 elite manufacturing firms, Moriguchi tests the implications of implicit contract theory and investigates the effect of the Depression on welfare capitalism and the subsequent development of corporate HRM practices. She identifies a positive relationship between the severity of the Depression and the degree of repudiation, but also finds that firms with a higher commitment to workers were less likely to repudiate and more likely to remain unorganized and retain implicit contractual relations.
It is often argued that branching stabilizes banking systems by facilitating diversification of bank portfolios; however, previous empirical research on the Great Depression cannot be reconciled with this view. Analyses using state-level data find that states allowing branch banking had lower failure rates, while those examining individual banks find that branch banks were more likely to fail. Carlson and Mitchener argue that an alternative hypothesis can reconcile these seemingly disparate findings. Using data on national banks from the 1920s and 1930s, the authors show that branch banking increases competition and forces weak banks to exit the banking system. This consolidation strengthens the system as a whole without necessarily strengthening the branch banks themselves.
In 1900 bovine tuberculosis represented a growing threat to both animal and human health. In 1917, shortly after a series of scientific breakthroughs allowed the early detection of TB in cattle, the USDA embarked on a national campaign to eradicate the disease. This was a wholly unprecedented and highly controversial effort, with state and federal agents inspecting nearly every cattle farm in the country, testing the animals, and condemning nearly 4 million reactors to slaughter without full compensation. Olmstead and Rhode analyze how the eradication program functioned, how incentives were aligned to ensure widespread participation without excessive moral hazard problems, and why the United States led most European nations in controlling the disease. The U.S. campaign was a spectacular success. For the farm sector alone, the annual benefits ranged between five and twelve times the annual costs. Yet this represented only a small part of the story, because the most important benefit was the reduction in human death and suffering.
Electricity and information technology (IT) are perhaps the two most important general-purpose technologies (GPTs) to date. Jovanovic and Rousseau analyze how the U.S. economy reacted to them. The Electricity and IT eras are similar, but they also differ in important ways. Electrification was adopted more broadly, whereas IT seems to be technologically more revolutionary. The productivity slowdown is stronger in the IT era, but the ongoing spread of IT and its continuing precipitous price decline are reasons for optimism about growth in the coming decades.
The NBER's Program on Health Economics met in Cambridge on March 14. Program Director Michael Grossman organized the meeting, at which these papers were discussed:
Microeconomic analyses typically suggest that worker health makes an important contribution to productivity and wages. Weil (2001) uses estimates of the individual-level relationship between health and wages to calibrate an aggregate production function and suggests that differences in health are roughly as important as differences in education in explaining cross-country differences in gross domestic product per worker. Bloom, Canning, and Sevilla directly estimate the effect of health on worker productivity using cross-country macroeconomic data. They find a positive and significant effect. In addition, the estimated effect of health on aggregate output is consistent with the size of the effect found in microeconomic studies.
In the past, many economists have treated smokers and addicts as rational, time-consistent utility-maximizers. In recent years, that view has come under attack by those who argue that smokers and other addicts exhibit time-inconsistency and problems of self-control. This conflict is significant but difficult to resolve, because the two theories have very similar positive implications but wildly different normative ones. However, while the positive implications are qualitatively similar, they differ quantitatively. Bhattacharya and Lakdawalla use this fact and ask which model better fits the actual lifetime decision making patterns of smokers. Using repeated cross-sectional data from the National Health Interview Surveys, they estimate structural models of rational addiction and time-inconsistency. Their results reveal surprisingly little evidence in favor of time-inconsistency.
Compliance with anti-diabetic medications is crucial to reducing complications such as blindness, amputations, heart disease, and stroke among diabetics. Dor and Encinosa examine compliance within 90 days after the completion of anti-diabetic drug prescriptions. About a third of the population never complies, a third always complies, and the remaining third partially complies. The authors find that higher drug coinsurance rates reduce compliance, after controlling for chronic conditions, number of previous refills, and demographic characteristics. An increase in coinsurance from 20 percent to 75 percent increases the share of those who never comply by 27 percent and reduces the share of fully compliant persons by almost 11 percent. An increase in the copayment from $6 to $10 results in a 13 percent increase in the share of non-compliant persons, and a nearly 11 percent reduction in the share of fully compliant persons. This same increase in copayment would reduce annual drug costs nationally by $177 million, simply by increasing non-compliance. But this increase in non-compliance also would increase the rate of diabetic complications, resulting in an additional $433.5 million in costs annually.
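Taken together, the reported figures imply that the copayment increase raises total costs on net. A quick back-of-the-envelope check, using only the numbers quoted in the summary:

```python
# Back-of-the-envelope using the figures reported by Dor and Encinosa:
# raising the anti-diabetic copayment from $6 to $10 trims national drug
# spending but raises complication costs through worse compliance.
drug_savings = 177.0        # $ millions per year saved on drug costs
complication_costs = 433.5  # $ millions per year in added complications
net_change = complication_costs - drug_savings
print(f"net annual cost increase: ${net_change:.1f} million")  # $256.5 million
```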
Joyce, Kaestner, Korenman, and Henshaw analyze the association between state and federal welfare reform and births and abortions. State reform consists of a series of waivers from the rules governing Aid to Families with Dependent Children (AFDC) granted prior to the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA). The authors also examine the association between implementation of Temporary Assistance for Needy Families (TANF), the program that replaced AFDC following PRWORA, and births and abortions. They then look more closely at one aspect of reform, the family cap, and its association with reproductive choices. In order to carry out these analyses, they collected and analyzed individual abortion records from 21 states, the largest compilation of such data ever attempted. There is some evidence of an increase in abortion associated with TANF. Among blacks, abortions rise after TANF among older women, but there is no decline in births. There is also a rise in the abortion ratio among black women with two or more live births in states with family caps. Overall, there is at best only a modest change in births and abortions associated with TANF.
Reichman, Corman, and Noonan use data from the national longitudinal Fragile Families and Child Wellbeing Study of mostly unwed parents to estimate how poor child health affects one potential human resource available to that child: the presence of a father. The authors look at whether parents are living in the same household one year after the child's birth and also, more generally, at how their relationships changed along a continuum (married, cohabiting, romantically involved, friends, or not involved) during the same one-year period. Since it may not be easy to characterize poor child health as a random event, the authors account for the potential endogeneity of child health in their models. They find that having an infant in poor health reduces the likelihood that parents will live together and increases the likelihood that they will become less committed to their relationship.
LoSasso and Buchmueller present the first national estimates of the effects of the SCHIP expansions on insurance coverage. Using CPS data on insurance coverage during the years 1996 through 2000, they find that SCHIP had a small, but statistically significant positive effect on insurance coverage. Between 4 percent and 10 percent of children who meet income eligibility standards for the new program gained public insurance. These estimates indicate that states were more successful in enrolling children in SCHIP than they were with prior Medicaid expansions focused on children just above the poverty line. Crowd-out of private health insurance was in line with estimates for the Medicaid expansions of the early 1990s, between 18 percent and 50 percent.
The NBER's Program on Productivity met in Cambridge on March 14. Shane Greenstein, NBER and Northwestern University, organized the meeting. These papers were discussed:
Atrostic and Nguyen use new plant-level data on information technology (IT) collected by the U.S. Census Bureau to provide evidence on the labor productivity impact of IT across U.S. manufacturing plants in IT-producing and IT-using industries (defined under the North American Industry Classification System). Previous plant-level studies examining the link between productivity and computers or other IT in the United States typically focused on the presence of computers, using either data on the stock of computer capital or current IT or computer investment as proxies for the computer stock. Studies that have detailed information on IT generally have a relatively small sample, do not include smaller plants, or are limited to specific manufacturing industries. The data for this study, in contrast, are collected from about 30,000 plants across the U.S. manufacturing sector. The authors use a direct measure of IT: information on the presence of computer networks. They find that computer networks have a positive and significant effect on labor productivity after they control for other important factors, such as capital intensity and other plant characteristics, and even after taking account of possible endogeneity of the computer network variable. Also, small plants appear to use computer networks more efficiently than large plants.
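The core regression here, log labor productivity on a computer-network indicator plus controls, can be sketched on simulated plant data. This is only an illustration of the specification's shape: the sample size, coefficient values, and variables below are invented, not the Census microdata or the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000  # toy cross-section of plants (illustrative, not the ~30,000 plants)

# Simulated plant data: capital intensity and a computer-network dummy.
log_k_per_l = rng.normal(4, 1, n)              # log capital per worker
network = (rng.random(n) < 0.6).astype(float)  # plant has a computer network
b_net_true = 0.05                              # assumed "true" network effect
log_prod = 2.0 + 0.3 * log_k_per_l + b_net_true * network \
           + rng.normal(0, 0.4, n)

# OLS of log productivity on an intercept, capital intensity, and the dummy.
X = np.column_stack([np.ones(n), log_k_per_l, network])
beta = np.linalg.lstsq(X, log_prod, rcond=None)[0]
print(f"network effect on log productivity: {beta[2]:.3f}")
```

In the paper itself the authors go further, instrumenting for the possible endogeneity of the network dummy; the plain OLS above is just the baseline.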
Using two panels of U.S. manufacturing industries, Bessen estimates capital adjustment costs from 1961 to 1996. He finds that adjustment costs rose sharply from 1974-83: they more than doubled, from about 3 percent of output to around 7 percent. Moreover, this increase is specifically associated with a shift to investment in IT. But such large adoption costs imply that the Solow residual mismeasures productivity growth: adoption costs are resource costs that represent an unmeasured investment. Bessen finds that when this investment is included, productivity grew about 0.5 percent per year faster than official measures during the 1970s and early 1980s, reducing the size of the productivity "slowdown." Indeed, estimated productivity growth rates were roughly the same from 1974-88 as from 1949-73. Thus technology transitions critically affect productivity growth measurement.
Elfenbein and Lerner test theoretical propositions from the technology licensing literature and from the literature on information and control in alliances, using a sample of over 100 Internet portal alliance contracts. The technology licensing literature suggests that one of the major factors driving firms' decisions to transact exclusively for an innovation is the magnitude or importance of the innovation. The authors find some support for this hypothesis, but in this setting exclusivity decisions also relate to other factors. The literature on information and control in alliances suggests that the use of verifiable performance measures to allocate state-contingent decision rights depends on the level of information asymmetry between the two parties and on the precision of the information. The authors test these propositions by looking at how the timing of agreements (a proxy for environmental uncertainty) and exclusivity restrictions (a proxy for incentive conflict) affect the use of a subset of available performance measures. Consistent with the literature, they find that contracts involve fewer contingencies as industries have matured. Where incentive conflicts are potentially greater, more contingencies are used.
Flamm starts by describing the history of Moore's Law and explains why it has such potentially wide-ranging consequences. He then shows how a Moore's Law prediction must be coupled with other assumptions in order to produce an economically meaningful link to the key economic variable of the information age: the cost or price of electronic functionality, as implemented in a semiconductor integrated circuit. Flamm then relates the historical evolution of semiconductor prices through the mid-1990s to developments in several key parameters over this period. He surveys the evidence on the acceleration in the rate of decline of leading-edge semiconductor prices in the mid-1990s and suggests that the measured increases in historical rates of decline seem unlikely to persist. Finally, he explores the nature of the man-made historical and institutional economic processes that made these technical accomplishments possible, and argues that their consequence has been an unappreciated but radical transformation of the industrial framework in which R&D is undertaken within the global semiconductor industry.
The personal computer Central Processing Unit (CPU) has undergone a dramatic improvement in quality, accompanied by an equally remarkable drop in prices in the 1990s. How have these developments in the CPU market affected consumer welfare? Song estimates demand for CPUs and measures consumer welfare. The welfare calculations show that consumer surplus makes up approximately 90 percent of the total social surplus and that a large part of the welfare gains comes from the introduction of new products. Simulation results show how "quality competition" among firms can generate a large gap between the reservation and actual prices.
Dougherty, Inklaar, McGuckin, and Van Ark examine the structure of R&D internationally using information collected in interviews of 25 multinational companies in four high-tech industries from the United States, European Union, and Japan. They look at the composition of R&D activities within the firm, how they are structured internationally, and how they are changing. The focus is on how the production of research and of development differs, based on the relative uncertainty of research output as compared to development. The economic distinctions between research and development have broad implications for how companies allocate their resources, and they help to explain recent trends, including research becoming more concentrated in the United States while the development of commercial products becomes more dispersed worldwide. The authors examine changes in international R&D costs using new purchasing power estimates for R&D and compare these to trends in R&D investments internationally from 1987 to 1997.
The NBER's Program on International Finance and Macroeconomics met in Cambridge on March 21. Richard Lyons, NBER and University of California, Berkeley, and Andres Velasco, NBER and Harvard University, organized this program:
Monetary instruments differ in their transparency -- how easy it is for the public to monitor the instrument -- and their tightness -- how closely linked they are to inflation. Tightness is always desirable in a monetary policy instrument. Atkeson and Kehoe show that transparency is desirable when there is a credibility problem, in that the government cannot commit to its policy. They illustrate their argument by considering a classic question in international economics: is the exchange rate or the money growth rate the better instrument of monetary policy? Their analysis suggests that the greater transparency of exchange rates means that if both instruments are equally tight, the exchange rate is preferred.
With inflation under control in many middle-income countries (MICs), swings in credit, investment, and asset prices are now the main sources of macroeconomic instability in these countries. Tornell and Westermann present a framework for analyzing how credit market shocks are propagated and amplified in MICs. In their model the strength of the credit channel derives from two key characteristics of MICs: a sharp asymmetry across the tradables (T) sector and the more bank-dependent nontradables (N) sector; and a significant degree of currency mismatch in the N-sector. This makes movements in the real exchange rate the driving element in the amplification of shocks. Using quarterly data for a group of MICs, the authors find evidence for a strong credit channel, for a balance sheet effect, and for asymmetric sectoral responses. Their findings indicate that inflation targeting is not sufficient to guarantee economic stability, because such a policy might overlook the development of lending booms and associated sectoral asymmetries.
Tille explores the optimal monetary policy reaction to productivity shocks in an open economy. Earlier studies assumed that countries specialize in producing particular goods, but he enriches the analysis by allowing for incomplete specialization. He confirms the finding of Obstfeld and Rogoff (2000) -- who build on Friedman (1953) -- that a flexible exchange rate is highly valuable in delivering the optimal response to country-specific shocks. However, its value is much smaller when shocks are sector-specific, because exchange rate fluctuations then lead to misallocations between different firms within a sector. The limitation on the value of flexibility is sizable even when specialization is high.
Auguste, Dominguez, Kamil, and Tesar examine the surprising performance of the Argentine stock market in the midst of the country's most recent financial crisis as well as the role of cross-listed stocks in Argentine capital flight. Although Argentine investors were subject to capital controls, they were able to purchase cross-listed stocks for pesos in Argentina, convert them into dollar-denominated shares, re-sell them in New York, and deposit the dollar proceeds in U.S. bank accounts. The authors show that: 1) ADR discounts went as high as 45 percent (indicating that Argentine investors were willing to pay significant amounts in order to legally move their funds abroad); 2) the implicit peso-dollar exchange rate on the eve of the devaluation anticipated a 42 percent fall in the value of the peso relative to the dollar; 3) local market factors in Argentina became more important in pricing peso denominated stocks with associated ADRs, while the same stocks in New York were mainly priced based on global factors; and 4) capital outflow using the ADR and CEDEAR markets was substantial (their estimate for ADRs is between $835 million and $3.4 billion).
Aguiar and Gopinath use a firm-level dataset to show that foreign acquisitions increased by 91 percent in East Asia between 1996 and 1998, while intra-national merger activity declined. Firm liquidity plays a significant and sizeable role in explaining both the increase in foreign acquisitions and the decline in the price of acquisitions during the crisis. This contrasts with the role of liquidity in non-crisis years and in non-crisis economies in the region. The effect is also most prominent in the tradable sector. Quantitatively, the observed decline in liquidity can explain 25 percent of the increase in foreign acquisition activity in the tradable sectors. The nature of M&A activity supports liquidity-based explanations of the East Asian crisis and provides an explanation for the puzzling stability of FDI inflows during the crisis.
Imbs, Mumtaz, Ravn, and Rey show the importance of a dynamic aggregation bias in accounting for the Purchasing Power Parity (PPP) puzzle. They prove that established time series and panel methods substantially exaggerate the persistence of real exchange rates because of heterogeneity in the dynamics of disaggregated relative prices. When heterogeneity is properly taken into account, estimates of the real exchange rate half-life fall dramatically, to little more than one year, or significantly below Rogoff's "consensus view" of three to five years. The authors show that corrected estimates are consistent with plausible nominal rigidities, thus, arguably, solving the PPP puzzle.
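The half-life figures at issue map directly into the persistence of the real exchange rate: for an AR(1) process with quarterly autoregressive coefficient rho, a shock's half-life is ln(0.5)/ln(rho). A minimal sketch of that arithmetic (the two coefficients below are illustrative stand-ins, not the authors' estimates):

```python
import math

def ar1_half_life(rho: float) -> float:
    """Half-life, in periods, of a shock to an AR(1) process with persistence rho."""
    return math.log(0.5) / math.log(rho)

# A quarterly coefficient near 0.96 implies Rogoff's three-to-five-year
# "consensus" half-life; one near 0.85 implies roughly one year.
for rho in (0.958, 0.85):
    print(f"rho = {rho}: half-life = {ar1_half_life(rho):.1f} quarters")
```

The point of the paper is that averaging heterogeneous sectoral dynamics biases the estimated rho upward, so the aggregate half-life overstates the persistence of the underlying relative prices.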
The NBER's Program on International Trade and Investment met in Cambridge on March 28 and 29. Program Director Robert C. Feenstra, NBER and University of California, Davis, organized this program:
Rose estimates the effect on international trade of multilateral trade agreements: the World Trade Organization (WTO); its predecessor, the General Agreement on Tariffs and Trade (GATT); and the Generalized System of Preferences (GSP) extended from rich countries to developing countries. He uses a standard "gravity" model of bilateral merchandise trade and a large panel data set covering over 50 years and 175 countries. An extensive search reveals little evidence that countries joining or belonging to the GATT/WTO have different trade patterns from outsiders. The GSP does seem to have a strong effect, and is associated with an approximate doubling of trade.
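A "gravity" specification of this kind regresses log bilateral trade on the economic mass of the country pair, the distance between them, and membership dummies. A minimal sketch on simulated data (the variables, sample, and coefficients below are invented for illustration and are not Rose's):

```python
import numpy as np

# Illustrative gravity regression on simulated country pairs:
# log(trade_ij) = b0 + b1*log(GDP_i*GDP_j) + b2*log(dist_ij)
#                 + b3*BothInWTO_ij + noise
rng = np.random.default_rng(0)
n = 500
log_gdp_product = rng.normal(10, 2, n)
log_distance = rng.normal(8, 1, n)
both_wto = rng.integers(0, 2, n).astype(float)
log_trade = (1.0 + 0.9 * log_gdp_product - 1.1 * log_distance
             + 0.05 * both_wto + rng.normal(0, 0.5, n))

X = np.column_stack([np.ones(n), log_gdp_product, log_distance, both_wto])
beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
# beta[3] is the log-point effect of joint membership on bilateral trade;
# Rose's finding is that the GATT/WTO analogue of this coefficient is near zero.
print(dict(zip(["const", "log_gdp", "log_dist", "wto"], beta.round(2))))
```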
Previous literature has discussed the procedural biases that exist in U.S. Department of Commerce (USDOC) calculations of dumping margins. Blonigen examines the evolution of discretionary practices and their role in the rapid increase in average USDOC dumping margins since 1980. He finds that USDOC discretionary practices, including use of "facts available" and cost of production tests, have played a major role in rising dumping margins, with little evidence that changes in U.S. antidumping law or composition of investigated products and countries have had much effect. Importantly, the evolving effect of discretionary practices is attributable not only to increasing use of these practices over time, but also to apparent changes in their implementation that yield larger increases in the dumping margin whenever they are applied.
Bernard, Jensen, and Schott examine the response of industries and firms to changes in trade costs. They test the predictions of recent equilibrium models of international trade with heterogeneous firms. Using disaggregated U.S. import data, the authors create a new measure of trade costs over time and industries. As the models predict, productivity growth is faster in industries with falling trade costs. The authors also find evidence supporting the major hypotheses of the heterogeneous firm models. Firms in industries with falling (relative) trade costs are more likely to die or become exporters. Existing exporters increase their shipments abroad. The results are strongest for industries most likely to be producing horizontally differentiated tradeable goods.
Keller and Yeaple estimate international technology spillovers to U.S. manufacturing firms via imports and foreign direct investment (FDI) between 1987 and 1996. In contrast to earlier work, their results suggest that FDI leads to significant productivity gains for domestic firms. The size of FDI spillovers is economically important, accounting for about 14 percent of productivity growth in U.S. firms between 1987 and 1996. In addition, there is some evidence of imports-related spillovers, but it is weaker than for FDI. The authors also give a detailed account of why their study leads to different results from those found in previous work. Their analysis indicates that their results are likely to generalize to other countries and periods.
Aizenman and Spiegel study the implications of institutional efficiency on the pattern of foreign direct investment (FDI). They posit that domestic agents have a comparative advantage over foreign agents in overcoming some of the obstacles associated with corruption and weak institutions. They model these circumstances in a principal-agent framework with costly ex-post monitoring and enforcement of an ex-ante labor contract. Ex-post monitoring and enforcement costs are assumed to be lower for domestic entrepreneurs than for foreign ones, but foreign producers enjoy a countervailing productivity advantage. Under these asymmetries, multinationals pay higher wages than domestic producers, in line with the insight of efficiency wages and with the evidence about the "multinationals wage premium." FDI also is more sensitive to increases in enforcement costs. The authors compare institutional efficiency levels for a large cross section of countries in 1989 to subsequent FDI flows from 1990 to 1999. They find that institutional efficiency is associated positively with the ratio of subsequent FDI flows to gross fixed capital formation and to private investment. This is true for both simple cross-sections and for cross-sections weighted by country size.
In the context of the Allied bombing of Japanese cities and industries in WWII, Davis and Weinstein develop a new empirical test for multiple equilibria and then apply it to data for 114 Japanese cities in eight manufacturing industries. The data reject the existence of multiple equilibria. In the aftermath of even gargantuan shocks, a city typically recovers not only its population and its share of aggregate manufacturing, but even the specific industries it had before.
Goldberg and Pavcnik study the relationship between trade liberalization and informality. It is often claimed that increased foreign competition in developing countries leads to an expansion of the informal sector, defined as the sector that does not comply with labor market legislation. Using data from two countries that experienced large trade barrier reductions in the 1980s and 1990s, Brazil and Colombia, the authors examine the response of the informal sector to liberalization. They find no evidence of a relationship between trade policy and informality in Brazil. In Colombia, though, there is evidence of such a relationship, but only for the period preceding a major labor market reform which increased the flexibility of the labor market. These results point to the significance of labor market institutions in assessing the effects of trade policy on the labor market.
The NBER's Program on Children, directed by Jonathan Gruber of MIT, met in Cambridge on April 3. Members and guests discussed these papers:
Of the ten million uninsured children in 1996, nearly half were eligible for the public health insurance program, Medicaid, but not enrolled. Little is known about the reasons low-income families fail to use public programs or the consequences of failing to use them. Using detailed information on Medicaid outreach, enrollment, and hospitalization rates in California, Aizer finds that information and administrative costs are significant deterrents to program take-up. Controlling for selection into Medicaid, she finds that enrolling children early in Medicaid leads to a more efficient allocation of health care resources: it promotes primary ambulatory care over more expensive hospital-based care, resulting in fewer avoidable hospitalizations.
Modern theory on sex allocation predicts that parents may be able to vary the sex of their offspring according to the prospects for two-parent care. Using data pooled from four publicly available longitudinal studies, Norberg finds that parents who were living with an opposite-sex spouse or partner before the child's conception or birth were significantly more likely to have a male child than parents who were living apart. This effect is observable even when the comparisons are made between siblings, and even when those comparisons are made before the children's conceptions. This "partnership status effect" may be the result of modern reproductive exposures, but a paternal investment effect would fit closely with the predictions of adaptive sex allocation theory.
Almond and Chay use the substantial improvements in health among the cohorts of black infants born during the 1960s to estimate the long-run effects of early life health conditions. The microdata contained in the annual Natality Detail files provide information on the demographic and socioeconomic characteristics and health risk factors of mothers giving birth, as well as the health outcomes of the infant. These data can be linked to the infant health conditions that prevailed in the state and year in which the mother was born. The evidence suggests a strong link between infant health and both adult health and infant health of the subsequent generation. For example, the dramatic improvements in health among black infants born in Mississippi during the 1960s are mirrored by improved health among black women born in Mississippi during the 1960s who gave birth during the 1980s and 1990s. A black adult female born after 1964 has lower rates of diabetes and other risk factors and is less likely to give birth to a low birth weight infant than a black woman born before 1964. This pattern does not hold for white, Mississippi-born women, who experienced much smaller infant health improvements during the Civil Rights Era. The authors find similar associations in other states that were affected by the social programs of the 1960s.
Figlio shows that schools respond to high-stakes testing by selectively disciplining their students. Schools have an incentive to keep high-performing students in school and low-performing students out of school during the testing window in order to maximize aggregate test scores. The evidence supports this hypothesis: precisely these patterns appear in the data, but only for students in grades that are tested with high stakes for the school. Since students suspended during the testing window are significantly more likely to miss the examination, this result suggests that schools may be deliberately attempting to reshape the testing pool in response to high-stakes testing. On the other hand, schools have an incentive to keep marginal students in school during the pre-testing preparation period, and have less of an incentive to keep very high or very low-performing students in school during this so-called "cram" period. Again, this pattern is observed in the data. Taken together, these results indicate that schools may be using student discipline as a tool to manipulate aggregate test scores.
The hypothesized effects of educational attainment on adult civic engagement and attitudes provide some of the most important justifications for government intervention in the market for education. In this study, Dee presents evidence on whether these externalities exist. He assesses and implements two strategies for identifying the effects of educational attainment. One is based on the availability of junior and community colleges; the other on changes in teen exposure to child labor laws. The results suggest that educational attainment has large and statistically significant effects on subsequent voter participation and support for free speech. Dee also finds that additional schooling appears to increase the quality of civic knowledge as measured by the frequency of newspaper readership.
Banerjee, Cole, Duflo, and Linden present the results of a two-year randomized evaluation of a large-scale remedial education program, conducted in Mumbai and Vadodara, India, along with the preliminary results of a randomized evaluation of a computer-assisted learning program in Vadodara. The remedial education program hires young women from the community to teach basic literacy and numeracy to children who reach "standard three or four" without having mastered these competencies. The program, implemented by a non-governmental organization in collaboration with the government, is extremely cheap (it costs $5 per child per year) and is easily replicable: it is now implemented in 20 Indian cities, and reaches tens of thousands of children. The authors find the program to be very effective: on average, it increased learning by 0.15 standard deviations in the first year, and 0.39 in the second year. The gains are the largest for children at the bottom of the distribution: children in the bottom third gain 0.18 standard deviations in the first year, and 0.59 in the second year. The results are very similar in the two standards, and in the two cities. At the margin, extending this program would be 4.5 to 6 times more cost-effective than hiring new teachers. The preliminary results of the computer-assisted learning program, which is planned to be widely implemented in India, are less impressive: on average, the program increases test scores by an insignificant 0.10 standard deviations. The effect is higher (and significant) in schools where the remedial education program is also present. On the basis of these estimates, extending the computer-assisted learning program would appear less cost-effective than hiring new teachers.
The NBER's Program on Public Economics met in Cambridge on April 10 and 11. Program Director James Poterba, NBER and MIT, organized the meeting. These papers were discussed:
Local public goods financed from a national tax base provide concentrated benefits to recipient jurisdictions but dispersed costs, creating incentives for legislators to increase own-district spending but to restrain aggregate spending because of the associated tax costs. Theoretically, therefore, one would predict inefficiencies in the allocation of public goods, but there is little direct evidence that individual legislators respond to these incentives. Knight analyzes 1998 Congressional votes on transportation project funding. He shows that legislators respond to common pool incentives: the probability of supporting the projects increases with own-district spending and decreases with the tax burden associated with aggregate spending. Having found that legislators do respond to such incentives, Knight calculates the efficient level of public goods. The results suggest over-spending in the aggregate, especially in politically powerful districts, and a large associated deadweight loss.
Coate argues that campaign finance policy, in the form of contribution limits and matching public financing, can be Pareto-improving even under the most optimistic assumptions concerning the role of campaign advertising and the rationality of voters. The argument assumes that candidates use campaign contributions to convey truthful information to voters about their qualifications for office and that voters update their beliefs rationally on the basis of the information they have seen. It also assumes that campaign contributions are provided by interest groups and that candidates can offer to provide policy favors for their interest groups to attract higher contributions.
Long-term care represents one of the largest uninsured financial risks facing the elderly in the United States, and yet we have virtually no evidence to help explain the extremely limited nature of the private market for long-term care insurance. Brown and Finkelstein develop two complementary analytical tools to begin filling this void: a framework for assessing the "money's worth" of private long-term care insurance policies and a model of the insurance value of a long-term care insurance contract for a risk averse, life-cycle consumer. Using state-of-the-art actuarial data on long-term care utilization probabilities and comprehensive market data on insurance policy characteristics and premiums, they find that private long-term care insurance contracts are priced lower than is actuarially fair for women and higher for men. For a policy that covers all types of paid care, a 65-year-old man can expect to receive 57 to 73 cents in present discounted value of benefits for every dollar paid in expected present value of premiums; by contrast, a 65-year-old woman can expect to receive about $1.12 to $1.42 in benefits for every dollar paid in premiums. These results suggest that, given existing market prices and the presence of Medicaid as a payer-of-last-resort, private long-term care insurance is not valued by individuals throughout substantial portions of the wealth distribution. The very presence of Medicaid crowds out the purchase of private insurance for well over half of households, and significantly reduces the value of private insurance for the rest. In addition, marginal decreases in the "quality" of Medicaid-financed relative to privately-financed care substantially increase the value of private long-term care insurance.
By contrast, even substantial reductions in premiums -- such as those that might be achieved by the new federal tax subsidies for long-term care insurance -- would be insufficient to make long-term care insurance attractive to the median individual.
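The "money's worth" metric behind these figures is a ratio of expected present discounted values: benefits received per dollar of premiums paid. A stylized sketch with invented numbers (Brown and Finkelstein use detailed actuarial care-utilization probabilities, not the toy survival curve and cash flows below):

```python
# Stylized "money's worth" calculation: expected present discounted value
# (EPDV) of benefits per dollar of EPDV premiums.  All inputs are
# hypothetical, chosen only to illustrate the mechanics.
def epdv(cash_flows, survival_probs, rate=0.03):
    """Discounted sum of cash flows weighted by survival probabilities."""
    return sum(cf * p / (1 + rate) ** t
               for t, (cf, p) in enumerate(zip(cash_flows, survival_probs)))

years = 35                                    # ages 65 through 99
survival = [0.98 ** t for t in range(years)]  # toy survival curve
premiums = [2000] * years                     # level annual premium
benefits = [0 if t < 15 else 6000 for t in range(years)]  # care late in life

money_worth = epdv(benefits, survival) / epdv(premiums, survival)
print(f"money's worth: {money_worth:.2f} per premium dollar")
```

In the paper's estimates this ratio comes out below one for a 65-year-old man (57 to 73 cents) and above one for a 65-year-old woman ($1.12 to $1.42).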
Entrepreneurial activity is presumed to generate important spillovers, potentially justifying tax subsidies. How does the tax law affect individual incentives? How much of an impact has it had in practice? Cullen and Gordon first show theoretically that taxes can affect the incentives to be an entrepreneur simply because of differences in tax rates on business versus wage and salary income; differences in the tax treatment of losses versus profits through a progressive rate structure and through the option to incorporate; and risk-sharing with the government. They then provide empirical evidence -- using U.S. individual tax return data -- that these aspects of the tax law have had large effects on actual behavior.
Gokhale, Kotlikoff, and Sluchynsky use ESPlanner, a financial planning software program, to study the net work tax levied on workers with different earnings capacities. The authors focus on lifetime average and marginal net work tax rates, which are measured by comparing the present values of lifetime spending from working through retirement, both in the presence and in the absence of all tax-transfer programs. They report eight findings: 1) The fiscal system is highly progressive; couples working full time and earning the minimum wage receive 32 cents in benefits, net of taxes, for every dollar they earn. In contrast, households with million dollar salaries pay 51 cents in taxes, net of benefits, per dollar earned. 2) Net subsidies are provided only at the very bottom end of the income distribution. Average net work tax rates of couples earning 1.5 times the minimum wage ($32,100 per year) are 14 percent. For working couples earning 5 times the minimum wage ($107,000), the net tax rate is 38 percent. 3) While the poor face negative average taxes, like the middle class and the rich, they face positive marginal net taxes on working that exceed 50 percent. Moreover, certain low- and moderate-income households face substantially higher marginal net work tax rates than those faced by the rich. 4) Low-wage workers face confiscatory tax rates on switching from part-time to full-time work. 5) The same is true of secondary earning spouses in low-wage households. 6) The marginal net tax on working is particularly high for young households with low incomes. 7) Average and marginal net work tax rates are relatively insensitive to the assumed rate of real wage growth and the discount rate. 8) Major tax reforms, such as switching from income to consumption taxation, can have a significant effect on the fiscal system's overall progressivity.
The economic argument for subsidizing charitable giving relies on the positive externalities of charitable activities, particularly from the religious institutions that are the largest recipients of giving. But the net external effects of subsidies to religious giving will depend also on their potentially important indirect effect on religious participation. Religious participation can be a complement to, or a substitute for, the level of charitable giving. Understanding these spillover effects of charitable giving may be quite important, given the existing observational literature suggesting that religiosity is a major determinant of well-being among Americans. In his paper, Gruber investigates the impact of charitable subsidies on religious participation by using data over three decades from the General Social Survey; he also confirms the impact of such subsidies on religious giving using the Consumer Expenditure Survey. He finds strong evidence that religious giving and religious participation are substitutes: larger subsidies to charitable giving lead to more religious giving, but less religious attendance, with an implied elasticity of attendance with respect to religious giving of -0.92. These results have important implications for the debate over charitable subsidies. They also serve to validate economic models of religious participation.
The NBER's Program on Asset Pricing met in Chicago on April 11. Program Director John H. Cochrane and Lubos Pastor, both of NBER and University of Chicago, organized the meeting. These papers were discussed:
Sangvinatsos and Wachter consider the consumption and portfolio choice problem of a long-run investor who has access to nominal bonds and a stock portfolio. In the presence of unhedgeable inflation risk, there are multiple pricing kernels that produce the same bond prices, but a unique pricing kernel that equals the marginal utility of the investor. The authors extend their model to account for time-varying expected inflation and estimate it with data on inflation and term structure. The estimates imply that the bond portfolio for the long-run investor looks very different from the portfolio of a mean-variance optimizer. In particular, the desire to hedge changes in term premiums generates large hedging demands for long-term bonds.
Pan and Poteshman find strong evidence of information transmission from the options market to underlying stock prices. Taking advantage of a unique dataset from the Chicago Board Options Exchange, the authors construct put-to-call volume ratios for underlying stocks, using only volume initiated by buyers to open new option positions. Performing daily cross-sectional analyses from 1990 to 2001, they find that buying stocks with low put/call ratios and selling stocks with high put/call ratios generates an expected return of 40 basis points per day and 1 percent per week. This result occurs during each year of the sample period, and is not affected by the exclusion of earnings announcement windows. Moreover, the result is stronger for smaller stocks, indicating that the options market may be a more important avenue for information transmission for stocks with less efficient information flow. This analysis also sheds light on the type of investors behind the informed option trading. Specifically, option trading from customers of full service brokers provides the strongest predictability. The authors further show that while public customers on average trade in the options market as contrarians -- buying fresh new puts on stocks that have done well and calls on stocks that have done poorly -- firm proprietary traders exhibit the opposite behavior. Finally, in contrast to the equity options market, there is no evidence in the index options market of informed trading.
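The trading signal described above amounts to a daily cross-sectional sort: long the stocks with the lowest put/call volume ratios, short those with the highest. A toy sketch on simulated data (the pattern is built in by construction here; Pan and Poteshman use CBOE open-buy volume, not simulated ratios):

```python
import numpy as np

# Toy daily long-short sort on put/call volume ratios, simulated data.
rng = np.random.default_rng(1)
n_stocks, n_days = 200, 250
pc_ratio = rng.uniform(0, 1, (n_days, n_stocks))
# Build in the documented pattern: high put/call ratio -> lower next-day return.
next_ret = -0.004 * pc_ratio + rng.normal(0, 0.02, (n_days, n_stocks))

spreads = []
for d in range(n_days):
    order = np.argsort(pc_ratio[d])
    low, high = order[:n_stocks // 5], order[-(n_stocks // 5):]
    spreads.append(next_ret[d, low].mean() - next_ret[d, high].mean())
print(f"avg long-short return: {1e4 * np.mean(spreads):.1f} bps/day")
```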
Stambaugh explores the problem of making inferences about an asset that has passed a survival test failed by other assets with lower realized returns. The more commonality there is across assets in one's prior uncertainty about unknown parameters, the greater is the extent to which inferences about an asset's expected return (or its alpha) are affected by its having survived. In the absence of commonality, a sample average can possess substantial survival bias but can still equal the appropriate inference about an asset's expected return. Various forms of commonality in returns across assets also play key roles. Conditioning on survival usually lowers, but sometimes can raise, a surviving asset's inferred alpha. Survival bias, as typically computed, generally gives too severe an adjustment for survival unless one assumes that expected returns on all assets, dead and alive, are equal to a common value that is completely unknown.
Vassalou and Xing use Merton's (1974) option pricing model to compute default measures for individual firms and to assess the effect of default risk on equity returns. The size effect is a default effect, and this is also largely true for the book-to-market (BM) effect. Both exist only in segments of the market with high default risk. Default risk is systematic risk. The Fama-French (FF) factors contain some default-related information, but this is not the main reason that the FF model can explain the cross-section of equity returns.
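In the Merton (1974) framework, equity is a call option on firm assets with a "strike" equal to the face value of debt, so a distance-to-default statistic falls out of the option-pricing inputs. A minimal sketch with invented inputs (Vassalou and Xing back asset value and volatility out of equity prices iteratively rather than observing them directly):

```python
import math

def merton_default_likelihood(V, D, mu, sigma, T=1.0):
    """Distance to default and default likelihood in the Merton (1974) model.
    V: asset value, D: face value of debt, mu: asset drift,
    sigma: asset volatility, T: horizon in years.  Inputs are treated as
    known here for illustration."""
    dd = (math.log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    # Default likelihood is N(-dd); compute the normal CDF via the error function.
    default_prob = 0.5 * (1 - math.erf(dd / math.sqrt(2)))
    return dd, default_prob

dd, p = merton_default_likelihood(V=100.0, D=60.0, mu=0.06, sigma=0.25)
print(f"distance to default = {dd:.2f}, default likelihood = {p:.3f}")
```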
Bansal, Dittmar, and Lundblad model dividend and consumption growth rates as a vector-autoregression (VAR), from which they measure the long-run response of dividend growth rates to consumption shocks. They find that this long-run cash flow beta can justify well over 50 percent of the difference in risk premiums across size, book-to-market, and industry sorted portfolios. Interestingly, the long-run cash flow betas explain about half of the dispersion in the standard Capital Asset Pricing Model (CAPM)-based portfolio betas for these assets. The authors' model highlights the reasons for the failure of the market beta to justify the cross-section of risk premiums. The market beta itself is a weighted combination of cash flow betas and additional priced sources of risk. Each risk source's beta may be significant; however, a weighted combination of the betas may not be significant in explaining the cross-section of risk premiums, as each source of risk carries a distinct price. The results indicate that the size, book-to-market, and industry spreads are not puzzling from the perspective of economic models.
Lamont studies battles between short sellers and firms. Firms use a variety of means to impede short selling, including explicit or implicit legal threats, investigations, lawsuits, and various technical actions intended to create a short squeeze. Firms that take these actions create short-sale constraints. Consistent with the hypothesis that short-sale constraints allow stocks to be overpriced, firms taking anti-shorting actions in the next year have very low abnormal returns of about negative 2 percent per month.
The NBER's Program on Corporate Finance met in Chicago on April 11. Program Director Raghuram Rajan and Per Strömberg, both of NBER and University of Chicago, organized this program:
Axelson argues that an important friction in the issuance of financial securities is the fact that potential investors may be privately informed about the value of the underlying assets. He shows how security design can help to overcome this friction. In the single asset case, debt is often an optimal security when the number of potential investors is small, but equity becomes optimal as the degree of competition increases. In the multiple asset case, debt backed by a pool of assets is optimal if the number of assets is large relative to the degree of competition, but equity backed by individual assets is optimal when the number of assets is small relative to the degree of competition. Axelson uses the theory to interpret security design choices in financial markets.
Gatev and Strahan argue that banks have a unique ability to hedge against market-wide liquidity shocks. Deposit inflows provide a hedge for loan demand shocks that follow declines in market liquidity. Consequently, one dimension of bank "specialness" is that banks can insure firms against systematic declines in market liquidity at lower cost than other financial institutions. The authors provide supporting evidence from the commercial paper (CP) market. When market liquidity dries up and CP spreads increase, banks experience funding inflows. These inflows allow banks to meet increased loan demand from borrowers drawing funds from pre-existing CP backup lines, without running down their holdings of liquid assets. Moreover, the supply of cheap funds is large enough that pricing on new lines of credit actually falls as market spreads widen.
Kaplan and Schoar investigate individual fund returns in the private equity industry using a unique dataset collected by Venture Economics. They find a large degree of heterogeneity among fund returns, and those returns persist strongly across successive funds raised by the same private equity firm. Returns also improve with firm experience. Better performing funds are more likely to raise follow-on funds, and to raise larger funds, than poorly performing funds. This relationship is concave, so that top performing funds grow more slowly than the market average. Finally, the authors find that funds raised in boom times (and firms started in boom times) are less likely to raise a follow-on fund, suggesting that these funds perform worse. Several of these results differ substantially from those for mutual funds.
Mian uses a unique dataset that contains detailed information on every corporate loan outstanding in the banking sector of Pakistan during 2001-02 (52,000 loans). Using a simple empirical methodology that allows him to measure separately ex-ante monitoring, ex-post monitoring, "softness", renegotiation, recovery, and litigation of loans, Mian presents a number of new findings. First, concerns about weak external supervision can be overcome by market discipline coupled with private incentives for banks. Second, government involvement is a central cause of poor banking performance. Third, even with private markets, the organizational hierarchy of banks can seriously limit their ability to lend to "soft information" firms -- firms in greatest need of intermediation.
La Porta, Lopez-de-Silanes, and Shleifer examine the effect of securities laws on market development in 49 countries. They find that public enforcement of laws benefits securities markets, especially in countries with efficient government bureaucracies. They also find that organization of private enforcement through disclosure and liability rules benefits securities markets in countries with both efficient and inefficient government bureaucracies.
Inderst and Müller show how the value, valuation, and success probability of start-ups depend on characteristics of the capital market. Market characteristics, including the return on investments, entry costs, and capital market transparency, affect the relative supply and demand for capital, and thus the relative bargaining power of entrepreneurs and venture capitalists. Relative bargaining power, in turn, determines ownership shares and incentives in start-up firms. In characterizing the short- and long-run dynamics of the venture capital market, the authors' model sheds light on the Internet boom and bust periods.
Bitler, Moskowitz, and Vissing-Jörgensen augment the standard principal-agent model to accommodate an entrepreneurial setting, where effort, ownership, and firm size are determined endogenously. They test the model's predictions (some novel) using new data on entrepreneurial effort and wealth. They find that entrepreneurial ownership shares increase with outside wealth, decrease with firm risk, and decrease with firm size; effort increases with ownership and size; and both ownership and effort increase firm performance. The magnitude of the effects in the cross-section of firms suggests that agency theory is important for explaining the large average ownership shares of entrepreneurs.
The NBER's Working Group on Behavioral Finance met in Chicago on April 12. Nicholas Barberis and Richard H. Thaler, both of NBER and University of Chicago, organized the meeting. These papers were discussed:
Krueger and Fortson note that since 1979, the Bureau of Labor Statistics (BLS) has nearly quadrupled the size of the sample used to estimate monthly employment changes. Although first-reported employment estimates are still noisy, the magnitude of sampling variability has declined in proportion to the increase in the sample size. Still, a regression analysis of changes in interest rates on the day the employment data are released finds no evidence that the bond market's reaction to employment news intensified in the late 1980s or 1990s; indeed, in the late 1990s and early 2000s the bond markets hardly reacted to unexpected employment news. For the time period as a whole, an unexpected increase of 200,000 jobs is associated with about a 6 basis point increase in the interest rate on 30-year Treasury bonds, and an 8 basis point increase in the interest rate on 3-month bills, all else equal. Additionally, unexpected changes in the unemployment rate and revisions to past months' employment estimates have statistically insignificant effects on long-term interest rates.
Dong, Hirshleifer, Richardson, and Hong Teoh show that irrational market misvaluation, at both the transaction and aggregate levels, affects the volume and character of takeover activity. The authors examine pre-takeover book-to-price ratios and pre-takeover ratios of residual-income-model value to price for bidders, targets, and the aggregate stock market as proxies for market misvaluation. They find that misvaluation of bidders, targets, and the aggregate stock market influences the aggregate volume of takeovers, the means of payment chosen, the premiums paid, target hostility to the offer, the likelihood of offer success, bidder and target announcement-period stock returns, post-takeover long-run returns, and the returns from diversifying transactions.
Merger intensity spikes in times of high market valuations (that is, when average market-to-book, M/B, ratios are at their highest). To explore whether this is the result of correlated valuation errors or behavioral mispricing, Rhodes-Kropf, Robinson, and Viswanathan decompose M/B into three components: firm-specific deviation from short-run industry valuations; short-run industry deviations from long-run values; and long-run value to book. The fact that high M/B buys lower M/B is driven mostly by firm-specific deviations from short-run industry average pricing. However, both targets and acquirers are priced above their long-run industry average. When the authors find differences between bidders and targets in long-run value-to-book, they find that low buys high. They also find that the industry-specific component of M/B is highly positively correlated with merger intensity, and with the use of stock. However, long-run value-to-book is not correlated with cash merger intensity and is negatively correlated with stock merger intensity, leading to little overall correlation between long-run value-to-book and merger activity.
Brunnermeier and Parker introduce a tractable, structural model of subjective beliefs. Since agents who plan for the future care about expected future utility flows, current felicity can be increased by believing that better outcomes are more likely. On the other hand, expectations that are biased towards optimism worsen decisionmaking, leading to poorer realized outcomes on average. Optimal expectations balance these forces by maximizing the total well-being of an agent over time. The authors apply their framework of optimal expectations to three different economic settings. First, in a portfolio choice problem, agents overestimate the return on their investment and underdiversify; in general equilibrium, agents' prior beliefs are endogenously heterogeneous, leading to gambling. Second, in a consumption-saving problem with stochastic income, agents are both overconfident and overoptimistic, and consume more than implied by rational beliefs early in life. Third, in choosing when to undertake a single task with an uncertain cost, agents exhibit several features of procrastination, including regret, intertemporal preference reversal, and a greater readiness to accept commitment.
Analysts' earnings forecasts are influenced by their desire to win investment banking clients. Chan, Karceski, and Lakonishok hypothesize that the equity bull market of the 1990s, along with the boom in investment banking business, exacerbated analysts' conflicts of interest and their incentives to strategically adjust forecasts in order to avoid earnings disappointments. The authors document shifts in the distribution of earnings surprises, the market's response to surprises and forecast revisions, and the predictability of non-negative surprises. Further confirmation is based on samples in which conflicts of interest are higher (including growth stocks and stocks with consecutive non-negative surprises) or lower (such as foreign markets).
Khwaja and Mian analyze a unique dataset containing all daily firm-level trades of every broker trading on the stock exchange in Pakistan over a 32-month period. Examining broker behavior reveals that many brokers choose stocks in which they trade only for themselves rather than acting as intermediaries for outside investors. The authors find that when brokers trade on their own behalf in a stock -- act as "principals" -- they earn 4 percent to 8 percent higher annual rates of return. While broker "ability" does not explain this effect, anecdotes suggest it is caused by direct price manipulation by brokers. The authors find strong evidence for such manipulation: when prices are low, colluding brokers trade amongst themselves to artificially raise prices and attract naïve positive-feedback traders. Once prices have risen, the former exit, leaving the latter to suffer the ensuing price fall. Such manipulation of stock prices occurs in all types of stocks. However, the effect is larger in stocks of smaller firms, and for firms with less concentrated ownership. Finally, while the higher profitability of principals is not attributable to inherent broker attributes, the authors do find that more "able" brokers earn higher returns when they trade as a principal in a stock.
The NBER's Working Group on Environmental Economics met in Cambridge on April 12. Don Fullerton, NBER and University of Texas, Austin, organized the meeting. These papers were discussed:
One way to obtain a global public good is to set up an institution to buy it, with the nations of the world contributing to the cost according to whatever sharing arrangements make political sense. Bradford suggests a way to exploit this approach in order to limit the accumulation of greenhouse gases in the atmosphere. The "service" that produces the control is the reduction in the levels of emissions over time from what the nations otherwise would choose, also known as the "business as usual" emissions path. In the scheme as envisioned, which could be used in a successor agreement to the Kyoto Protocol, the fact that all nations are sellers of reductions ameliorates the enforcement problems typical of commitments to particular emission paths. Another difference from the Kyoto-style system: in the scheme sketched here, the distribution of burdens is explicit, rather than implicit, in the allowable emission amounts.

An infectious disease can be eradicated globally only if it is eliminated in every country. But does this simply require international coordination, or does it also require cooperation? Using a model that blends epidemiology, economics, and game theory, Barrett shows that coordination will not always suffice, even when the global benefits of eradication exceed the costs. In general, eradication will require strong international institutions.
Copeland and Taylor develop a theory of resource management whereby the degree to which countries escape "the tragedy of the commons" is determined endogenously and linked explicitly to changes in world prices and other possible effects of market integration. The authors show how changes in world prices can move some countries from de facto open access situations to ones in which management replicates an unconstrained social planner. Not all countries can follow this path of institutional reform, and the authors identify key country characteristics (mortality rates, resource growth rates, technology) that divide the world's set of resource-rich countries into three categories. Category I countries will never be able to effectively manage their renewable resources. Category II countries exhibit de facto open access at low resource prices, but can maintain a limited form of resource management at higher prices. Category III countries can fully implement efficient management and can obtain the unconstrained first-best outcome for some range of resource prices. For Category III countries, de facto open access and limited management are but transitory phases.
Babiker, Metcalf, and Reilly consider the efficiency implications of policies to reduce global carbon emissions in a world with pre-existing tax distortions. They first note that the weak double dividend -- the proposition that the welfare improvement from a tax reform in which environmental taxes are used to lower distorting taxes must be greater than the welfare improvement from a reform in which the environmental taxes are returned in a lump-sum fashion -- need not hold in a world with multiple distortions. They then present a large-scale computable general equilibrium model of the world economy with distortionary taxation, and use this model to evaluate a number of policies to reduce carbon emissions. They find that the weak double dividend is not obtained in a number of European countries. Their results also demonstrate that the interplay between carbon policies and pre-existing taxes can differ markedly across countries. Thus, one must be cautious in extrapolating the results from a country-specific analysis to other countries.
Olmstead, Hanemann, and Stavins analyze the influence of the price of water and the structure of water prices on residential water demand. They adapt a model from the labor economics literature -- the Hausman model of labor supply under progressive income taxation -- to estimate water demand under increasing-block prices. They apply this structural model to the most price-diverse, detailed, household-level water demand data now available to estimate the price elasticity of residential water demand. Their results indicate that the sensitivity of residential water demand to price is quite low, in contrast with results of previous studies using similar models to account for the piecewise-linear budget constraint of block prices. They also find, however, that price elasticity is higher and demand is lower among households facing block prices than among households facing uniform marginal prices. The impact of the price structure on demand appears to be greater than the impact of marginal price itself.
Does the Endangered Species Act endanger species? Margolis, Osgood, and List derive two empirical measures to estimate the extent of preemptive habitat destruction. They illustrate the use of these measures by examining development decisions on more than 70,000 plots of land that are potentially critical habitat areas for the Pygmy Owl in Pima County, Arizona. Their models provide direct measures of the scale of preemptive habitat destruction in units that can indicate whether it is significant as an obstacle to conservation, or how much it adds to the social cost of achieving conservation goals. The preliminary findings suggest that preemption is occurring at rates that are statistically significant.