Evidence-based Economic Policy in Pakistan, March 8, 2019, noon

The Institute for South Asia Studies is hosting a lecture, "On Evidence-based Economic Policy in Pakistan," on Friday, March 8, 2019, featuring Atif Mian, Professor of Economics, Public Policy and Finance (Princeton University) and Co-Founder & Board Member, Center for Economic Research in Pakistan (CERP); Asim I. Khwaja, Professor of International Finance and Development (Harvard University) and Co-Founder & Board Member, CERP; Maroof A. Syed, President & CEO, CERP, and Director of Pakistan Strategy & Development, Evidence for Policy Design (EPOD-Harvard); and Saad Gulzar, Assistant Professor of Political Science, Stanford University. The event will be moderated by Munis Faruqui, Chair of the Institute for South Asia Studies and Associate Professor of South and Southeast Asian Studies. The Clausen Center is co-sponsoring this event.

The best 10 Young Economists

The best young economists

Our pick of the decade’s eight best young economists

They mostly want to change the world, not just fathom it

From The Economist, Dec 18th 2018

“The solution in Vietnam”, said William DePuy, an American general in 1966, “is more bombs, more shells, more napalm.” But where exactly to drop it all? To help guide the bombing, the Pentagon’s whizz kids calculated the threat posed by different hamlets to the American-backed government in South Vietnam. Fed with data capturing 169 criteria, their computer crunched the numbers into overall scores, which were then converted into letter grades: from A to E. The lower the grade, the heavier the bombing.

Almost 50 years later, these grades caught the eye of Melissa Dell, an economist at Harvard University. Those letters, she realised, created an unusually clean test of DePuy’s solution. A village scoring 1.5 and another scoring 1.49 would be almost equally insecure. But the first would get a D and the second an E, thus qualifying for heavier bombing. To judge the effectiveness of the onslaught, then, a researcher need only compare the two. Simple.
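
The logic of that comparison can be sketched in a few lines of code. The snippet below uses entirely synthetic data and made-up variable names (it is not Ms Dell's dataset or method); it simply illustrates how comparing villages in a narrow band on either side of a grade cutoff isolates the effect of heavier bombing.

```python
# A minimal sketch of the threshold comparison described above, with synthetic data.
# Villages just above and just below the cutoff are nearly identical, so any
# difference in their later outcomes can be credited to the heavier bombing.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
security_score = rng.uniform(1.0, 2.0, n)        # continuous security score
heavier_bombing = security_score < 1.5           # below the cutoff: worse grade, more bombs
# Hypothetical outcome: later insurgent activity, worsened here by bombing.
insurgent_activity = 2.0 - security_score + 0.5 * heavier_bombing + rng.normal(0, 0.3, n)

# Compare only villages within a narrow band around the cutoff,
# where which side they fall on is as good as random.
band = np.abs(security_score - 1.5) < 0.05
treated = insurgent_activity[band & heavier_bombing].mean()
control = insurgent_activity[band & ~heavier_bombing].mean()
print(f"Estimated effect of heavier bombing near the cutoff: {treated - control:.2f}")
```

Real regression-discontinuity analyses fit separate trends on either side of the cutoff rather than taking a raw difference of means, but the idea is the same.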

Or not. Inconveniently, the scores had not survived: only the letter grades (and the 169 indicators underlying them, preserved because of an IBM lawsuit). To resurrect the algorithm that linked the two, Ms Dell embarked on what she calls a “treasure hunt”. She stumbled on an old journal article which suggested the army had removed hundreds of musty records waiting to be catalogued by the National Archives. She tracked those files to Fort McNair, where a military historian dug out the matrices she needed to reverse-engineer the algorithm.

That kind of tenacity is one reason why Ms Dell, who is still in her 30s, is among the best economists of her generation. We arrived at that conclusion based on an investigative strategy somewhat less sophisticated than those for which she is celebrated: we asked around, seeking recommendations from senior members of the profession. They named over 60 promising young scholars. We narrowed that list down to eight economists who we think represent the future of the discipline: Ms Dell and her Harvard colleagues Isaiah Andrews, Nathaniel Hendren and Stefanie Stantcheva; Parag Pathak and Heidi Williams of the Massachusetts Institute of Technology (MIT); Emi Nakamura of the University of California, Berkeley; and Amir Sufi of the University of Chicago Booth School of Business. Taken together, they display an impressive combination of clever empiricism and serious-minded wonkery. They represent much of what’s right with economics as well as the acumen of top American universities in scooping up talent.

This is the fourth time we have assembled such a list, and a pattern emerges. The first group, from 1988, was dominated by brilliant theorists who brought new analytical approaches to bear on long-standing policy questions. Back then, theorists were treated like the “Mozarts” of the profession, according to one member of that generation. Two of these maestros have since been to Stockholm to collect Nobel prizes: Paul Krugman in 2008 and Jean Tirole in 2014.

In those days, empirical work enjoyed less prestige. As Edward Leamer of the University of California, Los Angeles, noted in the early 1980s, “Hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else’s data analyses seriously.” It was easy for economists to proclaim a seemingly significant finding if they tweaked their statistical tests enough.

By 1998 theory was giving way to a new empiricism. One member of the cohort we chose that year, Harvard’s Michael Kremer, was arguing that randomised trials could revolutionise education, much as they had revolutionised medicine. Another, Caroline Hoxby of Stanford, showcased the creative potential of a “quasi-experimental” technique: the instrumental variable. She wanted to know whether competition for pupils improved school quality. But this was hard to gauge, because quality could also affect competition. To untie this knot, she employed an unlikely third factor—rivers—as an “instrument”. Places densely reticulated by rivers tend to be divided into many school districts, resulting in fiercer competition between them. If these locales also have better schools, it is presumably because of that competition. It is not because better schools cause more rivers.
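
The instrumental-variable logic can be illustrated with a toy calculation. The sketch below uses invented data in which an unobserved factor muddies the link between competition and quality, and a stand-in "rivers" instrument recovers the true effect via two-stage least squares; none of this is Ms Hoxby's actual data or code.

```python
# A hedged, synthetic illustration of the instrumental-variable idea: the
# instrument (a stand-in for "rivers") shifts school competition but affects
# school quality only through competition, so it untangles cause from effect.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
rivers = rng.poisson(3, n).astype(float)            # instrument
confounder = rng.normal(0, 1, n)                    # unobserved factor hitting both
competition = 0.5 * rivers + confounder + rng.normal(0, 1, n)
quality = 0.8 * competition - 1.0 * confounder + rng.normal(0, 1, n)

# Naive OLS of quality on competition is biased by the confounder.
X = np.column_stack([np.ones(n), competition])
ols = np.linalg.lstsq(X, quality, rcond=None)[0]

# Two-stage least squares: predict competition from rivers,
# then regress quality on the predicted values.
Z = np.column_stack([np.ones(n), rivers])
competition_hat = Z @ np.linalg.lstsq(Z, competition, rcond=None)[0]
X_iv = np.column_stack([np.ones(n), competition_hat])
iv = np.linalg.lstsq(X_iv, quality, rcond=None)[0]

print(f"OLS estimate:  {ols[1]:.2f} (biased by the confounder)")
print(f"2SLS estimate: {iv[1]:.2f} (close to the true 0.8)")
```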

This cohort’s Mozart—the empiricist with, if anything, “too many notes”—was Steven Levitt of the University of Chicago. In his view, “Economics is a science with excellent tools for gaining answers but a serious shortage of interesting questions,” as Stephen Dubner, a journalist, once put it. In pursuit of more compelling questions, he roamed freely, carrying his tools into unconventional and even quirky areas of research (penalty kicks, sumo and “The Weakest Link”, a game show). The result was “Freakonomics”, a bestseller written with Mr Dubner, and a phalanx of imitators.

Ten years later, many of our picks of 2008 also excelled in empirical work. Esther Duflo of MIT institutionalised the randomised trials that Mr Kremer helped pioneer. Jesse Shapiro of Brown University—still under 40, but we are not allowing double dipping—delighted in some of the same empirical virtuosity as Mr Levitt.

The work exemplified by these two waves of economists (and many others) amounted to a “credibility revolution” in the discipline, wrote Joshua Angrist and Jörn-Steffen Pischke, authors of the revolutionary movement’s textbook, “Mostly Harmless Econometrics”. Like many revolutions, this one was founded on a change in the mode of production: the introduction of personal computers and digitisation, which brought large bodies of data into economists’ laps.

Like all revolutions, this one was followed by a backlash. The critics lodged three related objections. The first was a neglect of theory: the new empiricists were not always particularly interested in testing formal models of how the world worked. Their experiments or cleverly chosen instruments might show what caused what, but they could not always explain why. Their failure to distinguish mechanisms cast doubt on how general their findings might be. Like jamming musicians who never write anything down, they could not know if their best grooves would return in new settings.

The second objection was a lack of seriousness. “Freakonomics” had encouraged an emerging generation of economists to trivialise their subject, their critics alleged, somewhat unfairly. “Many young economists are going for the cute and the clever at the expense of working on hard and important foundational problems,” complained James Heckman, a Nobel laureate, in 2005.

The new empiricists were also accused of looking for keys under lampposts. Some showed more allegiance to their preferred investigative tools than to the subject or question under investigation. That left them little reason to return to the same question, unless they found more neat data or a new oblique approach. This hit-and-run approach makes some scholars nervous, since even a perfectly designed one-off experiment can deliver a “false positive”.

Delving deeper

Where does that leave today’s bright young things? This year’s cohort has certainly picked up its predecessors’ empirical virtuosity. Their papers are full of the neat tricks that enlivened the credibility revolution. Mr Pathak and his co-authors have compared pupils who only just made it into elite public schools with others who only just missed out, rather as Ms Dell compared villages on either side of the Pentagon’s bombing thresholds. The study showed that the top schools achieve top-tier results by the simple contrivance of admitting the best students, not necessarily by providing the best education. Ms Dell and her co-author showed that bombing stiffened villages’ resistance rather than breaking their resolve.

Ms Williams has exploited a number of institutional kinks in the American patent system to study medical innovation. Some patent examiners, for example, are known to be harder to impress than others. That allowed her to compare genes that were patented by lenient examiners with largely similar genes denied patents by their stricter colleagues. She and her co-author found that patents did not, as some claimed, inhibit follow-on research by other firms. This suggested that patent-holders were happy to let others use their intellectual property (for a fee).

Our economists of 2018 also show great doggedness in unearthing and refining new data. Ms Dell is interested in the economic consequences of America’s decision to “purge” managers from Japan’s biggest companies after 1945. To this end she is helping develop new computer-vision tools that will digitise musty, irregular tables of information from that time.

For a paper called “Dancing with the Stars”, which shows how inventors gain from interactions with each other, Ms Stantcheva and colleagues painstakingly linked some 800,000 people in a roster of European inventors to their employers, their location and their co-inventors in order to find out what sorts of propinquity were most propitious. Mr Hendren has joined forces with Harvard’s Raj Chetty (another of our alumni of 2008) to exploit an enormous cross-generational set of data from America’s census bureau. The data link 20m 30-somethings with their parents, who can be identified because they once claimed their offspring as dependents on their tax forms. The link has allowed Mr Hendren to study the transmission of inequality from one generation to the next.

The 2018 cohort’s combination of clever methods and dogged snuffling out of data comes along with a rejection of some of the more frolicsome manifestations of earlier new empiricists. Many of them display an admirable millennial earnestness. They are mostly tackling subjects that are both in line with long-standing economic concerns and of grave public importance. Ms Williams seeks a more rigorous understanding of technological progress in medicine and health care, which many commentators casually assert was the largest factor in improving people’s lives over the past century. Ms Dell is interested in the effects of economic institutions, such as the forced labour used in Peruvian silver mines before 1812. The lingering consequences of that colonial exploitation are visible, she says, in the stunted growth of Peruvian schoolchildren even today.

Ms Stantcheva studies tax, perhaps the least cute subject in the canon. As well as investigating the public opinions and values that shape today’s tax systems, she also studies taxation’s indirect and long-term consequences. Taxation can, for example, inhibit investments in training or scare off the inventors who drive innovation. On the other hand, successful professionals often have to work hard as a signal of their ability to their bosses, who cannot observe their aptitude directly. That rat race, she points out, limits their scope to slack off even in the face of high top rates of tax. With Thomas Piketty of the Paris School of Economics (the most obvious omission from our list in 2008) and another co-author, she has explored how tax rates affect rich people’s incentives to work, to underreport income, and to bargain for higher pay at the expense of their colleagues and shareholders. When that third incentive predominates, top rates as high as 80% might be justified.

Mr Hendren’s work on the market’s failures to provide health insurance was, he says, “ripped from the headlines” of the Obamacare debate. His more recent research on social mobility is almost as topical. The son of a black millionaire, he has found, has a 2-3% chance of being in prison. Among white men only those with parents earning $35,000 or less have odds of incarceration that high. Black disadvantage is not confined to bad neighbourhoods. Mr Hendren and his co-authors have discovered that black boys have lower rates of upward mobility than white boys in 99% of America’s localities. Young black women, on the other hand, typically earn a little more than white women with similarly poor parents. This research with Mr Chetty should inform a broad swathe of thinking about race in America.

Crisis? What crisis?

In short, our picks of 2018 are looking for the intellectual keys to important social puzzles; they are willing to move lampposts, turn on headlights or light candles to find them.

Mr Pathak provides a good example of this question-driven, issues-first approach. In his work on school choice he began by examining the matching algorithms that many American cities use to decide which pupils can attend oversubscribed schools. Previous systems encouraged parents who were in the know to rank less competitive “safety schools” above their true favourites. Mr Pathak’s research has helped promote mechanisms that allow parents to be honest.
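
The article does not name the mechanism, but the best-known strategy-proof one in this setting is student-proposing deferred acceptance. The toy sketch below, with made-up students, schools and priorities, shows the basic loop: students apply down their true preference lists, and schools tentatively hold their highest-priority applicants while rejecting the rest.

```python
# A compact sketch of student-proposing deferred acceptance, the kind of
# strategy-proof matching mechanism associated with this line of work.
# The preferences, priorities and capacities below are purely illustrative.
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """student_prefs: student -> ordered list of schools.
    school_prefs: school -> ordered list of students (priority order).
    capacities: school -> number of seats."""
    rank = {s: {stu: i for i, stu in enumerate(p)} for s, p in school_prefs.items()}
    next_choice = {stu: 0 for stu in student_prefs}      # next school to apply to
    held = {s: [] for s in school_prefs}                 # tentatively held students
    free = list(student_prefs)
    while free:
        stu = free.pop()
        prefs = student_prefs[stu]
        if next_choice[stu] >= len(prefs):
            continue                                     # list exhausted: stays unmatched
        school = prefs[next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        held[school].sort(key=lambda x: rank[school][x]) # keep best-priority students
        if len(held[school]) > capacities[school]:
            free.append(held[school].pop())              # reject the lowest-priority one
    return held

matching = deferred_acceptance(
    {"ana": ["north", "south"], "bo": ["north", "south"], "cy": ["south", "north"]},
    {"north": ["bo", "ana", "cy"], "south": ["ana", "cy", "bo"]},
    {"north": 1, "south": 1},
)
print(matching)   # {'north': ['bo'], 'south': ['ana']}; 'cy' goes unmatched here
```

Because no student can do better by lying about their preferences, parents have no reason to rank “safety schools” above their true favourites.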

Now that these improved formulae have caught on, Mr Pathak’s algorithmic expertise is less urgently required. A different kind of economist, committed to the algorithms more than the schools, might have dropped education for problems tractable to similar approaches in other fields. But Mr Pathak is exploring other ways to improve school quality instead.

This habit of sticking with big questions should make this generation of scholars less vulnerable to the curse of false positives. But this is not the only way in which the new crop is helping to clean up the academic literature. One rule of thumb when reading journals is that dull results that nonetheless reach publication are probably true, but that striking, eminently publishable stories should be taken with a pinch of salt. Mr Andrews’s quantitative work on these problems seeks to weigh out the appropriate salt per unit of splashiness. According to his calculations, studies showing that the minimum wage significantly hurts employment are three times more likely to be published than studies finding a negligible impact. Knowing the size of this bias, he and his co-author can then correct for it. They calculate that minimum wages probably damage employment only half as much as published studies alone would suggest.
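
The flavour of such a correction can be conveyed with a deliberately simplified simulation; the published estimator is more sophisticated than this, and every number below is invented. “Significant” studies are published more often than null ones, and weighting the published estimates by the inverse of their publication probability pulls the average back toward the truth.

```python
# A simplified, synthetic illustration of correcting for publication bias by
# reweighting: not the actual estimator used in the literature.
import numpy as np

rng = np.random.default_rng(2)
true_effect, se = -0.05, 0.05                # hypothetical effect and standard error
estimates = rng.normal(true_effect, se, 100_000)
significant = np.abs(estimates / se) > 1.96

# Publication rule: significant studies published with prob 0.9, null results 0.3.
pub_prob = np.where(significant, 0.9, 0.3)
published = rng.uniform(size=estimates.size) < pub_prob

naive = estimates[published].mean()
corrected = np.average(estimates[published], weights=1.0 / pub_prob[published])
print(f"true {true_effect:.3f}, published mean {naive:.3f}, corrected {corrected:.3f}")
```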

Mr Andrews has also scrutinised the instrumental variables that featured so heavily in the credibility revolution. To work well, an instrument (such as the river networks Ms Hoxby used as a proxy for school competition) should be tightly linked to the explanatory factor under examination. Often the link is weaker than economists would like, and their efforts to allow for this may be less adequate than they suppose. Mr Andrews and his co-authors have reassessed the reliability of 17 articles published in the profession’s leading journal, suggesting better ways for economists to handle the instruments they use. “No econometrician has generated more widespread excitement than him in a very long time,” according to Edward Glaeser of Harvard (one of our 1998 batch).

So how have these question-driven economists tackled the biggest economic question of the past decade: the global financial crisis? That disaster posed a problem for quasi-experimental empirical methods, which work better for data-rich microeconomics than for macroeconomics, where the data are less plentiful. The scope for macroeconomic experimentation is also limited. On April Fools’ Day an economist circulated an abstract purportedly co-written by Ben Bernanke and Janet Yellen in which the former central bankers revealed they had raised and lowered interest rates randomly during their stints in office in a covert experiment known only to themselves. In reality, as Ms Nakamura points out, the Federal Reserve employs hundreds of PhDs to make sure its decisions are as responsive to the economy (and therefore non-random) as possible.

None of today’s bright young macroeconomists have reinvented their sub-discipline in the wake of the Great Recession in the way that John Maynard Keynes did after the Great Depression (although Keynes was already 52 when he published “The General Theory”). If they had they would have drawn more attention from the nominators of this list.

Yet, unlike our batch in 2008, this year’s group does contain two economists who have carried the credibility revolution some way into macroeconomics. Ms Nakamura, who writes many of her papers with Jon Steinsson, also at Berkeley, has used micro methods to answer macro questions. Working with the Bureau of Labor Statistics she has unpacked America’s inflation index, examining the prices for everything from health care to Cheerios entangled within it. Whereas macroeconomists typically look at quarterly national data, her work cuts up time and space much more finely. She has divided America into its 50 states and the passage of time into minutes. This has let her shed light on fiscal stimulus and the impact of monetary policy as seen through the half-hour window in which financial markets digest surprising nuances from Fed meetings.

One of her most provocative papers is also the simplest. She and her co-authors argue that America’s slow recovery from its recent recessions is not the result of a profound “secular stagnation” as posited by Larry Summers (one of our 1998 picks). Rather it reflects the fact that the rise in the number of working women, rapid for several decades after the war, has since slowed. In the past, the influx of women put overall employment on a strong upward trajectory. Thus after a recession, the economy had to create a lot of jobs to catch up with the rising trend. In more recent decades, employment trends have flattened. Thus even a relatively jobless recovery will restore the economy to its underlying path.

Our final pick, Mr Sufi, is, like Ms Nakamura, exploiting voluminous data unavailable to scholars of previous downturns to understand the Great Recession. Had America merely suffered from an asset bubble in housing (like the dotcom bubble of the 1990s) or a lending mishap (like the savings and loan crisis of the 1980s), it could have weathered the storm, he feels. But high levels of household debt made the spending fall unusually severe and the policy response (a banking rescue and low interest rates) surprisingly ineffective. Mr Sufi and Atif Mian of Princeton University find evidence for their macro-view in a micro-map of debt, spending and unemployment across America’s counties. The households of California’s Monterey county, for example, had debts worth 3.9 times their incomes on the eve of the crisis. Spending cutbacks in counties like this accounted for 65% of the jobs lost in America from 2007 to 2009, they estimate. The Obama administration’s failure to provide more debt relief for homeowners with negative equity was the biggest policy mistake of the Great Recession, they say.

Because they want to change the world, not just delight in its perversity, many of these economists engage closely with policy. Ms Stantcheva now sits on France’s equivalent of the council of economic advisers. Mr Sufi is pushing for mortgage payments to be linked to a local house-price index, falling when the index does, but allowing the lenders a small slice of the homeowners’ gains if the market rises. He and Mr Mian have also proposed linking student-loan repayments to the unemployment rate of recent graduates.

Intriguingly, this concern for real-world outcomes is pushing some of these young economists back towards theory. In recommending a policy reform, an economist is saying that it serves some objective better than the status quo. That objective needs a theoretical rationale. A goal like improving well-being might seem bland and unexceptionable. But most policies hurt some people while helping others. How should society weigh the hurt against the help?

Ms Stantcheva and Emmanuel Saez, of the University of California, Berkeley, have proposed a theoretical framework that accommodates different answers to that question (utilitarian, libertarian, Rawlsian, and so on). Meanwhile Mr Hendren has calculated that the American tax system is implicitly willing to impose $1.5-2 of hurt on rich people to provide $1 of help to the poor. That provides one possible benchmark for evaluating new policies.
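
That benchmark lends itself to a back-of-the-envelope test, sketched below with hypothetical numbers: a transfer that costs the rich $1.80 for every $1 delivered to the poor sits inside the $1.5-2 range the tax system already seems to accept, while a costlier one does not.

```python
# A back-of-the-envelope use of that benchmark, with invented numbers: a policy
# "passes" if it costs the rich no more per dollar delivered to the poor than
# the $1.5-2 the current tax system is implicitly willing to accept.
def passes_benchmark(cost_to_rich, benefit_to_poor, threshold=2.0):
    """True if the policy's cost per dollar of benefit is within the threshold."""
    return cost_to_rich / benefit_to_poor <= threshold

print(passes_benchmark(cost_to_rich=1.8, benefit_to_poor=1.0))   # True: $1.80 per $1
print(passes_benchmark(cost_to_rich=2.6, benefit_to_poor=1.0))   # False: $2.60 per $1
```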

Engaging with policy can take a toll. “I’ve testified in about 15 different school-committee meetings,” says Mr Pathak. “I’ve had families shouting at me.” But it is also stimulating, he adds, not just because it helps people, but also because it enriches research. “Testifying in school-committee meetings is one of the richest sources of research ideas I’ve ever had.”

When Thomas Menino, Boston’s long-serving former mayor, expressed concern that the city’s policy of busing kids to their school of choice across the city was undermining the sense of community around some schools, Mr Pathak looked into “walk zones”, which reserve some places for children living within walking distance. Seemingly innocuous details of such schemes turned out to have far-reaching effects. The theoretical subtleties he uncovered proved to be “incredibly rich”, Mr Pathak says, keeping him fruitfully busy for a couple of years on something that “there’s no way we would have looked at…without interacting with Boston and the mayor.” By answering practical questions rigorously, economists can both make themselves useful and be spurred in interesting new directions.

The importance of fingerwork

Mozart’s first biographer claimed that the child prodigy composed his music feverishly in his mind, without ever coming to the “klavier”. Many people came to believe that he could compose whole masterpieces while walking after dinner, travelling in a carriage or “in the quiet repose of the night”.

More recent musicology casts doubt on this account. Much of Mozart’s work was sketched out, or even improvised, on a keyboard; he is thought to have done little composition without one.

The theorists of the 1980s resembled the mythical Mozart of the popular imagination, completing beautiful deductive theories with their minds, before seeing how they played in the real world. The best young economists of today more closely resemble the less magical Mozart described by later scholars. Just as he walked back and forth between his compositional sketches and his piano, they move back and forth between their theoretical notation and their empirical instruments, searching for the keys to knowledge.