Saturday, December 28, 2013

NY Times attack by innuendo: Commodity price speculation edition

I've written a few times [e.g., 1, 2, 3] about commodity price speculation, arguing that speculation hasn't been the cause of volatility in recent years.  My views on this haven't changed, but I'm open to new arguments for why I and legions of other economists might be wrong. 

Overall, I haven't found this topic to be very interesting, because those arguing that Wall Street caused the food price crisis, or caused oil prices to spike, really haven't presented any kind of logical argument.  All they do is point to the fact that Wall Street has gotten into the commodity game more than it has in the past.  But this isn't news, and it falls far short of an explanation for how its trading activities are affecting prices.  I haven't seen a serious study claiming a link, only magazine articles and opinion pieces, all thin on substance.  Professionally, I don't see this as interesting work.  This review lays things out pretty clearly.

I've been waiting for this issue to die.  But it seems there are political winds that won't let it.  Yesterday the New York Times came out with a new attack on academics, particularly Craig Pirrong and Scott Irwin.  These guys have been doing some consulting work, mainly for oil companies and the Chicago Mercantile Exchange.  These companies have also been giving money to their universities.  The documentary Inside Job is referenced (a great movie, by the way).  

There is a lot of innuendo in the story, and I do feel uncomfortable about academic economists getting cushy consulting contracts with big trading companies.  Still, the scale of what's mentioned seems tame relative to the kind of consulting gigs that seem commonplace in the economics realm of academia, or even the levels of corporate cash given to universities. 

The story is most notable for what it lacks: how, exactly, are the various companies distorting markets in a way that hurts consumers and/or producers?  Inside Job describes the shady business of securities backed by stated-income mortgages, how Goldman Sachs was shorting the products it was selling to clients, etc.  We can see that there were shady business dealings and how they probably helped to fuel the real estate bubble.  So, where's the real underlying story in commodity market trading?

I don't think this story measures up to the New York Times' standard.  It's sad, because there might actually be a story here, but it would require a lot more legwork by Kocieniewski.  That story, however, is probably a bit less salacious than what Kocieniewski was shooting for.  My guess (I welcome challenges--I'm not sure about this) is that the real story is a battle between old-school commodity groups (ADM, Cargill, and other so-called "majors" that both speculate and store commodities) and new competition from Wall Street, like JP Morgan Chase and Goldman Sachs.  I would further guess that some of the strangeness in commodity prices over recent years, like futures prices sometimes not converging to spot prices, is partly a reflection of this changing marketplace.  Note, however, that the "strangeness" in pricing of which I speak has basically zero relevance to producer and consumer prices.  

My vague sense is that incompleteness in these markets---the fact that futures contracts exist only for certain months and certain delivery dates---confers a special advantage on the majors who control inventories at delivery points.  I'd further guess that Wall Street players are getting into commodities partly for diversification, and partly because they figure they can skim some of the rents earned by the commodity players. 

Anyhow, I don't have all the details figured out.  It's something I started to pencil out once, but, like I said, I don't see this as especially useful for larger-scale questions about supply and demand, policy, climate change, etc. So I figure I've got better things to do.  

For the record, I've never had a consulting contract with a commodity group or a Wall Street firm.

Wednesday, November 20, 2013

Fixed Effects Infatuation

The fashionable thing to do in applied econometrics, going on 15 years or so, is to find a gigantic panel data set, come up with a cute question about whether some variable x causes another variable y, and test this hypothesis by running a regression of y on x plus a huge number of fixed effects to control for "unobserved heterogeneity" or deal with "omitted variable bias."  I've done a fair amount of work like this myself. The standard model is:

y_i,t = β x_i,t + a_i + b_t + u_i,t

where a_i are fixed effects that span the cross section, b_t are fixed effects that span the time series, and u_i,t is the model error, which we hope is not associated with the causal variable x_i,t once a_i and b_t are accounted for.

If you're really clever, you can find geographic or other kinds of groupings of individuals, like counties, and include group-by-year fixed effects:

y_i,t = β x_i,t + a_i + b_g,t + u_i,t
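
In R, a minimal sketch of the two specifications might look like this (hypothetical data frame d with outcome y, regressor x, unit i, group g, and year t; with big panels you'd use a specialized fixed-effects routine rather than lm):

fit1 <- lm(y ~ x + factor(i) + factor(t), data = d)            # unit and year FE
fit2 <- lm(y ~ x + factor(i) + factor(g):factor(t), data = d)  # unit and group-by-year FE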

The generalizable point of my lengthy post the other day on storage and agricultural impacts of climate change was that this approach, while useful in some contexts, can have some big drawbacks.  Increasingly, I fear applied econometricians misuse it.  They found their hammer and now everything is a nail.

What's wrong with fixed effects? 

A practical problem with fixed effects gone wild is that they generally purge the data set of most variation.  This may be useful if you hope to isolate some interesting localized variation that you can argue is exogenous.  But if the most interesting variation derives from a broader phenomenon, then there may be too little variation left over to identify an interesting effect.

A corollary to this point is that fixed effects tend to exaggerate the attenuation bias from measurement error, since measurement errors will comprise a much larger share of the variation remaining in x after the fixed effects have been removed.
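
A toy simulation makes the point (my own illustration, with made-up numbers): if most of the true variation in x is cross-sectional and x is measured with error, the within estimator is attenuated far more than plain OLS.

set.seed(1)
n <- 200; t <- 10
id <- rep(1:n, each = t)
x  <- rep(rnorm(n), each = t) + rnorm(n * t, sd = 0.3)  # signal is mostly cross-sectional
y  <- x + rnorm(n * t)                                  # true coefficient is 1
xm <- x + rnorm(n * t, sd = 0.5)                        # x measured with error
coef(lm(y ~ xm))["xm"]               # mild attenuation, roughly 0.8
coef(lm(y ~ xm + factor(id)))["xm"]  # severe attenuation, roughly 0.3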

But there is a more fundamental problem.  To see this, take a step back and think generically about economics.  In economics, almost everything affects everything else, via prices and other kinds of costs and benefits.  Micro incentives affect choices, and those choices add up to affect prices, costs and benefits more broadly, and thus help to organize the ordinary business of life.  That's the essence of Adam Smith's "invisible hand," supply and demand, equilibrium theory, etc.  That insight, a unifying theoretical theme if there is one in economics, implies a fundamental connectedness of human activities over time and space.  It's not just that there are unobserved correlated factors; everything literally affects everything else.  On some level it's what connects us to ecologists, although some ecologists may be loath to admit an affinity with economics.

In contrast to the nature of economics, regression with fixed effects is a tool designed for experiments with repeated measures.  Heterogeneous observational units get different treatments, and they might be mutually affected by some outside factor, but the observational units don't affect each other.  They are, by assumption, siloed, at least with respect to consequences of the treatment (whatever your x is).  This design doesn't seem well suited to many kinds of observational data.

I'll put it another way.  Suppose your (hopefully) exogenous variable of choice is x, and x causes z, and then both x and z affect y.  Further, suppose the effects of x on z spill outside of the confines of your fixed-effects units.  Even if fixed effects don't purge all the variation in x, they may purge much of the path going from x to z and z to y, thereby biasing the reduced form link between x and y. In other words, fixed effects are endogenous.
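
Here's a bare-bones illustration of that mechanism (again my own sketch, with a made-up structure): x moves a group-level variable z, both move y, and group fixed effects absorb the z channel entirely.

set.seed(2)
G <- 50; m <- 20                       # groups and units per group
g <- rep(1:G, each = m)
x <- rep(rnorm(G), each = m) + rnorm(G * m, sd = 0.5)  # x varies mostly at the group level
z <- ave(x, g)                         # z: group-level consequence of x (the spillover)
y <- x + 2 * z + rnorm(G * m)          # raising x everywhere raises y by about 3
coef(lm(y ~ x))["x"]                   # ~2.6: picks up most of the x -> z -> y path
coef(lm(y ~ x + factor(g)))["x"]       # ~1.0: group FE purge the z channel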

None of this is to say that fixed effects, with careful accounting of correlated unobserved factors, can't be very useful in many settings.  But the inferences we draw may be very limited.  And without care, we may draw conclusions that are very misleading. 

Monday, November 11, 2013

Can crop rotations cure dead zones?

It is now fairly well documented that much of the water quality problems leading to the infamous "dead zone" in the Gulf of Mexico come from fertilizer applications on corn.  Fertilizer on corn is probably a big part of similar challenges in the Chesapeake Bay and Great Lakes.

This is a tough problem.  The Pigouvian solution---taxing fertilizer runoff, or possibly just fertilizer---would help.  But we can't forget that fertilizer is the main source of large crop productivity gains over the last 75 years, gains that have fed the world.  It's hard to see how even a large fertilizer tax would much reduce fertilizer applications on any given acre of corn.

However, one way to boost crop yields and reduce fertilizer applications is to rotate crops.  Corn-soybean rotations are the most common: soybeans fix nitrogen in the soil, which reduces the need for applications on subsequent corn plantings.  Rotation also reduces pest problems.  The yield boost on both crops is remarkable.  More rotation would mean less corn, and less fertilizer applied to the remaining corn, at least in comparison to planting corn after corn, which still happens a fair amount.

I've got a new paper (actually, an old but newly revised paper), coauthored with Mike Livingston of USDA and Yue Zhang, a graduate student at NCSU, that might provide a useful take on this issue.  This paper has taken forever.  We've solved a fairly complex stochastic dynamic model that takes the variability of prices, yields and agronomic benefits of rotation into account.  It's calibrated using the autoregressive properties of past prices and experimental plot data.  All of these stochastic dynamics can matter for rotations.  John Rust once told me that Bellman always thought crop rotations would be a great application for his recursive method of solving dynamic problems.

Here's the gist of what we found:

Always rotating, regardless of prices, is close to optimal, even though economically optimal planting may rotate much less frequently.  One implication is that reduced corn monoculture and lower fertilizer application rates might be implemented with modest incentive payments of $4 per acre or less, and quite possibly less than $1 per acre.

In the past I've been skeptical that even a high fertilizer tax could have much influence on fertilizer use. But given low-cost substitutes like rotation, perhaps it wouldn't cost as much as some think to make substantial improvements in water quality.

Nathan Hendricks and coauthors take a somewhat different approach to the same issue (also see this paper).  It's hard to compare our models, but I gather they are saying roughly similar things.

Tuesday, November 5, 2013

Weather, storage and an old climate impact debate

This somewhat technical post is a belated followup to a comment I wrote with Tony Fisher, Michael Hanemann and Wolfram Schlenker, which was finally published last year in the American Economic Review.  I probably should have done this a long time ago, but I needed to do a little programming.  And I've basically been slammed nonstop.

First the back story:  The comment re-examines a paper by Deschênes and Greenstone (DG) that supposedly estimates a lower bound on the effects of climate change by relating county-level farm profits to weather.  They argue that year-to-year variation in weather is random---a fair proposition---and control for unobserved differences across counties using fixed effects.  This is all pretty standard technique.

The overarching argument was that with climate change, farmers could adapt (adjust their farming practices) in ways they cannot with weather, so the climate effect on farm profits would be more favorable than their estimated weather effect.

Now, bad physical outcomes in agriculture can actually be good for farmers' profits, since demand for most agricultural commodities is pretty steep: prices go up as quantities go down.  So, to control for the price effects they include year fixed effects.  And since farmers grow different crops in different parts of the country and there can be local price anomalies, they go further and use state-by-year fixed effects so as to squarely focus on quantity effects in all locations.

Our comment pointed out a few problems:  (1) there were some data errors, like missing temperature data apparently coded as zeros, and much of the Midwest and most of Iowa dropped from the sample without explanation; (2) in making climate predictions they applied state-level estimates to county-level baseline coefficients, in effect making climate predictions that regress to the state mean (e.g., Death Valley and Mt. Whitney have different baselines but the same future); (3) all those fixed effects wash out over 99 percent of weather variation, leaving only data errors for estimation; (4) the standard errors didn't appropriately account for the panel nature of the spatially correlated errors.

These data and econometric issues got the most attention.  Correct these things and the results change a lot.  See the comment for details.

But, to our minds, there is a deeper problem with the whole approach.  Their measure of profits was really no such thing, at least not in an economic sense: it was reported sales minus a crude estimate of current expenditures.  The critical thing here is that farmers often do not sell what they produce.  About half the country's grain inventories are held on farm.  Farms also hold inventory in the form of capital and livestock, which can be held, divested or slaughtered.  Thus, effects of weather in one year may not show up in profits measured in that year.  And since inventories tend to be accumulated in plentiful times and divested in bad times, these inventory adjustments are going to be correlated with the weather and cause bias.

Although DG did not consider this point originally, they admitted it was a good one, but argued they had a simple solution: just include the lags of weather in the regression. When they attempted this, they found lagged weather was not significant, and thus concluded that this issue was unimportant.  This argument is presented in their reply to our comment.

We were skeptical about their proposed solution to the storage issue.  And so, one day long ago, I proposed to Michael Greenstone that we test his proposed solution:  we could solve a competitive storage model, assume farmers store as a competitive market would, and then simulate prices and quantities that vary randomly with the weather.  Then we could regress sales (consumption × price) against our constructed weather and lags of weather, plus price controls.  If the lags worked in this instance, where we knew the underlying physical structure, then it might work in reality.

Greenstone didn't like this idea, and we had limited space in the comment, so the storage stuff took a minimalist back seat. Hence this belated post.

So I recently coded a toy storage model in R, which is nice because anyone can download and run this thing  (R is free).  Also, this was part of a problem set I gave to my PhD students, so I had to do it anyway.

Here's the basic set up:

y    is production, which varies randomly (like the weather).
q    is consumption, or what's sold in a year.
p    is the market price, which varies inversely with q (the demand curve).
z    is the amount of the commodity on hand (y plus carryover from last year).

The point of the model is to figure out how much production to put in or take out of storage.  This requires numerical analysis (thus, the R code).  Dynamic equilibrium occurs when there is no arbitrage: where it's impossible to make money by storing more or storing less.

Once we've solved the model, which basically gives q, p as a function of z, we can simulate y with random draws and develop a path of q and p.  I chose a demand curve, interest rate and storage cost that can give rise to a fair amount of price variability and autocorrelation, which happens to fit the facts.  The code is here.
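
For readers who don't want to pull the code, here's a minimal sketch of how a model like this can be solved (my own stripped-down version with illustrative parameters, not the exact model in the linked code).  It iterates on the equilibrium price function until the no-arbitrage condition holds everywhere:

# Inverse demand p = a - b*q, per-unit storage cost k, interest rate r.
a <- 200; b <- 1; k <- 2; r <- 0.05
beta <- 1 / (1 + r)
set.seed(1)
ydraws <- rnorm(500, mean = 100, sd = 10)    # weather-driven production draws

zgrid <- seq(60, 200, length.out = 201)      # grid for supply on hand, z
pfun  <- approxfun(zgrid, pmax(a - b * zgrid, 0), rule = 2)  # initial guess: no storage

for (it in 1:300) {
  pnew <- sapply(zgrid, function(z) {
    # discounted expected price tomorrow, net of storage cost, if s is stored
    emv <- function(s) beta * mean(pfun(ydraws + s)) - k
    if (a - b * z >= emv(0)) {
      a - b * z   # storing is unprofitable here: s = 0
    } else {
      # no-arbitrage: P(z - s) = beta * E[p(y' + s)] - k pins down storage s
      s <- uniroot(function(s) (a - b * (z - s)) - emv(s), c(0, z))$root
      a - b * (z - s)
    }
  })
  if (max(abs(pnew - pfun(zgrid))) < 1e-6) break
  pfun <- approxfun(zgrid, pnew, rule = 2)
}
# pfun(z) now approximates the equilibrium price given supply on hand;
# simulating forward with fresh y draws generates the q and p paths used below.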

Now, given our simulated y, q and p, we might estimate:

(1)   q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + error

(the ... means additional lags, as many as you like.  I use five.)

This expression makes sense to me, and might have been what DG had in mind: quantity in any one year is a function of this year's weather and that of a reasonable number of past years, all of which affect today's quantity via storage.  For the regression to fully capture the true effect of weather, the sum of the b# coefficients should be one.
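
In R, with the simulated series q and y in hand, this might be run as follows (a sketch; the lag helper is mine, not from the posted code):

lagk <- function(x, k) c(rep(NA, k), head(x, -k))   # x lagged k periods
fit1 <- lm(q ~ y + lagk(y, 1) + lagk(y, 2) + lagk(y, 3) + lagk(y, 4) + lagk(y, 5))
sum(coef(fit1)[-1])   # should head toward 1 as more lags are added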

Alternatively we might estimate:

(2)   p_t q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + error

This is almost like DG's profit regression: costs of production in this toy model are zero, so "profit" is just total sales.  But DG wanted to control for price effects in order to isolate the pure weather effect on quantity, since in the above relationship the sum of the b# coefficients is likely negative.  So, to do something akin to DG within the context of this toy model, we need to control for price.  This might be something like:

(3)  p_t q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + c p_t + error

Or, if you want to be a little more careful, recognizing that the relationship is nonlinear, we might control for p_t more flexibly and use a polynomial.  Note that we cannot use fixed effects like DG because this isn't a panel.  I'll come back to this later.  In any case, with better controls we get:
 
(4)   p_t q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + c1 p_t  + c2 p_t^2 + c3 p_t^3 +  error
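
The sales regressions follow the same pattern (again a sketch; poly() is what generates the coefficient names in the output below):

fit2 <- lm(I(p * q) ~ y + lagk(y, 1) + lagk(y, 2) + lagk(y, 3) + lagk(y, 4) + lagk(y, 5))
fit4 <- update(fit2, . ~ . + poly(p, 3))   # equation (4): cubic control for price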

At this point you should be worrying about having p_t on both the right and left side.  More on this in a moment.  First, let's take a look at the results:

Equation 1:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)     1.68       1.32    1.28     0.20
y               0.39       0.03   15.62     0.00
l.y             0.23       0.03    9.17     0.00
l2.y            0.10       0.03    3.83     0.00
l3.y            0.07       0.03    2.66     0.01
l4.y            0.07       0.03    2.69     0.01
l5.y            0.06       0.03    2.34     0.02


The sum of the y coefficients is 0.86.  I'm sure if you put in enough lags they would sum to 1. You shouldn't take the Std. Error or t-stats seriously for this or any of the other regressions, but that doesn't really matter for the points I want to make. Also, if you run the code, the exact results will differ because it will take a different random draw of y's, but the flavor will be the same.

Equation 2:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  4985.23     166.91   29.87        0
y             -72.15       3.19  -22.63        0
l.y           -43.67       3.20  -13.64        0
l2.y          -22.52       3.21   -7.03        0
l3.y          -15.61       3.21   -4.87        0
l4.y          -13.58       3.19   -4.26        0
l5.y          -12.26       3.19   -3.85        0


All the coefficients are negative.  As we expected, good physical outcomes for y mean bad news for profits, since prices fall through the floor.  If you know a little about the history of agriculture, this seems about right.  So, let's "control" for price.

Equation 3:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  2373.15     167.51   14.17        0
y             -28.12       2.91   -9.66        0
l.y           -17.72       2.10   -8.43        0
l2.y          -11.67       1.63   -7.17        0
l3.y           -8.07       1.57   -5.16        0
l4.y           -5.99       1.56   -3.84        0
l5.y           -5.68       1.54   -3.68        0
p               7.84       0.44   17.65        0


Oh, good, the coefficients are less negative.  But we still seem to have a problem.  So, let's improve our control for price by making it a 3rd order polynomial:

Equation 4:
            Estimate Std. Error       t value Pr(>|t|)
(Intercept)  1405.32          0  1.204123e+15     0.00
y               0.00          0  2.000000e-02     0.98
l.y             0.00          0  3.000000e-02     0.98
l2.y            0.00          0  6.200000e-01     0.53
l3.y            0.00          0 -3.200000e-01     0.75
l4.y            0.00          0 -9.500000e-01     0.34
l5.y            0.00          0 -2.410000e+00     0.02
poly(p, 3)1  2914.65          0  3.588634e+15     0.00
poly(p, 3)2  -716.53          0 -1.795882e+15     0.00
poly(p, 3)3     0.00          0  1.640000e+00     0.10


The y coefficients are now almost precisely zero. 

By DG's interpretation, we would say that weather has no effect on profit outcomes and thus that climate change is likely to have little influence on US agriculture.  Except in this simulation we know the underlying physical reality: one unit of y ultimately has a one-unit effect on output.  DG's interpretation is clearly wrong.

What's going on here? 

The problem comes from the attempt to "control" for price.  Price, after all, is a key (the key?) consequence of the weather.  Because storage theory predicts that prices incorporate all past production shocks, whether they are caused by weather or something else, controlling for price removes all weather effects on quantities.  So DG are ultimately mixing up cause and effect, in their case by using a zillion fixed effects.  One should take care in adding "controls" that might actually be an effect, especially when you supposedly have a random source of variation.  David Freedman, the late statistician who famously critiqued regression analysis in the social sciences and provided inspiration to the modern empirical revolution in economics, often emphasized this point.

Now, some might argue that the above analysis is just a single crop, and that it doesn't apply to DG's panel data.  I'd argue that if you can't make it work in a simpler case, it's unlikely to work in a case that's more complicated.  More pointedly, this angle poses a catch-22 for the identification strategy: if inclusion of state-by-year fixed effects does not absorb all historic weather shocks, then the weather shocks must have been crop- or substate-specific, in which case there is bias due to endogenous price movements even after the inclusion of these fixed effects.  On the other hand, if enough fixed effects are included to account for all endogenous price movements, then lagged weather by definition does not add any additional information and should not be significant in the regression.  Prices are a sufficient statistic for all past and current shocks.

All of this is to show that the whole DG approach has problems.  However, I think the idea of using lagged weather is a good one if combined with a somewhat different approach.  We might, for example, relate all manner of endogenous outcomes (prices, quantities, and whatever else) to current and past weather.  This is the correct "reduced form."  From these relationships, combined with some minimalist economic structure, we might learn all kinds of interesting and useful things, and not just about climate change.  This observation, in my view, is the over-arching contribution of my new article with Wolfram Schlenker in the AER.

I think there is a deeper lesson in this whole episode that gets at a broader conversation in the discipline about data-driven applied microeconomics over the last 20 years.  Following Angrist, Ashenfelter, Card and Krueger, among others, everyone's doing experiments and natural experiments.  A lot of this stuff has led to some interesting and useful discoveries.  And it's helped to weed out some applied econometric silliness.

Unfortunately, somewhere along the way, some folks lost sight of basic theory.   In many contexts we do need to attach our reduced forms to some theoretical structure in order to interpret them.  For example, bad weather causing profits to go up in agriculture actually makes sense, and indicates something bad for consumers and for society as a whole.

And in some contexts a little theory might help us remember what is and isn't exogenous.

Wednesday, October 23, 2013

What is the value of symbolic action?

Robert Stavins argues that largely symbolic actions do not help and may ultimately hurt the cause for action on climate change (HT Mark Thoma):
Over the past year or more, across the United States, there has been a groundswell of student activism pressing colleges and universities to divest their holdings in fossil fuel companies from their investment portfolios.  On October 3, 2013, after many months of assessment, discussion, and debate, the President of Harvard University, Drew Faust, issued a long, well-reasoned, and – in my view – ultimately sensible statement on “fossil fuel divestment,” in which she explained why she and the Corporation (Harvard’s governing board) do not believe that “university divestment from the fossil fuel industry is warranted or wise.”  I urge you to read her statement, and decide for yourself how compelling you find it, and whether and how it may apply to your institution, as well. 
About 10 days later, two leaders of the student movement at Harvard responded to President Faust in The Nation.  Andrew Revkin, writing at the New York Times Dot Earth blog, highlighted the fact that the students responded in part by saying, “We do not expect divestment to have a financial impact on fossil fuel companies …  Divestment is a moral and political strategy to expose the reckless business model of the fossil fuel industry that puts our world at risk.” 
I agree with these students that fossil-fuel divestment by the University would not have financial impacts on the industry, and I also agree with their implication that it would be (potentially) of symbolic value only.  However, it is precisely because of this that I believe President Faust made the right decision.  Let me explain.
Some may feel exasperated.  If students cannot even make a symbolic or moral point, what can they do?  If your initial reaction is skepticism, I encourage you to click through and read the whole thing, including an earlier post where he addresses what individuals and small institutions can do to curb global warming.  His bottom line: 
Try to focus on actions that can make a real difference, as opposed to actions that may feel good or look good but have relatively little real-world impact, particularly when those feel-good/look-good actions have opportunity costs, that is, divert us from focusing on actions that would make a significant difference.  Climate change is a real and pressing problem.  Strong government actions will be required, as well as enlightened political leadership at the national and international levels.
Stavins also describes some reasons why symbolic activities might be counterproductive.  

I've got one small quibble.  Stavins is right that the over-arching actions required to curb global warming must come at the national and international levels of government.  This may seem too remote for many individuals and institutions who want to engage actively.

But with CO2 concentrations already over 400 ppm, there is also a fairly large amount of warming already baked into our future.  Different locations will be affected in different ways, and state and local communities and governments need contingency plans and strategies for adaptation.  Building codes and land use regulations need to be revised.  There is also plenty of waste and inefficiency in current state and local policies.

So, if you want to act locally instead of nationally or globally, try to find practical ways for local governments and institutions to improve their regulatory systems.  Here in Hawai'i, we could probably manage our local public resources (energy, water and coastal ecosystems) much better, regardless of climate change.  And given the warming and sea level rise already anticipated, we need to develop sensible policies for adaptation.

The first step is to thoroughly educate yourself on the local challenges and policy tradeoffs.  Doing that is probably more work than you think.  As Stavins intimates, people often seem to focus on simple (but ultimately useless) symbolic actions because they're easy to do.  Perhaps we make ourselves feel better by talking gravely about the problems and by making great moral pronouncements about what other people should be doing.  Never mind that all of this accomplishes precisely nothing.

Monday, September 30, 2013

Desperate times bring desperate measures

Update: Eduardo Porter makes the same point, only he does a much better job of it.

A bit off topic, but the impending government shutdown has me thinking in simple game theoretic terms.

Some on the left (and right) seem to think that Congressional actions are "crazy" since a government shutdown is likely to hurt the Republican party.  After all, that's what happened the last time, when Newt Gingrich shut down the government in 1995, which led to his demise and helped Clinton win reelection against Dole in 1996.

It's probably fair to guess that, while this time is different (isn't every time, at least a little?), the shutdown will likely hurt the Republican party.  So why are they doing it?  Are they really crazy?  Has the radical fringe taken over, and is it leading us over the cliff to disaster?

Well, maybe.  But maybe their actions, even if potentially disastrous, are rational and not surprising given the circumstances.  It seems to me the Republican party is in a desperate situation, and desperate times rationally bring about desperate actions.  It's possible, though unlikely, that Obama and the Democrats will cave and give Republicans something in exchange, like partial repeal of the health care law, for not blowing up the economy.  It also seems possible, though unlikely, that a shutdown and/or default will hurt Democrats as much as or more than Republicans.  Even if these are unlikely propositions, they have more than zero probability.

The alternative is that Republicans do nothing: Obamacare gets implemented, the economy continues to recover, and the nation's demographics steadily change, all of which basically ensures the death of the modern Republican party.  So, do they go for the Hail Mary pass or just give up?  It seems to me that a rational party goes for the Hail Mary pass, which is what they're doing.

So, the good news is that the Republican party, Tea Partiers included, probably isn't crazy.  The bad news is that it's hard to see how this whole thing plays out without the country, and possibly much of the world, being badly hurt.

Climate Change and Resource Rents


With the next IPCC report coming out, there's been more reporting on climate change issues.  Brad Plumer over at Wonkblog has a nice summary that helps to illustrate how much climate change is already "baked in," so to speak.

I'd like to comment on one point.  Brad writes: "Humans can only burn about one-sixth of their fossil fuel reserves if they want to keep global warming below 2ºC."

I'd guess some might quibble with the measurement a bit, since viable reserves depend on price and technology, plus there are many unknowns about how much fossil fuel there really is down there.  But this is probably in the ballpark, and possibly conservative.

Now imagine you own a lot of oil, coal and/or natural gas, you're reading Brad Plumer, and you're wondering what might happen to climate policy in the coming years.  Maybe not next year or even in the next five or ten years, but you might expect that eventually governments will start doing a lot more to curb fossil fuel use.  You might then want to sell your fossil fuels now or very soon, while you can.  If many resource owners feel this way, fossil fuel prices could fall and CO2 emissions would increase.  

This observation amounts to the so-called "green paradox."  Related arguments suggest that taxing carbon may have little influence on use, and subsidizing renewable fuels and alternative technologies, without taxing or otherwise limiting carbon-based fuels, might make global warming worse, since it could push emissions toward the present.

Research on these ideas, mostly theoretical, is pretty hot in environmental economics right now.  It seems like half the submissions I manage at JEEM touch on the green paradox in one way or another.  

All of it has me thinking about a point my advisor Peter Berck often made when I was in grad school. At the time, we were puzzling over different reasons why prices for non-renewable resources--mostly metals and fossil fuels--were not trending up like Hotelling's rule says they should.  Peter suggested that we may never use the resources up, because if we did, we'd choke on all the pollution.  Resource use would effectively be banned before all of it could be used. If resource owners recognized this, they'd have no incentive to hold or store natural resources and the resource rent (basically the intrinsic value based on its finite supply) would be zero, which could help explain non-increasing resource prices.
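
To spell out the logic (my notation, not Peter's): Hotelling's rule says the rent on a resource in the ground must grow at the rate of interest,

p_t - c = (p_0 - c) e^{rt},

where c is the marginal extraction cost.  If owners expect extraction to be effectively banned at some date before exhaustion, the rent at that date is zero, and working backward it must be zero today too: p_0 = c, and prices need not trend upward at all.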

For all practical purposes, Peter understood the green paradox some 15-20 years ago.  Now the literature is finally playing catch up.  

Thursday, September 26, 2013

I've been touched by genius

Awesome news.  My colleague David Lobell just won a MacArthur fellowship.

http://news.stanford.edu/news/2013/september/macarthur-fellowship-awards-092513.html

Seriously, David is a fantastic colleague and very deserving of this award.  Also, I think we have some great new research in the pipeline and with any luck this might help bring some exposure to it.


Saturday, September 7, 2013

GGG is among the top 200 most influential economics blogs (just barely)

I just stumbled upon this ranking via Econobrowser, which is number 10, and one of the blogs I really like to visit.

Greed, Green and Grains is number 199.

Well, I guess that's not a crown jewel, but I'll take it, especially given how my little niche isn't one of the biggest fields of economics and how little time I have to dedicate to this thing. 

I realize posting is thin.  I will try to post when I can, but my commitments are just too many to post much these days.  G-FEED, which is steadily growing in influence, will have more posts because there are a number of us contributing, some of whom are rapidly becoming rock stars of science, with some major publications and media attention.

Tuesday, August 6, 2013

Crop insurance under climate change

How should crop insurance premiums adjust to a changing climate in order to remain actuarially fair?
Short answer:  Very slowly.

That seems pretty obvious to me, and hopefully to anyone who thinks about it for a few minutes, even if you think climate change is ultimately going to have big impacts.  Moreover, given the way crop insurance premiums are already determined---as a function of a farmer's own recent yield history---gradual adjustment of premiums will take place naturally.

So, what should USDA's Risk Management Agency do, if we think nasty crop outcomes like last year are going to be more frequent going forward?

Well, I'll abstain from making a recommendation, but I will say that if they do absolutely nothing, there will be no significant budgetary implications.

None of this is to say that there might not be other ways to improve crop insurance.

Update: So, if this issue is so unimportant, why do I mention it?  Because I'm seeing and hearing the question a lot, and my general sense is that energy and resources might be better spent on other issues.

Thursday, August 1, 2013

Integrated assessment models: What do they tell us about climate change policy?

"Very little,"  according to Robert Pindyck in a new working paper.

Integrated assessment models (IAMs to practitioners) stitch together projections from climate models, energy sector models, agronomic crop models, models of other sectors of the economy, and partial or general equilibrium models that account for prices and interactions with the broader economy, to derive a more comprehensive evaluation of the costs and benefits of climate change.

Pindyck is understandably frustrated with the false sense of precision these models can impart.  As he explains, a few reasonable tweaks of any of these models can give very different estimates about the social cost of carbon---the price we should pay, but typically don't, for emitting CO2.

Pindyck raises some good criticisms about IAMs, or at least says out loud a lot of things that many economists have quietly said to each other.  I'm glad he's bringing our varying assumptions and wildly varying cost-of-carbon estimates out into the open for all to see.  Perhaps it will push us to make our modeling efforts a little more useful, or at least more transparent.

He's right to pick on false precision.  But I wonder: has anyone really been fooled?  My sense is no.  One positive thing about these modeling efforts is that they allow us to see which assumptions are most critical.  They are nice (black?) boxes for testing how sensitive overall climate impacts are to any given assumption X.  This might help us frame more reasonable discussion about the possibilities and what we should do.  It might also help researchers focus future empirical efforts.

The extreme sensitivity of results to seemingly innocuous assumptions also shows how uncertain the impacts of climate change really are.  Indeed, not long ago Pindyck published a paper in JEEM with results that are extremely sensitive to his assumption that the world will end in 500 to 1000 years (an assumption that could be more transparent--see his footnote #13), among others.

So let's take our IAMs with a grain of salt, and encourage developers of the models to be as transparent as possible about their assumptions and about how and why their models differ from each other.  But let's also not forget that they have a place in this business, albeit perhaps a smaller one than IAM builders might have you believe.

Schneider and Schneider and Lane also have nice critiques of IAMs.

Sunday, July 28, 2013

GMOs: Franken food or technological savior?

Amy Harmon has a great in-depth story in the New York Times about the science and controversy surrounding GMO crops.  She builds the article around the worldwide problem of citrus greening, and nicely weaves in a broader story about GMOs in general.

Another great source for learning more about the GMO controversy is the book Tomorrow's Table, by Pamela Ronald and Raoul Adamchak.

My own take on GMOs so far: the hysteria against them is likely overblown, but the extraordinary promises by technological optimists are overblown too.  Traditional breeding is a solid and, over the long run, often superior and less costly substitute for GMOs.  What's more worrisome to me is that intellectual property laws and regulatory costs may be acting to concentrate the seed business and make it less competitive.  These latter issues are complex, not exactly my forte, and I don't presently see clear answers to any of it.

Anyhow, it's nice to see good reporting on an evocative topic.

Wednesday, July 24, 2013

Commodity Speculation or Market Power?

After seeing how much Goldman profited from selling MBS that they knew were junk, it's hard to feel sorry for Goldman over the grief it's now receiving for its commodity storage and trading activities.  The worry seems to be that because Goldman has become increasingly involved in commodity markets, it must be manipulating prices for profit, in the process pushing prices away from their fundamental values---i.e., supply and demand.

Do we actually know whether there is a problem here? It's possible that Wall Street is trying to manipulate the market.  But this is a hard thing to do, even for a really big company, especially one that doesn't produce the stuff it's trying to monopolize.  Also bear in mind that anyone can buy and store commodities, so it's not like there are huge barriers to entry.  Those who have tried to corner commodity markets in the past haven't fared well.

My sense is that cornering a commodity market via hoarding is basically impossible once the market realizes what the major player(s) is doing.  And if they're having senate hearings about Goldman's storage and trading activities, I think it's fair to say the cat's out of the bag.

So, what is Goldman doing? If it's not a market power story I'd guess they're trying to buy low and sell high, just like everybody else. They probably believe they have a better handle on market fundamentals than other commodity speculators.  Perhaps they do.  But if this is all they are doing, then they are effectively reducing price volatility and helping to make the market work more efficiently.

On public radio this morning, a reporter (sorry, I forget who) asked Omarova whether Goldman's profits just meant that consumers were paying higher prices.  Omarova said "that's absolutely right."  But it's absolutely wrong if Goldman's just speculating.  Goldman's profits are coming out of the pockets of other speculators who bet prices would fall when they rose, and vice versa.  In fact, that's probably the case if it's a market power issue too.

Anyway, if this is about Goldman trying to corner the storage market, that's a problem and Goldman deserves the grief it's receiving.  But that strikes me as unlikely, since it would be foolhardy.  My guess is that this is just speculation, which means Goldman's profits translate directly into better allocation of commodities over time, less commodity price volatility, and basically zero influence on average prices.

Tuesday, July 16, 2013

The Farm Bill, a.k.a. Hunger Games

At this point in our broader political discourse, I probably shouldn't be surprised by the House's vote on the farm bill, which continues generous support for wealthy farmers and eliminates food stamps. 

I'm trying to keep my blogging more positive and analytical than normative.  I think the analysis of this is pretty clear, so there's not much to say here that others, like Paul Krugman, haven't already said much better than I can.  (Incidentally, the 1400+ comments on that article, many of which look very thoughtful, look like a record to my recollection.)

One thing I might add:  In my career studying agricultural policy in the US, I have heard many, many economists of all political stripes lambast our agricultural subsidies.  Greg Mankiw pointed to them as one of the key areas where most economists generally agree.  But I rarely hear economists of any political stripe criticize our food stamp program.  About the harshest economic criticism I've seen is that we should give people cash rather than food stamps.

We live in truly bizarre times.

Saturday, July 6, 2013

Macro, Multipliers and the Environment

A little follow-up from my post the other day:  It's probably going too far to say that investment to curb climate change, if made during a depression, is a free lunch.  But certainly the basic benefit-cost analysis of what constitutes the most efficient policy with respect to climate change, or any other environmental or public good, changes when there is another massive market failure at play.  Spending to reduce emissions would seem to have two benefits: reduced externalities plus a smaller macro output gap.

In some ways it feels a little like the so-called "double-dividend" hypothesis: the idea that taxing pollution can solve the environmental externality while raising revenue that can reduce distortionary income or sales taxes.  That rather compelling idea still gets kicked around a lot, and there is probably a small truth to it, although the calculation turns out to be more subtle (see Goulder's review, for example).

At first blush, the macro double dividend seems like it could be much larger.  As the late James Tobin apparently used to say, it takes a lot of Harberger triangles to fill an Okun gap.  The old double-dividend literature dabbles with the former, and now we're talking about the latter.  I'm not familiar enough with the literature to know whether there have been attempts to bridge these vastly different areas of economics.  It strikes me as a difficult thing to do.  And even if it were done well, it would likely be hard to publish due to the macro wars.

Still, if environmental policy were to be structured with macro multipliers in mind, it could change the entire calculus about the relative benefits of standards versus prices, especially if one would induce more spending in the near term.  It might also alter the implications of uncertainty.  Standard micro analysis, which is fashionable in environmental economics, favors delayed timing of investments, but with small economic values at stake.  The macro effect would strongly favor investment now, with presumably big economic stakes.

Of course, there are public goods besides reducing environmental externalities.  Spending on basic infrastructure like roads, bridges, tunnels and railways might have similar double dividends.  So how do we more generally evaluate the costs and benefits of public policies in a depressed economy, assuming (as I would) that macro output gaps are real and may be with us for a while?

I don't know the answer to this question.  But there would seem to be a lot more to it than measuring multipliers.  So, who are the brave, inquisitive souls willing to dive in?


Wednesday, June 26, 2013

Taking action on climate change

Update: Krugman expanded on the job creation point in his column.

The Obama administration is sidestepping Congress and finally doing something about climate change.  The "action plan" has a nice outline of strategies, but no specifics.  It will be interesting to see what kinds of rules the EPA and DOE roll out in response to this initiative and how they will be justified under existing laws like the Clean Air Act.

Precedent for this kind of action was established by the Supreme Court a while back.  If the Obama administration didn't take action soon, agencies would be sued by environmental groups and forced to do something.  So this kind of thing was bound to happen, one way or another.

In response, Paul Krugman makes an interesting and surely controversial point.  The new rules, whatever they turn out to be, will make energy more costly.  That's not to say action shouldn't be taken, but there are tradeoffs involved in curbing climate change.  Krugman argues, however, that because these are not ordinary times, the costs may be considerably less.  Indeed, these rules may actually benefit the rest of the economy, not hurt it.  In other words, action on climate change could be a free lunch.

Yes, this violates one of the first principles of economics.  But that sort of thing might actually happen when we have a depressed economy and vast inefficiency to begin with.  His reasoning is that we have too little demand right now, so that investments into alternative energy or carbon capture would employ resources that would otherwise sit idle.  And once those idle resources are employed, the economic activity they would generate would grow real income.  Put another way, since aggregate demand is insufficient, investments to curb global warming do not displace other kinds of investments and instead just add to GDP.

Environmental economists don't think this way, probably because depressed economies don't happen very often, and so the field pays little attention to macroeconomics.  But since the economy is depressed and likely to stay that way for at least another year or two, it does seem like a good time to take action.  After all, as Krugman likes to remind us again and again, the latest evidence shows Keynesian ideas to be stronger than ever.  Let's eat that free lunch while we can.

Saturday, May 18, 2013

Do journal impact factors distort science?

From my inbox:
An ad hoc coalition of unlikely insurgents -- scientists, journal editors and publishers, scholarly societies, and research funders across many scientific disciplines -- today posted an international declaration calling on the world scientific community to eliminate the role of the journal impact factor (JIF) in evaluating research for funding, hiring, promotion, or institutional effectiveness.
Here's the rest of the story at Science Daily:

And a link to DORA, the "ad hoc coalition" in question.

It seems fairly obvious that impact factors do distort science.  But I wonder how much, and I also wonder if there are realistic alternatives that would do a better job of encouraging good science.

There are delicate tradeoffs here: some literatures seem to become mired within their own dark corners, forming small circles of scholars that speak a common language.  They review each others' work, sometimes because no one else can understand it, or sometimes because no one else cares to understand it.  The circle has high regard for itself, but the work is pointless to those residing outside of it.

At the same time, people obviously have very different ideas about what constitutes good science.

So, what does the right model for evaluating science look like?

Wednesday, May 15, 2013

Consensus Statements on Sea Level Rise

In my mailbox from the AGU:
After four days of scientific presentations about the state of knowledge on sea-level rise, the participants reached agreement on a number of important key statements. These statements are the reflection of the participants of the conference and not official positions from the sponsoring societies.

Earth scientists agree that the global sea level is rising at an accelerated rate overall in response to climate change.

Scientists have a professional responsibility to inform government, the public, and the private sector about the impacts of rising sea levels and extreme events, and the risks they pose.

The geological record indicates that the current rates of sea-level rise in many regions are unprecedented relative to rates of the last several thousand years.

Global sea level has changed rapidly in the past, and scientific projections show it will continue to rise over the course of this century, altering our coasts.

Extreme events and their associated impacts will be more damaging and pose higher risks in the immediate future than sea-level rise.

Increasing human activity, such as land use change and water management practices, adds stress to already fragile ecosystems and can affect coasts just as much as sea-level rise.

Sea-level rise will exacerbate the impacts of extreme events, such as hurricanes and storms, over the long term.

Extreme events have contributed to loss of life, billions of dollars in damage to infrastructure, massive taxpayer funding for recovery, and degradation of our ecosystems.

In order to secure a sustainable future, society must learn to anticipate, live with, and adapt to the dynamics of a rapidly evolving coastal system.

Over time, feasible choices may change as rising sea level limits certain options. Weighing the best decisions will require the sharing of scientific information, the coordination of policies and actions, and adaptive management approaches.

Well-informed policy decisions are imperative and should be based upon the best available science, recognizing the need for involvement of key stakeholders and relevant experts.

As we work to adapt to accelerating sea-level rise, deep reductions in emissions remain one of the best ways to limit the magnitude and pace of rising seas and cut the costs of adaptation.

Spatial Econometric Peeves (wonkish)

Nearly all observational data show strong spatial patterns.  Location matters, partly due to geophysical attributes, partly because of history, and partly because all the things that follow from these two key factors tend to feedback and exaggerate spatial patterns.  If you're a data monkey you probably like to look at cool maps that illustrate spatial patterns, and spend a lot of time trying to make sense of them.  I know I do.

Most observational empirical studies in economics and other disciplines need to account for this general spatial connectedness of things.  In principle, you can do this in two ways:  (1) develop a model of the spatial relationships; (2) account for the spatial connectedness by appropriately adjusting the standard errors of your regression model.

The first option is a truly heroic one, and almost all attempts I've seen seem foolhardy.  Spatial geographic patterns are extremely complex and follow from deep geophysical and social histories (read Guns, Germs, and Steel).  One is unlikely to uncover the full mechanism that underlies the spatial pattern.  When one "models" this spatial pattern, assumptions drive the result, and the assumptions are, almost always, a heroic leap of faith.

That leaves (2), which shouldn't be all that difficult using modern statistical techniques, but does take some care and perhaps a little experimentation.  It seems to me many are a little too blithe about it, and perhaps select methods that falsely exaggerate statistical significance.

Essentially, the problem is that there's normally a lot less information in a large data set than you think, because most observations from a region and/or time are correlated with other observations from that region and/or time.  In statistical speak, the errors are clustered.

To illustrate how much this matters, I'll share some preliminary regressions from a current project of mine.  Here I am predicting the natural log of corn yield using field-level data that span about 15 years on most of the corn fields in three major corn-producing U.S. states. I've got several hundred thousand observations. Yes, you read that right--it's a very rich data set.

But corn yields, as you can probably guess, tend to have a lot of spatial correlation.  This happens in large part because weather, soils, and farming practices are spatially correlated.  However, there isn't a lot of serial correlation in weather from year to year.  So my data are highly correlated within years, and average outcomes have strong geographic correlation, but errors are mostly independent across years in a fixed location.

Where the amount of information in the data normally scales with the square root of the sample size, when the data are clustered spatially or otherwise, a conservative estimate for the amount of information is the square root of the number of clusters you have.  In this data set, we don't really have fixed clusters.  It's more like smooth, overlapping clusters.  But we might put the "number" of clusters at around 45, the number of years × states I have, because most spatial correlation in weather fades out after about 500 miles.  Although these states border each other, so it may be even less than 45.  Now, I do have weather matched to each field depending on the field's individual planting date, which can vary a fair amount.  That adds some statistical power.  So I hope it's a bit better than the square root of 45.  Either way, something in the ballpark of 45 is a whole lot less than several hundred thousand.

I regress the natural log of corn yield on

YEAR:             a time trend,
log(Potential):   output of a crop model calibrated from daily weather inputs,
gdd:              growing degree days (a temperature measure),
DD29:             degree days above 29C (a preferred measure of extreme heat),
Prec & Prec^2:    season precipitation and precipitation squared,
PDay:             number of days since Jan 1 until planting, and
DD29:AvgCO2:      an interaction between DD29 and CO2 exposure.

CO2 exposure varies a little bit spatially, and also temporally, both due to a trend from burning fossil fuels and other emissions, and due to seasonal fluctuations that follow from tree and leaf growth (earlier planting tends to mean higher CO2 exposure, and higher CO2 can improve radiation and water use efficiency in corn, which can effectively make the plants more drought tolerant).

The standard regression output gives:

Coefficients:
                 Estimate Std. Error t value Pr(>|t|)    
(Intercept)     2.320e+00  3.014e-02   76.98   2e-16
I(YEAR - 2000)  1.291e-02  4.600e-04   28.06   2e-16
log(Potential)  5.697e-01  5.470e-03  104.14   2e-16
gdd             1.931e-04  4.177e-06   46.24   2e-16
DD29           -2.477e-02  1.149e-03  -21.56   2e-16
Prec            1.787e-02  9.424e-04   18.96   2e-16
I(Prec^2)      -4.939e-04  2.038e-05  -24.24   2e-16
PDay           -6.798e-03  6.269e-05 -108.45   2e-16
DD29:AvgCO2     6.229e-05  2.953e-06   21.09   2e-16

Notice the huge t-statistics: all the parameters look precisely identified.  But you should be skeptical.

Most people now use White "robust" standard errors, which use a variance-covariance matrix constructed from the residuals to account for arbitrary heteroscedasticity.  Here's what that gives you:


                    Estimate   Std. Error   t value      Pr(>|t|)
(Intercept)     2.319894e+00 3.954834e-02  58.65970  0.000000e+00
I(YEAR - 2000)  1.290703e-02 5.362464e-04  24.06922 5.252870e-128
log(Potential)  5.696738e-01 7.161458e-03  79.54718  0.000000e+00
gdd             1.931294e-04 5.058033e-06  38.18271  0.000000e+00
DD29           -2.477002e-02 1.397239e-03 -17.72783  2.557376e-70
Prec            1.786707e-02 1.099087e-03  16.25627  2.016306e-59
I(Prec^2)      -4.938967e-04 2.327153e-05 -21.22321 5.830391e-100
PDay           -6.798270e-03 7.381894e-05 -92.09386  0.000000e+00
DD29:AvgCO2     6.229397e-05 3.616307e-06  17.22585  1.698989e-66

The standard errors are larger and the t-values smaller, but this standard approach still gives us extraordinary confidence in our estimates.
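For reference, here's roughly how those robust standard errors can be computed in R, assuming the fitted model above is called fit (a sketch using the sandwich and lmtest packages; not necessarily the exact code behind the table):

library(sandwich)
library(lmtest)
coeftest(fit, vcov = vcovHC(fit, type = "HC0"))   # White's heteroscedasticity-robust VCOV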

You should remain skeptical.  Here's what happens when I use robust standard errors clustered by year:


                Estimate Std. Error t value Pr(>|t|)    
(Intercept)     2.32e+00  5.57e-01  4.17 3.094e-05 ***
YEAR            1.29e-02  8.57e-03  1.52   0.12920    
log(Potential)  5.70e-01  9.11e-02  6.25 4.000e-10 ***
gdd             1.93e-04  7.89e-05  2.45   0.01443 *  
DD29           -2.48e-02  1.35e-02 -1.83   0.06719 .  
Prec            1.79e-02  1.06e-02  1.68   0.09243 .  
I(Prec^2)      -4.94e-04  2.15e-04 -2.29   0.02178 *  
PDay           -6.80e-03  8.17e-04 -8.32 < 2.2e-16 ***
DD29:AvgCO2     6.23e-05  3.50e-05  1.78   0.07510 .  


Standard errors are an order of magnitude larger and the t-values are more humbling.  Planting date and potential yield come in very strong, but now everything else is just borderline significant.  It seems robust standard errors really aren't so robust.
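In R, clustered standard errors along these lines can be computed with vcovCL from recent versions of the sandwich package (a sketch; it assumes the data used in the fit contain YEAR and STATE columns, and the state-clustered call produces the results shown a bit further below):

coeftest(fit, vcov = vcovCL(fit, cluster = ~ YEAR))    # cluster by year
coeftest(fit, vcov = vcovCL(fit, cluster = ~ STATE))   # cluster by state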

But even if we cluster by year, we are probably missing some important dependence, since geographic regions may have similar errors across years, and in clustering by year, I assume all errors in one year are independent of all errors in other years.

If I cluster by state, the standard robust/clustering procedure will account for both geographic and time-series dependence within a state.  Since I know from earlier work that one state is about the extent of spatial correlation, this seems reasonable.  Here's what I get:

                  Estimate  Std. Error  t value  Pr(>|t|)    
(Intercept)     2.32e+00  1.1888e+00   1.9514 0.0510065 .  
YEAR            1.29e-02  4.6411e-03   2.7810 0.0054194 ** 
log(Potential)  5.70e-01  1.6938e-01   3.3632 0.0007706 ***
gdd             1.93e-04  2.2126e-04   0.8729 0.3827338    
DD29           -2.48e-02  2.6696e-02  -0.9279 0.3534781    
Prec            1.79e-02  1.2786e-02   1.3974 0.1622882    
I(Prec^2)      -4.94e-04  2.7371e-04  -1.8045 0.0711586 .  
PDay           -6.80e-03  4.9912e-04 -13.6205 < 2.2e-16 ***
DD29:AvgCO2     6.23e-05  6.8565e-05   0.9085 0.3635962    


Oops.  Now most of the weather variables have lost their statistical significance too.  But because clustering by state explicitly limits the assumed dependence in the cross section within years (errors may correlate within a state, but not across states), the time trend (YEAR) is now significant, while it wasn't when clustering by YEAR.  We probably shouldn't take that significance very seriously, since some kinds of dependence (like technology) probably span well beyond one state.

Note that this strategy of using large clusters combined with the robust SE treatment (canned in Stata, for example) is what's recommended in Angrist and Pischke's Mostly Harmless Econometrics.

There are other ways of dealing with these kinds of problems.  For example, you can use a "block bootstrap" that resamples residuals a whole year at a time, which preserves spatial correlation within years.  This works well in agricultural applications since weather is pretty much IID across years at a fixed location, so we should feel reasonably comfortable that there is little serial correlation.  One can also adapt the method by Conley for panel data; Solomon Hsiang has graciously provided code here.  In earlier agriculture-related work, Wolfram Schlenker and I generally found that clustering by state gives standard errors similar to these methods.
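Here's a minimal sketch of a related, simpler variant in R: a cluster bootstrap that resamples whole years of observations (rather than residuals) with replacement and refits the model.  Everything here (d, Yield, B) is a placeholder for illustration:

set.seed(1)
years <- unique(d$YEAR)
B <- 200   # number of bootstrap replications
boot_coefs <- replicate(B, {
  yrs <- sample(years, replace = TRUE)   # draw whole years at a time
  db <- do.call(rbind, lapply(yrs, function(y) d[d$YEAR == y, ]))
  coef(lm(log(Yield) ~ I(YEAR - 2000) + log(Potential) + gdd + DD29 +
            Prec + I(Prec^2) + PDay + DD29:AvgCO2, data = db))
})
apply(boot_coefs, 1, sd)   # bootstrap standard error for each coefficient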

The overarching lesson is: try it different ways and err on the side of least significance, because it's very easy to underestimate your standard errors and very hard to overestimate them.

And watch out for data errors: these have a way of screwing up both estimates and standard errors, sometimes quite dramatically.

If you had the patience to follow all of this, you might appreciate the footnotes and appendix in our recent comment on Deschenes and Greenstone.

Sunday, May 12, 2013

Laboratory Grown Meat: The Next Green Revolution?

From what I've learned about agriculture over the last 10 years, I'm increasingly skeptical that we'll see another green revolution like the last one.  Crop yields for the major staples appear to be approaching agronomic limits in advanced nations.  While there's still room for improvement in developing nations, a lot of the low-hanging fruit seems to have been picked.  And then there are the challenges of climate change, which could be beneficial in some places but is likely damaging in most, and possibly severely damaging.

So, where's a technological optimist to turn?

It seems to me that if we have another green revolution, it's going to look more like this. Right now a 5 oz hamburger, grown in a petri dish rather than scraped off a dead animal, costs a reported $325,000.  That's one expensive burger.  But it is easy to imagine how costs could come down in time.

Anyway, there's obviously a lot of uncertainty about this sort of thing, not the least of which is consumer acceptance.  But in the long run, this kind of technology might do a lot to feed a burgeoning planet in a way that's a lot less environmentally damaging, and depending on your point of view, more humane.


Wednesday, April 24, 2013

How farmers could benefit from fertilizer taxes


Some of the worst water quality problems result from nutrient leaching and runoff from agricultural lands.  Nitrogen and phosphorus applied to cropland and not taken up by growing crops will, one way or another, one day or another, end up in the water.  The same goes for animal waste.  The nutrients cause algae blooms, reduced concentrations of dissolved oxygen, and diminished fisheries and ecosystem health (a process called eutrophication).

While there has been some effort to deal with these problems, I know of no great success stories, and water quality continues to decline in the Mississippi, the Gulf of Mexico, the Chesapeake, the Great Lakes, and countless other water bodies.

One obvious remedy would be to tax fertilizer.  This would be a nearly Pigouvian solution.  Better would be to tax runoff and leaching directly, but that’s basically impossible for practical reasons.

The obvious but rarely stated problem is that it would probably require an extraordinarily large tax to have any real influence on the quantity of fertilizer used.  And politically powerful farmers would cry foul, which is why this kind of tax will probably never happen.

But I wonder: what would the incidence of a fertilizer tax, broadly applied, really be?  Agriculture is fairly competitive.  And demand for agricultural commodities is nearly vertical—about as inelastic as anything.  The econ 101 analysis suggests the burden of the tax would fall mainly on consumers.  That is, food commodity prices would rise enough to compensate for almost all of the tax.
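To see roughly why, recall the textbook incidence result: with constant elasticities, the share of a tax borne by consumers is about e_S / (e_S - e_D), where e_S is the supply elasticity and e_D the (negative) demand elasticity.  A quick illustration in R, with made-up numbers:

e_S <- 1.0     # supply elasticity (illustrative guess)
e_D <- -0.05   # demand elasticity: nearly vertical
e_S / (e_S - e_D)   # about 0.95: consumers bear roughly 95% of the tax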

Now, I’ve seen some economists propose fertilizer taxes on a graduated scale.  If fertilizer is applied at a sufficiently low rate, no tax would be levied, but the tax would then rise sharply with higher application levels (which is where most runoff and leaching comes from).  This would be a little harder to monitor, but probably not too bad.  If done this way, the total tax bill would cost farmers far less, but cause the same reduction in fertilizer use.  And farmers would still get the full compensating price increase, since less output would be collectively produced.

I think it’s possible—indeed, very probable—that the induced rise in commodity prices would more than compensate farmers for the fertilizer taxes they would have to pay under the graduated tax system.  That is, a statutory tax on farmers could cause their profits to go up.

Anyway, I don’t think anyone has made this point or emphasized it very well.  And it’s an important one, at least politically speaking, because maybe farmers could get on board with a tax that actually benefits them.  I’m not sure if it would save the Chesapeake Bay or Great Lakes from eutrophication, but I bet it would do a lot more good than anything else that’s been tried.

Update: Of course, this is no free lunch: consumers would pay in higher food prices.
