The gist is given in the opening paragraphs (quotes in green):
"To protect appropriations you were getting, you had to show progress," Matuszeski said. "So I think we had to overstate our progress."
It is a little strange to me that the Post reports this as news. I'm no expert on the Chesapeake, but I know people who are, and all I've heard from them for years and years is that the Chesapeake was not improving. The Post documents much of this, in mostly he-said she-said fashion.
Here's a bit that grabbed me:
But, Matuszeski said, the EPA program was worried about losing congressional and state funding, which would jeopardize even the modest progress that was being made: "As public officials, you are driven by the idea that the American people like to be part of a winning team."
So the program published statistics, drawn from computer models, that showed pollution reductions that might occur in the future. They were not a snapshot of the bay as it really was -- in fact, Matuszeski said, the EPA did not know exactly how clean the bay really was, because it lacked adequate monitoring equipment.
But, he said, it was clear that the model's version of the Chesapeake was healthier than the real one.
"We had results that promised us future effects," Matuszeski said. But publicly, he said, "They were presented as 'effects,' and the assumption was that they were real-time."
Others within the cleanup's leadership had different opinions about what these numbers represented. Richard Batiuk, the EPA Chesapeake Bay Program Office's current associate director for science, said there was no intent to exaggerate: "Did we inaccurately apply that model? No."
Right: they didn't apply the model inaccurately. They just had an inaccurate model. And they knew very well that it was inaccurate.
There are more than a few economists who like to apply their models and report their predictions as 'evidence'. This kind of thing is commonplace in government. The sad and frustrating part is that the assumptions beneath those models are typically not transparent and often never published. And it's easy to manufacture a model to deliver almost any conceivable conclusion. Easier, in my experience, than misleading with statistics.
It's no surprise that physical scientists struggle with these issues as much as economists do.
What's the cure? Simpler models, careful documentation of assumptions, and clear defense of those assumptions.
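To make that concrete, here's a minimal sketch (in Python, with made-up numbers and a made-up pollution model, not the EPA's actual one) of what documented assumptions can look like: every assumption is a named parameter, the output is labeled a projection rather than a measurement, and anyone can see how much the "conclusion" hinges on the inputs.

```python
# Hypothetical illustration only: a toy pollution-reduction model whose
# assumptions are explicit, named parameters rather than buried constants.

def projected_nitrogen_load(current_load_tons, adoption_rate,
                            practice_efficiency, years):
    """Project (not measure) future nitrogen load under stated assumptions.

    Assumptions (all hypothetical, and the result hinges on them):
      adoption_rate       -- fraction of sources adopting best practices each year
      practice_efficiency -- fraction of runoff those practices actually remove
    """
    load = current_load_tons
    for _ in range(years):
        load *= 1 - adoption_rate * practice_efficiency
    return load

baseline = 300_000  # hypothetical current load, tons/year

# Optimistic assumptions deliver a success story...
print(projected_nitrogen_load(baseline, adoption_rate=0.10,
                              practice_efficiency=0.8, years=10))
# ...pessimistic ones deliver near-stagnation, from the same baseline.
print(projected_nitrogen_load(baseline, adoption_rate=0.02,
                              practice_efficiency=0.3, years=10))
```

Same baseline, same model structure, opposite story: roughly a 57% projected reduction under the optimistic assumptions versus about 6% under the pessimistic ones. That's the whole problem in miniature, and it's also why publishing the assumptions matters more than publishing the output.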