Um, who is this fellow and what makes him qualified to pass judgment on anything? He withholds any information about his qualifications or affiliation, and the only criterion he mentions that he used was that "Anyone with a basic understanding of the environmental laws, policies, facts, and/or data relevant to any particular article would conclude after only a brief review that the article was seriously flawed." Given the obvious chip on his shoulder, one might conclude he’s a lawyer, but he does not own up to that. So, you can see that Triple-Professor-Director Redacted is basically saying that "this fellow" should STFU and stop emailing "this august body."
So it sounds like his methodology was to use his “basic understanding” to make a brief review of hundreds of scholarly articles and declare them amateurish and shoddy. Is this really worth sharing with this august body?
Director of some institute
Named-chair Professor of one department
Professor of another department
Professor of yet another department
Well, the reply was even better... [and the professor's name is left in, for reasons you will see...]
This post is the longest I've ever put on the blog (5,400 words) but worth a read (luckily, it's not me writing!). It starts with a general comment on who's allowed to comment on academic work, then goes for the jugular, via a critique of Triple-Professor's own work. I added the title (with link) and abstract of the paper at the end; the paper "focuses on about 550 social science articles from peer-reviewed journals since the 1960’s that used quantitative research to study United States domestic environmental policies and practices..." and finds them flawed in one way or another.
My bottom line is that peer review is not as robust as we pretend. Outsiders with different perspectives and knowledge might find problems with papers that "proper" academics do not see, either because they accept some statements without question or because they are only experts on a narrow topic.
But that's not the bottom line of this post.
Oh man, wait until you read this one...
Mark K. Atlas wrote back:
As no one else responded to the questions that you raised about me and my report [see bottom of this post for the report name, abstract and link to the 715 pp paper], it is appropriate for me to do so. I should note that I am not an outsider intervening in this discussion, as I have subscribed to this e-mail list for over 10 years – I suppose that I slipped in before it became an “august body.”
It appears that you made your comments after reading only the abstract of my report that was included in the e-mail sent by the other gentleman. I concluded that because, although I expected varied reactions to my 715-page single-spaced report with 4,354 footnotes, one of them was not that my supposed “methodology was to use [my] ‘basic understanding’ to make a brief review” of the articles that I examined. Obsessive-compulsive perhaps, but not brief. I believe that I examined the studies more thoroughly than did the peer reviewers who approved the publication of the studies, and oftentimes more thoroughly than did the authors. It is not surprising that the mere abstract of my report did not describe my qualifications, but anyone who made it through at least the third page of my report would see those qualifications referred to.
As the report noted, I have been an environmental attorney for over 25 years, in consulting firms assisting the U.S. Environmental Protection Agency in developing and enforcing its programs and in private practice assisting large and small businesses, trade associations, and universities. I also taught environmental law and policy courses in academia. I also have published social science empirical environmental policy articles – um, more than you, but who’s counting. Thus, I believe that this background makes a prima facie case that I am qualified to evaluate the legal and factual accuracy of social science empirical environmental policy articles, which is the sole purpose of my report.
Ultimately, however, my purported qualifications are quite irrelevant to my report. You appear to share the mindset of some social science environmental policy researchers. Because, as my report describes, many social science environmental policy articles do not offer any meaningful substantiation for their supposed facts and because most social science environmental policy researchers have inadequate education, training, or experience in the substance of environmental law and policy, those researchers simply accept as a fact whatever they read in an article, perhaps tempered by what they perceive as the author’s qualifications. Their approach, and apparently also yours, is that their perceptions of an author’s qualifications determine the perceived quality of that author’s work products. In most of the rest of society the approach is that the perceived quality of an author’s work products determines our perception of the author’s qualifications – good quality work products demonstrate the author is qualified, not vice versa.

Just because we are uncertain about some authors’ qualifications should not mean that we refuse to even look at their work products, and just because we believe that some authors are qualified does not mean that we should not check the veracity of their work products. For example, if you were flying on an airplane and other passengers told you that the airplane’s wings had fallen off, would you interrogate those passengers about their qualifications in aeronautical engineering or would you simply look out the window to check the airplane’s wings? Checking people’s supposed facts is more important than trying to judge their qualifications.
I would be delighted if no reader of my report initially assumed that anything in it was true. That is why, unlike social science empirical environmental policy articles, all of the report’s assertions cite publicly-available documents or data as substantiation. Readers can check this substantiation and decide for themselves if I got the facts right. If I did, this indicates that I am qualified; if I did not, this indicates that I am not qualified. If readers do not care or know how to check this substantiation, this indicates that they should not be dealing with environmental policy.
Your urging that people ignore a document, especially one that you apparently had not even looked at, because the qualifications of its author are unknown does not sound particularly scholarly. If your approach had been followed in the past, we probably would never have heard from that unknown Swiss patent clerk who proposed the theory of relativity. Of course, your approach also effectively trashes the existing system of academic journals, which requires that reviewers do not even know the names of the authors whose articles they review, much less the authors’ qualifications. Admittedly, my report also effectively trashes the existing peer review system, but because the peer reviewers, aside from the authors, might be unqualified. You also did not specify who gets to anoint authors as being qualified and thus worthy of having their research reviewed by other people. In general, I believe that advances in knowledge are more likely to come from encouraging as many people as possible to submit their research and then having a competent review system decide who is qualified based on the quality of their research.
I believe that anyone interested in effective environmental policy and in understanding environmental policy research would benefit from reading my report. I do not know whether this describes you, but even if not I believe that, despite your statements to the contrary, you would benefit from reviewing my report, if only to see the serious problems that I identified that relate to the one social science empirical environmental policy article that you had published by 2005, the end of the time period covered by my report. Although some of these problems can be easily located in my report by searching for the citation of your article (John W. Maxwell et al., Self-Regulation and Social Welfare: The Political Economy of Corporate Environmentalism, 43 J. LAW & ECON. 583), since you appear opposed to even looking at my report, I will summarize them here. (When I refer for purposes of brevity in the following discussion to “you,” it means you and the article’s co-authors, because you are collectively responsible for the article and I do not know if only a subset of the co-authors was individually responsible for a particular problem.)
First, you used federal Toxics Release Inventory (“TRI”) data as the measure of “emissions” of 17 chemicals. The only people who persistently claim that TRI data are an accurate measure of anything about chemicals are data-desperate academic researchers. My report has dozens of pages (in Chapter 6) explaining why TRI data would be useless for your purposes. Second, as noted in my report (p. 459), the chemical use threshold that triggered the TRI reporting requirement in the first year of your study time period – 1988 – was different from the subsequent years. Thus, when you examined the change in TRI “emissions” from 1988 to 1992, you compared TRI data from different reporting regimes. Simply starting your study time period in 1989 would have eliminated this problem.
Third, you consistently, repeatedly claimed that your TRI data measured “emissions” or releases. You never defined what “emissions” are, and the term has no meaning in the TRI program. In the TRI program, and in common usage, however, releases involve chemicals entering the environment. By examining your tabulations of TRI data, however, it is clear that you included in your emissions/releases the transfers of chemicals off-site to other facilities for treatment or disposal. Your 1998 version of this article (downloadable from the Social Science Research Network) appropriately included only TRI releases as emissions/releases. I am curious what epiphany you had between 1998 and 2000 that convinced you to add TRI transfers to this variable and why you did not disclose it. Aside from contradicting what readers of your article would have assumed comprised your emissions/releases, your inclusion of transfers has serious implications for your analyses. For example, TRI facilities that transferred chemicals off-site for treatment or disposal would have higher amounts of your TRI emissions/releases than TRI facilities that treated or disposed of the same amount of chemicals on-site. As discussed in my report (pp. 482-483), this is because a facility that treats or disposes of chemicals on-site reports as TRI releases only the amounts of the chemicals that were released into the environment after treatment or disposal. A facility that transfers chemicals off-site for treatment, however, reports as TRI transfers the amounts of the chemicals that were transferred, even if only a minute portion of them ultimately were released into the environment after treatment. By mixing together TRI releases and transfers, you included methods of managing chemical wastes that are measured in very different ways.
States whose TRI facilities were atypically likely to transfer chemicals off-site for treatment or disposal, rather than to do so on-site, would be incorrectly regarded by your measure as having relatively higher TRI emissions/releases.
You also assumed that TRI emissions/releases would decrease more if “a state has a law imposing strict liability in toxic waste cleanup cases.” (Your article at p. 609) This assumes, however, that the chemicals are released into the environment. Although this is the case for TRI releases, it is not necessarily true for TRI transfers, which by definition involve chemicals being sent to other facilities (see my report at pp. 479-480). For instance, it is realistically impossible that chemicals transferred by a facility by being discharged in wastewater to a publicly owned treatment works would make the facility liable for environmental contamination. Thus, there is no reason to assume that the existence of a strict liability law would affect that portion of your TRI emissions/releases that were actually transfers. Even if there were a release of those chemicals at the facility to which they were transferred, however, the strict liability law, if any, that applied to the release would be the one in the state to which the chemicals were transferred, which might be different from the state of the facility that initiated the transfer. You incorrectly assumed that all that was relevant was whether the state in which the facility was located had a strict liability law.
Fourth, in any event, as explained in my report (pp. 611-613), your distinguishing between states that had or lacked “a law imposing strict liability in toxic waste cleanup cases” was a fallacy. There was a strict liability law in such situations in every state because the federal Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (“CERCLA”) applied nationwide and it imposed strict liability for contamination cleanups. Thus, whether a state had its own strict liability law was irrelevant, unless the state’s law was significantly more favorable to plaintiffs than CERCLA. You did not check this. (Indeed, you did not even disclose how you obtained your information on state strict liability laws, which is remarkable because you had no environmental law qualifications. Based on the minimal information that you provided about this variable, however, it appears that – like other researchers who used this variable – you simply copied information from an Environmental Law Institute report.)
Fifth, even if a state had a strict liability law, it was very unlikely to be relevant to much of the TRI emissions/releases that you measured. As discussed in my report (p. 610), there can be no contaminated sites in the air. Realistically, there would virtually never be strict liability for cleanup of releases of chemicals into the air because any released chemicals should be widely dispersed, with not enough of the chemicals remaining concentrated in the air long enough to pose any long term environmental or public health threat and not enough falling to the ground in any one location to require a cleanup. Because the overwhelming majority of the TRI emissions/releases that you examined were releases into the air, it was irrelevant for them whether a state had a strict liability law.
Sixth, as also discussed in my report (pp. 609-610), it was not enough to simply determine whether a state had a strict liability law. Being in a state that had no strict liability law should have the same effect as being in a state that had a strict liability law but which did not cover the 17 chemicals on which you focused. Obviously, a critical assumption by you was that each state’s strict liability law applied to those 17 chemicals. Nothing in your article, however, indicates that you confirmed this, which would have required checking the statutes, and possibly regulations, that applied during your study time period in each of the 36 states with a strict liability law.
Seventh, anyone with a rudimentary understanding of CERCLA and similar state strict liability laws for contamination cleanups would have recognized the inherent contradiction between your statement that TRI “emissions are legal” (your article at p. 585) and your assumption that CERCLA (or similar state law) liability could be imposed for contamination resulting from these emissions. Under CERCLA, facilities are exempt from CERCLA liability for any contamination resulting from their releases that are in compliance with federal permits (42 U.S.C. § 9607(j)). Obviously, it would make no sense for the government to permit a facility to release certain amounts of certain chemicals, and to then hold it liable for contamination resulting from those releases. Thus, it was illogical for you to claim that facilities’ TRI emissions/releases would be influenced by strict liability laws.
Eighth, you stated that “[c]hanges in government regulation are not driving the reductions [in TRI emissions from 1988 to 1992], as these emissions are legal.” (Your article at p. 585) As discussed in my report (pp. 475-477), your article was not the only one to make this spectacularly ill-advised assumption. The fact that TRI emissions/releases were legal does not mean that they were unregulated, nor that those laws did not change significantly during your study time period. As my report described (pp. 67-71 and 667-671), the chemicals that you focused on were extensively regulated during your study time period and those laws changed substantially during that time. Consequently, your study’s underlying assumption that all changes in TRI emissions/releases during your study time period were voluntarily undertaken by facilities is preposterous.
Aside from these substantive deficiencies described in my report, there is a fatal problem not discussed there because it related to your article’s methodology, not its substance. Because you fortuitously initiated this discussion about my work product, however, it is appropriate for me to mention this other problem about your work product. Your article claimed that when you calculated for each state your measure of toxicity-weighted emissions in the state divided by the combined value of shipments for facilities in seven specified Standard Industrial Classification (“SIC”) codes in the state, “the data reveals [sic] Montana to be a clear outlier, with far higher emissions per unit of shipments in 1988 than any other state.” (Your article at p. 610) Your table of summary statistics (your article at p. 607) certainly supports this assertion. According to that table, 3,031.53 was the maximum value of this measure for any state, compared to a mean of 151.647 across all 50 states. Although you did not state it explicitly, I assume that the maximum value was attributable to Montana, as that is what would define it as the outlier. Obviously, having a value that is 20 times higher than the mean makes it a clear outlier. If we subtract Montana’s value from the calculation of the mean, the difference naturally is even greater – Montana’s value is 33 times higher than the mean of 93 for the other 49 states. After pondering the reason for this weird result, you concluded that, well, Montana is just weird. Consequently, you included in your analyses a dummy variable that differentiated between the 49 normal states and weird Montana.
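[Blogger's aside: the "20 times" and "33 times" figures above can be checked from the two numbers quoted from the article's summary statistics table – the mean of 151.647 across 50 states and the maximum of 3,031.53. A minimal sketch:]

```python
# Checking the outlier arithmetic from the article's reported summary
# statistics (assumed here: mean of 151.647 across 50 states, maximum
# of 3,031.53 attributed to Montana).
n_states = 50
mean_all = 151.647   # reported mean of emissions per unit of shipments
max_value = 3031.53  # reported maximum value of that measure

# The sum implied by the mean, minus Montana's value, divided over the
# remaining 49 states, gives the mean for the "normal" states.
mean_without_max = (mean_all * n_states - max_value) / (n_states - 1)

print(round(mean_without_max))               # 93
print(round(max_value / mean_all))           # 20 times the overall mean
print(round(max_value / mean_without_max))   # 33 times the mean of the rest
```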
People who are conscientious about quality control would probably look at your results – one observation being bizarrely different from the rest – and immediately question whether this might be due to a data entry, or similar, error. They would want the sources of the bizarre data to be checked for errors. Your measure was calculated by dividing toxicity-weighted emissions in a state by the combined value of shipments for seven SIC codes in the state. Because the former involves TRI data weighted by toxicity factors, the value of shipments is the easiest to check first, because it comes directly from Census Bureau reports. If any of the peer reviewers, or subsequent readers, of your article had done this simple check, the explanation for Montana’s weird results would have been obvious, and your study would have begun unraveling.
The 1988 value of shipments data are in an “Annual Survey of Manufactures: Geographic Area Statistics” Census Bureau report for 1989 conveniently and freely downloadable from Google Books. By investing just one minute in quality control that your article’s peer reviewers did not, we can see on page 3-75 that no value of shipments was reported for your seven selected SIC codes in Montana in 1988. As sometimes happens with these economic surveys, for confidentiality reasons the Census Bureau does not report data if it might be possible for people to tie the data to specific facilities. The fact that these values of shipments are unreported does not necessarily mean that they are small, just that reporting them would raise confidentiality concerns. Also, the data might not be reported for other reasons.
Clearly, you did not record the value of shipments for Montana as zero, because if you divided Montana’s toxicity-weighted emissions (or any number) by zero it would not have resulted in a bizarrely large number, but rather an error message. In addition, your table of summary statistics stated that the minimum value of shipments for any state was $156 million, not zero (your table incorrectly indicated that the minimum was .156 million dollars, but the unit of measure for that variable actually was billions of dollars). Consequently, apparently you fabricated the data that you used for Montana.
Of course, fabricating data to fill in missing data is not necessarily improper. I used Census Bureau economic data for one of my published articles, but some were unreported for confidentiality reasons. However, the missing data comprised a minuscule portion of my data, I disclosed the problem in my article and described how I filled in the missing data, and my method for the latter could not produce strange results. In contrast, you disclosed neither the problem nor your purported solution, and your fabricated data produced obviously, and by your own admission, bizarre results. You claimed that your bizarre results were attributable to some unknown weirdness about Montana, whereas it seems far more plausible that they were due to your weird undisclosed method for fabricating its data.
The fact that your fabricated data for Montana produced a number 33 times higher than average should have been enough to motivate you to reexamine your undisclosed method. You also could have done another simple reality check to evaluate the validity of your undisclosed method, which also might have led to a better way of filling in the missing data. Economic data from the Census Bureau’s census of businesses every five years (years ending in 2 or 7) should be more complete than its mere samples of businesses in the intervening years. For example, although the value of shipments was reported for none of your seven selected SIC codes and only three of the other 13 SIC codes for Montana in 1988, it was reported for four of your seven selected SIC codes and five of the other 13 SIC codes in 1987 for Montana. According to the Census Bureau’s 1987 Census of Manufactures (conveniently and freely available from http://hdl.handle.net/2027/umn.31951d00291624g), the combined value of shipments for those four of your seven selected SIC codes in 1987 comprised 36% of the combined value of shipments for all of Montana’s manufacturing SIC codes (the total value of shipments for each state always is reported and includes the shipments that are not disclosed for particular SIC codes). Thus, even conservatively assuming that the three missing SIC codes produced no shipments, we know for certain that your seven selected SIC codes accounted for at least 36% of Montana’s total shipments. By subtracting from Montana’s total shipments all of the shipments for the nine SIC codes whose shipments were disclosed, what is left are the shipments accounted for by the SIC codes for which the data are missing. 
If we liberally assume that all of these remaining shipments are accounted for by just the three of your selected SIC codes for which value of shipments data were missing, then we know for certain that your seven selected SIC codes accounted for no more than 49% of Montana’s total shipments. Thus, these simple calculations provide a fairly narrow range for Montana’s missing data in 1987.
Doing the same calculations for the next census of businesses in 1992 produces a range of 31% to 53% for how much of Montana’s total shipments are attributable to your seven selected SIC codes. The fact that this range also is relatively narrow and is close to that of 1987 makes it reasonable to assume that these ranges are fairly stable over time and likely applicable to your desired 1988 data, unless we believe that 1988 was an aberrant year. I then took the extreme endpoints of these ranges – the 31% and 53% from 1992 – and assumed that these were the minimum and maximum percentages of Montana’s total shipments that were attributable to your seven selected SIC codes in 1988. Multiplying these percentages against Montana’s total shipments in 1988 provides two extreme alternatives for your seven selected SIC codes’ combined value of shipments. Then, using the toxicity weights listed in your article, I calculated the toxicity-weighted emissions for Montana in 1988 as you presumably did. Dividing the latter by each extreme alternative for your seven selected SIC codes’ value of shipments produces the range within which should fall your measure of toxicity-weighted emissions per value of shipments.
According to my calculations, your measure for Montana should be between 56 and 96, far from your bizarrely large 3,031.53, and not necessarily meaningfully different from the 93 average of the other 49 states. Instead of being 33 times larger than the other states’ average, my method indicates that Montana’s figure most likely is somewhat smaller than that average. Thus, this simple reality check indicates that your method for fabricating data was grievously wrong.
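[Blogger's aside: the bounding method described above can be sketched in a few lines. Only the 31%–53% share range comes from the text; the total-shipments and toxicity-weighted-emissions figures below are hypothetical placeholders, since the actual 1988 Census and TRI numbers are not reproduced in this post.]

```python
# Sketch of the reality-check bounding method. The share range (31%-53%)
# is from the 1992 Census calculation described above; the other two
# inputs are HYPOTHETICAL placeholder values for illustration only.
low_share, high_share = 0.31, 0.53  # seven SIC codes' share of total shipments
total_shipments = 2.0               # hypothetical: state's total 1988 shipments
tox_weighted_emissions = 150.0      # hypothetical: toxicity-weighted TRI total

# Extreme alternatives for the seven SIC codes' combined value of shipments:
min_shipments = low_share * total_shipments
max_shipments = high_share * total_shipments

# Dividing emissions by each extreme brackets the true emissions-per-shipments
# measure: the smaller denominator gives the upper bound, and vice versa.
lower_bound = tox_weighted_emissions / max_shipments
upper_bound = tox_weighted_emissions / min_shipments
print(round(lower_bound, 1), round(upper_bound, 1))
```

With the real Montana inputs, this bracketing is what yields the 56-to-96 range quoted above, versus the article's 3,031.53.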
Another indication that something was wrong with your value of shipments data was easier to discover than going through the above process. As I mentioned earlier, your table of summary statistics listed the minimum value of shipments for your seven selected SIC codes for a state in 1988 as $156 million. This sounds suspiciously low for what should be seven major industries. This equals less than 8% of the combined value of shipments for all manufacturing SIC codes in the state with the smallest combined value of those shipments. According to your own summary statistics, this minimum is 99.4% lower than the average value of shipments of all states. By taking about 10 minutes to simply peruse the Census Bureau’s report on 1988 value of shipments, it is obvious that no state for which value of shipments data were reported had anywhere close to that small of a combined value of shipments for your seven selected SIC codes. Thus, this number is wildly incorrect. If this is the number that you fabricated for Montana, it is a further demonstration of the absurdity of that number. According to 1987 Census Bureau data, the combined value of shipments in Montana for six of your seven selected SIC codes was $1.087 billion. Thus, unless we believe that it is plausible for this number to have declined over 85% the next year, it is obviously wildly incorrect.
The problems with your value of shipments data, however, extend beyond this. There were three states in addition to Montana that had no reported value of shipments for your seven selected SIC codes in 1988. Presumably, you also fabricated data for those states, also without disclosing this. Even though those data might not have produced as bizarre results as for Montana, there is, at best, no reason to assume that they were remotely accurate.
The problems with your value of shipments data, however, extend beyond this. Aside from the four states lacking any reported value of shipments for your seven selected SIC codes in 1988, there were 20 states for which value of shipments were not reported for one or more of your seven selected SIC codes in 1988 because of Census Bureau confidentiality concerns. Thus, you either also fabricated data for those SIC codes in those states or you incorrectly treated those unreported data as zeroes, also without disclosing this.
The problems with your value of shipments data, however, extend beyond this. The dependent variable in your analysis was the change from 1988 to 1992 in toxicity-weighted emissions divided by the value of shipments for your seven selected SIC codes. Thus, 1992 value of shipments data also were used in your analyses. Aside from the 24 states lacking some or all of the value of shipments for your seven selected SIC codes in 1988, there were three states for which value of shipments were not reported for one or more of your seven selected SIC codes in 1992 because of Census Bureau confidentiality concerns. Therefore, you either also fabricated data for those SIC codes in those states or you incorrectly treated those unreported data as zeroes, also without disclosing this.
Furthermore, all of these missing data-related problems were, in effect, self-inflicted fatal wounds. The variable for which you used these data was created by dividing all of a state’s toxicity-weighted emissions by the state’s combined value of shipments for your seven selected SIC codes. I can understand why it would have been logical to divide all of a state’s toxicity-weighted emissions by all of the state’s value of shipments. I can understand why it would have been logical to divide a state’s combined toxicity-weighted emissions for your seven selected SIC codes by the state’s combined value of shipments for those SIC codes. I cannot understand, and you offer no explanation, why it was logical to divide all of a state’s toxicity-weighted emissions by the state’s combined value of shipments for just your seven selected SIC codes. A state that had an atypically large portion of its toxicity-weighted emissions originating from SIC codes other than your seven selected SIC codes would produce results that made it appear to be unusually pollution-intensive, simply because the combined value of shipments in the denominator of your calculation did not include those from these other SIC codes. Dividing all of a state’s toxicity-weighted emissions by all of the state’s value of shipments is not only more logical, it would have resulted in no missing data because the total value of shipments for each state is always reported in the Census Bureau’s data.
Thus, in summary, even ignoring all of the substantive problems in your article that my report identified, your article is invalidated by these obvious data problems. A majority of the observations in your analyses were based on fabricated or missing data, none of which you disclosed or explained how you addressed. Indeed, your 1998 version of this analysis was even more obviously incorrect in its reported value of shipments data, so you apparently had a problem with these data from the start.
If you believe that any aspect of my assessment of your article is incorrect, I, and presumably others, would be very interested in knowing why. Otherwise, it is clear that you should arrange for your article to be retracted. Unfortunately, your article is an excellent example of the major pervasively dysfunctional elements of social science empirical environmental policy research described in my report. You initiated this discussion by inquiring about my qualifications to evaluate that type of research. If reading my report did not answer that question, hopefully this discussion has. Thanks for asking.
Environmental Substance Abuse: The Substantive Competence of Social Science Empirical Environmental Policy Research
Mark K. Atlas
affiliation not provided to SSRN
December 22, 2010
In a 2002 article, social science scholars criticized legal scholars for violating empirical analysis principles in law review articles. Their review of hundreds of empirical law review articles led to a pervasively grim assessment of these articles and their authors, concluding that empirical legal scholarship was deeply flawed, with serious problems of inference and methodology everywhere. In essence, the 2002 article argued that although legal scholars’ articles might be substantively competent (i.e., knowledgeable about the law and facts), they were, at best, methodologically incompetent.
This Report reverses the 2002 article’s focus, assessing the substantive competence of social science empirical research articles, ignoring their methodological competence. This Report focuses on about 550 social science articles from peer-reviewed journals since the 1960’s that used quantitative research to study United States domestic environmental policies and practices. The 2002 article examined aspects of law review articles at which legal researchers might be deficient but at which social science researchers should be competent. This Report does the opposite by focusing on what legal researchers should be most expert – determining the relevant laws, government policies, and facts. Consequently, just as the 2002 article evaluated whether law review articles violated empirical research rules, this Report evaluates whether social science environmental policy articles were incorrect or incomplete about the relevant laws, government policies, or facts.
Although the 2002 article concluded that every empirical law review article was fatally flawed methodologically, this Report does not conclude that every social science environmental policy article was fatally flawed substantively. However, the overwhelming majority of those articles were substantively uninformed, amateurish, shoddy, and/or deceptive. Anyone with a basic understanding of the environmental laws, policies, facts, and/or data relevant to any particular article would conclude after only a brief review that the article was seriously flawed. Unfortunately, social science journals publishing environmental policy articles have been like runaway trains of invalid research that keep picking up new passengers. This Report explains in detail the substantive problems with each of these articles.