The paper by Vyn and McCullough (2014) should not have been published in its current form, as its results are being misinterpreted and widely publicized in the press and in radio broadcasts. The core issue is the lack of power in the statistical tests, a problem the authors partially acknowledge but then dismiss by focusing attention on tests of the sensitivity of their model specification. The article appears to encourage the misinterpretation of its statistical findings.
Out of the 5414 sales, only 79 post-turbine sales are of properties within a 5 kilometer radius; the rest are within a 50 kilometer radius. The diversity of the houses in the sample is very large, as indicated by their price range of ten thousand to two million dollars and by the relatively low R-squares (0.57) in the hedonic regressions. Given the small number of properties that may have been adversely affected and the great diversity of properties in the sample, it is not at all surprising that the regressions yield no ‘statistically significant’ results. The shortage of observations on properties close to the turbines cannot be overcome by extensive sensitivity testing of model form. The problem lies with the lack of data, not with the model form, and focusing on the form tends to obfuscate the issue.
The authors do recognize the data problem: “Unfortunately, there are relatively few observations in the post-turbine periods that are in close proximity to turbines” (p 375) and “Hence, these numbers of observations are likely too few to detect significant effects, which represents a major limitation of this analysis” (p 387). But there are three problems that should have been picked up and corrected through the peer review and editorial decision process.
First, the authors conclude:
“The empirical results generated by the hedonic models, using three different measures to account for disamenity effects, suggest that these turbines have not impacted the value of surrounding properties” (p 388). This is wrong for two reasons. First, failing to discern an impact is not the same as demonstrating that there was no impact. Second, they misuse the term ‘value’. If you have a choice between two properties, identical in all respects except that one is close to a turbine while the other is not, and you choose the far one, then the turbine has an effect on the value of the property. This hypothetical example tests the paper’s hypothesis using common sense rather than a statistical measure.
Second, the authors claim:
“The findings of this paper will provide evidence that may help to resolve the controversy that exists in Ontario regarding the impacts of wind turbines on property values” (p 369), and then they proceed to do all they can to make a non-finding appear important, repeating the general statement that they found no significant impact. They correctly said in the CBC interview this morning that their study did not find a statistically significant price effect, but the public and reporters, not being familiar with statistical terms, interpret this as saying that there was no price effect. Not finding a statistically significant impact because of a data shortage does not mean that there was no significant (i.e. important) impact. This distinction was not made clear enough in the paper, nor in the follow-up interviews and newspaper articles.
Third, the reviewers and, finally, the editors should have insisted that the power of the statistical tests be calculated and reported. I understand that editors of the major health science journals insist on this because their readers, doctors and other clinicians, are not always aware of statistical fine points, yet they need to be fully aware of the qualifications before using the results to change their practice. Given the potential impact a misinterpretation of the findings could generate, the power of the tests should be reported even in the abstract. The reader should be told how big an impact would have to be before it could be detected by a statistical test with this number of observations. Had the price of properties near the turbines been 10 percent lower than they actually were, would the model have yielded a statistically significant finding of a price decrease at, say, the 0.05 probability level? What about a 20 percent decrease: would it have been ‘statistically significant’? Answers to this type of question would have been easy to produce and far more relevant than sensitivity tests of the model form.
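To show how little effort such a calculation requires, the following sketch approximates the power of a simple two-sample test on log prices to detect a given proportional price effect. The sample counts (79 treated, 5414 total) come from the paper; the residual standard deviation of log prices (SIGMA = 0.4) is my own assumption for illustration, not a figure reported in the article, and a proper calculation would use the regression's actual residual variance.

```python
import math

# Sample counts from the paper; SIGMA is a hypothetical assumption.
N_TREATED = 79            # post-turbine sales within 5 km
N_CONTROL = 5414 - 79     # remaining sales in the sample
SIGMA = 0.4               # assumed residual s.d. of log price (illustrative)
Z_CRIT = 1.96             # two-sided critical value at the 0.05 level

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(effect):
    """Approximate power of a two-sample z-test on log prices to detect a
    proportional price change `effect` (e.g. -0.10 for a 10% decline)."""
    shift = abs(math.log(1.0 + effect))          # effect on the log scale
    se = SIGMA * math.sqrt(1.0 / N_TREATED + 1.0 / N_CONTROL)
    z = shift / se
    # Probability the test statistic exceeds the critical value either way.
    return normal_cdf(z - Z_CRIT) + normal_cdf(-z - Z_CRIT)

for pct in (-0.10, -0.20):
    print(f"{pct:+.0%} price effect -> approximate power {power(pct):.2f}")
```

Under these assumed numbers, a 10 percent price decline would be detected only part of the time, while a 20 percent decline would be detected almost surely; whatever the true residual variance, reporting figures of this kind would have told readers exactly how large an effect the study could have found.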
The paper deals with an important issue that can have serious policy implications affecting the well-being of many people. The results can affect the location of wind turbine farms and the compensation claims of affected parties. Incorrect information or interpretations can be very hard to correct. In such cases, it is the journal editors’ responsibility to ensure that results are presented in a manner that, at the very least, does not encourage the misinterpretation of the findings.
Andrejs Skaburskis, Professor Emeritus
North American Editor: Urban Studies,
School of Urban and Regional Planning,
Kingston Ontario, Canada