Wait -- so apparently it's acceptable in economics to publish big, important papers with far-reaching public-policy implications without releasing the data on which they were based? That would never fly in the sciences.
Regardless of what side you come down on w.r.t. this particular issue (and I realize it's a sensitive one), the fact that global policy was influenced by a paper that used secret, buggy Fortran code to manipulate a data set is extremely concerning. This is not good science.
Publishing data and source code should be a requirement these days.
"The test in science is whether findings can be replicated using different data and methods. More than two dozen reconstructions, using various statistical methods and combinations of proxy records, have supported the broad consensus shown in the original 1998 hockey-stick graph"
But the result wasn't discredited by those replications: there was in fact a normalization error in Mann et al.'s original analysis, and we didn't find out until eight years later because the data-manipulation code was "proprietary."
Meanwhile, everyone proceeded as if it were fact.
We need to raise the bar of skepticism and with it the burden of proof. Show me your raw data. Show me your code for manipulating that data into the final result. I'm honestly surprised that publishing data + paper + a source code repository isn't required in order to be taken seriously these days.
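To make the normalization point concrete, here is a rough, self-contained sketch. To be clear, this is my own toy code, not the code at issue; the AR(1) "proxies", the phi value, and the stick_index measure are all made up for illustration. The point it shows: depending on which window you center the series over before running PCA, the leading component of pure red noise can come out with a pronounced offset in the late "calibration" window.

    import numpy as np

    rng = np.random.default_rng(42)
    n_years, n_proxies, phi, calib = 600, 70, 0.9, 80

    # Synthetic AR(1) "proxies": persistent red noise, no climate signal at all.
    noise = rng.standard_normal((n_years, n_proxies))
    X = np.empty_like(noise)
    X[0] = noise[0]
    for t in range(1, n_years):
        X[t] = phi * X[t - 1] + noise[t]

    def leading_pc(data, window=None):
        # First principal component over time; `window` picks the slice used
        # for centering (None = center on the full record).
        mean = data.mean(axis=0) if window is None else data[window].mean(axis=0)
        u, s, _ = np.linalg.svd(data - mean, full_matrices=False)
        return u[:, 0] * s[0]

    def stick_index(pc):
        # How far the late "calibration" window sits from the rest of the
        # series, in units of the earlier period's standard deviation.
        return abs(pc[-calib:].mean() - pc[:-calib].mean()) / pc[:-calib].std()

    print("full-period centering:", round(stick_index(leading_pc(X)), 2))
    print("late-window centering:", round(stick_index(leading_pc(X, slice(-calib, None))), 2))

Run it with a few different seeds: the late-window centering tends to produce a leading component with a much larger offset between the calibration window and the rest. Whatever you think of the underlying controversy, this kind of sensitivity to a preprocessing choice is exactly what you can only check if the processing code is published.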
"7) Basically then the MM05 criticism is simply about whether selected N. American tree rings should have been included, not that there was a mathematical flaw?
There is no serious rebuttal or debunking of McIntyre contained within that post. Even if we ignore the details of that specific controversy, it is still obvious that there are
1) big problems in the climate-science community with regard to data and replicability (see Climate Audit for more than you could ever want), and
2) massive statistical problems with reconstructions; see McShane and Wyner 2011.
I don't think it even matters what study we're talking about. A publicly-funded scientist who withholds "proprietary" source code used in a result of this magnitude is not doing good science.
The interesting question to me is whether any government policy was decided on the basis of this result before it was debunked. I haven't seen clear evidence of that, but would love a confirmation. Otherwise, isn't this exactly the type of thing people are upset about regarding Reinhart-Rogoff?
Yes, though this has been a cause championed by people like Levitt and Dubner (the Freakonomics authors): that data sets must be published in tandem with the conclusions based on them. (Assuming, of course, the data sets are not already publicly available; it does no good to republish census data and the like!)
You should see how bad it is in my field of study, biomechanics. It's a pure black box. We have total cronyism, where one reviewer has disproportionate power because he was the first to publish much of the modern work. People will therefore do anything to have him review their papers, and his own papers are pushed through despite what many feel are glaring errors in methodology and data collection.
This is the walled garden crap that turns me off to "peer-reviewed research."
Feynman talked about an "extra" kind of integrity, where one bends over backwards to show how one might be wrong. In my field (organic chemistry), this is de rigueur, and it is enforced by the fact that you can release a paper one day and have people report on how replicable it is within 24 hours.
For a much fuzzier field like economics, where the line between knowledge and opinion is extremely blurry, this "bending over backwards" to show how you might be wrong should be applied a hundred times over.
Umm, that's not true. In plenty of fields it's common not to publish the raw data, and instead to publish only some graphs that summarize a huge data set down to a couple of data points. This includes biology, physics, and medicine. If anything, people actively "protect" and "guard" their data. And yes, that's a big problem.
Or they selectively release data. A personal example: a researcher was happy to give me unpublished data from a famous study that turned up an exciting result; then I asked about a pilot study she led, which the technical report indicated was turning up a null result. All of a sudden, of course, she would not give me any of the data and told me I shouldn't be relying on unpublished research!
I avoided Poli Sci at every turn, so this is actually an honest question: how much science is in Poli Sci? It seems a bit like Computer Science, in that if the degree has the word "Science" in it, it probably isn't one.
There's quite a bit of data-analytics-type work in quantitative poli-sci these days. Whether it's good science or not varies, much like in the rest of "big data", where there's no guarantee that either the data or the analysis is sound. There's some very careful work, and a lot of data-dredging on datasets of convenience (or over-extrapolation from limited data sets).
More traditional poli-sci is scientific in the sense of the social sciences, which has a fairly long history of epistemological debate I'm only vaguely familiar with. I think I would probably call it scientific in a certain sense, but maybe a different word is needed. I'd group it vaguely with disciplines like anthropology, archaeology, and linguistics as areas with quite a bit of methodological diversity, but still a more empirical orientation than you find in the humanities. Part of the issue is that there is data, but how to interpret the data is complex ("there is no such thing as raw data"). Though for mostly institutional reasons there are some people who are more philosophers or historians who also happen to be in poli-sci departments.
Exactly. We had two extremes in my department: one professor felt PS should be regarded as a hard science, and another felt it belonged with the humanities.
At my school, it depended heavily on what your focus was. We had three professors, each focusing on their own area of specialization: American politics, international politics, and theory.
The American politics professor had an academic pedigree and insisted that Political Science could and should be a hard science backed up by facts and numbers. In his papers you need statistics, studies, and graphs. He focused very much on the methodology of your research and how well you could back up your claims with solid evidence.
The theory professor appreciated hard facts, but it was much more important to him that you had a well-reasoned argument; as _delirium said, he would much rather read a 10-page paper full of epistemological debate than one twice as long filled with detailed Bayesian analysis.
The international politics professor didn't care because he had tenure.
Yes, it would, because the test of scientific knowledge is whether it can be confirmed with a different data set.
If a conclusion is only provable with one specific data set, then it can hardly be considered a universal objective truth of nature.
By analogy: if I drop a hammer and tell you that gravitational acceleration is 9.8 meters per second squared, you don't need to come over to my house and borrow my hammer to test it.
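To put the analogy in numbers (made-up measurements, obviously, not anyone's actual data): drop anything from a known height, time the fall, and you can confirm g yourself from h = (1/2) g t^2, no borrowed hammer required.

    # Hypothetical drop measurements: (height in m, fall time in s)
    your_drops = [(1.20, 0.49), (2.00, 0.64), (0.80, 0.41)]

    # h = 0.5 * g * t**2  =>  g = 2 * h / t**2
    estimates = [2 * h / t ** 2 for h, t in your_drops]
    print([round(g, 1) for g in estimates])                    # each within a few percent of 9.8
    print(round(sum(estimates) / len(estimates), 2), "m/s^2")  # ~9.76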
Sharing data and code is helpful for error checking, as it was in this case.
In this case the main publication that had impact was a book, and books, even in the physical sciences, tend to be reviewed under different standards. There is some pretty out-there stuff published by major physicists in book form.