April 21, 2016

The Corruption of Scientific Inquiry

Many years ago, when I was a freshly minted Ph.D. with an internationally published dissertation and my first peer-reviewed article on my résumé, I was asked to review an article for a journal. It is common practice to use published authors as double-blind referees within their field of expertise, so I happily accepted. The article was about the possibility of applying the Scandinavian welfare state to the Chinese economy, a topic that I found intriguing back then and still find interesting today (though for very different reasons than I did some 15 years ago). 

To my disappointment, the article was poorly written, the research behind it was spotty, and it contained significant factual errors and unsubstantiated conclusions. I pointed this out to the editor of the journal and expected the usual cordial "thank you for your effort". 

Instead, I got an e-mail lambasting me for having reached the wrong conclusion. Not only did the editor think the paper was of adequate scholarly quality; he believed it deserved to be published and perhaps even to lead the next issue of the journal. 

Needless to say, I was never invited to referee papers for said journal again.  

Fast forward a dozen years, to 2014.


I had just published my book Industrial Poverty: Yesterday Sweden, Today Europe, Tomorrow America (Gower Applied Research, 2014; now under Routledge), in which I argue that Europe's economic crisis is systemic and fundamentally caused by the welfare state. I devoted an entire chapter of the book to the Swedish welfare-state crisis of the early 1990s, a crisis that left the former herald of all welfare states crippled, crooked and sorely inadequate at delivering on its promises to the people.

As I was browsing the titles of papers to be presented at an academic conference, I came upon one that seemed to suggest that the Swedish welfare state was an excellent role model for certain general income-security reforms here in the United States. Since the conference website provided only abstracts, I looked up the author, contacted him and pointed to the conclusions the abstract suggested. I explained that those conclusions seemed to rest on at least two premises that were actually false, and that at least one of the false premises was of such consequence that the entire paper - again, judging by the abstract - could be in jeopardy. I asked if he had a copy of the paper he could share and explained, with reference to my book, that I would be happy to review it to ensure that the rest was fully up to date. 

I got an e-mail back from the author claiming that the data he had used in the paper was perfectly fine and that my services were not needed. It should be pointed out that the data he referred to was old - not to say antiquated - and that his conclusions would therefore be grossly inaccurate. The most serious consequence would be that his "scholarly" work could be used by policy makers to drastically change the income-security system in the United States - with detrimental consequences for the nation's economy.

These are only two examples of an academic culture that is light years from the community of brilliant thinkers and bright ideas it was once created to be. In the first example, the editor had an agenda, academic or political, that the paper in question validated and spoke adoringly of. By giving preference to papers that confirmed his own views, the editor could boost his standing in the academic community and gain more power, influence and research funds. 

In the second example, the author was again unwilling to let facts disturb him, since his mind was already made up. He had a channel to influential politicians and wanted to use his credentials as a college professor and "scholar" to further his own political views. 

I have seen countless examples of the same kind of power-driven or political corruption of the academic system. I have often wondered whether the disappointing state of economics, both as an academic discipline and as a science, is due to economists' manic obsession with multivariate regressions, second-order differential equations and Cobb-Douglas utility functions - or whether it is part of a broader corruption of modern science in general. 

One does not exclude the other, of course, but with all the junk - pardon my Danish - being churned out by academic journals these days, one cannot help wondering how we manage to keep tabs on the quality of it all.

The truth is, we don't. In a long but well-worth-reading article at FirstThings.com, a computer programmer by the name of William A. Wilson points at the emperor and loudly broadcasts all over the internet, "Look, he is naked": 
The problem with ­science is that so much of it simply isn’t. Last summer, the Open Science Collaboration announced that it had tried to replicate one hundred published psychology experiments sampled from three of the most prestigious journals in the field. Scientific claims rest on the idea that experiments repeated under nearly identical conditions ought to yield approximately the same results, but until very recently, very few had bothered to check in a systematic way whether this was actually the case. The OSC was the biggest attempt yet to check a field’s results, and the most shocking. In many cases, they had used original experimental materials, and sometimes even performed the experiments under the guidance of the original researchers. Of the studies that had originally reported positive results, an astonishing 65 percent failed to show statistical significance on replication, and many of the remainder showed greatly reduced effect sizes.
The article is an interesting, if wordy, review of modern practices in science (both natural and social, though with no mention of economics) that have eroded the quality of empirical work to the point where most reported results claiming a positive research outcome are in all likelihood false.
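The arithmetic behind that claim is worth seeing. As a minimal sketch of my own (not taken from Wilson's article, and with the rates chosen purely for illustration): if true effects are rare and studies are underpowered, then journals that publish only "significant" results end up publishing mostly false alarms, and those findings fail on replication - much as the Open Science Collaboration found.

```python
# Illustrative simulation of publication bias and replication failure.
# All parameter values below are assumptions for the sake of the example.
import random

random.seed(1)

ALPHA = 0.05        # false-positive rate of a single study
POWER = 0.35        # chance an underpowered study detects a real effect
TRUE_RATE = 0.10    # fraction of tested hypotheses that are actually true
N_STUDIES = 100_000

def run_study(effect_is_real):
    """Return True if the study reports a 'significant' result."""
    p_detect = POWER if effect_is_real else ALPHA
    return random.random() < p_detect

published = []  # journals publish only the significant results
for _ in range(N_STUDIES):
    real = random.random() < TRUE_RATE
    if run_study(real):
        published.append(real)

# Rerun each published study once under identical conditions.
replicated = [run_study(real) for real in published]

false_share = 1 - sum(published) / len(published)
rep_rate = sum(replicated) / len(replicated)
print(f"published findings that are actually false: {false_share:.0%}")
print(f"published findings that replicate:          {rep_rate:.0%}")
```

With these assumed rates, over half of the published findings are false and only a small minority replicate - even though every individual study cleared the conventional 5 percent significance bar.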

Wilson discusses some of the reasons why scientific journals publish studies whose results are, for the most part, irreproducible or otherwise shown to be false. I am not going to go into all the details here, only make the point that I have witnessed practices in economics that are at least as troubling. Unlike in physics or biology, though, when the failure of a line of research in economics is ignored or even relabeled a success, the consequences for our entire society and our economy are devastating.

A recent example, which I will discuss at length in an article of its own, is the serious errors made by economists at the IMF when evaluating the Greek economic crisis. Their quantitative analysis concluded that Greece could handle the harsh austerity measures imposed on the nation by its creditors. You do not have to be an economist to know how grossly erroneous that research was: the Greek economy shrank by one quarter, unemployment reached depression levels and all kinds of suffering soon spread among the Greek public.

Nobody questioned the initial research at the IMF. It is not far-fetched to assume that since the results validated one macroeconomic paradigm over another, and since that paradigm had prestigious proponents, simply nobody bothered to question the initial findings.

It was only when the Greek economy and Greek society were falling apart that questions were asked.

To the IMF's credit, its economists later issued a mea culpa paper explaining their errors. It had some remedial effect on their work and on the science of economics, but the damage to the Greek economy was already done.

It really does not matter why scientists misrepresent, falsify, exaggerate or fail to double-check their research. It does not matter whether the motive is political influence, the advancement of an agenda or a rise through the ranks of academia. The effect is the same: inevitably, it destroys the integrity of the institution of scientific inquiry.

Go read Wilson's article. It is well worth the time. 
