I’ve touched briefly on the psychology of research development in the past, but one question remains of interest to me: do we only publish and cite research that we want to believe is true, whether we know it or not?
Talking about biased science is by no means new. The problem of null results going unpublished has been debated heavily in academia well beyond my own writing, and it’s a well-known issue. Publishing is inherently an economic game, so you can only really succeed by publishing work that is deemed “effective”.
But this creates a new problem entirely: how do we define effective? Researchers usually pride themselves on their open minds, and I don’t dispute that they have them. But when so many major scientific advances were ignored for years or decades before being accepted into the canon, it makes you wonder whether there is a dissonance at play. We seem to carry an inherent bias that established research is correct and that new research contradicting it is wrong. Similarly, we tend to believe research in new fields when it is derivative of other work we already assume is “right”. Taking explosive-sounding ideas with a grain of salt is a nice principle, but (ironically) it works better in theory than in practice. Once again, when so much groundbreaking research sits ignored for years, that hints more at a determined conservatism than at critical thinking. In a truly critical-thinking world, these papers would have been accepted as soon as their rigor was demonstrated. What gives?
Human bias, of course. We stay stuck in our ways: we cling to the status quo because it’s familiar to us, and too much change (the more groundbreaking, the worse) stresses us out. I don’t think this problem can ever be fully solved, since it is so ingrained in the human psyche, but I also don’t think it will hurt us too badly as long as there are a few critical thinkers out there in the world who recognize the good stuff when they see it.