I’ve reflected on bias in scientific production in recent posts: on publication bias, and on how easy it is to create false positives. Being on a roll, I might as well continue. All part of my worries about the evidence base that (social) science is producing. My questions go way beyond that: bias infuses our engagement with the world at its most fundamental level. But what really worries me is preventable bias in what is the most accountable, reproducible and collective machinery for knowledge production we have. Because if we can’t even get that half right, we’ve lost the battle (mmm, questionable metaphor, but great for audio-visuals…).
One of the two particular biases I want to highlight in this post lies at the basis of my academic education as a cultural psychologist: the use of a very limited subset of humanity for experimental studies. It is obvious in the dominance of undergraduate subjects in psychological research. But it is equally the case in, for example, behavioural economics, or in medical research (sorry, in Dutch), in which for decades the model human was a white Western male of 70 kg (so forget about women, or the rest of the world).
All understandable: the reasons given by this commentator are convenience (or laziness), cost, tradition and “good enough” data. But all are fundamentally flawed, and surmountable. The scientific community just doesn’t care enough about it to take systemic action.
The other bias is also close to my heart, and to the whole business of evidence-based policy and practice, which has been my field for a while: the Randomized Controlled Trial (RCT) as the gold standard for figuring out how the world works. As Kaushik Basu argues in his recent paper on The Method of Randomization, Economic Policy, and Reasoned Intuition, randomized trials are good descriptive tools, but hyped for the wrong reasons. I am all for giving much more attention to the importance of good description, but knowing what, where and when is not yet knowing how and why. So labelling this method the gold standard does not do justice to the difficulty of drawing valid inferences from what we see here and now to how and why stuff and people work and operate the way they do. Once we enter human affairs, control may still be the ideal, as it is in physics, but in practice it remains mostly fiction.
Before concluding, let’s first enjoy a bit of basic but multi-layered human play that would be difficult to capture if the gold standard were all we had:
So where does that leave us? Our common-sense engagement with the world is very much a mediated affair: indirect, through mental constructs of which we are more or less, or not at all, aware. The heuristics that underlie them are an evolutionary heritage of mind-boggling complexity and beauty, but by definition deeply biased, and while “good enough” for many aspects of life, hardly an accurate window on the ultimate question of life, the universe and everything (although they can come up with a brilliant answer like 42, so I am the last one to complain). It cannot be repeated too often: Daniel Kahneman should be obligatory reading for all. When we go about the business of science, these heuristics are as active as ever. I consider that the hard part of the problem. You cannot disrobe at the doorstep; they accompany you in your deepest thoughts. They are all we have. I accept that. But what baffles me is that we cannot even get the easy part right. The game of science has some basic rules, and we flaunt our disrespect for them without shame. In nearly every respect our deeds do not match our words. It is a sorry sight.
By way of a concluding thought: there are lots of things we just cannot get right. Maybe I worry about the wrong ones.