As scientists, we like to think that we are objective in the interpretation of data. As humans, our personal value-based judgments creep into these assessments far more than we might realize. I often think about these biases in my career, both as a researcher (when writing papers or reviewing manuscripts) and as an educator.
I mentioned at the beginning of the semester that I'm obligated to watch student presentations for a class on science communication. As the semester progresses, I'm getting a better handle on how students (at least in our department) are starting to think about scientific arguments. A few students have made what are, at least to me, surprising statements about the strengths and weaknesses of the primary literature they discuss: they claim that results failing to support the hypothesis are a weakness of a paper (and, conversely, that "supporting the hypothesis" is a strength).
One of the first lessons I learned while doing scientific research is that, more often than not, unexpected results and questions emerge from our experiments. You can learn something from any experiment, even if it's simply a better way to perform the experiment in the future. Unfortunately, research on science research (I know, so meta) indicates that results which fail to support the hypothesis are often discarded by researchers as "useless" or "no good" (you can read more about this phenomenon in association with the formation of the Journal of Negative Results in Biomedicine here). Though unfortunate, this behavior makes sense in the competitive realm of science research, where a nice, concise story can make the difference between publishing and wallowing in academic purgatory.
It gives me pause, however, when students indicate such a value preference for results that support a hypothesis. In my mind, results aren't "good" or "bad"; they simply...are. Is devaluation of negative results innate in our educational mindset? On the other hand, students vocalizing negative results as a weakness of a paper may be more a matter of semantics than of science; for example, they may have a completely justifiable concern about particular experimental drawbacks but aren't nuancing their argument enough to say so. Alternatively, they might just be saying what they think we (the instructors) want to hear, hoping we'll give them points for covering all the required topics listed on the rubric.
Regardless of the direct causes of such reasoning, I'm going to be keeping an eye (ear?) on such comments as I continue developing classes. In isolation, these judgments about the value of results can be a starting point for some interesting discussions about epistemology and hypothesis testing. More often, though, these mindsets work in concert with other scientific misconceptions, which can create huge problems for me as an educator. Fortunately, there are sections in my bioinformatics class that I'm explicitly developing so we can discuss whether the results we're obtaining are accurate and meaningful. I'm looking forward to embedding these higher-level reasoning skills into the regular content of the class.