14 October 2013

Journal club as training for reviewing papers.

The last two posts I've written were basically leading up to this set of musings about how journal club has helped me become a better reviewer of manuscripts. Peer review is one of the most lauded aspects of science, described as the foundation of academic integrity and part of the check-and-balance system of the scientific process. However, I find this particular task of academic life to be problematic, mostly because of the mysterious nature of the process. Moreover, most academics would agree there is a great deal of subjectivity in whether manuscripts are acceptable for publication. How much do these standards vary by scientific discipline (or subdiscipline, or model organism)? What counts as a dealbreaker, a red flag serious enough to keep a manuscript from being published? Does anonymity factor into how we criticize papers?

Part of my impetus for thinking about this topic is a desire to be more effective at reviewing papers, which will hopefully allow me to write better papers myself. Some of these topics have been discussed at length elsewhere, but I'm interested in folding this realm of professional development into existing obligations in my weekly schedule. I argue here that participating in journal club allows us to become attuned to how others evaluate manuscripts. Here are a few things I've noticed from attending various journal clubs and reading groups.

  1. Some people tend to like most papers, while other people tend to dislike all papers. This goes beyond the ability to critically analyze the content of the papers. Recently during a discussion, a few senior scientists actually described their method of reading papers in exactly those terms. The take-home lesson on this point is that the general tone with which someone discusses your manuscript sometimes has little relation to the scientific merit of the work, but rather simply reflects how that person views scientific inquiry.
  2. There are vastly different standards for how much information to include in a manuscript. This is especially true when there is an enormous amount of information in supplementary material. Judicious use of a methods summary and proper citations can make the difference between a reader being overwhelmed by uncertainty and accepting that the facts stated are adequate. These standards also vary substantially by subdiscipline.
  3. There is a bimodal distribution of acceptance for ambiguous or unknown statements. Some readers prefer to have caveats, study limitations, and generalized discussion of impacts stated explicitly, while others will always view such statements as a liability to the study.
  4. Communicating results from new technologies in scientific manuscripts is a moving target. I know the most about genome sequencing technologies, which are among the fastest-growing methods in the biological sciences. Clear elucidation in the manuscript text of the limitations and benefits of these technologies is essential to correct misconceptions other scientists may have about these methods.

Of course, there are a few other ways we receive explicit feedback that helps normalize our standards for peer review. As authors, we receive feedback on manuscripts we've written and submitted. As reviewers, editors may forward the final decision and the thoughts of other reviewers of the same manuscript. As editors, we see a wide breadth of submitted articles and have the best sense of publication standards. The problem with each of these viewpoints is that they are highly sample-size dependent. As an early-career scientist, I would be hard-pressed to assemble a representative sample of solid paper reviews from my own publications alone. Even as an editor, I imagine it would be easy to fall into a myopic view of science based on the standards of a single journal. Discussion groups with peer scientists, especially those spanning a variety of career stages and subdisciplines, can be one of the best ways to stay abreast of fluctuating standards for scientific inquiry.
