14 October 2013

Journal club as training for reviewing papers.

The last two posts I've written were basically leading up to this set of musings about how journal club has helped me become a better reviewer of manuscripts. Peer review is one of the most lauded aspects of science, described as the foundation of academic integrity and part of a check-and-balance system of the scientific process. However, I find this particular task of academic life to be problematic, mostly because of the mysterious nature of the process. Moreover, most academics would agree there is a great deal of subjectivity in whether manuscripts are acceptable for publication. How much do these standards vary by scientific discipline (or subdiscipline, or model organism)? What counts as a dealbreaker, a red flag that keeps a manuscript from being published? Does anonymity factor into how we criticize papers?

Part of my impetus for thinking about this topic is a desire to be more effective at reviewing papers, which will hopefully allow me to write better papers myself. Some of these topics have been discussed at length elsewhere, but I'm interested in integrating existing obligations in my weekly schedule into this realm of professional development. I argue here that participating in journal club helps us attune to how others evaluate manuscripts. Here are a few things I've noticed from attending various journal clubs and reading groups.

  1. Some people tend to like most papers, while other people tend to dislike all papers. This goes beyond the ability to critically analyze the content of the papers. Recently during a discussion, a few senior scientists actually described their method of reading papers in exactly those terms. The take-home lesson on this point is that the general tone with which someone discusses your manuscript sometimes has little relation to the scientific merit of the work, but rather simply reflects how that person views scientific inquiry.
  2. There are vastly different standards for how much information to include in a manuscript. This is especially true when there is an enormous amount of information in supplementary material. Judicious use of a methods summary and proper citations can make the difference between a reader being overwhelmed by uncertainty and accepting that the stated facts are adequately supported. These standards also vary substantially by subdiscipline.
  3. There is a bimodal distribution of acceptance for ambiguous or unknown statements. Some readers prefer to have caveats, study limitations, and generalized discussion of impacts stated explicitly, while others will always view such statements as a liability to the study.
  4. Communicating results from new technologies in scientific manuscripts is a moving target. I know the most about genome sequencing technologies, which are among the fastest-growing methods in the biological sciences. Clear elucidation of the limitations and benefits of these technologies in the manuscript text is essential to dispel misconceptions other scientists may have about these methods.

Of course, there are a few other ways we receive explicit feedback which helps normalize our standards for peer review. As authors, we receive feedback on manuscripts we've written and submitted. As reviewers, editors may forward the final decision and thoughts from other reviewers of the same manuscript. As editors, we see a wide breadth of submitted articles and have the best idea of publication standards. The problem with each of these viewpoints is that they are highly sample-size dependent. As an early-career scientist, I would be hard-pressed to have a decent representation of solid paper reviews from my own publications. Even as an editor, I imagine it would be easy to fall into a myopic view of science based on the standards of a single journal. Discussion groups with peer scientists, especially from a variety of career stages and subdisciplines, can be one of the best ways to stay abreast of fluctuating standards for scientific inquiry.


09 October 2013

How to think about research much different from your own.

As a follow-up to my post last week about the value in learning about a breadth of topics, I thought it apropos to briefly describe some of the most profound ways in which my scientific thinking has been altered through engaging with disparate research.

Here's a bit of context. I was trained as an undergrad in molecular systematics of plants. I knew a lot about plant evolution and a little about molecular genetics. What did I learn when I started grad school and had to attend seminars about cellular pathways in mice, or behavior in insects? Here are a few examples.

  1. Researchers trained in particular fields approach the narratives of science from different perspectives. The way we ask scientific questions, design experiments, and convey our results differs widely depending on the biological scale and phenomena we're addressing. For example, my tendency towards thinking about organismal evolution is strikingly different from a reductionist view of molecular developmental pathways. These ways of thinking are not mutually exclusive, but sometimes it seems we get stuck in thinking about science the same way. One of my current officemates was trained as a physicist, which results in some pretty eye-opening revelations about biological complexity and uncertainty.
  2. You can do really cool science by applying methods from one theoretical background to a novel question from another field. There are some obvious examples of the success of these mash-ups. The modern synthesis, evolutionary development, and systems biology are all examples of uniting previously disparate fields of research. My personal favorite is the application of ecological principles to genomics (some examples are here, here and here).
  3. Cross-talk helps unite themes in biology that transcend any single model system. A great example of this point comes from journal club last week. Metagenomic methods borrow largely from those developed by ecologists to evaluate how the diversity and abundance of organisms differ between ecosystems. It's pretty obvious to ecologists who work on macro-organisms that the average size of a species can factor heavily into its influence on an ecosystem. The same argument applies for microbes that differ widely in average size, yet biomass is rarely considered in microbial studies. Talking about vastly different study systems helps remove model-system-specific bias.
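The borrowed ecological machinery in that last point is easy to make concrete. Below is a minimal sketch of the Shannon diversity index, computed first on raw taxon counts and then on biomass-weighted counts; the taxa, read counts, and per-cell biomass weights are entirely hypothetical:

```python
import math

def shannon_diversity(abundances):
    """Shannon index H' = -sum(p_i * ln p_i) over relative abundances."""
    total = sum(abundances)
    return -sum((n / total) * math.log(n / total) for n in abundances if n > 0)

# Hypothetical community: three taxa with raw read counts.
counts = [50, 30, 20]
print(round(shannon_diversity(counts), 3))  # → 1.03

# Weighting the same taxa by an assumed average per-cell biomass
# changes their effective contribution to the community.
biomass = [c * w for c, w in zip(counts, [1.0, 10.0, 0.5])]
print(round(shannon_diversity(biomass), 3))  # → 0.526
```

The weighted run illustrates the biomass point: the same three taxa yield a very different diversity estimate once average organism size is taken into account.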

Of course, those are just a few of my favorite vignettes to validate the time I spend thinking about research that isn't directly related to my own. Dare I say that such thought experiments are also simply fun? Basically, I refuse to let myself be impatient about attending seminars or meeting with visiting scientists if their work is very different from my own. I had a great one-on-one meeting with Darwin historian Alistair Sponsel a few weeks back when he visited NESCent. We only spoke for half an hour, but the time was constructively spent talking about visualization of different types of data as conveyed across a time scale: certainly important insight for both historians and biologists.

My last point is that understanding a breadth of research helps make your own research deliverables more appealing to a broader audience. Some practical applications are obvious: how to communicate in a seminar to a broad audience, how to convince a panel of experts your grant is worth funding. I'll continue this thought in a few days, focusing on one particular part of our job: peer review.

04 October 2013

Journal club and breadth of research

As someone interested in research synthesis, it's not surprising I have an appreciation for a wide breadth of biological investigations. My PhD training was in a biology department which ran the gamut of research, from neuroscience to cell/molecular to ecology/evolution. As a result, the peers with whom I interacted often pursued research only distantly related to my interests in plant evolution. Fellow graduate students working on mouse stem cells, molecular pathways in fungus, and katydid behavior offered some of the best insights into the formation of my dissertation questions and analysis.

The result of this training is acceptance that I will often be drawn into discussions about science which does not include direct application to my personal expertise. Rather than bemoan the time "wasted" by these "distractions," I instead use them as a way to improve the overall efficacy of my scientific thinking.

Take, for example, this selection of articles, each of which was discussed at NESCent's journal club sometime during the last few months (our journal club basically invites all NESCent scientists to read a journal article prior to an hour-long meeting where we talk through the paper).
  1. Dung Beetles Use the Milky Way for Orientation
  2. Gut Microbiota from Twins Discordant for Obesity Modulate Metabolism in Mice
  3. Is there Room for Punctuated Equilibrium in Macroevolution?
  4. Genomic Evolution and Transmission of Helicobacter pylori in two South African families
  5. The Tragedy of the Commons
NESCentians are all evolutionary biologists, but that's where the generalities end. That sampling of articles also handily describes the variation in research from participating scientists. Molecular to organismal, animals to bacteria, theory and empiricism. While most of the papers are current (2013), the fifth article is from 1968. The really beautiful part is that some articles don't even mention evolution at all! 

Somehow, we still manage to find plenty of things to discuss (sometimes rather heatedly). This model for journal club does well to expand our brains and promote novel research questions by taking advantage of the variety of expertise here at NESCent. There's generally one person in attendance who knows something about the model system or experimental approach, who then answers basic questions about the whys and hows of the methodology. The goal is not for us to understand every nuanced detail of the paper's analysis, but to focus on the parts in which we're interested.

The moral of the story: it's not necessary to have a super-specific focus for a discussion group to still have meaningful and interesting discourse. The particular benefits gleaned from these interactions, however, will have to wait for another post.

30 September 2013

On science writing: Gender

An interesting little tool popped up in my newsfeed the other day which only served to fuel my preoccupation with writing style and clarity. Gender Guesser is a system which estimates the gender of a writer based on a submission of at least 300 words of text. The estimation is based on word frequencies and parts of speech. I won't talk more about the specifics of the algorithm, except to say that the original research doesn't seem to include any discussion of science writing in particular.
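As a rough illustration of how a word-frequency classifier of this kind works, here is a toy sketch; the indicator words and weights below are invented for the example and are not the actual values used by Gender Guesser or the research behind it:

```python
# Toy word-frequency gender classifier. These indicator words and
# weights are made up for illustration; the real tool uses word lists
# and weights derived from the original published research.
FEMALE_WEIGHTS = {"with": 52, "if": 47, "not": 27, "she": 6, "and": 4}
MALE_WEIGHTS = {"what": 35, "more": 34, "the": 17, "is": 8, "a": 6}

def guess_gender(text):
    """Sum the weights of indicator words on each side and compare."""
    words = text.lower().split()
    female = sum(FEMALE_WEIGHTS.get(w, 0) for w in words)
    male = sum(MALE_WEIGHTS.get(w, 0) for w in words)
    verdict = "female" if female > male else "male"
    return verdict, female, male

verdict, f, m = guess_gender("what is the more likely verdict if not this")
print(verdict, f, m)  # → male 74 94
```

The actual estimator also weighs parts of speech, which is why a bag-of-words toy like this understates what the tool does.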

Being a somewhat obsessive data collector, I proceeded to submit a broad selection of my own writing to the online interface. An example of my results appears below (this result is actually from the same blog post I wrote about revising last week).


I'm not really surprised that nearly all of my writing samples are classified as male, often with very high (i.e., >90%) confidence for both informal and formal writing. I tested ~20 writing samples with appropriate word lengths, including posts from this blog, personal writing, and even excerpts from my last publication, for which I am sole author. At best, I am only scored as weakly female (the semantics of which are another issue altogether). The only exception is a blog post from over four years ago.

What are the implications? The authors of the web interface for Gender Guesser note that females writing in fields dominated by males (and I believe biology qualifies) will tend to score as male. Have I been trained to write in a more masculine manner? Moreover, do I really care if my writing possesses masculine characteristics? Perhaps a more important component of this discussion is what style of writing is more appealing to a wide breadth of readers, or whether readers purposely or subconsciously discern the gender of a writer from an anonymous sample.

27 September 2013

On science writing: The reader's perspective

I have a good excuse for this most recent unplanned, unannounced blog break. During the month of September, I attended a writing workshop led by George Gopen (Writing from the Reader's Perspective) and discovered that I still have much to learn about the process of writing. A brief overview of Gopen's premise can be found in a succinct article from American Scientist, but here are a few interesting points I noted:

  1. Contemporary teaching for improving writing is often focused on the unimportant parts of communication and narration.
  2. Preoccupation with good grammar and punctuation hides more effective ways to improve writing and enforces inequality.
  3. First person can be useful in scientific writing but is often imprecise ("We" didn't all hold hands and perform PCR) and can sound ridiculous.
  4. Passive voice is perfectly fine when used in the appropriate context.
  5. Just because something "sounds" good doesn't mean readers will be able to appropriately interpret your meaning (I'm especially bad about this; I read things aloud to determine clarity).

As per Gopen's recommendations at the end of the course, I selected an old blog post and checked the writing sample for several of his identified "reader expectations" (explained in the article mentioned above). My biggest problem is misplacement of old and new information, which is a very common problem among science writers.

Knowing one or more errors may be lurking in my writing has danced on the periphery of my perception for weeks, stifling my urges to put pen to paper and making me acutely aware of my failings as a professional. While this assessment may seem a bit melodramatic, I am in the midst of sending off applications for jobs, and the requisite cover letters, research statements, and teaching philosophies I've been including now appear to be suboptimal. I've begun the tedious process of revising these documents. While perfect application of my newly learned skills is impossible, I'm hoping for marked improvement...or at least the ability to write without hearing Gopen's voice chastising me.

16 July 2013

Monocots at Monocots!


I couldn't very well travel to the 5th International Conference on Comparative Biology of Monocotyledons and not post pictures of monocots, could I? I figured most of the other conference attendees would be covering the New York Botanical Garden, so I took the opportunity to document monocots in the gardens at The Cloisters, a portion of the Metropolitan Museum of Art.


The Cloisters' gardens feature a variety of plants relevant to the medieval theme of the annex, including a wide range of plants used for food and medicine. Not a lot was blooming at the time of my visit, but I managed to snag a few shots of particular plants of interest. Shown here: Paris (above), Hemerocallis (right), and Allium (bottom).

I appreciated the museum's transparency about which plants were poisonous (something often overlooked by horticulturists), as well as notes in some portions of the garden about which plants were new additions and how the particular cultivars grew best.


02 July 2013

I am a data vulture.

Heather Piwowar was one of the iEvoBio keynote speakers last week. I tweeted directly after her talk that she'd lit a fire under my ass to start advocating for open access/science/data, so I immediately started my own ImpactStory profile and began meticulously analyzing my professional life.

I've long been sold on open science, and have gradually implemented practices to bring my daily professional life in line with that philosophy. In her talk, Heather spent time discussing issues hindering open science. Most of these were familiar arguments, such as current or future research being "scooped" by data being publicly available. Sometimes I'm personally hindered by technology, time, preferences of collaborators, or even just my own ignorance. I acknowledge, however, that my research here at NESCent is dependent on open access to data and analytical tools, and am hoping to continue this type of research for the rest of my career.

Starting at slide 98, Heather began relating some of the more visceral reactions from scientists opposed to providing open access to their data. I was a bit shocked at the rhetoric surrounding their claims: fear of "armchair ecologists" and "data vultures" reaping the benefit of analyzing data without having to set foot in the field.

I sat back in surprise at the realization that I am a data vulture. I literally feed my research on the carrion of discarded genomic sequencing projects, digging through the trash of the repetitive genomic fraction. I've always liked the mental image of digging through genomic junk. I use pictures of Oscar the Grouch or photos of myself digging through trash cans (right) in my professional presentations. Suddenly, though, those cute metaphors were starting to seem like a betrayal of "real" science.

Heather related the words of another respondent who was disinclined to share data: "we bleed for each data point." I nearly laughed at the rhetoric, recognizing how similar it was to a post I wrote over two years back: I bleed for my thesis: Part 1. I've literally been that scientist before.

I've sat on both sides of the metaphorical data collection fence. I know how much time, money, and energy it takes to do field/lab/greenhouse work, and I appreciate the desire to make the most of that investment with thorough data analysis. I know the work of data collection isn't always rewarded in our current academic climate. However, I'm not afraid of others knowing about my research. I gladly welcome collaborators, and am happy to foist off projects on other folks. I have enough ideas for research to last several lifetimes and will gladly share them.

What to do about my existential crisis? I suppose I'd rather spend my time really owning the label "data vulture" than worry about keeping my research secret. In my mind, the benefits of sharing data and research far outweigh the potential risks. I'll press onward with my perhaps idealistic view of science research and think about knitting a vulture costume...because that would be an even better visual gag for presentations, right?