
Wednesday, May 1, 2024

Encouraging ethical publishing

Scientists establish their credentials and reputations by publishing peer-reviewed articles. Asking answerable questions, collecting unbiased empirical evidence to evaluate those questions, and passing through the gauntlet of peer review to publish findings are the hallmarks of science. Essentially, publishing in a peer-reviewed scientific journal means that you are a scientist. However, the publishing landscape is replete with ethical, moral, practical, reputational, and economic decisions.

 

 

Deciding where to publish is a complex, multifaceted process. The considerations typically include:

 

1) Journal impact factor (and there has been a lot written about this).

2) Breadth of journal/topic area (is your article of general interest or better suited to informing a more specific audience?).

3) Cost to publish (open-access charges, page charges, etc.).

4) Article match (does the journal tend towards robust experiments, observational data, or theory?).

5) Editorial board (people you respect or who are knowledgeable about your area of research).

6) Experience with journal (a place you’ve published before).

7) Who is doing the publishing and realizing the benefit of your work (we don’t discuss this enough).

 

The recent article by Receveur and colleagues, titled “David versus Goliath: Early career researchers in an unethical publishing system” and published in Ecology Letters, argues that individual researchers need to make better publishing decisions in order to support a more ethical publishing landscape. They come at this from the point of view of early career researchers (ECRs), who are disproportionately affected by publishing decisions, but nothing in their article is exclusive to ECRs. In fact, I’d say that these discussions about publishing are best served by including the entire community.

 

Before I dig a little more into Receveur et al.’s suggested path forward, I will say a bit about my own publishing philosophy. I now only send my articles to society-owned or non-profit journals. My manuscripts with more general appeal are sent to Science or PNAS (both society journals). I do not review for Springer Nature, Wiley, Elsevier, etc. unless the publication is a society one. I do not want my labour, effort, and creativity to be turned into someone else’s profit; rather, if indirect benefits arise, I want them to serve academic communities. I came to this philosophy slowly over time, but it solidified about five or six years ago, seeing new Nature journals created without any meaningful contribution back to the communities they purportedly serve. As those in my working groups and collaborations can attest, I do make my perspective known, though I won’t hold up others’ publishing decisions.

 

The Receveur et al. general guidelines are a good set of rules to follow, though some aspects could use more detail. For example, they state that decisions should be made on whether publications are ethical or not, but they don’t really set the parameters of what counts as ‘ethical’. They do cite profits as one consideration and highlight some of the profits made by publishers; Elsevier’s profits were two orders of magnitude higher than Wiley’s. Does this mean Wiley is much more ethical than Elsevier? Maybe, or maybe not.

 

What does ethical publishing look like?

- The journal follows the ethical guidelines laid out by COPE (the Committee on Publication Ethics). This means that the publication has transparent processes and business practices and bases decisions on anonymous peer review.

- Academics/researchers should be the ones making both the operational and strategic decisions for the journal.

- Editorial boards are populated by active researchers in the field, and these boards should be diverse and representative (gender, geography, career stage, etc.).

- The journal’s primary mandate is not to generate profits for a company, but rather to advance scientific knowledge.

- Proceeds made by the publication feed back into the scientific community.

 

As a result of these ethical imperatives:

- Journals should be society-owned and managed. Even if a journal is published by a for-profit publisher, society ownership indicates that oversight is likely not profit-driven and that proceeds go back into supporting the community.

- If the journal is not owned by a society, then a non-profit publisher again ensures that profit is not the primary motivation influencing decision-making.

 

For an example that I am intimately familiar with[i]: the British Ecological Society, which owns eight ecological journals plus a grey-literature repository, partners with Wiley to publish. Wiley obviously has a profit mandate, but the Society negotiates publishing contracts that prioritize benefits to BES members, and it retains all decision-making power over its publications.

 

Moving forward

As Receveur and colleagues argue, there needs to be a culture change. I wholeheartedly agree. Right now, many academics support a perverse system that does not have our best interests in mind. Building on the Receveur et al. recommendations, what should we do as individuals? 

 

- Publish in society or non-profit journals.

- Publish in journals that adhere to ethical standards.

- Evaluate the quality of the contributions of candidates for positions or promotion.

- Choose to serve on society or non-profit journal editorial boards rather than on publisher-owned for-profit ones.

- Only review for society or non-profit journals.

- Value service to society or non-profit journal editorial boards and reviewing in hiring, promotion, and annual progress evaluations.

 

Finally, Receveur and colleagues point to an invaluable resource for determining which journals are owned by societies or non-profit organizations: the DAFNEE database of ethical journals.

 

This is a discussion that needs to be had by academics more broadly, and needs to influence hiring, tenure, awards, and grant committees, so that we are cognizant of individual and shared ethical publishing behaviour.

 



[i] Note that I am the Editor-in-Chief of Ecological Solutions and Evidence and the Chair of Applied Ecology Resources, two newer BES publication projects. Before this, I was the Editor of Journal of Applied Ecology. So, I have been intimately entangled with the BES-Wiley relationship for years, might not have a completely objective perspective, and have developed friendships with people on both sides of this.

Wednesday, May 13, 2020

Publication Partners: a COVID-19 publication assistance program in conservation science


Researchers around the world are trying to keep up with work duties and responsibilities while being required to stay at home. For some people this means caring for young children or other family members, devising homeschooling, switching courses to online delivery, scheduling meetings with team members, receiving new duties from superiors, and perhaps worrying about job security. It is natural that these people may feel overwhelmed and that routine tasks, like checking references or proofreading manuscripts, might seem insurmountable.

However, for others, COVID-19 lockdowns have resulted in more time to push projects to completion and clear out backlogs. The impact of COVID-19 restrictions on individuals is therefore unequal.

These unequal COVID-19 impacts on individuals affect not only mental wellbeing and career trajectories; they also come on top of the desperate necessity for conservation science to continue. We win by having a greater diversity of experts communicating with one another.

Publication Partners is an attempt to address some of this COVID-19 impact inequality and to ensure that conservation science is still being published, by assisting people with their manuscript preparation. This is a match-making service for the conservation community, bringing researchers struggling with their current working conditions together with those who feel they have extra capacity and are willing to help others in this difficult time. A partner might be asked for publication advice, to assist with manuscript editing, to help sort and check references, to organize tasks for revisions, or to prepare figures.

The idea is that a Publication Partner would normally contribute less than would be expected for authorship and thus will be listed in the acknowledgments of the resulting paper. Publication Partners will match volunteers with those requesting support.

To volunteer or request a partner, please see this document with contact instructions.

As a journal editor, I see this as a valuable and much-needed assistance strategy. And I’m not alone: many of the most important conservation journals have signaled their support and welcome submissions using this service. The journals supporting Publication Partners include (please note that the list of journals is being updated and so will change over time):

 *Thanks to Bill Sutherland for sharing his thoughts on this post.

Wednesday, April 12, 2017

The most "famous" ecologists (and some time wasting links) (Updated)

(Update: This has gotten lots more attention than I expected. Since first posted, the top 10 list has been updated 2 times based on commenters’ suggestions. You can also see everyone we looked up here. Probably I won't update this again, because there is a little time wasting, and there is a lot of time wasting :) )

At some point my officemates Matthias and Pierre and I started playing the 'who is the most famous ecologist' game (instead of, say, doing useful work), in particular looking for ecologists with an h-index greater than 100. An h-index of 100 means that the scientist has 100 publications with at least 100 citations each, while their remaining papers each have no more than 100 citations. Although the h-index is controversial, it is readily available and reasonably captures scientists that have above-average citations per paper and high productivity. We restricted ourselves to only living researchers. We used Publish or Perish to query Google Scholar (which now believes everyone using the internet in our office may be a bot).

We identified only 12 ecologists with an h-index of 100 or greater. For many researchers in specialized subfields, an h-index this high is probably not achievable. The one commonality in these names seems to be that they either work on problems of broad importance and interest (particularly climate change and human impacts on the landscape) or were fundamental to one or more areas of work. They were also all men, and so we tried to identify the top 12 women ecologists. (We tried as best we could, using lists here and here to compile our search.) The top women ecologists had been publishing for an average of 12 fewer years than the male ecologists (44 vs. 56 years), which may explain some of the rather jarring difference. The m-index is the h-index divided by years publishing, and so standardizes for differences in career age.
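
For concreteness, here is a minimal sketch of how these two metrics can be computed from a list of per-paper citation counts; the citation numbers and career length below are made up purely for illustration:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

def m_index(citations, years_publishing):
    """h-index divided by years since first publication (career-age adjusted)."""
    return h_index(citations) / years_publishing

# Hypothetical example: eight papers and their citation counts
papers = [250, 180, 150, 120, 90, 60, 10, 3]
print(h_index(papers))       # 7
print(m_index(papers, 20))   # 0.35
```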

(It's difficult to get these kinds of analyses perfect due to common names, misspellings in citations, different databases used, etc. It's clear that for people with long publication lists, there is a good amount of variance depending on how that value is estimated.)

Other links: 
(I've been meaning to publish some of these, but haven't otherwise had time or space for them...)
Helping graduate students deal with imposter syndrome (Link). Honestly, it's not only graduate students who suffer from imposter syndrome, and it is always helpful to get more advice on how to escape the feeling that you've lucked into something you aren't really qualified for.

A better way to teach the Tree of Life (Link). This paper has some great ideas that go beyond identifying common ancestors or memorizing taxonomy.

Analyzing which scientists are on Twitter (Link).

Recommendation inflation (Link). Are there any solutions to an arms race of positivity?  


Tuesday, January 24, 2017

The removal of the predatory journal list means the loss of necessary information for scholars.

We at EEB & Flow periodically post about trends and issues in scholarly publishing, and one issue that we keep coming back to is the existence of predatory Open Access journals. These are journals that abuse a valid publishing model to make a quick buck, operating to clearly substandard practices meant to subvert the normal scholarly publishing pipeline (for example, see: here, here and here). In identifying those journals that, through their publishing model and activities, are predatory, we have relied heavily on Beall's list of predatory journals. This list was created by Jeffrey Beall with the goal of providing scholars with the information needed to make informed decisions about which journals to publish in and to avoid those that likely take advantage of authors.

As of a few days ago, the predatory journal list has been taken down and is no longer available online. Rumour has it that Jeffrey Beall removed the list in response to threats of lawsuits. This is really unfortunate, and I hope that someone who is dedicated to scholarly publishing will assume the mantle.

However, for those who still wish to consult the list, an archive of it still exists online; it can be found here.

Monday, October 17, 2016

Reviewing peer review: gender, location and other sources of bias

For academic scientists, publications are the primary currency for success, and so peer review is a central part of scientific life. When discussing peer review, it’s always worth remembering that since it depends on ‘peers’, broader issues across ecology are often reflected in issues with peer review. A series of papers from Charles W. Fox and coauthors Burns, Muncy, and Meyer does a great job of illustrating this point, showing how diversity issues in ecology are writ small in the peer review process.

The journal Functional Ecology provided the authors with up to 10 years of data on the submission, editorial, and review process (between 2004 and 2014, maximum). These data provide a unique opportunity to explore how factors such as gender and geographic locale affect the peer review process and outcomes, and also how this has changed over the past decade.

Author and reviewer gender were assigned using an online database (genderize.io) that includes 200,000 names, each with an associated probability reflecting how often the name belongs to each gender. The geographic locations of editors and reviewers were also identified based on their profiles. There are some clear limitations to this approach, particularly that Asian names had to be excluded. Still, 97% of names were present in the genderize.io database, and 94% of those names were associated with a single gender >90% of the time.
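
As a rough illustration of this kind of name-based assignment, here is a minimal sketch using the public genderize.io API, with a 0.90 probability cutoff mirroring the threshold described above. The exact pipeline Fox et al. used is not described here, so treat this as an assumption-laden example rather than their method:

```python
import requests

def assign_gender(first_name, threshold=0.90):
    """Return a gender for `first_name` only if genderize.io associates the
    name with a single gender at least `threshold` of the time; otherwise
    return None (such names would be excluded from the analysis)."""
    resp = requests.get("https://api.genderize.io", params={"name": first_name})
    resp.raise_for_status()
    data = resp.json()  # e.g. {"name": ..., "gender": ..., "probability": ..., "count": ...}
    if data.get("gender") is not None and data.get("probability", 0.0) >= threshold:
        return data["gender"]
    return None

print(assign_gender("Charles"))  # likely 'male'
print(assign_gender("Sasha"))    # may be None if the name is ambiguous
```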

Many—even most—of Fox et al.’s findings are in line with what has already been shown regarding the causes and effects of gender gaps in academia. But they are interesting, nonetheless. Some of the gender gaps seem to be tied to age: senior editors were all male, and although females make up 43% of first authors on papers submitted to Functional Ecology, they are only 25% of senior authors.

Implicit biases in identifying reviewers are also fairly common: far fewer women were suggested than men, even when female authors or female editors were identifying reviewers. Female editors did invite more female reviewers than male editors did ("Male editors selected less than 25 percent female reviewers even in the year they selected the most women, but female editors consistently selected ~30–35 percent female"). Female authors also suggested slightly more female reviewers than male authors did.

Some of the statistics are great news: there was no effect of author gender or editor gender on how papers were handled and their chances of acceptance, for example. Further, the mean score given to a paper by male and female reviewers did not differ – reviewer gender isn’t affecting your paper’s chance of acceptance. And when the last or senior author on a paper is female, a greater proportion of all the authors on the paper are female too.

The most surprising statistic, to me, was that there was a small (2%) but consistent effect of handling editor gender on the likelihood that male reviewers would respond to review requests: they were less likely to respond, and less likely to agree to review, if the editor making the request was female.

That there are still observable effects of gender in peer review despite an increasing awareness of the issue should tell us that the effects of other forms of less-discussed bias are probably similar or greater. Fox et al. hint at this when they show how important the effect of geographic locale is on reviewer choice. Overwhelmingly editors over-selected reviewers from their own geographic locality. This is not surprising, since social and professional networks are geographically driven, but it can have the effect of making science more insular. Other sources of bias – race, country of origin, language – are more difficult to measure from this data, but hopefully the results from these papers are reminders that such biases can have measurable effects.

From Fox et al. 2016a. 

Tuesday, June 14, 2016

Rebuttal papers don’t work, or citation practices are flawed?

Brian McGill posted an interesting follow-up to Marc’s question about whether journals should allow post-publication review in the form of responses to published papers. I don’t know that I have any more clarity on the answer after reading both (excellent) posts. Being idealistic, I think that when there are clear errors, they should be corrected, and that editors should be invested in identifying and correcting problems in papers in their journals. Based on the discussions I’ve had with co-authors about a response paper we’re working on, I’d also like to believe that rebuttals can produce useful conversations and ultimately be illuminating for a field. But pragmatically, Brian McGill pointed out that rebuttals rarely seem to make an impact (citing Banobi et al. 2011). Often this was because citations of flawed papers continued, and “were either rather naive or the paper was being cited in a rather generic way”.

Citations are possibly the most human part of writing scientific articles. Citations form a network of connections between research and ideas, and are the written record of progress in science. But they're also one of the clearest points at which biases, laziness, personal relationships (both friendships and feuds), taxonomic biases, and subfield myopia are apparent. So why don't we focus on improving citation practices? 

Ignoring more extreme problems (coercive citations, citation fraud, how to cite supplementary materials, data, and software), as the literature grows more rapidly and pressure to publish increases, we have to acknowledge that it is increasingly difficult to know the literature thoroughly enough to cite broadly. A couple of studies found that only 60-70% of citations were scored as accurate (Todd et al. 2007; Teixeira et al. 2013) (whether you see that as too low or pretty high depends on your personality). Key problems were the tendency to cite 'lazily' (citing reviews or synthetic pieces rather than delving into the literature within) or 'naively' (citing high-profile pieces in an offhand way without considering rebuttals and follow-ups, a key point of the Banobi et al. piece). At least one limited analysis (Drake et al. 2013) showed that citations tended to be much more accurate in higher-IF journals (>5), perhaps (speculating) due to better peer review or copy editing.

Todd et al. (2007) suggest that journals institute random audits of citations to ensure authors take greater care. This may be a good idea, but one difficult to institute when peer reviewers are already in short supply. It may also be useful to consider rebuttal papers part of the total communication surrounding a paper: the full text would include them, they would be automatically downloaded with the PDF, and there would be a tab (in addition to author information, supplementary material, references, etc.) for responses.

More generally, why don't we learn how to cite well as students? The vast majority of advice on citation practices found with a quick Google search concerns avoiding plagiarism and stylistic matters. Some of it is philosophical, but I have never heard a deep discussion of questions like: What’s an appropriate number of citations for an idea? For a manuscript? How deep do I cite (do I need to go to Darwin)? It would be great if there were a consensus advice publication on best practices in citation, of the sort the BES is so good at producing.

Which is to say that I still hope rebuttals can work and be valuable.

Friday, May 27, 2016

How to deal with poor science?

Publishing research articles is the bedrock of science. Knowledge advances through testing hypotheses, and the only way such advances are communicated to the broader community of scientists is by writing up the results in a report and sending it to a peer-reviewed journal. The assumption is that papers passing through this review filter report robust and solid science.

Of course this is not always the case. Many papers include questionable methodology and data, or are poorly analyzed. And a small minority actually fabricate or misrepresent data. As Retraction Watch often reminds us, we need to be vigilant against bad science creeping into the published literature.



Why should we care about bad science? Erroneous results or incorrect conclusions in scientific papers can lead other researchers astray and result in bad policy. Take, for example, the well-flogged Andrew Wakefield, a since-discredited researcher who published a paper linking autism to vaccines. The paper is so flawed that it does not stand up to basic scrutiny and was rightly retracted (though how it passed through peer review is an astounding mystery). However, this incredibly bad science invigorated an anti-vaccine movement in Europe and North America that is responsible for the re-emergence of childhood diseases that should have been eradicated. This bad science is responsible for hundreds of deaths.

From Huffington Post 

Of course most bad science will not result in death. But bad articles waste time and money when researchers go down blind alleys or work to rebut papers. The important thing is that there are avenues available to researchers to question and criticize published work. Nowadays papers are usually criticized through two channels. First is through blogs (and other social media): researchers can communicate their concerns and opinions about a paper to the audience that reads their blog or through social media shares. A classic example was the blog post by Rosie Redfield criticizing a paper published in Science that claimed to have discovered bacteria that used arsenic as a food source.

However, there are a few problems with this avenue. First, it is not clear that the correct audience is being targeted. For example, if you normally blog about your cat, and your blog followers are fellow cat lovers, then a seemingly random post about a bad paper will likely fall on deaf ears. Second, the authors of the original paper may not see your critique and so do not have a fair opportunity to rebut your claims. Finally, your criticism is not peer-reviewed, so flaws or misunderstandings in your writing are less likely to be caught.

Unlike the relatively new blog medium, the second option is as old as scientific publication: writing a commentary that is published in the same journal (often with an opportunity for the authors of the original article to respond). These commentaries are usually reviewed and target the correct audience, namely the scientific community that reads the journal. However, some journals do not have a commentary section, and so this avenue is not always available to researchers.

Caroline and I experienced this recently when we enquired about the possibility of writing a commentary on a published article that contained flawed analyses. The Editor responded that they do not publish commentaries on their papers! I am an Editor-in-Chief, and I routinely deal with letters sent to me that criticize papers we publish. This is an important part of the scientific process. We investigate all claims of error or wrongdoing, and if the concerns appear valid but do not meet the threshold for a retraction, we invite the critics to write a commentary (and invite the original authors to write a response). This option is so critical to science that its importance cannot be overstated. Bad science needs to be criticized, and the broader community of scientists should feel that they have opportunities to check and critique publications.


I can see many reasons why a journal might not bother with commentaries (to save page space for articles, because they’re seen as petty squabbles, etc.), but I would argue that scientific journals have important responsibilities to the research community, and one of them must be to hold the papers they publish accountable and to allow sound, reasoned criticism of potentially flawed papers.

Looking over the author guidelines of the 40 main ecology and evolution journals (apologies if I missed statements; author guidelines can be very verbose), only 24 had a clear statement about publishing commentaries on previously published papers. While they all had different names for these commentary-type articles, they all clearly spelled out a set of guidelines for publishing a critique of an article and how they handle it. I call these 'Group A' journals. The Group A journals hold post-publication peer critique as an important part of their publishing philosophy and should be seen as having a higher ethical standard.



Next are the 'Group B' journals. These five journals had unclear statements about publishing commentaries on previously published papers, but they appeared to have article types that could be used for commentary and critique. It could very well be that these journals do welcome critiques of papers, but they need to state this clearly.


The final class, 'Group C' journals, did not have any clear statements about welcoming commentaries or critiques. These 11 journals might accept critiques, but they did not say so. Further, there was no indication of an article type that would allow commentary on previously published material. If these journals do not allow commentary, I would argue that they should re-evaluate their publishing philosophy. A journal that did away with peer review would rightly be ostracized and seen as not fully scientific, and I believe that post-publication criticism is just as essential as peer review.


I highlight these differences not to shame specific journals, but rather to highlight that we need a set of universal standards to guide all journals. Most journals now adhere to a set of standards for data accessibility and competing-interest statements, and I think they should also feel pressured into accepting a standardized set of protocols for dealing with post-publication criticism.

Friday, March 4, 2016

Pulling a fast one: getting unscientific nonsense into scientific journals. (or, how PLOS ONE f*#ked up)

The basis of all of science is that we can explain the natural world through observation and experiments. Unanswered questions and unsolved riddles are what drive scientists, and with every observation and hypothesis test, we are that much closer to understanding the universe. However, looking to supernatural causes for Earthly patterns is not science and has no place in scientific inquiry. If we relegate knowledge to divine intervention, then we fundamentally lose the ability to explain phenomena and provide solutions to real world problems.

Publishing in science is about leaping over numerous hurdles. You must satisfy the demands of reviewers and Editors, who usually require that methodologies and inferences satisfy strict and ever-evolving criteria; science should be advancing. But sometimes people are able to 'game the system' and get junk science into scientific journals. Usually this happens through improper use of the peer review system or invented data, but papers do not normally get into journals while concluding that simple patterns conform to divine intervention.

Such is the case in a recent paper published in the journal PLOS ONE. It is a fairly pedestrian paper about human hand anatomy, yet the authors conclude that anatomical structures provide evidence of a Creator. They conclude that since other primates show a slight difference in tendon connections, a Creator must be responsible for the human hand (or at least for the slight, minor modification from earlier shared ancestors). Obviously this is lazy science and an embarrassment to anyone who works as an honest scientist. More importantly, it calls into question not only the Editor who handled this paper (Renzhi Han, Ohio State University Medical Center), but also PLOS ONE's publishing model. PLOS ONE handles thousands of papers and requires authors to pay the costs of publishing. This may just be an aberration, a freak one-off, but the implications of this seismic f$@k up should cause the Editors of PLOS to re-evaluate their publishing model.

Friday, November 6, 2015

Science in China – feeding the juggernaut*

For those of us involved in scientific research, especially those that edit journals, review manuscripts or read published papers, it is obvious that there has been a fundamental transformation in the scientific output coming from China. Both the number and quality of papers have drastically increased over the past 5-10 years. China is poised to become a global leader in not only scientific output, but also in the ideas, hypotheses and theories that shape modern scientific investigation.

I have been living in China for a couple of months now (and will be here for 7 more months), working in a laboratory at Sun Yat-sen University in Guangzhou, and I have been trying to identify the reasons for this shift in scientific culture. I see evidence that China will soon be a science juggernaut (or already is), and the reasons are clear. Here are some of the reasons I believe China has become a science leader, with lessons for other national systems.

The reasons for China’s science success:

1.      University culture.

China is a country with a long history of scholarly endeavours; we can look to the philosophical traditions of Confucius 2500 years ago as a prime example of the respect and admiration for scholarly traditions. Though modern universities are younger in China than elsewhere (the oldest being about 130 years old), China has invested heavily in building universities throughout the country. In the mid-1990s, the government built 100 new universities, and the country now graduates more than 6 million students every year from undergraduate programs.
Confucius (551-479 BC), the grand-pappy of all Chinese scholars

This rapid increase in the number of universities means that many are very modern with state-of-the-art facilities. This availability of infrastructure has fostered the growth of new colleges, institutes and departments, meaning that new faculty and staff have been hired. Many departments that I have visited have large numbers of younger Assistant and Associate Professors, many having been trained elsewhere, that approach scientific problems with energy and new ideas.
My new digs


2.      Funding

From my conversations with various scientists, labs are typically very well funded. The expansion in the number of universities seems to have been accompanied by an expansion in funds available for research projects. Professors need to write a fair number of grant proposals to have all of their projects funded, but success rates appear relatively high, with larger grants available to more senior researchers. This is in stark contrast to other countries, where funding is inadequate. In the USA, National Science Foundation funding rates are often below 10% (only 1 in 10 proposals is funded). This abysmal funding rate means that good, well-trained researchers either are not able to realize their ideas or spend too much of their time applying for funding. In China, new researchers are given opportunities to succeed.


3.      Collaboration

Chinese researchers are very collaborative. There are several national-level ecological research networks (e.g., dynamic forest plots) that involve researchers from many institutions, as well as international collaborative projects (e.g., BEF China). In my visits to different universities, I have found Chinese researchers very eager to discuss shared research interests and explore the potential for collaboration. Further, there are a number of funding schemes to get students, postdocs and junior professors out of China and into foreign labs, which promotes international collaboration. Collaborations provide the creative capital for new ideas and allow for larger, more expansive research projects.

4.      Environmental problems

It is safe to say that the environment in China has been greatly impacted by economic growth and development over the past 30 years. This degradation of the environment has made ecological science extremely relevant to the management of natural resources and dealing with contaminated soil, air and water. Ecological research appears to have a relatively high profile in China and is well supported by government funding and agencies.

5.      Laboratory culture

In my lab in Canada, I give my students a great deal of freedom to pursue their own ideas and much latitude in how they do it. Some students say they work best at night, others in spurts, and some just like to have four-day weekends every week. While Chinese students seem equally able to pursue their own ideas and interests, they tend to face stricter requirements about how they work. Students are often expected to be in the lab from 9 to 5 (at least), often six days a week. This expectation is not seen as demanding or unreasonable (as it probably would be in the US or Canada), but rather in line with general expectations for success (see next point).

Labs are larger in China. The lab I work in has about 25 Masters students and a further 6 PhD students, plus postdocs and technicians. Further, labs typically have a head professor and several Assistant or Associate Professors. When everyone is there every day, there is definitely a vibe and culture that emerges that is not possible if everyone is off doing their own thing.

The lab I'm working in -"the intellectual factory"

Another major difference is that there is a clear hierarchy of respect. Masters students are expected to respect and listen to PhD students, PhD students respect postdocs and so on up to the head professor. This respect is fundamental to interactions among people. As it has been described to me, the Professor is not like your friend, but more like a father that you should listen to.

What’s clear is that lab culture and expectations are built around the success of the individual people and the overall lab. And success is very important (see next point).


6.      Researcher/student expectations

I left the expectations on researchers for last because this needs a longer and more nuanced discussion. My own view of strict expectations has changed since coming to China, and I can now see the motivating effect these can have.

For Chinese researchers, it is safe to say that publications are gold. Publishing papers, and especially the type of journal those papers appear in, determines career success in a direct way. A masters student is required to publish one paper, which can be in a local Chinese journal. A PhD student is required to publish two papers in international journals. PhD students who receive a 2-year fellowship to travel to foreign labs are required to publish a paper from that work as well. For researchers to get a professor position, they must have a certain number of publications in high-impact international journals (e.g., Impact Factor above 5).

Professors are not immune from these expectations. Junior professors are untenured and cannot get tenure until they qualify for the next tier, so they need to publish constantly. To get a permanent position as a full professor or group leader, they need a certain number of high-impact papers. For funding applications, their publication records are quantified (number of papers and impact factors of journals) and must surpass some threshold.

Of course in any country, your publication record is the most important component for your success as a researcher, but in China the expectations are clearly stated.

While there are pros and cons of such a reward-based system, and certainly the pressure can be overwhelming, I’ve witnessed the results of this system. Students are extremely motivated and have a clear idea of what it means to be successful. To get two publications in a four-year PhD requires a lot of focus and hard work; there is no time for drifting or procrastinating.

So why has Chinese science been so successful? A number of factors have coalesced around, and support, a generally high demand for success. Regardless of the institutional and funding resources available, this success is only truly realized because of researchers' desire to exceed strict expectations. And they are doing so wonderfully.

*Over the next several months I will write a series of posts on science and the environment in China.