Mistakes Reviewers Make (umd.edu)
50 points by sjrd on Feb 7, 2016 | 10 comments


Academic journal article reviewing is a very peculiar world, and I wasn't impressed by what I saw of it. My observations (as a former postdoc in biomedical research) were:

Reviewers get no pay or remuneration, scant guidelines, and no training (which is where articles like this can make a difference). There are no tangible career benefits for doing it (other than 'everyone else does it'), because you won't get any sort of official record for papers that you've reviewed. (And because it's single-blind, you'll never be credited on the paper.) There's very little feedback or quality control on reviews exerted by editors. You can't ever discuss the paper with your fellow reviewer(s). And it's an enormous time sink - reviewing a paper properly takes at least two hours, depending on the length and complexity. This is a real issue when you're in a field where doing lab research, writing your own grants and papers, reading the latest literature to keep up to date with the field, and possibly doing some teaching or admin, already takes up most of your time.

It's a seriously broken system. I inherently like the idea of doing reviews because it feels like you're giving back something to the community, but it ended up feeling like this good will was being taken advantage of by the journals, particularly the for-profit ones. I'm amazed that the whole system continues to work as well as it does.


All that, and reviewers who have made a name for themselves in a small-ish field can view new entrants into that field as competitors rather than collaborators.


At the very least, services are popping up to help researchers get credit for peer review, like Publons (https://publons.com/), along with journals that publish peer reviews, like Nature Communications as of a recent announcement: http://www.nature.com/ncomms/2015/151214/ncomms10277/full/nc... (and F1000 and eLife in the life sciences).


This part reminds me of some of the job interviews I've gone to as a software developer:

"Detail-oriented: New researchers are often immersed in the minutiae of research, such as building software, collecting data, and running experiments. This means that they tend to focus on details (which may or may not be significant) rather than the bigger picture."

I am in my 40s, yet when I go to a job interview I am often interviewed by people in their 20s. I have 20 years of experience with dozens of technologies. And yet, just recently, I found myself facing a long list of questions about the details of specific technologies, for instance NodeJS. While I may not know the details of NodeJS, I had no trouble learning Struts and then Spring and then Ruby on Rails. Is there any reason to think I can't pick up the details of NodeJS? I have done one major project with Node; is it really crucial that I know all the latest packages before I get a job at your company?

In these interviews I am often surprised by the focus on very specific aspects of particular technologies. Who really cares? We all need to learn some new technologies for any job, even if it is just the specifics of the software that the company has built.

I am often surprised at the extent to which my 20 years of experience is discounted. However, I run into this less often when I am interviewed by someone who is in their 30s or 40s or 50s -- they seem more willing to recognize that I've had a long career and I've learned a lot of tech.


I think this is a small sample size. I spend a lot of my time as an engineering leader teaching people how to interview engineering candidates. A lot of them are young but intrinsically recognize that trivia questions aren't important. Because I read about things like behavioral interviewing now, instead of the release notes of the new webpack, I get to make an impact in my org, and lots of smart companies do the same.


I can provide some insight into this. As someone in their 30s, I fully understand and appreciate your point, but I nevertheless ask specific questions about the technologies we use.

The reason that specific questions are asked, in my interview context anyway, is to ensure there is ideological alignment with, and acceptance of, the technology. Whilst an experienced developer should be able to understand a programming language or framework, being willing to work with its idioms or recommended best practices can be difficult for some.

As an extreme example, if someone is well versed in procedural programming, and we work in an OO heavy environment, then my concern is about whether the candidate can understand the reasoning for our abstractions and contribute to our OO modelling conversations. The same could be said of an experienced OO practitioner being asked to code in a functional manner.
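
To make that gap concrete, here is a rough sketch of the same small computation written with a procedural habit and with an OO habit. It is only an illustration: the invoice domain and every name in it are invented for the sketch, and nothing here comes from the comment itself.

    // Hypothetical illustration of the procedural-vs-OO gap described above.
    // The invoice/line-item domain and all names are invented for this sketch.

    import java.util.List;

    class ParadigmSketch {

        // Procedural habit: a free-standing function that works on raw data.
        static double totalProcedural(List<double[]> lineItems) {
            double total = 0;
            for (double[] item : lineItems) {
                total += item[0] * item[1]; // price * quantity
            }
            return total;
        }

        // OO habit: behaviour lives on the object that owns the data.
        record LineItem(double price, int quantity) {
            double subtotal() {
                return price * quantity;
            }
        }

        static double totalObjectOriented(List<LineItem> items) {
            return items.stream().mapToDouble(LineItem::subtotal).sum();
        }

        public static void main(String[] args) {
            System.out.println(totalProcedural(List.of(new double[]{10.0, 3})));
            System.out.println(totalObjectOriented(List.of(new LineItem(10.0, 3))));
        }
    }

The second version also hints at the functional angle mentioned above, since the aggregation is a stream pipeline rather than an explicit loop; the interview question is really about whether a candidate is comfortable moving between these styles.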

I think a good rebuttal, although I haven't had candidates do this, nor have I tried it in interviews myself, is to be well read on what a technological detail is about, then draw parallels to what you do know in detail and explain how you are able to work with the technology the interviewer is talking about, thanks to the similarities with your experience.

For example, working in a PHP shop, if I asked someone about Doctrine ORM and they responded that they had worked with Hibernate in Java, that would allow me to go into a line of questioning about modelling that leads away from the specifics of Doctrine, and I would be perfectly happy with such a response. On the other hand, if they outright dismiss it and say that they prefer to always write queries directly, I may need to question the context and decide whether they are so inflexible about it that it would cause them issues when working with our codebase.
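
As a hedged sketch of the contrast that answer opens up, here is what the two approaches look like side by side. It is written in Java with standard JPA annotations (the Hibernate-style mapping that Doctrine's annotations resemble) rather than in PHP, and the Product entity, its table, and the EntityManager/Connection wiring are all assumptions made for the example, not anything from the comment.

    // Sketch of ORM-mapped loading versus a hand-written SQL query.
    // Uses jakarta.persistence (javax.persistence on older stacks);
    // the Product entity and its table are invented for the example.

    import jakarta.persistence.Entity;
    import jakarta.persistence.EntityManager;
    import jakarta.persistence.GeneratedValue;
    import jakarta.persistence.Id;

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    @Entity
    class Product {
        @Id
        @GeneratedValue
        Long id;
        String name;
        double price;
    }

    class OrmVersusSql {

        // ORM style: the mapping layer turns the row into a domain object,
        // so the modelling conversation happens in the entity classes.
        static Product loadWithOrm(EntityManager em, long id) {
            return em.find(Product.class, id);
        }

        // Hand-written SQL: full control over the query, but the
        // row-to-object mapping has to be maintained by hand.
        static Product loadWithSql(Connection conn, long id) throws Exception {
            String sql = "SELECT id, name, price FROM product WHERE id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) return null;
                    Product p = new Product();
                    p.id = rs.getLong("id");
                    p.name = rs.getString("name");
                    p.price = rs.getDouble("price");
                    return p;
                }
            }
        }
    }

Neither approach is wrong on its own; the interview question is probing whether the candidate can discuss that modelling trade-off rather than refusing one side of it outright.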


I commonly have academic computer science papers rejected for a variety of these reasons. That is, the reviewers do not have any factual or methodology concerns; they don't think we're wrong, or that we made any mistakes designing our experiments. They just don't like it.

I've started to call these "Your baby is ugly" reviews.


I don't think this has a lot to do with being a new reviewer. My experience has in fact been that due to time pressure, senior researchers make decisions on papers for completely the wrong reasons. They pervert the cause of science because of the rat race. The one that really sticks out in the list, that I'm a stickler for, is "details".

The devil IS in the details. If a paper can't communicate the important details, then how can you ever claim that your work is reproducible? If your work is not reproducible, then it has no place in a scientific journal. In my field, a lot of senior researchers haven't executed a line of code in years, so the natural feeling is that the "detail" isn't important. If your code is not open-source and can't be audited, you better have the details in place.

Another thing that I think deserves A LOT more attention is the co-author list. There should be more in place to stop what's all too common: people forcing themselves, especially senior researchers, onto papers that they have no business being on. The setup in a lot of academic environments is such that this can't be tackled from the inside. I think this should be a fundamental part of a "reviewer manifesto": figure out who wrote the paper, and if you can't, ask to find out.


Hear, hear. I would add one thing: make sure that you don't advance claims without evidence (this goes for all scientific enterprises, not just writing reviews). The fact that your review is anonymous does not excuse failing to uphold the basic standard of science: all claims must be stated as clearly as possible and supported by evidence.


Felt this was glossed over. It's all well and good to not be too harsh for the reasons they mentioned, but ultimately the point is to 'peer review' the science.

In fact I would say an important "mistake reviewers make" is ... not actually doing much work. I've seen some appalling 2-3 line comments like "seems fine", even from senior academics. And that's before even getting into problems like misunderstood p-values, not reading the algorithm closely, not walking through the proof manually, and so on.



