As a researcher, your work is always subject to others’ scrutiny and judgement: during submission of a manuscript, by the reviewers; after publication (if it gets that far), by virtually anyone working in your field of study. That’s how it works; it can be argued that this is what sets scientific papers apart from any other kind of publication, and you, as a researcher, are trained to deal with it.

You also quickly learn to deal with the frustration and psychological impact of a rejection: it basically says “your current work does not deserve to be published”. That may be because the work is poorly written or poorly organised, or because of methodological errors, insufficient evaluation, bad experimental results, etc.

Whatever the reasons, it is in your best interest, as a researcher, to quickly learn how to interpret a rejection properly, so as to overcome the frustration and even be happy about it: your peers are telling you what is interesting in your paper and what is not, what is correct and what is not, what is well described and what is not, and so on, while also giving you suggestions for improving it. For free. And, at least in theory, those giving you advice are experts in the field, hence the very researchers you most likely look to as examples of scientific rigour and success.

However, this is the best-case scenario. In the worst case, a rejection may leave you with nothing but frustration and the feeling that your hard work has been disrespected.

In this post, unfortunately, I want to tell you about this latter case.

Here follows a screenshot of the meta-review I received for one of my papers:

[screenshot of the meta-review]

A meta-review is (briefly and roughly) a summary of the other reviews, written by a (usually) senior or expert researcher after a discussion phase in which all the reviewers try to reach consensus about accepting or rejecting a paper (or, at least, about its strengths and weaknesses).

How would you interpret the “Reasons to reject” (“A type of paper that was more commonly seen in the literature previously”)? I have no idea. It does not sound like a scientific motivation for rejection. It sounds instead very much like a post-hoc justification of an a-priori decision, or a post-hoc excuse to reject a paper that does not fit within the acceptance threshold (more on this later).

The remainder of the meta-review is not much better.

Just to clarify: for this specific submission we got 3 accepts out of 3 reviews, and all the reviews are largely positive, with few remarks (some of them expressed in tentative form by the reviewers themselves). However, the meta-review completely ignores this and quickly dismisses the manuscript with (quote) “The Paper well-written, clearly motivated.” as the “Reasons to accept”.

In particular, all of the following positive assessments vanished (there are actually more in the reviews, but I focus on those that most directly contradict the lacklustre meta-review):

  • {Novelty} Good: The paper makes non-trivial advances over the current state-of-the-art.
  • {Impact} Good: The paper is likely to have high impact within a subfield of AI […]
  • {Evaluation} Good: The experimental evaluation is adequate, and the results convincingly support the main claims.
  • {Reasons to accept} The paper brings important, reproducible results that should be applicable to many other domains. This research has been well-planned and carried out.
  • {Evaluation} Good: The experimental evaluation is adequate, and the results convincingly support the main claims.
  • {Novelty} Good: The paper makes non-trivial advances over the current state-of-the-art.
  • {Impact} Good: The paper is likely to have high impact within a subfield of AI […]
  • {Evaluation} Excellent: The experimental evaluation is comprehensive and the results are compelling.
  • {Reasons to accept} […] it addresses an important topic directly related to the conference theme. The presented approach is technically sound and fairly evaluated.

In addition, the less positive scores (which are, in any case, not negative) all come from the same reviewer, who self-assessed as the least confident one.

Receiving a rejection despite 3 clear accepts is already frustrating, but, as weird as it sounds, it may happen at competitive conferences (again, a matter of acceptance thresholds). Such frustration could be quickly overcome by looking at the suggestions for improvement you received. It could, if you got any… Without them, you are left with nothing but frustration and disrespect.

I hope none of you ever receives a similar review, but if you do, here follows my advice for getting past it quickly and with your mental health intact:

  1. blame the (meta)reviewer, not your work
  2. blame the acceptance threshold, not your work

Blame the (meta, in this case) reviewer, as it is his/her job to critique your paper scientifically and constructively. Failing to do so is his/her problem, not yours.

Blame the acceptance threshold, as it is either a now-obsolete legacy of the past or, worse, a clear sign of laziness on the part of the conference’s program committee.

This topic would deserve a post of its own, but let’s be quick here, as I’m getting tired of writing (😉): the best conferences have low acceptance rates (say, below 25%). This is a matter of fact (IJCAI, AAAI, etc.).

Upon this matter of fact, many conferences quickly and lazily try to become the next best conference via the following (methodologically wrong) logical implication: to be a best conference, we must set our acceptance rate low.
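For what it’s worth, the mistake here is the classic fallacy of affirming the consequent: the implication runs in one direction only. A minimal sketch in propositional form (Top and LowRate are my own shorthand, not terms from any conference’s call):

```latex
% Fact: the best conferences have low acceptance rates.
%   Top => LowRate
% Fallacy: concluding that a low acceptance rate makes a conference top.
%   LowRate => Top   (does NOT follow)
\[
  (\mathit{Top} \Rightarrow \mathit{LowRate})
  \;\not\vdash\;
  (\mathit{LowRate} \Rightarrow \mathit{Top})
\]
```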

Hence, the (poor?) meta-reviewer found him/herself in the position of having to reject a paper because the acceptance threshold had already been reached. That’s not science.

Peace.