This information provides guidance for ISMAR 2021 reviewing and is specifically directed towards those who are performing FULL REVIEWS of papers (e.g., secondary 2AC / committee members and external reviewers).
As in previous years, our review process will include a phase in which the primary (1AC) reviewer / coordinator examines all full reviews for quality. If the primary deems a review insufficient in detail or quality, the reviewer will be asked to revisit the review and improve it. In extreme cases, reviewers may be removed from the process, and other reviewers will be sought to replace the inadequate reviews.
As is likely clear by now, we are serious about ensuring the highest possible reviewing standards for ISMAR 2021. As such, we ask that you carefully review these guidelines and, if needed, seek additional resources to ensure you understand the level of reviewing quality we are committed to.
Before getting into the details of a quality review, note that the ISMAR 2021 review form requires you to provide a single score judging the quality of the paper, assessed with respect to papers presented at ISMAR in general.
For guidance, we consider a paper to be of sufficient journal quality if it provides clear and strong contributions to the field, whether in theory, methods, engineering, design, and/or evaluation approaches. Journal papers should reflect an in-depth overview of related work (as compared to a conference paper) and, in turn, be comprehensive and self-contained.
Therefore, as you perform your reviews, we ask that you reflect on the overall contribution of the paper and how its contributions position the submission for acceptance into the ISMAR journal track.
In terms of the actual review, we strongly recommend that you read the entire (short) article by Ken Hinckley, which we found to offer a great deal of constructive advice:
Hinckley, K. (2016). So You’re a Program Committee Member Now: On Excellence in Reviews and Meta-Reviews and Championing Submitted Work That Has Merit. https://mobilehci.acm.org/2015/download/ExcellenceInReviewsforHCICommunity.pdf
We have pulled some of the major elements from Ken's paper (in some cases modifying the text), added insights from further advice by Steve Mann and Mark Bernstein, and placed them in bullet form below:
- A high-quality review should comprise a number of paragraphs (or even several pages of well-considered commentary, if warranted).
- Short and/or content-free reviews are insufficient and will be caught by our review-of-reviews process.
- Read papers with care and sympathy. Many hours of work, in some cases years of work, have gone into researching and writing each paper. Try to avoid last-minute reviews.
- State specifically why the paper is “great”, “mediocre” or “bad”.
- Clearly describe on what grounds the paper should be accepted (or rejected).
- Describe the contributions of the paper and why they are noteworthy or important.
- Reflect on the contributions or possible contributions of the work.
- Explicitly and clearly discuss the weaknesses and limitations in a positive and constructive manner. Specifically, don't be insulting; be positive.
- Clearly and explicitly call out the strengths and utility of the work.
- Your review should not be about what should have been done; rather it should be a critique of what the authors actually did.
- Consider how the authors' arguments, results, and demonstrations fit into closely related work as well as the field as a whole.
- Do not reject a paper because of a few missed citations.
- In fact, do not reject a paper because of anything that can be easily fixed or addressed; instead, clearly state the requirements. All conditionally accepted papers will be shepherded towards an improved version that fulfills those requirements.
- A paper's failure to justify or fully motivate certain decisions likely represents a correctable oversight, not an unequivocal sign of poorly conceived research.
- Avoid the fallacy of novelty. Specifically, do not reject papers simply because they replicate prior experiments.
- Reviewing scores: Around 2/3 of your rankings should be 1 or 6, around 1/3 should be 2 or 5, and you should rarely, rarely, rarely give a rating of 3 or 4.
From ISMAR's perspective, we would like to add that, in your role as gatekeepers of high-quality papers, you should not categorically search for hidden flaws and assassinate papers wherever possible, but should instead accept papers on their merits.
When reviewing, keep in mind that almost every paper we review “could have done x, or y, or z”. Don’t fall into this trap! We cannot reject papers because authors did not perform their research the way we would have done it, or even how it is typically done. Instead, we must judge each paper on its own merit, and whether or not the body of work presented can stand on its own, as presented. Sure, every paper “could have done more”, but is the work that has been done of sufficient quality and impact?
Furthermore, we should not reject papers because “this experiment has been done before”. This fallacy of novelty ignores the long-standing tradition of replication in science. New work that performs similar research and finds consistent (or even conflicting) results is of value to the community and to science in general, and should be considered on its own merits.
When evaluating papers that report human-subjects studies, it is important that the participant sample be representative of the population for which the technology is being designed. For example, if the technology is designed for a general population, then the participant sample should include equal gender representation and a wide range of ages. All papers with human-subjects studies must, at a minimum, report demographic information including age, gender, and, if possible, race/ethnicity. Appropriate constructive critique and advice should be given to papers that make general claims but do not use representative sample populations.
In addition, we encourage you to make use of conditional acceptance. In contrast to previous years, we would like to conditionally accept more papers and give authors the chance to polish their work in the revision cycle. If the stated conditions are not fulfilled, the paper can (and should) be rejected at the end of the cycle.
And finally, to quote Ken Hinckley: “When in doubt, trust the literature to sort it out.”
COVID-19 situation:
We are aware that many researchers are still facing major challenges in conducting research due to COVID-19. In particular, in-person user evaluations currently remain impossible for most researchers. However, it is important to highlight that ISMAR is not lowering its publication standards. We encourage authors to consider alternative ways of validating their results, and we ask reviewers to do the same. For many works, there are suitable alternative ways to demonstrate validity.
These guidelines are based on the work of the ISMAR 2019 and 2020 PC Chairs (Shimin Hu, Joseph L. Gabbard, Jens Grubert, Stefanie Zollmann, Denis Kalkofen, Jonathan Ventura) and updated for the ISMAR 2021 review process.
Thank you for your work to ensure the highest quality ISMAR reviews,
ISMAR 2021 PC Chairs,
Daisuke Iwai, Osaka University, Japan
Denis Kalkofen, Graz University of Technology, Austria
Guillaume Moreau, IMT Atlantique, France
Tabitha Peck, Davidson College, USA