What is the optimal format for a review?


This topic contains 15 replies, has 8 voices, and was last updated by ThomasA 3 years, 1 month ago.

  • #2432 Score: 1

    brembs
    Participant
    1 pt

    What is the optimal format for a review? A single document with metrics, or a more interactive discussion between authors and reviewers with multiple iterations of edits and comments on an authoring platform? Is there a way to combine the benefits of both formats and get rid of the downsides?

  • #2438 Score: 1

    Stephanie Dawson
    Participant
    5 pts

    I really like the idea of a conversation between colleagues that results in a better paper – probably lots of those happen at conferences, by e-mail or Skype, in discussion forums or beer gardens. It seems hard to me to find a specific format for that. One thing we are trying at ScienceOpen is to institutionalize this conversation as author-mediated pre-publication peer review that we call PRE (Peer Review by Endorsement). The reviewer would be asked to sign their name to the statement:

    “I have read this article, given feedback to the authors and now feel that it is of appropriate quality to be included in the scientific literature and be part of the open scientific discourse.”

    The rest of the discussion remains between author and reviewer, so I am excited to see if it catches on. But these papers, like every paper on ScienceOpen, are also open for Post-Publication Peer Review, with the comments and identity of the reviewer available.

  • #2442 Score: 0

    Jan Velterop
    Participant
    1 pt

    I have posted a piece on ScienceOpen (https://www.scienceopen.com/document/vid/1dcfbe69-c30c-4eaa-a003-948c9700da40) about peer review by endorsement, with peer review being arranged by the authors themselves. It is an attempt to make the whole process of peer-reviewed scientific publishing more efficient, and much less expensive than it commonly is when the process of peer review is arranged and organised by the publisher.

    • #2450 Score: 0

      Pandelis Perakakis
      Participant
      1 pt

      I totally support the author-guided peer review model. In fact, we coined this term in a 2010 publication (a citation would be appreciated 🙂 ). Although we would like to see it implemented in open access repositories, it is wonderful that a publisher like ScienceOpen has endorsed the idea and put it into practice. I truly hope the experiment proves successful 🙂

      • #2458 Score: 1

        Jan Velterop
        Participant
        1 pt

        Happy to acknowledge you, Pandelis, but I hope you are not offended by me saying I would not like to use “author-guided”. To me, that sounds too much like the author trying to guide, i.e. influence, the peer review. I prefer the authors to invite experts to freely and honestly review their manuscripts, and if, probably after some iteration and a discussion with the authors – and with one another – they feel the article could be published, to endorse the paper. If anybody provides guidance in this picture, it is the experts to the authors.

        Rejection isn’t really part of this picture, at least not in the sense we usually know it. If the invited experts don’t think the article is ready, or good enough, for publication, they just don’t endorse it, upon which the authors have to make material changes in order to secure an endorsement before it can be published. Of course, they could try and find another set of experts, but since the experts won’t be anonymous, the chances that they will endorse a bad article are slim, as they won’t want to jeopardise their own reputations.

        There may in some fields be differences of opinion, schools of thought if you wish, that result in one set of invited experts endorsing, and another set withholding endorsement. That doesn’t matter, as long as the (hopefully) ensuing open discussion about the subject of the paper, made possible by its publication, is a legitimate academic one and there is a good post-publication review option via which skepticism or criticism can be expressed. Again, not anonymously.

        Anonymity is possibly suitable for the sensitive process of assessing and judging the value or importance of a paper and its author; it is not for a decision to make an article part of the scholarly discourse by way of publishing it. Conflating and intertwining the two – assessment (for career purposes and the like) on the one hand, and adding to the scientific discourse on the other – is the greatest underlying problem of the current publishing system.

        • #2459 Score: 0

          Pandelis Perakakis
          Participant
          1 pt

          Totally agree with you, Jan. “Author-guided” is probably a wrong choice of words. We never intended to insinuate that authors should “guide” the process. You have been able to better convey the meaning we wanted to give to this “author-initiated” (like that better?) peer review process. In fact, I take your comment as a post-publication review and promise not to use the term author-guided again 🙂

        • #2463 Score: 0

          Jan Velterop
          Participant
          1 pt

          Good. Taking my comments as a post-publication review. I like that! 🙂
          Yes, ‘author-initiated’, or even ‘author-arranged’ or ‘author-mediated’ are more neutral in my opinion.

  • #2445 Score: 0

    EmilOWK
    Participant

    About 1.5 years ago we started a new journal. One of the goals was to change the way peer review works. We came up with the following system:

    Peer review system of OpenPsych

    Thus, the review is open in the sense that everybody can read and participate in the discussion of a given submission.

    In my opinion it works fairly well most of the time. When it doesn’t work well, it is because authors or reviewers are rude. This problem cannot be avoided completely, but because the reviewing process is open and people are (for the most part) using their real names, they have an incentive to not behave badly.

    Some examples:

    • #2453 Score: 0

      carlessierra
      Participant

      Question to EmilOWK.

      How do you avoid collusion between reviewers?

      • #2454 Score: 0

        EmilOWK
        Participant

        Could you give an example of what you have in mind?

  • #2447 Score: 1

    Pandelis Perakakis
    Participant
    1 pt

    All replies so far in this topic seem to favour the second option. As brembs nicely put it: an interactive discussion between authors and reviewers with multiple iterations of edits and comments on an authoring platform.

    In this case, does this mean that the only quantitative input provided by a reviewer should be a binary accept/reject (in the case of pre-publication review) or ready/needs-revisions (in the case of post-publication review)? Are we happy with that? Shouldn’t we take advantage of the opportunity and ask reviewers for more input? For example, a numerical evaluation on specific dimensions (e.g. clarity, methodology, importance in the field, etc.)? Would this quantitative assessment be meaningful, or should we stick to the simpler and more straightforward binary decision of whether a paper meets scientific standards or not?

    The second question is: if a review is not a single document with metrics, but a series of edits and comments, how can we evaluate this review and, consequently, the reviewer? Isn’t a reviewer reputation system useful? Should all opinions count the same? Without doubt, for any given paper some opinions are more relevant than others (e.g. the opinion of the inventor of an algorithm I am using in my model). Also, some reviews are more helpful than others. Shouldn’t we be able to reward reviewers for contributing to improving article quality? Can we somehow keep the concept of interactive reviews and still find a way to build a reviewer reputation system that would also incentivise reviewers? If reviews were a single, standalone document, it would be easy to ask authors and the community to evaluate the helpfulness of this review. Is there a way to do that with “fluid” reviews? Would it be meaningful?
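
    As a minimal sketch of what such structured input could look like (the dimension names, the helpfulness votes, and the aggregation below are illustrative assumptions, not a description of any existing platform): each review carries scores on a few dimensions plus its comments, and a reviewer’s reputation is aggregated from how helpful authors and readers rated their reviews.

        from dataclasses import dataclass, field
        from statistics import mean

        @dataclass
        class Review:
            reviewer: str
            scores: dict                  # e.g. {"clarity": 4, "methodology": 3, "importance": 5}
            comments: list = field(default_factory=list)
            helpfulness_votes: list = field(default_factory=list)  # 1-5 ratings from authors/readers

        def overall_score(review: Review) -> float:
            """Average over the dimension scores; a binary accept could simply be a threshold on this."""
            return mean(review.scores.values())

        def reviewer_reputation(reviews: list) -> float:
            """Naive reputation: mean helpfulness across all of a reviewer's reviews."""
            votes = [v for r in reviews for v in r.helpfulness_votes]
            return mean(votes) if votes else 0.0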

    • #2480 Score: 0

      ThomasA
      Participant

      Several of the questions you pose are related to the proposal I posted here: http://www.sjscience.org/article?id=401. Your questions actually make me reconsider my proposal: what if a review is not simply one coherent statement but a series of interactions with the author – a thread? I do not really think that fits into my proposed assessment model, but since this interactive review process seems a good idea, I think it should. I need to think more about this…

  • #2455 Score: 0

    JaeYung Kwon
    Participant

    Based on my experience as editor for two student journals, I think that a collaborative peer review process is the best approach: two reviewers come together online, through Skype or e-mail, to review a manuscript together, allowing them to provide consistent feedback to the authors. This still allows reviewers to remain anonymous to the authors, which is important for an honest review. There are also numerous benefits, such as mentorship between reviewers (you would be surprised at the number of clueless and biased reviewers out there). I have recently submitted a description of this collaborative review process for publication and will provide a link when it is published. This is a link to our current journal, which has implemented this approach: http://www.hpsj.chd.ubc.ca/index.php/journal

    • #2456 Score: 0

      Pandelis Perakakis
      Participant
      1 pt

      Very interesting process. Thanks for sharing!

      Why do you think anonymity is so important for an honest review? I would claim the opposite: anonymity allows reviewers to hide conflicts of interest, while transparency forces them to support their judgements with strong arguments.

  • #2476 Score: 0

    brembs
    Participant
    1 pt

    In fact, what I meant by ‘authoring platform’ was a way to let reviewers make direct edit suggestions on the manuscript, commenting on specific words or sentences, very much like in a Google Doc.
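
    As a rough, hypothetical illustration of what one such inline suggestion could look like as data (the field names are my own, not any existing platform’s format), accepting a suggestion is then just splicing the proposed wording into the manuscript:

        from dataclasses import dataclass

        @dataclass
        class EditSuggestion:
            """One reviewer suggestion anchored to a span of the manuscript text."""
            reviewer: str
            start: int          # character offset where the annotated span begins
            end: int            # character offset where the annotated span ends
            original: str       # text currently in the manuscript at that span
            replacement: str    # proposed wording ("" means delete the span)
            comment: str = ""   # optional rationale shown alongside the suggestion

        def apply_suggestion(text: str, s: EditSuggestion) -> str:
            """Accept a suggestion by splicing the replacement into the text."""
            assert text[s.start:s.end] == s.original, "manuscript changed since the suggestion was made"
            return text[:s.start] + s.replacement + text[s.end:]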

    Personally, I think that a form of 1990s style message-boards should have been implemented across the board already in, erm, the 1990s? 🙂 But others may disagree.

    • #2481 Score: 0

      ThomasA
      Participant

      It seems hypothes.is could be a solution for this way of reviewing, with comments placed directly in the manuscript text. This is something I have often wished for when reviewing papers.
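
      For what it is worth, hypothes.is exposes public annotations through a search API, so review comments anchored in a manuscript could in principle be collected and archived alongside it. A minimal sketch (assuming the public endpoint at https://api.hypothes.is/api/search; the manuscript URL below is a placeholder):

          import requests

          MANUSCRIPT_URL = "https://example.org/preprints/my-manuscript"  # placeholder

          # Ask the public Hypothesis search API for annotations on that document.
          resp = requests.get(
              "https://api.hypothes.is/api/search",
              params={"uri": MANUSCRIPT_URL, "limit": 50},
              timeout=30,
          )
          resp.raise_for_status()

          for ann in resp.json().get("rows", []):
              # Each annotation carries the annotator's handle, the quoted passage
              # (if the annotation is anchored to text) and the comment itself.
              quote = ""
              for target in ann.get("target", []):
                  for sel in target.get("selector", []):
                      if sel.get("type") == "TextQuoteSelector":
                          quote = sel.get("exact", "")
              print(ann.get("user"), "on:", quote[:60])
              print("   ", ann.get("text", ""))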
