A helping hand with finding reviewers: introducing the Elsevier reviewer recommender

15 June 2018

์ €์ž: Navid Bazari

Our experience developing decision support tools for editors

ยฉ istockphoto.com/busracavus

At Elsevier, we want to support our editors, particularly with the tasks we know often prove difficult. One of these is the ongoing challenge of identifying suitable reviewers for submissions. In January 2017, we began work on a strategy of developing decision support tools for your editorial needs. It is now a little over six months since the successful beta release of the first tool to come out of this strategy - the EVISE reviewer recommender - so we would like to explain our approach, discuss the lessons learnt and share some of the feedback from your peers who are already using the new technology.

Understanding your needs

As with any potential new tool, a sensible approach is to start with a keen understanding of the problem before looking at possible solutions. In this case, feedback shared with publishers and comments/ratings from the editor satisfaction surveys allowed us to home in on a pattern for the key problems you face as editors.

Having constructed a hypothesis of the most important problems we hoped to address, we then got out of the office to test our assumptions by talking to editors about:

  • How you go about using data to manage your journalโ€™s performance

  • How you find peer-reviewers for submissions

  • How satisfied you are with the tools available

The result from this feedback was clear. Despite many tools being available, finding reviewers remains one of the most challenging tasks you face.

Identifying the problem

Making use of the feedback we had collected, we then visualised the standard approach to finding reviewers using a technique called โ€œuser story mappingโ€. The resulting โ€œmapโ€ showed the standard goals are to:

  1. Match a manuscript with candidate reviewers whose qualifications and interests make them a suitable fit

  2. Remove candidates who have a potential conflict of interest

  3. Look for signals which might indicate the candidate reviewer's willingness to accept a request to review

We continued to develop the map, breaking each goal into sub-goals. This in turn led us to create a new system that responded powerfully to your needs.

Putting our system to the test

When creating systems like this, we are firm believers in iterative, evidence-based development. To ensure we were on the right track, we identified the biggest risk/assumption and designed a test to explore this. The test was in the form of a clickable โ€œwireframeโ€ of a new approach to reviewer identification, known as a โ€œrecommenderโ€. After a few internal iterations of the design we went back out of the office to gather your feedback.

Our fieldwork showed us that it was time to validate whether we could actually build a system that identified good reviewers. Our hypothesis was that our software could, over time, overcome the โ€œcold-startโ€ problem editors usually face when finding reviewers.

Looking at the technology

The tool we have developed has two parts. First, it identifies suitable reviewer candidates. We match the submission's meta-data against all research articles published in the past five years, then extract the authorsโ€™ details for the top-matched articles, including their publishing and reviewing history. Finally, the tool filters out candidates based on our conflict of interest guidelines, for example removing known co-authors from the last three years.
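
To make this first step concrete, here is a minimal, self-contained sketch in Python. The function names, data shapes, thresholds and the crude token-overlap similarity are illustrative assumptions only, not the recommender's actual implementation.

```python
# Hypothetical sketch of the candidate-identification step described above.
# All names, data shapes and thresholds are illustrative assumptions.
from datetime import datetime, timedelta

RECENT_ARTICLE_WINDOW_YEARS = 5   # match against articles published in the past five years
COAUTHOR_COI_WINDOW_YEARS = 3     # exclude co-authors from the last three years


def text_similarity(text_a: str, text_b: str) -> float:
    """Crude token-overlap (Jaccard) stand-in for real content matching."""
    tokens_a, tokens_b = set(text_a.lower().split()), set(text_b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


def find_candidate_reviewers(submission: dict, article_index: list, top_k: int = 50) -> list:
    """Return candidate reviewers for a submission, minus obvious conflicts."""
    now = datetime.now()
    match_cutoff = now - timedelta(days=365 * RECENT_ARTICLE_WINDOW_YEARS)
    coi_cutoff = now - timedelta(days=365 * COAUTHOR_COI_WINDOW_YEARS)

    # 1. Match the submission's meta-data against recently published articles.
    recent = [a for a in article_index if a["published"] >= match_cutoff]
    top_matches = sorted(
        recent,
        key=lambda a: text_similarity(submission["metadata"], a["metadata"]),
        reverse=True,
    )[:top_k]

    # 2. Extract author details (publishing/reviewing history) from the
    #    top-matched articles, de-duplicating by author id.
    candidates = {}
    for article in top_matches:
        for author in article["authors"]:
            candidates.setdefault(author["id"], author)

    # 3. Filter out potential conflicts of interest, e.g. recent co-authors
    #    of the submitting authors.
    submitting_ids = {a["id"] for a in submission["authors"]}
    return [
        c for c in candidates.values()
        if not any(
            co["author_id"] in submitting_ids and co["date"] >= coi_cutoff
            for co in c.get("coauthorships", [])
        )
    ]
```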

Second, the tool generates a meaningful recommendation. We re-rank the candidates using a machine learning model that combines the content similarity with some ten other feature signals. For instance, the number of publications the reviewer has in the journal in question can be a strong indication of potential motivation.
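
The re-ranking step can be pictured as a weighted combination of such signals. Below is a minimal sketch assuming a simple linear scoring model; the real recommender uses a trained machine learning model, and the feature names and weights here are placeholders rather than its actual parameters.

```python
# Minimal re-ranking sketch with placeholder features and weights;
# the production model and its signals are not public.
FEATURE_WEIGHTS = {
    "content_similarity": 0.6,        # how closely the candidate's work matches the submission
    "publications_in_journal": 0.25,  # publications in the journal in question (motivation signal)
    "recent_review_activity": 0.15,   # how recently/often the candidate has reviewed
}


def score_candidate(features: dict) -> float:
    """Combine feature signals into a single ranking score."""
    return sum(FEATURE_WEIGHTS[name] * features.get(name, 0.0) for name in FEATURE_WEIGHTS)


def rank_candidates(candidates_with_features: list) -> list:
    """Re-rank candidates from highest to lowest combined score."""
    return sorted(candidates_with_features, key=lambda c: score_candidate(c["features"]), reverse=True)


# Usage example with toy, already-normalised feature values.
ranked = rank_candidates([
    {"name": "Reviewer A", "features": {"content_similarity": 0.9, "publications_in_journal": 0.2}},
    {"name": "Reviewer B", "features": {"content_similarity": 0.7, "publications_in_journal": 0.8,
                                        "recent_review_activity": 0.5}},
])
```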

The new tool is launched

In November 2017, we signed up 10 journals to take part in a โ€œclosed betaโ€ of the tool. One user was immediately impressed, tweeting this short reaction:

โ€œBeta testing Elsevierโ€™s new tool to identify potential reviewers for manuscripts based on ML algorithms. Wow. Scary good.โ€

We found editors adopted the tool, finding one in every three reviewers they needed with its help. And every two to four weeks we released enhancements based on new learnings.

With these successes we decided to move to an โ€œopen betaโ€ release in April 2018. The tool is now available - on request - to all editors and journals on EVISE. As a result, over 500 more journals have started to use the recommender.

Where to next?

One of our goals is of course to make the tool available for all our journals; however, our vision does not end there. Ultimately, we want to build a platform that helps better allocate reviewers and saves editors time. We have already started work on the next steps, for example enabling reviewer โ€œsignalsโ€ such as days since last review or periods of leave. Do stay tuned for further updates. If you would like a demo of what's available now, please visit our YouTube channel, or if you would like to enable the recommender for your journal, please contact your publisher.

Contributor

Navid Bazari

