Digital dashboard tackles unconscious bias in academic recruitment
March 10, 2023
By Linda Willems
New application has indicators that provide a more holistic view of candidates by highlighting attributes like mentoring, social engagement and collaboration
Academic recruitment is fraught with challenges. It can be time-consuming and complex, and with its reliance on publications and personal networks, there's a high potential for bias. This bias can compromise the integrity of research, which depends on diverse perspectives. And it ultimately impacts researchers and their careers. "If early-career researchers feel they aren't being recognized, promoted or rewarded, they will go and do something else," says Prof Margaret Sheil.
As Vice Chancellor of Queensland University of Technology (QUT), Margaret has a first-hand view of the challenges and intricacies of the recruitment and promotion process in academia, and innovative ideas on how to improve it.
The topic came up when Margaret met with Elsevier's Chief Academic Officer, Dr Nick Fowler, during his visit to Australia in 2020. When they touched on the use of research indicators in recruitment and promotion, Margaret's gender radar was triggered:
There is this view that using indicators and data to generate candidate shortlists is less biased than traditional methods, such as reviewing CVs. But I think indicators can be just as biased, and can discriminate against women, in particular.
She points to the h-index as one of the biggest culprits:
It relies on length of career and other factors, including patronage from senior colleagues, yet women are more likely than men to take career breaks for family reasons, and they are nominated less frequently for awards and international collaborations.
However, her years in academia and university leadership roles, along with her policy work for the Australian government, have shown Margaret just how valuable indicators can be.
People really like using them (I do too when I want evidence of outcomes or performance in an area or discipline), but I am very careful when they are applied to the evaluation of individuals.
So I asked Nick whether we could use indicators to make the initial selection process for vacancies fairer; for example, look at people's careers more inclusively and reward other behaviors, such as being a generous collaborator or mentor.
Nick's answer was a resounding yes:
Working with someone like Margaret, who is so knowledgeable and respected, is a huge honor. And when she described her idea to me, not only did it make so much sense, I also knew our teams had the experience, data access and computing power to make it a reality.
Once home in Amsterdam, Nick connected Margaret with colleagues in Elsevier's International Center for the Study of Research (ICSR) and its ICSR Lab. Together, they embarked on a project that has the potential to transform academic recruitment. Over the 18 months that followed, Margaret and the ICSR team developed a prototype application with an interactive dashboard that contains an array of indicators. These include familiar choices, such as publication count and h-index, but also new and broader indicators that expand the definition of researcher success.
The application is currently being tested by Margaret and other research leaders in the higher education sector. And while the project team gathers their feedback, work is already underway on the next stage of the collaboration: the development of a graphical CV for researchers that provides an enriched view of their careers and achievements over time.
The recruitment dashboard: creating a more even playing field
How the application works
An institution provides the ICSR with a search query defining the field of expertise required for the vacancy. This is used to generate a dashboard of suitable candidates, which contains more than 30 indicators on five themed tabs (a sketch of this grouping follows the list):
Career Status
Is Innovative
Leader in the Field
Multi-Dimensional
Social and Education Oriented
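As a rough mental model, the dashboard can be pictured as a mapping from tabs to indicators. In the sketch below, the tab names come from the article, but which indicators sit on which tab is an illustrative assumption pieced together from the examples discussed later (funder acknowledgements, FWCI, co-authorship and advisees).

```python
# Hypothetical grouping of indicators under the five tabs; the tab names are
# real, but the placement of individual indicators is an illustrative guess.
DASHBOARD_TABS = {
    "Career Status": ["publication count", "h-index"],
    "Is Innovative": ["% publications with funder acknowledgement", "patent citations"],
    "Leader in the Field": ["5-year FWCI", "prizes and awards"],
    "Multi-Dimensional": ["co-author count", "% women co-authors"],
    "Social and Education Oriented": ["number of advisees", "% women advisees"],
}

# Print a quick overview of the assumed structure
for tab, indicators in DASHBOARD_TABS.items():
    print(f"{tab}: {', '.join(indicators)}")
```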
Figure 1 shows the Is Innovative tab for candidates. Dynamic text at the top provides a breakdown of inferred gender, including an "unknown" category for researchers whose gender cannot be inferred within the confidence threshold. For Margaret, even this relatively simple data point adds great value:
Knowing the gender breakdown of the candidates on your shortlist can help to raise early red flags. For example, if I'm recruiting for a biology position and I see more men than women, then I know I should check the data because women generally outnumber men in that field.
Each indicator has a histogram showing the distribution of candidates by gender, with a "slider" filter that dynamically changes the data. For example, Margaret can reduce her shortlist by raising the minimum percentage of a candidate's publications that must carry a funder acknowledgement. And whenever a filter is adjusted, every histogram in the dashboard is updated, including the figures that break down the gender proportions in the shortlist. This allows Margaret to see how her weighting affects other indicators, the overall number of researchers on her shortlist, and the ratio between men and women.
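To make the filtering behavior concrete, here is a minimal sketch of how a slider threshold could narrow the candidate pool and recompute the gender breakdown. The column names (pct_funder_ack, inferred_gender) and the pandas-based approach are assumptions for illustration, not the dashboard's actual implementation.

```python
# Minimal sketch of slider-style filtering; column names and data are invented.
import pandas as pd

candidates = pd.DataFrame({
    "author_id": [101, 102, 103, 104, 105],
    "inferred_gender": ["woman", "man", "unknown", "woman", "man"],
    "pct_funder_ack": [62.0, 30.5, 75.0, 48.0, 81.0],  # % of publications acknowledging a funder
})

def apply_slider(df: pd.DataFrame, min_pct_funder_ack: float) -> pd.DataFrame:
    """Keep only candidates at or above the slider threshold."""
    return df[df["pct_funder_ack"] >= min_pct_funder_ack]

def gender_breakdown(df: pd.DataFrame) -> pd.Series:
    """Recompute the gender proportions shown at the top of the tab."""
    return df["inferred_gender"].value_counts(normalize=True).round(2)

shortlist = apply_slider(candidates, min_pct_funder_ack=50.0)
print(len(shortlist), "candidates remain")   # 3 candidates remain
print(gender_breakdown(shortlist))           # updated woman/man/unknown proportions
```

In the real dashboard, every histogram would be redrawn from the filtered pool in the same way, which is what lets a recruiter see the knock-on effects of each adjustment.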
Dr Angela McGuire, who headed up the project for the ICSR, explains:
What I like about the dashboard is that it helps recruiters make evidence-based decisions, and they can see the immediate impact of their choices on the gender balance of the candidate pool. I know that it's not always possible to get a 50:50 gender split in a shortlist, but improving on the initial gender ratio, and getting more women seen outside of personal networks, is a positive start.
I also like that it highlights the efforts of groups that might otherwise get overlooked, like candidates that are nurturing the next generation and those that are supporting inclusivity and diversity by expanding their collaboration networks. This dashboard makes it possible for these efforts to be recognized and rewarded.
A Results table at the bottom of the dashboard enables institutions to select and download indicators for their chosen shortlist of candidates. The candidates are not individually identifiable by gender.
Generating new insights with new indicators
For Margaret, while all tabs and indicators have their value, there are a few she turns to regularly. For example, histograms on the Social and Education Oriented tab show the number of candidates' advisees and their percentage breakdown by gender.
"These are some of my favorites," she says. "I want to recruit people for QUT who will mentor the next generation of researchers. The advisee indicator shows how nurturing the candidates on the shortlist are overall, as well as how nurturing they are towards men and women, in particular."
She adds: "While these indicators still need refining, people are already getting excited about them as they credit something that is rarely recognized."
The co-author information on the Multi-Dimensional tab is also proving popular at QUT (see Figure 2).
"The indicators show that women tend to have a higher percentage of women co-authors than men do," Margaret explains. "That's something we wouldn't see if we were looking at CVs alone."
When considering men for a role, Margaret looks to see whether they have a healthy number of women co-authors relative to the field. "That tells me they have a diverse lab and are thinking more inclusively and broadly," she says. "And if I see a high-profile woman who isn't publishing with other women, that suggests they aren't necessarily good at supporting women colleagues."
For Margaret, the ability to work in a team and collaborate at all levels is crucial for potential QUT employees. "I was recently selecting a new head of school, and one of the school's professors pointed out that while a man candidate had a really good research record, the data on the dashboard showed that he rarely published with more than one person. He said, 'That's not what we are about here. We are collaborative and want to work across disciplines.' He had a point!"
Margaret also finds the 5-year Field-Weighted Citation Impact (FWCI) on the Leader in the Field tab useful. The FWCI normalizes citation impact by field: a value of 1 indicates the average global impact for that research area, a value above 1 indicates more citations than expected for the field, and a value below 1 indicates fewer.
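As a rough worked example of what field weighting means (the figures below are invented, and the precise SciVal methodology, which matches publications on field, document type and publication year, is more involved than a single ratio):

```python
# Toy illustration of field weighting; numbers are invented and the real
# SciVal calculation (matching on field, document type and year) is more involved.
citations_received = 54    # citations to a researcher's recent papers
expected_for_field = 30    # average citations for comparable papers in the same field
fwci = citations_received / expected_for_field
print(round(fwci, 2))      # 1.8 -> roughly 80% more citations than the field average
```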
Margaret reveals: "I was on the selection committee for a prestigious national prize with a shortlist of very high-performing researchers. We evaluated them in a variety of ways, but there was one woman candidate with a high h-index, and the committee members thought it was down to her field, where everyone has high publication rates. Using the FWCI, I was able to show that, even within that field, she was a very strong performer."
Addressing a pressing industry need
For Margaret, initiatives like the dashboard are much needed given the ongoing rise in indicator types and uptake. "We don't want people relying on indicators that entrench explicit bias at the same time as we are trying to change behaviors." Importantly, the dashboard can also result in a more cost-efficient and effective recruitment process for institutions.
And to those who might have concerns over the potential for positive discrimination, she says: "This initiative is just about broadening your pool beyond the networks of your team or external recruiters. We are not diluting excellence; we always want to appoint quality people. The dashboard just helps us make good choices about how we recruit them."
From Margaret's perspective, these choices not only create a fairer recruitment process, but they also help to build a better future:
There are so many examples of where diversity has led to better outcomes; for example, in tackling implicit biases in research questions. A broader network brings the best minds to a problem. And universities need to reflect the societies they serve; that's something we haven't done historically, at least in recent times.
For researchers, benefits include a more transparent and equitable recruitment process. Margaret notes: "I've found that people don't mind entering a competition if they feel it's a fair or level playing field. And this dashboard will help us to retain talent. If early-career researchers feel they aren't being recognized, promoted or rewarded, they will go and do something else. So, for me, the dashboard is not just about recruitment, it also has the potential to help us promote and support the right internal people."
The graphical CV: telling the story of the researcher
The second strand of the collaboration between the ICSR and QUT is the creation of a graphical CV for researchers. Although the project is still in its early stages, Figure 3 shows a conceptual view.
According to Angela, the goal is to create a CV for each candidate that displays selected indicators over time, helping recruiters gauge their career trajectory. Although the CV will be automatically generated using data from a variety of sources, researchers can annotate the document, for example to provide more detail on career breaks or achievements.
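One way to picture the underlying data, purely as a hypothetical shape rather than the project's actual schema, is a per-researcher time series of indicator values with free-text annotations attached to specific years:

```python
# Hypothetical data shape for a graphical CV; field names are illustrative
# and do not reflect the ICSR/QUT project's actual schema.
from dataclasses import dataclass, field

@dataclass
class YearRecord:
    year: int
    publications: int
    fwci: float        # field-weighted citation impact for that year
    advisees: int      # graduate students supervised that year

@dataclass
class GraphicalCV:
    researcher_id: str
    timeline: list[YearRecord] = field(default_factory=list)
    annotations: dict[int, str] = field(default_factory=dict)  # researcher-supplied notes by year

cv = GraphicalCV("auth-001")
cv.timeline.append(YearRecord(2018, publications=4, fwci=1.3, advisees=2))
cv.annotations[2019] = "Career break (parental leave)"
```

Plotting a timeline like this, rather than a single aggregated number, is what would let a recruiter see the peaks Margaret describes below.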
For Margaret, this kind of in-depth, more holistic view of the researcher is invaluable from a recruitment perspective: "I'm looking for people who are good at what they do. So if there's a career break, this CV helps me understand what they did before that break, and whether it was any good." She adds:
Whatever a candidate's background, I want people who can climb a mountain and reach the peak.
"The aggregated indicators you see in traditional CVs often average things so that those peaks are excluded," she explains. "For example, I know people who have had a good h-index for a very long time, just based on one or two blockbuster PhD papers. Seeing the trajectory of a career like this is much more helpful. There is so much richness in there."
Next steps
The project team plans to continue developing the CV concept with input from Margaret and her colleagues at QUT. It will also continue gathering feedback from the academic community to optimize the dashboard's performance and usefulness for QUT and other universities and research institutions worldwide.
According to Angela, she and the ICSR team are also exploring additional ways the dashboard can be generated. It currently relies on the institution supplying a customized search query, but in the future, it might be possible to create a dashboard based on a SciVal Topic, or even select a researcher who fits a vacancy's requirements and then ask the dashboard to find lookalikes.
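One way the "lookalike" idea could work, offered here only as a hedged sketch and not as the ICSR team's planned method, is to rank candidates by the similarity of their indicator profiles to a reference researcher:

```python
# Sketch of a possible "lookalike" ranking using cosine similarity over
# indicator vectors; the indicators and method are assumptions, not the
# ICSR team's plan. In practice each indicator would be normalized first.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Indicator vector: [publication count, 5-year FWCI, advisees, % women co-authors]
reference = np.array([45, 1.6, 8, 0.42])     # researcher who already fits the vacancy
candidates = {
    "auth-101": np.array([50, 1.5, 7, 0.40]),
    "auth-102": np.array([12, 0.8, 1, 0.10]),
}

ranked = sorted(candidates, key=lambda k: cosine_similarity(reference, candidates[k]), reverse=True)
print(ranked)  # most similar indicator profile first
```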
Angela adds: "We have so many ideas, and many of them come from Margaret. We've really appreciated her honest feedback; every time she suggests a pivot in direction, the dashboard improves."
Margaret is also eager to continue the collaboration:
I could have found a data scientist to do this work for me, but the ICSR is at the forefront of this field, and far ahead of where any academic would be: it understands what's possible in terms of the data and is familiar with institutions' needs and best practice. Having this kind of think tank available is a great opportunity.
For Margaret, the collaboration has been a rare opportunity to return to her academic roots:
This has really felt like a proper research project. By working with the ICSR team, which is at the cutting edge of this kind of work, I'm learning so much.
About the International Center for the Study of Research
The International Center for the Study of Research (ICSR) seeks to advance research evaluation in all fields of knowledge production. It delivers on this mission through its research projects and reports, and its ICSR Lab, a cloud-based computational platform that enables researchers to analyze big datasets, including those that power Elsevier solutions such as Scopus, SciVal and PlumX. For this project, the ICSR team drew on multiple data sources, including:
Scopus author profiles for publication and citation data
Inferred gender*
PlumX Indicators for the reach and impact of online publications
SciVal Topics for a fine-grained view of research focus
Policy data from Overton
Information on patents from LexisNexis PatentSight
Publicly available data on researcher prizes and awards
*We used the gender inference approach previously deployed in Elsevier's 2020 report on gender in research. With this approach, the inferred gender is given simply as man, woman or unknown, and we note that this binary framing is a limitation. Gender inference is based on first and last name and country of origin (defined as the country in which the author published the most papers in their first year). Gender is assigned when the inference meets a confidence threshold of 0.85 or higher; otherwise, it is classified as "unknown".
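For illustration, the thresholding step described above could look like the sketch below; the 0.85 cutoff and the "unknown" fallback follow the note, but the scoring function is a placeholder rather than the report's actual model.

```python
# Illustrative thresholding only; score_name is a placeholder standing in for
# the name/country model used in Elsevier's 2020 report, which is not shown here.
CONFIDENCE_THRESHOLD = 0.85

def score_name(first_name: str, last_name: str, country: str) -> tuple[str, float]:
    """Placeholder scorer; a real implementation would consult name/country data."""
    return ("woman", 0.50)  # dummy label and confidence so the sketch runs

def infer_gender(first_name: str, last_name: str, country: str) -> str:
    """Return 'woman', 'man' or 'unknown' for an author profile."""
    label, confidence = score_name(first_name, last_name, country)
    return label if confidence >= CONFIDENCE_THRESHOLD else "unknown"

print(infer_gender("Alex", "Smith", "Australia"))  # 'unknown' -> confidence below 0.85
```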