Personal interviews remain at the centre of most recruitment processes. This column examines the impact of ‘contrast effects’ – whereby people evaluate a current choice option relative to the previous one instead of evaluating it individually – in interviews. The authors find that individual admission and hiring recommendations strongly react to the quality of the previously interviewed candidate, particularly when the two candidates share characteristics, when there is little time between the two interviews, and at the beginning of an interview day. They also discuss whether targeted measures can mitigate the impact of these contrast effects.
Selecting the right candidates is one of the most important and difficult challenges that firms and organisations face. While job-testing technologies and algorithms are gaining traction (e.g. Hoffman et al. 2018, Li et al. 2020), personal interviews remain at the centre of most hiring and admission processes.
Through personal interaction, interviews can provide valuable information about a candidate's fit with the specific requirements of a job or programme. However, interview assessments are inherently subjective and particularly susceptible to human biases and heuristics, which can undermine the validity of hiring and admission decisions. Research has shown that biases in subjective assessments can have substantial impacts on high-stakes decisions, such as teacher grades or judicial rulings (e.g. Megalokonomou and Lavy 2003, Patault et al. 2020). While firms and organisations are increasingly concerned about the impact of such biases on interview outcomes, precise empirical knowledge of how they play out in hiring and admission processes is lacking.
In a recent paper (Radbruch and Schiprowski 2024), we analyse and quantify the impact of ‘contrast effects’ in hiring and admission interviews. Contrast effects are a well-documented psychological phenomenon whereby people evaluate a current choice option relative to the previous one, instead of evaluating it individually. For example, laboratory experiments have shown that subjects assess crimes to be less severe after hearing about atrocious crimes (Pepitone and DiNubile 1976). Similarly, studies by Simonsohn and Loewenstein (2006) on housing choices and Bhargava and Fisman (2014) on speed dating show that a person’s most recent experiences influence her subsequent evaluations. This phenomenon is highly relevant for interviewing, where evaluators encounter one candidate after the other, often at high frequency. Candidates might be evaluated less favourably when interviewed after a strong candidate, and vice versa. By introducing arbitrary comparisons between consecutive candidates, contrast effects can threaten the validity of interview assessments and lead organisations to make systematic mistakes in their hiring or admission decisions.
The previous candidate as a benchmark for evaluation
Our study investigates the empirical relevance of contrast effects in real-world hiring and admission processes. We collected data from two independent candidate selection processes, covering a total of more than 35,000 personal interviews. Our first dataset stems from a prestigious study grant programme for German university students, while the second covers the hiring process of a large consulting company. In these processes, evaluators interview several candidates over one or two days. Based on the interview, they assess the candidate’s fit with the selection criteria and give a recommendation regarding the hiring or admission decision. Both datasets involve quasi-random assignment of candidates to evaluators, as well as quasi-random interview ordering, which enables the causal interpretation of our findings.
We find that, despite the random ordering of candidates, individual admission and hiring recommendations strongly react to the quality of the previously interviewed candidate. Figure 1 shows that this leads to a sizeable negative path dependence between the outcomes of subsequent candidates. Specifically, evaluators are up to 13 percentage points (40%) less likely to vote in favour of a candidate after having voted in favour of the previous candidate. The influence of the previous candidate, which is unrelated to the candidate’s own quality and fit, leads to strong changes in final decisions made by the admission and hiring committees.
Figure 1 Probability of positive vote conditional on vote of the previous candidate, with 95% confidence intervals
Note: Panel (a) shows data from the admission process; panel (b) shows data from the hiring process.
Which conditions favour contrast effects in interviews?
We find evidence that similarity between candidates intensifies the contrast effect. Concretely, the previous candidate has a stronger influence when sharing characteristics such as gender or academic background with the current candidate. Moreover, the influence is more pronounced when there is little time between two interviews, whereas longer breaks diminish the size of contrast effects. We also find that contrast effects are stronger at the beginning of an interview day, when evaluators have not yet seen many other candidates. Strikingly, neither overall experience, seniority, nor interview training mitigates the effect, suggesting that it is deep-rooted and widespread across a broad range of evaluators.
Policy responses for firms and organisations
Our results show that decisions made by professional interviewers can be distorted by the evaluation of candidates against an arbitrary benchmark. This effect emerges in two processes that already take several steps to reduce the influence of subjective factors, such as training evaluators, conducting structured interviews, and collecting three independent assessments per candidate. In our study, we investigate whether targeted measures can mitigate the impact of contrast effects.
First, we evaluate the effects of a light-touch awareness intervention conducted in one of the processes we study. Specifically, the interviewer briefing was complemented with a brief summary of our main results, raising awareness and providing advice on how to avoid using the previous candidate as a benchmark. We find that the intervention did not reduce the influence of the previous candidate, suggesting that awareness alone cannot easily fix the problem. Second, we simulate the effects of reordering candidates such that the similarity of consecutive candidates is minimised. While the simulation exercise predicts a non-negligible reduction in the impact of contrast effects, it is unclear how evaluators would react if they became aware of strategic ordering. Third, we simulate the collection of multiple independent assessments per candidate. This reduces the impact of individual contrast effects on final decisions, albeit at a slow rate. As the collection of independent assessments usually involves high costs, organisations would benefit from concentrating such efforts on candidates who are close to the threshold of being hired or admitted and whose interview assessments were likely influenced by contrast effects. A simple algorithm could be used to identify such candidates, as sketched below.
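The paper describes such an algorithm only in general terms; the following is a minimal sketch of how it might look, assuming each interview record carries the evaluator, the candidate's slot in the evaluator's schedule, and a standardised score. All names and cutoffs (Interview, flag_for_review, margin, contrast_cutoff) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: flag candidates whose decision may warrant an additional
# independent assessment because (i) their score sits close to the hiring or
# admission threshold and (ii) the immediately preceding candidate seen by the
# same evaluator scored unusually high or low. Field names and cutoffs are
# illustrative only.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Interview:
    candidate_id: str
    evaluator_id: str
    slot: int          # position in the evaluator's interview schedule
    score: float       # standardised interview score (e.g. z-score)

def flag_for_review(interviews: List[Interview],
                    threshold: float = 0.0,       # assumed hiring/admission cutoff
                    margin: float = 0.25,         # "close to threshold" band
                    contrast_cutoff: float = 1.0  # |previous score| deemed extreme
                    ) -> List[str]:
    """Return IDs of candidates near the cutoff who followed an extreme candidate."""
    flagged: List[str] = []
    # Group interviews by evaluator and process each schedule in slot order.
    by_evaluator: Dict[str, List[Interview]] = {}
    for iv in interviews:
        by_evaluator.setdefault(iv.evaluator_id, []).append(iv)
    for schedule in by_evaluator.values():
        schedule.sort(key=lambda iv: iv.slot)
        for prev, curr in zip(schedule, schedule[1:]):
            near_threshold = abs(curr.score - threshold) <= margin
            extreme_predecessor = abs(prev.score) >= contrast_cutoff
            if near_threshold and extreme_predecessor:
                flagged.append(curr.candidate_id)
    return flagged
```

Under these assumptions, the flagged candidates are exactly those for whom an extra independent assessment is most likely to change the final decision, so the costly re-evaluation effort can be concentrated where contrast effects plausibly mattered.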
Beyond these interventions, organisations can complement subjective interview assessments with an increasing number of alternative tools, such as job-testing technologies or selection algorithms. How to optimally combine objective and subjective information about candidates is an important question for future research and practice.
Source: VoxEU