People’s reactions to decisions by human vs. algorithmic decision-makers: the role of explanations and type of selection tests

Wesche, Jenny S.; Hennig, Frederike; Kollhed, Christopher Sebastian; Quade, Jessica; Kluge, Sören; Sonderegger, Andreas (2022). People’s reactions to decisions by human vs. algorithmic decision-makers: the role of explanations and type of selection tests. European Journal of Work and Organizational Psychology, pp. 1-12. Routledge. DOI: 10.1080/1359432X.2022.2132940

Full text (PDF, 915 kB): Published Version, available under License Creative Commons: Attribution (CC-BY).

Research suggests that people prefer human over algorithmic decision-makers at work. Most of these studies, however, use hypothetical scenarios, and it is unclear whether such results replicate in more realistic contexts. We conducted two between-subjects studies (N=270; N=183) in which the decision-maker (human vs. algorithmic, Studies 1 and 2), explanations regarding the decision process (yes vs. no, Studies 1 and 2), and the type of selection test (requiring human vs. mechanical skills for evaluation, Study 2) were manipulated. While Study 1 was based on a hypothetical scenario, participants in pre-registered Study 2 volunteered to take part in a qualifying session for an attractively remunerated product test, thus competing for real incentives. In both studies, participants in the human condition reported higher levels of trust and acceptance. Providing explanations also positively influenced trust, acceptance, and perceived transparency in Study 1, while it did not exert any effect in Study 2. The type of selection test affected fairness ratings, with higher ratings for tests requiring human vs. mechanical skills for evaluation. Results show that algorithmic decision-making in personnel selection can negatively impact trust and acceptance, in studies with hypothetical scenarios as well as in studies with real incentives.

Item Type:

Journal Article (Original Article)

Division/Institute:

Business School > Institute for New Work
Business School > Institute for New Work > New Forms of Work and Organisation
Business School

Name:

Wesche, Jenny S.;
Hennig, Frederike;
Kollhed, Christopher Sebastian;
Quade, Jessica;
Kluge, Sören and
Sonderegger, Andreas (ORCID: 0000-0003-0054-0544)

Subjects:

B Philosophy. Psychology. Religion > BF Psychology
Q Science > QA Mathematics > QA75 Electronic computers. Computer science

ISSN:

1359-432X

Publisher:

Routledge

Language:

English

Submitter:

Andreas Sonderegger

Date Deposited:

02 Nov 2022 11:46

Last Modified:

02 Nov 2022 11:46

Publisher DOI:

10.1080/1359432X.2022.2132940

Uncontrolled Keywords:

Algorithmic decision-making; ADM; technology acceptance; trust; explainable AI

ARBOR DOI:

10.24451/arbor.17875

URI:

https://arbor.bfh.ch/id/eprint/17875
