D3 A4 (15min pres)
Tracks
Track A | Ballroom 1 (recorded for in-person & digital)
Saturday, October 26, 2024 | 12:00 PM - 12:15 PM | Stream A | Ballroom 1
Overview
Rethinking assessment validity in the advent of human-copilot partnerships.
(Patrick Dunlop)
Presenter
Professor Patrick Dunlop
Future Of Work Institute, Curtin University
Rethinking Assessment Validity in the Advent of Human-Copilot Partnerships
Author(s)
Dunlop, Patrick D
Wee, Serena
Anglim, Jeromy
Bourdage, Joshua S
Abstract
We are rapidly moving to a world where people collaborate with large language model (LLM)-based copilots such as ChatGPT and Microsoft Copilot on many tasks. These LLM copilots can assist people with complex tasks such as communication, idea generation, and problem solving. Organisations are already integrating these AI tools into employee workflows and, meanwhile, job candidates are also using these tools to help find employment (e.g., to write cover letters, prepare interview responses). Establishing and using effective job candidate assessment tools is the foundation of staffing organisations; however, the extant evidence base for best candidate assessment practices, namely, thousands of validation studies, assumes that humans are the sole agents of what they produce, both as candidates and as employees. In a world of human-copilot collaboration, the tenability of that assumption is questionable, and thus our current validation frameworks must be updated to fit a future world of human-copilot partnerships. Ignoring this shift risks basing hiring decisions on outdated assumptions and criteria that fail to capture the full spectrum of a candidate’s abilities, including their proficiency in effectively and honestly leveraging AI tools.
In this conceptual paper, we present a framework that is designed to underpin the foundational validation work required for the new world of human-copilot partnerships. Specifically, we discuss how to understand and operationalize the criterion-related validity of job candidate assessments for a future where candidates will seek co-pilot assistance with their job applications (with the goal of making positive impressions on employers, with honest or dishonest intentions) and perform work tasks with the assistance of a co-pilot.
Learning outcomes
Attendees will learn about the fundamental assumptions that underpin our evidence base for personnel assessment and selection, and how these assumptions are under threat of disruption as large language models inevitably become mainstream features of work.
Attendees will also learn how estimates of assessment validity can be garnered in light of these changes to work practices.
Patrick Dunlop is a Professor at the Curtin University Future of Work Institute and a registered Organisational Psychologist. His research interests span the processes involved in personnel recruitment, assessment, and selection, including in volunteering contexts. These include attracting talent, designing fair and diversity-supportive selection systems, and ensuring a positive candidate experience. Patrick is especially interested in how technological developments are influencing these processes.