Despite structured interviews, psychometric testing and AI-supported screening, personnel selection continues to produce a high number of false positives and false negatives. Organizations systematically hire people who later fail — and reject candidates who would have performed well.
This talk examines why this problem persists, even under “best practice” conditions. Drawing on evidence from industrial-organizational psychology, meta-analyses on selection validity, and real-world hiring data, I argue that the core issue is not poor execution, but structural limits of selection methods themselves.
Statistical validity does not translate directly into decision accuracy at the individual level. Base rates, construct underrepresentation, context specificity of job performance, and non-orthogonal predictors lead to systematic decision errors — regardless of whether selection is human-driven or algorithmic.
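The gap between statistical validity and individual decision accuracy can be illustrated with a small simulation. This is a minimal sketch, not material from the talk: the validity of 0.30, base rate of 0.50, and selection ratio of 0.20 are assumed values chosen to be in the typical range reported for selection instruments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
validity = 0.30        # assumed predictor-criterion correlation
base_rate = 0.50       # assumed fraction of candidates who would succeed on the job
selection_ratio = 0.20  # assumed fraction of applicants hired

# Bivariate normal predictor and performance scores with the given correlation.
cov = [[1.0, validity], [validity, 1.0]]
predictor, performance = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# "Success" means performance above the base-rate cutoff;
# "hired" means a predictor score above the selection cutoff.
success = performance > np.quantile(performance, 1 - base_rate)
hired = predictor > np.quantile(predictor, 1 - selection_ratio)

false_positive = np.mean(hired & ~success) / np.mean(hired)    # hires who fail
false_negative = np.mean(~hired & success) / np.mean(success)  # good candidates rejected

print(f"share of hires who fail:           {false_positive:.0%}")
print(f"share of good candidates rejected: {false_negative:.0%}")
```

Even with a predictor of respectable validity, a substantial share of hires fail and most candidates who would have succeeded are rejected, because the low selection ratio and the imperfect correlation dominate the individual-level outcome.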
The session critically discusses why AI and matching algorithms inherit the same structural weaknesses as traditional selection tools, and why more data or better models alone will not solve the problem. Instead, we need to rethink what selection can realistically achieve — and where its explanatory power ends.
The goal of this talk is not to promote a specific tool, but to establish a more realistic, evidence-based understanding of personnel selection limits — especially relevant for tech-driven organizations building AI-supported HR systems.