Abstract
In the digital era, organizations are increasingly leveraging artificial intelligence (AI) to optimize their operations and decision-making. However, the opaqueness of AI processes raises concerns over trust, fairness, and autonomy, especially in the gig economy, where AI-driven management is ubiquitous. This study investigates how explainable AI (xAI), through the comparative use of counterfactual versus factual and local versus global explanations, shapes gig workers' acceptance of AI-driven decisions and management relations, drawing on cognitive load theory. Using experimental data from 1107 gig workers, we found that both counterfactual (relative to factual) and local (relative to global) explanations increase the acceptance of AI decisions. However, the combination of local and counterfactual explanations can overwhelm workers, thereby reducing these positive effects. Furthermore, worker acceptance mediated the relationship between xAI explanations and management relations. A follow-up study using a simplified scenario and additional procedural controls confirmed the robustness of these effects. Our findings underscore the value of carefully tailored xAI in fostering equitable, transparent, and constructive organizational practices in digitally mediated work environments.
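The study's manipulation contrasts two dimensions of xAI explanations: counterfactual versus factual, and local versus global. As a point of reference only, the minimal sketch below illustrates how the four resulting explanation styles might read for a hypothetical gig-work assignment decision; the feature names, thresholds, and message wording are invented for illustration and are not drawn from the paper's materials.

```python
# Minimal sketch (not from the paper): the four explanation styles the study
# contrasts, phrased for a hypothetical gig-work assignment decision.
# All features, thresholds, and wording below are illustrative assumptions.

def factual_local(features: dict) -> str:
    """Factual + local: explain one worker's outcome by the factors that produced it."""
    return (f"Your acceptance rate ({features['acceptance_rate']:.0%}) and "
            f"average rating ({features['rating']:.1f}) led to this assignment.")

def counterfactual_local(features: dict) -> str:
    """Counterfactual + local: explain one worker's outcome by what would have changed it."""
    return (f"Had your acceptance rate been above 90% instead of "
            f"{features['acceptance_rate']:.0%}, you would have received "
            f"priority assignments.")

def factual_global(weights: dict) -> str:
    """Factual + global: describe how the model weighs factors across all workers."""
    ranked = sorted(weights, key=weights.get, reverse=True)
    return "Across all workers, assignments are driven by: " + ", ".join(ranked)

def counterfactual_global(threshold: float) -> str:
    """Counterfactual + global: state the decision boundary that applies to everyone."""
    return (f"Any worker whose acceptance rate exceeds {threshold:.0%} "
            f"receives priority assignments.")

worker = {"acceptance_rate": 0.82, "rating": 4.6}
weights = {"acceptance_rate": 0.5, "rating": 0.3, "tenure": 0.2}
print(factual_local(worker))
print(counterfactual_local(worker))
print(factual_global(weights))
print(counterfactual_global(0.90))
```

On the abstract's account, the local counterfactual style (the second function) is the most persuasive on its own, but combining local scope with counterfactual framing can raise cognitive load and blunt that advantage.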
Publication Info
- Year: 2025
- Type: article
- Citations: 0
- Access: Closed
Identifiers
- DOI: 10.1111/joms.70039