Abstract

In the digital era, organizations are increasingly leveraging artificial intelligence (AI) to optimize their operations and decision-making. However, the opacity of AI processes raises concerns over trust, fairness, and autonomy, especially in the gig economy, where AI-driven management is ubiquitous. This study investigates how explainable AI (xAI), through the comparative use of counterfactual versus factual and local versus global explanations, shapes gig workers' acceptance of AI-driven decisions and management relations, drawing on cognitive load theory. Using experimental data from 1,107 gig workers, we found that both counterfactual (relative to factual) and local (relative to global) explanations increase the acceptance of AI decisions. However, the combination of local and counterfactual explanations can overwhelm workers, thereby reducing these positive effects. Furthermore, worker acceptance mediated the relationship between xAI explanations and management relations. A follow-up study using a simplified scenario and additional procedural controls confirmed the robustness of these effects. Our findings underscore the value of carefully tailored xAI in fostering equitable, transparent, and constructive organizational practices in digitally mediated work environments.

Publication Info

Year: 2025
Type: Article
Citations: 0
Access: Closed

Cite This

Miles M. Yang, Ying Lu, Fang Lee Cooke (2025). Demystifying AI for the Workforce: The Role of Explainable AI in Worker Acceptance and Management Relations. Journal of Management Studies. https://doi.org/10.1111/joms.70039

Identifiers

DOI: 10.1111/joms.70039