Abstract

This study investigates whether AI-powered Low-Code/No-Code (LCNC) solutions may unintentionally generate gender-biased responses. We developed four AI-powered LCNC implementations (Spreadsheet-based, Workflow-based, Web-Application-based, and Mobile-Application-based) using generative AI models from OpenAI, DeepSeek, Anthropic (Claude), and Google DeepMind, and evaluated their outputs in response to prompts designed to highlight potential gendered associations in roles, traits, and personal preferences. Our analysis consists of two parts. First, we applied a mixed-methods structured content analysis to systematically identify potential stereotypical patterns in the responses of the AI models. Second, we compared the outputs across the different AI models for each prompt to explore variations in gender-bias-related behavior. Our findings raise an ethical concern: without appropriate policies and guidelines in place, AI-powered LCNC solutions may replicate or even amplify existing societal biases. This work contributes to ongoing discussions on responsible AI integration and bias-aware design, especially within the evolving LCNC ecosystem.

Publication Info

Year: 2025
Type: article
Volume: 6
Issue: 1
Citations: 0
Access: Closed


Citation Metrics

OpenAlex: 0
Influential: 0

Cite This

Spyridon Tsoukalas, Dialekti Athina Voutyrakou, Marios Karelis et al. (2025). AI-powered LCNC implementations and gender: a comparative study of role attribution bias. AI and Ethics, 6(1). https://doi.org/10.1007/s43681-025-00843-0

Identifiers

DOI: 10.1007/s43681-025-00843-0
