Abstract

Fraud in financial services—especially account opening fraud—poses major operational and reputational risks. Static rules struggle to adapt to evolving tactics, missing novel patterns and generating excessive false positives. Machine learning promises adaptive detection, but deployment faces severe class imbalance: in the NeurIPS 2022 BAF Base benchmark used here, fraud prevalence is 1.10%. Standard metrics (accuracy, f1_weighted) can look strong while doing little for the minority class. We compare logistic regression, SVM (RBF), Random Forest, LightGBM, and a GRU model on N=1,000,000 accounts under a unified preprocessing pipeline. All models are trained to minimize their loss function, while configurations are selected on a stratified development set using validation f1_weighted. For the four classical models, class weighting in the loss (class_weight in {None, 'balanced'}) is treated as a hyperparameter and tuned. The GRU, in turn, is trained with a fixed class-weighted cross-entropy loss that up-weights fraud cases. This ensures that both model families leverage weighted training objectives, while their final hyperparameters are consistently selected by the f1_weighted metric. Despite similar AUCs and aligned feature importance across families, the classical models converge to high-precision, low-recall solutions (1-6% fraud recall), whereas the GRU recovers 78% recall at 5% precision (AUC = 0.8800). Under extreme imbalance, objective choice and operating point matter at least as much as architecture.
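A minimal sketch of the selection protocol the abstract describes, where class weighting is treated as a tunable hyperparameter and configurations are chosen by validation f1_weighted. The synthetic dataset and logistic-regression settings below are illustrative stand-ins, not the paper's actual BAF pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for an extremely imbalanced task (~1% positives,
# roughly matching the 1.10% fraud prevalence cited for BAF Base).
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.99], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight in {None, 'balanced'} is tuned as a hyperparameter;
# model selection uses the f1_weighted metric, as in the abstract.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"class_weight": [None, "balanced"]},
    scoring="f1_weighted",
    cv=3,
)
search.fit(X_tr, y_tr)

print("selected class_weight:", search.best_params_["class_weight"])
print("fraud recall on held-out split:",
      recall_score(y_te, search.predict(X_te)))
```

Because f1_weighted is dominated by the majority class under such imbalance, selection by this metric can favor configurations with low minority-class recall, which is the operating-point effect the abstract highlights.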

Publication Info

Year
2025
Type
article
Cite This

W. Y. Sun, Qiannan Shen, Yijun Gao (2025). Objective over Architecture: Fraud Detection Under Extreme Imbalance in Bank Account Opening. https://doi.org/10.21203/rs.3.rs-8303897/v1

Identifiers

DOI
10.21203/rs.3.rs-8303897/v1