Abstract

In recent years, autoencoders, a family of deep learning-based methods for representation learning, have been advancing data-driven research owing to their versatility and nonlinear power for multimodal data integration. Despite their success, current implementations lack standardization, versatility, comparability and generalizability. Here we present AUTOENCODIX, an open-source framework designed as a standardized and flexible pipeline for the preprocessing, training and evaluation of autoencoder architectures. These architectures, such as ontology-based and cross-modal autoencoders, offer key advantages over traditional methods, including explainable embeddings and the ability to translate across data modalities. We apply the framework to datasets from pan-cancer studies (The Cancer Genome Atlas) and single-cell sequencing, as well as in combination with imaging data. Our studies provide user-centric insights and recommendations for navigating architectures, hyperparameters and important tradeoffs in representation learning, including the reconstruction quality of input data, the quality of embeddings for downstream machine learning models, and the reliability of ontology-based embeddings for explainability.
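The core idea the abstract builds on, an autoencoder that learns a compressed embedding by minimizing reconstruction error, can be illustrated with a minimal pure-Python sketch. This is a hypothetical toy example (a 1-D linear autoencoder trained by gradient descent), not AUTOENCODIX code; all names and values here are illustrative assumptions.

```python
# Toy 2-D data lying on a 1-D line: each point is (t, t),
# so a 1-D embedding is enough to reconstruct it exactly.
data = [(t / 10.0, t / 10.0) for t in range(1, 11)]

w = [0.5, 0.3]  # encoder weights: embedding z = w . x
v = [0.4, 0.6]  # decoder weights: reconstruction x_hat = v * z
lr = 0.1        # learning rate

def recon_error(points, w, v):
    """Total squared reconstruction error over the dataset."""
    err = 0.0
    for x in points:
        z = w[0] * x[0] + w[1] * x[1]  # encode to 1-D
        err += sum((x[i] - v[i] * z) ** 2 for i in range(2))
    return err

before = recon_error(data, w, v)

# Plain stochastic gradient descent on the reconstruction loss.
for epoch in range(2000):
    for x in data:
        z = w[0] * x[0] + w[1] * x[1]            # encode
        r = [x[i] - v[i] * z for i in range(2)]  # residual x - x_hat
        for i in range(2):                       # decoder gradient step
            v[i] += lr * 2 * r[i] * z
        g = sum(r[i] * v[i] for i in range(2))
        for j in range(2):                       # encoder gradient step
            w[j] += lr * 2 * g * x[j]

after = recon_error(data, w, v)
print(before, after)  # error drops sharply after training
```

Real frameworks such as the one described here replace this hand-derived gradient step with deep nonlinear encoders/decoders and automatic differentiation, but the training objective, reconstructing the input through a low-dimensional bottleneck, is the same.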

Publication Info

Year
2025
Type
article
Citations
1
Access
Closed

Citation Metrics

1 citation (OpenAlex)

Cite This

Maximilian Josef Joas, Neringa Jurenaite, Dušan Praščević et al. (2025). AUTOENCODIX: a generalized and versatile framework to train and evaluate autoencoders for biological representation learning and beyond. Nature Computational Science. https://doi.org/10.1038/s43588-025-00916-4

Identifiers

DOI
10.1038/s43588-025-00916-4