Abstract

Motivation: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general-domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora.

Results: We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts.

Availability and implementation: We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.
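
The fine-tuning workflow described above (one pre-trained encoder with a thin task-specific output layer per task) can be sketched with standard Transformer tooling. The snippet below is a minimal illustration, not the authors' released code: it assumes the Hugging Face transformers library and the community Hub checkpoint "dmis-lab/biobert-base-cased-v1.1" as a mirror of the released weights, and uses a hypothetical three-label BIO tagging scheme for named entity recognition.

    import torch
    from transformers import AutoTokenizer, AutoModelForTokenClassification

    # Assumed Hub mirror of the BioBERT weights; swap in the official release if preferred.
    MODEL_ID = "dmis-lab/biobert-base-cased-v1.1"

    # Load the biomedical encoder and attach a fresh token-classification (NER) head.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForTokenClassification.from_pretrained(MODEL_ID, num_labels=3)  # O / B / I (illustrative)

    # Run a forward pass on one biomedical sentence.
    sentence = "BRCA1 mutations increase the risk of breast cancer."
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

    # Per-token predictions; the classification head is untrained, so these are placeholders
    # until the model is fine-tuned on an annotated NER corpus.
    print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))
    print(logits.argmax(dim=-1)[0].tolist())

In practice the classification head would be trained end-to-end on a labeled biomedical NER corpus before the predictions carry meaning; the same encoder-plus-head pattern extends to relation extraction and question answering with different output layers.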

Keywords

Biomedical text mining, Computer science, Artificial intelligence, Natural language processing, Language model, Named-entity recognition, Relationship extraction, Text mining, Text corpus, Representation (politics), F1 score, Domain (mathematical analysis), Source code, Information extraction

MeSH Terms

Data Mining, Language, Natural Language Processing, Software


Publication Info

Year: 2019
Type: Article
Volume: 36
Issue: 4
Pages: 1234-1240
Citations: 6148
Access: Closed

Citation Metrics

OpenAlex: 6148
Influential: 795
CrossRef: 4333

Cite This

Jinhyuk Lee, Wonjin Yoon, Sungdong Kim et al. (2019). BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4), 1234-1240. https://doi.org/10.1093/bioinformatics/btz682

Identifiers

DOI: 10.1093/bioinformatics/btz682
PMID: 31501885
PMCID: PMC7703786
arXiv: 1901.08746

Data Quality

Data completeness: 93%