PLoS One. 2021 Nov 9;16(11):e0259763. doi: 10.1371/journal.pone.0259763

A clinical specific BERT developed using a huge Japanese clinical text corpus

Yoshimasa Kawazoe 1,*,#, Daisaku Shibata 1,#, Emiko Shinohara 1,#, Eiji Aramaki 2,#, Kazuhiko Ohe 3,#
Editor: Diego Raphael Amancio
PMCID: PMC8577751  PMID: 34752490

Abstract

Generalized language models that are pre-trained with a large corpus have achieved great performance on natural language tasks. While many pre-trained transformers for English have been published, few models are available for Japanese text, especially in clinical medicine. In this work, we demonstrate the development of a clinical-specific BERT model with a huge amount of Japanese clinical text and evaluate it on the NTCIR-13 MedWeb task, which consists of fake Twitter messages regarding medical concerns annotated with eight labels. Approximately 120 million clinical texts stored at the University of Tokyo Hospital were used as our dataset. A BERT-base model was pre-trained using the entire dataset and a vocabulary of 25,000 tokens. The pre-training was almost saturated at about 4 epochs, and the accuracies of Masked-LM and Next Sentence Prediction were 0.773 and 0.975, respectively. The developed BERT did not show significantly higher performance on the MedWeb task than the other BERT models that were pre-trained with Japanese Wikipedia text. The advantage of pre-training on clinical text may become apparent in more complex tasks on actual clinical text, and such an evaluation set needs to be developed.

1 Introduction

In recent years, generalized language models that perform pre-training on a huge corpus have achieved great performance on a variety of natural language tasks. These language models are based on the transformer architecture, a novel neural network that relies solely on a self-attention mechanism [1]. Models such as Bidirectional Encoder Representations from Transformers (BERT) [2], Transformer-XL [3], XLNet [4], RoBERTa [5], XLM [6], GPT [7], and GPT-2 [8] have been developed and have achieved state-of-the-art results. It is preferable that the domain of the corpus used for pre-training is the same as that of the target task. In the fields of life science and clinical medicine, domain-specific pre-trained models, such as Sci-BERT [9], Bio-BERT [10], and Clinical-BERT [11], have been published for English texts. A study using the domain-specific pre-trained Clinical-BERT model reported performance improvements on common clinical natural language processing (NLP) tasks compared to nonspecific models.

While many BERT models for English have been published, few models are available for Japanese texts, especially in clinical medicine. One option available for Japanese clinical texts is the multilingual BERT (mBERT) published by Google; however, mBERT is at a disadvantage in word-based tasks because of its character-based vocabulary. For general Japanese texts, BERTs pre-trained on Japanese Wikipedia have been published [12, 13]; however, their applicability to NLP tasks in clinical medicine has not yet been studied. Because clinical narratives (physicians’ or nurses’ notes) differ in their linguistic characteristics from text on the web, pre-training on clinical text should be advantageous for clinical NLP tasks. In this work, we developed and publicly released a BERT that was pre-trained with a huge amount of Japanese clinical narratives. We also present an evaluation of the developed clinical-specific BERT through its comparison with three nonspecific BERTs for Japanese text on a shared NLP task.

2 Methods

2.1 Datasets

Approximately 120 million lines of clinical text gathered over a period of eight years and stored in the electronic health record system of the University of Tokyo Hospital were used. These texts were mainly recorded by physicians and nurses during daily clinical practice. Because Japanese text mixes two-byte full-width characters (mainly Kanji, Hiragana, or Katakana) and one-byte half-width characters (mainly ASCII characters), Normalization Form Compatibility Composition (NFKC) followed by conversion to full-width characters was applied to all characters as a pre-processing step. Because the clinical text may contain personal information of patients, it was anonymized as much as possible by automated processing. Data collection followed a protocol approved by the Institutional Review Board (IRB) at the University of Tokyo Hospital (2019276NI). The IRB approved the possible inclusion of personal information in some of the texts used in this study.
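The character-level normalization described above can be illustrated with Python's standard library. The sketch below applies NFKC only; the additional conversion to full-width characters used by the authors is noted but not reproduced, since its exact rules are not given in the paper.

```python
import unicodedata

def preprocess(text: str) -> str:
    """Unicode NFKC normalization as described in Section 2.1.

    NFKC folds compatibility characters (e.g., half-width Katakana and
    full-width ASCII) into a canonical composed form. The subsequent
    conversion to full-width characters used in the paper is site-specific
    and is not reproduced here.
    """
    return unicodedata.normalize("NFKC", text)

# Half-width Katakana and full-width digits are normalized:
print(preprocess("ﾃﾞｰﾀ１２３"))  # -> データ123
```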

2.2 Tokenization of Japanese text

To input a sentence into BERT, it is necessary to segment the sentence into tokens included in the BERT vocabulary. In non-segmented languages such as Japanese or Chinese, a tokenizer must identify word boundaries without the aid of explicit word delimiters. To obtain BERT tokens from Japanese text, morphological analysis followed by wordpiece tokenization was applied. Morphological analyzers such as MeCab [14] or Juman++ [15] are commonly used in Japanese text processing to segment a source text into word units that are pre-defined in the analyzer’s dictionary. Subsequently, wordpiece tokenization is applied, which segments a word unit into subword tokens included in the BERT vocabulary. During wordpiece tokenization, a word like playing is segmented into two subwords, namely play and ##ing. A subword that starts with ## is not the beginning of a word but an appendage to the preceding subword. Fig 1 shows a schematic view of the morphological analysis and wordpiece tokenization of a Japanese text.

Fig 1. The schematic view of morphological analysis and wordpiece tokenization.

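As a rough illustration of this two-stage tokenization (not the authors' exact pipeline), the sketch below segments a sentence into word units with MeCab via the fugashi wrapper and then applies a greedy longest-match wordpiece split; fugashi and the toy vocabulary are assumptions made only for this example.

```python
from fugashi import Tagger  # MeCab wrapper; an assumption, not the authors' exact setup


def wordpiece(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first subword segmentation (WordPiece-style)."""
    pieces, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            candidate = ("##" if start > 0 else "") + word[start:end]
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            return [unk]          # no subword of this word is in the vocabulary
        pieces.append(piece)
        start = end
    return pieces


# Toy vocabulary for illustration only.
vocab = {"頭痛", "微熱", "##熱", "が", "する", "続く"}
tagger = Tagger()
tokens = [p for w in tagger("頭痛がする") for p in wordpiece(w.surface, vocab)]
print(tokens)  # ['頭痛', 'が', 'する']
```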

2.3 Making BERT vocabulary

A BERT model requires a fixed-size token vocabulary for its wordpiece embeddings. To make the BERT vocabulary, candidate word pieces were obtained by applying morphological analysis followed by Byte Pair Encoding (BPE) [16] to the entire dataset. MeCab was used as the morphological analyzer along with the mecab-ipadic-NEologd dictionary [17] and J-MeDic [18] as external dictionaries. The former was built from various resources on the web and was used to identify personal names in the clinical text as much as possible and aggregate them into a special token (@@N). The latter is a domain-specific dictionary built from Japanese clinical text and was used to segment words for diseases or findings into as large a unit as possible. BPE first decomposes a word unit into character symbols and then creates a new symbol by merging two adjacent, highly frequent symbols. The merging process stops when the number of distinct symbols reaches the desired vocabulary size. In addition, candidate words that represented specific people or facilities were excluded through manual screening, which allowed us to make the developed BERT publicly available. Eventually, 25,000 tokens including special tokens were adopted as the vocabulary.
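The BPE merge loop described above can be sketched as follows; the word frequencies and the number of merges are invented for illustration, and this is not the authors' vocabulary-building code.

```python
import re
from collections import Counter


def pair_stats(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs


def merge_pair(pair, vocab):
    """Merge every occurrence of the given symbol pair into one new symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}


# Word units are pre-split into character symbols; frequencies are illustrative.
vocab = {"低 血 糖": 10, "高 血 糖": 8, "高 血 圧": 12}
for _ in range(3):  # the number of merges stands in for the target vocabulary size
    stats = pair_stats(vocab)
    if not stats:
        break
    vocab = merge_pair(max(stats, key=stats.get), vocab)
print(vocab)  # {'低血 糖': 10, '高血 糖': 8, '高血圧': 12}
```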

2.4 Pre-training of BERT

BERT has shown state-of-the-art results for a wide range of tasks, such as single-sentence classification, sentence-pair classification, and question answering, without substantial task-specific architecture modifications. The novelty of BERT is that it takes the idea of learning word embeddings one step further by learning each embedding vector in consideration of the co-occurrence of words. To do this, BERT utilizes the self-attention mechanism, which learns sentence and token embeddings by capturing co-occurrence relationships among them. BERT is pre-trained by inputting fixed-length token sequences obtained from pairs of sentences and optimizing the Masked-LM and Next Sentence Prediction objectives simultaneously. As these two tasks do not require manually supervised labels, the pre-training is conducted as self-supervised learning.

2.5 Masked-LM

Fig 2A shows a schematic view of Masked-LM. This task masks, randomly replaces, or keeps each input token with a certain probability and estimates the original tokens. Estimating not only the masked tokens but also the replaced or kept tokens helps to maintain a distributional contextual representation of every input token. Although the selection probability of the tokens to be processed is arbitrary, we used the 15% reported in the original paper [2].
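A minimal sketch of the token-selection step is shown below. The 80/10/10 split between masking, random replacement, and keeping follows the original BERT paper cited above; the split is not restated in this paper, so it should be read as an assumption.

```python
import random


def mask_tokens(tokens, vocab, select_prob=0.15):
    """Select ~15% of positions; of those, 80% -> [MASK], 10% -> random token, 10% -> kept.

    Returns the corrupted sequence and the prediction targets
    (None means the position contributes no Masked-LM loss).
    """
    corrupted, targets = [], []
    for tok in tokens:
        if tok in ("[CLS]", "[SEP]") or random.random() >= select_prob:
            corrupted.append(tok)
            targets.append(None)
            continue
        r = random.random()
        if r < 0.8:
            corrupted.append("[MASK]")                 # masked
        elif r < 0.9:
            corrupted.append(random.choice(vocab))     # randomly replaced
        else:
            corrupted.append(tok)                      # kept as-is
        targets.append(tok)                            # model must recover the original
    return corrupted, targets


vocab = ["頭痛", "発熱", "咳", "が", "ある"]
print(mask_tokens(["[CLS]", "頭痛", "が", "ある", "[SEP]"], vocab))
```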

Fig 2. The schematic view of Masked-LM and Next Sentence Prediction task.


A. Masked-LM predicts the original tokens for the masked, replaced, or kept tokens. B. Next Sentence Prediction predicts whether the second sentence in the pair is the subsequent sentence in the original documents. The roles of the special symbols are as follows: [CLS] is added in front of every input text, and its output vector is used for the Next Sentence Prediction task; [MASK] is a masked token in the Masked-LM task; [SEP] marks a break between sentences; [UNK] is an unknown token that does not appear in the vocabulary.

2.6 Next Sentence Prediction

Fig 2B shows a schematic view of Next Sentence Prediction. In this task, the model receives pairs of sentences and predicts whether the second sentence of the pair is a consecutive sentence in the original dataset. To build such a training dataset, for two consecutive sentences in the original dataset, the first sentence is connected to the original second sentence with a probability of 50% as a positive example; the remaining 50% of the time, the first sentence is connected to a randomly sampled sentence as a negative example. We treated all sentences appearing in the documents recorded for a patient on one day as consecutive sentences.
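A minimal sketch of how such sentence pairs could be generated is shown below; the two 'documents' stand in for the notes recorded for one patient on one day, and the sampling code is illustrative rather than the authors' data pipeline.

```python
import random


def make_nsp_pairs(documents):
    """documents: list of documents, each a list of consecutive sentences.

    Returns (sentence_a, sentence_b, is_next) triples; roughly 50% of the
    pairs are true consecutive sentences and 50% use a random sentence
    drawn from another document as a negative example.
    """
    pairs = []
    for idx, doc in enumerate(documents):
        for i in range(len(doc) - 1):
            if random.random() < 0.5:
                pairs.append((doc[i], doc[i + 1], True))
            else:
                other = random.choice([d for j, d in enumerate(documents) if j != idx])
                pairs.append((doc[i], random.choice(other), False))
    return pairs


# Each inner list stands for the notes recorded for one patient on one day.
docs = [["発熱を認める。", "解熱剤を処方した。"],
        ["咳嗽が持続している。", "胸部X線に異常なし。"]]
print(make_nsp_pairs(docs))
```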

2.7 Evaluation task

The performance of the developed BERT was evaluated through a fine-tuning approach using the NTCIR-13 Medical Natural Language Processing for Web Document (MedWeb) task [19]. MedWeb is publicly available and provides manually created fake Twitter messages regarding medical concerns in a cross-language, multi-label corpus covering three languages (Japanese, English, and Chinese) and annotated with eight labels. A Positive or Negative status is given to each of the eight labels, Influenza, Diarrhea, Hay fever, Cough, Headache, Fever, Runny nose, and Cold; the Positive status may be given to multiple labels in a message. We performed a multi-label task to classify these eight classes simultaneously. Table 1 shows examples of the pseudo-tweets.

Table 1. Three examples of pseudo-tweets with the eight classes of symptoms.

Lang Pseudo-tweets Flu Diarrhea Hay fever Cough Headache Fever Runny nose Cold
1 ja 風邪で鼻づまりがやばい。 N N N N N N P P
en I have a cold, which makes my nose stuffy like crazy.
zh 感冒引起的鼻塞很烦人。
2 ja 花粉症のせいでずっと微熱でぼーっとしてる。眠い。 N N P N N P P N
en I’m so feverish and out of it because of my allergies. I’m so sleepy.
zh 由于花粉症一直发低烧, 晕晕沉沉的。很困。
3 ja 鼻風邪かなと思ってたけど、頭痛もしてきたから今日は休むことにしよう。 N N N N P N P P
en It was just a cold and a runny nose, but now my head is starting to hurt, so I’m gonna take a day off today.
zh 想着或许是鼻伤风, 可头也开始疼了, 所以今天就休息吧。

The English (en) and Chinese (zh) sentences were translated from Japanese (ja).

2.8 Experimental settings

For the pre-training experiments, we leveraged the TensorFlow implementation of BERT-base (12 layers, 12 attention heads, 768 embedding dimensions, 110 million parameters) published by Google [2]. Approximately 99% of the 120 million sentences were used for training, and the remaining 1% was used to evaluate the accuracies of Masked-LM and Next Sentence Prediction. For the evaluation experiments, the pre-trained BERT was fine-tuned. The network was configured such that the output vector C corresponding to the first input token ([CLS]) was linearly transformed to eight labels by a fully connected layer, and the Positive or Negative status of each of the eight labels was output through a sigmoid function. Binary cross entropy was used as the loss function, and the parameters were optimized with Adam using an initial learning rate of 1e-5. All network parameters, including those of BERT, were updated during this fine-tuning process. Fig 3 shows a schematic view of this network. Five models were trained by 5-fold cross-validation using the MedWeb training data consisting of 1,920 texts, and the mean results of the models on the MedWeb test data consisting of 640 texts were assessed. The performance was assessed based on the exact-match accuracy and the label-wise F-measure (macro F1). To inspect the advantage of the domain-specific model, we also evaluated two domain-nonspecific BERTs pre-trained on Japanese Wikipedia, as well as mBERT. Table 2 shows the specifications of each BERT model.

Fig 3. The schematic view of the network for evaluation.

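A minimal PyTorch sketch of this fine-tuning network is shown below. The paper used Google's TensorFlow implementation; the use of Hugging Face transformers and the publicly downloadable cl-tohoku/bert-base-japanese checkpoint (TU-BERT) here are assumptions made only so that the sketch is runnable.

```python
import torch
from torch import nn
from transformers import BertModel


class MedWebClassifier(nn.Module):
    """BERT [CLS] vector -> fully connected layer -> 8 logits (multi-label)."""

    def __init__(self, bert_name="cl-tohoku/bert-base-japanese", num_labels=8):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_vec = out.last_hidden_state[:, 0]   # output vector C for the [CLS] token
        return self.classifier(cls_vec)         # sigmoid is applied inside the loss


model = MedWebClassifier()
criterion = nn.BCEWithLogitsLoss()                          # binary cross entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)   # Adam, initial LR 1e-5

# One training step on a dummy batch (token ids and labels are illustrative).
input_ids = torch.randint(0, 32000, (2, 128))
attention_mask = torch.ones(2, 128, dtype=torch.long)
labels = torch.zeros(2, 8)
loss = criterion(model(input_ids, attention_mask), labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()

# Exact-match accuracy: all eight labels must be correct for a message.
with torch.no_grad():
    preds = (torch.sigmoid(model(input_ids, attention_mask)) >= 0.5).float()
exact_match = (preds == labels).all(dim=1).float().mean()
```

As in the paper, all parameters, including those of BERT, receive gradients in this setup.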

Table 2. The specifications of each BERT.

UTH-BERT KU-BERT TU-BERT mBERT
Publisher The University of Tokyo Hospital Kyoto University Tohoku University Google
Language Japanese Japanese Japanese Multilingual
Pre-training corpus Clinical text (120 million) JP Wikipedia (18 million) JP Wikipedia (18 million) 104 languages of Wikipedias
Tokenizer: morphological analyzer MeCab Juman++ MeCab -
Tokenizer: external dictionary mecab-ipadic-NEologd, J-MeDic - mecab-ipadic -
Vocabulary size 25,000 32,000 32,000 119,448
Total number of [UNK] tokens present in the MedWeb dataset. 253 (0.68%) 394 (1.11%) 369 (0.94%) 1 (0.00%)

3 Results

3.1 Pre-training performance

Table 3 shows the results of the pre-training. The pre-training was almost saturated at approximately 10 million steps (4 epochs), and the accuracies of Masked LM and Next Sentence Prediction were 0.773 and 0.975, respectively. With a mini-batch size of 50, 2.5 million steps are equivalent to approximately 1 epoch. It took approximately 45 days to learn 4 epochs using a single GPU. In the subsequent experiment, UTH-BERT with 10 million steps of training was used.

Table 3. Accuracies of Masked-LM and Next Sentence Prediction in pre-training for the evaluation dataset.

UTH-BERT Number of training steps (epochs)
2.5 × 10^6 (1) 5.0 × 10^6 (2) 7.5 × 10^6 (3) 10 × 10^6 (4)
Masked LM (accuracy) 0.743 0.758 0.768 0.773
Next Sentence Prediction (accuracy) 0.966 0.970 0.973 0.975

3.2 Fine-tuning performance

Table 4 shows the exact-match accuracies with 95% confidence intervals of the four pre-trained BERTs. There were no significant differences among UTH-BERT (0.855), KU-BERT (0.845), and TU-BERT (0.862); however, mBERT showed a significantly lower accuracy than the other BERTs.

Table 4. The exact-match accuracy of each model with five-fold cross validation.

Model name Exact match accuracy (95% CI)
UTH-BERT 0.855 (0.848–0.862)
KU-BERT 0.845 (0.833–0.857)
TU-BERT 0.862 (0.857–0.866)
mBERT 0.806 (0.794–0.817)

Table 5 shows the label-wise Recall, Precision, and F-measure of each model. There were no significant differences in the mean F-measures among UTH-BERT (0.888), KU-BERT (0.882), TU-BERT (0.888), and mBERT (0.855). The mean F-measure of mBERT tended to be lower than those of the other BERT models, but the difference was not significant. In terms of the performance for each symptom, the mean F-measures for Flu (0.714) and Fever (0.838) were lower than those for the other symptoms.

Table 5. The label-wise performances of each model with five-fold cross validation.

Model | Flu | Diarrhea | Hay fever | Cough | Headache | Fever | Runny nose | Cold | Mean F1 (95% CI)
UTH | 0.676/0.858 (0.755) | 0.914/0.919 (0.916) | 0.904/0.835 (0.865) | 0.928/0.963 (0.945) | 0.947/0.974 (0.960) | 0.797/0.905 (0.845) | 0.920/0.927 (0.923) | 0.885/0.913 (0.898) | 0.888 (0.846–0.931)
KU | 0.594/0.842 (0.694) | 0.877/0.956 (0.915) | 0.896/0.896 (0.895) | 0.892/0.963 (0.926) | 0.947/0.958 (0.952) | 0.760/0.927 (0.835) | 0.921/0.927 (0.924) | 0.890/0.936 (0.912) | 0.882 (0.828–0.935)
TU | 0.735/0.692 (0.710) | 0.927/0.947 (0.937) | 0.898/0.874 (0.885) | 0.916/0.957 (0.936) | 0.936/0.982 (0.958) | 0.825/0.903 (0.861) | 0.912/0.938 (0.925) | 0.884/0.904 (0.893) | 0.888 (0.837–0.939)
mBERT | 0.598/0.850 (0.696) | 0.867/0.906 (0.885) | 0.870/0.822 (0.841) | 0.918/0.892 (0.905) | 0.928/0.927 (0.926) | 0.745/0.890 (0.810) | 0.869/0.893 (0.879) | 0.902/0.887 (0.894) | 0.855 (0.807–0.902)
Mean F1 | 0.714 | 0.913 | 0.872 | 0.928 | 0.949 | 0.838 | 0.913 | 0.899 |

Each cell shows Recall/Precision with the F-measure in parentheses.

3.3 Error analysis

To obtain a better understanding of the UTH-BERT classifier’s mistakes, we qualitatively analyzed its false positive (FP) and false negative (FN) cases in the 640-message MedWeb test dataset. The error analysis was conducted for the labels for which UTH-BERT was wrong in all five runs of the five-fold cross-validation. The labels of MedWeb were annotated in terms of three aspects: Factuality (whether the tweeter has a certain symptom or not), Location (whether the symptoms are those of the tweeter or of someone in the vicinity), and Tense (whether the symptoms exist within 24 hours or not) [19]. Since the MedWeb dataset does not contain information about these perspectives, we manually categorized the error cases based on these aspects. As a result, we obtained eight error types for the FP cases and five error types for the FN cases (Table 6).

Table 6. Interpretations obtained from the results of the error analysis.

No. Error Cause of the error Num. of errors Example sentence Incorrect prediction
1 False positive (FP) Co-occurring symptoms 10 (ja) インフルかと思って病院行ったけど、検査したら違ったよ。 Fever pos.
(en) I thought I had the flu so I went to the doctor, but I got tested and I was wrong.
2 Symptoms mentioned in general topics 8 (ja) 風邪といえば鼻づまりですよね。 Cold pos. Runny nose pos.
(en) To me, a cold means a stuffy nose.
3 Suspected influenza 5 (ja) インフルエンザかもしれないから部活休もうかな。 Flu pos.
(en) I might have the flu so I’m thinking I’ll skip the club meeting.
4 Fully recovered symptoms 5 (ja) やっと咳と痰が治まった。 Cough pos.
(en) My cough and phlegm are finally cured.
5 Metaphorical expressions 3 (ja) 熱をあげているのは嫁と娘だ。 Fever pos.
(en) What makes me excited are my wife and daughter.
6 Denied symptoms 2 (ja) 鼻水が止まらないので熱でもあるのかと思ったけど、全然そんなことなかったわ。 Fever pos.
(en) My nose won’t stop running, which got me wondering if I have a fever, but as it turns out I definitely do not.
7 Symptoms for asking unspecified people 2 (ja) 誰か熱ある人いない? Fever pos.
(en) Anyone have a fever?
8 Past symptoms 1 (ja) ネパールにいったら食べ物があわなくてお腹壊して下痢になった・・・ Diarrhea pos.
(en) When I went to Nepal, the food didn’t agree with me, and I got an upset stomach and diarrhea. . .
9 False negative (FN) Symptoms that are directly expressed 8 (ja) 痰が止まったとおもったらこんどは頭痛。 Headache neg.
(en) Just when I thought the phlegm was over, now I have a headache
10 Symptoms that are indirectly expressed 5 (ja) 中国にいた時は花粉症ならなかったのに再発したー! Runny nose neg.
(en) Even though I didn’t have allergies when I was in China, they’re back!
11 Symptoms that can be inferred to be positive by being a tweet from a person 4 (ja) 今日花粉少ないとか言ってるやつ花粉症じゃないから。 Runny nose neg.
(en) The people who are saying there’s not a lot of pollen today don’t have allergies.
12 Symptoms that are in the recovery process 1 (ja) インフルが回復してきてだいぶ元気になった!けどあと2日は外出禁止なんだよな。 Flu neg.
(en) I’ve recovered from the flu and feel great! But I’m still not allowed to go out for two days.
13 Symptoms occurring in the tweeter’s neighborhood 1 (ja) うちのクラス、集団で下痢事件 Diarrhea neg.
(en) There’s a diarrhea outbreak in my class

1. FP due to false detection of co-occurring symptoms

This type of error was categorized as Factuality. Example sentence No. 1 expresses that flu is negative, but UTH-BERT incorrectly predicted that fever and flu are positive. The reason for this error is likely that the training data contained sentences in which flu and fever are positive simultaneously. In addition, UTH-BERT could not detect the negative expression of flu, so both flu and fever were incorrectly predicted as positive.

2. FP for symptoms mentioned in general topics

This was categorized as Factuality. The example sentence No. 2 states that a runny nose is a common symptom of a cold. Despite the general topic, UTH-BERT incorrectly predicted that the tweeter has a cold and a runny nose. The reason for this error would be that UTH-BERT failed to distinguish between symptoms that are stated as a general topic and those occurring in a person.

3. FP for suspected influenza

This was categorized as Factuality. According to the MedWeb annotation criteria, suspected symptoms were treated as positive, but only influenza was treated as negative. (This is because the MedWeb dataset was developed primarily for the surveillance of influenza.) This suggests that the difference in annotation criteria between flu and the other symptoms, and a lack of sentences expressing suspected flu in the training dataset, led to the errors.

4. FP for fully recovered symptoms

This was categorized as Factuality. According to the annotation criteria, symptoms are labeled as positive if they are in the recovery process and negative if they are completely cured. In example sentence No. 4, even though the tweeter stated that the cough was cured, UTH-BERT could not recognize cured and incorrectly predicted that cough was positive.

5. FP for metaphorical expressions

This was categorized as Factuality. This error is due to an inability to recognize metaphorical expressions. In example sentence No. 5, the Japanese phrase 熱を上げる is a metaphorical expression for excited, but because it uses the same kanji as fever, UTH-BERT incorrectly predicted that fever is positive.

6. FP for denied symptoms

This was categorized as Factuality. This error is caused by UTH-BERT missing a negative expression.

7. FP for symptoms for asking unspecified people

This was categorized as Location. Although example sentence No. 7 asks about the presence of fever for an unspecified person, UTH-BERT incorrectly predicted that the tweeter has a fever.

8. FP for past symptoms

This was categorized as Tense. According to the annotation criteria, past symptoms are treated as negative. This error occurred because UTH-BERT was not able to recognize the tense.

9. FN for symptoms that are directly expressed

This was categorized as Factuality. Although the sentences directly express that the tweeter has a symptom, this type of error occurred because UTH-BERT could not detect it.

10. FN for symptoms that are indirectly expressed

This was categorized as Factuality. This is the type of error in which a symptom that could be inferred to be positive from the presence of another symptom is overlooked. Example sentence No. 10 directly expresses that hay fever is positive, but with some background knowledge one can infer that runny nose is also positive.

11. FN for symptoms that can be inferred to be positive because the tweet is from a person

This was categorized as Factuality. This is also a type of error in which an indirectly expressed symptom is overlooked, but it requires more advanced reasoning. Example sentence No. 11 states a general topic, but given that it is a tweet, one can infer that the tweeter has hay fever.

12. FN for symptoms that are in the recovery process

This was categorized as Factuality. According to the annotation criteria, symptoms are labeled as positive if they are in the recovery process; however, UTH-BERT could not detect it.

13. FN for symptoms occurring in the tweeter’s neighborhood

This was categorized as Location. According to the annotation criteria, if a population in the same space has a symptom, the symptom is annotated as positive regardless of whether the tweeter has the symptom or not. In this case, UTH-BERT predicted it as negative, because there were probably not enough such cases in the training data.

4 Discussion

We presented a BERT model pre-trained with a huge amount of Japanese clinical text and evaluated it on the NTCIR-13 MedWeb task. To the best of our knowledge, this work is the first to inspect a BERT model pre-trained on Japanese clinical text and to publish the results. Among the BERT models, UTH-BERT, KU-BERT, and TU-BERT, which are specialized for Japanese text, significantly outperformed mBERT in exact-match accuracy and tended to outperform mBERT in label-wise F-measure. mBERT uses a character-based vocabulary, which alleviates the vocabulary problem of handling multiple languages at the cost of the semantic information carried by words. This result suggests a disadvantage of a character-based vocabulary compared with a word-based one. Nevertheless, the performance of mBERT was close to that of the other Japanese BERT models. Regarding the advantages of pre-training with clinical text, UTH-BERT showed no significant advantages over KU-BERT and TU-BERT. One reason is that sentence classification is a relatively easy task for BERTs pre-trained on a large text corpus; therefore, the advantage of pre-training with domain text may not have been noticeable. Further, because the NTCIR-13 MedWeb corpus used for the evaluation lies in an intermediate domain between the web and medicine, the differences among the BERTs may not have been clear. The advantage of training on domain-specific texts may become apparent in more complex tasks such as named entity recognition, relation extraction, question answering, or causality inference on clinical text, and a Japanese corpus for such an evaluation is yet to be developed.

We conducted an error analysis that resulted in 13 different types of error interpretations. Among these interpretations, errors related to the factuality of symptoms were the most common, and errors related to location and tense were less common. This bias could be due to the small amount of data labeled for location and tense in the MedWeb dataset, rather than to a feature of UTH-BERT. The most common error type in the FP cases was the false detection of co-occurring symptoms, of which 10 errors were found in this analysis. Since the task in the MedWeb dataset is a multi-label classification of eight symptoms, this error would be influenced by the co-occurrence relationships of multiple labels appearing in the training dataset. On the other hand, some of the FN error types overlooked symptoms that were expressed indirectly. A possible way to reduce such oversights would be to prepare many similar cases in the training dataset, but this seems difficult as long as only text is used as the source of information. It was difficult to conduct further analysis since this error analysis was based on manual categorization. For further investigation, it would be possible to apply Shapley additive explanations (SHAP) [20] or local interpretable model-agnostic explanations (LIME) [21] to visualize the effect on the predictions when the input data are perturbed by deleting or replacing input tokens.

Differences in the distribution of words between the clinical text used for pre-training UTH-BERT and the pseudo-tweet messages used for evaluation may also have contributed to the errors. Clinical texts contain objective information about the patient, whereas pseudo-tweet messages contain subjective information about the tweeter. Another difference is that the former is written in a literary style, while the latter is written in a spoken form. Because of these differences, there may be cases where the token representations acquired by UTH-BERT during pre-training were not fully utilized for the pseudo-tweet messages, leading to errors. A limitation of our error analysis is that it was not possible to compare error trends between the BERT models, because we did not perform the error analysis for KU-BERT and TU-BERT. Moreover, given that our developed BERT was evaluated exclusively on the NTCIR-13 MedWeb task, the generalizability of its performance remains limited.

5 Conclusions

We developed a BERT model using a huge amount of Japanese clinical text and evaluated it on the NTCIR-13 MedWeb dataset to investigate the advantage of a domain-specific model. The results show that there are no significant differences among the performances of the BERT models that were pre-trained with Japanese text. Our aim is to develop publicly available tools that will be useful for NLP in the clinical domain; however, understanding the nature of the developed model requires evaluations based on more complex tasks such as named entity recognition, relation extraction, question answering, and causality inference on actual clinical text.

Data Availability

1. The UTH-BERT model used for the experiment is available at our web site under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). URL: https://ai-health.m.u-tokyo.ac.jp/uth-bert
2. The NTCIR-13 dataset used for the experiment is available at the NTCIR web site upon request under the Creative Commons Attribution 4.0 International License (CC BY 4.0). URL: http://research.nii.ac.jp/ntcir/index-en.html
3. The KU-BERT model used for the experiment is available at the web site of the Kurohashi-Chu-Murakami Lab, Kyoto University. URL: https://nlp.ist.i.kyoto-u.ac.jp/?ku_bert_japanese
4. The TU-BERT model used for the experiment is available at the web site of the Tohoku NLP Lab, Tohoku University. URL: https://github.com/cl-tohoku/bert-japanese

Funding Statement

This project was partly funded by the Japan Science and Technology Agency, Promoting Individual Research to Nurture the Seeds of Future Innovation and Organizing Unique, Innovative Network (JPMJPR1654). There were no other funders. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, et al. Attention is all you need. Adv Neural Inf Process Syst. 2017: 5998–6008.
  • 2. Devlin J, Chang M, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [Preprint]. 2018 [cited 2021 May 31]. Available from: https://arxiv.org/abs/1810.04805.
  • 3. Dai Z, Yang Z, Yang Y, Carbonell J, Le QV, Salakhutdinov R. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. arXiv:1901.02860 [Preprint]. 2019 [cited 2021 May 31]. Available from: https://arxiv.org/abs/1901.02860.
  • 4. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv:1906.08237 [Preprint]. 2019 [cited 2021 May 31]. Available from: https://arxiv.org/abs/1906.08237.
  • 5. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv:1907.11692 [Preprint]. 2019 [cited 2021 May 31]. Available from: https://arxiv.org/abs/1907.11692.
  • 6. Lample G, Conneau A. Cross-lingual Language Model Pretraining. Adv Neural Inf Process Syst. 2019: 7059–7069.
  • 7. Radford A, Narasimhan K, Salimans T, Sutskever I. Improving Language Understanding by Generative Pre-training. OpenAI Blog. 2018 [cited 2021 May 31]. Available from: https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf.
  • 8. Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I. Language Models are Unsupervised Multitask Learners. OpenAI Blog. 2019 [cited 2021 May 31]. Available from: https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.
  • 9. Beltagy I, Lo K, Cohan A. SciBERT: A Pretrained Language Model for Scientific Text. arXiv:1903.10676 [Preprint]. 2019 [cited 2021 May 31]. Available from: https://arxiv.org/abs/1903.10676.
  • 10. Lee J, Yoon W, Kim S, Kim D, Kim S, So CH, et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. arXiv:1901.08746 [Preprint]. 2019 [cited 2021 May 31]. Available from: https://arxiv.org/abs/1901.08746.
  • 11. Alsentzer E, Murphy JR, Boag W, Weng WH, Jin D, Naumann T, et al. Publicly Available Clinical BERT Embeddings. arXiv:1904.03323 [Preprint]. 2019 [cited 2021 May 31]. Available from: https://arxiv.org/abs/1904.03323.
  • 12. Kyoto University. A BERT published by Kyoto University. [cited 2021 May 31]. Available from: http://nlp.ist.i.kyoto-u.ac.jp/EN/.
  • 13. Tohoku University. A BERT published by Tohoku University. [cited 2021 May 31]. Available from: https://github.com/cl-tohoku/bert-japanese.
  • 14. Kudo T. MeCab: Yet Another Part-of-Speech and Morphological Analyzer (in Japanese). [cited 2021 May 31]. Available from: https://github.com/taku910/mecab.
  • 15. Morita H, Kawahara D, Kurohashi S. Morphological Analysis for Unsegmented Languages using Recurrent Neural Network Language Model. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing; 2015 Sep 17–21; Lisbon, Portugal. pp. 2292–2297.
  • 16. Sennrich R, Haddow B, Birch A. Neural Machine Translation of Rare Words with Subword Units. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics; 2016 Aug 7–12; Berlin, Germany. pp. 1715–1725.
  • 17. Sato T, Hashimoto T, Okumura M. Implementation of a word segmentation dictionary called mecab-ipadic-NEologd and study on how to use it effectively for information retrieval (in Japanese). Proceedings of the 23rd Annual Meeting of the Association for Natural Language Processing; NLP2017-B6-1; 2017.
  • 18. Ito K, Nagai H, Okahisa T, Wakamiya S, Iwao T, Aramaki E. J-MeDic: A Japanese Disease Name Dictionary based on Real Clinical Usage. Proceedings of the 11th International Conference on Language Resources and Evaluation; 2018 May 7–12; Miyazaki, Japan.
  • 19. Wakamiya S, Morita M, Kano Y, Ohkuma T, Aramaki E. Overview of the NTCIR-13 MedWeb Task. Proceedings of the 13th NTCIR Conference on Evaluation of Information Access Technologies; 2017 Dec 5–8; Tokyo, Japan. pp. 40–49.
  • 20. Lundberg S, Lee SI. A Unified Approach to Interpreting Model Predictions. arXiv:1705.07874v2 [Preprint]. 2017 [cited 2021 May 31]. Available from: https://arxiv.org/abs/1705.07874v2.
  • 21. Ribeiro MT, Singh S, Guestrin C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. arXiv:1602.04938v3 [Preprint]. 2016 [cited 2021 May 31]. Available from: https://arxiv.org/abs/1602.04938v3.

Decision Letter 0

Diego Raphael Amancio

3 Dec 2020

PONE-D-20-20418

A clinical specific BERT developed with huge size of Japanese clinical narrative

PLOS ONE

Dear Dr. Kawazoe,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jan 17 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Diego Raphael Amancio

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. In the ethics statement in the manuscript and in the online submission form, please provide additional information about the patient records used in your retrospective study. Specifically, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information.

3. Thank you for stating the following in the Competing Interests section:

"Y.K and E.S belong to the 'Artificial Intelligence in Healthcare, Graduate School of Medicine, The University of Tokyo' which is an endowment department, supported with an unrestricted grant from ‘I&H Co., Ltd.’ and ‘EM SYSTEMS company’, but these sponsors had no control over the interpretation, writing, or publication of this work."

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

4. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Report on the manuscript "A clinical specific BERT developed with huge size of Japanese clinical narrative" by Kawazoe and coauthors submitted for publication in PLOS ONE.

In this manuscript, the authors present a clinical specific BERT model trained on a massive data set comprising over 120 million lines of clinical text obtained from the University of Tokyo Hospital. As the authors rightly point out, there are very few models pre-trained with Japanese texts in general, and particularly in the clinical domain. The authors compare its BERT model (UTH-BERT, pre-trained with clinical text) with three other BERT models: KU-BERT and TU-BERT (both pre-trained with the Japanese Wikipedia), and the Google multilingual BERT (Google-ML). They observe that BERT models pre-trained with Japanese texts outperform Google-ML, but no substantial improvement is found between UTH-BERT and KU-BERT and TU-BERT (that is, there is no significant advantage in using domain-specific texts). However, the authors argue that this difference may emerge with more complex tasks.

This work reads very well, and I believe it is an essential contribution to literature as it reduces the shortage of studies with the Japanese language and may trigger other investigations. I have no suggestions on how to improve this work, and I recommend publication in the present form.

Reviewer #2: This paper describes a clinical-specific BERT model for Japanese. The model is pre-trained by using a huge Japanese clinical text. The experiments on a text classification task in a clinical domain demonstrate the proposed BERT model was slightly better than general BERT models.

The trained BERT model is valuable for Japanese clinical researches. However, the differences between the proposed BERT and other general Japanese BERT models in the evaluated task are very small. Although I understand the situation where only a few Japanese evaluation sets in a clinical domain are available, this experimental result is weak for insisting the effectiveness of the clinical-specific Japanese BERT model.

Although the authors say that "error analysis is required" in L277, improved examples and errors should be presented, and discussion should be made in the paper. These will help readers understand the strength and weakness of the proposed BERT model.

The comparison between BERT and other classical machine learning methods such as SVM and LR does not make sense in this paper because the superiority of BERT models has been shown in many papers. Furthermore, SVM with KU or TU vocabularies does not make sense because these (subword) vocabularies are not for SVM nor LR.

There are many typos and some misunderstandings for BERT. Please check the followings carefully.

- L40: that pre-trained -> that are pre-trained
- L55: on huge corpus -> on a huge corpus
- L56: natural language task -> natural language tasks
- L60: corpus for pre-training preferred to use the same domain as the target task -> the domain of a corpus for pre-training prefers to the same as the one of a target task
- L63: a study that domain specifically pre-training -> the domain-specific pre-trained model (?)
- L66: pre-trained transforms -> BERT models
- L67: Japanese text -> a Japanese text
- L67: Because multilingual BERT was pre-trained using a general corpus, the sentence "One of the options is to use multilingual BERT .." is not appropriate
- L70: WIKIPEDIA -> Wikipedia
- L73: make -> makes
- L90: I can't understand the sentence ".. before attempt to parse it .."
- L93: the MeCab -> MeCab, the Juman++ -> Juman
- L93: The reference [15] is wrong, and should be the following: Morphological Analysis for Unsegmented Languages using Recurrent Neural Network Language Model. Hajime Morita, Daisuke Kawahara, Sadao Kurohashi. EMNLP 2015
- L94: wordpiece tokenization -> the wordpiece tokenization
- L95: which segment -> which segments
- L97: All the tokens are called "subword". The explanation should be as follows: "A subword that starts with "##" represents a subword that is not the begging of a word."
- L107: external dictionary -> an external dictionary
- L109: the clinical text -> a clinical text
- L110: domain specific dictionary -> a domain specific dictionary
- L110: Japanese clinical text -> a Japanese clinical text
- L111: in as -> into as (?)
- L111: decompose -> decomposes
- L112: create -> creates
- L118: The paragraph "Pre-training BERT" contains several inaccurate expressions.
- L122: What does "the sequence" mean? It means a word sequence? The word2vec embeddings, for example, are trained from a word sequence, and so this explanation is inaccurate for mentioning the difference between BERT and existing embeddings.
- L123: I think the clause "which learns sentence expressions .. between words" is misunderstanding. The self-attention in BERT can learn token embeddings as well as sentence embeddings.
- L131: the original embeddings of those tokens -> the original tokens
- L132: I can't understand the sentence "more appropriate representation of sentences is obtained".
- L138: the consecutive sentence -> a consecutive sentence
- L140: The subject of the verb "connect" is missing
- L142: I think the verb "pinch" is not appropriate in this context
- L143: I think there is a misunderstanding in the NSP task. For a sentence in a document, a random sentence as a negative example is chosen from other documents. Therefore, it is a usual way that all the sentences in a document in one day are regarded as one document. The explanation from L142 to L147 does not make sense.
- L161: Please explain "pseudo-Twitter messages".
- Table2: Juman -> Juman++ (for KU-BERT)
- L203: single GPU -> a single GPU
- L211: "due to" is incorrect. I don't know the intention of this phrase.
- L213: Google-ML BERT -> Google mBERT
- L225: highest -> the highest
- L237: by -> with
- L251: indicate -> indicates
- L256: which specialized to -> which are specialized to
- L258: use -> uses
- L258: alleviate -> alleviates
- L265: have pre-trained -> are pre-trained
- L267: intermediate corpus -> an intermediate domain
- L282: contribute in -> contribute to

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Tomohide Shibata

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Nov 9;16(11):e0259763. doi: 10.1371/journal.pone.0259763.r002

Author response to Decision Letter 0


6 Jul 2021

Thank you for giving us the opportunity to submit a revised draft of the manuscript. We appreciate the time and effort that you dedicated to provide us with your valuable feedback on the manuscript. We are grateful to the reviewers for their insightful comments on our paper. We incorporated changes to reflect most of the suggestions provided by the reviewers.

First, owing to incorrect wording, we would like to change the title as follows: "A clinical specific BERT developed using a huge Japanese clinical text corpus".

Here is a point-by-point response to the reviewers’ comments and concerns.

# Reviewer1

1. This work reads very well, and I believe it is an essential contribution to literature as it reduces the shortage of studies with the Japanese language and may trigger other investigations. I have no suggestions on how to improve this work, and I recommend publication in the present form.

Response to 1

Thank you for your comment. We trust that the pre-trained model will help research on clinical texts.

# Reviewer2

1. The trained BERT model is valuable for Japanese clinical research. However, the differences between the proposed BERT and other general Japanese BERT models in the evaluated task are very small. Although I understand the situation where only a few Japanese evaluation sets in a clinical domain are available, this experimental result is weak for insisting the effectiveness of the clinical-specific Japanese BERT model.

Response to 1

Thank you for pointing this out. As a result of the re-experiment, we confirmed that there are no significant differences between the BERTs. In addition, because there are, to our knowledge, few Japanese datasets in the clinical domain to use in our experiments, evaluating the BERT models with an appropriate one was challenging. We modified our results as follows:

On Page 10,

Table 4 shows the exact-match accuracies with 95% confidence intervals of four pre-trained BERTs. There were no significant differences among UTH-BERT (0.855), KU-BERT (0.845), and TU-BERT (0.862); however, mBERT significantly showed the lowest accuracy compared to the other BERTs.

2. Although the authors say that "error analysis is required" in L277, improved examples and errors should be presented, and discussion should be made in the paper. These will help readers understand the strength and weakness of the proposed BERT model.

Response to 2

Thank you for pointing this out. We agree that an error analysis is important for understanding the nature of the model. We conducted a qualitative error analysis and added it to the manuscript as described below.

On Page 12,

3.3 Error analysis

To obtain a better understanding of the UTH-BERT classifier’s mistakes, we qualitatively analyzed its false positive (FP) and false negative (FN) cases in the 640 MedWeb test dataset. The error analysis was conducted for the labels for which UTH-BERT was wrong all five times in the five-fold cross validation. The labels of MedWeb were annotated in terms of three aspects such as Factuality (whether the tweeter has certain symptom or not), Location (whether the symptoms are those of the tweeter or someone in the vicinity or not), and Tense (whether the symptoms exist within 24 hours or not) [19]. Since the MedWeb dataset did not contain information about these perspectives, we manually categorized the error cases based on these aspects. As a result, we obtained eight error types for the FP cases and five error types for the FN cases (Table 6).

1. FP due to false detection of co-occurring symptoms

This type of error was categorized as Factuality. The example sentence No.1 expresses that the flu is negative, but UTH-BERT incorrectly predicted that the fever and flu are positive. The reason for this error would be that the training data contained sentences in which flu and fever are positive simultaneously. In addition, UTH-BERT could not detect the negative expression of flu, so both flu and fever were incorrectly positive.

2. FP for symptoms mentioned in general topics

This was categorized as Factuality. The example sentence No. 2 states that a runny nose is a common symptom of a cold. Despite the general topic, UTH-BERT incorrectly predicted that the tweeter has a cold and a runny nose. The reason for this error would be that UTH-BERT failed to distinguish between symptoms that are stated as a general topic and those occurring in a person.

3. FP for suspected influenza

This was categorized as Factuality. According to the MedWeb annotation criteria, suspected symptoms were treated as positive, but only influenza was treated as negative. (This is because the MedWeb dataset was developed primarily for the surveillance of influenza.) This suggests that difference of annotation criteria between flu and other symptoms, and a lack of sentence expression about suspected flu in the training dataset, led to the errors.

4. FP for fully recovered symptoms

This was categorized as Factuality. According to the annotation criteria, symptoms are labeled as positive if they are in the recovery process and negative if they are completely cured. In example sentence No. 4, even though the tweeter stated that the cough was cured, UTH-BERT could not recognize cured and incorrectly predicted that cough was positive.

5. FP for metaphorical expressions

This was categorized as Factuality. This error is due to an inability to recognize metaphorical expressions. In example sentence No. 5, the Japanese phrase熱を上げる is a metaphorical expression for excited, but because it uses the same kanji as fever, UTH-BERT incorrectly predicted that fever is positive.

6. FP for denied symptoms

This was categorized as Factuality. This error is caused by UTH-BERT missing a negative expression.

7. FP for symptoms for asking unspecified people

This was categorized as Location. Although example sentence No. 7 asks about the presence of fever for an unspecified person, UTH-BERT incorrectly predicted that the tweeter has a fever.

8. FP for past symptoms

This was categorized as Tense. According to the annotation criteria, past symptoms are treated as negative. This error occurred because UTH-BERT was not able to recognize the tense.

9. FN for symptoms that are directly expressed

This was categorized as Factuality. Although the sentences directly express that the tweeter has a symptom, this type of error occurred because UTH-BERT could not detect it.

10. FN for symptoms that are indirectly expressed

This was categorized as Factuality. This type of error overlooks a symptom that could be inferred to be positive when another symptom is present at the same time. Example sentence No. 10 directly expresses that hay fever is positive, but with some background knowledge, one can infer that runny nose is also positive.

11. FN for symptoms that can be inferred to be positive because the tweet is from a person

This was categorized as Factuality. This is also a type of error that overlooks an indirectly expressed symptom, but it requires more advanced reasoning. Example sentence No. 11 states a general topic, but given that it is a tweet, one can infer that the tweeter has hay fever.

12. FN for symptoms that are in the recovery process

This was categorized as Factuality. According to the annotation criteria, symptoms are labeled as positive if they are in the recovery process; however, UTH-BERT could not detect it.

13. FN for symptoms occurring in the tweeter's neighborhood

This was categorized as Location. According to the annotation criteria, if people in the same space as the tweeter have a symptom, the symptom is annotated as positive regardless of whether the tweeter has it. In this case, UTH-BERT predicted it as negative, probably because there were not enough such cases in the training data.

On Page 18,

We conducted an error analysis that resulted in 13 different types of error interpretations. Among these, errors related to the factuality of symptoms were the most common, and errors related to location and tense were less common. This bias could be due to the small amount of data labeled for location and tense in the MedWeb dataset rather than a feature of UTH-BERT. The most common error type in the FP cases was false detection of co-occurring symptoms, with 10 such errors found in this analysis. Since the task in the MedWeb dataset is a multi-label classification of eight symptoms, this error would be influenced by the co-occurrence relationships of multiple labels in the training dataset. On the other hand, some of the FN error types overlooked symptoms that were expressed indirectly. A possible way to reduce such oversights would be to prepare many similar cases in the training dataset, but this seems difficult as long as only text is used as a source of information. Further analysis was difficult because this error analysis was based on manual categorization. For further investigation, it would be possible to apply Shapley additive explanations (SHAP) [20] or local interpretable model-agnostic explanations (LIME) [21] to visualize the effect on the predictions when the input data is perturbed by deleting or replacing input tokens.
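
As an illustration of this idea (not part of the manuscript), the following minimal Python sketch shows how LIME could be applied to a fine-tuned BERT classifier to attribute a prediction for a single symptom label to individual input tokens. The model path, label names, and the predict_proba wrapper are hypothetical assumptions; the multi-label MedWeb setting would be handled one label at a time, and Japanese text would need to be pre-segmented into words (e.g., with a morphological analyzer) because LIME splits on whitespace by default.

    # Illustrative sketch only, not the actual UTH-BERT evaluation code.
    import torch
    from lime.lime_text import LimeTextExplainer
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Hypothetical path to a BERT model fine-tuned for one symptom label.
    model_path = "path/to/fine-tuned-bert"
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForSequenceClassification.from_pretrained(model_path)
    model.eval()

    def predict_proba(texts):
        # LIME expects a function mapping a list of raw texts to class probabilities.
        enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = model(**enc).logits
        return torch.softmax(logits, dim=-1).numpy()

    explainer = LimeTextExplainer(class_names=["negative", "positive"])
    explanation = explainer.explain_instance("example tweet text", predict_proba, num_features=10)
    print(explanation.as_list())  # (token, weight) pairs showing each token's contribution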

Differences in the distribution of words between the clinical text used for pre-training of UTH-BERT and the pseudo-tweet messages used for evaluation may also have contributed to the errors. Clinical texts contain objective information about the patient, whereas pseudo-tweet messages contain subjective information about the tweeter. Another difference is that the former is written in a literary style, while the latter is written in a spoken style. Because of these differences, there may be cases where the token representations that UTH-BERT acquired during pre-training were not fully utilized on the pseudo-tweet messages, leading to errors. A limitation of our error analysis is that it was not possible to compare error trends across the BERT models because we did not perform the error analysis for KU-BERT and TU-BERT. Moreover, given that our developed BERT was evaluated exclusively on the NTCIR-13 MedWeb task, the generalizability of its performance remains limited.

3. The comparison between BERT and other classical machine learning methods such as SVM and LR does not make sense in this paper because the superiority of BERT models has been shown in many papers. Furthermore, SVM with KU or TU vocabularies does not make sense because these (subword) vocabularies are not for SVM nor LR.

Response to 3

As you commented, the superiority of BERT models has been shown in previous studies. We removed the sentences that mentioned SVM and LR.

4. There are many typos and some misunderstandings for BERT. Please check the followings carefully.

Response to 4

Thank you for your careful attention. We revised our manuscript based on your comments, and the manuscript was subsequently proofread by native English speakers. We believe it is significantly improved from the previous submission.

5. "A subword that starts with "##" represents a subword that is not the beginning of a word."

Response to 5

Thanks for the important comment. We modified the manuscript as follows:

On Page 4,

A subword that starts with ## represents a subword that is an appendage to another word.

6. - L122: What does "the sequence" mean? It means a word sequence? The word2vec embeddings, for example, are trained from a word sequence, and so this explanation is inaccurate for mentioning the difference between BERT and existing embeddings.

Response to 6

We modified the manuscript to clearly distinguish between BERT and word2vec as follows:

On Page 6,

The novelty of BERT is that it took the idea of learning word embeddings one step further, by learning each embedding vector considering the co-occurrence of words.

7. - L123: I think the clause "which learns sentence expressions .. between words" is misunderstanding. The self-attention in BERT can learn token embeddings as well as sentence embeddings.

Response to 7

We modified the manuscript as follows:

On Page 6,

To do this, BERT utilizes the self-attention mechanism, which learns sentence and word embeddings by capturing co-occurrence relationships between those embeddings.
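
(For reference, and not as part of the quoted manuscript text, a minimal NumPy sketch of the scaled dot-product attention underlying this mechanism is given below; Q, K, and V stand for the query, key, and value matrices derived from the token embeddings, and the names are illustrative.)

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Each output row is a weighted sum of all value vectors, with weights
        # given by query-key similarity, so every token's embedding is updated
        # using the tokens it co-occurs with.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
        return weights @ V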

Response to 8

As you commented, our description was inappropriate. We revised the manuscript based on the original BERT paper (Devlin et al. 2018) as follows:

On Page 6,

This task masks, randomly replaces, or keeps each input token with a certain probability, and estimates the original tokens. Estimating not only the masked tokens but also the replaced or kept tokens helps to maintain a distributional contextual representation of every input token.
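
(As a purely illustrative sketch of this corruption scheme, not part of the quoted manuscript text and assuming the standard 15%/80%/10%/10% proportions from Devlin et al., the selection of prediction targets could look like the following.)

    import random

    def corrupt_for_masked_lm(tokens, vocab, select_prob=0.15):
        # Select ~15% of positions; of those, 80% become [MASK], 10% are replaced
        # by a random vocabulary token, and 10% are kept unchanged. The model is
        # trained to predict the original token at every selected position.
        corrupted, targets = list(tokens), [None] * len(tokens)
        for i, tok in enumerate(tokens):
            if random.random() < select_prob:
                targets[i] = tok
                r = random.random()
                if r < 0.8:
                    corrupted[i] = "[MASK]"
                elif r < 0.9:
                    corrupted[i] = random.choice(vocab)
                # else: keep the original token unchanged
        return corrupted, targets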

9. - L142: I think the verb "pinch" is not appropriate in this context

10. - L143: I think there is a misunderstanding in the NSP task. For a sentence in a document, a random sentence as a negative example is chosen from other documents. Therefore, it is a usual way that all the sentences in a document in one day are regarded as one document. The explanation from L142 to L147 does not make sense.

Response to 9, 10

Thank you for pointing this out. In response to your suggestion, we deleted the inappropriate expressions and revised the text as follows:

On Page 7,

To develop such a training dataset, for each pair of consecutive sentences in the original dataset, the first sentence is connected to the original second sentence with a probability of 50% as a positive example. For the remaining 50%, the first sentence is connected to a randomly sampled sentence as a negative example. We treated all sentences appearing in a document recorded on one day for a patient as consecutive sentences.
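
(For illustration only, and not as part of the quoted manuscript text, the pair construction described above could be sketched as follows; here sentences is assumed to hold the consecutive sentences of one patient-day document and other_sentences holds sentences drawn from other documents for negative sampling.)

    import random

    def make_nsp_pairs(sentences, other_sentences):
        # For each consecutive pair, keep the true next sentence with probability
        # 0.5 (label "IsNext"); otherwise substitute a sentence sampled from other
        # documents (label "NotNext").
        pairs = []
        for i in range(len(sentences) - 1):
            if random.random() < 0.5:
                pairs.append((sentences[i], sentences[i + 1], "IsNext"))
            else:
                pairs.append((sentences[i], random.choice(other_sentences), "NotNext"))
        return pairs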

Attachment

Submitted filename: 07_responce_r1.docx

Decision Letter 1

Diego Raphael Amancio

27 Oct 2021

A clinical specific BERT developed using a huge Japanese clinical text corpus

PONE-D-20-20418R1

Dear Dr. Kawazoe,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Diego Raphael Amancio

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The manuscript has greatly been improved according to my comments, and is judged to be acceptable for publication. Although the proposed BERT model is compatible with other general-domain BERT models, the presented model and experimental results are valuable for other researchers, especially in the medical domain.

- Minor points:

- L52: a corpus for evaluation -> an evaluation set

- L69: when compared to .. -> compared to ..

- L78: for NLP of Japanese clinical .. -> for Japanese clinical .. (?)

- L82: other -> general

- L114: the sentence -> a sentence

- L250: Experiment settings -> Experimental settings

- L261: I can't understand how the cross-validation was performed. 1 of 4:1 split was used for the development set?

- Table2: Juman -> Juman++

- Table6: The number of categories (13) is relatively large. It is better to use FP-1, .. FP-8, FN-1, .. , FN-5, and "FP" and "FN" can be excluded from the interpretations.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Tomohide Shibata

Acceptance letter

Diego Raphael Amancio

29 Oct 2021

PONE-D-20-20418R1

A clinical specific BERT developed using a huge Japanese clinical text corpus

Dear Dr. Kawazoe:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Diego Raphael Amancio

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: 07_responce_r1.docx

    Data Availability Statement

    1. The UTH-BERT model used for the experiment is available at our web site under the Creative Commons 4.0 International License (CC BY-NC-SA 4.0). URL: https://ai-health.m.u-tokyo.ac.jp/uth-bert
    2. The NTCIR-13 dataset used for the experiment is available at the NTCIR web site upon request under the Creative Commons 4.0 International License (CC BY 4.0). URL: http://research.nii.ac.jp/ntcir/index-en.html
    3. The KU-BERT model used for the experiment is available at the web site of Kurohashi-Chu-Murakami Lab, Kyoto University. URL: https://nlp.ist.i.kyoto-u.ac.jp/?ku_bert_japanese
    4. The TU-BERT model used for the experiment is available at the web site of Tohoku NLP lab, Tohoku University. URL: https://github.com/cl-tohoku/bert-japanese

