Author manuscript; available in PMC: 2021 Jan 14.
Published in final edited form as: J Biomed Inform. 2020 Jun 18;108:103473. doi: 10.1016/j.jbi.2020.103473

Fig. 4.

Baseline model architecture. For each word, a character representation is fed into the input layer of the Bi-LSTM network. For each word, x_we denotes the pre-trained word embedding, x_ce the character embedding, and x_ind the indicator embedding. The final predictions for the spatial role labels in a sentence are made by combining the Bi-LSTM's final score with the CRF score.
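The baseline described in the caption can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation: all dimensions, vocabulary sizes, and the `BiLSTMTagger` name are hypothetical, the character representation is assumed to come from a small character-level Bi-LSTM, and the CRF layer (which would consume the per-token scores below) is omitted for brevity.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Sketch of the caption's baseline: concatenated word (x_we),
    character (x_ce), and indicator (x_ind) embeddings fed to a Bi-LSTM.
    All hyperparameters here are illustrative placeholders."""

    def __init__(self, vocab_size, char_vocab_size, n_labels,
                 word_dim=100, char_dim=25, ind_dim=10, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)        # x_we
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        # Character-level Bi-LSTM producing a per-word character representation (x_ce)
        self.char_lstm = nn.LSTM(char_dim, char_dim,
                                 batch_first=True, bidirectional=True)
        self.ind_emb = nn.Embedding(3, ind_dim)                   # x_ind
        self.lstm = nn.LSTM(word_dim + 2 * char_dim + ind_dim, hidden,
                            batch_first=True, bidirectional=True)
        # Per-token label scores; in the full model these would be
        # combined with CRF transition scores for sequence decoding.
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, words, chars, inds):
        # words, inds: (batch, seq); chars: (batch, seq, max_word_len)
        b, s, w = chars.shape
        ce = self.char_emb(chars.view(b * s, w))
        _, (h, _) = self.char_lstm(ce)
        # Concatenate final forward/backward hidden states per word
        char_rep = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
        x = torch.cat([self.word_emb(words), char_rep, self.ind_emb(inds)], dim=-1)
        out, _ = self.lstm(x)
        return self.out(out)  # (batch, seq, n_labels)
```

At inference, the full model would pass these emission scores to a CRF layer and take the highest-scoring label sequence, rather than an independent argmax per token.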