Fig. 2 From: A multi-layer soft lattice based model for Chinese clinical named entity recognition

BERT visualization: a attention-head view of BERT for the inputs; the left and center figures show different layers/attention heads, while the right figure shows the same layer/head as the center figure, but with the Sentence A → Sentence B filter selected [31]. b Model view of BERT for the same inputs, layer 4. c Neuron view of BERT for layer 0, head 0