Recognizing clinical entities in hospital discharge summaries using Structural Support Vector Machines with word representation features
© Tang et al.; licensee BioMed Central Ltd. 2013
Published: 5 April 2013
Named entity recognition (NER) is an important task in clinical natural language processing (NLP) research. Machine learning (ML) based NER methods have shown good performance in recognizing entities in clinical text. Algorithms and features are two factors that largely determine the performance of ML-based NER systems. Conditional Random Fields (CRFs), a sequential labelling algorithm, and Support Vector Machines (SVMs), which are based on large margin theory, are two typical machine learning algorithms that have been widely applied to clinical NER tasks. As for features, syntactic and semantic information about context words is often used in clinical NER systems. However, Structural Support Vector Machines (SSVMs), an algorithm that combines the advantages of both CRFs and SVMs, and word representation features, which capture word-level back-off information derived from large unlabelled corpora by unsupervised algorithms, have not been extensively investigated for clinical text processing. Therefore, the primary goal of this study is to evaluate the use of SSVMs and word representation features in clinical NER tasks.
In this study, we developed SSVMs-based NER systems to recognize clinical entities in hospital discharge summaries, using the data set from the concept extraction task of the 2010 i2b2 NLP challenge. We compared the performance of CRFs- and SSVMs-based NER classifiers with the same feature sets. Furthermore, we extracted two different types of word representation features (clustering-based representation features and distributional representation features) and integrated them into the SSVMs-based clinical NER system. We then reported the performance of SSVMs-based NER systems with different types of word representation features.
Results and discussion
Using the same training (N = 27,837 concepts) and test (N = 45,009 concepts) sets as the challenge, our evaluation showed that the SSVMs-based NER systems achieved better performance than the CRFs-based systems for clinical entity recognition when the same features were used. Both types of word representation features (clustering-based and distributional representations) improved the performance of ML-based NER systems. By combining the two types of word representation features with SSVMs, our system achieved its highest F-measure of 85.82%, which outperformed the best system reported in the challenge by 0.6%. Our results show that SSVMs is a promising algorithm for clinical NLP research, and that both types of unsupervised word representation features are beneficial to clinical NER tasks.
Recently, the rapid growth of electronic health records (EHRs) has led to an unprecedented expansion in the availability of electronic medical data, including clinical narratives. EHR data have been used not only to support computerized clinical applications (e.g., clinical decision support systems), but also to enable clinical and translational research. One challenge in using EHR data is that much detailed patient information is embedded in clinical text, which is not directly accessible to other computerized applications that rely on structured data. Therefore, natural language processing (NLP) technologies, which can extract structured clinical information from narrative text, were introduced to the medical domain more than a decade ago. Many clinical NLP systems have been developed and used in different applications.
Named Entity Recognition (NER), which identifies the boundaries and determines the semantic classes (e.g., person names, locations, or organizations) of words/phrases in free text, is an important task in NLP research. Naturally, recognition of clinical entities such as drugs and diseases in clinical text is one of the fundamental tasks for clinical NLP systems as well. Most existing clinical NLP systems (e.g., MedLEE, SymText/MPlus [3, 4], MetaMap and KnowledgeMap), as well as recent open source ones such as cTAKES and HiTEX, often use rule-based methods that rely on existing biomedical vocabularies to identify clinical entities. More recently, i2b2 (the Center of Informatics for Integrating Biology and the Bedside) at Partners Health Care System has organized several clinical NLP challenges aimed at recognizing clinical entities in text, including the 2009 challenge on medication recognition and the 2010 i2b2 challenge on recognizing medical problem, treatment, and test entities. In the 2009 challenge, rule-based [11, 12], machine learning based [13, 14], and hybrid methods were all developed to extract medication entities. In the 2010 i2b2 NLP challenge, the organizers provided more annotated data; consequently, many participating systems, including all of the top five (with F-measures ranging from 81.3% to 85.2%), were primarily based on machine learning approaches [16–18].
To apply machine learning algorithms to an NER task, annotated data are typically converted into the BIO format, which assigns each word to one of three classes: B (beginning of an entity), I (inside an entity), and O (outside any entity). An NER problem can then be treated as a sequential labeling problem that assigns one of the three class labels to each word. Different machine learning algorithms have been used for NER tasks; among them, Conditional Random Fields (CRFs) and Support Vector Machines (SVMs) are two of the most widely used. In NER tasks on biomedical literature corpora, some studies reported better results using CRFs, while others showed that SVMs performed better. In theory, CRFs is a representative sequence labeling algorithm and thus well suited to the NER problem. SVMs is a robust machine learning algorithm designed for classification tasks based on large margin theory. By default, it ignores the relationships between neighbouring tokens when applied to sequence labeling problems, although researchers have developed methods to incorporate neighbour information into features for SVMs-based NER systems [21, 22]. In 2005, Structural Support Vector Machines (SSVMs) was proposed by Tsochantaridis et al. for structural data such as trees and sequences. It is an SVMs-based discriminative algorithm for structural prediction; SSVMs therefore combines the advantages of both CRFs and SVMs and is suitable for sequence labeling problems. Recently, SSVMs has been applied to NER tasks in different domains and has sometimes shown improved performance compared with CRFs. However, the use of SSVMs for clinical entity recognition has not been extensively evaluated yet.
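In practice, the B and I labels are usually suffixed with the entity type (e.g., B-problem). A minimal sketch of BIO encoding follows; the sentence and entity spans are invented for illustration:

```python
def bio_encode(tokens, entities):
    """Assign typed BIO labels given entity spans as (start, end, type)
    tuples, where `end` is exclusive."""
    labels = ["O"] * len(tokens)
    for start, end, etype in entities:
        labels[start] = "B-" + etype
        for i in range(start + 1, end):
            labels[i] = "I-" + etype
    return labels

tokens = ["Patient", "denies", "chest", "pain", "or", "shortness", "of", "breath"]
entities = [(2, 4, "problem"), (5, 8, "problem")]
print(list(zip(tokens, bio_encode(tokens, entities))))
# "chest pain" is labelled B-problem I-problem; "shortness of breath"
# is labelled B-problem I-problem I-problem; all other tokens are O.
```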
Another important factor that largely affects the performance of ML-based NER systems is the set of features used to train the model. Syntactic (e.g., part-of-speech tags) and semantic (e.g., semantic classes in the UMLS (Unified Medical Language System)) information about context words is often used as features in clinical NER systems. However, word representations, which generate word-level back-off features from large unlabeled corpora by unsupervised algorithms, have not been widely used. Features of this type often capture grammatical or semantic properties, and can effectively represent words that do not appear in the labelled corpus. Different techniques have been used to generate word representation features. For example, Turian et al. classified them into three categories: clustering-based, distributional, and distributed word representations. Word representation features have been used in NLP work in the general English domain and have shown stable improvements on a variety of tasks [25, 26]. However, few studies have applied word representation features to NLP research in the medical domain. de Bruijn et al. used some clustering-based word representation features in their NER system for the 2010 i2b2 NLP challenge and achieved the highest performance in the challenge. Jonnalagadda et al. investigated distributional semantics features for clinical entity recognition, and their evaluation on the same 2010 i2b2 challenge data showed a significant improvement when using these features. Nevertheless, the contribution of different types of word representation features to clinical entity recognition has not been extensively investigated yet.
In our previous work, presented at the ACM Sixth International Workshop on Data and Text Mining in Biomedical Informatics (DTMBIO'12), we explored the use of SSVMs, combined features, clustering-based word representation features, and tag representations for clinical entity recognition. This paper is an extension of that work. In addition to the comparison between SSVMs and CRFs, we implemented two types of word representation features (clustering-based and distributional) and evaluated the contribution of individual and combined word representation features from these two different methods to clinical entity recognition. Our results showed that SSVMs achieved higher performance than CRFs on the 2010 i2b2 concept extraction data set, indicating that it is a promising alternative algorithm for clinical entity recognition. In addition, we demonstrated not only that both clustering-based and distributional word representation features were beneficial to clinical NER tasks, but also that the two types were complementary to each other. When both types of word representation features were combined with SSVMs, our system achieved its highest F-measure of 85.82%, an improvement of 0.4% over the baseline system, which outperformed the best system reported in the challenge by 0.6%.
Table 1. Counts of different types of entities in the training and test data sets used in this study. Concepts (N = 72,846): 27,837 in the training set (349 notes) and 45,009 in the test set (477 notes).
Machine learning algorithms: SSVMs and CRFs
The task of sequence labeling for NER is to find the best label sequence y* = y_1 y_2 ... y_N for a given input sequence x = x_1 x_2 ... x_N and a set of labels L, where y_i ∈ L for 1 ≤ i ≤ N. In CRFs, y* is the tag sequence with the highest probability for the given input sequence. For SSVMs, y* is the tag sequence with the highest score under a linear discriminant function. Both decode the sequence labeling problem over an undirected Markov chain using the Viterbi algorithm. The difference between them is the optimization objective: SSVMs models sequence labeling problems by the large margin method, like SVMs, which gives good generalization ability, while CRFs models them by maximum likelihood estimation of the conditional probability, which can suffer from over-fitting.
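The quadratic program referred to below as Eq. 2 did not survive conversion of this article; in the standard n-slack, margin-rescaling formulation of Tsochantaridis et al., with joint feature map Ψ over n training pairs (x_i, y_i), it reads:

```latex
\min_{\mathbf{w},\,\boldsymbol{\xi}} \;
\frac{1}{2}\lVert \mathbf{w} \rVert^{2} + \frac{C}{n}\sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad
\forall i,\; \forall \mathbf{y}' \neq \mathbf{y}_i:\;
\mathbf{w}^{\top}\!\bigl(\Psi(\mathbf{x}_i,\mathbf{y}_i) - \Psi(\mathbf{x}_i,\mathbf{y}')\bigr)
\;\ge\; \Delta(\mathbf{y}_i,\mathbf{y}') - \xi_i,
\qquad \xi_i \ge 0
```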
where C trades off margin size against training error, Δ(y, y′) is a loss function that computes the distance between y and y′, and ξ is a slack variable for non-separable data.
We used SVMhmm (http://osmot.cs.cornell.edu/svm_hmm/) as the implementation of SSVMs in this study; it solves Eq. 2 by the cutting-plane algorithm [23, 29]. For CRFs, we used the CRF++ package (http://crfpp.sourceforge.net/), which has been widely used for various NER tasks [30, 31]. The pair-wise (one-against-one) strategy was used for multi-class classification in this study.
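Both packages decode with the Viterbi algorithm over a linear chain, as described above. A minimal pure-Python sketch of Viterbi decoding follows; the score structures are illustrative and are not the feature functions of either package:

```python
def viterbi(emission, transition, labels):
    """Find the highest-scoring label sequence for a linear chain.
    emission[t][y]: score of label y at position t;
    transition[(y_prev, y)]: score of moving from y_prev to y
    (missing entries default to 0.0)."""
    n = len(emission)
    # best[t][y] = best score of any path ending in label y at position t
    best = [{y: emission[0].get(y, 0.0) for y in labels}]
    back = []
    for t in range(1, n):
        scores, pointers = {}, {}
        for y in labels:
            prev, s = max(
                ((yp, best[t - 1][yp] + transition.get((yp, y), 0.0))
                 for yp in labels),
                key=lambda p: p[1])
            scores[y] = s + emission[t].get(y, 0.0)
            pointers[y] = prev
        best.append(scores)
        back.append(pointers)
    # Trace back from the best final label
    y = max(labels, key=lambda l: best[-1][l])
    path = [y]
    for pointers in reversed(back):
        y = pointers[y]
        path.append(y)
    return list(reversed(path))
```

In a real tagger the emission scores come from the learned weight vector applied to the token's features, and the transition scores encode label-to-label compatibility (e.g., forbidding O followed by I).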
Tags for entities
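As an illustration of the two tag representations compared in the Results (BIO vs. BIESO, where S marks a single-token entity and E the last token of a multi-token entity), a small conversion sketch, assuming typed BIO input:

```python
def bio_to_bieso(tags):
    """Convert typed BIO tags to BIESO: S marks single-token entities,
    E marks the last token of a multi-token entity."""
    out = []
    for i, tag in enumerate(tags):
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        if tag.startswith("B-"):
            # Single-token entity unless the next tag continues it
            out.append(("B-" if nxt.startswith("I-") else "S-") + tag[2:])
        elif tag.startswith("I-"):
            # Last token of the entity unless the next tag continues it
            out.append(("I-" if nxt.startswith("I-") else "E-") + tag[2:])
        else:
            out.append(tag)
    return out
```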
Features for machine learning classifier
We previously developed a CRFs-based classifier for the 2010 i2b2 challenge, which ranked second among 20 participating teams. In that study, we used four types of features: 1) word-level information such as bag-of-words and orthographic features; 2) syntactic information such as part-of-speech (POS) tags; 3) lexical and semantic information from NLP systems, such as normalized concepts (e.g., UMLS concept unique identifiers (CUIs)) and semantic types; and 4) discourse information such as sections in the clinical notes. In our earlier work we also introduced combined features, generated by combining different types of features (e.g., word and POS), which improved performance. Therefore, in this study, our baseline method included all four types of features as well as the combined features. The focus of this study was then to compare the contribution of two types of word representation features: clustering-based and distributional representations.
1) Clustering-based word representation
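As one concrete, hypothetical illustration of clustering-based representations: Brown clustering assigns each word a hierarchical bit string, and prefixes of that string at several lengths are used as back-off features, so that words in nearby clusters share features. The cluster map and prefix lengths below are invented for illustration; real bit strings come from running Brown clustering on a large unlabelled corpus.

```python
# Hypothetical Brown-cluster bit strings (illustrative only).
brown_clusters = {
    "aspirin": "0010110",
    "ibuprofen": "0010111",
    "pneumonia": "1101001",
}

def cluster_prefix_features(word, lengths=(4, 6)):
    """Back-off features: prefixes of the word's cluster bit string.
    Words in nearby clusters (e.g., aspirin/ibuprofen) share features."""
    bits = brown_clusters.get(word)
    if bits is None:
        return []
    return [f"cluster_p{n}={bits[:n]}" for n in lengths]
```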
2) Distributional word representation
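A minimal sketch of the idea behind distributional representations: a word is characterized by counts of the words co-occurring with it within a fixed window over an unlabelled corpus. Real systems then weight and reduce these raw count vectors; the toy sentences in the test below are invented.

```python
from collections import Counter, defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Build raw co-occurrence count vectors: vec[w][c] counts how often
    context word c appears within `window` tokens of w."""
    vec = defaultdict(Counter)
    for tokens in sentences:
        for i, w in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vec[w][tokens[j]] += 1
    return vec
```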
Experiments and evaluation
For each algorithm (SSVMs vs. CRFs), we developed the NER system using the training data set of 349 annotated clinical notes and evaluated it on the test set of 477 annotated notes. The same feature files were used for both algorithms. Parameters for each algorithm were optimized on the training set via 10-fold cross validation (CV). To evaluate the effect of different types of word representation features, we started with the baseline method that used the four types of features and the combined features described in the previous section, then added the two types of unsupervised word representation features and reported results correspondingly. We also compared the performance of the algorithms when either BIO or BIESO tags were used. All experiments were conducted on computers with Intel(R) Xeon(R) X5670 CPUs @ 2.93 GHz.
Micro-averaged precision, recall, and F-measure over all concepts were reported using the evaluation script provided by the i2b2 challenge organizers. Results were reported for both exact matching, which requires that the starting and ending offsets of a concept be exactly the same as those in the gold standard, and inexact matching, which accepts concepts whose offsets are not exactly the same as in the gold standard but overlap with it. To assess whether the mean F-measures of the two NER systems (SSVMs vs. CRFs) were statistically significantly different, we further conducted a statistical test based on bootstrapping. From the test set, we randomly selected 2,000 sentences with replacement, 200 times, generating 200 bootstrap data sets. For each bootstrap data set, we measured the F-measures of both the SSVMs- and CRFs-based NER systems. We then used the Wilcoxon signed rank test, a nonparametric test for paired samples, to determine whether the difference between the F-measures of the SSVMs- and CRFs-based NER systems was statistically significant (p-value < 0.05).
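The bootstrap-plus-Wilcoxon procedure can be sketched as follows. This is a pure-Python illustration: the Wilcoxon implementation uses a normal approximation and skips tie averaging, and the F-measure callables are placeholders for real system scoring.

```python
import math
import random

def wilcoxon_signed_rank(a, b):
    """Wilcoxon signed-rank test for paired samples, normal approximation
    (a simplification; zero differences are dropped, ties not averaged)."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    ranked = sorted(range(n), key=lambda i: abs(diffs[i]))
    # Sum of ranks (1-based) of the positive differences
    w_plus = sum(r + 1 for r, i in enumerate(ranked) if diffs[i] > 0)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    # Two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

def bootstrap_f_pairs(sentences, f_measure_a, f_measure_b,
                      n_boot=200, size=2000, seed=0):
    """Resample sentences with replacement and score both systems on each
    bootstrap sample; returns paired lists of F-measures."""
    rng = random.Random(seed)
    fa, fb = [], []
    for _ in range(n_boot):
        sample = [rng.choice(sentences) for _ in range(size)]
        fa.append(f_measure_a(sample))
        fb.append(f_measure_b(sample))
    return fa, fb
```

The paired F-measure lists returned by `bootstrap_f_pairs` are then passed to `wilcoxon_signed_rank`, and the difference is called significant when p < 0.05.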
Table 2. Performance of the SSVMs- and CRFs-based NER systems when different features and tag representations were used. Each cell reports F-measure (Recall/Precision) (%); rows cover the baseline feature set (Base) and its combinations with word representation features (Base + Clustering, Base + Distributional, Base + Clustering + Distributional), under both BIO and BIESO tag representations.
Moreover, both clustering-based and distributional word representation features improved the performance of the NER systems. In the BIO setting, adding the clustering-based and distributional word representation features improved the F-measure of the SSVMs-based NER systems by 0.33% and 0.30% respectively. In the BIESO setting, the improvement was 0.32% for either the clustering-based or the distributional word representation features. When both types of word representation features were added to the NER systems, the performance improvements were larger than for any single type, reaching increases of 0.56% and 0.40% in F-measure for the BIO and BIESO settings respectively. When all features and BIESO tags were used, both SSVMs and CRFs reached their highest performance: SSVMs achieved its best exact-matching F-measure of 85.82%, an increase of 0.40% over the baseline method, and CRFs achieved 85.68%, an increase of 0.64% over its baseline.
Table 3. Results by entity type for the best-performing SSVMs and CRFs clinical entity recognition systems, under both exact matching (%) and inexact matching (%).
When comparing SSVMs with CRFs (Tables 2 and 3), we noticed that SSVMs achieved much better recall, although CRFs usually had better precision. For sequential labelling problems, SSVMs not only takes advantage of the relationships between neighbouring words, like CRFs, but also has the strong generalization ability of SVMs. Unlike CRFs, SSVMs does not need to assume an exponential-family distribution over the training and test data. Therefore, SSVMs is better at detecting test samples that do not appear in the training data. For the clinical NER task in this study, SSVMs found more entities that did not appear in the training data than CRFs did. For example, when the basic features and clustering-based word representation features were used, SSVMs detected 890 more entities than CRFs, of which about 500 were true positives. SSVMs therefore achieved better recall than CRFs. Given these differences in precision and recall, the two algorithms can be complementary; an interesting direction is to combine the outputs of SSVMs and CRFs to further improve the performance of clinical NER systems, which is one avenue of our future work.
The performance gain from BIESO tags was also non-trivial (F-measures: 85.42% for BIESO vs. 84.89% for BIO when basic features were used). We noticed that the improvement from BIESO tags came mainly from increased precision, which indicates that the BIESO tag representation helped with the boundary determination of entities. To investigate further, we examined all entities (20,425 single-word entities and 24,584 multi-word entities) in the gold standard. When basic features were used, the precision of the BIESO-based SSVMs system was 91.52% for single-word entities and 87.34% for multi-word entities, while the precision of the BIO-based SSVMs system was 90.94% and 87.33% respectively.
It is not surprising that word representation features such as clustering-based and distributional word representations improved the performance of clinical NER systems, as previous studies have also reported. The performance gain from each type of word representation features was non-trivial (F-measures: 85.74% with clustering-based or distributional word representation features vs. 85.42% for the baseline). However, Jonnalagadda et al. reported a larger increase (about 2% in F-measure) from distributional semantics features on the same i2b2 data set. Although the difference in performance gain could be due to different methods for generating word representation features, we believe it is more related to the baseline performance: in Jonnalagadda et al.'s experiment, the baseline method had an F-measure of 80.3%, while our baseline achieved a much higher F-measure of 85.42%, leaving less room for further improvement. We noticed that the improvement from word representation features came mainly from increased recall, which indicates that unsupervised word representation features helped to detect more correct entities, especially those that did not appear in the training data set. Moreover, the total performance gain from combining the two types of word representation features was slightly higher than the gain from either alone, indicating that the two word representation methods may be complementary to each other. To further improve NER performance, it is worth exploring combinations of more types of word representation features. In the future, we plan to investigate another type of word representation feature, distributed word representations such as those from Canonical Correlation Analysis (CCA), as well as other algorithms for generating word representations in the NLP domain, such as the Hyperspace Analogue to Language (HAL).
In this study, we investigated the use of SSVMs and clustering-based and distributional word representations for clinical entity recognition. Our evaluation using the 2010 i2b2 NLP challenge data showed that SSVMs could achieve better F-measures than CRFs for detecting clinical entities from discharge summaries, indicating its great potential for clinical NLP research. Moreover, we demonstrated not only that both clustering-based and distributional word representation features were beneficial to clinical NER tasks, but also that these two types of word representation features were complementary to each other.
This study is supported in part by grants from NLM R01-LM007995 and NCI R01CA141307. We also thank the 2010 i2b2 NLP challenge organizers for making the annotated data set available.
This work is based on an earlier work: “Clinical entity recognition using structural support vector machines with rich features”, in Proceedings of the ACM Sixth International Workshop on Data and Text Mining in Biomedical Informatics, 2012 © ACM, 2012. http://doi.acm.org/10.1145/2390068.2390073
The publication costs for this article are funded by the corresponding author.
This article has been published as part of BMC Medical Informatics and Decision Making Volume 13 Supplement 1, 2013: Proceedings of the ACM Sixth International Workshop on Data and Text Mining in Biomedical Informatics (DTMBio 2012). The full contents of the supplement are available online at http://www.biomedcentral.com/bmcmedinformdecismak/supplements/13/S1.
- Friedman C, Alderson PO, Austin JH, Cimino JJ, Johnson SB: A general natural-language text processor for clinical radiology. J Am Med Inform Assoc. 1994, 1: 161-174. 10.1136/jamia.1994.95236146.
- Meystre SM, Savova GK, Kipper-Schuler KC, Hurdle JF: Extracting information from textual documents in the electronic health record: a review of recent research. Yearb Med Inform. 2008, 128-144.
- Haug PJ, Koehler S, Lau LM, Wang P, Rocha R, Huff SM: Experience with a mixed semantic/syntactic parser. Proc Annu Symp Comput Appl Med Care. 1995, 284-288.
- Haug PJ, Christensen L, Gundersen M, Clemons B, Koehler S, Bauer K: A natural language parsing system for encoding admitting diagnoses. Proc AMIA Annu Fall Symp. 1997, 814-818.
- Aronson AR, Lang FM: An overview of MetaMap: historical perspective and recent advances. J Am Med Inform Assoc. 2010, 17: 229-236.
- Denny JC, Miller RA, Johnson KB, Spickard A: Development and evaluation of a clinical note section header terminology. AMIA Annu Symp Proc. 2008, 156-160.
- Savova GK, Masanz JJ, Ogren PV, Zheng J, Sohn S, Kipper-Schuler KC, Chute CG: Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications. J Am Med Inform Assoc. 2010, 17: 507-513. 10.1136/jamia.2009.001560.
- Zeng QT, Goryachev S, Weiss S, Sordo M, Murphy SN, Lazarus R: Extracting principal diagnosis, co-morbidity and smoking status for asthma research: evaluation of a natural language processing system. BMC Med Inform Decis Mak. 2006, 6: 30. 10.1186/1472-6947-6-30.
- Uzuner O, Solti I, Cadag E: Extracting medication information from clinical text. J Am Med Inform Assoc. 2010, 17: 514-518. 10.1136/jamia.2010.003947.
- Uzuner O, South BR, Shen S, DuVall SL: 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. J Am Med Inform Assoc. 2011, 18: 552-556. 10.1136/amiajnl-2011-000203.
- Doan S, Bastarache L, Klimkowski S, Denny JC, Xu H: Integrating existing natural language processing tools for medication extraction from discharge summaries. J Am Med Inform Assoc. 2010, 17: 528-531. 10.1136/jamia.2010.003855.
- Spasic I, Sarafraz F, Keane JA, Nenadic G: Medication information extraction with linguistic pattern matching and semantic rules. J Am Med Inform Assoc. 2010, 17: 532-535. 10.1136/jamia.2010.003657.
- Patrick J, Li M: High accuracy information extraction of medication information from clinical notes: 2009 i2b2 medication extraction challenge. J Am Med Inform Assoc. 2010, 17: 524-527. 10.1136/jamia.2010.003939.
- Li Z, Liu F, Antieau L, Cao Y, Yu H: Lancet: a high precision medication event extraction system for clinical text. J Am Med Inform Assoc. 2010, 17: 563-567. 10.1136/jamia.2010.004077.
- Meystre SM, Thibault J, Shen S, Hurdle JF, South BR: Textractor: a hybrid system for medications and reason for their prescription extraction from clinical text documents. J Am Med Inform Assoc. 2010, 17: 559-562. 10.1136/jamia.2010.004028.
- de Bruijn B, Cherry C, Kiritchenko S, Martin J, Zhu X: Machine-learned solutions for three stages of clinical information extraction: the state of the art at i2b2 2010. J Am Med Inform Assoc. 2011, 18: 557-562. 10.1136/amiajnl-2011-000150.
- Jiang M, Chen Y, Liu M, Rosenbloom ST, Mani S, Denny JC, Xu H: A study of machine-learning-based approaches to extract clinical entities and their assertions from discharge summaries. J Am Med Inform Assoc. 2011, 18: 601-606. 10.1136/amiajnl-2011-000163.
- Torii M, Wagholikar K, Liu H: Using machine learning for concept extraction on clinical documents from multiple data sources. J Am Med Inform Assoc. 2011, 18: 580-587. 10.1136/amiajnl-2011-000155.
- Li D, Kipper-Schuler K, Savova G: Conditional random fields and support vector machines for disorder named entity recognition in clinical texts. Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing. 2008, Columbus, Ohio: Association for Computational Linguistics, 94-95.
- Wu Y-C, Fan T-K, Lee Y-S, Yen S-J: Extracting named entities using support vector machines. Proceedings of the 2006 International Conference on Knowledge Discovery in Life Science Literature. 2006, Singapore: Springer-Verlag, 91-103.
- Kudoh T, Matsumoto Y: Use of support vector learning for chunk identification. Proceedings of the 2nd Workshop on Learning Language in Logic and the 4th Conference on Computational Natural Language Learning - Volume 7. 2000, Lisbon, Portugal: Association for Computational Linguistics, 142-144.
- Kudo T, Matsumoto Y: Chunking with support vector machines. Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies. 2001, Pittsburgh, Pennsylvania: Association for Computational Linguistics, 1-8.
- Tsochantaridis I, Joachims T, Hofmann T, Altun Y: Large margin methods for structured and interdependent output variables. J Mach Learn Res. 2005, 6: 1453-1484.
- Turian J, Ratinov L, Bengio Y: Word representations: a simple and general method for semi-supervised learning. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. 2010, Uppsala, Sweden: Association for Computational Linguistics, 384-394.
- Miller S, Guinness J, Zamanian A: Name tagging with word clusters and discriminative training. HLT-NAACL. 2004, 337-342.
- Turian J, Ratinov L, Bengio Y: Word representations: a simple and general method for semi-supervised learning. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. 2010, Uppsala, Sweden: Association for Computational Linguistics, 384-394.
- Jonnalagadda S, Cohen T, Wu S, Gonzalez G: Enhancing clinical concept extraction with distributional semantics. J Biomed Inform. 2012, 45: 129-140. 10.1016/j.jbi.2011.10.007.
- Tang B, Cao H, Wu Y, Jiang M, Xu H: Clinical entity recognition using structural support vector machines with rich features. Proceedings of the ACM Sixth International Workshop on Data and Text Mining in Biomedical Informatics. 2012, New York: ACM, 13-20. 10.1145/2390068.2390073.
- Joachims T, Finley T, Yu C-NJ: Cutting-plane training of structural SVMs. Mach Learn. 2009, 77: 27-59. 10.1007/s10994-009-5108-8.
- He Y, Kayaalp M: Biological entity recognition with conditional random fields. AMIA Annu Symp Proc. 2008, 293-297.
- Song Y, Kim E, Lee GG, Yi BK: POSBIOTM-NER: a trainable biomedical named-entity recognition system. Bioinformatics. 2005, 21: 2794-2796. 10.1093/bioinformatics/bti414.
- Sang EFTK, Veenstra J: Representing text chunks. Ninth Conference of the European Chapter of the Association for Computational Linguistics. 1999, 173-179.
- Brown PF, deSouza PV, Mercer RL, Pietra VJD, Lai JC: Class-based n-gram models of natural language. Comput Linguist. 1992, 18: 467-479.
- Lund K, Burgess C: Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, & Computers. 1996, 28: 203-208.
- Wilcoxon F: Individual comparisons by ranking methods. Biometrics Bulletin. 1945, 1: 4-
- Dhillon PS, Foster D, Ungar L: Multi-view learning of word embeddings via CCA. NIPS'11. 2011, 199-207.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.