Predicting sample size required for classification performance
 Rosa L Figueroa†^{1},
 Qing Zeng-Treitler†^{2} (corresponding author),
 Sasikiran Kandula†^{2} and
 Long H Ngo†^{3}
https://doi.org/10.1186/1472-6947-12-8
© Figueroa et al; licensee BioMed Central Ltd. 2012
Received: 30 June 2011
Accepted: 15 February 2012
Published: 15 February 2012
Abstract
Background
Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target.
Methods
We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and its confidence interval at larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control, we used an unweighted fitting method.
Results
A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve a mean absolute error and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline unweighted method (p < 0.05).
Conclusions
This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an unweighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
Background
The availability of biomedical data has increased during the past decades. In order to process such data and extract useful information from it, researchers have been using machine learning techniques. However, to generate predictive models, the supervised learning techniques need an annotated training sample. Literature suggests that the predictive power of the classifiers is largely dependent on the quality and size of the training sample [1–6].
Human-annotated data is a scarce resource and its creation is expensive in terms of both money and time. Unannotated clinical notes, for example, are abundant, but labeling text corpora from the clinical domain requires a group of reviewers with domain expertise, so only a tiny fraction of the available clinical notes can be annotated.
The process of creating an annotated sample is initiated by selecting a subset of data; the question is: what should the size of the training subset be to reach a certain target classification performance? Or to phrase it differently: what is the expected classification performance for a given training sample size?
Problem formulation
Our interest in sample size prediction stemmed from our experiments with active learning. Active learning is a sampling technique that aims to minimize the size of the training set for classification; its main goal is to achieve, with a smaller training set, performance comparable to that of passive learning. In the iterative process, users need to decide when to stop or continue the data labeling and classification process. Although the termination criterion is an issue for both passive and active learning, identifying an optimal termination point and training sample size may be more important in active learning. This is because the passive and active learning curves will, given a sufficiently large sample size, eventually converge, diminishing the advantage of active learning over passive learning. Relatively few papers have been published on termination criteria for active learning [7–9]. The published criteria are generally based on target accuracy, classifier confidence, uncertainty estimation, and minimum expected error; as such, they do not directly predict a sample size. In addition, active learning algorithms differ in performance depending on the algorithm and classification task, and can sometimes perform even worse than passive learning. In our prior work on medical text classification, we investigated several active learning sampling methods and observed the need to predict future classification performance in order to select the best sampling algorithm and sample size [10, 11]. In this paper we present a new method that predicts the performance at an increased sample size. This method models the observed classifier performance as a function of the training sample size, and uses the fitted curve to forecast the classifier's future behaviour.
Previous and related work
Sample size determination
Our method can be viewed as a type of sample size determination (SSD) method that determines sample size for study design. There are a number of different SSD methods to meet researchers' specific data requirements and goals [12–14]. Determining the sample size required to achieve sufficient statistical power to reject a null hypothesis is a standard approach [13–16]. Cohen defines statistical power as the probability that a test will "yield statistically significant results", i.e. the probability that the null hypothesis will be rejected when the alternative hypothesis is true [17]. These SSD methods have been widely used in bioinformatics and clinical studies [15, 18–21]. Other methods attempt to find the sample size needed to reach a target performance (e.g. a high correlation coefficient) [22–25]. Within this category we find methods that predict the sample size required for a classifier to reach a particular accuracy [2, 4, 26]. There are two main approaches to predict the sample size required to achieve a specific classifier performance. Dobbin et al. describe a "model-based" approach to predict the number of samples needed for classifying microarray data [2]; it determines sample size based on standardized fold change, class prevalence, and the number of genes or features on the arrays. A more generic approach is to fit a classifier's learning curve, created from empirical data, to an inverse power law model. This approach is based on findings from prior studies showing that classifier learning curves generally follow the inverse power law [27]. Examples of this approach include the algorithms proposed by Mukherjee et al. and others [1, 28–30]. Since our proposed method is a variant of this approach, we describe the prior work on learning curve fitting in more detail.
Learning curve fitting
Mukherjee et al. experimented with fitting inverse power laws to empirical learning curves to forecast performance at larger sample sizes [1]. They also discussed a permutation test procedure to assess the statistical significance of classification performance for a given dataset size. The method was tested on several relatively small microarray data sets (n = 53 to 280), and the differences between the predicted and actual classification errors were found to be in the range of 1% to 7%. Boonyanunta et al., on the other hand, conducted curve fitting on several much larger datasets (n = 1,000) using a nonlinear model consistent with the inverse power law [28]; the mean absolute errors were very small, generally below 1%. Our proposed method is similar to that of Mukherjee et al., with two differences: 1) we conducted weighted curve fitting to favor the prediction of future performance; 2) we calculated the confidence interval for the fitted curve rather than fitting two additional curves to the lower and upper quartile data points.
Progressive sampling
Another research area related to our work is progressive sampling. Both active learning and progressive sampling start with a very small batch of instances and progressively increase the training data size until a termination criterion is met [31–36]. Active learning algorithms seek to select the most informative cases for training; several of the learning curves used in this paper were generated using active learning techniques. Progressive sampling, on the other hand, focuses more on minimizing the amount of computation for a given performance target. For instance, Provost et al. proposed progressive sampling using a geometric-progression-based sampling schedule [31]. They also explored convergence detection methods for progressive sampling and selected one that uses linear regression with local sampling (LRLS). In LRLS, the slope of a linear regression line built with r points sampled around the neighborhood of the last sample size is compared to zero; if it is close enough to zero, convergence is detected. The main difference between progressive sampling and SSD for classifiers is that progressive sampling assumes an unlimited number of annotated samples and does not predict the sample size required to reach a specific performance target.
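The LRLS convergence check described above can be sketched in a few lines. This is an illustrative reading, not code from Provost et al.; the window size `r` and the slope tolerance `tol` are hypothetical choices.

```python
import numpy as np

def lrls_converged(sizes, accuracies, r=5, tol=1e-4):
    """Linear regression with local sampling (LRLS): fit a line to the
    last r (sample size, accuracy) points and declare convergence when
    the slope of that line is close enough to zero."""
    x = np.asarray(sizes[-r:], dtype=float)
    y = np.asarray(accuracies[-r:], dtype=float)
    slope = np.polyfit(x, y, 1)[0]  # slope of the least-squares line
    return abs(slope) < tol

# A flat tail triggers convergence; a still-rising curve does not.
flat = lrls_converged([100, 200, 300, 400, 500],
                      [0.860, 0.861, 0.860, 0.861, 0.860])
rising = lrls_converged([100, 200, 300, 400, 500],
                        [0.5, 0.6, 0.7, 0.8, 0.9])
```

In practice `tol` would be chosen relative to the accuracy scale and the noise of the curve.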
Methods
In this section we describe a new fitting algorithm to predict classifier performance from a learning curve. The algorithm fits an inverse power law model to a small set of initial points of a learning curve in order to predict the classifier's performance at larger sample sizes. Evaluation was carried out on 12 learning curves, each fitted at dozens of sample sizes, and predictions were validated using standard goodness-of-fit measures.
Algorithm description
The algorithm consists of three steps:
 1) Learning curve creation
 2) Model fitting
 3) Sample size prediction
Learning curve creation
Assuming the target performance measure is classification accuracy, a learning curve that characterizes accuracy (Y_{acc}) as a function of the training set size (X) is created. To obtain the data points (x_j, y_j), classifiers are trained and tested at increasing training set sizes x_j. With a batch size k, x_j = k·j for j = 1, 2, ..., m, i.e. the training set sizes are {k, 2k, 3k, ..., k·m}. The classification accuracy y_j, i.e. the proportion of correctly classified samples, can be calculated at each training sample size x_j using an independent test set or through n-fold cross-validation.
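The curve-creation step can be sketched as follows. This is a minimal sketch, not the authors' implementation: it assumes scikit-learn (the paper used WEKA), a synthetic dataset, and a linear SVM, and `learning_curve_points` is a hypothetical helper name.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def learning_curve_points(X, y, k=16, m=10, folds=3):
    """Build the data points (x_j, y_j) of a learning curve:
    train on the first x_j = k * j instances and estimate accuracy
    y_j by n-fold cross-validation."""
    points = []
    for j in range(1, m + 1):
        xj = k * j
        acc = cross_val_score(LinearSVC(), X[:xj], y[:xj],
                              cv=folds, scoring="accuracy").mean()
        points.append((xj, acc))
    return points

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
pts = learning_curve_points(X, y, k=16, m=10)  # sizes 16, 32, ..., 160
```

Averaging each point over repeated runs, as done in the paper, would smooth the resulting curve.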
Model fitting and parameter identification
Let us define the set Ω as the collection of data points (x_j, y_j) on an empirical learning curve. Ω can be partitioned into two subsets: Ω_t, used to fit the model, and Ω_v, used to validate the fitted model. Note that in real-life applications only Ω_t will be available. For example, at sample size x_s, Ω_t = {(x_j, y_j) | x_j ≤ x_s} and Ω_v = {(x_j, y_j) | x_j > x_s}.
Using Ω_t, we applied nonlinear weighted least squares optimization, implemented with the nl2sol routine from the Port Library [39], to fit the mathematical model in Eq. (1) and find the parameter vector β = {a, b, c}.
We also assigned weights to the data points in Ω_t. As described earlier, each data point on the learning curve is associated with a sample size; we postulated that the classifier's performance at a larger training sample size is more indicative of its future performance. To account for this, a data point (x_j, y_j) ∈ Ω_t is assigned the normalized weight j/m, where m is the cardinality of Ω.
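As a sketch of this fitting step (not the authors' nl2sol implementation): SciPy's `curve_fit` performs nonlinear weighted least squares, and the j/m weights can be passed through its `sigma` argument, which `curve_fit` treats as inversely proportional to a point's weight. The model form y(x) = (1 − a) − b·x^c is our assumed reading of the inverse power law in Eq. (1); the exact parameterization is given in Additional file 1 and may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b, c):
    # Assumed inverse power law form for Eq. (1); accuracy tends to
    # (1 - a) as the sample size x grows (with b > 0, c < 0).
    return (1.0 - a) - b * np.power(x, c)

def fit_weighted(xs, ys):
    """Weighted nonlinear least squares: point j gets weight j/m, so
    later (larger-sample) points influence the fit more. curve_fit
    minimizes sum(((y - f(x)) / sigma)^2), hence sigma = 1/sqrt(weight)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    m = len(xs)
    sigma = 1.0 / np.sqrt(np.arange(1, m + 1) / m)
    popt, pcov = curve_fit(power_law, xs, ys, p0=[0.05, 1.0, -0.5],
                           sigma=sigma, maxfev=10000)
    return popt, pcov

xs = np.arange(16, 176, 16)             # 16, 32, ..., 160 (k = 16)
ys = (1 - 0.04) - 0.8 * xs ** -0.7      # noiseless synthetic curve
(a, b, c), _ = fit_weighted(xs, ys)     # should recover a, b, c
```

On this noiseless example the fit recovers the generating parameters; on real curves the weighting mainly pulls the fit toward the later, more informative points.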
Performance prediction
In this step, the mathematical model (Eq. (1)), together with the estimated parameters {a, b, c}, is applied to unseen sample sizes and the resulting predictions are compared with the data points in Ω_v. In other words, the fitted curve is used to extrapolate the classifier's performance at larger sample sizes. Additionally, the 95% confidence interval of the estimated accuracy ŷ_s is calculated using the Hessian matrix of second-order derivatives of the function describing the curve. See Appendix 1 (Additional file 1) for more details on the implementation of the methods.
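One way to sketch this extrapolation with a confidence band is the delta method: propagate the parameter covariance returned by the fit through the gradient of the model. This is an illustration of the general idea, not the paper's exact Hessian-based computation, and it again assumes the model form y(x) = (1 − a) − b·x^c.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b, c):
    # Assumed inverse power law form; the paper's exact Eq. (1) is in
    # Additional file 1 and may be parameterized differently.
    return (1.0 - a) - b * np.power(x, c)

def predict_with_ci(x, popt, pcov, z=1.96):
    """Extrapolate the fitted curve to new sample sizes x and attach a
    delta-method 95% CI: var(y_hat) = g' C g, where g is the gradient
    of the model in (a, b, c) and C the parameter covariance."""
    a, b, c = popt
    x = np.asarray(x, dtype=float)
    xc = np.power(x, c)
    grad = np.vstack([-np.ones_like(x), -xc, -b * xc * np.log(x)])
    var = np.einsum('in,ij,jn->n', grad, pcov, grad)
    half = z * np.sqrt(np.maximum(var, 0.0))
    y = power_law(x, *popt)
    return y, y - half, y + half

# Fit on a noisy synthetic learning curve, then extrapolate.
rng = np.random.default_rng(0)
xs = np.arange(16, 336, 16)
ys = (1 - 0.05) - 0.9 * xs ** -0.6 + rng.normal(0, 0.002, xs.size)
popt, pcov = curve_fit(power_law, xs, ys, p0=[0.05, 1.0, -0.5], maxfev=10000)
y, lo_ci, hi_ci = predict_with_ci(np.array([500.0, 1000.0]), popt, pcov)
```

As the Discussion notes, such intervals can be optimistic when the model is applied to genuinely unseen data.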
Evaluation
Datasets
We evaluated our algorithm using three sets of data. In the first two sets (D1 and D2), observations are smoking-related sentences from a set of patient discharge summaries in the Partners HealthCare research patient data repository (RPDR). Each observation was manually annotated with smoking status. D1 contains 7,016 sentences and 350 word features to distinguish between smokers (5,333 sentences) and non-smokers (1,683 sentences). D2 contains 8,449 sentences and 350 word features to discriminate between past smokers (5,109 sentences) and current smokers (3,340 sentences).
The third data set (D3) is the waveform-5000 dataset from the UCI machine learning repository [40], which contains 5,000 instances, 21 features, and three classes of waves (1,657 instances of w1, 1,647 of w2, and 1,696 of w3). The classification goal is binary: discriminate the first class of waves from the other two.
Each dataset was randomly split into a training set and a testing set. Test sets for D1 and D2 contained 1,000 instances each, while 2,500 instances were set apart as the test set for D3. On the three datasets, we used 4 different sampling methods (three active learning algorithms and random selection, i.e. passive learning) together with a support vector machine classifier with a linear kernel from WEKA [41] (complexity constant set to 1, epsilon set to 1.0E-12, tolerance parameter 1.0E-3, and normalization/standardization options turned off) to generate a total of 12 actual learning curves for Y_{acc}. The active learning methods used are:
- Distance (DIST), a simple-margin method which samples training instances based on their proximity to the support vector machine (SVM) hyperplane;
- Diversity (DIV), which selects instances based on their diversity/dissimilarity from instances already in the training set, measured as the simple cosine distance between candidate instances and the selected set, in order to reduce information redundancy; and
- Combined (CMB), which combines the DIST and DIV methods.
The initial sample size was set to 16, with an increment size of 16 as well, i.e. k = 16. Detailed information about the three algorithms can be found in Appendix 2 (see Additional file 2) and in the literature [10, 35, 42].
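The simple-margin (DIST) selection rule can be sketched as follows. This assumes scikit-learn's `LinearSVC` rather than the WEKA classifier used in the paper, and `dist_select` is a hypothetical helper name.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

def dist_select(clf, X_pool, batch=16):
    """DIST selection: return indices of the `batch` unlabeled pool
    instances closest to the current SVM hyperplane, i.e. those with
    the smallest absolute decision-function value."""
    margins = np.abs(clf.decision_function(X_pool))
    return np.argsort(margins)[:batch]

X, y = make_classification(n_samples=300, random_state=1)
clf = LinearSVC().fit(X[:32], y[:32])        # current labeled set
chosen = dist_select(clf, X[32:], batch=16)  # next batch to annotate
```

The selected batch would then be labeled, added to the training set, and the classifier retrained, yielding the next point on the learning curve.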
Each experiment was repeated 100 times and Y_{acc} was averaged at each batch size over the 100 runs to obtain the data points (x_j, y_j) of the learning curve.
Goodness of fit measures
On each curve, we started the curve fitting and prediction experiments at |Ω_t| = 5 points, i.e. at a sample size of 80 instances. In subsequent experiments, |Ω_t| was increased by 1 until it reached 62 points, i.e. a sample size of 992 instances.
To evaluate our method, we used as a baseline the unweighted least squares optimization algorithm described by Mukherjee et al. [1]. A paired t-test was used to compare the RMSE and MAE of the two methods across all experiments. The alternative hypothesis is that the mean RMSE and MAE of the baseline method are greater than those of our weighted fitting method.
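The goodness-of-fit measures and the one-sided paired comparison can be sketched with SciPy. The `alternative="greater"` argument of `ttest_rel` requires SciPy 1.6 or later; the example values below are the D1 rows of the results table, used purely for illustration.

```python
import numpy as np
from scipy.stats import ttest_rel

def rmse(pred, obs):
    """Root mean squared error between predicted and observed values."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def mae(pred, obs):
    """Mean absolute error between predicted and observed values."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.mean(np.abs(pred - obs)))

# One-sided paired t-test: H1 is that the baseline's per-curve errors
# exceed the weighted method's.
baseline = [2.57, 1.15, 1.16, 2.01]
weighted = [1.52, 0.60, 0.61, 1.15]
t_stat, p_value = ttest_rel(baseline, weighted, alternative="greater")
```

In the paper the pairing is per experiment rather than per curve, but the test statistic is computed the same way.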
Results
Using the 3 datasets and 4 sampling methods, 12 actual learning curves were generated. We fitted the inverse power law model to each curve using an increasing number of data points (sample sizes of 80 to 992 in D1 and D2, and 80 to 480 in D3). A total of 568 experiments were conducted. In each experiment, the predicted performance was compared to the actual observed performance.
Average RMSE (%) for the baseline and weighted fitting methods.

| Curve | Weighted [min–max] | Baseline [min–max] | P |
|---------|--------------------|--------------------|---------|
| D1-DIV | 1.52 [0.04–8.44] | 2.57 [0.82–8.70] | 2.7E-44 |
| D1-CMB | 0.60 [0.06–4.61] | 1.15 [0.44–4.94] | 2.7E-32 |
| D1-DIST | 0.61 [0.09–5.25] | 1.16 [0.22–5.50] | 1.9E-22 |
| D1-RND | 1.15 [0.10–11.37] | 2.01 [0.38–11.29] | 8.2E-19 |
| D2-DIV | 1.33 [0.28–3.95] | 1.63 [0.73–3.53] | 4.6E-09 |
| D2-CMB | 0.29 [0.01–0.67] | 0.38 [0.19–0.76] | 3.3E-04 |
| D2-DIST | 0.39 [0.04–1.74] | 0.50 [0.22–2.11] | 2.7E-03 |
| D2-RND | 0.46 [0.13–4.99] | 0.56 [0.16–4.44] | 6.1E-04 |
| D3-DIV | 0.34 [0.05–1.22] | 0.43 [0.04–0.93] | 4.6E-02 |
| D3-CMB | 0.47 [0.09–1.66] | 0.65 [0.21–1.60] | 6.0E-09 |
| D3-DIST | 0.38 [0.10–1.24] | 0.49 [0.20–1.21] | 5.1E-10 |
| D3-RND | 0.32 [0.15–1.48] | 0.32 [0.11–1.75] | 6.3E-01 |
Discussion
In this paper we described a relatively simple method to predict a classifier's performance at a given sample size through the creation and modelling of a learning curve. As prior research suggests, the learning curves of machine classifiers generally follow the inverse power law [1, 27]. Given the goal of predicting future performance, our method assigns higher weights to data points associated with larger sample sizes. In evaluation, the weighted method resulted in more accurate predictions (p < 0.05) than the unweighted method described by Mukherjee et al.
The evaluation experiments were conducted on free text and waveform data, using passive and active learning algorithms. Prior studies typically used a single type of data (e.g. microarray or text) and a single sampling algorithm (i.e. random sampling). By using a variety of data and sampling methods, we were able to test our method on a diverse collection of learning curves and assess its generalizability. For the majority of curves, the RMSE fell below 0.01 with a relatively small sample size of 200 used for curve fitting. We observed minimal differences between the RMSE and MAE values, which indicates a low variance of the errors.
Our method also provides confidence intervals for the predicted curves. As shown in Figure 2, the width of the confidence interval tracks the prediction error: when the predicted value deviates more from the actual observation, the confidence interval tends to be wider. As such, the confidence interval provides an additional measure to help users decide on a sample size for additional annotation and classification. In our study, confidence intervals were calculated using the variance-covariance matrix of the fitted parameters. Prior studies have noted that this variance is not an unbiased estimator when a model is tested on new data [1]; hence, our confidence intervals may sometimes be optimistic.
A major limitation of the method is that an initial set of annotated data is needed. This is a shortcoming shared by other SSD methods for machine classifiers. On the other hand, depending on what confidence interval is deemed acceptable, the initial annotated sample can be of moderate size (e.g. n = 100-200).
The initial set of annotated data is used to create a learning curve. The curve contains j data points, with a starting sample size of m_0 and a step size of k; the total sample size is m = m_0 + (j - 1)·k. The values of m_0 and k are determined by the user. When m_0 and k are assigned the same value, m = j·k. In active learning, a typical experiment may set m_0 to 16 or 32 and k to 16 or 32. For very small data sets, one may consider using m_0 = 4 and k = 4. Empirically, we found that j needed to be greater than or equal to 5 for the curve fitting to be effective.
In many studies, as well as ours, the learning curves appear smooth because each data point on the curve is the average of multiple experiments (e.g. 10-fold cross-validation repeated 100 times). With fewer experiments (e.g. one round of training and testing per data point), the curve will not be as smooth. We expect the model fitting to be more accurate and the confidence interval to be narrower on smoother curves, though the fitting process remains the same for less smooth curves.
Although the curve fitting can be done in real time, the time needed to create the learning curve depends on the classification task, batch size, number of features, and machine processing speed, among other factors. The longest learning-curve experiment we performed, using active learning as the sample selection method, ran on a single-core laptop for several days, though most experiments needed only a few hours.
For future work, we intend to integrate the sample size prediction function into our NLP software to guide users in text mining and annotation tasks. In clinical NLP research, annotation is usually expensive and the sample size decision is often made based on budget rather than expected performance. It is common for researchers to select an initial number of samples in an ad hoc fashion, annotate the data, and train a model; they then increase the number of annotations if the target performance is not reached, based on the vague but generally correct belief that performance will improve with a larger sample size. The amount of improvement, though, cannot be known without the modelling effort described in this paper. Predicting the classification performance for a particular sample size allows users to evaluate the cost-effectiveness of additional annotations in study design. Specifically, we plan to incorporate the prediction as part of an active learning and/or interactive learning process.
Conclusions
This paper describes a simple sample size prediction algorithm that conducts weighted fitting of learning curves. When tested on free text and waveform classification with active and passive sampling methods, the algorithm outperformed the unweighted algorithm described in previous literature in terms of goodness of fit measures. This algorithm can help users make an informed decision in sample size selection for machine learning tasks, especially when annotated data are expensive to obtain.
Acknowledgements
The authors wish to acknowledge CONICYT (Chilean National Council for Science and Technology Research), the MECESUP program, and Universidad de Concepcion for their support of this research. This research was funded in part by CHIR HIR 08374 and VINCI HIR08204.
References
 1. Mukherjee S, Tamayo P, Rogers S, Rifkin R, Engle A, Campbell C, Golub TR, Mesirov JP: Estimating dataset size requirements for classifying DNA microarray data. J Comput Biol. 2003, 10(2): 119-142. 10.1089/106652703321825928
 2. Dobbin K, Zhao Y, Simon R: How Large a Training Set is Needed to Develop a Classifier for Microarray Data?. Clinical Cancer Research. 2008, 14(1): 108-114. 10.1158/1078-0432.CCR-07-0443
 3. Tam VH, Kabbara S, Yeh RF, Leary RH: Impact of sample size on the performance of multiple-model pharmacokinetic simulations. Antimicrobial Agents and Chemotherapy. 2006, 50(11): 3950-3952. 10.1128/AAC.00337-06
 4. Kim SY: Effects of sample size on robustness and prediction accuracy of a prognostic gene signature. BMC Bioinformatics. 2009, 10(1): 147. 10.1186/1471-2105-10-147
 5. Kalayeh HM, Landgrebe DA: Predicting the Required Number of Training Samples. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1983, 5(6): 664-667.
 6. Nigam K, McCallum AK, Thrun S, Mitchell T: Text Classification from Labeled and Unlabeled Documents using EM. Mach Learn. 2000, 39(2-3): 103-134.
 7. Vlachos A: A stopping criterion for active learning. Computer Speech and Language. 2008, 22(3): 295-312. 10.1016/j.csl.2007.12.001
 8. Olsson F, Tomanek K: An intrinsic stopping criterion for committee-based active learning. Proceedings of the Thirteenth Conference on Computational Natural Language Learning. 2009, Boulder, Colorado: Association for Computational Linguistics, 138-146.
 9. Zhu J, Wang H, Hovy E, Ma M: Confidence-based stopping criteria for active learning for data annotation. ACM Transactions on Speech and Language Processing (TSLP). 2010, 6(3): 1-24. 10.1145/1753783.1753784
 10. Figueroa RL, Zeng-Treitler Q: Exploring Active Learning in Medical Text Classification. Poster session presented at: AMIA 2009 Annual Symposium in Biomedical and Health Informatics. 2009, San Francisco, CA, USA.
 11. Kandula S, Figueroa R, Zeng-Treitler Q: Predicting Outcome Measures in Active Learning. Poster session presented at: MEDINFO 2010, 13th World Congress on Medical Informatics. 2010, Cape Town, South Africa.
 12. Maxwell SE, Kelley K, Rausch JR: Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology. 2008, 59: 537-563. 10.1146/annurev.psych.59.103006.093735
 13. Adcock CJ: Sample size determination: a review. Journal of the Royal Statistical Society: Series D (The Statistician). 1997, 46(2): 261-283. 10.1111/1467-9884.00082
 14. Lenth RV: Some Practical Guidelines for Effective Sample Size Determination. The American Statistician. 2001, 55(3): 187-193. 10.1198/000313001317098149
 15. Briggs AH, Gray AM: Power and Sample Size Calculations for Stochastic Cost-Effectiveness Analysis. Medical Decision Making. 1998, 18(2): S81-S92. 10.1177/0272989X9801800210
 16. Carneiro AV: Estimating sample size in clinical studies: basic methodological principles. Rev Port Cardiol. 2003, 22(12): 1513-1521.
 17. Cohen J: Statistical Power Analysis for the Behavioural Sciences. 1988, Hillsdale, NJ: Lawrence Erlbaum Associates.
 18. Scheinin I, Ferreira JA, Knuutila S, Meijer GA, van de Wiel MA, Ylstra B: CGHpower: exploring sample size calculations for chromosomal copy number experiments. BMC Bioinformatics. 2010, 11: 331. 10.1186/1471-2105-11-331
 19. Eng J: Sample size estimation: how many individuals should be studied?. Radiology. 2003, 227(2): 309-313. 10.1148/radiol.2272012051
 20. Walters SJ: Sample size and power estimation for studies with health related quality of life outcomes: a comparison of four methods using the SF-36. Health and Quality of Life Outcomes. 2004, 2: 26. 10.1186/1477-7525-2-26
 21. Cai J, Zeng D: Sample size/power calculation for case-cohort studies. Biometrics. 2004, 60(4): 1015-1024. 10.1111/j.0006-341X.2004.00257.x
 22. Algina J, Moulder BC, Moser BK: Sample Size Requirements for Accurate Estimation of Squared Semi-Partial Correlation Coefficients. Multivariate Behavioral Research. 2002, 37(1): 37-57. 10.1207/S15327906MBR3701_02
 23. Stalbovskaya V, Hamadicharef B, Ifeachor E: Sample Size Determination using ROC Analysis. 3rd International Conference on Computational Intelligence in Medicine and Healthcare (CIMED2007). 2007.
 24. Beal SL: Sample Size Determination for Confidence Intervals on the Population Mean and on the Difference Between Two Population Means. Biometrics. 1989, 45(3): 969-977. 10.2307/2531696
 25. Jiroutek MR, Muller KE, Kupper LL, Stewart PW: A New Method for Choosing Sample Size for Confidence Interval-Based Inferences. Biometrics. 2003, 59(3): 580-590. 10.1111/1541-0420.00068
 26. Fukunaga K, Hayes R: Effects of sample size in classifier design. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1989, 11(8): 873-885. 10.1109/34.31448
 27. Cortes C, Jackel LD, Solla SA, Vapnik V, Denker JS: Learning Curves: Asymptotic Values and Rate of Convergence. 1994, San Francisco, CA, USA: Morgan Kaufmann Publishers, VI.
 28. Boonyanunta N, Zeephongsekul P: Predicting the Relationship Between the Size of Training Sample and the Predictive Power of Classifiers. Knowledge-Based Intelligent Information and Engineering Systems. 2004, Springer Berlin/Heidelberg, 3215: 529-535. 10.1007/978-3-540-30134-9_71
 29. Hess KR, Wei C: Learning Curves in Classification With Microarray Data. Seminars in Oncology. 2010, 37(1): 65-68. 10.1053/j.seminoncol.2009.12.002
 30. Last M: Predicting and Optimizing Classifier Utility with the Power Law. Proceedings of the Seventh IEEE International Conference on Data Mining Workshops. 2007, IEEE Computer Society, 219-224.
 31. Provost F, Jensen D, Oates T: Efficient progressive sampling. Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1999, San Diego, California, United States: ACM.
 32. Warmuth MK, Liao J, Ratsch G, Mathieson M, Putta S, Lemmen C: Active learning with support vector machines in the drug discovery process. J Chem Inf Comput Sci. 2003, 43(2): 667-673. 10.1021/ci025620t
 33. Liu Y: Active learning with support vector machine applied to gene expression data for cancer classification. J Chem Inf Comput Sci. 2004, 44(6): 1936-1941. 10.1021/ci049810a
 34. Li M, Sethi IK: Confidence-based active learning. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2006, 28(8): 1251-1261.
 35. Brinker K: Incorporating Diversity in Active Learning with Support Vector Machines. Proceedings of the Twentieth International Conference on Machine Learning (ICML). 2003, 59-66.
 36. Yuan J, Zhou X, Zhang J, Wang M, Zhang Q, Wang W, Shi B: Positive Sample Enhanced Angle-Diversity Active Learning for SVM Based Image Retrieval. Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2007). 2007, 2202-2205.
 37. Yelle LE: The Learning Curve: Historical Review and Comprehensive Survey. Decision Sciences. 1979, 10(2): 302-327. 10.1111/j.1540-5915.1979.tb00026.x
 38. Ramsay C, Grant A, Wallace S, Garthwaite P, Monk A, Russell I: Statistical assessment of the learning curves of health technologies. Health Technology Assessment. 2001, 5(12).
 39. Dennis JE, Gay DM, Welsch RE: Algorithm 573: NL2SOL - An Adaptive Nonlinear Least-Squares Algorithm [E4]. ACM Transactions on Mathematical Software. 1981, 7(3): 369-383. 10.1145/355958.355966
 40. UCI Machine Learning Repository. [http://www.ics.uci.edu/~mlearn/MLRepository.html]
 41. Weka - Machine Learning Software in Java. [http://weka.wiki.sourceforge.net/]
 42. Tong S, Koller D: Support Vector Machine Active Learning with Applications to Text Classification. Journal of Machine Learning Research. 2001, 2: 45-66.
Prepublication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6947/12/8/prepub
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.