Notations
Let us suppose that a data set consists of N examples x_1, ..., x_N, each of which has P features {1, ..., P}.
Let x_n = (x_{1,n}, ..., x_{P,n}) be the n-th example, where n ∈ {1, ..., N}, and let the i-th feature value, i ∈ {1, ..., P}, of the n-th example be denoted by x_{i,n}. Class labels of the N examples will be denoted by y = (y_1, ..., y_N).
In this paper, we only consider a binary classification problem because we are interested in distinguishing benign and malignant examples. Overall, the labeled data set is expressed as {(x_1, y_1), ..., (x_N, y_N)}.
SVM
SVM is one of the most popular modern classification methods. Based on the structural risk minimization principle, SVM defines an optimal hyperplane between samples of different class labels. The position of the hyperplane is adjusted so that the distance from the hyperplane to the nearest sample, i.e., the margin, is maximized.
Moreover, if the SVM cannot find a hyperplane that separates the examples in the input space, it can use a kernel function to map the examples into a kernel-induced feature space in which a separating hyperplane may exist. Although any kernel function satisfying Mercer's theorem can be used with an SVM, in this research we consider only the widely used linear and Gaussian radial basis function (RBF) kernels.
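To make the kernel choice concrete, the following minimal sketch (using scikit-learn and synthetic data purely for illustration; neither is part of this study) trains one SVM with each of the two kernels.

# Minimal sketch: training SVMs with the linear and Gaussian RBF kernels.
# scikit-learn and the toy data are assumptions made only for illustration.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy binary data standing in for the examples (x_n, y_n).
X, y = make_classification(n_samples=200, n_features=30, random_state=0)

linear_svm = SVC(kernel="linear", C=1.0).fit(X, y)
rbf_svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

print(linear_svm.score(X, y), rbf_svm.score(X, y))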
SVM-RFE
SVM is a powerful classification method, but it has no built-in feature selection mechanism. Therefore, a wrapper-type feature selection method, SVM-RFE, was introduced [8]. SVM-RFE generates a ranking of features by computing an information gain during iterative backward feature elimination. The idea of the information gain computation is based on Optimal Brain Damage (OBD) [12]. In every iteration, SVM-RFE sorts the features in the working set by the difference in the objective function and removes the feature with the minimum difference. Defining IG(k) as the information gain when the k-th feature is removed, the overall iterative algorithm of SVM-RFE is shown in Algorithm 1.
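The following is a minimal sketch of the recursive elimination loop of Algorithm 1, under the simplifying assumption of a linear kernel, for which the ranking criterion reduces to the squared weight of the k-th feature [8]; scikit-learn, the synthetic data, and all variable names are our own choices, not part of the original method description.

# Sketch of SVM-RFE (Algorithm 1) for a linear kernel, where IG(k) ~ w_k**2.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def svm_rfe(X, y):
    """Return features ranked from most to least important (list R)."""
    S = list(range(X.shape[1]))   # working set of feature indices
    R = []                        # ranked list, best feature first
    while S:
        svm = SVC(kernel="linear", C=1.0).fit(X[:, S], y)
        w = svm.coef_.ravel()                # weight per surviving feature
        ig = w ** 2                          # IG(k) for the linear kernel
        worst = int(np.argmin(ig))           # e = arg min_k IG(k)
        R.insert(0, S[worst])                # R = [e, R]
        del S[worst]                         # S = S - [e]
    return R

X, y = make_classification(n_samples=150, n_features=20, random_state=0)
print(svm_rfe(X, y))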
ENSEMBLE and JOIN
SVM-RFE [8] has two parameters that need to be determined. The first parameter decides how many features should be used to obtain the best performance. The second parameter specifies what portion of the features should be eliminated in each iteration. To resolve this issue, a simple approach can easily be implemented.
Algorithm 1 SVM-RFE
Require: Feature lists R = [] and S = [1, ..., P]
1: while S ≠ [] do
2:   Train an SVM with features in S
3:   for all k-th features in S do
4:     Compute IG(k)
5:   end for
6:   e = arg min_k IG(k)
7:   R = [e, R]
8:   S = S - [e]
9: end while
10: return R
First, we separate the given training set into a partial training set and a hold-out set. Then, we apply Algorithm 2 with some parameter 'threshold'.
The score of each feature subset R_o is computed from the hold-out error err(R_o), i.e., the error of an SVM trained using the features in R_o and tested on the hold-out set. Using this method, we can obtain a feature subset R that yields a reasonably small error. Using this algorithm as a base, Jong et al. [10] proposed two methods, ENSEMBLE and JOIN, that combine multiple rankings generated by SVM-RFE, as shown in Algorithms 3 and 4.
In this paper, we used 25% of the training set as the hold-out set, and we used the same sets of thresholds and cutoffs as in [10], i.e., {0.2, 0.3, 0.4, 0.5, 0.6, 0.7} and {1, 2, 3, 4, 5}.
Algorithm 2 SVM-RFE(threshold)
Require: Ranked feature lists R = [], R_i = [] where i = 1, ..., P, and S' = [1, ..., P]
1: i = 1
2: while S' ≠ [] do
3:   Train an SVM using the partial training set with features in S'
4:   for all features in S' do
5:     Compute the ranking of the features as in SVM-RFE
6:   end for
7:   R_i = S'
8:   Eliminate threshold percent of the least important features from S'
9:   i = i + 1
10: end while
11: R = R_o, where R_o yields the minimum score on the hold-out set
12: return R
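The hold-out selection on line 11 of Algorithm 2 can be sketched as follows. Here we simply use the hold-out error err(R_o) itself as the score; the toy candidate subsets, the 25% split, and the use of scikit-learn are illustrative assumptions rather than the setup of [10].

# Sketch: pick the candidate subset R_o with the smallest hold-out error err(R_o).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def best_subset(X_train, y_train, X_hold, y_hold, candidates):
    """candidates: list of feature-index lists standing in for R_1, ..., R_i."""
    errors = []
    for R_o in candidates:
        svm = SVC(kernel="linear").fit(X_train[:, R_o], y_train)
        errors.append(1.0 - svm.score(X_hold[:, R_o], y_hold))   # err(R_o)
    return candidates[int(np.argmin(errors))]

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
# 25% of the training data is held out, as in the experimental setup above.
X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.25, random_state=0)
# Toy nested candidate subsets standing in for the R_i produced by Algorithm 2.
candidates = [list(range(k)) for k in (20, 10, 5, 2)]
print(best_subset(X_tr, y_tr, X_ho, y_ho, candidates))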
Algorithm 3 ENSEMBLE(v_1, v_2, ..., v_k)
1: for threshold v ∈ {v_1, v_2, ..., v_k} do
2:   R_v = SVM-RFE(v)
3: end for
4: return a majority-vote classifier using the SVMs trained with R_{v_1}, ..., R_{v_k}
Algorithm 4 JOIN(cutoff, v_1, v_2, ..., v_k)
1: for threshold v ∈ {v_1, v_2, ..., v_k} do
2:   R_v = SVM-RFE(v)
3: end for
4: R = features selected at least cutoff times in {R_{v_1}, ..., R_{v_k}}
5: return an SVM trained with R
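Once the per-threshold subsets R_v are available, JOIN reduces to a counting step. The sketch below shows that step with toy subsets standing in for real SVM-RFE(threshold) output; the function and variable names are ours.

# Sketch of JOIN (Algorithm 4): keep features selected at least `cutoff`
# times across the subsets R_v returned by SVM-RFE(v), then train one SVM.
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def join(subsets, cutoff):
    counts = Counter(f for R_v in subsets for f in set(R_v))
    return sorted(f for f, c in counts.items() if c >= cutoff)

# Toy subsets standing in for R_0.2, R_0.3, ... from the thresholds above.
subsets = [{0, 1, 2, 5}, {0, 2, 3}, {0, 2, 5, 7}, {1, 2, 5}]
R = join(subsets, cutoff=3)          # features kept by at least 3 runs
print(R)                             # -> [0, 2, 5] for these toy subsets

X, y = make_classification(n_samples=100, n_features=10, random_state=0)
final_svm = SVC(kernel="linear").fit(X[:, R], y)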
Multiple SVM-RFE with bootstrap
Multiple SVM-RFE (MSVM-RFE) [9] is a recently introduced SVM-RFE-based feature selection algorithm. It exploits an ensemble of SVM classifiers and a cross-validation scheme to rank features. First, we make T subsamples from the original training set. Then, supposing that we have T SVMs trained using the different subsamples, we calculate the corresponding discriminant information gain associated with each feature of each SVM. To compute this information gain, we use the same method as in SVM-RFE [8]. Exploiting the objective function of the SVM and its Lagrangian solution λ, we can derive a cost function

J = (1/2) λ^T H λ - λ^T 1,

where H is a matrix with elements y_q y_r K(x_q, x_r), 1 is an N-dimensional vector of ones, K(·, ·) is a kernel function, and 1 ≤ q, r ≤ N. Since we are looking for the subset of features that has the best discriminating power between the classes, we compute the difference in the cost function for each elimination of the i-th input feature, leaving the Lagrangian multipliers unchanged. Therefore, the ranking criterion for the i-th feature of the j-th SVM can be defined as

DJ_ji = (1/2) λ^T H λ - (1/2) λ^T H(-i) λ,
where H(-i) denotes H with the i-th feature removed from all of its elements. Then, considering DJ_j = (DJ_j1, ..., DJ_jP) as the feature weight vector of the j-th SVM, we normalize all T weight vectors as DJ_j = DJ_j / ||DJ_j||. This gives us T weight vectors, each with P elements, where each element stands for the information gain achieved by eliminating the corresponding feature. After normalizing the weight vector of each SVM, we can compute the ranking score of the i-th feature as

c_i = μ_i / σ_i,   (1)

where μ_i = (1/T) Σ_j DJ_ji is the mean and σ_i = ((1/(T-1)) Σ_j (DJ_ji - μ_i)²)^(1/2) is the standard deviation of the i-th elements of the T normalized weight vectors.
The algorithm then applies this method to the training set with a k-fold cross-validation scheme. If we perform 5-fold cross-validation and generate 20 subsamples in each fold, we eventually have T = 100 SVMs to combine. The overall MSVM-RFE algorithm is described in Algorithm 5.
Algorithm 5 MSVM-RFE
Require: Ranked feature lists R = [] and S' = [1, ..., P]
1: while S' ≠ [] do
2:   Train T SVMs using T subsamples with features in S'
3:   for all j-th SVMs, 1 ≤ j ≤ T, do
4:     for all i-th features, 1 ≤ i ≤ P, do
5:       Compute DJ_ji
6:     end for
7:     Compute DJ_j = DJ_j / ||DJ_j||
8:   end for
9:   for all features l ∈ S' do
10:     Compute c_l using Equation (1)
11:   end for
12:   e = arg min_l c_l where l ∈ S'
13:   R = [e, R]
14:   S' = S' - [e]
15: end while
16: return R
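The ranking computation of Algorithm 5 can be sketched as follows for a single elimination step, assuming an RBF kernel with a fixed gamma and the ranking score c_i = μ_i/σ_i given above; scikit-learn, the bootstrap handling, and all names are our own choices rather than the implementation of [9].

# Sketch of the MSVM-RFE ranking computation for one elimination step.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

GAMMA = 0.1   # fixed kernel width, assumed only for this illustration

def dj_vector(X_sub, y_sub):
    """DJ_ji = 0.5*lambda^T H lambda - 0.5*lambda^T H(-i) lambda for one SVM."""
    svm = SVC(kernel="rbf", gamma=GAMMA, C=1.0).fit(X_sub, y_sub)
    sv = X_sub[svm.support_]                  # support vectors
    a = svm.dual_coef_.ravel()                # a_q = y_q * lambda_q
    full = a @ rbf_kernel(sv, sv, gamma=GAMMA) @ a
    dj = np.empty(X_sub.shape[1])
    for i in range(X_sub.shape[1]):
        sv_i = np.delete(sv, i, axis=1)       # remove the i-th feature: H(-i)
        dj[i] = 0.5 * (full - a @ rbf_kernel(sv_i, sv_i, gamma=GAMMA) @ a)
    return dj / np.linalg.norm(dj)            # normalized weight vector DJ_j

def ranking_scores(X, y, T=10, seed=0):
    """c_i = mu_i / sigma_i over T bootstrap subsamples (Equation (1))."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    DJ = np.array([dj_vector(X[idx], y[idx])
                   for idx in (rng.integers(0, N, N) for _ in range(T))])
    return DJ.mean(axis=0) / (DJ.std(axis=0, ddof=1) + 1e-12)

X, y = make_classification(n_samples=150, n_features=15, random_state=0)
c = ranking_scores(X, y)
print("feature to eliminate:", int(np.argmin(c)))   # e = arg min_l c_l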
One should note that the original MSVM-RFE proposed in [9] uses a cross-validation scheme when generating subsamples. However, we omitted this step because combining boosting with the cross-validation scheme of the original MSVM-RFE algorithm is very complex and may obscure the purpose of this study.
Multiple SVM-RFE with boosting
When making subsamples, the original MSVM-RFE uses the bootstrap approach [13]. This ensemble approach builds replicates of the original data set S by randomly re-sampling from S with replacement N times, where N is the number of examples. Therefore, each example (x_n, y_n) may appear more than once or not at all in a particular replicate subsample. Statistically, it is desirable to make the replicates differ as much as possible to gain a higher improvement from the ensemble; the concept is both intuitively reasonable and theoretically sound. However, since the architecture of MSVM-RFE uses simple bootstrapping, it is natural to consider another popular ensemble method, boosting [14], in its place, for two reasons. First, boosting outperforms bootstrapping on average [15, 16], and second, boosting of SVMs generally yields better classification accuracy than its bootstrap counterpart [17]. Therefore, to make effective use of an ensemble of SVMs, it may be worthwhile to use boosting instead of bootstrapping. For this reason, in this work we applied AdaBoost [14], a classic boosting algorithm, to the MSVM-RFE algorithm in place of bootstrapping.
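For reference, the simple bootstrap re-sampling described above, which the boosted variant replaces, can be sketched as follows (toy data; all names are ours).

# Sketch: building bootstrap replicates of S by sampling N examples
# with replacement, so each (x_n, y_n) may appear several times or not at all.
import numpy as np

def bootstrap_replicates(X, y, T, seed=0):
    rng = np.random.default_rng(seed)
    N = len(y)
    for _ in range(T):
        idx = rng.integers(0, N, size=N)     # N draws with replacement
        yield X[idx], y[idx]

X = np.random.rand(100, 5)
y = np.random.randint(0, 2, 100)
replicates = list(bootstrap_replicates(X, y, T=20))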
Unlike the simple bootstrap approach, AdaBoost maintains a weight for each example in S. Initially, we assign the same weight to every example, D_1(n) = 1/N, where 1 ≤ n ≤ N. Each iteration consists of four steps. First, the algorithm generates a bootstrap subsample according to the weight distribution D_t at the t-th iteration. Next, it trains an SVM using the subsample. Third, it calculates the error of that SVM on the original example set S. Finally, it updates the weights so that the probability of correctly classified examples is decreased while that of incorrectly classified ones is increased. This update makes the next bootstrap pick more incorrectly classified (i.e., difficult-to-classify) examples than easy-to-classify ones. The iterative re-sampling procedure MAKE_SUBSAMPLES() based on the AdaBoost algorithm is described in Algorithm 6.
Algorithm 6 MAKE_SUBSAMPLES
Require: S = {(x_n, y_n)}, D_1(n) = 1/N, n = 1, ..., N
1: for j = 1 to T do
2:   Build a bootstrap B_j = {(x_n, y_n) | n = 1, ..., N} based on the weight distribution D_j
3:   Train an SVM hypothesis h_j using B_j
4:   Compute the weighted error of h_j on S: ϵ_j = Σ_{n: h_j(x_n) ≠ y_n} D_j(n)
5:   if ϵ_j ≥ 0.5 then
6:     Goto line 2
7:   end if
8:   α_j = (1/2) ln((1 - ϵ_j)/ϵ_j), α_j ∈ ℝ
9:   D_{j+1}(n) = (D_j(n)/Z_j) × exp(-α_j y_n h_j(x_n)), where Z_j is a normalization factor chosen so that D_{j+1} is also a probability distribution
10: end for
11: return B_j, α_j where 1 ≤ j ≤ T
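A minimal sketch of MAKE_SUBSAMPLES is given below, assuming scikit-learn SVMs, the standard AdaBoost weighted-error definition for line 4, and a retry cap added only to keep the sketch from looping; none of these choices are prescribed by the original algorithm.

# Sketch of MAKE_SUBSAMPLES (Algorithm 6): AdaBoost-style re-sampling that
# returns the bootstrap subsamples B_j and hypothesis weights alpha_j.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def make_subsamples(X, y, T, seed=0, max_retries=10):
    rng = np.random.default_rng(seed)
    N = len(y)
    y_pm = np.where(y == 1, 1, -1)              # labels in {-1, +1} for the update
    D = np.full(N, 1.0 / N)                     # D_1(n) = 1/N
    subsamples, alphas = [], []
    for _ in range(T):
        # Re-draw until the weighted error falls below 0.5 (lines 5-7);
        # the retry cap only keeps this sketch from looping forever.
        for _ in range(max_retries):
            idx = rng.choice(N, size=N, replace=True, p=D)      # bootstrap B_j ~ D_j
            while len(np.unique(y[idx])) < 2:                   # keep both classes
                idx = rng.choice(N, size=N, replace=True, p=D)
            h = SVC(kernel="linear").fit(X[idx], y[idx])        # hypothesis h_j
            pred = np.where(h.predict(X) == 1, 1, -1)
            eps = float(np.sum(D[pred != y_pm]))                # weighted error on S
            if eps < 0.5:
                break
        eps = min(max(eps, 1e-12), 0.5 - 1e-12)                 # guard the log
        alpha = 0.5 * np.log((1.0 - eps) / eps)                 # alpha_j
        D = D * np.exp(-alpha * y_pm * pred)                    # D_{j+1}(n) before Z_j
        D = D / D.sum()                                         # normalize by Z_j
        subsamples.append((X[idx], y[idx]))
        alphas.append(alpha)
    return subsamples, alphas

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
B, alphas = make_subsamples(X, y, T=10)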
In addition to modifying the re-sampling method, we changed the ranking criterion of the original MSVM-RFE. In MSVM-RFE with boosting, the weight vector DJ_j of the j-th SVM undergoes one more step between normalization and the ranking score calculation. Since each SVM in the ensemble contributes differently to the overall classification accuracy, we multiply the normalized feature weight vector DJ_j by an additional weight factor. This factor is obtained from the hypothesis weight calculated during the re-sampling process of AdaBoost. By multiplying DJ_j by this weight α_j, we can grade the overall feature weights more coherently. The overall iterative algorithm of MSVM-RFE with AdaBoost is described in Algorithm 7.
Algorithm 7 MSVM-RFE with AdaBoost
Require: Ranked feature lists R = [] and S' = [1, ..., P]
1: MAKE_SUBSAMPLES(B_t, α_t); t = 1, ..., T
2: while S' ≠ [] do
3:   Train T SVMs using B_t with features in S'
4:   Compute and normalize the T weight vectors DJ_j as in MSVM-RFE, where 1 ≤ j ≤ T
5:   for j = 1 to T do
6:     DJ_j = DJ_j × ln(α_j)
7:   end for
8:   for all features l ∈ S' do
9:     Compute the ranking score c_l using Eq. (1)
10:   end for
11:   e = arg min_l c_l where l ∈ S'
12:   R = [e, R]
13:   S' = S' - [e]
14: end while
15: return R
Note that we took the logarithm of the hypothesis weights instead of using their raw values in order to avoid radical changes in the ranking criterion. Since the boosting algorithm overfits by nature and the base classifier, an SVM, is relatively strong, the error rate of the hypotheses increases drastically as the iterations of MAKE_SUBSAMPLES() progress. We observed this overfitting problem in preliminary experiments and mitigated it by taking the logarithm of the hypothesis weights. The computation time of MSVM-RFE with boosting can also be addressed here: in our experiments, we found no significant difference between the original MSVM-RFE and MSVM-RFE with boosting as the number of subsamples generated by MAKE_SUBSAMPLES() decreases.
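The modified criterion itself is a small change once the normalized weight vectors and hypothesis weights are available; the sketch below shows the ln(α_j) scaling of line 6 of Algorithm 7 followed by the score of Eq. (1), with toy values in place of real outputs.

# Sketch of the boosted ranking criterion: each normalized weight vector DJ_j
# is scaled by ln(alpha_j) before the score c_i = mu_i / sigma_i is computed.
import numpy as np

def boosted_scores(DJ, alphas):
    """DJ: T x P array of normalized weight vectors; alphas: hypothesis weights."""
    scaled = DJ * np.log(np.asarray(alphas))[:, None]   # DJ_j <- DJ_j * ln(alpha_j)
    return scaled.mean(axis=0) / (scaled.std(axis=0, ddof=1) + 1e-12)

# Toy values: 4 SVMs, 5 features; hypothesis weights are all > 1 here,
# so ln(alpha_j) stays positive.
rng = np.random.default_rng(0)
DJ = rng.random((4, 5))
DJ = DJ / np.linalg.norm(DJ, axis=1, keepdims=True)
alphas = [2.1, 1.7, 1.3, 1.1]
print(int(np.argmin(boosted_scores(DJ, alphas))))   # feature to eliminate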
Lastly, unlike in a conventional application of boosting, we only exploit the bootstrap subsamples generated by the algorithm and discard the trained SVMs, for the following reasons:
- We are primarily interested in feature ranking, not in the aggregation of weak hypotheses.
- Since we use SVM-RFE as the eventual classification method, aggregation would require an additional criterion for picking the appropriate number of features from the different boosted models.
In preliminary experiments using the same number of features and simple majority-voting aggregation, SVM-RFE using boosted models did not show a significant improvement in accuracy. However, we found some evidence that an ensemble of SVMs can be useful in mammogram classification.