Abstract
In this paper we deal with the optimization problem involved in determining the maximal margin separation hyperplane in support vector machines. We consider three different formulations, based on the L2 norm distance (the standard case), the L1 norm, and the L∞ norm. We consider separation in the original space of the data (i.e., there are no kernel transformations). In each of these cases, we focus on the following problem: given the optimal solution for a training data set, a new training example is added. The goal is to exploit the solution of the problem without the additional example in order to speed up the new optimization problem. We also consider the case of reoptimization after removing an example from the data set. We report results obtained on some standard benchmark problems.
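As a rough illustration of the warm-starting idea summarized above, the sketch below trains a linear classifier from scratch and then reoptimizes after adding or removing an example by starting from the previous solution. It uses a generic soft-margin subgradient solver on toy data; the function `train_linear_svm`, the data, and all parameters are hypothetical and not taken from the paper, which studies the exact maximal-margin formulations (L2, L1, L∞) rather than this relaxation.

```python
import numpy as np

def train_linear_svm(X, y, w0=None, b0=0.0, lam=1e-3, lr=0.5, epochs=500):
    # Minimize  lam * ||w||_2^2 + (1/n) * sum_i max(0, 1 - y_i (w.x_i + b))
    # by subgradient descent.  Passing w0/b0 warm-starts the run from a
    # previous solution instead of from zero.
    n, d = X.shape
    w = np.zeros(d) if w0 is None else w0.astype(float).copy()
    b = float(b0)
    for t in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1.0                     # margin violators
        grad_w = 2.0 * lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        step = lr / (1.0 + t)                      # decaying step size
        w -= step * grad_w
        b -= step * grad_b
    return w, b

# Toy separable data (hypothetical, not from the paper's benchmarks).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w, b = train_linear_svm(X, y)                      # solve from scratch

# A new example arrives: reuse (w, b) as the starting point so the
# reoptimization needs far fewer iterations than a cold start.
X_new = np.vstack([X, [[1.5, 0.5]]])
y_new = np.append(y, 1.0)
w_new, b_new = train_linear_svm(X_new, y_new, w0=w, b0=b, epochs=50)

# Removing an example is handled the same way: drop the row and warm-start.
w_del, b_del = train_linear_svm(X[:-1], y[:-1], w0=w, b0=b, epochs=50)
```

The number of iterations for the warm-started runs is chosen arbitrarily here; the point is only that the previous solution is typically close to the new optimum when a single example is added or removed.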
Original language | English |
---|---|
Pages | 399-404 |
Number of pages | 6 |
Publication status | Published - 2000 |
Externally published | Yes |
Event | International Joint Conference on Neural Networks (IJCNN'2000) - Como, Italy |
Duration | 2000 Jul 24 → 2000 Jul 27 |
ASJC Scopus subject areas
- Software
- Artificial Intelligence