In this paper we deal with the optimization problem involved in determining the maximal margin separating hyperplane in support vector machines. We consider three different formulations, based on the L2 norm distance (the standard case), the L1 norm, and the L∞ norm. We consider separation in the original space of the data (i.e., no kernel transformations are applied). In each of these cases, we focus on the following problem: given the optimal solution for a training data set, a new training example arrives. The goal is to use the information from the solution of the problem without the additional example to speed up the new optimization problem. We also consider the case of reoptimization after removing an example from the data set. We report results obtained on some standard benchmark problems.
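The incremental setting described above can be illustrated with a minimal sketch: a soft-margin (L2-norm) linear SVM trained by subgradient descent on the hinge-loss objective, where the solution for the original training set is reused as the starting point when a new example is added. The solver, its parameters, and the toy data below are illustrative assumptions, not the authors' actual algorithm or benchmarks.

```python
import numpy as np

def train_svm(X, y, w=None, b=0.0, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on the soft-margin (L2-norm) SVM objective.
    Passing a previous (w, b) warm-starts the solver, mimicking the
    paper's reuse of the old solution (illustrative sketch only)."""
    n, d = X.shape
    if w is None:
        w = np.zeros(d)
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:  # point violates the margin: move toward it
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:           # only apply the regularization shrinkage
                w = (1 - lr * lam) * w
    return w, b

# toy linearly separable data
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_svm(X, y)

# a new example arrives: warm-start from the previous solution,
# so far fewer epochs are needed than training from scratch
X2 = np.vstack([X, [2.5, 1.0]])
y2 = np.append(y, 1)
w2, b2 = train_svm(X2, y2, w=w, b=b, epochs=50)
```

The same warm-start idea applies to the L1- and L∞-norm formulations, where the optimization becomes a linear program and the old basis can seed the new one.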
|Publication status||Published - 2000|
|Event||International Joint Conference on Neural Networks (IJCNN'2000) - Como, Italy|
|Duration||24 Jul 2000 → 27 Jul 2000|