## Abstract

In this paper we address the optimization problem of determining the maximal-margin separating hyperplane in support vector machines. We consider three formulations, based on the L_{2} norm distance (the standard case), the L_{1} norm, and the L_{∞} norm. Separation is performed in the original space of the data (i.e., no kernel transformations are applied). In each of these cases we focus on the following problem: given the optimal solution for a training data set, a new training example arrives, and the goal is to exploit the solution of the problem without the additional example in order to speed up the new optimization. We also consider reoptimization after removing an example from the data set. We report results obtained on standard benchmark problems.
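The record does not include the authors' algorithm, so the following is only a minimal sketch of the warm-start idea for the standard L_{2} hard-margin formulation: solve the primal QP (minimize ||w||² subject to y_i(w·x_i + b) ≥ 1) with SciPy's SLSQP solver, then, after adding a training example, re-solve starting from the previous optimum instead of from scratch. The function name, data, and solver choice are illustrative assumptions, not the method of the paper.

```python
import numpy as np
from scipy.optimize import minimize


def hard_margin_svm(X, y, x0=None):
    """Solve min ||w||^2  s.t.  y_i (w . x_i + b) >= 1.

    x0 is an optional warm start [w_1, ..., w_d, b]; if omitted,
    the solver starts from the origin (a cold start).
    Returns (solution, number of solver iterations).
    """
    n, d = X.shape
    if x0 is None:
        x0 = np.zeros(d + 1)
    objective = lambda v: np.dot(v[:d], v[:d])  # squared L2 norm of w
    constraints = [
        {"type": "ineq",
         "fun": lambda v, i=i: y[i] * (X[i] @ v[:d] + v[d]) - 1.0}
        for i in range(n)
    ]
    res = minimize(objective, x0, constraints=constraints, method="SLSQP")
    return res.x, res.nit


# Tiny separable data set: two classes split by the line x1 = 1.
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0], [2.0, 1.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
sol, iters_cold = hard_margin_svm(X, y)  # optimum: w = (1, 0), b = -1

# A new example arrives; warm-start from the previous optimum.
X2 = np.vstack([X, [3.0, 0.5]])
y2 = np.append(y, 1.0)
sol_warm, iters_warm = hard_margin_svm(X2, y2, x0=sol)
```

Here the added point lies outside the margin, so its constraint is inactive and the warm start is already optimal; when the new example violates the old margin, the previous solution still tends to be a much better starting point than the origin.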

Original language | English |
---|---|
Pages | 399-404 |
Number of pages | 6 |
Publication status | Published - 2000 |
Externally published | Yes |
Event | International Joint Conference on Neural Networks (IJCNN'2000) - Como, Italy |
Duration | 2000 Jul 24 → 2000 Jul 27 |

### Other

Other | International Joint Conference on Neural Networks (IJCNN'2000) |
---|---|
City | Como, Italy |
Period | 00/7/24 → 00/7/27 |

## ASJC Scopus subject areas

- Software
- Artificial Intelligence