Abstract
The ability to learn is one of the salient features of realtime search algorithms such as LRTA*. The major impediment, however, is the instability of solution quality during convergence: (1) the algorithms keep seeking all optimal solutions even after fairly good solutions have been obtained, and (2) they tend to move towards unexplored areas, failing to balance exploration and exploitation. We propose and analyze two new realtime search algorithms that stabilize the convergence process. ε-search (weighted realtime search) allows suboptimal solutions with ε error, reducing the total amount of learning performed. δ-search (realtime search with upper bounds) utilizes upper bounds on the estimated costs, which become available once the problem has been solved. Guided by these upper bounds, δ-search can better control the tradeoff between exploration and exploitation.
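For context, the sketch below shows a single trial of the baseline LRTA*-style search that both proposed variants build on: one-step lookahead, a heuristic update, and a greedy move. The `successors`, `cost`, and `h` interfaces are assumptions made for illustration, and the comments only indicate, at the level of the abstract, where ε-search and δ-search would intervene; the paper's exact update and control rules are not reproduced here.

```python
def lrta_star_trial(start, goal, successors, cost, h):
    """Run one trial from start to goal, updating the heuristic table h in place.

    successors(s) -> iterable of neighbor states
    cost(s, t)    -> nonnegative edge cost from s to t
    h             -> dict mapping each state to its current heuristic estimate
    """
    s = start
    path = [s]
    while s != goal:
        # One-step lookahead: score each neighbor by edge cost plus its estimate.
        scored = [(cost(s, t) + h.get(t, 0.0), t) for t in successors(s)]
        best_f, best_t = min(scored, key=lambda pair: pair[0])
        # Learning step: raise h(s) to the best lookahead value.  Epsilon-search
        # relaxes this update so that (1 + epsilon)-suboptimal solutions are
        # tolerated, reducing the total amount of learning (per the abstract).
        h[s] = max(h.get(s, 0.0), best_f)
        # Move step: greedily follow the best neighbor.  Delta-search instead
        # consults upper bounds, available after the first solution, to balance
        # exploiting the known path against further exploration.
        s = best_t
        path.append(s)
    return path
```

Repeating such trials from the same start state drives the heuristic values upward until the followed path stops improving; the two proposed algorithms differ in how the learning step and the move step above are moderated during that convergence.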
| Original language | English |
| --- | --- |
| Pages | 305-310 |
| Number of pages | 6 |
| Publication status | Published - 1996 |
| Externally published | Yes |
| Event | Proceedings of the 1996 13th National Conference on Artificial Intelligence, AAAI 96, Part 1 (of 2), Portland, OR, USA; Duration: 1996 Aug 4 → 1996 Aug 8 |
Conference
| Conference | Proceedings of the 1996 13th National Conference on Artificial Intelligence, AAAI 96, Part 1 (of 2) |
| --- | --- |
| City | Portland, OR, USA |
| Period | 96/8/4 → 96/8/8 |
ASJC Scopus subject areas
- Software
- Artificial Intelligence