Abstract
Traditional evolutionary algorithms (EAs) generally start evolution from scratch, that is, from randomly generated individuals. However, this is computationally expensive and can easily destabilize the evolution process. To address these problems, this paper describes a new method to improve the evolution efficiency of a recently proposed graph-based EA - genetic network programming (GNP) - by introducing a knowledge transfer ability. The basic concept of the proposed method, named GNP-KT, comprises two steps: first, knowledge is formulated by discovering abstract decision-making rules from source domains, in the manner of a learning classifier system (LCS); second, the knowledge is adaptively reused as advice when applying GNP to a target domain. A reinforcement learning (RL)-based method is proposed to automatically transfer knowledge from the source domains to the target domain, which allows GNP-KT to achieve both better initial performance and better final fitness values. Experimental results on a real mobile robot control problem confirm the superiority of GNP-KT over traditional methods.
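The two-step concept above can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' GNP-KT algorithm: source-domain knowledge is held as LCS-style condition-action rules with strengths, and during target-domain learning those rules act as advice by biasing the action values used for selection. All names (`Rule`, `advised_q_values`, `epsilon_greedy`, `advice_weight`) are invented for this sketch.

```python
import random

class Rule:
    """An abstract decision-making rule mined from a source domain
    (LCS-style): a condition predicate, an action, and a strength."""
    def __init__(self, condition, action, strength):
        self.condition = condition  # callable: state -> bool
        self.action = action
        self.strength = strength

    def matches(self, state):
        return self.condition(state)

def advised_q_values(q_values, state, rules, advice_weight=0.5):
    """Blend the learner's Q-values with the strengths of matching
    source-domain rules, so transferred knowledge acts as advice."""
    blended = dict(q_values)
    for rule in rules:
        if rule.matches(state):
            blended[rule.action] = (
                blended.get(rule.action, 0.0) + advice_weight * rule.strength
            )
    return blended

def epsilon_greedy(q_values, actions, epsilon=0.1):
    """Standard epsilon-greedy selection over the (advised) values."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_values.get(a, 0.0))
```

In this reading, advice raises the value of rule-recommended actions early on (better initial performance), while ordinary RL updates to `q_values` can still override poor advice in the target domain.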
Original language | English |
---|---|
Title of host publication | Proceedings of the 2014 IEEE Congress on Evolutionary Computation, CEC 2014 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 798-805 |
Number of pages | 8 |
ISBN (Print) | 9781479914883 |
DOIs | |
Publication status | Published - 2014 Sept 16 |
Event | 2014 IEEE Congress on Evolutionary Computation, CEC 2014 |
City | Beijing |
Duration | 2014 Jul 6 → 2014 Jul 11 |
ASJC Scopus subject areas
- Artificial Intelligence
- Computational Theory and Mathematics
- Theoretical Computer Science