TY - JOUR
T1 - Adaptive learning of hypergame situations using a genetic algorithm
AU - Putro, Utomo Sarjono
AU - Kijima, Kyoichi
AU - Takahashi, Shingo
N1 - Funding Information:
Manuscript received December 2, 1998; revised June 1, 2000. This work was supported in part by Grant-in-Aid for International Scientific Research 09044025 of the Ministry of Education, Japan. This paper was recommended by Associate Editor J. Oommen.
PY - 2000/9
Y1 - 2000/9
N2 - In this paper, we propose and examine adaptive learning procedures for supporting a group of decision makers with a common set of strategies and preferences who face uncertain behaviors of 'nature'. First, we describe the decision situation as a hypergame situation, where each decision maker is explicitly assumed to have misperceptions about the nature's set of strategies and preferences. Then, we propose three learning procedures about the nature, each of which consists of several activities. One of the activities is to choose 'rational' actions based on the current perceptions and rationality adopted by the decision makers, while the other activities are represented by the elements of a genetic algorithm (GA) that improve the current perceptions. The three learning procedures differ from each other in at least one of the following activities: fitness evaluation, modified crossover, and action choice; they use the same definitions for the other GA elements. Finally, by examining the simulation results, we point out that how preference- and strategy-oriented information is employed is critical to obtaining good performance in clarifying the nature's set of strategies and the outcomes most preferred by the nature.
AB - In this paper, we propose and examine adaptive learning procedures for supporting a group of decision makers with a common set of strategies and preferences who face uncertain behaviors of 'nature'. First, we describe the decision situation as a hypergame situation, where each decision maker is explicitly assumed to have misperceptions about the nature's set of strategies and preferences. Then, we propose three learning procedures about the nature, each of which consists of several activities. One of the activities is to choose 'rational' actions based on the current perceptions and rationality adopted by the decision makers, while the other activities are represented by the elements of a genetic algorithm (GA) that improve the current perceptions. The three learning procedures differ from each other in at least one of the following activities: fitness evaluation, modified crossover, and action choice; they use the same definitions for the other GA elements. Finally, by examining the simulation results, we point out that how preference- and strategy-oriented information is employed is critical to obtaining good performance in clarifying the nature's set of strategies and the outcomes most preferred by the nature.
UR - http://www.scopus.com/inward/record.url?scp=0034270021&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0034270021&partnerID=8YFLogxK
U2 - 10.1109/3468.867863
DO - 10.1109/3468.867863
M3 - Article
AN - SCOPUS:0034270021
SN - 1083-4427
VL - 30
SP - 562
EP - 572
JO - IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans
JF - IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans
IS - 5
ER -