TY - JOUR
T1 - Multiple descent cost competition
T2 - Restorable self-organization and multimedia information processing
AU - Matsuyama, Yasuo
PY - 1998
Y1 - 1998
N2 - Multiple descent cost competition is a composition of learning phases for minimizing a given measure of total performance, i.e., cost. If these phases are mutually heterogeneous, the total learning algorithm shows a variety of extraordinary abilities, especially in multimedia information processing. In the first phase of descent cost learning, elements of the source data are grouped. Simultaneously, a weight vector for minimal learning (i.e., a winner) is found. Then, the winner and its partners are updated for further cost reduction. Consequently, two classes of self-organizing feature maps are generated. One is a grouping feature map, which partitions the source data; the other is an ordinary weight vector feature map. The grouping feature map, together with the winners, retains most of the information in the source data, so it can support a high-quality approximation of the original data; traditional weight vector feature maps lack this ability. Another important property of the grouping feature map is that it can change its shape: the grouping pattern can accept external directions in order to metamorphose. In the text, the total algorithm of multiple descent cost competition is explained first, with image processing concepts introduced to assist the description. Then, a still image is data-compressed (DC). Next, the restored image is morphed using the grouping feature map under directions given by an external intelligence, and an interpolation of frames is applied to complete animation coding (AC). Thus, multiple descent cost competition bridges "DC to AC." Examples of multimedia processing on virtual digital movies are given.
KW - Competitive learning
KW - Coordination with external intelligence
KW - Data compression
KW - Grouping feature map
KW - Image processing
KW - Multiple descent cost
KW - Self-organization
KW - Standard pattern set
KW - Vector quantization
KW - Virtual movie generation
UR - http://www.scopus.com/inward/record.url?scp=0031646494&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0031646494&partnerID=8YFLogxK
U2 - 10.1109/72.655033
DO - 10.1109/72.655033
M3 - Article
C2 - 18252433
AN - SCOPUS:0031646494
SN - 1045-9227
VL - 9
SP - 106
EP - 122
JO - IEEE Transactions on Neural Networks
JF - IEEE Transactions on Neural Networks
IS - 1
ER -