TY - GEN
T1 - Accuracy-configurable low-power approximate floating-point multiplier based on mantissa bit segmentation
AU - Li, Jie
AU - Guo, Yi
AU - Kimura, Shinji
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11/16
Y1 - 2020/11/16
N2 - Approximate computing (AC) plays an increasingly important role in the energy-efficient design of digital systems. Owing to human perceptual limitations, redundancy in input data, and similar factors, a large number of applications can tolerate errors. In this paper, an accuracy-configurable approximate floating-point (FP) multiplier is proposed to reduce hardware consumption for such applications. The mantissa is divided into a short exactly processed part and a remaining approximately processed part. A new addition-and-shifting method is applied to the approximate part to replace multiplication, improving hardware performance. Experimental results show that the 4-bit exact-part configuration of the proposed design achieves an accuracy of 99.17% (MRED of 0.83%) while reducing area by 67.65%, delay by 16.64%, and power by 75.62%. The proposed design also performs well in image processing and neural network applications.
AB - Approximate computing (AC) plays an increasingly important role in the energy-efficient design of digital systems. Owing to human perceptual limitations, redundancy in input data, and similar factors, a large number of applications can tolerate errors. In this paper, an accuracy-configurable approximate floating-point (FP) multiplier is proposed to reduce hardware consumption for such applications. The mantissa is divided into a short exactly processed part and a remaining approximately processed part. A new addition-and-shifting method is applied to the approximate part to replace multiplication, improving hardware performance. Experimental results show that the 4-bit exact-part configuration of the proposed design achieves an accuracy of 99.17% (MRED of 0.83%) while reducing area by 67.65%, delay by 16.64%, and power by 75.62%. The proposed design also performs well in image processing and neural network applications.
KW - Accuracy-configurable
KW - Approximate computing
KW - Bit segmentation
KW - Floating-point multiplier
KW - High accuracy
KW - Low-power
UR - http://www.scopus.com/inward/record.url?scp=85098956907&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098956907&partnerID=8YFLogxK
U2 - 10.1109/TENCON50793.2020.9293755
DO - 10.1109/TENCON50793.2020.9293755
M3 - Conference contribution
AN - SCOPUS:85098956907
T3 - IEEE Region 10 Annual International Conference, Proceedings/TENCON
SP - 1311
EP - 1316
BT - 2020 IEEE Region 10 Conference, TENCON 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE Region 10 Conference, TENCON 2020
Y2 - 16 November 2020 through 19 November 2020
ER -
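
The abstract describes splitting the mantissa into a short exactly processed segment and a larger approximate segment whose contribution is folded in with addition and shifting instead of multiplication. The Python sketch below illustrates that general idea for single-precision inputs; the function names, the 4-bit exact-segment width, and the specific add/shift recombination are illustrative assumptions and do not reproduce the authors' published circuit.

# Minimal illustrative sketch (assumptions, not the paper's design): only the
# short high segments of the two significands are multiplied exactly; the low
# segments are folded in with shifts and an addition, and their mutual product
# is dropped.
import struct

MANT_BITS = 23      # IEEE-754 single-precision mantissa width
EXACT_BITS = 4      # exactly processed segment width (the abstract's example)

def split_float(x):
    """Return (sign, unbiased exponent, 24-bit significand with hidden 1)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exp = ((bits >> 23) & 0xFF) - 127
    mant = (bits & 0x7FFFFF) | (1 << MANT_BITS)
    return sign, exp, mant

def approx_fp_mul(a, b, exact_bits=EXACT_BITS):
    """Approximate FP multiply; zeros and subnormals are not handled."""
    if a == 0.0 or b == 0.0:
        return 0.0
    sa, ea, ma = split_float(a)
    sb, eb, mb = split_float(b)

    low = MANT_BITS - exact_bits                 # width of the approximate segment
    ha, la = ma >> low, ma & ((1 << low) - 1)    # exact (high) / approximate (low) parts
    hb, lb = mb >> low, mb & ((1 << low) - 1)

    # Exact product of the short high parts, plus the low parts folded in by
    # shifting and adding: ha*lb and hb*la are approximated as lb and la
    # scaled by 2**exact_bits, and la*lb is dropped as the smallest term.
    prod = (ha * hb) << (2 * low)
    prod += (la + lb) << MANT_BITS

    # Renormalize the 2.xx-format product back to 1.xx and rebuild the value.
    exp = ea + eb
    if prod >> (2 * MANT_BITS + 1):
        prod >>= 1
        exp += 1
    mant = prod >> MANT_BITS

    sign = -1.0 if sa ^ sb else 1.0
    return sign * (mant / float(1 << MANT_BITS)) * 2.0 ** exp

# Quick check against the exact product:
print(approx_fp_mul(1.732, 2.718), 1.732 * 2.718)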