TY - GEN
T1 - Real-time liquid pouring motion generation
T2 - 2019 IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
AU - Saito, Namiko
AU - Dai, Nguyen Ba
AU - Ogata, Tetsuya
AU - Mori, Hiroki
AU - Sugano, Shigeki
N1 - Funding Information:
∗This work was supported in part by a JSPS Grant-in-Aid for Scientific Research (S) (No. 25220005), Grant-in-Aid for Scientific Research (A) (No. 19H01130), JST CREST (No. JPMJCR15E3), and the “Fundamental Study for Intelligent Machines to Coexist with Nature” program of the Research Institute for Science and Engineering, Waseda University, Japan.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/12
Y1 - 2019/12
N2 - We propose a sensorimotor dynamical system model for pouring unknown liquids. With our system, a robot holds and shakes a bottle to estimate the characteristics of the contained liquid, such as viscosity and fill level, without explicitly calculating their parameters. Next, the robot pours a specified amount of the liquid into another container. The system needs to integrate information on the robot's actions, the liquid, the container, and the surrounding environment to perform the estimation and execute a continuous pouring motion using the same model. We use deep neural networks (DNN) to construct the system. The DNN model repeats prediction and execution of the actions to be taken in the next time step based on the input sensorimotor data, including camera images, force sensor data, and joint angles. At the same time, the DNN model encodes the liquid characteristics in its internal state. We confirmed that the DNN model can control the robot to pour a desired amount of liquid with unknown viscosity and fill level.
AB - We propose a sensorimotor dynamical system model for pouring unknown liquids. With our system, a robot holds and shakes a bottle to estimate the characteristics of the contained liquid, such as viscosity and fill level, without explicitly calculating their parameters. Next, the robot pours a specified amount of the liquid into another container. The system needs to integrate information on the robot's actions, the liquid, the container, and the surrounding environment to perform the estimation and execute a continuous pouring motion using the same model. We use deep neural networks (DNN) to construct the system. The DNN model repeats prediction and execution of the actions to be taken in the next time step based on the input sensorimotor data, including camera images, force sensor data, and joint angles. At the same time, the DNN model encodes the liquid characteristics in its internal state. We confirmed that the DNN model can control the robot to pour a desired amount of liquid with unknown viscosity and fill level.
KW - Liquid characteristics estimation
KW - Long Short-Term Memory (LSTM) networks
KW - Neural networks
KW - Pouring
UR - http://www.scopus.com/inward/record.url?scp=85079071576&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85079071576&partnerID=8YFLogxK
U2 - 10.1109/ROBIO49542.2019.8961718
DO - 10.1109/ROBIO49542.2019.8961718
M3 - Conference contribution
AN - SCOPUS:85079071576
T3 - IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
SP - 1077
EP - 1082
BT - IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 6 December 2019 through 8 December 2019
ER -