Tan, Sinan; Guo, Di; Liu, Huaping; Zhang, Xinyu; Sun, Fuchun. Embodied scene description. Autonomous Robots 46(1). DOI: 10.1007/s10514-021-10014-9.
Embodiment is an important characteristic of all intelligent agents, yet existing scene description tasks mainly analyze images passively, so the semantic understanding of the scene is separated from the agent's interaction with its environment. In this work, we propose Embodied Scene Description, which exploits the agent's embodiment to find an optimal viewpoint in its environment for scene description tasks. A learning framework combining the paradigms of imitation learning and reinforcement learning is established to teach the agent to generate the corresponding sensorimotor activities. The proposed framework is evaluated on different scene description tasks in both the AI2-THOR simulation environment and on a real-world robotic platform, demonstrating the effectiveness and scalability of the developed method. In addition, a mobile application is developed to assist visually impaired people in better understanding their surroundings.
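To make the core idea of the abstract concrete, the sketch below shows a minimal, hypothetical version of reinforcement-learned viewpoint selection: a policy observes an image feature vector, chooses a navigation action, and is rewarded by a scene-description quality score. This is not the authors' implementation; the environment step, the feature dimension, and the `description_reward` scorer are stand-ins, and the update is a plain REINFORCE-style policy gradient in PyTorch used here only to illustrate the sensorimotor loop the paper describes.

```python
# Hypothetical sketch of viewpoint selection for embodied scene description.
# The environment and reward are placeholders, not the paper's framework.
import torch
import torch.nn as nn

ACTIONS = ["MoveAhead", "RotateLeft", "RotateRight", "Stop"]  # discrete motions

class ViewpointPolicy(nn.Module):
    """Maps an image feature vector to a distribution over navigation actions."""
    def __init__(self, feat_dim=512, hidden=128, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, feat):
        return torch.distributions.Categorical(logits=self.net(feat))

def description_reward(feat):
    """Hypothetical stand-in for a caption-quality score of the current view."""
    return feat.mean()

def run_episode(policy, env_step, feat, max_steps=10):
    """Roll out one episode and return a REINFORCE-style loss."""
    log_probs, rewards = [], []
    for _ in range(max_steps):
        dist = policy(feat)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        feat, done = env_step(ACTIONS[action.item()])  # hypothetical environment
        rewards.append(description_reward(feat))
        if done or ACTIONS[action.item()] == "Stop":
            break
    # Reward-to-go for each step, then the standard policy-gradient loss.
    returns = torch.cumsum(torch.stack(rewards).flip(0), 0).flip(0)
    return -(torch.stack(log_probs) * returns.detach()).sum()

if __name__ == "__main__":
    torch.manual_seed(0)
    policy = ViewpointPolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    def dummy_env_step(action):
        # Placeholder: returns a new image feature vector and a done flag.
        return torch.randn(512), False

    loss = run_episode(policy, dummy_env_step, torch.randn(512))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper's setting, the feature vector would come from the agent's camera view (in AI2-THOR or on the robot), the reward would reflect how well a captioning model can describe the scene from that viewpoint, and imitation learning from demonstrated trajectories would complement this reinforcement-learning signal.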