X. Yang, C. Castellini, D. Farina and H. Liu, "Ultrasound as a Neurorobotic Interface: A Review," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 6, pp. 3534-3546, June 2024, doi: 10.1109/TSMC.2024.3358960.
Neurorobotic devices, such as prostheses, exoskeletons, and muscle stimulators, can partly restore motor function in individuals with motor impairments resulting from, e.g., stroke, spinal cord injury (SCI), amputation, and musculoskeletal disorders. These devices require information transfer to and from the nervous system through neurorobotic interfaces. However, current interfacing systems suffer from low spatial and temporal resolution and lack robustness, being sensitive to, e.g., fatigue and sensor displacement. Muscle scanning and imaging by ultrasound technology has emerged as a neurorobotic interface alternative to more conventional electrophysiological recordings. While muscle ultrasound detects the movement of muscle fibers, and therefore does not directly capture neural information, the muscle fibers are activated by neurons in the spinal cord, and their motions therefore mirror the neural code sent from the spinal cord to the muscles. In this view, muscle imaging by ultrasound provides information on the neural activation underlying movement intent and execution. Here, we critically review the literature on ultrasound applied as a neurorobotic interface, focusing on technological progress and current achievements, machine learning algorithms, and applications in both upper- and lower-limb robotics. This critical review reveals that ultrasound in the human-machine interface field has evolved from bulky hardware to miniaturized systems, from multichannel imaging to sparse-channel sensing, from simple muscle morphological analysis to input signals for musculoskeletal models and machine learning, from unimodal sensing to multimodal fusion, and from conventional statistical learning to deep learning. For future advances, we recommend exploring high-precision ultrasound imaging technology, improving the wearability and ergonomics of systems and transducers, and developing user-friendly real-time human-machine interaction models.