The objective of this research is to answer fundamental design questions for multi-functional robotic skin sensors, optimize their placement on assistive robotic devices, have the robot and human “learn” to use the skin sensors efficiently, and quantitatively assess the impact of this assistive technology on humans. The approach is to design and fabricate integrated micro-scale sensors in conjunction with iterative simulation and experimental studies of the performance of physical human-robot interaction enabled by this technology.
This project will contribute efficient algorithms for optimal placement and data networking of distributed skin sensors on robots; new learning and control algorithms to sense human intent and improve interactivity; practical robotic skin and garment hardware with distributed sensors, including tactile, thermal imaging, and acceleration sensing, in flexible materials that can be easily attached to and peeled off of robots; and new metrics to evaluate the impact of this skin on humans, including level of assistance, safety, ease of use, aesthetics, and therapeutic benefits.
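To make the first contribution concrete, optimal sensor placement on a robot surface is often posed as a coverage-maximization problem, which greedy selection approximates well. The sketch below is purely illustrative of that general formulation (the project's actual algorithms, sensing models, and surface representations are not specified here): candidate sensor sites and surface points are modeled as 2-D coordinates, and sites are chosen one at a time to cover the most not-yet-covered points.

```python
# Illustrative sketch only: greedy coverage-based sensor placement on a
# simplified 2-D surface model. All names and the disc sensing model are
# assumptions for illustration, not the project's actual method.
import math


def coverage(sensor, points, radius):
    """Indices of surface points within the sensing radius of one site."""
    sx, sy = sensor
    return {i for i, (px, py) in enumerate(points)
            if math.hypot(px - sx, py - sy) <= radius}


def greedy_placement(candidates, points, radius, budget):
    """Pick up to `budget` sites; each pick maximizes newly covered points."""
    covered, chosen = set(), []
    for _ in range(budget):
        best_site, best_gain = None, 0
        for site in candidates:
            gain = len(coverage(site, points, radius) - covered)
            if gain > best_gain:
                best_site, best_gain = site, gain
        if best_site is None:  # no remaining site adds any coverage
            break
        chosen.append(best_site)
        covered |= coverage(best_site, points, radius)
    return chosen, covered


# Toy example: cover a 5x5 grid of surface points with radius-1.5 sensors.
points = [(x, y) for x in range(5) for y in range(5)]
sites, covered = greedy_placement(points, points, radius=1.5, budget=4)
```

Greedy selection is a standard choice for this kind of objective because the coverage function is submodular, so each pick is guaranteed to be within a constant factor of the best possible marginal gain.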
Co-robots of the future will share their living spaces with humans and, like people, will wear sensor skins and clothing that must be interconnected, fitted, cleaned, repaired, and replaced. Beyond serving aesthetic purposes that increase societal acceptance, these sensorized garments will also enhance robot perception of the environment and enable extraordinary levels of safety, cooperation, and therapy for humans. The research proposed here will unlock near-term as well as currently unforeseen applications of robotic skin with broad applicability, especially to home assistance, medical rehabilitation, and prosthetics.