How can misleading text negatively affect AI behavior? This is the question a recently submitted study hopes to address, as a team of researchers from the University of California, Santa Cruz and Johns Hopkins University investigated the potential security risks of embodied AI, meaning AI housed in a physical body, such as a car or robot, that adapts to its environment through observation rather than through text and data alone. This study has the potential to help scientists, engineers, and the public better understand the risks of AI and the steps that can be taken to mitigate them.
For the study, the researchers introduced CHAI (Command Hijacking against embodied AI), an attack method that targets embodied AI systems by hijacking their commands with misleading text and imagery. The researchers tested CHAI on a variety of AI-based systems, including drone emergency landing, autonomous driving, aerial object tracking, and robotic vehicles. In the end, they found that CHAI could successfully hijack these systems, underscoring the need for stronger security measures for embodied AI.
“I expect vision-language models to play a major role in future embodied AI systems,” said Dr. Alvaro Cardenas, a professor in the Computer Science and Engineering Department at UC Santa Cruz and a co-author on the study. “Robots designed to interact naturally with people will rely on them, and as these systems move into real-world deployment, security has to be a core consideration.”
Going forward, the researchers aspire to create defense mechanisms against CHAI-style attacks, with the goal of counteracting incoming threats before they compromise embodied AI systems too severely.
How will CHAI help mitigate security risks for embodied AI in the coming years and decades? Only time will tell, and this is why we science!
As always, keep doing science & keep looking up!
Sources: arXiv, EurekAlert!