Researchers Investigate AI Safety in
Self-Driving Cars and Uncover Weaknesses
The Role of AI in Autonomous Vehicles
Artificial intelligence is a critical component of self-driving cars. It supports key functionalities like sensing, decision-making, and predictive modeling. However, recent studies have raised questions about how vulnerable these AI systems might be to attacks.
Ongoing Research at the University at Buffalo
Researchers at the University at Buffalo are exploring this issue in depth. Their findings suggest that malicious actors could manipulate or deceive these systems. One striking example involves the strategic placement of 3D-printed objects on a vehicle to trick AI-powered radar systems into ignoring its presence.
Potential Implications Beyond Technology
Although these findings stem from controlled experiments, they could influence regulatory decisions. Government bodies, along with the automotive, tech, and insurance industries, might reconsider safety protocols. According to Chunming Qiao, SUNY Distinguished Professor and head of the research, self-driving vehicles are on the path to becoming mainstream. Therefore, it is crucial to protect the AI models that drive them from hostile interference.
A Timeline of the Study
The team’s research dates back to 2021, when its first findings were published in the Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. More recent papers appeared in May at MobiCom 2024 and this month at the 33rd USENIX Security Symposium; both are accessible on arXiv.
mmWave Radar: Strengths and Weaknesses
Yi Zhu, a former doctoral student at UB, led many experiments using a self-driving vehicle on campus. Now a professor at Wayne State University, Zhu specializes in cybersecurity and has authored several related studies. He explains that millimeter wave (mmWave) radar excels in poor weather and low light, outperforming traditional cameras. However, these systems remain vulnerable to both digital and physical hacks.
Deceiving Radar with “Tile Masks”
In one experiment, researchers used 3D printers and metal foils to create “tile masks.” By attaching just two of these masks to a vehicle, they made it disappear from radar detection. This highlighted how adversarial AI can be used to mislead object-recognition systems.
Broader Concerns and Real-World Risks
AI models can be tricked into producing false outputs when presented with deliberately crafted inputs. For example, changing a few pixels in a photo of a cat might cause the AI to misclassify it as a dog. Similarly, attackers could place adversarial objects on vehicles or pedestrians, creating serious safety risks. The motives might range from insurance fraud and corporate sabotage to personal vendettas.
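To make the cat-versus-dog example concrete, here is a minimal sketch of a gradient-sign-style adversarial perturbation against a generic image classifier. It is illustrative only, not the UB team's attack; the model loader, image tensor, and class index are hypothetical placeholders.

```python
# Illustrative sketch of an adversarial perturbation (FGSM-style).
# This is NOT the researchers' method; the model and inputs are placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged by at most `epsilon` per pixel
    in the direction that most increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)
    loss.backward()
    # Step each pixel along the sign of the gradient of the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage:
# model = load_pretrained_classifier()        # placeholder loader
# cat_image = torch.rand(1, 3, 224, 224)      # stand-in for a real photo
# label = torch.tensor([CAT_CLASS_INDEX])     # placeholder class index
# tricked = fgsm_perturb(model, cat_image, label)
# model(tricked).argmax() may now report a different class ("dog"),
# even though the change is imperceptible to a human viewer.
```

Physical attacks like the tile masks follow the same principle, except the perturbation is realized as an object in the real world rather than as pixel changes.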
Practical Barriers and Realistic Threats
These attacks assume the hacker knows the exact radar system on the target vehicle, and acquiring that level of detail is difficult for members of the public. Still, the findings raise a red flag for manufacturers and regulators.
Lagging Security Measures in AV Technology
Zhu notes that most safety research focuses on a car’s internal systems and often overlooks external threats. While defensive measures are being explored, no robust method has emerged yet. The team plans to study the security of other components, such as cameras and motion-planning systems, in future work.
The Road Ahead
Creating fail-safe AI models for self-driving cars remains a work in progress. Researchers at the University at Buffalo continue to develop strategies to minimize vulnerabilities and ensure safety on the roads.
Source: University at Buffalo