POSTBOX LIVE

Researchers Explore AI Safety in Self-Driving Cars and Uncover Vulnerabilities

 

 

The Role of AI in Autonomous Vehicles

 

 

University of Buffalo researchers uncover vulnerabilities in AI systems powering self-driving cars, revealing risks of radar evasion and adversarial attacks.

 

Artificial intelligence (AI) plays a critical role in the operation of self-driving cars. It powers sensing, decision-making, and predictive modeling. However, recent research raises concerns about how secure these AI systems truly are.

 

 

Investigating Potential Threats

At the University of Buffalo, researchers are studying the risks of adversarial attacks on AI systems in autonomous vehicles. Their findings reveal that strategically placed 3D-printed objects can disrupt radar detection, rendering a vehicle "invisible" to AI-powered sensors. Although the tests occurred in a controlled environment, the implications are serious.

While current autonomous cars may not be immediately at risk, this research could influence lawmakers, tech companies, insurance firms, and automakers. The lead researcher, Chunming Qiao, a SUNY Distinguished Professor, emphasizes that self-driving cars will soon be widespread. Therefore, it’s essential to secure their AI systems against malicious threats.

 

 

Key Research Findings

This research builds on previous studies, including a 2021 paper published in the Proceedings of the ACM SIGSAC Conference on Computer and Communications Security. More recent publications appeared in MobiCom 2024 and the USENIX Security Symposium, and are available on arXiv.

For the past three years, Ph.D. researcher Yi Zhu and his team have tested autonomous vehicles on the University of Buffalo’s North Campus. Zhu, now a professor at Wayne State University, has examined vulnerabilities in multiple sensor systems, including cameras, lidars, and radars.

 

 

mmWave Radar: Strong but Hackable

Zhu notes that millimeter-wave (mmWave) radar is popular because it performs well in rain, fog, and low-light environments. However, it is still susceptible to hacks.

To demonstrate this, researchers used 3D printers and metal foil to make geometric objects called “tile masks.” When they attached these masks to a car, the vehicle became undetectable by radar. This trick fooled the AI radar systems by distorting their input.

 

 

Real-World Risks and Motives

Zhu explains that AI, though powerful, can be confused by unfamiliar inputs. For instance, a model trained to identify cats might incorrectly label a distorted image as a dog. This vulnerability is known as adversarial AI.
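The idea behind adversarial AI can be illustrated with a toy example. The sketch below is purely hypothetical (the weights, inputs, and labels are made up, not taken from the study): a simple linear classifier confidently labels an input "cat," yet a small, deliberately chosen nudge to each feature flips its answer to "dog."

```python
# Toy "adversarial input" sketch: a tiny linear classifier is fooled by a
# small, targeted perturbation. All values are illustrative assumptions.
w = [1.0, -2.0, 0.5]   # hypothetical trained weights
b = 0.1                # hypothetical bias

def score(x):
    # Linear score: positive means "cat", negative means "dog"
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return "cat" if score(x) > 0 else "dog"

def sign(v):
    return 1.0 if v > 0 else -1.0

x = [0.9, 0.2, 0.4]    # an input the model classifies as "cat"
eps = 0.5              # perturbation budget

# Nudge each feature against the gradient of the score (an FGSM-style step):
# the change is small per feature, but chosen to hurt the model the most.
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

print(predict(x))      # "cat"
print(predict(x_adv))  # "dog" -- same rough input, opposite label
```

The radar "tile masks" exploit the same weakness at a physical level: a carefully shaped distortion of the sensor's input pushes the model across a decision boundary it was never trained to defend.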

Malicious actors could use similar tactics against self-driving cars. They might attach these adversarial objects while the vehicle is parked or even hide them in pedestrian clothing. These attacks could serve various purposes: insurance fraud, corporate sabotage, or personal vendettas.

However, it’s important to note that most attacks require a deep understanding of the target vehicle’s radar system. While technically possible, such access is unlikely for the average person.

 

 

Lagging Security Measures

Despite advancements in AV technology, Zhu believes that security remains underdeveloped. Most current safety features focus on internal systems, not external threats.

Though some researchers have proposed defensive solutions, none have proven completely effective. “There’s still a long way to go,” Zhu admits. His team plans to study the security of other sensors, such as cameras and motion-planning systems. They also aim to develop stronger defense strategies to guard against these types of attacks.

 

 


 

 

#AIsafety #SelfDrivingCars #AutonomousVehicles #Cybersecurity #RadarSpoofing #AVsecurity #AIresearch #TechPolicy

 

