
Researchers look into the safety of AI in self-driving cars and discover weaknesses

Artificial intelligence is an essential piece of technology for self-driving cars. Among other things, it is used for perception, decision-making and predictive modeling. But how vulnerable are these AI systems to attack?

This question is being investigated at the University at Buffalo, and the findings suggest that malicious actors could cause these systems to fail. For example, it is possible to strategically place 3D-printed objects on a vehicle so that AI-powered radar systems no longer detect it, effectively rendering the vehicle invisible to them.

The researchers stress that the work, conducted in a controlled research setting, does not mean that existing autonomous vehicles are unsafe.

Nonetheless, the findings could have implications for lawmakers and regulators, as well as for the automotive, tech, insurance and other industries.

Leading the research is Chunming Qiao, SUNY Distinguished Professor in the Department of Computer Science and Engineering. “While still novel today, self-driving vehicles are poised to become a dominant form of transportation in the near future,” Qiao says. “As a result, we need to make sure the technological systems, particularly the artificial intelligence models that power these cars, are safe from adversarial acts. This is something the University at Buffalo is actively working on.”

The research dates back to 2021, when it was first described in a paper published in the Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS). More recent examples include a study published in May in the Proceedings of the 30th Annual International Conference on Mobile Computing and Networking (MobiCom) and a paper presented this month at the 33rd USENIX Security Symposium, both available on arXiv.

mmWave detection is reliable but vulnerable.

Yi Zhu and other members of Qiao’s team have been testing an autonomous car on UB’s North Campus for the last three years.

Zhu, who completed his doctorate in the UB Department of Computer Science and Engineering in May, recently accepted a faculty position at Wayne State University.

A cybersecurity expert, he is lead author of the studies described above, which address the vulnerabilities of lidars, radars and cameras, as well as systems that fuse multiple sensors.

“In autonomous driving, millimeter wave [mmWave] radar has become widely adopted for object detection because it’s more reliable and accurate in rain, fog and poor lighting conditions than many cameras,” Zhu explains. “But the radar can be hacked both digitally and in person.”

In one experiment testing this idea, the researchers used 3D printers and metal foils to fabricate objects in specific geometric shapes that they dubbed “tile masks.” They found that attaching two tile masks to a vehicle was enough to mislead the AI models used in radar detection, causing the vehicle to vanish from the radar.

Insurance fraud and competition between AV companies are two possible attack motives.


Zhu points out that although AI can process enormous amounts of data, it can become confused and produce incorrect output when presented with inputs it was not trained to handle.

“Assume that an AI can correctly identify a cat in an image. If we alter the image by a few pixels, the AI might instead mistake it for a picture of a dog,” Zhu says. “This is an example of adversarial AI. In recent years, researchers have found or designed a large number of adversarial examples for many AI models. So we asked ourselves: is it possible to design adversarial examples for autonomous vehicles?”
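The article does not say which technique Zhu has in mind, but the fast gradient sign method (FGSM) is a classic way to produce exactly this kind of pixel-level perturbation. The sketch below, in PyTorch, is purely illustrative; the model, labels and epsilon value are assumptions for the sake of the example, not the UB team’s actual method (their papers target radar detectors, not this toy image classifier).

    # Minimal FGSM sketch (PyTorch). All names here are illustrative;
    # this is a generic adversarial-example demo, not the UB attack.
    import torch
    import torch.nn as nn

    def fgsm_perturb(model: nn.Module, image: torch.Tensor,
                     label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
        """Return an adversarially perturbed copy of `image`.

        Each pixel is shifted by at most `epsilon` in the direction that
        increases the classification loss -- often enough to flip the
        predicted label while the change stays imperceptible to a human.
        """
        image = image.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(image), label)
        loss.backward()
        # Step every pixel by +/- epsilon along the sign of the loss gradient.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0.0, 1.0).detach()  # keep a valid pixel range

Run against a trained classifier, a perturbation like this typically changes the prediction (cat to dog, in Zhu’s example) even though the two images look identical to a person.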

The researchers say potential attackers could covertly attach an adversarial object to a vehicle before a trip begins, while it is parked, or while it is stopped at a traffic light. According to Zhu, they could even place such an object in a pedestrian’s backpack or clothing, effectively erasing detection of that person.

Such attacks could be carried out for a variety of reasons, including insurance fraud, competition between autonomous driving companies, or a personal vendetta against the driver or occupants of another vehicle.

The researchers note that the simulated attacks assume the attacker has full knowledge of the radar object-detection system in the victim’s vehicle. While obtaining this information is possible, it is unlikely to be available to the general public.

When it comes to technology, security lags.

Most AV safety technology focuses on the internal systems of the vehicle, Zhu says, while few studies have examined external threats.

“The security has kind of lagged behind the other technology,” he says.

Although researchers have explored possible defenses against such attacks, they have yet to find a practical solution.

“I think there is a long way to go in creating an infallible defense,” Zhu says. “In the future, we hope to investigate the security not only of radars but of other sensors, such as cameras, as well as motion planning. We also plan to develop some defense solutions to mitigate these attacks.”

Provided by the University at Buffalo
