New Israeli Research May Improve Collaboration Between Humans, AI in Self-Driving Vehicles

Researchers from Bar-Ilan University developed a system for neural networks to rate their own confidence in a decision and ask for human intervention if they weren’t sure enough

Giorgia Valente/The Media Line

A new study from Bar-Ilan University may make self-driving vehicles safer by allowing artificial intelligence to assess its own level of confidence in a given decision.

The study, which was published today in Physica A, was the result of three months of research. Five researchers conducted the study, led by Ido Kanter, who works in Bar-Ilan University’s physics department and multidisciplinary brain research center.

Kanter’s research team investigated artificial intelligence’s ability to accurately describe its own level of confidence in a decision. The team’s physicists fed a neural network images of various types and trained it to identify them. Eventually, they produced a neural network that was successful both at identifying images and at recognizing which images it struggled to identify.

Much of the research is relevant to self-driving cars. Yuval Meir, one of the PhD students who worked on the study, told The Media Line that an autonomous vehicle using the system they developed would be able to tell if it couldn’t confidently identify a road sign and alert the driver to intervene in such a case.
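The deferral behavior Meir describes can be sketched in Python. This is a minimal illustration of the general idea of confidence-gated classification, not the study's actual method; the softmax-based confidence score, the 0.9 cutoff, and all function and label names are assumptions for illustration.

```python
import math

CONFIDENCE_THRESHOLD = 0.9  # illustrative cutoff; below it, defer to the human driver


def softmax(logits):
    """Convert raw network outputs into a probability distribution."""
    shifted = [x - max(logits) for x in logits]  # subtract max for numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]


def classify_or_defer(logits, labels, threshold=CONFIDENCE_THRESHOLD):
    """Return (predicted label, confidence), or (None, confidence) to request
    human intervention when the network is not sure enough."""
    probs = softmax(logits)
    confidence = max(probs)
    if confidence < threshold:
        return None, confidence  # e.g., alert the driver to take over
    return labels[probs.index(confidence)], confidence


signs = ["stop sign", "yield sign", "speed limit"]
print(classify_or_defer([4.0, 0.5, 0.2], signs))  # one clear winner: classify
print(classify_or_defer([1.1, 1.0, 0.9], signs))  # near-tie: defer to the driver
```

In the first call the network's outputs clearly favor one class, so a label is returned; in the second the outputs are nearly tied, confidence falls below the threshold, and the function returns `None` to signal that the driver should intervene.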

“The project itself is very unique, and it can be applied to different models too, not just vehicles,” Meir said.

The team’s research suggests a future where artificial intelligence and human intelligence work together.

“This type of technology may never substitute a human being, but it can work for and with humans,” Yarden Tzach, another PhD student on the project, told The Media Line. He said he predicts that developments in AI will actually lead to more jobs rather than job losses.

AI’s awareness of its own limitations is important, given the potential for serious mistakes. “Since AI is trained on narrow or skewed datasets, the margin of error can increase significantly,” Tzach explained.

That potential for error deters many people from using AI. An AI system with a built-in gauge of its own confidence might win over consumers who would otherwise distrust the technology.

“For our project, AI comes to help human beings in daily activities too but still requires their supervision. I hope that starting from our study on confidence, many other projects will follow in the future, which may lead to driving much more autonomously,” Meir said.

This piece of research is just one of many studies on artificial intelligence taking place in Israel.

“There are amazing universities here in Israel, and the country invests a lot in scientific and technical research,” Meir said. “This country can be seen as a great place that values innovation and new tools, which can also be used in the military field.”
