A neural network learns when it should not be trusted



Neural Network Confidence

MIT researchers have developed a way for deep learning neural networks to rapidly evaluate confidence levels in their output. The advance could improve safety and efficiency in AI-assisted decision-making. Credit: MIT

A faster way to evaluate uncertainty in AI-assisted decision-making could lead to safer results.

Increasingly, artificial intelligence systems known as deep learning neural networks are being used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know they are correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.

They have developed a quick way for a neural network to crunch data and output not just a prediction but also the model’s confidence level, based on the quality of the available data. The advance could save lives, as deep learning is already deployed in the real world today. A network’s level of confidence can be the difference between an autonomous vehicle determining that “it’s all clear to proceed through the intersection” and “it’s probably clear, so stop just in case.”

Current methods of uncertainty estimation for neural networks tend to be computationally expensive and relatively slow for split-second decisions. But Amini’s approach, dubbed “deep evidential regression,” accelerates the process and could lead to safer outcomes. “We need the ability to not only have high-performance models, but also to understand when we cannot trust those models,” says Amini, a PhD student in Professor Daniela Rus’ group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

“This idea is important and applicable broadly. It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model,” says Rus.

Amini will present the research at next month’s NeurIPS conference, together with Rus, who is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, director of CSAIL, and deputy dean of research for the MIT Stephen A. Schwarzman College of Computing; and graduate students Wilko Schwarting of MIT and Ava Soleimany of MIT and Harvard.

Efficient uncertainty

After an up-and-down history, deep learning has demonstrated remarkable performance on a variety of tasks, in some cases even exceeding human accuracy. And nowadays, deep learning seems to go wherever computers go. It fuels search engine results, social media feeds, and facial recognition. “We’ve had huge successes using deep learning,” says Amini. “Neural networks are really good at knowing the right answer 99 percent of the time.” But 99 percent won’t cut it when lives are on the line.

“One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong,” says Amini. “We really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently.”

Neural networks can be massive, sometimes brimming with billions of parameters, so it can be a heavy computational lift just to get an answer, let alone a confidence level. Uncertainty analysis in neural networks isn’t new. But earlier approaches, stemming from Bayesian deep learning, have often relied on running, or sampling, a neural network many times over to understand its confidence. That process takes time and memory, a luxury that might not exist in high-speed traffic.
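To illustrate why those sampling-based estimates are costly, here is a minimal sketch, assuming Monte Carlo dropout as a stand-in for the Bayesian approaches the article alludes to (the architecture, layer sizes, and sample count are illustrative assumptions, not taken from the paper): the same input must pass through the network dozens of times with dropout left on, and the spread of the outputs is read as uncertainty.

```python
# Minimal sketch of a sampling-based uncertainty estimate (Monte Carlo dropout).
# Illustrative only; the architecture and numbers are assumptions, not the paper's.
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    def __init__(self, in_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Dropout(p=0.2),          # left active at inference time
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Run the network n_samples times with dropout enabled and read the
    spread of the predictions as uncertainty. Cost grows linearly with
    n_samples, which is the overhead a single-pass method avoids."""
    model.train()  # keeps the dropout layers stochastic during inference
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

model = DropoutRegressor()
x = torch.randn(8, 16)                 # a batch of 8 dummy inputs
mean, uncertainty = mc_dropout_predict(model, x)
print(mean.shape, uncertainty.shape)   # torch.Size([8, 1]) for both
```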

The researchers devised a way to estimate uncertainty from only a single run of the neural network. They designed the network with an expanded output, producing not only a decision but also a new probabilistic distribution capturing the evidence in support of that decision. These distributions, termed evidential distributions, directly capture the model’s confidence in its prediction. This includes any uncertainty present in the underlying input data, as well as in the model’s final decision. This distinction can signal whether uncertainty can be reduced by tweaking the neural network itself, or whether the input data are just noisy.
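The sketch below shows what such a single-pass evidential head could look like. It follows the Normal-Inverse-Gamma parameterization (gamma, nu, alpha, beta) described in the accompanying NeurIPS paper; the article itself gives no formulas, so the layer shapes, names, and the two uncertainty expressions here should be read as an assumption-laden illustration rather than the authors’ implementation.

```python
# Sketch of a single-pass evidential regression head (assumed parameterization:
# a Normal-Inverse-Gamma distribution with parameters gamma, nu, alpha, beta).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Maps backbone features to the four evidential parameters."""
    def __init__(self, in_dim=64):
        super().__init__()
        self.fc = nn.Linear(in_dim, 4)

    def forward(self, features):
        gamma, log_nu, log_alpha, log_beta = self.fc(features).split(1, dim=-1)
        nu = F.softplus(log_nu)              # nu > 0
        alpha = F.softplus(log_alpha) + 1.0  # alpha > 1
        beta = F.softplus(log_beta)          # beta > 0
        return gamma, nu, alpha, beta

def prediction_and_uncertainties(gamma, nu, alpha, beta):
    """One forward pass yields the prediction plus both kinds of uncertainty:
    aleatoric = noise inherent in the input data,
    epistemic = the model's own lack of evidence (reducible with more data)."""
    prediction = gamma
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return prediction, aleatoric, epistemic

features = torch.randn(8, 64)            # dummy backbone features
outputs = EvidentialHead()(features)
pred, aleatoric, epistemic = prediction_and_uncertainties(*outputs)
```

Reading the two uncertainty terms separately mirrors the distinction in the paragraph above: high epistemic uncertainty suggests the model needs more or better training data, while high aleatoric uncertainty points to noise in the inputs themselves.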

Confidence check

To test their approach, the researchers started with a challenging computer vision task. They trained their neural network to analyze a monocular color image and estimate a depth value (that is, the distance from the camera lens) for each pixel. An autonomous vehicle might use similar calculations to estimate its proximity to a pedestrian or to another vehicle, which is no simple task.

Their network’s performance was on par with previous state-of-the-art models, but it also gained the ability to estimate its own uncertainty. As the researchers had hoped, the network projected high uncertainty for pixels where it predicted the wrong depth. “It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator,” says Amini.
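One common way to probe that kind of calibration is a sparsification-style check: sort the pixels by predicted uncertainty and confirm that the actual error grows as the predicted uncertainty does. The sketch below is illustrative only and is not the paper’s exact evaluation protocol; the array shapes and synthetic data are assumptions.

```python
# Illustrative calibration check: bin pixels by predicted uncertainty and
# verify that the actual depth error rises from the most to the least
# confident bins. Not the paper's exact metric; shapes and data are made up.
import numpy as np

def error_vs_uncertainty(pred_depth, true_depth, uncertainty, n_bins=10):
    abs_error = np.abs(pred_depth - true_depth).ravel()
    order = np.argsort(uncertainty.ravel())           # least to most uncertain
    bins = np.array_split(order, n_bins)
    return [abs_error[idx].mean() for idx in bins]    # should increase

# Synthetic per-pixel data standing in for a real depth map.
rng = np.random.default_rng(0)
true_depth = rng.uniform(1.0, 10.0, size=(240, 320))
noise_scale = rng.uniform(0.0, 1.0, size=true_depth.shape)
pred_depth = true_depth + rng.normal(0.0, noise_scale)   # noisier where scale is high
print(error_vs_uncertainty(pred_depth, true_depth, uncertainty=noise_scale))
```

A well-calibrated estimator produces a list that climbs steadily from the first bin to the last; a flat list means the uncertainty values carry little information about where the model is actually wrong.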

To stress-test their calibration, the team also showed that the network projected higher uncertainty for “out-of-distribution” data: completely new types of images never encountered during training. After they trained the network on indoor home scenes, they fed it a batch of outdoor driving scenes. The network consistently warned that its responses to the novel outdoor scenes were uncertain. The test highlighted the network’s ability to flag when users should not place full trust in its decisions. In these cases, “if this is a health care application, maybe we don’t trust the diagnosis that the model is giving, and instead seek a second opinion,” says Amini.
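In a deployed system, that warning can be reduced to a simple gate on the predicted uncertainty, as in the hypothetical sketch below; the threshold value and function name are made up for illustration and do not come from the paper.

```python
# Hypothetical decision gate on predicted (epistemic) uncertainty, illustrating
# the "seek a second opinion" behavior described above. Threshold is made up.
def act_or_defer(prediction, epistemic_uncertainty, threshold=0.5):
    """Return the model's answer only when it is confident enough;
    otherwise defer to a human (a doctor, a safety driver, ...)."""
    if epistemic_uncertainty > threshold:
        return None, "defer: input looks unlike the training data"
    return prediction, "accept"

print(act_or_defer(prediction=3.2, epistemic_uncertainty=0.9))
# -> (None, 'defer: input looks unlike the training data')
```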

The network even knew when photos had been doctored, potentially hedging against data-manipulation attacks. In another trial, the researchers boosted adversarial noise levels in a batch of images they fed to the network. The effect was subtle, barely perceptible to the human eye, but the network sniffed out those images, tagging its output with high levels of uncertainty. This ability to sound the alarm on falsified data could help detect and deter adversarial attacks, a growing concern in the age of deepfakes.

Deep evidential regression is “a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems,” says Raia Hadsell, an artificial intelligence researcher at DeepMind who was not involved with the work. “This is done in a novel way that avoids some of the messy aspects of other approaches, such as sampling or ensembles, which makes it not only elegant but also computationally more efficient, a winning combination.”

Deep evidential regression could enhance safety in AI-assisted decision making. “We’re starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences,” says Amini. “Any user of the method, whether it’s a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision.” He envisions the system not only quickly flagging uncertainty, but also using it to make more conservative decisions in risky scenarios, such as an autonomous vehicle approaching an intersection.

“Any field that will have deployable machine learning ultimately needs to have reliable uncertainty awareness,” he says.

The work was supported, in part, by the National Science Foundation and Toyota Research Institute through the Toyota-CSAIL Joint Research Center.



