Why is it so hard to prove that self-driving cars are safe?
Proving to consumers that their cars are safe has long been a problem plaguing makers of self-driving technology. Because road conditions are so varied and complex, the author argues, autonomous driving built on machine learning may fail to handle every potential risk. This article comes from IEEE, by Andrew Silver, and was compiled by Lei Feng network; it may not be reproduced without permission.
Last fall, members of the media test-drove a Tesla Model S equipped with the Autopilot system in Palo Alto, California.
It's hard for automakers to prove how safe self-driving cars are, because the core of their intelligence is machine learning.
"You can't always feel that autopilot is a sure thing," said Phillip Koopman, a computer scientist at Carnegie Mellon University and an automotive manufacturer.
As early as 2014, one market research firm predicted that the market for self-driving cars would reach US$87 billion by 2030. Many companies, including Google, Tesla, and Uber, have launched their own programs for computer-assisted or fully autonomous driving. All have made some degree of progress, but plenty of technical problems remain to be solved.
Some researchers believe that, because of the very nature of machine learning, proving that a self-driving car can operate safely on the road is an extremely difficult challenge. Koopman is one of them.
In general, engineers write code against a set of requirements and then run tests to check that the code meets them. With machine learning, however, putting a computer in control of such a complex system is not nearly so simple.
For example, handling images captured at different times of day, or recognizing important markings such as crosswalks and stop signs under varying circumstances, is not a problem that can be solved by writing explicit rules. "The difficulty with machine learning is that you simply can't specify the requirements the code has to meet," Koopman said.
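To see the contrast, consider how a conventional, requirement-driven function is tested. The sketch below is only a hypothetical illustration (the function name, thresholds, and simplified physics are invented for this example, not taken from any real vehicle's code): because the requirement can be stated precisely, the test follows directly from it.

    # Requirement: brake when the obstacle is closer than the stopping distance.
    def must_brake(distance_m: float, speed_mps: float, decel_mps2: float = 6.0) -> bool:
        # Stopping distance under constant deceleration: v^2 / (2a).
        stopping_distance = speed_mps ** 2 / (2 * decel_mps2)
        return distance_m <= stopping_distance

    # The tests follow straight from the requirement -- something that has
    # no analogue for "recognize a stop sign in any lighting."
    assert must_brake(distance_m=10.0, speed_mps=20.0)      # needs ~33 m to stop
    assert not must_brake(distance_m=50.0, speed_mps=10.0)  # needs ~8 m to stop

For perception, there is no equivalent specification to write down: no finite list of rules pins down what counts as a stop sign in every condition.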
Many years ago, engineers realized that traditional software cannot analyze camera images, so they turned to machine learning algorithms. The algorithm generates a mathematical model that can solve a specific problem by processing the sample.
Engineers supply many annotated samples, telling the computer which images contain a stop sign and which do not. The algorithm analyzes blocks of pixels in these images, extracts characteristic features, and builds a working model from them. When the computer receives a new image, it runs the image through this model to decide whether it contains a stop sign.
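As a rough sketch of that pipeline, here is a minimal, hypothetical example using scikit-learn on synthetic stand-in data (a real detector would be trained on vast numbers of labeled road photos, not random arrays). What comes out of training is simply an array of learned weights, which is why inspecting the result reveals only statistics:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stand-in data: each 32x32 "image" flattened to 1024 pixel features,
    # labeled 1 if it "contains a stop sign" and 0 otherwise.
    X = rng.random((1000, 32 * 32))
    y = rng.integers(0, 2, size=1000)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)          # learn weights from the labeled samples

    print(model.coef_.shape)             # (1, 1024): one learned weight per pixel
    print(model.score(X_test, y_test))   # accuracy on held-out images
                                         # (near chance here: the labels are random)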
Koopman said: "This type of inductive learning has a certain potential risk of failure. If you want to take a closer look at what these machines are doing, you can only see some statistics. It is a black box. You don't even know that it is learning. What are you."
To put it more vividly, imagine that you are testing a self-driving car to see whether it avoids pedestrians, and that the pedestrians around it are all wearing orange safety vests. You never have to take control, and the vehicle stops every time. But depending on how the program was trained, it may be recognizing a pedestrian's hands, arms, or feet, or it may simply be reacting to the orange vest.
Or, more concretely: if you test your self-driving vehicle through a summer in which nobody wears a hat, what happens the first time the vehicle's computer sees one? Will it be thrown off?
"The events we can use to train our algorithms are limited," Koopman said.
The artificial neural network, a common machine learning model, loosely mimics the connections between neurons in the human brain. Google researchers once trained one to recognize dumbbells from a series of pictures. But because no training image showed a dumbbell on its own (in every picture a bodybuilder was holding it), the network never extracted the essential features of a dumbbell and ended up making mistaken judgments, treating the arm as part of the object.
Another issue is safety validation. Koopman points out that if an algorithm is trained and tested too many times on similar data, it may in effect memorize the particular tests and simply reproduce their answers, rather than proving that it can generalize.
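A toy illustration of that failure mode (my own example, not Koopman's): a high-capacity model can score perfectly on data it has already seen, even when the labels are pure noise, while doing no better than a coin flip on genuinely new data.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    X_seen = rng.random((200, 10))
    y_seen = rng.integers(0, 2, size=200)     # labels are pure noise

    model = DecisionTreeClassifier()          # unlimited depth: free to memorize
    model.fit(X_seen, y_seen)

    print(model.score(X_seen, y_seen))        # 1.0 -- "passes" the familiar test
    X_new = rng.random((200, 10))
    y_new = rng.integers(0, 2, size=200)
    print(model.score(X_new, y_new))          # ~0.5 -- no real generalization

Passing a test the model has effectively seen before says nothing about how it will behave on the open road.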
Koopman said that if Uber were to launch its self-driving cars in a randomly chosen city without a detailed high-precision map, the cars would not be fully functional. There is a simple workaround: train and deploy the cars only in downtown Pittsburgh, which Uber has already mapped. But that greatly limits where self-driving cars can operate.
Another major challenge is whether the algorithm's recognition ability holds up when the system encounters bad conditions such as rain, fog, and dust.
A 2013 study found that changing a block of pixels in an image, in a way invisible to the naked eye, could destroy an algorithm's ability to tell that the vehicle ahead is a school bus rather than an ordinary car.
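That study described what are now known as adversarial examples. The sketch below is an illustration on a simple linear classifier with synthetic data (not the study's deep-network code): nudging every "pixel" by an imperceptible amount against the model's weights is enough to flip its decision.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.random((500, 100))                   # stand-in "images"
    y = (X.mean(axis=1) > 0.5).astype(int)       # a rule the model can learn

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Pick the image the model is least confident about, then push every
    # pixel a tiny step against the model's weights.
    logits = X @ model.coef_[0] + model.intercept_[0]
    i = int(np.argmin(np.abs(logits)))
    x = X[i]
    eps = 0.02                                   # far too small to see
    x_adv = x - eps * np.sign(model.coef_[0]) * np.sign(logits[i])

    print(model.predict(x.reshape(1, -1))[0])      # original label
    print(model.predict(x_adv.reshape(1, -1))[0])  # usually flips
    print(np.abs(x_adv - x).max())                 # every pixel moved by only 0.02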
Matthieu Roy, a software reliability researcher at the French National Centre for Scientific Research who has worked in the automotive and avionics industries, put it this way: "We would never put machine learning algorithms on an aircraft, because we cannot judge whether the system's decisions are right or wrong. And if a plane cannot pass its safety checks, it cannot take off or land." Roy pointed out that we cannot test every scenario a self-driving car might encounter, yet we still have to cope with those potential risks.
Alessia Knauss, a postdoctoral researcher in software engineering at Chalmers University of Technology in Sweden, is working to determine the best testing regime for self-driving car development. Testing everything exhaustively, she said, "costs far too much!"
She is currently interviewing auto industry executives to understand their attitudes on this point. Even a vehicle fitted with many sensors, as Google's self-driving cars are, may use some of them only as emergency backups, she said. Still, every component has to be tested under realistic conditions, together with how the overall system makes use of it. "We will do our best to develop an optimal testing model," Knauss said.
Koopman wants automakers to prove to independent third parties just how safe their autonomous driving systems are. "I don't believe what they say," Koopman said.
He also wants manufacturers to explain, specifically, how their vehicles' algorithms behave, what scenarios their training and test data cover, and how results from simulation ensure passenger safety in real life. If an unexpected event never comes up in an engineering team's 100-million-mile simulation run, the vehicle will have no way of handling it, and yet the manufacturer can still claim that such situations are unlikely.
"Each industry company has an independent check and balance program when it comes to developing critical software," Koopman points out. Just last month, the National Highway Traffic Safety Administration (NHTAS) issued a self-driving automobile law, but NHTAS did not impose rigid regulations on independent safety tests.
Koopman believes that some companies will relax their safety requirements under the pressure of development time and cost. The 1986 loss of NASA's space shuttle Challenger is a cautionary example: neglected risk factors led the shuttle to break apart 73 seconds after liftoff, killing its seven crew members.
The details of how an algorithm's safety is checked need not be disclosed to the public, Koopman says. The aviation industry, for example, relies on engineering experts hired by the airlines, and it is standard practice for them to sign confidentiality agreements. "I'm not telling them how to do their jobs," he said. "I'm just saying that people have a right to know certain things."