Brain polar body
The market is hungry for capacity, and technology companies and automakers are pouring in money one after another. Road testing of driverless cars is nothing new, but autonomous driving has not arrived as quickly as imagined. For at least two years, it has been dogged by troubles.
In Arizona, home to the most permissive autonomous-vehicle policy in the U.S., there have been at least 21 incidents in two years of people harassing autonomous vehicles and their safety drivers; one gunman even threatened a safety driver and told him to get out of the neighborhood.
The root of the public's anger is that there have been too many technical failures. Data show that the number of autonomous-driving accidents in California rose year after year from 2014 to 2018, and none of the big players, Waymo (Google), Cruise (GM), Apple, TRI (Toyota Research Institute), Drive.ai, and UATC (Uber), escaped untouched.
Cautious as they are, driverless cars still take the blame at every turn; theirs is a bitter lot.
After several years of development, the perception side of autonomous driving has made considerable progress, and high-precision sensors and cameras have long been standard equipment. To escape its technical predicament, autonomous driving needs, I am afraid, a more sophisticated decision-making system to win back increasingly disillusioned hearts and minds.
And lately, it seems that just such a savior has appeared.
Guarding against driverless cars as against a flood? Old Problems and New Solutions in Autonomous Driving
From the 2018 autonomous vehicle disengagement report just released by the California Department of Motor Vehicles (DMV), we can draw a basic conclusion:
The principal contradiction in the primary stage of autonomous driving is the contradiction between people's ever-growing expectations of automation and the still-backward driving skills of driverless cars.
Practice has shown that although the spatial perception that once limited driverless cars' ability to read the road has been greatly enhanced, it has not helped them adapt well to the real world. The public can hardly be blamed for giving driverless cars the cold shoulder.
At present, only a decision-making system that balances risk control and efficiency can win the self-driving car any respect.
Unfortunately, in many real-world traffic situations that humans handle with ease, machines simply cannot make accurate, efficient, and prudent judgments. For a long time to come, autonomous driving will therefore have to rely on manual takeovers to bridge the gap between the system's intelligence and human expectations. The frequency of manual takeovers has accordingly become the most important index for evaluating autonomous-driving technology.
According to the DMV report, the most skilled player, Waymo, drives an average of 17,846.8 kilometers before needing a manual takeover, while Uber, whose road-testing permit was revoked, was taken over roughly once every 0.6 kilometers; its safety drivers must have been worn out.
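The "kilometers per takeover" figure above is simply total autonomous distance divided by the number of manual takeovers. A minimal sketch, with made-up fleet totals chosen only to reproduce the per-takeover averages quoted above (not the DMV's actual raw data):

```python
def km_per_disengagement(total_km: float, takeovers: int) -> float:
    """Average autonomous distance driven between manual takeovers."""
    if takeovers == 0:
        return float("inf")  # no takeovers recorded in the period
    return total_km / takeovers

# Hypothetical fleet totals, chosen to match the averages quoted above.
fleet_stats = {
    "Waymo-like": (1_784_680.0, 100),  # ~17,846.8 km per takeover
    "Uber-like": (600.0, 1_000),       # ~0.6 km per takeover
}

for name, (km, n) in fleet_stats.items():
    print(f"{name}: {km_per_disengagement(km, n):,.1f} km per takeover")
```

The lower this number, the more often a human has to rescue the system, which is why the DMV uses it as a rough proxy for maturity.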
In a report submitted by Google, over 14 months of road testing its self-driving cars dropped out of autonomous mode 272 times on their own, and safety drivers chose to take over control a further 69 times. Google said that without those interventions, the cars could have been involved in 13 traffic collisions.
Under these circumstances, California has had to stipulate that every company testing fully driverless cars must set up a remote control room from which a human can take over driving in case of trouble.
But it would be naive to think that everything is fine as long as humans can take over. Never mind the enormous labor cost of cleaning up after driverless cars; with the IQ autonomous cars have today, letting them onto the road alone is worrying no matter how you look at it.
The fundamental solution is for the driverless car to learn to drive itself, completely and safely, without human intervention. Is that possible?
The latest research from MIT and Microsoft makes it possible: the system learns to recognize and correct its own erroneous actions during training, so that in actual driving it can handle situations that, at this stage, only a human can judge.
A New Role for Humans: From Cleaning Up After Driverless Cars to Training Machine Intelligence
In this latest research, MIT and Microsoft propose a new way of training self-driving systems that helps driverless cars make better decisions when trouble arises, rather than leaving humans to tidy up the mess after the fact.
As in traditional training, the researchers first put the autonomous-driving system through comprehensive simulation, preparing it for everything the vehicle may encounter on the road.
Then a human monitors the system as it acts. If the car behaves correctly, the human does nothing. If the car's action deviates from what a human would do, the human takes the wheel. At that moment the system receives a signal: in this particular situation, this action is acceptable, and that one is not.
By collecting human feedback whenever the system makes, or is about to make, an error, the researchers build up a list of human judgments on the system's behavior. They then combine this data to train a new model that more accurately predicts when the system needs to take corrective action.
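The feedback-collection step described above can be sketched roughly as follows; the class, situation, and action names are hypothetical illustrations, not from the MIT/Microsoft paper:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    """Counts, per (situation, action) pair, human acceptances vs. takeovers."""
    accepted: dict = field(default_factory=dict)
    takeovers: dict = field(default_factory=dict)

    def record(self, situation: str, action: str, human_took_over: bool) -> None:
        book = self.takeovers if human_took_over else self.accepted
        key = (situation, action)
        book[key] = book.get(key, 0) + 1

log = FeedbackLog()
# Simulated drive: the human lets correct actions pass and takes the wheel on errors.
log.record("parallel_with_truck", "keep_speed", human_took_over=False)
log.record("parallel_with_ambulance", "keep_speed", human_took_over=True)
```

A log like this is exactly the raw material the contradictory-signal problem below arises from: the same action can accumulate both acceptances and takeovers in situations that look similar to the system.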
It is worth noting that during training the system may receive many contradictory signals.
For example, in the system's eyes, cruising alongside a large vehicle without slowing down is fine, but if the vehicle alongside is an ambulance, failing to slow down will be judged wrong.
The researchers therefore resort to probability: if, in the ambulance situation, the system's action (slowing down or stopping) is accepted nine times out of ten, then its choice in that particular situation is labeled safe.
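A minimal sketch of this probabilistic labeling, assuming a simple acceptance-rate threshold; the nine-out-of-ten figure comes from the article, while the function name, labels, and threshold are illustrative:

```python
def label_situation(accepted: int, takeovers: int, threshold: float = 0.9) -> str:
    """Label a situation by the fraction of its actions a human accepted."""
    total = accepted + takeovers
    if total == 0:
        return "unknown"  # no human feedback yet for this situation
    return "safe" if accepted / total >= threshold else "blind_spot"

print(label_situation(accepted=9, takeovers=1))  # nine out of ten accepted
print(label_situation(accepted=3, takeovers=7))  # mostly taken over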
One small step for humans, one giant leap for autonomous driving?
In practice, the study still faces some real problems.
For example, unacceptable behavior is generally far rarer than acceptable behavior, which means a system trained on raw probabilities is likely to predict that every situation is safe, and that would be extremely dangerous in practice.
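Why raw probabilities fail here is easy to see with made-up numbers: a degenerate model that always predicts "safe" scores high accuracy while missing every dangerous case.

```python
# Made-up feedback data: 990 acceptable examples vs. only 10 unacceptable ones.
labels = ["safe"] * 990 + ["unsafe"] * 10
predictions = ["safe"] * len(labels)  # degenerate model: everything is "safe"

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
missed = sum(1 for p, y in zip(predictions, labels) if y == "unsafe" and p == "safe")
print(f"accuracy={accuracy:.0%}, dangerous cases missed={missed}")
```

Here the model is 99% accurate yet lets all 10 dangerous situations through, which is why accuracy alone is a misleading target for a safety system.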
Even so, the real potential of this new training method is that it gives autonomous driving something like common sense, and that promises a brighter future.
First of all, thanks to the high level of human involvement, an autonomous system can predict which incorrect actions it might take in new situations before it ever goes on the road. In the past, such situations could only be handled reactively by safety drivers or remote operators.
This helps pull autonomous driving back from the extremes of over-optimism and over-pessimism toward the middle ground.
Moving forward amid the oscillation: that is the most realistic way for AI to enter everyday life.