A study, published in Nature Machine Intelligence, looked at a new algorithm to improve the safety of driverless cars.
Prof Benjamin Heydecker, Professor of Transport Studies, University College London (UCL), said:
“The authors develop an overlay for autonomous vehicles that ensures their collision-free travel in certain specified road and traffic environments. The operation of the overlay is to override autonomous control of a vehicle if the planned path could otherwise lead to a collision with another road user (including vehicles and pedestrians). The overriding action is calculated to be safe throughout its duration and to result in a state from which safe onward paths are available: this includes halting the autonomous vehicle until other road users that could legally conflict with it have passed. The override therefore ensures safe operation of any autonomous control mechanism with which it is implemented jointly.
“The traffic environments within which safe operation is calculated allow for all vehicles travelling subject to legal limits on speeds and manoeuvres (i.e. following all rules of the road) and subject to kinematic possibility (limited acceleration). Pedestrians are subject only to limitations on their kinematic possibility. The paper shows examples of combining the collision-free overlay with each of three different autonomous vehicle controllers in two traffic scenes.
“The effect of the collision-free overlay is to ensure safety in the specified environments, as illustrated by the example combinations of autonomous controllers and scenarios. However, three main comments apply:
“First, the resulting control is safe to the point of conservatism, lacking anticipation of the movement of other road users. This leads to low speeds and stop-go movement of the autonomous vehicle. An example illustrated in the text is diverting a vehicle around a pedestrian who is on the designated walkway, in case they step into the roadway.
“Second, the override is predicated on legal action by other road users, so it does not ensure safety in cases where other road users do not follow all rules of the road. Whilst safety cannot be guaranteed in all such cases, some mitigation is possible through human perception, knowledge, experience and anticipation.
“In summary, the proposed collision-free overlay is shown to work within its defined environment, but at the expense of progressive motion. The practical environments in which this could be beneficial include plazas where autonomous vehicles travel at low speeds alongside pedestrians, and roadways that are available exclusively for autonomous vehicles. Effective implementation in fleets of cooperative vehicles would require further development of the methods described in the paper. The authors’ conclusion that this approach can drastically reduce the number of traffic accidents is supported by the text inasmuch as the reference would be to the number under unmodified autonomous control.”
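For readers less familiar with this kind of run-time safeguard, the sketch below illustrates, in deliberately simplified one-dimensional form, the check-and-override loop Prof Heydecker describes: the planner's command is executed only if the resulting motion stays clear of the worst-case (legally and kinematically bounded) behaviour of a vehicle ahead; otherwise a fail-safe braking manoeuvre is substituted. All names, constants and dynamics here are illustrative assumptions, not the method published in the paper.

    # Minimal sketch of a "collision-free overlay": vet the planner's command against
    # worst-case motion of another road user, fall back to braking if the check fails.
    # Everything here (constants, dynamics, names) is an illustrative assumption.
    from dataclasses import dataclass

    DT = 0.1              # planning time step [s], assumed
    HORIZON = 30          # number of steps checked (about 3 s), assumed
    SAFETY_MARGIN = 2.0   # minimum separation [m], assumed

    @dataclass
    class State:
        x: float  # position along the lane [m]
        v: float  # speed [m/s]

    def rollout(state: State, accel: float, steps: int) -> list[float]:
        """Forward-simulate positions under constant acceleration (simple 1-D kinematics)."""
        xs, x, v = [], state.x, state.v
        for _ in range(steps):
            v = max(0.0, v + accel * DT)
            x += v * DT
            xs.append(x)
        return xs

    def is_safe(ego: State, ego_accel: float, lead: State, lead_brake: float) -> bool:
        """Check the intended ego motion keeps the margin even if the vehicle ahead
        brakes as hard as it legally/physically can (the worst case allowed for)."""
        ego_xs = rollout(ego, ego_accel, HORIZON)
        lead_xs = rollout(lead, -lead_brake, HORIZON)
        return all(x_l - x_e >= SAFETY_MARGIN for x_e, x_l in zip(ego_xs, lead_xs))

    def overlay(ego: State, planned_accel: float, lead: State) -> float:
        """Pass the planner's command through only if verified; otherwise override."""
        if is_safe(ego, planned_accel, lead, lead_brake=8.0):
            return planned_accel
        # Fail-safe: brake towards a standstill. In the approach described above,
        # such a fail-safe manoeuvre would itself have been verified in advance.
        return -6.0

    if __name__ == "__main__":
        ego = State(x=0.0, v=15.0)
        lead = State(x=30.0, v=10.0)
        print("commanded acceleration:", overlay(ego, planned_accel=1.0, lead=lead))

In this toy example the planner's request to accelerate is rejected and braking substituted, which is exactly the conservative, stop-go behaviour Prof Heydecker cautions about.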
Dr Ron Chrisley, Director, Centre for Cognitive Science, University of Sussex, said:
“The system’s assumption that other road users will always behave legally could potentially lead to collisions that a different system, based more on how road users *actually* behave, would prevent. Saying that a collision wasn’t strictly caused by the autonomous vehicle will be cold comfort to the families of accident victims in cases where the autonomous vehicle could have avoided the collision *if* it had taken predictable but non-legal driving/walking behaviour of others into account. The authors admit as much when they suggest that in some cases (e.g. some pedestrians, some of the time) one may get better outcomes by relaxing this assumption. But when should the assumption be relaxed? Only when there is a registered school nearby? What if there are lots of children around, but no officially designated school? There is no reason to believe the system can distinguish children from non-children, so it is unlikely to be able to make the clumsy switch from “those around will behave legally” to “those around may not behave legally” in the right situations.
“But it’s worse than that: it is my understanding from the paper that the system always restricts the autonomous vehicle to legal trajectories only. This will clearly give the wrong result in any situation where, say, a harmless Highway Code infraction on the part of the autonomous vehicle could prevent a multi-vehicle collision involving multiple fatalities.
“The paper focusses solely on avoiding collisions between the autonomous vehicle and others, but does not consider what effect the vehicle’s manoeuvres might have on other drivers, making them more likely to collide with something else.
“A real-world autonomous vehicle is never 100% certain about what the objects, road boundaries, lanes, regulations, signals, etc. are in a situation. Rather, it has a *level* of confidence or *degree* of certainty about all these things. The system discussed in the paper does not, it seems, take these different confidence levels into account when computing what trajectory is best. A more ethically rational system would modulate its preference for a particular trajectory not only by whether it is legal and safe, but also by the degree of certainty in the variables used to make the relevant preference calculations.”
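Dr Chrisley's last point, that trajectory preference should be weighted by perception confidence as well as by legality and safety, can be made concrete with a small sketch. The names, weights and thresholds below are hypothetical and are not drawn from the paper; the sketch only illustrates the shape of such a calculation.

    # Illustrative confidence-weighted trajectory selection: a 'safe' judgement
    # resting on low-confidence perception is worth less than one resting on
    # high-confidence perception. Names and numbers are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        progress: float         # how much the trajectory advances the journey (higher is better)
        judged_safe: bool       # output of the legality/safety check
        perception_conf: float  # confidence (0..1) in the detections behind that check

    def score(c: Candidate) -> float:
        if not c.judged_safe:
            return float("-inf")               # never choose a trajectory judged unsafe
        return c.progress * c.perception_conf  # discount by how certain the perception is

    def choose(candidates: list[Candidate]) -> Candidate:
        return max(candidates, key=score)

    if __name__ == "__main__":
        options = [
            Candidate("overtake", progress=1.0, judged_safe=True, perception_conf=0.55),
            Candidate("follow",   progress=0.6, judged_safe=True, perception_conf=0.95),
        ]
        print("chosen:", choose(options).name)  # here the higher-confidence option wins

A system of this shape prefers the cautious manoeuvre when its evidence for the bolder one is shaky, which is the behaviour Dr Chrisley argues a purely binary legal-and-safe check does not provide.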
Prof Noel Sharkey, Emeritus Professor of Artificial Intelligence and Robotics at the University of Sheffield, said:
“This research pushes autonomous vehicle road safety in the right direction by verifying safe trajectories for avoiding obstacles such as other cars, cyclists and pedestrians. However, the authors oversell the real-world usefulness. It is all conducted in computer simulation that doesn’t address the dynamics of the real world. It also makes the mistaken assumption that all other road users are obeying the rules of the road. While this is a useful project, considerably more work in the real world is required before it can be considered for certification.
“First, this is a simulated test given data from the real world to find out what the control mechanism would do. The real world is not so clean, with unpredictable road surfaces and unanticipated events. Second, all of the tests assumed that all other road users are behaving legally according to the rules of the road. This is a significant problem, as the Google self-driving car tests showed: because the car stuck strictly to the letter of the law on entering the motorway, it continually had other cars run into it. The article has one case of someone suddenly crossing the road illegally while looking at their phone. They then set up conditions to anticipate that someone with a phone may cross the road without warning. But the sensing in an autonomous car is insufficient to determine that someone is looking at a phone. And that is just one of a very large number of situations of unanticipated pedestrian movement.”
Professor John McDermid FREng, Director, Assuring Autonomy International Programme at the University of York, said:
“The press release says “always calculates an accident free trajectory” but the paper does not say how this is done.
“The paper is well-written and clear, and also well-motivated. The data is solid in one sense – it is based on data collected from driving vehicles in the real world. However, the data is not solid in that it has not been used (so far as one can tell) in real-world driving to see how the approach actually behaves. One of the potential problems of such approaches is that the vehicle will tend to stop or slow down very often – figure 5a, for example, shows that the vehicle would be on a fail-safe path more than half the time – and this is likely to make it a nuisance to other road users, and potentially unsafe. The work has to be viewed as an interesting theory, not mature enough to deploy in the real world.
“There is other similar work (which they cite), such as the responsibility-sensitive safety (RSS) approach proposed by Mobileye (part of Intel); this has the sensible premise that vehicles cannot be held accountable for accidents which are the responsibility of others, e.g. another vehicle driving through a red light. Thus it is consistent with one of the “philosophies” of safe driving. It also assumes that some run-time checks on safety are valuable – this seems sensible, and it is unlikely that any really successful autonomous vehicle will not include such checks.
“Whilst AVs perhaps should not be held accountable for the misbehaviour of other vehicles (e.g. as with RSS), we know that road users do not obey traffic rules. AVs should account for this – including avoiding accidents if they can, even when the fault is not theirs. Thus, for the environment AVs would actually be operating in (mixed AV/human traffic, at least for the foreseeable future), how robust would this be as a basis for a) certifying them or b) ‘significantly reducing’ testing efforts (as per the ‘Discussion’ section on p. 6)? It seems likely that an “I stuck to my trajectory, it was his fault” behaviour would not ultimately be acceptable. Also, assuming the checker uses the same sensors as the planner (this seems likely on cost grounds), there is the potential for a common-mode failure where planner and checker use the same incorrect data. This and other issues give rise to potential limitations, with implications for the real world …
“Further, there are implicit assumptions about the capability of the sensor set and object detection. Objects can be misclassified – as was the case, for example, with the Uber accident in Tempe, Arizona – leading to incorrect prediction of trajectories. Objects can be occluded – the vehicle may ‘think’ it has identified a single vehicle when there are two, so the trajectory prediction is correct for one but misses the other. And so on. This is not to say that the approach is without its merits – but, at best, it can only be a partial solution to a complex problem, not a complete solution.”
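For context on the RSS work Prof McDermid refers to, its public formulation (Shalev-Shwartz et al., Mobileye) includes a minimum safe following distance rule: the rear vehicle must keep a gap large enough that it can still stop even if, during its own response time, it keeps accelerating while the vehicle in front brakes as hard as possible. The sketch below restates that rule as an example of a run-time safety check; it comes from the published RSS material, not from the Nature Machine Intelligence paper, and the parameter values are illustrative assumptions.

    # Illustrative restatement of the RSS minimum safe longitudinal distance
    # (responsibility-sensitive safety, Shalev-Shwartz et al.). Parameter values
    # are assumptions chosen for the example, not normative figures.
    def rss_safe_distance(v_rear: float, v_front: float,
                          response_time: float = 1.0,  # rear vehicle's reaction time [s]
                          a_accel_max: float = 2.0,    # rear may still accelerate this hard before reacting [m/s^2]
                          a_brake_min: float = 4.0,    # rear is guaranteed to brake at least this hard [m/s^2]
                          a_brake_max: float = 8.0) -> float:  # front may brake up to this hard [m/s^2]
        """Minimum gap such that the rear vehicle can avoid a collision it would be
        held responsible for, under the stated worst-case assumptions."""
        v_after_response = v_rear + response_time * a_accel_max
        d = (v_rear * response_time
             + 0.5 * a_accel_max * response_time ** 2
             + v_after_response ** 2 / (2 * a_brake_min)
             - v_front ** 2 / (2 * a_brake_max))
        return max(0.0, d)

    if __name__ == "__main__":
        # Both vehicles travelling at 20 m/s (roughly 45 mph)
        print(f"minimum safe gap: {rss_safe_distance(20.0, 20.0):.1f} m")

As Prof McDermid notes, a check of this kind only settles who is responsible; it does not by itself deal with other road users who break the rules, nor with sensing errors shared by planner and checker.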
Prof Harold Thimbleby, See Change Fellow in Digital Health, Swansea University, said:
“The main problem I have with this paper is that it does not discuss an algorithm or show any algorithm to be safe.
“There is confusion between “algorithm” and “maths”.
“The paper has some nice differential equations, which it claims support safe autonomous cars. That’s debatable (for example, stationary cars are assumed to be “safe”, and there are clearly many cases where they aren’t, e.g. on UK motorways), but the key problem is that differential equations are not algorithms. There is a big jump between having mathematical equations and having programs that can control autonomous cars. This jump is not discussed in the paper. If we want to build safe cars, the gap between theory, such as this paper, and real engineering has to be bridged.
“There seems to be very little, if any, empirical evidence in the paper; the paper is theoretical. Some of the claims in the abstract don’t seem to be justified. (They may be justified in the supplementary material, which I was not able to read.)
“The paper is interesting and has some nice theoretical mathematical ideas, but it does not make an advance in driverless cars that justifies journalistic excitement – in my opinion.”
“Using online verification to prevent autonomous vehicles from causing accidents” by Pek et al. was published in Nature Machine Intelligence at 16:00 UK time on Monday 14th September.
Declared interests
Dr Ron Chrisley: No declarations of interest.
Prof Noel Sharkey: No conflicts of interest
Prof Harold Thimbleby: I have no conflicting interests.
None others received.