From a new traffic-themed issue of Philosophical Transactions of the Royal Society, I was particularly interested in an article by Mark Campbell et al., “Autonomous driving in urban environments: approaches, lessons and challenges.” As I discuss vis-à-vis Stanford’s “Junior” in Traffic, the perceptual and interaction dynamics of autonomous driving are infinitely complex. Here’s one bit, which follows a case study in which a vehicle had executed a maneuver that, while not against the safety rules per se, was “still undesirable” — because, in essence, the vehicle, despite being equipped with formal Bayesian estimators and the like, had failed to take into account what other vehicles might do. In other words, how do you program a vehicle to “expect the unexpected”?
In order for autonomous driving to reach its full potential, it is vitally important that the cars cooperate in the sense that they agree on traffic rules, whose turn it is to drive through an intersection, and so forth. For this, robust agreement protocols must be developed. Recent work on how to make multiple vehicles agree on common state variables, e.g. using consensus or gossip algorithms (Boyd et al. 2006; Olfati-Saber et al. 2007), provides a promising starting point for this undertaking.
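To make the quoted idea concrete, here is a minimal sketch of the kind of average-consensus iteration the authors are gesturing at (in the spirit of Olfati-Saber et al. 2007): each vehicle repeatedly nudges its local estimate of some shared quantity toward its neighbors’ estimates until the network agrees. The function name, the ring network, and the “arrival time” interpretation are my illustrative assumptions, not something specified in the article.

```python
def average_consensus(values, neighbors, step=0.2, iters=200):
    """Discrete-time average consensus: each node i updates
        x_i <- x_i + step * sum_j (x_j - x_i)
    over its neighbors j. For small enough step, all nodes
    converge to the average of the initial values."""
    x = list(values)
    for _ in range(iters):
        new = x[:]
        for i, nbrs in neighbors.items():
            new[i] = x[i] + step * sum(x[j] - x[i] for j in nbrs)
        x = new
    return x

# Hypothetical example: four cars in a ring, each with a different
# initial estimate of (say) when the intersection will be clear.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
estimates = average_consensus([0.0, 1.0, 2.0, 3.0], ring)
# all estimates end up near the network average, 1.5
```

The appeal of such protocols for traffic is that each car only talks to nearby cars, yet the whole network still reaches a common value — no central intersection controller is required.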
When running such agreement algorithms, it is conceivable that not all vehicles will cooperate. They may, for example, be faulty, or simply driven by human operators, and such vehicles must be identified and isolated in order to balance autonomy with human inputs. This will be true for individual cars, but even more so in mixed human–robot networks. Questions of particular importance (that will have to be resolved using the available interconnections) include the following. (i) Safety: autonomous cars must be able to identify human-driven cars and then not drive into them even though they may violate the robot driving protocol. (ii) Opportunism on the part of the human drivers: people are already driving badly on the road when the other cars are driven by people. How will they act if no one is driving? This needs to be taken into account by the autonomous cars (i.e. not only will people not follow the ‘correct’ protocol—they might be outright hostile). (iii) Collaborative versus non-collaborative driving: how should non-cooperative vehicles be handled in an algorithmically safe yet equitable manner?
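The “identify and isolate” step can also be sketched in code. One simple heuristic (my assumption, not a method from the article): an honest consensus participant only stops moving once it agrees with its neighbors, so a node whose value never changes while its neighborhood still disagrees is behaving like a non-cooperative (human-driven or hostile) participant. All names here are hypothetical.

```python
def run_with_stubborn(values, neighbors, stubborn, step=0.2, iters=100):
    """Average consensus where 'stubborn' nodes ignore the protocol and
    never update -- a crude stand-in for a non-cooperative car.
    Returns the full trace of states so behavior can be audited."""
    x = list(values)
    trace = [x[:]]
    for _ in range(iters):
        new = x[:]
        for i, nbrs in neighbors.items():
            if i in stubborn:
                continue  # non-cooperative: keeps its own value
            new[i] = x[i] + step * sum(x[j] - x[i] for j in nbrs)
        x = new
        trace.append(x[:])
    return trace

def flag_noncooperative(trace, neighbors, move_tol=1e-9, disagree_tol=1e-3):
    """Flag nodes that never moved even though their neighborhood
    disagreed at some point: honest nodes keep adjusting until local
    disagreement vanishes."""
    flagged = set()
    for i, nbrs in neighbors.items():
        moved = max(abs(b[i] - a[i]) for a, b in zip(trace, trace[1:]))
        disagreed = max(abs(x[j] - x[i]) for x in trace for j in nbrs)
        if moved < move_tol and disagreed > disagree_tol:
            flagged.add(i)
    return flagged
```

Note the asymmetry the article insists on: flagging a car as non-cooperative does not license driving into it — safety constraints still apply to the flagged vehicle; only the agreement protocol stops trusting its inputs.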