Features
Autonomous vehicles

Look, no hands

Driverless vehicle technology is advancing in leaps and bounds, but there are still some fundamental questions we need to answer before they can ply our roads. 


Words: Dr DF Merchant

If warm words from governments were all that was needed to fuel driverless vehicles, they might already be all around us. Following the launch in Britain of the Centre for Connected and Autonomous Vehicles (CCAV) in 2015, a new inquiry under the House of Lords’ Science and Technology Committee was announced this September. It will report early next year on the government’s plans to “ensure that the UK is a world leader in developing, testing and deploying connected and autonomous vehicles” for road, sea and space travel.

In the US too, there is enthusiasm in government about the potential driverless vehicles could offer. The Department of Transportation (DOT) issued a Federal Automated Vehicles Policy in September (bit.ly/2cUt1rA) in which it sets out its “excitement” about the potential of highly autonomous vehicles. The UK CCAV’s consultation paper, Pathway to Driverless Cars (bit.ly/29BrvMp), issued in July, echoes this attitude, noting how “excited” the government is by driverless technologies.

Google

The search engine giant, which has divisions working on many technological advances including artificial intelligence, started with experiments fitting sensors to Lexus and Toyota cars in 2009, but since 2014 has been testing purpose-built, two-seater cars with no steering wheel or brake pedal. 
The cars are fitted with lidar laser-ranging equipment, which builds 3D maps of the car’s surroundings. The vehicles have so far covered 2.7 million km in autonomous driving mode, and trials continue in cities in Arizona, Texas and Washington.
Google reports the cars have been involved in 14 collisions, but says the driverless software has been responsible for only one of these. 
A short video of how the cars cope with cyclists and roadworks is at www.google.com/selfdrivingcar/how

Globally, the industry is gearing up for change. Mark Fields, CEO of car maker Ford, said in August that he expected the company to be producing cars “without steering wheels” in 2021. By then many other manufacturers expect to offer cars that require no active intervention by the driver for most of a journey. Car manufacturers are hoovering up all the technology start-ups they can find with ideas for artificial intelligence, machine vision and mapping.

The early trickle-down technology of lane detection and “follow me” cruise control is already standard on many models, but society – and politicians below national government level – is noticeably more hesitant. Nobody is sure whether consumers will want, or trust, a car that they cannot take control of. California’s new autonomous traffic regulations are clear that the one thing a driverless car must have at all times is a driver. With a steering wheel.

There is potentially a lot for the authorities to be excited about. Estimates of US road fatalities in 2015 range from 35,000 to 38,000, and the DOT claims that 94% of vehicle collisions can be attributed to driver error. It says the prospect of vehicles equipped with sensors and software maintaining safe distances and accelerating and manoeuvring in a predictable manner could markedly reduce the death and injury toll.

Others argue this vision of safer roads with the human factor removed will only be realised if we reach the stage where all vehicles are fully autonomous, and if we address the problem of how to protect pedestrians and cyclists from the rapidly flowing traffic.

There are undoubtedly other benefits to automated vehicles: the CCAV predicts improved traffic flow, fuel savings and enhanced mobility for those who cannot drive. But we face a long road to get there, and not all of the problems have obvious answers.

In stages

Driver-assistive technologies include electronic stability control, autonomous emergency braking systems and lane-keeping assist systems (LKAS), which use sensors and software to maintain a vehicle’s optimum driving position or avoid collisions. Currently the preserve of only the most expensive marques, these features will eventually filter down to standard cars, making them capable of semi-autonomous operation. These are equivalent to levels 1 and 2 in the SAE scale of vehicle automation (see Degrees of driverlessness box). 

But it is already feasible to build a fully self-driving car in mechanical terms, and there are several trials under way.

On 20 October, brewer Anheuser-Busch InBev completed what is believed to be the first partly driverless commercial delivery – 2,000 cases of Budweiser beer – after a journey of nearly 200 km on public roads. The truck was retrofitted with a suite of sensors, radar and cameras that worked with an on-board computer to allow it to handle all driving operations on major roads.

Degrees of driverlessness

In 2014 the US engineering body SAE International – formerly the Society of Automotive Engineers – released a standard, J3016, Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems.
J3016 defines six levels of automation starting at 0 – no automation – which represents a traditional vehicle with the driver in charge of all its functions. 
Level 5 represents full automation of all functions: acceleration and deceleration, steering, navigation, monitoring and reacting to road conditions. 
The classification has been accepted by the US Department of Transportation and the UK government’s Centre for Connected and Autonomous Vehicles. (A short code sketch after the level definitions below makes the distinctions concrete.)

Level 0: the human driver does everything.

Level 1: an automated system on the vehicle can sometimes assist the driver to conduct some parts of the driving task.

Level 2: an automated system on the vehicle can conduct some parts of the driving task, while the human continues to monitor the driving environment and performs the rest of the driving task.

Level 3: an automated system can both actually conduct some parts of the driving task and monitor the driving environment in some instances, but the driver must be ready to take back control when the automated system requests. 

Level 4: an automated system can conduct the driving task and monitor the driving environment, and the human need not take back control, but the automated system can operate only in certain environments and under certain conditions.

Level 5: the automated system can perform all driving tasks, under all conditions that a human driver could perform them. 
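
To make the middle levels easier to tell apart, here is a minimal sketch of the taxonomy in Python. The level names follow J3016; the two helper functions are an illustrative reading of the standard, not part of it, and capture the key questions at each level: who monitors the road, and who provides the fallback when something goes wrong.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def human_monitors_road(level: SAELevel) -> bool:
    """Up to level 2 the human must watch the road at all times."""
    return level <= SAELevel.PARTIAL_AUTOMATION

def human_is_fallback(level: SAELevel) -> bool:
    """At level 3 the system drives, but the human must be ready to take over."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

for level in SAELevel:
    print(level.value, level.name,
          "human monitors:", human_monitors_road(level),
          "human is fallback:", human_is_fallback(level))
```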

The system was developed by US-based start-up Otto – recently bought by the ride-hailing company Uber – and cost around £25,000. Otto aims to produce autonomous driving kits for lorries for a few thousand pounds apiece in the next few years.


Otto says the purpose of its system is not to dispense with a driver but to act as an autopilot, taking over the driving on motorways, reducing fatigue and errors and leaving the human to guide the vehicle in and out of depots and cities. 

This caveat – that Otto expects a human to take over on dense urban roads – is critical. Negotiating our city streets and country lanes safely demands a big leap in software intelligence and sensing.

Everything from road signs and lighting to lane markings and visibility at junctions is designed to make sense to a human brain. Our eyes can spot and decode a stop sign half-buried in a hedge. The difference between a red traffic light and the brake lights on a bus is so obvious we barely think about it. 

Machine vision, on the other hand, sees a world of coloured shapes. A red circle is just a red circle. A cyclist fiddling with a pedal is no different from a cyclist about to sprint across a junction. (Even limited driver-assist systems such as LKAS are not designed for use on roads shared with cyclists.) A man in a dark suit waving to his sister is hard to distinguish from a police officer warning drivers a bridge is closed. 
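
As a toy illustration of the problem (the thresholds are invented, and this is not how any production system works), consider a classifier that labels detections purely by colour, as early machine-vision pipelines did. A red traffic light and a bus brake light produce near-identical inputs, so the classifier cannot tell them apart:

```python
def classify_red_blob(hue: float, saturation: float, value: float) -> str:
    """Label a detected blob purely by HSV colour, as a naive detector would."""
    is_red = (hue < 10 or hue > 350) and saturation > 0.6 and value > 0.5
    return "red signal" if is_red else "other"

# Two very different objects, near-identical pixels:
traffic_light = dict(hue=5.0, saturation=0.90, value=0.95)    # a red light
bus_brake_light = dict(hue=4.0, saturation=0.85, value=0.90)  # a braking bus

print(classify_red_blob(**traffic_light))    # -> "red signal"
print(classify_red_blob(**bus_brake_light))  # -> "red signal": no way to tell
```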

50 trillion calculations

So far, only one person is known to have died in worldwide trials of self-driving vehicles. That was 40-year-old Joshua Brown, whose Tesla Model S sedan crashed into another vehicle in Williston, Florida, in May, with its Autopilot system engaged. Cameras on the Tesla are believed to have mistaken the white side of an articulated lorry for the open sky and failed to apply the brakes.

Developers have been trying to teach their vehicles to understand our world through the brute force of computing. Google expects that, to get anywhere close to human levels of ability, the cars will need to process 50 trillion calculations a second. Even with two million miles of experience, the cars still cannot reliably handle everyday situations such as broken traffic signals, roadworks, faded lane markings and roundabouts.

Consider two cars arriving at a mini roundabout. In a split second there’s fleeting eye contact, the drivers each size up the other vehicle, decide who arrived first even if the interval was too short for their conscious minds to notice, weigh up how much of a hurry they’re in, how generous they’re feeling, even if they like the look of the other car, and someone takes the lead. It works so well we don’t think too long about it, but to a computer all that is just madness. An autonomous car will remain in situ waiting for the other driver to move because that’s the safest fallback option. If both cars are self-driving the wait could be very lengthy. For a robot in a factory, that could be the perfect safe solution; on a daily commute, not so much.
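
One way out of the stand-off, sketched below, is randomised backoff of the kind network protocols use to break exactly this sort of symmetry. This is a hypothetical policy for illustration, not any manufacturer’s actual algorithm: each car draws a random delay, and whichever times out first claims the roundabout while the other, seeing movement, keeps waiting.

```python
import random

def resolve_standoff(car_ids, rng=None):
    """Return the order in which deadlocked cars should proceed."""
    rng = rng or random.Random()
    # Each car independently draws a random backoff delay (seconds).
    delays = {car: rng.uniform(0.5, 2.0) for car in car_ids}
    # The car with the shortest delay moves first; with continuous delays,
    # ties are vanishingly rare, so the symmetry is almost surely broken.
    return sorted(delays, key=delays.get)

print(resolve_standoff(["car_A", "car_B"]))  # e.g. ['car_B', 'car_A']
```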

Lane changing

Volvo

From January 2017, Volvo plans to test 100 XC90 cars fitted with its proprietary Drive Me system on the streets of Gothenburg, Sweden, with the blessing of the city authorities and the national government. 
The cars will be fitted with four surround radar scanners, three long-field radar scanners (with a range of up to 150 m), four surround cameras, a trifocal camera to aid depth perception and spot distant objects, and 12 ultrasonic sensors to monitor the front, back and sides of the vehicle. On-board computers will access and update a 3D road map of the city. 
In very bad weather or at the end of a route, the driver will be prompted to assume control. Volvo says the experiment will be the first of its kind on city streets – albeit carefully selected ones – with non-specialist drivers.

The alternative to teaching computers the rules of the road is to rebuild the infrastructure. Roads were adapted when cars replaced horses, and they will be adapted again. The UK’s post-war new towns, such as Cumbernauld and Redditch, invested heavily in segregating cars and pedestrians, but they had the space to make that choice. 

The convoluted design of European road networks is far from ideal for wholesale rebuilding but we can still adapt gradually, within the existing environment. Rather than trying to write software that can recognise traffic lights, isn’t it more sensible to fit them with infrared beacons that talk directly to the computer, explaining the junction priorities and traffic flow? Should diversion signs have arrows, or barcodes? Should we finally just define priorities for all our mini roundabouts? 
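
As a sketch of what such a beacon might broadcast (all field names are invented for illustration; no standard is implied), the junction could publish its state and priorities as a small structured message that a car parses directly, with no vision required:

```python
import json

# What a hypothetical junction beacon might transmit:
beacon = {
    "junction_id": "J-1042",
    "signal_state": "red",              # "red" | "amber" | "green"
    "seconds_to_change": 12,
    "approach_priority": {"north": 1, "east": 2},  # lower number goes first
}

def may_proceed(message: dict) -> bool:
    """Decide whether to enter the junction from the beacon message alone."""
    return message["signal_state"] == "green"

payload = json.dumps(beacon)             # broadcast over the infrared link
print(may_proceed(json.loads(payload)))  # -> False while the light is red
```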

Since we can’t just lock autonomous cars in tunnels and caged lanes, we need to give them a helping hand. One option is geo-fencing: virtually demarcating roads, or parts of roads, that are fit for driverless operation. Cars would then permit driverless operation only while they are inside the geo-fenced zones.
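
A minimal sketch of the idea, assuming the approved zone is published as a polygon of coordinates (the values below are illustrative): the car enables autonomous mode only when a standard point-in-polygon test says it is inside.

```python
def inside(point, polygon):
    """Ray-casting point-in-polygon test on (lat, lon) pairs."""
    x, y = point
    hit = False
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        # Count crossings of a ray cast from the point with each edge.
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

# An approved driverless zone as a rectangle of (lat, lon) corners:
zone = [(51.50, -0.15), (51.52, -0.15), (51.52, -0.10), (51.50, -0.10)]

print(inside((51.51, -0.12), zone))  # True: driverless mode may engage
print(inside((51.49, -0.12), zone))  # False: driver must take control
```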

Europe’s new Galileo satellite navigation system should offer much better positional awareness – within centimetres for paying customers – and 4G and 5G cellular services could offload some of the less time-sensitive computation to cloud-based artificial intelligence. But with that connectivity comes the risk of hacking. There have already been attacks on the primitive CAN (controller area network) bus systems in some vehicles, in some cases resulting in the brakes being applied at speed. In the wrong hands a driverless car could become a local hazard, but if someone could attack a central command network and take over all the cars at once, the results could be catastrophic. Rolls-Royce admits its plans for crew-free cargo ships are dominated by the issues of electronic piracy, leading to the scenario of a “powered grounding” in which a vessel is run aground remotely. So far, manufacturers have been less than perfect at protecting their systems from malicious attack, and this is swaying the mood of society.
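
One standard defence against the kind of spoofed-command attack described above is to authenticate every message, so that a forged “apply brakes” command is simply rejected. The sketch below uses an HMAC over the command bytes; key management, replay protection and the actual in-vehicle framing are deliberately out of scope, and the key shown is illustrative only.

```python
import hashlib
import hmac

SECRET_KEY = b"per-vehicle key from secure storage"  # illustrative only

def sign(command: bytes) -> bytes:
    """Compute an authentication tag for a command."""
    return hmac.new(SECRET_KEY, command, hashlib.sha256).digest()

def accept(command: bytes, tag: bytes) -> bool:
    """Reject any command whose tag does not verify (constant-time compare)."""
    return hmac.compare_digest(sign(command), tag)

cmd = b"SET_SPEED:50"
print(accept(cmd, sign(cmd)))              # True: genuine command
print(accept(b"APPLY_BRAKES", sign(cmd)))  # False: forged or tampered
```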

Early public attitudes were broadly positive. People liked the idea of checking and updating Facebook on their way through the morning commute. But as more details of the technology have emerged, and accidents such as those involving a few Tesla drivers have been widely publicised, opinions have changed.

There is real fear that a hacker could drive you into a tree, even though this is far more likely to happen with a human behind the wheel. Tesla’s Autopilot feature is statistically safe, but a couple of crashes have made headline news (see Tesla box). 

As a result of public concern, Tesla updated the software to force drivers to pay attention, disengaging the feature if they didn’t. The public feels somehow reassured when humans make mistakes rather than machines, even if humans make them more often. They are willing to relinquish some control, but not too much. If a machine isn’t clever enough we don’t trust it, and if it’s too clever we have flashbacks to Blade Runner and Terminator, and trust it even less. Should they be better than us, or just no worse?
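
The sketch below shows the general shape of such an attention check. The thresholds and behaviour are invented for illustration; Tesla’s production logic is proprietary and certainly more sophisticated.

```python
WARN_AFTER = 15.0       # seconds hands-off before the driver is warned
DISENGAGE_AFTER = 60.0  # seconds hands-off before the system gives up

def autopilot_action(hands_off_seconds: float) -> str:
    """Escalate from normal driving to warnings to disengagement."""
    if hands_off_seconds < WARN_AFTER:
        return "normal"
    if hands_off_seconds < DISENGAGE_AFTER:
        return "warn_driver"
    return "disengage_and_slow"  # e.g. hazard lights on, come to a safe stop

for t in (5.0, 30.0, 90.0):
    print(f"{t:>5.0f}s hands-off -> {autopilot_action(t)}")
```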

Tesla

Since October 2014, purchasers of the US electric car maker’s sedans have been able to pay an extra £3,500 for a model fitted with 12 ultrasonic sensors on the exterior, a camera, front-facing radar and digitally controlled brakes. 
Autopilot software was downloaded to 60,000 of these vehicles in October 2015, allowing them to change speed, manoeuvre within road lanes, change lanes, and park themselves. 
Drivers can engage Autopilot only in areas with clearly marked road lanes, where speed will be relatively constant and where the car’s computer can download a map of the surroundings. Drivers must be ready to disengage Autopilot and take over at any time.
In mid-2016 there were three accidents involving cars using Autopilot, one fatal (see main text). Tesla’s founder, Elon Musk, says that updating Autopilot to allow fully autonomous operation will be feasible within two years, if the law allows.

The buck stops where?

And here we have the most difficult question. If machines are allowed to make the sort of decisions we do, should they or can they take responsibility? It’s certain that they will rely on machine learning and neural networks to gradually improve their understanding of our world. Your car may be programmed to learn from your driving style, be that good or bad. Microsoft found to its cost with the Tay Twitter bot – which started broadcasting the worst racist epithets after learning from others’ tweets – that if artificial intelligence cannot understand when it is being exploited, control can be lost rapidly. 

It seems unlikely that owners will teach their cars to crash while they are still sitting in them, but what if an autonomous driving system were faced with a new scenario and made what, to humans, was an insane decision? The software will be so complex that it is unlikely we could pick apart exactly what went wrong and when.

The question extends to insurance liability too. The UK CCAV suggests extending drivers’ compulsory motor insurance to include product liability, so a car user would be covered when they let the vehicle take control. Should there be an accident, the car owner or their insurer would rely on the courts to apply product liability rules and common-law negligence tests to decide responsibility.

In a report to the European Parliament, Luxembourg MEP Mady Delvaux proposed that autonomous systems be held liable for their actions. She suggested “the greater a robot’s learning capability or autonomy is, the lower other parties’ responsibility should be”.

This would have far-reaching implications for industry and consumers alike. At what point would it be unfair to blame the programmer for a robot’s actions? Does it even make sense to apply workplace principles, such as the requirement to reduce risk as far as is reasonably practicable, to an autonomous system, or should we settle for “as safe as the average human”?

Of course a self-driving car cannot pay a fine or be sent to jail, so any financial burden will ultimately fall on the manufacturer. But if we accept these systems as part of our society, one day there will be a manslaughter case in which only the machine is found guilty. If we cannot accept that, autonomy must remain locked in cages, and cars can stay stupid.

 
