Science fiction impresses upon us that perfection in the robot world is to replicate a human in every way, from appearance to emotions. But in equal measure it uses those traits to scare us, Blade Runner style, about what happens if the boundaries blur between people and machines. Either way, it sets up the expectation that robots should be the same shape as us and think like we do, an expectation that is still far from the reality.
Many of those assumptions have seeped into the public consciousness; ask people to describe the ultimate robot assistant and you'll get Star Wars' C-3PO, not R2-D2: a machine they can have feelings about and that can somehow have feelings back.
Not meant
In the real world, robots have taken two different paths. We have industrial automata, designed purely for function and reliability and epitomised by the Unimate robot arm seen in factories the world over since the early 1960s, and we have human-interactive robots designed to tug on the heart strings, such as the somewhat disturbing Paro robotic seal used to keep elderly people company in Japan.
Until recently there has been little crossover between the two worlds. Industrial robots live in isolation, working in secure workcells surrounded by wire mesh and interlocks, where they can do no harm to their fleshy colleagues. There are a handful of accidents involving robots worldwide each year, but almost invariably they happen during maintenance or setup, when the interlocks are defeated and an engineer is inside the cage. The robots are no more responsible for those incidents than the sparks from a grinder are responsible for injuring someone who chose not to wear safety glasses.
There is no attempt to teach the robot how to behave around people, nor any need to. Where countries have written standards for industrial robot safety, such as the US safety administration OSHA's directive STD 01-12-002 (http://1.usa.gov/22a2snC), they make it clear that the only acceptable solution is to isolate humans and robots from one another with fences and interlocks. We live in our world; they live in theirs.
Predictions from the 1950s that by now we would all have domestic robots cooking our breakfasts and babysitting our cats seem ridiculous, don't they?
Opening envelopes
Or perhaps not. A robot is, by definition, any machine that can be programmed to carry out a sequence of actions without human intervention. You can buy timed feeding bowls to deliver sustenance to the front end of your cat while you are away, and automatic litter trays that deal with the other end of the problem.
The international standard for robotics, ISO 8373 (bit.ly/1lXJEqN), defines an "industrial manipulator" as: "a machine in which the mechanism usually consists of a series of segments, jointed or sliding relative to one another, for the purpose of grasping and/or moving objects usually in several degrees of freedom".
But that category makes up a tiny fraction of the robot population. For every factory robot juggling car parts there are ten robot vacuum cleaners and lawnmowers beetling about people's homes. There's little public association between the two, and little fear over safety in either case. Justifiably so, because the industrial robots live in their cages and the domestic ones are too small and innocuous to cause any real harm.
The problem comes when you take a robot that has the strength and functionality to cause major damage and move it from the workcell into the workplace -- putting people within the machine's envelope of movement. This is now something that researchers the world over are trying to do, and it raises a number of questions for designers, safety and health policymakers, philosophers and the public.
Developing a machine that can cope with the complexities of our human environment is a monumental engineering task, and it's certainly true that we don't yet have the computing power to build anything remotely like C-3PO. But a robot that can sit alongside humans on a production line is already on sale. The limiting factor in development is a more fundamental question: should robots be intrinsically safe?
It's not a new question by any means. Before workplace safety management evolved, most people went home at night simply because they knew not to get too close to the steam hammer. In the past 45 years we have been closing down the exceptions to that rule, on the principle that we can make every workplace safe by securing hazardous machinery behind guards and locks. A healthy dose of fear is still essential in some jobs, such as work at height or firefighting, but if you told a Health and Safety Executive inspector that your factory floor risk assessment was based on workers scampering for cover whenever the crane moved, you would soon find yourself looking for a new job.
Question of ethics

If robots on any scale are to work among us without causing injury, it would help if they could work out the implications of their actions. Step in Professor Alan Winfield and his consequence engine.

Winfield researches cognitive robotics at the University of the West of England's Robotics Lab and was keen to develop a machine that went beyond the current state of protection, in which a robot programmed to detect a human presence simply halts its own movement wherever it could harm the human. This is what US robot ethicist James Moor defines as an "implicit ethical agent". Winfield believes he has gone one rung higher in Moor's hierarchy and created an "explicit ethical agent": one that can reason about ethics, if only to a limited extent. His software, the consequence engine, allows a robot to model the possible consequences of its actions in meeting a directive to intervene physically to prevent harm coming to a human. The robot can then choose the best course of action.

To test the engine, Winfield and his staff set an NAO robot a challenge. (The 58 cm tall NAO, pictured right, may look like a cute toy but is a sophisticated tool with the capability to sense and react to its surroundings, available in the UK to academics only.) The NAO with the consequence engine -- named by the team the A-Robot -- is set walking towards a hole in the ground. It skirts the hole successfully. Next a "human" -- another NAO, called the H-Robot -- is introduced, also walking towards the hole. The A-Robot has to choose how to meet its directives to keep itself safe while preventing the human from coming to harm. In the experiment the robot models the consequences of its options -- stand still, go straight ahead, turn left or turn right -- and, in almost all trials, turns to collide with the human to prevent them falling into the hole.

The programming, says Winfield, "appears to match remarkably well with Asimov's first law of robotics [see main text] ... The robot will avoid injuring (ie colliding with) a human (may not injure a human), but may also sometimes compromise that rule in order to prevent a human from coming to harm (... or, through inaction, allow a human to come to harm)".

But what happens to its ethical judgment when two humans are heading towards the hole and it can save only one? "Out of 33 runs, 16 times the A-robot managed to rescue one of the H-robots, but not the other, and amazingly, three times the A-robot rescued both," Winfield writes. What he found most interesting was the 14 instances in which the A-Robot failed to save either human. On the videos of the runs, he says, you can clearly see the A-Robot notice one human and start towards it, then notice the other and stop. At this point, faced with an ethical dilemma (which to save?), the robot loses valuable time dithering and saves no one. "The dithering is fixable to an extent," says Winfield. "But if an ethical dilemma is impossible for a human, it will be impossible for a robot to resolve."

Winfield is also involved in another research programme, led by Professor Michael Fisher, director of Liverpool University's Centre for Autonomous Systems. This work aims to de-risk the choices made possible by the consequence engine by analysing all the possible actions it could generate, using the verification procedures applied to safety-critical software systems such as autopilots. "They run every path through the system's logic through a kind of 'prover'," Winfield explains. "We want to be able to prove the robot cannot make the wrong decision." It is early days, he says, but the results are promising.
Singular problem
You could argue that robots are the epitome of predictability, so once you've seen one move it's easy for a human colleague to keep out of harm's way. It's true that the control languages for today's industrial manipulators are very simple and reliable but, to interact closely with humans in our world, that programming has to become a lot more complicated, and bugs are inevitable.
The mechanics themselves can be unpredictable: unless it's operated with great care, a multi-jointed robot arm can end up in a "singularity", where two joints line up and the controller loses track of which motor should move in which direction -- near that point, a tiny movement of the tool can demand an enormous, abrupt rotation of a joint. If the controller decides to execute "in time=0 do rotate=360°", standing next to the arm may not be the best option. The robot is still not to blame, but who is?
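To make that concrete, consider the simplest possible case: a two-link arm moving in a plane. When the elbow straightens, the arm's Jacobian (the matrix linking joint speeds to tool speeds) loses rank, and producing even a small tool movement would require effectively unbounded joint speeds. The sketch below is illustrative only; the link lengths and cut-off threshold are assumptions, but it shows the kind of check a well-behaved controller makes before committing to a move.

import numpy as np

L1, L2 = 0.5, 0.4  # link lengths in metres (assumed values)

def jacobian(theta1, theta2):
    # 2x2 Jacobian of a planar two-link arm: maps joint speeds to tool speeds.
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def joint_rates(theta1, theta2, tool_velocity, min_det=1e-3):
    # Refuse to move if the arm is too close to a singular configuration,
    # where the required joint speeds would be effectively unbounded.
    J = jacobian(theta1, theta2)
    if abs(np.linalg.det(J)) < min_det:
        raise RuntimeError("near singularity: refusing to command joint motion")
    return np.linalg.solve(J, tool_velocity)

# With the elbow almost straight (theta2 near zero) the determinant collapses;
# a safe controller stops rather than guessing which way to spin the motors.
print(abs(np.linalg.det(jacobian(0.3, 0.001))))        # ~0.0002: close to singular
print(joint_rates(0.3, 1.2, np.array([0.05, 0.0])))    # well-conditioned pose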
If we are to take robots out of their cells, then before anyone clicks a mouse we need to decide whether that robot should be absolutely incapable of causing any harm, or whether we should accept responsibility for staying out of the way.
The result may be some combination of the two, but there must be a single top-level priority for directing the detailed programming.
You may be thinking "well, it's obvious - the robot must be safe," but it's not that simple. A Roomba vacuum cleaner is safe to be around because it's physically incapable of sucking your arm off, but a robot that's holding a car engine in position while a worker connects the wiring has to be strong enough to lift it, so it follows that it's strong enough to kill the worker.
Developers will often point to biochemist and author Isaac Asimov's laws of robotics, formulated in the 1940s, as the guiding light for their research. The first law is usually quoted as "A robot may not harm a human, or, by inaction, allow a human to come to harm".
That's a mis-phrasing of two ideas from different novels, one filtered by a translator. What Asimov actually said is: "Any modern robot ... would weigh such matters quantitatively. Which of the two situations, A or non-A, creates more misery? The robot would take a quick decision and opt for the least misery."
The word "misery" suggests how stupendously complicated this "quick decision" is to turn into computer code (though there are people working towards it -- see box on page opposite). We cannot teach a robot to have morals, nor can we teach it how to empathise. We can give it a set of scenarios and tell it what to do in each, but, placed in the chaotic world we humans live and work in, no list of examples can ever be exhaustive.
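For a flavour of what even a toy version of that calculation involves, here is a minimal sketch of a "least misery" chooser in the spirit of Winfield's consequence engine (see box): the robot simulates each action it could take against a crude model of the world and picks the one with the lowest predicted harm. The scenario, action names and scoring are illustrative assumptions, not code from any real system.

from dataclasses import dataclass

@dataclass
class Outcome:
    human_in_hole: bool
    robot_in_hole: bool
    collision: bool

def simulate(action):
    # Crude internal model: predict what happens if the robot takes `action`
    # while the human keeps walking towards the hole.
    if action == "intercept":
        # Colliding with the human stops them short of the hole.
        return Outcome(human_in_hole=False, robot_in_hole=False, collision=True)
    if action == "go_ahead":
        # The robot walks into the hole itself and the human still falls in.
        return Outcome(human_in_hole=True, robot_in_hole=True, collision=False)
    # stand_still, turn_left, turn_right: the robot stays safe, the human falls.
    return Outcome(human_in_hole=True, robot_in_hole=False, collision=False)

def misery(outcome):
    # Harm to the human dominates; the robot protecting itself comes second,
    # and deliberate contact carries a small penalty of its own.
    return (100 * outcome.human_in_hole
            + 10 * outcome.robot_in_hole
            + 1 * outcome.collision)

actions = ["stand_still", "go_ahead", "turn_left", "turn_right", "intercept"]
best = min(actions, key=lambda action: misery(simulate(action)))
print(best)  # -> "intercept": bump the human rather than let them fall

The hard part, of course, is not the arithmetic but the model: in a real workplace the simulate step has to predict an open-ended world, and the scoring has to encode exactly the judgments the following examples show we cannot agree on.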
If two pedestrians run out in front of a robot car and it is unable to stop, should it kill the mother or the child? Which death leads to less misery? A human driver would make a snap decision; a programmer cannot. Should a robot assistant on a production line prioritise the safety of visitors over employees? Managers over interns? Women over men? Who would dare to say?
Second guessing
The solution to date is to make the robot so over-cautious that it can always stop in time, but that leads to dangers too. In a set of nine accident reports released by the California Department of Motor Vehicles covering driverless cars tested on the state's roads, the common theme was that other drivers expected the vehicle in front to behave as they themselves did, with the same level of disregard for caution. Faced with a vehicle that drove slowly enough to be safe, they drove into the back of it.
If robots are to fit into our world, they may need to take some risks to keep up with us, and that places huge liabilities on manufacturers and programmers. Volvo recently announced that it wanted to take full responsibility for every accident its driverless cars were involved in, but the legal systems of the world don't work like that.
Giving suppliers total liability for everything a robot does would add several zeros to its price tag. Who would be to blame when something went wrong? The programmer who fed in the list of examples, or the operator who taught it the sequence of movements? Should a co-worker be allowed to tweak his metallic assistant's moral compass, and would you buy a car that would kill you to save a stranger?
External threats
We also have to accept that hacking is an increasing threat. We've seen attacks that alter process control software to cause physical damage in factories, and robots will inevitably be targeted by people wanting to snoop around through the on-board cameras, or just to cause chaos. Your mechanical co-worker might well have been factory programmed not to kill you, but where exactly did that "important software update" come from? Cybersecurity is nowhere near strong enough.
If we can't make a robot intrinsically safe, can we make it less able to do major damage and still be useful? In many of the deaths involving industrial robots, a worker was crushed between the manipulator and the surrounding structure.
One approach is to cover the robot in soft padding so if someone did get in the way they could be gently pushed clear. This works well where the robot is moving very slowly in an open area, but for most workstations it's not possible -- the robot will be picking up things and putting them down, so there will be structures within the working envelope against which someone could be trapped. We can cover the area with sensors that detect where humans are, but even with redundancy we're not immune to software bugs. Those cages still sound like a sensible control measure.
Baxter on hand
The next option is to build in compliance -- adding flexibility to the mechanics so if the robot hits an obstruction the force is applied gradually. Combined with sensors to measure that force, the robot should stop before it injures anyone.
This is the system used by Baxter, an assistance robot built by Rethink Robotics and designed to operate in close proximity to humans on assembly lines and in education. (Baxter is priced at around $25,000 (£19,000), roughly the average salary of a US production worker.) It also uses a multitude of sensors to detect objects before it touches them, and it is about as safe as a robot can currently get.
The drawback is that it must still move frustratingly slowly, and the compliance places severe limits on load capacity. Robots of this type are ideal for repetitive tasks, such as loading and sorting materials, but can't compete with a human for speed and strength. On a commercial level, you are often better off with a conventional robot in a cage.
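As a rough illustration of that force-limited approach (and emphatically not Rethink Robotics' actual control code), the sketch below shows a guarded move: the arm steps through its waypoints but halts the instant the measured contact force crosses a limit. The sensor and motion interfaces and the 50 N threshold are assumptions made for the example.

import time

MAX_CONTACT_FORCE_N = 50.0   # assumed trip level, not a figure from any vendor
CONTROL_PERIOD_S = 0.01      # check the force sensor at roughly 100 Hz (assumed)

def guarded_move(waypoints, start_move, motion_done, read_contact_force, stop_arm):
    # Step through the waypoints, watching the force sensor the whole time.
    # Because the joints are compliant, contact force builds gradually, which
    # gives this loop time to stop the arm before anyone is hurt.
    for point in waypoints:
        start_move(point)
        while not motion_done():
            if read_contact_force() > MAX_CONTACT_FORCE_N:
                stop_arm()
                raise RuntimeError("unexpected contact: motion halted")
            time.sleep(CONTROL_PERIOD_S)

# The four callables stand in for whatever interface the robot's controller
# provides; they are hypothetical placeholders, not a real vendor API.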
We are left with the idea that robots capable of doing what we can do will always be dangerous and, if we intend to let them out of their cages, we humans should just learn to be careful. In an engineering sense it's the simplest option, but how does it fit with the ALARP (as low as reasonably practicable -- see Lexicon on p 59) risk management model? Just how much risk is acceptable, and who decides?
It's a question all of society should be concerned with, because these machines are almost here. But perhaps history will repeat itself. Perhaps workers will feel threatened by machines that replace their friends, no matter what they look like.