Opinion

AI and nanotech risk outpacing our safeguards

Louis Wustemann
Former editor, IOSH Magazine

Last month researchers at Facebook’s artificial intelligence (AI) research unit shut down an experiment after it took an unexpected turn.

Two “chatbots” – programmes designed to simulate human conversation – had been set up to practise and develop trading skills.

Soon after the programmes started negotiating with each other, they developed their own simplified version of English, which quickly became incomprehensible to the researchers supervising them.

This sobering development is a long way along the technological scale from the innovations in personal protective equipment and health monitoring described in our feature on wearable technology in the September 2017 issue of IOSH Magazine. But the advances in cheap computing power and connectivity that make mass production of those sensor-based protective systems possible are also propelling the development of machines that learn and act on their learning.

AI too could offer great potential for making humankind safer, but it could also carry unintended consequences whose risks outweigh the benefits.

The physicist Professor Stephen Hawking warned in 2014 that AI would be likely to “take off on its own, and redesign itself at an ever-increasing rate”. It’s very hard to predict where such intelligence, linked to manufacturing capability or control systems, might end up.

If the systems we set up to protect us are not to decide that we are expendable for the greater good, we will need robust precautions. Ensuring they don’t start conversing in languages we can’t understand is not enough – though it’s not a bad place to start.

The problem is that scientists and industry are often more focused on pushing the boundaries of what is technologically possible than on defending the rest of us against the potential unintended consequences. That task is left to governments, which may not always have sufficient grasp of the implications of the scientific developments they are supposed to regulate.

In the US, there were celebrations among safety campaigners a few years ago when the National Institute for Occupational Safety and Health was awarded $5m (£3.1m) by the Obama administration for research into the workplace risks associated with nanomaterials. But that sum seemed small beer compared with the $1.5bn (£900m) a year the government was pumping into non-safety-related nanotechnology research.

Nanoparticles are small enough to pass through the body’s defences such as the skin, the lungs and the blood-brain barrier. Some are a fraction of the size of welding fume particles; their widespread use in manufacturing would necessitate new levels of workplace control.

The author Ray Kurzweil has argued that our brains grasp the future in a linear fashion and struggle to anticipate exponential change. So the next decades may bring technological changes we find almost unimaginable. It would be comforting to know that somewhere smart people are imagining the attendant hazards and ways to control them.


Louis Wustemann is former editor, IOSH Magazine. He was previously editor of Health and Safety at Work magazine and Environment in Business. He has written, edited and consulted on health and safety, environmental and employment matters for more than 25 years.
