AI will leave its mark on every aspect of our lives, but is this cause for alarm or celebration? The tech may keep workers safe, but is it a danger to privacy and consent?
The steady march of artificial intelligence (AI) into our daily lives is accompanied by as much unease as excitement: extending machine decision-making may shrink humans’ role and reduce workers’ autonomy, heighten perceptions of surveillance, and possibly even automate the biases of the people who built the systems.
In addition, AI depends on data gathered from sensors, cameras and wearable devices: first to build intelligent algorithms, then, as it accumulates, to monitor, analyse and expand operations. While data in most workplace scenarios would be anonymised and aggregated, data collected for one purpose – such as social distancing compliance – could ultimately be repurposed for another, less benign one: for instance, to monitor productivity.
An age of consent
How should safety professionals navigate this territory? Lawyers in the field advise caution. ‘If it isn’t pitched correctly and considerately, there is potential to damage the trust and respect between the employer and employee, and that, as well as putting the business on the wrong side of compliance in respect of employment and data protection legislation, could be detrimental to productivity and wellbeing,’ says a spokesperson from law firm Osborne Clarke.
A key issue is that securing ‘consent’ is not necessarily enough to guarantee workers’ rights, or protect employers from future liability claims. ‘Consent will rarely be an option in an employment context because of the imbalance of power between employer and employee,’ says the Osborne Clarke spokesperson. ‘Instead, employers will need to look at what other lawful basis they can rely on for handling employee data collected through these types of technology.’ There could be lawful grounds for collecting health-related data, for instance.
Before AI is introduced into a workplace, a thorough OSH assessment and review should be performed
On the other hand, safety practitioners who have trialled AI-backed systems are positive about the results and the technology’s future. Travis Perkins plc calculates a 14% reduction in its manual handling-related lost time incidents after adopting wearable digital tags from Soter Analytics to alert staff to unsafe manual handling movements. TrackActive Me, a machine-learning app for staff with pain from musculoskeletal disorders (MSDs), can reduce lower back pain by 35%, according to its developers. And 30 Far East construction sites trialling the viAct intelligent site safety monitoring system have seen no recordable accidents since deployment.
Travis Perkins plc reports that staff have been ‘motivated and excited’ by the use of Soter’s technology, rolled out in its pipeline and heating solutions business BSS. ‘Colleagues said it increased their risk awareness, enabling them to make changes to their work routine,’ says Vimel Budhdev, health, safety and environment improvement specialist at Travis Perkins plc. ‘There is definitely a space in the workplace for new technologies, which help us work safer and smarter without slowing down.’
MSD apps: Tracking devices
The TrackActive Me app gathers data from specialist software for physiotherapists (TrackActive Pro) and matches employees with musculoskeletal pain to the best type of rehabilitative exercise.
Its algorithm can generate personalised programmes for back, muscle and joint health, so that patients can begin exercising in the time normally spent waiting for a physiotherapist’s appointment.
Co-founder and managing director Ian Prangley explains that TrackActive Me collects users’ feedback on which exercises help and which are too demanding, then uses that data in the back end to optimise the programmes. The next step is to incorporate data from more users to develop true AI that is even more responsive to each user’s needs.
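The feedback loop Ian describes, in which user responses tune the next session’s programme, can be pictured as a simple scoring mechanism. This is a toy sketch, not TrackActive’s actual model; all exercise names, scores and function names are illustrative assumptions:

```python
# Toy version of feedback-driven programme optimisation: each exercise
# carries a score, user feedback nudges it up or down, and the next
# programme is rebuilt from the top-scoring exercises.
def update_scores(scores, feedback):
    """feedback maps exercise -> 'helped' or 'too_demanding'."""
    for exercise, verdict in feedback.items():
        scores[exercise] += 1 if verdict == "helped" else -1
    return scores

def build_programme(scores, n=2):
    """Pick the n best-scoring exercises for the next session."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

scores = {"bridge": 0, "bird_dog": 0, "plank": 0}
scores = update_scores(scores, {"bridge": "helped", "plank": "too_demanding"})
print(build_programme(scores))  # ['bridge', 'bird_dog']
```

A real system would weight many more signals (pain scores, adherence, clinical priors), but the principle of folding user feedback back into programme selection is the same.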
Physiotherapist Tim Colledge came across the app while searching for a solution for a client company. ‘It’s like having a physiotherapist in your pocket,’ he says.
But although Tim is already using the app with some clients and their staff, a trial with a major manufacturer is on pause as its legal team wrestles with the issues around data privacy and corporate liability. What would happen if, for instance, a worker were told to exercise for back pain that turned out to be a kidney condition?
‘The arguments are around who makes the decisions: the human, or the machine learning?’ says Tim. ‘And who would have liability if the data collected ended up in a tribunal? Ultimately, we’re trying to integrate the app into the occupational health pathway as a tool, but the human still writes the report so it’s not replacing their responsibility.’
On the legal issues, Ian says the app could be used as a voluntary measure for staff, or possibly packaged with incentives: he acknowledges that making its use mandatory for employees or policy-holders will greatly depend on how an employer or insurer would wish to offer the app.
IOSH policy development manager Iván Williams Jiménez says the profession needs to balance the genuine prospect of safety benefits against the possibility that data-gathering technologies could add to psychosocial risk. ‘IOSH advocates socially responsible adoption of AI technologies. We believe that before any AI-enabled devices or systems are introduced into a workplace, a thorough OSH assessment and review should be performed to establish whether any new hazards or risks are being introduced and therefore need to be remedied.’
Start-ups and entrepreneurs offering AI-backed safety tech are already signing up their first clients. New UK business Sensing Feeling, which has secured Innovate UK funding, now has 20 to 30 clients for its ‘behavioural sensing’ system. Its software analyses camera footage of human behaviour, initially in retail settings, then increasingly in high-risk environments and safety-critical operations such as railway stations and nuclear plants.
The algorithms are based on the actual behaviour recorded in a trial phase, such as how staff communicate, whether they are stationary in their posts or moving around, or how their attention is directed. As a live operation unfolds, the system monitors real-time behaviour and sends alerts – to both onsite and remote teams – if any anomalies or departures are detected.
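The approach described, learning a behavioural baseline during a trial phase and then alerting on departures in live operation, is essentially statistical anomaly detection. A minimal sketch under that reading (the metric, threshold and function names are illustrative assumptions, not Sensing Feeling’s implementation):

```python
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a behavioural baseline (e.g. seconds spent stationary per
    minute) from readings gathered during the trial phase."""
    return mean(samples), stdev(samples)

def is_anomaly(reading, baseline, z_threshold=3.0):
    """Flag a live reading that departs from the baseline by more than
    z_threshold standard deviations."""
    mu, sigma = baseline
    return abs(reading - mu) > z_threshold * sigma

# Trial-phase readings of the chosen behavioural metric
trial = [42, 45, 40, 44, 43, 41, 46, 44]
baseline = fit_baseline(trial)

# Live monitoring: alert on-site and remote teams on any departure
for reading in [43, 44, 80]:
    if is_anomaly(reading, baseline):
        print(f"alert: anomalous reading {reading}")  # fires for 80 only
```

Production systems would use far richer behavioural features and learned models rather than a single z-score, but the train-then-monitor structure is the same.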
Jag Minhas, chief executive and founder of Sensing Feeling, says that the system improves safety partly because it ‘depersonalises’ the issues by providing objective checks that give everyone the same information. ‘The system is playing back to you what you know, but with the ability to modify your behaviour,’ he says. ‘After accidents, crew members would often say: “We knew it was going to happen, but we were too afraid to speak up.” But if you can convey risk to the crew in a way that doesn’t require them to speak up, it’s an effective way of modifying behaviours.’
Tracking can benefit everybody – workers want to be able to make the best decisions at work too
Cedric Augé CMIOSH, safety, health, environment and quality manager at Morrison Utility Services, found that apps can provide an objectivity that is sometimes missing in safety conversations. While undertaking a master’s at the University of Strathclyde and working for the rail electrification business at Siemens, he trialled nine sleep and stress-tracking apps on himself and volunteer colleagues.
As well as conventional wrist-worn wearables, these included the Finnish Oura smart ring. The aim was to see whether the devices could improve on conventional shift and safety management strategies in a safety-critical track works environment, where workers are often sent long distances to sites, or work for more than one contractor via different agencies.
In one case, an app identified that a colleague’s stress peaked when she was driving in rush hour, so a shift adjustment allowed her to start the day with physical and cognitive reserves intact. But Cedric says the real benefits lie in encouraging general fatigue awareness among employers who manage shifts. ‘Fatigue is really a problem for shift workers – a lot of companies are behind on managing it. Tracking can benefit everybody – workers want to be able to make the best decisions at work too.’
In construction, algorithm-backed computer-vision systems that automate real-time analysis of CCTV feeds to monitor sites for PPE or safety compliance are now available. One such system, viAct, is deployed on around 30 sites in Hong Kong, Singapore, Taiwan and Japan, with ambitions to double that number in 2021.
Chief executive Gary Ng says that, by improving detection accuracy and reducing the time taken to review CCTV footage or walk the site, the system identifies 95% more PPE or safety infringements than the sites’ previous records showed. The system sends text alerts with video clips to mobile phones in the vicinity of any infringement or hazard. Video footage can also be reviewed at team safety meetings.
He also argues that AI is more affordable for employers, and less intrusive for employees, than Internet of Things sensors; one camera can monitor up to 10 individuals, vehicles or pieces of site equipment, with no wearables required. ‘It might be hard to get the individual to wear the device, or the battery might run out. Our solution has less impact on the workers,’ Gary says.
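The pipeline Gary describes, where a camera-fed detector flags non-compliance and nearby phones receive an alert, can be sketched in outline. Everything here, including the detection fields and zone-to-phone routing, is a hypothetical stand-in for the real computer-vision system:

```python
# Illustrative camera-monitoring pipeline: a detector (stand-in for the
# real vision model) labels each person in a frame, and an alerter
# notifies phones registered in the zone where the infringement occurred.
from dataclasses import dataclass, field

REQUIRED_PPE = {"helmet", "hi_vis_vest"}

@dataclass
class Detection:
    person_id: str
    zone: str
    ppe_seen: set = field(default_factory=set)

def infringements(detections):
    """Yield (detection, missing PPE) pairs for non-compliant people."""
    for d in detections:
        missing = REQUIRED_PPE - d.ppe_seen
        if missing:
            yield d, missing

def send_alerts(detections, phones_by_zone, send=print):
    """Push an alert (with a video clip, in the real system) to every
    phone registered in the infringement's zone."""
    for d, missing in infringements(detections):
        for phone in phones_by_zone.get(d.zone, []):
            send(f"{phone}: {d.person_id} missing {sorted(missing)} in {d.zone}")

frame = [
    Detection("W1", "gate", {"helmet", "hi_vis_vest"}),
    Detection("W2", "gate", {"hi_vis_vest"}),
]
send_alerts(frame, {"gate": ["+852-5550-0001"]})
```

In practice the detector would be a trained object-detection model and the alert channel an SMS gateway; the sketch only shows how detections fan out into localised alerts.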
In Gary’s experience, workers express concerns about being monitored, but the concern diminishes if monitoring covers a wider area and applies equally to everyone. In addition, tracking technology in acknowledged danger zones – such as site entrances and exits that see heavy traffic – helps to reduce accidents, so workers are less concerned and feel more protected.
Early evidence suggests the system is having an impact. ‘On the sites where the system has been deployed, no accidents have been recorded, but we will have more information when we are operating on a bigger scale, with more data to fine-tune the algorithms,’ says Gary.
MSD apps: Coaching colleagues
Preventing and managing MSDs is a growth area for safety tech. Travis Perkins plc’s BSS business has adopted the SoterCoach from Soter Analytics, a device clipped to the back of users’ collars.
It detects high-risk movements such as bending and twisting, repetitive movements and overreaching, and, by vibrating and emitting a gentle sound, warns the user immediately when a high-risk movement is made.
Vimel Budhdev, health, safety and environment improvement specialist, explains: ‘At the end of the shift, the device is synchronised to the mobile app to give a personalised tutorial. This coaches the colleague on replacing poor movements with better posture and techniques. The app provides clean and easy-to-navigate data on when poor techniques and high-risk movement were detected.’
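The behaviour described, instant warnings during the shift plus a tallied report for the end-of-shift tutorial, resembles threshold-based classification of posture samples. The angle thresholds, field names and rules below are illustrative assumptions, not Soter’s actual algorithm:

```python
# Sketch of collar-worn movement coaching: classify each sampled posture
# against simple angle thresholds, buzz immediately on a high-risk
# movement, and keep per-shift counts for the end-of-shift app review.
from collections import Counter

RISK_RULES = {
    "bending": lambda s: s["trunk_flexion_deg"] > 60,
    "twisting": lambda s: abs(s["trunk_rotation_deg"]) > 45,
    "overreaching": lambda s: s["arm_elevation_deg"] > 90,
}

def classify(sample):
    """Return the names of any high-risk movements in one sensor sample."""
    return [name for name, rule in RISK_RULES.items() if rule(sample)]

def run_shift(samples, buzz=lambda: print("buzz + gentle sound")):
    """Warn in real time and tally risks for the end-of-shift tutorial."""
    tally = Counter()
    for sample in samples:
        risks = classify(sample)
        if risks:
            buzz()            # immediate haptic/audio warning
            tally.update(risks)
    return tally              # synced to the mobile app after the shift

shift = [
    {"trunk_flexion_deg": 20, "trunk_rotation_deg": 5, "arm_elevation_deg": 30},
    {"trunk_flexion_deg": 75, "trunk_rotation_deg": 50, "arm_elevation_deg": 40},
]
print(run_shift(shift))  # Counter({'bending': 1, 'twisting': 1})
```

A production device would derive posture angles from accelerometer and gyroscope fusion rather than receive them directly, but the classify-warn-tally loop captures the coaching pattern.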
But greater uptake of monitoring prompted by COVID-19, though adopted with laudable intentions, could lead employers into a legal grey area on data privacy and protection. Theoretically, any pro-safety data-gathering system could be used unlawfully to monitor productivity, or to create records on who is associating with whom. Some employers have already come under fire: a spokesperson from Osborne Clarke notes that last year Barclays had to scrap a system that tracked the time employees spent at their desks and sent warnings to those spending too long on breaks.
There are also fears that automated decision-making tools can exclude humans and their insights from decision-making, or automate bias or errors. In this context, IOSH and others are arguing for ‘human-centred’ AI, or ‘human in the loop’ systems, where decision-making is only ever partially automated and human judgement has the last word.
Organisations need to empower a younger generation to drive these changes
Iván at IOSH says: ‘A people-centred approach is pivotal when it comes to integrating AI technologies into workplaces. This means care needs to be taken that workers and the application of technology, not just the technology itself, are treated as key elements of any decision-making on digital transformation that could affect their working conditions.’ IOSH is also monitoring developments by the European Parliament Panel for the Future of Science and Technology and the UK’s All-Party Parliamentary Group on Artificial Intelligence (see Resources below).
In the absence of national regulation, some tech firms are adopting self-regulation. Jag says that Sensing Feeling was set up with a pledge not to allow its technology to create an imbalance of power in the workplace: it requires all clients adopting its software solution to have an end-user agreement in place. ‘Our position on ethics is strong – we will always ensure that clients thinking of behavioural sensing bring staff and end-users along with them, and can help clients with workshops or feedback sessions to allay concerns.’
The safety profession has traditionally had an uneasy relationship with technology. In the hierarchy of risk, algorithms and machine learning software are ‘controls’ that should only form second- or third-line defences, while reliance on sensors, algorithms and lines of code can be a challenge to safety cultures built on the notion that safety is ‘everyone’s responsibility’. Cedric, now a tutor in safety and risk management at Strathclyde, says that the sector is still midway through a mindset shift on technology.
‘A lot of more mature people within the industry [making decisions on safety] are still sceptical when it comes to introducing digital innovations. Organisations need to empower a younger generation, more inclined to technology, to drive these changes.’
But there is a growing group of safety professionals that evangelises about the results that can be achieved. At Travis Perkins plc, for instance, Vimel says that using the Soter Analytics system has reduced risk exposure among high-risk warehouse staff by an average of 54%, and among medium-risk drivers by an average of 29%. ‘It helps us better understand colleagues’ movement and techniques, and it provides insight into how we can improve our workplace design and layout to better improve colleague safety.’
But before AI and machine learning can really make an impact on safety outcomes, there are clearly tensions to be resolved. There is our desire to be safer versus the risk of over-zealous monitoring, and our faith in emerging technologies versus our fear of loss of autonomy and dehumanised decision-making. In this phase where early adopters meet justifiable nervousness, and exciting technologies meet excitable public discourse, it seems clear that the way ahead would best be illuminated by the publication of much-needed standards, guidance and regulation.
Resources
- IOSH response to the European Commission white paper on AI: bit.ly/IOSH-AI-consultation-response
- EU-OSHA campaign against MSDs: healthy-workplaces.eu
- European Parliament Panel for the Future of Science and Technology on ‘Data subjects, digital surveillance, AI and the future of work’: bit.ly/EP-AI-future-of-work
- UK All-Party Parliamentary Group evidence brief on AI technologies in the post-Covid-19 recovery: bit.ly/APPG-AI-evidence