Individuals who work as online content moderators are often exposed to violent, distressing and exploitative material. We look at how employers can minimise the OSH risks and improve working conditions.
From graphic violence and sexual imagery to hate speech and self-harm, the range of online content that must be reviewed and evaluated before the public can safely consume it is mind-boggling.
Although Internet users may assume that algorithms remove such content automatically, the reality is that artificial intelligence (AI) still lacks the sophistication to make judgements about the suitability of content, so humans will continue to act as gatekeepers and quality controllers for the foreseeable future. As a result, the number of people working as online moderators, reviewing content and determining how appropriate it is, has spiked.
Repeated exposure to harmful material, however, has significant implications for those undertaking this role. As well as often being employed in poor working environments, the nature of the work means content moderators can risk damage to their long-term health. Mental health conditions such as anxiety are not uncommon and there have even been cases of individuals reporting PTSD-like symptoms.
The demand for content moderators has grown substantially in recent years to reflect the massive surge in new content flooding digital platforms. The latest figures for social media giant Facebook reveal that an incredible 4,000 images are uploaded every second while video-sharing platform YouTube sees more than 500 hours of content uploaded every minute.
Estimates suggest that digital platforms including Facebook, Google and YouTube employ tens of thousands of people globally to assess the suitability of content. Other digital firms such as Instagram, Reddit, Tumblr, and Twitter tend to subcontract these services through companies like TaskUs, Accenture, Genpact, and Cognizant, further dispersing the geographical profile of online content moderation.
Outside of Silicon Valley in California, a growing band of content moderators operates from lower-to-middle-income countries, where wages are lower and working conditions less regulated. Temporary contracts, excessive working hours and poor welfare facilities are common. Another emerging trend is the use of freelancers paid a fee per service – a model that offers even less worker protection.
In many ways the working conditions faced by content moderators differ little from those in more traditional functions such as contact centres. There are parallels in that both reflect a complex workplace culture. The contact centre sector has also undergone rapid structural change in recent years, shaped by similarly dramatic growth over a short time frame. In both sectors, most employees have relatively low levels of education and very little job security, managers closely monitor performance, and workers’ wellbeing is managed and prioritised in similar ways.
In recent years, digital firms have come under pressure from regulators and Internet users who have challenged them on their policies covering online content and criticised them for what they see as a lack of robustness in the safeguards. Some have come under fire over how they have monitored, managed and removed certain types of content from their platforms, particularly material that is user-generated.
"Most employees have relatively low levels of education and little job security. Managers also closely monitor performance"
‘The human side of online content moderation’, an article from asset management brand Aberdeen Standard Investments, notes how most of the digital giants have set up community standards that they expect their users to uphold and how they exercise the right to remove any content that falls short of these standards (bit.ly/37W0tvi). However, as the article goes on to explain, the trouble is that there are no clear-cut, industry-recognised standards for what constitutes appropriate online content. Indeed, some of the content is highly subjective. As a result, many of these companies are making their own decisions on what is and is not appropriate.
Effective line managers
The growing volume of often harmful material that content moderators are exposed to will inevitably affect their mental health and wellbeing. To safeguard these employees, businesses should provide effective training and strong psychological support. Managers operating in these challenging work environments should also be equipped with the skills to identify stressful situations and be aware of the potential consequences of intensive control measures and rigid monitoring practices.
The IOSH-funded study, Out of Sight, Out of Mind? Research into the Occupational Safety and Health of Distributed Workers (see box below) provides a wealth of materials to help companies develop more effective line management behaviours.
The absence of effective line management is particularly acute in less well-defined workforces such as online content moderators. However, there is still a general assumption that jobs emerging from the digital economy are ‘different’ from more traditional jobs. This outdated view needs to change: nothing indicates that these workers should receive fewer protections for their health and wellbeing. On the contrary, barriers should be removed so that they can enjoy the same standards of OSH and wellbeing as a workforce employed in more traditional sectors.
Out of sight, out of mind?
The IOSH-funded report by the University of East Anglia, Out of Sight, Out of Mind? Research into the Occupational Safety and Health of Distributed Workers, defines distributed (or remote) workers as “those workers who spend at least part of their working day working away from a main location”.
Examples include public service workers, transportation workers, and utilities, energy and telecoms workers; while hazards they face include “chemicals, power (gas, electricity), slips, trips, […] risks presented through interaction with the public, such as verbal and physical abuse…”.
The research’s main aim was to “develop our understanding of how OSH practitioners can ensure employee health and safety among distributed workers”.
The project examined whether current OSH leadership frameworks were applicable in the context of distributed working, whether other frameworks may be more appropriate, and whether OSH practitioners can deploy appropriate frameworks to ensure the cascade of effective OSH leadership via line managers to distributed workers.
The research team generated a toolkit for OSH practitioners. This resource includes the skills and abilities that underpin effective OSH leadership behaviours that facilitate good OSH practices among distributed workers.
For more information: bit.ly/2qxk7wQ
Like some other forms of employment that fall under the gig economy umbrella, the role of content moderators and the fragmented nature of their work break from the traditional employer-employee relationship. The following themes are common to this and similar types of work:
- Demands and relationships: stringent performance and productivity metrics place excessive pressure on workers and encourage poor management practices among supervisors, including micro-management and bullying. This affects working patterns and hours, with workers discouraged from taking breaks – even to go to the toilet.
- Support: there is a lack of transparency about the way the work is undertaken, reinforced by strict non-disclosure agreements. Workers therefore operate in a vacuum, unable to discuss their work – and, more importantly, its impact on them – with colleagues, friends or family. This isolation is compounded by high staff turnover, which discourages sustainable working relationships.
- High turnover rates: workforce churn is a concern, and employees tend not to form long-term working relationships. The work environment is known to be highly demanding and requires high levels of desensitisation.
- Change and role: as the public debate on what is appropriate content continues to rage (eg harmful content versus free speech), and regulatory frameworks attempt to keep pace, the industry will continue to be fragmented. The system relies on the use and availability of internal codes of conduct and internal policies but it is nearly impossible to ensure that these are up to date. Furthermore, these policies, guidelines and community standards have many exceptions and grey areas meaning that there is a lack of clarity in the employee’s role.
- Relationships: an insecure workforce, cheap labour, high staff turnover and a lack of scrutiny from regulators and investors mean the industry has not invested in its employees’ health and wellbeing. Primary controls such as risk assessment, job rotation and audit programmes have not been implemented. Secondary controls such as (mental) health surveillance and return-to-work and rehabilitation policies do not exist. In some cases, tertiary measures such as employee assistance programmes are available but have limited impact as standalone measures.
- Rights: while workers’ rights in a subcontracting culture can be easily undermined, this culture might come to an end. Some tech companies have faced lawsuits and are increasingly being held accountable for the working conditions of their contractors and temp agencies. Some commentators have advocated extending the duty of care so that third parties can be held responsible for harmful acts they did not cause but did not do enough to prevent – known as collective responsibility and/or secondary liability. This could also apply to workers who allege that they have sustained psychological damage as a result of their work and can prove that the hirer of the independent contractor failed to exercise reasonable care over the employee’s health and safety.
"Content moderators’ work is repetitive and there are limited opportunities to take breaks and to mentally switch off"
Mental ill-health burden
The psychological impact of viewing harmful material is unquestionable – to the extent that it is not uncommon for workers to use drugs in the workplace as a coping mechanism. Content moderators’ work is repetitive, and there are limited opportunities to take breaks and mentally switch off. The work environment can also contribute to a sense of isolation and anxiety. There are reports of individuals suffering burnout, stress, depression and even post-traumatic stress disorder. Rather than taking preventative action to support employees, tech firms have tended simply to replace individuals who leave rather than investigate the reasons for high staff turnover.
As the multinational professional services firm Ernst & Young noted in its publication The Plus Side of Mental Health, organisations should prioritise the work environment and its design by building on the principle that “a psychologically healthy workplace is an organisation where the psychological health of employees are valued and support is provided for those with psychological health problems”. Digital firms can do much more to minimise poor working conditions that impact on psychological health. So how can they bring about change? Some good starting points are:
1. Firms should show greater diligence in holding outsourced companies accountable for their contracted or subcontracted workers’ physical and psychological wellbeing. This duty of care should cover not only the protection of workers while they are employed, but also those who continue to have mental health problems after they have left.
2. Organisations should be more open and collaborate with key stakeholders (insurers, regulators, occupational health, public health, employee assistance programmes). They also need to improve the integration of their organisational functions that have responsibility for this area of work, including human resources and occupational safety and health.
3. Standard employee assistance programmes and mental health services that help employees such as content moderators cope with their work environment should form part of a more holistic approach to mental health and wellbeing, encompassing wellness initiatives, access to counsellors and 24-hour assistance lines.
4. Training and awareness need to be much more tailored to reflect the complexities of the work tasks. Interventions need to take into account the importance of workers’ health and wellbeing, including how to identify early symptoms or spot the signs of work-related mental ill-health.
5. Organisations need to manage these risks through the same governance systems they have in place for safety risks, yet the initiatives adopted have sometimes not been robust enough. Preventative measures such as distributing the work among more workers or limiting the amount of time people spend viewing extreme content are easy to implement.
6. Organisations should integrate and adopt automated systems and AI technologies to support content moderation.
Although the use of AI as a triage system to help remove harmful content is an important step forward, humans will not be removed entirely from these tasks. As a result, we all have a collective responsibility to protect and improve the working conditions of those who undertake this challenging work.
The views expressed here are those of the author and do not necessarily represent those of IOSH and/or this magazine.