Since in most cases it is not possible to quantify either the likelihood or the severity with such accuracy, we make relative judgements -- a broken leg is worse than a cut, and an accident that has never happened seems less likely than one we've seen several times.
A risk matrix puts these judgements into a table, where one dimension represents categories of severity and the other categories of likelihood. Typically, numbers are then assigned to each category and multiplied together to give a score for each combination of likelihood and severity. The scores are grouped into bands and labelled.
The illustration here shows a typical 3 x 3 risk matrix where a score of 1 is considered trivial or insignificant, a score of 9 intolerable, and anything in between tolerable provided the risk is managed to a level as low as reasonably practicable. For a 5 x 5 matrix, a score above 12 might be regarded as intolerable, and a score below 4 trivial.
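As a rough sketch of that arithmetic in Python -- the category names, the 1-3 numbering and the band boundaries below are illustrative assumptions rather than definitions from any particular scheme -- a 3 x 3 matrix might be scored like this:

```python
# Illustrative sketch of a 3 x 3 risk matrix: the labels, the 1-3
# numbering and the band boundaries are assumptions for this example.
LIKELIHOOD = {"unlikely": 1, "likely": 2, "very likely": 3}
SEVERITY = {"slightly harmful": 1, "harmful": 2, "extremely harmful": 3}

def risk_score(likelihood: str, severity: str) -> int:
    # Multiply the two category numbers, as a typical matrix does.
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def risk_band(score: int) -> str:
    # 1 is trivial, 9 intolerable, everything in between tolerable
    # provided it is managed to a level as low as reasonably practicable.
    if score == 1:
        return "trivial"
    if score == 9:
        return "intolerable"
    return "tolerable"

print(risk_score("likely", "harmful"))              # 4
print(risk_band(risk_score("likely", "harmful")))   # tolerable
```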
Can anyone in your organisation explain why you use the matrix you use? Is it appropriate for everything you need to risk assess?
The Health and Safety Executive offers a sample risk matrix (bit.ly/2qNgr40) using the categories of likelihood and severity shown in the illustration.
Any organisation that adopts this model should define these categories to match its risk profile and risk appetite, and the definitions must be capable of distinguishing between the different hazards the organisation faces -- if all the hazards end up in one or two boxes, the matrix no longer helps with the prioritisation of risk controls. For example, a small children's nursery might define anything involving a visit to hospital as extreme harm; if the same definition were used for European motor racing, there would be no useful way of distinguishing between a broken leg, a single fatality and a multiple fatality.
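As a toy sketch of that clustering problem -- the hazards and the hospital-visit rule are invented for the illustration -- a severity definition borrowed from the nursery puts every racing outcome in the same box:

```python
# Invented example: a severity definition tuned to a nursery's risk
# profile cannot separate the outcomes a motor racing organiser faces.
def nursery_severity(outcome: str) -> str:
    # Anything involving a hospital visit (or worse) counts as extreme harm.
    return "extreme harm" if ("hospital" in outcome or "fatality" in outcome) else "slight harm"

racing_outcomes = ["broken leg (hospital visit)", "single fatality", "multiple fatality"]
print({outcome: nursery_severity(outcome) for outcome in racing_outcomes})
# All three map to "extreme harm", so the matrix can no longer
# distinguish between them -- the definitions need rewriting.
```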
One curiosity of the risk matrix is the dependence people place on the numbers. Although we can say that a score of 4 means something carries less risk than a 9 and more than a 1, in reality we know nothing about the relationship between the risks in a diagonal band. It is not possible to prove that a risk of 2 x 2 is higher than a risk of 3 x 1. We don't even know that 3 x 1 is the same as 1 x 3. Think about shoe sizes. A size 7 is larger than a size 3, but if you placed a size 7 and a size 3 end to end, the sizes alone would not tell you whether they were longer or shorter than a size 6 and a size 4 end to end. You cannot assume that 7 + 3 = 6 + 4 when the numbers represent categories rather than points on an interval scale.
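One way to see the problem -- a sketch in which the relabelling is arbitrary but order-preserving -- is that any other numbering which keeps the categories in the same order is just as valid, yet it can reverse which risk scores higher:

```python
# Sketch: multiplying ordinal category labels is not meaningful.
# Two order-preserving numberings of the same severity categories
# rank the same pair of risks differently.
def score(likelihood_rank: int, severity_rank: int, severity_labels: dict) -> int:
    return likelihood_rank * severity_labels[severity_rank]

labels_a = {1: 1, 2: 2, 3: 3}    # the usual 1, 2, 3
labels_b = {1: 1, 2: 2, 3: 10}   # same ordering, different spacing

risk_x = (2, 2)   # medium likelihood, medium severity
risk_y = (1, 3)   # low likelihood, extreme severity

for labels in (labels_a, labels_b):
    sx, sy = score(*risk_x, labels), score(*risk_y, labels)
    print(sx, sy, "x scores higher" if sx > sy else "y scores higher")
# labels_a gives 4 vs 3 (x higher); labels_b gives 4 vs 10 (y higher).
```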
To avoid confusion, ditch the numbers and replace "mostly harmful", "unlikely" and so on with descriptions that match your organisation's risk profile, and simply use the coloured areas to categorise the risk bands.
Then test your risk matrix -- ask colleagues to independently place the same hazards on the matrix and see whether they come up with the same answers. Are you all happy with the prioritisation the matrix gives you? Are you really as tolerant of the high likelihood of a harmful event as you are of the medium likelihood of an extremely harmful event? You might extend the red or the green area, or create a less symmetrical pattern to match your risk appetite.
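A minimal sketch of that test, with invented hazards and placements: collect each colleague's placement of every hazard and flag the disagreements.

```python
# Sketch of the consistency test: two colleagues independently place
# the same hazards on the matrix; every mismatch is flagged.
# Hazards and placements are invented for this example.
placements_a = {"wet floor": ("likely", "slightly harmful"),
                "work at height": ("unlikely", "extremely harmful")}
placements_b = {"wet floor": ("likely", "harmful"),
                "work at height": ("likely", "extremely harmful")}

for hazard, placement_a in placements_a.items():
    placement_b = placements_b[hazard]
    if placement_a != placement_b:
        print(f"disagreement on {hazard!r}: {placement_a} vs {placement_b}")
# Frequent disagreements suggest the category definitions are not clear
# enough to support consistent prioritisation.
```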
Risk matrices can be useful tools for determining acceptability and priorities in risk management -- but we must test the matrices we use to make sure they are telling us what we think they are telling us. Poorly understood severity and likelihood categories and arbitrary risk bands will lead us to draw the wrong conclusions.