I started writing about risk assessment for AoBBlog a few weeks ago, but only this week have I met a critical risk assessment in practice. I’m writing this when I should be on a flight to Africa: why am I not on that flight? Foreign Office travel advice changed to “avoid all but essential travel” and I abandoned my plans. I could have gone and ‘enjoyed’ a few days inside a hotel and a couple of days in a meeting, most likely without internet, but I certainly would not have been able to achieve most of the aims of my visit – examining biodiversity in situ, recording the challenges facing farmers as seen in the markets and fields, and talking with students in various labs.
After cancelling my trip, I heard about the tragedy of Sharon Gray, a highly regarded plant scientist, who was killed in riots on the road between the genebank site and the capital. I’m also well aware that the greatest risk when travelling almost anywhere, though, is a road traffic accident. This risk is easy to underestimate, but it can also be well managed: by using public transport (often, though not invariably, safer), wearing seat belts, sitting in the back of vehicles, using well-maintained vehicles, or avoiding night travel. These two examples show risk management in practice, a nearly unconscious procedure that everyone follows, with actions ranging from not doing something at all, through using alternatives, to wearing protective equipment or changing operating practices.
The equation well known in laboratory assessments of protocols – including for fieldwork involving the activities above – is:
Risk = Hazard × Exposure
A hazard is something with the potential to cause harm to a person, such as electricity, working on a high floor of a building, noise, a puddle of water, or using a keyboard. A risk is the chance, high or low, that a hazard will actually cause somebody harm, given the practical implementation of mitigation measures. The exposure part of the equation covers how those mitigations work out in practice: even in a new car the brakes might fail, or there might be a gap between your lab coat and gloves.
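The distinction can be made concrete in a few lines of code. This is only an illustrative sketch: the severity and exposure numbers below are invented for the example, not taken from any real assessment.

```python
# A minimal sketch of Risk = Hazard x Exposure, with illustrative numbers.
# Hazard severity and exposure are both on arbitrary 0-1 scales here.

def risk(hazard_severity: float, exposure: float) -> float:
    """Risk = Hazard x Exposure."""
    return hazard_severity * exposure

# Brake failure in a car: the hazard is severe either way, but good
# maintenance reduces the practical exposure to it.
unmaintained = risk(hazard_severity=0.9, exposure=0.10)
maintained = risk(hazard_severity=0.9, exposure=0.01)

# Mitigation lowers exposure, not the hazard itself.
assert maintained < unmaintained
print(f"unmaintained: {unmaintained:.3f}, maintained: {maintained:.3f}")
```

The point the sketch makes is the one in the text: mitigation acts on the exposure term, while the hazard itself is unchanged.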
Within individual laboratories there are now well-established risk analysis procedures, aiming to reduce risk to minimal values. Many of these are based on national, or supra-national, rules and legislation, with local implementation standards. Supervision of these rules comes at various levels, not least at the final stage of publication: at Annals of Botany we require authors to warrant that “all national laws relating to the research have been complied with” when submitting a paper. Since subjecting people, or indeed the environment, to an unassessed risk would be illegal in every country, we would request clarification, or even decline to publish, a paper where the procedures used did so.
A more controversial aspect of risk assessment has been whether the risks associated with the mitigation measures themselves should be considered. Personally, given the equation Risk = Hazard × Exposure, and the requirement to consider ‘exposure’ in its practical application, I think this is covered. So in the car-driving example, exposure to the hazard is mitigated by driver training (still leaving a significant level of exposure), maintenance of the car (reducing exposure to brake failure), or introducing autonomous cars (with a huge potential reduction in exposure, but still requiring rigorous assessment of the chances of conditions where the control system does not work, or of malicious intervention in the control system). Such an assessment does require appropriate definition of the outcome process being examined.
How are the laws or rules, which must be followed, established? How is risk assessment used to control chemicals? I have recently been involved in discussions across Europe trying to ensure that risk, and exposure controls, follow robust scientific assessments and not opinions (Dietrich et al., 2016a and 2016b). Unfortunately, there is widespread misunderstanding of the difference between hazard and risk, and this is being deliberately exploited by pseudoscientists.
The characterization of risk determines the likelihood that effects will occur under real exposure conditions. Whether for chemicals, of natural or synthetic origin, or for GM crops, sound regulation requires comparison of exposure with potency, and risk characterization is required so that the potential benefit of a chemical can be weighed against its potential to inflict harm. We pointed out that governments always have access to robust scientific advice, but it is not always used in legislation because of strongly expressed opinions and (sometimes blatantly present) advocacy activities, where the presentation of issues to the public by some groups has been deliberately selective and courses of action have been proposed that are not supported by a scientific evidence base.
For EDCs, glyphosate and gene-editing techniques for plant breeding, for example, there is a huge database and detailed understanding of all aspects of the substances, from their mode of action, to their breakdown in the environment, to their effects in humans. Globally, the risks of not eating enough vitamin A or its precursor are well characterized: deficiency of this micronutrient, obtained from plant sources, is stated to kill 667,000 children under 5 years old each year, accounting for 6.5% of all deaths (Black et al., 2008, where the huge further burden from sub-lethal effects is also measured). As most plant scientists know, golden rice, with genes inserted for beta-carotene biosynthesis, would start to alleviate this major risk to the world’s children.
It is critical that plant scientists recognize that the management of risks should be based on robust scientific evidence – just as legal procedures are (not least in criminal law). Use of such scientific evidence will ensure protection of human health and the environment, while maintaining the sustainability of agriculture and industry.