Non-financial risks have become at least as important, to both firms and regulators, as financial risks. This is largely because of firms’ dependence on IT systems to operate their businesses, but regulators (particularly the Financial Conduct Authority – the FCA) are also highlighting non-financial risks arising from newer regulatory topics, adding to the universe of risks and risk categories firms need to consider.
This is the first of three articles looking at some of the new and emerging non-financial risks and the risk management and governance points that arise from those risks. This article focuses on new and emerging technology-related non-financial risks, along with issues arising from regulators’ work on operational resilience. The second article considers some of the new non-financial risks that fall outside the scope of technology (e.g. risk of non-compliance with Consumer Duty obligations; risk of lack of diversity). The third article considers risk management and governance points. There is more thinking behind each of the articles; I have condensed it to keep the articles a manageable length.
1 Technology risks include risks of system failure, which could arise from power outages, cyber-attacks, transition to new systems or changes made to existing systems. A system failure might be a short interruption or last for a longer period, and might affect certain parts of the business and its customers or the organisation as a whole. The failure might originate in systems operated by the firm or at an outsourced service provider, and there could also be underlying problems in software or other components on which the firm or a third-party service provider depends. This presents a considerable range of risk scenarios.
Technology risk isn’t new, but its significance has increased, and continues to do so, and operational resilience has moved up the regulatory agenda. Both regulators will expect to see full compliance with operational resilience requirements from 31 March 2025, but those requirements are about the ability of firms to prevent, adapt and respond to, recover from and learn from operational disruptions. That will feed into operational risk management work but won’t replace it. Operational risk management – including work in relation to technology risks – will run in parallel with operational resilience work and be informed and supported by it.
And new technology-related risks are emerging, within firms and within the various third parties they deal with and other providers along the outsourcing chain. The types of cyber-attack are evolving and, when identified, need to be considered within the risk management framework. Levels of dependency on third parties – particularly those providing critical IT infrastructure – can create new risks and affect the risk profile, and regulators expect firms to identify new and emerging resilience risks in the context of their business as part of non-financial risk management work. Meeting operational resilience requirements will help to inform that work and mitigate those risks.
2 Cyber-attacks, power outages and other incidents could also result in loss of information (including customer information) or in information being accessed, altered or otherwise compromised. Firms will need to consider operational resilience risks alongside risks to information security. Although there will be overlap, the risks won’t be cast in the same terms, and risk assessments and mitigants are unlikely to be fully aligned.
3 Contagion and concentration risks are highly relevant to technology risks, although the terminology has traditionally been used in relation to financial risks. As well as contagion risks generated by individual firms’ activities, dependency on third parties that are also critical third parties within the wider financial services sector creates scope for contagion. Dependence on a single third-party platform for the whole of the business creates a concentration risk for the firm, and there might be other concentrations that should be considered in a firm’s risk profile. Due diligence work should consider dependencies within outsourced service providers, including whether a number of providers sharing the same dependency should be treated as an additional (concentration or other) risk.
4 AI-related risks need to be considered too. Most larger firms now use AI in some form and more extensive use can be expected in the future. And machine learning and chatbots are already used at many firms. Possible AI risks include:
- The risk of not understanding AI and the risks it can generate.
- The risk of ‘rogue’ AI within the business.
- Potentially, increased risk of cyber-attack through an AI interface.
- The risk of not taking full advantage of the potential that AI could bring to the business and becoming uncompetitive or failing to offer customers the products and services – and support – they need.
The list of risks a firm identifies now will alter as further AI-related risks emerge and descriptions of risks are refined.
5 Many model management risks are similar to those in the broader category of AI-related risks:
- The risk that models aren’t understood within the business or that the business is dependent on one or two people who understand a model and the risks it presents.
- The risk that a model evolves beyond its original scope or purpose or (through machine learning) develops in ways that the firm can’t pinpoint or can’t understand.
- The risk of bias within models – at the outset and over time (particularly if a model evolves, as mentioned in the previous bullet point).
- The risk of models not keeping up with external developments – for instance, new developments in fraud.
6 Looking specifically at machine learning and the use of chatbots, risks might include:
- The risk of bias in machine-learning data, initially or over time.
- The risk of insufficient suitable data for the model, resulting in skewed learning and potential for bias (see also the previous bullet point).
- The risk of delays being experienced by customers where machine learning requires human review or other intervention.
- The risk that chatbots aren’t set with the right tone, values, responses and other parameters – or that they alter over time through machine learning, or don’t alter as the business and its values and standards evolve.
In practice, these risks will need to be considered in combination, with risk appetites set, risk assessments carried out, mitigants and controls devised, and reporting made by reference to a range of risk combinations and scenarios. That will require active involvement by the business as well as the Risk function, and oversight by the board – topics considered further in part 3 of these articles.
This article is intended to provide general information about current and expected topics and perspectives that might be of interest. It does not provide or constitute, or purport to provide or constitute, advice relevant to any particular circumstances. Legal or other professional advice relevant to any particular circumstances should always be sought.