Well-meaning cybersecurity risk owners deploy countermeasures in an effort to manage the risks they see affecting their services or systems. What is less often considered is that those countermeasures may themselves produce unintended, negative consequences: harms that adversely affect user behaviour, user inclusion, or the infrastructure itself (including the services of others).
Here, I describe a framework co-developed with several international researchers at a Dagstuhl seminar in mid-2019, which resulted in an eCrime 2019 paper later that year. We were drawn together by an interest in understanding the unintended harms of cybersecurity countermeasures, and in encouraging efforts to preemptively identify and avoid those harms. Our collaboration on this theme drew on our varied, multidisciplinary backgrounds and interests, spanning not only risk management and cybercrime, but also security usability, systems engineering, and security economics.
We saw it as necessary to focus on situations where there is urgency to counter threats, but where efforts to manage those threats can themselves introduce harms. As documented in the recently published seminar report, we explored specific situations in which potential harms may make the overarching problems more difficult to resolve, and so cannot be ignored, especially where a potentially harmful countermeasure ought to be avoided. Example case studies of particular importance include tech-abuse by an intimate partner, online disinformation campaigns, combating CEO fraud and phishing emails in organisations, and online dating fraud.
Consider disinformation campaigns, for example. Efforts to counter disinformation on social media platforms can include fact-checking and automated detection algorithms working behind the scenes. These can reduce the burden on users to address the problem themselves. However, automation can also reduce users’ scepticism towards the information they see, and fact-checking can be appropriated by one group as a tool to challenge the viewpoints of dissimilar groups.
We then see how unintended harms can shift the burden of managing cybersecurity onto others in the ecosystem, without those others necessarily expecting it or being prepared for it. Some vulnerable populations may be disadvantaged by the effects of a control more than others. For example, legitimate social media users may be removed from a platform, or have their content removed, because they share traits with malicious actors or their behaviour, e.g., referring to some of the same topics, irrespective of sentiment (an instance of ‘Misclassification’ in the list below). If a user, user group, or their online activity is removed from the system, the risk owner for that system may never notice the problems this creates for those users: the countermeasure has excluded them from view. Anticipating and avoiding unintended harms before they occur is therefore crucial.
Based on the scenarios we examined and the associated countermeasures, we inductively organised unintended harms into seven categories, listed here in no particular order (see also Table 6, p. 149 of the seminar report):
- Displacement: rather than being reduced, risks and potential harms are moved elsewhere.
- Insecure norms: behaviours or norms that carry greater harms for users and/or infrastructures are inadvertently encouraged or made feasible.
- Additional costs: the time and/or resources required for particular parties to be involved in the system or service are increased.
- Misuse: the countermeasure itself is used by others for malicious purposes.
- Misclassification: the countermeasure wrongly identifies a phenomenon (e.g., a Twitter post), producing a false positive or a false negative (a minimal sketch follows this list).
- Amplification: the countermeasure results in an increase in the targeted behaviour.
- Disruption: the countermeasure interrupts existing, effective countermeasure(s).
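To make the ‘Misclassification’ category concrete, the following is a minimal sketch of a naive keyword-based filter of the kind a platform might use to detect scam or disinformation posts. The keyword list and example posts are hypothetical, chosen only to show how both error types arise; real platforms use far more sophisticated classifiers, but the failure modes are the same in kind.

```python
# Minimal sketch of the 'Misclassification' harm: a naive keyword filter
# intended to catch scam posts. Keywords and posts are hypothetical.

SCAM_KEYWORDS = {"giveaway", "wire transfer", "crypto doubling"}

def flag_post(text: str) -> bool:
    """Flag a post as malicious if it mentions any scam-related keyword."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in SCAM_KEYWORDS)

posts = [
    # (post text, whether it is actually malicious)
    ("Send 1 BTC to join our crypto doubling giveaway!", True),
    # A researcher warning about the scam shares its vocabulary...
    ("PSA: the 'crypto doubling' giveaway going round is a scam.", False),
    # ...while the reworded scam itself evades the keyword list.
    ("Send 1 BTC and get 2 back. Limited offer, DM me.", True),
]

for text, is_malicious in posts:
    flagged = flag_post(text)
    if flagged == is_malicious:
        outcome = "correct"
    elif flagged:
        outcome = "false positive"
    else:
        outcome = "false negative"
    print(f"{outcome:>14}: {text}")
```

The false positive here mirrors the social media example above: a legitimate user is penalised for referring to the same topics as malicious actors, irrespective of sentiment.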
We also translated these categories into a series of prompts that practitioners and stakeholders can use to systematically examine existing or planned cybersecurity countermeasures. An additional prompt addresses vulnerable groups, querying whether some entities are more at risk of unintended harm than others. Stakeholders ought to be mindful of how a countermeasure can impact already at-risk groups, or create new vulnerable groups within the system. We envision the framework as a tool for supporting conversations between stakeholders who need to coordinate their approach in a complex, multi-party service and/or technology ecosystem.
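To give a flavour of how these prompts might be applied in practice, here is a rough sketch that walks through one question per category, plus the vulnerable-groups prompt, for a given countermeasure. The question wording is my own paraphrase for illustration; it is not quoted from the paper.

```python
# Illustrative checklist built from the seven harm categories plus the
# vulnerable-groups prompt. Question wording is paraphrased, not quoted
# from the eCrime 2019 paper.

PROMPTS = {
    "Displacement": "Could the risk or harm move elsewhere rather than shrink?",
    "Insecure norms": "Could it encourage riskier behaviours or norms?",
    "Additional costs": "Does it add time or resource costs for any party?",
    "Misuse": "Could the countermeasure itself be turned to malicious ends?",
    "Misclassification": "Could it mislabel activity (false positives/negatives)?",
    "Amplification": "Could it increase the very behaviour it targets?",
    "Disruption": "Could it interrupt existing, effective countermeasures?",
    "Vulnerable groups": "Are some groups at greater risk of unintended harm?",
}

def review(countermeasure: str) -> None:
    """Print each prompt so stakeholders can discuss it for the countermeasure."""
    print(f"Reviewing: {countermeasure}")
    for category, question in PROMPTS.items():
        print(f"  {category}: {question}")

review("Automated detection of disinformation posts")
```

As with any checklist, the value lies less in the tooling than in the structured conversation each question prompts between stakeholders.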
We developed the framework further after the seminar, resulting in the paper “Identifying Unintended Harms of Cybersecurity Countermeasures”, to appear in the proceedings of the APWG eCrime 2019 symposium. The work was awarded best paper at the symposium, with programme co-chair Gianluca Stringhini commenting that it addressed a gap “that is often overlooked when studying potential mitigations for online crime”.
We owe special thanks to Schloss Dagstuhl, and the organisers of Dagstuhl Seminar 19302 (“Cybersafety Threats – from Deception to Aggression”), for providing the opportunity for us to work together on what we believe is a critical, but overlooked, part of risk management.