Hewlett Packard Enterprise (HPE) recently announced the release of our business whitepaper “Awareness is only the first step”, co-authored by HPE, UCL, and CESG, the UK government’s National Technical Authority for Information Assurance. The whitepaper emphasises how a user-centred approach to security awareness can empower employees to be the strongest link in defending their organisation. As Andrzej Kawalec, HPE’s Security Services CTO, notes in the press release:
“Users remain the first line of defense when faced with a dynamic and relentless threat environment.”
Security communication, education, and training (CET) in organisations aims to align employee behaviour with the security goals of the organisation. Security managers conduct regular security awareness activities; familiar vehicles for awareness programmes, such as computer-based training (CBT), cover topics such as password use, social media practices, and phishing. However, there is limited evidence for the effectiveness or efficiency of CBT, and a lack of reliable indicators means it is not clear whether recommended security behaviour is followed in practice. If the design and delivery of a CET programme does not consider the individual, it cannot be certain to achieve the intended outcomes. As Angela Sasse comments:
“Many companies think that setting up web-based training packages are a cost-effective way of influencing staff behavior and achieving compliance, but research has provided clear evidence that this is not effective – rather, many staff resent it and suffer from ‘compliance fatigue.’”
The whitepaper describes a path to guide the involvement of employees in their own security, as shown in the HPE awareness maturity curve above. To change security behaviours, a company needs to invest in the security knowledge and skills of its employees, and to respond to employee needs differently at each stage.
Why have Bitcoin, with its distributed consistent ledger, and now Ethereum, with its support for fully fledged “smart contracts,” captured the imagination of so many people, both within and beyond the tech industry? The promise is to replace obscure stores of information and arcane contract rules – with their inefficient, ambiguous, and primitive human interpretations – with publicly visible decentralized ledgers. This reflects the growing technological zeitgeist: a guarantee that all participants would know, and be able to foresee, the consequences of both their own actions and the actions of all others. The precise specification of contracts as code, with clauses automatically executed in response to certain sets of events and permissible user actions, represents for some a true state of utopia.
Regardless of one’s views on the potential for distributed ledgers, one of the most notable innovations that smart contracts have enabled thus far is the idea of a DAO (Decentralized Autonomous Organization): a specific type of investment contract through which members individually contribute value that is then collectively invested under some governance model. In truly transparent fashion, the details of this governance model, including who can vote and how many votes are required for a successful proposal, are all encoded in a smart contract that is published (and thus globally visible) on the distributed ledger.
Today, this vision met a serious stumbling block: a “bug” in the contract of the first majorly successful DAO (which broke records by raising 11 million ether, the equivalent of 150 million USD, in its first two weeks of operation) allowed third parties to start draining its funds, and to eventually make off with 4% of all ether. The immediate response of the Ethereum and DAO community was to suspend activity – seemingly anathema for a ledger designed to provide high resiliency and availability – and propose two potential solutions: a “soft-fork” that would impose additional rules on miners in order to exclude all future transactions that try to use the stolen ether, or, more drastically (and running directly contrary to the immutability of the ledger), a “hard-fork” that would roll back the transactions in which the attack took place, in addition to the many legitimate transactions that took place concurrently. Interestingly, a variant of the bug that enabled the hack was known to and dismissed by the creators of the DAO (and the wider Ethereum community).
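The flaw at the heart of the attack was a reentrancy bug: the contract made an external call to send funds before zeroing the caller’s recorded balance, so a malicious recipient could re-enter the withdrawal function and withdraw again. A minimal Python sketch of the pattern follows – this is illustrative only (the real DAO was written in Solidity, and all names below are invented):

```python
class VulnerableDAO:
    """Toy model of a flawed withdrawal routine (not the actual DAO code)."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, account, callback):
        amount = self.balances[account]
        if amount > 0:
            callback(amount)            # external call happens FIRST...
            self.balances[account] = 0  # ...balance is zeroed only afterwards


class Attacker:
    """Re-enters withdraw() from its payment callback before bookkeeping runs."""

    def __init__(self, dao, rounds):
        self.dao = dao
        self.rounds = rounds
        self.stolen = 0

    def drain(self):
        self.dao.withdraw("attacker", self.receive)

    def receive(self, amount):
        self.stolen += amount
        if self.rounds > 1:
            self.rounds -= 1
            # Balance is still non-zero here, so withdraw pays out again.
            self.dao.withdraw("attacker", self.receive)
```

With a single 100-unit balance and three rounds of re-entry, the attacker collects 300 units – mirroring how the DAO attacker repeatedly withdrew before the bookkeeping caught up. The fix is to update the balance before making the external call (the “checks-effects-interactions” pattern).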
While some may be surprised by this series of events, Maurice Wilkes, designer of the EDSAC, one of the first computers, reflected that “[…] the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs.” The fact that a program is precisely defined does not make it easy to foresee what it will do once executed under the control of its users. In fact, Rice’s theorem states that there is no general method for determining whether the behaviour of a program – and thus of a smart contract – satisfies any specific non-trivial property.
This forms the basis on which modern verification techniques operate: they define subsets of programs for which it is possible to prove some properties (e.g., through typing), or attempt to prove properties in a post-hoc way (e.g., through verification), with the understanding that they may fail in general. There is thus no scientific basis on which one can assert generally that smart contracts can easily provide clarity into, and foresight of, their consequences.
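Rice’s theorem can be made concrete with a toy diagonal argument: any claimed decider for a non-trivial behavioural property (say, “does this function return 0?”) can be defeated by a program that consults the decider about itself and does the opposite. The sketch below is illustrative (all names are invented); the “decider” here is the naive one that simply runs the program, and the diagonal program sends it into unbounded recursion:

```python
def naive_returns_zero(f):
    """A naive 'decider' for the property "f() returns 0": just run f.

    Rice's theorem guarantees that no decider for this property can be
    both total (always terminating) and correct on all programs.
    """
    return f() == 0


def paradox():
    # Diagonal construction: ask the decider about ourselves,
    # then do the opposite of whatever it predicts.
    if naive_returns_zero(paradox):
        return 1
    return 0
```

Calling `naive_returns_zero(paradox)` never produces an answer (Python surfaces the infinite regress as a `RecursionError`): the decider must run `paradox`, which consults the decider, and so on. The same diagonal argument defeats any candidate decider, which is why verification tools either restrict the class of programs they accept or accept that they may fail to give an answer.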
The unfolding story of the DAO and its consequences for the Ethereum community offers two interesting insights. First, as a sign that the field is maturing, there is an explicit call for understanding the computational space of safe contracts, and contracts with foreseeable consequences. Second, it suggests the need for smart contracts protecting significant assets to include external, possibly social, mechanisms in order to unlock significant value transfers. The willingness of exchanges to suspend trading and of the Ethereum developers to suggest a hard-fork is a last-resort example of such a social mechanism. Thus, politics – the discipline of collective management – reasserts itself as having primacy over human affairs.
The Investigatory Powers Bill, being debated in Parliament this week, proposes the first wide-scale update in 15 years to the surveillance powers of the UK law-enforcement and intelligence agencies.
The Bill has several goals: to consolidate some existing surveillance powers currently either scattered throughout other legislation or not even publicly disclosed, to create a wide range of new surveillance powers, and to change the process of authorisation and oversight surrounding the use of surveillance powers. The Bill is complex and, at 245 pages long, makes scrutiny challenging.
The Bill has had its first and second readings in the House of Commons, and has been examined by relevant committees in the Commons. The Bill will now be debated in the ‘report stage’, where MPs will have the chance to propose amendments following committee scrutiny. After this it will progress to a third reading, and then to the House of Lords for further debate, followed by final agreement by both Houses.
These committees faced the difficult task of meeting an accelerated timetable for the Bill, with the government aiming to have it become law by the end of 2016. The reason for the haste is that the Bill would re-instate and extend the government’s ability to compel companies to collect data about their users, even without any suspicion of wrongdoing – a power known as “data retention”. This power was previously set out in the EU Data Retention Directive, but in 2014 the European Court of Justice found it to be unlawful.
Many questions remain about whether the powers granted by the Bill are justifiable and subject to adequate oversight. Where insights from computer security research are particularly relevant is on the powers granting law enforcement the ability to bypass normal security mechanisms, sometimes termed “exceptional access”.
Terms and Conditions (T&Cs) are long, convoluted, and very rarely actually read by customers. Yet when customers are subject to fraud, the content of the T&Cs, along with national regulations, matters. The ability to revoke fraudulent payments and reimburse victims of fraud is one of the main selling points of traditional payment systems, but to be reimbursed a fraud victim may need to demonstrate that they have followed the security practices set out in their contract with the bank.
Security advice in banking terms and conditions varies greatly across the world. Our study’s scope included Europe (Cyprus, Denmark, Germany, Greece, Italy, Malta, and the United Kingdom), the United States, Africa (Algeria, Kenya, Nigeria, and South Africa), the Middle East (Bahrain, Egypt, Iraq, Jordan, Kuwait, Lebanon, Oman, Palestine, Qatar, Saudi Arabia, UAE, and Yemen), and East Asia (Singapore). Of the 30 banks’ terms and conditions studied, 26 give more or less specific advice on how you may store your PIN. The advice ranges from “Never writing the Customer’s password or security details down in a way that someone else could easily understand” (Arab Banking Corp, Algeria) and “If the Customer makes a written record of any PIN Code or security procedure, the Customer must make reasonable effort to disguise it and must not keep it with the card for which it is to be used” (National Bank of Kenya) to “any record of the PIN is kept separate from the card and in a safe place” (Nedbank, South Africa).
Half of the T&Cs studied give advice on choosing and changing one’s PIN. Some banks ask customers to immediately choose a new PIN when receiving a PIN from the bank, others don’t include any provision for customers to change their PIN. Some banks give specific advice on how to choose a PIN:
When selecting a substitute ATM-PIN, the Customer shall refrain from selecting any series of consecutive or same or similar numbers or any series of numbers which may easily be ascertainable or identifiable with the Customer…
Only 5 banks give specific advice about whether you are allowed to re-use your PIN on other payment cards or elsewhere. There is also disagreement about what to do with the PIN advice slip, with 7 banks asking the customer to destroy it.
Some banks also include advice on Internet security. In the UK, HSBC for example demands that customers
always access Internet banking by typing the address into the web browser and use antivirus, antispyware and a personal firewall. If accessing Internet banking from a computer connected to a LAN or a public Internet access device or access point, they must first ensure that nobody else can observe, copy or access their account. They cannot use any software, such as browsers or password managers, to record passwords or other security details, apart from a service provided by the bank. Finally, all security measures recommended by the manufacturer of the device being used to access Internet banking must be followed, such as using a PIN to access a mobile device.
Over half of banks tell customers to use firewalls and anti-virus software. Some even recommend specific commercial software, or tell customers how to find some:
It is also possible to obtain free anti-virus protection. A search for ‘free anti-virus’ on Google will provide a list of the most popular.
In the second part of our paper, we investigate customers’ perception of banking T&Cs in three countries: Germany, the United States, and the United Kingdom. We present participants with two real-life scenarios in which individuals are subject to fraud, and ask them to decide on the outcome. We then present them with sections of T&Cs representative of their country and ask them to re-evaluate the outcome of the two scenarios.
| Question                          | DE    | UK    | US    |
|-----------------------------------|-------|-------|-------|
| Scenario 1: Card loss             | 41.5% | 81.5% | 76.8% |
| Scenario 1: Card loss after T&Cs  | 70.7% | 66.7% | 96.4% |
| Scenario 2: Phishing              | 31.7% | 37.0% | 35.7% |
| Scenario 2: Phishing after T&Cs   | 43.9% | 46.3% | 42.9% |
The table above lists the percentage of participants who say that the money should be returned in each scenario. We find that in all but one case, participants are more likely to have the protagonist reimbursed after reading the terms and conditions. This is noteworthy – our participants are generally reassured by what they read in the T&Cs.
Further, we assess participants’ comprehension of the T&Cs. Only 35% of participants fully understand the sections, and the regional variations are large: 45% of participants in the US fully understand the T&Cs, but only 22% do so in Germany. This may well be related to the differences in consumer protection law between the countries: in the US, federal regulations give consumers much stronger protections. In Germany and the UK (and indeed throughout Europe, under the EU’s Payment Services Directive), whether a victim of fraud is reimbursed depends on whether they have been grossly negligent – a term that is not clearly defined and that confused our participants throughout.