Creating scalable distributed ledgers for DECODE

Since the introduction of Bitcoin in 2008, blockchains have gone from a niche cryptographic novelty to a household name. Ethereum expanded the applicability of such technologies, beyond managing monetary value, to general computing with smart contracts. However, we have so far only scratched the surface of what can be done with such “Distributed Ledgers”.

The EU Horizon 2020 DECODE project aims to expand those technologies to support local economy initiatives, direct democracy, and decentralization of services, such as social networking, sharing economy, and discursive and participatory platforms. Today, these tend to be highly centralized in their architecture.

There is a fundamental contradiction between how modern services harness the work and resources of millions of users, and how they are technically implemented. The promise of the sharing economy is to coordinate people who want to provide resources with people who want to use them: spare rooms in the case of Airbnb; rides in the case of Uber; spare couches in the case of CouchSurfing; and social interactions in the case of Facebook.

These services appear to be provided in a peer-to-peer, disintermediated fashion. And, to some extent, they are less mediated at the application level thanks to their online nature. However, the technical underpinnings of those services are based on the diametrically opposite design philosophy: all users technically mediate their interactions through a highly centralized service, hosted on private data centres. The big internet service companies leverage their centralized position to extract value from users and providers of services – becoming de facto monopolies in many cases.

When it comes to privacy and security properties, those centralized services force users to trust them absolutely, and offer little in the way of transparency that would allow users to monitor the services’ practices and ground that trust. A recent example illustrating this problem was Uber, the ride sharing service, providing a different view to drivers and riders about the fare being paid for a ride – forcing drivers to compare what they receive with what riders pay to ensure they were getting a fair deal. Since Uber, like many other services, operates in a non-transparent manner, its fair functioning depends on users’ absolute trust.

The lack of user control and transparency of modern online services goes beyond monetary and economic concerns. Recently, the Guardian published the guidelines used by Facebook to moderate abusive or illegal user postings. While moderation has a necessary social function, the exact boundaries of what constitutes abuse came into question: some forms of harm to children and Holocaust denial were ignored, while material of artistic or political value was suppressed.

Even more worryingly, the opaque algorithms used to promote and propagate posts have been associated with creating a filter-bubble effect and influencing elections, while “dark” adverts, visible only to particular users, are able to flout standards of fair political advertising. It is a fact of the 21st century that a key facet of the discursive process of democracy will take place on online social platforms. However, their centralized, opaque and advertising-driven form is incompatible with their function as a tool for democracy.

Finally, the revelations of Edward Snowden relating to mass surveillance also illustrate how the technical centralization of services erodes privacy at an unprecedented scale. The NSA PRISM program coerced internet services to provide access to data on their services under FISA warrants, which do not protect the civil liberties of non-American persons. At the same time, the UPSTREAM program collected information in bulk as it flowed between data centres, making all economic, social and political activities taking place on those services transparent to US authorities. While users struggle to understand how those services operate, governments (often foreign) have total visibility. This is a complete inversion of the principles of liberal democracy, where we would usually expect citizens to have their privacy protected, while those in positions of authority and power are expected to be accountable.

The problems of accountability, transparency and privacy are social, but they are also rooted in the fundamentally centralized architecture underpinning those services. To address them, the DECODE project brings together technical, legal, and social experts from academia, alongside partners from local government and industry. Together they are tasked with developing architectures that are compatible with the social values of transparency, user and community control, and privacy.

The role of UCL Computer Science, as a partner, is to provide technical options in two key areas: (1) the scalability of secure decentralized distributed ledgers that can support millions or billions of users while providing high integrity and transparency of operations; (2) mechanisms for protecting user privacy despite the decentralized and transparent infrastructure. The latter may seem like an oxymoron: how can transparency and privacy be reconciled? However, thanks to advances in modern cryptography, it is possible to prove that operations were correctly performed on a ledger without divulging private user data – a family of techniques known as zero-knowledge proofs.
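To give a flavour of the zero-knowledge idea, here is a minimal sketch of a Schnorr proof of knowledge in Python: a prover convinces anyone that it knows a secret x behind a public value y = g^x mod p, without revealing x. The parameters below are deliberately toy-sized and insecure; real ledgers use large standardised groups or elliptic curves, and far richer proof systems.

    import hashlib
    import secrets

    # Toy group parameters (illustrative and insecure):
    # p is a safe prime, p = 2q + 1, and g generates the subgroup of order q.
    p, q, g = 1019, 509, 4

    def prove(x):
        """Prove knowledge of x such that y = g^x mod p, without revealing x."""
        y = pow(g, x, p)
        r = secrets.randbelow(q)                  # ephemeral secret nonce
        t = pow(g, r, p)                          # commitment
        c = int(hashlib.sha256(f"{g}|{y}|{t}".encode()).hexdigest(), 16) % q
        s = (r + c * x) % q                       # response: x stays hidden behind r
        return y, (t, s)

    def verify(y, proof):
        t, s = proof
        c = int(hashlib.sha256(f"{g}|{y}|{t}".encode()).hexdigest(), 16) % q
        # g^s = g^(r + c*x) = t * y^c, so the check passes only if the
        # prover really knew x.
        return pow(g, s, p) == (t * pow(y, c, p)) % p

    secret = secrets.randbelow(q)
    y, proof = prove(secret)
    print(verify(y, proof))                       # True - yet x was never sent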

I am particularly proud of the UCL team we have put together that is associated with this project, and strengthens considerably our existing expertise in distributed ledgers.

I will be leading and coordinating the work. I have a long-standing interest, and track record, in privacy enhancing technologies and peer-to-peer computing, as well as scalable distributed ledgers – such as the RSCoin currency proposal. Shehar Bano, an expert on systems and networking, has joined us as a post-doctoral researcher after completing her thesis at Cambridge. Alberto Sonnino will be doing his thesis on distributed ledgers and privacy, as well as hardware and IoT applications related to ledgers, after completing his MSc in Information Security at UCL last year. Mustafa Al-Bassam is also associated with the project and works on high-integrity and scalable ledger technologies, after completing his degree at King’s College London – he is funded by the Turing Institute to work on such technologies. They join our wider team of UCL CS faculty with research interests in distributed ledgers, including Sarah Meiklejohn, Nicolas Courtois and Tomaso Aste and their respective teams.

 

This post also appears on the DECODE project blog.

Top ten obstacles along distributed ledgers’ path to adoption

In January 2009, Bitcoin was released into the world by its pseudonymous founder, Satoshi Nakamoto. In the ensuing years, this cryptocurrency and its underlying technology, called the blockchain, have gone on a rollercoaster ride that few could have predicted at the time of its deployment. It’s been praised by governments around the world, and people have predicted that “the blockchain” will one day be like “the Internet.” It’s been banned by governments around the world, and people have declared it “adrift” and “dead.”

After years in which discussions focused entirely on Bitcoin, people began to realize the more abstract potential of the blockchain, and “next-generation” platforms such as Ethereum, Steem, and Zcash were launched. More established companies also realized the value in the more abstract properties of the blockchain — resilience, integrity, etc. — and repurposed it for their particular industries to create an even wider class of technologies called distributed ledgers, and to form industrial consortia such as R3 and Hyperledger. These more general distributed ledgers can look, to varying degrees, quite unlike blockchains, and have a somewhat clearer (or at least different) path to adoption given their association with established partners in industry.

Amidst many unknowns, what is increasingly clear is that, even if they might not end up quite like “the Internet,” distributed ledgers — in one form or another — are here to stay. Nevertheless, a long path remains from where we are now to widespread adoption and there are many important decisions to be made that will affect the security and usability of any final product. In what follows, we present the top ten obstacles along this path, and highlight in some cases both the problem and what we as a community can do (and have been doing) to address them. By necessity, many interesting aspects of distributed ledgers, both in terms of problems and solutions, have been omitted, and the focus is largely technical in nature.

10. Usability: why use distributed ledgers?

The problem, in short. What do end users actually want from distributed ledgers, if anything? In other words, distributed ledgers are being discussed as the solution to problems in many industries, but what is it that the full public verifiability (or accountability, immutability, etc.) of distributed ledgers really maps to in terms of what end users want?

9. Governance: who makes the rules?

The problem, in short. The beauty of distributed ledgers is that no one entity gets to control the decisions made by the network; in Bitcoin, e.g., coins are generated or transferred from one party to another only if a majority of the peers in the network agree on the validity of this action. While this process becomes threatened if any one peer becomes too powerful, there is a larger question looming over the operation of these decentralized networks: who gets to decide which actions are valid in the first place? The truth is that all these networks operate according to a defined set of rules, and that “who makes the rules matters at least as much as who enforces them.”

In this process of making the rules, even the most decentralized networks turn out to be heavily centralized, as recent issues in cryptocurrency governance demonstrate. These increasingly common collapses threaten to harm the value of these cryptocurrencies, and reveal the issues associated with ad-hoc forms of governance. Thus, the problem is not just that we don’t know how to govern these technologies, but that — somewhat ironically — we need more transparency around how these structures operate and who is responsible for which aspects of governance.

8. Meaningful comparisons: which is better?

The problem. Bitcoin was the first cryptocurrency to be based on the architecture we now refer to as the blockchain, but it certainly isn’t the last; there are now thousands of alternative cryptocurrencies out there, each with its own unique selling point. Ethereum offers a more expressive scripting language and maintains state, Litecoin allows for faster block creation than Bitcoin, and each new ICO (Initial Coin Offering) promises a shiny feature of its own. Looking beyond blockchains, there are numerous proposals for cryptocurrencies based on consensus protocols other than proof-of-work and proposals in non-currency-related settings, such as Certificate Transparency, R3 Corda, and Hyperledger Fabric, that still fit under the broad umbrella of distributed ledgers.


Caveat emptor: Privacy could turn UK’s genomic dream into a nightmare

Raise your hand if, over the past couple of years, you have not heard of whole genome sequencing (usually abbreviated as WGS), or at least read a sensational headline or two about how fast its costs are dropping. In a nutshell, WGS is used to determine an organism’s complete DNA sequence. But it is actually not the only way to analyze our DNA — in fact, genetic testing has been used in clinical settings for decades, e.g., to diagnose patients with known genetic conditions. Seven-time Wimbledon champion Pete Sampras is a carrier of beta-thalassemia – a condition that affects the formation of beta-globin chains, ultimately leading to red blood cells not being formed correctly. Testing for thalassemia, usually triggered by family history or a blood test showing low mean corpuscular volume, is done with a number of simple in-vitro techniques.

The availability of affordable whole genome sequencing not only prompts new hopes toward the discovery and diagnosis of rare/unknown genetic conditions, but also enables researchers to better understand the relationship between the genome and predisposition to diseases, response to treatment, etc. Overall, progress makes it increasingly feasible to envision a not-so-distant future where individuals will undergo sequencing once, making their digitized genome easily available for doctors, clinicians, and third-parties. This would also allow us to use computational algorithms to analyze the genome as a whole, as opposed to expensive, slower, targeted in-vitro tests.

Along these lines is last week’s announcement by Prof. Dame Sally Davies, the UK’s Chief Medical Officer, calling on the NHS to deliver her “genomic dream” within five years, with whole genome sequencing becoming “as standard as blood tests and biopsies.” As detailed in her annual report, a large number of patients in the UK already undergo genetic testing at least once in their life, and for a wide range of reasons, including the aforementioned thalassemia diagnosis, screening for cancer predisposition triggered by high family incidence, or determining the best course of action in cancer treatment. So wouldn’t it make sense to sequence the genome once and keep the data available for life? My answer is yes, but with a number of bold and double-underlined caveats.

The first one relates to the security concerns prompted by the need to store data of extreme sensitivity, like genomic data. The genome obviously contains information about ethnic heritage and predisposition to diseases/conditions, possibly including mental disorders. Data breaches of sensitive information, including health and medical data, sadly happen on a daily basis. But certain security threats are actually specific to genomic data and much more worrisome. For instance, due to its hereditary nature, access to a genome essentially implies access to that of close relatives as well, including offspring, so one’s decision to publish/donate their genome is also being made for their siblings, kids/grandkids, etc. Moreover, sensitivity does not degrade over time, but persists long after a patient’s death. In fact, it might even increase, as new aspects of the genome are studied and discovered. As a consequence, Prof. Dame Davies’ dream could easily turn into a nightmare without adequate investment in sound security measures, involving both technical tools (such as upgrading obsolete hardware) and education, awareness, and practices that do not simply shift the burden onto clinicians and practitioners, but incorporate security by design rather than as an afterthought.

Another concern relates to allowing researchers to use the genomic data collected by the NHS, along with medical history, for research purposes – e.g., to discover genetic mutations that are responsible for certain traits or diseases. This requires building a meaningful trust relationship between the NHS/Government and patients, which cannot happen without healing the wounds from recent incidents like the care.data debacle or Google DeepMind’s use of personal NHS records. Instead, the annual report seems to include security/anonymity promises we cannot realistically keep, while, worse yet, promoting a rhetoric of the greater good trumping privacy concerns, and seemingly pushing a choice between donating data and access to the best care. It is misleading to present terms like “de-identification” of genomic data as an effective protection tool, when proper anonymization is inherently impossible due to the genome’s peculiar combination of unique and hereditary features, as demonstrated by a wide array of scientific results. Rather, we should make it clear that data can never be fully anonymized, or protected with 100% guarantees.

Overall, I believe that patients should not be automatically enrolled in sequencing programs. Even if they are given an option to later withdraw, once the data is out there it is impossible to delete all copies of it. Rather, patients should voluntarily decide to join through an effective informed consent mechanism. This proves to be challenging against a background in which the information that can be extracted or inferred from genomes may rapidly change: what if in the future a new mutation responsible for early-onset Alzheimer’s is discovered? What if the NHS is privatized? Encouraging results with respect to education and informed consent, however, do exist. For instance, the Personal Genome Project is a good example of effective strategies to help volunteers understand the risks, and could be used to inform future NHS-run sequencing programs.

 

An edited version of this article was originally published on the BMJ.

Preventing phishing won’t stop ransomware spreading

Ransomware is in the news again, with Reckitt Benckiser reporting that disruption caused by the NotPetya ransomware could have cost them up to £100 million. In response to this news, just as after every previous ransomware incident, the security industry started giving out advice – almost universally emphasising the importance of not opening phishing emails.

The problem is that this advice won’t work. Putting aside the fact that such advice is often so vague as to be impossible to put into action, the cause of recent ransomware outbreaks is not people opening phishing emails:

  • WannaCry, which notably caused severe disruption to the NHS, spread by automated scanning of computers vulnerable to an NSA-developed exploit. Although the starting point was initially assumed to be a phishing email, this was later debunked – only network scanning was used.
  • The Mole Ransomware attack that hit many organisations, including UCL, was initially thought to be spread by employees clicking on links in phishing emails. Subsequent analysis found this was incorrect and most likely the malware spread through malicious advertisements on legitimate websites.
  • NotPetya was initially thought to have been spread through Russian or Ukrainian phishing emails (explaining why that part of the world was so badly affected). It turned out not to have involved phishing at all; instead, the outbreak started through a tampered software update to the MEDoc tax accounting software mandated by the Ukrainian government. Once inside an organisation, NotPetya then spread using the same exploit as WannaCry or by compromising administrative credentials.

Here are three major incidents, all making international news, where the standard advice to “be vigilant” when opening emails or clicking links would have been useless. Is it any surprise that security advice gets ignored?

Not only is common anti-phishing advice unhelpful, but it shifts blame onto individuals (who are not in a position to prevent or mitigate most attacks) and away from the IT industry and staff (who are). It also misleads management into thinking that they can “blame-and-train” their employees rather than investing in well-engineered preventative security mechanisms and IT systems that can recover from compromise.

And there are things that can be done which have been shown to be effective, not just against the current outbreaks but against many in the past and likely in the future. WannaCry would have been prevented by applying software updates, but the NotPetya outbreak was caused by a software update. The industry needs to act promptly to ensure that software updates are safe and reliable before customers become even more wary about installing them.

The spread of WannaCry and NotPetya within companies could have been prevented or slowed through better operational practices such as segmenting networks and limiting the use of administrative privilege. We’ve known this approach to be effective, but better tools and practices are needed to avoid enhanced security mechanisms being a drag on an organisation’s productivity.

Mole could have been prevented by ad-blocking browser extensions. The advertising industry is in open war against ad-blocking because it harms their income stream, but while they keep on spreading malware through their networks I have limited sympathy.

Well-maintained and protected backups are essential to allow recovery, whether from ransomware, purely destructive attacks, or hardware failure. The security techniques above are effective, but these measures will not prevent every attack, so mechanisms are needed to efficiently deal with the aftermath.

Most importantly, we need to move away from security as a set of traditions passed from generation to generation with little or no reason to believe they are effective (so-called “best practice”), towards well-engineered systems following rigorous, evidence-based guidance on state-of-the-art cybersecurity principles, standards and practices.

EPFL blockchain summer school

This year EPFL hosted a Blockchain Summer School from the 21st to the 24th of June. UCL was well represented with Sarah Meiklejohn presenting two talks whilst Sarah Azouvi, Patrick McCorry, Mustafa Al-Bassam and Alexander Hicks also attended. This blog post is a joint effort from the four of us, aimed at highlighting the talks presented last week.

Patrick, Sarah, Sarah, Mustafa, Rebekah (UCL alumna) and Alex. Credit: Emin Gün Sirer

The Summer School featured talks on several aspects of blockchain technology, ranging from classical distributed computing to the security of smart contracts in Ethereum and proving the security of proof of work/stake. Here, we will provide a small summary of each of the talks. Slides can be found by clicking on each talk on the school’s program page.

TLS-N: Non-repudiation over TLS Enabling Ubiquitous Content Signing for Disintermediation by Arthur Gervais: Gervais’ talk highlights that a slight modification to TLS can allow a smart contract to verify the authenticity of data received from a website. Essentially, at the end of the TLS session the server signs evidence of the TLS session if requested by the client. This evidence is verified and stored by the smart contract. It is also worth mentioning that the protocol relies on redactable signatures, which ensure private data isn’t revealed.
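To make the evidence flow concrete, here is a toy sketch in Python of the core idea only: the server signs a digest of the session transcript, and a verifier (standing in for the smart contract) checks the signature. This is an illustrative analogue, not the actual TLS-N protocol (which also supports redaction), and it assumes the third-party cryptography package; the transcript content is made up.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    server_key = Ed25519PrivateKey.generate()     # the website's signing key
    server_pub = server_key.public_key()          # known to the verifier

    # At the end of the session, the server signs a digest of the transcript.
    transcript = b'GET /price HTTP/1.1 ... 200 OK {"GBP": 123.45}'
    signature = server_key.sign(hashlib.sha256(transcript).digest())

    # The verifier (e.g. a smart contract) re-hashes the claimed transcript
    # and checks the signature; verify() raises InvalidSignature on failure.
    try:
        server_pub.verify(signature, hashlib.sha256(transcript).digest())
        print("evidence accepted")
    except InvalidSignature:
        print("evidence rejected")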

Town Crier: An Authenticated Data Feed for Smart Contracts – Ari Juels: Juels’ talk highlights that trusted execution environments can be leveraged to build authenticated data feeds. The trusted hardware is responsible for setting up an HTTPS session and fetching data from a website, before sending that data on to the smart contract. Town Crier is implemented using Intel SGX and is currently released for testing.

It is also worth mentioning that Juels provided a beautifully simple definition of a smart contract:

“A smart contract is a trusted third party with public state.”

This is one of the reasons why cryptography and smart contracts are a great combination. The contract can ensure the cryptography is faithfully executed, whereas the cryptography can provide integrity and confidentiality for data used by the contract.


Can we make people value IT security?

Angela Sasse was invited to give the sixth annual Wheeler Lecture at the University of Cambridge Computer Laboratory. The video of her talk is below, and the slides are also available. A summary of the talk appears on the blog of the Research Institute in Science of Cyber Security (RISCS).

In many organisations today, IT security is a battleground: to manage the risks the organisation faces, security specialists devise policies and deploy security mechanisms that they expect staff and customers to comply with. But most of the time, staff and customers don’t comply, and attempts to change that by “raising awareness” and “educating” them generally fail. The talk will use the examples of security warnings, access control, and sandboxing to explain the different perspectives and values that security specialists and ‘the rest of us’ apply to security. In conclusion, I will argue that a value-centred design approach is the only way to develop security solutions people want to use.

Find Security Champions in Blends of Organisational Culture

I was at the EuroUSEC ’17 workshop in Paris at the end of April. Our own Angela Sasse was also there to deliver the keynote talk, and Ruba Abu-Salma presented our paper “The Security Blanket of the Chat World: An Analytic Evaluation and a User Study of Telegram” (which was based on research by undergraduate students studying UCL’s COMP3096 “Research Group Project” module). I presented secondary analysis, conducted with Ingolf Becker and Angela Sasse, of a survey deployed at a large partner organisation. This analysis builds on research we presented at the Symposium on Usable Privacy and Security (SOUPS) in 2016. Based on survey responses and voluntary free-text comments, we saw potential for employees to inform policy from the ‘ground up’, in contradiction to the current trend for identifying security champions as local representatives of pre-determined policy.

Top-down security policies

Organisational policies are intended to promote a unified approach to security, one that all the organisation’s employees are expected to follow. If security procedures and mechanisms are unusable, policies risk being seen as impossible to follow, or may be sidelined if they lack clear relevance to business goals. This can result in deliberate or unwitting non-compliance, and workarounds to prescribed procedures.

Organisations may promote security champions, as local representatives to promote policy in their part of the organisation. However, these security champions can be effective only if policy is workable. Encouraging ‘top down’ policy compliance assumes that policy is correct, complete, and appropriate. It also assumes that policy applies to everyone equally and that employees have no role to play in shaping effective policy. Our analysis explores the potential for employees to inform effective policies, in particular whether it was possible to (i) identify local pockets of security expertise, and (ii) target engagement with employees that involves them in the creation of workable security solutions.

Identifying security champions ‘from the ground up’

Level | Attitude                         | Approach
1     | Uninfluenced                     | Security behaviour is driven by personal knowledge.
2     | Technically Controlled           | Technical controls enforce compliance with policy.
3     | Ad-hoc Knowledge and Application | Shallow understanding of policy; knowledge absorbed from surrounding work environment.
4     | Policy Compliant                 | Comprehensive knowledge and understanding of policy; willing policy compliance; role model for organisation’s security culture.
5     | Active Approach to Security      | Actively promote and advance security culture; intent of policy carried into work activities; leverage well-understood values that support both security and business.

Employee security – Attitude-Levels. We studied an organisation with IT systems, so there were no participants at Level 1.

A scenario-based survey was deployed in the partner company. Scenarios were based upon in-depth interviews with employees that explored security behaviours in the workplace. Each scenario involved a dilemma, where fixed options described different responses and included an element of non-compliance or an implicit cost. Participant choices indicate their Attitude Level (above) and Behaviour Type (below), which we recorded across groups of employees to characterise the security culture of the organisation as a whole and in four specific divisions. Both interviews and surveys represent a cross-section of divisions, locations, and age groups. We collected 608 survey responses; crucially, the survey allowed participants to comment on the scenarios and the available options – we also looked at the 267 additional free-text comments that were provided.

Behaviour-Type | Description
Individualists | Rely on self for solutions
Egalitarians   | Rely on social or group solutions
Hierarchists   | Rely on existing systems or technologies
Fatalists      | Take a ‘naive’ approach, that their actions are not significant in creating outcomes

Behaviour-Types


Observing the WannaCry fallout: confusing advice and playing the blame game

As researchers who strive to develop effective measures that help individuals and organisations to stay secure, we have observed the public communications that followed the WannaCry ransomware attack of May 2017 with increasing concern. As in previous incidents, many descriptions of the attack are inaccurate – something colleagues have pointed out elsewhere. Our concern here is the advice being disseminated, and the fact that various stakeholders seem to be more concerned with blaming each other than with working together to prevent further attacks affecting organisations and individuals.

Countries initially affected in WannaCry ransomware attack (source Wikipedia, User:Roke)

Let’s start with the advice that is being handed out. Much of it is unhelpful at best, and downright wrong at worst – a repeat of what happened after Heartbleed, when people were advised to change their passwords before the affected organisations had patched their SSL code. Here is a sample of real advice sent out to staff in a major organisation post-WannaCry:

“We urge you to be vigilant and not to open emails that are unexpected, unusual or suspicious in any way. If you experience any unusual computer behaviour, especially any warning messages, please contact your IT support immediately and do not use your computer further until advised to do so.”

Useful advice has to be correct and actionable. Users, who have to cope with dozens, maybe hundreds, of unexpected emails every day – most containing links and many accompanied by attachments – cannot take ten minutes to ponder each email before deciding whether to respond. Such instructions also implicitly and unfairly suggest that users’ ordinary behaviour plays a major role in causing major incidents like this one. RISCS advocates enlisting users as part of the frontline defence. Well-targeted, automated blocking of malicious emails lessens the burden on individual users, and builds resilience for the organisation in general.

In an example of how to confuse users, The Register reports that City of London Police sent out its “advice” via email in an attachment entitled “ransomware.pdf”. So users are simultaneously exhorted to be “vigilant” and not to open unexpected emails, yet required to open an email attachment in order to get that advice. The confusion resulting from contradictory advice is worse than the direct consequences of the attack: it enables future attacks. Why play Keystone Cyber Cops when the UK’s National Technical Authority for such matters, the National Cyber Security Centre (NCSC), offers authoritative and well-presented advice on its website?

Our other concern is the unedifying squabbling between spokespeople for governments and suppliers, blaming each other for running unsupported software, not paying for support, charging to support unsupported software, and so on, with security experts weighing in on all sides. To a general public already alarmed by media headlines, finger-pointing creates little confidence that either party is competent or motivated to keep secure the technology on which all our lives now depend. When the supposed “good guys” expend their energy fighting each other, instead of working together to defeat the attackers, it’s hard to avoid the conclusion that we are most definitely doomed. As Columbia University professor Steve Bellovin writes, the question of who should pay to support old software requires broader collaborative thought; in avoiding that debate we are choosing, as a society, to pay for such security failures.

We would refer those looking for specific advice on dealing with ransomware to the NCSC guidance, which is offered in separate parts for SMEs and home users and enterprise administrators.

Much of NCSC’s advice is made up of things we all know: we should back up our data, patch our systems, and run anti-virus software. Part of RISCS’ remit is to understand why users often don’t follow this advice. Ensuring backups remain uninfected is, unfortunately, trickier than it should be. Ransomware will infect – that is, encrypt – not only the machine it’s installed on but any permanently-connected physical or network drive. This problem ought to be solved by cloud storage, but it can be difficult to find out whether cloud backups will be affected by ransomware, and technical support documentation often simply refers individuals to “your IT support”, even though vendors know few individuals have any. Dropbox is unusually helpful, and provides advice on how to recover from a ransomware attack and how far it can help. Users should be encouraged to read such advice in advance and factor it into backup plans.

There are many reasons why people do not update their software. They may, for example, have had bad experiences in the past that lead them to worry that security updates will fail or leave their system damaged, or incorporate unwanted changes in functionality. Software vendors can help here by rigorously testing updates and resisting the temptation to bundle in new features. IT support staff can help by doing their own tests that allow them to reassure their users that they will help resolve any resulting problems in a timely manner.

In some cases, there are no updates to install. The WannaCry ransomware attack highlighted the continuing use of desktop Windows XP, which Microsoft stopped supporting with security updates in 2014. A few organisations still pay for special support contracts, and Microsoft made an exception for WannaCry by releasing a security patch more widely. Organisations that still have XP-based systems should now investigate to understand why equipment using an unsafe, outdated operating system is still in use. Ideally, the software should be replaced with a more modern system; if that’s not possible the machine should be isolated from network connections. No amount of reminding users to patch their systems or telling them to “be vigilant” will be effective in such cases.

 

This article also appears on the Research Institute in Science of Cyber Security (RISCS) blog.

PayBreak able to defeat WannaCry/WannaCryptor ransomware

Recently I worked on some research with colleagues at Boston University (Manuel Egele, William Koch) and University College London (Gianluca Stringhini) into defeating ransomware. The fruit of our labor, PayBreak, published this year at ACM ASIACCS, is a novel proactive system against ransomware. It happens to work against the new global ransomware threat, WannaCry. WannaCry has infected more than 230,000 computers in 150 countries, demanding ransom payments in exchange for access to precious files. The attack has been cited as unprecedented, and the largest to date. Luckily, our research works against it.

PayBreak works by storing all the cryptographic material used during a ransomware attack. Modern ransomware uses what’s called a “hybrid cryptosystem”, meaning each ransomed file is encrypted using a different key, and each of those keys is then encrypted under a public key whose private counterpart is held only by the ransomware authors. When ransomware attacks, PayBreak records the cryptographic keys used to encrypt each file, and securely stores them. When recovery is necessary, the victim retrieves the ransom keys, and iteratively decrypts each file.
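As a minimal sketch of that hybrid scheme, and of where escrowing the session keys defeats it, consider the illustrative Python fragment below. It assumes the third-party cryptography package; names such as key_vault and ransom_encrypt are invented for illustration, and PayBreak itself captures keys by hooking functions at the Windows API level rather than like this.

    import os
    from cryptography.hazmat.primitives import hashes, padding as sym_padding
    from cryptography.hazmat.primitives.asymmetric import rsa, padding as asym_padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    attacker_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    attacker_pub = attacker_priv.public_key()  # only this ships with the malware

    key_vault = []                             # PayBreak's securely stored escrow

    def ransom_encrypt(plaintext):
        session_key = os.urandom(16)           # fresh AES-128 key per file
        key_vault.append(session_key)          # <- the escrow happens here
        iv = os.urandom(16)
        padder = sym_padding.PKCS7(128).padder()
        padded = padder.update(plaintext) + padder.finalize()
        enc = Cipher(algorithms.AES(session_key), modes.CBC(iv)).encryptor()
        ciphertext = enc.update(padded) + enc.finalize()
        wrapped_key = attacker_pub.encrypt(    # useless once the key is escrowed
            session_key,
            asym_padding.OAEP(mgf=asym_padding.MGF1(algorithm=hashes.SHA256()),
                              algorithm=hashes.SHA256(), label=None))
        return iv, ciphertext, wrapped_key

Because the plaintext session key is captured at the moment it is generated, wrapping it for the attackers no longer gives them any leverage.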

Defeating WannaCry Ransomware

At this point, I think I’ve reverse engineered and researched something like 30 ransomware families, from over 1000 samples. WannaCry isn’t really much different from any other ransomware family, including other infamous families like Locky, CryptoWall, CryptoLocker, and TeslaLocker.

They all pretty much work the same way, including WannaCry. Actually, this comic sums up the ransom process the best I’ve seen. Every successful family today encrypts each file for ransom with a new unique “session” key, and encrypts each session key with a “private” ransom key. Those session keys are generated on the host machine. This is where PayBreak shims the generation and usage of those keys, and saves them. That means the encryption of those session keys under the ransomware’s ransom key is pointless, and defeated.

The PayBreak system doesn’t rely on any specific algorithm or cryptographic library being used by the ransomware. Actually, WannaCry implemented, or at least statically compiled, its own AES-128-CBC function. PayBreak can be configured to hook arbitrary functions, including that custom AES function, and record the parameters, such as the key, passed to it. However, a simpler approach in this case was to hook the Windows secure pseudorandom number generator function, CryptGenRandom, which the ransomware (and most others) uses to create a new session key per file, and to save the output of the function calls.

Recovering files is then simply a matter of testing each recorded session key against the encrypted files until one decrypts successfully. Decrypting my file system of ~1000 files took 94 minutes.
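Continuing the illustrative fragment above, a sketch of that trial-decryption loop might look as follows. A wrong key is detected here by a failed PKCS7 padding check, with a known file header (e.g. JPEG’s 0xFFD8) as an extra sanity check; real file formats and PayBreak’s actual recovery tool differ in the details.

    def try_recover(iv, ciphertext, key_vault, magic=b"\xff\xd8"):
        """Try every escrowed session key until one yields a valid plaintext."""
        for key in key_vault:
            dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
            padded = dec.update(ciphertext) + dec.finalize()
            unpadder = sym_padding.PKCS7(128).unpadder()
            try:
                plaintext = unpadder.update(padded) + unpadder.finalize()
            except ValueError:
                continue                       # wrong key: padding check failed
            if plaintext.startswith(magic):    # sanity check on the file type
                return plaintext
        return None                            # no recorded key matched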

Encrypted: Desert.jpg.WNCRY
Key used by WannaCry: cc24d9c8388fa566456ccec745e009c8
Decrypted: Desert.jpg

Thanks @jeffreycrowell for sharing a sample with me.
The full paper can be found here: https://eugenekolo.com/static/paybreak.pdf
SHA256 Hash of Sample: 24d004a104d4d54034dbcffc2a4b19a11f39008a575aa614ea04703480b1022c
WannaCry Custom AES: https://gist.github.com/eugenekolo/fe229be2a4230cf8322bf5537e291812

 

The original post appeared on Eugene Kolodenker’s blog.

The politics of the NHS WannaCrypt ransomware outbreak

You know you live in 2017 when the top headline of national newspapers relates to a ransomware attack on the National Health Service, the UK Prime Minister comments on the matter, and the security researchers dealing with the outbreak are presented as heroic figures. As ever, The Register has the most detailed and sophisticated technical article on the matter – but also, strangely, the most informative in terms of public policy. As if, somehow, in our days technical sophistication is a prerequisite for sophisticated political comment on those matters. Other news outlets present a caricature: the bad malware authors, the good security researchers and vendors working around the clock, the valiant government defenders, and a united humanity trying to beat the virus. I want to break that narrative open in this article and discuss the actual political and social lessons we should be learning – in part to avoid similar disasters in the future.

First off, I am always surprised when such massive systemic outbreaks of malware are blamed squarely on the author(s) of the malware itself, and the blame game ends there. Without doubt, the malware author has a great share of responsibility. I personally think it is immoral to deploy ransomware in the wild, deny people access to their data, and seek to benefit from this. It is also a crime in the UK and elsewhere.

However, it is strange that a single author, or a small group of authors, without any major resources, can have such a deep and widespread effect on major technological infrastructures. The absurdity becomes clear if we transpose the situation into the world of traditional engineering. Imagine all skyscrapers in major cities had to be evacuated because a couple of teenagers with rocks were trying to blackmail business owners into paying up to protect their precious glass windows. The fragility of software and IT systems seems to have no parallel in any other large-scale engineering infrastructure — and this is not inherent, but the result of very specific micro-political, geo-political and economic decisions.

Let’s take the WannaCrypt outbreak and look at the political and other social decisions that led to the disaster — besides the agency of the malware authors:

  • The disaster was possible in part, and foremost, because IT systems within the UK’s critical NHS infrastructure are outdated — relying, for example, on Windows XP, which is no longer maintained by Microsoft. Well, actually this is not strictly true: Microsoft does make security updates for Windows XP, but does not provide them for free — instead, Microsoft expects organizations that are locked into the OS to pay for patches to stay safe. So two key questions need to be asked …
  • Why is the NHS not upgrading to a newer version of Windows, or any other modern operating system? The answer is simple: line-of-business applications (LOB: from health record management, specialist analysis and imaging software, to payroll) may not be compatible with new operating systems. On top of that, a number of modern medical devices, such as large X-ray scanners or heart monitors, come with embedded computers running Windows XP — and only Windows XP. There is no way of upgrading them. The MEDJACK cyber-attacks were leveraging this to rampage through hospitals in 2015.
  • Is having LOB software tying you to an outdated OS, or medical devices costing millions that are not upgradeable, a fact of nature? No. It is down to a combination of terrible and naive procurement processes in health organizations that do not take into account the needs and costs of IT and security maintenance — and do not entrench them into the requirements and contracts for services, software and devices. It is also the result of the health software and devices industries being immature and unsophisticated as to the need to secure IT. They reap the benefits of IT to make money, but without expending much of it to provide quality and security. The tragic state of security of medical devices has built the illustrious career of my friend Prof. Kevin Fu, who has found systemic attacks against implanted heart devices that could kill you, noob security bugs in medical device software, and has written extensively on the poor strategy to tackle these problems. So today’s attacks were a disaster waiting to happen — and expect more unless we learn the right lessons.
  • So, given the terrible state of IT that prevents upgrading the OS, why is the NHS not paying Microsoft for security patches? Because the government, and Jeremy Hunt in particular, decided back in 2014 not to pay the money necessary to keep receiving security updates for Windows XP, despite being aware of the NHS’s absolute reliance on the outdated software. So in effect, a deliberate political decision was taken, at the highest level of government, to leave the NHS open to cyber attack. This is unlikely to be the last Windows XP security bug, so more are presumably to come.
  • Then there is the question of how the malware authors managed to get access to security bugs in Windows XP. How did they get the tools necessary to attack such a mature, and rather common, system about 15 years after Windows XP was released, and only after it went out of maintenance? It turns out that the vulnerabilities they used were in fact hoarded by the NSA as a cyber weapon — which was lost or stolen by hackers or leakers, and released into the wild! (The exploit was codenamed EternalBlue.) For many years, the computer security research community has been warning that stockpiling vulnerabilities in very common software for cyber-offense purposes is dangerous. When those cyber weapons are lost, leaked, or even just used, the technology necessary to attack proliferates, and criminals and foreign states can turn it against critical infrastructure. This blog commented on the matter as recently as 8 March 2017 in a post entitled “What the CIA hack and leak teaches us about the bankruptcy of current ‘Cyber’ doctrines”. This now feels like an unfortunately fulfilled prophecy, but the NHS attack was just the expected outcome of the US/UK, and now commonplace, doctrine around cyber — one that contributes to, and leverages, insecurity rather than security. Alternative public policy options of course exist.

So, to summarize: besides the author of the malware, a number of other social and systemic factors contribute to making such cyber attacks possible — poor security standards in health informatics industries; poor procurement processes in health organizations; lack of liability on any of the software vendors (incl. Microsoft) for providing insecure software or devices; cost-cutting from the government on NHS cyber security with no constructive alternatives to mitigate risks; and finally the UK/US cyber-offense doctrine that inevitably leads to proliferation of cyber-weapons and their use on civilian critical infrastructures.

It is those systemic factors that need to change to avoid future failures. Bad people wishing to make money from ransomware, or other badness, will always exist. There is a discipline devoted to preventing this, and it is called security engineering. It is time industry and government started taking its advice seriously.

 

This was originally posted on Conspicuous Chatter, the blog of Prof. George Danezis.