Memes are taking the alt-right’s message of hate mainstream

Unless you live under the proverbial rock, you have surely come across Internet memes a few times. Memes are essentially viral images, videos, or slogans, which may morph and evolve but eventually enter popular culture. When thinking about memes, most people associate them with ironic or irreverent images, from Bad Luck Brian to classics like Grumpy Cat.

Bad Luck Brian (left) and Grumpy Cat (right) memes.

Unfortunately, not all memes are funny. Some might even look as innocuous as a frog but are in fact well-known symbols of hate. Ever since the 2016 US Presidential Election, memes have been increasingly associated with politics.

Pepe The Frog meme used in a Brexit-related context (left), Trump as Perseus beheading Hillary as Medusa (center), meme posted by Trump Jr. on Instagram (right).

But how exactly do memes originate, spread, and gain influence on mainstream media? To answer this question, our recent paper (“On the Origins of Memes by Means of Fringe Web Communities”) presents the largest scientific study of memes to date, using a dataset of 160 million images from various social networks. We show how “fringe” Web communities like 4chan’s “politically incorrect board” (/pol/) and certain “subreddits” like The_Donald are successful in generating and pushing a wide variety of racist, hateful, and politically charged memes.

Continue reading Memes are taking the alt-right’s message of hate mainstream

Exploring the multiple dimensions of Internet liveness through holographic visualisation

Earlier this year, Shehar Bano summarised our work on scanning the Internet and categorising IP addresses based on how “alive” they appear to be when probed through different protocols. Today it was announced that the resulting paper won the Applied Networking Research Prize, awarded by the Internet Research Task Force “to recognize the best new ideas in networking and bring them to the IETF and IRTF”. This occasion seems like a good opportunity to revisit what more can be learned from the dataset we collected but could not include in the paper itself. Specifically, I will look at the multi-dimensional aspects of “liveness” and how they can be represented through holographic visualisation.

One of the most interesting uses of these experimental results was the study of correlations between responses to different combinations of network protocols. This application was only possible because the paper was the first to scan multiple protocols simultaneously, giving us confidence that the characteristics measured are properties of the hosts and the networks they are on, not artefacts of network disruption or of changes in IP address allocation over time. These correlations are important because the combination of protocols a host responds to gives us richer information about the host itself than the result of a scan of any single protocol. The results also let us infer the likely result of a scan of one protocol, given the results of scans of others.

In these experiments, 8 protocols were studied: ICMP, HTTP, SSH, HTTPS, CWMP, Telnet, DNS and NTP. The results can be represented as 2⁸ = 256 values placed in an 8-dimensional space, with each dimension indicating whether a host did or did not respond to a probe of that protocol. Each value is the number of IP addresses that responded to that particular combination of network protocols. Abstractly, this makes perfect sense, but representing an 8-d space on a 2-d screen creates problems. The paper dealt with this issue through dimensional reduction, projecting the 8-d space onto a 2-d chart showing the likelihood of a positive response to a probe, given a positive response to a probe on another single protocol. This chart is useful and easy to read but hides useful information present in the dataset.
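
As an illustration, the pairwise conditional probabilities behind that chart can be computed directly from the 256-cell table. The following Python sketch uses fabricated counts for two response patterns; the real dataset populates all 2⁸ cells:

```python
from itertools import product

# The eight protocols probed in the study.
PROTOCOLS = ["ICMP", "HTTP", "SSH", "HTTPS", "CWMP", "Telnet", "DNS", "NTP"]

# counts[v] = number of IP addresses whose response pattern is v, where v is
# an 8-tuple of 0/1 flags, one per protocol. The two non-zero cells below
# are fabricated for this example.
counts = {v: 0 for v in product((0, 1), repeat=8)}
counts[(1, 1, 0, 1, 0, 0, 0, 0)] = 500  # ICMP + HTTP + HTTPS responders
counts[(1, 0, 0, 0, 0, 0, 0, 0)] = 300  # ICMP-only responders

def conditional(responds, given):
    """P(host responds on `responds` | host responds on `given`)."""
    i, j = PROTOCOLS.index(responds), PROTOCOLS.index(given)
    given_total = sum(n for v, n in counts.items() if v[j] == 1)
    joint = sum(n for v, n in counts.items() if v[j] == 1 and v[i] == 1)
    return joint / given_total if given_total else float("nan")

print(f"P(HTTP | ICMP) = {conditional('HTTP', 'ICMP'):.3f}")  # 0.625
```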

Continue reading Exploring the multiple dimensions of Internet liveness through holographic visualisation

New threat models in the face of British intelligence and the Five Eyes’ new end-to-end encryption interception strategy

Due to more and more services and messaging applications implementing end-to-end encryption, law enforcement organisations and intelligence agencies have become increasingly concerned about the prospect of “going dark”. This is when law enforcement has the legal right to access a communication (i.e. through a warrant) but doesn’t have the technical capability to do so, because the communication may be end-to-end encrypted.

Earlier proposals from politicians have taken the approach of outright banning end-to-end encryption, which was met with fierce criticism by experts and the tech industry. The intelligence community has been slightly more nuanced, promoting protocols that allow for key escrow, where messages would also be encrypted under an additional key (e.g. one controlled by the government). Such protocols have been promoted by intelligence agencies as recently as 2016 and as early as the 1990s, but were also met with fierce criticism.
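
To make the key-escrow idea concrete, here is a minimal sketch of a message encrypted both to its recipient and to an additional escrow key. It uses PyNaCl sealed boxes; the keys and message are purely illustrative, and real escrow proposals differ in detail:

```python
from nacl.public import PrivateKey, SealedBox

recipient_key = PrivateKey.generate()  # held by the intended recipient
escrow_key = PrivateKey.generate()     # held by the escrow authority

message = b"end-to-end encrypted message"

# One ciphertext per key: anyone holding either private key can read it.
ct_for_recipient = SealedBox(recipient_key.public_key).encrypt(message)
ct_for_escrow = SealedBox(escrow_key.public_key).encrypt(message)

# The recipient decrypts as usual...
assert SealedBox(recipient_key).decrypt(ct_for_recipient) == message
# ...but so can the escrow authority, which is exactly what critics object to.
assert SealedBox(escrow_key).decrypt(ct_for_escrow) == message
```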

More recently, there has been a new set of legislation in the UK, statements from the Five Eyes, and proposals from intelligence officials that put forward a “different” way of defeating end-to-end encryption, one that is akin to key escrow but is enabled on a per-warrant basis rather than by default. Let’s look at how this may affect threat models in applications that use end-to-end encryption in the future.

Legislation

On the 31st of August 2018, the governments of the United States, the United Kingdom, Canada, Australia and New Zealand (collectively known as the “Five Eyes”) released a “Statement of Principles on Access to Evidence and Encryption”, where they outlined their position on encryption.

The statement says:

Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards.

The statement goes on to set out that technology companies have a mutual responsibility with government authorities to enable this process. At the end of the statement, it describes how technology companies should provide government authorities access to private information:

The Governments of the Five Eyes encourage information and communications technology service providers to voluntarily establish lawful access solutions to their products and services that they create or operate in our countries. Governments should not favor a particular technology; instead, providers may create customized solutions, tailored to their individual system architectures that are capable of meeting lawful access requirements. Such solutions can be a constructive approach to current challenges.

Should governments continue to encounter impediments to lawful access to information necessary to aid the protection of the citizens of our countries, we may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions.

Their position effectively boils down to requiring technology companies to provide a technical means of fulfilling court warrants that require them to hand over the private data of certain individuals, while leaving the implementation open to the technology company.

Continue reading New threat models in the face of British intelligence and the Five Eyes’ new end-to-end encryption interception strategy

UCL runs a digital security training event aimed at domestic abuse support services

In late November, UCL’s “Gender and IoT” (G-IoT) research team ran a “CryptoParty” (digital security training event) followed by a panel discussion that brought together frontline workers and support organisations, as well as policy and tech representatives, to discuss the risk of emerging technologies for domestic violence and abuse. The event coincided with the International Day for the Elimination of Violence against Women, which takes place annually on the 25th of November.

Technologies such as smartphones or platforms such as social media websites and apps are increasingly used as tools for harassment and stalking. Adding to the existing challenges and complexities are evolving “smart”, Internet-connected devices that are progressively populating public and private spaces. These systems, due to their functionalities, create further opportunities to monitor, control, and coerce individuals. The G-IoT project is studying the implications of IoT-facilitated “tech abuse” for victims and survivors of domestic violence and abuse.

CryptoParty

The evening represented an opportunity for frontline workers and support organisations to upskill in digital security. Attendees had the chance to learn about various topics including phone, communication, Internet browser, and data security. They were trained by a group of so-called “crypto angels”, volunteers who provide technical guidance and support. Many of the trainers are affiliated with the global “CryptoParty” movement, and with CryptoParty London specifically, as well as with Privacy International and the National Cyber Security Centre.

G-IoT’s lead researcher, Dr Leonie Tanczer, highlighted the importance of this event in light of the socio-technical research that the team has pursued so far: “Since January 2018, we worked closely with the statutory and voluntary support sector. We identified various shortcomings in the delivery of tech abuse provisions, including practice-oriented, policy, and technical limitations. We set up the CryptoParty to bring together different communities to holistically tackle tech abuse and increase the technical security awareness of the support sector.”

Continue reading UCL runs a digital security training event aimed at domestic abuse support services

Justice for victims of bank fraud – learning from the Post Office trial

In London, this week, a trial is being held over a dispute between the Justice for Subpostmasters Alliance (JFSA) and the Post Office, but the result will have far-reaching repercussions for anyone disputing computer evidence. The trial currently focuses on whether the legal agreements and processes set up by the Post Office are a fair basis for managing its relationship with the subpostmasters who operate branches on its behalf. Later, the court will assess whether the fact that the Post Office computer system – Horizon – indicates that a subpostmaster is in debt to the Post Office is sufficient evidence for the subpostmaster to be indeed liable to repay the debt, even when the subpostmaster claims the accounts are incorrect due to computer error or fraud.

Disputes over Horizon have led to subpostmasters being bankrupted, losing their homes, or even being jailed, but these cases also echo the broader issues at the heart of the many phantom-withdrawal disputes I see between a bank and its customers. Customers claim that money was taken from their accounts without their permission. The bank claims that its computer system shows that either the customer authorised the withdrawal or was grossly negligent, and so the customer is liable. The customer may also claim that the bank’s handling of the dispute is poor and that the contract with the bank protects the bank’s interests more than those of the customer, and so is an unfair basis for managing disputes.

There are several lessons the Post Office trial will have for the victims of phantom withdrawals, particularly for cases of push payment fraud, but in this post, I’m going to explore why these issues are being dealt with first in a trial initiated by subpostmasters and not by the (far more numerous) bank customers. In later posts, I’ll look more into the specific details that are being disclosed as a result of this trial.

Continue reading Justice for victims of bank fraud – learning from the Post Office trial

Coconut E-Petition Implementation

An interesting new multi-authority selective-disclosure credential scheme called Coconut has recently been released, which has the potential to enable applications that were not possible before. Selective-disclosure credential systems allow the issuance of a credential (having one or more attributes) to a user, who can later unlinkably reveal or “show” that credential for purposes of authentication or authorisation. The system also allows the user to “show” specific attributes of the credential, or a specific function of the attributes embedded in it; e.g. if a user is issued an identification credential with an attribute x representing their age, say x = 25, they can show that x > 21 without revealing the value of x.

High-level overview of Coconut, from the Coconut paper
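
Structurally, the flow can be sketched as follows. This toy Python sketch uses placeholder “cryptography” and illustrative names rather than Coconut’s actual API; its point is only to show the roles: several authorities each sign a blinded attribute, the user aggregates the partial credentials, and a later showing reveals only a predicate of the hidden attribute:

```python
from dataclasses import dataclass

@dataclass
class PartialCredential:
    authority_id: int
    signature: str  # placeholder for a blind signature share

def issue_partial(authority_id, blinded_attribute):
    # Each authority signs a *blinded* attribute, so no single authority
    # learns x or can issue a full credential on its own.
    return PartialCredential(authority_id, f"sig({authority_id},{blinded_attribute})")

def aggregate(parts):
    # A threshold of partial signatures combines into one credential.
    return "+".join(p.signature for p in parts)

def show_predicate(credential, x, threshold):
    # In Coconut this step is a zero-knowledge proof that the predicate
    # holds; the verifier learns the truth value, never x itself.
    return x > threshold

x = 25  # the hidden age attribute
credential = aggregate([issue_partial(i, "blind(x)") for i in range(3)])
print(show_predicate(credential, x, 21))  # True, without revealing x = 25
```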

A particular use case for this scheme is to create a privacy-preserving e-petition system. A number of anonymous electronic petition systems are currently being developed, but all lack important security properties: (i) unlinkability – the ability to break the link between users and the specific petitions they signed, and (ii) multi-authority – the absence of a single trusted third party in the system. Through multi-authority selective-disclosure credentials like Coconut, these systems can achieve unlinkability without relying on a single trusted third party. For example, if there are 100 eligible users with a valid credential and a total of 75 signatures when the petition closes, it is not possible to know which 75 of the 100 actually signed the petition.

Continue reading Coconut E-Petition Implementation

UK Faster Payment System Prompts Changes to Fraud Regulation

Banking transactions are rapidly moving online, offering convenience to customers and allowing banks to close branches and re-focus on marketing more profitable financial products. At the same time, new payment methods, like the UK’s Faster Payment System, make transactions irrevocable within hours, not days, and so let recipients make use of funds immediately.

However, these changes have also created a new opportunity for fraud schemes that trick victims into performing a transaction under false pretences. For example, a criminal might call a bank customer, tell them that their account has been compromised, and help them to transfer money to a supposedly safe account that is actually under the criminal’s control. Losses in the UK from this type of fraud were £145.4 million during the first half of 2018 but importantly for the public, such frauds fall outside of existing consumer protection rules, leaving the customer liable for sometimes life-changing amounts.

The human cost behind this epidemic has persuaded regulators to do more to protect customers and to create incentives for banks to do a better job at preventing the fraud. These measures are coming sooner than UK Finance – the trade association for UK-based banking, payments and cards businesses – would like, but during questioning by the House of Commons Treasury Committee, its Chief Executive conceded that change is coming. The debate now focuses on who will reimburse customers who have been defrauded through no fault of their own. Who picks up the bill will depend not just on how good fraud prevention measures are, but on how effectively banks can demonstrate this fact.

UK Faster Payment Creates an Opportunity for Social Engineering Attacks

One factor that contributed to the new type of fraud is that online interactions lack the usual cues that help customers tell whether a bank is genuine. Criminals use sophisticated social engineering attacks that create a sense of urgency, combined with information gathered about the customer through illicit means, to convince even diligent victims that it could only be their own bank calling. These techniques, combined with the newly irrevocable payment system, create an ideal situation for criminals.

Continue reading UK Faster Payment System Prompts Changes to Fraud Regulation

What We Disclose When We Choose Not To Disclose: Privacy Unraveling Around Explicit HIV Disclosure Fields

For many gay and bisexual men, mobile dating or “hook-up” apps are a regular and important part of their lives. Many of these apps now ask users for HIV status information to create a more open dialogue around sexual health, to reduce the spread of the virus, and to help fight HIV-related stigma. Yet, if a user wants to keep their HIV status private from other app users, this can be more challenging than one might first imagine. While most apps let users keep their status undisclosed with some form of “prefer not to say” option, our recent study, described in a paper being presented today at the ACM Conference on Computer-Supported Cooperative Work and Social Computing 2018, finds that privacy may “unravel” around users who choose this non-disclosure option, which could limit disclosure choice.

Privacy unraveling is a theory developed by Peppet, who suggests that people will self-disclose their personal information when doing so is easy, low-cost, and personally beneficial. Privacy may then unravel around those who keep their information undisclosed, as they are assumed to be “hiding” undesirable information, and are stigmatised and penalised as a consequence.

In our study, we explored the online views of Grindr users and found concerns over assumptions developing around HIV non-disclosures. For users who believe themselves to be HIV negative, the personal benefits of disclosing are high and the social costs low. In contrast, for HIV positive users, the personal benefits of disclosing are low, whilst the costs are high due to the stigma that HIV still attracts. As a result, people may assume that those not disclosing possess the low gain, high cost status, and are therefore HIV positive.

We developed a series of conceptual designs that utilise Peppet’s proposed limits to privacy unraveling. One of these designs is intended to artificially increase the cost of disclosing an HIV negative status. We suggest time and money as two resources that could be used to artificially increase disclosure cost. For example, users reporting to be HIV negative could be asked to watch an educational awareness video on HIV prior to disclosing (time), or only those users with a premium subscription could be permitted to disclose their status (financial). An alternative (or parallel) approach is to reduce the high cost of disclosing an HIV positive status by designing in mechanisms that reduce the social stigma around the condition. For example, all users could be offered the option to sign up to “living stigma-free”, which could also appear on their profile to signal their pledge to others.

Another design approach is to create uncertainty over whether users are aware of their own status. We suggest that profiles disclosing an HIV negative status for more than six months be switched automatically to undisclosed unless the user reports a recent HIV test. This could act as a testing reminder, as well as increasing uncertainty over the reason for non-disclosures. We also suggest increasing uncertainty or ambiguity around HIV status disclosure fields by clustering undisclosed fields together, which may create uncertainty about which particular field the user is concerned about disclosing. Finally, design could be used to cultivate norms around non-disclosures. For example, HIV status disclosure could be limited to HIV positive users, with non-disclosures then assumed to indicate an HIV negative status rather than an HIV positive one.
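
As a rough illustration of the first of these suggestions, the auto-reversion rule might look something like the following Python sketch; the field names, statuses, and the exact six-month cut-off are hypothetical:

```python
from datetime import datetime, timedelta

SIX_MONTHS = timedelta(days=182)  # approximate six-month window

def effective_status(profile, now):
    """Return the status to display, reverting stale negative disclosures."""
    if profile["status"] == "negative":
        # Accept either a reported test date or the original disclosure date.
        last_evidence = profile.get("last_test") or profile["disclosed_at"]
        if now - last_evidence > SIX_MONTHS:
            return "undisclosed"  # stale disclosure; doubles as a test reminder
    return profile["status"]

profile = {"status": "negative",
           "disclosed_at": datetime(2018, 1, 10),
           "last_test": None}
print(effective_status(profile, now=datetime(2018, 11, 25)))  # "undisclosed"
```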

In our paper, we discuss some of the potential benefits and pitfalls of implementing Peppet’s proposed limits in design, and suggest further work needed to better understand the impact privacy unraveling could have in online social environments like these. We explore ways our community could contribute to building systems that reduce its effect in order to promote disclosure choice around this type of sensitive information.


This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 675730.

Can Ethics Help Restore Internet Freedom and Safety?

Internet services are suffering from various maladies ranging from algorithmic bias to misinformation and online propaganda. Could computer ethics be a remedy? Mozilla’s head Mitchell Baker warns that computer science education without ethics will lead the next generation of technologists to inherit the ethical blind spots of those currently in charge. A number of leaders in the tech industry have lent their support to Mozilla’s Responsible Computer Science Challenge initiative to integrate ethics with undergraduate computer science training. There is a heightened interest in the concept of ethical by design, the idea of baking ethical principles and human values into the software development process from design to deployment.

Ethical education and awareness are important, and a number of useful resources exist. Most computer science practitioners refer to the codes of ethics and conduct provided by the field’s professional bodies, such as the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers, and in the UK the British Computer Society and the Institution of Engineering and Technology. Computer science research is predominantly guided by the principles laid out in the Menlo Report.

But aspirations and reality often diverge, and ethical codes do not directly translate to ethical practice. Or, to be precise, to the ethical practices of about five companies. The concentration of power among a small number of big companies means that their practices define the online experience of the majority of Internet users. I showed this amplified power in my study on the Web’s differential treatment of users of the Tor anonymity network.

Ethical codes alone are not enough and need to be complemented by suitable enforcement and reinforcement. So who will do the job? Currently, for the most part, companies themselves are the judge and jury of how their practices are regulated. This is not a great idea. The obvious misalignment of incentives is aptly captured in an Urdu proverb that means: “The horse and grass can never be friends”. Self-regulation by companies can result in inconsistent and potentially biased regulation patterns, and/or over-regulation to stay legally safe.

Continue reading Can Ethics Help Restore Internet Freedom and Safety?

When Convenience Creates Risk: Taking a Deeper Look at Security Code AutoFill on iOS 12 and macOS Mojave

A flaw in Apple’s Security Code AutoFill feature can affect a wide range of services, from online banking to instant messaging.

In June 2018, we reported a problem in the iOS 12 beta. In the previous post, we discussed the associated risks the problem creates for transaction authentication technology used in online banking and elsewhere. We described the underlying issue and noted that the risk would carry over to macOS Mojave. Since our initial reports, Apple has modified the Security Code AutoFill feature, but the problem is not yet solved.

In this blog post, we publish the results of our extended analysis and demonstrate that the changes made by Apple mitigated one symptom of the problem, but did not address the cause. Security Code AutoFill could leave Apple users in a vulnerable position after upgrading to iOS 12 and macOS Mojave, exposing them to risks beyond the scope of our initial reports.

We describe four example attacks that are intended to demonstrate the risks stemming from the flawed Security Code AutoFill, but intentionally omit the detail necessary to execute them against live systems. Note that supporting screenshots and videos in this article may identify companies whose services we’ve used to test our attacks. We do not imply that those companies’ systems would be affected any more or any less than those of their competitors.

Flaws in Security Code AutoFill

The Security Code AutoFill feature extracts short security codes (e.g., a one-time password or OTP) from an incoming SMS and allows the user to autofill that code into a web form or app when authenticating. This feature is meant to provide convenience, as the user no longer needs to memorise and re-enter a code in order to authenticate. However, this convenience can create risks for the user.
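
To illustrate the underlying risk, consider a simplified sketch of the kind of heuristic such a feature might apply. The regex and messages below are our own illustration, not Apple’s implementation; the point is that code extraction alone cannot tell a login code from a code that authorises a payment:

```python
import re

def extract_security_code(sms):
    """Naively pull the first 4-8 digit sequence out of an incoming SMS."""
    match = re.search(r"\b(\d{4,8})\b", sms)
    return match.group(1) if match else None

login_sms = "Your login code is 914352."
transfer_sms = ("Code 771204 authorises a transfer of 1,000 GBP "
                "to account 12-34-56 78901234.")

print(extract_security_code(login_sms))     # 914352 - what the user expects
print(extract_security_code(transfer_sms))  # 771204 - would authorise a payment
```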

Continue reading When Convenience Creates Risk: Taking a Deeper Look at Security Code AutoFill on iOS 12 and macOS Mojave