Teaching Privacy Enhancing Technologies at UCL

Last term I had the opportunity and pleasure to prepare and teach the first course on Privacy Enhancing Technologies (PETs) at University College London, as part of the MSc in Information Security.

The course covers principally, and in some detail, engineering aspects of PETs and caters for an audience of CS / engineering students who already understand the basics of information security and cryptography (although these are not hard prerequisites). Students were also provided with a working understanding of legal and compliance aspects of data protection regimes by guest lecturer Prof. Eleni Kosta (Tilburg), as well as a world-class introduction to human aspects of computing and privacy by Prof. Angela Sasse (UCL). This focus on security and cryptographic engineering sets the course apart from related courses.

The taught part of the course runs for 20 hours over 10 weeks, split into 10 topics.

UCL Code Breaking Competition

Modern security systems frequently rely on complex cryptography to fulfil their goals and so it is important for security practitioners to have a good understanding of how cryptographic systems work and how they can fail. The Cryptanalysis (COMPGA18/COMPM068) module in UCL’s MSc Information Security provides students with the foundational knowledge to analyse cryptographic systems whether as part of system development in industry or as academic research.

To give students a more realistic (and enjoyable) experience there is no written exam for this module; instead the students are evaluated based on coursework and a code breaking competition.

UCL has a strong tradition of experimental research and we have been running many student competitions and hacking events in the past. In March 2013 a team directed by Dr Courtois won the UK University Cipher Challenge 2013 award, held as part of the UK Cyber Security Challenge.

This year the competition has been about finding cryptographically significant events in a real-life financial system. The competition (open both to UCL students and those of other London universities) requires the study of random number generators, elliptic curve cryptography, hash functions, exploration of large datasets, programming and experimentation, data visualisation, graphs and statistics.

We are pleased to announce the winners of the competition:

  • Joint 1st prize: Gemma Bartlett. Grade obtained 92/100.
  • Joint 1st prize: Vasilios Mavroudis. Grade obtained 92/100.
  • 2nd prize: David Kohan Marzagão. Grade obtained 82/100.

About the winners:


  • Gemma Bartlett (left) is in her final year at UCL studying for an M.Eng. in Mathematical Computation with a focus on Information Security. Her particular interests include digital forensics. She will be starting a job in this field after graduation.
  • Vasilios Mavroudis (middle) received his B.Sc. in Applied Informatics from the University of Macedonia, Greece, in 2012. He is currently pursuing an M.Sc. in Information Security at UCL. In the past, he has worked as a security researcher at Deutsche Bank, the University of California, Santa Barbara, and the Centre for Research and Technology Hellas (CERTH). His research interests include network and systems security, malware, and applied cryptography.
  • David Kohan Marzagão (right) is currently undertaking a PhD in Computer Science under the supervision of Peter McBurney at King’s College London.  In 2014, he received his BSc in Mathematics at the University of São Paulo, Brazil. His research interests include cryptography, multi-agent systems, graph theory, and random walks.

Measuring Internet Censorship

Norwegian writer Mette Newth once wrote that: “censorship has followed the free expressions of men and women like a shadow throughout history.” Indeed, as we develop innovative and more effective tools to gather and create information, new means to control, erase and censor that information evolve alongside it. But how do we study Internet censorship?

Organisations such as Reporters Without Borders, Freedom House, or the Open Net Initiative periodically report on the extent of censorship worldwide. But as countries that are fond of censorship are not particularly keen to share details, we must resort to probing filtered networks, i.e., generating requests from within them to see what gets blocked and what gets through. We cannot hope to record all the possible censorship-triggering events, so our understanding of what is or isn’t acceptable to the censor will only ever be partial. And of course it’s risky, or even outright illegal, to probe the censor’s limits within countries with strict censorship and surveillance programs.

This is why the leak of 600GB of logs from hardware appliances used to filter internet traffic in and out of Syria was a unique opportunity to examine the workings of a real-world internet censorship apparatus.

Leaked by the hacktivist group Telecomix, the logs cover a period of nine days in 2011, drawn from seven Blue Coat SG-9000 internet proxies. The sale of equipment like this to countries such as Syria is banned by the US and EU. California-based manufacturer Blue Coat Systems denied making the sales but confirmed the authenticity of the logs – and Dubai-based firm Computerlinks FZCO later settled on a US$2.8m fine for unlawful export. In 2013, researchers at the University of Toronto’s Citizen Lab demonstrated how authoritarian regimes in Saudi Arabia, UAE, Qatar, Yemen, Egypt and Kuwait all rely on US-made equipment like those from Blue Coat or McAfee’s SmartFilter software to perform filtering.
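Analysis of a log corpus like this typically starts by classifying each entry as allowed or denied and tallying what gets filtered. A minimal sketch in Python (the log format and field names here are hypothetical for illustration, not Blue Coat's actual schema):

```python
from collections import Counter

def parse_entry(line):
    """Split a proxy log line into fields.

    Assumed (hypothetical) format: timestamp, action, category, URL,
    separated by whitespace.
    """
    ts, action, category, url = line.split()
    return {"time": ts, "action": action, "category": category, "url": url}

def tally_denials(lines):
    """Count denied requests per content category."""
    denied = Counter()
    for line in lines:
        entry = parse_entry(line)
        if entry["action"] == "DENIED":
            denied[entry["category"]] += 1
    return denied

sample = [
    "2011-08-01T10:00:01 OBSERVED news http://example.com/article",
    "2011-08-01T10:00:02 DENIED proxy-avoidance http://example.net/vpn",
    "2011-08-01T10:00:03 DENIED social-media http://example.org/feed",
]
print(tally_denials(sample))
```

Even this crude tally already surfaces which content categories a censor targets; the real analysis of course has to contend with hundreds of gigabytes and a far messier format.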


Understanding Online Dating Scams

Our research on online dating scams will be presented at the Conference on Detection of Intrusions and Malware and Vulnerability Assessment (DIMVA) that will be held in Milan in July. This work was a collaboration with colleagues working for Jiayuan, the largest online dating site in China, and is the first large-scale measurement of online dating scams, comprising a dataset of more than 500k accounts used by scammers on Jiayuan across 2012 and 2013.

Having spent a considerable amount of time researching ways to mitigate malicious activity on online services, I found that online dating scams piqued my interest for a number of reasons. First, online dating sites operate following completely different dynamics compared to traditional online social networks. On a regular social network (say Facebook or LinkedIn) users connect with people they know in real life, and any request to connect from an unknown person is considered unsolicited and potentially malicious. Many malicious content detection systems (including my own) leverage this observation to detect malicious accounts. Putting people who don’t know each other in contact, however, is the main purpose of online dating sites – for this reason, traditional methods to detect fake and malevolent accounts cannot be applied in this context, and the development of a new threat model is required. As a second differentiator, online dating users tend to use the site only for the first contact, and move to other media (text messages, instant messaging) after that. Although that is fine for regular use, it makes it more difficult to track scammers, because the online dating site loses visibility of the messages exchanged between users after they have left the site. Third, online dating scams have a strong human component, which differentiates them heavily from traditional malicious activity on online services such as spam, phishing, or malware.

We identified three types of scams happening on Jiayuan. The first involves advertising escort services or illicit goods, and is very similar to traditional spam. The other two are far more interesting and specific to the online dating landscape. One type of scammer is what we call the swindler. In this scheme, the scammer starts a long-distance relationship with an emotionally vulnerable victim, and eventually asks her for money, for example to purchase a flight ticket to visit her. Needless to say, after the money has been transferred the scammer disappears. Another interesting type of scam that we identified is what we call dates for profit. In this scheme, attractive young ladies are hired by the owners of fancy restaurants. The scam then consists of having the ladies contact people on the dating site, take them on a date at the restaurant, have the victim pay for the meal, and never arrange a second date. This scam is particularly interesting because there is a good chance that the victim will never realize he has been scammed – in fact, he probably had a good time.

In the paper we analyze the accounts that we detected belonging to the different scam types, and extract typical information about the demographics that scammers pose as in their accounts, as well as the demographics of their victims. For example, we show that swindlers usually pose as widowed middle-aged men and target widowed women. We then analyze the modus operandi of scam accounts, showing that specific types of scam accounts have a higher chance of getting the attention of their victims and receiving replies than regular users. Finally, we show that the activity performed on the site by scammers is mostly manual, and that the use of infected computers and botnets to spread content – which is prominent on other online services – is minimal.
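Comparing contact success across account types boils down to a simple aggregation over first-contact messages. A sketch of the idea (the data below is made up for illustration and the field names are hypothetical, not the paper's actual dataset):

```python
def reply_rate_by_type(messages):
    """Compute the fraction of first-contact messages that received a
    reply, grouped by the sender's account type."""
    sent, replied = {}, {}
    for m in messages:
        t = m["sender_type"]
        sent[t] = sent.get(t, 0) + 1
        replied[t] = replied.get(t, 0) + (1 if m["got_reply"] else 0)
    return {t: replied[t] / sent[t] for t in sent}

# Toy data: each record is one first-contact message.
toy = [
    {"sender_type": "regular", "got_reply": False},
    {"sender_type": "regular", "got_reply": True},
    {"sender_type": "swindler", "got_reply": True},
    {"sender_type": "swindler", "got_reply": True},
]
print(reply_rate_by_type(toy))
```

A per-type reply rate like this is the kind of summary statistic that lets one say a given scam category attracts more replies than regular users do.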

We believe that the observations provided in this paper will shed some light on a so far understudied problem in the field of computer security, and will help researchers in developing systems that can automatically detect such scam accounts and block them before they have a chance to reach their victims.

The full paper is available on my website.

Update (2015-05-15): There is press coverage of this paper in Schneier on Security and BuzzFeed.

Teaching cybersecurity to criminologists

I recently had the pleasure of teaching my first module at UCL, an introduction to cybersecurity for students in the SECReT doctoral training centre.

The module had been taught before, but always from a fairly computer-science-heavy perspective. Given that the students had largely no background in computer science, and that my joint appointment in the Department of Security and Crime Science has given me at least some small insight into what aspects of cybersecurity criminologists might find interesting, I chose to design the lecture material largely from scratch. I tried to balance the technical components of cybersecurity that I felt everyone needed to know (which, perhaps unsurprisingly, included a fair amount of cryptography) with high-level design principles and the overarching question of how we define security. Although I say I designed the curriculum from scratch, I of course ended up borrowing heavily from others, most notably from the lecture and exam material of my former supervisor’s undergraduate cybersecurity module (thanks, Stefan!) and from George’s lecture material for Introduction to Computer Security. If anyone’s curious, the lecture material is available on my website.

As I said, the students in the Crime Science department (and in particular the ones taking this module) had little to no background in computer science.  Instead, they had a diverse set of academic backgrounds: psychology, political science, forensics, etc. One of the students’ proposed dissertation titles was “Using gold nanoparticles on metal oxide semiconducting gas sensors to increase sensitivity when detecting illicit materials, such as explosives,” so it’s an understatement to say that we were approaching cybersecurity from different directions!

With that in mind, one of the first things I did in my first lecture was to take a poll on who was familiar with certain concepts (e.g., SSH, malware, the structure of the Internet), and what people were interested in learning about (e.g., digital forensics, cryptanalysis, anonymity). I don’t know what I was expecting, but the responses really blew me away! The students overwhelmingly wanted to hear about how to secure themselves on the Internet, both in terms of personal security habits (e.g., using browser extensions) and in terms of understanding what and how things might go wrong. Almost the whole class specifically requested Tor, and a few had even used it before.

This theme of being (pleasantly!) surprised continued throughout the term.  When I taught certificates, the students asked not for more details on how they work, but if there was a body responsible for governing certificate authorities and if it was possible to sue them if they misbehave. When I taught authentication, we played a Scattergories-style game to weigh the pros and cons of various authentication mechanisms, and they came up with answers like “a con of backup security questions is that they reveal cultural trends that may then be used to reveal age, ethnicity, gender, etc.”

There’s still a month and a half left until the students take the exam, so it’s too soon to say how effective it was at teaching them cybersecurity, but for me the experience was a clear success and one that I look forward to repeating and refining in the future.

One-out-of-Many Proofs: Or How to Leak a Secret and Spend a Coin

I’m going to EUROCRYPT 2015 to present a new zero-knowledge proof that I’ve developed together with Markulf Kohlweiss from Microsoft Research. Zero-knowledge proofs enable you to demonstrate that a particular statement is true without revealing anything other than the fact that it is true. In our case the statements are one-out-of-many statements – intuitively, that out of a number of items, one of them has a special property – and we greatly reduce the size of the proofs compared to previous works in the area. Two applications where one-out-of-many proofs come in handy are ring signatures and Zerocoin.

Ring signatures can be used to sign a message anonymously as a member of a group of people, i.e., all a ring signature says is that somebody from the group signed the message but not who it was. Consider for instance a whistleblower who wants to leak that her company is dumping dangerous chemicals in the ocean, yet wants to remain anonymous due to the risk of being fired. By using a ring signature she can demonstrate that she works for the company, which makes the claim more convincing, without revealing which employee she is. Our one-out-of-many proofs can be used to construct very efficient ring signatures by giving a one-out-of-many proof that the signer holds a secret key corresponding to a public key for one of the people in the ring.

Zerocoin is a new virtual currency proposal where coins gain value once they’ve been accepted on a public bulletin board. Each coin contains a commitment to a secret random serial number that only the owner knows. To anonymously spend a coin the owner publishes the serial number and gives a one-out-of-many proof that the serial number corresponds to one of the public coins. The serial number prevents double spending of a coin; nobody will accept a transaction with a previously used serial number. The zero-knowledge property of the one-out-of-many proof provides anonymity; it is not disclosed which coin the serial number corresponds to. Zerocoin has been suggested as a privacy enhancing add-on to Bitcoin.
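The commitment inside each coin can be instantiated, for example, as a Pedersen commitment. A toy sketch with deliberately tiny parameters (the group, generators, and function names here are illustrative only – a real deployment uses cryptographically sized groups, and this shows only the commitment step, not the one-out-of-many proof itself):

```python
import secrets

# Toy group parameters (far too small for real use): p = 2q + 1, and
# commitments live in the order-q subgroup of squares mod p.
p, q = 1019, 509
g, h = 4, 9  # in practice h must be chosen so nobody knows log_g(h)

def commit(serial, r):
    """Pedersen commitment C = g^serial * h^r mod p."""
    return (pow(g, serial, p) * pow(h, r, p)) % p

# The coin owner picks a secret serial number and a blinding factor.
serial = secrets.randbelow(q)
r = secrets.randbelow(q)
coin = commit(serial, r)

# Revealing (serial, r) lets anyone check the coin opens correctly; the
# one-out-of-many proof instead demonstrates knowledge of such an opening
# without revealing WHICH published coin the serial number belongs to.
assert coin == commit(serial, r)
```

The hiding property of the commitment is what keeps the serial number secret until spend time, and the binding property is what stops an owner from opening the same coin to two different serial numbers.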

The full research paper is available on the Cryptology ePrint Archive.

MSc Information Security @UCL

As the next programme director of UCL’s MSc in Information Security, I have quickly realized that showcasing a group’s educational and teaching activities is no trivial task.

As academics, we learn over the years to make our research “accessible” to our funders, media outlets, blogs, and the like. We are asked by the REF to explain why our research outputs should be considered world-leading and outstanding in their impact. As security, privacy, and cryptography researchers, we repeatedly test our ability to talk to lawyers, bankers, entrepreneurs, and policy makers.

But how do you do good outreach when it comes to postgraduate education? Well, that’s a long-standing controversy. The Economist recently dedicated a long report to tertiary education, discussing among other things the misaligned incentives in strategic decisions involving admissions, marketing, and rankings. Personally, I am particularly interested in exploring ways one can (attempt to) explain the value and relevance of a specialist master’s programme in information security. What outlets can we rely on, and how do we effectively engage, at the same time, current undergraduate students, young engineers, experienced professionals, and aspiring researchers? How can we shed light on our vision & mission to educate and train future information security experts?

So, together with my colleagues of UCL’s Information Security Group, I started toying with the idea of organizing events — both in the digital and the analog “world” — that could provide a better understanding of both our research and teaching activities. And I realized that, while difficult at first and certainly time-consuming, this is a noble, crucial, and exciting endeavor that deserves a broad discussion.


Information Security: Trends and Challenges

Thanks to the great work of Steve Marchant, Sean Taylor, and Samantha Webb (now known as the “S3 team” :-)), on March 31st we held what I hope is the first of many MSc ISec Open Day events. We asked two of our friends in industry — Alec Muffett (Facebook Security Evangelist) and Dr Richard Gold (Lead Security Analyst at Digital Shadows and former Cisco cloud web security expert) — and two of our colleagues — Prof. Angela Sasse and Dr David Clark — to give short, provocative talks about what they believe the trends and challenges in Information Security are. In fact, we even gave the event a catchy name: Information Security: Trends and Challenges.


Banks undermine chip and PIN security because they see profits rise faster than fraud

The Chip and PIN card payment system has been mandatory in the UK since 2006, but only now is it being slowly introduced in the US. In western Europe more than 96% of card transactions in the last quarter of 2014 used chipped credit or debit cards, compared to just 0.03% in the US.

Yet at the same time, in the UK and elsewhere a new generation of Chip and PIN cards have arrived that allow contactless payments – transactions that don’t require a PIN code. Why would card issuers offer a means to circumvent the security Chip and PIN offers?

Chip and Problems

Chip and PIN is supposed to reduce two main types of fraud. Counterfeit fraud, where a fake card is manufactured based on stolen card data, cost the UK £47.8m in 2014 according to figures just released by Financial Fraud Action. The cryptographic key embedded in chip cards tackles counterfeit fraud by allowing the card to prove its identity. Extracting this key should be very difficult, while copying the details embedded in a card’s magnetic stripe from one card to another is simple.
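The chip's proof of identity boils down to a challenge-response computation: the terminal sends fresh data, and the card returns a cryptogram computed with the secret key embedded in the chip. The sketch below illustrates the principle only – EMV's actual cryptogram algorithms and message formats differ, and the HMAC construction and names here are my own illustrative choices:

```python
import hmac
import hashlib
import secrets

def card_respond(card_key, challenge, amount):
    """The card computes a cryptogram over the terminal's fresh
    challenge and the transaction data, keyed with its embedded secret."""
    msg = challenge + amount.to_bytes(8, "big")
    return hmac.new(card_key, msg, hashlib.sha256).digest()

def issuer_verify(card_key, challenge, amount, cryptogram):
    """The issuer, which shares the card's key, recomputes and compares."""
    expected = card_respond(card_key, challenge, amount)
    return hmac.compare_digest(expected, cryptogram)

key = secrets.token_bytes(16)       # embedded in the chip at manufacture
challenge = secrets.token_bytes(8)  # fresh per transaction, thwarts replay

tag = card_respond(key, challenge, 4200)
assert issuer_verify(key, challenge, 4200, tag)

# A magstripe clone holds only copied static data, not the key, so it
# cannot produce a valid cryptogram for a new challenge or amount.
assert not issuer_verify(key, challenge, 9999, tag)
```

The contrast with the magnetic stripe is exactly this: the stripe holds static data that any reader can copy, whereas the chip holds a key that is only ever used, never revealed.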

The second type of fraud is where a genuine card is used, but by the wrong person. Chip and PIN makes this more difficult by requiring users to enter a PIN code, one (hopefully) not known to the criminal who took the card. Financial Fraud Action separates this into those cards stolen before reaching their owner (at a cost of £10.1m in 2014) and after (£59.7m).

Unfortunately Chip and PIN doesn’t work as well as was hoped. My research has shown how it’s possible to trick cards into accepting the wrong PIN and produce cloned cards that terminals won’t detect as being fake. Nevertheless, the widespread introduction of Chip and PIN has succeeded in forcing criminals to change tactics – £331.5m of UK card fraud (69% of the total) in 2014 is now through telephone, internet and mail order purchases (known as “cardholder not present” fraud) that don’t involve the chip at all. That’s why there’s some surprise over the introduction of less secure contactless cards.
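A quick back-of-envelope check on the figures above: if £331.5m of "cardholder not present" fraud is 69% of the total, UK card fraud in 2014 totalled roughly £480m, of which the chip-related categories mentioned earlier make up a comparatively small share (the breakdown of the remainder is not given here):

```python
cnp = 331.5                        # cardholder-not-present fraud, in £m
total = cnp / 0.69                 # implied total UK card fraud in 2014
chip_related = 47.8 + 10.1 + 59.7  # counterfeit + stolen pre- and post-delivery

print(round(total, 1))             # roughly 480.4
print(round(chip_related, 1))      # 117.6
```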


A Digital Magna Carta?

I attended two privacy events over the past couple of weeks. The first was at the Royal Society, chaired by Prof Jon Crowcroft.

All panelists talked about why privacy is necessary in a free, democratic society, but also noted that individuals are ill equipped to achieve it, given the increasing number of technologies collecting data about us and the commercial and government interests in using that data.

During the question & answer session, one audience member asked if we needed a Digital Charter to protect rights to privacy. I agreed, but pointed out that citizens and consumers would need to express this desire more clearly, and be prepared to take collective action to stop the gradual encroachment.

The second panel – In the Digital Era, Do We Still Have Privacy? – was organised in London by Lancaster University this week as part of its 50th Anniversary celebrations, and was chaired by Sir Edmund Burton.

One of the panelists – Dr Mike Short from Telefonica O2 – stated that it does not make commercial sense for a company to use data in a way that goes against its customers’ privacy preferences.

But there are service providers that force users to allow data collection: you cannot have the service unless you agree to your data being collected (which goes against the OECD principles for informed consent). Others make the terms & conditions so long that users don’t want to read them – and even if they were prepared to read them, they would not understand them without a legal interpreter.

We have found in our research at UCL (e.g. Would You Sell Your Mother’s Data, Fairly Truthful) that consumers have a keen sense of ‘fairness’ about how their data is used – and they definitely do not think it ‘fair’ for their data to be used against their express preferences and life choices.

In the Q & A after the panel the question of what can be done to ensure fair treatment for consumers, and the idea of a Digital Charter, was raised again. The evening’s venue was a CD’s throw away from the British Library, where the Magna Carta is exhibited to celebrate its 800th anniversary. The panelists reminded us that last year, Sir Tim Berners-Lee called for a ‘Digital Magna Carta’ – I think this is the perfect time for citizens and consumers to back him up, and unite behind his idea.

Why Bentham’s Gaze?

Why is this blog called “Bentham’s Gaze”? Jeremy Bentham (1748–1832) was a philosopher, jurist and social reformer. Although he took no direct role in the creation of UCL (despite the myth), Bentham can be considered its spiritual founder, with his ideas being embodied in the institution. Notably, UCL went a long way towards fulfilling Bentham’s desire of widening access to education, being the first English university to admit students regardless of class, race or religion, and to welcome women on equal terms with men.

Bentham’s Gaze refers not just to his vision of education but also to the Panopticon – a design proposed for a prison where all inmates in the circular building are potentially under continual observation from a central inspection house. Importantly, inmates would not be able to tell whether they were actively being observed and so the hope was that good behaviour would be encouraged without the high cost of actually monitoring everyone. Although no prison was created exactly to Bentham’s design, some (e.g. Presidio Modelo in Cuba) have notable similarities and pervasive CCTV can be seen as a modern instantiation of the same principles.

Finally, the more corporeal aspect of the blog name is that UCL hosts Bentham’s Auto-Icon – a case containing his preserved skeleton with wax head, seated in a chair, and dressed in his own clothes. The construction of the Auto-Icon was specified in Bentham’s will, and since 1850 the Auto-Icon has been cared for by UCL. His head was also preserved but judged unsuitable for public display, and so is stored by UCL Museums. Many of the staff and students at UCL walk in view of Bentham while crossing the campus.

You too can now enjoy Bentham’s Gaze thanks to the UCL PanoptiCam – a webcam attached to the top of the Auto-Icon, as you can see below from my photo of it (and its photo of me). Footage from the camera is both on Twitter and YouTube, with highlights and discussion on @Panopticam.

UCL PanoptiCam

View from PanoptiCam (2015-02-19)