Scaling Tor hidden services

Tor hidden services offer several security advantages over normal websites:

  • both the client requesting the webpage and the server returning it can be anonymous;
  • websites’ domain names (.onion addresses) are linked to their public key so are hard to impersonate; and
  • there is mandatory encryption from the client to the server.

However, Tor hidden services as originally implemented did not take full advantage of parallel processing, whether from a single multi-core computer or from load-balancing over multiple computers. Therefore, once a single hidden service has hit the limit of vertical scaling (getting faster CPUs), there is no option of horizontal scaling (adding more CPUs and more computers). There are also bottlenecks in the Tor network itself, such as the 3–10 introduction points that help to negotiate the connection between the hidden service and the rendezvous point that actually carries the traffic.

For my MSc Information Security project at UCL, supervised by Steven Murdoch with the assistance of Alec Muffett and other Security Infrastructure engineers at Facebook in London, I explored possible techniques for improving the horizontal scalability of Tor hidden services. More precisely, I was looking at possible load balancing techniques to offer better performance and resiliency against hardware/network failures. The research focused on popular non-anonymous hidden services, where the anonymity of the service provider is not required; an example of this is Facebook’s .onion address.

One approach I explored was to simply run multiple hidden service instances using the same private key (and hence the same .onion address). Each hidden service instance periodically uploads its own descriptor, which describes the available introduction points, to six hidden service directories on a distributed hash table. The hidden service instance chosen by the client depends on which instance most recently uploaded its descriptor. In theory this approach allows an arbitrary number of hidden service instances, each of which periodically uploads its own descriptor, overwriting those of the others.
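A minimal sketch of this last-write-wins behaviour (a toy single-directory model with invented names, not Tor's actual code or descriptor format) might look like this:

```python
class HSDirectory:
    """Toy model of a hidden service directory: it stores only the
    most recently uploaded descriptor for each .onion address."""
    def __init__(self):
        self.descriptors = {}

    def upload(self, onion_address, descriptor):
        # A later upload simply overwrites the earlier one.
        self.descriptors[onion_address] = descriptor

    def fetch(self, onion_address):
        return self.descriptors.get(onion_address)

# Two instances share one private key and hence one .onion address.
directory = HSDirectory()
directory.upload("example.onion",
                 {"instance": "A", "intro_points": ["ipA1", "ipA2"]})
directory.upload("example.onion",
                 {"instance": "B", "intro_points": ["ipB1", "ipB2"]})

# A client fetching the descriptor now is steered to instance B,
# the most recent uploader.
print(directory.fetch("example.onion")["instance"])  # B
```

In the real network each instance uploads to six directories, so whichever instance uploaded last captures all clients that have not cached an older descriptor.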

This approach can work for popular hidden services because, with the large number of clients, some will be using the descriptor most recently uploaded, while others will have cached older versions and continue to use them. However, my experiments showed that the distribution of clients over hidden service instances set up in this way is highly non-uniform.

I therefore ran experiments on a private Tor network using the Shadow network simulator, running multiple hidden service instances and measuring the load distribution over time. The experiments were devised such that the instances uploaded their descriptors simultaneously, which resulted in different hidden service directories receiving different descriptors. As a result, clients connecting to a hidden service would be balanced more uniformly over the available instances.
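The balancing effect can be illustrated with a toy simulation (the numbers below are illustrative, not the parameters of the Shadow experiments): if the six directories end up holding descriptors from different instances, and each client queries a randomly chosen directory, the load spreads across the instances.

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the run is reproducible

# Six hidden service directories; each holds the descriptor of
# whichever instance's upload it last received (here, an even split).
directories = ["A", "B", "A", "B", "A", "B"]

# Each client queries a randomly chosen directory and connects to
# the instance named in the descriptor it finds there.
clients = [random.choice(directories) for _ in range(10000)]
load = Counter(clients)
print(load)  # roughly 5000 clients per instance
```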

Continue reading Scaling Tor hidden services

Sarah Meiklejohn – Security and Cryptography

As a child, Sarah Meiklejohn thought she might become a linguist, largely because she was so strongly interested in the work being done to decode the ancient Greek writing systems Linear A and Linear B.

“I loved all that stuff,” she says. “And then I started doing mathematics.” At that point, with the help of Simon Singh’s The Code Book, she realised the attraction was codebreaking rather than human languages themselves. Simultaneously, security and privacy were increasingly in the spotlight.

“I’m a very private person, and so privacy is near and dear to my heart,” she says. “It’s an important right that a lot of people don’t seem interested in exercising, but it’s still a right. Even if no one voted we would still agree that it was important for people to be able to vote.”

It was during her undergraduate years at Brown, which included a fifth-year Masters degree, that she made the transition from mathematics to cryptography and began studying computer science. She went on to do her PhD at the University of California at San Diego. Her appointment at UCL, which is shared between the Department of Computer Science and the Department of Crime Science, is her first job.

Probably her best-known work is A Fistful of Bitcoins: Characterizing Payments Among Men with No Names (PDF), written with Marjori Pomarole, Grant Jordan, Kirill Levchenko, Damon McCoy, Geoffrey M. Voelker, and Stefan Savage and presented at USENIX 2013, which studied the question of how much anonymity bitcoin really provides.

“The main thing I was trying to focus on in that paper is what bitcoin is used for,” she says. The work began with buying some bitcoin (in 2012, at about £3 each) and performing transactions with them over a period of months. The data collected this way gave her some “ground truth” to work from.

“We developed these clustering techniques to get down to single users and owners.” The result was that they could identify which addresses belonged to which exchanges, giving them a view of what was going on in the network. “So we could say this many bitcoins passed through this exchange per month, or how many were going to underground services like Silk Road.”
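The best known of these clustering techniques is the multi-input (or "co-spend") heuristic: addresses that appear together as inputs of a single transaction are presumed to be controlled by the same user. A minimal union-find sketch of the idea (a simplification of the paper's methods, with made-up addresses):

```python
class UnionFind:
    """Disjoint-set structure for merging addresses into clusters."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Input addresses of each transaction (made-up identifiers).
transactions = [
    ["addr1", "addr2"],  # addr1 and addr2 co-spend: same presumed owner
    ["addr2", "addr3"],  # transitively links addr3 to the same cluster
    ["addr4"],           # no co-spend evidence: stays on its own
]

uf = UnionFind()
for inputs in transactions:
    for addr in inputs[1:]:
        uf.union(inputs[0], addr)

clusters = {}
for addr in {a for tx in transactions for a in tx}:
    clusters.setdefault(uf.find(addr), set()).add(addr)
print(sorted(sorted(c) for c in clusters.values()))
# [['addr1', 'addr2', 'addr3'], ['addr4']]
```

Real analyses combine co-spend clustering with further heuristics and with ground-truth transactions to label clusters as belonging to particular exchanges or services.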

Continue reading Sarah Meiklejohn – Security and Cryptography

Category errors in (information) security: how logic can help

(Information) security can, quite defensibly, be defined as the process by which it is ensured that just the right agents have just the right access to just the right (information) resources at just the right time. Of course, one can refine this rather pithy definition somewhat, and apply tailored versions of it to one’s favourite applications and scenarios.

A convenient taxonomy for information security is determined by the concepts of confidentiality, integrity, and availability, or CIA; informally:

  • Confidentiality: the property that just the right agents have access to specified information or systems;
  • Integrity: the property that specified information or systems are as they should be;
  • Availability: the property that specified information or systems can be accessed or used when required.

Alternatives to confidentiality, integrity, and availability are sensitivity and criticality, in which sensitivity amounts to confidentiality together with some aspects of integrity and criticality amounts to availability together with some aspects of integrity.

But the key point about these categories of phenomena is that they are declarative; that is, they provide a statement of what is required. For example, that all documents marked ‘company private’ be accessible only to the company’s employees (confidentiality), or that all passengers on the aircraft be free of weapons (integrity), or that the company’s servers be up and running 99.99% of the time (availability).

It’s all very well stating, declaratively, one’s security objectives, but how are they to be achieved? Declarative concepts should not be confused with operational concepts; that is, ones that describe how something is done. For example, passwords and encryption are used to ensure that documents remain confidential, or security searches ensure that passengers do not carry weapons onto an aircraft, or RAID servers are employed to ensure adequate system availability. So, along with each declarative aim there is a collection of operational tools that can be used to achieve it.
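The distinction can be made concrete in code (a toy illustration with invented names): the declarative objective is a predicate over states of the system, while the operational mechanism is a procedure intended to make that predicate hold.

```python
# Declarative: WHAT must hold -- a confidentiality objective stated
# as a predicate over (document, reader) pairs.
def policy_holds(document, reader):
    """'Company private' documents are readable only by employees."""
    return document["marking"] != "company private" or reader["is_employee"]

# Operational: HOW it is enforced -- an access check run at read time.
def read_document(document, reader):
    if document["marking"] == "company private" and not reader["is_employee"]:
        raise PermissionError("access denied")
    return document["contents"]

doc = {"marking": "company private", "contents": "Q3 figures"}
employee = {"is_employee": True}
outsider = {"is_employee": False}

print(policy_holds(doc, employee))   # True
print(policy_holds(doc, outsider))   # False
print(read_document(doc, employee))  # Q3 figures
```

Here the operational check happens to enforce the declarative objective exactly; establishing that deployed mechanisms really do achieve the stated objectives is precisely where logical methods earn their keep.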

Continue reading Category errors in (information) security: how logic can help

An Analysis of Reshipping Mule Scams

Credit cards are a popular target for cybercriminals. Miscreants infect victim computers with malware that reports back to their command and control servers any credit card information the user enters on her computer, or compromise large retail stores, stealing their customers’ credit card information. After obtaining credit card details from their victims, cybercriminals face the problem of monetising the information. As we recently covered on this blog, cybercriminals monetise stolen credit cards by cloning them and using very clever tricks to bypass the Chip and PIN verification mechanisms. This way they are able to use the counterfeit card in a physical store, purchase expensive items such as cigarettes, and re-sell them for a profit.

Another possible way for cybercriminals to monetise stolen credit cards is by purchasing goods from online stores. To do this, they need more information than is contained on the credit card alone: as those of you familiar with online shopping will know, some merchants also require a billing address to allow the purchase (a so-called “card not present” transaction). This additional information is often available to the criminal – it might, for example, have been retrieved together with the credit card credentials as part of a data breach against an online retailer. When purchasing goods online, cybercriminals face the issue of shipping: if they shipped the stolen goods to their home address, this would make it easy for law enforcement to find and arrest them. For this reason, miscreants need intermediaries in the shipping process.

In our recent paper, which was presented at the ACM Conference on Computer and Communications Security (CCS), we analyse a criminal scheme designed to help miscreants who wish to monetise stolen credit cards as described above. A cybercriminal (called the operator) recruits unsuspecting citizens with the promise of a rewarding work-from-home job. The job involves receiving packages at home and re-shipping them to a different address, provided by the operator. By accepting the job, people unknowingly become part of a criminal operation: the packages they receive at home contain stolen goods, and the shipping destinations are often overseas, typically in Russia. These shipping agents are commonly known as reshipping mules (or drops for stuff in the underground community). The operator then rents the mules out as a service to cybercriminals wanting to ship stolen goods abroad; the cybercriminals taking advantage of such services are known as stuffers in the underground community. As the price of the service, the stuffer pays the operator a commission for each package reshipped.

In collaboration with the FBI and the United States Postal Inspection Service (USPIS), we conducted a study of such reshipping scam sites. The study draws on data from seven different reshipping sites and provides the research community with invaluable insights into how these operations are run. We observed that the vast majority of re-shipped packages end up in the Moscow, Russia area, and that the goods purchased with stolen credit cards span multiple categories, from expensive electronics such as Apple products, to designer clothes, to DSLR cameras and even weapon accessories. Given the volume of goods shipped by the reshipping mule sites we analysed, the annual revenue generated by such operations ranges between 1.8 and 7.3 million US dollars. The overall losses are much higher though: the online merchant loses an expensive item from its inventory and typically has to refund the owner of the stolen credit card. In addition, the rogue goods typically travel labelled as “second hand goods”, and so customs duties are also evaded. Once the items purchased with stolen credit cards reach their destination, they are sold on the black market by cybercriminals.

Studying the management of the mules led us to some surprising findings. When applying for the job, people are usually required to send the operator copies of their ID cards and passport. After they are hired, mules are promised payment at the end of their first month of employment. However, our data makes clear that mules are usually never paid: once the first month expires, the operator simply stops contacting them, moves on, and hires new mules. In other words, the mules become victims of the scam themselves, never seeing a penny. Moreover, because they have sent copies of their identity documents to the criminals, mules can also become victims of identity theft.

Our study is the first to shed light on these monetisation schemes linked to credit card fraud. We believe the insights in this paper can give law enforcement and researchers a better understanding of the cybercriminal ecosystem, and allow them to develop more effective mitigation techniques against these operations.

George Danezis – Smart grid privacy, peer-to-peer and social network security

“I work on technical aspects of privacy,” says George Danezis, a reader in security and privacy engineering at UCL and part of the Academic Centre of Excellence in Cyber Security Research (ACE-CSR). There are, of course, many other limitations: regulatory, policy, economic. But, he says, “Technology is the enabler for everything else – though you need everything else for it to be useful.” Danezis believes providing privacy at the technology level is particularly important as it seems clear that both regulation and the “moralising” approach (telling people the things they shouldn’t do) have failed.

https://www.youtube.com/watch?v=wAbKB0kaH6c

There are many reasons why someone gets interested in researching technical solutions to intractable problems. Sometimes the motivation is to eliminate a personal frustration; other times it’s simply a fascination with the technology itself. For Danezis, it began with other people.

“I discovered that a lot of the people around me could not use technology out of the box to do things personally or collectively.” For example, he saw NGOs defending human rights worry about sending an email or chatting online, particularly in countries hostile to their work. A second motivation had to do with timing: when he began work it wasn’t yet clear that the Internet would develop into a medium anyone could use freely to publish stories. That particular fear has abated, but other issues such as the need for anonymous communications and private data sharing are still with us.

“Without anonymity we can’t offer strong privacy,” he says.

Unlike many researchers, Danezis did not really grow up with computers. He spent his childhood in Greece and Belgium, and until he got Internet access at 16, “I had access only to the programming books I could find in an average Belgian bookshop. There wasn’t a BBC Micro in every school and it was difficult to find information. I had one teacher who taught me how to program in Logo, and no way of finding more information easily.” Then he arrived at Cambridge in 1997, and “discovered thousands of people who knew how to do crazy stuff with computers.”

Danezis’ key research question is, “What functionality can we achieve while still attaining a degree of hard privacy?” And the corollary: at what cost in complexity of engineering? “We can’t just say, let’s recreate the whole computer environment,” he says. “We need to evolve efficiently out of today’s situation.”

Continue reading George Danezis – Smart grid privacy, peer-to-peer and social network security

Just how sophisticated will card fraud techniques become?

In late 2009, my colleagues and I discovered a serious vulnerability in EMV, the most widely used standard for smart card payments, known as “Chip and PIN” in the UK. We showed that it was possible for criminals to use a stolen credit or debit card without knowing the PIN, by tricking the terminal into thinking that any PIN is correct. We gave the banking industry advance notice of our discovery in early December 2009, to give them time to fix the problem before we published our research. After this period expired (two months, in this case), we published our paper, as well as explaining our results to the public on BBC Newsnight. We demonstrated that the vulnerability was real using a proof-of-concept system built from equipment we had available (an off-the-shelf laptop and card reader, an FPGA development board, and a hand-made card emulator).
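The heart of the no-PIN attack is a man-in-the-middle placed between the genuine card and the terminal: it intercepts the PIN verification command and answers “PIN correct” itself, while passing everything else through. A schematic sketch (the 0x9000 success status word is real ISO 7816; the message formats here are simplified stand-ins, not actual APDU handling):

```python
SW_SUCCESS = b"\x90\x00"  # ISO 7816 status word: command succeeded

def genuine_card(command):
    """Stand-in for the real card; it never sees the VERIFY command."""
    return b"card-response-to:" + command

def mitm_relay(command):
    """Relay between terminal and card. The VERIFY (PIN check) command
    is intercepted and answered with success without consulting the
    card; all other commands pass through unmodified."""
    if command.startswith(b"VERIFY"):
        return SW_SUCCESS  # the terminal now believes any PIN is correct
    return genuine_card(command)

# Terminal's view: PIN verification "succeeds" whatever PIN was typed.
print(mitm_relay(b"VERIFY 0000") == SW_SUCCESS)  # True
print(mitm_relay(b"GENERATE_AC"))                # forwarded to the card
```

The card, never having been asked to verify a PIN, proceeds as if this were a signature transaction, while the terminal records that the PIN was verified.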

No-PIN vulnerability demonstration

After the programme aired, the response from the banking industry dismissed the possibility that the vulnerability would be successfully exploited by criminals. The banking trade body, the UK Cards Association, said:

“We believe that this complicated method will never present a real threat to our customers’ cards. … Neither the banking industry nor the police have any evidence of criminals having the capability to deploy such sophisticated attacks.”

Similarly, EMVCo, who develop the EMV standards, said:

“It is EMVCo’s view that when the full payment process is taken into account, suitable countermeasures to the attack described in the recent Cambridge Report are already available.”

It was therefore interesting to see that in May 2011, criminals were caught who had stolen cards in France and then exploited a variant of this vulnerability to buy over €500,000 worth of goods in Belgium (which were then re-sold). At the time, not many details were available, but it seemed that the techniques the criminals used were much more sophisticated than our proof-of-concept demonstration.

We now know more about what actually happened, as well as the banks’ response, thanks to a paper by the researchers who performed the forensic analysis that formed part of the criminal investigation of this case. It shows just how sophisticated criminals could be, given sufficient motivation, contrary to the expectations in the original banking industry response.

Continue reading Just how sophisticated will card fraud techniques become?

Gianluca Stringhini – Cyber criminal operations and developing systems to defend against them

Gianluca Stringhini’s research focuses on studying cyber criminal operations and developing systems to defend against them.

Such operations tend to follow a common pattern. First the criminal operator lures a user into visiting a Web site and tries to infect their computer with malware. Once infected, the computer is joined to a botnet, and from there it is instructed to perform malicious activities on the criminal’s behalf. Stringhini, whose UCL appointment is shared between the Department of Computer Science and the Department of Security and Crime Science, has studied all three of these stages.

https://www.youtube.com/watch?v=TY3wsqGOZ28

Stringhini, who is from Genoa, developed his interest in computer security at college: “I was doing the things that all college students are doing, hacking, and breaking into systems. I was always interested in understanding how computers work and how one could break them. I started playing in hacking competitions.”

At the beginning, these competitions were just for fun, but his efforts became more serious when he arrived in 2008 at UC Santa Barbara, home to one of the world’s best hacking teams, a perennial top finisher in Defcon’s Capture the Flag competition. It was at Santa Barbara that his interest in cyber crime developed, particularly in botnets and the complexity and skill of the operations that created them. He picked the US after Christopher Kruegel, whom he knew by email, invited him to Santa Barbara for an internship. He liked it, so he stayed and did a PhD studying the way criminals use online services such as social networks.

“Basically, the idea is that if you have an account that’s used by a cyber criminal it will be used differently than one used by a real person because they will have a different goal,” he says. “And so you can develop systems that learn about these differences and detect accounts that are misused.” Even if the attacker tries to make their behaviour closely resemble the user’s own, ultimately spreading malicious content isn’t something normal users intend to do, and the difference is detectable.
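A toy version of such a detector (the features and thresholds here are invented for illustration; the real systems are considerably more sophisticated): accounts that suddenly post many near-identical, link-heavy messages behave measurably unlike their owners.

```python
from difflib import SequenceMatcher

def looks_compromised(messages, url_share=0.8, min_similarity=0.9):
    """Flag accounts whose recent messages are mostly link-laden and
    near-duplicates of each other (invented toy heuristic)."""
    if len(messages) < 2:
        return False
    with_urls = sum("http" in m for m in messages) / len(messages)
    ratios = [SequenceMatcher(None, a, b).ratio()
              for a, b in zip(messages, messages[1:])]
    return with_urls >= url_share and min(ratios) >= min_similarity

spam = ["Check this out http://evil.example/x"] * 5
normal = ["lunch anyone?", "great match last night",
          "slides are at http://example.org/talk"]
print(looks_compromised(spam))    # True
print(looks_compromised(normal))  # False
```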

This idea and Stringhini’s resulting PhD research led to his most significant papers to date.

Continue reading Gianluca Stringhini – Cyber criminal operations and developing systems to defend against them

Mathematical Modelling in the Two Cultures

Models, mostly based on mathematics of one kind or another, are used everywhere to help organizations make decisions about their design, policies, investment, and operations. They are indispensable.

But if modelling is such a great idea, and such a great help, why do so many things go wrong? Well, there’s good modelling and less good modelling. And it’s hard for the consumers of models — in companies, the Civil Service, government agencies — to know when they’re getting the good stuff. Worse, there’s a lot of comment and advice out there which at best doesn’t help, and perhaps makes things worse.

In 1959, the celebrated scientist and novelist C. P. Snow delivered the Rede Lecture on ‘The Two Cultures’. Snow later published a book developing the ideas as ‘The Two Cultures and the Scientific Revolution’.

A famous passage from Snow’s lecture is the following (it can be found in Wikipedia):

‘A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is the scientific equivalent of: Have you read a work of Shakespeare’s?

‘I now believe that if I had asked an even simpler question — such as, What do you mean by mass, or acceleration, which is the scientific equivalent of saying, Can you read? — not more than one in ten of the highly educated would have felt that I was speaking the same language. So the great edifice of modern physics goes up, and the majority of the cleverest people in the western world have about as much insight into it as their neolithic ancestors would have had.’

Over the decades since, society has come to depend upon mathematics, and on mathematical models in particular, to a very great extent. Alas, the mathematical sophistication of the great majority of consumers of models has not really improved. Perhaps it has even deteriorated.

So, as mathematicians and modellers, we need to make things work. The starting point for good modelling is communication with the client.

Continue reading Mathematical Modelling in the Two Cultures

What are the social costs of contactless fraud?

Contactless payments are in the news again: in the UK the spending limit has been increased from £20 to £30 per transaction, and in Australia the Victoria Police has argued that contactless payments are to blame for an extra 100 cases of credit card fraud per week. These frauds involve multiple transactions being put through, each kept under the AUS $100 (about £45) limit. UK news coverage has instead focussed on the potential for cross-channel fraud: where card details are skimmed from contactless cards and then used for fraudulent online purchases. In a demonstration, Which? skimmed volunteers’ cards at a distance and then bought a £3,000 TV using the card numbers and expiry dates recorded.

The media have been presenting contactless payments as insecure; the response from the banking industry has been to point out that customers are not liable for fraudulent transactions. Both are in some ways correct, but both also miss the point.

The law in the UK (Payment Services Regulations (PSR) 2009, Regulation 62) does indeed say that customers are entitled to a refund for fraudulent transactions. However, a bank will only give one if it is convinced that the customer did not authorise the transaction and was not negligent. In my experience, a customer who is unable to clearly, concisely and confidently explain why they are entitled to a refund runs a high risk of not getting one. This disproportionately disadvantages the more vulnerable members of society.

Continue reading What are the social costs of contactless fraud?

Experimenting with SSL Vulnerabilities in Android Apps

As the number of always-on, always-connected smartphones increases, so does the amount of personal and sensitive information they collect and transmit. It is therefore crucial to secure the traffic exchanged by these devices, especially considering that mobile users might connect to open Wi-Fi networks or even fake cell towers. The go-to protocol for securing network connections is HTTPS, i.e., HTTP over SSL/TLS.

In the Android ecosystem, applications (“apps” for short) support HTTPS on sockets by relying on the android.net, android.webkit, java.net, javax.net, java.security, javax.security.cert, and org.apache.http packages of the Android SDK. These packages are used to create HTTP/HTTPS connections, administer and verify certificates and keys, and instantiate the TrustManager and HostnameVerifier interfaces, which are in turn used in the SSL certificate validation logic.

A TrustManager manages the certificates of all Certificate Authorities (CAs) used to assess a certificate’s validity. Only root CAs trusted by Android are contained in the default TrustManager. A HostnameVerifier performs hostname verification whenever a URL’s hostname does not match the hostname in the peer’s identification credentials.
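Android implements these checks in Java via the interfaces above; the same validation choices can be illustrated with Python's standard ssl module (a sketch of the two configurations, not Android code). The default context performs the work of a trusted-CA TrustManager plus hostname verification, while the second configuration reproduces the vulnerable trust-all pattern:

```python
import ssl

def make_context(verify=True):
    """Build a TLS client context. verify=True keeps the safe defaults:
    the certificate chain is checked against trusted CAs and the
    hostname is matched. verify=False reproduces the vulnerable
    'trust everything' pattern found in many apps."""
    if verify:
        return ssl.create_default_context()
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = False       # like a no-op HostnameVerifier
    context.verify_mode = ssl.CERT_NONE  # like a trust-all TrustManager
    return context

safe = make_context()
unsafe = make_context(verify=False)
print(safe.verify_mode == ssl.CERT_REQUIRED)  # True
print(unsafe.verify_mode == ssl.CERT_NONE)    # True
```

With the unsafe context, any certificate (including one presented by an attacker on an open Wi-Fi network) is accepted; this is exactly the failure mode that puts user data at risk.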

While browsers give users visual feedback that their communication is secured (via the lock symbol) and warn of certificate validation issues, non-browser apps do so far less extensively and effectively. This shortcoming motivates the need to scrutinize the security of network connections used by apps to transmit sensitive user data. We found that some of the most popular Android apps insufficiently secure these connections, putting users’ passwords, credit card details and chat messages at risk.

Continue reading Experimenting with SSL Vulnerabilities in Android Apps