What I have been doing lately:
NB: I often write about what I've been doing on the Security Group "blog" www.lightbluetouchpaper.org. It's well worth a visit.

I presented a paper on Ignoring the Great Firewall of China at the 6th Workshop on Privacy Enhancing Technologies, held in Cambridge in June 2006. It turns out that this censorship system works by sending reset packets to each end of the connection, rather than by blocking packets. If the endpoints don't dutifully close the connection, but just discard the resets, the firewall is completely ineffective. More about this in the paper and in my security group blog posting.
Abstract The so-called "Great Firewall of China" operates, in part, by inspecting TCP packets for keywords that are to be blocked. If the keyword is present, TCP reset packets (viz: with the RST flag set) are sent to both endpoints of the connection, which then close. However, because the original packets are passed through the firewall unscathed, if the endpoints completely ignore the firewall's resets, then the connection will proceed unhindered. Once one connection has been blocked, the firewall makes further easy-to-evade attempts to block further connections from the same machine. This latter behaviour can be leveraged into a denial-of-service attack on third-party machines.
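As a flavour of how little the endpoints need to do, here is a toy sketch (an illustration, not code from the paper) of the receive-side idea: for a connection the user wants to keep, injected resets are simply discarded before they reach the TCP state machine. The `Segment` type and the protected-connection list are invented for the example.

```python
from dataclasses import dataclass

# Toy model of the evasion idea: the firewall injects TCP segments with the
# RST flag set, but the real data packets still arrive, so an endpoint that
# drops "unexpected" resets keeps the connection alive.

@dataclass
class Segment:                  # hypothetical, much-simplified TCP segment
    src: str
    dst: str
    flags: set                  # e.g. {"SYN"}, {"ACK"}, {"RST"}
    payload: bytes = b""

# Connections we have deliberately opened and want to protect.
protected = {("1.2.3.4", "5.6.7.8")}

def accept(seg: Segment) -> bool:
    """Return True if the segment should be passed up to the TCP stack."""
    if "RST" in seg.flags and (seg.src, seg.dst) in protected:
        return False            # ignore the firewall's injected reset
    return True                 # everything else is processed normally

# Example: a forged reset for a protected connection is silently dropped.
forged = Segment("1.2.3.4", "5.6.7.8", {"RST"})
assert accept(forged) is False
```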
In October-November 2005 I helped Poul-Henning Kamp track down a "DDoS" attack on his stratum 1 time server. I've now had a paper "The Rising Tide: DDoS by Defective Designs and Defaults" accepted at SRUTI'06.
Abstract We consider the phenomenon of distributed denial of service attacks that occur through design defects (and poorly chosen defaults) in legitimately operated, entirely secure systems. Particular reference is made to a recently discovered "attack" on stratum 1 Network Time Protocol servers by routers manufactured by D-Link for the consumer market, the latest example of incidents that stretch back for decades. Consideration is given to how these attacks might have been avoided, and why such failures continue to occur.
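For illustration only (this is not from the paper), a sketch of the sort of client default that avoids creating such floods: back off exponentially when the server does not answer, add jitter so that thousands of identical devices do not retry in lock-step, and cap rather than escalate the retry rate.

```python
import random
import time

# Illustrative "polite" polling loop of the kind that would have avoided a
# hard-coded-server incident like the D-Link/NTP one. All constants are
# assumptions chosen for the example.

MIN_POLL = 64            # seconds between polls when the server answers
MAX_POLL = 36 * 3600     # upper bound on the back-off interval

def next_interval(current: float, succeeded: bool) -> float:
    if succeeded:
        return MIN_POLL
    backed_off = min(current * 2, MAX_POLL)                      # exponential back-off
    return min(backed_off * random.uniform(0.5, 1.5), MAX_POLL)  # de-synchronising jitter

def poll_loop(query):
    """query() performs one time request and returns True on success."""
    interval = MIN_POLL
    while True:
        ok = query()
        interval = next_interval(interval, ok)
        time.sleep(interval)
```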
I wrote a background paper on "Complexities in Criminalising Denial of Service Attacks" for the Internet Crime Forum (ICF) Legal subgroup. The idea was to give the lawyers some understanding of what DoS and DDoS attacks were all about, and how it can be hard to pin down concepts such as authorisation when one looks at how we use Internet resources today. The Home Office has now brought forward the Police and Justice Bill, which contains amendments to the Computer Misuse Act 1990 to deal (they hope) with denial-of-service attacks. Thus events have overtaken the document -- so there is little value in progressing the document through the ICF procedures needed to make it an Official Publication. Hence I'm making it available here on my own website, so as to provide a background resource to those considering whether the Home Office have got it right!
I've found that Earthlink's CAPTCHA-based Challenge-Response system for dumping spam filtering costs onto strangers has a teensy little flaw -- they only seem to have 31 CAPTCHAs. A longer overview is on the Security Research Group blog www.lightbluetouchpaper.org or you can go direct to my own webpages that explain my attempts to automate Responses to Challenge-Response.
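A sketch of why such a small pool is fatal (with placeholder digests and answers, not Earthlink's real data): once each image has been solved by hand a single time, responding to every subsequent challenge is just a table lookup.

```python
import hashlib

# Once each of the (apparently 31) challenge images has been solved by hand,
# an automated responder only needs to recognise which one it has been shown.
# The digests and answers below are placeholders for illustration.

solved = {
    "<sha256-of-captcha-01>": "answer01",
    "<sha256-of-captcha-02>": "answer02",
    # ... one entry per hand-solved image
}

def answer_challenge(image_bytes: bytes) -> str | None:
    digest = hashlib.sha256(image_bytes).hexdigest()
    return solved.get(digest)      # None would mean a genuinely new CAPTCHA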
I've started work on a project (funded by the nice people at Intel Research) to examine sampled sFlow data at the LINX, with a view to detecting senders of email spam (their traffic patterns should be detectable because when spam is sent Happily It's Not The Same). Much more on this on the spamHINTS project website.
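The real analysis is described on the project website; purely as a taste of the general idea (and only that), a toy heuristic over sampled flow records might flag sources that fan out SMTP connections to an unusually large number of distinct destinations. The threshold and the flow-record format below are invented for the example.

```python
from collections import defaultdict

# Toy heuristic, not the spamHINTS algorithm: count how many *distinct*
# SMTP (port 25) destinations each source address contacts in a window of
# sampled flow records. Ordinary clients talk to a handful of smarthosts;
# spam-sending machines fan out to thousands of MXs.

FANOUT_THRESHOLD = 100   # illustrative value

def suspect_spam_sources(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) from sampled sFlow data."""
    smtp_peers = defaultdict(set)
    for src, dst, dport in flows:
        if dport == 25:
            smtp_peers[src].add(dst)
    return {src for src, peers in smtp_peers.items()
            if len(peers) >= FANOUT_THRESHOLD}
```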
My PhD thesis on "Anonymity and Traceability in Cyberspace" has been published as Technical Report 653. You can trust me now! I'm a doctor!!
I examined BT's CleanFeed system (its proper name is the BT Anti-Child-Abuse Initiative). This was designed to be a low cost, but highly accurate, system for blocking "child pornography". At first sight it is a significant improvement upon existing schemes. However, CleanFeed derives its advantages from employing two separate stages, and this hybrid system is thereby made more fragile, because circumvention of either stage, whether by the end user or by the content provider, will cause the blocking to fail.
I wrote a paper "Failures in a Hybrid Content Blocking System" which was presented at the Workshop on Privacy Enhancing Technologies, Dubrovnik, Croatia, 30 May 2005 -- 1 June 2005. It describes attacks on both stages of the CleanFeed system and sets out various countermeasures to address them. Some attacks concern the minutiae of comparing URLs, others address fundamentals of the system architecture. In particular, the CleanFeed system relies on data returned by the content provider, especially when doing DNS lookups. It also relies on the content provider returning the same data to everyone. All of this reliance upon the content providers' probity could well be entirely misplaced.
The CleanFeed design is intended to be extremely precise in what it blocks, but to keep costs under control this has been achieved by treating some traffic specially. This special treatment can be detected by end users, which means that the system can be used as an oracle to efficiently locate illegal websites (i.e. it can be used to look for child pornography). This runs counter to its high-level policy objectives.
Although legal and ethical issues prevent most experimentation at present, the attacks are extremely practical and would be straightforward to implement. If CleanFeed is used in the future to block other material, which may be distasteful but is legal to view, then there will be no bar to anyone assessing its effectiveness. It must be expected that knowledge of how to circumvent the system (for all material) will then become widely known and countermeasures will become essential.
An important general conclusion to draw from the need for a manual element in many of the countermeasures is that the effectiveness of any blocking system, and the true cost of ensuring it continues to provide accurate results, cannot be properly assessed until it comes under serious assault. Thinking of these systems as "fit-and-forget" arrangements will be a guarantee of their long-term failure.
After I presented my paper, Brightview announced that the oracle attack also broke their WebMinder product, because it has a two-stage arrangement and a superficially similar architecture to that of CleanFeed. They claim to have fixed the system by discarding all packets with a low TTL at the first stage filter (so that you cannot distinguish which outward route the packets took by the type of response). This isn't enough -- there is a similar distinguisher for the return route... so I updated the paper to include this attack as well.
Immediately after the PET Workshop, I went to the Fourth Workshop on the Economics of Information Security, Harvard University, 2--3 June 2005. I presented Modelling Incentives for Email Blocking Strategies, which I co-wrote with Andrei Serjantov. It looks at the economic arguments for blacklisting entire ISPs on the basis that they send you "nothing but spam". The model is neat, but the real-world data we present is somewhat more messy!
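The following toy calculation is emphatically not the model from the paper, but it shows the basic trade-off in caricature: blocking a peer wholesale pays off only while the spam you avoid costs you more than the legitimate mail you lose. All parameter values are illustrative.

```python
# Caricature of the trade-off (NOT the model from the paper): compare the
# expected cost of accepting everything from a peer ISP with the expected
# cost of blacklisting it wholesale.

def blacklist_pays(msgs_per_day: float,
                   spam_fraction: float,
                   cost_per_spam: float,
                   cost_per_lost_ham: float) -> bool:
    spam_cost = msgs_per_day * spam_fraction * cost_per_spam
    ham_cost = msgs_per_day * (1 - spam_fraction) * cost_per_lost_ham
    return spam_cost > ham_cost

# An ISP sending essentially nothing but spam is an easy call,
# even if each piece of lost legitimate mail is expensive:
print(blacklist_pays(10_000, 0.999, 0.01, 1.00))   # True: 99.9 > 10.0
```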
At the Second Conference on Email and Anti-Spam, Stanford University, July 2005, I presented a paper on Stopping Outgoing Spam by Examining Incoming Server Logs, which shows how examining incoming server logs permits you to spot customers who are infected with a virus or who are inadvertently relaying spam. The good news is that it is a very successful detector. The bad news is that it spots only a small fraction of the problem, because most of the bad email is not sent to other customers of the same ISP, which is where it could be detected.
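Neither paper's code appears here, but the flavour of such a detector is easy to sketch; the thresholds and the log representation below are invented for illustration, not taken from the paper.

```python
from collections import Counter

# Illustrative detector over an ISP's *incoming* mail-server logs: a customer
# machine that addresses a large share of its mail to non-existent local
# mailboxes (dictionary-attack style) is a good candidate for being
# virus-infected or an open relay.

MIN_MESSAGES = 50        # ignore customers with too little traffic to judge
UNKNOWN_RATIO = 0.25     # illustrative threshold

def flag_customers(log_entries):
    """log_entries: iterable of (customer_ip, recipient_exists: bool)."""
    total, unknown = Counter(), Counter()
    for customer, exists in log_entries:
        total[customer] += 1
        if not exists:
            unknown[customer] += 1
    return [c for c in total
            if total[c] >= MIN_MESSAGES
            and unknown[c] / total[c] >= UNKNOWN_RATIO]
```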
I attended the First Conference on Email and Anti-Spam held in Mountain View, California at the end of July 2004. I presented a paper on Stopping Spam by Extrusion Detection which deals with processing ISP smarthost logs to heuristically determine which customers are insecure and have been exploited by the senders of bulk unsolicited email (spam). I was also on a panel on Payment Systems for Email, suggesting that none would work. The slides from my talk are here, a (pretty accurate) news story about the panel can be found here, and a detailed blog entry here captures my final quote as I remember saying it!
Ben Laurie and I have written a paper addressing the use of "proof-of-work" systems (often known as "hashcash") in countering "spam". Ben wanted to know what difficulty of puzzle should be set. But, when we'd done all the sums, it turned out that the complexity needed to make spam uneconomic is so high that you'd impede the legitimate activities of quite a number of normal users. Sceptical? Well read our paper, "Proof-of-Work" Proves Not to Work which was presented at The Third Annual Workshop on Economics and Information Security (WEIS04) on May 13th 2004.
Unfortunately, there was an error in one of the sums in this paper: the calculation for the "profit margin" on goods advertised by spamming was out by a factor of ten [it should have been $43.70, not $4.37]. Ted Wobber from Microsoft gets the credit for spotting this goof. We're still in the process of fixing the paper to address this (the current draft is here), but a flavour of the correct sums can be obtained from the slides to the talk I recently gave in the local Security/Economics/Networks Seminar series. A more realistic average value is probably around $33 -- suggesting that the economic sums make proof-of-work viable AT THE SPAMMERS' CURRENT RESPONSE RATES. However, the second, "security" analysis we made is far more significant in showing that simple-minded proof-of-work schemes don't work. This DOESN'T mean that proof-of-work might not be useful in a hybrid anti-spam scheme, but it DOES mean that a trivial-to-deploy, purely proof-of-work scheme is, in our view, a non-starter.
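The corrected sums are in the updated draft and the slides; purely to show the shape of the calculation (every number below is an illustrative assumption, apart from the roughly $33 figure mentioned above), the required puzzle difficulty falls straight out of the revenue per message.

```python
# Shape of the economic argument, with purely illustrative numbers.

response_rate = 1e-5          # sales per spam sent (assumed)
profit_per_sale = 33.0        # dollars per sale (the "more realistic average
                              # value" mentioned above)
revenue_per_message = response_rate * profit_per_sale      # ~ $0.00033

cpu_second_cost = 1e-5        # dollars per CPU-second available to the spammer (assumed)

# To make spam uneconomic, the stamp must cost more than a message earns:
required_seconds = revenue_per_message / cpu_second_cost   # ~ 33 seconds

print(f"each message must take at least {required_seconds:.0f}s to stamp")

# Two caveats that drive the paper's conclusion: a legitimate sender with a
# modest machine and a large address book cannot afford stamps anywhere near
# that big, and a spammer using stolen (botnet) CPU pays almost nothing per
# second, so the required difficulty climbs even higher.
```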
I've been doing a fair bit of thinking about phishing -- and it's mainly depressing. We're not going to educate the users any time soon; the marketing people are still sending out HTML email (so simple rules like "never click on anything" won't be helpful); the browser security model doesn't address the problem; and the banks have far too simple a view of what "authorised" means. We can improve things, but if the fixes are done piecemeal the bad guys will just overcome the obstacles one by one.
I was on a panel on the topic at FC'05 Financial Cryptography and Data Security and my position paper is here. I also talked about the underlying protocol problems at the Thirteenth Cambridge Protocols Workshop in April 2005, that paper is here.
In late 2002, early 2003 I acted as the 'specialist adviser' to the All Party Internet Group (APIG) inquiry which looked at the issues surrounding access to ISP and Telco logging data by UK Law Enforcement. The inquiry details are archived here, and the resulting 'Communications Data' report is now essential background reading to anyone interested in this topic. Over the summer of 2003, I repeated this role for the APIG inquiry into "spam". The resultant report can be read here. In the Spring of 2004, I was at it again, as APIG held an inquiry into a possible revision of the Computer Misuse Act. Details of this inquiry are here or you can jump straight to the final report. In Spring 2006, I was specialist adviser to another APIG inquiry, this time into Digital Rights Management (DRM). Details are here. APIG then merged with APMobile to form apComms (the All Party Parliamentary Communications Group) and in 2009 I assisted with their report into network traffic, which addressed network neutrality, behavioural advertising and child sexual abuse images. The report of this is called 'Can we keep our hands off the net?'
A little while ago now, there was a lot of media interest in my work with Mike Bond on banking security and DES cracking hardware. You might possibly have seen me trying to explain this on BBC2's Newsnight program in November 2001. There is an entire website which covers this topic: http://www.cl.cam.ac.uk/~rnc1/descrack/
If you'd prefer an academic paper on the topic then you can read our CHES2002 paper here in PDF format or here in HTML.
If you are interested in "brute-force" attacks in general, then I have collected together here a large number of results and created an annotated bibliography on this topic.
Turning now to what I do when not robbing banks... You might be interested in:
A paper presented at the Information Hiding Workshop 2002, co-written with George Danezis, on "Chaffinch: Confidentiality in the face of legal threats" available here in PDF format or here in HTML. The PDF version is recommended as having much the prettier diagrams within it. Note that Chaffinch also has its own web page.
Abstract We present the design and rationale of a practical system for passing confidential messages. The mechanism is an adaptation of Rivest's "chaffing and winnowing", which has the legal advantage of using authentication keys to provide privacy. We identify a weakness in Rivest's particular choice of his "package transform" as an "all-or-nothing" element within his scheme. We extend the basic system to allow the passing of several messages concurrently. Only some of these messages need be divulged under legal duress; the other messages will be plausibly deniable. We show how this system may have some resilience to the type of legal attack inherent in the UK's Regulation of Investigatory Powers (RIP) Act.
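For readers unfamiliar with the underlying mechanism, here is a minimal sketch of Rivest's chaffing and winnowing itself (not of Chaffinch): privacy comes solely from an authentication key, because an eavesdropper who cannot verify the MACs cannot tell wheat from chaff.

```python
import hashlib
import hmac
import os

# Minimal sketch of Rivest's "chaffing and winnowing" (the mechanism that
# Chaffinch adapts), not of Chaffinch itself. Packets are (serial, data, MAC)
# triples; genuine packets carry a MAC under the shared authentication key,
# chaff packets carry random bytes where the MAC should be.

def mac(key: bytes, serial: int, data: bytes) -> bytes:
    return hmac.new(key, serial.to_bytes(4, "big") + data, hashlib.sha256).digest()

def add_chaff(key: bytes, serial: int, data: bytes):
    wheat = (serial, data, mac(key, serial, data))
    chaff = (serial, os.urandom(len(data)), os.urandom(32))
    return [wheat, chaff]            # in practice the order would be shuffled

def winnow(key: bytes, packets):
    """Receiver keeps only the packets whose MAC verifies."""
    return [data for serial, data, tag in packets
            if hmac.compare_digest(tag, mac(key, serial, data))]

key = os.urandom(32)
stream = add_chaff(key, 1, b"meet at noon") + add_chaff(key, 2, b"bring the key")
assert winnow(key, stream) == [b"meet at noon", b"bring the key"]
```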
A report (in PDF format) of the papers presented and the ensuing discussion at the 1st International Peer-to-Peer Systems Workshop, MIT 7-8 March 2002. This will eventually appear in the Springer Verlag LNCS "Hot Topics" series and was also in SIGOPS OSR (July 2002).

A paper, co-written with George Danezis and Markus Kuhn, on "Real World Patterns of Failure in Anonymity Systems" available here in PDF format or here in HTML. The paper was presented at the Information Hiding Workshop 2001. The PowerPoint slides for the presentation are here.
Abstract We present attacks on the anonymity and pseudonymity provided by a "lonely hearts" dating service and by the HushMail encrypted email system. We move on to discuss some generic attacks upon anonymous systems based on the engineering reality of these systems rather than the theoretical foundations on which they are based. However, for less sophisticated users it is social engineering attacks, owing nothing to computer science, that pose the biggest day-to-day danger. This practical experience then permits a start to be made on developing a security policy model for pseudonymous communications.

A paper "Improving Onion Notation", available here in PDF or HTML, which I presented to the PET2003 Workshop held in Dresden, March 2003.
Abstract Several different notations are used in the literature of MIX networks to describe the nested encrypted structures now widely known as "onions". The shortcomings of these notations are described and a new notation is proposed that, as well as having some advantages from a typographical point of view, is also far clearer to read and to reason about. The proposed notation generated a lively debate at the PET2003 workshop and the various views, and alternative proposals, are reported upon. The workshop participants did not reach any consensus on improving onion notation, but there is now a heightened awareness of the problems that can arise with existing representations.
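The notation itself is best seen in the paper, but for concreteness this is the nested structure being described, in toy form: symmetric Fernet keys (from the `cryptography` package) stand in for the MIX nodes' real public keys, and each node peels off exactly one layer.

```python
from cryptography.fernet import Fernet

# Toy illustration of the nested "onion" structure that the notation in the
# paper describes. Symmetric keys stand in for the nodes' real (public-key)
# encryption; routing information is omitted for brevity.

node_keys = {name: Fernet.generate_key() for name in ("entry", "middle", "exit")}

def build_onion(message: bytes, path):
    """Wrap the message once per node on the path, innermost layer first."""
    onion = message
    for name in reversed(path):                       # exit first, entry last
        onion = Fernet(node_keys[name]).encrypt(onion)
    return onion

def peel(onion: bytes, name: str) -> bytes:
    return Fernet(node_keys[name]).decrypt(onion)

o = build_onion(b"hello", ["entry", "middle", "exit"])
for hop in ["entry", "middle", "exit"]:               # each node strips one layer
    o = peel(o, hop)
assert o == b"hello"
```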
A paper on "The Limits of Traceability" available here in PDF format or here in HTML.

Abstract Traditional "traceability" (the flipside idea to "anonymity") on the Internet attempts to identify the IP address that caused an action to occur. This is sufficient for an Internet Service Provider (ISP) to take action against the authorised user of that IP address. Law enforcement usually needs to go beyond this in order to identify the individual concerned. However, real world experience shows that shared accounts, unavailable Caller Line Identification (CLI) and spoofing on Ethernets mean that there is poor traceability for the "last hop". This means that it is not possible to consider the information gathered as "conclusive evidence" suitable for a Court, but just as "intelligence", an investigative tool, albeit a valuable one. Law enforcement should be especially wary that a sophisticated opponent might be able to frame an innocent bystander.

A discussion document entitled "Judge and Jury? How 'Notice & Take Down' gives ISPs an unwanted role in applying the Law to the Internet", available here in PDF format or here in HTML.
Abstract Many UK laws are such that the safest thing that ISPs can do is to "take down" material once put on "notice" that it may be unlawful. This can lead to injustices as lawful material is censored because ISPs cannot take the risk that a court will eventually agree that it can indeed remain available. Several legal frameworks are reviewed, particularly the situation in the USA, where ISPs enjoy considerable immunity.
Two particular legal solutions to the problems faced by ISPs are then examined. The first possible solution is blanket immunity for ISP activities. The second proposed approach is a new statutory regime that would cover not only defamation but also a range of other issues. It works in a similar way to the USA's Digital Millennium Copyright Act's approach to copyright infringement. The proposal is designated "R4", and works as follows:
- Report: a complainant serves a notice of infringing material
- Remove: the ISP removes it, without judging the merits
- Response: the author can contest this by asking for replacement
- Replace: again the ISP acts automatically

The key legal protections provided by both solutions would be that the ISP is not liable to either complainant or author if they follow the process; and, as a safeguard for all concerned, malicious or negligent claimants can be penalised by the courts.
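Purely as an illustration of the ordering of the four steps above (this is my sketch, not part of the proposal), the process can be modelled as a tiny state machine:

```python
# Toy state model of the R4 process: each transition is triggered by one of
# the four R's, and the ISP's own actions (Remove, Replace) are automatic.

TRANSITIONS = {
    ("available", "report"):   "removed",    # complainant serves a notice; ISP removes
    ("removed",   "response"): "replaced",   # author contests; ISP restores the material
    # after replacement any further dispute goes to the courts, not back to the ISP
}

def next_state(state: str, event: str) -> str:
    return TRANSITIONS.get((state, event), state)   # unknown events change nothing

state = "available"
for event in ("report", "response"):
    state = next_state(state, event)
print(state)    # "replaced": the material is back up, the ISP never judged the merits
```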
Since this is a policy area in which I am especially interested, I wrote a short personal response to the Home Office public consultation on Access to Communications Data which closed on 3rd June 2003. I was also involved in shaping some other responses, notably the one by FIPR.
I have also written a brief response to the DTI consultation on Implementation of the Directive on Privacy and Electronic Communications. The Directive is partially concerned with the issue of "spam". Last year I wrote a briefing paper on the topic of "spam" and a copy (with some updates made in March 2003) can be found here.
I also wrote a FIPR-branded briefing paper "The Problem of 'Making'" on a difficulty which was not being properly addressed by the Sexual Offences Bill. The final Act, which came into force in May 2004, is rather better: it has a statutory defence for ISP staff and 'sysadmins' who would otherwise inadvertently commit a crime if they come across child pornography during the course of their work.
PGP Keys
If you wish to write to me then I welcome PGP encrypted email. If I write to you then I usually sign what I send and, if I know your key, will encrypt it as well (please say if this annoys!). You can find my keys (and an explanation of how they all interact) here.

Lectures & Projects
The notes for the various undergraduate (Part 1B & II) lectures I've given can be found here. I've also made available here the slides (and sometimes notes) for various talks that I have given, and I have started to collate the various media articles that refer to me and my work. I also keep a formal list of my academic publications.

My project proposals for 2005-2006 students are here. I'm usually also prepared to consider supervising other projects in the security or cryptanalysis milieu.
from http://www.cl.cam.ac.uk/~rnc1/