On distracted driving and required phone searches
A recent Ars Technica article discussed several U.S. states that are considering a roadside “textalyzer” that would operate analogously to roadside Breathalyzer tests. In the same way that alcohol and drugs can impair a driver’s ability to navigate the road, so can paying attention to your phone rather than the world beyond. Many states “require” drivers to consent to Breathalyzer tests, where that “requirement” boils down to serious penalties if the driver declines. Vendors like Cellebrite are pushing for analogous requirements for phone searches, for which they just happen to sell products.
[Read more…]
Gone In Six Characters: Short URLs Considered Harmful for Cloud Services
[This is a guest post by Vitaly Shmatikov, professor at Cornell Tech and once upon a time my adviser at the University of Texas at Austin. — Arvind Narayanan.]
TL;DR: short URLs produced by bit.ly, goo.gl, and similar services are so short that they can be scanned by brute force. Our scan discovered a large number of Microsoft OneDrive accounts with private documents. Many of these accounts are unlocked and allow anyone to inject malware that will be automatically downloaded to users’ devices. We also discovered many driving directions that reveal sensitive information for identifiable individuals, including their visits to specialized medical facilities, prisons, and adult establishments.
URL shorteners such as bit.ly and goo.gl perform a straightforward task: they turn long URLs into short ones, consisting of a domain name followed by a 5-, 6-, or 7-character token. This simple convenience feature turns out to have an unintended consequence. The tokens are so short that the entire set of URLs can be scanned by brute force. The actual, long URLs are thus effectively public and can be discovered by anyone with a little patience and a few machines at her disposal.
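To make “a little patience and a few machines” concrete, here is a back-of-the-envelope sketch (not from the paper; the request rate is an assumed, illustrative number) of how large the token space actually is:

```python
# Back-of-the-envelope: how big is a URL shortener's token space?
# Tokens are drawn from [a-zA-Z0-9], i.e. 62 possible characters per position.
ALPHABET_SIZE = 62

for length in (5, 6, 7):
    space = ALPHABET_SIZE ** length
    # Assumed scanning rate: 1,000 requests/second spread over a few machines.
    days = space / 1_000 / 86_400
    print(f"{length}-char tokens: {space:,} possibilities "
          f"(~{days:,.0f} days at 1k req/s)")
```

At an assumed 1,000 requests per second, the 5-character space can be covered in about a week and a half, and even the 6-character space is within reach of a more patient or better-provisioned adversary.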
Today, we are releasing our study, 18 months in the making, of what URL shortening means for the security and privacy of cloud services. We did not perform a comprehensive scan of all short URLs (as our analysis shows, such a scan would have been within the capabilities of a more powerful adversary), but we sampled enough to discover interesting information and draw important conclusions. Our study focused on two cloud services that directly integrate URL shortening: Microsoft OneDrive cloud storage (formerly known as SkyDrive) and Google Maps. In both cases, whenever a user wants to share a link to a document, folder, or map with another user, the service offers to generate a short URL – which, as we show, unintentionally makes the original URL public.
[Read more…]
Why Making Johnny’s Key Management Transparent is So Challenging
In light of the ongoing debate about the importance of using end-to-end encryption to protect our data and communications, several tech companies have announced plans to increase the encryption in their services. However, this isn’t a new pledge: since 2014, Google and Yahoo have been working on a browser plugin to facilitate sending encrypted emails using their services. Yet in recent weeks, some have criticized these efforts because only alpha releases of the tools exist, and have started asking why they’re still a work in progress.
One of the main challenges to building usable end-to-end encrypted communication tools is key management. Services such as Apple’s iMessage have made encrypted communication available to the masses with an excellent user experience because Apple manages a directory of public keys in a centralized server on behalf of their users. But this also means users have to trust that Apple’s key server won’t be compromised or compelled by hackers or nation-state actors to insert spurious keys to intercept and manipulate users’ encrypted messages. The alternative, and more secure, approach is to have the service provider delegate key management to the users so they aren’t vulnerable to a compromised centralized key server. This is how Google’s End-To-End works right now. But decentralized key management means users must “manually” verify each other’s keys to be sure that the keys they see for one another are valid, a process that several studies have shown to be cumbersome and error-prone for the vast majority of users. So users must make the choice between strong security and great usability.
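As a rough illustration of what “manually” verifying keys involves, here is a minimal sketch (the key bytes and formatting below are made up for illustration; real tools each use their own fingerprint formats) of turning a public key into a short fingerprint that two users would have to compare over some out-of-band channel:

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Hash a public key down to a short string two humans can compare."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group the first 32 hex characters into blocks of 4 for readability.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# Hypothetical key material; in practice this is a user's long-term public key.
alices_key = bytes.fromhex("04" + "ab" * 64)
print(fingerprint(alices_key))   # read aloud, compared in person, or scanned as a QR code
```

Every pair of users has to perform this comparison, and redo it whenever a key changes, which is exactly the step the usability studies have found most users skip or get wrong.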
In August 2015, we published our design for CONIKS, a key management system that addresses these usability and security issues. CONIKS makes the key management process transparent and publicly auditable. To evaluate the viability of CONIKS as a key management solution for existing secure communication services, we held design discussions with experts at Google, Yahoo, Apple and Open Whisper Systems, primarily over the course of 11 months (Nov ‘14 – Oct ‘15). From our conversations, we learned about the open technical challenges of deploying CONIKS in a real-world setting, and gained a better understanding of why implementing a transparent key management system isn’t a straightforward task.
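CONIKS’s actual design is a Merkle prefix tree with additional privacy protections, but the core idea of a publicly auditable directory can be sketched in a few lines. This toy version is for illustration only and omits everything that makes CONIKS privacy-preserving and efficient: the provider publishes a tree root, and a client verifies that the name-to-key binding it was served is really included under that root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(entry: bytes, proof, root: bytes) -> bool:
    """Check that `entry` is included under `root`.

    `proof` is a list of (sibling_hash, sibling_is_left) pairs, leaf to root.
    """
    node = h(entry)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Toy directory with two name-to-key bindings.
entries = [b"alice@example.com:KEY_A", b"bob@example.com:KEY_B"]
leaves = [h(e) for e in entries]
published_root = h(leaves[0] + leaves[1])     # what the provider publishes to everyone

# Alice's client checks the binding it was served against the published root.
proof_for_alice = [(leaves[1], False)]        # sibling hash sits on the right
assert verify_inclusion(entries[0], proof_for_alice, published_root)
```

Because the same root is published to everyone, a provider that serves one client a spurious key must either show that client a different root than everyone else sees (which auditors can detect) or include the spurious binding in the directory for all to see.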
[Read more…]
Internet Voting, Utah GOP Primary Election
Utah’s Republican presidential primary was conducted today by Internet. If you have your voter-registration PIN, or even if you don’t, visit https://ivotingcenter.gop and you will learn something about Internet voting!
An Unprecedented Look into Utilization at Internet Interconnection Points
Measuring the performance of broadband networks is an important area of research, and efforts to characterize the performance of these networks continue to evolve. Measurement efforts to date have largely relied on in-home devices and are primarily designed to characterize access network performance. Yet, a user’s experience also depends on factors that lie upstream of ISP access networks, which is why measuring interconnection is so important. Unfortunately, as I have previously written, visibility into performance at the interconnection points to ISPs has been extremely limited, and efforts to date to characterize interconnection have largely been indirect, relying on inferences made at network endpoints.
Today, I am pleased to release analysis based on direct measurement of Internet interconnection points, which represents an advance in this important field of research. To this end, I am releasing a working paper that includes data from seven Internet Service Providers (ISPs) who collectively serve approximately half of all US broadband subscribers.
Each ISP has installed a common measurement system from DeepField Networks to provide an aggregated and anonymized picture of interconnection capacity and utilization. Collectively, the measurement system captures data from 99% of the interconnection capacity for these participating ISPs, comprising more than 1,200 link groups. I have worked with these ISPs to expose interesting insights around this very important aspect of the Internet. Analysis and views of the dataset are available in my working paper, which also includes a full review of the method used.
The research community has long recognized the need for this foundational information, which will help us understand how capacity is provisioned across a number of ISPs and how content traverses the links that connect broadband networks together.
Naturally, the proprietary nature of Internet interconnection prevents us from revealing everything that the public would like to see—notably, we can’t expose information about individual interconnects because both the existence and capacity of individual interconnects are confidential. Yet, even the aggregate views yield many interesting insights.
One of the most significant findings from the initial analysis of five months of data—from October 2015 through February 2016—is that aggregate capacity is roughly 50% utilized during peak periods (and never exceeds 66% for any individual participating ISP, as shown in the figure below). Moreover, aggregate capacity at the interconnects continues to grow to offset the growth of broadband data consumption.
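For readers who want to see what “aggregate utilization” means operationally, here is a minimal sketch of the computation. The records, field names, and numbers below are invented placeholders; the actual DeepField feed and the working paper’s method differ in the details.

```python
from collections import defaultdict

# Invented sample records: (link_group, hour_of_day, traffic_gbps, capacity_gbps).
samples = [
    ("ispA:transit-1", 21, 410.0, 800.0),
    ("ispA:peering-1", 21, 150.0, 400.0),
    ("ispB:transit-1", 21, 620.0, 1200.0),
]

PEAK_HOURS = {20, 21, 22, 23}          # assumed evening peak window

peak_traffic = defaultdict(float)
capacity = {}
for group, hour, gbps, cap in samples:
    if hour in PEAK_HOURS:
        peak_traffic[group] = max(peak_traffic[group], gbps)   # per-group peak
        capacity[group] = cap

aggregate = sum(peak_traffic.values()) / sum(capacity.values())
print(f"aggregate peak-period utilization: {aggregate:.0%}")   # ~49% for this toy data
```

The key point is that the utilization figure is computed over the pooled peak traffic and pooled capacity of all link groups, so it can sit near 50% even if some individual links run hotter.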
I am very excited to provide this unique and unprecedented view into the Internet. It is in everyone’s interest to advance this field of research in a rigorous and thoughtful way.
Apple, FBI, and Software Transparency
The Apple versus FBI showdown has quickly become a crucial flashpoint of the “new Crypto War.” On February 16 the FBI invoked the All Writs Act of 1789, a catch-all authority for assistance of law enforcement, demanding that Apple create a custom version of its iOS to help the FBI decrypt an iPhone used by one of the San Bernardino shooters. The fact that the FBI allowed Apple to disclose the order publicly, on the same day, represents a rare exception to the government’s normal penchant for secrecy.
The reasons behind the FBI’s unusually loud entrance are important – but even more so is the risk that after the present flurry concludes, the FBI and other government agencies will revert to more shadowy methods of compelling companies to backdoor their software. This blog post explores these software transparency risks, and how new technical measures could help ensure that the public debate over software backdoors remains public.
[Read more…]
Apple/FBI: Freedom of speech vs. compulsion to sign
This week I signed the Electronic Frontier Foundation’s amicus (friend-of-the-court) brief in the Apple/FBI iPhone-unlocking lawsuit. Many prominent computer scientists and cryptographers signed: Josh Aas, Hal Abelson, Judy Anderson, Andrew Appel, Tom Ball (the Google one, not the Microsoft one), Boaz Barak, Brian Behlendorf, Rich Belgard, Dan Bernstein, Matt Bishop, Josh Bloch, Fred Brooks, Mark Davis, Jeff Dean, Peter Deutsch, David Dill, Les Earnest, Brendan Eich, David Farber, Joan Feigenbaum, Michael Fischer, Bryan Ford, Matt Franklin, Matt Green, Alex Halderman, Martin Hellman, Nadia Heninger, Miguel de Icaza, Tanja Lange, Ed Lazowska, George Ledin, Patrick McDaniel, David Patterson, Vern Paxson, Thomas Ristenpart, Ron Rivest, Phillip Rogaway, Greg Rose, Guido van Rossum, Tom Shrimpton, Barbara Simons, Gene Spafford, Dan Wallach, Nickolai Zeldovich, Yan Zhu, Phil Zimmerman. (See also the EFF’s blog post.)
The technical and legal argument is based on the First Amendment: (1) Computer programs are a form of speech; (2) the Government cannot compel you to “say” something any more than it can prohibit you from expressing something. Also, (3) digital signatures are a form of signature; (4) the government cannot compel or coerce you to sign a statement that you don’t believe, a statement that is inconsistent with your values. Each of these four statements has ample precedent in Federal law. Combined together, (1) and (2) mean that Apple cannot be compelled to write a specific computer program. (3) and (4) mean that even if the FBI wrote the program (instead of forcing Apple to write it), Apple could not be compelled to sign it with its secret signing key. The brief argues,
By compelling Apple to write and then digitally sign new code, the Order forces Apple to first write a message to the government’s specifications, and then adopt, verify and endorse that message as its own, despite its strong disagreement with that message. The Court’s Order is thus akin to the government dictating a letter endorsing its preferred position and forcing Apple to transcribe it and sign its unique and forgery-proof name at the bottom.
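The technical heart of points (3) and (4) is ordinary public-key code signing: a device only installs an update whose signature verifies under the vendor’s public key, so producing an acceptable update requires the vendor’s private key, or the vendor’s cooperation. Here is a minimal sketch using Ed25519 via the Python `cryptography` package; the keys and “firmware” bytes are stand-ins, not Apple’s actual signing scheme.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()      # held only by the vendor
device_trusted_key = vendor_key.public_key()   # baked into every device

firmware = b"custom OS build that disables the passcode retry limits"
signature = vendor_key.sign(firmware)          # the step that requires the vendor's key

try:
    device_trusted_key.verify(signature, firmware)
    print("device would accept this update")
except InvalidSignature:
    print("device would reject this update")
```

The brief’s argument is that compelling this signing step amounts to compelling Apple to put its name, in an unforgeable way, on a statement it does not believe.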
What Your ISP (Probably) Knows About You
Earlier this week, I came across a working paper from Professor Peter Swire—a highly respected attorney, professor, and policy expert. Swire’s paper, entitled “Online Privacy and ISPs”, argues that ISPs have limited capability to monitor users’ online activity, for three reasons: (1) users are increasingly using many devices and connections, so any single ISP is the conduit of only a fraction of a typical user’s activity; (2) end-to-end encryption is becoming more pervasive, which limits ISPs’ ability to glean information about user activity; and (3) users are increasingly shifting to VPNs to send traffic.
An informed reader might surmise that this writeup relates to the reclassification of Internet service providers under Title II of the Telecommunications Act, which gives the FCC a mandate to protect private information that ISPs learn about their customers. This private information includes both personal information and information about a customer’s use of the service that is provided as a result of receiving service—sometimes called Customer Proprietary Network Information, or CPNI. One possible conclusion a reader might draw from this white paper is that ISPs have limited capability to learn information about customers’ use of their service and hence should not be subject to additional privacy regulations.
I am not taking a position in this policy debate, nor do I intend to make any normative statements about whether an ISP’s ability to see this type of user information is inherently “good” or “bad” (in fact, one might even argue that an ISP’s ability to see this information might improve network security, network management, or other services). Nevertheless, these debates should be based on a technical picture that is as accurate as possible. In this vein, it is worth examining Professor Swire’s “factual description of today’s online ecosystem” that claims to offer the reader an “up-to-date and accurate understanding of the facts”. The report certainly contains many facts, but it also omits important details about the “online ecosystem”. Below, I fill in what I see as some important missing pieces. Much of what I discuss below I have also sent verbatim in a letter to the FCC Chairman. I hope that the original report will ultimately incorporate some of these points.
[Update (March 9): Swire notes in a response that the report itself doesn’t contain technical inaccuracies. Although there are certainly many points that are arguable, they are hard to disprove without better data, so it is difficult to “prove” the inaccuracies. Even if we take it as a given that there are no inaccuracies, that’s a very different thing than saying that the report tells the whole story.]
[Read more…]
An analogy to understand the FBI’s request of Apple
After my previous blog post about the FBI, Apple, and the San Bernardino iPhone, I’ve been reading many other bloggers and news articles on the topic. What seems to be missing is a decent analogy to explain the unusual nature of the FBI’s demand and the importance of Apple’s stance in opposition to it. Before I dive in, it’s worth understanding what the FBI’s larger goals are. Cyrus Vance Jr., the Manhattan DA, states it clearly: “no smartphone lies beyond the reach of a judicial search warrant.” That’s the FBI’s real goal. The San Bernardino case is just a vehicle toward achieving that goal. With this in mind, it’s less important to focus on the specific details of the San Bernardino case, the subtle improvements Apple has made to the iPhone since the 5c, or the apparent mishandling of the iCloud account behind the San Bernardino iPhone.
Our Analogy: TSA Luggage Locks
When you check your bags in the airport, you may well want to lock them, to keep baggage handlers and other interlopers from stealing your stuff. But, of course, baggage inspectors have a legitimate need to look through bags. Your bags don’t have any right of privacy in an airport. To satisfy these needs, we now have “TSA locks”. You get a combination you can enter, and the TSA gets their own secret key that allows airport staff to open any TSA lock. That’s a “backdoor”, engineered into the lock’s design.
What’s the alternative? If you want the TSA to have the technical capacity to search a large percentage of bags, then there really isn’t an alternative. After all, if we used “real” locks, then the TSA would be “forced” to cut them open. But consider the hypothetical case where these sorts of searches were exceptionally rare. At that point, the local TSA could keep hundreds of spare locks, of all makes and models. They could cut off your super-duper strong lock, inspect your bag, and then replace the cut lock with a brand new one of the same variety. They could extract the PIN or key cylinder from the broken lock and install it in the new one. They could even rough up the new one so it looks just like the original. Needless to say, this would be a specialized skill and it would be expensive to use. That’s pretty much where we are in terms of hacking the newest smartphones.
Another area where this analogy holds up is all the people who will “need” access to the backdoor keys. Who gets the backdoor keys? Sure, it might begin with the TSA, but every baggage inspector in every airport, worldwide, will demand access to those keys. And they’ll even justify it, because their inspectors work together with ours to defeat smuggling and other crimes. We’re all in this together! Next thing you know, the backdoor keys are everywhere. Is that a bad thing? Well, the TSA backdoor lock scheme is only as secure as their ability to keep the keys a secret. And what happened? The TSA mistakenly allowed the Washington Post to publish a photo of all the keys, which makes it trivial for anyone to fabricate those keys. (CAD files for them are now online!) Consequently, anybody can take advantage of the TSA locks’ designed-in backdoor, not just all the world’s baggage inspectors.
For San Bernardino, the FBI wants Apple to retrofit a backdoor mechanism where there wasn’t one previously. The legal precedent the FBI seeks would create the capability to convert any luggage lock into a TSA backdoor lock. This would only be necessary if they wanted access to lots of phones, at a scale where their specialized phone-cracking team becomes too expensive to operate. This no doubt becomes all the more pressing for the FBI as modern smartphones get better and better at resisting physical attacks.
Where the analogy breaks down: If you travel with expensive stuff in your luggage, you know well that those locks have very limited resistance to an attacker with bolt cutters. If somebody steals your luggage, they’ll get your stuff, whereas that’s not necessarily the case with a modern iPhone. These phones are akin to luggage with some kind of self-destruct charge inside: force the luggage open and the contents will be destroyed. Another important difference is that much of the data that the FBI presumably wants from the San Bernardino phone can be obtained elsewhere, e.g., phone call metadata and cellular tower usage metadata. We have very little reason to believe that the FBI needs anything on that phone whatsoever, relative to the mountain of evidence that it already has.
Why this analogy is important: The capability to access the San Bernardino iPhone, as the court order describes it, is a one-off thing—a magic wand that converts precisely one traditional luggage lock into a TSA backdoor lock, having no effect on any other lock in the world. But as Vance makes clear in his New York Times opinion, the stakes are much higher than that. The FBI wants this magic wand, in the form of judicial orders and a bespoke Apple engineering process, to gain backdoor access to any phone in their possession. If the FBI can go to Apple to demand this, then so can any other government. Apple will quickly want to get itself out of the business of adjudicating these demands, so it will engineer in the backdoor feature once and for all, albeit under duress, and will share the necessary secrets with the FBI and with every other nation-state’s police and intelligence agencies. In other words, Apple will be forced to install a TSA backdoor key in every phone they make, and so will everybody else.
While this would be lovely for helping the FBI gather the evidence it wants, it would be especially lovely for foreign intelligence officers, operating on our shores, or going after our citizens when they travel abroad. If they pickpocket a phone from a high-value target, our FBI’s policies will enable any intel or police organization, anywhere, to trivially exercise any phone’s TSA backdoor lock and access all the intel within. Needless to say, we already have a hard time defending ourselves from nation-state adversaries’ cyber-exfiltration attacks. Hopefully, sanity will prevail, because it would be a monumental error for the government to require that all our phones be engineered with backdoors.
Apple, the FBI, and the San Bernardino iPhone
Apple just posted a remarkable “customer letter” on its web site. To understand it, let’s take a few steps back.
In a nutshell, one of the San Bernardino shooters had an iPhone. The FBI wants to root through it as part of their investigation, but they can’t do this effectively because of Apple’s security features. How, exactly, does this work?
- Modern iPhones (and also modern Android devices) encrypt their internal storage. If you were to just cut the Flash chips out of the phone and read them directly, you’d learn nothing.
- But iPhones need to decrypt that internal storage in order to actually run software. The necessary cryptographic key material is protected by the user’s password or PIN.
- The FBI wants to be able to exhaustively try all the possible PINs (a “brute force search”), but the iPhone was deliberately engineered with a “rate limit” to make this sort of attack difficult (see the back-of-the-envelope sketch after this list).
- The only other option, the FBI claims, is to replace the standard copy of iOS with something custom-engineered to defeat these rate limits, but an iPhone will only accept an update to iOS if it’s digitally signed by Apple. Consequently, the FBI convinced a judge to compel Apple to create a custom version of iOS, just for them, solely for this investigation.
- I’m going to ignore the legal arguments on both sides, and focus on the technical and policy aspects. It’s certainly technically possible for Apple to do this. They could even engineer their customized iOS build to check the serial number of the iPhone on which it’s installed, such that the backdoor would only work on the San Bernardino suspect’s phone, without being a general-purpose skeleton key for all iPhones.
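To see why the rate limit, rather than the encryption itself, is the obstacle, here is a back-of-the-envelope sketch; the per-guess times are assumptions for illustration, not measurements of any particular iPhone model:

```python
# Assumed per-guess costs, for illustration only.
SECONDS_PER_GUESS_UNTHROTTLED = 0.08    # roughly the hardware key-derivation cost
SECONDS_PER_GUESS_THROTTLED = 3600.0    # with escalating delays after failed attempts

for digits in (4, 6):
    guesses = 10 ** digits              # worst case: try every possible PIN
    fast = guesses * SECONDS_PER_GUESS_UNTHROTTLED
    slow = guesses * SECONDS_PER_GUESS_THROTTLED
    print(f"{digits}-digit PIN: ~{fast / 3600:.1f} hours unthrottled, "
          f"~{slow / 86400 / 365:.0f} years with escalating delays")
```

Without the rate limit (and the optional auto-erase feature), exhaustively guessing a short numeric PIN is a matter of hours; with it, the same search becomes impractical, which is why the FBI wants Apple to sign a build that removes it.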
With all that as background, it’s worth considering a variety of questions.
[Read more…]