In October the European Court of Justice invalidated the 15-year-old “Safe Harbour” framework, which enabled commercial data to flow between Europe and the United States. The ruling was groundbreaking: it set in motion a domino effect that, unless addressed, may entail serious, uncomfortable consequences for Europe. Let me guide you through this domino effect towards data isolationism and, finally, point out some possible solutions.
By now you’ve probably seen an article or five taking down the viral Facebook “notice” appearing in people’s feeds, which purports to revoke rights from Facebook regarding one’s Facebook feed content and data (e.g., Jeff John Roberts here, or Caitlin Dewey here). This hoax reappears on Facebook with the regularity of some astronomical phenomenon, and is regularly debunked. But like a horror movie villain, it keeps coming back.
It is one thing to understand that this notice is ineffective. However, it is worth having a clear explanation why such unilateral proclamations aren’t enforceable for when this thing comes back again in six months dressed up with even more preposterous legal incantations citing the Rule Against Perpetuities.
More to the point, the fact that such a declaration is irrelevant actually turns out to be a pretty good thing for Internet users.
Over the past several months, the tech industry has been experiencing a terrible bout of déjà vu. In a campaign led by FBI Director James Comey, law enforcement and intelligence community voices have argued against the proliferation of ubiquitous strong encryption in consumer devices and communication platforms. By ubiquitous strong encryption, I mean both the expanded availability of device encryption for smartphones, and end-to-end encryption of communications protocols with only the sender and recipient (but no third parties) holding keys.
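The “only the sender and recipient hold keys” property described above can be illustrated with a toy sketch. This is emphatically not real cryptography: the stand-in “cipher” below just derives a keystream from a shared secret with SHA-256 and XORs it with the message, and the function names are invented for illustration. Real end-to-end systems use vetted primitives like AES-GCM with keys negotiated by a key exchange. The structural point, though, is the same: a relay in the middle handles only ciphertext, because it never holds the key.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a deterministic pseudo-random keystream from the shared key.
    # (A stand-in for a real cipher; do not use this in practice.)
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with the keystream; applying the same operation again decrypts.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are symmetric

# Sender and recipient share a key; the relay server in between does not.
shared_key = b"established via a key exchange only the endpoints run"

message = b"meet at noon"
ciphertext = encrypt(shared_key, message)

# The relay forwards bytes it cannot read; the recipient,
# holding the key, recovers the plaintext.
assert decrypt(shared_key, ciphertext) == message
```

Under this model, a subpoena served on the relay can yield only ciphertext, which is precisely the “going dark” scenario law enforcement describes.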
This sort of strong encryption has proliferated for a variety of reasons. Hacks and data breaches are larger, more consequential, and more prevalent than ever. The last two years have also been marked by controversy over widespread government surveillance in the U.S. and elsewhere. These circumstances have caused consumers to demand, and technology companies to develop, the tools to protect sensitive personal and financial information from those to whom consumers would prefer not to grant access.
Naturally, the renewed focus on product and platform security on behalf of consumers has led to conflict between the tech industry and law enforcement. Governments fear “going dark”—the idea that there will be some set of encrypted communications and content that they will not be able to access even after having obtained the necessary legal process to seek that information, which in the U.S. usually means a search warrant.
If “going dark” sounds somewhat familiar, that’s because these are largely the same fears that led to the first “Crypto Wars” in the United States.
The cross-border nature of the Internet disrupts the traditional notion of geographically defined national jurisdictions. Increasingly, contradictory privacy laws confuse consumers and force international companies to violate one country’s law in order to comply with another’s.
Three high-profile cases have privacy experts, and all the rest of us, really confused:
Case one: The European Parliament vs The U.S. Foreign Intelligence Surveillance Act (FISA)
The European Union is in the final stretch of agreeing on a reformed data protection framework. In the frenzy following the Snowden revelations, the European Parliament (EP) looked for ways to force the U.S. Government to reform its surveillance practices. One means for doing so is a well-intended article in the proposed EU general data protection regulation, popularly known as the “anti-FISA clause.” This provision, Article 43a, would severely limit the circumstances under which a company is allowed to provide third country authorities with Europeans’ data. Today, companies are often asked to respond to requests for data to be used in a criminal investigation or by a regulatory authority acting in the public interest, e.g., from third countries’ consumer and environmental agencies. As EU negotiators are about to conclude the data protection regulation, it is becoming evident that the EP’s proposal would place companies in legal limbo due to contradictory EU and U.S. laws.
There’s been a lot of buzz lately around facial recognition and what exactly that means to consumers. When the average consumer thinks of facial recognition technology, a Jason Bourne-esque scenario probably comes to mind, with high-tech devices constantly scanning crowds, identifying individuals, accessing vast databases, and using that recognition information to some nefarious end. Present use of facial recognition technology is far more benign and useful to consumers than some would have you think. The emergence of new photo sharing and storage apps like Google Photos and Facebook’s Moments demonstrates that current facial recognition tools are better suited to helping users categorize and share their photos, rather than populating an ominous law enforcement or commercial database. As such innocuous uses of facial recognition become more and more common, consumers’ comfort with and understanding of the technology will grow correspondingly. The next step is ensuring that privacy-focused regulators and legislators are able to develop frameworks that enable this growth and adapt as consumer expectations and facial recognition technology change.
The Google Photos app was recently released as a tool to streamline the photo backup and sharing process. It boasts free and unlimited photo storage and aims not only to keep similar events grouped together, but also to scan, identify, and easily share different photo subjects with others. Part of the identification process involves scanning and recognizing people, places, and things. However, the app doesn’t see faces in the same way humans do. Humans see faces and recognize specific people, whereas a computer scans an image and recognizes colors, patterns, and shapes. This process is called facial detection. Facial recognition is one step above facial detection in that it compares “known faces” or patterns to newly uploaded faces to see if there is a probable match.
Facebook’s Moments utilizes similar technology. Moments is a recent app that was launched to help organize and share photos with frequently photographed friends. When photos are uploaded to the app, Facebook’s technology scans for a face in the photo. If there is a face, then the app compares features and patterns of your profile picture and other tagged pictures to the newly uploaded photo. Like Google Photos, Moments looks for unique characteristics and patterns, such as the shape of your face or distance between facial features, to recognize and connect profiles. In essence, both apps use algorithms to create a system of recognizable models or templates for comparison, rather than referring to an on-file photograph. This pattern of connecting and “recognizing” faces creates tag suggestions of the subjects of the uploaded photo—but only if the photo uploader and picture subjects are friends.
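The template-matching idea both apps rely on can be sketched in a few lines. This is a toy illustration: the numeric “feature vectors” below are invented stand-ins for the measurements a real system extracts (face shape, distance between features), and the cosine-similarity metric and threshold are assumptions, not the actual algorithms Google or Facebook use. It shows the key distinction from the text: recognition means comparing a newly detected face against stored templates of known faces to find a probable match.

```python
import math

def cosine_similarity(a, b):
    # Compare two feature vectors by the angle between them (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

# "Known faces": templates built from previously tagged photos.
# The numbers are invented stand-ins for real facial measurements.
known_faces = {
    "alice": [0.9, 0.1, 0.4, 0.7],
    "bob":   [0.2, 0.8, 0.6, 0.1],
}

def recognize(new_face, threshold=0.95):
    # Facial *recognition*: compare a newly detected face against the
    # stored templates and return the best probable match, if any.
    best_name, best_score = None, threshold
    for name, template in known_faces.items():
        score = cosine_similarity(new_face, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

print(recognize([0.88, 0.12, 0.41, 0.69]))  # very close to alice's template
print(recognize([0.5, 0.5, 0.5, 0.5]))      # no template clears the threshold
```

Facial *detection*, the earlier step, would be the code that finds a face-shaped region in the image at all; only recognition involves matching it to a person.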
Both apps are a win for consumers. The idea is that quick recognition helps streamline the social sharing process for group events. Whereas people once might have followed a social event like a wedding or reunion with an endless email chain of shared pictures (or exchanged CDs or flash drives), users can instead allow Google Photos or Moments to group and categorize photos into events based on the subjects of the photos and additional metadata. Given that social networking sites and mobile devices are already the chief digital photo curation tools used by consumers, extending their existing functions via facial recognition-based apps seems like the sort of harmless, pro-consumer innovation that ought to be encouraged.
Last Wednesday, Federal Trade Commission (FTC) Chairwoman Edith Ramirez announced that on September 9 the FTC will hold the first seminar of its “Start with Security” campaign (which we previewed in March). The campaign is aimed at helping small and medium-sized companies improve their data security practices based on the knowledge the FTC has accumulated over a decade of enforcement action. Also last week, the FTC launched IdentityTheft.gov, a website that offers victims of identity theft tools to report and recover from identity theft and data breaches.
The FTC’s recent focus on privacy issues, particularly identity theft and data security, is a recognition of the priority consumers place on trust in the Internet. Trust in the integrity and security of the Internet and associated products and services is essential to its success as a platform for digital communication and commerce. One of the earliest government reports on the viability of the Internet for commerce said, in 1997, “[i]f Internet users do not have confidence that their communications and data are safe from unauthorized access or modification, they will be unlikely to use the Internet on a routine basis for commerce.”
Internet users continue to prioritize confidence in the security of digital services above all other privacy concerns online. In late 2013, CCIA commissioned a survey of Internet users that aimed to better identify the priorities and concerns of Internet users with respect to the handling of the information they share online. As far as privacy risks go, the study found that nothing is more important to Internet users than the security of their information online, in particular ensuring that their personal data is out of the hands of those who would do them harm.
Earlier this month, the Federal Trade Commission (“FTC” or “the Commission”) announced a new “Start with Security” campaign. As Chairwoman Edith Ramirez described in her announcement speech, the campaign will center on a series of resources and presentations that the FTC’s Consumer Protection Bureau will deliver to corporate groups, providing companies, especially small and medium-sized businesses, with best practices and other guidance on specific data security topics.
This is a welcome development. The “Start with Security” presentations are intended to fill a gap presented by the FTC’s current method of protecting consumers and ensuring good corporate data security practices, which embodies light-touch regulatory principles. Today, the FTC’s primary tool for recommending and enforcing reasonable behavior in the data security space is through the enforcement authority granted to it by Section 5 of the FTC Act, which allows it to stop unfair or deceptive acts or practices.
With unfair practices, the FTC brings a case when a particular company’s data security practices caused, or were likely to cause, a substantial injury that consumers could not reasonably avoid and that was not outweighed by benefits to consumers or competition. In the case of deceptive acts, the FTC brings cases when it believes a company has failed to support a promise to keep information secure with reasonable and appropriate processes.
Through settlements and guidances informed by the last decade of data security cases, the Commission has developed a set of commercially reasonable security practices that companies should implement in a manner appropriate for their respective businesses. This case-by-case approach allows for the flexible development of policy that reflects what companies are actually doing with consumers’ information and points to areas for improvement or minimum standards.
The need for innovation and start-up cultures is a given in the tech world: tech companies that don’t innovate don’t last very long. Outside this environment many companies and governments prefer to paint themselves as victims of the intense changes wrought by Internet and high technology. These parallel cultures mean most companies, and especially governments, find it hard to disrupt themselves (examples like gov.uk are relatively few). Where many governments fight innovations, Estonia decided to embrace them.
In this small Northern European state, the government is fostering a start-up culture and marrying it with radical administrative disruption. The President is a geek and the job of the few bureaucrats that exist is to “exploit the dynamic forces of private competition.“ Here digital technology breeds opportunity and jobs instead of chafing against red tape.
You’ve undoubtedly heard about Skype; perhaps Estonia’s 13-year-old eID system is a flicker in your memory. But are there shared ingredients? Can it be copied?
The first ingredient is hard to copy. Estonia has a population of 1.3 million – a close-knit community half the size of Brooklyn, which is easier to shape than a decentralized Germany or a mammoth United States.
The second is a historical legacy of both tech and direct action. The Soviet Union’s Institute of Cybernetics (still running today) was founded in the Estonian capital Tallinn in 1960. It’s no coincidence that the parents of some of Skype’s founders worked there, or that its headquarters is next door. Meanwhile, citizens participate each year in “Let’s Do It” day, banding together to fix things in their community with their own hands.
But it’s the final ingredient that matters most: partnerships for achieving scale.
Yesterday, the Federal Trade Commission (“FTC” or “the Commission”) released its long-awaited staff report on the Internet of Things (“IoT”), which was announced by Chairwoman Ramirez in her keynote at the 2015 State of the Net conference. Building on a workshop held in 2013, the Commission’s report is a comprehensive look at the promise of Internet-connected everyday objects, the risks that they might pose to consumers, and the Commission’s recommended regulatory and legislative paths forward. Fortunately for consumers, the Commission’s suggestions, born of a collaborative workshop with privacy groups and industry, do not approach the onerous attempts by the EU to regulate the IoT well before it gained a market foothold, which DisCo covered way back in 2012.
First, a short primer. The Internet of Things constitutes the growing wave of innovative technologies set to revolutionize the interactivity of the mundane products that we use every day. Smartwatches and other wearable devices get the most press, but introducing connectivity to other traditionally “dumb” devices in our environments will make them all more personal, adaptive, and efficient. Learning thermostats, networked refrigerators, Internet-enabled dog collars that track your pet’s location, and wearable fitness trackers are already on sale, with driverless cars, wireless pacemakers, and home automation systems making their way to the main floor of this year’s Consumer Electronics Show (“CES”).
The FTC highlighted the array of benefits of connected devices early in its report. Connected health devices can provide richer sources of data and improve preventative care for physicians and patients. An adaptive thermostat coupled with automated lighting and security can reduce energy costs for homeowners and allow for remote monitoring of homes. Connected cars can offer on-demand vehicle diagnostics to drivers and service facilities, provide real-time traffic information, and send automatic alerts to first responders when airbags are deployed. Self-driving cars may one day be widely available. Each additional type of connected device can provide another convenience or efficiency in the everyday lives of users.
Besides cold temperatures, inevitable musings about an Ovechkin-led Capitals being positioned to make a run at the Stanley Cup (followed by them falling off a cliff), and the occasional wayward arctic fowl, January in the District of Columbia comes with at least one constant ritual: the time-honored tradition of speculating on what will be included in the State of the Union. (And, in recent times, the SOTU-themed drinking games that flow from the anticipation… even the Washington Post has one this year.) Although some of the suspense has been dampened by media leaks and a multi-week presidential tour highlighting important SOTU themes, some surprises remain.
With political watchers fixated on what President Obama will and will not include in this year’s SOTU, I thought it was a good time for DisCo to lay out a potential tech policy roadmap for what to watch for this year in the President’s annual “setting priorities” exercise.