Recognizing and Dealing with the Tension between Competition and Privacy Policies
Tech policy is in a moment of conflict. Some folks are calling for data to be shared to increase competition, while other folks are asking for far greater restrictions on data to improve privacy. We can serve both needs, privacy and competition, but we first need to have a conversation about how these policy needs interact and we may need to find less obvious solutions to the unique problems caused by clashing policies.
A starting point is to recognize that privacy and competition remedies can conflict with each other. One way to promote privacy is to limit access to information as much as possible. It’s like keeping a secret: the fewer people who know, the fewer opportunities for the secret to get out. This is how we often deal with privacy concerns: through regulations like the Video Privacy Protection Act that restrict access to sensitive information. However, a potential way to promote competition is to lower barriers to entry by increasing access to inputs. For example, antitrust agencies have long used compulsory licensing of intellectual property as a remedy. Similarly, data portability is currently being examined as a way to make it easier for consumers to change services, thereby increasing competition. A specific example is the ACCESS Act introduced by Senators Warner, Hawley, and Blumenthal.
These different policy needs can be balanced. A good starting question is just how necessary any particular data set is for a new entrant to compete. New entrants may not need old data or certain kinds of sensitive data to flourish, and they may be able to use alternative available sources for some data rather than relying on a potentially intrusive competition remedy. Understanding this dynamic should help guide policy that has to balance privacy needs with competition needs.
In other cases, conflicts can be mitigated through creative solutions. For example, some data sets have value unrelated to the identity of the people the data belongs to. These data sets can be shared at low risk if properly de-identified. Other issues can be addressed by putting users in control of decisions about their own privacy needs. These solutions may sound simple, but there are many potential pitfalls in their application. Fortunately, experts looking at this tension between competition and privacy seem to generally believe we can work through these issues as long as we recognize the problems.
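To make the de-identification idea concrete, here is a minimal sketch of one common technique, salted-hash pseudonymization, applied to a hypothetical usage record before sharing it. The field names (`user_id`, `item`, `rating`) and the choice of dropped fields are illustrative assumptions, not part of any actual proposal, and pseudonymization alone is often not enough to defeat re-identification.

```python
import hashlib
import secrets

# A random salt, kept secret by the data holder, prevents a recipient
# from reversing the hash by simply guessing user IDs.
SALT = secrets.token_bytes(16)

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a salted hash and drop
    fields not needed for the competitive use case."""
    return {
        "user": hashlib.sha256(SALT + record["user_id"].encode()).hexdigest(),
        "item": record["item"],
        "rating": record["rating"],
        # deliberately omitted: name, email, location, timestamps
    }

row = {"user_id": "alice@example.com", "name": "Alice",
       "item": "movie-42", "rating": 5}
shared = pseudonymize(row)
```

The design choice here mirrors the point in the text: the fewer fields that leave the original holder, the smaller the privacy risk, while the fields that carry competitive value can still flow.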
How competition remedies can impact privacy interests
There is an argument that new entrants need access to incumbents’ data in order to compete, but sharing this data raises privacy concerns. Balancing these risks raises a preliminary question: is a complete transfer of user data actually necessary to increase competition? Often it is not the data itself but the successful processing and application of that data that provides the value. This means a company may not need all of a user’s data for its applications, and some companies may simply lack the capability to use most, if not all, of the data provided to them.
If policy-makers choose to adopt data-sharing solutions, the question becomes how much data is necessary to increase competition. If less data is shared, there is less privacy risk. It is therefore useful to make data-sharing decisions based on the comparative value of each data set in terms of competition and privacy. If a data set is extremely useful for encouraging competition and carries little privacy concern, it may be an easy call to require access to it. However, data that is highly sensitive but does little to promote competition should probably not face access requirements.
There are a couple of other issues that should be contemplated in a data access remedy. The first is trust. Users presumably trusted the company when they gave it their nonpublic data, and that trust was hopefully based on the users’ view of the company’s privacy and security policies. A user may not have the same trust in a company that is granted access to their data through policy. This may not affect policies that give the user complete decision-making authority over their data, but it does raise other questions, such as how the user will be informed about differences among companies in privacy and data security. The second is security. Bad actors could exploit access policies to steal consumers’ data, either by tricking users into giving them access or by posing as users and approving a data transfer. Access remedies should promote security measures to prevent these abuses.
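One way to guard against the impersonation risk described above is to require that every transfer request carry proof of user approval. The sketch below is a hypothetical illustration using an HMAC-signed token; the key, identifiers, and functions are all invented for this example, and a real portability system would layer on OAuth-style consent flows, token expiry, and recipient vetting.

```python
import hashlib
import hmac

# Illustrative only: in practice this key would be managed securely.
SIGNING_KEY = b"incumbent-secret-key"

def issue_token(user_id: str, recipient: str) -> str:
    """Issued by the incumbent when the user approves a transfer
    to a specific recipient."""
    msg = f"{user_id}:{recipient}".encode()
    return hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()

def verify_transfer(user_id: str, recipient: str, token: str) -> bool:
    """Checked before any data is released. Binding the token to the
    recipient means it cannot be replayed by a different party."""
    expected = issue_token(user_id, recipient)
    # constant-time comparison resists timing attacks
    return hmac.compare_digest(expected, token)

token = issue_token("user-123", "new-entrant.example")
```

Because the token is bound to both the user and the approved recipient, a bad actor who intercepts it cannot redirect the transfer to itself.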
How privacy remedies impact entry and competition
Whenever a story breaks about a privacy misuse or breach, there is a tendency to demand that the source of that data clamp down on access to all user information. But restricting data transfer in this way can work against competition policy goals. For example, new companies may build business models on data provided by existing companies, and restricting that access could harm these new business models. In response, some companies may design systems to share such data safely with other companies. Others may shut down information sharing entirely in the name of protecting user privacy, cutting off the data that third-party companies depend on and harming their business interests.
Balancing policy interests
The paper offers a framework for starting to think through the issues and potential problems with data portability. There are three separate interests in this kind of data transaction: 1) the person requesting the transfer; 2) other people whose data is caught up in the data potentially being transferred; and 3) the entity receiving the data. Each of these interests has its own policy needs. For example, requesters should ideally be informed about any differences in how their data may be treated by the new entity, including security features. Who should provide this information? For non-requesters, how should their privacy be treated, and should they be informed of any transfer that may include their data? For receiving entities, how do we ensure that they are actually entitled to the requested data? In other words, could they be asking for more than they are entitled to, or could they have faked a user request in order to steal data? Who is responsible for data misuse after a transfer has occurred? And finally, are there ways for entities to receive data in the most privacy-friendly way?
Facebook offers suggestions on how to resolve some of these questions. For example, non-requesting users could be asked for consent to share associated data, or a system of de-identification could be used. Problems with entities receiving data could be resolved through an accreditation system maintained either by a government body or an independent organization. These problems appear to have solutions that account for both competition and privacy. More work is needed on this balancing, including, as noted above, understanding how specific data sets rate in terms of competitive usefulness and privacy sensitivity.
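The accreditation idea can be sketched very simply: before honoring a portability request, the data holder checks that the recipient appears in a registry (maintained by a regulator or independent body) and that the requested data categories fall within the recipient's accreditation. The registry contents and category names below are invented for illustration.

```python
# Hypothetical accreditation registry: recipient -> data categories
# it has been vetted to receive.
ACCREDITED = {
    "new-entrant.example": {"watch_history", "ratings"},
}

def request_allowed(recipient: str, categories: set) -> bool:
    """A request is honored only if the recipient is accredited and
    asks for no more than its accreditation covers."""
    allowed = ACCREDITED.get(recipient)
    return allowed is not None and categories <= allowed
```

This captures the "asking for more than they are entitled to" concern from the framework: an over-broad request fails even from an otherwise accredited recipient.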