Let’s not punish digital gatekeepers for keeping customers safe
Imagine the scenario: a security guard at a hotel sees a notorious drug dealer enter the premises and faces two options. Let the dealer in knowingly, and the hotel becomes a renowned criminal hotspot, with the owners risking their reputation and customer trust, criminal fines, closure, or worse. Or the guard politely refuses the dealer entry and, if needed, calls the police.
In the digital world, many platforms face a similar conundrum. In the EU’s draft Digital Markets Act (DMA), rules are now being crafted that could put platforms, or what the DMA terms ‘gatekeepers’, in a position of legal double jeopardy: punished if they maintain order on their service, as a responsible and well-behaved security guard would, but also punished if they let bad actors in. This is an impossible situation that lawmakers now have the opportunity to correct.
There is no question that rules apply to anyone in a position of power – whether a security guard or a digital gatekeeper. Until now, when it comes to digital platforms, most of these rules have been set by the enforcement of competition law: a broad, objective, and relatively simple set of rules that requires an ex-ante, effects-based self-assessment of any behaviour to determine whether it is likely to harm consumer welfare. Conduct that is likely to cause such antitrust harm is prohibited and subject to remedies ex-post.
Clearly, digital gatekeepers can abuse their privileges and cause consumer harm, and some of them have. But gatekeepers also have a duty to keep their platforms safe and trustworthy for their users, and in general, this is also what customers and policymakers want them to do. This means keeping malware out of an app store, stopping counterfeit and dangerous goods from being sold online, blocking the spread of misinformation, and preventing child abuse or money laundering. Like bad actors at a hotel or bar, bad actors on digital platforms will often claim that their exclusion is ‘unfair’; these days that claim often brings antitrust scrutiny and the potential for billions of euros in fines. That is why an effects-based assessment is necessary to distinguish between good platform governance and anticompetitive conduct.
In the DMA, however, the experience gained over decades of antitrust enforcement, including with regard to digital platforms, is being used to define sweeping generalisations about behaviour that should be made illegal, without any recognition that, as the courts have confirmed, context matters. A security guard turning away a person based on the colour of their skin, creed or sexual orientation is obviously bad. A guard turning away an individual because they are a known criminal and likely to engage in illegal acts is legitimate. A person is turned away in both cases, but the legality of each refusal is very different. What the EU legislators are now proposing is akin to banning a security guard from turning anyone away, preventing them from acting in the public interest.
The point, as lawmakers consider the DMA and its proposed amendments, isn’t whether new rules should be put in place – that is an entirely legitimate exercise. However, they need to be conscious that a specific action is rarely, if ever, harmful in every instance just because it can be harmful in some. The enforcement of the rules needs to ensure that the specificities of each allegation, and the proportionality of enforcing the rules effectively, strike the right balance: allowing gatekeepers to do their job, whilst discouraging abuse.