
RightsCon 2018 – The Human Rights and Digital Technology Zeitgeist

Last week, activists, technologists, business leaders, policymakers, and government representatives came together in Toronto for RightsCon 2018, a three-day conference on pressing issues at the intersection of human rights and technology.

RightsCon included lively discussions about the importance of human rights considerations in the development of artificially intelligent systems, content moderation and misinformation online, and cross-border law enforcement demands for evidence.

Conference-goers tackled ethical approaches to artificial intelligence across the range of fields in which these systems are being deployed, and offered a variety of perspectives. In one panel, participants suggested transporting some of the lessons learned from the (lack of) international governance of Lethal Autonomous Weapons Systems to other arenas, including the autonomous generation of fake news and automated criminal profiling. Other panels discussed now-common concerns about the advent of machine learning and automated technologies, including:

● Incorporating ethical decision-making into research, development, and implementation processes by design,
● Examining the tension between machine learning systems’ need for vast quantities of training data and data protection rules, and
● Putting new methods for improving the explainability of machine learning “black boxes” into practice in the field.

But RightsCon conversations were not uniformly pessimistic about the potential human rights impacts of artificially intelligent systems. Human rights technologists and security professionals also highlighted the variety of ways machine learning tools are beginning to improve discourse and security online. For example, some bot-based approaches can help reinforce democratic institutions by reducing barriers to communication between governments and citizens, improving government transparency, and facilitating reporting on problems or abuses for increased accountability, while others can be deployed to combat misinformation in online communities. On the security front, machine learning systems are already being deployed to improve network security and have recently shown promise in identifying online interactions that may become hostile or abusive.
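
To make that last point concrete, the sketch below shows the basic shape of one such approach: a text classifier that scores comments for likely hostility so that flagged interactions can be routed to human moderators. This is a minimal, hypothetical illustration (toy data and a simple scikit-learn pipeline), not a description of any particular system presented at RightsCon.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, hand-labeled examples standing in for a real (much larger) moderation corpus.
comments = [
    "Thanks for sharing, this was really helpful!",
    "I disagree with your conclusion, but I see your point.",
    "You are an idiot and nobody wants you here.",
    "Get lost, people like you ruin every discussion.",
]
labels = [0, 0, 1, 1]  # 0 = civil, 1 = potentially hostile

# Word unigram/bigram features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Score a new interaction; anything above a chosen threshold goes to a human reviewer.
prob_hostile = model.predict_proba(["Nobody asked for your opinion."])[0][1]
print(f"Estimated probability of hostility: {prob_hostile:.2f}")
```

Real systems are far more sophisticated and raise their own accuracy and bias questions, which is one reason flagged content is typically reviewed by people rather than removed automatically.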

A related set of discussions examined the present-day challenges of moderating online content at scale. Definitions are a key aspect of these challenges. At RightsCon, some inventive panels asked audience members to consider what their respective societies and governments might characterize as “hate speech” or “extremist content,” along with the concomitant risks to civil and political rights, as distinct from the question of how companies might deploy technology to address such content. Other conversations focused on the relationship between activists, journalists, users, and technology companies, aiming both to build better understandings of existing content moderation practices and their motivations through transparency, and to clarify how stakeholders can contribute to companies’ policy development processes.

This discussion was capped by a preview of David Kaye’s upcoming report on state regulation and commercial moderation of user-generated online content. Kaye, who is the UN’s Special Rapporteur on free expression, examines how states should fulfill their primary duty to ensure an enabling environment for freedom of expression and access to information online, even in the face of contemporary threats such as “fake news” and online extremism. The Special Rapporteur is also conducting an in-depth investigation of how Internet companies moderate content, and argues that human rights law gives companies the tools to articulate their positions in ways that respect democratic norms and counter authoritarian demands.

Finally, another key track at RightsCon involved developing human rights standards for cross-border law enforcement access to data and evidence. With the passage of the CLOUD Act in the U.S. and the introduction of the proposed European E-Evidence Regulation, attention has turned to the negotiation and implementation of domestic laws and bilateral agreements governing cross-border law enforcement requests for data. Stakeholders at RightsCon held a number of conversations aimed at determining what human rights considerations, due process requirements, and rule-of-law protections ought to be included in the terms of bilateral agreements and in the final E-Evidence Regulation, potentially raising standards beyond those already contemplated in the CLOUD Act.

RightsCon merits attention because it is the starting point for many of the conversations that policymakers, advocates, and technology companies will have in the next few years. It’s an opportunity for human rights advocates to surface their concerns with existing policies and to work with other stakeholders to shape how new technologies develop and take root in societies worldwide.
