Algorithmic Transparency: End Secret Profiling

Disclose the basis of automated decision-making
  • Bayes
  • EPIC has ESP
  • Open the Code
  • Code Should Not Discriminate

Introduction

“At the intersection of law and technology - knowledge of the algorithm is a fundamental human right.” - EPIC President Marc Rotenberg

Algorithms are mathematical formulas and step-by-step procedures, implemented in computers, that process information and solve tasks. Advances in artificial intelligence (AI), machines capable of intelligent behavior, come from integrating algorithms that enable a system not only to follow instructions but also to learn.
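To make that distinction concrete, below is a minimal sketch in Python. The loan scenario, names, thresholds, and data are all invented for illustration; the point is only the contrast between a hand-written rule, whose logic anyone can read, and a learned model, whose logic lives in numeric weights inferred from data.

```python
# Illustrative only: an explicit rule vs. a "learned" rule.
# All names, thresholds, and data below are hypothetical.

# 1) A classic algorithm: the decision logic is explicit and inspectable.
def approve_by_rule(income, debt):
    """Anyone can read exactly why a decision was made."""
    return income - debt > 20_000

# 2) A machine-learning algorithm: the decision logic is a set of weights
#    inferred from examples, not written down by a programmer.
def train_perceptron(examples, labels, epochs=100, lr=0.01):
    """Fit weights w and bias b so that (w . x + b > 0) matches the labels."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(examples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                      # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Hypothetical past decisions: (income, debt) pairs and their outcomes.
X = [(50_000, 10_000), (30_000, 25_000), (80_000, 5_000), (20_000, 15_000)]
y = [1, 0, 1, 0]

w, b = train_perceptron(X, y)
print("learned weights:", w, "bias:", b)
# The "reason" for any future decision is now encoded in w and b,
# numbers that are opaque to the person being judged.
```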

As more decisions become automated and processed by algorithms, those processes become more opaque and less accountable. The public has a right to know the data processes that impact their lives so they can correct errors and contest decisions made by algorithms. Personal data collected from our social connections and online activities is used by governments and companies to make determinations about our ability to fly, obtain a job, or get a security clearance, and even to determine the severity of criminal sentences. These opaque, automated decision-making processes carry risks of secret profiling and discrimination, and they undermine our privacy and freedom of association.

Without knowledge of the factors that provide the basis for decisions, it is impossible to know whether governments and companies engage in practices that are deceptive, discriminatory, or unethical. Algorithmic transparency, for example, plays a key role in resolving questions about Facebook's role in Russian interference in the 2016 presidential election. Algorithmic transparency is therefore crucial to defending human rights and democracy online.

EPIC and Algorithmic Transparency

AI Policy Frameworks

The speed of AI innovation and its impact on society prompt serious concern for ethical review. There is currently no agreed-upon set of standards for ethical AI design and implementation. Researchers and technical experts have grappled with how to align AI research and development with fundamental human values and norms. In response, several organizations have begun to address the ethical issues in AI by creating principles and guidance documents. Below are four existing sets of principles that guide the development of safe AI.

Asilomar AI Principles

More than 100 AI researchers gathered in Asilomar, California to attend the Future of Life Institute’s “Beneficial AI 2017” conference. Through a multi-day survey and discussion process, attendees developed the Asilomar AI Principles, a list of 23 framework principles geared toward the safe and ethical development of AI. More than 1,200 AI/robotics researchers and 2,541 others have signed onto the principles. Notable signers include Tesla co-founder Elon Musk, theoretical physicist Stephen Hawking, and EPIC Advisory Board member Ryan Calo. The principles are divided into three themes: (1) Research Issues, (2) Ethics and Values, and (3) Longer-term Issues. They highlight concerns ranging from creating beneficial intelligence and ensuring safety and transparency to protecting privacy, avoiding an AI arms race, and preventing subversion by AI.

IEEE’s Guide to Ethically Aligned Design

In December 2016, the Institute of Electrical and Electronics Engineers (IEEE) and its Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems published a first-draft framework document on how to achieve ethically designed AI systems. Titled “Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems,” the 136-page document encourages technologists to prioritize ethical considerations when creating autonomous and intelligent technologies. Broken into eight sections, the document begins with a set of general principles and then moves on to specific issue areas, such as how to embed human values into systems, how to eliminate data asymmetry and grant individuals greater control over their personal data, and how to improve legal accountability for harms caused by AI systems. The general principles that apply to all types of AI/AS are: (1) embody the highest ideals of human rights; (2) prioritize the maximum benefit to humanity and the natural environment; and (3) mitigate risks and negative impacts as AI/AS evolve as socio-technical systems.

USACM’s Principles on Algorithmic Transparency and Accountability

In January 2017, the Association for Computing Machinery U.S. Public Policy Council (USACM) issued a statement and list of seven principles on algorithmic transparency and accountability. The USACM statement provides context on what algorithms are, how they make decisions, and the technical challenges and opportunities in addressing potentially harmful bias in algorithmic systems. The USACM believes that this set of principles, consistent with the ACM Code of Ethics, should be implemented during every phase of development to mitigate potential harms. The seven principles are: (1) awareness, (2) access and redress, (3) accountability, (4) explanation, (5) data provenance, (6) auditability, and (7) validation and testing.

Japan’s AI Research & Development Guidelines (AI R&D Guidelines)

In April 2016, at the G7 ICT Ministers’ Meeting in Japan, Sanae Takaichi, Minister of Internal Affairs and Communications (MIC) of Japan, proposed starting international discussions toward establishing “AI R&D guidelines” as a non-regulatory and non-binding international framework for AI research and development. In March 2017, the MIC released a report summarizing the progress of the draft AI R&D Guidelines for International Discussions, along with a draft of the guidelines with comments. One goal of the guidelines is to achieve a human-centered society in which people can live harmoniously with AI networks while human dignity and individual autonomy are respected. Modeled after the OECD Privacy Guidelines, the nine R&D principles in the guidelines are: (1) collaboration, (2) transparency, (3) user assistance, (4) controllability, (5) security, (6) safety, (7) privacy, (8) ethics, and (9) accountability.

White House Report on the Future of Artificial Intelligence

In May 2016, the White House announced a series of workshops and a working group devoted to studying the benefits and risks of AI. The announcement recognized the "array of considerations" raised by AI, including those "in privacy, security, regulation, [and] law." The White House established a Subcommittee on Machine Learning and Artificial Intelligence within the National Science and Technology Council.

Over the next three months, the White House co-hosted a series of four workshops on AI.

EPIC Advisory Board members Jack Balkin, danah boyd, Ryan Calo, Danielle Citron, Ed Felten, Ian Kerr, Helen Nissenbaum, Frank Pasquale, and Latanya Sweeney each participated in one or more of the workshops.

The White House Office of Science and Technology Policy issued a Request for Information in June 2016 soliciting public input on the subject of AI. The RFI indicated that the White House was particularly interested in "the legal and governance implications of AI," "the safety and control issues for AI," and "the social and economic implications of AI," among other issues. The White House received 161 responses.

On October 12, 2016, the White House announced two reports on the impact of artificial intelligence on the U.S. economy and related policy concerns: Preparing for the Future of Artificial Intelligence and the National Artificial Intelligence Research and Development Strategic Plan.

Preparing for the Future of Artificial Intelligence surveys the current state of AI, its applications, and emerging challenges for society and public policy. As Deputy U.S. Chief Technology Officer and EPIC Advisory Board member Ed Felten writes for the White House blog, the report discusses "how to adapt regulations that affect AI technologies, such as automated vehicles, in a way that encourages innovation while protecting the public" and "how to ensure that AI applications are fair, safe, and governable." The report concludes that "practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations."

The companion report, the National Artificial Intelligence Research and Development Strategic Plan, proposes a strategic plan for federally funded research and development in AI. The plan identifies seven priorities for federally funded AI research, including strategies to "understand and address the ethical, legal, and societal implications of AI" and "ensure the safety and security of AI systems."

The day after the reports were released, the White House held a Frontiers Conference co-hosted by Carnegie Mellon University and the University of Pittsburgh. Also in October, Wired magazine published an interview with President Obama and EPIC Advisory Board member Joi Ito.

EPIC's Interest

EPIC has promoted algorithmic transparency for many years and has litigated several cases on the front lines of AI. EPIC's cases include:

  • EPIC v. FAA, which EPIC filed against the Federal Aviation Administration for failing to establish privacy rules for commercial drones.
  • EPIC v. CBP, in which EPIC successfully sued U.S. Customs and Border Protection for documents relating to its use of secret, analytic tools to assign "risk assessments" to travelers.
  • EPIC v. DHS, to compel the Department of Homeland Security to produce documents related to a program that assesses "physiological and behavioral signals" to determine the probability that an individual might commit a crime.
  • EPIC v. DOJ, to compel the Department of Justice to produce documents concerning the use of “evidence-based risk assessment tools,” algorithms that try to predict recidivism, in all stages of sentencing (a hypothetical sketch of such a scoring tool follows this list).
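
To illustrate why access to these tools matters, here is a minimal sketch of how such a risk-assessment score might be computed. Every factor, weight, and value below is an assumption invented for this page; it is not the method of any actual tool at issue in EPIC's cases.

```python
# Hypothetical sketch of an "evidence-based risk assessment tool."
# The factors and weights are invented; real tools keep theirs secret,
# which is precisely the transparency problem.
import math

WEIGHTS = {                        # assumed, illustrative weights
    "prior_arrests": 0.30,
    "age_at_first_offense": -0.05,
    "employed": -0.40,             # 1 = employed, 0 = not
}
BIAS = -1.0

def recidivism_risk(defendant):
    """Return a probability-like score in (0, 1) via a logistic model."""
    z = BIAS + sum(WEIGHTS[k] * defendant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

score = recidivism_risk({"prior_arrests": 2,
                         "age_at_first_offense": 19,
                         "employed": 1})
print(f"risk score: {score:.2f}")
# Without access to WEIGHTS and BIAS, a defendant cannot tell whether a
# high score reflects legitimate factors or a proxy for race or income.
```

Because a defendant who sees only the final score has no way to audit the weights behind it, disclosure of the model, not just its output, is what cases like these seek.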

EPIC has also filed an amicus brief in Cahen v. Toyota discussing the risks inherent in connected cars, and has filed comments on issues of big data and algorithmic transparency.

EPIC also has a strong interest in algorithmic transparency in criminal justice. Secrecy of the algorithms used to determine guilt or innocence undermines faith in the criminal justice system. In support of algorithmic transparency, EPIC submitted FOIA requests to six states to obtain the source code of "TrueAllele," a software product used in DNA forensic analysis. According to news reports, law enforcement officials use TrueAllele test results to establish guilt, but individuals accused of crimes are denied access to the source code that produces the results.

