Algorithmic Transparency: End Secret Profiling
Disclose the basis of automated decision-making
Introduction
“At the intersection of law and technology - knowledge of the algorithm is a fundamental human right” - EPIC President Marc Rotenberg
Algorithms are mathematical formulas and procedures implemented in computers to process information and solve tasks. Advances in artificial intelligence (AI), machines capable of intelligent behavior, result from integrating computer algorithms into AI systems, enabling those systems not only to follow instructions but also to learn.
As more decisions are automated and processed by algorithms, those processes become more opaque and less accountable. The public has a right to know the data processes that impact their lives so they can correct errors and contest decisions made by algorithms. Personal data collected from our social connections and online activities is used by governments and companies to make determinations about our ability to fly, obtain a job, or get a security clearance, and even to determine the severity of criminal sentences. These opaque, automated decision-making processes create risks of secret profiling and discrimination, and they undermine our privacy and freedom of association.
Without knowledge of the factors that provide the basis for decisions, it is impossible to know whether governments and companies engage in practices that are deceptive, discriminatory, or unethical. Algorithmic transparency, for example, plays a key role in resolving the question of Facebook's role in Russian interference in the 2016 Presidential Election. Algorithmic transparency is therefore crucial to defending human rights and democracy online.
Top News
- EPIC To Congress: Require Algorithmic Transparency For Google, Dominant Internet Firms: EPIC has sent a statement to the House Judiciary Committee in advance of a hearing on Google's business practices. EPIC said that "algorithmic transparency" should be required for Internet firms. EPIC explained that Google's acquisition of YouTube led to a skewing of search results after Google substituted its secret "relevance" ranking for the original objective ranking, based on hits and ratings. EPIC pointed out that Google's algorithm preferences YouTube's web pages over EPIC's in searches for videos concerning "privacy." Last year the European Commission found that Google rigged search results to preference its own online service. The Commission required Google to change its algorithm to rank its own shopping comparison the same way it ranks its competitors. The US Federal Trade Commission has failed to take similar action, even after receiving substantial complaints. EPIC also urged Congress to consider the Universal Guidelines for AI as a basis for federal legislation. (Dec. 10, 2018)
- EPIC to Senators: Universal Guidelines for Artificial Intelligence Are a Model Policy: In a statement to a Senate committee focused on technology and privacy, EPIC urged Senators to implement the Universal Guidelines for Artificial Intelligence in US law. The Guidelines maximize the benefits of AI, minimize the risk, and ensure the protection of human rights. More than 200 experts and 50 organizations, including the American Association for the Advancement of Science, have endorsed the Universal Guidelines. EPIC also expressed concern about the secrecy surrounding the Senate workshops on AI. In a petition earlier this year, EPIC and leading scientific organizations, including AAAS, ACM and IEEE, and nearly 100 experts urged the White House to solicit public comments on AI policy. EPIC told the Senate committee that the Senate must also ensure a public process for developing AI policy. EPIC has pursued several criminal justice FOIA cases, and FTC consumer complaints to promote transparency and accountability for AI decisionmaking. In 2015, EPIC launched an international campaign for Algorithmic Transparency. (Nov. 30, 2018)
More top news
- EPIC Files Amicus in Case Concerning Government Searches and Google's Email Screening Practices (Oct. 18, 2018)
EPIC has filed an amicus brief with the U.S. Court of Appeals for the Sixth Circuit in United States v. Miller, arguing that the Government must prove the reliability of Google's email screening technique. The lower court held that law enforcement could search any images that Google's algorithm had flagged as apparent child pornography. EPIC explained that a search is unreasonable when the government cannot establish the reliability of the technique. EPIC also warned that the government could use this technique "to determine if files contain religious viewpoints, political opinions, or banned books." EPIC has promoted algorithmic transparency for many years. EPIC routinely submits amicus briefs on the application of the Fourth Amendment to investigative techniques. EPIC previously urged the government to prove the reliability of investigative techniques in Florida v. Harris.
- EPIC Files Appeal with D.C. Circuit, Seeks Release of 'Predictive Analytics Report' (Oct. 12, 2018)
EPIC has appealed a federal district court decision denying the release of a "Predictive Analytics Report." The district court backed the Department of Justice when the agency claimed the "presidential communications privilege." But neither the D.C. Circuit Court of Appeals nor the Supreme Court has ever permitted a federal agency to invoke that privilege in a FOIA case. EPIC sued the agency in 2017 to obtain records about "risk assessment" tools in the criminal justice system. These controversial techniques are used to set bail, determine criminal sentences, and even contribute to determinations about guilt or innocence. EPIC has pursued numerous FOIA cases concerning algorithmic transparency, passenger risk assessment, "future crime" prediction, and proprietary forensic analysis. The D.C. Circuit will likely hear EPIC's appeal next year.
- International Privacy Convention Open for Signature (Oct. 11, 2018)
The Council of Europe has opened for signature updates to Convention 108, the international Privacy Convention. Among other changes, the modernized Convention requires prompt data breach notification, establishes national supervisory authorities to ensure compliance, permits transfers abroad only when personal data is sufficiently protected, and provides new user rights, including algorithmic transparency. Twenty-one nations have signed the treaty. Many more are expected to sign. EPIC and consumer coalitions have urged the United States to ratify the international Privacy Convention. The complete text of the modernized Convention will be available in the 2018 edition of the Privacy Law Sourcebook, available at the EPIC Bookstore.
- California Bans Anonymous Bots, Regulates Internet of Things (Oct. 2, 2018)
California Governor Jerry Brown recently signed two modern privacy laws, including a first-in-the-nation law governing the security of the Internet of Things. SB327 sets baseline security standards for IoT devices. EPIC recently submitted comments to the Consumer Product Safety Commission recommending similar action. Governor Brown also signed a bill banning anonymous bots. The law makes it illegal to use a bot, or automated account, to mislead California residents or to communicate without disclosing the identity of the actual operator. EPIC President Marc Rotenberg had earlier proposed that Asimov's Laws of Robotics be updated to require that robots reveal the basis of their decisions (Algorithmic Transparency) and that robots reveal their actual identity.
- EPIC To Congress: Require Algorithmic Transparency For Twitter, Dominant Internet Firms (Sep. 4, 2018)
In advance of a hearing on Twitter: Transparency and Accountability, EPIC has sent a statement to the House Energy and Commerce Committee. EPIC said that "algorithmic transparency" could help establish fairness, transparency, and accountability for much of what users see online. In a 2011 statement to the FTC during the investigation of Google, EPIC said that Google's acquisition of YouTube led to a skewing of search results after Google substituted its secret "relevance" ranking for the original objective ranking, based on hits and ratings. EPIC pointed out that it was then competing with the search giant for the rankings of "privacy" videos and that Google's algorithm preferences Google's web pages over EPIC's. The FTC took no action on EPIC's complaint. But last year the European Commission found that Google in fact rigged search results to give preference to its own shopping service. The Commission required Google to change its algorithm to rank its own shopping comparison the same way it ranks its competitors.
- EPIC To Congress: Public Participation Required for US Policy on Artificial Intelligence (Aug. 21, 2018)
In advance of a hearing concerning the Office of Science and Technology Policy, EPIC said that OSTP should ensure public participation in the development of AI policy. EPIC told the Senate Commerce Committee that Congress must also implement oversight mechanisms for the use of AI. EPIC said that Congress should require algorithmic transparency, particularly for government systems that involve the processing of personal data. In a recent petition to OSTP, EPIC, leading scientific organizations, including AAAS, ACM and IEEE, and nearly 100 experts urged the White House to solicit public comments on artificial intelligence policy. EPIC has pursued several criminal justice FOIA cases and FTC consumer complaints to promote transparency and accountability. In 2015, EPIC launched an international campaign for Algorithmic Transparency.
- EPIC to FTC: Algorithmic Decision-Making Requires Transparency (Aug. 21, 2018)
EPIC has advised the FTC on algorithmic decision tools, artificial intelligence, and predictive analytics for the hearings on "Competition and Consumer Protection in the 21st Century." In the comments, EPIC urged the FTC to (1) prohibit unfair and deceptive algorithms, (2) seek legislative authority for "algorithmic transparency" to establish consumer protection in automated decision-making, (3) provide guidance on the ethical design and implementation of algorithms, and (4) make public the "Universal Tennis Rating" algorithm that secretly scores young athletes. Calling on the Commission to act on EPIC's repeated complaints about the proprietary algorithm, which poses risks to children's privacy, EPIC said that "secret algorithms are unfair and deceptive," conceal bias, and deprive consumers of opportunities in the marketplace. EPIC champions "Algorithmic Transparency" and has advised Congress that algorithmic transparency is necessary for fairness and accountability.
- Court Blocks EPIC's Efforts to Obtain "Predictive Analytics Report" (Aug. 16, 2018)
A federal court in the District of Columbia has blocked EPIC's efforts to obtain a secret "Predictive Analytics Report" in a FOIA case against the Department of Justice. The court sided with the agency, which had withheld the report and asserted the "presidential communications privilege." Neither the Supreme Court nor the D.C. Circuit has ever permitted a federal agency to invoke that privilege in a FOIA case. EPIC sued the agency in 2017 to obtain records about "risk assessment" tools in the criminal justice system. These techniques are used to set bail, determine criminal sentences, and even contribute to determinations about guilt or innocence. Many criminal justice experts oppose their use. EPIC has pursued several FOIA cases concerning "algorithmic transparency," passenger risk assessment, "future crime" prediction, and proprietary forensic analysis. The case is EPIC v. DOJ (Aug. 14, 2018 D.D.C.). EPIC is considering an appeal.
- Bot Disclosure Act Would Promote Identification, Accountability (Jul. 19, 2018)
Sen. Dianne Feinstein (D-Calif.) has introduced S. 3127, the Bot Disclosure and Accountability Act of 2018. The bill directs the FTC to create a rule to require social media companies to disclose any social media bots on their platform. The bill also prohibits candidates and political parties from using bots. "This bill is designed to help respond to Russia's efforts to interfere in U.S. elections through the use of social media bots, which spread divisive propaganda," Feinstein said. Earlier this week, EPIC sent a statement to the House Judiciary Committee arguing that "algorithmic transparency" could help establish fairness, transparency, and accountability for much of what users see online. EPIC has also recommended identification requirements for drones.
- EPIC To Congress: Require Algorithmic Transparency For Dominant Internet Firms (Jul. 16, 2018)
In advance of a hearing on Filtering Practices of Social Media Companies, EPIC has sent a statement to the House Judiciary Committee. EPIC said that "algorithmic transparency" could help establish fairness, transparency, and accountability for much of what users see online. In 2011, EPIC sent a letter to the FTC stating that Google's acquisition of YouTube led to a skewing of search results after Google substituted its secret "relevance" ranking for the original objective ranking, based on hits and ratings. The FTC took no action on EPIC's complaint. But last year, after a seven-year investigation, the European Commission found that Google rigged search results to give preference to its own shopping service. The Commission required Google to change its algorithm to rank its own shopping comparison the same way it ranks its competitors.
- EPIC to Irish Data Protection Commission: Privacy Assessments Require Algorithmic Transparency (Jul. 5, 2018)
In comments to the Irish Data Protection Commission, EPIC proposed guidance for Data Protection Impact Assessments. The EU General Data Protection Regulation requires organizations to carefully assess the collection and use of personal data. EPIC explained that Data Protection Impact Assessments require the disclosure of the reason for the processing of personal data. EPIC also urged the Irish Privacy Commission to protect individuals against profiling and tracking by minimizing the collection of sensitive data. EPIC supports "Algorithmic Transparency" and brought FTC consumer complaints to promote accountability over secret algorithms. EPIC has also advised the UK Information Commissioner's Office on Data Protection Impact Assessments and GDPR implementation.
- EPIC Testifies at FEC Hearing on Online Political Ads, Urges Greater Transparency (Jun. 27, 2018)
The Federal Election Commission is holding a two-day hearing to hear expert testimony on the agency's proposed rule governing disclosures for political ads on the Internet. Christine Bannan, EPIC Administrative Law and Policy Fellow, will testify on the second day of the hearing. EPIC submitted multiple comments to the FEC urging the agency to promulgate rules that would require online political ads to disclose funders, as is required for traditional media ads. EPIC proposed that the FEC adopt "algorithmic transparency" procedures that would require advertisers to disclose the demographic factors behind targeted political ads, as well as the source and payment, and maintain a public directory of advertiser data. EPIC's Project on Democracy and Cybersecurity, established after the 2016 presidential election, seeks to safeguard democratic institutions from various forms of cyber attack.
- EPIC To Congress: Require Transparency for Use of AI (Jun. 25, 2018)
In advance of a hearing on "Artificial Intelligence - With Great Power Comes Great Responsibility," EPIC told the House Science Committee that Congress must implement oversight mechanisms for the use of AI. EPIC said that Congress should require algorithmic transparency, particularly for government systems that involve the processing of personal data. EPIC said that Congress should amend the E-Government Act to require disclosure of the "logic" of algorithms that profile individuals. EPIC also said that the White House Select Committee on Artificial Intelligence should be open to public comment. EPIC has pursued several criminal justice FOIA cases and FTC consumer complaints to promote transparency and accountability. In 2015, EPIC launched an international campaign for Algorithmic Transparency.
- EPIC Calls on FEC to Pass Stronger Transparency Rules for Political Ads (May. 24, 2018)
EPIC submitted comments on the Federal Election Commission's (FEC) proposed rules for political ads on the internet. The FEC proposed two alternative rules, one of which would hold internet companies to the same standard as traditional media companies and one of which would make exceptions for online ads. EPIC stated: "FEC rules should be technology-neutral and consistent across media platforms." EPIC also recommended that the FEC adopt algorithmic transparency rules, which would require advertisers to disclose the demographic factors behind targeted political ads, as well as the source and payment, and maintain a public directory of advertiser data. EPIC's Project on Democracy and Cybersecurity, established after the 2016 presidential election, seeks to safeguard democratic institutions from various forms of cyber attack.
- EPIC Renews Call For FTC To Stop Secret Scoring of Young Athletes (May. 23, 2018)
EPIC has urged the Federal Trade Commission to act on a Complaint EPIC previously filed with the FTC about the secret scoring of young tennis players. The EPIC complaint concerns the "Universal Tennis Rating," a proprietary algorithm used to assign numeric scores to tennis players, many of whom are children under 13. According to EPIC, "the UTR score defines the status of young athletes in all tennis-related activity; impacts opportunities for scholarship, education and employment; and may in the future provide the basis for 'social scoring' and government rating of citizens." EPIC pointed to objective, provable, and transparent rating systems such as ELO as far preferable. EPIC has championed "Algorithmic Transparency" as a fundamental human right. Earlier this month, the Council of Europe adopted the modernized Privacy Convention that establishes a legal right for individuals to obtain "knowledge of the reasoning" for the processing of personal data.
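The contrast with Elo is easy to make concrete: unlike a proprietary score, Elo's update rule is published, so any player can recompute and verify a rating change. A minimal sketch of the standard Elo formula follows (the K-factor of 32 is an illustrative choice, not a value tied to any particular rating body):

```python
def expected_score(r_a, r_b):
    """Expected score of player A against player B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update_elo(r_a, r_b, score_a, k=32):
    """Return both players' updated ratings after one match.

    score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.
    The update is zero-sum: A gains exactly what B loses.
    """
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b

# Two evenly rated players; A wins and gains k/2 points.
print(update_elo(1500, 1500, 1.0))  # → (1516.0, 1484.0)
```

Because every term in the update is public, a disputed score can be audited from the match history alone, which is precisely the property EPIC argues a rating system for minors should have.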
- White House Establishes AI Advisory Committee (May. 10, 2018)
The White House has established the "Select Committee on Artificial Intelligence" to advise the President and coordinate AI policies among executive branch agencies. The Office of Science and Technology Policy, NSF, and DARPA will lead the interagency committee. According to the White House, the goals of the Committee are to (1) prioritize funding for AI research and development; (2) remove barriers to AI innovation; (3) train the future American workforce; (4) achieve strategic military advantage; (5) leverage AI for government services; and (6) lead international AI negotiations. The Committee will also coordinate efforts across federal agencies to research and adopt technologies such as autonomous systems, biometric identification, computerized image and video analysis, machine learning, and robotics. It is unclear whether the Committee will include public perspectives in its work. In 2014, EPIC, joined by 24 consumer privacy, public interest, scientific, and educational organizations, petitioned the OSTP to accept public comments on a White House project concerning Big Data. The petition stated, "The public should be given the opportunity to contribute to the OSTP's review of 'Big Data and the Future of Privacy' since it is their information that is being collected and their privacy and their future that is at stake." In 2015 EPIC launched an international campaign for Algorithmic Transparency and recently urged Congress to establish oversight mechanisms for the use of AI by federal agencies.
- EPIC Urges Congress to Require Algorithmic Transparency For Dominant Internet Firms (Apr. 25, 2018)
In advance of a hearing on Filtering Practices of Social Media Companies, EPIC has sent a statement to the House Judiciary Committee. EPIC said that "algorithmic transparency" could help establish fairness, transparency, and accountability for much of what users see online. In 2011, EPIC sent a letter to the FTC stating that Google's acquisition of YouTube led to a skewing of search results after Google substituted its secret "relevance" ranking for the original objective ranking, based on hits and ratings. The FTC took no action on EPIC's complaint. But last year, after a seven-year investigation, the European Commission found that Google rigged search results to give preference to its own shopping service. The Commission required Google to change its algorithm to rank its own shopping comparison the same way it ranks its competitors.
- EPIC Tells House Committee: Require Transparency for Government Use of AI (Apr. 19, 2018)
In advance of a hearing on "Game Changers: Artificial Intelligence Part III, Artificial Intelligence and Public Policy," EPIC told the House Oversight Committee that Congress must implement oversight mechanisms for the use of AI by federal agencies. EPIC said that Congress should require algorithmic transparency, particularly for government systems that involve the processing of personal data. EPIC also said that Congress should amend the E-Government Act to require disclosure of the logic of algorithms that profile individuals. EPIC made similar comments to the UK Privacy Commissioner on issues facing the EU under the GDPR. A recent GAO report explored challenges with AI, including the risk that machine-learning algorithms may not comply with legal requirements or ethical norms. EPIC has pursued several criminal justice FOIA cases and FTC consumer complaints to promote transparency and accountability. In 2015, EPIC launched an international campaign for Algorithmic Transparency.
- EPIC to UK Privacy Commissioner: Data Protection Assessments Require Algorithmic Transparency (Apr. 13, 2018)
EPIC has submitted extensive comments on proposed guidance for Data Protection Impact Assessments. The new European Union privacy law, the GDPR, requires organizations to carefully assess the collection and use of personal data. In comments to the UK privacy commissioner, EPIC said that disclosure of the technique for decision-making is a core requirement for Data Protection Impact Assessments. EPIC supports "Algorithmic Transparency." EPIC has pursued criminal justice FOIA cases and FTC consumer complaints to promote transparency and accountability. EPIC has warned Congress of the risks of "citizen scoring."
- Congress Launches Caucus on Artificial Intelligence (Apr. 3, 2018)
Congressional leaders have announced the establishment of the Congressional Artificial Intelligence Caucus. The Caucus will bring together experts from academia, government, and the private sector to inform policymakers of the technological, economic, and social impacts of advances in AI. The Congressional AI Caucus is bipartisan and co-chaired by Congressmen John Delaney (D-MD) and Pete Olson (R-TX). This is one of several initiatives in Congress to pursue AI policy objectives. Rep. Delaney introduced the FUTURE of Artificial Intelligence Act (H.R. 4625) and Rep. Elise Stefanik (R-NY) introduced a bill (H.R. 5356) that would create the National Security Commission on AI. In 2015, EPIC launched an international campaign for Algorithmic Transparency. EPIC has also warned Congress about the growing use of opaque and unaccountable techniques in automated decision-making.
- French President: Algorithmic Transparency Key to National AI Strategy (Apr. 2, 2018)
French President Emmanuel Macron has expressed support for "Algorithmic transparency" as a core democratic principle. In an interview with Wired magazine, President Macron said that algorithms deployed by the French government and companies that receive public funding will be open and transparent. President Macron emphasized, "I have to be confident for my people that there is no bias, at least no unfair bias, in this algorithm." President Macron's statement echoed similar comments in 2016 by German Chancellor Angela Merkel: "These algorithms, when they are not transparent, can lead to a distortion of our perception, they narrow our breadth of information." EPIC has a longstanding campaign to promote transparency and to end secret profiling. At UNESCO headquarters in 2015, EPIC said that algorithmic transparency should be a fundamental human right. In recent comments to UNESCO, EPIC highlighted the risk of secret profiling, content filtering, the skewing of search results, and adverse decision-making based on opaque algorithms.
- EPIC to UNESCO: Algorithmic Transparency is an Internet Universality Indicator (Mar. 16, 2018)
EPIC has provided comments to UNESCO on a proposed framework for Internet Universality Indicators. The UNESCO framework emphasizes Rights, Openness, Accessibility, and Multistakeholder participation. UNESCO said that the framework will help guide protections for fundamental rights. EPIC proposed "Algorithmic Transparency" as a key indicator of Internet Universality. EPIC highlighted the risk of secret profiling, content filtering, the skewing of search results, and adverse decisionmaking based on opaque algorithms. EPIC has worked closely with UNESCO for over 20 years on Internet policy issues. At UNESCO headquarters in 2015, EPIC said that algorithmic transparency should be a fundamental human right.
- Senators Question Intelligence Officials on Russian Election Interference (Feb. 13, 2018)
The Senate Intelligence Committee held a hearing today with top officials from all U.S. intelligence agencies: the Office of the Director of National Intelligence, CIA, NSA, Defense Intelligence Agency, FBI, and the National Geospatial-Intelligence Agency. The officials unanimously agreed that Russia interfered in the 2016 election and will interfere in the 2018 election, noting that they have already observed attempts to influence upcoming elections. Director of National Intelligence Dan Coats said: "There should be no doubt that Russia perceived its past efforts as successful and views the 2018 U.S. midterm elections as a potential target for Russian influence operations." EPIC launched the Project on Democracy and Cybersecurity after the 2016 presidential election to safeguard democratic institutions. EPIC is currently pursuing several FOIA cases concerning Russian interference, including EPIC v. FBI (cyberattack victim notification), EPIC v. ODNI (Russian hacking), EPIC v. IRS (release of Trump's tax returns), and EPIC v. DHS (election cybersecurity). EPIC also provided comments to the Federal Election Commission to improve transparency of election advertising on social media.
- NYC Establishes Algorithm Accountability Task Force (Dec. 21, 2017)
New York City has passed the first bill to examine the discriminatory impacts of "automated decision systems." A task force will develop recommendations for how to make the city's algorithms fairer and more transparent. James Vacca, the bill's sponsor, said "If we're going to be governed by machines and algorithms and data, well, they better be transparent." EPIC supports algorithmic transparency and has opposed systemic bias in "risk assessment" tools used in the criminal justice system. EPIC has filed Freedom of Information lawsuits to obtain information about "predictive policing" and "future crime prediction" algorithms. EPIC President Marc Rotenberg has called for laws that mandate algorithmic transparency and prohibit automated decision-making that results in discrimination.
- EPIC FOIA: Justice Department Admits Algorithmic Sentencing Report Doesn't Exist (Dec. 15, 2017)
The Justice Department, in response to an EPIC FOIA lawsuit, has admitted that the United States Sentencing Commission never produced an evaluation of "risk assessment" tools in criminal sentencing. In 2014, Attorney General Eric Holder expressed concern about bias in criminal sentencing "risk assessments" and called on the Sentencing Commission to study the problem and produce a report. But after EPIC requested that study and sued the DOJ to obtain it, the DOJ conceded that the report was never produced. EPIC did obtain emails confirming the existence of a 2014 DOJ report about "predictive policing" algorithms, but the agency also withheld that report. "Risk assessments" are secret techniques used to set bail, to determine criminal sentences, and even to make decisions about guilt or innocence. EPIC has pursued several FOIA cases to promote "algorithmic transparency," including cases on passenger risk assessment, "future crime" prediction, and proprietary forensic analysis.
- Support for Bills Establishing Oversight of AI Grows in Congress (Dec. 12, 2017)
Senators Maria Cantwell (D-WA) and Brian Schatz (D-HI) are planning legislation to establish new oversight committees for the use of AI. Cantwell's bill, the Future of Artificial Intelligence Act of 2017, is cosponsored by Senators Ed Markey (D-MA) and Todd Young (R-IN) and would establish an AI committee at the Commerce Department. A companion bill in the House is sponsored by Representatives John Delaney (D-MD) and Pete Olson (R-TX), co-chairs of the Artificial Intelligence Caucus. Schatz has announced his intent to introduce a bill creating an independent AI commission. In 2015, EPIC launched an international campaign in support of Algorithmic Transparency and has warned Congress about the use of opaque techniques in automated decision-making.
- EPIC Urges Congress to Regulate AI Techniques, Promotes 'Algorithmic Transparency' (Dec. 12, 2017)
In advance of a hearing on "Digital Decision-Making: The Building Blocks of Machine Learning and Artificial Intelligence," EPIC warned a Senate committee that many organizations now make decisions based on opaque techniques they don't understand. EPIC told Congress that algorithmic transparency is critical for democratic accountability. In 2015, EPIC launched an international campaign in support of Algorithmic Transparency. In a 2015 speech at UNESCO, EPIC President Marc Rotenberg called knowledge of the algorithm "a fundamental human right." Earlier this year, EPIC filed a complaint with the FTC that challenged the secret scoring of athletes by Universal Tennis. EPIC told the FTC that it "seeks to ensure that all rating systems concerning individuals are open, transparent and accountable."
- EPIC Promotes 'Algorithmic Transparency,' Urges Congress to Regulate AI Techniques (Nov. 28, 2017)
In advance of a hearing on "Algorithms: How Companies' Decisions About Data and Content Impact Consumers," EPIC warned a Congressional committee that many organizations now make decisions based on opaque techniques they don't understand. EPIC told Congress that algorithmic transparency is critical for democratic accountability. In 2015, EPIC launched an international campaign in support of Algorithmic Transparency. In a 2015 speech at UNESCO, EPIC President Marc Rotenberg called knowledge of the algorithm "a fundamental human right." Earlier this year, EPIC filed a complaint with the FTC that challenged the secret scoring of athletes by Universal Tennis. EPIC told the FTC that it "seeks to ensure that all rating systems concerning individuals are open, transparent and accountable."
- After Public Pressure, FEC To Begin Rulemaking On Online Ad Transparency (Nov. 16, 2017)
After receiving over 150,000 public comments, the Federal Election Commission voted unanimously to make new rules governing online political ad disclosures. EPIC, numerous other organizations, and lawmakers pressed the FEC to require transparency for online ads to combat foreign interference in U.S. elections. The FEC had solicited public comments on its internet disclosure rules three times in six years before finally taking action. A group of 15 Senators wrote, "The FEC must close loopholes that have allowed foreign adversaries to sow discord and misinform the American electorate." And a group of 18 members of Congress urged the FEC to "address head-on the topic of illicit foreign activity in U.S. elections." EPIC suggested the FEC go a step beyond simple disclosures and require "algorithmic transparency" for online platforms that deliver targeted ads to voters. Several senators have also introduced a bipartisan bill that would require the same disclosures for online ads as for television and radio. EPIC is fully engaged in protecting the integrity of elections with its Project on Democracy and Cybersecurity.
- EPIC, Coalition Oppose Government's 'Extreme Vetting' Proposal (Nov. 16, 2017)
EPIC and a coalition of civil rights organizations have sent a
letter to the Acting Secretary of Homeland Security strongly opposing the
Extreme Vetting Initiative. A similar
letter was sent by technical experts. The government's 'Extreme Vetting' initiative uses opaque procedures, secret profiles, and obscure data, including social media posts, to review visa applicants and make final determinations. EPIC has warned against both the government's use of social media data and
secret algorithms to profile individuals for decision making purposes. EPIC is also pursuing a
FOIA request for details on the relationship between the Immigration and Customs Enforcement agency and Palantir, a company that provides software to analyze large amounts of data.
- Consumer Bureau Proposes Policy Guidance for Data Aggregation Services (Nov. 16, 2017)
The Consumer Financial Protection Bureau recently set out
guidance for financial services that aggregate consumer data. The Bureau outlined Consumer Protection Principles that "express the Bureau's vision for realizing a robust, safe, and workable data aggregation market that gives consumers protection, usefulness, and value." The Consumer Protection Principles for aggregated consumer data services are: (1) consumer access to information, (2) usability and limited scope of access by third parties, (3) consumer control and informed consent, (4) authorizing payments, (5) security, (6) access transparency, (7) accuracy, (8) ability to dispute and resolve unauthorized access, and (9) efficient and effective accountability mechanisms. EPIC has
urged Congress to establish privacy and data security standards for consumer services and has championed
algorithmic transparency. In testimony before Congress, EPIC Board member Professor Frank Pasquale
explained that the use of secret algorithms often has adverse consequences for consumers.
- Senators Urge FEC to Promote Transparency in Online Ads (Nov. 13, 2017)
A group of 15 Senators led by Mark Warner (D-VA), Amy Klobuchar (D-MN), and Claire McCaskill (D-MO) have
urged the Federal Election Commission to improve transparency for online political ads. The Senators stated that "the FEC can and should take immediate and decisive action to ensure parity between ads seen on the internet and those on television and radio." The Senators emphasized how "Russian operatives used advertisements on social media platforms to sow division and discord" during the 2016 election. EPIC provided
comments to the FEC calling for
"algorithmic transparency" and the disclosure of who paid for online ads. Senators Klobuchar, Warner, and McCain (R-AZ) have also introduced a
bipartisan bill that would require the same disclosures for online political advertisements as for those on television and radio. EPIC's
Project on Democracy and Cybersecurity, established after the 2016 presidential election, seeks to promote election integrity and safeguard democratic institutions from various forms of cyber attack.
- EPIC Promotes 'Algorithmic Transparency' for Political Ads (Nov. 3, 2017)
In
comments to the Federal Election Commission, EPIC urged new
rules to require transparency for online political ads. EPIC said voters should "know as much about advertisers as advertisers know about voters." EPIC called for
algorithmic transparency which would require advertisers to disclose the demographic factors behind targeted political ads, as well as the source and payment. The FEC
reopened a comment period on proposed rules "in light of developments." This week representatives from Facebook, Twitter and Google testified at two
Senate hearings on the role that social media played in Russian meddling in the 2016 election. Senators Klobuchar (D-MN), Warner (D-VA), and McCain (R-AZ) have also introduced a
bipartisan bill that would require increased disclosures for online political advertisements. EPIC's
Project on Democracy and Cybersecurity, established after the 2016 presidential election, seeks to safeguard democratic institutions from various forms of cyber attack.
- EPIC FOIA: EPIC Uncovers Report on "Predictive Policing" but DOJ Blocks Release (Nov. 1, 2017)
EPIC has just received new documents in a FOIA case against the Department of Justice; however, the agency is refusing to release reports about the use of
"risk assessment" tools in the criminal justice system. In 2014, the Attorney General called on the U.S. Sentencing Commission to review the use of "risk assessments" in criminal sentencing, expressing concern about potential bias. EPIC
requested that document and
filed suit against the DOJ to obtain it, but the agency failed to release the report by a court-ordered
deadline. EPIC did obtain
emails confirming the existence of a 2014 DOJ report about "predictive policing" algorithms, but the agency also
withheld that report. "Risk assessments" are secret techniques used to set bail, to determine criminal sentences, and even decide guilt or innocence. EPIC has pursued several FOIA cases to promote
algorithmic transparency, including cases on
passenger risk assessment,
"future crime" prediction, and
proprietary forensic analysis.
- At OECD, EPIC Renews Call for Algorithmic Transparency (Oct. 27, 2017)
Speaking at the OECD conference
"Intelligent Machines, Smart Policies," EPIC President Marc Rotenberg urged support for
Algorithmic Transparency. "We must establish this principle of accountability as the cornerstone of AI policy," said Mr. Rotenberg. Rotenberg spoke in support of Algorithmic Transparency at the
2014 OECD Global Forum for the Knowledge Economy in Tokyo. EPIC is now working with OECD member states,
NGOs, business groups, and technology experts on the development of an AI policy framework, similar to earlier OECD policy frameworks on
privacy,
cryptography, and
critical infrastructure protection.
- Mattel Cancels "Aristotle," an Internet Device that Targeted Children (Oct. 5, 2017)
Mattel will
scrap its plans to sell Aristotle, an Amazon Echo-type device that collects and stores data from
young children. The Campaign for a Commercial-Free Childhood
sent a
letter and 15,000 petition signatures to the toymaker, warning of privacy and childhood development concerns. CCFC said that "young children shouldn't be encouraged to form bonds and friendships with data-collecting devices." Senator Markey (D-MA) and Representative Barton (R-TX) also
chimed in, demanding to know how Mattel would protect families' privacy. EPIC
backed the CCFC campaign and urged the FTC in 2015 to regulate
"always-on" Internet devices. A pending
EPIC complaint at the FTC concerns the secret scoring of young athletes.
- NGOs to Meet with Privacy Commissioners at Public Voice Event in Hong Kong (Sep. 19, 2017)
The Public Voice will host an
event with NGOs and Privacy Commissioners at the
39th International Conference of Data Protection and Privacy Commissioners in Hong Kong. "Emerging Privacy Issues: A Dialogue Between NGOs & DPAs" will address emerging privacy issues, including
biometric identification,
algorithmic transparency, border surveillance, the India privacy decision, and implementation of the
GDPR. Speakers include Chairwoman Isabelle Falque-Pierrotin of the CNIL and
Article 29 Working Party, Commissioner John Edwards of New Zealand, and Director Eduardo Bertoni of Argentina. Also participating will be representatives of
Access Now, EPIC,
GP Digital,
Privacy International, and the
World Privacy Forum. The
Public Voice, established in 1996, facilitates public participation in decisions concerning the future of the Internet.
- EPIC Urges Senate To Establish Data Protection Standards For Financial Technologies (Sep. 11, 2017)
In advance of a
hearing on financial technology, EPIC
recommended that the Senate Committee establish privacy standards for financial companies that use social media and
secret algorithms to make determinations about consumers. In light of the recent Equifax breach, EPIC proposed that the Committee make privacy and security its top priorities. Earlier this year, EPIC submitted a similar
statement to the House Committee on Energy and Commerce. EPIC also recently filed a
complaint with the CFPB regarding "starter interrupt devices" deployed by auto lenders to remotely disable cars when individuals are late on their payments.
See also the Testimony of Professor Frank Pasquale on "Exploring the Fintech Landscape."
- EPIC FOIA: EPIC Seeks Details of ICE, Palantir Deal (Aug. 15, 2017)
EPIC has submitted a Freedom of Information Act
request to Immigration and Customs Enforcement seeking details of the agency's relationship with
Palantir. The federal agency contracted with the Peter Thiel company to establish vast databases of personal information, and develop new capabilities for searching, tracking, and profiling. EPIC is seeking the ICE contracts with Palantir, as well as training materials, reports, analysis, and other documents. The ICE
Investigative Case Management System and the
FALCON system now connect personal data across federal government, oftentimes in violation of the federal
Privacy Act. The Intercept
reported that FALCON "will eventually give agents access to more than 4 billion 'individual data records.'" In the FOIA lawsuit
EPIC v. CBP, EPIC uncovered Palantir's role in the Analytical Framework for Intelligence, a program that assigns "risk assessment" scores to travelers. EPIC continues to advocate for
greater transparency in computer-based decision making.
- Supreme Court Won't Review Ruling on Secretive Sentencing Algorithms (Jun. 26, 2017)
The Supreme Court has declined to review the
ruling of a state court that upheld the use of a secret algorithm to determine a criminal sentence. The petitioner Loomis argued that he was not able to assess the fairness or accuracy of the legal judgment, and that the secret
"risk assessment" algorithm therefore violated his fundamental Due Process rights. EPIC has pursued several related cases to establish the principle of
algorithmic transparency in the United States. In EPIC v. DHS, EPIC obtained documents about
secret behavioral algorithms that purportedly determine an individual's likelihood of committing a crime. In a series of state FOI cases, EPIC obtained records from state agencies about the
use of proprietary
DNA analysis tools to determine guilt or innocence. EPIC is currently litigating
EPIC v. CBP before the DC Circuit Court of Appeals, a case concerning the secret scoring of airline passengers by the federal government.
- Court Rules Secret Scoring of Teachers Unconstitutional (Jun. 13, 2017)
A federal district court has
held that firing public school teachers based on the results of a secret algorithm is unconstitutional. The case, Houston Federation of Teachers v. Houston Independent School District, concerned a commercial software company's proprietary appraisal system that was used to score teachers. Teachers could not correct their scores, independently reproduce their scores, or learn more than basic information about how the algorithm worked. "When a public agency adopts a policy of making high stakes employment decisions based on secret algorithms incompatible with minimum due process, the proper remedy is to overturn the policy," the court wrote. EPIC recently filed a
complaint asking the FTC to stop the secret scoring of young tennis players. EPIC has pursued several cases on
"Algorithmic Transparency," including one for
rating travelers and another for
assessing guilt or innocence.
- EPIC to Congress: Data Protection Needed for Financial Technologies (Jun. 9, 2017)
EPIC submitted a
statement to a House Committee
hearing on financial technologies, warning of the risks of new financial services. Companies now use social media data and
secret algorithms to make determinations about consumers. They are also reaching out, through the "Internet of Things," to control consumers. EPIC recently filed a
complaint with the CFPB about "starter interrupt devices," deployed by auto lenders to remotely disable cars when individuals are late on their payments.
- EPIC Asks FTC to Stop System for Secret Scoring of Young Athletes (May 17, 2017)
EPIC has filed a
complaint with the Federal Trade Commission to stop the secret scoring of young tennis players. The EPIC complaint concerns the "
Universal Tennis Rating", a proprietary algorithm used to assign numeric scores to tennis players, many of whom are children under 13. "The UTR score defines the status of young athletes in all tennis-related activity; impacts opportunities for scholarship, education and employment; and may in the future provide the basis for 'social scoring' and government rating of citizens," according to EPIC. EPIC urged the FTC to “find that a secret, unprovable, proprietary algorithm to evaluate children is an unfair and deceptive trade practice.” In 2015, EPIC launched a campaign on "
Algorithmic Transparency" and has pursued several cases, including one for
rating travelers and another for
assessing guilt or innocence, that draw attention to the social risks of secret algorithms.
- In Merger Reviews, EPIC Advocates for Privacy, Algorithmic Transparency (May 9, 2017)
EPIC has sent a
statement to the Senate Judiciary Committee ahead of a
hearing on the new Antitrust Chief. EPIC urged the Committee to consider the role of consumer privacy and data protection in merger reviews. EPIC warned that "monopoly platforms" are reducing competition, stifling innovation, and undermining privacy. EPIC pointed to the FTC's failure to block the
Google/DoubleClick merger which accelerated Google's dominance of Internet advertising and the
WhatsApp/Facebook merger which paved the way for Facebook to access confidential WhatsApp user data. EPIC also suggested that "algorithmic transparency" would become increasingly important for merger analysis. EPIC is a leading consumer privacy advocate and regularly submits
complaints urging
investigations and
changes to unfair business practices.
- European Parliament Adopts Resolution on Big Data (Mar. 24, 2017)
The European Parliament has adopted a
resolution on the fundamental rights implications of big data. The resolution stresses that "the prospects and opportunities of big data" can only be realized "when public trust in these technologies is ensured by a strong enforcement of fundamental rights and compliance with current EU data protection law." The resolution discusses the importance of data protection, accountability, transparency, data security, and privacy by design. EPIC has warned about the risks of
big data and launched campaigns on
"Algorithmic Transparency" and
data protection.
- EPIC Urges Senate Commerce Committee to Back Algorithmic Transparency, Safeguards for Internet of Things (Mar. 22, 2017)
EPIC has sent a
letter to the Senate Commerce Committee concerning "
The Promises and Perils of Emerging Technologies for Cybersecurity." EPIC urged the Committee to support "
Algorithmic Transparency," an essential strategy to make accountable automated decisions. EPIC also pointed out the "significant privacy and security risks" of the
Internet of Things. EPIC has been at the forefront of policy work on the Internet of Things and Artificial Intelligence, opposing government use of
"risk-based" profiling, and recommending safeguards for
connected cars, "
smart homes,"
consumer products, and
"always on" devices.
- EPIC Sues Justice Department Over "Risk Assessment" Techniques (Mar. 7, 2017)
EPIC has filed a
FOIA lawsuit against the Department of Justice for information about the use of
"risk assessment" tools in the criminal justice system. These proprietary techniques are used to set bail, determine criminal sentences, and even contribute to determinations about guilt or innocence.
Many criminal justice experts oppose their use. EPIC has pursued several FOIA cases to promote
"algorithmic transparency." The EPIC cases include
passenger risk assessment,
"future crime" prediction, and
proprietary forensic analysis. The Supreme Court is now considering whether to take a
case on the use of a secretive technique to predict possible recidivism.
- Pew Research Center Releases Report on Algorithms (Feb. 8, 2017)
The Pew Research Center has released a report,
"Code-Dependent: Pros and Cons of the Algorithm Age." The Pew report discusses the impact that experts expect algorithms to have on individuals and society. Among the themes in the report are the biases and lack of human judgment in algorithmic decisionmaking and the need for "algorithmic literacy, transparency, and oversight." EPIC has promoted
"Algorithmic Transparency" for many years and has proposed two
amendments to
Asimov's Laws of Robotics that would require autonomous devices to reveal the basis of their decisions and their actual identity.
- Aspen Institute Report Explores Artificial Intelligence (Jan. 30, 2017)
The Aspen Institute released a
report on its Artificial Intelligence workshop covering connected cars, healthcare, and journalism. "Artificial Intelligence Comes of Age" explored issues at "the intersection of AI technologies, society, economy, ethics and regulation." The Aspen report notes that "malicious hacks are likely to be an ongoing risk of self-driving cars" and that "because self-driving cars will generate and store vast quantities of data about driving behavior, control over this data will become a major issue." The Aspen report discusses the tension between privacy and diagnostic benefits in healthcare AI and describes "some of the alarming possible uses of AI in news media." EPIC has promoted
Algorithmic Transparency and has been at the forefront of
vehicle privacy through
testimony before Congress,
amicus briefs, and
comments to the NHTSA.
- The Verge Features EPIC FOIA Docs on Secret Profiling System (Dec. 21, 2016)
In an
article today, The Verge featured an EPIC
Freedom of Information Act lawsuit about a controversial government data mining program, operated by the Department of Homeland Security. EPIC is seeking documents on the "
Analytical Framework for Intelligence," a program that assigns "risk assessment" scores to travelers using data from sources including the
Automated Targeting System, also operated by the DHS. Travelers "don't know how the scores are being generated and what the factors are," said EPIC FOIA Counsel John Tran. "What if there's an error? Users should have an opportunity to correct the error, users should have an opportunity to understand what goes into generating the score." The case is currently pending before a federal judge in Washington, DC. EPIC expects to obtain more
records on AFI. The FOIA case is also related to EPIC's ongoing work on
"Algorithmic Transparency."
- European Parliament Explores Algorithmic Transparency (Nov. 7, 2016)
A hearing today in the European Parliament brought together technologists, ethicists, and policymakers to examine
"Algorithmic Accountability and Transparency in the Digital Economy." Recently German Chancellor Angela Merkel
spoke against secret algorithms, warning that there must be more transparency and accountability. EPIC has promoted
Algorithmic Transparency for many years and is currently litigating several cases on the front lines of AI, including
EPIC v. FAA (drones),
Cahen v. Toyota (autonomous vehicles), and
algorithms in criminal justice. EPIC has also
proposed two amendments to
Asimov's Rules of Robotics, requiring autonomous devices to reveal the basis of their decisions and to reveal their actual identity.
- EPIC Urges Massachusetts High Court to Protect Email Privacy (Oct. 24, 2016)
EPIC has filed an
amicus brief in the Massachusetts Supreme Judicial Court regarding
email privacy. At issue is Google's scanning of the email of non-Gmail users. EPIC argued that this is prohibited by the Massachusetts Wiretap Act. EPIC described Google's complex scanning and analysis of private communications, concluding that it was far more invasive than the interception of telephone communications prohibited by state law. A federal court in California recently
ruled that non-Gmail users may sue Google for violation of the state wiretap law. EPIC has filed many
amicus briefs in federal and state courts and participated in the
successful litigation of a cellphone privacy case before the Massachusetts Supreme Judicial Court. The
EPIC State Policy Project is based in Somerville, Massachusetts.
- EPIC Promotes "Algorithmic Transparency" at Annual Meeting of Privacy Commissioners (Oct. 20, 2016)
Speaking at the
38th International Conference of the Data Protection and Privacy Commissioners in Marrakech, EPIC President Marc Rotenberg highlighted EPIC's recent work on
algorithmic transparency and also
proposed two amendments to
Asimov's Rules of Robotics. Rotenberg cautioned that autonomous devices, such as drones, were gaining rights of privacy, such as control over identity and secrecy of thought, that should be available only to people. Rotenberg also highlighted EPIC's recent publication
"Privacy in the Modern Age",
the Data Protection 2016 campaign, and the various publications available at the
EPIC Bookstore. The
2017 Privacy Commissioners conference will be held in Hong Kong.
- White House Releases Reports on Future of Artificial Intelligence (Oct. 13, 2016)
The White House has
released two new reports on the impact of Artificial Intelligence on the US economy and related policy concerns.
Preparing for the Future of Artificial Intelligence surveys the current state of AI, applications, and emerging challenges for society and public policy. The report concludes "practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations." A companion report
National Artificial Intelligence Research and Development Strategic Plan proposes a strategic plan for Federally-funded research and development in AI. President Obama will discuss these issues on October 13 at the
White House Frontiers Conference in Pittsburgh. #FutureofAI EPIC has promoted
Algorithmic Transparency for many years and is currently litigating several cases on the front lines of AI, including
EPIC v. FAA (drones), and
Cahen v. Toyota (autonomous vehicles).
- Presidential Science Advisors Challenge Validity of Criminal Forensic Techniques (Sep. 8, 2016)
According to an upcoming report by the President’s Council of Advisors on Science and Technology, much of the forensic analysis in criminal trials is not scientifically valid. The report, to be released this month, attacks the validity of analysis of evidence like bite-marks, hair, and firearms. The "lack of rigor in the assessment of the scientific validity of forensic evidence is not just a hypothetical problem but a real and significant weakness in the judicial system,” wrote the council. The Senate Judiciary Committee held hearings in
2009 and
2012 to discuss the need to strengthen forensic science, and Sen. Patrick Leahy (D-VT) introduced a
forensic reform bill in 2014. EPIC has
pursued FOIA requests on the reliability of
proprietary forensic techniques. EPIC also filed a
brief on the reliability of novel forensic techniques in the Supreme Court case
Florida v. Harris.
- Wisconsin Supreme Court Upholds Use of Sentencing Algorithms, But Recognizes Risks (Jul. 16, 2016)
The Wisconsin Supreme Court this week
rejected a challenge to the use of a risk-assessment algorithm in a sentencing proceeding. These algorithms score an individual's
risk of committing future crime. The Court sanctioned the use of such algorithms, provided they are not the exclusive determining factor of a sentence, and judges receive written warnings about the algorithm's shortcomings. Professor Danielle Citron
warned that the court's faith in the secret techniques is "unwarranted" particularly because "human beings have a tendency to rely on automated decisions even when they suspect system malfunction." EPIC has advocated for
algorithmic transparency and maintains a
website describing the use of algorithms in the criminal justice system.
- White House Report Points to Risks with Big Data (May 5, 2016)
A new White House report
"Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights" points to risks with big data analytics.
According to the authors, "[t]he algorithmic systems that turn data into information are not infallible--they rely on the imperfect inputs, logic, probability, and people who design them." An earlier White House
report warned of "the potential of encoding discrimination in automated decisions." EPIC launched a
campaign on "Algorithmic Transparency" after
warning about the risks of secretive decision making coupled with
"big data."
- At UNESCO, EPIC's Rotenberg Argues for Algorithmic Transparency (Dec. 8, 2015)
Speaking at UNESCO headquarters in Paris, EPIC President Marc Rotenberg explained that algorithms, complex mathematical formulas, have an increasing impact on people's lives in such areas as commerce, employment, education, and housing. He warned that processes would continue to become more opaque as more decision making was automated. He said to experts in
Freedom of Expression, Communication, and Information at UNESCO that "knowledge of the algorithm is a fundamental right, a human right." EPIC has launched a
new program on Algorithmic Transparency, building on the work of
several members of the EPIC Advisory Board.
- EPIC Pursues Public Release of Secret DNA Forensic Source Code (Oct. 14, 2015)
EPIC has filed public records requests in six states to obtain the source code of "TrueAllele," a software product used in
DNA forensic analysis. According to recent
news reports, law enforcement officials use TrueAllele test results to establish guilt, but individuals accused of crimes are denied access to the source code that produces the results. A similar program used by New Zealand prosecutors was
recently found to have a coding error that provided incorrect results in 60 cases, including a high-profile murder case. EPIC has
previously urged the US Supreme Court to carefully consider the reliability of new investigative techniques and
argued a federal appeals case against DNA dragnet surveillance. Citing the importance of
algorithmic transparency in the criminal justice system, EPIC filed requests in
California,
Louisiana,
New York,
Ohio,
Pennsylvania, and
Virginia.
- EPIC Pursues Lawsuit about Secret Government Profiling Program (Aug. 11, 2015)
EPIC has filed a
reply brief in federal court, rebutting the government's
claim that it can
withhold information about automated profiling. In
EPIC v. CBP, a Freedom of Information Act case, EPIC seeks documents about the "Analytical Framework for Intelligence," which incorporates personal information from government agencies, commercial data brokers, and the Internet. The agency then uses
secret, analytic tools to assign
"risk assessments" to travelers, including U.S. citizens traveling solely within the United States. EPIC submitted a
FOIA request in 2014 for documents relating to the framework. EPIC has called for
"algorithmic transparency" in automated decisions concerning individuals.
- Facebook Applies for Patent to Collect Users' Credit Scores (Aug. 5, 2015)
Facebook has applied for a
patent that would allow lenders to make
credit decisions on a user based on the user's Facebook activity. If the patent is approved, Facebook will be able to collect the credit scores of a user's "friends" and supply a creditor with their average score. If that average is below a certain threshold, the lender will reject the application. EPIC has filed
extensive comments with the
Consumer Financial Protection Bureau, urging the agency to
limit the amount of information creditors can access about consumers. EPIC has called for
algorithmic transparency in automated decisions concerning individuals.
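The decision rule described in the patent application reduces to a simple average-and-threshold check. A minimal sketch follows; the function names and the 650-point cutoff are illustrative assumptions, since the filing does not disclose actual values:

```python
def average_friend_credit_score(friend_scores):
    """Average the credit scores of a user's 'friends' (hypothetical helper)."""
    return sum(friend_scores) / len(friend_scores)

def loan_decision(friend_scores, threshold=650):
    """Apply the patent's described rule: if the friends' average credit
    score falls below the lender's threshold, reject the application.
    (The 650-point threshold is an assumed example, not from the filing.)"""
    if average_friend_credit_score(friend_scores) < threshold:
        return "reject"
    # Otherwise the application proceeds to the lender's normal review.
    return "proceed"

print(loan_decision([700, 680, 720]))  # proceed (average 700)
print(loan_decision([580, 600, 610]))  # reject (average ~596.7)
```

The applicant's own creditworthiness never enters this check, which is precisely the opacity concern: a borrower cannot see, contest, or correct the scores of others that decide the outcome.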
- EPIC Pursues Documents about Secret Government Profiling Program (Jul. 1, 2015)
EPIC has
filed papers in federal court
challenging the government's
claim that it can
withhold information about automated profiling. In
EPIC v. CBP, a Freedom of Information Act case, EPIC
seeks documents about the "Analytical Framework for Intelligence," which incorporates personal information from government agencies, commercial data brokers, and the Internet. The agency then uses
secret, analytic tools to assign
"risk assessments" to travelers, including U.S. citizens traveling solely within the United States. EPIC has called for
"algorithmic transparency" in automated decisions concerning individuals.
- White House Report on "Big Data" Explores Price Discrimination, Opaque Decisionmaking (Feb. 5, 2015)
A White House report on
Big Data and Differential Pricing released today examines new forms of discrimination resulting from big data analytics. The White House explained the risks to consumers, acknowledged the failure of self-regulatory efforts, and called for greater transparency and consumer control over their personal information. Last year, EPIC and a coalition of NGOs
urged the President to establish privacy protections - including
"algorithmic transparency", consumer control, and robust privacy techniques - to address
Big Data risks.
- Senators Challenge Verizon's Secret Mobile Tracking Program (Jan. 30, 2015)
In a
letter to Verizon, Senators on the Commerce Committee challenged the company's practice of placing a "super cookie" on customers' smartphones. The letter follows the recent discovery that the advertising company Turn was secretly tracking Verizon customers, even after customers deleted its cookies. In the letter, the Senators asked Verizon to stop tracking users with undeletable cookies. EPIC has urged
the White House and the
Federal Trade Commission to limit the use of persistent identifiers. EPIC supports
opt-in requirements and
Privacy Enhancing Techniques for consumers, and
algorithmic transparency for data collectors.
- EPIC Urges House to Safeguard Consumer Privacy (Jan. 26, 2015)
EPIC has sent a
statement to the House Commerce Committee for the hearing, "What are the Elements of Sound Data Breach Legislation?" EPIC had
testified before the House Committee in 2011 on data breach notification, urging Congress to set a national baseline standard. EPIC also supports enactment of the
Consumer Privacy Bill of Rights. EPIC also urged the House Committee to promote "
algorithmic transparency." EPIC
has warned that “[t]he ongoing collection of personal information in the United States without sufficient privacy safeguards has led to staggering increases in identity theft, security breaches, and financial fraud.”
- Facebook Modifies User Privacy Policy (Jan. 2, 2015)
Facebook has
modified its privacy and data use policies, effective January 1, 2015. Facebook will now allow advertisers to include a “buy” button directly on targeted advertisements on a user’s page. Facebook will also allow advertisers to use the location data gathered from tools like “Nearby Friends” and location “check-ins” to push geolocation-based targeted advertisements. For instance, a Facebook user who checks in near a restaurant that partners with Facebook may now be shown menu items from that restaurant. Last month, the Dutch data protection commission
announced that it planned to open an investigation into Facebook’s policy modifications. In July 2014, EPIC and a coalition of consumer privacy groups
urged the FTC to halt
Facebook’s plan to collect web-browsing information from its users. Facebook is already under a
20 year consent decree from the FTC that requires Facebook to protect user privacy. The consent decree resulted from complaints brought by EPIC and a coalition of consumer privacy organizations in
2009 and
2010. For more information, see
EPIC: Facebook Privacy; and
EPIC: FTC.
EPIC and Algorithmic Transparency
- EPIC, Coalition Oppose Government’s ‘Extreme Vetting’ Proposal (Nov. 6, 2017)
- EPIC Promotes ‘Algorithmic Transparency’ for Political Ads (Nov. 3, 2017)
- At OECD, EPIC Renews Call for Algorithmic Transparency (Oct. 27, 2017)
- EPIC Asks FTC to Stop System for Secret Scoring of Young Athletes (May 17, 2017)
- In Merger Reviews, EPIC Advocates for Privacy, Algorithmic Transparency (May 9, 2017)
- EPIC Urges Senate Commerce Committee to Back Algorithmic Transparency, Safeguard for Internet of Things (Mar. 22, 2017)
- EPIC Promotes “Algorithmic Transparency” at Annual Meeting of Privacy Commissioners (Oct. 20, 2016)
- At UNESCO, EPIC’s Rotenberg Argues for Algorithmic Transparency (Dec. 8, 2015)
- At OECD Global Forum, EPIC Urges “Algorithmic Transparency” (Oct. 3, 2014)
AI Policy Frameworks
The speed of AI innovation and its impact on society prompt serious concern about ethical review. There is currently no agreed-upon set of standards for ethical AI design and implementation. Researchers and technical experts have grappled with how to align AI research and development with fundamental human values and norms. In response, several organizations have begun to address the ethical issues in AI by creating AI principles and guidance documents. Below are four existing sets of principles that guide the development of safe AI.
Asilomar AI Principles
More than 100 AI researchers gathered in Asilomar, California to attend The Future of Life Institute’s “Beneficial AI 2017” conference. Through a multi-day survey and discussion process, attendees developed the Asilomar AI Principles, a list of 23 framework principles geared toward the safe and ethical development of AI. More than 1,200 AI/Robotics researchers and 2,541 others have signed onto the principles. Notable signers include Tesla co-founder Elon Musk, theoretical physicist Stephen Hawking, and EPIC Advisory Board member Ryan Calo. The principles are divided into three themes: (1) Research Issues, (2) Ethics and Values, and (3) Long-term Issues. The principles highlight concerns including beneficial intelligence, safety, transparency, privacy, avoiding an AI arms race, and non-subversion by AI.
IEEE’s Guide to Ethically Aligned Design
In December 2016, The Institute of Electrical and Electronics Engineers (IEEE) and its Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems published a first draft framework document on how to achieve ethically designed AI systems. Titled “Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems,” the 136-page document encourages technologists to prioritize ethical considerations when creating autonomous and intelligent technologies. Broken down into eight sections, the document begins with a set of general principles and then moves on to specific issue areas, such as how to embed human values into systems, how to eliminate data asymmetry and grant individuals greater control over personal data, and how to improve legal accountability for harms caused by AI systems. The general principles that apply to all types of AI/AS are: (1) embody the highest ideals of human rights; (2) prioritize the maximum benefit to humanity and the natural environment; and (3) mitigate risks and negative impacts as AI/AS evolve as socio-technical systems.
USACM’s Principles on Algorithmic Transparency and Accountability
In January 2017, the Association for Computing Machinery U.S. Public Policy Council (USACM) issued a statement and list of seven principles on algorithmic transparency and accountability. The USACM statement provides a context for what algorithms are, how they make decisions, and the technical challenges and opportunities to address potentially harmful bias in algorithmic systems. The USACM believes that this set of principles, consistent with the ACM Code of Ethics, should be implemented during every phase of development to mitigate potential harms. The seven principles are: (1) awareness, (2) access and redress, (3) accountability, (4) explanation, (5) data provenance, (6) auditability, and (7) validation and testing.
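Several of the USACM principles — explanation, data provenance, and auditability — can be illustrated with a minimal audit-record sketch. The following Python example is hypothetical: the class, field names, and the loan scenario are illustrative assumptions, not drawn from the USACM statement, but they show the kind of record an organization might keep so that automated decisions can later be inspected and contested.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One automated decision, with the context a reviewer would need."""
    subject_id: str      # who the decision concerns (access and redress)
    model_version: str   # which model/ruleset produced it (auditability)
    inputs: dict         # features the model consumed (data provenance)
    outcome: str         # the decision itself
    explanation: str     # human-readable basis for the outcome (explanation)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(audit_log: list, record: DecisionRecord) -> None:
    """Append a decision to an audit log for later inspection or contest."""
    audit_log.append(record)

# Hypothetical usage: a loan denial recorded with enough context to contest it
log = []
log_decision(log, DecisionRecord(
    subject_id="applicant-42",
    model_version="credit-model-1.3",
    inputs={"income": 52000, "debt_ratio": 0.41},
    outcome="denied",
    explanation="debt_ratio above 0.40 threshold",
))
```

A record like this supports redress because the affected person (or a regulator) can see which inputs were used and why the outcome followed, and can check the inputs for errors.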
Japan’s AI Research & Development Guidelines (AI R&D Guidelines)
In April 2016 at the G7 ICT Ministers’ Meeting in Japan, Sanae Takaichi, Minister of Internal Affairs and Communications (MIC) of Japan, proposed starting international discussions toward establishing “AI R&D Guidelines” as a non-regulatory, non-binding international framework for AI research and development. In March 2017, the MIC released a report summarizing the current progress of drafting AI R&D Guidelines for International Discussions, as well as a Draft AI R&D Guidelines with comments. One of the goals of the guidelines is to achieve a human-centered society, where people can live harmoniously with AI networks while human dignity and individual autonomy are respected. Modeled after the OECD privacy guidelines, the nine R&D principles found within the guidelines are: (1) collaboration, (2) transparency, (3) user assistance, (4) controllability, (5) security, (6) safety, (7) privacy, (8) ethics, and (9) accountability.
White House Report on the Future of Artificial Intelligence
In May 2016, the White House announced a series of workshops and a working group devoted to studying the benefits and risks of AI. The announcement recognized the "array of considerations" raised by AI, including those "in privacy, security, regulation, [and] law." The White House established a Subcommittee on Machine Learning and Artificial Intelligence within the National Science and Technology Council.
Over the next three months, the White House co-hosted a series of four workshops on AI:
- Legal and Governance Implications of Artificial Intelligence, May 24, 2016, Seattle, WA
- Artificial Intelligence for Social Good, June 7, 2016, in Washington, DC
- Safety and Control for Artificial Intelligence, June 28, 2016, in Pittsburgh, PA
- The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term, July 7, 2016, in New York City
EPIC Advisory Board members Jack Balkin, danah boyd, Ryan Calo, Danielle Citron, Ed Felten, Ian Kerr, Helen Nissenbaum, Frank Pasquale, and Latanya Sweeney each participated in one or more of the workshops.
The White House Office of Science and Technology Policy issued a Request for Information in June 2016 soliciting public input on the subject of AI. The RFI indicated that the White House was particularly interested in "the legal and governance implications of AI," "the safety and control issues for AI," and "the social and economic implications of AI," among other issues. The White House received 161 responses.
On October 12, 2016, The White House announced two reports on the impact of Artificial Intelligence on the US economy and related policy concerns: Preparing for the Future of Artificial Intelligence and National Artificial Intelligence Research and Development Strategic Plan.
Preparing for the Future of Artificial Intelligence surveys the current state of AI, its applications, and emerging challenges for society and public policy. As Deputy U.S. Chief Technology Officer and EPIC Advisory Board member Ed Felten writes for the White House blog, the report discusses "how to adapt regulations that affect AI technologies, such as automated vehicles, in a way that encourages innovation while protecting the public" and "how to ensure that AI applications are fair, safe, and governable." The report concludes that "practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations."
The companion report, National Artificial Intelligence Research and Development Strategic Plan, proposes a strategic plan for Federally-funded research and development in AI. The plan identifies seven priorities for federally-funded AI research, including strategies to "understand and address the ethical, legal, and societal implications of AI" and "ensure the safety and security of AI systems."
The day after the reports were released, the White House held a Frontiers Conference co-hosted by Carnegie Mellon University and the University of Pittsburgh. Also in October, Wired magazine published an interview with President Obama and EPIC Advisory Board member Joi Ito.
EPIC's Interest
EPIC has promoted Algorithmic Transparency for many years and has litigated several cases on the front lines of AI. EPIC's cases include:
- EPIC v. FAA, which EPIC filed against the Federal Aviation Administration for failing to establish privacy rules for commercial drones
- EPIC v. CBP, in which EPIC successfully sued U.S. Customs and Border Protection for documents relating to its use of secret, analytic tools to assign "risk assessments" to travelers
- EPIC v. DHS, to compel the Department of Homeland Security to produce documents related to a program that assesses "physiological and behavioral signals" to determine the probability that an individual might commit a crime.
- EPIC v. DOJ, to compel the Department of Justice to produce documents concerning the use of “evidence-based risk assessment tools,” algorithms that try to predict recidivism, in all stages of sentencing.
EPIC has also filed an amicus brief in Cahen v. Toyota discussing the risks inherent in connected cars, and has filed comments on issues of big data and algorithmic transparency.
EPIC also has a strong interest in algorithmic transparency in criminal justice. Secrecy of the algorithms used to determine guilt or innocence undermines faith in the criminal justice system. In support of algorithmic transparency, EPIC submitted FOIA requests to six states to obtain the source code of "TrueAllele," a software product used in DNA forensic analysis. According to news reports, law enforcement officials use TrueAllele test results to establish guilt, but individuals accused of crimes are denied access to the source code that produces the results.
Resources
- EPIC: Algorithms in the Criminal Justice System
- USACM, the ACM U.S. Public Policy Council, "Algorithmic Transparency and Accountability" Discussion Panel Event (Sept. 14, 2017)
- CPDP 2017 Conference, "Algorithms: Too Intelligent to be Intelligible?" (Jan. 26, 2017)
- We Robot 2017
- We Robot 2016
- Marc Rotenberg, Algorithmic Transparency and Emerging Privacy Issues, UNESCO Presentation (Dec. 2, 2015)
- Ed Felten, CITP Web Privacy and Transparency Conference Panel 2 (Nov. 7, 2014)
- Alessandro Acquisti, Why Privacy Matters, TedGlobal 2013 Conference (June 2013)
- Alessandro Acquisti, Ralph Gross & Fred Stutzman, Faces of Facebook: Privacy in the Age of Augmented Reality, BlackHat USA 2011 Conference (Aug. 4, 2011)
- Steven Aftergood, Secret Law and the Threat to Democratic Government, Testimony before the Subcommittee on the Constitution of the Committee on the Judiciary, U.S. Senate (Apr. 30, 2008)
News Articles & Blogposts
- Garry Kasparov, Pursuing Transparency and Accountability for Both Humans and Machines, Avast Blog (July 30, 2017)
- Lee Rainie & Janna Anderson, Code-Dependent: Pros and Cons of the Algorithm Age, Pew Research Center (Feb. 8, 2017)
- Kate Crawford & Ryan Calo, There is a blind spot in AI research, Nature: International Weekly Journal of Science (October 13, 2016)
- Rebecca MacKinnon, Where is Microsoft Bing’s Transparency Report?, The Guardian (Feb. 14, 2014)
- Bruce Schneier, Accountable Algorithms, Schneier on Security (Sept. 21, 2012)
- Ian Kerr, Privacy, Identity and Anonymity, Iankerr.ca (Sept. 1, 2011)
- Ed Felten, Algorithms can be more accountable than people, Freedom to Tinker (Mar. 19, 2014)
- Jeff Jonas, Using Transparency as a Mask, JeffJonas.typepad.com (Aug. 4, 2010)
- Tim Wu, TNR Debate: Too Much Transparency?, New Republic (Oct. 11, 2009)
Academic Articles
- Jack Balkin, The Three Laws of Robotics in the Age of Big Data, Ohio State Law Review Vol. 78 (2017), Forthcoming
- Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Washington Law Review 1 (2014)
- Cynthia Dwork & Aaron Roth, The Algorithmic Foundations of Differential Privacy, Theoretical Computer Science Vol. 9: No. 3-4, 211 (2014)
- Ian Kerr & Jessica Earle, Prediction, Presumption, Preemption: The Path of Law After the Computational Turn, 66 Stanford Law Review 65 (2013)
- Julie E. Cohen, Power/play: Discussion of Configuring the Networked Self, 6 Jerusalem Rev. Legal Stud. 137-149 (2012)
- Frank Pasquale, Restoring Transparency to Automated Authority, 9 Journal on Telecommunications & High Technology Law 235 (2011)
- Grayson Barber, How Transparency Protects Privacy in Government Records (May 23, 2011) (with Frank L. Corrado)
- David J. Farber & Gerald R. Faulhaber, The Open Internet: A Consumer-Centric Framework, International Journal of Communication (2010)
- Ed Felten, David G. Robinson, Harlan Yu & William P Zeller, Government Data and the Invisible Hand, 11 Yale Journal of Law & Technology 160 (2009)
- Julie E. Cohen, Privacy, Visibility, Transparency, and Exposure 75 University of Chicago Law Review 181 (2008)
- Frank Pasquale, Internet Nondiscrimination Principles: Commercial Ethics for Carriers and Search Engines, 2008 University of Chicago Legal Forum 263 (2008)
- Alessandro Acquisti, Price Discrimination, Privacy Technologies, and User Acceptance (2006)
- Urs Gasser, Regulating Search Engines: Taking Stock and Looking Ahead, 9 Yale Journal of Law & Technology 124 (2006)
- Latanya Sweeney, Privacy Enhanced Linking, ACM SIGKDD Explorations 7(2) (Dec. 2005)
- Phil Agre, Your Face Is Not A Bar Code: Arguments Against Automatic Face Recognition in Public Places (Sept. 7, 2001)
- A. Michael Froomkin, The Death of Privacy, 52 Stanford Law Review 1461 (2000)
Books
- Ryan Calo, A. Michael Froomkin, & Ian Kerr, Robot Law (Edward Elgar 2016)
- Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (2015)
- Colin Bennett, Transparent Lives: Surveillance in Canada (2014)
- danah boyd, Networked Privacy (2012)
- Julie E. Cohen, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (2012)
- James Bamford, The Shadow Factory: The NSA from 9/11 to the Eavesdropping on America (2009)
- David Burnham, The Rise of the Computer State (1983)