On 25 June, EDRi sent an open letter to the CEO of IBM in response to IBM’s 8 June statement on racial equality and facial recognition in the US.
EDRi asked IBM to provide more information about what will change as a result of their commitment to end general purpose facial recognition, and whether these issues will lead to changes in IBM’s contracts and work in the EU.
In May 2020, EDRi’s 44 civil society organisations launched the first European coalition to call on the EU for a “Ban on Biometric Mass Surveillance”, including public facial recognition. We agree with IBM that biometric surveillance technologies can have seriously damaging impacts on our rights and societies and have no place in a democratic society.
Read the full letter here or find it below:
Dear Mr. Krishna,
Chief Executive Officer of IBM
We are European Digital Rights (EDRi), a coalition of 44 digital rights organisations across Europe, working to protect fundamental rights in the digital environment. We read your recent statement on facial recognition with great interest and hope, and were pleased to see Amazon and Microsoft follow suit.
We, too, have been advocating for protections against the harms caused by invasive, discriminatory facial recognition and other forms of biometric mass surveillance, and are heartened to see influential companies such as IBM stepping up to take action. Our own call to action has urged the EU to ban biometric mass surveillance, and our members are working at a national level to increase awareness and drive positive changes to protect people from the threats of surveillance.
We would greatly appreciate the opportunity for a dialogue between IBM and EDRi to better understand the specific actions that you will be taking to act upon your recent commitments. It would be very powerful if we could hold IBM up as an example for other companies.
We will make this letter and your response public, and therefore would like to ask for your written reply by 10 July. We would also like to suggest a call to discuss the details of our questions in the meantime.
In particular, we are seeking insight into the following:
- Which existing contracts will be stopped/cancelled as a result of IBM’s new position?
- Which applications specifically will IBM stop developing and selling in response to the new position? Are there other applications that IBM would consider within the remit of this position, but which have already been stopped? When and why were they stopped?
- What are the features of the applications that will be stopped?
- Does IBM have government contracts at the moment that fall into these categories in the United States and elsewhere? Which governments are IBM’s business partners for facial (or other biometric) recognition, analysis or processing software products?
- In the statement, IBM states that it opposes the use of technology for “mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.” Are these values and principles reinforced in IBM’s contracts with clients/customers or in a human rights policy or statement? How is compliance with these values and Principles ensured?
- What are IBM’s structures, policies and processes to meet and demonstrate human rights compliance? Does IBM conduct human rights impact assessments or human rights due diligence on its products, in particular taking into account privacy concerns? Which stakeholders are included in IBM’s analyses?
- Was the recent statement developed in conjunction with human rights experts, and are any human rights experts supporting IBM with its implementation? Did IBM consult the communities most impacted by the use of its technology?
- In the statement, IBM speaks of “general purpose” technology. How do you define this, and does this mean that IBM anticipates that there will be exceptions? How are exceptions being justified, given that both general purpose and specific purpose tools can violate rights in similar ways?
- Also in relation to “general purpose”: what specific purposes would IBM not support with its technology, and by what criteria? Which specific purposes would IBM therefore support?
- In the statement, IBM refers to “IBM facial recognition and analysis software”. Does IBM continue to (re)sell general purpose software from others?
- In the statement, IBM talks about “domestic law enforcement agencies”. What about the military, border police, intelligence agencies, security services, etc.?
- IBM places the statement in the context of federal policing, national policy and other US-specific areas. Is IBM taking action outside of the US context, recognising that such technologies are equally harmful in the EU and other regions?
- Will IBM apply the commitments in this statement to other areas of business or technologies such as smart city and smart policing projects?
We look forward to your response.
Sincerely,
Diego Naranjo, Head of Policy