Philosophy & Technology. 2021 Oct 6;34(4):1135–1193. doi: 10.1007/s13347-021-00474-3

Table 3.

Companies that practice and develop AI in order to apply, sell, or advise about AI, and that have explicitly committed to principles or guidelines for AI and/or to the tenets of the Partnership on AI, tabulated according to their internal governance for responsible AI; training and educational materials about responsible AI; new tools for fair/non-biased, explainable, secure/privacy-proof, and accountable AI; and their external collaboration and funding for research into responsible AI

| AI companies committed to AI principles¹ | Governance for responsible AI inside the firm² | Training and educational materials about responsible AI | New tools for fair/non-biased AI | New tools for explainable AI | New tools for secure/privacy-proof AI³ | New tools for accountability of AI | External collaboration and funding for responsible AI research |
|---|---|---|---|---|---|---|---|
| Amazon | | | | SHAP values and feature importance tools (proprietary) | | | Co-funding of NSF project ‘Fairness in AI’ |
| Google | Advanced Technology External Advisory Council (now defunct); Ethics & Society team; responsible innovation teams review new projects for conformity to AI principles | Employee training about ethical AI; educational materials (see ‘People + AI Guidebook’) | Facets, What-If Tool, Fairness Indicators (all open source) | What-If Tool (open source) | CleverHans (open source); Private Aggregation of Teacher Ensembles (open source); TensorFlow Privacy (open source); Federated Learning, RAPPOR, Cobalt (open source) | Model cards | |
| Microsoft | AI and Ethics in Engineering and Research (AETHER) Committee; Office of Responsible AI | Internal guidelines and checklists (e.g., ‘In Pursuit of Inclusive AI’, ‘Inclusive Design’) | FairLearn (open source) | InterpretML (open source) | WhiteNoise package (open source) | Datasheets for datasets | |
| IBM | AI Ethics Board (chaired by AI Ethics Global Leader and Chief Privacy Officer) | Guidelines for AI developers (‘Everyday Ethics for AI’) | AI Fairness 360 Toolkit (open source) | AI Explainability 360 Toolkit (open source) | Adversarial Robustness 360 Toolbox (open source) | Fact sheets | Joint research with Institute for Human-Centered AI (Stanford University); funding of Tech Ethics Lab (University of Notre Dame) |
| Intel | AI Ethics and Human Rights Team | | | | | | |
| Facebook | AI Ethics Team | | Fairness Flow (proprietary) | Captum (for deep neural networks; open source) | | | Funding of Institute for Ethics in AI (TU Munich) |
| Telefónica | | AI Ethics course and AI Ethics self-assessment (for employees) | | | | | |
| Accenture | | Advocates ‘responsible, explainable, citizen AI’ to clients; educational materials | AI Fairness Tool, part of AI Launchpad (proprietary) | | | | |
| SAP | AI Ethics Advisory Panel; AI Ethics Steering Committee; diverse and interdisciplinary teams | Course about ‘trustworthy AI’ (for employees and other stakeholders) | | | | | |
| Philips | | | | | | | |
| Salesforce | Ethical Use Advisory Council; Office of Ethical and Humane Use of Technology; data science review board; inclusive teams | Teaching module about bias in AI (for employees and clients) | Einstein Discovery tools (proprietary) | Einstein Discovery tools (proprietary) | | | |
| McKinsey (QuantumBlack) | | Advocates ‘responsible AI’ approach to clients; educational materials | | CausalNex (open source) | | | |
| Sage | Team diversity | | | | | | |
| Tieto | In-company ethics certification; special AI ethics engineers and trainers appointed | | | | | | |
| Health Catalyst | | | | | | | |
| DeepMind (Google subsidiary) | External ‘fellows’; Ethics Board; Ethics and Society Team | | | | | | |
| Element AI | Team diversity | Internal blogposts about responsible AI | Fairness tools (proprietary) | Explainability tools (proprietary) | | | |

¹ Companies are ordered by revenue. Apple, Samsung, Deutsche Telekom, Sony, Kakao, Unity Technologies, and Affectiva have been omitted from the table since my searches yielded no results for them.

² ‘Ethics Team’ denotes what is variously referred to as an ethics/ethical/review committee or board (the highest corporate body for matters of ethical AI).

³ Techniques such as privacy by design (de-identification of data) and security by design (encryption) are not included in the table, since they are well known and do not relate specifically to problems of ML.

Note: The sources from which I drew the information in the table are for the most part given in footnotes in the text of the article; otherwise, the sources are available on request.
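
To give a concrete sense of the fairness tools listed in the table, the sketch below shows a minimal group-fairness audit with Microsoft's open-source FairLearn package. It is not taken from any company's documentation; the outcomes, predictions, and group labels are invented for illustration.

```python
# Minimal sketch of a group-fairness audit with the open-source FairLearn
# package; labels, predictions, and the sensitive feature are hypothetical.
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]                  # observed outcomes
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                  # model predictions
group = ["F", "F", "M", "M", "F", "M", "F", "M"]   # sensitive feature

# Accuracy disaggregated by group, plus the largest between-group gap
mf = MetricFrame(metrics=accuracy_score,
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)       # accuracy per group
print(mf.difference())   # maximum difference between groups

# Demographic parity: difference in selection rates across groups
print(demographic_parity_difference(y_true, y_pred,
                                    sensitive_features=group))
```

Audits of this kind report a chosen metric per sensitive group together with the gap between groups; the proprietary tools in the table (e.g., Facebook's Fairness Flow, Accenture's AI Fairness Tool) address the same kind of question through internal pipelines.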
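Similarly, several explainability entries in the table (Amazon's SHAP values and feature importance tools, IBM's AI Explainability 360, Facebook's Captum) revolve around attributing a model's predictions to its input features. The following is a minimal sketch with the open-source `shap` package, assuming an illustrative scikit-learn model rather than any company's actual tooling.

```python
# Minimal sketch of SHAP-based feature importance with the open-source
# `shap` package; the model and dataset are illustrative assumptions.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)    # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X)   # one attribution per feature per sample

# Global feature importance: mean absolute SHAP value per feature
shap.summary_plot(shap_values, X, plot_type="bar")
```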