Opening up AI Black Boxes

"Explanations are … the currency in which we exchange beliefs.” Lombrozo, 2016

The growing popularity of Artificial Intelligence (AI) technologies, with a plethora of applications across several sectors, has been accompanied by rising concern among the wider public about the safety of these intelligent systems. These concerns become particularly pressing when the actions of an AI system can put a human life at risk, such as errors made by self-driving cars.

Providing techniques that explain the inner workings of these models has been one of the central topics of AI research and has led to the creation of a new branch of AI, called Explainable AI (XAI). However, explanations usually come at the cost of accuracy: there is a trade-off between the need to develop more complex models that can handle real-world data (containing errors, missing values, and inconsistencies) and make accurate predictions, and the requirement to provide enough transparency to satisfy the demands of human users. Complex models are by nature difficult to understand, in particular when they feature a large number of interacting components (agents, processes, etc.) whose aggregate activity is nonlinear (not derivable from the sum of the activities of the individual components). It is not feasible to explain such models in their entirety, because the learner would be overwhelmed by the sheer amount of information. On the other hand, an explanation must still provide a comprehensive and truthful description of the model.
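To make the trade-off concrete, below is a minimal sketch (using scikit-learn; the dataset and both models are illustrative assumptions, not tools discussed here) contrasting a shallow decision tree, whose rules can be printed in full, with a boosted ensemble that typically scores higher but has no comparably compact description:

```python
# A minimal sketch of the accuracy/interpretability trade-off.
# scikit-learn, the dataset and both models are illustrative choices,
# not tools or results discussed in the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# A shallow decision tree: easy to read in full, but usually less accurate.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# A boosted ensemble: typically more accurate, but its hundreds of trees
# have no comparably compact, faithful summary.
boost = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("shallow tree accuracy :", tree.score(X_test, y_test))
print("boosted model accuracy:", boost.score(X_test, y_test))

# The entire shallow tree can be printed as human-readable if-then rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```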

Regulators are now showing increasing interest in this topic. The new GDPR has introduced into EU legislation a “right to explanation” for people who are subject to decisions made with the aid of algorithms. Article 22 of the GDPR (1) says that “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. […] The data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”

Clearly, the main aim of the GDPR article is to prevent discrimination and unfair treatment of people belonging to particular groups, such as specific ethnicities or genders. While there is broad agreement on the good intentions behind this rule, researchers are now debating the effectiveness of Article 22 and the general effects that GDPR will have on the AI field.

According to Nick Wallace, a senior policy analyst at the Center for Data Innovation, GDPR will “slow down the development and use of AI in Europe by holding developers to a standard that is often unnecessary and infeasible” (2). Additionally, Wallace rightly points out that it is not always practical to explain how an algorithm works. Understanding the rationale of some algorithms requires strong background knowledge in mathematics from both the explainer and the learner. Hence, systems are needed that automatically generate explanations intelligible to the wider public; researchers have been working on such systems since the early 80s (3). This is far from easy, as there is no agreement on what constitutes an explanation or on when one explanation is better than another.

Another criticism concerns the source of discrimination. Discriminatory behaviour can be intrinsic to the data, and no algorithm can avoid discrimination when it learns from such data. However, it cannot be taken for granted that a person will behave more fairly than an algorithm. Every person is affected to some extent by prejudices and other cognitive biases inherited from their cultural background. These biases can heavily influence people’s decisions, especially when those decisions are not supported by a standard, rigorous procedure and are based on “gut feeling”. The same biases can be present in the data used to train AI models whenever the data contain information on decisions made by humans. AI algorithms can pick up these biases and reproduce them over and over in their predictions.
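As a hedged illustration of this point, the following sketch fabricates a synthetic hiring dataset in which historical human decisions penalise one group; a model trained on those decisions simply reproduces the disparity. Every variable name and number here is hypothetical, chosen only for the example:

```python
# Illustrative sketch (not from the article): a model trained on biased
# historical decisions learns to reproduce the disparity in those decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)    # hypothetical protected attribute (0 or 1)
skill = rng.normal(size=n)       # genuinely relevant feature

# Historical human decisions: driven by skill, but with a penalty on group 1.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
preds = model.predict(np.column_stack([skill, group]))

# The learned model mirrors the historical disparity in its own selection rates.
print("selection rate, group 0:", preds[group == 0].mean())
print("selection rate, group 1:", preds[group == 1].mean())
```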

On the other hand, other researchers see this as an opportunity to boost research on XAI. As noted at the beginning, AI is now employed in areas that can affect the safety of its users, so it is necessary to gain their trust before they will agree to use it. People tend not to trust what they do not understand, and they may not want to deal with algorithms that base their decisions on an opaque, unintelligible process, particularly if they believe or fear that the outcome of these decision processes could be unfair or even dangerous for them. Been Kim, a research scientist at Google Brain, strongly believes that without interpretability, “in the long run, […] humankind might decide — perhaps out of fear, perhaps out of lack of evidence — that this technology is not for us” (4).

According to many researchers, explainability can also bring other benefits, such as speeding up the debugging of AI models, making them easier to use, and uncovering comprehensible, interesting knowledge from the data (5). However, it is still debated how an explanation should be constructed so that users find it clear and useful. The XAI literature is vast and discusses several properties of explainability, such as transparency, usability, readability and interestingness.
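For a flavour of what such an explanation can look like in practice, here is a small sketch of one common post-hoc technique, permutation feature importance, which ranks features by how much the model’s accuracy drops when each one is shuffled. The library, dataset and model are illustrative choices, not methods prescribed by the article or the cited papers:

```python
# Sketch of a post-hoc explanation via permutation feature importance:
# shuffle each feature in turn and measure how much the model's score drops.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Importance of a feature = average drop in accuracy when it is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features the model relies on most: a compact, global summary
# rather than a full description of the model's internals.
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```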

Here at Creme Global, we are committed to making our decision models as transparent as possible, with the goal of providing actionable insights from the data for our customers. We will therefore follow developments in the XAI field closely in order to provide the best, cutting-edge solutions to our clients.

  1. https://gdpr-info.eu/art-22-gdpr/
  2. https://www.techzone360.com/topics/techzone/articles/2017/01/25/429101-eus-right-explanation-harmful-restriction-artificial-intelligence.htm
  3. https://www.sciencedirect.com/science/article/abs/pii/0004370280900211#!
  4. https://www.quantamagazine.org/been-kim-is-building-a-translator-for-artificial-intelligence-20190110/
  5. https://www.cs.kent.ac.uk/people/staff/aaf/pub_papers.dir/Expert-Update-BCS-SGAI-2006.pdf

Appendix

If you wish to know more about the right to explanation and XAI, some useful links are provided below.
