Blog  |  December 07, 2023

Transparency, Explainability, and Interpretability of AI

In our previous blog in the “nuts and bolts” series on artificial intelligence (AI) for legal professionals, we discussed privacy considerations, both the negatives and the positives of AI. Two of the biggest questions associated with AI are “why does AI do what it does?” and “how does it do it?” Depending on the context in which the AI algorithm is used, those questions can be matters of mere curiosity (e.g., why does ChatGPT give me an obvious hallucination when I ask it this question?) or life-impacting (e.g., how did facial recognition identify the suspect of a specific crime?). Those questions also extend to the purpose for which the AI algorithm is used, which gets into those privacy considerations we discussed last time.

When people don’t understand the “why” and “how” of AI, they consider the technology to be a “black box”, which causes hesitancy about its use. Understanding the “why” and “how” ties to three concepts: transparency, explainability, and interpretability. In this post, we’ll discuss what these concepts are, how they differ, and the considerations associated with them.

What are Transparency, Explainability, and Interpretability?

Many use the term “transparency” to refer to the ability to understand the “why” and “how” of an AI algorithm, but transparency is only one of the concepts involved in understanding AI. Here are definitions of transparency, explainability, and interpretability:

Transparency

Transparency refers to openness in the design, development, and deployment of AI systems. A transparent AI system is one whose mechanisms, data sources, and decision-making processes are openly available and understandable.

An example of transparency in an AI algorithm is an open-source AI project where the code, data, and methodologies used are publicly accessible and understandable, such as a machine learning project on GitHub where the developers have provided the complete source code, a comprehensive dataset, clear documentation explaining how the algorithm works, and details about the training process.
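
To make that concrete, here is a minimal, purely hypothetical sketch of one piece of that openness: a training script that records its data source, preprocessing steps, hyperparameters, and random seed alongside the model so outside reviewers can reproduce and inspect the run. The file names, field names, and values below are invented for illustration and are not drawn from any real project.

```python
import json
import random

# Hypothetical, illustrative example: a transparent project publishes not just
# its code but the exact configuration needed to reproduce a training run.
TRAINING_CONFIG = {
    "data_source": "data/loans_2020_2023.csv",   # dataset published alongside the code
    "preprocessing": ["drop_missing_rows", "min_max_scale_numeric_features"],
    "model": "logistic_regression",
    "hyperparameters": {"learning_rate": 0.01, "epochs": 50},
    "random_seed": 42,                            # fixes randomness for reproducibility
}

def train(config: dict) -> None:
    """Placeholder training step; a real project would document each stage."""
    random.seed(config["random_seed"])
    print(f"Training {config['model']} on {config['data_source']} ...")

if __name__ == "__main__":
    # Writing the configuration next to the trained model is one simple way
    # to give reviewers the "details about the training process".
    with open("training_config.json", "w") as f:
        json.dump(TRAINING_CONFIG, f, indent=2)
    train(TRAINING_CONFIG)
```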

Explainability

Explainability involves the ability to describe in understandable terms how the AI system reached a specific decision or output. Explainability is more about the logic or reasoning behind individual AI decisions, making the AI’s processes accessible and relatable to end-users.

An example of explainability in an AI algorithm might be a machine learning model used in credit scoring, where the AI evaluates an individual’s creditworthiness based on various factors like income, credit history, employment status, and debt levels. The explainable aspect would be the AI providing reasons for its decision, such as stating that a loan application was denied due to a low credit score and high debt-to-income ratio.
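
As a rough illustration (not how any actual lender’s model works), here is a minimal sketch of a rule-based credit decision that returns plain-language reasons along with its outcome. The thresholds are invented, and a real model would be far more complex and might rely on a dedicated explanation technique such as SHAP.

```python
def score_application(income: float, credit_score: int, debt: float) -> dict:
    """Toy credit decision that returns both an outcome and plain-language reasons.

    The thresholds below are invented for illustration only; they do not
    reflect any real lender's criteria.
    """
    reasons = []
    dti = debt / income if income > 0 else float("inf")  # debt-to-income ratio

    if credit_score < 620:
        reasons.append(f"credit score {credit_score} is below the minimum of 620")
    if dti > 0.43:
        reasons.append(f"debt-to-income ratio {dti:.0%} exceeds the 43% limit")

    approved = not reasons
    return {
        "decision": "approved" if approved else "denied",
        # The explainable part: reasons an applicant (or regulator) can understand.
        "reasons": reasons or ["all criteria met"],
    }

print(score_application(income=48_000, credit_score=590, debt=28_000))
# {'decision': 'denied', 'reasons': ['credit score 590 is below the minimum of 620',
#  'debt-to-income ratio 58% exceeds the 43% limit']}
```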

Interpretability

Interpretability goes deeper than explainability, focusing on the inner workings of the algorithm. It’s about understanding the model’s decision-making process on a detailed level, often requiring technical insight into the AI’s functioning.

An example of interpretability could be a decision tree used in a medical diagnosis tool. Each branch of the decision tree represents a decision based on patient data points (like age, symptoms, medical history, and blood test results), leading to a specific diagnosis or recommendation. The model is interpretable because the path the algorithm takes through the tree can be traced for each decision, showing exactly how and why it reached a particular conclusion based on the input data.
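
Here is a brief sketch of that kind of tracing using scikit-learn’s decision tree; the patient data, features, and diagnoses are entirely made up for illustration.

```python
# Hypothetical illustration of interpretability with scikit-learn:
# the data and feature names are invented, not from a real diagnostic tool.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data: [age, temperature_F, cough (0/1)] -> diagnosis label
X = [[25, 98.6, 0], [40, 101.2, 1], [65, 102.5, 1], [30, 98.9, 0], [55, 100.8, 1]]
y = ["healthy", "flu", "flu", "healthy", "flu"]
feature_names = ["age", "temperature_F", "cough"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The entire decision-making process can be printed and read by a person.
print(export_text(clf, feature_names=feature_names))

# Trace the exact path a single patient takes through the tree.
patient = [[50, 101.0, 1]]
node_path = clf.decision_path(patient).indices
for node in node_path:
    feat = clf.tree_.feature[node]
    if feat >= 0:  # negative values mark leaf nodes
        print(f"node {node}: split on {feature_names[feat]} "
              f"<= {clf.tree_.threshold[node]:.1f}")
print("prediction:", clf.predict(patient)[0])
```

Because every split is a simple threshold on a named feature, both the full tree and the route any individual case takes through it can be read and audited by a person.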

Challenges Associated With Transparency, Explainability, and Interpretability

While in a perfect world AI algorithms would be transparent, explainable, and interpretable, many developers have not prioritized these factors in the algorithms they create. Reasons for this include:

Complexity of Algorithms

Many AI models, especially deep learning algorithms, are inherently complex, making it difficult to understand how they arrive at specific decisions or predictions.

That complexity likely contributes to the tendency of ChatGPT and other generative AI solutions to “hallucinate” fake information in response to questions. It seems no one can predict how accurate the responses from large language models like ChatGPT will be; in fact, some experts don’t think the problem of hallucinations is fixable.

Lack of Standardization

There’s no universally accepted framework or standard for explaining AI decisions, leading to variability in how transparency is approached and implemented.

Data Privacy Concerns

Explaining AI decisions often requires revealing information about the data the system was trained on, which can raise privacy concerns, especially with sensitive or personal data. As we discussed last time, the handling of personal data by AI algorithms has been a major catalyst for new comprehensive data privacy laws. Even with over €4.4 billion in fines issued under the GDPR, companies are making many more billions of dollars in revenue from the insights they gain from profiling personal data.

Intellectual Property (IP) Protection

Another major reason algorithm developers limit transparency and interpretability is protection of their trade secrets and IP. Especially in criminal cases, there have been several instances where parties have challenged the results of an algorithm and requested more information about how it determines its output, while developers or users of the technology have fought disclosure of that information as proprietary or trade secret material. Two examples:

  • Wisconsin v. Loomis (2016): Northpointe, Inc., the developer of the risk-assessment software COMPAS (which we discussed previously here), objected to disclosing how the risk scores were determined or how the factors were weighed in sentencing recommendations, arguing that COMPAS was a “proprietary instrument and a trade secret”. Because the Wisconsin Supreme Court determined that COMPAS was not the sole determinant of the lower court’s sentencing decision, the court held that its use did not violate the defendant’s right to due process.
  • New Jersey v. Arteaga (2022): The defendant requested detailed discovery on the facial recognition systems used by the NYPD to identify him, the original photo and any edits performed by the NYPD before the search was run, and information on the analyst who performed the search that identified him. The trial court denied his motion to compel that discovery.

In both cases, concerns over bias in the AI algorithms and a desire for greater transparency, explainability, and interpretability were at odds with the developers’ desire to protect their trade secrets.

Conclusion

As discussed by Maura R. Grossman, J.D., Ph.D., Gordon V. Cormack, Ph.D., and now-retired Maryland District Judge Paul W. Grimm in their 2021 article Artificial Intelligence as Evidence, “an entire domain of research exclusively devoted to this problem has emerged, known as ‘Explainable AI’ (‘XAI’). Those who advocate for XAI believe that AI can only be trustworthy if it can be explained to humans, although they acknowledge that the level or type of explanation may vary for different applications or users.” One such effort, the Defense Advanced Research Projects Agency (DARPA) XAI program, “aims to produce ‘glass-box’ models that are explainable to a ‘human-in-the-loop’ without sacrificing AI performance.”

So far, however, transparency proponents are fighting an uphill battle. Without the ability to ensure an appropriate level of transparency, explainability, and interpretability, it’s more important than ever to validate the results of the algorithms you use to the extent possible. “Trust but verify” is the best approach to AI algorithms when you can’t fully understand how they work.

Next time, we’ll discuss recent guidance and resolutions on the use of AI from the American Bar Association (ABA). For more on Cimplifi’s specialized expertise in AI & machine learning, click here.

In case you missed the previous blogs in this series, you can catch up here:

The “Nuts and Bolts” of Artificial Intelligence for Legal Professionals

The “Nuts and Bolts” of AI: Defining AI

The “Nuts and Bolts” of AI: Types of Bias in AI

The “Nuts and Bolts” of AI: Privacy Considerations

The “Nuts and Bolts” of AI: Transparency, Explainability, and Interpretability of AI

The “Nuts and Bolts” of AI: ABA Guidance on the Use of AI

The “Nuts and Bolts” of AI: The Current State of AI Regulations

The “Nuts and Bolts” of AI: Current Proven AI Legal Use Cases

The “Nuts and Bolts” of AI: Emerging Use Cases and the Future of AI for Legal

We invite you to stay informed and join the conversation about AI. If you have questions, insights, or thoughts to share, please don’t hesitate to reach out.
