Interpretability

The unreasonable impact of machine learning techniques demonstrates that they are here to stay. That being the case, it is critical that when an important decision is made by an algorithm, the people affected by that decision can understand how the algorithm arrived at its conclusion. This season is a broad exploration of explainability and interpretability techniques for AI and ML.

Interpretability

Machine learning has expanded rapidly into every sector and industry. With increasing reliance on models and increasing stakes attached to their decisions, questions of how models actually work are becoming increasingly important to ask.

Algorithmic Fairness

This episode includes an interview with Aaron Roth, author of The Ethical Algorithm.

Fooling Computer Vision

Wiebe van Ranst joins us to talk about a project in which specially designed printed images can fool a computer vision system, preventing it from identifying a person. Their attack targets the popular YOLOv2 pre-trained image recognition model and is thus likely to be widely applicable.

Visualization and Interpretability

Enrico Bertini joins us to discuss how data visualization can be used to help make machine learning more interpretable and explainable.

ObjectNet

Andrei Barbu joins us to discuss ObjectNet - a new kind of vision dataset.

Adversarial Explanations

Walt Woods joins us to discuss his paper Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness with co-authors Jack Chen and Christof Teuscher.

Anchors as Explanations

We welcome back Marco Tulio Ribeiro to discuss research he has done since our original discussion on LIME.

Shapley Values

Kyle and Linhda discuss how Shapley Values might be a good tool for determining what makes the cut for a home renovation.
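To make that concrete, here is a minimal, self-contained sketch of an exact Shapley value computation over three hypothetical renovation projects. The value function (added resale value in thousands of dollars) is made up for illustration and is not from the episode.

```python
# Exact Shapley values for a toy "home renovation" game (illustrative numbers).
from itertools import combinations
from math import factorial

players = ["kitchen", "bathroom", "paint"]

VALUES = {  # hypothetical added resale value ($1000s) for each set of projects
    frozenset(): 0,
    frozenset({"kitchen"}): 20,
    frozenset({"bathroom"}): 12,
    frozenset({"paint"}): 5,
    frozenset({"kitchen", "bathroom"}): 35,
    frozenset({"kitchen", "paint"}): 27,
    frozenset({"bathroom", "paint"}): 18,
    frozenset({"kitchen", "bathroom", "paint"}): 45,
}

def shapley(player):
    """Average marginal contribution of `player` over all orders of joining."""
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            marginal = VALUES[frozenset(subset) | {player}] - VALUES[frozenset(subset)]
            total += weight * marginal
    return total

for p in players:
    print(f"{p}: {shapley(p):.2f}")
```

The three attributions sum to the value of doing the full renovation (45), which is the efficiency property that makes Shapley values attractive for deciding what each item really contributes.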

Interpretability Tooling

Pramit Choudhary joins us to talk about the methodologies and tools used to assist with model interpretability.

AlphaGo, COVID-19 Contact Tracing and New Data Set

Announcing Journal Club

I am pleased to announce that Data Skeptic is launching a new spin-off show called "Journal Club", with similar themes but a very different format from the Data Skeptic everyone is used to.

Uncertainty Representations

Jessica Hullman joins us to share her expertise on data visualization and communication of data in the media. We discuss Jessica’s work on visualizing uncertainty, interviewing visualization designers on why they don't visualize uncertainty, and modeling interactions with visualizations as Bayesian updates.
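As a toy illustration of the Bayesian-update framing (my own example, not Jessica's), imagine a viewer whose belief about some proportion starts as a Beta(2, 2) prior; a chart reporting 30 successes and 10 failures then conjugately updates that belief.

```python
# Toy Beta-Binomial update: a viewer's prior belief about a proportion,
# revised after a visualization "shows" 30 successes and 10 failures.
a, b = 2, 2                      # Beta prior, mean = a / (a + b) = 0.5
successes, failures = 30, 10     # evidence presented by the chart
a_post, b_post = a + successes, b + failures
print("posterior mean:", a_post / (a_post + b_post))   # ~0.727
```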

Computer Vision is Not Perfect

Julia Evans joins us to help answer the question: why do neural networks think a panda is a vulture? Kyle talks to Julia about her hands-on work fooling neural networks.
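For a feel of how such attacks work, here is a minimal sketch of a gradient-based (FGSM-style) attack in PyTorch. It is not the setup from Julia's experiments; the model choice, tensor shapes, and step size are illustrative assumptions, and it assumes a recent torchvision.

```python
# Minimal FGSM-style adversarial example (illustrative sketch only).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, label, epsilon=0.01):
    """Nudge every pixel a small step in the direction that increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A single signed-gradient step is often enough to change the prediction.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Usage (hypothetical): `x` is a 1x3x224x224 tensor scaled to [0, 1] and `y`
# is the correct class index.
# x_adv = fgsm(x, torch.tensor([y]))
# print(model(x_adv).argmax(dim=1))  # frequently no longer equal to y
```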

Plastic Bag Bans

Becca Taylor joins us to discuss her work studying the impact of plastic bag bans as published in Bag Leakage: The Effect of Disposable Carryout Bag Regulations on Unregulated Bags from the Journal of Environmental Economics and Management. How does one measure the impact of these bans? Are they achieving their intended goals? Join us and find out!

Self-Explaining AI

Dan Elton joins us to discuss self-explaining AI. What could be better than an interpretable model? How about a model which explains itself in a conversational way, engaging in a back-and-forth with the user?

Understanding Neural Networks

What does it mean to understand a neural network? That’s the question posed by this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.

Black Boxes Are Not Required

Deep neural networks are undeniably effective. They rely on such a high number of parameters that they are appropriately described as “black boxes”.

GANs Can Be Interpretable

Erik Härkönen joins us to discuss the paper GANSpace: Discovering Interpretable GAN Controls. During the interview, Kyle makes reference to this amazing interpretable GAN controls video and its accompanying codebase found here. Erik mentions the GANSpace Colab notebook, which is a rapid way to try these ideas out for yourself.
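As a rough illustration of the paper's core idea, the sketch below runs PCA on sampled intermediate latent codes of a StyleGAN-like generator and treats the principal components as edit directions. Here `mapping_network` and `synthesis_network` are hypothetical stand-ins for a pretrained generator's two halves; this is not the authors' implementation.

```python
# Sketch of the GANSpace idea: PCA over intermediate latent codes (w-space).
import numpy as np

def principal_directions(mapping_network, latent_dim=512, n_samples=10_000):
    """Estimate candidate edit directions as PCA components of w-space samples."""
    z = np.random.randn(n_samples, latent_dim)
    w = mapping_network(z)                          # shape: (n_samples, w_dim)
    w_centered = w - w.mean(axis=0, keepdims=True)
    # Rows of `components` are orthogonal directions ordered by explained variance.
    _, _, components = np.linalg.svd(w_centered, full_matrices=False)
    return w.mean(axis=0), components

def edit(w, components, index, strength):
    """Move an intermediate latent code along one principal direction."""
    return w + strength * components[index]

# Usage (hypothetical):
# w_mean, dirs = principal_directions(mapping_network)
# image = synthesis_network(edit(w_mean, dirs, index=0, strength=2.0))
```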