SHAP: Shapley Values in Machine Learning - a podcast by Ben Jaffe and Katie Malone

from 2018-05-13T14:24:38

Shapley values in machine learning are an interesting and useful enough innovation that we figured: hey, why not do a two-parter? Our last episode focused on explaining what Shapley values are: they define a way of assigning credit for an outcome across several contributors, originally to understand how impactful different actors are in building coalitions (hence the game theory background), but now they're being repurposed to quantify feature importance in machine learning models. This episode centers on the computational details that allow Shapley values to be approximated quickly, and on a new package called SHAP (SHapley Additive exPlanations) that makes all this innovation accessible.
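If you want to poke at this yourself after listening, here is a minimal sketch of what using the SHAP package typically looks like in Python. The model (XGBoost), dataset (scikit-learn's diabetes data), and parameters are illustrative assumptions, not choices discussed in the episode.

# A minimal sketch of computing Shapley values with the SHAP package.
# The model, dataset, and parameters here are illustrative, not from the episode.
import shap
import xgboost
from sklearn.datasets import load_diabetes

# Fit any tree-based model; SHAP's TreeExplainer exploits the tree structure
# to compute exact Shapley values far faster than brute-force enumeration.
data = load_diabetes()
X, y = data.data, data.target
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# One Shapley value per feature per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# For each row, the Shapley values plus the expected value add up to the
# model's prediction for that row (the "additive" part of SHAP).
print(explainer.expected_value + shap_values[0].sum(), model.predict(X[:1])[0])

The last line illustrates the additivity property: each prediction decomposes exactly into a baseline (the expected value) plus one Shapley contribution per feature.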
