2-AIN-505, 2-AIN-251: Seminar in Bioinformatics (1) and (3)
Winter 2017
Abstract

Avanti Shrikumar, Peyton Greenside, Anshul Kundaje. Learning Important Features Through Propagating Activation Differences. In ICML 2017, pp. 3145-3153.

Download preprint: not available

Download from publisher: http://proceedings.mlr.press/v70/shrikumar17a.html

Related web page: not available

Bibliography entry: BibTeX

Abstract:

The purported “black box” nature of neural networks is a barrier to adoption 
in applications where interpretability is essential. Here we present 
DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the 
output prediction of a neural network on a specific input by backpropagating 
the contributions of all neurons in the network to every feature of the 
input. DeepLIFT compares the activation of each neuron to its 'reference 
activation' and assigns contribution scores according to the difference. By 
optionally giving separate consideration to positive and negative 
contributions, DeepLIFT can also reveal dependencies which are missed by 
other approaches. Scores can be computed efficiently in a single backward 
pass. We apply DeepLIFT to models trained on MNIST and simulated genomic 
data, and show significant advantages over gradient-based methods. Video 
tutorial: http://goo.gl/qKb7pL; code: http://goo.gl/RM8jvH
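
To make the "difference from reference" idea concrete, below is a minimal
sketch of DeepLIFT's Rescale rule for a single ReLU layer, written in
NumPy. The function name, the zero reference input, and the zero fallback
where the input delta vanishes are illustrative assumptions for this
seminar, not the authors' released implementation (linked above).

import numpy as np

def deeplift_rescale_contribs(W, b, x, x_ref):
    """Sketch of DeepLIFT's Rescale rule for y = relu(W @ x + b),
    scored against a reference input x_ref. Hypothetical helper."""
    relu = lambda z: np.maximum(z, 0.0)

    z, z_ref = W @ x + b, W @ x_ref + b   # pre-activations on input and reference
    dz = z - z_ref                        # delta of the pre-activation
    dy = relu(z) - relu(z_ref)            # delta of the nonlinearity's output

    # Rescale rule: the multiplier through the nonlinearity is
    # delta-output / delta-input (the paper falls back to the gradient
    # when the input delta is near zero; here we simply use 0).
    m_nonlin = np.divide(dy, dz, out=np.zeros_like(dz),
                         where=np.abs(dz) > 1e-10)

    # Linear layer: input i contributes W[j, i] * dx[i] to unit j;
    # chaining the multipliers yields per-feature contribution scores.
    dx = x - x_ref
    contribs = (m_nonlin[:, None] * W) * dx[None, :]  # shape (units, inputs)
    return contribs.sum(axis=0)                       # score per input feature

# Usage: by summation-to-delta, the scores add up (to float precision)
# to the change in the summed output between x and the reference.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x, x_ref = rng.normal(size=4), np.zeros(4)
scores = deeplift_rescale_contribs(W, b, x, x_ref)
print(scores, scores.sum())

The summation-to-delta check is the property the abstract alludes to: the
contribution scores decompose the output difference exactly, which a plain
gradient (a purely local quantity) does not guarantee.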