Wojciech Samek
Head of AI Department, Fraunhofer HHI, Germany, BIFOLD Fellow & ELLIS Berlin
Verified email at hhi.fraunhofer.de
Title · Cited by · Year
On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation
S Bach, A Binder, G Montavon, F Klauschen, KR Müller, W Samek
PLOS ONE 10 (7), e0130140, 2015
Cited by 2057 · 2015
Methods for interpreting and understanding deep neural networks
G Montavon, W Samek, KR Müller
Digital Signal Processing 73, 1-15, 2018
Cited by 1316 · 2018
Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
W Samek, T Wiegand, KR Müller
ITU Journal: ICT Discoveries 1 (1), 39-48, 2018
Cited by 805 · 2018
Explaining nonlinear classification decisions with deep taylor decomposition
G Montavon, S Lapuschkin, A Binder, W Samek, KR Müller
Pattern Recognition 65, 211-222, 2017
Cited by 784 · 2017
Evaluating the visualization of what a deep neural network has learned
W Samek, A Binder, G Montavon, S Lapuschkin, KR Müller
IEEE Transactions on Neural Networks and Learning Systems 28 (11), 2660-2673, 2017
Cited by 654 · 2017
Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment
S Bosse, D Maniry, KR Müller, T Wiegand, W Samek
IEEE Transactions on Image Processing 27 (1), 206-219, 2018
Cited by 481 · 2018
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
S Lapuschkin, S Wäldchen, A Binder, G Montavon, W Samek, KR Müller
Nature Communications 10, 1096, 2019
Cited by 410 · 2019
Explainable AI: Interpreting, explaining and visualizing deep learning
W Samek, G Montavon, A Vedaldi, LK Hansen, KR Müller
Springer Nature, 2019
Cited by 352 · 2019
Robust and communication-efficient federated learning from non-iid data
F Sattler, S Wiedemann, KR Müller, W Samek
IEEE Transactions on Neural Networks and Learning Systems 31 (9), 3400-3413, 2020
Cited by 339 · 2020
Interpretable deep neural networks for single-trial EEG classification
I Sturm, S Lapuschkin, W Samek, KR Müller
Journal of Neuroscience Methods 274, 141-145, 2016
Cited by 245 · 2016
Explaining recurrent neural network predictions in sentiment analysis
L Arras, G Montavon, KR Müller, W Samek
EMNLP'17 Workshop on Computational Approaches to Subjectivity, Sentiment …, 2017
Cited by 232 · 2017
"What is relevant in a text document?": An interpretable machine learning approach
L Arras, F Horn, G Montavon, KR Müller, W Samek
PLOS ONE 12 (8), e0181142, 2017
Cited by 232 · 2017
Stationary common spatial patterns for brain–computer interfacing
W Samek, C Vidaurre, KR Müller, M Kawanabe
Journal of Neural Engineering 9 (2), 026013, 2012
Cited by 230 · 2012
Layer-wise relevance propagation for neural networks with local renormalization layers
A Binder, G Montavon, S Lapuschkin, KR Müller, W Samek
Artificial Neural Networks and Machine Learning – ICANN 2016, LNCS 9887, 63-71, 2016
Cited by 189 · 2016
iNNvestigate neural networks!
M Alber, S Lapuschkin, P Seegerer, M Hägele, KT Schütt, G Montavon, ...
Journal of Machine Learning Research 20 (93), 1-8, 2019
Cited by 187 · 2019
Towards explainable artificial intelligence
W Samek, KR Müller
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 5-22, 2019
Cited by 176 · 2019
Layer-Wise Relevance Propagation: An Overview
G Montavon, A Binder, S Lapuschkin, W Samek, KR Müller
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning 11700 …, 2019
Cited by 173 · 2019
Divergence-based framework for common spatial patterns algorithms
W Samek, M Kawanabe, KR Müller
IEEE Reviews in Biomedical Engineering 7, 50-72, 2014
Cited by 172 · 2014
A deep neural network for image quality assessment
S Bosse, D Maniry, T Wiegand, W Samek
23rd IEEE International Conference on Image Processing (ICIP), 3773-3777, 2016
Cited by 167 · 2016
Transferring Subspaces Between Subjects in Brain-Computer Interfacing
W Samek, FC Meinecke, KR Müller
IEEE Transactions on Biomedical Engineering 60 (8), 2289-2298, 2013
Cited by 156 · 2013
Articles 1–20