There’s a lot of hot air around Generative Adversarial Networks, and a lot of progress since Goodfellow’s 2014 seminal work.
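For reference, the "adversarial" in GANs refers to the minimax game from Goodfellow et al. (2014), in which a generator G and a discriminator D are trained against each other:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\left(1 - D(G(z))\right)\right]
```

D is rewarded for telling real data from generated samples, while G is rewarded for fooling D.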
What if this is the way society works? A few years ago (2011) I came across this paper by Hugo Mercier, which in a way made the case for adversarial learning, while being fully aware of some of its intrinsic problems:
Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views.
Why do humans reason? Arguments for an argumentative theory – Hugo Mercier & Dan Sperber
Now, it seems that these limitations may be inescapable:
Adversarial examples from computational constraints – Sébastien Bubeck, Eric Price, Ilya Razenshteyn, arXiv
Adversarial examples have become an emblematic failure of deep learning, exhibiting fundamental shortcomings of today’s machine learning. How can we avoid them?
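To make the failure mode concrete, here is a minimal sketch of one classic way to construct an adversarial example, the Fast Gradient Sign Method (Goodfellow et al., 2015), applied to a toy logistic-regression classifier. All names (`w`, `b`, `fgsm`, the random inputs) are illustrative assumptions, not anything from the papers quoted here:

```python
import numpy as np

# Hypothetical setup: a fixed linear classifier (weights w, bias b).
rng = np.random.default_rng(0)
w = rng.normal(size=5)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, eps=0.25):
    """Fast Gradient Sign Method: one step in the direction that
    increases the loss, bounded by eps in the L-infinity norm."""
    return x + eps * np.sign(loss_grad_x(x, y))

x = rng.normal(size=5)
y = 1.0  # assume the true label is the positive class
x_adv = fgsm(x, y)

# The perturbation is tiny (each coordinate moves by at most eps),
# yet the model's confidence in the true label strictly drops.
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

For a linear model the confidence drop is guaranteed: the sign step moves the logit by exactly `-eps * ||w||_1`. Deep networks are not linear, but the same one-step attack works disturbingly well on them, which is the puzzle the Bubeck–Price–Razenshteyn paper addresses.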
Is that a good argument for the debate between Gerd Gigerenzer and Daniel Kahneman?