We recently wrote a piece in Science about the danger of adversarial attacks on AI systems. It was a great collaboration with friends at two universities and three schools (Medicine, Law and Public Health). Afterwards, I found myself returning again and again to the phrase "They don't think like we do," a phrase burned into my long-term memory by a song ("Apart") on my exercise playlist. But why? Yes, it's a catchy song, but why did the article's publication start the repetitive autoplay? It took me a while to realize that I had triggered myself.
What was the trigger? One small and tangential point of the article was that the adversarial noise added to the clinical image (of a retina, X-ray or skin lesion), while not random, was imperceptible to humans. That is, before and after the addition of this engineered noise, most clinicians would come to the same diagnosis. The deep learning algorithm, by contrast, would dramatically change its diagnosis (as intended by the designer of the attack). Although this result is novel in medical applications, it is not a surprise to machine learning experts such as Ian Goodfellow, who has extensively characterized adversarial attacks. Superficially, and perhaps more deeply, this supports the hypothesis that computers using convolutional neural networks do not have vision like ours. Nor have the current leading practitioners claimed that the latest generation of machine learning algorithms (broadly speaking, 'deep' neural networks) resemble, or are a model for, the way biological neurons function in humans. This has not always been the case for researchers in artificial intelligence.
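For readers curious how such engineered noise is actually produced, here is a minimal sketch in the spirit of Goodfellow's fast gradient sign method: each input value is nudged by a tiny amount in whichever direction most increases the model's loss. The "model" below is a toy logistic-regression classifier on a fake 8x8 image, purely illustrative; it is not the clinical deep networks from the article, and every name and parameter here is an assumption for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Nudge each pixel of x by +/- epsilon in the direction that
    increases the cross-entropy loss for the true label y_true (0 or 1).
    The perturbation size per pixel is bounded by epsilon."""
    p = sigmoid(w @ x + b)       # model's predicted probability of class 1
    grad_x = (p - y_true) * w    # gradient of the loss with respect to x
    return x + epsilon * np.sign(grad_x)

# Toy setup: random "trained" weights and a random 8x8 "image".
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = 0.0
x = rng.normal(size=64)

# Treat the model's own confident call as the "true" label, then attack it.
label = 1 if sigmoid(w @ x + b) > 0.5 else 0
x_adv = fgsm_perturb(x, w, b, label, epsilon=0.1)

print("probability before attack:", sigmoid(w @ x + b))
print("probability after attack: ", sigmoid(w @ x_adv + b))
print("largest per-pixel change: ", np.abs(x_adv - x).max())
```

The key property mirrors the article's point: no pixel moves by more than epsilon (here 0.1, small relative to pixel values), yet the model's confidence is pushed sharply away from its original diagnosis, while a human looking at the two images would see essentially no difference.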
Especially in understanding sensory processing, the intellectual cross-pollination between cognitive scientists and artificial intelligence researchers has been a source of inspiration. How the study of human cognition can be informed by AI/ML insights (and vice versa) was the topic of a very well attended 2017 conference at MIT called Cognitive Computational Neuroscience. Nikolaus Kriegeskorte of Columbia University was recently quoted: "Current neural network models can perform this kind of task using only computations that biological neurons can perform. Moreover, these neural network models can predict to some extent how a neuron deep in the brain will respond to any image."