This is not a cat

I haven't done any blogging for a while, so I thought I'd kick off the new year by writing some posts about AI. This one is about adversarial examples and understanding.

My wife is Belgian, and I spend quite a lot of time in Brussels. Walking around town (I did quite a bit of this over Christmas), I'm often reminded of the Belgian painter René Magritte, and in particular of one of his most famous paintings, "Ceci n'est pas une pipe" (This is not a pipe).

This is not a pipe

Magritte wanted to provoke the viewer, because the painting does look like a pipe. Yet the caption is undeniably true: it is an image of a pipe, not a real pipe, and therefore has very different properties. I feel that the same basic mistake is being made in artificial intelligence.

Over the last couple of years, you might have heard about adversarial examples in AI. The idea is that you have a trained, very accurate machine learning model. You often see examples using image recognition, but the problem also applies to other types of models.

You take an image, add some carefully crafted noise to it, and run it through the model. The image is suddenly classified as something completely different. In the example below, after adding noise to a cat image, the model classifies the image as guacamole, even though it is still clearly an image of a cat.

Adversarial example (image from https://github.com/anishathalye/obfuscated-gradients)
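For the curious, here is a minimal sketch of one well-known way to generate such noise, the fast gradient sign method (FGSM). It assumes PyTorch and torchvision are installed, uses a stock pretrained classifier, and "cat.jpg" is a placeholder path for any cat photo; the attack behind the guacamole example above is more sophisticated, but the basic idea is the same: follow the gradient of the loss with respect to the input pixels.

```python
# Minimal FGSM sketch (assumptions: PyTorch, torchvision, a local "cat.jpg").
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained ImageNet classifier in evaluation mode.
model = models.resnet50(weights="IMAGENET1K_V1")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),          # pixel values in [0, 1]
])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def classify(x):
    # Normalization happens here so the perturbation and the clamp below
    # operate directly on raw pixel values.
    return model(normalize(x))

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
image.requires_grad_(True)

# The model's prediction on the clean image.
logits = classify(image)
original_label = logits.argmax(dim=1)

# Gradient of the classification loss with respect to the input pixels.
loss = F.cross_entropy(logits, original_label)
loss.backward()

# Step each pixel slightly in the direction that increases the loss.
epsilon = 0.01  # perturbation size; larger values are easier to spot
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# The perturbed image often receives a completely different label,
# even though it looks essentially unchanged to a human.
new_label = classify(adversarial).argmax(dim=1)
print(original_label.item(), "->", new_label.item())
```

The perturbation is tiny per pixel, which is exactly why the result still looks like a cat to us while the model's output can change completely.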

The conclusion drawn from these examples is that even though neural networks are inspired by the brain, the models must work in very different ways. I'm quite surprised that this isn't considered completely obvious. The model is trained on images, not on the objects themselves, and the patterns it discovers are limited to what can be found in the pixels. Humans, on the other hand, don't learn about objects only by looking at pictures; they learn concepts by interacting with objects in the world. When a human sees an image of a cat, we categorize it using the robust, multifaceted concept of a cat that we have built up through a lifetime of seeing and interacting with cats. Basically, humans are trained on real cats, not on images of cats.

There are a couple of conclusions to draw from this. One is that it will be very hard (perhaps impossible) to make a machine learning model as robust as a human at classifying and understanding things. The model would have to interact with objects, animals, and people in the real world to build that kind of understanding, which is very hard to achieve.

The best chance for a machine learning model to be better than a human is probably in narrow use cases where humans also don't have a very multifaceted concept based on interactions in the real world. An example might be the recent result where Google's DeepMind outperformed doctors at detecting breast cancer from X-ray images (https://www.bbc.co.uk/news/health-50857759). With only the X-ray image at hand, the machine learning model was more effective than doctors at identifying patterns in the image. I suspect that X-ray images are only one of many methods doctors use to detect cancer, and that these kinds of models will be tools for doctors to use in the future, together with the understanding they have developed by working as doctors. I believe the same will apply to many professions.

I suspect that some of the hype about artificial general intelligence will be toned down over the next decade. Machine learning models are very useful in specific cases, but don't expect real understanding.
