"# Adversarial Examples for Vanilla Neural Networks\n",
"Adversarial examples are inputs to a neural network that are designed to \"trick\" the neural network. For example, [here](https://blog.openai.com/robust-adversarial-inputs/) is a cool project that an intern at OpenAI did on adversarial examples. They managed \"convince\" an image recognition neural network that a picture of a cat was a desktop computer. Adversarial examples are incredibly important when it comes to the security of neural network models and is currently a very active field of research (for example, a [paper](https://arxiv.org/pdf/1611.02770.pdf) from Berkeley's own Dawn Song).\n",
"Here is an example of an image of a panda with added noise that a neural network thinks with 99.3% confidence is a gibbon:\n",
"This notebook does something similar. It takes a neural network trained to recognize handwritten digits (MNIST) and implements code to generate images that \"trick\" the neural network. For example, we'll generate images that look like a '2' but the network will think is a '6'. The digits that the network was trained on look something like this:\n",
"(The neural network was implemented by Michael Nielsen for his [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/) book. We encourage you to read it!)"