If you're interested in the generalization ability of neural networks, I can recommend the following paper:
"Intriguing properties of neural networks" http://arxiv.org/pdf/1312.6199v4.pdf
TLDR: The authors create adversarial examples, i.e., they slightly modify the original images, which look exactly the same to humans but neural networks can't classify the images correctly anymore. What does that imply about the ability to generalize? :)
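The paper itself finds these perturbations with box-constrained L-BFGS; as a toy illustration of the idea (not the paper's method), here is a numpy sketch of the gradient-sign trick applied to a logistic regression. All data, sizes, and step sizes here are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian blobs in 2D.
X = np.vstack([rng.normal(-1.0, 0.5, size=(50, 2)),
               rng.normal(+1.0, 0.5, size=(50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a logistic regression (a "zero-hidden-layer network") by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Adversarial perturbation of a class-0 input: step each coordinate in the
# direction that increases the loss w.r.t. the input.
x = X[0]                          # a class-0 training point
grad_x = sigmoid(x @ w + b) * w   # d(loss)/dx for true label 0
x_adv = x + 2.0 * np.sign(grad_x) # step exaggerated for 2D; in high
                                  # dimensions a tiny per-coordinate
                                  # step already moves the prediction

print(sigmoid(x @ w + b), sigmoid(x_adv @ w + b))
```

The perturbation is guaranteed to push the model's output toward the wrong class, since it moves the input along the loss gradient; the point of the paper is that for image-sized inputs this works with perturbations far too small for a human to notice.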
On a more general note: NNs are often treated as something magical. I think a "sober view" is that NNs are a special way to parameterize non-linear functions. This is neither good nor bad, but it's easy to see when you look at, for example, a two-layer NN:
$f(x) = w_2^T \sigma(W_1 \sigma (W_0 x))$

>"which look exactly the same to humans but neural networks can't classify the images correctly anymore. What does that imply about the ability to generalize?"

Usually that means the model is overfitting, and that's actually a challenge for any machine learning model.
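Spelled out, the parameterization above is just matrix multiplies with an elementwise nonlinearity in between; a minimal numpy sketch (layer sizes are arbitrary, and $\sigma$ is taken to be the logistic sigmoid for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(z):
    # Elementwise nonlinearity; here the logistic sigmoid.
    return 1.0 / (1.0 + np.exp(-z))

# f(x) = w2^T sigma(W1 sigma(W0 x)): two weight matrices, an output
# vector, and nonlinearities in between. Sizes chosen arbitrarily.
d_in, h0, h1 = 4, 8, 8
W0 = rng.normal(size=(h0, d_in))
W1 = rng.normal(size=(h1, h0))
w2 = rng.normal(size=h1)

def f(x):
    return w2 @ sigma(W1 @ sigma(W0 @ x))

print(f(rng.normal(size=d_in)))  # a single scalar: just a parameterized
                                 # non-linear function of x
```

Nothing magical is hiding in there; training only searches for good values of `W0`, `W1`, and `w2`.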