Hacker News | sotte's comments

If you're interested in the generalization ability of neural networks I can recommend the following paper: "Intriguing properties of neural networks" http://arxiv.org/pdf/1312.6199v4.pdf

TLDR: The authors create adversarial examples, i.e., slightly modified versions of the original images that look exactly the same to humans, yet the neural network can no longer classify them correctly. What does that imply about the ability to generalize? :)
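A minimal sketch of the idea with a toy linear classifier (the paper itself finds perturbations via box-constrained L-BFGS; the signed-gradient step below is a simplification, and all numbers are made up for illustration):

```python
import numpy as np

# toy linear "classifier": score > 0 -> class A, score < 0 -> class B
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.2])   # correctly classified: w @ x = 0.6 > 0

eps = 0.3                        # small per-dimension perturbation budget
# move each input dimension slightly against the gradient of the score
# (the gradient of w @ x with respect to x is just w)
x_adv = x - eps * np.sign(w)

score = w @ x                    # positive: class A
score_adv = w @ x_adv            # negative: prediction flips to class B
```

Each coordinate of `x_adv` differs from `x` by at most 0.3, yet the prediction flips; for image classifiers, the analogous perturbation is small enough to be invisible to humans.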

On a more general note: NNs are often treated as something magic. I think a more sober view is that NNs are simply a particular way to parameterize non-linear functions. This is neither good nor bad, and it's easy to see when you look at, for example, a 2-layer NN:

$f(x) = w_2^T \sigma(W_1 \sigma (W_0 x))$
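To make the "parameterized non-linear function" view concrete, here is a minimal NumPy sketch of that formula (the tanh non-linearity and the weight shapes are assumptions for illustration):

```python
import numpy as np

def sigma(z):
    # element-wise non-linearity; tanh chosen for illustration
    return np.tanh(z)

def f(x, W0, W1, w2):
    # f(x) = w2^T sigma(W1 sigma(W0 x)), exactly as in the formula above
    return w2.T @ sigma(W1 @ sigma(W0 @ x))

rng = np.random.default_rng(0)
W0 = rng.standard_normal((4, 3))  # maps input (3, 1) -> hidden (4, 1)
W1 = rng.standard_normal((4, 4))  # hidden -> hidden
w2 = rng.standard_normal((4, 1))  # hidden -> scalar output
x = rng.standard_normal((3, 1))

y = f(x, W0, W1, w2)              # shape (1, 1): just a non-linear function of x
```

Nothing magic: the weights are the parameters, and "training" is just fitting them so this particular family of non-linear functions matches the data.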


>"which look exactly the same to humans but neural networks can't classify the images correctly anymore. What does that imply on the ability to generalize?"

Usually that means the model is overfitting, which is a central challenge for any machine learning model.



The comments say that there's an Ogg download but I can't find it. Any pointers?


Someone posted the bliptv link above: http://blip.tv/fosslc/everything-you-need-to-know-about-cryp...

  $ youtube-dl http://blip.tv/blahblah
  $ ffmpeg -i file.flv file.ogv


youtube-dl also has a --recode-video option that takes ogg or webm.


Thank you for introducing me to youtube-dl.


Does anybody know docopt? I really like it because you simply write the help/usage as text and docopt automatically generates the parser for it.

Take a look at the example in the README: https://github.com/docopt/docopt


I'm using docopt. It's really nice with its automagicness, but for some things it's just not enough, too inflexible. I'm really excited about click.

