Would love to see mention of several of the main contributors to deep learning in future posts, such as Geoffrey Hinton (the "father" of deep learning), Andrew Ng, and Demis Hassabis.
Geoff Hinton - surely. But I think most experts will disagree on the other two.
In terms of deep, fundamental contributions, I don't think the other two have made much.
I think Andrew Ng has been a great popularizer/marketing guy - primarily with that Cats project.
Likewise Demis Hassabis has been a great application creator - with amazing results of course - AlphaGo, Atari, etc.
I felt he got away too easily on that, without any apology or even a public statement - especially considering he is a former academic. (On top of that, he silently deleted his Google+ posts.)
Imagine if something like that had happened at a Google research team - I am pretty sure Jeff Dean or Peter Norvig would have stepped down.
I didn't even realize this happened. Thank you for posting. Sucks because now I have less respect for Andrew.
I don't understand why you'd want to cheat in a competition like this. I get it, people cheat all the time, but the field of machine learning is built on a foundation of open and shared research, and trust.
On skim-reading the article, it seems they were banned from submitting entries to a competition server for 12 months because they made a significant number of submissions.
It's arguable that they were gaming the system somewhat, but unless a limit was explicitly defined, this just seems like they were doing a lot of exploration in the area.
Imagine if you published some research showing you'd made something that did something cool, but then people lost respect for you because you'd made a lot of previous attempts.
1. The limit was explicitly defined. Third paragraph: "twice a week".
2. They registered multiple accounts to get around the limit. That removes almost all doubt that the submitter knew he/she was cheating.
From a purely technical point of view, I agree with you. But there is no doubt they have all played an important role in popularising deep learning. I am fascinated by the history of deep learning and how it went from a field no one cared about to what it is today.
IMHO the ImageNet 2012 competition and its winning solution, AlexNet (Krizhevsky et al.), marked the pivotal moment when (deep) neural nets went from a field only a few wizened academics cared about to becoming today's buzzword.
It did 3 things:
1. Provided a usable solution, with decent accuracy, for what was previously an intractable real-world problem: large-scale multiclass image classification
2. Crushed the prior benchmark on this task
3. Found a practical workaround to what was then the biggest bottleneck, computation time, by utilizing GPUs (and made Nvidia stock explode /s) - see the sketch below
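For anyone curious what that combination looks like in practice today, here's a minimal sketch (assuming PyTorch and torchvision, which ship a reference AlexNet; this is illustrative, not the original cuda-convnet code from the paper) of an AlexNet-style net doing multiclass classification with the heavy lifting on a GPU:

    # Minimal sketch: AlexNet-style multiclass image classification on a GPU.
    # Assumes PyTorch + torchvision; torchvision's AlexNet stands in for the
    # original 2012 cuda-convnet implementation.
    import torch
    from torchvision import models

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = models.alexnet(num_classes=1000).to(device).eval()  # 1000 ImageNet classes
    x = torch.randn(8, 3, 224, 224, device=device)  # stand-in batch of 224x224 RGB images

    with torch.no_grad():
        logits = model(x)               # shape (8, 1000): one score per class
        probs = logits.softmax(dim=-1)  # multiclass probabilities
    print(probs.argmax(dim=-1))         # predicted class index per image

The GPU part is essentially the one-line .to(device), which is the whole of point 3: the same architecture that was impractical to train on CPUs became feasible on commodity graphics hardware.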
The subsequent ImageNet competitions then provided the perfect catalyst for refining deep neural nets and making them mainstream. In parallel, the sudden interest from everyone else, who in turn started applying neural nets to pretty much every domain under the sun, is what I think ultimately made deep learning what it is today.