Well put. I think it's really valuable to take inventory of human cognitive capabilities that we don't yet know how to implement in neural nets.
We seem to have more or less solved perception. Given a high-dimensional, "raw" input space, we know how to process it into a more usable, more abstract representation.
Work on perception has also given us certain limited kinds of behavior. We can generate images from abstract representations by inverting our image recognition architectures. RNNs can do perception over sequences (e.g. of words), but can also generate language output or a sequence of control commands for robotics.
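The dual role an RNN plays here, encoding a sequence into an abstract state and then generating a sequence from one, can be sketched in a few lines. This is a minimal illustrative vanilla RNN with random placeholder weights (a real model would learn them); the vocabulary size, hidden size, and greedy decoding are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 5, 8
# Random placeholder weights -- in practice these are learned by backprop.
Wxh = rng.normal(scale=0.1, size=(hidden, vocab))
Whh = rng.normal(scale=0.1, size=(hidden, hidden))
Why = rng.normal(scale=0.1, size=(vocab, hidden))

def step(h, x_onehot):
    """One vanilla RNN transition."""
    return np.tanh(Wxh @ x_onehot + Whh @ h)

def encode(tokens):
    """Perception: fold a token sequence into an abstract hidden state."""
    h = np.zeros(hidden)
    for t in tokens:
        h = step(h, np.eye(vocab)[t])
    return h

def generate(h, n):
    """Behavior: unroll the same cell to emit a sequence (greedy decoding)."""
    out = []
    for _ in range(n):
        t = int(np.argmax(Why @ h))
        out.append(t)
        h = step(h, np.eye(vocab)[t])
    return out

state = encode([0, 1, 2])
tokens = generate(state, 4)
```

The same recurrent cell serves both directions, which is why perception work transfers so readily to these limited kinds of behavior.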
One area in which I think human cognition is far ahead of neural network research is control flow. Maybe there's a better name for this. We seem to be able to encounter a new cognitive challenge and quickly design a mental program to solve it. We can attend to relevant sensory streams and various kinds of memory, and then design (sometimes novel) behavior to solve the problem.
Work on attention, and on architectures with more sophisticated internal representations like stack-augmented RNNs, is definitely moving in this direction, but it seems like we have much further to go on this front than in visual perception, for example.
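The core attention idea, softly selecting which memory or input stream to read based on a learned query, reduces to a small computation. Here's a minimal scaled dot-product attention sketch (the function name and toy inputs are my own, for illustration):

```python
import numpy as np

def attend(query, keys, values):
    """Soft selection: weight each value by how well its key matches the query."""
    scores = keys @ query / np.sqrt(query.size)  # similarity of query to each key
    weights = np.exp(scores - scores.max())      # softmax (stabilized)
    weights /= weights.sum()
    return weights @ values                      # weighted average of values

# Toy memory with two slots; the query strongly matches the first key,
# so the output is pulled almost entirely toward the first value.
keys = np.array([[10.0, 0.0], [0.0, 10.0]])
values = np.array([[1.0, 0.0], [0.0, 1.0]])
out = attend(np.array([10.0, 0.0]), keys, values)
```

Because the selection is a differentiable weighted average rather than a hard branch, it can be trained end-to-end, which is what makes it a plausible first step toward learned control flow.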