Which papers? There are thousands of papers being published.
There are a couple of potential reasons. Powerful GPUs accelerate research and iteration. Some state-of-the-art problems have hit the limits of current theory and make up the deficit by building massive nets, but even there we already have multiple automatic pruning/optimization algorithms that shrink those nets so they run on smaller resources.
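For a concrete sense of what that pruning looks like, here's a minimal magnitude-pruning sketch using PyTorch's torch.nn.utils.prune. The tiny model and the 50% sparsity target are arbitrary illustrations of the technique, not taken from any particular paper:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model; any Conv2d/Linear layers are fair game for pruning.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 28 * 28, 10),
)

for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        # Zero out the 50% of weights with the smallest L1 magnitude.
        prune.l1_unstructured(module, name="weight", amount=0.5)
        sparsity = float(torch.sum(module.weight == 0)) / module.weight.nelement()
        print(f"{type(module).__name__}: {sparsity:.0%} of weights zeroed")
        prune.remove(module, "weight")  # bake the mask into the weight tensor
```

Worth noting: unstructured pruning like this mostly buys you a smaller, more compressible model; real wall-clock speedups usually need structured pruning or sparse-aware kernels on top.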
Make no mistake, the field is advancing exponentially. The once state-of-the-art GoogLeNet/Inception architectures that arguably kicked off the whole image-recognition craze are laughably obsolete now and easily outperformed by simpler nets.
MNIST was the gold standard for recognition problems just a couple of years ago, and now it's considered a solved toy problem.
If I google for it specifically, the paper in Nature states:
"This study had some limitations. Mammograms were downsized to fit the available GPU (8 GB). As more GPU memory becomes available, future studies will be able to train models using larger image sizes, or retain the original image resolution without the need for downsizing. Retaining the full resolution of modern digital mammography images will provide finer details of the ROIs and likely improve performance." https://www.nature.com/articles/s41598-019-48995-4