As a data scientist, I strongly disagree with this. For writing "typical" application code, sure, Jupyter is probably overkill. But for CV, NLP, data cleaning, etc., you are constantly iterating on algorithms and visually inspecting the output. With a multi-stage pipeline, rerunning one stage is just rerunning a cell.
Caching to disk is cumbersome for data that's usually junk.
Cells and integrated visualisation are such a massive leap forward that going back to plain text files feels like banging rocks together.
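To make the disk-caching point concrete, here's a minimal sketch of the boilerplate every stage of a script-based pipeline tends to accumulate just to avoid recomputation (the `cached` helper and `cache/` directory are hypothetical, not from any particular library). In a notebook, the intermediate result simply stays alive in the kernel and none of this is needed:

```python
import pickle
from pathlib import Path

CACHE_DIR = Path("cache")  # hypothetical scratch directory for intermediates
CACHE_DIR.mkdir(exist_ok=True)

def cached(name, compute):
    """Load a pickled intermediate if present, else compute and persist it."""
    path = CACHE_DIR / f"{name}.pkl"
    if path.exists():
        with path.open("rb") as f:
            return pickle.load(f)
    result = compute()
    with path.open("wb") as f:
        pickle.dump(result, f)
    return result

# Example stage: a cleaning step whose (usually throwaway) output we
# persist to disk only so the next script run can skip recomputing it.
cleaned = cached("cleaned", lambda: [x * 2 for x in range(5)])
```

And this still leaves you managing stale cache files by hand whenever an upstream stage changes, which is exactly the cumbersome part for data that's usually junk anyway.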
Pretty much this. As a quant / data scientist, I often have notebooks just hanging there for weeks, with a few hundred GB of ready-to-use data preloaded and preprocessed in the kernel, which makes experimenting with it incredibly ergonomic.
Being able to quickly check the output while iterating on an algorithm, or to visualise intermediate results, is irreplaceable.
If you need more than that, switch to plain-text source files.
Actually, just forget Jupyter notebooks and use good old plain-text source code like the rest of the programmers.