"Who says modern scientists are dull? Compared to when?"
Compare non-fiction books of the 70s to non-fiction books today. With the exception of a couple of people like Ray Kurzweil, our books today aren't a tenth as creative or brilliant. I'd argue that in most fields today scientists are just refining models created between the 60s and early 80s. Linguistics, educational theory, psychology, and organizational behavior are all good examples of this.
You sure about that? It sounds a lot like "Boy, this music these kids are listening to is just a lot of noise-- not nearly as good as the music when I was young...."
"Most scientists don't publish books. They do research and publish papers."
But this is because they no longer get "credit" for writing books, which is exactly the point the article was making. Scientists used to write books all the time.
It has little to do with credit. Writing a book/survey paper can get you great recognition if you finish it and do a good job.
The reason scientists write papers is because science is fun, but writing is boring (compare: coding vs documentation). Research papers represent the bare minimum level of documentation you can get away with but still keep your job.
I would venture that doing original research should be given more weight than writing books.
Books contain calcified knowledge that can be disseminated by someone familiar with the field; a non-scientist, as it were. That is the point of popular-science publishing.
Original research that leads to new insights into a field is more difficult to produce, requires a higher level of skill and ability, and therefore should count for more toward a scientific reputation.
Books used to contain original research and theories. (Think Stanley Milgram's book Obedience to Authority.) It's only in the last 30 years that science books have tended to be collections of other people's papers rewritten for a lay audience. Because modern academia rewards people for churning out large amounts of derivative work but not for spending time coming up with groundbreaking theories, pretty much the only time scientists write books is when they get 600k advances (like Gilbert).
This is generally the case across the physical and life sciences too. Even "revolutionary" fields like epigenetics and genomics are effectively refinements and extensions of the pioneering work done in molecular biology during the 1960s and 70s. The same goes for particle physics, where the framework for the standard model is decades old. It seems that scientific innovation occurs in punctuated equilibria.
There's no question things go in S-curves, but I really get the feeling that the kind of people who did great work in the 60s and 70s generally aren't allowed to go into academia today. I personally would probably get a PhD in another life, but there's no point because academia is generally designed to filter people like me out, so I would have to spend 2-4 years convincing the right people to go to bat for me and it's just not worth it.
I actually abandoned a PhD (after 2+ years) in favor of industry. The modern academic world -- at least in CS -- seems largely to consist of producing as many papers as you can, putting forth unimportant results, so as to get more 'points' and another funded project and more students. (British universities really are assessed on published papers, with different weightings accorded to international conferences and journals. It's a game.)
To do anything interesting means reducing or delaying your paper output (which your supervisor will not appreciate), and to get the results published requires you to write so as to persuade people with no vested interest in publishing novel output to agree with you... but without being able to have a dialogue.
I recall one amusing paper review by three reviewers which was essentially
1. "the treatment of X is great, but I'd like to see more Y",
2. "too much Y, not enough X",
3. "both X and Y are fine, but I'd have been really interested in seeing Z instead", where Z was entirely unrelated.
How on earth could an author anticipate the preferences of those reviewers and get three recommendations? Well, picking a non-controversial subject would be a good start.
Multiply this anecdote up to the size of the academic and industrial systems, and it's no wonder that so many people decide that academia is not for them. These people tend to be lumped in with the people "not cut out for it", but I think a significant percentage simply realize that academia has a different value system, and don't want to participate. Some of those people go on to do interesting or revolutionary things in corporate research, or even in mainstream industry.
Would the world be better if they were in universities instead? Probably not... but we'd have our interesting scientists back.
Academia does tend to have a different value system, but it's a mistake to suggest that the one used by industry is better. It's just different.
Yes, in academics, professors X, Y, and Z will usually try to get you to introduce their pet subjects in review. You also later find out that getting a paper approved has a lot to do with who you know (as opposed to what you know), and that getting the choice talks, collaborations and grants has a lot more to do with schmoozing and networking than anyone likes to admit. It's a social problem.
But you know what? That's life. Humans are fundamentally just a bunch of howler monkeys, flinging poo at the cage walls and trying to steal the shiny rock (or the girl) from the alpha. Put more than a few people together in any activity, and you'll get competition, preening, back-biting and politics.
In academics, powerful professors demand that you brown-nose and compete for favor; in industry, powerful rich people do the same. In academics, you find that most days are spent managing mundane chaos, with real research advancements happening only a few times a year; in industry, most of your time is spent appeasing people, reacting to short-term crises, and coordinating with your co-workers. When you really think about how chaotic humans are in groups, it's amazing that forward momentum happens at all.
Point is, I don't think it's fair to imply that "interesting" scientists migrate to industry because of some fundamental difference in values. Personally, I made the migration because the monetary up-side is much better in industry. Otherwise, I think the bullshit-to-accomplishment ratio is roughly the same.
What I mean is that the best grad schools in the US are purposely designed to have an extreme bias toward selecting students with the best grades and the highest GRE scores. You can still get in without good grades and without taking the GRE if you've done very good research and have earned the trust of the right people, but it's very difficult. And even if you do get accepted, you'd probably face an uphill battle throughout your entire career.
This is true at all the top CS programs as well. They look at:
1. Past research (including published papers)
2. Recommendation letters
GRE scores are a pre-filter (not set too high, I think, and certainly the CS subject GRE is ignored, even when it is required), grades matter only slightly, and the admission essays are basically a chance to disqualify yourself by saying something stupid.
I've heard this too, but do the admission statistics actually confirm it? I suspect it's more along the lines of, "we only care about your recommendations, as long as you have above a 3.7 GPA." I could be wrong...