Often when using C++ I find myself solving C++ problems, not real problems. In a domain with very difficult math and performance challenges, though, the C++ problems will be dwarfed by the difficulty of the real problem you are solving.
For domains outside of physics and applied math, I think the dominating factor is often the inherent trickiness of writing correct procedural code vs. the ease of writing strongly typed functional code.
Provided the app is a good experience for the customer, of course, it will also lead to more word-of-mouth marketing, which can bring more publicity and more sales.
Let's say there was a product you were willing to pay $10 for, but someone was selling it for only $6. By purchasing the product you get $4 worth of consumer surplus.
In general you probably get at least a bit of consumer surplus from everything you buy. The more surplus, the easier the decision to purchase (Of course I would like to buy that brand new MacBook Pro for 10 bucks, thank you good sir!).
At half the price and twice the customers, all the customers who would have bought at the higher price "earn" whatever their surplus would have been, plus the half of the price they no longer pay. On top of that, all of the new customers "earn" a surplus between zero and the lower price.
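The arithmetic above can be sketched with illustrative numbers (the valuations and prices here are my own assumptions, not from the comment):

```python
# Sketch of the surplus arithmetic: a product priced at $10 is cut to $5,
# doubling the number of buyers. All figures below are illustrative.

full_price = 10.0
half_price = full_price / 2

# An original customer who valued the product at $12:
original_valuation = 12.0
surplus_before = original_valuation - full_price   # surplus at full price
surplus_after = original_valuation - half_price    # old surplus + the $5 cut

# A new customer, whose valuation must fall between $5 and $10:
new_valuation = 8.0
new_surplus = new_valuation - half_price           # somewhere between $0 and $5

print(surplus_before, surplus_after, new_surplus)
```

Running it shows the existing customer's surplus grows by exactly the price cut, while the new customer's surplus lands strictly between zero and the new price.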
My microeconomics professor calls it the "good deal feel": the difference between what customers expected to pay and what they actually paid (their savings).
The problem with anecdotal evidence is that it leaves you under the impression that you know something. If it were as harmless as it is useless, there would be no problem.
Sure you are right, it's anecdotal and subjective.
If you think nothing of it, please go ahead and try Sony.
Getting so many problems in such a short time, despite a costly service repair, didn't really make my day. Now the machine is a brick and out of warranty.
But we do have reasonable priors on parapsychology from its wasteland of unreplicated, flawed studies, with no convincing results despite decades of effort.
Clinical researchers too. Because lives are at stake.
The whole field of systematic reviews and meta-analyses has developed around the need to aggregate results from multiple studies of the same disease or treatment, because you can't just trust one isolated result -- it's probably wrong.
Statisticians working in EBM have developed techniques for detecting the 'file-drawer problem' of unpublished negative studies, and correcting for multiple tests (data-dredging). Other fields have a lot to learn...
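One of the multiple-testing corrections alluded to above can be sketched in a few lines. This is a generic illustration of the Bonferroni adjustment, not a technique specific to any paper mentioned here: if you run m tests at level alpha, the chance of at least one false positive grows with m, so each individual test is held to alpha / m.

```python
# Minimal Bonferroni correction: hold each of m tests to alpha / m
# so the family-wise error rate stays at roughly alpha.

def bonferroni(p_values, alpha=0.05):
    """Return, for each p-value, whether it survives the correction."""
    m = len(p_values)
    threshold = alpha / m
    return [p <= threshold for p in p_values]

# Five hypothetical p-values dredged from one dataset. Uncorrected,
# three of them would look "significant" at 0.05; corrected, only one does.
p_values = [0.001, 0.04, 0.03, 0.2, 0.5]
print(bonferroni(p_values))  # → [True, False, False, False, False]
```

Bonferroni is deliberately conservative; practitioners often prefer the Holm step-down procedure or false-discovery-rate methods, but the principle is the same.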
Clinical researchers working for non-profits / universities do, occasionally. I suspect it has become popular recently not because lives are at stake, but because it lets you publish something meaningful without having to run complex, error-prone and lengthy experiments.
Regardless of the true reason, these are never carried out before a new drug or treatment is approved (because there are usually only one or two studies supporting said treatment, both positive).
And if you have pointers to techniques developed for/by EBM practitioners, I would be grateful. Being a Bayesian guy myself and having spent some time reading Lancet, NEJM and BMJ papers, I'm so far unimpressed, to say the least.