There is such a thing as "negative people": folks who cost more in corrected mistakes than they contribute toward solving the problem. This often happens with anemic partnerships.
The cost of losing someone largely depends on how good and useful they were. It is so case-by-case that such a list is useless.
It seems like quantifying the loss of a developer in terms of dollar value is pretty futile. That value varies too much from company to company.
One thing we can try to do is measure the skill difference between two programmers. This is a hard problem, but suppose some percentage of their checkins are new features, some percentage are bug fixes, and so on. We could even go through and manually tag each changelist (adopt a convention of prefixing a changelist description with BUG: or FEATURE:). Then we have a rough statistical model of their output.
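Something like this, purely as a sketch (the Changelist structure and function names are made up, and it assumes descriptions actually follow the BUG:/FEATURE: prefix convention):

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Changelist:
        author: str
        description: str  # e.g. "BUG: fix crash on empty input" or "FEATURE: add CSV export"

    def classify(cl: Changelist) -> str:
        """Bucket a changelist by its description prefix; anything untagged is 'other'."""
        desc = cl.description.lstrip().upper()
        if desc.startswith("BUG:"):
            return "bug_fix"
        if desc.startswith("FEATURE:"):
            return "feature"
        return "other"

    def tally(changelists):
        """Count changelists per (author, category) -- the raw material for the model."""
        counts = Counter()
        for cl in changelists:
            counts[(cl.author, classify(cl))] += 1
        return counts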
So now we have two categories, bug fix and new feature. (I'm grouping refactoring into bug fixes for now.) Their performance in each category can be measured as (number_of_changelists / time). This assumes a perfect world where no bug fix or feature ever introduces a new bug, which is obviously not how things go, so we can tweak some parameters, perhaps introducing a percentage chance that any given changelist introduces a bug, which counts against their productivity score. (You could do some more interesting statistical analysis here at the source-code level to determine whose bug was being fixed, and actually measure how often each person produces bugs.)
But wait a second, what's the value for 'time'? It could be their whole stay at the company. But what if we change it to be, say, a week? Now we can get a velocity value by doing productivity(t) - productivity(t-1).
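As a rough sketch of those two numbers (bug_intro_rate is just the assumed per-changelist chance of introducing a bug mentioned above, not anything measured):

    def productivity(changelist_count: int, weeks: float, bug_intro_rate: float = 0.0) -> float:
        """Changelists per week, discounted by the assumed chance that each one introduces a bug."""
        if weeks <= 0:
            raise ValueError("time window must be positive")
        return changelist_count * (1.0 - bug_intro_rate) / weeks

    def velocity(productivity_now: float, productivity_prev: float) -> float:
        """productivity(t) - productivity(t-1): the week-over-week change."""
        return productivity_now - productivity_prev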
Those two metrics combined are pretty useful for measuring a programmer: their productivity over a time window, and their velocity based on previous productivity values.
Now we apply it to the whole company. Add up everyone's productivity for the week and you get the current company productivity. Subtract last week's total from it and you get the company's productivity velocity. To measure how a given person leaving would affect the company, take their productivity out of the equation and compare the totals with and without them.
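In sketch form (per_person is just a hypothetical map of name to weekly productivity score):

    def company_productivity(per_person: dict) -> float:
        """Sum everyone's weekly productivity score."""
        return sum(per_person.values())

    def impact_of_leaving(per_person: dict, who: str) -> float:
        """How much the company total drops if 'who' is taken out of the equation."""
        without = {name: score for name, score in per_person.items() if name != who}
        return company_productivity(per_person) - company_productivity(without)

    # Example: this week's scores, and what losing "bob" would cost.
    scores = {"alice": 4.2, "bob": 1.1, "carol": 3.7}
    print(impact_of_leaving(scores, "bob"))  # 1.1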
If your productivity score includes how often you produce bugs, you can actually measure who is hurting productivity more than they're helping it. However, each bug is of a different magnitude, so that number is bogus until you can measure the complexity of each bug.
The one thing this system does not take into account is knowledge that is lost because only that person had it. That's a very valuable asset and nearly impossible to measure. The best way to account for it is to increase a programmer's expected productivity velocity over time: whether they're setting out on a bug fix or a new feature, you can assume they'll accomplish it in less time ten years into the job than on day one.
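A crude way to model that accumulated knowledge in the sketch (the one-year ramp here is a completely made-up parameter):

    def tenure_factor(weeks_on_job: float, ramp_weeks: float = 52.0) -> float:
        """Ramp-up multiplier on expected productivity, capped at 1.0 after roughly a year."""
        return min(1.0, max(0.0, weeks_on_job) / ramp_weeks)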
Also, someone (perhaps the project leader) needs to assign a weight to each changelist. The weight measures how valuable the new feature was, or how much value was added by fixing the bug. This has to be done by someone other than the developer, because they'd have an incentive to lie.
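In the sketch, that just turns the raw count into a weighted sum (the weights dict is hypothetical, keyed by changelist id, with values a lead assigned):

    def weighted_productivity(changelist_weights: dict, weeks: float) -> float:
        """Weighted changelists per week; the weights come from a lead, not the developer."""
        if weeks <= 0:
            raise ValueError("time window must be positive")
        return sum(changelist_weights.values()) / weeks

    # Example: three changelists scored 1-5 by a lead, over a two-week window.
    print(weighted_productivity({"CL1001": 3.0, "CL1002": 1.0, "CL1003": 5.0}, weeks=2.0))  # 4.5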
This seems to work, because if you have a superstar leave after one day with the company, it doesn't hurt the company at all. Whereas if he leaves after ten years with the company, not only will the company's absolute productivity go down, but the velocity will take a hit for the next month or so too.
All in all, it seems pretty hard to accurately measure a programmer's value, so I say screw it. Humans are great at recognizing patterns: the whole team can tell when Bob isn't pulling his weight or is under-motivated, and can either give him extra vacation or reassign him to a sister company. Plus, if the programmers ever found out they were being measured, the company's productivity would drop like Wile E. Coyote. This sort of analysis should really only be applied after a programmer has left, or once the company has already made the decision to fire them (possibly to re-estimate schedules), not as an ongoing thing.
This is something I came up with off the top of my head just now, so it's probably totally wrong. More research into this field would be really interesting, because if we can measure one programmer's productivity, we could measure global productivity scores for all programmers. Somehow, I think a certain company that starts with a Y and ends with an R would be really interested in that.
New startup idea? I'm too busy with mine, so go for it. Businesses might pay oodles if you manage to measure programmer productivity accurately (or at least give the impression that you do).