One thing I've found as a side effect of people using code coverage tools is that instead of testing the behaviour of a method, they end up testing the implementation. I think this is because they initially test the behaviour, but then see that one path is missed, so they add a test to ensure that path is run, instead of just testing the behaviour that calls that path and checking the code coverage tool. This causes trouble if you ever want to change the implementation, since you end up throwing away half the tests, which means a lot of the effort you spent getting to 100% code coverage is now gone.
I've faced this problem in my own tests. I want to achieve total coverage so that I know that I've got all the cases covered, but then I end up testing the implementation rather than the contract. I'm not sure what to do about it.
First, write precondition checks for your functions. Then, write tests so that you hit all paths through those precondition checks.
If that doesn't hit 100% in the rest of the function, the function has code it doesn't need, or your precondition checks aren't complete.
Think about other edge conditions. For example, does your code special-case x = n/2? Add a check for that on top. And yes, that check is implementation-specific, but there is nothing you can do about that.
Of course, you don't need the edge condition and implementation-specific checks in release builds.
With these in hand, you can also split tests into implementation-specific ones and contract-based ones.
You can't have it both ways, in my opinion. High test coverage == testing every possible path == looking at implementation details. If you are testing an algorithm (and games are full of these), you want it to be 100% accurate, so you don't have much choice.
Tests should be based on the specification. If I want to change some internal implementation detail I should only have to verify that the current tests pass.
If, say, a game's renderer contains a sort somewhere, I can replace the quicksort with a mergesort as long as the renderer interface still tests OK. The new sort algorithm may have new special-case paths (an even number of items vs. an odd number, for example), but that's not a concern of the renderer's public interface. I may, however, have introduced a bug with an odd number of items here; the old code was 100% covered and now it isn't. So there is a potential problem, and the 99% has actually helped spot it.
If the sorting is a private implementation detail of the renderer, then there is no place to test it other than adding a new test to the renderer component, solely because the sorting algo requires it for a code path. This is BAD.
The proper action here is NOT to add tests to the renderer component to test the sorting code path, but instead to make the sorting visible and testable in isolation via its own public interface.
So one of the positive things about requiring coverage is that if you do it right, it will lead to smaller and more decoupled modules of code.
The bad thing is that if you do it wrong you will have your God classes and a bunch of tests coupled tightly to them.
If you write a test that forces a particular code path in the original implementation (say, parameter one is an empty string and parameter two is a null pointer, and you verify the return value of the call), it should still be a perfectly valid test if the implementation changes; it just might not be a very meaningful one.