Huh? This is C (well, asm). If you pass in garbage, you get undefined behaviour (a crash from a NULL pointer dereference); this is how all str* functions work. Even
assert(str != NULL)
is being overly generous, I think - if only because a debugger would lead you to the source of the crash immediately.
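For concreteness, a minimal sketch of where that check would sit. The routine name and body are stand-ins (the actual task isn't quoted here); a simple character reversal serves as the in-place edit:

    #include <assert.h>
    #include <string.h>

    /* Stand-in for the routine under discussion: any in-place edit of a
     * zero-terminated string has the same failure modes. */
    void edit_in_place(char *str)
    {
        assert(str != NULL);  /* the "overly generous" check */

        size_t len = strlen(str);
        for (size_t i = 0, j = len; i + 1 < j; ++i, --j) {
            char tmp = str[i];
            str[i] = str[j - 1];
            str[j - 1] = tmp;
        }
    }

    int main(void)
    {
        char buf[] = "hello";
        edit_in_place(buf);   /* buf becomes "olleh" */
        return 0;
    }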
You have not received any specification about the context in which the code operates, so the correct course of action is to assume the worst and produce bulletproof code.
If you were operating under speed constraints that warranted a more cavalier attitude towards input checking, that would be another thing; but since none were specified and there is no context, 'safety first' is the way to approach the issue.
The specification states that "your routine should modify the given zero-terminated string in place". This implies that a zero-terminated string is passed in, and that it is mutable. If this is not the case, it is impossible to perform as specified; any behavior is wrong. An assertion error would be the ideal way to handle things; an immediate segfault is almost as good (and probably the practical thing to do if the string is immutable or not null-terminated). And of course, there are some error situations you simply can't detect - e.g., not all cases of non-terminated strings are detectable, because you may run into and mutate some other heap garbage before hitting an unmapped or immutable page.
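To make the assertion-vs-segfault trade-off concrete (the function name below is made up, not the routine from the thread): an enabled assert aborts with the failed expression, file, and line, while the bare write just raises SIGSEGV and leaves diagnosis to a debugger or core dump. Keep in mind that assert() compiles away under NDEBUG, so a release build falls back to the segfault anyway.

    #include <assert.h>

    /* Hypothetical sketch, not the routine from the thread. */
    void touch(char *str)
    {
        assert(str != NULL);  /* asserts enabled: abort with expression, file, line */
        str[0] = '\0';        /* under NDEBUG: NULL typically dies here with SIGSEGV */
    }

    int main(void)
    {
        touch((char *)0);     /* deliberately trigger the failure */
        return 0;
    }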
I think it is actually really cool that RoG did not specify that part of the problem because in real life that is exactly how stuff gets specified, and plenty of times that's where the trouble comes from. Asking to revise a spec so that it is explicit is usually a very good course of action.
Not all hardware will segfault when writing to NULL; the immutable case should segfault, and strings that are not null-terminated are indeed hard to impossible to check for (pointer wrap..., after a lot of swapping, or a stack overwrite).
Also note that libc string functions generally seg-fault when passed invalid addresses (including NULL). Implicitly expecting more sophisticated error handling in this scenario is silly.
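For what it's worth, a tiny demo of that behaviour. It's undefined, so "generally seg-fault" means typical hosted platforms, not a guarantee:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char *p = NULL;
        /* Undefined behaviour: on most hosted platforms strlen(NULL) dies
         * with SIGSEGV rather than reporting an error in any way. */
        printf("%zu\n", strlen(p));
        return 0;
    }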
So, you either go back to the source and ask for more info, or you take a stab at what is the right thing to do.
In this case that wasn't possible, so I assumed the worst. I think that simply noting the potential problem would already score you points with the interviewer, because they'd realize you spotted that the specification was incomplete: it didn't tell you what to do in case of invalid input.
I think this may be a cultural thing. C programmers are very much used to the standard not defining the behaviour of some things, e.g. what happens when you call strcmp(str, NULL). Segfaulting is expected here, and easy to track down.
(I would use some assert()s if a function could cause silent memory corruption; but this particular function either works (valid string), segfaults (NULL) or cannot detect a problem (stray pointer).)
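To illustrate that distinction, a hypothetical helper (name and signature invented for the example) of the kind that does deserve asserts: an undersized destination wouldn't fault, it would silently scribble past the buffer.

    #include <assert.h>
    #include <string.h>

    /* Hypothetical example of a function that can corrupt memory silently:
     * an undersized dst_size wouldn't segfault, it would quietly overrun
     * the destination, so asserting the precondition is worth the noise. */
    void copy_field(char *dst, size_t dst_size, const char *src)
    {
        assert(dst != NULL && src != NULL);
        assert(strlen(src) < dst_size);   /* leave room for the terminator */
        memcpy(dst, src, strlen(src) + 1);
    }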
Oh definitely. I agree. I assumed, though, that the "problem" itself was the main aim here, not testing whether someone can write scaffolding error-checking code.
How far do you go? Check that the memory isn't mapped as read only? ;) Check that there's actually some memory mapped into that address space?
Obviously you need both skills, but they're pretty related, I'd say. If you can solve low-level problems, you can probably think through any potential issues (invalid inputs, edge cases, etc.) and churn out code to deal with them.
I guess my issue is I don't see what being able to solve this says about someone, apart from "isn't completely terrible at programming".