
Exactly -- write code that matches clear, intuitive, logical, coherent organization.

Because easy counterexamples to both of these rules are:

1) I'd much rather have a function check a condition in a single place, than have 20 places in the code which check the same condition before calling it -- the whole point of functions is to encapsulate repeated code to reduce bugs

2) I'd often much rather leave the loop to the calling code than put it inside a function, because in different parts of the code I'll want to loop over the items only to a certain point, or show a progress bar, or start from the middle, or whatever -- a quick sketch of this is below
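For instance, a minimal Rust sketch of (2), with the process function and the item list made up for illustration: the function handles one item, and each call site owns its own loop:

    fn process(item: &str) {
        // works on a single item; no loop or condition hidden inside
        println!("processing {item}");
    }

    fn main() {
        let items = ["a", "b", "c", "d"];

        // one call site loops over everything
        for item in &items {
            process(item);
        }

        // another starts from the middle and shows progress
        for (i, item) in items.iter().enumerate().skip(items.len() / 2) {
            println!("{} of {}", i + 1, items.len());
            process(item);
        }
    }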

Both of the "rules of thumb" in the article seem to be motivated by increasing performance by removing the overhead associated with calling a function. But one of the top "rules of thumb" in coding is to not prematurely optimize.

If you need to squeeze every bit of speed out of your code, then these might be good techniques to apply where needed (it especially depends on the language and interpreted vs. compiled). But these are not at all rules of thumb in general.



I think a key thing software engineers have to deal with, as opposed to physical engineers, is an ever-changing set of requirements.

Because of this we optimize for different trade-offs in our codebases. Some projects do need raw performance, and you see them dropping down to handwritten SIMD assembly, for example.

But for most of us the major concern is making changes, updates, and new features -- being able to come back and make changes again later for those ever-changing requirements.

A bridge engineer is never going to build abstractions and redundancies into a bridge "just in case gravity changes in the future". They "drop down to assembly" for this and make assumptions that _would_ cause major problems later if things do change (they won't).

I guess my point is: optimizing code can mean multiple things. Some people want to carve out of marble - it lasts longer, but is harder to work with. Some people want to carve out of clay - it's easier to change, but it's not as durable.


I've been impressed watching the crews meticulously replace each cable assembly of the George Washington Bridge over the past year or so. All the work that doesn't require the cables to be disconnected is done in parallel, so you can get a sense of the whole process just driving across once (they've finished the north side, so you'll want to drive into Manhattan for the current view).

It's more or less the same as code migrations we're doing on a regular basis, done far more diligently.


Whether marble or clay, both ideally take into consideration that whoever writes it today may not be the one who maintains it tomorrow.

When stuck between the long-lasting and the easier to change, maintainability should be the deciding factor.


Part of my point though was that the bridge builder of today does not need to take into consideration that the person maintaining it 20 years from now will have to deal with gravity changing. So they can make certain assumptions that will be impossible for future maintainers to ever undo.

Software doesn't have these set-in-stone never-changing requirements. I think we are making similar points.


> 1) I'd much rather have a function check a condition in a single place, than have 20 places in the code which check the same condition before calling it -- the whole point of functions is to encapsulate repeated code to reduce bugs

That's fine, but it's often a good idea to separate "do some work on a thing" and "maybe do work if we have a thing". Using the example in the article, it is sometimes useful to have multiple functions for those cases:

    fn frobnicate(walrus: Walrus) {
        // ... do the actual work on a definitely-present Walrus
    }

    fn maybe_frobnicate(walrus: Option<Walrus>) {
        // let _ discards the Option<()> that map returns
        let _ = walrus.map(frobnicate);
    }
But also… in languages like Rust, most of the time that second one isn't needed because of things like Option::map.
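For example, the call site can just do the mapping itself (assuming some maybe_walrus: Option<Walrus> in scope):

    // no wrapper function needed; map only calls frobnicate on Some
    let _ = maybe_walrus.map(frobnicate);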


I think the argument here could be stated roughly as: push "type" ifs up, and "state" ifs down. If you're in Rust you can do this more by representing state in the type (additionally helping to make incorrect states unrepresentable) and then storing your objects by type.
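A minimal sketch of that in Rust (the Connection type is invented for illustration): the state lives in an enum, so one exhaustive match replaces scattered boolean checks, and invalid state combinations can't even be constructed:

    enum Connection {
        Disconnected,
        Connected { session_id: u64 },
    }

    fn send(conn: &Connection, msg: &str) {
        // the "state if" is pushed down into a single exhaustive match
        match conn {
            Connection::Connected { session_id } => {
                println!("[{session_id}] sending: {msg}");
            }
            Connection::Disconnected => {
                println!("not connected, dropping: {msg}");
            }
        }
    }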

I have a feeling this guide is written with high performance in mind. While it's true that premature optimization is the devil, I think following this sort of advice can prevent you from suffering death by a thousand cuts.


A good rule of thumb is to validate early and return early. It prevents endless if/else nesting.
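Something like this, say (handle_request and its limits are made up, just to show the shape):

    fn handle_request(input: Option<&str>) -> Result<(), String> {
        // validate early, return early: each guard exits immediately
        let input = input.ok_or("missing input")?;
        if input.is_empty() {
            return Err("empty input".to_string());
        }
        if input.len() > 1024 {
            return Err("input too large".to_string());
        }

        // the happy path stays flat instead of three ifs deep
        println!("handling {input}");
        Ok(())
    }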


Conditions inside the function are also in line with Postel's law, if we drag it from networking over to API design. And in large parts of programming, the entire "enforce it with types" approach (say null check without saying null check) isn't a thing at all. It only gets worse with languages where API evolution and compatibility are handled by type-sniffing arguments. Those will just laugh at the idea of pushing an if up.

But it's an interesting discussion nonetheless. What I picked up, even if it wasn't directly mentioned (or I might have missed it?), is that a simple check on the caller side can be nice for the reader. It costs almost nothing to read at the call site because the branch is short, and chances are the check provides some context that helps you understand what the call is all about:

    if (x is Walrus) frobnicate(x);
is not just control flow; it doubles as a friendly reminder that frobnication is that thing you do with Walruses. So my takeaway is that the check stays in the function (I also don't agree with the article), but make it part of the naming consideration. Perhaps "frobnicateIfWalrus" wouldn't be so bad at all! I already do that occasionally, but perhaps it could happen more often?
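As a sketch (the Animal enum here is hypothetical, just to show the naming idea):

    struct Walrus;

    enum Animal {
        Walrus(Walrus),
        Seal,
    }

    fn frobnicate(_walrus: &Walrus) {
        // ... the actual frobnication
    }

    // the check stays inside the function, and the name advertises it
    fn frobnicate_if_walrus(x: &Animal) {
        if let Animal::Walrus(w) = x {
            frobnicate(w);
        }
    }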



