Programming Patterns I Like (johnstewart.dev)
136 points by fagnerbrack on June 28, 2019 | 122 comments


Nested ternaries are horrible. They add the extra burden of thinking about operator precedence, which is genuinely hard, especially if you switch between languages regularly. They are harder to extend for the same reasons, and often it is just stupid to handle condition prerequisites outside the nesting of an if statement.

Conditional logic is the third hardest thing just after variable naming and cache invalidation. There is no excuse to make it harder to understand.


Ironically, it could have been written far more concisely and clearly by using IIFE, if-then statements and reordering the logic:

  const result = (() => {
    if (!conditionA) return 'Not A';
    if (conditionB) return 'A & B';
    return 'A';
  })();
Yes, I know, I "need" to use curlies. But there's really no reason why someone shouldn't be able to quickly make sense of this. The ternary, on the other hand, is unnecessarily cryptic.


Whether that’s clearer or not should be a matter of debate. I find the introduction of a function for this a potential source of confusion, when it just serves to turn statements into an expression. But it’s not an unreasonable solution.

However, I’m not sure how this is supposed to be “written far more concisely”? It’s more verbose, involves more different constructs, more levels of indirection - pretty much for any definition of “conciseness“ I can think of, it is worse.


Can't help but call out Rust here for its (imo excellent) syntax.

if condition {do_x()} else {do_y()}

Basically there's no need for line breaks between if/else, but also they decided not to add a ternary conditional operator.


  if (condition) { do_x() } else { do_y() }
That's the JavaScript version of what you wrote, and it's nearly identical. JavaScript doesn't require line breaks.


The significant difference is that in Rust, like in Lisp, `if` is an expression, so if-else essentially is the ternary operator.

    const x = cond1 ? a 
            : cond2 ? b
            : c
becomes in Rust

    let x = if cond1 { a }
            else if cond2 { b }
            else { c }


I always thought one of the main arguments for the guard pattern ("early exits" in the article) was _specifically_ to reduce nesting, so I find it interesting that both guard patterns and nested ternaries were mentioned. For complicated nesting that's nontrivial to simplify in this fashion, you should then think about extracting logic into helper functions.

(The example is also a bit misleading, since the logic flow between the nested ifs and the chained ternaries is subtly different.)

In the provided example, const result = !conditionA ? "Not A" : conditionB ? "A & B" : "A";

is really equivalent to:

  let result = null;

  if (!conditionA) {
    result = "Not A";
  } else if (conditionB) {
    result = "A & B";
  } else {
    result = "A";
  }
which I don't think is that bad (look ma! no nesting!). You could also use a function, especially if the logic is too complex to flatten reasonably:

  function checkConditions() {
    if (!conditionA) {
      return "Not A";
    }
    // A is true from here onwards.
    if (!conditionB) {
      return "A";
    }
    // B is also true from here onwards.
    return "A & B";
  }
  
  const result = checkConditions();


Your code example is technically nesting btw.

That language doesn't have an "elseif", so you're actually writing the shorthand for

    if(!a) {
      ...
    } else {
      if(!b){
        ... 
      } else {
        ...
      }
    }
Your point stands nonetheless. I just had to smile seeing that example.


TypeScript documentation for conditional types advertises the use of nested conditionals:

https://www.typescriptlang.org/docs/handbook/advanced-types....

I must say that I find those examples clearly express what is going on.

Also, regarding another poster's comment: while it may be true that for regular code, needing this is a sign that the code needs refactoring, for business logic this is not true. You may very well end up with "business logic code" where things like huge functions and long nested conditionals actually are the optimal way to express a messy reality, and where trying to apply refactoring methods like "put this into extra functions" actually increases complexity.


Well in the example in the article they are more chained than "really" nested... So I think with adequate formatting, they can reasonably replace switch/case or if/elseif expressions (in contrast to statements) for languages that don't have them...


I know it is a matter of taste, but nested ternaries are not as straightforward to understand as the author tries to convey. The example shows two nested levels, and I need to simulate the program flow in my head for a while to really understand it. Definitely not desirable for my everyday programming sessions. However, simple and short ternaries are welcome.


That’s true of anything you’re not used to. A traditional for loop is very confusing the first dozen times.

Simple chained ternaries like in the example really aren’t hard to reason about. They are a sequence of conditions, each followed by a `?`. The first condition that evaluates to true will return the value after the `?`. If none evaluate to true, the value after the final colon is returned.

You can chain as many ternaries as you want and follow the same simple rules to understand what they return.

EDIT: The example in the article is rather unfortunate though, as the if statement is nested in a different way than the ternary, for unclear reasons.


When formatted like this:

       result = (cond_A ? 1 :
                 cond_B ? 2 :
                 cond_C ? 3 : 
                 5)
they look quite a bit nicer than the if/else equivalent IMO.


Is everyone preferring to state these on multiple lines? With the advent of the widescreen monitor, I find we have all the more reason today to use all that extra space.

Using the example from the article, I'd refactor:

  const result = (!conditionA) ? ("Not A") : (conditionB ? "A & B" : "A");
or, since code should be simple to read:

  const result = (conditionA) ? (conditionB ? "A & B" : "A") : ("Not A");
All the parentheses are redundant for the compiler or computer, but remind me as a human that "const result" is not a direct statement, but the result of a conditional expression.

EDIT: The 2nd set of parentheses in the 2nd expression might not be redundant.


This is actually nice because in some languages a case statement is a statement and not an expression -- so you can't assign the value easily. +1, would use this in production code :-)


And when the conditions are not direct, don't forget to add a Lisp level of parentheses. I bet one time you will forget and everything will explode.


You need to do the same in-head simulation for the standard if, it just feels effortless due to familiarity. Note the author messed up by not using the same logic in the ternary.

    conditionA
      ? conditionB
        ? 'A & B'
        : 'A'
      : 'Not A';


> need to simulate the program flow for a while in my head to really understand it.

This is a matter of practice. It doesn’t take that many times thinking about it to understand how this works in general (it’s always the same).


I agree completely, the provided example is very confusing


> This pattern is nothing really special and I should have realized it sooner but I found myself filtering a collection of items to get all items that matched a certain condition, then doing that again for a different condition. That meant looping over an array twice but I could have just done it once.

Write a generic function for this instead? Lodash has one like this called "partition" for example.

I try and avoid explicit for loops as much as possible as they tend to make code harder to follow and state manipulation is a big source of bugs. Most operations on collections can be decomposed into some easy to follow combination of map, reduce and filter instead (this is a filter variation).
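
For example, a generic single-pass partition helper could look something like this (a sketch in plain JavaScript; the name and predicate are illustrative, not from the article):

  // Split a collection into [matching, nonMatching] in a single pass.
  const partition = (items, predicate) =>
    items.reduce(
      ([pass, fail], item) =>
        predicate(item) ? [[...pass, item], fail] : [pass, [...fail, item]],
      [[], []]
    );

  // const [big, small] = partition([2, 15, 8, 23, 1, 32], x => x > 10);
  // big -> [15, 23, 32], small -> [2, 8, 1]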

Nice article!


The "call it builder instead of reduce" comment here was interesting.

My knee jerk response was to disagree, being that reduce is well known to mean fold / accumulate. However I've seen a lot of js usages where it does things like modify structures in the containing scope and isn't an FPish fold at all, so a name like that could probably make a lot of sense.

Nonetheless for this particular usage, I'd agree that a more specific partition function like lodash's would be an improvement.


The lodash one might be inspired by the Python itertools "partition" recipe, https://docs.python.org/3/library/itertools.html#itertools-r...

TFA’s inlined reduce version is much less readable than calling a function which wraps this behavior up.


It's common in functional programming libraries too, usually with the same name. See e.g. OCaml, Haskell and Standard ML.


I'm very surprised that early returns are still commonly used and advocated in 2019. In my opinion, they are obviously terrible. I'll try to convince you.

First the practical: in my experience, a big function with many early returns has a bug or convoluted logic, 100% of the time. I've literally made a living and a career by untangling big functions with early returns. I've really saved many situations and much code just by doing that. The reason this happens is that early returns are gotos, and gotos are harder to follow.

Also, you can't easily move (think copy/paste) code that has early returns, which means the code is de facto harder to refactor. So in practice, code with early returns stays in place and grows.

Early returns are artefacts of the assembly jump instruction. They shouldn't even be in a high-level programming language, because it makes no sense to have jumps in such a language. They make the programmer live in the assembly-language world (a succession of instructions that update a state) instead of the wonderful world of high-level programming (expressions).

If you're interested in what I'm saying, you should try removing all returns, continues and breaks from your code; you'll see what wonderful things happen. It's literally magic. Things become so much more expressive without them. Which is the point: high-level programming is about gaining more expressiveness over assembly code.

Early returns are one of the big pillars of bad, inexpressive, entangled, convoluted programming. And what is people's reason for using early returns? It makes for less indentation...

You may wonder why early returns don't sound like that big a deal after all. And you are right, there is no big deal. No big deal in terms of what the market rewards. But if you care about the art of programming, this is a huge deal imo.

edit: thank you for the fair discussion. I would like to insist that, obviously, if your code is more expressive, you will make better decisions and in the end you will end up with a totally different program. Early return or not is not about cosmetics. It's about writing a better program.


As with everything, it should be applied wisely. In most cases, I think it is perfectly fine to have early exits on parameter checks at the start of the function/method. Otherwise you end up with deeply nested if-then-else or complex conditional expressions. And yes, having multiple exits in complex functions is a bad code smell, often indicating that the function needs to be refactored and split into smaller functions. I also prefer to use early exits (also with break) if I can avoid introducing a Boolean variable. Returns, gotos, breaks, and continues are not in themselves bad. It is people who do not know how to use them wisely who are bad at software engineering.
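
For instance, a small sketch of what I mean by early exits on parameter checks (the function and fields are made up for illustration):

  function totalOrderPrice(order) {
    // Early exits only for trivial parameter validation.
    if (!order) return 0;
    if (!Array.isArray(order.items) || order.items.length === 0) return 0;

    // The main logic stays at one level of indentation.
    return order.items.reduce((total, item) => total + item.price, 0);
  }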


As with everything, it should be applied wisely.

Early returns are not a feature of the language. They're an artefact of assembly language. You have no reason to use them in the first place.

In most cases, I think it is perfectly fine to have early exits on parameter checks at the start of the function/method.

I believe you'd be surprised at what aberrations you'd find in your code, and all the simplifications you could make, if you removed all the jumps.


I'm currently writing a lot of Elixir where early returns are a feature of the language (https://hexdocs.pm/elixir/Kernel.html#module-guards).

I also write a lot of JS and tend to use guard clauses fairly liberally. I find they make boundaries (x < 0) much clearer. My functions are also < 10 lines in most cases. For me it leads to less convoluted code but YMMV.


I love Elixir, but I don't think of guards as "early returns". They're more like preconditions or filters: they prevent the function from running at all if they don't match, and because they're applied top-down in the module, it's really easy for the programmer to find the one that should match. They're great!

Other programming languages let you return from any line in the function, no matter how deep in a conditional, inside nested loops, before or after variable side effects, etc. Tricky to know just _what_ that function will do given some inputs.


You're right, they're not really "early returns" but for me at least they are often in similar fashion to the "early exit" pattern in the article:

    def transform_data(raw_data) when length(raw_data) == 1, do: []
    def transform_data(raw_data) do
        # actual function code goes here
    end
That said, I wouldn't want to see the above snippet in my team's codebase :)


Not a fan of Prolog then? As an exercise, find the guard in this snippet (https://rosettacode.org/wiki/Babbage_problem#Prolog) and imagine such code in a team codebase!


Quite the opposite :) My preference would be to rely on pattern matching (unification) in the function head:

    def transform_data([_|[]]), do: []
I just wanted an example of using guard clauses. It does depend on the actual code though, there's probably a more elixiry solution.


Are early returns essentially equivalent to gotos?

I suppose I could get rid of early returns with a variable and breaks, but is it worth it? My functions tend to be small enough for me to not run into problems with early returns.


massively edited. very interesting, thanks for the comment

They are literally goto end. It used to be done like that a lot actually.

I suppose I could get rid of early returns with a variable and breaks, but is it worth it?

Very good question. My claim is that yes, it is tremendously worth it if you care about great code. The reason is that by introducing intermediate variables (or by using the ternary operator ? more) you will have more expressive code, and you will subsequently make better decisions when you write/maintain/refactor it. And more importantly, you will refactor it because you can (how do you move code that's full of early returns?). In the end, your program will be much simpler and better (fewer bugs, faster, etc). Just try it in a big-enough program, and you will not go back to the old ways, I think.

My functions tend to be small enough for me to not run into problems with early returns.

Very interesting. This point would require a live discussion. Many things are at play here, between over-engineered object-oriented programs where everything is broken into little pieces that shouldn't be (most of the time you should make a function when it's called twice, not because it represents a world object) and also, I'm not surprised you guys don't write bigger functions, since you fill them with early returns. To be clear, what I'm saying is: are you guys not able to write any big functions because of the early returns? I wasn't expecting that; it sounds almost plausible. You end up breaking things up way too much, over-engineering things, all just to avoid bigger functions, because you can't have them because of the early returns? Very very interesting. Of course this is complete conjecture and certainly not true. Or is it?

edit: it also depends on what you mean by worth it. In terms of market value, you will not be rewarded, because the market doesn't reward great code that much anyway, broadly speaking.


> They are literally goto end.

So are a lot of other constructs:

break is goto the end of the loop, continue is goto the beginning of the loop, the closing of a loop block is goto the beginning of loop, else is goto the end of the conditional, if is maybe goto the next else, and return (early or not) is pop from the stack and goto that address.

The fact that a language feature uses a JUMP instruction does not make it necessarily evil; most of them do. Everything has both benefits and drawbacks which are often highly situational.

You may find it easier to convince people of the problem with early returns if you limit the scope of the argument to a more specific context than “always”: it’s usually easier to make an argument for a concrete case rather than an abstraction, and then show how the same reasoning applies to a large class of situations.


Thanks a lot for the elaborate reply. I do tend to break things up into small functions. Why is that over-engineering? It makes it easier for me to reason about my code. It also makes it easier to refactor sometimes, although of course you're now restrained by the function's construct, so it becomes more difficult to move things between functions.

Do I understand correctly that you tend to make large functions because they're easier to refactor (and not returning early makes this easier)?


I'm also a fan of early exits, for more reasons than in the post: an early return results in less nesting and fewer variables; it makes the code simpler. Consider:

    Option<T> sink(int pos) {
        if(pos>=this.length()){
          return Nope;
        }
        // code that depends on the bounds check
        Some(value)
     }
Versus:

      Option<T> sink(int pos){
         Option<T> ret = Nope
         if(pos < this.length()) {
             // same code
             ret = Some(value)
         }
         return ret;
     }
I argue that the second is more complex: an added variable, deeper nesting, useless return line.

When rewriting multiple guards you get more empty statements that reassign the default value of the return.

I do have the following opposite opinion too: guards should be simple: no loops, no complex control flow (name the loop). No side effects in guards. And no guards handled by the expected case, even for clarification.


  return pos < this.length() ? Some(value) : Nope
See, mine is already way simpler than yours. This is my point: if you force yourself not to use jumps, you will end up with simpler and better programs. Also, the second example you wrote is expressive and, more importantly, you can move that code. Also note that in many other programming languages, like Lisp and Haskell, you would never need the intermediate variable, as you can basically write return if...


The comment (which appears in both versions) stands for some complex logic with side effects which calculates value. You shouldn't put that in a ternary; in most languages you can't.

And even then I would reverse the order, placing the exceptional and short case first.

Moving the code: why? You have a perfectly encapsulated function. If you need to, you have more problems with shadowing variables, functions and with return types.

For functional languages: doesn't Haskell explicitly have guard pattern? I thought the general guidelines favored matching and guards over if statements.


But that requires you to have the output assigned to a variable even without valid input.


This whole guard/validation thing is a fabrication of the mind, because early returns do return something on an invalid input. If anything, those validations should be asserts/exceptions.


> try removing all returns, continues and breaks from your code, you'll see what wonderful things happens. It's literally magic. Things becomes so much more expressive without them

LOL. Try adding returns, continues, and breaks where appropriate to some random code that doesn’t currently use them, and you’ll see what wonderful things happen. It’s literally magic. The control flow becomes clearer and more linear and the code becomes more logical and concise, with fewer weird conditionals repeated several times for no reason. Also often improves performance, “for free”.


I guess I should explain that expressiveness leads to better decisions which leads to better code. In short, what I'm saying is that you will end up with a totally different program.


Early returns are the best pattern I ever adopted. Because of them I rarely if ever use 'else' expressions anymore, which leads to less convoluted code.

It seems we have different experiences because where I see early returns used is mostly where functions are already small.


Or is it that you (guys) can't have bigger functions because you impulsively fill them with early returns? So you end up over-engineering things just to avoid bigger functions (because you can't have them because they are filled with early returns). Real question, I don't have the answer, but that question is my take from this discussion.


If you don't use early returns, don't you end up with functions doing lots of state/variable mutation (within the function)? That's what I consider the problem with big functions and/or not using early returns. You then need to keep track of all those modifications. I find it much easier to reason with early returns.

You can certainly over-engineer things just to avoid bigger functions. However from my experience I see that far less than complicated large functions with little care to readability.


don't you end up with functions doing lots of state/variable mutation (within the function)?

Good question. This is true, but this is a programming language bug. In some programming languages you can write the equivalent of return if... or return for.... Also note that those intermediate variables add a lot of expressiveness (you're computing result or screensize etc). The mutation is unfortunate indeed, but it's a programming language bug.


I used to use early returns a fair bit in Python and Ruby (my primary languages).

But lately I've been working with Elixir, which doesn't support early returns, and while Elixir does have some tricks up its sleeve to better handle the idea of an early return without using one (such as pattern matching on function arguments and/or guards), I'm now slowly shifting into thinking early returns in any language make things more difficult to reason about in the end most of the time.

I say "most of the time" because for very short functions where you might return early to avoid an else clause or deal with varying types of input, that's not too bad. It's really a problem in the use case you explained where you have a larger function that might have 4 or 5 early returns scattered about and that function might be nothing more than having a few short if statements that say "if this, then do_something and return". It becomes difficult to maintain because now the function ends in 5 different spots based on 5 different conditions.


An interesting conjecture I've made during this discussion is the following: can early returns be responsible for over-engineered code? Think about it. If it's acceptable for programmers to use early returns, then they can't stand bigger functions, because those big functions systematically end up full of early returns. So programmers break things into small pieces instead, systematically. While there are many reasons why breaking things into small pieces could be the right thing to do, early returns are certainly not among them. You have to picture what I'm saying. What I'm conjecturing here is that many programmers simply can't write bigger functions, so they'll make a battery of bad decisions, unconsciously, to avoid them. How big is that?


How would you rewrite the first example?


   return rawData && rawData.length > 1 ? rawData.map((item) => item) : []
In many other cases, I would use a result variable. It's more expressive and, more importantly, you can move that code (the result variable may become just another variable in a bigger function). (Note that in many programming languages, like Lisp or Haskell, you wouldn't need the result variable, because everything is an expression.) Granted, it looks like no big deal in many, many spots. Until it is, particularly at scale, when you can't refactor anything because everything is full of early returns.
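
For example, the same thing written with a result variable instead of the early return (just a sketch, reusing the rawData example from above):

  function transformData(rawData) {
    let result = [];
    if (rawData && rawData.length > 1) {
      result = rawData.map((item) => item);
    }
    return result;
  }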


Your program can focus on executing statements, or it can focus on evaluating expressions (or a mix of both), and I would agree there are pros and cons to each.

These small concrete examples are good to talk about. How would you rewrite the following:

    def contains_3(xs):
        """Returns True if the given list contains a 3."""
        for x in xs:
            if x == 3:
                return True
        return False
It does use an early return, but it is not long or convoluted. You could use something like `any()`, but I would never find fault with the above function or attempt to refactor it. It is as clear and efficient as possible, and uses basic language features even beginners would be familiar with.


This is how I write it:

    def contains_3(xs):
        """Returns True if the given list contains a 3."""
        found = false
        for x in xs && !found:
            found = x == 3
        return found
Not valid Python code, but who cares. Using proper Python, you could write the whole thing in one line using a list comprehension.

In terms of expressiveness (as much as this short example needs expressiveness), note how found is x == 3, which is what you want, right? Note how the loop condition is now full. One could also note how every line is about found, so you can instantly tell which lines are about found and which ones are about foo or bar, if you picture a bigger function. Note how there is now only one condition (the loop) and not two (loop and if) (in many situations you would end up with fewer lines of code, surprisingly). Finally, note how DRY this is: you return x == 3, not True if x == 3 is True (there are many cases where this DRY property is more clearly visible).

This code reads like a list comprehension.

In addition to being more expressive (although, again, this is more striking in a bigger function) and movable, you can also actually write bigger functions like that, which you can't with early returns (everybody agrees that's a mess).

If I'd told you not to use jumps, you would have found that solution. Simply ban jumps from your code (find the alternative way of writing things) and you will obtain code that will surprise you, especially in bigger functions.


> you would have found that solution

I don't think so; even after seeing your pseudocode I don't see a nice way to avoid the early return. I don't understand how your for-loop / if-condition mix works, but let me try to translate it as best I can into valid Python:

    def contains_3(xs):
        found = False
        index = 0
        if not found:
            found = xs[index] == 3
            index += 1
        return found
This was honestly my first best effort attempt at creating this function without an early return. But it is wrong, it only checks if the first value of the list is 3. My opinion is that this style is more error prone (and indeed, I did create an error), and objectively there is more code and more state and thus more opportunities to get things wrong.

    def contains_3(xs):
        found = False
        index = 0
        while not found and index < len(xs):
            found = xs[index] == 3
            index += 1
        return found
That works, but again, there is more code and more state. When reading this code I must ask myself "are they iterating over the list correctly?" and I must spend mental effort answering that question. This doesn't happen with a for-loop.

I don't understand why you keep talking about jumps in assembly code. Code which has zero early returns will still have hundreds of jumps in the assembly code.

Code that is short is good. Code that is readable is good. I don't think we'll agree on what code is more readable, but we can at least agree on which code is shorter.

As I've gained programming experience, I care less and less about how individual functions are implemented and more and more about what those functions do and what they return. For something like `contains_3`, I see that it is a function with no side-effects, which returns a boolean and won't fail (at least, won't fail in a language with type checking). This is a small simple function, and I don't care how it's implemented; it is a good function. If it becomes a problem for some reason, it is isolated and can be easily fixed in isolation.


If what you say is true, a mechanical transformation done by a simple source-to-source compiler could improve code by turning early returns into nested if/then statements. The code would be longer and more nested, but (you claim) better.


Wrong, because once you would see the code with the if statements, you would discover bugs and that the logic is convoluted. You would refactor it into a much simpler program.


Exactly! Just add early returns when appropriate and all becomes much clearer.

Also beware of people with strong opinions about way too general things, in areas where specific things are... specific, and where discussions of the sort you started are nearly always a bad sign. Without something concrete to solve or look at, these discussions are more religious in nature. You can say whatever; there is no way to prove or, worse, disprove it. When people get into heated discussions there is a good chance that there is no practical test possible, because the discussion is not about anything concrete but about "ideas" - where everybody has different ones for the same keyword(s).


We can all agree that convoluted logic is bad, but there are functions with early returns which are not convoluted.


>a big function with many early returns

And how would putting these early returns in a gigantic if condition or several increasingly indented if blocks fix these bugs? It will not fix any bugs since both forms are equivalent.

>early returns are gotos

All types of control flow can be represented with goto.

>gotos are harder to follow

Harder to follow than what? The majority of programmers are familiar with following gotos by using an abstraction over it such as using loops or if blocks.

>You can't move easily (think copy/paste) code that has early returns

If code is able to be moved to or copied into another part of the codebase, this is a signal that the piece of code should be put in a function. If you are talking about moving a subset of code following early returns, then you do run into the problem of not knowing which subset of early returns (aka preconditions) needs to be brought into the new function. This is the same with using large or nested if blocks.

>Early returns are artifacts of the assembly jump instruction

By this logic all control flow constructs are artifacts of that. Regardless whether or not you use early returns, you get equivalent assembly code after compilation.

>it makes no sense to have jumps in [a high level] language

I disagree. It makes perfect sense to have jumps in a high level language. A high level language should allow you to write abstractions over these jumps, though. This way you can name and reuse patterns of jumps in your program. In addition, you can think of early returns as a high level construct too, instead of a low level one.

You can think of if blocks as a monad. If we take a look at the article and used nested if blocks, you could think of the type of the function as If If List (we can imagine that there is an interpreter behind the scenes to convert this into a plain List). Since If is a monad, we can do the join operation. This would turn it into a single If List. The obvious way to do this transformation by hand on the source file would be to merge both if statements into a single one. If the else statements returned different results, you would need to make sure you return the value associated with how far into the combined condition you get. Doing it this way is a little awkward.

Also remember that monads give us the bind operation too. If we manually transform the source into using the bind operation, we actually get the same thing as using early returns. Using early returns is a much clearer implementation of handling the conceptual If monad. By using the If monad we only have to deal with a single if statement / condition at a time.
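
A loose JavaScript sketch of that analogy (conditionA/conditionB are the article's placeholders; `step` is a made-up helper, not a real library function):

  // Each step either short-circuits with a value or defers to the rest of the
  // computation, roughly like bind on an Either/If-style monad.
  const step = (cond, value, next) => (cond ? value : next());

  // Nested form, mirroring nested if/else:
  const nested = step(!conditionA, "Not A", () =>
    step(conditionB, "A & B", () => "A"));

  // Early returns express the same chain linearly, one condition at a time:
  function flat() {
    if (!conditionA) return "Not A";
    if (conditionB) return "A & B";
    return "A";
  }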

>Things becomes so much more expressive

I beg to differ. They are equally expressive, but early returns allow you to mention the preconditions / special cases all at the beginning of your function.

>Early returns is one of the big pillars of bad inexpressive entangled convoluted programming

I'm not sure how early returns make something entangled. Since they can only be at the beginning of a function, they can't really be entangled in your whole function.

>And what is people reason for using early returns? It makes less indentations

Yes, this is a side effect of being a monad. It means we can focus on a single if statement at a time instead of focusing on a bunch of nested if statements at the same time. This is the same thing as when JavaScript developers realize that converting callbacks that call callbacks, etc., into promises reduces the nesting that is needed.
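
For example (hypothetical functions; the promise version assumes getUser and getOrders return promises):

  // Nested callbacks:
  getUser(id, (user) => {
    getOrders(user, (orders) => {
      render(orders);
    });
  });

  // Flattened with promises:
  getUser(id)
    .then((user) => getOrders(user))
    .then((orders) => render(orders));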

In summary, I am not convinced from your argument that early returns are bad since they are an abstraction over assembly.


You have to use two newlines to get a line break in HN (https://news.ycombinator.com/formatdoc). I added a bunch of those to your post to make it readable.


Thank you. For some reason I was unable to edit my post to fix the issue myself.


But break/continue statements are also like goto, and to be consistent they would need to be eliminated as well.


The more I read about for loops, non local exits.. the less I want to go back to imperative programming.


If you're interested in what I'm saying, you should try removing all returns, continues and breaks from your code


How would you replace early returns? If statements?


This object / function switching is horrible. It obfuscates code reading a lot. Why lambdas and not named functions? Same for nested ternaries. This is the kind of code which no one will want to maintain afterwards.

Early exits are fine when they are used for invalid input/use cases, but not for regular algorithmic outputs; that would hide cyclomatic complexity. Unfortunately, this doesn't seem to be understood by the author, as you can see once you look at the reduce example. I am not experienced enough with reduce to make a statement about the array pattern... But it looks good, especially when the language has array destructuring.


Early exits should be used carefully - security critical code should not rely on a single check due to TOCTOU errors. Essentially every check hit after the primary - and there should be such cold redundant checks - is a critical error - not necessarily in "System Halted" way but high severity. These may light up on normal use in concurrent cases too if there are bugs. Even hardware bugs.

Guard macro/template/local assertion is almost always better, the key is to not produce too much distracting syntactic noise or work against language features. Redundant rare checks are eaten by branch prediction. (Needs care too, call when Spectre is fixed.)


Re Object/functions, I think his example is a bit flawed too, but not for the reason you give. A lambda vs named function doesn't really change his pattern at all.

The big problem for me is that it's offloading the logic for picking to the caller, so that if they forget to default null to `contentType["default"]` everywhere, there will be a bug. Putting the choice + default fallback on the callee side reduces this boilerplate.


The reason I complain about the lambdas is the lack of readability about what the function does (no name), the lack of a meaningful stack trace, and finally, in most languages this pattern also allocates memory.


> 2. Switch to object literal

I've generally called a much more general form of this "data driven programming". (The data here being the mapping.)

See: https://en.wikipedia.org/wiki/Data-driven_programming#Anothe... and the referenced https://homepage.cs.uri.edu/~thenry/resources/unix_art/ch09s...

> the important part is moving program logic away from hardwired control structures and into data


Of course the logical conclusion of any data driven programming technique is to build a DSL and a parser. I always think that if my solution is starting to look like a compiler I’m doing something right since I’m modeling the domain and allowing end users to solve their own problems. After all, what is a compiler/interpreter of a Turing complete language if not (provably) the most flexible data-driven program?


You really should just use a for loop over reduce. It’s much easier to understand, shorter, and is more performant:

    const truthy = [], falsy = [];
    for (const i of values) {
      if (i) truthy.push(i);
      else falsy.push(i);
    }


    const truthy = [], falsy = [];
    for (const i of values) {
      (i ? truthy : falsy).push(i);
    }
If for some reason you did want to use reduce (and I see no reason to), it would be

    const [truthy, falsey] = exampleValues.reduce((arrays, i) => {
      arrays[+!i].push(i);
      return arrays;
    }, [[], []]);


A better option (for me, who likes fp) would be writing a group_by reducer once and apply it to any situation at hand

    group_by = f => (acc, item) => ({...acc, [f(item)]: [...(acc[f(item)] || []), item]});

    const {true: truthy, false: falsy} = values.reduce(group_by(Boolean), {});
Though as long as you don't need to keep writing for loops for each grouping you do, group_by using reduce or for loops doesn't really matter.


That really is more a matter of taste than facts


1. Yes, love avoiding nested if-statements.

2. Yes, similar to my love my avoiding nesting, I too like avoiding indentation. Switch -> Case -> Statement+Break vs Object -> Key/Value.

3. I think this example dips a bit into the arcane. It's not that I can't read it, but you have a reduce starting with two arrays and some conditional logic. I don't often run into it, but I might try to think of where this can apply further.

4. Yes. Even though I love short concise code, I want the variables and function names to be clear to their meaning. When I read acceptable abstract code I expect the pieces to read like a set of instructions. Ideally the nicely named functions do what they say they do, and no more.

5. Hard pass. That ternary looks like a blob of characters and I have to think very hard about the logic going on. Not a... if b then a & b else a... I don't like picking on specific examples inside these kinds of posts, but on first pass:

  if(!conditionA) return "Not A";
  if(!conditionB) return "A";
  return "A & B";
Probably give it a nice snappy function name so there isn't a bunch of conditional logic in the middle of a function, when I just want to know the state of two booleans, (with a caveat).


I think the problem with #5 is the formatting and use case. Ternaries are really useful for certain things and can make code much easier to read but they are also super easy to misuse. The big thing I find is that ternaries allow for greater code density and better alignment of similar elements.

The style I find works best is as follows. Note that alignment of the ? and : characters (and depending on the code style, the parentheses) is vital for the legibility of this style.

  x = (condA) ? (trueA) : (falseA)
      (condB) ? (trueB) : (falseB)
      (condC) ? (trueC) : (falseC)
      (condD) ? (trueD) : (falseD)
      (condE) ? (trueE) : (falseE)
                        : (final case);
Mind you, this has its uses mostly in low level stuff, but I find it a hell of a lot easier to read than a giant if else chain.


If I'm reading it right, those false cases shouldn't exist; it should be:

  x = (condA) ? (trueA) :
      (condB) ? (trueB) :
      (condC) ? (trueC) : 
      (condD) ? (trueD) : 
      (condE) ? (trueE) : (final case);
and the equivalent if/else chain should be:

  if   (condA) {x = trueA }
  elif (condB) {x = trueB }
  elif (condC) {x = trueC }
  elif (condD) {x = trueD }
  elif (condE) {x = trueE }
  else         {x = (final case) }
In which case it doesn't really get you much readability-wise, since the formatting consistency appears in either case; the main benefit really is that ternary is an expression rather than a statement

It’s definitely superior, since it’s still less noise all around, but almost worthlessly so. And then the same pattern gets covered by switches too...


Yep I'm dumb. That's what I get for writing that snippet last night with a few drinks in me.

As for comparison against switches, I find that switches are very limited in a lot of languages (looking at you C/C++). The problem with the if else chain as well is that most code formatters seem to blow it up into the fully expanded form which kills readability.


yes


I used to write PHP and JavaScript at the same time. It turns out nested ternaries get evaluated differently in the two languages, leading to some subtle bugs sometimes. I will never write those without parentheses again.


iirc php is the only (popular) language with messed up ternary-if associativity, it works as you'd expect pretty much everywhere else


I believe you're correct; I'm not aware of any other language with a left-associative ternary.

This will be deprecated as of PHP 7.4[0], and will cause a compile error in PHP 8.0.

However, instead of switching straight to right-associativity, PHP will require explicit parentheses to disambiguate nested ternaries.

[0] https://wiki.php.net/rfc/ternary_associativity
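
For illustration, the same expression grouped both ways (a sketch, with values chosen so the results differ):

  // With a = true, b = false:
  //   a ? 1 : b ? 2 : 3
  // JavaScript (right-associative): a ? 1 : (b ? 2 : 3)  ->  1
  // PHP before 8.0 (left-associative): (a ? 1 : b) ? 2 : 3  ->  2
  const a = true, b = false;
  console.log(a ? 1 : b ? 2 : 3); // 1 in JavaScript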


  contentTypes[contentType] || contentTypes['default'];
Don't do this. Untrusted user input could be anything, such as "hasOwnProperty" or "toString", for example:

  contentTypes['hasOwnProperty'] // [Function: hasOwnProperty]
  contentTypes['toString'] // [Function: toString]
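
A couple of ways to guard against that, as a sketch (reusing the contentTypes lookup from the example; the safeContentTypes name is illustrative):

  // Only accept the object's own keys, then fall back to the default:
  const handler = Object.prototype.hasOwnProperty.call(contentTypes, contentType)
    ? contentTypes[contentType]
    : contentTypes['default'];

  // Or build the lookup table without a prototype in the first place:
  const safeContentTypes = Object.assign(Object.create(null), contentTypes);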


I always thought the urge to replace switches with objects was fashionably bad advice.

I get that the object literal approach feels more functional, and that forgotten break statements will cause havoc tho.

Seems like a safeswitch construct is in order, that does not allow dropthrough. something like:

  safeswitch(s) {
    case 'alpha':
      calc(alpha(x), 2);
    case 'beta':
      calc(beta(x), beta(x*x));
  } else {
    // default case here
    calc(x);
  }
alternatively, switch could be modified so that passthrough is not the default, and so, instead of a break statement one would use a passthrough statement to enable that behavior (opt in instead of opt out)


What are the implications of this? What could go wrong outside of runtime errors?

When I've used this pattern in the past I've used a reusable 'getter' object which implements `hasOwnProperty` on itself for the sake of being sure I'm getting only the stuff I want. But it's not quite as concise as what's shown in the example, apart from getting the default internally rather than externally using ||.

I didn't do this for security though, I did it for sanity - I don't want to get false positives when accessing, and the prototype has a ton of properties.


"I started deferring to ternaries and nested ternaries and I found I was able to quickly understand at a glance what was happening."

That sort of reads like "I was able to read my own code". That's a good start, but can other people also "quickly understand at a glance"?


As a future educator interested in how people think about code, I like these submissions! Whenever things like functional style or language syntax pop up on HN, they tend to create a big stink in comment threads, as people defend their (obviously different) tastes, perhaps by providing contrived code examples, or focusing on why Alice's code is "unreadable" to Bob, rather than understanding "that's just what works for Alice." Honest, unprovoked opinions like in this blog post (like "Hey, this works for me!") are a breath of fresh air.


The brain rewires - literally.

So whatever you are used to - you are used to it because your brain wired itself to recognize those patterns. So obviously, when you look at something you are not used to, you are not wired for, it takes a lot longer to process because a lot more of the brain needs to get involved. However, with some time the brain rewires and what was alien now seems completely natural.

Of course, this is within larger limits, you cannot adapt to everything and not to everything at the same speed.

Anyway, my point is, for any such comment people make they should make this test: Do I feel how I feel because of the above, or do I have actual hard knowledge that there is a deeper effect? Meaning, is my comment simply a report of my current wiring state (so, mostly useless to others), or do I have to share actual insights?

Then there's what I wrote in another comment here, about disagreements on waaayyyy too general topics, where people might not actually disagree nearly as much if there were a specific context. But since the topic is so broad, everybody comes with their own imagined scenario, depending on their own past experiences, which may very well be best served with very different approaches; and since the discussion remains abstract, everybody keeps defending their opinion. Which may actually all be right - for the concrete scenarios they all have in their heads.

For example, there are several commenters here arguing against nested ternaries. I did the same for the longest time. But TypeScript's conditional types require - and officially advertise - exactly this pattern. The way they write it, it looks perfectly clear, and I found myself agreeing with their actual concrete code (i.e. types; I posted the link in another comment here).


Note: this is mostly about style at the syntax level, not about design patterns at the architectural level.


I agree with the 1st pattern.

The 2nd one, there's a caveat about space/GC here.

The 3rd one depends a lot on the situation.

4th one is too obvious for it to be called a pattern, but I agree with it.

5th one is, meh, not really easy to read. I prefer his "bad" example. It could be less verbose, e.g.,

  if (!conditionA) result = "Not A"
  else if (conditionB) result = "A & B"
  else result = "A"


Personally, I'd prefer to go with early return, I think. Depending on what the conditions mean semantically.

It would be:

  if (conditionA && conditionB) {
    return "A & B"
  }

  if (conditionA) {
    return "A" 
  } else {
    return "Not A"
  }
I think this comes from a general dislike of more than one else on an if and a dislike of nesting.


Regarding nested ternaries, while not recommended, short circuit evaluations may be even more readable:

return ( (cA && cB && "A & B") || (cA && "A") || (cB && "B") || "neither" );


Quick, it's another case of JavaScript Stockholm Syndrome!


:-)

However, this would also work in Perl, etc, (anything featuring short-circuit evaluation and type coercion for evaluating conditions.)


Same concerns about #2 here, prettier code but worse performance, Form over Function.


Re the switch example, another possible solution is the command pattern, where you encapsulate the action in an object.

e.g. (in C# because my JavaScript isn't good):

  interface IContentType {
   void CreateType();
  }
  
  class ContentPost : IContentType {
   public void CreateType() {
    return console.log("creating a post...");
   }
  }
  
  class ContentVideo : IContentType {
   public void CreateType() {
    return console.log("creating a video...");
   }
  }

  void CreateContentType(IContentType contentType) {
   // Instead of a switch, you can now do:
   contentType.CreateType();
  }
Mainly useful if you want to do more than one thing depending on content type, of course. It's overkill if you'd only need a single switch anyway.


I disagree with the "object literal" because it hides the trace of which branch is taken. Sometimes it's better to write more self-explanatory code instead of saving a few lines.


It’s also dangerous in JavaScript where properties of the Object prototype can mistakenly be dereferenced instead of the default/fallthrough case.


I feel Proxies could be a cleaner way to handle default cases in using object literals to replace switches:

    const cases = new Proxy({...cases}, {
      get: (obj, val) => obj[val] || (() => { /* ...default case */ })
    });
It does not add any lines to the definition, and saves you from writing `|| cases[default]` all the time and bugs from forgetting to write it. Additionally, you can also use 'default' as a case.


Proxy is slow and unnecessary. Passing around a wrapped object that behaves in this way would be exceptionally surprising. Just move this behavior into a function if you find yourself having to repeat something many times.


or write a `get` function, surely that'd be cleaner and more explicit


The 'One loop two arrays' example looks like premature optimisation to me. Going through the list twice is much easier to understand:

  let exampleValues = [2, 15, 8, 23, 1, 32];
  let truthFn = (x) => x > 10;
  let truthyValues = exampleValues.filter(truthFn);
  let falsyValues = exampleValues.filter((x) => !truthFn(x));


When I found out about the 1st pattern I liked it, but at another workplace I was advised to avoid it since it creates multiple exit points for the function, leading to confusion. I'm not quite convinced of that and tend to agree with the author of this article, but are there any other arguments against it?


There's a very old programming style rule about only ever having one entry and one exit to a function. This rule dates back to the days of assembly programming, where any 'function' could just jump directly into the middle of some other function, or jump out before the end. This made understanding and maintaining code super-difficult, and this rule made a lot of sense.

Eventually, "structured programming" languages like C and Pascal were invented, where you couldn't just leap from one arbitrary part of the program to another. In these languages, the compiler enforced that each function had a single entry point, solving part of the problem. However, for C in particular the rule was still kind of useful - all the resources allocated during the function needed to be cleaned up by the end, so it was still good practice to have a single exit point that always did whatever cleanup was required.

These days, most languages have a garbage collector that will automatically clean up for you. Languages that don't have a garbage collector, like C++ and Rust, generate all the deallocation logic at compile time (the RAII idiom).

If you're using a language designed after the early 1980s, the "single entry and single exit point" rule is either wholly irrelevant, or does more harm than good.


To me, that depends on the language. If you are using C, it's probably best to have one exit point where everything that needs to be cleaned up is taken care of. If it's C++ (RAII/scoped lifetimes) or another language that does not require manual cleanup, maybe it's fine to return early.

I personally never use this pattern because I like having a central place to cleanup/set status/return values etc. I also find it easier to add/remove more functionality that way.

The code that I have most often seen using early returns is in large functions that do several things, so they're exacerbating poor design with confusing flow of control.

I feel that if you keep your functions small enough, you can keep them readable either way. So I'm back to mostly worrying about cleaning up resources in a consistent way.


I think having multiple exit points is good when the early exits are all at the beginning for trivial parameter validation, but likely to be bad if they're buried in the middle of the function.

In either case, there's the disadvantage that if you want to do printf debugging and make the function print "function foo returned 'false'" when it returns, early return makes it a pain. (Generally not so much a problem for real cleanup work, since your early returns are likely before the function's actually done any work that needs cleanup)


It's hard to argue against one-liner early exits at the very beginning of the function.


Although I've been avoiding multiple exits, I don't think the pattern is that bad; it's more a matter of personal taste. However, I think it's important to let the readers of the function know what the function mainly does first, instead of the edge cases.

In functional thinking, it's actually very helpful to think of the edge cases and get them out of the way first. When put in coding, I like to think it's more helpful to let the readers know the purpose of the function first, they could dig deeper later if they choose to.


Back in the mid 80's my more computer sciency coworkers gravely warned about 'bad things' that would happen if you had multiple returns.

I found multiple returns were useful places to stick break points[1]. Like when you're trying to track down a rare error condition that happens every so often. Like once a week.

[1] I use a debugger cause I'm a small brained primate.


Replacing switch, which is a compile-time construct (it converts to an if-else chain, a lookup table or a binary search), with an object, which is a run-time construct (a hash table), is pretty inefficient, at least in compiled languages.


Early returns are good because they also save a lot of indentation.


re: switch to object literal.

Run a perf test comparing execution speed on the two strategies. I've had to defend switch like three times this year; there's a reason it exists, it's super easy for compilers to optimize :\


Thanks for sharing this. I didn’t know about ‘reduce()’. Only thing I’d add is to use TypeScript.


Should have applied item 4 to item 3. I don't really know what's going on there.


I disagree strongly with the "no foo variables" rule. If you put meaning into a variable name that you use twice, then you are repeating yourself, which is very bad. Thus, variable names should NEVER be meaningful (except maybe when they are used only once). It is better to name your variable "x" with a clear comment of its meaning upon its declaration.

Moreover, mixed-case symbols suck big time.


Are you familiar with the concept of ‘coding your documentation?’ Code often changes without the comments being updated so making the code as informative as possible has lasting benefits.

It is much easier to use descriptive variables than keep the comments up to date.

Additionally, descriptive variables allow anyone to read and more quickly understand the code.


> Are you familiar with the concept of ‘coding your documentation?’ Code often changes without the comments being updated so making the code as informative as possible has lasting benefits

Yes, and I agree 100% with it.

It is much better to write self-documenting code than comments. I just contend that descriptive variable names are not the way to self-documenting code. Of course, I'm talking about local variables whose scope should never go beyond a single screen of code. Globally accessible identifiers (e.g. functions) should of course have very long and descriptive names.

> Additionally, descriptive variables allow anyone to read and more quickly understand the code.

This does not ring true to me, but maybe it depends a lot on the context. At least in numeric mathematics, you always want to name your numbers, vectors and matrices using single letters (as they appear in the corresponding paper). Using "cute" descriptive names hinders a lot the readability of formulas.


Ah, I think the context of your programming and mine are quite different. I don’t do any mathematical formulations beyond basic manipulations.

I can see where someone writing complex mathematical formulas in C would definitely benefit from single character variable names


The "x" in mathematics is completely different to an "x" in some business logic.

Business logic tends to be concrete, not abstract. Mathematics tends to be abstract, not concrete.

Yes, using x,y,a,b,c as names in the quadratic formula makes sense.

But so does "isOverCreditLimit" and "MonthlyInsurancePremium" in business (leaving aside arguments about camelCase etc etc).

Business logic needs to reflect the intent of the business. There is rarely the case for an "x" to "solve for".

Deconstructing a mobile phone plan into code is not the same as deconstructing a math paper. Quantitative mathematics is a very different computing environment to business logic.


This sounds fine if you're the only one using the code. In business, the practice you're describing is needlessly fragile. My opinion is that you shouldn't have a home style and a work style — you should just have correct style, something that works equally well in both scenarios.

> If you put meaning into a variable name that you use twice, then you are repeating yourself, which is very bad. Thus, variable names should NEVER be meaningful (except maybe when they are used only once)

I don't understand this sentence at all. My initial interpretation was that one should only use a variable no more than once which seems completely arbitrary. If a variable is only being used once, that's not a variable, that's a constant — but why are constants more deserving of a meaningful name than a variable, something that could (and should) be used multiple times?

After all, if you're declaring a variable but not modifying it or only modifying it once, you should use a constant — but that still doesn't explain why constants are special enough to merit meaningful names.

Further, you haven't narrowed down to when your assertion applies. What about variables in OOP as members of objects? Should mutating members always have nonsense names?


I think this conversation is at cross purposes. In algebra, the statement:

x = x + 1

is meaningless. Algebraic variables are not the same as what are effectively names for value storage locations, which is the common meaning of a "variable" in most imperative languages.

Of course, with FP, the concept of a mutating variable is frowned upon in general. Mutation is effectively a performance hack.


I suspect you're right that GP was likely talking about a different context but I think that person failed to make that clear, therefore making it seem like their mantra for variable names was universally applicable. Clearly, it isn't.



