
The important aspect of this article is Cognitive Complexity.

Of course, all else being equal, more code leads to more complexity.

In this context, complexity is related to human capacity to collectively read/understand/maintain a code base.

I've been writing software almost non-stop for close to 25 years, and I still don't understand most of the code written by others.

I especially struggle to read anything with templates, generics, lambda...

But I can read straight C code, like the code written by Fabrice Bellard, or by John Carmack in the early days of id Software.



Interesting. This reminds me of the "simple made easy" talk by Rich Hickey, about complexity vs difficulty: https://paulrcook.com/blog/simple-made-easy


React developers, take note: random lambdas are dumb.


React is self-similar which to me makes it simple. The framework gets out of your way. I can’t say the same about, say, Angular.


If a codebase requires more than one person's mind to reason about it then the complexity and bug count grows exponentially per extra mind.


I think this is a bit of a small-scale Conway's-law-style effect. You shield your ability to reason about the code from the complexity of other people's code by abstracting around it, thus increasing the total complexity with an ad-hoc abstraction layer.


Essentially, for every layer of indirection, a reset of complexity happens at the expense of unknown unknowns creeping into the simulacrum the layer is modeled as. At least, that's how I see it. :)


Citation needed. As far as I know the best available evidence (which is still pretty weak) says number of bugs is correlated only with number of lines of code, regardless of whether they're simple or complex lines.


The experiment should be pretty straightforward to test.

X lines of simple code containing a bug vs X lines of complex code also containing a bug.

How long does it take for average joe coder to find it in both cases? Is the experiment really needed?


It's not so easy. In my experience, most bugs that aren't extremely trivial are not obvious locally. A function, whether simple or complex, will look totally fine, but it's in fact buggy, because it violates the implied expectation of the caller, or makes the caller violate implied expectations of their caller, etc. Decoding a complex line of code isn't the hard part here - narrowing down the mismatched expectations and identifying where to apply the fix is.


Splitting code into functions isolates blocks from the context of what situations they are called in and what parameters are passed in. In my experience, transcribing code into notes, where struct fields are inlined as indented blocks into their surrounding types, and function calls' interiors are inlined as indented blocks into their call sites, restores this context, places lines on-screen in the order they execute, and enables causal reasoning. (You still won't know data invariants without asking the original programmers, but you can guess and check at this point.) http://number-none.com/blow/john_carmack_on_inlined_code.htm... advocates for a style where code is inlined in the actual source code whenever possible.


I'm heavily in favor of inlining over tiny little functions, and the transcribing you describe is something that should, IMO, be done automatically by your IDE, on the fly, as a different rendering mode. It's unfortunate that there's no such feature for any of the languages I know.


I don't think that's a valid way of framing things, because it's not like the amount of review time available is proportional to the number of lines of code.


>I've been writing software almost non-stop for close to 25 years, and I still don't understand most of the code written by others.

I think you nailed it. In my experience, the increase in complexity isn't necessarily inherent to the amount of code so much as the amount of abstraction used as code size increases.

I've worked on some large codebases that only used simple, well-understood and/or well-documented abstractions, which didn't feel nearly as complex as other codebases with abstractions that were more complicated than they were worth.


Yes. All modern systems run a huge amount of code if we count the underlying microcode, operating system, runtime libraries, 3rd party dependencies and the application itself.

The thing is not the total size but the boundaries, the understanding of each layer, and how manageable each layer is for the humans modifying and maintaining it ... which is, of course, the total cognitive complexity.




