
Yes, exactly. Pulling in a huge npm dependency is usually not a problem, as long as the library authors didn't go out of their way to make it hard to analyze at build time.

That's tree shaking, though. Dead code elimination means it will find code that can never run and remove it. For example, you might have `if (DEV) {...}`; if `DEV` is statically false at build time, the whole `if` block is removed.
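A minimal sketch of what that looks like in practice (the `DEV` flag and `greet` function are illustrative; bundlers like webpack or esbuild typically inline such flags from build config or `process.env`):

```javascript
// A hypothetical build-time constant. A bundler replaces this with the
// literal `false`, so the branch below becomes provably unreachable.
const DEV = false;

function greet(name) {
  if (DEV) {
    // Dead branch: with DEV statically false, the minifier deletes
    // this entire block from the shipped bundle.
    console.log("debug: greeting", name);
  }
  return "Hello, " + name;
}
```

At runtime the function behaves identically either way; the only difference is how many bytes ship to the browser.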

So first it performs dead code elimination, then it removes unused imports, and then it calculates what is actually needed for your imports and removes everything else.
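The three passes described above can be sketched on a tiny module (module and function names are made up for illustration; real bundlers interleave these passes rather than running them strictly in sequence):

```javascript
// Hypothetical module before bundling:
//
//   import { used, unused } from "./utils";   // pass 2 drops `unused`
//   const DEV = false;
//   if (DEV) { setupDevtools(); }             // pass 1 removes this dead branch
//   export const answer = used(21);           // pass 3 keeps only what `used` needs
//
// A stand-in for what survives all three passes:
const used = (n) => n * 2;
const answer = used(21);
```

Everything reachable from the entry point's actual imports is kept; everything else is dropped.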



That's very cool! I already knew this was something compilers did, but somehow never considered you might do the same for an interpreted language like JS.

Makes me wonder why some JS bundles are still so big. Am I over-hyping what dead code elimination and tree shaking can achieve? Do some teams just not use them?

Either way, I've come away from my question with a pretty big reading list. This is exactly what I love about HN.


I think it's not so much about interpreted vs. compiled as about delivering client code to the user: every time a user visits a website, the browser may have to download the code (if it isn't cached), then parse it, then execute it. The less code that needs to be shipped, the faster the time to interactivity, and the lower the bandwidth usage.

Some bundles may still be big because teams don't use these tools, and some libraries are not structured in a way that facilitates dead code elimination.

Consider libraries built around `class`, such as moment.js: all functionality is exposed as methods on the Moment object, so even if you use only one method, you still have to pull in the whole class. Whereas if a library is structured as free functions and you use only one, then only that function gets included and the rest is eliminated.
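A side-by-side sketch of the two API styles (the `Calc` class and the free functions are invented for illustration; date-fns is a real-world example of the free-function style):

```javascript
// Class-based API: every method lives on the prototype, so importing
// the class pulls in all of them. The bundler cannot drop `minus`,
// even if only `plus` is ever called, because `this`-based dispatch
// makes usage hard to prove statically.
class Calc {
  constructor(n) { this.n = n; }
  plus(m)  { return new Calc(this.n + m); }
  minus(m) { return new Calc(this.n - m); }
}

// Free-function API: each function is an independent export, so a
// bundler keeps only the ones that are actually imported.
const plus  = (n, m) => n + m;
const minus = (n, m) => n - m; // tree-shaken away if never imported
```

This is why moment.js is often cited as hard to tree-shake while libraries like date-fns shrink down to just the functions you import.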



