
I understand that the tradeoffs exist; my surprise is mainly that what would be an uncommon workaround in pythonland for workload-specific tasks (e.g. most projects don't have differing library versions across branches, at least not for very long) is common practice in jsland.

Although one factor I just realized is that pip also ships pre-compiled binaries (wheels) instead of the actual source, when available. Those would generally be pretty dumb to want in your repo, since they're developer-platform specific; assuming JS packages only contain text files, committing dependencies would be a more viable strategy to adopt as the common case in that ecosystem.
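To illustrate the platform-specificity: pip picks whichever wheel matches your interpreter and OS, so a wheel committed from one machine may be unusable on another. A minimal sketch, assuming the third-party "packaging" library (which pip itself vendors) is installed:

    # Print the wheel tags pip would accept on this machine, most
    # specific first; these differ across OSes, CPU architectures,
    # and Python versions, which is why committed wheels don't travel.
    from packaging.tags import sys_tags

    for tag in list(sys_tags())[:3]:
        print(tag)  # e.g. cp312-cp312-manylinux_2_17_x86_64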

Regarding B and C, it's not like you're wiping out your libraries every commit; the common case is installing once on git clone, and again only on the uncommon library update. A and C are a bit of an obtuse concern for most projects; I can see them happening and being useful, but e.g. none of my public Python project repos have the issue of A or B (they're not big enough for version dependency upgrades to last more than a day, on a single person, finished in a single go), and for C, it's much more likely my machine(s) will die long before all the PyPI mirrors do.
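As a sketch of that "install once, update rarely" workflow: a hypothetical helper (needs_reinstall and the requirements.txt path are my assumptions, not any standard tool) that checks whether installed versions still match the pinned ones, i.e. whether a re-install is needed at all:

    # Only handles simple "name==version" pins; extras and markers
    # are skipped, since this is just a sketch of the idea.
    from importlib.metadata import version, PackageNotFoundError

    def needs_reinstall(requirements_path="requirements.txt"):
        for line in open(requirements_path):
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, _, pinned = line.partition("==")
            try:
                if version(name) != pinned:
                    return True  # installed version drifted from the pin
            except PackageNotFoundError:
                return True  # pinned package not installed at all
        return False

    print(needs_reinstall())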

Which I'm pretty sure is true of like 99% of packages on PyPI, and on npm; that makes the divergent common practice weird to me. It makes sense in a larger team environment, but if npm tutorials are also recommending it (or node_modules/ isn't in standard .gitignores), it's really weird.
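For what it's worth, the stock Node template in GitHub's gitignore collection does, as far as I know, ignore the install directory; the conventional entry is just:

    # Dependency directories
    node_modules/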

And now that you've pointed it out, I'm pretty sure I've seen this behavior in most JS projects I've peeked at (where there'll be a random 20k-line commit somewhere in the history), which makes me think this is recommended practice.


