
The winning strategy for all CI environments is a build system facsimile that works on your machine, your CI's machine, and your test/UAT/production environments, with as few changes between them as your project's requirements demand.

I start with a Makefile. The Makefile drives everything. Docker (compose), CI build steps, linting, and more. Sometimes a project outgrows it; other times it does not.

But it starts with one unitary tool for triggering work.
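A minimal sketch of what such a unitary Makefile might look like (the target names and commands here are illustrative assumptions, not taken from the comment):

```make
# One entry point for local dev, CI, linting, and tests alike.
# Targets are marked .PHONY because they name tasks, not files.
.PHONY: build lint test up

build:
	docker compose build

lint:
	docker compose run --rm app ruff check .

test: build
	docker compose run --rm app pytest

up:
	docker compose up -d
```

The CI config then shrinks to `make lint test`, and a developer runs the exact same targets locally.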


This line of thinking inspired me to write mkincl [0] which makes Makefiles composable and reusable across projects. We're a couple of years into adoption at work and it's proven to be both intuitive and flexible.

[0]: https://github.com/mkincl/mkincl


I think the README would be better with a clearer, up-front explanation of what this adds on top of using `make` directly.



Because, in 2026, most build tools still aren't really all that good when it comes to integrating all the steps needed to build applications with non-trivial build requirements.

And many of them lack some of the basic features that 'make' has had for half a century.


Yes, kick off into some higher-level language instead of being at the mercy of your CI provider's plugins.

I use Fastlane extensively on mobile, as it reduces boilerplate and gives enough structure that the inherent risk of depending on a 3rd party is worth it. If all else fails, it's just Ruby, so you can break out of it.


Make is incredibly cursed. My favorite example is its built-in rules (roughly: extra Makefile code that is pretended to exist in every Makefile) that will extract files from a version control system. https://www.gnu.org/software/make/manual/html_node/Catalogue...

What you're saying is essentially "Just Write Bash Scripts", but with an extra layer of insanity on top. I hate it when I encounter a project like this.


https://github.com/casey/just is an uncursed make (for task running purposes - it's not a general build system)

How does `just` compare to Task (https://taskfile.dev/)?

Just uses make-like syntax, not yaml, which I view as a huge advantage.

No I'm saying use Makefiles, which work just fine. Mark your targets with PHONY and move on.

You still get bash scripts in the targets, with $ escape hell and weirdness around multiline scripts, ordering & parallelism control headaches, and no support for background services.

The only sane use for Makefiles is running a few simple commands in independent targets, but do you really need make then?

(The argument that "everyone has it installed" is moot to me. I don't.)
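For the curious, the "$ escape hell" mentioned above comes from make expanding `$` itself before the shell ever sees the recipe, so every shell variable needs doubling. A contrived sketch (the target name is made up):

```make
# Each recipe line runs in its own shell. $$f is a literal $f passed
# through to the shell; a single $f would be make's own (empty) $(f).
# The trailing backslashes are needed to keep the loop on one logical
# line, since every recipe line otherwise gets a fresh shell.
.PHONY: demo
demo:
	for f in *.txt; do \
	    echo "saw $$f"; \
	done
```

Forget one `$` and make silently expands the variable to nothing, which is exactly the class of weirdness the comment is complaining about.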


I agree, but this is kind of an unachievable dream in medium to big projects.

I had this fight for some years at my present job, and from the beginning I kept warning about the path we were heading down by not allowing developers to run the full (or most of the) pipeline on their local machines. The project decided otherwise, and now we spend a lot of time and resources on a behemoth of a CI infrastructure, because each MR takes about 10 builds (of trial and error) in the pipeline to be properly tested.


It's not an unachievable dream. It's a trade-off made by people who may or may not have made the right call. Some things just don't run on a local machine: fair. But a lot of things do, even very large things. Things can be scaled down, with the same harnesses used for your development environment, your CI environment, and your prod environment. You don't need a full prod db; you need a facsimile mirroring the real thing at 1/50th the size.

Yes, there will always be special exemptions: they suck, and we suffer as developers because we cannot replicate a prod-like environment in our local dev environment.

But I laugh when I join teams and they say that "our CI servers" can run it but our shitty laptops cannot, and I wonder why they can't just... spend more money on dev machines? Or perhaps spend some engineering effort so they work on both?


> You don't need a full prod db, you need a facsimile mirroring the real thing but 1/50th the size.

My experience has been that the problems in CI systems come from exactly these differences: “works on my machine”, followed by “oops, I guess the build machine doesn’t have access to that random DB”, or “docker push fails in our CI environment because of credentials/permissions, but it works when I run it on my machine”.


> It's not an unachievable dream. It's a trade-off made by people who may or may not have made the right call.

In my experience at work, anything that demands too much thought, collaboration between teams, and enforcement of hard development rules is always an unachievable dream in a medium to big project.

Note that I don't think it's technically unachievable (at all). I've just accepted that it's culturally (as in work culture) unachievable.


Sometimes the problem is that the project is bigger than it needs to be.

Funnily enough, the LLMs are allowed to run builds on your local machine. The humans, not anymore.

But it isn't a question of security. The project would very much like the developers to be able to run the pipelines on their machines.

It's just that management don't see it as worth it, in terms of development cost and limitations it would introduce in the current workflow, to enable the developers to do that.


> But it isn't a question of security.

Where did I mention security?

> in terms of development cost and limitations it would introduce in the current workflow

Well said. "in the current workflow". As in, not "in the development process". Those are unrelated items.


Why not just use the best-known Emacs Lisp core, then? Like, say, Emacs.

To allow it to run on other lisp dialects as well.

(I’m just trying to defend GP’s point – I’m not a heavy lisp user myself, tbh.)


Portability across Lisp dialects is usually not a thing. Even Emacs Lisp and Common Lisp, which are arguably pretty close, rarely if ever share code.

You could make a frontend for dialect A to run code from dialect B. Those things have been toyed with, but never really took off. E.g., the cl library in Emacs cannot accept real Common Lisp code.

I'm not arguing against the idea, I'm just curious how it would work because I see no realistic way to do it.


Gotcha. Too bad – I was hoping there was at least some (non-trivial) subset you can run on both :(

Any idea why it is not a thing? Is this level of interop not practical for some reason?


Lisp dialects have diverged quite a bit, and it would be a lot of work to bridge the differences to a degree approaching 100%. 90% is easy, but only works for small trivial programs.

I say this, having written a "95%" Common Lisp for Emacs (still a toy), and having successfully run an old Maclisp compiler and assembler in Common Lisp.

https://github.com/larsbrinkhoff/emacs-cl

https://github.com/PDP-6/ITS-138/blob/master/tools/maclisp.l...


It does indeed. It's used in several games.

https://www.masteringemacs.org/article/fun-games-in-emacs


I completely forgot about Tetris and its nice visuals! Thanks, will see if I can use the library for ElCity.

I think misinterpreting the "el" as Spanish is fun. In that vein, your game could be called ElCiudad.

Somebody somewhere suggested doing a clone of Tropico called ElPresidente, which is even cooler.

Btw, Lars, you have endlessly more experience in Elisp than I do. Do you maybe have any ideas/directions on how to make the graphical mode look a bit more decent and snappy?


Sorry, I don't know anything about Emacs graphics. Some people confuse me with larsi, but I'm not that guy.

SoftICE gang represent :-)

That's a skill unto itself. The general stuff does not fade, or at least it comes back quickly, but there's a long tail that's just difficult to recall because it's obscure.

How exactly did I hook Delphi apps' TForm handling system instead of breakpointing GetWindowTextA and friends? I mean... I just cannot remember. It wasn't super easy either.


> No surprise, really. You can use AI to explore new horizons or propose an initial sketch, but for anything larger than small changes - you must do a rewrite. Not just a review. An actual rewrite. AI can do well adding a function, but you can't vibe code an app and get smarter.

Sometimes I wonder if people who make statements like this have ever actually casually browsed Twitter or reddit or even attempted a "large" application themselves with SOTA models.


You can definitely vibecode an app, but that doesn't mean that you can necessarily "get smarter"!

An example: I vibecoded myself a Toggl Track clone yesterday - it works amazingly but if I had to rewrite e.g. the PDF generation code by myself I wouldn't have a clue!


That's what I meant: it's either/or. Vibe coding definitely has a place for simple utilities or "in-house" tools that solve one problem. You can't vibe code and learn (if you do, then it's not vibe coding as I define it).

Did I say that you can't vibe code an app? I browse reddit and have seen the same apps as you did, I also vibe code myself every now and then and know what happens when you let it loose.

Is the professional _just_ a translator, or an expert in translation _and_ the domain? The latter is preferable; as for the former, I'd trust Gemini or Claude.


Perfect summary. I'll add: insane defaults that'll catch you unaware if you're not careful! Like foreign keys being opt-in; sure, it'll create 'em, but it won't enforce them by default!


Is it possible to address some of these limitations by building DBMSes on top of SQLite, fixing the sloppiness around types and foreign keys?


Using the API with discipline goes a long way.

Always send "pragma foreign_keys=on" first thing after opening the db.

Some of the type sloppiness can be worked around by declaring tables STRICT. You can also add CHECK constraints that a column value is consistent with the underlying representation of the type -- for instance, if you're storing IP addresses in a column of type BLOB, you can add a CHECK that the blob is either 4 or 16 bytes.
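Both tricks can be sketched with Python's built-in sqlite3 module (the table names and the IP-address CHECK here are illustrative, not from the comment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Foreign key enforcement is OFF by default in SQLite;
# it must be enabled per-connection, first thing after opening.
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE child (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent(id),
        -- only IPv4 (4-byte) or IPv6 (16-byte) blobs are accepted
        ip        BLOB CHECK (length(ip) IN (4, 16))
    )
""")

# With the pragma on, a dangling foreign key is actually rejected:
try:
    conn.execute("INSERT INTO child (parent_id, ip) VALUES (99, x'7f000001')")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print("foreign keys enforced:", fk_enforced)
```

Adding `STRICT` after the closing paren of `CREATE TABLE` tightens the type checking too, though that keyword needs SQLite 3.37 or newer.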


Wait until you learn what Apple charged for their incredibly basic iPods in the 2000s.


That's generally how it works in most editors that support both.

Tree-sitter has okay error recovery, and that, along with its speed (as you mentioned) and its flexible query language, makes it a winner both for quickly iterating on a working parser and, obviously, for integration into an actual editor.

Oh, and some LSPs use tree-sitter to parse.


Tree-sitter is great. It powers Combobulate in Emacs. Structured editing and movement would not have been easily done without it.


Hey Mickey! Thanks for all the stuff you've made in the Emacs space. Thanks for commenting here. :)


Thanks for the kind remarks, Ashton :)

