Hacker News | Valodim's comments

I also did that with git, but there's no comparison in ergonomics. For instance, "move this hunk two commits up" is a task that makes many git users sweat. With jj it barely registers as a task.

You sweat because you are working with the CLI. Git is intrinsically "graphical". Use a good GUI client or higher-level interface (maybe jj) to manipulate git graphs: stop worrying about "how" (i.e. wrangling with the CLI to achieve what you want) and focus more on "what".

GitButler from OP also allows you to do this incredibly easily. This and stacked commits are IMO their main selling points.

> For instance, "move this hunk two commits up" is a task that makes many git users sweat.

Citation needed. You split the commit any way you like, e.g. with the mouse, with cursor movements, or by duplicating and deleting lines. Then you move it with the mouse or cursor or whatever and squash it into the other commit. Maybe some people never intend to do it, but those probably don't want to learn jj either. I guess there's some selection bias here: those who care about history editing are also more likely to learn another VCS on their own.


I'm confirming the sentiment is accurate. Background: using Git (involuntarily) since 2010, did my fair share of reading its source, put honest effort into reading its man pages. So: Jujutsu _is_ a revelation, and I'm moving to it every time I'm able to. The git repository stays the same; it's just that jj runs it now.

If you ever tried to keep multiple WIP features merged in a Git working copy, I have great news — with jujutsu the complexity of the workflow grows at most linearly with the number of branches, if at all: it's almost trivial. Otherwise I very much encourage you to try — in and of itself the workflow is extremely effective, it's just that Git makes it complex af.


I'm one of the git users who would sweat. Can you explain a bit (or link relevant docs) how I might split a commit up and move it?

Here are two "raw" methods:

1. Use "git rebase -i commitid^" (or a branch point, tag, etc.), ideally with the editor set to Magit. Mark that commit as "edit" (single key 'e' in Magit) and let the rebase run until it stops there. Then do "git reset -p HEAD^" and select the hunks you want to remove from the first commit, "git commit --amend", then "git commit -a" (add -c if useful, e.g. to copy the author and date from the previous commit), then "git rebase --continue" to finish.

2. Same, but use "git reset HEAD^" (add -N if useful), then "git add -p" to select the hunks you do want to include in the first commit.

Afterwards you can do the "git rebase -i" command again if you want to reorder those commits, move them relative to other commits, or move the split-out hunks into another existing commit (use the 'f' fixup or 's' squash rebase options).

After doing this a few times and learning what the commands actually do, it starts to feel comfortable. And of course, you don't have to run those exact commands or type them out, it's just a raw, git-level view. "git rebase -i" and "git add -p" / "git reset -p" are really useful for reorganising commit hunks.
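As a runnable sketch of method 2 in a throwaway repo (all names here are made up for illustration, and separate files stand in for the hunks you'd normally pick interactively with "git add -p"):

```shell
# Demo: split the tip commit in two (method 2, on HEAD so no rebase needed).
# In real use you'd run "git add -p" to pick hunks; here whole files stand in.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'one\n' > f1
printf 'two\n' > f2
git add f1 f2 && git commit -qm "base"
printf 'one more\n' > f1
printf 'two more\n' > f2
git commit -qam "combined"       # the commit we want to split

git reset -q HEAD^               # un-commit; both changes return to the working tree
git add f1                       # stage only the part for the first commit
git commit -qm "first half"
git commit -qam "second half"    # everything left over
git log --oneline                # second half / first half / base
```

When the commit to split is further down in history, the same steps run inside a "git rebase -i" stop, as described above.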


Yeah, I mostly do it like that. I don't use Magit (yet? Haven't got the motivation to learn Emacs or find a good tutorial for it.), but instead select the lines to stage or unstage with the cursor/mouse in my Git GUI. Also, depending on what I want the commits to look like, I duplicate the pick line in the rebase todo first (and potentially move it).

On an unrelated note, I use @~ instead of @^, because I think of moving up/down the ancestry, not sideways; e.g. I'm more likely to want to change it to an older/newer commit than to want the second parent instead. I don't get why most tutorials show it with @^: the focus is on the commit being an ancestor, not precisely on it being the direct first parent, although of course for a single step they amount to the same commit.
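For anyone unsure of the distinction: the two only diverge at merges (in git revision syntax, `@` is itself a shorthand for `HEAD`). `~n` walks n steps up the first-parent chain, while `^n` picks the n-th parent of a single commit. A throwaway demo:

```shell
# ~n follows first parents n steps; ^n selects the n-th parent of a merge.
# For a single step, HEAD~ and HEAD^ name the same commit.
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
git commit --allow-empty -qm "base"
git checkout -q -b feature
git commit --allow-empty -qm "feature work"
git checkout -q main
git commit --allow-empty -qm "main work"
git merge -q --no-ff -m "merge" feature

git rev-parse --short HEAD~1 HEAD^1   # identical: first parent ("main work")
git rev-parse --short HEAD^2          # second parent ("feature work")
git rev-parse --short HEAD~2          # grandparent via first parents ("base")
```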


It's already well explained in a sibling comment, but on a more conceptual level: while commits are often interpreted as diffs on the fly, a commit is a single (immutable) snapshot. In these terms, "splitting a commit" amounts to introducing an intermediate snapshot. With that in mind, it should become clear that using Git you create the snapshot by working from the previous or next commit (whatever suits you more), bringing it to the state you want, and committing. (In theory you could create that intermediate snapshot from any commit, but likely you want to do it from one of the direct neighbors.)
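You can see the snapshot model directly in a throwaway repo (names made up for illustration):

```shell
# A commit object stores a pointer to a full tree snapshot, not a patch;
# "git diff" derives the patch by comparing two snapshots on demand.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'hello\n' > f
git add f && git commit -qm "first"
printf 'hello\nworld\n' > f
git commit -qam "second"

git cat-file -p HEAD        # shows "tree <hash>", parent, author - no diff stored
git diff HEAD^ HEAD         # the diff is computed between the two snapshots
```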

This was probably near the breaking point before; it just needed an idiot to catalyze it.

I had a similar emotional outburst: after contributing hundreds of hours to Stack Overflow, when I asked a question of my own, instead of answering an objective yes/no question, people just argued with me in the comments about why I could possibly want to do whatever prompted me to ask my question. I deleted my account and quit contributing to that site right then and there. I think I was just looking for an out, and it was ultimately a good thing.

No idea if this is the case here, but I hope the author sticks with this decision. Although, looking at https://github.com/nvim-treesitter/nvim-treesitter/graphs/co... , it doesn't look like he started this project, so I'm not sure it's his place to archive it.


If you had the option to also delete all your contributions to the site, would you have done it?

If you had the option to exclude only certain people (e.g. those who argued with you) from seeing/using your contributions, would you have done it instead of deleting your account?

I am asking because I too have been burned, and it's very commonly how an open source contributor's journey ends. So I've been toying with the idea that contributors should be able to exclude certain people, or perhaps even groups of people, from using their work.

Basically "I give away my work for free for anyone to use and build upon but if you don't appreciate it, if you treat me like shit, if you do any of X Y Z which hurts me or other people, then you're no longer allowed to use it".


i understand the sentiment, but the nature of FOSS is that i can't really prevent anyone from using it. i'd have to police it, and that would just lead to more misery.

i too contributed to stackoverflow and eventually stopped because it didn't feel worth the effort. i never asked a question though, so i didn't have the experience GP had, but i doubt i would want to delete everything, at least not without moving all my answers to another location.

once or twice when searching for the solution to a specific problem i was led to a stackoverflow question and discovered that the answer that solved my problem was my own from a decade earlier. so i too benefit from posting answers. deleting them would reduce that benefit.


> the nature of FOSS is that i can't really prevent anyone from using it

That's my point - maybe FOSS isn't the absolute good we've been led to believe.

It was a response to locked-down proprietary software which increasingly became hostile to its users. And it is (from a user's perspective) better than that, for sure. But from a dev perspective, it's not as good as it could be.

> my answers

Exactly, those are your answers, your work. We've spent a lot of our limited time working for other people's benefit because we believed in it, or sometimes because it was fun. But ultimately, it's becoming clear other people don't care and will throw us under the bus as soon as we're no longer useful. And then there are people who are just looking for a way to take advantage of us.

And I want to exclude both from benefiting from my work.

We should strive to find methods that let good, productive, pro-social people benefit while keeping anyone who wants to exploit us away.


Getting free stuff is good for the user of the stuff, yes. Giving away stuff for free might not feel good if you don't like the people you're giving the stuff away to, yes.

People aren't "taking advantage" of you by benefiting from the free work that you voluntarily do. They may be rude towards you, but it's your choice to work for them or not.

If you release your work to the world, there's no license agreement in existence that will prevent "undesirables" from benefiting from your work. See: all of the AIs being trained on publicly accessible code (regardless of its license).

The answer is just, do write open source code if you think it's fun, and you're okay with the worst people you can imagine using your code. If you write a geodata library, it might be used in a targeting module for a bomb, which might in turn be launched towards civilians. That's just a consequence you'll have to accept.


Do you believe in ownership of physical property?

Surely you have to understand that you own a plot of land, a house, the number in your bank account or the clothes on your back only to the extent that somebody is willing to perform violence on those who want to use "your property" for themselves. That might be you yourself but you can't be everywhere at once and you can't be awake all the time either. That protection comes from mutual agreement of people to defend each other's properties, usually through some institution such as the police/army/state.

Why should intellectual property be any different?

Why should I not be able to make an agreement with people like me that we only allow certain people to use our work under certain conditions and if any one of us violates the agreement (or an outsider decides to ignore it) we use violence to stop and punish that use?

> the people you're giving the stuff away to

Not giving it to them; they are taking it. I am making it available with instructions about who can use it and how. Some people take it following those instructions, some take it ignoring them. Would you use the word "give" if it was about leaked source code? What about leaked nudes of your girlfriend or daughter?

> See: all of the AIs being trained on publicly accessible code (regardless of its license).

That's a circular argument.

LLM companies claim what they're doing is legal. At best they're using a loophole: statistical interpolating autocompleters did not exist when copyright law was being written, and I doubt many people could conceive of them at that time. At worst they are actively and knowingly violating the law, not to mention the consent of the best, most altruistic people in the world, to exploit them and bring about a new era of inequality and oppression.

Anyway, just because somebody gets away with something does not make it legal and certainly does not make it right.

> That's just a consequence you'll have to accept.

Or I can build both social and technical means to control the usage. Nothing is perfect but then if you want perfect, why do you lock your car or home?


> doesn't look like he started this project, so I'm not sure it's his place to archive it.

This is a very valid point. It indeed looks like it was done in the heat of the moment rather than after careful discussion with the (at least) ten members of the nvim-treesitter org.


This is a common issue with tooling used by open source.

Either you alone own the repo, but then you're a single point of failure. Or you give those perms to others, but then any one of them can abuse them (or get hacked).

I'd like to see tooling which requires consensus or voting to make certain changes such as archiving a repo or publishing a new release.


If you read that as "we'll break the law for you", it's a you problem.

I read it as a commitment to do something, but I see nothing that comes close to matching it.

You make it sound like there is an obvious solution to this. So what is it? No changes ever? Make 20 year experiments before rolling out any change at scale? Hold decision makers personally accountable for billions of GDP loss? Compensate the cohort monetarily for the generational inconvenience?

For some things there just is no easy way.


> Make 20 year experiments before rolling out any change at scale?

Basically. It wouldn't require a 20 year experiment probably. Looking at whole words vs phonics as an example, you'd get a handful of schools to participate and they'd try phonics in one class and whole word in the other. By the time the kids were in 2nd grade the fact that whole word learning wasn't working and that a higher rate of kids needed remedial lessons to catch up would have been obvious. And if it had worked really well you'd expect to see that performance improvement in reading by 2nd grade too!

So the experiment would take 3 years. Though then you'd probably want a larger-scale experiment. I'd think if things were going well once kindergarten finished, you could probably start involving more schools in the experiment the next year. So like 3-6 years altogether.

We have been successfully educating kids for a long time; if we want to mix things up with some fancy new pedagogy we should absolutely be studying if it actually works before rolling it out at scale!


Err, no, it's actually really easy. Just give them a choice in the matter first. You do realize you're arguing for toying with people's futures based merely on effectively unproven methods because you just feel like doing it? Futures that have inherent value because the people being toyed with are inherently valuable too. But no, it's totally fine to not be held accountable for your actions. I mean, what's the real damage done here? Think about how expensive it'd be, or, or, how long it'd take if not! That surely justifies just going to town on people, right? :/

That's essentially what you're arguing. Perhaps not what you intended, but it is what it becomes given the context, and more importantly, the people involved that you disregard so callously.

If it's so darned expensive to do, have you considered that you have the free will and intellectual sophistication required to just . . . not do it? If it'd be so expensive to recuperate a group of people, either your methods have too high a probability of requiring it, or your method is just not ready yet if the potential end results are that disastrously bad. Either way, it points towards going back to the drawing board instead of to town.

But if it's oh so difficult to get these studies done, you know what you can do? You can do it over longer periods of time, just like you bemoan, because that larger time scale will stop you from ruining other people for your own curiosity of will x work in y. You could give people the choice to join the study, you could have smaller cohorts every time and refine the process as you go, you could keep each cohort limited to a year or two to avoid long-term damage, and you could test in different age ranges to get more data.

The list goes on and on and on. Almost like studies on people require larger caution than just testing to see what works without any precautions and going from there. When learning about the scientific method, the idea that people are, you know, people and not test subjects is pushed constantly. Because certain people sadly need that reinforced to avoid being callous researchers. It's oh so easy to forget the numbers you toy with are real lives with real value regardless of what is done with those lives.

We trade immediate results and dubiously better efficiency for larger time spans exactly so that we can ensure the people in them remain protected. Giving people choice in the matter, and letting guardians weigh the value proposition (like other studies have done successfully) by giving them the prerequisite information required to make those decisions, allows for a higher likelihood of avoiding disastrous effects on those very same people. It's not "generational inconvenience" when lives are affected for multitudes of years; it's callous impatience. It's not "no change ever," it's respect for the people involved in attempting those changes and respect for the potential ramifications of those changes. It's borderline evil to disregard people because you, and I do mean you here, don't have the patience to ensure people's safety because, oh no, it'll take a while, or cost a lot if you're held accountable.

Rather, it's okay that things take time, it's wanted that we don't make haste. Because haste makes waste. Because we don't need immediate results. Because we're not working with machines, we're working with the single most valuable thing we have on this earth; a human life. Have some compassion for those people, and you'll find that change doesn't take so long after all.


That's a lot of words reiterating how intensely important the matter is. I agree, it is. But your suggestions are either doing nothing out of caution, vast fragmentation, or too small numbers to really see the effects at scale.

Mostly it's a question of middle ground for an acceptable scale of decision, but "only change something if we know for a fact it's purely beneficial" is not a realistic plan no matter how intensely important the matter is. At some point decisions have to be made.

This is one of those things that becomes harder and more entrenched the less democratically legitimated those decisions are. I think it's not unlikely that the difference in expectations between us boils down to a generally different level of trust in authority.


The mv3 problem was never about "does it work now". It was about "can it keep up". Ad blocking is a cat and mouse game, and the mouse is kneecapped now. You're being slow boiled.

Well said. I'm glad that ad blockers have managed to develop effective approaches under Mv3, but it took a tremendous amount of engineering effort that was only necessary because Google was trying to impose these very large costs on them.

There is a FAQ entry about that in TFA. The main differences are the use of rsbuild (not a big difference down the line, I expect, since vite uses rolldown now) and a design that accommodates agentic LLM development.


I did read that LLM-generated paragraph, but I was thinking there was some architectural difference / some improvement to existing tooling.

I don't mean to sound too dismissive but just slapping on rsbuild and well formatted errors + llms.txt doesn't seem that useful?


Yes, the overall framework design is different. This isn't just a superficial vibe-coding project (although AI was used in its development), but rather the result of my long-term experience developing browser plugins. Incidentally, I'm also the author of the browser extension Video Roll (https://videoroll.app). The main differences are as follows:

1. First, it's based on Rsbuild. If you install it via `pnpm create addfox-app`, you can quickly integrate unit tests (Rstest) and analytics reports (Rsdoctor), and use them simply with `addfox test` and `addfox dev --report`.

2. The extension supports three paradigms for entry point identification: first, automatic file-based identification (which needs to follow AddFox rules); second, writing the source file path directly in the various fields of the manifest (e.g. `background.service_worker: 'myBackground.ts'`); and third, custom entries.

3. For automatic browser startup, a default address is set for most Chromium-based browsers (if no custom address was selected during browser installation), so you don't need to configure it to start automatically (supports Vivaldi, Opera, Arc, Yandex, etc.).

4. Using the Rsbuild plugin, if --debug is enabled, error monitoring code will be injected in dev mode, which outputs errors from content_scripts and the background to the terminal, making it easy to copy them straight to the AI without having to open the browser's devtools.

5. llms.txt/meta.md/error.md are generated, containing basic source information for the project, including the entry file, versions used, framework, etc. These files are useful if you develop with an Agent in conjunction with Skills.

I agree that Vite and Rsbuild are just choices of build tools, and the difference in development experience may not be significant.

AddFox is not a perfect framework and is still in a very early stage. WXT and Plasmo are both excellent; you just need to find the one that suits your needs. Thank you very much for trying it and providing feedback.


Yes. But disk space isn't exactly the most valuable resource you have as a developer/power user


Depends on whether those businesses want to do business with the EU


Kagi is unfortunately in a tough spot, imo.

I'm a happy subscriber, and it's certainly a big improvement over Google search. But the internet just isn't the same place it was five years ago. And as search results (for non-navigational queries) are becoming less useful by the day, I find myself asking AI to do it for me more.

There's a lot to like about Kagi, but they'll probably have to reinvent themselves if they want to grow beyond the niche that high level internet search will probably become.


Why though. Why does it need to grow beyond a niche premium option? As long as they’re paying the bills and everyone is happy why not just let a good thing keep chugging along.


Amen. "Growth" is literally product cancer


Investors want to see growth, chugging along is not the return they expect. It would be nice tho.


Sounds like a poisoned chalice.


As far as I recall, the founder and angels have put in almost as much money as VCs. Certainly an interesting funding story.


Agreed. I LOVE Kagi as a search engine - so long as it answers queries in under 2s with no ads, I'm a very happy customer. I don't mind if they flirt with LLMs, but if the LLM work detracts from the search work they will lose me as a paying customer. If the LLM work slows down search results I also lose the only thing I pay for - search result response time and correctness.


I always thought the browser, coworking space (https://hub.kagi.com/), and mail (https://kagimail.com/) are a distraction.

I've been a paying customer since 04/2022, and have the early adopter badge. I was easily doing 600-800 searches/month, and now I do 300-400. I think that's the reality. More and more people are asking ChatGPT or whatever instead of searching.


"Asking AI" is doing a lot of work there.

The people who pay for Kagi do so for very specific reasons, often because they know what "asking AI" really means for their privacy.


I didn't renew my Kagi subscription, as I am now mostly using AI based search (google, chat.bing.com, perplexity). Search engine wise, Kagi was superior but it is just that traditional search engines are less and less needed with the rise of AI.

BUT Kagi is in a good spot, as they have their user data (and the feedback/upvote/downvote/blacklist feature) to train their own models on. Maybe their AI will one day be a superior search. Especially when the big ones like Google will start to enshittify the free AI tier with ads, or SEO-like AI manipulation on Google will take off.


In other companies they don't make this explicit during the interview, so something is different

