In this case, the advantage is operators for running Postgres.

With Docker Compose, the abstraction level you're dealing with is containers, which means in this case you're saying "run the postgres image and mount the given config and the given data directory". When running the service, you need to know how to operate the software within the container.

Kubernetes at its heart is an extensible API server, which lets so-called "operators" define custom resources and react to them. In the given case, this means that a Postgres operator defines, for example, a PostgresDatabaseCluster resource and contains control loops to turn these resources into actual running containers. That way, you don't necessarily need to know how Postgres is configured or that it requires a data directory mount. Instead, you create a resource that says "give me a Postgres 15 database with two instances for HA fail-over", and the operator then goes to work and manages the underlying containers and volumes.
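For illustration, such a resource might look roughly like this (the kind and field names here are made up; real operators such as Zalando's postgres-operator or CloudNativePG define their own schemas):

    apiVersion: databases.example.com/v1
    kind: PostgresDatabaseCluster
    metadata:
      name: my-database
    spec:
      version: "15"   # desired Postgres major version
      instances: 2    # primary plus standby for HA fail-over
      storage: 10Gi   # the operator provisions the matching volumes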

Essentially, operators in Kubernetes allow you to manage these services at a much higher level.


> How will Google force Android users to "update" so sideloading can be prevented

Google has Google Play Services, which can be updated remotely via the Play Store, as was done for the COVID exposure notification system [0]. Google's Play Protect already hooks into the installation process and could be updated to enforce the signatures.

[0]: https://en.wikipedia.org/wiki/Exposure_Notification


Not the original commenter, but I've used https://github.com/wal-g/wal-g before for this and had a good experience with it.

If you haven't done so already, I'd highly recommend reading the postgres documentation about continuous backups before setting it up, as it teaches the fundamentals: https://www.postgresql.org/docs/current/continuous-archiving...
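For a rough idea of how wal-g fits into that model (a sketch; the storage location and credentials come from environment variables such as WALG_S3_PREFIX, see the wal-g docs):

    # postgresql.conf: ship every completed WAL segment to the archive
    archive_mode = on
    archive_command = 'wal-g wal-push %p'

    # plus periodic base backups, e.g. from cron:
    wal-g backup-push "$PGDATA"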


But isn't that what makes it so absurd? The people that this supposedly targets will then become "nerds" and use PGP for their messaging, while the majority of people not discussing illegal activities will just suffer from worse security.


I expect that a large portion of the actually – not supposedly – targeted demographic will still not care or know how to set up encrypted comms, and I guess the EP also expects them not to. If someone actually wants to evade CSAR, they probably would know how to (and if not, all the better).


Yes, but setting up e.g. users and pg_hba might be something you would need to research before doing your first Postgres deployment, even if you previously came from a managed Postgres service. Also, coming up with some sort of backup strategy would be a good idea.
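To give a flavour of what that research covers, a minimal first setup might look like this (role, database, and network range are made up for illustration):

    -- as superuser: a dedicated login role instead of using postgres for everything
    CREATE ROLE app LOGIN PASSWORD 'change-me';
    CREATE DATABASE app OWNER app;

    # pg_hba.conf: let that role connect from the app subnet only,
    # authenticated via scram-sha-256
    host  app  app  10.0.0.0/24  scram-sha-256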

But once you know these things, you could of course be faster.


This happened to one of my previous Xiaomi phones too. Apparently the flash storage degrades over time, which makes everything that uses I/O super slow. For instance, starting Firefox took almost a minute, but it was usable as soon as all needed data was available in RAM. This is a known issue in the respective community, and it sadly kills devices that would otherwise still be usable with the latest Android thanks to custom ROMs.



At the Hetzner Summit a few weeks ago, they presented the servers used for this.

I am not sure about the exact specifications anymore, but they are building dedicated servers for this with custom chassis, where each server has a ton of drives, and I think each had 40 Gbit/s networking. These are special servers that are not available to customers directly.


Yes, but it's built around musl libc instead of glibc, which causes compatibility problems with some programs.
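A classic symptom (details vary by distro): a dynamically linked glibc binary typically fails on a musl-only system with a misleading error, because the dynamic loader the binary requests doesn't exist there:

    $ ./hello-glibc
    sh: ./hello-glibc: not found
    # the file exists; "not found" refers to its interpreter,
    # /lib64/ld-linux-x86-64.so.2, which musl-based systems don't ship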


I think this really boils down to how your team uses Git and which code review tool you're using. (I've never used Gerrit personally, but as far as I understand it, we wouldn't even be having this conversation there, since it aims to refine a single change by re-submitting a commit over and over again?)

For GitHub/GitLab reviews, I'm totally with you - this makes it more convenient for the reviewer to check that/how you've responded to feedback.

But now if you merge this without squashing, your Git history on the main branch contains all your revisions, which makes operations like bisecting or blame more complicated.

For me personally, the sweet spot is currently a mix of stacked commits and the PR workflow: Use a single commit as the unit of review, polish that commit using a PR, and use the commit messages within that PR to tell the story of you responding to feedback. Then, squash-merge that commit.
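In practice that can be as simple as this (a sketch, assuming GitHub's "Squash and merge" button; the branch name and messages are made up):

    git switch -c fix-timeout
    git commit -am "api: raise the request timeout"
    # open a PR, then record each round of feedback as its own commit
    git commit -am "review: make the timeout configurable"
    git push
    # merge via "Squash and merge", so main receives exactly one commit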

This provides both a good review experience and a tidy history. If you find a strange bug later and can't grasp how someone could have come up with that, the PR still has all the history.

Together with tools such as git-branchless, this also makes working on stacked PRs a breeze.


I have used standard GitHub and Graphite reviews. I tend to prefer what I mentioned in my original post over Graphite's stacked-PR reviews (which are essentially atomic commits).

Yes, I also advocate for squash-before-merge, so that a lot of little commits don't show up in the main history.

> For me personally, the sweet spot is currently a mix of stacked commits and the PR workflow: Use a single commit as the unit of review, polish that commit using a PR, and use the commit messages within that PR to tell the story of you responding to feedback. Then, squash-merge that commit.

To me, time spent on commit polishing (and dealing with conflicts) is time not spent on the product. Author comments on the PR review, squash-before-merge, and sitting together with the reviewer for big PRs seems a better compromise to me. I don't think a super-polished Git history is worth the extra effort for most types of product; as long as I can track a change down to a PR discussion, that is enough for me. From there I can track the PR's review commits individually if needed.

It is so uncommon for me to go digging into Git history that deeply; usually all I care about is "code behaving weird && line changed < 1 month ago, then probably a bug was introduced then".

Of course, if you are working on aviation software and the like, maybe the priorities are different. But I have spent way too much time dealing with rebase conflicts when trying to chop up my PRs into smaller commits. Dealing with these conflicts often introduces bugs too.


Why are people talking about stacked PRs/MRs? Shouldn't they be called queued? A stack is LIFO and a queue is FIFO.

(Of course in some special case you might want to merge a later one earlier, but I don't think that's the normal case people are talking about.)


Why is it a "Pull Request" instead of a "Push Request"?

Someone named it that way and it stuck.


You request others/the maintainer to pull. That was the only way before the forges. I guess GitLab's "merge request" is more descriptive.
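Git even ships a command for that workflow: git request-pull generates the message you would send to the maintainer (the repo URL and tag here are made up):

    # summarize everything since v1.0 that is published on my fork,
    # ready to be pasted into a mail to the maintainer
    git request-pull v1.0 https://example.com/my-fork.git main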

