Hacker News | ryanjkirk's comments

You definitely shouldn't be storing logs on giant PVCs; they should be ingested into a log aggregator.


yes of course, but what i meant is that it's not uncommon to see elasticsearch instances which themselves need huge pvcs etc. i was just abbreviating to make a point =)

What I was trying to say is just that the scale of data we work on has changed, and hence "big data" became "data", and what is now considered "big data" really would have been called insane just 10 years ago.


"For Basecamp, we take a somewhat different path. We are on record about our feelings about sharding. We prefer to use hardware to scale our databases as long as we can, in order to defer the complexity that is involved in partitioning them as long as possible — with any luck, indefinitely."

— Mark Imbriaco, 37Signals

These quotes are from 2009 and 2010, and yet here most of us are in 2022, having learned the lesson the hard way over the last decade that there is no refuting this simple logic. I'll add my own truism: All else being equal, designing and maintaining simpler systems will always cost less than complex ones.

quote references:

https://signalvnoise.com/posts/2479-nuts-bolts-database-serv... http://37signals.com/svn/posts/1509-mr-moore-gets-to-punt-on...


Don't forget evolution: businesses do change. A simple system is far easier to evolve than a complex one, too.


You're half right. The "e" in "bre" is pronounced like the e in pen or bed, not like bray.


And here I thought the schwa just wreaked havoc in English.


Outside of my home country of New Zealand, I think very few English speakers pronounce either "pen" or "bed" with a schwa. (actually, even we don't)


It's not.


How many "darknet markets" accept anything else these days? What's the alternative, cash in the mail? Venmo?


I'd be incredibly grateful if someone could break this down for me.


They're suggesting you enter into two financial derivatives contracts: specifically, options on the level of the S&P 500 index as it will be in December 2022.

The first transaction is to buy a so-called put option struck at $3625. This means that if the level of the S&P is below the strike price ($3625) on the 16th of December, you make the difference between the level then and the strike price. Buying this contract costs you some money, called the premium.

They're also suggesting selling a similar contract struck at a lower price ($3600). It will have a similar payoff to the first one, but for the buyer of the contract: you'll lose the difference between the strike price and the S&P level if the S&P is below $3600 in December. You receive a premium from the buyer for this trade. That premium will be a little less than the cost of the option you bought, because the strike price is further from the current market level.

The reason for combining the two is that, for most future levels of the S&P, gains and losses largely net out. You'll lose a bit of money (the difference between the premia) if the level doesn't go below $3625, but you'll gain a bit more if it does ($25 minus the difference between the premia, if I recall correctly).

The current level is about $4470, so the premia for both should be quite low, as the strike prices are a long way from the current level. We call those out-of-the-money options. As such, the difference between the premia will also be very small.

In summary, they're suggesting a pretty cheap way to make money if the S&P tanks by Xmas. And you would lose (relatively) little if it didn't tank.
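The payoff of that put spread is easy to sanity-check numerically. A minimal sketch, using the strikes from the comment; the $2 net premium is a made-up illustrative number, not a real quote:

```python
def put_payoff(level, strike):
    """Intrinsic value of a put at expiry: max(strike - level, 0)."""
    return max(strike - level, 0.0)

def spread_pnl(level, long_strike=3625, short_strike=3600, net_premium=2.0):
    """P&L of buying the 3625 put and selling the 3600 put.

    net_premium is the up-front cost of the pair (illustrative only).
    """
    return (put_payoff(level, long_strike)
            - put_payoff(level, short_strike)
            - net_premium)

for level in (4470, 3625, 3610, 3600, 3000):
    print(level, spread_pnl(level))
# 4470 -> -2.0 (lose only the premium)
# 3610 -> 13.0
# 3600 and below -> 23.0 (capped at $25 minus the premium)
```

Note the loss is capped at the net premium and the gain is capped at the $25 gap between the strikes, which is what makes the trade cheap.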

Do not consider this to be investment advice. I reserve the right to have completely forgotten the options 101 I did 20-odd years ago.


I don't think they're suggesting to actually make those trades, but trying to show what the market thinks the probability of a sizeable crash is.


That’s correct. It’s pretty infrequent that I assume I know better than the market is pricing.


Docker is heavier (and more dangerous) because of dockerd, the management and api daemon that runs as root. Actual process isolation is handled by cgroup controls which are already built into the kernel and have been for years. You can apply them to any process, not just docker ones.

However, Docker is essentially dead; the future is CRI-O or something similar, which has no daemon and runs as an unprivileged user. You still get the flexibility and process isolation, but with better security.


All the so-called "docker killers" are essentially unfinished products. They don't compare 1:1 to docker in feature set, and even if they run rootless, they're still vulnerable to namespace exploits in the Linux kernel. Though docker runs as root, it's still well protected out of the box for the average user and is a very mature technology.


Are you from 2018? Everyone running OpenShift is using CRI-O, and that footprint is not small. We made the switch in our EKS and vanilla k8s clusters in 2021. Docker has now even made its API OCI-compliant in order not to be left behind. And the point is that most people don't want docker, feature for feature, running in prod. The attack surface is simply too large. I don't need an API server running as root on all my container hosts.

Use docker on your laptop, sure. Its time in prod is over.


Agreed. Tons of obsolete assumptions in this thread. We have been using Podman / OpenShift in production and never ran into a use case where Docker was needed.


One of the biggest benefits of k8s for me, back in 2016 when I first used it in prod, was that it threw away all the extra features of Docker and implemented them directly by itself, better. The writing was already on the wall that docker would face stern competition from runtimes without all of its accidental complexity (rktnetes and hypernetes were a thing already).


Kubernetes used to (tediously) pass everything through to Docker, but since 1.20, that's resolved, and it now uses containerd.


Not everything: for a bunch of things, the actual setup increasingly happened outside docker, and docker was just informed how to access it, bypassing all the higher-level logic in Docker.

1.20 is when docker mode got deprecated, IIRC, but many of us were already happily running on containerd for some time.


I thought it was moving to CRI-O as well, instead of containerd. Or is k8s just not using docker, with containerd still supported in the future?


CRI-O is the target and containerd is the most common runtime implementing it at the moment.


Are you sure? This isn't my subject area, but CRI-O looks like an alternative to containerd: it implements an OCI-compliant runtime like containerd does. And then there is a third, Docker Engine, which is the one being dropped.


Sorry, I mixed up CRI and CRI-O. The roadmap is to remove dockershim (the interface to docker-compatible container runtime) and use only CRI - of which containerd and CRI-O are two compatible implementations.
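Concretely, the kubelet selects its CRI implementation via the runtime endpoint it's pointed at. A sketch of the two options; the socket paths below are the common defaults and vary by distro:

```shell
# containerd as the CRI implementation
kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# CRI-O as the CRI implementation
kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```

Either way, the kubelet speaks the same CRI gRPC protocol to whatever is listening on that socket, which is why the two are interchangeable.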


Kubernetes has removed docker, so I think that's basically it from a large scale perspective.


Yep. Mirantis has announced intent to continue maintaining the docker shim (which allows k8s to talk to docker programmatically), but I can't imagine many people switching the default to docker unless they are manually installing k8s on their nodes, which used to be common but no longer is.


Up until recently, I was the steward of a large, distributed, and profitable app that used no containers. The infrastructure was managed with packages and puppet, and it worked well.


Folders work perfectly in a pure hierarchical taxonomy. Many classifications defy such a rigid structure, however. For example:

Widget 1: it is A and B but not C, so tag it with A and B.

Widget 2: it is A and B and C, so tag it A, B, and C.

Widget 3: it is B only, so tag it B.

That is pretty simple, but you couldn't represent that in a folder system without permutation folders, meaning you now have folder sprawl, making things harder to find.

This is how servers and EC2 instances are for almost everyone: billing codes, environments, teams, business units, etc. A folder taxonomy to replace EC2 tags would be a nightmare.
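The widget example above can be sketched with tags as plain sets; the widget names and tag letters are from the comment, everything else is illustrative:

```python
# Each widget carries exactly the labels that apply to it.
widgets = {
    "widget1": {"A", "B"},        # A and B but not C
    "widget2": {"A", "B", "C"},   # A, B, and C
    "widget3": {"B"},             # B only
}

def find(required):
    """All widgets whose tag set includes every required tag."""
    return sorted(name for name, tags in widgets.items() if required <= tags)

print(find({"A", "B"}))  # ['widget1', 'widget2']
print(find({"B"}))       # ['widget1', 'widget2', 'widget3']
print(find({"C"}))       # ['widget2']
```

A folder tree would need a directory per tag combination (A/, A+B/, A+B+C/, ...) to answer the same queries, which is exactly the permutation sprawl the comment describes.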


Folders have the weakness you describe. But that weakness is also a strength: folders are a form of encapsulation which, for example, discourages or prevents processes in one folder from destroying data in another folder (accidentally or maliciously). With just tags, that is a bigger danger.


That would be a hard link, though they can't cross filesystem boundaries.
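The hard link vs. symlink distinction is easy to demonstrate. A minimal sketch, assuming a typical Linux filesystem (all paths are throwaway temp files):

```python
import os
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "original")
with open(orig, "w") as f:
    f.write("data")

hard = os.path.join(d, "hardlink")
soft = os.path.join(d, "symlink")
os.link(orig, hard)     # hard link: another name for the same inode
os.symlink(orig, soft)  # symlink: a pointer to the path, may cross filesystems

# Both names share one inode.
print(os.stat(orig).st_ino == os.stat(hard).st_ino)  # True

# Removing the original leaves the hard link intact but dangles the symlink.
os.remove(orig)
print(open(hard).read())     # data
print(os.path.exists(soft))  # False (follows the now-broken link)
```

This is also why hard links can't cross filesystem boundaries: an inode number is only meaningful within one filesystem.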


Our current CloudFormation pipeline template is over 5,000 lines of yaml. That's probably on the high-end, but based on your statements, it sounds like you haven't seen a corporate cloud footprint.


I recommend using a tool that lets you break that up, something like Terraform.

