> Cloudflare pushing PQ by default is probably the single most impactful thing that can happen for adoption. Most developers will never voluntarily migrate their TLS config. Making it the default at the CDN layer means millions of sites get upgraded without anyone making a decision
> cloudflare making pq the default is the only way we get real adoption. most devs are never going to mess with their tls settings unless they absolutely have to. having it happen at the cdn level is the perfect silent upgrade for millions of sites without the owners needing to do anything
> Incredible that we are regressing back to webrings and hand-curated lists like this
One of these hand-curated blog aggregator websites pops up on HN about every month. They're cool, and good on the author for trying to solve the problem, but it seems like the wrong approach to me. They're too disorganized: a random collection of mostly tech- and politics-related writing from random people, with no way to vet the quality of the writing. They also require the creator/owner to care about the project for the long term, which is unlikely. I never revisit the aggregators.
I wonder if webrings are a better fix here. The low-tech version could be to put a static-URL page on my blog that links to other blogs I like, with a short description. Then people who find my blog interesting might also enjoy the blogs that I enjoy. That could be powerful if it caught on widely.
Maybe a clever person could come up with some kind of higher-tech version that could present a more interesting & consistent interface to users, encourage blogs to link back to each other, and also solve the dead-link problem.
I think we're going to reinvent Google's "circles" mechanism from G+. We all (well, the terminally online, at least) are going to be part of several more or less overlapping villages, and the people in those villages are going to trust each other to not be bad faith actors. Everything else... everything that tries to scale... everything public... wasteland.
Something something Dunbar's number, Tragedy of the commons.
Interesting. Each time I think about how we could reboot the (social) web, I have this in mind. I don't want exposure to everything, so some kind of whitelisting of contacts/people/blogs is the first thought.
I guess it could work to carve out your own cozy echo chamber that once in a while lets something new in.
The conflict I cannot resolve is that some things (could) need a larger exposure surface, e.g. open-source projects and maintainers that will naturally generate a large following. There are also individuals who want to maximize exposure, mostly for its own sake. The latter can be neglected, but the former cannot. That leaves a natural backdoor for turning any network into the same cesspools we have right now.
I am not sure; maybe we have to accept the fact that a massive focus on a single thing will turn into something bad. Considering the importance of Linus Torvalds to the software world, it can even work. He isn't really digitally socialized in a "modern" sense, and he is still networked enough to manage a high-impact project. Sure, he is networked via the Linux ecosystem, but that walls him away from direct interactions with the general public.
It seems like many people have the same or similar ideas. I was thinking of using a tool similar to bookmark managers as the foundation of a new web, where subscribing to the RSS feeds of specific people (or clusters of people) on specific topics is the "follow" primitive, and you publish your own feed(s), which bookmark managers already allow, by the way. The missing pieces are commenting on friends' feeds and a layer of federated ML for ranking, which the user controls with simple sliders that set the balance along dimensions like retrieval vs. discovery, high trust vs. high novelty, recency vs. trending impetus, and so on.
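To make the slider idea concrete, here's a rough Python sketch (all names and dimensions here are hypothetical, not any existing API): the slider positions simply become the weights of a linear blend over per-item scores.

```python
# Hypothetical sketch: slider-controlled ranking over feed items.
# Each item carries a few normalized scores in [0, 1]; the user's
# slider positions are the weights of a simple linear blend.
from dataclasses import dataclass

@dataclass
class Item:
    url: str
    trust: float     # e.g. derived from web-of-trust distance
    novelty: float   # how unlike previously read items
    recency: float   # decays with age
    trending: float  # short-term popularity impetus

def rank(items: list[Item], sliders: dict[str, float]) -> list[Item]:
    """sliders maps dimension name -> weight in [0, 1]."""
    def score(it: Item) -> float:
        return (sliders["trust"] * it.trust
                + sliders["novelty"] * it.novelty
                + sliders["recency"] * it.recency
                + sliders["trending"] * it.trending)
    return sorted(items, key=score, reverse=True)

# A cozy, low-noise feed: high trust, little trending impetus.
feed = rank([], {"trust": 0.9, "novelty": 0.2, "recency": 0.5, "trending": 0.1})
```

The federated ML part would live behind the same interface: the sliders stay local, and only the per-dimension scores need to come from the network.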
The few niche social media websites I have seen that prevent rapid deterioration in quality without dying in active user count typically have a high barrier to entry. Reminds me of Evilzone, one of the few decent hacking websites on the clearnet that actually had a decent community. They had some challenge you had to complete before joining; I can't even remember what it was, but it prevented new users from joining unless they could solve it. It was very simple, IIRC, but it stopped a large number of the skids/HF people.
I like the idea of tree curation. People view the branch of their interest. Anyone can submit anything at any point, but is unlikely to be noticed if they submit closer to the trunk. Curated lists submit their lists to curators closer to the trunk.
The furthest branches have the least volume (you need filters to stop bulk submission to all levels, while still allowing some multi-submission). It lets curators contribute in a small field. They then submit their preferred items to the next level up. If that curator likes an item, they send it further. A leaf-level curator can bypass any curator above, but with the same risk of being ignored if the higher-level node receives too much volume.
You could even run fully AI branches whose picks would only make it all the way up by convincing a human curator somewhere above them of their quality. If they don't do a good job, they just get ignored. People can listen to them directly if they are so inclined.
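A minimal sketch of how such a curation tree could behave, assuming items enter at a leaf and each curator along the path has to approve before promotion (the structure and names here are my own invention, not a spec):

```python
# Hypothetical tree curation: items move toward the trunk only
# when each curator along the path endorses them.
class CurationNode:
    def __init__(self, topic, curator, parent=None):
        self.topic = topic
        self.curator = curator   # callable: item -> bool
        self.parent = parent
        self.accepted = []       # items this node endorses

    def submit(self, item):
        if not self.curator(item):
            return               # ignored: no endorsement, no promotion
        self.accepted.append(item)
        if self.parent is not None:
            self.parent.submit(item)  # promote one level toward the trunk

# A leaf-level curator can also bypass the chain and submit straight
# to a higher node, at the risk of drowning in that node's volume:
#   trunk.submit(item)
```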
Instead of having that one god-author who has to keep maintaining everything, I think a better option may be to have the whole thing comprehensively community-maintained. Which opens up the question: how do you open-source structured data and its maintenance?
> The low-tech version could be to put a static-URL page on my blog that links to other blogs I like, with a short description. Then people who find my blog interesting might also enjoy the blogs that I enjoy. That could be powerful if it caught on widely.
That has caught on, has been well supported by WordPress and lots of other tools since forever, and is notable enough that there's a glossary entry for it on Wikipedia:
> I wonder if webrings are a better fix here. The low-tech version could be to put a static-URL page on my blog that links to other blogs I like, with a short description. Then people who find my blog interesting might also enjoy the blogs that I enjoy. That could be powerful if it caught on widely.
I have been doing this by linking to my linkhut profile, either from my profile picture (as I used to) or just by mentioning it in comments, like I am doing right now:
https://ln.ht/~imafh . It's not entirely blogs; I use it as a place to recommend cool musicians, projects, and links that I have found, and I write a short note on each one about why I really liked it. But with tags you could have #blog or #webring and use linkhut's notes feature.
What do you think about linkhut? I submitted it to Hacker News after finding it, but it didn't get much traction. I won't lie: this feature really resonated with me.
I hope more people come to know about linkhut, and I hope I am doing my part in making that happen :)
I have submitted it again after reading your comment. I definitely feel like certain discussions could happen on the linkhut side that would be interesting both to read and to write.
The quality concern is real but I think it assumes writers are optimising for reach in a broad sense. I started writing about backend engineering a few weeks ago and what I actually want is to find the right hundred people, not a million random ones. Substack surfaces AI takes. HN surfaces whatever's trending. There's no feed for 'engineer who's been running Postgres in production for a decade and has things to say about it.' The problem isn't that I want to go viral, it's that even targeted discovery doesn't exist.
I think a web ring combined with some kind of web-of-trust-style system would be nice. Ideally it could be both centralized, where an initial creator holds the keys to what's allowed, and decentralized, where it just sort of exists. I haven't quite been able to sketch out a reasonable way to keep sites persistent and consistent except DNS records, though. DNS of course makes it hard or impossible for smaller and less tech-savvy creators, while also having its own issues regardless.
I'm a big web ring person though so I might be biased and trying to use a hammer in place of a screwdriver.
Thanks to a post here a week or two ago, I started looking at Gemini and the Smolnet in general. It looks really appealing to me. No layout. Just the data and accompanying meta semantics (this is a list item, this is a quote, etc.). There's even a Geocities-like hosting service that is completely free and without ads, and it provides a Gemtext -> HTML conversion for people accessing via HTTP instead of gemini:
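To give a sense of how minimal gemtext is, here's a toy converter in the spirit of that Gemtext -> HTML service (deliberately simplified: real gemtext also has deeper heading levels and preformatted blocks):

```python
# Toy gemtext -> HTML: each line is exactly one of a handful of
# types; there is no inline markup at all.
import html

def gemtext_to_html(src: str) -> str:
    out = []
    for line in src.splitlines():
        if line.startswith("=>"):
            # link line: "=> URL optional label"
            parts = line[2:].split(maxsplit=1)
            url = parts[0] if parts else ""
            label = parts[1] if len(parts) > 1 else url
            out.append(f'<p><a href="{html.escape(url)}">{html.escape(label)}</a></p>')
        elif line.startswith("# "):
            out.append(f"<h1>{html.escape(line[2:])}</h1>")
        elif line.startswith("* "):
            out.append(f"<li>{html.escape(line[2:])}</li>")
        elif line.startswith("> "):
            out.append(f"<blockquote>{html.escape(line[2:])}</blockquote>")
        else:
            out.append(f"<p>{html.escape(line)}</p>")
    return "\n".join(out)
```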
Reading aggregated news is somewhat of an art. I add and remove feeds, do keyword filtering, then scan over the 5000 newest headlines and find 1 to 4 things that are really great finds (to me).
Maybe if you do that for 1000 days, some automation can find a pattern in it? I doubt it. Filtering out garbage hundreds of items at a time is definitely doable.
> people who find my blog interesting might also enjoy the blogs that I enjoy. That could be powerful if it caught on widely.
IMHO this is better at the blog-post level of granularity. Sometimes I will like someone's writing style; much more often I will be interested in topical recommended reading.
Aggregate the aggregators, then add a search box and a ranking algorithm. You'll have something like early-internet search, because these blogs are reminiscent of the early internet, with a higher signal-to-noise ratio (even if you think it's still low, at least there's less obvious marketing).
Couldn't you technically crawl all these blogs for their "blogs I'm reading" lists and create a social graph? You could start vetting based on how often other blogs link to a given one, sort of like an impact factor in research.
I don't like counting the number of subscribers; that ends up surfacing things like major news websites, or the Hacker News feed. But I've found the graph to be useful in finding recommendations.
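A back-of-the-envelope version of that idea, with made-up URLs: gather each blog's blogroll links, then rank blogs by how many distinct other blogs recommend them rather than by subscriber counts.

```python
# Hypothetical "impact factor" for blogs: count recommendations
# from distinct other blogs, not raw link or subscriber numbers.
from collections import Counter

# blogroll: blog URL -> blogs it recommends (as crawled from its
# "blogs I'm reading" page; these entries are made up)
blogroll = {
    "https://alice.example": ["https://bob.example", "https://carol.example"],
    "https://bob.example":   ["https://carol.example"],
    "https://carol.example": ["https://alice.example"],
}

inlinks = Counter(
    target
    for source, targets in blogroll.items()
    for target in set(targets)   # each recommender counts once
    if target != source          # ignore self-links
)

for blog, n in inlinks.most_common():
    print(blog, n)
```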
I feel like every new iteration of ways to find good content online: webrings, blogrolls, user upvoting/downvoting, giving everyone their own microblog to share interesting links, ML to learn your own preferences by your behavior - they all worked really well at first, but then eroded significantly once people figured out how to game them.
The economic incentive is overwhelming to corrupt these signals, either directly (link sharing schemes, upvote rings, bots to like your content) or indirectly (shaping your content itself to have the shape of what will be promoted, regardless of its quality).
What you almost want is to use any of these ideas and hope it catches on widely enough in your small niche to be useful, but not so much that it becomes an optimization target.
Smolnet might be the answer. There really isn't a feasible mechanism for monetizing it. At worst, you could have some text ad embedded. No images. Minimal semantic markup (links, lists, quotes, code, generic text) in the case of gemini/gemtext.
I think the simple reason why small web / webring sites don't work is that if you're in the mood of "let's pull the handle on the internet slot machine and see what it surprises me with today", then social media does a better job. Without fail, it gives you something to be outraged about or impressed with.
And if you're looking for something specific - "I want to learn category theory" - then you don't visit a small web site because the content you're looking for is probably not on any woefully short, hand-curated list of URLs. So you do a normal web search (or ask your chatbot).
Another problem with web rings is that if you're hopping sites at random, you more often than not end up someplace weird in 3-5 hops. I guess it's the internet version of six degrees of separation: you're always at most six clicks away from neo-Nazis or SEO spammers.
It's a good question, and I think worth trying to answer. I think the key thing is that discovery is derived from a curated index rather than social link posting and voting, and the darwinian race to the bottom/popularity/campaigning that drives link aggregators is replaced by a more deliberate human curation with all of its good and bad. You find new things, you feel a slower pace, but maybe get bored more frequently too.
> Can anybody understand what happens and maybe explain it a little?
I spent a lot of time squashing bugs like this.
Windows has one window manager; Linux has dozens. Windows apps are written with assumptions about how the Windows window manager works: things like windowing event message sequences, side effects on values returned by other APIs, and the exact sequence of fullscreen-status side effects such as window size, mouse cursor capture, and window chrome presence. Those assumptions are valid because all of that always works the same way on Windows. But Linux window managers all do all of those things differently, and trying to get dozens of window managers to behave exactly the same way Windows's does is near impossible.
Another possibility is it's just how the game works, even on Windows. It was pretty common to get windowing bugs reported, test them on Windows, and see the exact same behavior as we had on Linux.
> I get a feeling from overall anti-AI sentiment online that a lot of people feel they're entitled to 100% of value created by anything even tangentially related to their person
Rather, I don't like that the terms I released my work under aren't being respected. I believe LLMs are derivative works of the pieces they are trained on. I spent more than ten years working on open source code, and now the models that were trained on my GPL'd code are being used to make proprietary code against the terms of the license. I find this reprehensible.
While it wasn't an explicit term of release, generally I did not expect anyone to get any kind of financial value from the blog posts I wrote. I just wrote them for fun & maybe others would find them interesting. Now, LLMs have been trained on my blog posts and are generating financial value for some of the worst human beings on the planet who are using their money to murder, demean, and maim other humans.
I now know that blog posts I wrote for fun are putting money in some sociopath's bank account, and the GPL'd code I wrote is being used to create software to exploit me & other users. If I continue to create things publicly, it will be used against me and other people, and there's nothing I can do to stop it except to stop creating things. It's all very disrespectful & demoralizing.
> I believe LLMs are derivative works of the pieces they are trained on
That's your opinion with 0 legal backing. IMO, calling them derivative is untenable logically for anyone with some understanding of LLM/transformer architecture.
You desire a sharing community, but the takers/defectors are destroying that community.
Copyleft attempts to create a pool of code that forces sharing. But it broadly fails because you simply can't force antisocial people to be good sharers (plus source code usually isn't as valuable as we hope).
With any gifting/sharing, you have to accept that some of it will be abused. It is hard to filter for only community-minded people who don't greedily abuse it, and who ideally give freely.
I don't believe my circle of friends are becoming more selfish. I'm unsure what I would say about the rest of the world.
I am in exactly the same boat, down to the ~10 years. Only difference is I ended up picking AGPL for my later works. Like it made a difference...
The whole situation disgusts me.
- They expect me to pay for access to my own stolen code.
- People arguing that stealing should be legal because China does it, and that if US companies don't, they'll be left behind.
- People like the poster you're replying to who argue you're not entitled to 100% of the value you create, completely ignoring that the value will go to someone, and that someone is already much richer than any of us and getting richer faster while providing less value, if any. Honestly, this makes me want to track these people down just to find out if they're also in the owner class and secretly laughing at us while pretending "we're all equal", or if they're workers who genuinely don't understand how much they're being exploited and how much worse it's going to get.
- People don't give a fuck. Colleagues happily use "AI" because it "saves time", not realizing that if this continues, we'll all be without jobs, and that the only way this was possible was by stealing from each other, with most of us being OK with it.
Honestly, I am hoping for a revolution. A proper one, with guns if need be, but most importantly, where people get what they deserve in full.
Last time this happened was during the second industrial revolution; so many people got fucked so hard that entire countries turned to communism. That was a bad idea, but we can do better. It's not (just) about who owns the means of production but who owns the product. Even if "AI" turns into actual AI, as long as it's built on top of our work, we should own it; that means both controlling it and getting paid proportionally to our contribution.
The currently rich people can negotiate what fraction they get paid if they show us they're providing value. Of course, only after we get back what they stole and unless they end up executed. The value of a human life is apparently $7.5M so anybody who steals more than that should logically get a death sentence.
But none of this will happen, people are too stupid and will get manipulated by a charismatic liar like every single time before.
Oh man, tangent into one of my favorite library book experiences. I checked out a sci-fi book at the library. It was good; I was enjoying it. Then, a few chapters in, I found that a previous library patron had written nit-picky notes in the margin, poking holes in the author's fictional science tech explanations. And these weren't little one-word exclamations; they were whole sentences written in perfectly legible, almost impossibly tiny pencil handwriting. Some of them even had little drawn diagrams! It went on through the whole book: every hundred pages or so, some little margin note about how such-and-such sci-fi babble didn't reflect how space-time actually works, or whatever. It was a hoot, a little bonus on top of the book itself.
I had a similar experience with a second-hand copy of House of Leaves [0].
This was a special treat because the book itself already uses copious footnotes and cross-references from fictional characters to create a maze. And now a real person added to the effect by trying to make sense of it themselves.
Seems to me like coordinating with an entity outside of the spooks' control, such as the BBC, would give more opportunities for leaks. It would also reveal some information about who is controlling the signal--someone with some kind of relationship with the broadcaster.
During WWII, the BBC would daily have a section after the news dedicated to "personal messages" - which everyone knew were instructions to the resistance in France, or similar. "William waits for Mary" was one of the more famous ones related to D-Day, I think.
I do wonder how much of the apparent demand is driven by companies automatically running these things when users didn't actually ask for it. For example every web search I make now has an AI response that I scroll right past. I'm sure that counts for someone's token usage data, but I got zero value from it. This is happening in almost every software product now.
Tokens as a metric is the analogue of users as a metric.
In the end, value per user is what matters for being a healthy going concern, and for valuation relative to a company like Meta. Value per token is what should matter too; after all, that's what people are paying for.
Context: two nearly identical comments from different users.
hackerman70000 at 16:09 https://news.ycombinator.com/item?id=47677483 :
> Cloudflare pushing PQ by default is probably the single most impactful thing that can happen for adoption. Most developers will never voluntarily migrate their TLS config. Making it the default at the CDN layer means millions of sites get upgraded without anyone making a decision
valeriozen at 16:17 https://news.ycombinator.com/item?id=47677615 :
> cloudflare making pq the default is the only way we get real adoption. most devs are never going to mess with their tls settings unless they absolutely have to. having it happen at the cdn level is the perfect silent upgrade for millions of sites without the owners needing to do anything