Hacker News | georgeck's comments

Thanks for the feedback. The system prompt for the web app is tuned to balance between a summary that is too short and one that is too long. The browser extension, on the other hand, lets you customize the system prompt to make the result exactly the way you want.

Another option is to look at the Bluesky feed [0]. This bot looks at top stories on HN and creates a very short summary for each. It only covers around 20 posts a day, so that could be limiting too.

[0] - https://bsky.app/profile/hncompanion.com


Another interesting link relevant to AI summaries for HN discussion is the utility created by simonw [0]. Our tool is quite similar to what simonw did, except that we make the back-links more seamless. If you’re interested in a standalone tool similar to what simonw developed, you can find our script here [1]. If there’s community interest, we can improve the user experience of this tool.

[0] - https://til.simonwillison.net/llms/claude-hacker-news-themes

[1] - https://github.com/hncompanion/skills-scripts


Another interesting feature for HN users is the companion Bluesky bot we created [0]. It analyzes top stories from HN [1] and generates a new Bluesky thread every hour using the same cached summary. The bot has been operational for the last 8 months.

[0] - https://bsky.app/profile/hncompanion.com

[1] - https://news.ycombinator.com/best


When people hear about AI summaries, this is what comes to mind. And I understand their concern. Wouldn't a summary strip away the essence of a conversation? Let me explain with an analogy. When we're overwhelmed with information, we benefit from a system that organizes it. For instance, I might use a tool to declutter my desktop files and categorize them into logical groups. Once the categorization is complete, I can view these clusters and focus on the specific files within them that pique my interest. However, if the organizing process also altered the files, I wouldn't want to use that tool.

In the context of HN Companion, our objective is to examine lengthy threads and group them into 3 to 4 topics. Within each topic, we present the actual discussion that represents that cluster. We invested significant effort in developing a system that enables you to not only read the actual comments but also seamlessly jump to that discussion and continue the conversation there.

I encourage you to explore a few of the summaries in the app. I’d greatly appreciate your feedback on how we can enhance our service.


> When we’re overwhelmed with information, we benefit from a system that organizes it.

That may be true for something like a HUD, or for situations where we're really overwhelmed by fast-moving information and reaction time is paramount.

But you can read a Hacker News thread one line at a time and never get overwhelmed, right? I have literally never felt overwhelmed looking at the threads (which are already organized into local groups, etc.).

I read it for pleasure and engagement, it's not something I want AI to automate away.

And when you say "continue the conversation there", do you mean use AI to write comments? If so, then this is the opposite of what makes HN HN.


Let's look at an example post in HN Companion. This is the post on the singularity that is on the home page right now:

https://app.hncompanion.com/item?id=46962996

This post has 500+ comments with various viewpoints, and you can see the summary on the right side.

You are right that most of the time threads are organized into local groups. But in the above example, there are many comments that relate to the same topic, but are not under the same parent comment. HN Companion's summary surfaces this into a topic "Limitations of Current AI Models" which shows comments from up and down the post.

You can click on an author name in that topic in the summary panel, and it will take you directly to the comment. This is what we meant by "continue the conversation there", i.e., you are now in the main HN experience, so you can navigate to child/parent/sibling comments (through the link buttons or keyboard navigation).

We definitely don't want AI to write comments. Happy to elaborate if you need.


Honestly, after checking out the link, seems like something I'll personally never use/want.

I'm okay with crawling through comments and taking in the various viewpoints instead of having an LLM summarize it for me.

It basically kills the entire tone/vibe of the place and makes everything seem robot-written, with no personality. Also, it's kind of weird that you're taking other people's words and reframing them for them/others.

Also nowhere does that thread seem to be "overwhelming with information" like you originally claimed. Basically solving a non-problem.


Fair enough. I completely understand that the experience of hunting for gems in the comments is the core appeal of HN for many, and AI summaries definitely aren't for everyone.

That said, we are seeing a consistent daily user base who do find value in the summarization, so it seems to be solving a pain point for a specific segment of readers, even if not for all.

Apart from the AI features, we actually built HN Companion as a general power-user client. It supports keyboard-first navigation (vim-style J/K bindings for comment navigation), seeing context for parent/child comments without losing your place, and tracking specific authors across a thread.

You might find those utility features useful even if you ignore the summary sidebar entirely. In the browser extension, the summary panel is something the user has to activate; it doesn't show up by default.


Thanks a lot, Rayshan!

Your feedback through GitHub issues has been instrumental in helping us prioritize specific features. Additionally, the Chrome Web Store team has approved our extension for their ‘Featured’ badge, which could enhance the trust in installing the extension. I’m curious to know if you’ve had a chance to try the web app.


Congrats on the launch! How does this compare to https://finetunedb.com?


I did a new user evaluation.

Problem: I have an 8- or 30-billion-parameter model to fine-tune, like https://huggingface.co/huihui-ai/Huihui-Qwen3-VL-30B-A3B-Ins... or https://huggingface.co/huihui-ai/Huihui-Qwen3-VL-8B-Instruct...

1. finetunedb.com supports Meta, Mistral, and OpenAI

2. TinyTune supports DeepSeek, Google, Meta, Mistral, OpenAI, Qwen, and Teknium

FinetuneDB doesn't support Qwen3, so it fails.


Thank you so much! And thank you for your question.

Yes, so to answer it, the idea of TinyTune was to literally be "tiny", i.e., very simple to use. It (mostly) takes 3 steps + some waiting time and you have a custom model. I found other services to be a bit more difficult to use.

And yes, I also agree with iFire's findings that another big difference is the number of models that are available. But the main differentiator with other similar services would definitely be the focus on its ease of use.

I should say that FinetuneDB seems like a solid competitor though!


This was discussed previously on HN: https://news.ycombinator.com/item?id=36089176


Ah thanks. Seems like the downsides are invasive KYC policy + no replication. I'll stick with Backblaze for now.


It would be really useful to have more client-side control over media storage. That way, I could better manage storage growth without wiping entire threads.

For example, being able to see all media across chats, sort by file size, and optionally group by conversation would make it much easier to clean things up.


> It would be really useful to have more client-side control over media storage. That way, I could better manage storage growth without wiping entire threads.

> For example, being able to see all media across chats, sort by file size, and optionally group by conversation would make it much easier to clean things up.

I have good news for you: this already exists.

On Android:

Settings >> Data and Storage >> Manage Storage >> Review Storage

This allows you to view all of your media, files, and audio across all chats, sorted by the amount of storage used. You can also delete those files individually without affecting the rest of the chat.

You can also do the same thing within a conversation.


The issue I have with this is that it deletes the whole message, not just the media. In WhatsApp, you can delete media from the images/video folders and the messages remain in the conversation; they even still have the blurry preview, iirc. In Signal, you end up with gaps in your history instead.


Thanks, that’s helpful.

I’m also hoping similar media management options are available on iOS and desktop, since I use Signal across devices.

By the way, does Signal treat synced devices (like desktop or a second phone) as “replicas” vs a “primary”? If so, does this affect how storage or message history is handled between them?

Would appreciate any insight from folks familiar with the technical side of this!


On my Samsung: Settings >> Device Care >> Storage


Does that give you per-attachment insight?


I think you're talking about Android settings, though, not Signal settings?


To your point, what I am missing with Signal: the choice to always store media locally on the phone.

What I miss with most messenger apps: archiving old content and offloading it to a remote device.

Right now my Signal install is 8 GB in size and doesn't stop growing.


SQLite's backward compatibility means many best practices - like WAL mode, foreign key enforcement, and sane busy timeouts - are not enabled by default.

The author's Go library, sqlitebp, automates these settings and others (NORMAL synchronous, private cache, tuned page cache, connection pool limits, automatic PRAGMA optimize, and in-memory temp storage) to make high-concurrency, reliable usage safer and easier right out of the box.
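As a rough sketch of what those non-default settings look like when applied by hand (using Python's stdlib sqlite3 module rather than the Go library; the timeout and cache-size values here are illustrative, not the ones sqlitebp uses):

```python
import sqlite3

# Apply the commonly recommended non-default SQLite settings manually.
conn = sqlite3.connect("app.db", timeout=5.0)  # busy timeout of ~5 seconds
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # concurrent readers + one writer
conn.execute("PRAGMA synchronous=NORMAL")      # sufficient durability under WAL
conn.execute("PRAGMA foreign_keys=ON")         # off by default for backward compatibility
conn.execute("PRAGMA temp_store=MEMORY")       # keep temp tables/indices in RAM
conn.execute("PRAGMA cache_size=-20000")       # negative value means KiB: ~20 MB page cache
conn.execute("PRAGMA optimize")                # refresh query-planner statistics
print(mode)  # "wal" for file-backed databases
conn.close()
```

Note that journal_mode is persistent in the database file, while the other PRAGMAs are per-connection and must be reapplied on every new connection, which is exactly the kind of boilerplate a wrapper library can absorb.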


The backwards compatibility also means that the frustration over concurrency and synchronization is largely a waste of time. Most SQLite builds are created such that all activity is serialized through a single mutex by default.

> In serialized mode, API calls to affect or use any SQLite database connection or any object derived from such a database connection can be made safely from multiple threads.

https://www.sqlite.org/threadsafe.html

Many libraries get this wrong and make it unsafe to use from multiple threads despite the underlying provider being capable. I think these are effectively bugs that should be resolved.

In my C# applications, I use System.Data.SQLite and share a single SQLiteConnection instance across the entire app. This connection instance typically gets injected as the first service, so I can just take it as a parameter any time I need to talk to the database. Absolutely no synchronization occurs in my code. I've tried Microsoft.Data.Sqlite, but it seems to have rare problems with sharing connections between threads.


In Go, a database/sql “connection” is actually a pool, and Go makes sure that it only calls driver methods serially for an actual driver connection from a single goroutine.

So your point (which is not very clear to me, with my limited knowledge of C# and SDS) is largely moot in Go terms.


This is a great idea! It's exactly what I was thinking too, and I started working on it as a side project. Currently the project can create summaries like this [1].

Since HN homepage stories change throughout the day, I thought it was better to base the newsletter on https://news.ycombinator.com/front

So, you are getting the news a day late, but it will capture the top stories for that day. The newsletter will have a high-level summary for each post and a link to get the details for that story from a static site.

[1] - https://news.ycombinator.com/item?id=43597782

