Hacker News | rahimnathwani's comments

"I want to be perceived as someone who is effective _and_ pleasant to work with."

That seems like a good reason to adapt your communication to your audience. If x finds preamble unpleasant, but you use unnecessary preamble when communicating with x, that won't help you be perceived as pleasant to work with.


> If x finds preamble unpleasant, but you use unnecessary preamble when communicating with x, that won't help you be perceived as pleasant to work with.

Absolutely! But OP isn't suggesting preamble is unpleasant; they're saying it has little or no value and should be removed altogether.

Even if OP did in fact mean to suggest this when speaking to them directly, it is unbelievably selfish to ask (let alone _beg_) someone to eschew their voice just so you don't have to read a few more words and "waste calories" to gather the information they believe is important.

The pleasantries and preambles and hollow words _are_ important. People might be adding them without having deep thoughts on them to the point where they explicitly include them, but they want to signal to you that they consider your humanity. That signal isn't noise, it's a very minute sign of camaraderie. If OP doesn't value that signal, that's fine, but pretending it's noise is antisocial.


"But OP isn't suggesting preamble is unpleasant, they are saying there is little or even no value and to remove it altogether."

I'm not sure about this. OP said:

"When you ... [SNIP] ... you are making the recipient wade through noise to get to signal."

So it seems like OP wants other people to:

- not waste their time and energy, but

- is happy to take any emotional cost that comes with that.


If you want your coding harness to be predictable, then use something open source, like Pi:

https://pi.dev/

https://github.com/badlogic/pi-mono/tree/main/packages/codin...

But if you want to use it with Claude models you will have to pay per token (Claude subscriptions are only for use with Claude's own harnesses like Claude Code, the Claude desktop app, and the Claude Excel/PowerPoint extensions).


"With Bluetooth, only Apple addresses this in a very limited manner with a lock in to specific models and up to 2 devices and no video calls or live audio support."

The Bose mobile app also allows me to use two pairs of Bose headphones on a single device, but still only 2 devices and AFAICT only for media consumption.


Tangential - The funny thing is, broadcasting Bluetooth to multiple devices isn't a new thing at all. Back in 2017, Motorola did it on their phone [1]. No extra hardware afaik, it was purely a software solution.

Of course, the company disappeared, and now in 2026, we have lesser tech than we had back in 2017.

If you're wondering "Well, how did a company disappear?!", feel free to take the most corpo/capitalist-dystopian guess.

If you guessed "They got bought out by Google - presumably for IP - with the founders joining Big G, and Google of course promptly shelved it and did absolutely diddly squat with it", congratulations, you win... frustration and disappointment, I suppose!

1 - https://www.cnet.com/tech/tech-industry/the-most-exciting-th...


This site presents models in an incomplete and misleading way.

When I visit the site with an Apple M1 Max with 32GB RAM, the first model that's listed is Llama 3.1 8B, which is listed as needing 4.1GB RAM.

But the weights for Llama 3.1 8B are over 16GB. You can see that here in the official HF repo: https://huggingface.co/meta-llama/Llama-3.1-8B/tree/main

The model this site calls 'Llama 3.1 8B' is actually a 4-bit quantized version (Q4_K_M) available on ollama.com/library: https://ollama.com/library/llama3.1:8b

If you're going to recommend a model to someone based on their hardware, you have to recommend not only a specific model, but a specific version of that model (either the original, or some specific quantized version).

This matters because different quantized versions of the model will have different RAM requirements and different performance characteristics.
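The gap between "8B model" and "4.1GB RAM" comes straight from the arithmetic of quantization. A rough sketch (the bits-per-weight figures are approximate averages, e.g. Q4_K_M mixes 4- and 6-bit blocks and averages around 4.85 bpw in llama.cpp; real usage also adds KV cache and runtime overhead):

```python
# Rough RAM estimate for model weights at different quantization levels.
# Bits-per-weight values are approximate; actual memory use also includes
# the KV cache and runtime overhead, so treat these as lower bounds.

BITS_PER_WEIGHT = {
    "fp16": 16.0,
    "q8_0": 8.5,     # approximate, includes per-block scales
    "q4_k_m": 4.85,  # approximate average over mixed 4/6-bit blocks
}

def weight_gb(n_params_billion: float, quant: str) -> float:
    """Gigabytes needed just for the weights (1 GB = 1e9 bytes)."""
    bits = BITS_PER_WEIGHT[quant]
    return n_params_billion * 1e9 * bits / 8 / 1e9

# Llama 3.1 8B (~8.03B params): ~16 GB at fp16, but under 5 GB at
# Q4_K_M -- which is how an "8B" model gets listed at ~4GB of RAM.
print(f"fp16:   {weight_gb(8.03, 'fp16'):.1f} GB")
print(f"Q4_K_M: {weight_gb(8.03, 'q4_k_m'):.1f} GB")
```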

Another thing I don't like is that the model names are sometimes misleading. For example, there's a model with the name 'DeepSeek R1 1.5B'. There's only one architecture for DeepSeek R1, and it has 671B parameters. The model they call 'DeepSeek R1 1.5B' does not use that architecture. It's a qwen2 1.5B model that's been finetuned on DeepSeek R1's outputs. (And it's a Q4_K_M quantized version.)


They appear to be using Ollama as a data source. Ollama does that sort of thing regularly.

It seems like Australian law gives the consumer the choice of refund or replacement, i.e. the store can't refuse to replace: https://www.accc.gov.au/business/problem-with-a-product-or-s...

Laws and regulations require enforcement. Without it, honest people abide by them, but dishonest people do not.

If you're going to spend taxpayer money to enforce laws and regulations, it seems like you should take advantage of efficiencies.

It seems like using third party data (like that obtained from Thomson Reuters Clear) is a very cost-effective way to obtain information that's useful.

Some people in the comments here object that the district is over-relying on the third party data provider. But from the article we cannot tell what happened. We don't know whether this is a 'computer says no' situation, or whether the school district was tipped off by the third party data and then verified everything to its satisfaction.

In general, it's easy for parents to share a story about what happened to their child in school, and very hard for a school district to respond. Unless the parent signs some sort of waiver, the district can't easily respond, without breaking privacy laws. Even if the story is 100% false, the school district probably can't answer the journalist's questions without violating FERPA.


According to the X app:

- the user @hac has existed since 2008

- since then, it has posted 5 tweets totalling 14 words

- it does not follow any accounts

Is this your account, or is this a different account that recently took over the @hac username?


This is interesting but the result must depend on the screen and the brightness, no?

I tried it on a recent Pixel with brightness set to two-thirds, and this is my result:

https://www.keithcirkel.co.uk/whats-my-jnd/?r=ArggKP__c4_b


When GPT-4.5 came out, I used it to write a couple of novels for my son. I had some free API credits, and used a naive workflow:

  while word_count < x:
      write_next_chapter(outline, summary_so_far, previous_chapter_text)

It worked well enough that the novels were better than the median novel aimed at my son's age group, but I'm pretty sure we can do better.

There are web-based tools to help fiction authors to keep their stories straight: they use some data structures to store details about the world, the characters, the plot, the subplots etc., and how they change during each chapter.

I am trying to make an agent skill that has two parts:

- the SKILL.md that defines the goal (what criteria the novel must satisfy to be complete and good) and the general method

- some other md files that describe different roles (planner, author, editor, lore keeper, plot consistency checker etc.)

- a python file which the agent uses as the interface into the data structure (I want it to have a strong structure, and I don't like the idea of the agent just editing a bunch of json files directly)

For the first few iterations, I'm using cheap models (Gemini Flash ones) to generate the stories, and Opus 4.6 to provide feedback. Once I think the skill is described sufficiently well, I'll use a more powerful model for generation and read the resulting novel myself.
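The "python file as the interface into the data structure" part might look something like the following. This is a hypothetical sketch (the class and field names are mine, not from any existing tool): the agent calls validated functions instead of editing raw JSON, so malformed updates fail loudly.

```python
# Hypothetical sketch of a structured story-state interface for the agent.
# The agent invokes these methods (via a CLI or tool call) rather than
# editing JSON files directly, so every update is validated.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class Character:
    name: str
    traits: list[str] = field(default_factory=list)
    status: str = "alive"  # e.g. alive / dead / missing

@dataclass
class StoryState:
    characters: dict[str, Character] = field(default_factory=dict)
    chapter_summaries: list[str] = field(default_factory=list)

    def add_character(self, name: str, traits: list[str]) -> None:
        if name in self.characters:
            raise ValueError(f"character {name!r} already exists")
        self.characters[name] = Character(name, list(traits))

    def update_status(self, name: str, status: str) -> None:
        # Fails loudly on unknown names -- a plot-consistency guardrail.
        if name not in self.characters:
            raise KeyError(f"unknown character {name!r}")
        self.characters[name].status = status

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

state = StoryState()
state.add_character("Mira", ["curious", "stubborn"])
state.update_status("Mira", "missing")
```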


this is fascinating. I would like to try this as a side project as well.

"some other md files that describe different roles (planner, author, editor, lore keeper, plot consistency checker etc.)"

- What are these meant to be, exactly? Are these sub-agents in the workflow, or am I completely misunderstanding?


"are these sub agents in the workflow"

The idea is that on any 'turn', the AI model should be doing only one of those tasks. That's true whether it's in the main thread (with all the past context) or has just been launched as a subagent.

You can see an example of this pattern here in Anthropic's skills repo: https://github.com/anthropics/skills/tree/main/skills/skill-... (the repo has four separate SKILL.md files: a main one and then three others for specialist roles)

Whether they're run as subagents (a separate AI chat session with clean context) is a separate decision, and it depends on whether the coding harness supports that. https://agentskills.io/client-implementation/adding-skills-s...

I'm still trying to figure out the subagent delegation stuff.


Do you mind posting these novels?

The ones I created so far had some characters from existing books and movies, and I don't want to take my chances with how 'fair use' is interpreted.

I see. Did your kid enjoy them?

One downside is there’s no community for him around the books, but maybe that’s not a big deal.


They were good enough that he finished reading them once, but they were not good enough that he would recommend them or re-read them.

I am 50 and the answer is 'yes'.
