
> We’ve combined lab-grown neurons with silicon chips and made it available to anyone, for first time ever.

There is a line somewhere here that I personally feel we should not cross.


100%

We know that neurons can produce subjective experience.

This is the first time in my life that I've felt a scientific avenue of research should shut down.


Animal testing, weapons testing, medical trials, cloning, psychological experiments… had you just never considered them before? Why this?

Those things all exist within our conscious realm. “Human brain cells in a vat used for computation” suggests horrors beyond understanding

Same reason people get scared to fly but drive everyday. Humans are simultaneously wildly irrational and terrible at calculating risk.

This is somewhat novel, unlike, say, weapons manufacturing. Also, assuming the GP is in the tech community to some degree, it makes sense they’d have a stronger reaction.

There’s lots of bad stuff humans shouldn’t be doing.


Not sure why this is being downvoted. It’s a valid point. This neuron chip stuff is far less problematic than a lot of animal testing where you clearly have a whole organism that experiences something.

Factory farming too. The way we treat chickens in particular is out of a horror movie, and that’s in countries with some standards. Globally I’m sure many billions of animals are constantly submitted to the most grotesque torture for food.


I spoke inaccurately. I’m an ethical vegan, and the nonconsensual animal industry and animal testing are an atrocity, as they would be were we to substitute in humans.

That said, the novelty of this, the unknowns, the mind reels at all the possibilities here and it frankly makes me nauseous.

What hells of existence could we create? I have no doubt that we could create an all encompassing misery that is beyond our comprehension.

Just, truly disgusting to me on a deep level.


At the very very least there are more productive ways of spending time.

We don't really know that.

Sounds like you're applying scifi tropes to real life. Don't do that. That's why some people are developing "AI psychosis" today after playing with LLMs.

The fear is that we don’t really understand what causes consciousness. I think that’s a valid fear, because we can’t know ahead of time whether we will inadvertently create a “person” inside the machine.

Unless your proposition is that no collection of human neurons outside of live birth can become sentient, and I’m not sure how you’d arrive at that conclusion without invoking some kind of spiritual argument.


You're conflating two totally separate things.

To be a fly on the wall in that ethics committee meeting...

I have no mouth and I must scream.

it is a terrifying thought.

We grew a brain on a petri dish, gave it a shotgun, and sent it to hell.

Next up, we teach it to speed run Getting Over It. What a horrible existence.


I’m confused by this statement. A neuron is a machine. A silicon chip computer is a machine. All they have done is interfaced two machines.

This is naive or in bad faith.

Sure, a neuron is a machine.

200,000 neurons connected in a matrix is a brain, albeit a very primitive one. Ants have 250,000 neurons in their brains.


How is it naive? You admit that an individual neuron is a machine. 200k neurons in a petri dish isn't a brain. I'm not the naive one here.

Appreciate this breakdown and the disclosure.

+1 for highlighting that PR quality is the bottleneck. Garbage-in/garbage-out is exactly what I ran into and it’s why I’m planning to introduce PR templates so the why/what changed/impact is consistently present. For sparse PR bodies, I also optionally add truncated diff context for the LLM summary so the output isn’t just a long list of raw PR titles.
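For the sparse-PR-body case, here is a minimal sketch of what "truncated diff context" could look like. The function name and limits are illustrative, not taken from the actual script:

```python
# Hypothetical sketch: cap a PR diff before handing it to the LLM as
# extra context, so sparse PR bodies still yield a useful summary
# without blowing the token budget. Limits are arbitrary assumptions.

def truncate_diff(diff: str, max_lines: int = 200, max_chars: int = 8000) -> str:
    """Keep the head of the diff and note how much was dropped."""
    lines = diff.splitlines()
    text = "\n".join(lines[:max_lines])
    if len(text) > max_chars:
        text = text[:max_chars]
    if len(lines) > max_lines or len(diff) > max_chars:
        text += f"\n... (diff truncated, {len(lines)} lines total)"
    return text
```

Keeping the head of the diff tends to preserve the most informative hunks (changed file headers come first), which is usually enough signal for a one-paragraph summary.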

Also agree there is a split between dev-facing changelogs vs user-facing release comms that need to land where users are. What I built is aimed at the "developer-consumer" audience, people using the library, not contributing to it: it renders into our docs and is meant to be readable as a curated changelog, not a raw list of commits.


I agree that the best quality notes are the ones hand written by a thoughtful human. In my case we had about two years of history with no curated notes, and writing that by hand would have meant significant time investment vs shipping fixes and features. The generator helped us get coverage fast and organized the notes chronologically and categorically. I specifically designed the generator with your concern in mind, in that it preserves manual edits as well as omissions, so we can gradually curate it into something we are proud of.


I agree with the philosophy of curating release notes for the consumer of the release. When I first started looking for a release notes strategy, I was considering towncrier for that exact reason. You are also right that commit messages are not intended for the consumer of the release, but a dialogue between developers.

Your points are well received and largely why I went PR-based (title/body with optional GitHub metadata) instead of commit-based. A PR title and body tend to be focused on the deliverable, whereas commit messages are narrowly focused on the code change at that moment with developers as the intended audience.

Re: git-cliff, I honestly hadn’t evaluated this one, but it looks solid for commit-driven changelogs. I like the rationale behind conventional commits being parsable and templates enforcing consistency. What constraints pushed you toward git-cliff vs writing release notes by hand, and do you have a config/template you have found works well for surfacing breaking changes?


Yeah, that matches what I have seen: if the upstream metadata isn’t reliable, automation can amplify the mess.

I tried to avoid relying solely on contributors to label or tag things correctly. The script is tag-driven only for release boundaries (version tags), while categorization is derived from the PR title & body with optional GitHub metadata. The script is idempotent and preserves edits/omissions so you can correct the few bad ones post-generation.
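To make "categorization derived from PR title & body" concrete, here is a rough keyword-heuristic sketch. The category names and patterns are assumptions for illustration; the real script's rules may differ:

```python
import re

# Illustrative, assumed category rules: first matching pattern wins,
# with the breaking-change bucket checked first so it is never masked.
CATEGORY_PATTERNS = [
    ("Backward Incompatible Change", r"\bbreaking\b|\bremoved?\b|\bdeprecat"),
    ("Bug Fixes", r"\bfix(es|ed)?\b|\bbug\b"),
    ("Features", r"\badd(s|ed)?\b|\bfeat(ure)?\b|\bsupport\b"),
]

def categorize(title: str, body: str = "") -> str:
    """Bucket a PR into a release-notes category from its title/body."""
    text = f"{title} {body}".lower()
    for category, pattern in CATEGORY_PATTERNS:
        if re.search(pattern, text):
            return category
    return "Other Changes"
```

Because it is pure text matching, a mislabeled PR lands in the wrong bucket rather than failing, which is why the idempotent preserve-edits behavior matters: you fix the stragglers once and the fix sticks.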

If you are curious, I am happy to share my script and would be genuinely interested whether it reduces the manual cleanup for your workflow. Also, if you run it with `--ai --github` and a PR body is sparse, it fetches a truncated PR diff and uses that as extra context for the LLM summary.


+1 on separating "how to upgrade" due to breaking changes from "what’s new". A dedicated BREAKING.md / MIGRATIONS.md is a really good idea.

One thing I am trying to do is make the generator surface breaking/migration items explicitly, but I still think anything that requires human judgment (migration steps, caveats) should be hand-curated in a dedicated document like you suggested.


I hear what you are saying, there is a risk that auto-generated release notes end up as PR-title soup. I put a lot of effort into my script to mitigate that.

If you are willing and interested enough to take a quick look, here is what my script generated for our 2025 changelog (no hand-curation yet, this is the raw output):

https://raw.githubusercontent.com/confident-ai/deepeval/refs...

I am curious: does this still seem too noisy in your opinion, or is it getting closer? And what would you want to see for breaking changes/migrations to make it actually useful?

I now have 2024 & 2025 generated; to fully hand-curate two years of history just wasn’t practical, so I’m trying to get the "80% draft" automatically and then curate over time.


That has been my concern as well. The script I wrote tries to bucket entries into categories, including "Backward Incompatible Change", so those are easier to spot. Since it is automated, I am trading some accuracy for time saved; that seemed like the only practical choice given I had to backfill a lot of history, but the results have been surprisingly decent so far.

I am also planning to add some PR templates so contributors include the context up front, which should make any release note generation more accurate.

Are you using any tooling to help with changelog curation? I know towncrier is all about fragments, so contributors must write a brief summary of their contribution, which would be more in line with your preference.


I am curious: what are people using for release notes in their own projects? Towncrier, GitHub Releases, something else?

If anyone tries my script on their repo and runs into issues, I am happy to help troubleshoot. Also, the output is actually plain Markdown (no JSX). The only Docusaurus-specific bit is the YAML frontmatter header. If you are not using Docusaurus, you can just strip that header and rename .mdx to .md.
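If it helps, stripping the frontmatter and renaming is a one-liner-ish job. A minimal sketch (assuming the standard `---`-delimited YAML block at the top of the file):

```python
import pathlib

def strip_frontmatter(path: str) -> None:
    """Remove a leading '---' ... '---' YAML frontmatter block and
    rename the file from .mdx to .md."""
    p = pathlib.Path(path)
    text = p.read_text(encoding="utf-8")
    if text.startswith("---\n"):
        end = text.find("\n---\n", 4)  # closing delimiter of the block
        if end != -1:
            text = text[end + len("\n---\n"):]
    p.with_suffix(".md").write_text(text, encoding="utf-8")
    p.unlink()
```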


I wonder if there is valuable information that can be learned by studying a company's prompts? There may be reasons why some companies want their prompts private.


I realize cache segregation is mainly about security/compliance and tenant isolation, not protecting secret prompts. Still, if someone obtained access to a company’s prompt templates/system prompts, analyzing them could reveal:

- Product logic / decision rules, such as: when to refund, how to triage tickets

- Internal taxonomies, schemas, or tool interfaces

- Safety and policy guardrails (which adversaries could try to route around)

- Brand voice, strategy, or proprietary workflows

That is just off the top of my head.

