consumer451's comments

> With all the technology advancement and improvement with access to information in the last 30 years, why does it feel that all of this culminates to more disinformation, more pain, and less understanding?

One of the original adages in technology is:

    garbage in, garbage out. 
The more technology ate the world, the bigger a problem that became.

> if they sell a username that has been active (posting, replying) in the past 6 months then it'd be a big deal for sure.

What about this scenario:

If you register a domain name, a bot will register a related handle/name/brand pretty quickly if you do not.

So, you register a Twitter handle to preserve your brand identity right after registering a new domain.

You don't check it for 6 months.

Is it OK for Twitter to sell that handle?


If you don't pay for a domain name you could lose it too.

If I signed up for a free social media account hosted by another company, and neither logged in nor posted on it for a year before it got autodeleted for inactivity, I wouldn't really feel I had a particularly strong claim to it.


One thing that makes handle markets uncomfortable is that social media identifiers sit in a strange space between identity and platform resource.

Domain names are usually treated as leased assets with a clear renewal cycle. Social media handles, on the other hand, often feel more like identity markers, especially when someone has used them for years.

When platforms reclaim dormant handles and then auction them, the model shifts from “resource management” to “asset monetization”. That changes user expectations quite a bit.

If a platform wants to recycle dormant identifiers, a transparent policy with predictable timelines and clear notices would probably feel more legitimate than quietly moving them into a marketplace.


If your domain is used as a brand identity, you should register it as a trademark and sue anyone who uses your brand identity as a Twitter handle.

I'm thinking more like solo founder territory here. And apparently, it can be as short as 30 days?

I just launched a free wysiwyg markdown editor. It currently uses only IndexedDB for storage, making it as private as possible. The only network calls are polling for the "click to update" toast, and the feedback form.

I was sick of getting cross-eyed when looking at tables in raw markdown and was just running it locally. This weekend I realized it might be useful for others.

The goal was as simple a UX as possible. Open the URL, drag and drop or paste into the wysiwyg editor -> very readable and editable markdown. No sign up, no tracking, no fuss.

Of note, if you copy from the rich-text mode, it copies raw markdown. The inverse happens on paste.

Based on feedback, I am working on very optional cloud-sync for as cheap as I can make it.

https://md-edit.com


I would like to add that I have also added a Google Drive sync option. It does require creating an MD-Edit account for operational reasons, but I never see your document data in that scenario.

Free multi-device sync is now enabled with email, GitHub, and Google OAuth. Of course, privacy is not guaranteed in that case. However, people wanted it for convenience.

I was really trying my best for frictionless UX on this project. I would appreciate any feedback on how I did, either by comment or the feedback button.


I am confused, who in the Finnish government wrote the book that PM of Canada quoted at Davos?

Stubb is a realist. He says the rules-based world order is gone, and that we have to hurry and learn how to deal with dictators, because the US is becoming a dictatorship real quick. EU countries will have to unite in order to be able to negotiate from a position of strength. It's the only way to survive while staying true to our values (internally).

> I can't imagine how I would migrate to another email address

Imapsync is your best friend for this, as far as syncing the new account with the old one.

https://github.com/imapsync/imapsync

https://imapsync.lamiral.info/
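
For reference, a typical invocation looks something like this. The hostnames and addresses are placeholders; --host1/--user1/--password1 and their host2 counterparts are imapsync's real source/destination flags, and --dry previews without copying anything:

```shell
# Placeholder servers and accounts; substitute your own providers' IMAP hosts.
imapsync \
  --host1 imap.old-provider.example --user1 me@old.example --password1 "$OLD_PASS" \
  --host2 imap.new-provider.example --user2 me@new.example --password2 "$NEW_PASS" \
  --dry   # remove --dry to actually sync
```

Run it again without --dry once the dry run looks right; imapsync is idempotent, so re-running only copies what's missing.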


Nitpick/question: the "LLM" is what you get via raw API call, correct?

If you are using an LLM via a harness like claude.ai, chatgpt.com, Claude Code, Windsurf, Cursor, Excel Claude plug-in, etc... then you are not using an LLM, you are using something more, correct?

An example I keep hearing is "LLMs have no memory/understanding of time so ___" - but, agents have various levels of memory.

I keep trying to explain this in meetings, and in rando comments. If I am not way off-base here, then what should the term, or terms, be? LLM-based agents?


> Nit pick/question: The LLM is what you get via raw API call, correct?

You always need a harness of some kind to interact with an LLM. Normal web APIs (especially for hosted commercial systems) wrapped around LLMs are non-minimal harnesses: they have built-in tools, interpretation of tool calls, and application of what local toolchains expose as "prompt templates" to transform the context structure in the API call into a prompt (in some cases they even manage, on the backend, some of the conversation state used to construct the prompt).

> If you are using an LLM via a harness like claude.ai, chatgpt.com, Claude Code, Windsurf, Cursor, Excel Claude plug-in, etc... then you are not using an LLM, you are using something more, correct?

You are essentially always using something more than an LLM (unless "you" are the person writing the whole software stack and the only thing you are consuming is the model weights, or arguably a truly minimal harness that just takes settings and a prompt that is not transformed in any way before tokenization, and returns the result after no transformation or filtering other than mapping back from tokens to text).

But, yes, if you are using an elaborate frontend of the type you enumerate (whether web or CLI or something else), you are probably using substantially more on top of the LLM than if you were using the provider's web API.
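
To make the "prompt template" part concrete, here is a toy sketch of how a structured messages list might get flattened into a single string before tokenization. The delimiter tokens and the function name are made up for illustration; real harnesses use model-specific templates:

```python
# Hypothetical chat template: flatten structured messages into one prompt
# string. Real templates vary per model; this only shows the shape of the
# transformation a harness applies before tokenization.

def apply_chat_template(messages):
    """Flatten a list of {role, content} dicts into a single prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|{msg['role']}|>\n{msg['content']}")
    parts.append("<|assistant|>\n")  # trailing cue so the model responds next
    return "\n".join(parts)

prompt = apply_chat_template([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
])
print(prompt)
```

The point is that even the "raw" web API is already doing this kind of work for you behind the scenes.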


In meetings, I try to explain to the stakeholders the roles of system prompts, agentic loops, tool calls, etc., in the products I create.

However, they just look at the whole thing as "the LLM," which carries specific baggage. If we could all spread the knowledge of what is actually going on to the wider public, it would make my meetings easier, and prevent many very smart folks who are not practitioners from saying inaccurate stuff.


  If we could all spread the knowledge of what is actually going on to the wider public, it would make my meetings easier, and prevent very smart folks from outside the field from saying dumb-sounding stuff.
This is an example of why LLMs won't displace engineers as severely as many think. There are very old solved processes and hyper-efficient ways of building things in the real world that still require a level of understanding many simply don't care to achieve.

You're not off-base at all. The way I think about it:

- LLM = the model itself (stateless, no tools, just text in/text out)

- LLM + system prompt + conversation history = chatbot (what most people interact with via ChatGPT, Claude, etc.)

- LLM + tools + memory + orchestration = agent (can take actions, persist state, use APIs)

When someone says "LLMs have no memory" they're correct about the raw model, but Claude Code or Cursor are agents - they have context, tool access, and can maintain state across interactions.

The industry seems to be settling on "agentic system" or just "agent" for that last category, and "chatbot" or "assistant" for the middle one. The confusion comes from product names (ChatGPT, Claude) blurring these boundaries - people say "LLM" when they mean the whole stack.


I like to use the term "coding agents" for LLM harnesses that have the ability to directly execute code.

This is an important distinction because if they can execute the code they can test it themselves and iterate on it until it works.

The ChatGPT and Claude chatbot consumer apps do actually have this ability now, so they technically class as "coding agents", but Claude Code and Codex CLI are more obvious examples, as code execution is their key defining feature rather than a hidden capability that many people haven't spotted yet.
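
The execute-and-iterate loop is simple to sketch. This is a toy version, not the internals of any real coding agent: fake_model() stands in for the LLM and just "fixes" its code once it has seen a traceback:

```python
# Toy coding-agent loop: generate code, run it, feed errors back, retry.
# fake_model() is a stand-in for an LLM call; real agents send the task and
# the error output to a model instead.
import os
import subprocess
import sys
import tempfile

def fake_model(task, feedback):
    if feedback is None:
        return "print(1 / 0)"           # first attempt: buggy
    return "print('result:', 1 / 2)"    # after seeing the traceback: fixed

def run(code):
    """Execute a code string in a subprocess; return (returncode, output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True)
    os.unlink(path)
    return proc.returncode, proc.stdout + proc.stderr

def coding_agent(task, max_iters=3):
    feedback = None
    for _ in range(max_iters):
        code = fake_model(task, feedback)
        rc, output = run(code)
        if rc == 0:           # it ran cleanly: done
            return output
        feedback = output     # it failed: loop the error back to the model
    raise RuntimeError("gave up after max_iters attempts")
```

The ability to close this loop, rather than handing untested code back to the user, is the distinction being drawn.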


I was not sure about the framing of this poll.

Percentages might not align with actual usage patterns. If anyone can come up with a better framing, please create a new poll. I just want to know response data.


I am very curious about this:

> Theme park simulation game made with GPT‑5.4 from a single lightly specified prompt, using Playwright Interactive for browser playtesting and image generation for the isometric asset set.

Is "Playwright Interactive" a skill that takes screenshots in a tight loop with code changes, or is there more to it?


The skill source is here: https://github.com/openai/skills/blob/main/skills/.curated/p...

$skill-installer playwright-interactive in Codex! The model writes normal JS Playwright code in a Node REPL.


Thanks!

> "How do we protect ourselves against a competitor doing this?"

I have been thinking about this a lot lately, as someone launching a niche B2B SaaS. The unfortunate conclusion that I have come to is: have more capital for distribution than anyone else.

Is there any other answer to this? I hope so, as we are not in the well-capitalized category, but we have friendly user traction. I think the only possible way to succeed is to quietly secure some big contracts.

I had been hoping to bootstrap, but how can we in this new "code is cheap" world? I know it's always been like this, but it is even worse now, isn't it?


Related short by Strange Parts: https://youtube.com/shorts/4bVpnJHLV_0
