I wrote this piece to share my view on what Cursor should do to manage some of the negative sentiment they're getting online. What do you folks think?
Serious question for you, without the judgment it's certainly going to imply. (There's just no way to ask this without sounding snarky.) I'm genuinely curious: is writing so difficult that you need AI to help you?
I understand the "dump voice notes -> AI transcription" step, but why the "clean up" and "iterate" steps with AI?
For me it's not about difficulty; it's about friction. BJ Fogg's behavioral model, B=MAP, says Behavior = Motivation × Ability × Prompt. When you increase ability (lower friction), you get more behavior for the same motivation.
AI lowers my writing friction. Same motivation, more output. I write more, I get feedback faster, I iterate more. That's the value for me.
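To make the B=MAP claim concrete, here's a toy sketch (my own illustration, not Fogg's formal model) treating behavior as the product of the three factors, each on a 0-1 scale:

```python
# Toy reading of B = MAP: behavior as the product of motivation,
# ability, and prompt strength (each scored 0-1). Scales and values
# are invented for illustration.
def behavior(motivation: float, ability: float, prompt: float) -> float:
    return motivation * ability * prompt

# Same motivation, same prompt; AI raises ability (lowers friction).
before = behavior(motivation=0.6, ability=0.3, prompt=1.0)  # ~0.18
after = behavior(motivation=0.6, ability=0.9, prompt=1.0)   # ~0.54
```

Because the factors multiply, tripling ability triples behavior at fixed motivation, which is the "same motivation, more output" point.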
My reasoning: I use AI for development work (Claude Code), and better models = fewer wasted tokens = less compute = less environmental impact. This isn't a privacy issue in a work context.
I regularly run concurrent AI tasks for planning, coding, and testing, easily hundreds of requests per session. If training on that interaction data helps future models be more efficient and accurate, everyone wins.
The real problem isn't privacy invasion; it's AI velocity dumping a cognitive tax on human reviewers. I'd rather have models that learned from real usage patterns and got better at being precise on the first try, instead of producing confidently verbose slop that wastes reviewer time.
I agree it's not about the 10x engineers or the greenfield. I think YC's selection process is still focused on finding distinguished individuals, but within two specific constraints.
I've been using Figma to build my decks for a while now, and I bet a lot of other folks have too. Figma Slides looks like it takes what we've been doing and makes it way smoother.