Hacker News | new | past | comments | ask | show | jobs | submit | login

If a $200/month pro tier is successful, it could open the door to a $2,000/month segment, then a $20,000/month segment will appear, and a divide will begin to form between those who can pay to get ahead with AI and those who cannot.


Agreed. Where can I read about setting up an LLM comparable to Claude, with at least the length of Claude's context window, and what are the hardware requirements? I've found Claude incredibly useful.


If you're looking into running models locally, a 405B-parameter model sounds like the place to start.

Once you understand the basics, you could practice with a privately hosted LLM (renting GPU capacity by the hour to run your own model), tweak it until it's dialled in, and then make the leap to your own hardware.


And according to Meta, you can now get 405B-level quality in a 70B model. Costs come down massively with that. I wonder whether it's really as good as they claim, though.
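To put the 70B-vs-405B cost difference in perspective, here is a back-of-the-envelope sketch of the memory needed for the weights alone. This is an illustration, not a deployment guide: it ignores the KV cache, activations, and runtime overhead, and the bytes-per-parameter figures are common rules of thumb for fp16, 8-bit, and 4-bit quantization, not exact numbers for any specific format.

```python
# Rough VRAM estimate for model weights alone (ignores KV cache,
# activations, and runtime overhead). Bytes-per-parameter values are
# rules of thumb for common quantization levels.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GiB."""
    return params_billions * 1e9 * bytes_per_param / 2**30

for params in (70, 405):
    for name, bpp in (("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)):
        gb = weight_memory_gb(params, bpp)
        print(f"{params}B @ {name}: ~{gb:.0f} GiB")
```

Even at 4-bit, a 405B model needs on the order of 190 GiB just for weights (multi-GPU territory), while a 4-bit 70B fits in roughly 33 GiB, within reach of a single high-end workstation GPU or a pair of consumer cards.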


Full-blown agents, but they really have to be able to replace a semi-competent human. That's harder than it sounds, especially for edge cases that a human can easily get past.


Agents still need a fair bit of human input and design and tweaking.


This is a significant concern for me too.


It's important to become an early user of everything while AI is heavily subsidized.

Over time, open-source models will also get more done per dollar of compute, and hopefully the gap will stay small.


The question is whether OpenAI is actually making money at $200/month.


With o1-preview on the $20 subscription, my queries were typically answered in 10-20 seconds. I've tried the $200 subscription with some queries and got 5-10 minute answer times. Unless load has substantially increased and I was just waiting in a queue for compute, I'd assume they throw a lot more hardware at o1-pro. So it's entirely possible that $200/month still runs at a loss.
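The latency figures above imply a rough bound on the extra compute per query. This toy sketch uses only the numbers from the comment and assumes wall-clock time is a proxy for GPU-seconds consumed, which is a big assumption, since some of the extra time could be queueing rather than computation:

```python
# Toy comparison of compute per query, using the latency ranges quoted
# above. Assumes answer time is roughly proportional to GPU-seconds
# (a strong assumption; queueing delay would inflate these ratios).

preview_seconds = (10, 20)        # o1-preview on the $20 plan
pro_seconds = (5 * 60, 10 * 60)   # o1-pro on the $200 plan

low_ratio = pro_seconds[0] / preview_seconds[1]   # best case for OpenAI
high_ratio = pro_seconds[1] / preview_seconds[0]  # worst case

print(f"o1-pro uses roughly {low_ratio:.0f}x to {high_ratio:.0f}x "
      f"the compute per query")
```

Under these assumptions, compute per query rises 15x to 60x while the subscription price rises only 10x, which is consistent with the guess that the $200 tier could still be loss-making for heavy users.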


For funded startups, losing less money can extend runway and free up capacity, especially at the scale they're spending.



