Hacker News
ijk | 4 months ago | on: Sampling and structured outputs in LLMs
One pattern I've seen develop (in PydanticAI and elsewhere) is to constrain the output but include an escape hatch: if an error happens, the model can bail out and report the problem rather than being forced to proceed down a doomed path.
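A minimal sketch of the shape this takes, using plain Pydantic models (the names Invoice and EscapeHatch are made up for illustration, not any particular library's API): the allowed output is a union of the schema you actually want and a small "report a problem" model, so the constrained decoder always has a legal path that doesn't require inventing a value.

    from typing import Union

    from pydantic import BaseModel, Field


    class Invoice(BaseModel):
        """The structured result we actually want."""
        vendor: str
        total_cents: int = Field(ge=0)


    class EscapeHatch(BaseModel):
        """Escape hatch: lets the model report a problem instead of
        fabricating a value just to satisfy the schema."""
        problem: str


    # The schema the sampler is constrained to: either a valid Invoice
    # or an explicit report of why one couldn't be produced.
    ExtractionResult = Union[Invoice, EscapeHatch]


    def handle(result: ExtractionResult) -> None:
        # Branch on which side of the union the model took.
        if isinstance(result, EscapeHatch):
            print(f"model bailed out: {result.problem}")
        else:
            print(f"parsed invoice from {result.vendor}: {result.total_cents} cents")


    if __name__ == "__main__":
        # Simulate the two paths a constrained model could take.
        handle(Invoice(vendor="Acme", total_cents=1299))
        handle(EscapeHatch(problem="The document is a receipt, not an invoice."))

In practice you hand that union to whatever structured-output mechanism you're using as the allowed output type; the key point is that the constrained grammar itself contains the bail-out branch.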