Hacker News
danielhanchen | 5 months ago | on: Gemma 3 270M: Compact model for hyper-efficient AI
Oh :( Maybe the settings? Could you try
temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0
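For readers unfamiliar with these knobs, here is a minimal sketch of what a sampler does with them. The filtering order (temperature, then top_k, then top_p, then min_p) and the function itself are assumptions for illustration; real runtimes implement and order these filters in their own ways.

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=64, top_p=0.95, min_p=0.0, rng=None):
    """Draw one token id after applying the suggested settings.
    Hypothetical helper; the filter order here is an assumption."""
    rng = rng or random.Random()
    # Temperature scaling, then a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = {i: e / total for i, e in enumerate(exps)}
    # top_k: keep only the k most likely tokens.
    kept = sorted(probs, key=probs.get, reverse=True)[:top_k]
    # top_p (nucleus): keep the smallest prefix whose cumulative mass >= top_p.
    cum, nucleus = 0.0, []
    for i in kept:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # min_p: drop tokens below min_p times the best token's probability
    # (min_p = 0.0 disables this filter).
    floor = min_p * probs[nucleus[0]]
    nucleus = [i for i in nucleus if probs[i] >= floor]
    # Renormalize over the survivors and sample.
    z = sum(probs[i] for i in nucleus)
    r, acc = rng.random() * z, 0.0
    for i in nucleus:
        acc += probs[i]
        if r <= acc:
            return i
    return nucleus[-1]
```

With a strongly peaked distribution, top_p = 0.95 collapses the candidate set to the single best token, which is one way these settings suppress degenerate repetition loops.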
canyon289 | 5 months ago
Daniel, thanks for being here providing technical support as well. I cannot express enough how much we appreciate all your work and partnership.
danielhanchen | 5 months ago
Thank you and fantastic work with Gemma models!
simonw | 5 months ago
My tooling only lets me set temperature and top_p, but setting them to those values did seem to avoid the infinite loops, thanks.
danielhanchen | 5 months ago
Oh, fantastic that it worked! I was actually trying to see if we can auto-set these within LM Studio (Ollama, for example, has params and template) - not sure if you know how that can be done? :)
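For context on the Ollama route mentioned here: a Modelfile can bake sampling defaults into a model via PARAMETER lines, roughly like the sketch below. The model tag is a placeholder, and min_p support depends on the Ollama version.

```
FROM gemma3:270m
PARAMETER temperature 1.0
PARAMETER top_k 64
PARAMETER top_p 0.95
PARAMETER min_p 0.0
```

Anyone pulling such a model then gets these defaults without setting sampler options themselves.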