That’s not at all how multi-modal LLMs work: their visual input is not words generated by a classifier. Instead, the image is divided into patches, and each patch is tokenised by a visual encoder into a continuous embedding (essentially, the image is compressed). That sequence of embeddings is fed directly to the model alongside the text tokens.
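
For concreteness, here’s a minimal sketch of the patch-tokenisation step in PyTorch. The patch size (16), embedding width (768), and single linear projection are illustrative assumptions, not any particular model’s architecture; real encoders (e.g. a ViT) add attention layers, position embeddings, and often a resampler on top.

  import torch
  import torch.nn as nn

  patch_size = 16   # side length of each square patch, in pixels (assumed)
  embed_dim = 768   # width of the model's token embeddings (assumed)

  # A strided convolution is the standard trick for "split into patches
  # and linearly project each one" in a single op.
  patchify = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)

  image = torch.randn(1, 3, 224, 224)          # one RGB image, 224x224
  patches = patchify(image)                    # -> (1, 768, 14, 14)
  tokens = patches.flatten(2).transpose(1, 2)  # -> (1, 196, 768)

  # `tokens` is now a sequence of 196 continuous vectors, one per patch.
  # These are concatenated with the text-token embeddings and fed to the
  # transformer; no words or class labels are produced along the way.
  print(tokens.shape)  # torch.Size([1, 196, 768])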

