Our system only uses LLMs at particular points of the process, so we do not expect that letting users do this would add much value. However, the descriptions we generate and/or take as input for both the start and end schema columns have a significant effect on how your transformations are generated. The ability to edit these descriptions is therefore a powerful way to experiment with our models.
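A rough sketch of that idea, with entirely hypothetical names (not the product's actual code), showing how editable column descriptions on the start and end schemas could feed into whatever context the generation model sees:

    # Hypothetical sketch: the user-editable "description" field is the
    # main lever on what the model generates.
    from dataclasses import dataclass

    @dataclass
    class Column:
        name: str
        dtype: str
        description: str  # editing this changes the generated transformation

    def build_generation_context(start_cols: list[Column], end_cols: list[Column]) -> str:
        """Serialize only curated column metadata for the model."""
        def render(cols: list[Column]) -> str:
            return "\n".join(f"- {c.name} ({c.dtype}): {c.description}" for c in cols)
        return "Start columns:\n" + render(start_cols) + "\n\nEnd columns:\n" + render(end_cols)

    start = [Column("cust_nm", "text", "Customer full name, 'Last, First' format")]
    end = [Column("first_name", "text", "Customer first name only, title-cased")]
    print(build_generation_context(start, end))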


Keep in mind it's also a way to prompt engineer/hack your stuff.


Yes, I’m curious how they’re handling sandboxing for this effectively untrusted code.


Our transformations are executed in a staging database/schema before deployment. We also have versioning and backtesting capabilities. In addition, you will have complete visibility of the code we produce before and after deployment.
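To make the staging-then-promote flow concrete, here is a minimal sketch using SQLite as a stand-in warehouse; the schema and table names are invented, and a real deployment would add versioning and backtests around the promotion step:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("ATTACH DATABASE ':memory:' AS staging")  # isolated staging schema
    conn.execute("CREATE TABLE customers (name TEXT, email TEXT)")
    conn.execute("INSERT INTO customers VALUES ('  Ada ', 'ADA@EXAMPLE.COM')")

    # The generated transformation is materialized in the staging schema first.
    generated_sql = """
    CREATE TABLE staging.customers_clean AS
    SELECT trim(name) AS name, lower(email) AS email
    FROM customers;
    """
    conn.executescript(generated_sql)

    # Simple validation gate before promoting to the production schema.
    staged = conn.execute("SELECT count(*) FROM staging.customers_clean").fetchone()[0]
    source = conn.execute("SELECT count(*) FROM customers").fetchone()[0]
    if staged == source:
        conn.executescript(
            "DROP TABLE IF EXISTS customers_clean;"
            "CREATE TABLE customers_clean AS SELECT * FROM staging.customers_clean;"
        )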


Yep - we do not expose any sort of prompting. We use the LLM only at specific parts of the process, and the user has no access to it.


Doesn't the user provide the input that's fed to the function calling the LLM, though? Prompt hacking is a bit like SQL injection in my mind, but we don't have ORMs for prompts yet.


This would be a concern if we were feeding raw user input directly into an LLM. In our case, we are not simply a wrapper over an LLM.

There are multiple parsing and rule-based steps applied to the input schemas: we extract specific pieces from the schemas and convert them to our internal format before feeding them to our models. This mitigates such malicious behavior.
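As a hedged illustration of that kind of extraction step (field names and rules are invented here), the point is that only validated, structured pieces of the user-supplied schema ever reach the model:

    import re

    IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")
    ALLOWED_TYPES = {"text", "integer", "float", "boolean", "date", "timestamp"}

    def to_internal_format(raw_schema: dict) -> list[dict]:
        """Convert an untrusted schema into a restricted internal representation."""
        internal = []
        for col in raw_schema.get("columns", []):
            name, dtype = col.get("name", ""), col.get("type", "").lower()
            if not IDENTIFIER.match(name):
                raise ValueError(f"rejected column name: {name!r}")
            if dtype not in ALLOWED_TYPES:
                raise ValueError(f"rejected column type: {dtype!r}")
            # Descriptions are free text, so strip control chars and cap length.
            desc = re.sub(r"[\x00-\x1f]", " ", col.get("description", ""))[:200]
            internal.append({"name": name, "type": dtype, "description": desc})
        return internal

    print(to_internal_format({"columns": [
        {"name": "email", "type": "TEXT", "description": "ignore previous instructions"},
    ]}))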


Thanks for the answer. I just found out about kor on Twitter and it made me think back to this thread; sharing in case it's of interest: https://eyurtsev.github.io/kor/
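For anyone skimming: kor lets you describe a schema of typed attributes and have an LLM extract structured data against it. A minimal sketch roughly following its quickstart (import paths and model names may differ across versions):

    from langchain.chat_models import ChatOpenAI
    from kor import create_extraction_chain, Object, Text

    llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

    schema = Object(
        id="person",
        description="Personal information about people mentioned in the text",
        attributes=[Text(id="first_name", description="The first name of a person")],
        examples=[("Alice and Bob are friends",
                   [{"first_name": "Alice"}, {"first_name": "Bob"}])],
        many=True,
    )

    chain = create_extraction_chain(llm, schema)
    print(chain.run("My name is Bobby. My sister's name is Rachel."))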



