
Yep, that's what I've been thinking. This shouldn't be that hard: at this point LLMs should already have all the 'rules' (e.g. credit card A buying flight X gives you m points, which can be converted into n miles) in their parameters, or they can easily query the web to get them. Devs need to encode the whole thing into a decision mechanism and, once it executes, ask the LLM to chase down the specific path (e.g. bombard the ticket office with emails). A rough sketch of what I mean is below.
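
Something like this minimal sketch, in Python. To be clear, everything here is made up for illustration: the card names, the points/conversion numbers, and the ask_llm() stub are all hypothetical, and a real system would load actual program rules and call a real LLM API instead of the stub.

    # Hypothetical rule table: card -> (points earned per dollar on flights,
    # miles per point when converted). Numbers are invented for illustration.
    RULES = {
        "card_a": (3.0, 1.25),
        "card_b": (2.0, 1.50),
    }

    def best_card(fare):
        # Decision mechanism: pick the card whose points convert into the
        # most miles for this fare.
        return max(RULES, key=lambda card: fare * RULES[card][0] * RULES[card][1])

    def ask_llm(prompt):
        # Stub standing in for a real LLM call; here it just echoes the
        # request so the sketch runs end to end.
        return "[LLM response to: %s]" % prompt

    card = best_card(450.0)  # the deterministic part picks the path...
    # ...and the LLM is only asked to execute the chosen path.
    print(ask_llm("Draft emails to the ticket office to book with " + card))

The point of the split is that the rules engine stays deterministic and auditable, and the LLM only handles the fuzzy execution step at the end.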


And what happens in the 1% of cases where this fails? At the moment the responsibility is on the person. If I incorrectly book my flight for date X, receive the itinerary, and realise I chose the wrong month - then damn, I made a mistake and will have to rectify it.

An LLM could organise flights with a lower error rate; however, when it goes wrong, what is the recourse? I imagine it's anger and a self-promise never to use AI for this again.

If you're saying that the AI just supplies suggestions, then maybe it's useful. Though wouldn't people still be double-checking everything anyway? I'm not sure how much effort this actually saves.



