
It IS their own work.

The simplest refutation of your point of view: who or what is responsible if the submitted work is wrong?

It will always be the person, never the computer. Conveniently, AI always acts as if it has no skin in the game… because it literally and figuratively doesn’t… so treating it as if it does should be penalized.



If it’s the output of an LLM, it’s not their own work.


Who prompted the LLM?

Who vetted the output?

Who ensured there was adequate test coverage?

Who insisted on a certain design?

Who is to blame if it's bad code? That is the same entity that is responsible, and the same entity that "did it."

tl;dr your stance is full of poop, my dude


“I looked up the topic on Wikipedia and I highlighted the text and I selected copy and I selected paste so I don’t see how this is plagiarism.”

That’s what you sound like.


You sound like someone who has zero understanding of why that is a ridiculous comparison.

There are a thousand and one ways I participate when building something with LLM assistance: everything from ORIGINATING THE IDEA TO BEGIN WITH, to working out a thorough spec, to ensuring the tests are actually valid, to asking for a specific design like hexagonal architecture, to requiring specific things like benchmarks. Literally ALL OF THE INITIATIVE IS MINE, AND ALL OF THE SUCCESS/FAILURE CONSEQUENCES ARE MINE, AND THAT IS ULTIMATELY ALL THAT MATTERS.

Please head toward a different career if you now have a stupid and contrived excuse not to keep working with the machines, because you sound like a whining child.

And you're not answering the question, because you know it would end your argument: WHO OR WHAT IS RESPONSIBLE IF THE CODE SUCCEEDS OR FAILS?


I started working in the industry when you could still buy a Lisp Machine new, I’ve been studying AI even longer, and I’ve been very successful at it. I not only know what I’m talking about, I have the experience to back it up.

You sound like someone who’s deeply in denial about exactly how the LLM plagiarism machines work. You really do sound like a student defending themselves against a plagiarism charge by asserting that since they did the work of choosing the text to put into their essay and massaging the grammar so it fit, nobody should care where it came from.


By that definition, every single human who wrote a paper after reading a source document is a “plagiarism machine”

and I’m 53 and well remember Symbolics from freshman year at Cornell; in fact, my application essay there was about fuzzy logic (AI-adjacent) and probably got me in, so I too am quite familiar

i’m also quite good at debate. the flaw in your logic is that plagiarism requires accountability, and no machine can be accountable, only the human who used it. ergo, it is still the work of the human: the human values, the human vets, the human initiates, and the human gains or loses based on the combined output. end of story. accelerated thought is still thought, and anyway, if a machine can replicate a thought, it wasn’t particularly original to begin with


and your stance is not your own if you got an LLM to stand in for you. ;-P

human prompting != human production



