
Isn't that the point of code reviews, though? A human can write incorrect descriptions as well. This one would be incredibly easy for any maintainer of the project to catch.

Of course, it'll get harder and harder to spot the problems, but that just brings the "bug" closer to the level a human would have produced anyway.

Banning AI doesn't fix the problem, especially since the type of person that would have AI generate a description and then not even read it also isn't going to follow the rules.



The problem is that current LLMs are capable of producing junk that's prima facie plausible in industrial quantities, and finding the sound material within it is a large and growing burden for any organization, especially those that accept contributions from the public.

> The type of person that would have AI generate a description and then not even read it also isn't going to follow the rules.

This is not a sound argument against this rule; it is an argument for the proposition that current LLMs present a threat to the open-source model, despite this rule.


> A human can write incorrect descriptions as well.

Sure they can, but these generated descriptions are completely unrelated to what the package or project at hand actually does.



