Counterpoint: "ChatGPT said this" is an entirely legitimate approach in many contexts, and this attitude is toxic.
One example: Code reviews are inherently asymmetrical. You may have spent days building up context, experimenting, and refactoring to make a PR; then the reviewer is expected to have meaningful insight in (generously) an hour? AI code reviews help restore balance: they may notice things a human wouldn't, and it's ok for the human reviewer to say "hey, ChatGPT says this is an issue but I'm not sure - what do you think?"
We automatically run all our PRs through Claude reviews, and it helps a LOT.
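For a sense of what that looks like in practice, here's a rough sketch of the kind of script you could wire into CI: fetch the PR diff from GitHub, send it to Claude via the Anthropic Messages API, and print the review. The repo name, PR number, prompt, and model ID are placeholders - this is the general shape of such a bot, not our actual setup.

    # Hypothetical sketch of an automated PR review step.
    # REPO, PR_NUMBER, the prompt, and the model ID are all illustrative.
    import os
    import requests

    GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]        # assumed set in CI
    ANTHROPIC_KEY = os.environ["ANTHROPIC_API_KEY"]  # assumed set in CI
    REPO = "example-org/example-repo"                # placeholder repo
    PR_NUMBER = 123                                  # placeholder PR

    # Fetch the raw diff for the PR from the GitHub REST API.
    diff = requests.get(
        f"https://api.github.com/repos/{REPO}/pulls/{PR_NUMBER}",
        headers={
            "Authorization": f"Bearer {GITHUB_TOKEN}",
            "Accept": "application/vnd.github.v3.diff",
        },
    ).text

    # Ask Claude to review the diff via the Anthropic Messages API.
    resp = requests.post(
        "https://api.anthropic.com/v1/messages",
        headers={
            "x-api-key": ANTHROPIC_KEY,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        json={
            "model": "claude-sonnet-4-20250514",  # substitute whatever model you use
            "max_tokens": 1024,
            "messages": [{
                "role": "user",
                "content": (
                    "Review this diff for bugs, missed edge cases, and "
                    "anything a reviewer without full context might miss:\n\n"
                    + diff
                ),
            }],
        },
    )

    # Print the review text; a real pipeline would post it as a PR comment.
    print(resp.json()["content"][0]["text"])

In a real setup you'd post the output back as a PR comment (clearly labeled as coming from the model) rather than printing it, so the human reviewer can treat it as one more input instead of a verdict.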
Another example: Lots of times we have several people debugging an issue and nobody has full context. Some folks are reading code, some are running LLM prompts, some are searching Slack, etc. Sometimes the LLMs come up with good ideas but nobody is sure, because none of us has all the context we need. "ChatGPT says..." is a way of bringing an idea to everyone's attention.
I think this generalizes to forum posts. "ChatGPT says" is similar to "Wikipedia says": it's not the end of the conversation, but it helps get everyone on the same page, especially when nobody is an expert.
I'd agree. Mentioning that information came from an LLM is certainly important so people know to weigh it accordingly. It may be incorrect, but it's still useful as a rough average of what some parts of the internet say.
And citing ChatGPT is certainly better than assuming it's right and presenting its answer as an unattributed assertion.