"Neologism" undersells how this term is being used. It's a technical term of art that has created its own semantic category in LLM research, one that separates "text generated that is factually inaccurate according to ${sources}" from "text generated that is morally repugnant to ${individuals}" or "text generated that ${governments} want to censor".
These three categories are identical at a technological level, so I think it's entirely reasonable to flag that serious LLM researchers are treating them as distinct classes of problems when they're fundamentally not distinct at all. This isn't just linguistic pedantry; it's a case of the language actively impeding a proper understanding of the problem by the very researchers working on it.