While arguing may not be productive, I have had good results challenging Gemini on hallucinated sources in the past, e.g., "You cited RFC 1918, which is a mistake. Can you try carefully to cite a better source here?" That would get it to re-evaluate (sometimes by using another tool), admit the mistake, and let the research continue.
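For what it's worth, the challenge doesn't need anything special; in the API it's just another turn in the same chat, so the model sees its earlier answer alongside the pushback. A minimal sketch using the google-generativeai Python SDK (the model name and both prompts are placeholders, not what I actually ran):

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder

    # Model name is illustrative; any chat-capable Gemini model works here.
    model = genai.GenerativeModel("gemini-1.5-pro")
    chat = model.start_chat()

    # First turn: whatever research question produced the suspect citation.
    chat.send_message("...research question here...")

    # Challenge turn: name the bad citation explicitly and ask for a re-check.
    # The chat object carries the history, so the model can revisit its answer.
    reply = chat.send_message(
        "You cited RFC 1918, which is a mistake. "
        "Can you try carefully to cite a better source here?"
    )
    print(reply.text)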
With this example, several attempts resulted in the same thing: Gemini expressing a strong belief that GitHub has a security capability that it really doesn't have.
If someone is able to get an accurate answer out of Gemini with a similar question, I'd be very curious to hear what that question was.
One of the main problems with arguing with LLMs is that your complaint becomes part of the prompt. Practically all LLMs will take "don't do X" and do X, because part of "don't do X" is "do X," and LLMs have no fundamental understanding of negation.