It really doesn’t matter for this discussion, because society and laws are structured as if people have free will. An AI has to live up to that as well.
Yes, that was their point. Humans have accountability to one another - either legally or in less formal ways (such as being fired if you're making your co-workers uncomfortable).
Current machines simply don't have that kind of accountability. Even if we wanted to, we can't punish or ostracize ChatGPT when it lies to us or makes us uncomfortable.
The point of this thread is that the explanation a human gives carries some weight because the human knows they may be held accountable for the truth of that explanation. ChatGPT's explanations carry no such weight, since ChatGPT itself has no concept of suffering consequences for its actions.
So, while both humans and ChatGPT can and do give bogus explanations for their actions, there are reasons to trust the humans' explanations more than ChatGPT's.
Whether or not we hold humans using ChatGPT accountable for their use of it is irrelevant to this thread.