Vicuna 13B's performance is an order of magnitude below ChatGPT's for all but gimmicky conversational stuff. Try giving both a reasonably large, task-based prompt with multiple steps and see what happens.
> Vicuna 13B performance is an order of magnitude below ChatGPT for all but gimmicky conversational stuff.
Until you connect it to external resources, I tend to think of anything you do with “brain-in-a-jar” isolated ChatGPT as gimmicky conversational stuff.
Maybe I should have phrased that better! I didn't mean that Vicuna was comparable to ChatGPT, just that it's the best Llama-based comparison you can make (since it's at least been conversationally trained).
No. OpenAI hasn't disclosed the parameter counts of GPT-3.5 or GPT-4, the models ChatGPT uses. You may be thinking of GPT-3, which is indeed a 175B-parameter model.