Hacker News

I would not be comfortable sending my bank info, passwords, and all the other sensitive data I type and see on my screen to Gemini. How large is the qualitative performance gap compared with a local model?


If I had to put a grade on my own experience and evals, Gemini 2.5 Pro produces A- results and Qwen2.5-VL is maybe a B-/C+. Obviously everything's nondeterministic, so it's hard to guarantee a level of quality.

I'm reading through papers that suggest it should be possible to get SOTA performance on local models via distillation, and that's what I'll experiment with next.
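For context, distillation usually means training a small student model to match a large teacher's softened output distribution rather than just the hard labels. A minimal sketch of the standard temperature-scaled KL objective (pure Python, illustrative function names, not taken from any specific paper the commenter mentions):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature scaling; higher T softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2
```

When the student's logits match the teacher's, the loss is zero; the further they diverge, the larger the penalty, which is what drives the student toward teacher-level behavior on the distilled task.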


Any insights on Qwen3-Omni yet?


Looks awesome, but a 30B model is too big. Unfortunately, the vast majority of people probably have 32GB of RAM or less.
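A rough back-of-the-envelope check supports this: weight memory alone is roughly parameters × bits-per-weight ÷ 8, before counting the KV cache, activations, or the OS (hypothetical helper, sketch only):

```python
def weight_memory_gb(params_billion, bits_per_weight):
    """Approximate GB needed just to hold the weights (ignores KV cache,
    activations, and other runtime overhead)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# 30B parameters at fp16 is ~60 GB; even 4-bit quantization is ~15 GB,
# which is tight on a 32 GB machine once everything else is loaded.
print(weight_memory_gb(30, 16), weight_memory_gb(30, 4))
```

So a 30B model is only borderline runnable on a typical 32GB machine, and only with aggressive quantization.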


Google owns my email, browser, phone operating system, and a small number of my passwords. I assume it has already stolen all my confidential data by now.


Also, if you're not using an enterprise edition of Gemini, where your data is not used for model training, your sensitive prompts and responses are fully available to Google.


Your passwords should never be visible on screen anyway: they go straight from a password manager into a masked input field.



