
Well, you’d also be forgiven for thinking ‘how on earth can a social website chatbot be a white supremacist?’ And yet xAI managed to prove that is a legitimate concern.

xAI has a shocking track record of poor decisions when it comes to training and prompting their AIs. If anyone can make a partisan coding assistant, they can. Indeed, given their leadership and past performance, we might expect them to explicitly try.



What’s their incentive to do this? What do they gain by making a partisan model instead of one that just works well?


You really can’t think of ANY advantage to becoming a perfected propaganda machine? Not one?


Enlighten me. How would a partisan coding model help?


Competence across fields is correlated for LLMs: better coding probably means more competent rhetoric and more competent Swahili-Latin translation. But only "probably"; whether the relationship is causal is still debated.


Perhaps you’ve never heard of Tay?

Microsoft did pioneering work in the Nazi chatbot space.


FWIW, Tay was unintentional and was shut down immediately upon realization… a very good case study for safety folks!



