
Hmm. Hypothetically, if a human on a first-line help desk gives advice so completely bad that it amounts to a crime, are they liable or is the company? Because I'd guess a chatbot definitely couldn't be held liable.


How often is your fast food order 100% correct?


Correctness isn't one-dimensional. A wrong fast-food order might substitute or leave something out. There's essentially no chance the employee will swap in a random product from some other store.

But in this example the AI could hallucinate a statement attributed to you that it actually stitched together from Reddit comments.


Sometimes they accidentally include tomatoes but they rarely include bombs.


In the worst case, the wrong food can be just as dangerous as a bomb for someone with an allergy.



