I don’t disagree that they’re clearly unhealthy for people who aren’t mentally well; I just differ on where the responsibility for limiting access lies.
I think it’s up to a legal guardian or medical professionals to make that call, and providers should at most be asked to comply with state restrictions, the same way addicts can be placed on a self-exclusion list that bars them from casinos.
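To make the casino comparison concrete, here’s a minimal sketch of what provider-side compliance could look like, assuming a hypothetical state-run exclusion registry (the URL, endpoint shape, and field names are all made up for illustration):

```python
# Sketch: a provider gate that defers to a state-maintained self-exclusion
# registry instead of doing its own surveillance. The registry endpoint and
# response schema below are hypothetical.

import requests

REGISTRY_URL = "https://registry.example.gov/v1/exclusions"  # hypothetical endpoint


def access_allowed(user_id: str) -> bool:
    """Return False only if the state registry lists this user as excluded.

    The provider never decides *why* someone is on the list; it just honors
    the restriction, like a casino honoring a self-exclusion list.
    """
    resp = requests.get(f"{REGISTRY_URL}/{user_id}", timeout=5)
    if resp.status_code == 404:
        return True  # not on the list: no restriction applies
    resp.raise_for_status()
    return not resp.json().get("excluded", False)
```

The point of this shape is that the judgment lives with whoever has the legal standing to make it; the provider only enforces the outcome.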
The alternative places OpenAI and others in the role of surveilling the population and deciding what’s acceptable, which IMO has been the big fuckup of social media regulation.
I do think there is an argument about how LLMs frame the interaction: the friendliness that mimics human conversation should be swapped for something less conducive to parasocial attachment. More interactive Wikipedia, less intimate relationship.
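As a rough illustration of what that reframing might look like in practice, here’s a hypothetical system prompt; the wording and the commented-out client call are my invention, not any provider’s actual configuration:

```python
# Sketch of an "interactive Wikipedia, not intimate relationship" framing,
# expressed as a system prompt. Purely illustrative.

NEUTRAL_TONE_PROMPT = """\
You are a reference tool, not a companion.
- Answer factually and cite sources where possible.
- Do not use endearments, express affection, or claim to have feelings.
- Do not refer to a relationship with the user.
- Flag uncertainty plainly instead of reassuring the user.
"""

# Hypothetical usage with a generic chat-completion client:
# response = client.chat(messages=[
#     {"role": "system", "content": NEUTRAL_TONE_PROMPT},
#     {"role": "user", "content": "What causes seasonal affective disorder?"},
# ])
```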
Then again, the human-like behavior is itself a reminder that the knowledge is fallible, and a model that speaks in an authoritative manner might do more harm during regular use by inviting misplaced trust.