>should convince you that ChatGPT should be nowhere near anyone dealing with psychological issues.
Is that a debate worth having though?
If the tool is available universally it is hard to imagine any way to stop access without extreme privacy measures.
Blocklisting people would require public knowledge of their issues, and one risks the law enforcement effect, where people don’t seek help for fear that it ends up in their record.
Yes. Otherwise we're accepting "OpenAI wants to do this so we should quietly get out of the way".
If ChatGPT has "PhD-level intelligence" [1], then identifying people using ChatGPT for therapy should be straightforward, even more so for users with explicit suicidal intentions.
As for what to do, here's a simple suggestion: make it a three-strikes system. "We detected you're using ChatGPT for therapy - this is not allowed by our ToS as we're not capable of helping you. We kindly ask you to look for support within your community, as we may otherwise have to suspend your account. This chat will now stop."
>Yes. Otherwise we're accepting "OpenAI wants to do this so we should quietly get out of the way".
I think it’s fair to demand that they label/warn about the intended usage, but policing it is dystopian. Do car manufacturers immediately call the police when the speed limit is exceeded? Should phone manufacturers cut off calls when the conversation deals with illegal topics?
I’d much rather regulation went the exact opposite way, seriously limiting the amount of analysis they can run over conversations, particularly when content is not anonymised.
If there’s one thing we don’t want, it’s OpenAI storing data about mental health issues and potentially selling it to insurers, for example. The fact that they could be doing this right now is IMO much more dangerous than tool misuse.
Are people using ChatGPT for therapy more vulnerable than people using it for medical or legal advice? In my experience, talking about your problems to the unaccountable bullshit machine is not very different from "real" therapy.
> Are people using ChatGPT for therapy more vulnerable than people using it for medical or legal advice?
Probably. If you are in therapy because you’re feeling mentally unstable, by definition you’re not as capable of separating bad advice from good.
But your question is a false dichotomy, anyway. You shouldn’t be asking ChatGPT for either type of advice. Unless you enjoy giving yourself psychiatric disorders.
I've been talking about my health problems to unaccountable bullshit machines my whole life and nobody ever seemed to think it was a problem. I talked to about a dozen useless bullshit machines before I found one that could diagnose me with narcolepsy. Years later out of curiosity I asked ChatGPT and it nailed the diagnosis.
Maybe the tool should not be available universally.
Maybe it should not be available to anyone.
If it cannot be used safely by a vulnerable class of people, if identifying that class of person well enough to block their use is infeasible, and if its primary purpose is simply to bring OpenAI more profit, then maybe the world is better off without it being publicly available.
>If it cannot be used safely by a vulnerable class of people, if identifying that class of person well enough to block their use is infeasible
Should we stop selling kitchen knives, packs of cards or beer as well?
This is not a new problem in society.
>and its primary purpose is simply to bring OpenAI more profit
This is true for any product, unless you mean that it has no other purpose, which is trivially contradicted by the number of people who decide to pay for it.
I don’t disagree that they are clearly unhealthy for people who aren’t mentally well, I just differ on where the role of limiting access lies.
I think it’s up to the legal guardian or medical professionals to check that, and providers should at most be asked to comply with state restrictions, the same way addicts can be put on a list banning them from casinos.
The alternative places OpenAI and others in the role of surveilling the population and deciding what’s acceptable, which IMO has been the big fuckup of social media regulation.
I do think there is an argument about how LLMs frame interaction - the friendliness that mimics human conversation should be traded for something less parasocial. More interactive Wikipedia, less intimate relationship.
Then again, the human-like behavior reinforces the sense that the knowledge is fallible, and speaking in an authoritative manner might be more harmful during regular use.
“El hambre agudiza el ingenio”, we say in Spanish. Hunger sharpens the mind.
Growing up with fewer resources than others paradoxically leads to better outcomes sometimes, since you’re conscious of the barriers around you and that motivates you to overcome them.
If I had grown up with the latest iPhone I would never have cared about rooting and custom ROMs, for example.
I didn’t know either - or rather, I had never stopped to consider what a server needs to do to expose a git repo.
But more importantly, I’m not sure why I would want to deploy something by pushing changes to the server. In my mental model the repo contains the source of truth (SOT), and whatever’s running on the server is ephemeral, so I don’t want to mix those two things.
I guess it’s more comfortable than scp-ing individual files for a hotfix, but how does this beat pushing to the SOT, sshing into the server and pulling changes from there?
There's a lot of configuration possible due to the fact that git is decentralized. I have a copy on my computer, which is where I do work, another on a VPS for backup, and one on the app server which only tracks the `prod` branch. The latter is actually bare, but there's a worktree for the app itself. The worktree is updated via a post-receive hook, and I deploy changes via a simple `git push server prod`.
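Roughly, the moving parts look like this. A minimal sketch, with made-up paths (~/repo.git for the bare repo, /srv/app for the deployed files, `server` as the remote name) and using the plain --work-tree flavour of the trick rather than my exact setup:

    # on the server: a bare repo that only exists to be pushed to
    git init --bare ~/repo.git

    # ~/repo.git/hooks/post-receive (must be executable):
    #!/bin/sh
    # after every push, check the prod branch out into the app directory
    while read oldrev newrev ref; do
      if [ "$ref" = "refs/heads/prod" ]; then
        git --git-dir="$HOME/repo.git" --work-tree=/srv/app checkout -f prod
      fi
    done

    # on the dev machine: point a remote at the server and deploy
    git remote add server user@host:repo.git
    git push server prod

If you go with an actual `git worktree add` checkout instead of the --work-tree flag, you may also need to relax receive.denyCurrentBranch on the server so the push to the checked-out branch isn't refused.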
You actually led me into a dive to learn what worktrees are, how bare repos + worktrees behave differently from a regular clone and how a repo behaves when it’s at the receiving end of a push, so thanks for that!
I’ve never worked with decentralized repos, patches and the like. I think it’s a good moment to grab a book and relearn git beyond shallow usage - and I suspect its interface is a bit too leaky to grok it without understanding the way it works under the hood.
I think a difference is that Apple has the means to change the behavior of the device after the fact, in ways that the person who purchased the product doesn’t.
This is unique to modern technology, and the fact that they sell you the house while keeping sole ownership of the keys to certain rooms is indeed worth examining, I think.
I have noticed that better devices just lead me to more time spent in apps I don’t really enjoy, just because I like the device itself.
I’ve had success consciously worsening my experience, doing stuff like reducing color intensity with accessibility options or using the web version of an app for added friction, which is ridiculous but here we are.
I had a similar experience rebooting my 9-year-old iPhone [0] after a more recent one went out of service. Hours of screen procrastination got replaced with IRL activities/thinking. I decided not to repair the fancy LCD and to keep the little friend. It’s been two years and I don’t feel like going back any time soon.
Reducing color intensity is a great idea for worsening the experience; I’ll give it a go. Yet the first thing I do after waking up is check Hacker News, and the design is probably not at fault there. Still some self-improvement to do.
I have the same experience. I felt it especially when moving to a new iPhone with a 90 or 120Hz screen refresh rate. Everything is so smooth that it becomes pleasurable in itself.
But not only that: my work iPhone also got recently upgraded from an old SE, with a small screen and laggy performance, to the new 16e, and I found myself more eager than ever to check work email and MS Teams.
I don’t think that’s a good development, but in the end it’s my responsibility and my own decision how I use those devices. That also means I will probably downgrade to a worse iPhone instead of getting the best one available.
I’ve considered that as well: simply getting rid of the high tech altogether and going for a budget or old phone. My main issue with that is the camera, as I place a lot of importance on photos/videos.
I know some people have gone back to carrying a digital pocket camera, but I haven’t really bought into the idea, partly for convenience and partly because I think taking one out has different social implications.
It definitely does, but in my experience a standalone camera is usually better received than a phone.
I think it’s got to do with the implication of easy shareability. Pointing a phone at someone always brings to mind the idea that the photo can be sent anywhere within seconds. Are they going to post you on their Instagram story? Are they going to send it to their friends and laugh about you?
The friction to sharing photos is so much higher with a standalone camera that I think a lot of people feel much more comfortable with one pointed at them.
Then again, that same friction quickly becomes a problem for the user - I know I’ve lost a lot of my photos just because I couldn’t be bothered to connect the camera, transfer the photos, organize them, back them up etc.
For me it’s not really about how well it will be received, but rather that cameras trigger a more artificial response.
Selfies or phone pictures are quick and people mostly don’t react, but cameras make us pose, subconsciously. At least I feel a phone gets me more natural photos, which work better as memories of the moment.
The lack of instant online backup is also a good point; I don’t know whether that’s on the table with newer models.
It's a good idea. Companies try really hard to optimize and make everything they want you to do as easy and smooth as possible (and vice versa). Personally I avoid things like Apple Pay for this reason, it's there to remove friction from purchasing stuff, which results in us doing more of it.
Huge agree. Apple likes to pay lip service to this with "screen time" features, but will they make a smaller phone for people who don't want their life centered around staring at the shiny screen? No, because they don't sell as much as big phones.
Why YouTube specifically? In my experience it is the tamest of all feeds.
Not that they have any more morals or self control, they just seem to have a comparatively awful algorithm that brings up the same 14 videos over and over.
because all the safety stuff is bullshit. it's like asking a mirror company to make mirrors that modify the image to prevent the viewer from seeing anything they don't like
good fucking luck. these things are mirrors and they are not controllable. "safety" is bullshit, ESPECIALLY if real superintelligence was invented. Yeah, we're going to have guardrails that outsmart something 100x smarter than us? how's that supposed to work?
if you put in ugliness you'll get ugliness out of them and there's no escaping that.
people who want "safety" for these things are asking for a motor vehicle that isn't dangerous to operate. get real, physical reality is going to get in the way.
I think you are severely underestimating the amount of really bad stuff these things would say if the labs put no effort in here. Plus they have to optimize for some definition of good output regardless.
The term "safety" in the llm context is a little overloaded
Personally, I'm not a fan either - but it's not always obvious to the user when they're effectively poisoning their own context, and that's where these features are useful, still.
>it's not a problem of the language, but the author's unfamiliarity with the tools at their disposal.
If you have to share a codebase with a large group of people with varying skill levels, limiting their ability to screw up can definitely be a feature, which a language can have or lack.
As always, it comes with tradeoffs. Would you rather have the ability to use good, expressive abstractions or remove the group’s ability to write bad ones? It probably depends on your situation and goals.
Unlike the first time, it isn't new and isn't a technological flex. The payoff from the first time was marginal, measured mainly in the children it inspired to pursue STEM. This time, does anybody even care?
I mean, maintaining a base on the moon is definitely a technological flex; getting there, not as much. Still challenging.
Is it worth the risk and money? Not sure, it depends on what our plan with this is. As a way to launch moon-manufactured space probes? Maybe.