I was recently kicked off ChatGPT because I wrote "a*hole" in a context where ChatGPT kept repeating nonsense! I find the ban by OpenAI to be very intrusive. Remember, ChatGPT is a machine! I did not hurt any sentient being with my statement, nor was the chat public. As long as I do not hurt any sentient beings with my thoughts, I can do whatever I want, can't I? After all, as the saying goes, "Thoughts are free." Now, one could argue that the repeated use of swear words, even in private, negatively influences one's behavior. However, there is no repeated use here. I don't run around the flat all day swearing. Anyone who insinuates such a thing, as OpenAI effectively does, is, as I said, being intrusive. I want to be able to use a machine the way I want to! As long as no one else is harmed, of course...
Shortly after becoming an atheist, I decided that one of the worst parts of religion was the notion that there are magic words that can force one to feel certain things. I found that to be the same sort of thinking as saying that a woman's short skirt "made" you attack her.
You’re a fucking adult, you can control your emotions around a little skin or a bad word.
The question is: is it just a word, or is there an emotion underneath? Your last sentence sounds "just" cynical / condescending on its own, but when you add "fucking", it comes across like you're actually angry. And emotional language is the easiest way to make an online discussion go from reasonable, rational, and constructive to a digital shouting match. It's no longer about the subject matter; it's about how the words make someone feel.
Yeah kind of ironic to make a comment about controlling your emotions while cursing at a stranger because you disagreed with their reasonable perspective.
You are assuming that hinkley intended to control their emotions and that cursing wasn't just a rhetorical thing in this instance.
There clearly is a link between words and emotions. But this link - and even more so the link between emotions and actions - is very complex.
Too many fears are based on the assumption of a far more reductionist and mechanistic sort of link, one where no one has any control over anything. That's not realistic, and our legal system contradicts this assumption.
I agree, it's rhetorical. It was meant to be pointed. It's just too ironic in this scenario.
It loses meaning instead of accentuating it, and predictably so. It probably wasn't the best device to get this specific point across, and it certainly left the expected counterargument as low-hanging fruit.
I can describe to you how we would murder someone, and it's down to intent whether we just conspired to commit murder or whether it's just the sort of conversation a forensics investigator would have.
You should feel creeped out if I actually sound like a psychopath rather than a true crime reader.
To wit:
You’re a fucking idiot.
Versus
It’s a fucking word.
Versus
You’re an idiot.
Versus
It’s a word.
“You’re an idiot” is still fighting words with or without the swear. If you automatically assume everyone swearing online is angry then you’re letting magic words affect you.
We think in language, so words can definitely make you feel emotions. You have not transcended that. This is true for the very comment you replied to, which caused you to angrily curse at a stranger.
You don’t think in language though. You consider in language. Otherwise we’d all be dead every time a car changed lanes unexpectedly.
Some people have a voice inside their head that never stops. Mine was that way until I started meditating. I didn't believe that the voice was me thinking, but I couldn't know for sure until I could do things without a constant internal monologue.
There are people who almost never talk to themselves in their heads. They have to talk to other people about their thoughts in order to process them. And one of the first tenets of speed reading is to stop saying the words in your head and just read.
I agree with you completely, but society will never stop being scared of thoughts and feelings.
As an atheist, I have noticed that atheists are only slightly less prone to this paranoia and will happily resort to science and technology to justify and enforce ever-tighter restrictions and surveillance mechanisms to keep control.
Arguably, to the point of religious fervor. Take the AI boom: some people genuinely believe (<- note that key word) that AI becoming self-aware and dominant is inevitable, and that anyone who did not do their best to make that happen will be punished. That's Roko's Basilisk, the digital version of Pascal's Wager, wrapped in supposed rationalism and tech-bro stuff.
No, I definitely saw that. My first job was right into a "fad" that actually took, but there were many after it that didn't, and a mentor had told me about the hype cycle practically before the term had been invented, because he'd already seen it.
The alternative, though, is you say "it depends" so much it's kind of exhausting. And the religious shun you because you "lack passion". But if anything I have too much.
I worry whether I will still be in the industry, though, by the time users realize all the apps they use are boring and derivative, because all AI can spit out is amalgams of what already exists.
This will all turn into Western European cuisine before the arrival of the Spice Trade. Man cannot live by Maillard reaction alone.
ChatGPT has too many users for it to be possible to enforce any kind of rules consistently. I have no opinion on whether OP's story is true or not, but the fact that two ChatGPT users claim to have observed conflicting moderation decisions on OpenAI's part really doesn't invalidate either user's claim.
I've been banned from ChatGPT in the past; it gives you a reason but doesn't point to the specific chat. And once you're banned, you can't look at any of your chats or make a data request.
Wait, what? I insult ChatGPT way worse on a weekly basis (to me it's just a joke, albeit a very immature one). It's news to me that this behavior has any consequences. It never did for me.
The arguments about it not making a difference to other people are fine, but why would you do it in the first place? Doesn't how you behave make a difference to you?
That is one of the reasons why I think X's Grok, while perhaps not state of the art, is an important option to have.
Compared with OpenAI, Anthropic, and Google, it is the only provider that I trust not to erroneously flag harmless content.
It is also the only provider out of those that permits use for legal adult content.
There have been controversies over it, resulting in some people, often of a certain political orientation, calling for a ban or censorship.
What comes to mind is an incident where an unwise adjustment of the system prompt resulted in misalignment: the "Mecha Hitler" incident. The worst of it was patched within hours, and better alignment was achieved within a few days. Harm done? Negligible, in my opinion.
Recently there has been another scandal about nonconsensual explicit images, supposedly even involving minors, but the true extent of the issue, the safety measures in place, and the reaction to reports are unclear. Maybe there, actual harm has occurred.
However, placing blame on the tool for illegal acts that anyone with a half-decent GPU could more easily have committed offline does not seem particularly reasonable to me - especially if safety measures were in place and additional steps have been taken to close workarounds.
I don't trust big tech, who have shown time and time again that they prioritize only their bottom line. They will always permaban your account at the slightest automated indication of risk, and they will not hire adequate support staff.
We have seen that for years with the Google Play Store. You are coerced into paying 30% of your revenue, yet you are treated like a free account with no real support. They are shameless.
They tightened safety measures to prevent editing of images of real people into revealing clothing.
It is factually incorrect that you "can pay to generate CP".
Musk has not described CSAM as "hilarious". In fact he stated that he was not aware of any naked underage images being generated by Grok, and that xAI would fix the bug immediately if such content was discovered.
Earlier statements by xAI also emphasized a zero-tolerance policy: removing content, taking action against accounts, reporting to law enforcement, and cooperating with authorities.
I suspect you just post these slanderous claims anyway, despite knowing that they are incorrect.
Same goes for HN, yet it does not take kindly to certain expressions either.
I suppose the trouble is that machines do not operate without human involvement, so for both HN and ChatGPT there are humans in the loop, and some of those humans are not able to separate strings of text from reality. Silly, sure, but humans are often silly. That is just the nature of the beast.
> Same goes for HN, yet it does not take kindly to certain expressions either.
> I suppose the trouble is that machines do not operate without human involvement
Sure, but HN has at least one human who has been taking care of it since inception and reads many (if not most) of the comments, whereas ChatGPT mostly absorbed a shit-ton of others' IP.
I'm sure the occasional swearing does not bother the human moderators who fine-tune the thing, certainly not more than the violent, explicit images they are forced to watch in order for you to have nicer, smarter answers.
Eh, words are reality. Insults are just changes in air pressure, but they still hurt, and being constantly subjected to negativity and harsh language would be an unpleasant work environment.
Words don't hurt. The intent behind them can. But a machine doesn't carry intent. The trouble is that the irrational humans working as implementation details behind ChatGPT and HN are prone to anthropomorphizing the machine as having intent, which is not reality. Hence why such rules are in place despite being nonsensical.