Hacker News | johnwheeler's comments

Where do I write the check?

Yes, I agree. It seems like the author has only a naïve level of experience with LLMs, because what he's talking about is bread-and-butter stuff as far as I'm concerned.

Indeed. To me, it has long been clear that LLMs do things that, at the very least, are indistinguishable from reasoning. The already classic examples where you make them do world modeling (I put an ice cube into a cup, put the cup in a black box, take it into the kitchen, etc... where is the ice cube now?) invalidate the stochastic parrot argument.

But many people in the humanities have read the stochastic parrot argument, it fits their idea of how they prefer things to be, so they take it as true without questioning much.


My favorite example: 'can <x> cut through <y>?'

You can put just about anything in there for x and y, and it will almost always get it right. Can a pair of scissors cut through a Boeing 747? Can a carrot cut through loose snow? A chainsaw cut through a palm leaf? Nail clippers through a rubber tire?

Because of combinatorics, the space of ways objects can interact is too big to memorize, so it can only answer if it has learned something real about materials and their properties.
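If you want to try the probe yourself, here is a minimal sketch of the combinatorial version, assuming the official OpenAI Python client with an OPENAI_API_KEY set in the environment; the model name and the word lists are just illustrative choices, not anything from the original comment.

    # Sketch: probe an LLM with "can <x> cut through <y>?" over many pairs.
    # Assumes: pip install openai, OPENAI_API_KEY in the environment.
    import itertools

    from openai import OpenAI

    client = OpenAI()

    tools = ["a pair of scissors", "a carrot", "a chainsaw", "nail clippers"]
    targets = ["a Boeing 747", "loose snow", "a palm leaf", "a rubber tire"]

    for tool, target in itertools.product(tools, targets):
        prompt = (
            f"Can {tool} cut through {target}? "
            "Answer yes or no, then give one sentence of reasoning."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat model would do
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"{tool} vs {target}: {resp.choices[0].message.content.strip()}")

With a few dozen tools and targets you get hundreds of pairs, most of which are too obscure to have been memorized verbatim, which is the point of the combinatorics argument above.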


I’ll pass. Altman and co are total crooks.

Exactly. And the OpenAI corporate-speak, acting like they give a shit about our best interests. Give me a break, Sam Altman. How stupid do you think everyone is?

They have proven that they are the most untrustworthy company on the planet.

And this isn't AI fear speaking. This is me raging at Sam Altman for spreading so much fear, uncertainty, and doubt just to get investments. The rest of us have had to suffer for the last two years, worrying about losing our jobs, only to find out the AGI lie is complete bullsh*t.


To me, no company has the customers’ best interests in mind. This whole thing is akin to when Apple was refusing to unlock phones for the FBI. Of course, Apple profits by having people think that they take privacy seriously, and they demonstrate it by protecting users’ privacy. Same thing here; OpenAI needs chats to have some expectation of privacy, especially because a large use case of AI is personal advice on things. So they are fighting to make sure it's true.

> To me, no company has the customers’ best interests in mind.

Lavabit opted to stop operating rather than give the FBI access to client emails.

https://archive.ph/20200915083857/https://www.nytimes.com/20...


Both OpenAI and NYT are bad. I don't know about NYT's privacy policy, because that's not really the industry they're in, but they did admit to fabricating a story that led to a now 2-year-long war, so.

Yes, but I think at least in this instance, OpenAI needs people to think that what they ask ChatGPT is private. They would have no business model if everyone thought that whatever private question they ask could fall into the hands of a media company and be used for anything. Also, at least when I signed up, you had to provide either a highly trusted email address or a phone number, so your identity is definitely attached to whatever questions you ask ChatGPT. They know how high the stakes are for them in this suit.

> Both OpenAI and NYT are bad.

Both -1 and -1,000,000 are negative numbers.

We need to be careful and mindful of our framing. Saying "X is bad" is a drastic oversimplification and not necessarily useful. Pointing at any one company and saying "bad" doesn't move the needle much in terms of figuring out how to steer us towards better outcomes. For that, we have to identify incentives and understand motivations.


Which story is this?


The Story Behind the New York Times October 7 Exposé https://share.google/2HB4zPEGi7x3JTYyj

You got downvoted for this? That many people are doing "Leave Sam Altman alone!"? Kinda wild.

It's weird. It went up and down and up and down. Controversial POV. But thanks for the support. Sam Altman's just too dishonest. It's been said time and time again by so many people: Paul Graham, Ilya Sutskever, everybody's telling everybody he's dishonest. When are we going to wake up and get this guy out of there?

AI should make people start really trying to build their own solopreneurships and start their own companies or band together in small teams and forget about jobs. It's not going to be the same again. But we're at an inflection point where we can make a difference as individuals.

Just because infinity is a hard thing to understand doesn't mean the universe is and has always been infinite.

*Raises hand* Sam Altman hater over here.

Yeah, fuck your boss. Guy's an asshole. He knows how stressful it is for you and either doesn't do anything about it because the guy's nice to him, or because he's threatened by you and knows that having this dickhead around gives him leverage against you. Worst-case scenario: he's just plain incompetent. I guess there is no worst-case scenario. They're all terrible.


It's silly to assume that they have a 90% chance of failure just because startups have a 90% failure rate or whatever.


There's a difference between something and everything, though.

