Maybe for the moment it would be better if the AI companies simply presented their chatbots as slightly-steered text generation tools. Then people could use them appropriately.
Yes, there seems to be a little bit of grokking, and the models can be coaxed into approximating step-by-step reasoning. But 95% of the function of these black boxes is text generation. Not fact generation, not knowledge generation. They are more like improv partners than encyclopedias, and everyone in tech knows it.
I don’t know if LLMs misleading people needs a clever answer-entropy solution. It is a very interesting solution that really seems like it would improve things, effectively attaching certainty scores to statements. But what if we just stopped marketing machine-learning text generators as near-AGI, which they are not? Wouldn’t that undo most of the damage, and arguably help us much more?
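For what it's worth, here's a minimal sketch of what "certainty scores" could look like, assuming you somehow had a probability distribution over candidate answers (which is a big assumption; real systems only expose token-level logprobs at best). The idea: normalized Shannon entropy of the distribution, inverted so peaked ("confident") distributions score near 1 and flat ("guessing") ones score near 0.

```python
import math

def certainty(probs):
    """Map a probability distribution over candidate answers to a
    score in [0, 1]: 1 minus normalized Shannon entropy.
    A peaked distribution scores near 1; a uniform one scores 0."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    h_max = math.log2(len(probs))  # entropy of the uniform distribution
    return 1.0 - (h / h_max if h_max > 0 else 0.0)

print(certainty([0.97, 0.01, 0.01, 0.01]))  # peaked: high certainty, ~0.88
print(certainty([0.25, 0.25, 0.25, 0.25]))  # uniform: 0.0
```

Whether a score like this actually tracks factual accuracy, rather than just the model's fluency, is exactly the open question.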
I’m working with an LLM right this moment to build a front end with React and Redux, technologies I have almost no knowledge of. I posed questions and the LLM gave me answers along with JavaScript code, a language I’m also very rusty with. All of the code compiled, and most of it worked as expected. There were errors, some of which I had no idea about. The LLM was able to explain the issues and give me revised code that worked.
All in all it’s been a great experience; it’s like working with a mentor along the way. It must have saved me a great deal of time, given how much of a rookie I am. I do still need to verify the results.
Where did you get the 95% figure? And whether what it does is text generation or fact or knowledge generation is irrelevant. It’s a really valuable tool, way beyond anything else I’ve used.
Over the last six weeks there's been a pronounced uptick in comments, motivated by tiredness of seeing "AI" everywhere, manifesting as a fever dream in which the models aren't useful at all and are swindling the unwashed masses who just haven't used them enough yet to know their true danger.
I've started calling it what it is: lashing out in confusion at why they're not going away, given a prior that there's no point in using them.
I have a feeling there'll be near-religious holdouts in tech for some time to come. We attract a certain personality type, and they tend to be wedded to the idea of things being absolute and correct in a way things never are.
It's also fair to say there's a personality type that becomes fully bought into the newest emerging technologies, insisting that everyone else is either bought into their refusal or "just doesn't get it."
Look, I'm not against LLMs making me super-human (or at least super-me) in terms of productivity. It just isn't there yet, or maybe it won't be. Maybe whatever approach after current LLMs will be.
I think it's just a little funny that you started by accusing people of dismissing others as "unwashed masses", only to conclude that the people who disagree with you are being unreasonable, near-religious, and simply lashing out.
I didn't describe anyone disagreeing with me, nor did I describe the people making these comments as near-religious or simply lashing out, nor did I describe anyone as unreasonable.
I reject simplistic binaries and They-ing altogether; it's incredibly boring and a waste of everyone's time.
An old-fashioned breakdown for your troubles:
> It's also fair to say
Did anyone say it isn't fair?
> there's a personality type that becomes fully bought into the newest emerging technologies
Who are you referring to? Why is this group relevant?
> insisting that everyone else is either bought into their refusal or "just doesn't get it."
Who?
What does insisting mean to you?
What does "bought into refusal" mean? I tried googling, but there are zero results for either 'bought into refusal' or 'bought into their refusal'.
Who are you quoting when you introduce this "just doesn't get it" quote?
> Look, I'm not against LLMs making me super-human (or at least super-me) in terms of productivity.
Who is invoking super humans? Who said you were against it?
> It just isn't there yet, or maybe it won't be.
Given the language you use below, I'm just extremely curious how you'd describe me telling the person I was replying to that their lived experience was incorrect. Would that be accusing them of exaggerating? Dismissing them? Almost like calling them part of an unwashed mass?
> Maybe whatever approach after current LLMs will be.
You're blithely doing a stream of consciousness deconstructing a strawman and now you get to the interesting part? And just left it here? Darn! I was really excited to hear some specifics on this.
> I think it's just a little funny that you started by accusing people of dismissing others as "unwashed masses",
That's quite charged language from the reasonable referee! Accusing, dismissing, funny...my.
> only to conclude that the people who disagree with you are being unreasonable, near-religious, and simply lashing out.
Source? Are you sure I didn't separate the paragraphs on purpose? Paragraph breaks are commonly used to separate ideas and topics. Is it possible I intended to do that? I could claim I did, but it seems you expect me to wait for your explanation for what I'm thinking.
No. I don't think I said you did, either. One might call this a turn of phrase.
>> there's a personality type that becomes fully bought into the newest emerging technologies
> Who? Why is this group relevant?
What do you mean 'who'? Do you want names? It's relevant because it's the opposite, and equally incorrect, mirror image of the technology denier that you describe.
>> Look, I'm not against LLMs making me super-human (or at least super-me) in terms of productivity.
> Who is invoking super humans? Who said you were against it?
... I am? And I didn't say you thought I was against it? I feel like this might be a common issue for you (see paragraph 1.) I'm just saying that I'd like to be able to use LLMs to make myself more productive! Forgive me!
>> It just isn't there yet, or maybe it won't be.
> Strawman
Of what?? I'm simply expressing my own opinion of something, detached from what you think. It's not there yet. That's it.
>> Maybe whatever approach after current LLMs will be.
> Darn! I was really excited to hear some specifics on this.
I don't know what will be after LLMs, I don't recall expressing some belief that I did.
> That's quite charged language from the reasonable referee! Accusing, dismissing, funny...my.
I could use the word 'describing' if you think the word 'accusing' is too painful for your ears. Let me know.
> Source? Are you sure I didn't separate the paragraphs on purpose? Paragraph breaks are commonly used to separate ideas and topics. Is it possible I intended to do that? I could claim I did, but it seems you expect me to wait for your explanation for what I'm thinking.
Could you rephrase this in a different way? The rambling questions are obscuring your point.