It seems unfair to call out LLMs for "spam, scams, propaganda, and astroturfing." These problems are largely the result of platform optimization for engagement and SEO competition for attention. This isn't unique to models; even we, humans, when operating without feedback, generate mostly slop. Curation is performed by the environment and the passage of time, which reveals consequences. LLMs taken in isolation from their environment are just as sloppy as brains in a similar situation.
Therefore, the correct attitude to take regarding LLMs is to create ways for them to receive useful feedback on their outputs. When using a coding agent, have the agent work against tests, and scaffold constraints and feedback around it. AlphaZero, for example, had abundant environmental feedback and achieved amazing (superhuman) results. Other Alpha models (for math, coding, etc.) that operated within validation loops reached olympiad level in specific types of problem-solving. The limitation of LLMs is actually a limitation of their incomplete coupling with the external world.
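To make the coding-agent point concrete, here is a minimal sketch of such a scaffold in Python. `generate_patch` and `apply_patch` are hypothetical stand-ins for an LLM call and a workspace edit, not a real API; the only assumption about the environment is that pytest is the test runner:

    import subprocess

    MAX_ITERATIONS = 5  # feedback budget before a human steps in

    def run_tests() -> tuple[bool, str]:
        """Run the suite and return (passed, combined output)."""
        result = subprocess.run(["pytest", "-x", "--tb=short"],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def agent_loop(task: str) -> bool:
        """Let the agent iterate against the tests' verdict."""
        feedback = ""
        for _ in range(MAX_ITERATIONS):
            patch = generate_patch(task, feedback)  # hypothetical LLM call
            apply_patch(patch)                      # hypothetical edit step
            passed, output = run_tests()
            if passed:
                return True
            feedback = output  # the environment's verdict seeds the next attempt
        return False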
In fact, you don't even need a superintelligent agent to make progress; copying and competition are sufficient. Evolution shows this: it created all life, including us, our culture, and our technology, without a very smart learning algorithm. What it has instead is plenty of feedback. Intelligence is not in the brain or the LLM; it is in the ecosystem, the society of agents, and the world. Intelligence is the result of having to pay the cost of our own execution in order to continue to exist, a strategy for balancing the cost of life.
What I mean by feedback is exploration: executing novel actions, or actions in novel environment configurations, observing the outcomes, adjusting, and iterating. In this way the feedback becomes part of the model, and the model becomes part of the action-feedback process; they co-create each other.
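As a toy illustration of that explore-observe-adjust loop, here is an epsilon-greedy bandit; the arms and their payoff probabilities are invented for the example:

    import random

    ARMS = [0.2, 0.5, 0.8]          # hidden success probabilities (made up)
    estimates = [0.0] * len(ARMS)   # the "model": learned value of each action
    counts = [0] * len(ARMS)

    for step in range(1000):
        # Explore a novel action sometimes; otherwise exploit the current model.
        if random.random() < 0.1:
            arm = random.randrange(len(ARMS))
        else:
            arm = estimates.index(max(estimates))
        # Observe the outcome the environment produces.
        reward = 1.0 if random.random() < ARMS[arm] else 0.0
        # Adjust: the feedback becomes part of the model (incremental mean).
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]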
> It seems unfair to call out LLMs for "spam, scams, propaganda, and astroturfing." These problems are largely the result of platform optimization for engagement and SEO competition for attention.
They didn't create those markets, but those are the markets where LLMs enhance productivity and capability best right now: they need the least supervision of the LLMs' inputs and outputs, and besides, they happen to be otherwise well suited to the kind of work it is.
> This isn't unique to models; even we, humans, when operating without feedback, generate mostly slop.
I don't understand the relevance of this.
> Curation is performed by the environment and the passage of time, which reveals consequences.
I'd say it's revealed by human judgement and eroded by chance, but either way, I still don't get the relevance.
> LLMs taken in isolation from their environment are just as sloppy as brains in a similar situation.
Sure? And clouds are often fluffy. Water is often wet. Relevance?
The rest of this is a description of how we can make LLMs work better, which amounts to more work than is required for LLMs to pay off enormously for the purposes I called out, so... are we even in disagreement? I don't disagree that perhaps this will change, and explicitly bound my original claim ("so far") for that reason.
... are you actually demonstrating my point, on purpose, by responding with LLM slop?
LLMs can generate slop when used without good feedback, or when used to minimize human contribution. But the same LLMs can filter out the dark patterns. They can use search and compare against dozens or hundreds of web pages, as deep-research modes do. Those reports can still contain mistakes, but we can iterate: generate multiple deep reports from different models with different web-search tools, then do a comparative analysis over those. There is no reason we should be consuming the raw web, full of "spam, scams, propaganda, and astroturfing," today.
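A minimal sketch of that iterate-and-compare idea, assuming a hypothetical `deep_research` function standing in for any model's research/search mode (none of these names are a real API):

    def deep_research(model: str, prompt: str) -> str:
        """Hypothetical stand-in for a model's deep-research/search mode."""
        raise NotImplementedError

    def cross_check(question: str, models: list[str]) -> str:
        # Generate independent reports from different models and search tools.
        reports = {m: deep_research(m, question) for m in models}
        digest = "\n\n".join(f"[{m}]\n{r}" for m, r in reports.items())
        # One more comparative pass: flag contradictions between the reports
        # and keep only the claims they corroborate.
        return deep_research(models[0],
                             "Compare these reports, flag contradictions, "
                             "and keep corroborated claims:\n" + digest)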
For a good while I joked that I could easily write a bot that makes more interesting conversation than you. The human slop will drown in AI slop. It looks like we will need to make more of an effort when publishing, if not develop our own personalities.
> It seems unfair to call out LLMs for "spam, scams, propaganda, and astroturfing."
You should hear HN talk about crypto. If the knife were invented today, they'd have a field day calling it the most evil plaything of bandits, etc. Nothing about human nature, of course.