I do hope you comment the same thing on the pro-AI articles from people trying to sell you a product. The internet is now infested with them, and without these articles you might think everybody has collectively lost their mind and believes we will all be replaced in the next 6 months.
I use AI; what I'm tired of is shills and post-apocalyptic prophets.
I use AI. I pay a subscription to Google. I use it for work. I use it for learning. I use it for entertainment.
I am still concerned about how it's going to impact society going forward. The idea of what this is being used for by those with a monopoly on the use of violence is terrifying: https://www.palantir.com/platforms/aip/
Yes, those of us who use AI yet are neither shills nor hypers, and who still have our critical-thinking receptors left in our brains, are tired of both sides exaggerating and hyping/dooming.
People would do much better if they just stopped listening so much and started thinking and doing a bit more. But as a lazy person, I definitely understand why it's hard: it requires effort.
"Look at how I use this cool new technology" tends to be much more interesting to me than "this new technology has changed my job and I refuse to use it because I'm afraid".
Obviously it’s far more nuanced than that. I’d say there are several categories where a reasonable person could have reservations (or not) about LLMs:
Copyright issues (related to training data and inference), openness (OSS, model parameters, training data), sovereignty (geopolitically, individually), privacy, deskilling, manipulation (with or without human intent), AGI doom. I have a list but not in front of me right now.
Yes, and those are interesting topics to discuss. "AI is useless and I refuse to use it and hate you if you do" isn't, yet look at most of the replies here.
> Yes, and those are interesting topics to discuss. "AI is useless and I refuse to use it and hate you if you do" isn't...
Did you read Mr. Bushell's policy [0], which is linked to by TFA? Here's a very relevant pair of sentences from the document:
Whilst I abstain from AI usage, I will continue to work with clients and colleagues who choose to use AI themselves. Where necessary I will integrate AI output given by others on the agreement that I am not held accountable for the combined work.
And from the "Ensloppification" article [1], also linked by TFA:
I’d say [Declan] Chidlow verges towards AI apologism in places but overall writes a rational piece. [2] My key takeaway is to avoid hostility towards individuals†. I don’t believe I’ve ever crossed that line, except the time I attacked you [3] for ruining the web.
† I reserve the right to “punch up” and call individuals like Sam Altman a grifter in clown’s garb.
Based on this information, it doesn't seem that Mr. Bushell will hate anyone for using "AI" tools... unless they're CEO pushers.
Or are you talking in generalities? If you are, then I find the unending stream of hype articles from folks using this quarter's hottest tool to be extremely uninteresting. It's important for folks who object to the LLM hype train to publish and publicize articles as a counterpoint to the prevailing discussion.
As an aside, the LLM hype reminds me of the hype for Kubernetes (which I was personally enmeshed in for a great many years), as well as the Metaverse and the various flavors of blockchain hype (which I was merely a bystander for).
That's a very thorough takedown of something the guy you're replying to never said. The end of their comment was "yet look at most of the replies here".
> this new technology has changed my job and I refuse to use it because I'm afraid
You're confusing fear with disgust. Nobody is afraid of your slop, we're disgusted by it. You're making a huge sloppy mess everywhere you go and then leaving it for the rest of us to clean up, all while acting like we should be thankful for your contribution.
> and "I am not an expert on that piece of the system" no longer is a reasonable position
Gosh, that sounds horrifying. I am not an expert on that piece of the system; no, I do not want to take responsibility for whatever the LLMs have produced for that piece of the system. I am not an expert and cannot verify it.
Or he perfectly understands what they meant but chose to manufacture outrage. "Don't attribute to malice what can be explained by stupidity" has not aged well in 2026.
> LLMs produce results on par with what I would expect out of a solid junior developer
This is a common take, but it hasn't been my experience. LLMs produce results that vary from expert-level all the way down to slightly better than Markov chains. The average result might be on par with a junior developer, and the worst case doesn't happen that often, but the fact that it happens from time to time makes them completely unreliable for a lot of tasks.
Junior developers are much more consistent. Sure, you will find the occasional developer who would delete the test file rather than fix the tests, but either they will learn their lesson after seeing your WTH face or you can fire them. You can't do that with LLMs.
I think any further discussion about quality just needs to have the following metadata:
- Language
- Total LOC
- Subject matter expertise required
- Total dependency chain
- Subjective score (audited randomly)
And we can start doing some analysis. Otherwise we're pissing into ten kinds of winds.
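A rough sketch of what one such record could look like, purely illustrative (the field names and types are my own invention, not a proposal from anyone in this thread):

    from dataclasses import dataclass

    # Illustrative only: one record per LLM coding session, mirroring the
    # metadata listed above, so results can actually be compared.
    @dataclass
    class QualityReport:
        language: str             # e.g. "rust", "html/css"
        total_loc: int            # lines of code generated or touched
        expertise_required: str   # e.g. "none", "domain expert"
        dependency_chain: int     # transitive dependencies involved
        subjective_score: float   # 0-10, spot-checked by random audits

    reports: list[QualityReport] = []   # gather enough of these before arguing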
My own subjective experience: it's earth-shattering at web apps in HTML and CSS (because I'm terrible and slow at those), annoyingly good but usually a bit wrong at planning and optimization in Rust, and horribly lost at systems design or debugging a reasonably large Rust system.
I agree that these discussions (this whole HN thread, tbh) seriously lack the concrete examples needed to be more than holy wars 3.0.
Except on one point: junior developers can learn from their egregious mistakes; LLMs can't, no matter how strongly worded their system prompt is.
In a functional work environment, you build trust with your coworkers little by little. The pale equivalent with LLMs is improving system prompts and writing more and more AI directives that might or might not be followed.
This seems to be one of the huge weaknesses of current LLMs: Despite the words "intelligence" and "machine learning" we throw around, they aren't really able to learn and improve their skills without someone changing the model. So, they repeat the same mistakes and invent new mistakes by random chance.
If I were tutoring a junior developer and he accidentally deleted the whole source tree or did something equally egregious, that would be a milestone learning point in his career, and he would never ever do it again. But if the LLM does it accidentally, it will be apologetic, yet after the next context-window clear it has the same chances of doing it again.
> Except on one point: junior developers can learn from their egregious mistakes; LLMs can't, no matter how strongly worded their system prompt is.
I think if you set an LLM off to do something, it makes an "egregious mistake" in the implementation, you adjust the system prompt to explicitly guard against that (or steer it toward a different implementation), you restart from scratch, and it still makes the exact same "egregious mistake", then you need to try a different model/tool than the one you've been using.
It's common with smaller models, or bigger models that are heavily quantized, that they aren't great at following system/developer prompts, but that really shouldn't happen with the available SOTA models; I haven't had something ignored like that in years.
And honestly, this is precisely why I don't fear unemployment, but I do fear less employment overall. I can learn, get better, and use LLMs as a tool, so there's still a "me" there steering. Eventually this might not be the case. But if automating things has taught me anything, it's that removing the person usually carries such a long tail of costs that it's cheaper to keep someone in the loop.
But is this like steel production or piloting (a few highly trained experts stay in the loop), or more like warehouse work (lots of automation removed most of the skilled work, like driving or inventory management)?
I'm trying to understand why this comment got downvoted. My best guess is that "if you're in the loop, something is wrong" is being interpreted as saying there should be no human involvement at all.
The loop here, imo, refers to the feedback loop. And it's true that ideally there should be no human involvement there. A tight feedback loop is as important for LLMs as it is for humans. The more automated you make it, the better.
Yes, maybe I goofed on the phrasing. If you're in the feedback loop, something is wrong. Obviously a human should be "in the loop" in the sense that they're aware of and reviewing what the agent is doing.
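To make "feedback loop" concrete, here is a rough sketch of the kind of automation I mean; run_agent is a stand-in for whatever agent harness you actually use, and the cargo test command is just an example test runner:

    import subprocess

    def run_agent(prompt: str) -> None:
        # Stand-in for invoking whatever coding agent you use with `prompt`.
        ...

    def run_tests() -> tuple[bool, str]:
        # Run the project's test suite and return (passed, combined output).
        proc = subprocess.run(["cargo", "test"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    task = "implement the change described in the ticket"
    for _ in range(5):            # bound the retries so it can't spin forever
        run_agent(task)
        passed, output = run_tests()
        if passed:
            break                 # a human still reviews the final diff
        task = task + "\n\nThe tests failed with:\n" + output + "\nFix the failures."

The agent gets the mechanical feedback (test failures) automatically; the human stays in the loop only for review at the end.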
Maybe documentation meant for other LLMs to ingest. Their documentation is like their code: it might work, but I don't want to be the one who has to read it.
Although of course, if you don't vibe-document but instead just use them as a tool, with significant human input, then yes, go ahead.