Hacker News | abalashov's comments

I liked the broader article, "Why Everybody is Losing Money on AI", more for the overhead perspective:

https://www.wheresyoured.at/why-everybody-is-losing-money-on...


A question the capital markets should be asking themselves.


> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.

That's just it. You can only use AI usefully for coding* once you've spent years beating your head against code "the hard way". I'm not sure what that looks like for the next cohort, since they have AI on day 1.

* That is, assuming it's nontrivial.


> What's the difference between a messy codebase created by a genAI, and a messy codebase where all the original authors of the code have moved on and aren't available to ask questions?

In my experience, the type of messes created by humans and the type of messes created by genAI are immensely different, and at times require different skill sets to dissect.


Of course there should be. However, those nations should worry about that on behalf of their citizens. No other nation is going to concern itself with whether Americans can live close to their parents.


I like to say that it's not AI -- it's just A.


No, but I think you're wrong. My child grew 3 inches in the last year alone, according to his latest physical.

If I were to adopt your extrapolation methods, he'll soon not only be the tallest human alive, but the tallest structure on the planet.


Maybe. It all depends on whether the AI can actually solve your problem.


Yeah, if the AI isn't shite and has access to tools to solve my problem, I'll take it over some overworked and undermotivated human call center any day.


This is the best and most enlightening take I've heard in a good while.

I have articulated this to friends and colleagues who are on the LLM hype train somewhat differently, in terms of the unwieldiness of accumulated errors and entropy, disproportionate human bottlenecks when you do have to engage with the code but didn't write any of it and don't understand it, etc.

However, your formulation really ties a lot of this together better. Thanks!


I do a lot of work in a rather obscure technology (Kamailio) with an embedded domain-specific scripting language (C-style) that was invented in the early 2000s specifically for that purpose, and can corroborate this.

Although the training data set is not wholly bereft of Kamailio configurations, it's not well-represented, and it would be at least a few orders of magnitude smaller than any mainstream programming language. I've essentially never had it spit out anything faintly useful or complete Kamailio-wise, and LLM guidance on Kamailio issues is at least 50% hallucinations / smoking crack.
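For readers unfamiliar with it, the flavour of this DSL looks something like the sketch below. This is a minimal, illustrative fragment only (module loading, listen directives, and most real-world routing logic are omitted), using common functions from the standard `sl`, `tm`, and `registrar` modules:

```
# Illustrative Kamailio request_route sketch -- not a complete
# or working configuration.
request_route {
    # Handle registrations: store the contact binding.
    if (is_method("REGISTER")) {
        save("location");
        exit;
    }

    # For calls, look up the registered contact of the callee.
    if (is_method("INVITE")) {
        if (!lookup("location")) {
            sl_send_reply("404", "Not Found");
            exit;
        }
    }

    # Relay the request statefully.
    t_relay();
}
```

Superficially C-like, but the semantics (implicit SIP message context, module-exported functions, route blocks) are quite unlike anything mainstream, which is presumably part of why LLMs flounder on it.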

This is irrespective of prompt quality; I've been working with Kamailio since 2006 and have always enjoyed writing, so you can count on me to formulate a prompt that is both comprehensive and intricately specific. Regardless, it's often a GPT-2 level experience, or akin to running some heavily quantised 3bn parameter local Llama that doesn't actually know much of anything specific.

From this, one can conclude that a tremendous amount of reinforcement of the weights is needed before an LLM can produce useful results in anything that isn't quasi-universal.

I do think, from a labour-political perspective, that this will lead to some guarding and fencing to try to prevent one's work-product from functioning as free training for LLMs that the financial classes intend to use to displace you. I've speculated before that this will probably harm the culture of open-source, as there will now be a tension between maximal openness and digital serfdom to the LLM companies. I can easily see myself saying:

I know that our next commercial product releases (based on open-source inputs), which are on-premise for various regulatory and security reasons, will be binary-only; I never minded customers looking through our plain-text scripts before, but I don't want them fed into LLMs for experiments with AI slop.

