
Let me give a concrete example. I had a tool I built ten years ago on Rails 5.2. It's decent, mildly complex for a 1-man project, and I wanted to refresh it. Current Rails is 8. I've done upgrades before and it's...rough going more than one version up. It's _such_ a pain to get it right.

I pointed Claude Code at it, and a few hours later, it had done all of the hard work.

I babysat it, but I was doing other things while it worked. I didn't verify all the code changes (although I did skim the resultant PR, especially for security concerns), but it worked. It rewrote my extensive hand-rolled CoffeeScript into modern JavaScript, which was also nice; it did it perfectly. The tests passed, and it even uncovered some issues that I had it fix afterwards. (Places where my security settings weren't as good as they should have been, or edge cases I hadn't thought of ten years ago.)

Now could I have done this? Yes, of course. I've done it before with other projects. But it *SUCKS* to do manually. Some folks suggest that you should only use these tools for tasks you COULD do, but would be annoyed to do. I kind of like that metric, but I bet my bar for annoyance will go down over time.

My experience with these systems is that they aren't significantly faster, ultimately, but I hate the sucky parts of my job VASTLY less. And there are a lot of sucky parts to even the code-creation side of programming. I *love* my career and have been doing it for 36 years, but like anything that you're very experienced in, you know the parts that suck.

Like some others, it helps that my most recent role was Staff Software Engineer, and so I was delegating and looking over the results of other folks' work more than hand-rolling code. So the 'suggest and review' pattern is one that I'm very comfortable with, along with clearly separated, small-scale plan-and-execute steps.

Ultimately I find these tools reduce cognitive load, which makes me happier when I'm building systems, so I don't care as much if I'm strictly faster. If at the end of the day I made progress and am not exhausted, that's a win. And the LLM coding tools deliver that for me, at least.

One of the things I've also had to come to terms with _in large companies_ is that the code is __never__ high quality. If you drill into almost any part of a huge codebase, you're going to start questioning your sanity (obligatory 'Programming Sucks' reference). Whether it's a single complex 750-line C++ function at the heart of a billion-dollar payment system, or 2,000 lines in a single authentication function in a major CRM tool, or a microservice with complex deployment rules that just exists to unwrap a JWT, or 13 not-quite-identical date-time-picker libraries in one codebase, the code in any major system is not universally high quality. But it works. And there are always *very good reasons* why it was built that way. Those reasons are the forces that were acting on the development team when it was built, and you don't usually know them, and you mustn't be a jerk about it. Many folks new to a team don't get that, and create a lot of friction, only to learn Chesterton's Fence all over again.

Coming to terms with this over the course of my career has also made coming to terms with the output of LLMs being functional, but not high quality, easier. I'm sure some folks will call this 'accepting mediocrity' and that's okay. I'd rather ship working code. (_And to be clear, this is excepting security vulnerabilities and things that will lose data. You always review for those kinds of errors, but even for those, reviews are made somewhat easier with LLMs._)

N.b. I pay for Claude Code, but I regularly test local coding models on my ML server in my homelab. The local models and tooling are getting surprisingly good...but not there yet.


I'm working on a tool to auto-label emails in Gmail (first) based on what you've labeled in the past.

It pulls down up to 400 emails for each custom label and creates a custom model just for you that will label new incoming email.
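
For a sense of what that per-user training step could look like, here's a rough sketch using Hugging Face transformers, assuming a BERT-style classifier. The model name, dataset layout, output path, and hyperparameters below are my illustrative placeholders, not the tool's actual values:

    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    def train_user_model(examples, num_labels):
        # `examples` is a list of (email_text, label_id) pairs, capped
        # upstream at ~400 per label by the fetch step.
        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        ds = Dataset.from_dict({
            "text": [text for text, _ in examples],
            "label": [label for _, label in examples],
        }).map(lambda batch: tokenizer(batch["text"], truncation=True,
                                       padding="max_length"), batched=True)
        model = AutoModelForSequenceClassification.from_pretrained(
            "bert-base-uncased", num_labels=num_labels)
        Trainer(
            model=model,
            args=TrainingArguments(output_dir="models/user-123",
                                   num_train_epochs=3),
            train_dataset=ds,
        ).train()
        return model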

For emails that are likely, but not certain, to belong under a particular label, I use a 'Proposed/{label}' approach, which lets you just archive them in Gmail; it will detect that they've been archived with the proposed label still attached and move them to the correct label. (Essentially using the archive action as an acceptance criterion.) Similarly, I use re-labeling by the user as a negative signal, and include that data as a counter-example.
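
In case the flow is hard to picture, here's a minimal sketch of that accept/reject decision, operating on a message's set of Gmail label names. The function name and the INBOX-based archive check are my assumptions about how it could be wired up, not the tool's actual code:

    PROPOSED_PREFIX = "Proposed/"

    def resolve_proposal(label_names):
        """Return (final_label, is_negative_example) for one message.

        - Archived (INBOX removed) while still carrying Proposed/X:
          accept the proposal as X.
        - Re-labeled by the user to some other label Y: Y wins, and the
          proposal is logged as a counter-example for retraining.
        - Otherwise: the proposal is still pending.
        """
        proposed = next((name[len(PROPOSED_PREFIX):] for name in label_names
                         if name.startswith(PROPOSED_PREFIX)), None)
        if proposed is None:
            return None, False

        user_labels = {name for name in label_names
                       if not name.startswith(PROPOSED_PREFIX)
                       and name not in ("INBOX", "UNREAD")}
        if user_labels:                     # user chose a different label
            return user_labels.pop(), True  # negative signal for Proposed/X
        if "INBOX" not in label_names:      # archived without re-labeling
            return proposed, False          # archive acts as acceptance
        return None, False                  # still pending

    # e.g. resolve_proposal({"Proposed/Receipts"}) -> ("Receipts", False)
    #      resolve_proposal({"Proposed/Receipts", "Travel", "INBOX"})
    #                                            -> ("Travel", True)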

It's working well for my own accounts, and the back-end is pretty legendary, but Google charges a hefty fee for the required security audit before it can be turned into a real product.

It always frustrated me that Google won't use their ML systems to label emails for me based on what I've done before. So I scratched that itch.

I'm using very straightforward BERT models right now, but I'm exploring using something a little more intelligent. I'm also exploring a multi-stage process, because a lot of emails can be categorized using much simpler techniques.
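
Hypothetically, the multi-stage idea might look like this: cheap deterministic rules first, with the BERT model as a fallback, and low-confidence predictions routed through the Proposed/ mechanism described earlier. The rule table, confidence threshold, and model path are all made-up placeholders:

    from transformers import pipeline

    # Stage 1: simple techniques -- sender-based rules cover the easy bulk.
    RULES = {
        "noreply@github.com": "GitHub",
        "billing@": "Receipts",
    }

    # Stage 2: the per-user fine-tuned classifier (path is hypothetical).
    classify = pipeline("text-classification", model="models/user-123")

    def label_email(sender, subject, body):
        for pattern, label in RULES.items():
            if pattern in sender:
                return label
        prediction = classify(subject + "\n" + body[:1000])[0]
        if prediction["score"] < 0.9:       # uncertain: let the user confirm
            return "Proposed/" + prediction["label"]
        return prediction["label"]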

It's a great Machine Learning project, with a back-end that really runs spectacularly on Temporal and Kubernetes, and it's useful to me, so...wins all around.
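
For the curious, a bare-bones version of that back-end loop with Temporal's Python SDK might look like the following. The workflow/activity names and the polling interval are assumptions; the real pipeline presumably does much more:

    import asyncio
    from datetime import timedelta
    from temporalio import activity, workflow

    @activity.defn
    async def label_new_emails(account_id: str) -> int:
        # Fetch unseen messages, run the classifier, apply labels in
        # Gmail; return how many were labeled. (Body elided in this sketch.)
        return 0

    @workflow.defn
    class LabelSweep:
        @workflow.run
        async def run(self, account_id: str) -> None:
            while True:
                await workflow.execute_activity(
                    label_new_emails,
                    account_id,
                    start_to_close_timeout=timedelta(minutes=5),
                )
                # Inside a workflow this sleep is a durable timer, so the
                # sweep survives worker restarts.
                await asyncio.sleep(600)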

I do wish I could make it a product, though.


Do you have this up in a repo?


I agree with the author who said that is ahistorical...at least from my perspective, and that of the people I grew up with. I grew up with computers in the 70's and 80's, and while you may be thinking of centralized computing (minicomputers and mainframes), the personal computing revolution was widely distributed, not centralized in academia. BBSes, swap meets, user groups, even the corner Radio Shack were where 'computing' was vibrant and active. (And the magazines...SO many 'zines!)

We may be talking past each other, but my experience of computing in the 70's and 80's was definitely not academic.


That didn’t start to become common until the early/mid 80’s.

Did it exist a little? Of course. But it was dwarfed by the other stuff going on. I suspect your (and a lot of other HN) experience is going to bias toward the hobbyist side though, as does mine. I only found out about the much larger stuff going on at the same time much later.

Almost all the early networking stuff (UUCP, pre-Internet networks like ARPANET, early Usenet, Gopher, even HTML and the WWW, etc.) came out of academic institutions or related ones.

Often with military grants/contracts. Sometimes with purely commercial contracts, but even those were almost always for some Gov't project. The amount of work on basics like sorting algorithms that grew out of gov't research is mind-boggling, for instance.

There is a lot of well documented history on this.

Then PCs and halfway decent modems became available (2400 baud+), and things changed very rapidly.

Mid 80's, BBSes started sprouting like weeds. There were a few before then, but the truly hobbyist ones were very niche.

Then even more so with commercial services like Prodigy, then AOL, then actual ISPs, etc.


I think the compromise position here is to concentrate on the 1980s, and acknowledge that there was a lot of networking tech going on in academia in the 1970s.

However, in context, what I was trying to convey was that the personal computing revolution took place outside of academia. Generally, that lineage started in the early 1970s, with the homebrew movement, and took off with the Apple II in the United States in 1977. This is also well-documented, but a different branch, and definitely more concerned with the idea of computers as providing autonomy.

