f311a's comments | Hacker News

My blog is fully static and I have a 50-line CF worker script that sends comments to me which I import directly to markdown of a blog post. There are ways to do comments without embedding.

Would be neat to automate the comment-to-markdown step as a commit or PR, using pull requests as comment moderation.

Would be cooler if it really was ColdFusion. Of course, 50 lines wouldn’t get you very far.

Tangential, but recently I dove down the ColdFusion rabbit hole again.

My first job after dropping out of college was working with Flash and ColdFusion in 2012. Even by that time it was dated, but it was my first real dive into network and server programming so I do look at those days rather fondly.

CFML is one of those things that is simultaneously a brilliant and terrible product. The terrible part is obvious: it's a bloated language that doesn't lend itself terribly well to structure and historically has been very slow (though my understanding is that Lucee actually fixes that somewhat). The "brilliant" parts are less obvious but still cool. For example, the cfquery blocks are really neat, and I say that without any sarcasm. Not only does it make it easy to write SQL directly, but there are nice built-in features to avoid injections with cfqueryparam that are easy to use, and you can simply add an attribute to the cfquery to specify caching. That's actually really cool; I've seen people haphazardly reinvent different SQL caching heuristics and screw them up. The cfquery stuff makes it trivial and it has the advantage of doing it correctly.
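From memory (so the exact details may be off), a cached, parameterized query looked something like this; the datasource, table, and variable names here are invented:

    <!--- Hypothetical example: cfqueryparam handles escaping, one attribute caches the result --->
    <cfquery name="getPosts" datasource="blogDSN" cachedWithin="#createTimeSpan(0, 1, 0, 0)#">
        SELECT id, title, body
        FROM posts
        WHERE author_id = <cfqueryparam value="#url.authorId#" cfsqltype="cf_sql_integer">
    </cfquery>

One attribute and the result set is cached for an hour, with no hand-rolled caching layer to get wrong.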

Things like that are all over the language (though I haven't used it in awhile so I'd have to dig through notes to find specific examples); pages and pages of ugliness mixed with occasional spots of cleverness.

While I don't want to say I "miss" writing it, because I don't, I do have a bit of gratitude for its existence. If I hadn't picked up ColdFusion because of an, umm, "alternatively licensed" version of Dreamweaver when I was a teenager, my career would likely be very very different.


Care to share a link or some more info?

How it works:

* CF worker on a subdomain that handles POST requests. Basically, a JS function that handles incoming requests.

* It stores comments in CF KV and sends a copy to me on Telegram

* All I need to do is copy it to Markdown (can be automated, but I manually approve the comments in case of spam)

* In Markdown, I'm using frontmatter to store arbitrary JSON data

* To avoid automated spam, I have a few tricks: don't expose the submit URL in HTML (insert it via JS) and calculate a simple checksum, so automated software that doesn't execute JS can't post. Such software usually targets WordPress blogs it finds by scraping Google. I get zero spam from it. (A rough sketch of the worker is below.)

Everything, including hosting and workers, costs me zero.

Example: https://rushter.com/blog/zsh-shell/
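For the curious, here is a rough sketch of what the worker looks like. It's simplified and hypothetical rather than my actual code: the KV binding (COMMENTS), the Telegram env vars, the field names, and the checksum are made up for illustration.

    // Cloudflare Worker (module syntax). Names below are illustrative only.
    export default {
      async fetch(request, env) {
        if (request.method !== "POST") {
          return new Response("Not found", { status: 404 });
        }
        const data = await request.json();

        // Bots that don't execute the page's JS never learn the URL or the
        // checksum, so this simple check filters out almost all automated spam.
        if (data.checksum !== simpleChecksum(data.name + data.text)) {
          return new Response("Bad request", { status: 400 });
        }

        const comment = {
          post: data.post,
          name: data.name,
          text: data.text,
          date: new Date().toISOString(),
        };

        // Keep a copy in Workers KV...
        await env.COMMENTS.put(`${data.post}:${Date.now()}`, JSON.stringify(comment));

        // ...and send a copy to Telegram for manual review.
        await fetch(`https://api.telegram.org/bot${env.TG_TOKEN}/sendMessage`, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            chat_id: env.TG_CHAT_ID,
            text: `New comment on ${comment.post} from ${comment.name}:\n${comment.text}`,
          }),
        });

        return new Response("OK");
      },
    };

    // The same trivial checksum the page computes in the browser before POSTing.
    function simpleChecksum(s) {
      let sum = 0;
      for (const ch of s) sum = (sum + ch.charCodeAt(0)) % 65536;
      return sum;
    }

Approved comments get pasted into the post's Markdown frontmatter as JSON and go out with the next push.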


>Everything, including hosting and workers, costs me zero.

Your setup sounds cool. Do you host it on a home lab or something?


Just a free plan on Cloudflare (the "CF" in the comment above also means Cloudflare). Zero maintenance.

I push to GitHub and Cloudflare deploys the static pages to its CDN.


Ah awesome thanks. I've been using Netlify to do the same. CF scares me lol.

Interesting, thanks!

They don't understand codebases.

They are trained on other code, ignore how your codebase is structured, and lack knowledge of it. To give them that understanding, you would need to feed in the whole codebase every time you ask for something, with extensive comments about the style, architecture, and so on. No amount of md files will help with that.

In large codebases, they struggle with code reuse, unless you point the agent to look for specific code.

Finding bugs has nothing to do with understanding the codebase. They find local bugs. If they could understand the whole codebase, we would easily be finding RCEs in popular OSS projects, including browsers.


Why are so many people so obsessed with feeding as many prompts/data as possible to LLMs and generating millions of lines of code?

What are you gonna do with the results that are usually slop?


If the slop passes my tests, then I'm going to use it for precisely the role that motivated the creation of it in the first place. If the slop is functional then I don't care that it's slop.

I've replaced half my desktop environment with this manner of slop, custom made for my idiosyncratic tastes and preferences.


> Wine/Proton have made far more in-roads in being able to run Windows

Yeah, they can even run modern games, which ReactOS can't. It can't even run on modern hardware properly.

It's a nice project, though. Good progress for a hobby project, and it's still going after 30 years!


For ML/AI/Comp sci articles, providing reproducible code is a great option. Basically, PoC or GTFO.

The most annoying ones are those that loosely discuss the methodology but then fail to publish the weights or any real algorithms.

It's like buying a piece of furniture from IKEA, except you just get an Allen key, a hint at what parts to buy, and blurry instructions.


This is so egregious. The value of such papers is basically nothing but they're extremely common.

It worked well for me when people were stealing my articles and pretending they wrote them. One tweet or mention on LinkedIn and the article is gone.

Plagiarism is very different from collaborating on open source projects, but I'm glad that calling them out worked.

How would it work if LLMs provide incorrect reports in the first place? Have a look at the actual HackerOne reports and their comments.

The problem is the complete stupidity of people. They use LLMs to try to convince the author of curl that he is wrong about the report being hallucinated. Instead of generating ten LLM comments and doubling down on their incorrect report, they could use a bit of brain power to actually validate it. It doesn't even require much skill; you just have to test it manually.


Let the reporter duke it out with the project's gatekeeping LLM. If it keeps going on for long enough a human can quickly skim the exchange. It should be immediately obvious if the reporter is making sensible rebuttals or just throwing more slop at the wall.

I think fighting fire with fire is likely the correct answer here.


Imagine the amount of slop PRs if it were open source. They don’t want a taste of their own medicine.

Reading their GitHub issues already is like reading through the diary entries of spurned lovers. I can only imagine the PRs.

It’s not like you needed LLMs for quickjs, which already had known and unpatched problems. It’s a toy project. It would be cool to see exploits for something like curl.

Just look at the code quality produced by these loops. That's all you need to know about it.

It's complete garbage, and since it runs in a loop, the amount of garbage multiplies over time.


I don’t think anyone serious would recommend it for production systems. I respect the Ralph technique as a fascinating learning exercise in understanding LLM context windows and how to squeeze more performance (read: quality) from today’s models.

Even if the ceiling remains low in absolute terms, it’s interesting how much good context engineering raises it.


How is it a “fascinating learning exercise” when the intention is to run the model in a closed loop with zero transparency? Running a black box inside a black box to learn? What signals are you even listening to in order to determine whether your context engineering is good or whether the quality has improved, aside from a brief glimpse at the final product? So essentially every time I want to test a prompt, I waste $100 on Claude and have it build an entire project for me?

I’m all for AI, and it’s evident that the future of AI is more transparency (MLOps, tracing, mech interp, AI safety), not less.


Current transparency is rubbish, but people will continue to put up with it if they're getting decent output quality.

There is the theoretical "how the world should be" and there is the practical "what's working today". Decry the latter and wait around for the former at your peril.

You probably wouldn't use it for anything serious, but I've Ralphed a couple of personal tools: Mac menu bar apps mostly. It works reasonably well so long as you do the prep upfront and prepare a decent spec and plan. No idea of the code quality because I wouldn't know good Swift code from a hole in the head, but the apps work and scratch the itch that motivated them.

I do not understand where this Ralph hype is coming from. Back when Claude 4.0 came out and it started to become actually useful, I already tried something like this. Every time, it was a complete and utter failure.

And this dream of "having Claude implement an entire project from start to finish without intervention" came crashing down with this realization: Coding assistants 100% need human guidance.

