
Interesting: "Reviews are billed on token usage and generally average $15–25, scaling with PR size and complexity."


This cost seems wild. For comparison GitHub Copilot Code Review is four cents per review once you're outside of the credits included with your subscription.


Same thoughts.

For comparison, Greptile charges $30 per month for 50 reviews, with $1 per additional review.

At an average of $15–25 per review, this is way more expensive.


Yeah, but Copilot review is useless. The noise it generates easily costs that much in wasted time.


Not sure about that, I find it's generally pretty good at finding niche issues that aren't as easily caught by humans. Especially with newer LLMs it gets even better.


eh works fine for me, much better than I expected.


I don't know how good Claude's reviews are but I have yet to get a worthwhile GitHub Copilot review.


Senior+ engineers easily make $100+ an hour. This is equivalent to 15 minutes of their time max.

I run a PR review via Claude on my own code before I push. It’s exceptionally good. $20 becomes an incredibly easy sell when I can review a PR in 10 minutes instead of an hour.


Average _per review_? Insane costs, that's potentially thousands per developer. Am I missing something?


I haven't used it, so I'm just spitballing, but surely it depends on the quality of the review? If it picks up lots of issues and prevents downtime, then it could work out as worthwhile. What would it cost an engineer with deep knowledge of the codebase to do a similar job? You could spend an hour really digging into a PR, poking around, testing stuff out, etc. I'm guessing most engineers are paid more than $15–25/hr, not to mention the opportunity cost.


Now imagine what it will be when they actually need to make money


At those prices I wonder if it also reviews the design for performance problems or poor decomposition into maintainable units, besides catching the bugs.

Also the examples are weird IMO. Unless it was an edge/corner case, the authentication bug would be caught in even a smoke test. And for the ZFS encryption refactor, I'd expect a statically typed language to catch type errors unless they're casting from `void*` or something. Seems like they picked examples by how important/newsworthy the areas were rather than by the technicality of the finds.


This mostly matches my own cost estimates for the pr-review command I use. But it's pretty sophisticated: six specialized agents, best-practices skills, a CVE database, and a bunch of scripts. To reduce cost, most of the agents use cheap open-source models.
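To illustrate why that setup stays cheap, here's a minimal sketch of routing review agents to different model tiers and estimating per-review cost. All agent names, model names, and per-token prices are illustrative assumptions, not the commenter's actual pipeline:

```python
# Hypothetical sketch: fan a PR diff out to specialized review agents,
# where most agents run on a cheap model and only one uses a frontier model.
from dataclasses import dataclass


@dataclass
class ReviewAgent:
    name: str
    model: str
    price_per_mtok: float  # assumed USD per million input tokens


# Assumed roster: cheap open-source models for most passes,
# a pricier frontier model for the deep logic review.
AGENTS = [
    ReviewAgent("style", "small-oss-model", 0.10),
    ReviewAgent("security-cve", "small-oss-model", 0.10),
    ReviewAgent("logic", "frontier-model", 3.00),
]


def estimate_cost(diff_tokens: int) -> float:
    """Rough input-token cost of running every agent over the same diff."""
    return sum(a.price_per_mtok * diff_tokens / 1_000_000 for a in AGENTS)


# A large 200k-token diff, reviewed by all three agents:
print(f"${estimate_cost(200_000):.2f}")  # → $0.64
```

Under these assumed prices, even a large diff costs well under a dollar in input tokens when only one agent uses an expensive model, which is the cost-reduction idea the comment describes.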


Wait, what? So if I'm a paying Max user, I'd still have to pay more? I don't see the value. I'd rather have a repo skill to do the code review with my existing Claude Max tokens.



