Hacker News | kissgyorgy's comments

There is a global cache for all installed packages in the user home cache dir.
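
With uv, for example, you can print where it lives (the exact path varies by platform):

    # e.g. ~/.cache/uv on Linux
    uv cache dir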


Astral's tools are so fast in general that when you first try them out, you double-check what went wrong, because you're sure nothing happened.

Same with uv. They do very nice tricks, like sending Range requests to download only the metadata part of the ZIP files from PyPI, resolving everything in memory, and only after that downloading the packages. No other package manager does this kind of crazy optimization.
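
The core of the trick is small (a minimal TypeScript sketch of the idea, not uv's actual code; the URL would be a real wheel file):

    // Wheels are ZIP files, and ZIPs keep their central directory at the
    // END of the file, so a Range request for the last few KiB is enough
    // to locate the METADATA entry without downloading the whole package.
    async function fetchWheelTail(wheelUrl: string): Promise<Uint8Array> {
      const res = await fetch(wheelUrl, {
        headers: { Range: "bytes=-65536" }, // only the final 64 KiB
      });
      if (res.status !== 206) throw new Error("server ignored the Range header");
      return new Uint8Array(await res.arrayBuffer());
    }
    // From the tail, parse the end-of-central-directory record, find the
    // offset of the METADATA entry, and issue one more Range request for it.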


> how soon before we start designing codebases for LLM agents to navigate more cleanly?

It's already happening. Armin Ronacher started writing more Go code instead of Python because LLMs understand it better. My coworker switched to writing a desktop app in Rust, because agents can navigate it better thanks to the better tooling and type system.

People are already thinking about how to write documentation for AI instead of for other people, etc.


The difference is huge, not even close, both in quality and usage.

Claude Code is fully agentic, meaning you give it a task and it implements everything, producing surprisingly good, working code. It can test, commit, run commands, log in to remote systems, and debug anything.

It doesn't optimise for token usage, which Cursor heavily does; that's why it can produce higher-quality code on the first shot (the downside is that the cost is very high).

Cursor's agent mode is very much in its infancy, just catching up. Cursor is essentially a tool for editing files, but Claude Code is like a junior developer.


This does Cursor a disservice by not mentioning its deep integration.

Cursor will suggest and complete code for you inline. You just tab-complete your way to a written function. It's mad.

Claude Code doesn't do this.

Cursor also has much better awareness of TypeScript. It'll fix errors as they occur, and you can right-click an issue and have it fixed.

Contrast with CC where I've had to specify in CLAUDE.md to "NEVER EVER leave me with TS errors", and to do this it runs a CLI check using its integration, taking way longer to do the same thing.
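
For reference, the kind of CLAUDE.md rule I mean (wording is mine, adapt to your setup):

    # CLAUDE.md (excerpt)
    - NEVER EVER leave me with TS errors.
    - After every change, run `npx tsc --noEmit` and fix everything it
      reports before considering the task done.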


Fwiw

I noticed that CC’s generated Go code nowadays is very solid. No hallucinations recently that I can remember or that struck me. I do see YouTube videos of people working with JS/TS still struggling with this, which is odd, since there is way more training material for the latter. Perhaps the simplicity of Go shines here.

CC might generate Go code for which there are already library functions present. So thorough code reviews are a necessity.


Modern idiomatic JavaScript and TypeScript encourage "clever" code. The latter also has a very complicated type system, which, again, is frequently used, especially in .d.ts files for pure JS libraries because JS devs love tricks like functions doing different things based on number and type of arguments. So models learn all that from the training set, but then often can't deal with the complexity they themselves introduce.
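
A typical example of the pattern in a .d.ts file (a hypothetical function, for illustration):

    // One function name, different behavior depending on the number and
    // type of arguments: beloved by JS devs, hard on models.
    declare function query(selector: string): Element | null;
    declare function query(selector: string, all: true): Element[];
    declare function query(root: Element, selector: string): Element | null;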

Much as I dislike Go, it is indeed probably closer to the ideal language for the LLM. But I suspect that we need to dial it down even further, e.g. no type inference whatsoever (so no := etc). In fact I wonder if forcing the model to spell out the type of every subexpression as a type assertion might be beneficial due to the way LLMs work, for the same reason why prompting for explicit chain-of-thought improves outputs even with models not specifically trained to produce CoT. In a similar vein, it could require fully qualified names for all library functions etc. But it also needs to have fewer footguns, which Go has aplenty: it's possible to ignore error returns, concurrency is unsafe, etc. I suspect message passing a la Erlang might be the best bet there, but this is just a gut feel.

Of course, the problem with any hypothetical new PL optimized for LLMs is that there's no training data for it. To some extent this can be mitigated by mechanically converting existing code - e.g. mandatory fully qualified names and explicit type assertions for subexpressions could be easily bolted onto any existing statically typed language.
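
To make the mechanical conversion concrete, here is what it might look like in TypeScript (a hypothetical transformation, not an existing tool):

    interface Item { price: number; }
    declare const items: Item[];

    // Before: idiomatic, everything inferred.
    const total = items.reduce((acc, i) => acc + i.price, 0);

    // After: the hypothetical "LLM-friendly" form, with every binding
    // and subexpression typed explicitly.
    const totalExplicit: number = items.reduce(
      (acc: number, i: Item): number => acc + i.price,
      0,
    );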


The CC VS Code plugin can also fetch the errors and warnings reported to VS Code by other plugins and language servers, making the additional compile step obsolete.


That's the biggest reason the IDE plugin exists: so that Claude Code can get access to the LSP information.


Right. Which it can. But to suggest that this makes it Cursor-like is wildly misrepresentative.


This is precisely what the CC extension does, no? At least that’s how the extension behaves in JetBrains IDEs.


Nope. It allows the CLI to read and parse your files. It absolutely does not give you Cursor-like interactivity.

If I’m wrong I’d be overjoyed! But I have it installed and have seen no hint of this.


It doesn't give you a Cursor-like experience, but it gives Claude Code the LSP info, which means it will make fewer mistakes, especially with TypeScript.


I started a new project to test out CC and constantly find I have to ask it to fix TS errors… it's nice I don't have to tell it what error it is (i.e. "fix the TS errors in file.tsx"), but I'm surprised it doesn't have a "check TS" step automatically (I even added something to CLAUDE.md, which seems to work sometimes, but not always). It's especially bad when working with recently updated libraries. It keeps suggesting things that don't exist anymore even though TS clearly knows it's wrong.

Otherwise CC has been stellar, and I love that it's a CLI + optional VS Code extension.


I’ve used Cursor a good deal and also CC. The CC JetBrains extension replaces my code in the IDE, shows me a preview and allows me to confirm, decline, etc. Am I missing something super specific about Cursor’s behavior? It doesn’t seem that practically different to me.


Copilot in VS Code is so bad about TypeScript errors as well.


You mentioned that Claude Code is fully agentic.

I am using the Cursor agent mode, which can run in auto mode with, let's say, 50 consecutive tool calls, along with editing and other tasks. It can operate autonomously for 30 minutes and complete a given task. I haven't tried Claude Code yet, but I'm curious—what exactly does Claude Code do differently compared to the Cursor agent?

Is the improvement in diff quality solely because Cursor limits the context size, or are there other factors involved?


I'd suggest just giving it a shot and noticing the difference; it's night and day.

I couldn't get cursor agent to do useful stuff for me - might be because I don't do TS or Python - and Claude Code was a big productivity boost almost from day one. You just tell it to do stuff, and it just... does it. At like the level of a college student.


I'm writing TS and I was not very happy with Cursor - I expected more coming from using Cline + Sonnet in VS Code. I tried the Composer, or whatever they call it, and the results were mediocre. After a few hours of struggling I gave up and returned to Cline. Now with Claude Code I got much more value right from the start. I don't know, maybe I was "holding it wrong".


Cursor does a lot of trickery to reduce context size, since they ultimately have to pay per token even if you pay them a flat fee. Cline, on the other hand, is quite profligate with context, but the results are correspondingly better, especially when paired with Gemini due to its large max context.


Not sure what you mean; Cursor has agents that run in feedback cycles, checking e.g. syntax errors before continuing, reflecting, and working for minutes if need be. It can execute commands in your terminal and check any file it wants. What can CC do that Cursor can't, at least in theory?


I've had Claude Code run for many hours, and also let it manage its own sub-agents. Productively.

Coming back to an implementation that has good test coverage, functions exactly as specified, and is basically production-ready is achievable through planning/specs.

Maybe Cursor can do this now as well, but it was just so far behind last time I tried it.


> but Claude Code is like a junior developer.

This has been exactly my experience. I guess one slightly interesting thing is that my “junior developer” here will get better with time, but not because of me.


Do you have details on what "optimise for token usage" looks like in Cursor? Or is your point more about how Cursor manages the context window?


Cursor does all that stuff too perfectly fine.


This was already installed when you ran Claude Code in a VSCode terminal; I guess the difference is that now it's explicitly listed on the VSCode Marketplace.


From the extension's page:

Features:

- Auto-installation: When you launch Claude Code from within VSCode’s terminal, it automatically detects and installs the extension

- Selection context: Selected text in the editor is automatically added to Claude’s context

- Diff viewing: Code changes can be displayed directly in VSCode’s diff viewer instead of the terminal

- Keyboard shortcuts: Support for shortcuts like Alt+Cmd+K to push selected code into Claude’s prompt

- Tab awareness: Claude can see which files you have open in the editor

- Configuration: Set diff tool to auto in /config to enable IDE integration features


It was slightly buggy; it uninstalled itself sometimes. I hope this will be better now with the official extension.


So I won't get anything more than the file compare that appears when Claude in the terminal asks to modify a file?


You can select lines, which will be added to the context (you can't do that from the console), and it can show the edited files in the VSCode editor, not just in the terminal.


The extension says "Tab awareness: Claude can see which files you have open in the editor". I don't know how to activate this; it would help me to not have to cd in the terminal each time.


OK, so I tested it by cd-ing into a directory, opening a file from another directory, creating an empty function, selecting the function in the editor, and asking Claude to just "fill the function". It knew which text was selected in which file and filled the function. This will gain me some time.


The apps are so simple, so clear, no fuss, no ads, no nothing. Just the functionality you need from every app. Excellent work!

This is the only project I immediately "donated" to, by buying the Thank You app.


Me too, but I hate having the useless Thank You app installed


You can donate, remove the thank you app and use the f-droid versions.


My phone came w/ a folder for apps from my service provider and I've been adding to it any apps which I don't interact with regularly (since I've been able to keep my apps down to a single screen so I don't have to scroll).


You can uninstall them via adb. For example, there are various "unbloat your Samsung device" GitHub pages which give you a list of apps you can safely remove.
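
For example (a typical debloat command; the package name is just an illustration):

    # removes the app for the current user only, no root needed
    adb shell pm uninstall --user 0 com.samsung.android.bixby.agent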


Instead of needing ADB, you can also go into Settings > Apps, turn on ‘show system apps’, and disable the ones you don’t want from there.


Yep, I had a crap folder for apps too. Until I switched to using T-UI as my launcher. Now I can only ever launch the apps I remember in my head since I need to type it out.


Probably that's why it's so good.


I also noticed Claude likes writing useless redundant comments like this A LOT.


IMO, this is much better than the status quo. Most programmers are terrible about writing clean code with good comments. I would much prefer this style over unreadable mess (especially if it’s a language/framework I’m not comfortable with).

But of course, it’s not an either-or. Ideally, I agree LLMs would provide slightly fewer comments.


It can use any command line tool very well. I just told it "Look up the status of the created systemd service". It ssh-ed to the machine, ran "systemctl status", read the output, and fixed issues based on that! That was totally unexpected.


I hope this question doesn't sound snarky, it's a legitimate concern that I want to address for myself: how do you ensure that once it ssh's to the machine, it does not execute potentially damaging commands?


Claude Code asks you for permission for every command. It also gives you the option of marking commands as safe, so next time it can use them without asking.
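
The commands you mark as safe end up in the project's settings file; roughly like this (a sketch from memory, check the docs for the exact schema):

    {
      "permissions": {
        "allow": [
          "Bash(systemctl status:*)",
          "Bash(git log:*)"
        ]
      }
    }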


So these agents that people are so excited about spawning in parallel stop and ask you before executing each command they choose to execute? What kind of life is that. I'd rather do something myself than tell 5 AI agents what I want and then keep approving each command they are going to run.

I'm not saying it would be better if they ran commands without my approval. This whole thing just doesn't seem as exciting as other people make it out to be. Maybe I am missing something.

It can literally be a single command to ssh into that machine and check if the systemd service is running. If it is in your history, you'd use ctrl+r to look it up anyway. It sounds so much worse asking some AI agent to look up the status of that service we deployed earlier, and then approving its commands on top of that.


I think it's something you have to try in order to understand.

Running commands one by one and getting permission may sound tedious. But for me, it maps closely to what I do as a developer: check out a repository, read its documentation, look at the code, create a branch, make a set of changes, write a test, test, iterate, check in.

Each of those steps is done with LLM superpowers: the right git commands, rapid review of codebase and documentation, language specific code changes, good test methodology, etc.

And if any of those steps go off the rails, you can provide guidance or revert (if you are careful).

It isn't perfect by any means. CC needs guidance. But it is, for me, so much better than auto-complete style systems that try to guess what I am going to code. Frankly, that really annoys me, especially once you've seen a different model of interaction.


Sure, if you already have the knowledge and can do it faster than the AI, you can do it yourself.

But a beginner in system administration can also do it fast.


I do not think that is a good thing in the long run. More people in fields they know absolutely nothing about? That does not sound like a good thing to me. I am going to become a chemical engineer (something I know absolutely nothing about) or some shit and have an LLM with me doing my job for me. Sounds good I guess?


[flagged]


He has a point. It's quite depressing that work where you had to think and act in order to solve hard problems has now become almost the same as scanning barcodes in a supermarket, and it's outright sad that most people are happy about it and snarky towards anyone who points out the hardships that come with it.

Philosophically speaking (not practically), it's like living through the industrial revolution again. It's lit! But it's also terrifying and saddening.

Personally it makes me want to savor each day as the world would never be the same again.


If these tools make your job as easy as scanning barcodes then you really weren't working on anything interesting anyways.


Thank you for rubbing extra salt into the wound.

I mean most software engineering jobs are not especially exciting. I have done web dev for smaller companies that never had more than a few hundred concurrent users. It is boring CRUD apps all day every day.

Still at least you could have a bit of fun with the technical challenges. Now with AI it becomes completely mind numbing.


I'm with you on this. I'm pouring one out for human skill, because I think our ability to do a lot of creative work (coding included) is on the brink of extinction. But I definitely think these tools are the future.


The interesting part of my job is unchanged. Thinking through the design, UX, architecture, code structure, etc were always where I found the fun / challenge. Typing was never the part I was overly fond of.


I also made a component library which is more generic and can be used with ALL Python web frameworks: https://compone.kissgyorgy.me/

Components can be mixed and matched, and Bootstrap v5 components are already in the works.

I already have a Storybook-like tool which can render and showcase such components.

