Hacker News | dwattttt's comments

Locks are there to keep honest people honest.

To expand on the saying: they're not there to be insurmountable, just hard enough that doing things the right way is the easier option.


And often they’re there so no one can plausibly say they didn’t know what they were doing or stumbled into it accidentally. You can’t “accidentally” go through a door with a padlock on it.

I’d guess it’s something similar with this dongle. You can’t “accidentally” run the software without the dongle.


I wonder if you could replace the git commands exposed to an agent.

A 'commit' 'happens' when you leave a change in jj, i.e. by moving to a new or existing change; that's when you'd run pre-commit hooks.
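
One way to picture "replacing the git commands": a tiny shim ahead of the real binary on the agent's PATH. This is only a sketch; the real git location and using the `pre-commit` runner for the checks are assumptions, not anything jj or git provides.

  #!/usr/bin/env python3
  # Hypothetical `git` shim placed ahead of the real binary on the agent's PATH.
  # It runs checks before letting `commit` through; everything else passes straight on.
  import os
  import subprocess
  import sys

  REAL_GIT = "/usr/bin/git"  # assumption: wherever the real git actually lives

  args = sys.argv[1:]
  if args and args[0] == "commit":
      # Placeholder check; swap in whatever hook runner you actually use.
      if subprocess.run(["pre-commit", "run", "--all-files"]).returncode != 0:
          sys.exit(1)
  os.execv(REAL_GIT, [REAL_GIT, *args])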


The problem with this is that at the limit, it means "run the pre-commit on every save of a file," which is not usually how people think of pre-commit hooks.

Given jj's rebasing tooling, rewiring precommit into "prepush" feels like the right way forward. There's a bit of a phase transition between commits on your machine and commits that are out in the wild, after all.
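
As a minimal sketch of that "prepush" idea: a thin wrapper that runs checks and only then calls `jj git push`. The specific checks here are placeholders (a Rust project's, assumed for illustration); substitute whatever your CI runs.

  #!/usr/bin/env python3
  # Sketch of a "prepush" wrapper: run CI-ish checks locally, then delegate to
  # `jj git push`. The checks below are placeholders for your project's own.
  import subprocess
  import sys

  CHECKS = [
      ["cargo", "fmt", "--check"],
      ["cargo", "test"],
  ]

  for check in CHECKS:
      if subprocess.run(check).returncode != 0:
          sys.exit(1)

  sys.exit(subprocess.run(["jj", "git", "push", *sys.argv[1:]]).returncode)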

Maybe! I agree that it feels kinda better. I’m not a huge hook user personally though.

For me it's mostly about doing as much of CI locally as feels reasonable to do quickly, to save an iteration cycle. But I live in a world of badly configured text editors and the like, so I usually have some silly formatting error.

Multiple agents is definitely tempting fate. Concurrent modification of the same git repo by multiple entities?

At that point you should use multiple repos so they can merge & resolve.

EDIT: of course, if a single agent uses git to modify a repo instead of jj, jj may have trouble understanding what's happened. You could compare it to using an app that uses an sqlite db, and then also editing that db by hand.


The OP is talking about the `jj workspace add` command, which creates a separate working copy backed by the same repository. It’s not a bad way to work with multiple agents, but you do have to learn what to do about workspace divergence.

Like git, you don’t lose any history.
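
For what it's worth, a minimal sketch of spinning up one workspace per agent (directory names made up) looks roughly like:

  import subprocess

  # Each workspace is a separate working copy backed by the same repository,
  # so concurrent agents don't edit the same checkout. Paths are illustrative.
  for agent in ("agent-1", "agent-2"):
      subprocess.run(["jj", "workspace", "add", f"../{agent}"], check=True)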


Sibling comment from gcr has the right details.

This doesn't involve git use at all.

Even with multiple workspaces (like git worktrees), once you use something like jjk, both the agent and jjk in the associated VS Code are operating on the same workspace, so that doesn't isolate enough. I don't think jjk uses `--ignore-working-copy` for read-only status updates, so it's snapshotting every time it checks the repo status while the agent is editing.
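
For comparison, a read-only poll that skips snapshotting would look roughly like this. A sketch only, not what jjk actually does:

  import subprocess

  def poll_status(repo: str) -> str:
      # --ignore-working-copy skips the automatic snapshot, so a background
      # status check doesn't race with an agent that's mid-edit.
      result = subprocess.run(
          ["jj", "--ignore-working-copy", "log", "-r", "@", "--no-graph"],
          cwd=repo, capture_output=True, text=True, check=True,
      )
      return result.stdout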

On top of that, throw in whatever Claude does if you "rewind" a conversation that also "reverts" the code, and agents wrongly deciding to work on code outside their focus area.

It's possible watchman helps (I need to look into that), but I'm so rarely using jj in VS Code (all I really want is inline blame) that it was easier to remove jjk than try to debug it all.

Divergence won't hide or lose any work, but it's an annoying time-suck to straighten out.


It sounds like it's a design goal of this "wamedia" to _not_ maintain bug compatibility with media players.

I suspect it is actually about maintaining permissiveness for malformed inputs rather than keeping security bugs. I ran into this building ingestion for a print-on-demand service where users upload technically broken PDFs that legacy viewers handle fine. If the new parser is stricter than the old one you end up rejecting files that used to work, which is a non-starter for the product.
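
A sketch of that strict-then-permissive fallback, with hypothetical parser functions standing in for a real PDF library:

  # Hypothetical stand-ins: `parse_strict` rejects anything malformed, while
  # `parse_lenient` repairs what it can, mirroring what legacy viewers tolerate.
  def parse_strict(data: bytes) -> dict:
      if not data.startswith(b"%PDF-"):
          raise ValueError("not a well-formed PDF")
      return {"pages": []}

  def parse_lenient(data: bytes) -> dict | None:
      return {"pages": []} if b"%PDF" in data else None

  def ingest(data: bytes) -> dict:
      # Prefer the strict path, but don't reject files the old pipeline accepted.
      try:
          return parse_strict(data)
      except ValueError:
          doc = parse_lenient(data)
          if doc is None:
              raise ValueError("broken beyond repair")
          return doc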

Referencing the classic https://xkcd.com/2030

"I don't quite know how to put this, but our entire field is bad at what we do, and if you rely on us everyone will die"

"They say they've fixed it with something called <del>blockchain</del> AI"

"Bury it in the desert. Wear gloves"


Honestly, this is absurdly funny, but it makes me wonder whether we'll ever take Computer Science and Computer Engineering as seriously as other branches of STEM. I've been debating recently whether I should keep working in this field, after years of repeatedly seeing incompetence and complacency create disastrous effects in the real world.

Oftentimes, I wonder if the world wouldn't be a bit better without the last 10 or 15 years of computer technology.


This is really something that’s making me quite fed up with the industry. I’m looking towards embedded and firmware in hopes that the lower in the stack I go, the more people care about these sorts of things out of business necessity. But even then I’m unsure I’ll find the rigor I’m looking for.

I’ve been thinking the same thing lately. It’s hard to tell if I’m just old and want everyone off my lawn, but I really feel like IT is a dead end. “Vintage” electronics are often nicer to use than modern equivalents: dials and buttons vs touch screens. Most of my electronics that have LCDs feel snappy, and you sort of forget that you’re using them and just do what you were trying to do. I’m not necessarily a Luddite. I know tech _could_ be better in theory, but it’s distressing that it apparently can’t be different in practice, for whatever reason. Economically, culturally? I don’t know.

> makes me wonder whether we'll ever take Computer Science and Computer Engineering as seriously as other branches of STEM

It's about as serious as a heart attack at this point...


This seems like something that shouldn't be the container format's responsibility. You can record arbitrary metadata and put it in a file in the container, so it's trivial to layer on top.

On the other hand, tie the container structure to your OS metadata structure, and your (hopefully good) container format is now stuck with portability issues with other OSes that don't have the same metadata layout, as well as with past and future versions of your own OS.
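
To illustrate layering metadata on top rather than baking it into the format: the member name and metadata shape below are made up, but the mechanism is just an ordinary file in the archive.

  import io
  import json
  import tarfile

  # Store OS-specific metadata as a regular archive member instead of extending
  # the container format itself. ".metadata.json" is an illustrative convention.
  with open("payload.txt", "w") as f:
      f.write("hello\n")

  metadata = {"payload.txt": {"xattrs": {"user.origin": "example"}}}

  with tarfile.open("archive.tar", "w") as tar:
      tar.add("payload.txt")  # the actual content being archived
      blob = json.dumps(metadata).encode()
      info = tarfile.TarInfo(name=".metadata.json")
      info.size = len(blob)
      tar.addfile(info, io.BytesIO(blob))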


What is a container then?

Just an id, blob format?

The purpose of tar (or competitors) is to serialize files and their metadata.


Tar is not the pinnacle of "containers"; it has age and ubiquity, and that's about it at this point.

Tar's purpose was to serialise files and metadata in 1979, accounting for tape foibles such as fixed or variable data block size.


> they have to resort to solutions like the rust-analyzer.

It's not really a bad thing. IDEs want results ASAP, so a solution should focus on latency; query-based compilers can compile just enough of the source to answer a specific query, so they're a good fit.

Compiling a binary means compiling everything though, so "compiling just the smallest amount of source for a query" isn't specifically a goal; instead you want to optimise for throughput, and stuff like batching is a win there.

These aren't language specific improvements, they're recognition that the two tasks are related, but have different goals.
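
As a toy illustration of the latency-vs-throughput point (not how rust-analyzer is actually structured): memoised, demand-driven queries only touch what the answer needs, while a batch compiler would process everything up front.

  from functools import lru_cache

  SOURCES = {
      "foo": "fn foo() -> i32",
      "bar": "fn bar() -> String",
  }

  @lru_cache(maxsize=None)
  def parse(name: str) -> str:
      print(f"parsing {name}")  # runs once per item, only when demanded
      return SOURCES[name]

  @lru_cache(maxsize=None)
  def return_type(name: str) -> str:
      return parse(name).rsplit("-> ", 1)[1]

  # An IDE-style query about `foo` never parses `bar`.
  print(return_type("foo"))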


That's not forced behaviour. If you want to do something more interesting, you'd use the raw/unsynchronised handles:

  /// The returned handle has no external synchronization or buffering layered on top.
  const fn stdout_raw() -> StdoutRaw;

> Empirically, CTVP attains very good detection rates with reliable false positives

A novel use of the word "reliable"? Jokes aside, either they mean the FPR as the opposite of what you'd expect, the table is not representative of their approach, or they're just... really optimistic?


Format C: /DevDrv /Q

EDIT: Just to be clear, if you don't understand when you'd use this command, do not use it. I suddenly realised people might not be familiar with formatting, and don't want to be responsible for the destruction.

