
Is this some dark pattern or what?

https://imgur.com/a/WN2Mr1z (UK: https://files.catbox.moe/m0lxbr.png)

I clicked Settings and this appeared; clicking away hid it, but now I can't see any setting for it.

The nasty way of reading that popup (and my first way of reading it) was that Filestash always sends crash reports and usage data, that the checkbox only controls whether that data is shared with third parties, and that it defaults to sharing. Clicking OK always consents to sending crash reports and usage data.

I'm not sure if it's actually operating that way, but if it's not, the language should probably be

    Help make this software better by sending crash reports and anonymous usage statistics.

    Your data is never shared with a third party.

    [ ] Send crash reports & anonymous usage data.
    

    [ OK ]

There is no telemetry unless you opt in. It was just a very poorly worded screen, which will definitely get updated with your suggestions.

update: done => https://github.com/mickael-kerjean/filestash/commit/d3380713...


You're very conscientious, thank you (providing a separate link for UK readers). I hate that it's come to this.

It's very easy to call Erlang from Elixir:

    value = :erlang.function(a, b, c)

So with that in mind, you can just learn Elixir and use Erlang where you need it for some libraries. Elixir is IMO much nicer to learn, write, and use. I think it is worth learning a bit about the underlying systems, but I've never felt like I should have put in ten Erlang-only projects before writing larger Elixir stuff.
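For example, the whole Erlang standard library is available with no wrapping or setup; a quick sketch using the stock :queue module:

    # Erlang's :queue module, called directly from Elixir
    q = :queue.new()
    q = :queue.in(1, q)
    {{:value, head}, _rest} = :queue.out(q)
    # head == 1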

Just to emphasize this as someone that's worked in Elixir professionally for a decade now.

It really is that easy. The interoperability between Erlang and Elixir is fantastic and the communities get along well. There has been a long-time push from many of the thought leaders that the BEAM (the VM that Erlang and Elixir run on) should be one community regardless of language, so that we can share resources.

When I first learned Elixir I spent all my time in Elixir. Erlang has a lot of nice libraries though, so it wasn't uncommon back when I started to reach for one.

It was a pretty gentle learning curve: you can write Elixir with no knowledge of Erlang at all, and you can consume Erlang libraries from Elixir with no knowledge of Erlang at all. Then, if you are like me, you get curious about how something works, go read some library code, and find it a bit odd but can mostly get the gist of it. Over time reading Erlang becomes easy enough; the Prolog-inspired syntax is the hardest hurdle to get over, but then you realize how much Erlang and Elixir have in common.


I don't get why the anti-swap stance is so prevalent in Linux discussions. Like, what does it hurt to stick 8/16/32 GB of extra "oh fuck" space on your drive?

Either you never exhaust your system RAM, so it doesn't matter; you minimally exhaust it and swap during some peak load, but at least nothing goes down; or you exhaust it all and things start getting OOM'd, which feels bad to me.

Am I out of touch? Surely it's the children who are wrong.


The pro-swap stance has never made sense to me because it feels like a logical loop.

There’s a common rule of thumb that says you should have swap space equal to some multiple of your RAM.

For instance, if I have 8 GB of RAM, people recommend adding 8 GB of swap. But since I like having plenty of memory, I install 16 GB of RAM instead—and yet, people still tell me to use swap. Why? At that point, I already have the same total memory as those with 8 GB of RAM and 8 GB of swap combined.

Then, if I upgrade to 24 GB of RAM, the advice doesn’t change—they still insist on enabling swap. I could install an absurd amount of RAM, and people would still tell me to set up swap space.

It seems that for some, using swap has become dogma. I just don’t see the reasoning. Memory is limited either way; whether it’s RAM or RAM + swap, the total available space is what really matters. So why insist on swap for its own sake?


You're mashing together two groups. One claims having swap is good, actually. The other claims you need N times RAM for swap. They're not the same group.

> Memory is limited either way; whether it’s RAM or RAM + swap

For two reasons: usage spikes, and actually having more usable memory. There are lots of unused pages on a typical system. You get free RAM for the price of cheap storage, so why wouldn't you?


This rule of thumb is outdated by two decades.

The proper rule of thumb is to make the swap large enough to hold all inactive anonymous pages after the workload has stabilized, but not so large that it causes swap thrashing and a delayed OOM kill if a fast memory leak happens.


That's not useful as a rule of thumb, since you can't know the size of "all inactive anonymous pages" without doing extensive runtime analysis of the system under consideration. That's pretty much the opposite of what a rule of thumb is for.

You are right, it is not a rule of thumb, and you can't determine optimal swap size right away. But you don't need "extensive runtime analysis". Start with a small swap - a few hundred megabytes (assuming the system has GBs of RAM). Check its utilization periodically. If it is full, add a few hundred megabytes more. That's all.
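Checking utilization is quick with standard tools; a minimal sketch:

    # total swap and how much is in use
    swapon --show
    free -h
    # per-process swap usage, if you need to find where it went
    grep VmSwap /proc/<pid>/status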

It's not like it's easy to shuffle partitions around. Swap files are a pain, so you need to reserve space at the end of the partition table; by the time you need to increase swap, the preceding partition is going to be full.

Better to overprovision right away and live with the feeling that you're wasting space.


> Swap files are a pain

Easier than partitions:

    # --size/--file are fairly recent mkswap options; on older util-linux:
    # fallocate -l 2G swap.img && chmod 600 swap.img && mkswap swap.img
    mkswap --size 2G --file swap.img
    swapon swap.img

Yeah, until you need to hibernate to one. I understand that calculating file offsets is not rocket science, but still, the dance required is fairly involved and feels a bit fragile.
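For reference, the dance looks roughly like this (a sketch; assumes the swap file sits on a filesystem like ext4 where resume_offset works, and it has to be redone if the file is ever recreated):

    # physical offset of the swap file's first extent
    filefrag -v swap.img | head -n 4
    # then boot with kernel parameters:
    #   resume=<device holding swap.img> resume_offset=<that physical_offset>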

Exactly the opposite: don't use swap partitions; use swap files, even multiple ones if necessary. Never allocate too much swap space. It is better to get an OOM kill earlier than to wait on an unresponsive system.

A swap partition is set-and-forget: it can be detected by label automatically and never fails.

A swap file means fallocating, setting file attributes (like `nocow`), finding the file offset and writing it to kernel params, and other gotchas, like btrfs not allowing you to snapshot a subvolume with an active swap file.
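On btrfs, for instance, the full dance is roughly this (a sketch; newer btrfs-progs bundle it into a single helper command):

    truncate -s 0 swapfile
    chattr +C swapfile        # nocow has to be set while the file is still empty
    fallocate -l 2G swapfile
    chmod 600 swapfile
    mkswap swapfile
    swapon swapfile
    # newer btrfs-progs: btrfs filesystem mkswapfile --size 2g swapfile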

Technically it's preferable, won't argue with that.


Hast thou discovered our lord and savior LVM?

> There’s a common rule of thumb that says you should have swap space equal to some multiple of your RAM.

That rule came about when RAM was measured in a couple of MB rather than GB, and hasn't made sense for a long time in most circumstances (if you are paging out a few GB of stuff on spinning drives your system is likely to be stalling so hard due to disk thrashing that you hit the power switch, and on SSDs you are not-so-slowly killing them with the excess writing).

That doesn't mean it isn't still a good idea to have a little allocated just in case. And as RAM prices soar while IO keeps getting faster, we may see larger swap/RAM ratios being useful again, as RAM sizes are constrained but working sets aren't getting any smaller.

In a theoretical ideal computer, of which our actual designs are leaky-abstraction-laden implementations, things are the other way around: all the online storage is your active memory and RAM is just the first level of cache. That ideal hasn't historically been what we have because the disparities in speed & latency between other online storage and RAM have been so high (several orders of magnitude), fast RAM has been volatile, and hardware & software designs are not stable & correct enough, so regular complete state resets are necessary.

> Why? At that point, I already have the same total memory as those with 8 GB of RAM and 8 GB of swap combined.

Because your need for fast immediate storage has increased, so 8-quick-8-slow is no longer sufficient. You are right that this doesn't mean 16-quick-16-slow is sensible, and 128-quick-128-slow would be ridiculous. But no swap at all doesn't make sense either: on your machine imbued with silly amounts of RAM, are you really going to miss a few GB of space allocated just in case? When it could be the difference between slower operation for a short while and some thing(s) getting OOM-killed?


Swap is not a replacement for RAM. It is not just slow; it is very, very slow. Even SSDs are about 10^3 times slower than RAM at random access with small 4K blocks. Swap is for allocated but unused memory. If the system tries to use swap as active memory, it is going to become unresponsive very quickly: with a 1000x latency gap, average access time scales roughly as 1 + 1000*f when a fraction f of accesses hit swap, so 0.1% memory excess causes a 2x degradation, 1% a 10x degradation, and 10% a 100x degradation.

What is allocated but unused memory? That sounds like memory that will be used in the near future, so we are scheduling an annoying disk load for the moment it is needed.

You are of course highlighting the problem that virtual addressing was intended to abstract away memory resource usage, but it provides poor facilities for power users to finely prioritize memory usage.

The example of this is game consoles, which didn't have this layer. Game writers had to reserve parts of RAM for specific uses.

You can't do this easily in Linux AFAIK, because it forces the model upon you.


Unused or Inactive memory is memory that hasn't been accessed recently. The kernel maintains LRU (least recently used) lists for most of its memory pages. The kernel memory management works on the assumption that the least recently used pages are least likely to be accessed soon. Under memory pressure, when the kernel needs to free some memory pages, it swaps out pages at the tail of the inactive anonymous LRU.

Cgroup limits and OOM scores let you prioritize memory usage per process and per process group. The madvise(2) syscall lets you prioritize memory usage within a process.
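A minimal sketch of the cgroup side, assuming systemd on cgroup v2 (./my-workload is just a stand-in):

    # cap a process tree at 1 GiB of RAM and 256 MiB of swap;
    # exceeding MemoryMax triggers reclaim and then an OOM kill inside this unit only
    systemd-run --scope -p MemoryMax=1G -p MemorySwapMax=256M ./my-workload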


There is too much focus in this discussion on low-memory situations. You want to avoid those as much as possible. Set reasonable ulimits for your applications.

The reason you want swap is that everything in Linux (and all of UNIX, really) is written with virtual memory in mind. Everything from applications to schedulers will have that use case in mind. That's the short answer.

Memory is expensive and storage is cheap. Even if you have 16 GB of RAM in your box, and perhaps especially then, you will have some unused pages. Paging those out and using the freed memory to buffer I/O will give you higher performance under most normal circumstances. So having a little bit of swap should help performance.

For laptops hibernation can be useful too.


It's true that if you always have free RAM, you don't need swap. But most people don't have that, because RAM can always be used as disk cache. Even if you are just web browsing, the browser is writing stuff fetched from the internet to disk in the hope it won't change, and the OS will keep all of that in RAM until no more will fit.

Once the system has used all the RAM it has available for disk cache, it has a choice if it has swap: it can write modified RAM to swap and use the space it freed for disk cache. There is invariably some RAM where that tradeoff works: RAM used by login programs and other servers that haven't been accessed in hours. Assuming the system is tuned well, that is all that goes to swap. The freed RAM is then used for disk cache, and your system runs faster, merely because you added swap.

There is no penalty for giving a system too much swap (apart from disk space), as the OS will just use it up to the point where the tradeoff doesn't make sense. If your system is running slowly because swap is being overused, the fix isn't removing swap (if you did, your system might die from lack of RAM); it's to add RAM until swap usage goes down.

So, the swap recipe is: give your system so much swap that you are sure it exceeds the size of stuff that's running but not used. 4 GB is probably fine for a desktop. Monitor it occasionally, particularly if your system slows down. If swap usage ever goes above 1 GB, you probably need to add RAM.

On servers, swap can be used to handle a DDoS from malicious logins. I've seen thousands of ssh attempts happen at once, in an attempt to break in. Eventually the system will notice and firewall the IPs doing it. If you don't have swap, those logins will kill the system unless you have huge amounts of RAM that isn't normally used. With swap it slows to a crawl, but then recovers when the firewall kicks in. So both provisioning swap and having loads of RAM prevent a DDoS from killing your system, but this is in a VM, one costs me far more per month than the other, and I'm trying to fix a problem that happens very rarely.


> There is no penalty for giving a system too much swap (apart from disk space)

There is a huge penalty for having too much swap - swap thrashing. When the active working set exceeds physical memory, performance degrades so much that the system becomes unresponsive instead of triggering OOM.

> Monitor it occasionally, particularly if your system slows down.

Swap doesn't slow down the system. Either it improves performance by freeing unused memory, or the system becomes completely unresponsive when you run out of memory. Gradual performance degradation never happens.

> give your system so much swap you are sure it exceeds the size of stuff that's running but not used. 4Gb is probably fine for a desktop.

Don't do this. Unless hibernation is used, you only need a few hundred megabytes of free swap space.


> There is a huge penalty for having too much swap - swap thrashing.

Thrashing is the penalty for using too much swap. I was saying there is no penalty for having a lot of swap available but unused.

Although thrashing is not something you want happening, if your system is thrashing with swap, the alternative without it available is the OOM killer laying waste to the system. Out of those two choices I prefer the system running slowly.

> Gradual performance degradation never happens.

Where on earth did you get that from? It's wrong most of the time. The subject was very well researched in the late 1960s and 1970s. If load ramps up gradually you get a gradual slowdown until the working set is badly exceeded, then it falls off a cliff. This is a modern example, but there are lots of papers from that era showing the usual gradual response, followed by falling off a cliff: https://yeet.cx/r/ayNHrp5oL0. A seminal paper on the subject: https://dl.acm.org/doi/pdf/10.1145/362342.362356

The underlying driver for that behaviour is the disk system being overwhelmed. Say you have 100 web workers that spend a fair chunk of their time waiting for networked database requests. If they all fit in memory, the response is as fast as it can be. Once swapping starts, latency increases gradually as more and more workers are swapped in and out while they wait for clients and the database. Eventually the increasing swapping hits the disk's IOPS limit, active memory is swapped out, and performance crashes.

The only reason I can think of that the gradual slowdown is not obvious to you is that modern SSDs are so fast that the initial degradation isn't noticeable to a desktop user.

> Don't do this. Unless hibernation is used, you only need a few hundred megabytes of free swap space.

As you seem to recognise, having lots of swap on hand but unused, even terabytes of it, does not affect performance. The question then becomes: what would you prefer to happen in those rare times when swap usage exceeds the optimal few hundred megabytes? Your options are to have your desktop app randomly killed by the OOM killer and perhaps lose your work, or to have the system slow to a crawl so you can take corrective action like closing the offending app. When that happens, it seems popular to blame the swap system for slowing the system down, when really the user temporarily exceeded the capacity of their computer.


> Thrashing is the penality for using too much swap. I was saying there is no penality for having a lot of swap available, but unused.

Unless you overprovision memory on a machine or have carefully set cgroup limits for all workloads, you are going to have a memory leak and your large unused swap is going to be used, leading to swap thrashing.

> the OMM killer laying waste to the system. Out of those two choices I prefer the system running slowly.

In a swap thrashing event, the system isn't just running slowly but totally unresponsive, with an unknown chance of recovery. The majority of people prefer OOM killer to an unresponsive system. That's why we got OOM killer in the first place.

> If load ramps up gradually you get a gradual slowdown until the working set is badly exceeded, then it falls off a cliff.

The random access latency difference between RAM and SSD is about 10^3. When the active working set spills out into swap, a linear increase in swap utilization leads to a steep performance degradation. Assuming random access, simple math gives that 0.1% excess causes a 2x degradation, 1% a 10x degradation, and 10% a 100x degradation.

> A seminal paper on the subject: https://dl.acm.org/doi/pdf/10.1145/362342.362356

This paper discusses measuring stable working sets and says nothing about performance degradation when your working set increases.

> https://yeet.cx/r/ayNHrp5oL0.

WTF is this graph supposed to demonstrate? Some workload went from 0% to 100% of swap utilization in 30 seconds and got OOM-killed. This is not going to happen with a large swap.

> Once swapping starts latency increases gradually as more and more workers are swapped in and out while they wait for clients and the database

In practice, you never see constant or gradually increasing swap I/O in such systems. You either see zero swap I/O with occasional spikes due to incoming traffic or total I/O saturation from swap thrashing.
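This is easy to check on a live box; a quick sketch with standard tools:

    # si/so columns show pages swapped in/out per second
    vmstat 1
    # memory pressure stall information, on kernels 4.20+
    cat /proc/pressure/memory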

> Your options are get your desktop app randomly killed by the OOM killer and perhaps lose your work, or the system slows to a crawl and you take corrective action like closing the offending app.

You seem to be unaware that swap thrashing events are frequently unrecoverable, especially with a large swap. It is better to have a typical culprit like Chrome OOM-killed than to press the reset button and risk filesystem corruption.


> Unless you overprovision memory on a machine or have carefully set cgroup limits for all workloads, you are going to have a memory leak and your large unused swap is going to be used, leading to swap thrashing.

You seem to be very certain about that inevitable memory leak. I guess people can make their own judgements about how inevitable they are. I can't say I've seen a lot of them myself.

But the next bit is total rubbish. A memory leak does not lead to thrashing. By definition if you have a leak the memory isn't used, so it goes to swap and stays there. It doesn't thrash. What actually happens if the leak continues is swap eventually fills up, and then the OOM killer comes out to play. Fortunately it will likely kill the process that is leaking memory.

I've used this behaviour to find which process had a slow leak (it had to be running for months). This has only happened once in decades, mind you; these leaks aren't that common. You allocate a lot of swap, and gradually it is filled by the process that has the leak. Because the swap is so large, once the leaking process fills it, it stands out like dog's balls: its memory consumption is huge.

You notice all of this because, like all good sysadmins, you monitor swap usage and receive alerts when it gets beyond what is normal. But you have time - the swap is large, the system slows down during peaks but recovers when they are over. It's annoying, but not a huge issue.

> In a swap thrashing event, the system isn't just running slowly but totally unresponsive

Again, you seem to be very certain about this. Which is odd, because I've logged into systems that were thrashing, which means they didn't meet my definition of "totally unresponsive". In fact I could only log in because the OOM killer had freed some memory. The first couple of times the OOM killer took out sshd and I had to reach for the reset button, but I got lucky one day and could log in. The system was so slow it was unusable for most purposes, but not for the one thing I needed, which was to find out why it had run out of memory. Maybe we have different definitions of "totally", but to me that isn't "totally". In fact if you catch it before the OOM killer fires up and kills god knows what, these "totally unresponsive systems" are salvageable without a reboot.

> This paper discusses measuring stable working sets and says nothing about performance degradation when your working set increases.

Fair enough. Neither link was good.

> You seem to be unaware that swap thrashing events are frequently unrecoverable, especially with a large swap.

Perhaps some of them are, but for me it wasn't the swapping that did the system in. It is always the OOM killer.

> It is better to have a typical culprit like Chrome OOM-killed than to press the reset button and risk filesystem corruption.

The OOM killer on the other hand leaves the system in some undefined state. Some things are dead. Maybe you got lucky and it was just Chrome that was killed, but maybe your sound, bluetooth, or DNS daemons have gone AWOL and things just behave weirdly. Despite what you say, the reset button won't corrupt modern journaled filesystems as they are pretty well debugged. But applications are a different story. If they get hit by a reset or the OOM killer while they are saving your data and aren't using sqlite as their "fopen()", they can wipe the file you are working on. You don't just lose the changes. The entire document is gone. This has happened to me.

I'd take the system taking a few minutes to respond to my request to kill a misbehaving application over the OOM killer any day.


> You seem to be very certain about that inevitable memory leak.

It is fashionable to disable swap nowadays because everyone has been bitten by a swap thrashing event. Read other comments.

> A memory leak does not lead to thrashing. By definition if you have a leak the memory isn't used, so it goes to swap and stays there.

You assume that leaked memory is inactive and goes to swap. This is not true. Chrome, Gnome, whatever modern Linux desktop apps leak a lot, and it stays in RSS, pushing everything else into swap.

> if the leak continues is swap eventually fills up, and then the OOM killer comes out to play

You assume that the OOM killer comes out to play in time. The larger the swap, the longer it takes for the OOM killer to trigger, if ever; the kernel OOM killer is unreliable, which is why we have a collection of other tools like earlyoom, Facebook's oomd, and systemd-oomd.

> I've logged into systems that were thrashing

It means that the system wasn't out of memory yet. When it is unresponsive, you won't be able to enter commands into an already open shell. See other comments here for examples.

> The OOM killer on the other hand leaves the system in some undefined state. Some things are dead. Maybe you got lucky and it was just Chrome that was killed, but maybe your sound, bluetooth, or DNS daemons have gone AWOL and things just behave weirdly.

This is not true. By default, the kernel OOM-killer selects one single largest (measured by its RSS+swap) process in the system. By default, systemd, ssh and other socket-activated systemd units are protected from OOM.
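For reference, that badness score is visible per process, and util-linux ships a small helper to adjust it; a sketch (<pid> is a placeholder):

    # current OOM badness of this shell
    cat /proc/self/oom_score
    # bias a process away from (negative) or toward (positive) being picked
    choom -p <pid> -n -500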


> It is fashionable to disable swap nowadays because everyone has been bitten by a swap thrashing event.

If they disable swap they will get hit by the OOM killer. You seem to prefer it over slowing down. I guess that's a personal preference. However, I think it is misleading to say people are being bitten by a swap thrashing event. The "event" was them running out of RAM. Unpleasant things will happen as a consequence. Blaming thrashing or the OOM killer for the unpleasant things is misleading.

> You assume that leaked memory is inactive and goes to swap. This is not true.

At best, you can say "it's not always true". It's definitely gone to swap in every case I've come across.

> It means that the system wasn't out of memory yet.

Of course it wasn't out of memory. It had lots of swap. That's the whole point of providing that swap - so you can rescue it!

> When it is unresponsive, you won't be able to enter commands into an already open shell.

Again, that's just plain wrong. I have entered commands into a system that is thrashing. It must work eventually if thrashing is the only thing going on, because when the system thrashes the CPU utilization doesn't go to 0. The CPU is just waiting for disk I/O after all, and disk I/O is happening at a furious pace. There's also a finite amount of pending disk I/O. Provided no new work is arriving (time for a cup of coffee?) it will get done, and the thrashing will end.

If the system does die, other things have happened. Most likely the OOM killer, if they follow your advice, but network timeouts killing ssh and networked shares are also a thing. If you are using Windows or MacOS, the swap file can grow to fill most of the free disk space, so you end up with a double whammy.

Which brings me to another observation. In desktop OSes, the default is to provide it, and lots of it. In Windows, swap will grow to 3 times RAM. This is pretty universal; even Debian will give you twice RAM for small systems. The people who decided on that design choice aren't following some folklore they read in some internet echo chamber. They've used real data: they've observed that when swapping starts, systems slow down, giving the user some advance warning, and that when thrashing starts, systems can recover rather than die, which gives the user the opportunity to save work. It is the right design tradeoff IMO.

> By default, the kernel OOM-killer selects one single largest (measured by its RSS+swap) process in the system.

Yes, it does. And if it is a single large process hogging memory you are in luck; the OOM killer will likely do the right thing. But Chrome (and now Firefox) is not a single large process. Worse, if the out-of-memory condition is caused by, say, someone creating zillions of logins, those processes are so small that they are the last thing the OOM killer chooses. Shells, daemons, all sorts of critical things go first. "Largest process first" is just a heuristic, one which can be wrong, and in my case has been wrong. Badly wrong.


> You seem to prefer it over slowing down.

An unresponsive system is not a slowdown. You keep ignoring that.

>> You assume that leaked memory is inactive and goes to swap. This is not true.

> At best, you can say "it's not always true".

You skipped my sentence that was specifying the scope when "it's not always true", and now you pretend that I'm making a categorical generalized statement. This is a silly attempt at a "strawman".

>> It means that the system wasn't out of memory yet.

> Of course it wasn't out of memory. It had lots of swap. That's the whole point of providing that swap - so you can rescue it!

Swap is not RAM. When the free RAM is below the low watermark, the kernel switches to direct reclaim and blocks tasks that require free memory pages. Blocking of tasks happens regardless of swap. If you are able to log in and fork a new process, the system is not below the low watermark.

>> When it is unresponsive, you won't be able to enter commands into an already open shell.

> Again that's just plain wrong.

You are in denial.

> Provided no new work is arriving (time for a cup of coffee?) it will get done, and the thrashing will end.

This is false. A system can stay unresponsive much longer than a cup of coffee. There is no guarantee that the thrashing will end in a reasonable time.

> even Debian will give you twice RAM for small systems.

> The people who decided on that design choice aren't following some folklore they read in some internet echo chamber.

That 2x RAM rule is exactly that: old folklore. You can find it in SunOS/AIX/etc. manuals and Usenet FAQs from the 80s and early 90s, before Linux existed.

> They've used real data.

You're hallucinating like an LLM. No one did any research or measurements to justify that 2x rule in Linux.


Another factor other commenters haven't mentioned, although the article does bring it up: you may disable swap, but you will still get paging behavior regardless, because in a pinch the kernel will reclaim pages that are mmapped to files, most typically binaries and libraries. This means the process in question will incur a mapped page read the next time it schedules. But of course you're out of memory, so the kernel will need to page out another process's code page to make room, and when that process next schedules... etc.

This has far worse degradation behavior than normal swapping of regular data pages. That at least gives you the breathing space to still schedule processes when under memory pressure, such as whichever OOM killer you favor.


Binaries and libraries are not paged out. Being read-only, they are simply discarded from memory. And I'll repeat: actively used executable pages are explicitly excluded from reclaim and never discarded.

The reason you're supposed to have swap equal in size to your RAM is so that you can hibernate, not to make things faster. You can easily get away with far less than that because swap is rarely needed.

> so that you can hibernate

The “paging space needs to be X*RAM” and “paging space needs to be RAM+Y” rules predate hibernation being a common thing (even a thing at all); hibernation is an extra use for that paging space, not the reason it is there in the first place. Some OSs allocate hibernation space separately from paging/swap space.


I do wish there were a way to reserve swap space for hibernation that doesn't contribute to virtual memory. Otherwise, by construction, the hibernation space is not sufficient for the entire virtual memory space, and hibernation will fail when virtual memory is getting full.

this. i don't even want swap for my apps. they allocate too much memory as it is. i'd rather they be killed when the memory runs out or simply be prevented from allocating memory that's not there. the kind of apps that can be safely swapped out are rarely using much memory anyway.

but i do want hibernate to work.
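fwiw, strict overcommit accounting gets close to "prevented from allocating memory that's not there". a sketch, with the caveat that plenty of software misbehaves when allocations can fail:

    # refuse allocations beyond swap + overcommit_ratio% of RAM
    sysctl vm.overcommit_memory=2
    sysctl vm.overcommit_ratio=100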


You're implying that people are telling you to set up swap without any reason, when in fact there are good reasons - namely dealing with memory pressure. Maybe you could fit so much RAM into your computer that you never hit pressure - but why would you do that vs allocating a few GB of disk space for swap?

Also, as has been pointed out by another commenter, 8GB of swap for a system with 8GB of physical memory is overkill.


I'm also in the GP's camp; RAM is for volatile data, disk is for data persistence. The first "why would you do that" that needs to be addressed is why volatile data should be written to disk. And "it's just a few % of your disk" is not a sufficient answer to that question.

> RAM is for volatile data, disk is for data persistence.

Genuinely curious where this idea has come from. Is it something being taught currently?


No, not currently -- since the start of computers. This is quite literally part of Computing 101; see https://web.stanford.edu/class/cs101/lecture02.html#/9 , slides 10-12.

You can ask your favourite search engine or language fabricator about the differences between RAM and disk storage; they will all tell you the same thing. Frankly, it's kind of astonishing that this needs to be explained on a site like HN.


I have no idea where on those slides it says non-volatile storage should not be used for non-permanent, temporary data.

It does note the main differences (speed, latency, permanence). How does that limit what data disk can be used for?

What would one use optane DIMMs for?

Also, if my program requires a huge working set to process the data, why would I spend the effort to implement my own paging to temporary working files, instead of allocating a ridiculous amount of memory and letting the OS manage it for me? What is the benefit?


Because of cost, particularly given the current state of the RAM market. In order to have so much memory that you never hit memory spikes, you would deliberately need to buy RAM that will never be used.

Note that simply buying more RAM than you expect to use is not going to help. Going back to my post from earlier, I had a laptop with 8GB of RAM at a time when I would usually only need about 2-4GB of RAM for even relatively heavy usage. However, every once in a while, I would run something that would spike memory usage and make the system unresponsive. While I have much more than 8GB nowadays, I'm not convinced that it's enough to have completely outrun the risk of this sort of behaviour recurring.


how much swap do you have? i have 16GB now, and 16GB ram. i had a machine before with 48GB ram. obviously having more ram and no swap should perform better than the same amount of memory split into ram and swap.

8/16/32 GB of swap space without cgroup limits would get the system into swap thrashing and make it unresponsive.

I think it's some kind of misplaced desire to be "lightweight" and avoid allocating disk space that cannot be used for regular storage. My motivation way back when for wanting to avoid swap was concern about SSD wear, but that was solved a long time ago.

Swap causes thrashing, making the whole system unusable, instead of a clean OOM kill

IMO OOM killing should be reserved for single misbehaving processes. When a lot of different applications each use a decent amount of memory and exhaust the system RAM, swapping to disk is the appropriate thing to do.

When you set cgroup limits, you tell the kernel how to determine when a process is misbehaving and needs to be OOM-killed.

Swap causes thrashing if you have too large a swap and no cgroup limits.

1) in the Microsoft days I would have a lot of available RAM, but Windows would still aggressively swap, and I would get enraged when changing to an app that would have to swap in while I had 4 GB of memory free

2) the OS tried to be magical, but a swap thrash is still crap... I would much rather OOM-kill apps than swap thrash. For a desktop user: kill the fucking browser or Electron apps, don't freeze the system/UI.


I can't help but imagine training horses vs training cats. One of them is rewarding, a pleasure, beautiful to see; the other is frustrating, leaves you with a lot of scratches, and ultimately ends with both of you "agreeing" on a marginal compromise.

Right now vibe coding is more like training cats. You are constantly pushing against the model's tendency to produce its default outputs regardless of your directions. When those default outputs are what you want - which they are in many simple cases of effectively English-to-code translation with memorized lookup - it's great. When they are not, you might as well write the code yourself and at least be able to understand the code you've generated.

Yup. I've likened it to working with juniors: often smart, with good understanding and "book knowledge" of many of the languages and tools involved, but you often have to step back and correct things, normally around local details and project specifics. And then the "junior" you work with every day changes, so you have to start again from scratch.

I think there needs to be a sea change in current LLM tech to make that no longer the case: either massively increased context sizes, so they can hold nearly a career's worth of learning (without the tendency to start ignoring that context, as happens at the larger end of today's still-way-too-small-for-this context windows), or continuous training passes that integrate those "learnings" directly into the weights themselves, which might be theoretically possible today, but requires many orders of magnitude more compute than is available, even if you ignore cost.


Try writing more documentation. If your project is bigger than a one-man team then you need it anyway, and with LLM coding you effectively have an infinite-man team.

But that doesn't actually work for my use cases. Plenty of other people have already told me "I'm Holding It Wrong" without suggestions that actually work, so I've started ignoring them. At this stage I just assume people work in very different sectors, and some see the "great benefits" often proselytized on the internet while other areas don't. Systems programming, where I work, seems to be a poor fit: possibly due to a relative lack of content in the training corpus, perhaps because company-internal styles and APIs mean that just describing them takes up a huge amount of the context, leaving little for further corrections or details, or some other failure mode.

We have lots of documentation. Arguably too much: it quickly fills much of the Claude Opus context window with relevant documentation alone, and even then it repeatedly outputs things directly counter to the documentation it just ingested.


So how are you as a person able to keep all of those rules in mind when you make a change? How would you train a junior engineer to do your job? Perhaps looking at it from that angle will solve your problem.

I've never seen a horse that scratches you.

> More corrosion

Surely, given Starlink's roughly five-year deorbit plan, you could design a platform to hold up for that long... And instead of burning the whole thing up you could just refurbish it when you swap out the actual rack contents, considering that those probably have an even shorter edge lifespan.


Starlinks are built to safely burn up on re-entry. A big reusable platform would have to work quite differently so that it never uncontrollably re-enters, or it might kill someone with high-velocity debris on impact.

This adds weight and complexity and likely also forces a much higher orbit.


Hopefully a sea platform does not end up flying into space all on its own, only to crash and burn back down.

Maybe the AI workloads running on it achieve escape velocity? ;)


I can’t wait for all the heavy metals that are put into GPUs and other electronics showering down on us constantly. Wonder why the billionaires have their bunkers.

Yeah, "burn up safely on reentry".

100 years later: "why does everything taste like cadmium?"


Every time Brave gets trotted out as some good alternative, I can't get past the VC / crypto coin / Brave Rewards holding garbage.

Maybe they're OK now, but they made some really gross mistakes (?).


There's a lot of confusion around the "brave-reward holding garbage."

To be brief, Brave issued grants to users, which those same users could then direct to their favorite content creators. So, the grants _started with Brave_, and initially _remained with Brave_ until they were claimed by the designated content creator. If the content creator never claimed the grant, it could be recycled back into the pool, and re-issued to another Brave user in the future.

The _grossness_ of this "controversy" is in the fiction surrounding it, not in the details themselves. Some falsely claimed Brave solicited donations on behalf of content creators; that was never the case. _Falsehood flies, and the truth comes limping after it._


Rewards has always been opt in, so you don't need to get past it to use Brave. We would not be here without it, but use Chrome or Firefox if you prefer. IMHO "really gross" applies to the Google spyware embedded in Chrome, and Firefox has had its share of "gross mistakes" since I left.

For those who don't want to free-ride, we will offer Brave Origin soon: a one-time payment for a stripped-down Brave, with no opt-in UX of any kind.


[flagged]


Reading “crypto mining in the browser while you browse” should be an immediate red flag that you should run away. There's absolutely no need to respect him, even if he was the creator of JS. So what.

“but it’s opt in, bro, you dont have to use it” — every Brave stan

We never, as in not ever, offered crypto mining in the browser.

In fact, Brave was the first browser to block nasty crypto-jacking/mining scripts (e.g., CoinHive) when they began to appear on the scene, nearly a decade ago.

> ... things like "Come home, white man" and other dog-whistles on image-boards

Wow, that's a new lie.

Do you have any evidence? This isn't something Brave ever did, but it's easy to make unfalsifiable "There's probably archives" b.s. claims on HN.


I use old.reddit on my desktop and new.reddit on my phone, and new.reddit is constantly mashing in posts from a more niche "my country" sub (e.g. not the "main" /r/country) that often has very baity posts (e.g. thinly disguised "does anyone else hate immigrants??" posts).

Same account, same behaviour, but the new site is really pushing "gross" stuff at me.


I've used Podman for a number of years, possibly too long to really give a good comparison, but for the most part it is exactly s/docker/podman. I can't think of anything I've read on the internet that I couldn't just copy the tail of and stick podman in front of. run/build/inspect/volumes/secrets/etc. all work like-for-like by design, AFAIK. There may be additional flags on Podman's end for other things it supports (e.g. SELinux labels).

EDIT: Actually, the biggest difference might be that images often need a fully qualified name, so instead of `run name/container:latest` you need `run docker.io/name/container:latest`. You can configure default search registries, though.
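If you want the Docker-ish behaviour back, it's one line of config (a sketch; see containers-registries.conf(5)):

    # /etc/containers/registries.conf or ~/.config/containers/registries.conf
    unqualified-search-registries = ["docker.io"]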

The biggest thing people will (did?) miss is docker-compose. There was a third-party `podman-compose`, but now it seems that's actually been folded under the official umbrella, along with a `podman compose` command that will "Run compose workloads via an external provider such as docker-compose or podman-compose", so even that gap might be closed up now. Honestly, I swapped to just scripting it myself when I swapped to Podman, before even the third-party podman-compose existed, using sh, .kube files, or now systemd units (see the sketch below). If you're used to big 5-10+ container compose files you might have some friction there, might not.
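The systemd-unit route nowadays is Quadlet; a minimal sketch (the unit name and image are just examples):

    # ~/.config/containers/systemd/web.container (Podman 4.4+)
    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

After a `systemctl --user daemon-reload`, `systemctl --user start web` runs it like any other service.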

There are differences internally, e.g. Docker primarily runs as root and has a different networking stack compared to Podman, but for most usage on a dev machine it doesn't matter; in a deployment, maybe it does, maybe not.

Unsolicited opinion: I originally found Podman much less intrusive. Docker's iptables muckery always rubbed me the wrong way, so Podman defaulting to userspace networking and just letting me do any nftables routing I wanted felt much nicer. It also just feels less icky to use, since its defaults and configuration options do less to funnel you into docker.com.

https://github.com/containers/podman-compose


Man. I don't actually know anyone who vapes. I see it in public sometimes and just assumed people refilled them; maybe they do. Seeing him hold some up, seeing all that plastic, metal, and electronics, all that work (joules) expended, in something that you just dump after a day, is nuts. I can't think of anything else like that. Maybe plastic water bottles, but they don't have even half the materials or complexity? Maybe I underestimate how much goes into regular cigarettes or beer cans.


Refillable vapes used to be the standard around a decade ago, back when a liter of vape base (without nicotine) cost 30€ at most. Disposable vapes pretty much didn't exist. Now the same liter of vape base (still without nicotine) is a "tobacco product" and costs 400€+ due to taxes, thanks to decade-long lobbying efforts by big tobacco. That has turned refillable vapes into a niche product, since single-use vapes cost the same or less without any of the hassle of mixing your own liquids or having to refill them.


Are you referring to VG/PG? Are they really that expensive for you? That's wild.


Yes, 100ml from the same store I bought a liter from for 40€ in 2015 now costs 56€. There is currently a tax of 0.32€ per ml on liquid, no matter whether you're buying the base or the finished product.


I'm confused. Why wouldn't the same taxes apply to a disposable vape which has the same liquid inside of it?

Also, in the GP comment, you mentioned the cost was 400€ but here you're saying 56€. Are you talking about different things?


The 400 euro figure was for a liter, while the 56 euro price is for 100ml. (At 0.32€ per ml, the tax alone comes to 320€ on a liter.)


I guess it's time to not only make the vaping liquids yourself, but the bases too :P

Quick Googling suggests I'd pay less than 30 euro for a liter of VG. What about for the nicotine concentrate? I'd pay about 20 euros for 100ml of 20mg/ml concentrate.


A little calibrating correction: A vape should last more than a day unless you're a very heavy user. Around three days with a '700 puffs' one maybe, and a week wouldn't be unheard of.


The puff number was extremely exaggerated on the disposable ones I've tried.


The complexity of a can isn't as extreme as a disposable ARM chip, but it is still quite a sophisticated mass-produced object. https://www.youtube.com/watch?v=hUhisi2FBuw

Many everyday single-use objects have a lot of thought put into them: https://www.youtube.com/watch?v=pj0ze8GnBKA


Man, if that's the problem, then I can only assume fast food boxes aren't recyclable either?


The point of paper fast food boxes is not to recycle them but to leave no trash in the end, as they just burn or rot, all in a sustainable way, in contrast to plastic.

