
It is a balance. Too many logs cost money and slow down log searches, both for the search itself and for the human sifting through 100 entries on the same trace.


The trick here is to log aggressively and then filter aggressively. Logs only get costly if you keep them forever. Receiving them isn't that expensive, and keeping them for a short while won't break the bank either. But having logs pile up by the tens of GB every day gets costly pretty quickly. With aggressive filtering you don't have that problem. And when you do need the logs, temporarily relaxing the filters is a lot easier than adding ad hoc logging back into the system and deploying that.
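As a rough sketch of what that can look like with Python's standard logging module (the module names and the VERBOSE_MODULES allow-list are made up for illustration): the code logs everything, and a filter on the handler decides what is actually kept, so turning detail back on is a config change rather than a redeploy.

    import logging

    # Hypothetical allow-list: modules whose DEBUG/INFO records we keep right now.
    # Everything else is still logged by the code but dropped here.
    VERBOSE_MODULES = {"billing", "auth"}

    class AggressiveFilter(logging.Filter):
        """Drop low-severity records unless their module is on the allow-list."""
        def filter(self, record: logging.LogRecord) -> bool:
            if record.levelno >= logging.WARNING:
                return True  # always keep warnings and errors
            return record.name.split(".")[0] in VERBOSE_MODULES

    handler = logging.StreamHandler()
    handler.addFilter(AggressiveFilter())
    logging.basicConfig(level=logging.DEBUG, handlers=[handler])

    logging.getLogger("billing.invoices").debug("recalculated invoice 42")  # kept
    logging.getLogger("http.pool").debug("connection reused")               # dropped

The same idea scales up to log shippers (drop or sample at the collector) instead of in-process filters; the point is that the log statements stay in the code either way.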

Same with metrics. Mostly they don't matter. But when they do, it's nice if they're there.

Basically, logging is the easy and cheap part of observability; it's the ability to filter and search that makes it useful. A lot of systems get that wrong.


Nice. I'm going to read up more about filtering.


Yeah, absolutely. But the author's idea of logging all major business logic decisions (that users might question later) sounds reasonable.
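One cheap way to do that is to emit each decision as a single structured log line, so "why did X happen to this order?" becomes a log search later. A rough sketch in Python; the discount example and field names are hypothetical:

    import json
    import logging

    logger = logging.getLogger("decisions")

    def log_decision(decision: str, reason: str, **context) -> None:
        # One structured line per business decision, searchable by field.
        logger.info(json.dumps({"decision": decision, "reason": reason, **context}))

    def apply_discount(order_total: float, customer_tier: str) -> float:
        if customer_tier == "gold" and order_total >= 100:
            log_decision("discount_applied", "gold tier over threshold",
                         order_total=order_total, rate=0.10)
            return order_total * 0.90
        log_decision("discount_skipped", "tier or total below threshold",
                     order_total=order_total, tier=customer_tier)
        return order_total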


Yes. I like the idea of assertions too. Log when an assertion fails. Then get notified to investigate.
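Something like a "soft assert" helper: check the invariant, log and page when it fails, but let the request carry on. A rough sketch; notify_on_call is a stand-in for whatever alerting hook you already have:

    import logging

    logger = logging.getLogger("invariants")

    def notify_on_call(message: str) -> None:
        # Stand-in for the real alerting hook (pager, Slack webhook, etc.).
        pass

    def soft_assert(condition: bool, message: str, **context) -> bool:
        # Unlike a plain assert, this never crashes the request: it records
        # the failure with context and pings someone to investigate.
        if not condition:
            logger.error("assertion failed: %s | context=%r", message, context)
            notify_on_call(message)
        return condition

    # Usage: hypothetical invariant inside a refund handler.
    refund_total, line_item_sum = 40.0, 39.5
    if not soft_assert(refund_total == line_item_sum,
                       "refund total matches line items",
                       refund_total=refund_total, line_item_sum=line_item_sum):
        pass  # optionally fall back to a safe default here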





