
Rate limiting (and its close cousin, back-off retries) is an important feature of any service consumed by an "outside entity". There are many reasons you'll want rate limiting at every layer of your stack, for every request you have: brute-force resistance, [accidental] DDoS protection, resiliency, performance testing, service quality, billing/quotas, and more.

Every important service eventually gets rate limiting. The more of it you have, the more problems you can solve. Put in the rate limits you think you need (based on performance testing) and only raise them when you need to. It's one of those features nobody adds until it's too late. If you're designing a system from scratch, add rate limiting early on. (You'll want to control the limit per session/identity, as well as in bulk.)
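The classic way to implement this is a token bucket: each client gets a bucket that refills at a steady rate and holds up to a burst's worth of tokens. Here's a minimal sketch in Python (the class name, rates, and capacity are illustrative, not from any particular library):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows bursts up to
    `capacity`, then sustains `rate` requests per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Sustain 5 req/s with bursts of up to 10.
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
```

In a back-to-back loop like this, the first 10 calls drain the burst capacity and the rest are rejected until the bucket refills; in production you'd keep one bucket per session/identity and tune rate and capacity from performance testing.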



Very much what I recommend to our teams as well. And you can totally start with something conservative. Does a single IP really need 50 requests per second?

Like, sure, I have services at work where the answer is "yes". But I have 10-20 times more services for which I could cut that to 5 and still be fine.
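A per-IP limit like that can be as simple as a fixed-window counter keyed by client address. A minimal sketch, assuming a 5 req/s default (the class and the example IP are hypothetical, not from the commenter's setup):

```python
import time
from collections import defaultdict

class PerKeyLimiter:
    """Fixed-window counter keyed by client identity (e.g. source IP).
    Allows at most `limit` requests per `window` seconds per key."""

    def __init__(self, limit: int, window: float = 1.0):
        self.limit = limit
        self.window = window
        self.counts = defaultdict(int)
        self.window_start = time.monotonic()

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        # Reset all counters when the window rolls over.
        if now - self.window_start >= self.window:
            self.counts.clear()
            self.window_start = now
        self.counts[key] += 1
        return self.counts[key] <= self.limit

# A conservative default: 5 requests per second per IP.
limiter = PerKeyLimiter(limit=5)
ok = [limiter.allow("203.0.113.7") for _ in range(8)]
other = limiter.allow("198.51.100.1")  # independent key, still allowed
```

Fixed windows admit a brief burst at window boundaries; for smoother behavior you'd swap in a sliding window or per-key token buckets, but the per-key structure stays the same.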



