When working on a high-volume web application, I found data sketches (specifically the count-min sketch) very useful for finding and logging unique query patterns: check whether we've seen a pattern before, estimate how often, and log it if it's new. Using a sketch bounded our memory use and eliminated database trips. In theory a hashmap would have worked too, but its size grows without bound as the number of unique queries grows, and all we needed was an estimate.
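To make that concrete, here's a minimal count-min sketch sketched in Python (not the code from that application; class and parameter names are my own, and the hash construction is just one reasonable choice). Memory is fixed at width x depth counters regardless of how many unique patterns flow through, and estimates can only overshoot, never undershoot:

```python
import hashlib

class CountMinSketch:
    """Fixed-memory frequency estimator: counts may overestimate, never underestimate."""

    def __init__(self, width=1024, depth=4):
        self.width = width
        self.depth = depth
        # depth rows of width counters -- total memory is fixed up front.
        self.table = [[0] * width for _ in range(depth)]

    def _indexes(self, item):
        # Derive one independent-ish column index per row by salting the hash input.
        for row in range(self.depth):
            digest = hashlib.blake2b((str(row) + item).encode()).hexdigest()
            yield row, int(digest, 16) % self.width

    def add(self, item):
        for row, col in self._indexes(item):
            self.table[row][col] += 1

    def estimate(self, item):
        # Taking the minimum across rows discounts hash collisions in any single row.
        return min(self.table[row][col] for row, col in self._indexes(item))


sketch = CountMinSketch()
pattern = "SELECT * FROM users WHERE id = ?"
if sketch.estimate(pattern) == 0:
    print("new pattern, log it:", pattern)
sketch.add(pattern)
```

The "have we seen it" check falls out for free: an estimate of zero means the pattern has definitely never been added, so only genuinely new patterns trigger the logging path.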
Probabilistic data structures like bloom filters and sketches are really useful for gathering statistics on data sets that are too large to manage in full, or where an extra database trip would hurt performance. Internal application diagnostics, debugging, and logging are a great low-risk place to use these sorts of algorithms, even in systems that otherwise keep all business logic in the database.
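For the membership-only case (no counts needed), a bloom filter is even cheaper. A toy version, with my own names and sizing, just to show the trade-off: a fixed bit array, no false negatives, and a tunable false-positive rate instead of unbounded growth:

```python
import hashlib

class BloomFilter:
    """Fixed-size set membership: 'no' is definite, 'yes' may be a false positive."""

    def __init__(self, size_bits=8192, num_hashes=4):
        self.size_bits = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)  # memory fixed regardless of items added

    def _positions(self, item):
        # One bit position per hash function, derived by salting the input.
        for i in range(self.num_hashes):
            digest = hashlib.blake2b((str(i) + item).encode()).hexdigest()
            yield int(digest, 16) % self.size_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # All bits set -> possibly seen; any bit clear -> definitely never added.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

A `False` from `might_contain` is a guarantee, which is exactly the property you want for "only log queries we've never seen": you might occasionally skip logging a new pattern, but you never pay for a lookup round-trip to find out.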