
So interestingly... I went back to LinkedIn just a little while ago and there was a different message "mark as working" which I didn't see before.

For another provider, I was told I had to contact support as I had been "blacklisted" by Sendgrid and they had to issue a request to unblacklist me.


I watched the show but if I remember correctly they actually didn’t die. They survived.

> The reality is much more positive than the myth, with all three men escaping such a grisly fate. Indeed, Alexei Ananenko and Valeri Bespalov are believed to be both still alive as of 2024, while Boris Baranov lived until 2005 when he passed away from heart disease.

Source: https://www.history.co.uk/article/the-real-story-of-the-cher...


Same. What are the odds?

We live in an age where the commercialization/cheapening of sex is celebrated by society but the natural result of that commercialization/cheapening isn't wanted.

You can't have it both ways.

Our anthropology is confused and it shows.


Made it to 33… what fun!


When doing web development I will occasionally connect my local code base to a remote SQL server via SSH.

This adds enough latency to be noticeable and I’ve found pages that were “OK” in prod that were unbearable in my local environment. Most of the time it was N+1 queries. Sometimes it was a cache that wasn’t working as intended. Sometimes it simply was a feature that “looked cool” but offered no value.

I’m not sure if there is a proxy that would do this locally but I’ve found it invaluable.


I'm a big fan of Toxiproxy for these kinds of things:

https://github.com/Shopify/toxiproxy
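
A rough sketch of what that looks like for the database case (the port numbers and the "mysql" proxy name are just examples, and the exact flags are worth double-checking against toxiproxy-cli --help): with toxiproxy-server running, you create a proxy, attach a latency toxic to it, and point your app at the proxy's port instead of the real one.

    # proxy localhost:23306 -> the real database on localhost:3306
    toxiproxy-cli create -l localhost:23306 -u localhost:3306 mysql
    # add 500ms of latency to traffic flowing through the proxy
    toxiproxy-cli toxic add -t latency -a latency=500 mysql
    # tear the proxy down when done
    toxiproxy-cli delete mysql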


I do networked game development on Windows and I've found the clumsy program to be very valuable to simulate adverse network conditions. You can set it up to simulate arbitrary network latency, packet loss and so forth.

https://jagt.github.io/clumsy/


For those wondering how to achieve this on Linux: you can use the tc command (part of iproute2) with the netem queueing discipline. https://man7.org/linux/man-pages/man8/tc-netem.8.html
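
For example, assuming eth0 is the interface your database traffic leaves on:

    # add 200ms of delay with 50ms of jitter to everything leaving eth0
    sudo tc qdisc add dev eth0 root netem delay 200ms 50ms
    # or combine delay with 1% packet loss
    sudo tc qdisc change dev eth0 root netem delay 200ms 50ms loss 1%
    # remove the qdisc when you're done
    sudo tc qdisc del dev eth0 root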


I've used `tc` about three times in the last 15 years. Every time I have to relearn it.


+1 for clumsy. On Windows I've also used Heavy Load, NetLimiter for more fine-grained control, and Microsoft's Driver Verifier.


This looks interesting. I'll check it out, thank you!


I’m not sure if you’re saying the latency was introduced in the client <-> server hops or the server <-> db hops, but Chrome dev tools (and I’m sure other browsers too) can simulate different network conditions with a few clicks! That’s useful for something similar to what you’ve described, though in the end I think server <-> db latency is what you want to inject.


This was the first thing I thought of. Dev Tools in any browser have a boatload of great stuff like this.


> I’m not sure if there is a proxy that would do this locally but I’ve found it invaluable.

If you're on Linux, you can use iptables to randomly drop a fraction of packets to simulate bad connections - even for localhost. The TCP retransmits will induce a tunable latency. You have to be careful with this on a remote host or you may find yourself locked out, unless you can reboot out of band.
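
A minimal sketch using the statistic match (the 10% probability and the loopback interface are just example values):

    # drop roughly 10% of packets arriving on the loopback interface
    sudo iptables -A INPUT -i lo -m statistic --mode random --probability 0.10 -j DROP
    # delete the same rule when you're done
    sudo iptables -D INPUT -i lo -m statistic --mode random --probability 0.10 -j DROP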


On FreeBSD I have used https://man.freebsd.org/cgi/man.cgi?dummynet for this exact thing.
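
Roughly, it looks like this (the pipe number and port are examples; on FreeBSD dummynet is driven through ipfw):

    # create a pipe that adds 100ms of delay and 1% packet loss
    ipfw pipe 1 config delay 100ms plr 0.01
    # push traffic headed for the database port through that pipe
    ipfw add pipe 1 ip from any to any dst-port 5432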


macOS has an embedded feature that allows slowing down network calls, similar to the IP/netfilter suite Linux has. PF handles most of the magic behind the scenes.

Even better, there is a UI to configure it. You just need to download the Xcode additional tools package; there is a NetworkConditioner.prefpane inside. Install that and various settings will show up in the regular System Settings/Preferences...


I do this with a Makefile that calls “kubectl port-forward.”
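
Something like this (the service name, namespace, and ports are placeholders for whatever your cluster uses), wrapped in a Makefile target so starting the tunnel is one short command:

    # forward the cluster's Postgres service to localhost:5432
    kubectl port-forward svc/postgres -n my-namespace 5432:5432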


Sounds like he shot for the moon and missed.

I've been allowing LLMs to do more "background" work for me. Giving me some room to experiment with stuff so that I can come back in 10-15 minutes and see what it's done.

The key things I've come to are that it HAS to be fairly limited. Giving it a big task like refactoring a code base won't work. Giving it an example can help dramatically. If you haven't "trained" it by giving it context or adding your CLAUDE.md file, you'll end up finding it doing things you don't want it to do.

Another great task I've been giving it while I'm working on other things is generating docs for existing features and modules. It is surprisingly good at looking at events and following those events to see where they go, and generating diagrams and the like.


Used this Friday to have Claude do some stuff by telling it to read a Linear ticket and make appropriate changes. Not perfect but saved me 15 minutes.


If you like that workflow you might love the tool[0] which I built specifically to support it: CheepCode connects to Linear and works on tickets as they roll in, submitting PRs to GitHub.

[0] https://cheepcode.com


That idea sounds good, but...

1. Our tickets are sometimes "unqualified" and don't have enough information for a human to work on them (let alone an AI agent).

2. Tickets can be created by accident or due to human error, which would then result in time spent working on things that don't matter.

3. AI tends to write code that violates our own "unwritten rules," and we are still in the process of getting our rules written down so that our own agentic workflows work properly.

I could definitely see the value in this for certain types of updates, but unfortunately it wouldn't work for our system.


I wanted this workflow, but we absolutely share the same problems. However, it took only a few minutes on their site to see that the workflow involves explicitly assigning tickets to cheepcode in order for it to work on them.

That being said, I tried to sign up, and it broke horribly, and it looks like it was knocked out in 5 minutes, so my desire to give it access to my production codebase is fairly minimal.


I work for a company that offers nutrition tracking on an app in the App Store.

We are not shipping camera functionality yet, but our concept is less about guaranteeing the accuracy of portions and more about making lookup easier.

We also spent the time to get the AI integrated with a verified database. This made our results far more accurate.

We tended to find that without the lookup the calories and macros would be generally correct. The math was usually within a margin of error of 5%. This was acceptable except that… there were no micronutrient values and you couldn’t really adjust the portions at all. The system just dumps the macros, and while you can halve something… the user experience isn’t great.

Ultimately, if you want precision: manual entry is the only way to go. I feel like our approach will end up being great once we work out the kinks. Our search isn’t spectacular, and as a team we are learning a lot about prompt engineering and how to make the best use of the AI.


Yes, I would think that would work better indeed, as an augmentation or help tool. I would love to be able to say to MyFitnessPal that 'I ate this and that food, same as usual, and oh yeah drank this.' Just as an easier input interface. I wouldn't trust a pure AI solution without some proper database behind it.


Yeah. The big problem is that "augmentation" is hard because we (humans) have an internal process for how we think about things that is hard to define, and building a flowchart for how we understand foods doesn't necessarily capture things very well. You can take something like "chocolate chip pancakes" where the context can be "<brand> <item>" or "<modifier> <food item>". And then you can search.

But even though we've integrated it with a good food database, the process of searching isn't great because sometimes things like brand names don't get recognized and/or modifiers may get confused because... is it a brand? Is it a way of preparing something?

Ultimately we are working on improving how our search works by not just searching by the name, but by getting information about the brand, the product, and possible serving options as well. These would better inform the search and allow us to, say, fall back to searching without a brand if we can't find the brand.

The other problem has to do with variant detection. I can say "kirkland sous vide egg bites" but there are 3-4 variants of them. And right now most databases are just "here is the item you requested" without looking at possible variants, which is a problem that we are going to end up solving ourselves.

It's been interesting because we've learned a lot about how people "think" it should work vs. how it actually works.


Does that work for homemade food as well? The vast majority of the food we eat is homemade with recipes that don’t have any sort of nutritional information. I’ve always wished there was a simple way to figure out the calories. Taking a picture would be ideal.


For homemade food it should be easier to make reliable estimates of the calorie content, because you know with certainty all the food ingredients and their amounts.

The food ingredients with the highest calorie content, like various kinds of seeds or nuts or flour or meal or oil or fat or sugar or dried fruits or dairy products, usually come with calorie estimates from their vendors.

For other ingredients, like various kinds of meat or of fresh vegetables or fruits, there are online databases with typical nutritional information, like the USDA database. Some of that information can even be found in the corresponding Wikipedia pages.


Absolutely.

Weighing everything (rather than using volumetric measures) is generally going to be the BEST way to ensure consistency and accuracy.

What's also important is that, in general, even if you are 20% off on something (e.g. I logged 2200 calories but I actually consumed 2600 calories) AND you are planning to eat at a caloric deficit, this usually will mean that you will still lose weight or body recomp. It'll just take a little more time.

But if you are just not tracking, it's _so easy_ to miscalculate your intake to the point where you think "oh this isn't that bad." However, the truth is you consumed 4200 calories and that's a big surplus.

So I/we tend to find the value partially in "simple tracking" to make you aware of what you are actually consuming, and then find transitioning to specific portions helpful for "dialing in" and achieving specific targets/goals.


With my kids, I just burned them CDs and bought them $15 CD players. They use them a lot. This gives my wife and me ultimate control over what they play—plus we find CDs at estate sales that allow them to build a collection ~$1 at a time.

