
You're thinking of streams. The fragmentation discussed here is of packets.

Recent equipment might be sending 9000-byte packets. When you cross over into older networks, each one gets split into 6 or 7 packets, so the odd-sized last packet is a problem, but it's only about 15% of the traffic.

But going from older networks into quirky ones (wireless, modems, or ancient systems), your ~1500-byte packet gets split into two, like 1344 and 166, or 900 and 600.
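
To make the arithmetic concrete, here's a rough Python sketch (my own illustration, assuming plain IPv4 with 20-byte headers and fragment payloads rounded down to a multiple of 8; real splits also depend on IP options and the exact hop MTU):

    def fragments(total_len, mtu, ip_header=20):
        """Rough IPv4 fragmentation: each fragment repeats the IP header,
        and every fragment's payload except the last is a multiple of 8."""
        payload = total_len - ip_header
        per_frag = (mtu - ip_header) // 8 * 8   # payload that fits in one fragment
        sizes = []
        while payload > per_frag:
            sizes.append(per_frag + ip_header)
            payload -= per_frag
        sizes.append(payload + ip_header)
        return sizes

    print(fragments(9000, 1500))   # [1500]*6 + [120]  -> 7 packets, tiny last one
    print(fragments(1500, 1006))   # [1004, 516]       -> one ~1500-byte packet becomes two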



Even recent (and not-so-recent) equipment that supports jumbo frames doesn't have them enabled by default. Large-MTU hosts (or a specific interface with a large MTU) should sit on an isolated network where every host uses that same large MTU. Best practice is not to mix MTU sizes among hosts that need to talk to each other, and never to go over 1500 for hosts/interfaces that need to talk to the internet.
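
For what it's worth, a quick way to check what MTU an interface is actually using (a Linux-only sketch; the SIOCGIFMTU ioctl and ifreq layout are Linux-specific, and 'eth0' is just a placeholder name):

    import fcntl, socket, struct

    SIOCGIFMTU = 0x8921   # Linux ioctl: get interface MTU

    def get_mtu(ifname):
        """Read the MTU of a Linux interface, e.g. get_mtu('eth0')."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            # struct ifreq: 16-byte interface name followed by an int holding the MTU
            ifreq = struct.pack('16si', ifname.encode(), 0)
            res = fcntl.ioctl(s.fileno(), SIOCGIFMTU, ifreq)
            return struct.unpack('16si', res)[1]
        finally:
            s.close()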


Interesting. I haven't been involved in network architecture at the hardware level for a while, so I'm out of the loop.

Of course, this whole article is about the differences between what should be and what you encounter in the wild.

But if I have jumbo packets in my data center and a reverse proxy at the edge, then either an application layer is creating fragments, or I have another, lesser problem with packet sizes.

If I don't see fragmentation, I will instead see some jitter in packet transmission due to buffering. Many workloads wouldn't even notice, but some care. The proxy ends up holding onto the last partial packet until the next one is processed. If that packet has a message boundary in it, then the jitter turns into latency from the perspective of the destination system.
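
A toy version of that last point (Python; the 4-byte length prefix and the read_message name are made up for the example, not anything from the article): the proxy can't hand off a message until the whole thing is in its buffer, so a packet that ends mid-message just sits there until the next one arrives.

    import struct

    def read_message(sock, buf=b''):
        """Block until one complete length-prefixed message is buffered.

        Returns (message, leftover). While the buffer ends partway through
        a message, the partial bytes just sit here and the caller waits --
        that wait is the latency the destination ends up seeing.
        """
        # made-up framing for this example: 4-byte big-endian length prefix
        while len(buf) < 4:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError('peer closed mid-message')
            buf += chunk
        (length,) = struct.unpack('!I', buf[:4])
        while len(buf) < 4 + length:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError('peer closed mid-message')
            buf += chunk
        return buf[4:4 + length], buf[4 + length:]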




