
I really wish physical and OS network stacks gave you the ability to send uncorrected bit streams, so you could tune the error rate at the application level to whatever makes sense there. For example, with video streaming you probably don't need much error correction on the data stream, since periodic I-frames would correct any transient glitches (you'd only bother to EC the control headers for the video). Then Wi-Fi networks maybe wouldn't have to be as careful about time-multiplexing all the coex streams, and some noise due to conflicts would be fine and not require retransmission (because the application layer could handle it).

This does come with tradeoffs (e.g. it may take your application longer to recover from the noise than a quick retransmit at the physical layer would).

From a cost perspective it's also maybe impractical, because the computer industry gets its efficiency gains by solving a problem once for everyone at some quality threshold, giving up optimality for the applications that could do something better with the raw stream. Also, you would still need to correct the control layer of the network (IP + MAC) just to make it work at all, so it may be a wash (i.e. the incremental cost of correcting control + data rather than just control may be insignificant).

Still, at least having the option as a switch that could be flipped for experimentation would be quite neat; it would let the curious find new techniques and layers of abstraction beyond what's orthodoxy today.
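As a rough illustration of the "only EC the control headers" idea above, here's a minimal sketch, assuming a made-up frame layout and using a CRC-32 as a stand-in for real error correction: only a small header is integrity-checked, and payload bit errors are simply tolerated.

    import struct
    import zlib

    # Hypothetical frame layout: an 12-byte header (stream id, sequence number,
    # payload length) protected by a CRC-32, followed by an unprotected payload.
    HEADER_FMT = "!HIHI"   # stream_id, seq, length, header_crc
    HEADER_LEN = struct.calcsize(HEADER_FMT)

    def pack_frame(stream_id: int, seq: int, payload: bytes) -> bytes:
        partial = struct.pack("!HIH", stream_id, seq, len(payload))
        crc = zlib.crc32(partial)
        return partial + struct.pack("!I", crc) + payload

    def unpack_frame(frame: bytes):
        """Return (stream_id, seq, payload) or None if the *header* is corrupt.

        Payload bit flips are deliberately accepted; a video decoder, say,
        would paper over them until the next I-frame.
        """
        if len(frame) < HEADER_LEN:
            return None
        stream_id, seq, length, crc = struct.unpack(HEADER_FMT, frame[:HEADER_LEN])
        if zlib.crc32(frame[:HEADER_LEN - 4]) != crc:
            return None   # control data damaged: drop the whole frame
        return stream_id, seq, frame[HEADER_LEN:HEADER_LEN + length]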



You can do it in Linux, with drivers that implement packet injection. You're giving up a lot, though. I think you're right about it being a wash with regard to control overhead. In particular, because of the data ACKs, the firmware can aggressively ramp the 802.11 frame data rate up and down, and thanks to packet aggregation the cost of the frame preamble gets amortized across all the buffered-up data. Letting each application schedule its own transmissions would quickly eat up the available channel bandwidth in frame overhead. It works for a single specialized application, but it monopolizes the entire shared channel.

Folks do that sort of thing when transmitting low-latency first-person-view video from drones, using commodity Wi-Fi hardware.
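Roughly what the injection side of that looks like on Linux with Scapy, as a sketch: it assumes a monitor-mode interface (wlan0mon is a placeholder), made-up MAC addresses, root privileges, and a driver/firmware combination that actually supports injection.

    from scapy.all import RadioTap, Dot11, Raw, sendp

    # Hypothetical one-way, connectionless data frame, wifibroadcast-style:
    # no association, no ACKs, no retransmits -- the receiver just sniffs in
    # monitor mode and the application deals with whatever arrives.
    def inject(payload: bytes, iface: str = "wlan0mon"):
        frame = (
            RadioTap()                                   # radiotap header for injection
            / Dot11(type=2, subtype=0,                   # 802.11 data frame
                    addr1="ff:ff:ff:ff:ff:ff",           # broadcast, so nobody ACKs it
                    addr2="02:00:00:00:00:01",           # locally administered source
                    addr3="02:00:00:00:00:01")
            / Raw(payload)
        )
        sendp(frame, iface=iface, verbose=False)         # needs root

    if __name__ == "__main__":
        inject(b"hello, unreliable world")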


CNLohr on YouTube did this to run an ESP8266 at absurdly low power, waking up only every couple of minutes to transmit an 802.11 frame without a connection handshake. https://youtu.be/I3lJWcRSlUA


This is already done by many audio/video chat compression algorithms which use UDP.

The idea being that with something like voice or video, if a packet doesn’t arrive, instead of stalling everything while you wait for a retransmit, you instead just carry on regardless.

The occasional missing packet in voice is almost imperceptible; however, the more packets that get lost, the more the voice seems to “break up”.

With video the symptoms are an accumulation of visual corruption until the next key frame.

FPS games also do this (or at least used to): servers would send a stream of UDP packets of entity states (as opposed to sending deltas of state changes), so things like "player1 is at x,y,z with velocity a and heading vector b".

If clients missed a packet they would just wait for the next one, since it’s useless to learn later where someone else was; all you actually care about is where they are, and what’s happening, now.
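A minimal sketch of that "only the newest state matters" pattern on the client side; the packet format, port, and field layout are invented for illustration.

    import socket
    import struct

    # Hypothetical snapshot packet: sequence number + one entity's position/velocity.
    PACKET_FMT = "!I3f3f"          # seq, x, y, z, vx, vy, vz
    PACKET_LEN = struct.calcsize(PACKET_FMT)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 27015))  # placeholder port

    latest_seq = -1
    while True:
        data, _addr = sock.recvfrom(2048)
        if len(data) < PACKET_LEN:
            continue
        seq, x, y, z, vx, vy, vz = struct.unpack(PACKET_FMT, data[:PACKET_LEN])
        if seq <= latest_seq:
            continue               # stale or duplicate snapshot: where they *were* is useless
        latest_seq = seq           # gaps in seq are simply ignored -- no retransmit, no stall
        print(f"entity at ({x:.1f}, {y:.1f}, {z:.1f}), velocity ({vx:.1f}, {vy:.1f}, {vz:.1f})")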


I think the ask is for something like letting a bit flip inside the UDP payload, or letting a byte drop out of the UDP packet. The integrity of UDP packets is still checked (at several different layers).

For most networks the error rate is so low that I don't think it's that valuable. Also, you'd still want your metadata checked, otherwise packets could get delivered to the wrong place.
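For what it's worth, Linux already exposes something in this direction via UDP-Lite (RFC 3828), which checksums only the first N bytes of a datagram, so the metadata stays protected while payload bit flips get delivered instead of dropped. A hedged sketch: the address, port, and coverage value are arbitrary, and lower layers (Ethernet FCS, Wi-Fi CRC) will usually still discard damaged frames before you ever see them.

    import socket

    # Linux UDP-Lite constants; not all Python builds expose these by name.
    IPPROTO_UDPLITE    = 136
    UDPLITE_SEND_CSCOV = 10   # checksum coverage for sent datagrams
    UDPLITE_RECV_CSCOV = 11   # minimum coverage accepted on receive

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, IPPROTO_UDPLITE)
    # Cover only the 8-byte UDP-Lite header plus 4 bytes of our own application
    # header; bit errors beyond that are delivered rather than dropped.
    sock.setsockopt(IPPROTO_UDPLITE, UDPLITE_SEND_CSCOV, 12)
    sock.setsockopt(IPPROTO_UDPLITE, UDPLITE_RECV_CSCOV, 12)
    sock.sendto(b"\x00\x00\x00\x01" + b"payload that may arrive damaged",
                ("127.0.0.1", 5005))  # placeholder address/port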


That's what I assumed they meant, but there are so many layers of correction/protection going on, and most of it is non-optional if you want a working system. For example, if you disabled the FEC that's used over the high-speed SerDes links within a core router you'd be left with a broken system; the error rate is much too high. In the designs I've worked on you can't disable the internal CRC/ECC on the data buses without risking corrupting the control data, which won't end well. Nobody provides separate data and control ECC protection; that would be pointless overhead.

I guess they probably meant disabling the Ethernet FCS check. That might still work, but I think it's a very bad plan, and I doubt the option is even exposed to network operators.


Most engines I've seen will delta all the entity states against the client's last acknowledged state. Costs some memory and computation on both sides to keep the deltas valid, but keeps the state update to under an MTU, generally.
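A rough sketch of that baseline/delta bookkeeping; the data structures and function names are invented for illustration.

    from typing import Dict, Tuple

    # Hypothetical server-side bookkeeping: keep recent snapshots keyed by tick,
    # and delta the current world state against the client's last-acked tick.
    EntityState = Tuple[float, float, float]          # position only, for brevity
    Snapshot = Dict[int, EntityState]                 # entity id -> state

    snapshots: Dict[int, Snapshot] = {}               # tick -> snapshot (pruned as acks arrive)

    def make_delta(current: Snapshot, tick: int, last_acked_tick: int) -> dict:
        snapshots[tick] = dict(current)               # remember what we sent, for future deltas
        baseline = snapshots.get(last_acked_tick, {}) # unknown baseline -> full snapshot
        changed = {eid: st for eid, st in current.items() if baseline.get(eid) != st}
        removed = [eid for eid in baseline if eid not in current]
        return {"baseline": last_acked_tick, "tick": tick,
                "changed": changed, "removed": removed}

    def apply_delta(client_state: Snapshot, delta: dict) -> Snapshot:
        new_state = dict(client_state)                # client must hold the acked baseline
        new_state.update(delta["changed"])
        for eid in delta["removed"]:
            new_state.pop(eid, None)
        return new_state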



