Hacker News

one does not typically find that applications are limited by user-kernel context switches.

the OS isn't the bottleneck.

Curious, then, why are we seeing articles here all the time about bypassing the Linux kernel for low-latency networking?



You don't bypass the kernel; you bypass the kernel's TCP stack, and that's for very specific applications.


Bypassing the kernel entirely is pretty normal in HPC applications. InfiniBand implementations typically memory-map the device's registers into user space so that applications can send and receive messages without a system call.
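The trick being described is that the kernel is only involved once, to set up the mapping; after that, every access is an ordinary load or store. A real InfiniBand data path goes through a verbs library (e.g. libibverbs) mapping the HCA's doorbell pages, but the principle can be sketched with a plain file standing in for the register window (the file here is a scratch temp file, purely illustrative):

```python
import mmap
import os
import tempfile

# A scratch file stands in for a device's register window.
# (Real verbs libraries mmap the NIC's doorbell pages instead.)
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)

# One syscall to establish the mapping...
buf = mmap.mmap(fd, 4096)

# ...after which every access is a plain memory access, no syscalls.
buf[0:4] = b"PING"            # "ring the doorbell"
doorbell = bytes(buf[0:4])    # read back directly through the mapping

buf.close()
os.close(fd)
os.unlink(path)
```

Per-message sends and receives then cost no kernel transitions at all; the kernel only reappears for setup, teardown, and faults.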


This is not bypassing the kernel. This is called Remote Direct Memory Access (RDMA), and there is still a kernel; it just gets out of the data path after setup.

FYI, most of the devices inside your computer work through DMA.


One in particular is network capturing via libpcap.

It's basically an alternative driver that comes with additional capabilities, such as promiscuous capture and filtering of captured packets.


Just curious, how does a unikernel solve network latency?


The issue GP talks about comes from the cost of context-switching on a syscall (going into "kernel mode", performing the call, then going back into "application mode"). There's no such context switch in a unikernel, because the application and the kernel are linked into a single address space at a single privilege level.
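The cost being discussed is easy to observe: compare a loop that enters the kernel on every iteration against one that stays in user space. A rough, machine-dependent sketch (os.getpid() is a real syscall on Linux; absolute numbers will vary and Python's own call overhead inflates both sides):

```python
import os
import time

def plain_call():
    # Stays entirely in user space.
    return 42

N = 100_000

t0 = time.perf_counter()
for _ in range(N):
    os.getpid()          # enters the kernel on each iteration
t_syscall = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    plain_call()         # no kernel transition
t_user = time.perf_counter() - t0

print(f"syscall loop: {t_syscall:.4f}s, user-space loop: {t_user:.4f}s")
```

In a unikernel, what would be a syscall becomes an ordinary function call into the library OS, i.e. the left-hand loop gets the cost profile of the right-hand one.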


Well, unless you count the hypervisor context switch, which you do.


And if you need super high performance, you can run a unikernel on bare metal.


I guess in the glorious future everyone will be using SR-IOV.


Are you assuming SR-IOV passthrough (which has its own performance profile)? Because normal virtualization definitely hits a context switch when it goes from the unikernel's virtual NIC to the real NIC, if not twice.



