Hacker News





> So when a little-endian system needs to inspect or modify a network packet, it has to swap the big-endian values to little-endian and back, a process that can take as many as 10-20 instructions on a RISC-V target which doesn’t implement the Zbb extension.

See, this justification doesn't make any sense to me. The motivation is that it makes high-performance network routing faster, but only in situations where a) you don't implement Zbb (which is a real no-brainer extension to implement), and b) you don't do the packet processing in hardware.

I'm happy to be proven wrong but that sounds like an illogical design space. If you're willing to design a custom chip that supports big endian for your network appliance (because none of the COTS chips do) then why would you not be willing to add a custom peripheral or even custom instructions for packet processing?

Half the point of RISC-V is that it's customisable for niche applications, yet this one niche application somehow was allowed and now it forces all spec writers and reference model authors to think about how things will work with big endian. And it uses up 3 precious bits in mstatus.

I guess it may be too big a breaking change to say "actually no", even though nobody has ever actually manufactured a big-endian RISC-V chip, so I'm not super seriously suggesting it be removed.

Perhaps we can all take a solemn vow to never implement it and then it will be de facto removed.



