
My first machine was a 286, so they have a special place in my heart. It should be pointed out that the clock speed of an ATmega328P (original Arduino / Arduino Nano) is nearly identical to that of an 80286. Two of them could easily outcompute a 286! You can buy them for ~$3 at the moment...

I've put plenty of AVRs on ethernet - and once, a 386, using an ancient DOS TCP/IP stack, but never a 286. Linux doesn't support it due to requiring an FPU.

It'd be fun to design the electronics for an old-style computer, but IMHO it'd be far more amusing to do it from scratch with an original architecture than try recreating a third party motherboard using a similar layout scheme. Something with parallelism and auto-scaling would be quaint.



It's the lack of a "modern" paged memory management unit that's the main issue for mainline Linux on a 286 AFAIK. Nobody wants to deal with x86 segmented memory, having a nice flat addressing paged mode is one of the big wins of the 386. Although I believe Linux dropped support for 386 (and 486?) a while back so they could remove some special cases for such old chips.


Yes, paging is what is necessary to run Linux. The main problem it solves is how to allocate physical memory without having to constantly move variably-sized segments around to get rid of fragmentation.

But I'm kind of sad that all OSes converged on a flat address space for the sake of ~PoRtAbIlItY~. Monocultures are bad, and we currently have one, where everything is basically a UNIX clone written in C.

32-bit x86 allowed using both segments (of any length up to 4 GiB) and paging at the same time. Before the NX bit was introduced, this was the way to get write-xor-execute permissions. And some security features of segmentation can't be replicated at all in a flat address space: with a separate stack segment (or one with the same flat base address but a different limit), it is impossible to add the wrong offset to a stack pointer and get access to data that lies outside of it.

IMHO, the iAPX 432 had some good ideas, and x86 should have evolved in that direction, adding a few more "extra" segment registers which code can use freely as pointers to isolated objects. Each would have two "limit" fields (for negative and positive offsets from the base), with one of these spaces used for storing pointers to other objects, which can only be manipulated through CPU microcode that guarantees memory safety.

Instead they eliminated segments completely in 64-bit mode, except for FS/GS serving as an extra offset that gets added with no limit checking whatsoever.


I do agree with what you say and also have nice memories of 8086 segmentation. I find it funny that we are continuously forced to add workarounds on top of workarounds to the flat address space in order to avoid accidental memory errors. Segmentation had all that decades ago, and easier debuggability to begin with. But it is clear to me that we are moving towards something like that again, even if enforced at a different level.


I did not mind programming the 16-bit x86 model in assembly. You could do a lot of things such as use segment register pointers (16 byte aligned) for "large" data structures which could themselves be addressed with ordinary 16 bit pointers and chained together to make "extra-large" structures.

Compilers like Turbo Pascal and Microsoft C all gave you a choice of which memory model you wanted to use, often you wrote programs where a 64k code and 64k data space is all you need.


Turbo Pascal actually didn't: IIRC versions 3.x and lower used the "small" model (they were also available for CP/M on Z80, where 64K was the entire address space anyway), then newer ones exclusively the "large" one.

TP forcing use of large pointers for everything, and its lack of even trivial peephole optimizations, probably contributed to the myth that C was somehow inherently more efficient.


You misremembered.

[BT]P7 includes for compatibility: tiny (.COM), small, compact, medium, large, and large w/ overlays (which can use EMS when available). There are DPMI clients for [BT]P7 that make it possible to switch to protected mode and use more memory.

It definitely generated offset-only pointers in the tiny, small, and medium memory models because that's how they referred to data with the common DS segment. I did plenty of memory and instruction-level debugging in [BT] Profiler/Debugger and inspected plenty of pointers where they lived in RAM.


What command line option or compiler directive can I use to create a .COM program? How do I declare a near pointer variable?

http://bitsavers.trailing-edge.com/pdf/borland/turbo_pascal/...

Page 221 (232 in the PDF): "A Pointer type is stored as two words"


Correction: Protected user-supervisor mode separation, and virtual memory with page faulting are what are essential. NX helps enforce W^X but it's not a deal-breaker functionally even if it's very useful.

Who cares about address space organization? (k|)ASLR, PIC & PIE mean this is a detail not worth your time. This kind of standardization is vital for security, debugging, and profiling.

The 386 has an LDT that generally isn't used, and there's only one TSS that is updated rather than multiple ones. FS and GS tend to be used for thread-local storage.


Minix ran on 8086/8088 computers; I know from firsthand experience.


It did, but it was honestly quite limited in 16-bit mode, not only due to the (intentionally) limited scope of the OS but because the 16-bit addressing only allowed for a 64k code segment and a 64k data segment per process.


UNIX, designed and developed on the PDP-11, had similar per-process instruction/data space limitations until V6 and V7 were ported to 32-bit minis ca. 1977 and 1979, and was further constrained by the PDP-11's limited physical address space: 18-bit 1970–75, 22-bit 1975–, so a quarter of the 8086's and 80286's 20- and 24-bit address spaces, respectively.

As an aside, this reminds me of an amusing early example of a rough-and-ready configure-style script[1] included in the BSD source for the compress(1) utility, used to limit the maximum supported LZW code length based on an estimate of memory that will be available to the process[2].

[1] https://github.com/dspinellis/unix-history-repo/blob/BSD-4_3...

[2] https://github.com/dspinellis/unix-history-repo/blob/BSD-4_3...


286 protected mode supports paging, but it can't revert to real mode and it only supports 16 MiB of RAM. And 286 PE is really buggy and limited. It's pretty crap.

386 support was dropped, 486 support has not yet been dropped as of writing.


286 doesn't support paging, only segmentation. I have read the manuals extensively, and also discovered what may be the last undocumented feature, almost 40 years after the chip was released. [https://rep-lodsb.mataroa.blog/blog/intel-286-secrets-ice-mo...]

Not saying this to brag, just to establish that I probably know more about that chip than the average poster. And you're also posting plainly wrong claims elsewhere in this thread. I don't get why you're doing this?


> Linux doesn't support it due to requiring an FPU.

I have used Linux on a 386SX which did not have any FPU. Linux had x87 floating point emulation for a long time (AFAIK, that emulation has since been removed, together with the rest of the support for very old CPUs).

The main reasons Linux doesn't support the 80286 are AFAIK that the 80286 is a 16-bit CPU, and Linux (other than the short-lived ELKS fork) has never supported 16-bit CPUs; and that the 80286 doesn't have paging (it has only segmentation), which would require running it as a "no-MMU" CPU.


They meant MMU.


> I've put plenty of AVRs on ethernet - and once, a 386, using an ancient DOS TCP/IP stack, but never a 286. Linux doesn't support it due to requiring an FPU.

mTCP (http://brutmanlabs.org/mTCP/) is a currently-maintained TCP/IP stack for MS-DOS that will run on any PC, 8088 and up. You also need a packet driver for your NIC. 286 machines are easy; nearly any 16-bit ISA NIC will work. XT-class machines are slightly trickier because not all NICs will work in 8-bit ISA slots and some packet drivers use 186/286 instructions.



> Linux doesn't support it due to requiring an FPU.

This is incorrect.

FPU means "floating point unit". Linux does not need this. Few OSes do at all.

You meant "MMU", meaning memory management unit. This is what Linux uses to perform virtual memory handling, and that is why Linux can't run on an 80286: the '286 had no MMU (or FPU).

The difference being that 99% of computers without an FPU could just have one added. This was never true of an MMU.

Indeed some early Unix machines based on CPUs with no MMU used an entire 2nd CPU just for MMU duties.


Incorrect. Clock speed does not correlate 1:1 with cycle efficiency. There are plenty of "slow" single-cycle architectures with very good cycle efficiency, and there are many architectures, like the older P4, that have very poor cycle efficiency from chasing the MHz wars.


Linux runs fine without an FPU; it even has software emulation for one.

It’s the hardware task switching features that the 386 introduced that it requires (and the reason why Linus got one in the first place back in the day)


Hardware task switching was introduced on the 80286. And no modern OS - including Linux, except for the earliest versions - uses it, because the way it is implemented, the mechanism will do a lot of unnecessary duplicate work (preserving "garbage" register contents from just before the kernel is about to return to user mode, and will consequently have to restore them itself).

The only situation where it would be required to use this misfeature is for handling exceptions like stack overflow in the case that they can occur in kernel mode. A "Task State Segment" is really more like a pre-allocated stack frame anyway, the CPU puts active ones in a linked list and does not allow freely switching between them, only returning to the outer level.

x86-64 does not have hardware task switching anymore, instead it is now possible to switch to another stack on an exception or interrupt, which is all that the TSS was used for in practice anyway.


Linux started on 80386.


Where am I disagreeing with that? Hardware task switching already existed on the 16-bit 80286, and remained supported on newer generations until being finally removed in x86-64. Early versions of Linux did use it for a time, but doing it in software turns out to be faster, because the mechanism was badly designed for what a general-purpose kernel has to do.


Sorry, I see now that your comment could be read both ways, and my misinterpretation was actually a bit far-fetched.


> It’s the hardware task switching features that the 386 introduced that it requires

Nope.

8086 and 80286 multitasked fine.

It was memory management hardware that came in with the 80386. You need that for virtual memory.



