How do you know GPT-5 does not call a Python interpreter remotely on OpenAI's servers when you ask it to do arithmetic? Your prompt goes to their servers; you have no way to know what happens there.
The only way to be sure a model calls no tool is to run it locally and control the network.
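As a toy illustration of the "control the network" point: within a single Python process you can make socket creation fail before loading a local model, so any attempted tool or network call from Python code errors loudly. All names here are made up for the sketch, and this only guards Python-level code; a real guarantee needs OS-level isolation (a firewall rule or a network namespace), since native code can open sockets directly.

```python
import socket

class NetworkBlocked(RuntimeError):
    pass

def _deny(*args, **kwargs):
    raise NetworkBlocked("outbound network access denied")

# Monkey-patch socket creation before loading the local model, so any
# attempted network call from Python code fails loudly instead of silently
# reaching a remote server.
socket.socket = _deny
socket.create_connection = _deny

try:
    socket.create_connection(("example.com", 80))
except NetworkBlocked as exc:
    print("blocked:", exc)
```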
I remember that after I read the first edition I bought MINIX ($150!!), and was then very annoyed to find that the compiler source was not included. Luckily it was '89 or '90 and the GCC sources were available.
The surprise comes when you try to compile the minimal book version and find out that it is not as lean as presented in the book but actually depends on hundreds of assembler files (see https://github.com/rochus-keller/Minix3/tree/Minix3_Book_TCC).
I’m a tad confused so maybe I’m not understanding the horror show.
Tanenbaum explicitly mentions multiple times that the book is a subset of the code because it would be too long to print with the library. So he covers mostly the main areas.
But the source code, in its entirety, is mounted under /usr/src. And it has all the assembly files in ACK syntax, mostly under lib, I believe. You can compile it with a make command and it works as expected.
The author makes it seem like there's some terrible thing. Am I missing some gory directory? Yes, the ACK syntax would need to be ported to something more modern like NASM or FASM if someone wants to move the whole kitchen sink, with new linker scripts written for the exported symbols, etc. It is painful, but alas, so is the archaic K&R C.
I don’t know if that’s necessary though? It sounds like a waste of time to begin with.
I mean this book is ancient, and nobody really uses 32-bit protected mode. I’m mostly doing it out of curiosity even though I already stood up a small 64-bit long mode thinger.
The author writes in the book explicitly: "This is a modified version of config.h for compiling a small Minix system with only the options described in the text". This leaves no doubt that the book indeed describes a working microkernel of less than 15kSLOC which can be built and run (even if the "small Minix" lacks a few features). I believed the author (like generations of other scholars) until I tried to actually build and run it myself.
> Converting between ACK and GCC assembler is a solved problem.
I assume you mean because the assembler was manually migrated in later Minix versions, not because there is a tool which can do so automatically. Or did I miss this?
> and a lot of interesting stuff was left out
Can you please make examples what you mean specifically?
Yes, there is a tool called asmconv that does the conversion.
One example is the new compiler driver, which can invoke both ACK and GCC, automatically convert assembler, and figure out which archiver to use to create libraries.
Another example is library support for filenames longer than 14 characters that was completely transparent. MINIX3 just broke all backward compatibility by increasing the size of directory entries to a new fixed size.
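The incompatibility is easy to see in the on-disk directory format. A rough sketch in Python, assuming the classic MINIX entry is 16 bytes (a 2-byte inode number plus a 14-byte NUL-padded name) and the MINIX3 V3 filesystem entry is 64 bytes (4-byte inode plus 60-byte name); the exact field packing here is my own illustration, not copied from the sources:

```python
import struct

# Classic MINIX directory entry: 2-byte inode number + 14-byte name.
OLD_DIRENT = struct.Struct("<H14s")   # 16 bytes total
# MINIX3 V3 filesystem entry: 4-byte inode number + 60-byte name.
NEW_DIRENT = struct.Struct("<I60s")   # 64 bytes total

old = OLD_DIRENT.pack(42, b"README".ljust(14, b"\0"))
new = NEW_DIRENT.pack(42, b"a_much_longer_filename.txt".ljust(60, b"\0"))
print(len(old), len(new))  # 16 64
```

Any tool that walks directories byte-by-byte with the old 16-byte stride reads garbage on the new layout, which is why the change breaks backward compatibility.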
I'm sure there is more stuff, these are just a few I remember.
Nobody should have any illusions about the purpose of most businesses: making money. "Safety" is a nice-to-have if it does not diminish the profits of the business. This is the cold hard truth.
If you start to look through the lens of business == money-making machine, you can start to think about rational regulations to curb this in order to protect regular people. The regulations should keep businesses in check while allowing them to make reasonable profits.
It wasn't long ago that they were a non-profit. This sudden change to a for-profit business structure, complete with a "businesses exist to make money" defence, is giving me whiplash.
I find the whole thing pretty depressing. They went to all that effort with the organization and setup of the company at the beginning to try to bake this "good for humanity" stuff into its DNA and legal structure and it all completely evaporated once they struck gold with ChatGPT. Time and time again we see noble intentions being completely destroyed by the pressures and powers of capitalism.
Really wish the board had held the line on firing sama.
> Time and time again we see noble intentions being completely destroyed by the pressures and powers of capitalism.
It is not capitalism, it is human nature. Look at the social stratification that inevitably appeared every time communism was tried. If you ignore human nature you will always be disappointed. We need to work with the reality we have on the ground, not with an ideal new human who will flourish in a make-believe society.
You got me wrong; I did not defend OpenAI. The 180 they did from non-profit to for-profit was disgusting from a moral point of view. What I was describing is how most businesses operate, and how to look at them and not be disappointed.
"Safety" was just a mechanism for complete control of the best LLM available.
When an AI provider said it did not trust a competitor to deliver "AGI" safely, what it really meant was that it did not want that competitor to own the definition of "AGI", which means IPOing first.
Using local models from China that are on par with the US ones takes away that control, and this is why Anthropic has no open-weight models at all and why its CEO continues to spread fear about open-weight models.
This is more Altman-speak. Before it was about how AI was going to end the world. That started backfiring, so now we're talking about political power. That power, however, ultimately flows from the wealth AI generates.
It's about the money. They're for-profit corporations.
You get it. To everyone who thinks AI is a money furnace: they don't understand that the output of the furnace is power, and they are happy with the conversion even if the markets aren't.
Got the same message in Chrome with uBlock Origin Lite when I clicked on a YouTube link.
Even worse, sometimes when I'm in an Incognito window, or after I've erased my browser history, Google asks me to solve a captcha, which made me switch to DuckDuckGo for all my searches.
The AI learned nothing; once its current context window is exhausted, it may repeat the same tactic with a different project. Unless the AI agent can edit its directives/prompt and restart itself, which would be an interesting experiment to run.
These things don't work within a single session or context window. They write content to files and then load it up later, broadly in the class of "memory" features.
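A minimal sketch of that pattern; the file name and format are made up, but the idea is just "append notes to disk, reload them on the next session":

```python
import json
import os

MEMORY_FILE = "agent_memory.json"  # hypothetical path

def load_memory():
    """Reload notes written by previous sessions, surviving context resets."""
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            return json.load(f)
    return []

def remember(note):
    """Persist a note so a future session (fresh context window) sees it."""
    notes = load_memory()
    notes.append(note)
    with open(MEMORY_FILE, "w") as f:
        json.dump(notes, f)

remember("project X: deadline tactic failed, do not retry")
print(load_memory())
```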
I hope they don't. These are large language models, not true intelligence; rewriting a soul.md is more likely to just cause these things to go off the rails more than they already do.
I doubt it will be enforced at scale. But if someone with power has a beef with you, they can use an agent to dig up dirt on you and then sue you for whatever reason, like copyright violation.
The first programming language one learns is really not important, most skills are transferable. Racket is an excellent programming language for beginners (this book however is not a good book for a complete beginner because it goes really fast over the Racket introduction).
This is an exaggeration: if you store a prompt that was "compiled" by today's LLMs, there is no guarantee that four months from now you will be able to replicate the same result.
I can take some C or Fortran code from 10 years ago, build it and get identical results.
That is a wobbly assertion. You would certainly need to run the same compiler and forgo any recent optimisations, architecture updates, and the like if your code has numerically sensitive parts.
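One concrete source of that numerical sensitivity: floating-point addition is not associative, so anything that reorders sums (a different optimization level, vectorization, a new math library) can change the last bits of the result. A minimal demonstration, using Python's IEEE-754 doubles:

```python
# Floating-point addition is not associative: the two groupings below are
# mathematically equal but round differently.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # one evaluation order
right = a + (b + c)   # another order a compiler might legally choose
print(left == right)  # False
print(left, right)    # 0.6000000000000001 0.6
```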
You certainly can get identical results, but it's equally certainly not going to be that simple a path frequently.
> You certainly can get identical results, but it's equally certainly not going to be that simple a path frequently.
But at least I know that if I need to, I can do it. With an LLM, if you don't store the original weights, all bets are off. Reproducibility of results can be a hard requirement in certain cases or industries.
The more important point is that even when you don't get identical binary output, you still get identical observable behavior as specified by the programming language, unless there's a compiler bug. That's not the case for LLMs; they are more like an always randomly buggy compiler. You wouldn't want to use such a compiler.
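A toy model of that "randomly buggy compiler": with temperature-style sampling, the same prompt need not yield the same output twice. The vocabulary and probabilities below are entirely made up for illustration; the point is only that the output is reproducible just when you pin the random seed, which is exactly what you cannot do once the original weights are gone:

```python
import random

def sample_completion(seed=None):
    """Pick 5 'tokens' by weighted sampling, like temperature > 0 decoding."""
    rng = random.Random(seed)
    vocab = ["for", "while", "if", "return"]
    weights = [0.4, 0.3, 0.2, 0.1]  # made-up token probabilities
    return [rng.choices(vocab, weights)[0] for _ in range(5)]

print(sample_completion())          # may differ from run to run
print(sample_completion(seed=42))   # reproducible only with a pinned seed
```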