namuol's comments | Hacker News

> In both cases they imply training models is happening when that's not been confirmed.

It is safest to assume that your photos are being used for training.


Show your math.


Besides common sense, I can tell you about my kWh meter going brrr when playing games (400 W continuously, sometimes for hours on end) vs. running Stable Diffusion or Llama-whatever (400 W for 15 seconds every 3 minutes, for an hour or two). Extrapolate from that.


Training runs basically 24/7 at full throttle. Inference isn’t the main source of energy consumption.


Show your math.


Good docs don’t fix bad apps or APIs though. I get the sense that demand for docs is a signal that there’s a deeper problem with DX. Good docs generally only exist in places where they’ve given the rest of the DX enough love in the first place, so it’s more of a mark of quality than a means to quality.


Yes, and creating documentation is an exercise in understanding the whole experience. Often, nobody on the team truly gets it.


Interesting tech but there’s zero explanation of the actual application, so it’s all a little abstract.


A bit light on detail in the otherwise great write-up! I'm curious too.


Agreed. Came to the comments thinking the same thing.


The code example for the Vision framework and the links in the "Software Resources" section are enough, I guess; you can feed them to an LLM to get a full application if you're too lazy to figure it out yourself.


I meant what is the _author’s specific_ application.


Are arms dealers immune from responsibility, in your view?


Disagree. Poor engineers will go in circles with AI because they will under-specify their tasks and fail to recognize improper solutions. Ultimately, if you're not thoughtful about your problem and critical of the solution, you will fail. This is true with or without AI at the wheel.


> Poor engineers will go in circles with AI

But they move quickly around that circle making them feel much more productive. And if you don't need something outside of the circle it is good enough.


It's not an opinion, it's what research has shown many times. For example, they can ask the LLM about how to get started or what an experienced engineer might do, etc, as a research tool before writing code.


> If I were interviewing a candidate now, the first things I'd ask them to explain would be the fundamentals of how the Model Context Protocol works and how to build an agent.

Um, what? This is the sort of knowledge you definitely do not need in your back pocket. It’s literally the perfect kind of question for AI to answer. Also this is such a moving target that I suspect most hiring processes change at a slower pace.


I would just give some sort of LLM-esque answer that sounds correct but is very wrong, and hope I would get the opportunity to follow that up with: "Oh, I must have hallucinated that, can you give me a better prompt?"

I crack myself up.


Don’t forget to apologize and acknowledge that they are right.


You're right to ask about that! ...


Having worked with and around a lot of different people and places in IT as an instructor, frankly the funniest thing I've observed is that everyone in IT believes there is some baseline concept or acronym or something that "ought to be obvious and well known to EVERYONE."

And it never is. There's just about nothing that fits this criterion.


Hashtable.

Explain how it works and what you use it for.

If you don’t know this, you’re not a programmer in any language, platform, framework, front end or back end.

It’s my go-to interview question.

Tell me what’s wrong with that.


What's wrong with that is that a lot of languages don't call them hashtables. I don't think I've used the actual term since like 2016.


Sure, but if you've never even heard the term hash table, and you don't know that Dictionary<K,V> or your language's "associative array" or whatever uses hash tables under the hood... you're not a programmer in my mind.

It's such a foundational thing in all of modern programming, that I just can't imagine someone being "skilled" and not knowing even this much. Even scripting languages use hash tables, as do all interpreted languages such as Python and JavaScript.
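Since the thread is about floor-level knowledge, here is roughly what "knowing how a hash table works" means, sketched in Go. This is a toy for illustration (all names are mine, and it is not how Go's built-in map is actually implemented — real tables add resizing, better hashing, and smarter collision handling):

```go
package main

import "fmt"

// A toy hash table: hash the key, pick a bucket, scan that bucket's
// entries. Collisions are handled by chaining within each bucket.
type entry struct {
	key   string
	value int
}

type toyMap struct {
	buckets [8][]entry // fixed bucket count; real maps grow
}

// hash maps a string key to a bucket index.
func (m *toyMap) hash(key string) int {
	h := 0
	for _, c := range key {
		h = h*31 + int(c) // simple polynomial string hash
	}
	if h < 0 {
		h = -h
	}
	return h % len(m.buckets)
}

// Set inserts or overwrites a key.
func (m *toyMap) Set(key string, value int) {
	b := m.hash(key)
	for i, e := range m.buckets[b] {
		if e.key == key {
			m.buckets[b][i].value = value // key exists: overwrite
			return
		}
	}
	m.buckets[b] = append(m.buckets[b], entry{key, value})
}

// Get looks a key up, reporting whether it was present.
func (m *toyMap) Get(key string) (int, bool) {
	b := m.hash(key)
	for _, e := range m.buckets[b] {
		if e.key == key {
			return e.value, true
		}
	}
	return 0, false
}

func main() {
	var m toyMap
	m.Set("answer", 42)
	v, ok := m.Get("answer")
	fmt.Println(v, ok) // 42 true
}
```

The payoff of the bucket scheme is average O(1) lookup: you jump straight to one short chain instead of scanning every key.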

Keep in mind that I'm not asking anyone to implement a hash table from scratch on a white board or some nonsense like that!

There ought to be a floor on foundational knowledge expected from professional developers. Some people insist on this being zero. I don't understand why it shouldn't be at least this?

You can't get a comp-sci or comp-eng degree from any reputable university (or even disreputable ones) without being taught at least this much!

What next? Mechanical engineers who can't be expected to hold a pencil or draw a picture using a CAD product?

Surgeons that have never even seen a scalpel?

Surveyors that don't know what "trigonometry" even means?

Where's your threshold?


Boom. Nailed exactly what happens, and how you deal with it.


Try asking ChatGPT about MCP; it's so new it will hallucinate about some other random stuff.

It's still a bad interview question unless you're hiring someone to build AI agents, imho.


No, it is actually a critical skill. Employers will be looking for software engineers that can orchestrate their job function and these are the two key primitives to do that.


The way it's written says that this is an important interview question for any software engineering position, and I'm guessing you agree, given that you call it critical.

But by the same logic, should we be asking for the same knowledge of the Language Server Protocol and tools like tree-sitter? They're integral right now in the same way these new tools are expected to become (and have become for many).

As I see it, knowing the internals of these tools might be the thing that makes the hire, but it's not something you'd screen every candidate with who comes through the door. It's worth asking, but not "critical." Usage of these tools? Sure. But knowing how they're implemented is simply a single indicator of whether the developer is curious and willing to learn about their tools - an indicator which you need many of to get an accurate assessment.


Understanding how to build an agent and how Model Context Protocol works is going to be, by my best guess, the new "what is a linked list and how do you reverse a linked list" interview question in the future. Sure, new abstractions are going to come along, which means that you could perhaps be blissfully unaware about how to do that because there's a higher order function to achieve such things. But for now, we are at the level of C and, like C, it's essential to know what those are and how to work with them.
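For anyone who hasn't seen the classic question referenced above, reversing a singly linked list looks roughly like this in Go (a sketch; the type and function names are mine):

```go
package main

import "fmt"

type node struct {
	val  int
	next *node
}

// reverse re-points each next pointer in a single pass:
// O(n) time, O(1) extra space.
func reverse(head *node) *node {
	var prev *node
	for head != nil {
		next := head.next // remember the rest of the list
		head.next = prev  // flip this link backwards
		prev = head
		head = next
	}
	return prev // prev is the new head
}

func main() {
	// build 1 -> 2 -> 3
	list := &node{1, &node{2, &node{3, nil}}}
	for n := reverse(list); n != nil; n = n.next {
		fmt.Print(n.val, " ") // prints: 3 2 1
	}
	fmt.Println()
}
```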


It's a good heuristic for determining how read in somebody is to the current AI space, which is super important right now, regardless of being a moving target. The actual understanding of MCP is less important than the mindset having such an understanding represents.


Hard disagree. It’s not super important to be AI-pilled. You just need to be a good communicator. The tooling is a moving target, but so long as you can explain what you need well and can identify confusion or hallucination, you’ll be effective with them.


Nope. Being a good communicator and being good at AI are two completely different skillsets. Plenty of overlap, to be sure, but being good at one does not imply being good at the other any more than speaking first-language quality English means you are good at fundraising in America.

I know plenty of good communicators who aren't using AI effectively. At the very least, if you don't know what an LLM is capable of, you'll never ask it for the things it's capable of and you'll continue to believe it's incapable when the reality is that you just lack knowledge. You don't know what you don't know.


Filed under: “NOT the Onion”


Beware the greenfield effect.

I don’t want to comment on the technology choices specifically here, but in general the whole “we rewrote our app in X and now it’s better” is essentially a fact of life no matter the tech choices, at least for the first big rewrite.

First, you’re going to make better technical choices overall. You know much better where the problems lie.

Second, you’re rarely going to want to port over every bit of technical debt, bugs, or clunky UX decisions (with some exceptions [1]), so those things get fixed out of the gate.

Finally, it’s simply invigorating to start from a (relatively) clean slate, so that added energy is going to feel good and leave you in a sort of freshly-mowed greenfield afterglow. This in turn will motivate you and improve your work.

The greenfield effect happens even on smaller scales, especially when trying out a new language or framework, since you’re usually starting some new project with it.

[1] A good example of the sort of rewrite that _does_ offer something like an apples-to-apples comparison is Microsoft’s rewrite of the TypeScript compiler (and type checker, and LSP implementation) from TypeScript to Go, since they are aiming for 1-to-1 compatibility, including bugs and quirks: https://github.com/microsoft/typescript-go


Counterpoint: if you rewrite a Rust app, ANY Rust app and turn it into a perfectly rewritten Electron app, it will 100 percent still be shittier, bigger, slower and eat more RAM and CPU.


Possibly but not guaranteed.

For desktop apps, UI quality and rendering speed are paramount. There's a lot of stuff buried inside Chrome that makes graphics fast: for example, deep integration with every operating system's compositing engine, hardware-accelerated video playback that is integrated with the rendering engine, optimized font rendering... a lot of stuff.

If your Rust UI library is as advanced and well optimized as Blink, then yes, maybe. But that's pretty unlikely given the amount of work that goes into the Chrome graphics stack. You absolutely can beat Chrome in theory, by avoiding the overhead of the sandbox and using hardware features that the web doesn't expose. But if you just implement a regular ordinary UI toolkit with Rust, it's not necessarily going to be faster from the end user's perspective (they rarely care about things like disk space unless they're on a Windows roaming account and Electron installed itself there).


Having worked on a graphical application in Rust (albeit not a complex one), I can say computers today are fast: latencies top out at 3 ms with CPU-based rendering in an application with just a few rendering optimizations.

The fact that you just draw on the screen, instead of doing whatever HTML parsing / DOM / IR work a browser does, is probably what accounts for it. And rendering on the GPU means extra delay from moving work from the CPU to the GPU, plus being a frame behind because of vsync.


Can you clarify this? Seems to me that no matter how you render your UI, it has to go to the GPU framebuffer at some point.

For any non-trivial case where I can enable GPU acceleration for an app, it's been anywhere from equivalent to much more responsive.

What apps have you experienced delays with by enabling GPU acceleration?


Where it really shows up is stuff like power usage and reliably hitting framerates even on slow machines with big hi-res monitors attached.


Point of information: I believe this project uses Tauri, which actually does use web technology and even JavaScript for rendering, it just does it with the native web renderer of the platform so you're not dragging around a superfluous extra copy of Chrome in RAM for each and every individual app:

"Write your frontend in JavaScript, application logic in Rust, and integrate deep into the system with Swift and Kotlin."

"Bring your existing web stack to Tauri or start that new dream project. Tauri supports any frontend framework so you don’t need to change your stack."

"By using the OS’s native web renderer, the size of a Tauri app can be as little as 600KB."

So you write your frontend with familiar web technology and your backend in Rust, although it's all running in one executable.

I am curious if it would be all that much worse if your backend was also JavaScript, let's say in Node.js, but it certainly depends on what that back end is doing.


Comparing Rust with Electron is so weird. One is a language, the other is a lib/framework.


I think it’s clear what stack is typically meant by “an Electron app” and why using a blanket term like this is faster.

However, you could use Rust compiled to WASM in an Electron app, therefore the two aren’t even mutually exclusive.


For the OP's specific case, it's only a little language-related. It's more of a lib/framework thing.


Tauri is the thing being implicitly compared with electron. Well worth checking out.


It's not, both can be apps.


This is akin to saying if you rewrite it in Assembly it would be better than Rust. True, but what are the tradeoffs? Why doesn't everyone write it in assembly?

It _would_ be bigger and eat more RAM and CPU. But that does not imply "shittier".

There are parameters like dev time, skills available in the market, familiarity, the JS ecosystem etc that sometimes outweigh the disadvantage of being bigger/slower.

You're pointing out the disadvantages in isolation which is not a balanced take.


All those parameters mentioned are exclusively for developers. End users don't care and will get a worse product when you choose Electron instead of doing it properly.


End users care that they get a product at all. Which they won't if it's too costly to make. There is a balance that is appropriate for each project. Or else we should all be writing machine code by hand.


Rust has been shown by Google to not be any less productive than other mainstream languages though.


Link? I’d love to learn more!


[citation_needed]


> All those parameters mentioned are exclusively for developers. End users don't care and will get a worse product when you choose Electron instead of doing it properly.

A sensible take wouldn't pick one or the other as unilaterally better regarding the abstract context of what a good product is. The web as a platform is categorically amazing for building UIs, and if you continued to choose it as the frontend for a much more measurably performant search backend, that could be a fantastic product choice, as long as you do both parts right.


> This is akin to saying if you rewrite it in Assembly it would be better than Rust

Not really. Nobody is rewriting GUI apps in Assembly, the reasons are obvious.


Dev time, like catering to the ever-moving Node stack? (Not sure if it applies here because I'm not familiar with Tauri.)


This is something I think a lot of people miss about Rust - outside of slow compile times and personal preference, there is no reason not to choose Rust over JavaScript/TypeScript (unless of course you're working in the browser). It does everything JavaScript can do, but it does it faster and with more stability. At the end of the day, these features pay out huge dividends.

And this isn't Rust zealotry! I think this goes for any memory-safe AoT language that has a good ecosystem (e.g. Go or C#): why use JavaScript when other languages do it better?


Rust's type system gymnastics compared to most languages goes quite a bit beyond preference. I can't see the overlap at all with dynamic scripting languages, two completely different tools for completely different problems.


TS has one of the more gymnastics-heavy type systems, IMO, and I think many if not most JS shops use TS.


TS is gradual though, Rust is all or nothing.


A world of difference between the borrow checker and a garbage collector.


> there is no reason not to choose Rust

Sounds like Rust zealotry to me, followed by a mild attempt to walk it back.


All of that is true, but technological choices aside it doesn’t take away from all of the points made by the parent.

I took that as somewhat the point, and I think it was insightful. Your app will still be worse, but worse as a result of your poor technology choices, not the arguments made here. Put together it may still be a bad move, but you would still get the greenfield effect.


Performance is a feature, I agree. Language choice matters to a degree. Shitty apps can still be written in “fast” languages.


Microsoft rewriting typescript tools in Go and getting a 10x speedup? It's wild that they would choose Go for that. And a surprising level of speedup.

https://devblogs.microsoft.com/typescript/typescript-native-...


In my experience, Go is one of the best LLM targets due to simplicity of the language (no complex reasoning in the type system or borrow checker), a high quality, unified, and language-integrated dependency ecosystem[1] for which source is available, and vast training data.

[1]: Specifically, Go community was trained for the longest time not to make backward-incompatible API updates so that helps quite a bit in consistency of dependencies across time.


I have never understood why people want to use LLMs for programming outside of learning. I have written Perl, C, C#, Rust, and Ruby professionally and to this day I feel like they would slow me down.

I have used Go in the past and I was not, and am still not, a fan. But I recently had to break it out for a new project. LLMs actually make Go not a totally miserable experience to write, to the point I'm honestly astonished that people found it pleasant to work with before they were available. There is so much boilerplate and unnecessary toil. And the LLMs thankfully can do most of that work for you, because most of the time you're hand-crafting artisanal reimplementations of things that would be a single function call in every other language. An LLM can recognize that pattern before you've even finished the first line of it.

I’m not sure that speaks well of the language.


> I have never understood why people want to use LLMs for programming outside of learning

"I have never understood why people want to use C for programming outside of learning. I have written PDP11, Motorola 6800, 8086 assembly professionally and to this day I feel like they would slow me down. I have used C in the past and I was not, and am still not, a fan. But I recently had to break it out for a new project. Turbo C actually makes C not a totally miserable experience to write, to the point I’m honestly astonished that people have found it pleasant to work with before they were available. There is so much boilerplate and unnecessary toil. And Turbo C with a macro library thankfully can do most of that work for you, because most of the time you’re hand-crafting artisanal reimplementations of things that would be a single function call in every other language. A macro can recognize that pattern before you’ve even finished the first line of it. I’m not sure that speaks well of the language."

They are enormously powerful tools. I cannot imagine LLMs not being one of the primary tools in a programmer's toolbox, well... for as long as coding exists.


Right now they are fancy autocompletes. That is enormously useful for a language where 90% of the typing is boilerplate in desperate need of autocompletion.

Most of the “interesting” logic I write is nowhere close to autocompleted successfully and most of it needs to be thrown out. If you’re spending most of your days writing glue that translates one set of JSON documents or HTTP requests into another I’m sure they’re wildly useful.


I don't know which models you are using, but in my experience they are way more than fancy autocomplete today. I have had thousand-line programs written and refined with just a few prompts. On the analysis and code review side, they have been even more impressive, finding issues and potential impacts of changes and describing the intent behind the code. I implore you to revisit good models like Gemini 2.5 Pro. To wit, there was an actual Linux kernel vulnerability in the SMB protocol stack discovered with an LLM a few days ago.

Even if we take the narrow use case of boilerplate glue code that transforms data from one place to another, that encompasses almost all programs people write, statistically. There was a running joke at Google "we are just moving protobufs." I would not call this "fancy autocomplete."


It comes back to the nature of the work; I've got a hobby project which is basically an emulator of CP/M, a system from the 70s, and there is a bug in it.

My emulator runs BBC Basic, Zork, Turbo Pascal, etc, etc, but when it is used to run a vintage C compiler from the 80s it gives the wrong results.

Can an LLM help me identify the source of this bug? No. Can I say "fix it"? No. In the past I said "Write a test-case for this CP/M BDOS function, in the same style as the existing tests" and it said "Nope" and hallucinated functions in my codebase which it tried to call.

Basically if I use an LLM as an auto-completer it works slightly better than my Emacs setup already did, but anything more than that, for me, fails and worse still fails in a way that eats my time.


> Can an LLM help me identify the source of this bug? No. Can I say "fix it"? No. In the past I said "Write a test-case for this CP/M BDOS function, in the same style as the existing tests"

These are all things I've done successfully with ChatGPT o1 and o3 in a 7.5kloc Rust codebase.

I find the key is to include all information which may be necessary to solve the problem in the prompt. That simple.


I wrote a summary of my issue on a github comment, and I guess I will try again

https://github.com/skx/cpmulator/issues/234#issuecomment-291...

But I'm not optimistic; all previous attempts at "identify the bug", "fix the bug", "highlight the area where the bug occurs" just turn into timesinks and failures.


It seems like your problem may be related to asking it to analyze the whole emulator _and_ compiler to find the bug. I'd recommend working first to pare the bug down to a minimal test case which triggers the issue - the LLM can help with this task - and then feed the LLM the minimal test case along with the emulator source and a description of the bug and any state you can exfiltrate from the system as it experiences the issue.


Indeed, running a vintage, closed-source binary under an emulator, it's hard to see what it is trying to do, short of decompiling and understanding it. Then I can use that knowledge to improve the emulation until it successfully runs.

I suggested in my initial comment I'd had essentially zero success in using LLMs for these kind of tasks, and your initial reply was "I've done it, just give all the information in the prompt", and I guess here we are! LLMs clearly work for some people, and some tasks, but for these kind of issues I'd say we're not ready and my attempts just waste my time, and give me a poor impression of the state of the art.

Even prompts like "Looking at this project, which areas of the CP/M 2.2 BIOS or BDOS implementations look suspicious?", "Identify bugs in the current codebase", or "Improve test coverage to 99% of the BIOS functionality" feel like they should cut the job in half because they don't relate to running specific binaries, yet they do nothing useful. Asking for test coverage is an exercise in hallucination, and asking for omissions against the well-known CP/M "spec" results in noise. It's all rather disheartening.


> Indeed, running a vintage, closed-source binary under an emulator, it's hard to see what it is trying to do, short of decompiling and understanding it.

Break it down. Tell the LLM you're having trouble figuring out what the compiler running under the emulator is doing to trigger the issue, tell it what you've done already, and ask for its help using a debugger and other tools to inspect the system. When I did this, o1 taught me some new LLDB tricks I'd never seen before. That helped me track down the cause of a particularly pernicious infinite recursion in the geometry processing code of a CAD kernel.

> Even prompts like "Looking at this project, which areas of the CP/M 2.2 BIOS or BDOS implementations look suspicious?", "Identify bugs in the current codebase", or "Improve test coverage to 99% of the BIOS functionality" feel like they should cut the job in half because they don't relate to running specific binaries, yet they do nothing useful.

These prompts seem very vague. I always include a full copy of the codebase I'm working on in the prompt, along with a full copy of whatever references are needed, and rarely ask it questions as general as "find all the bugs". That is quite open ended and provides little context for it to work with. Asking it to "find all the buffer overflows" will yield better results. As it would with a human. The more specific you can get the better your results will be. It's also a good idea to ask the LLM to help you make better prompts for the LLM.

> Asking for test-coverage is an exercise in hallucination, and asking for omissions against the well-known CP/M "spec" results in noise.

In my experience hallucinations are a symptom of not including the necessary relevant information in the prompt. LLM memories, like human memories, are lossy and if you force it to recall something from memory you are much more likely to get a hallucination as a result. I have never experienced a hallucination from a reasoning model when prompted with a full codebase and all relevant references. It just reads the references and uses them.

It seems like you've chosen a particularly extreme example - a vintage, closed-source, binary under an emulator - didn't immediately succeed, and have written off the whole thing as a result.

A friend of mine only had an ancient compiled java app as a reference, he uploaded the binary right in the prompt, and the LLM one-shotted a rewrite in javascript that worked first time. Sometimes it just takes a little creativity and willingness to experiment.


7.5 kloc is pretty tiny, sounds like you may be able to get the entire thing into the context.


Lots of Rust libraries are relatively small since Cargo makes using many libraries in a single project relatively easy. I think that works in favor of both humans and LLMs. Treating the context window as an indication that splitting code up into smaller chunks might be a good idea is an interesting practice.


I generally have to maintain the code I write, often by myself; thousands of lines of uninspired slop code is the last thing I need in my life.

Friction is the birthplace of evolution.


Some people go camping now and then to hunt their own food, feel connected to nature, and feel that friction. They just don't want it every day, just as they don't tend to write the underlying uninspired assembly themselves. FWIW, if your premise is that the code they generate is necessarily unmaintainable compared to an average CS college graduate's baseline, I'd argue against that premise.


I've always found it fascinating how frequently I've seen the complaint about Go re: boilerplate and unnecessary toil, but in previous statements Rust was uttered with an uncritical breath. I agree with the complaint about Go, but I have the same problem with Rust. LLMs have made Rust much more joyful for me to write, and I am sure much of this is obviously subjective.

I do like automating all the endless `Result<T, E>` plumbing, `?` operator chains, custom error enums, and `From` conversions. Manual trait impls for simple wrappers like `Deref`, `AsRef`, `Display`, etc. 90% of this is structural too, so it feels like busy work. You know exactly what to write, but the compiler can't/won’t do it for you. The LLM fills that gap pretty well a significant percentage of the time.

But to your original point, the LLM is very good at autocompleting this type of code zero-shot. I just don't think it speaks ill of Rust as a consequence.


This is akin to saying that you prefer a horse to a car because you don't have to buy gas for a horse; it can eat for free, so why use the car?


The first cars were probably much less useful than horses. They didn’t go very far, gas pumping infrastructure wasn’t widely available, and you needed specialized knowledge to operate them.

Sure, they got better. But at the outset they were a pretty poor value proposition.


Well it certainly makes error handling easy. No need to reason about complex global exception handlers and non-linear control structures. If you see an error, return it as a value and eventually it will bubble up. If err != nil is verbose but it makes LLMs and type checkers happy.
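A minimal sketch of the errors-as-values style being described (the function and names here are mine, not from any particular codebase):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort returns an error as an ordinary value. Callers check it
// with the familiar `if err != nil` and bubble it up with added
// context, instead of relying on non-linear exception control flow.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		// %w wraps the underlying error so callers can errors.Is/As it
		return 0, fmt.Errorf("parsing port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, errors.New("port out of range")
	}
	return p, nil
}

func main() {
	if _, err := parsePort("eighty"); err != nil {
		fmt.Println("error:", err) // the error simply bubbled up
	}
	p, _ := parsePort("8080")
	fmt.Println("ok:", p) // ok: 8080
}
```

Verbose, yes — but the control flow is completely linear, which is exactly what makes it easy for both type checkers and LLMs to follow.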


I have never seen an AI system correctly explain the following Go code:

    package main

    func alwaysFalse() bool {
        return false
    }

    func main() {
        switch alwaysFalse() // don't format the code
        {
        case true:
            println("true")
        case false:
            println("false")
        }
    }
> Go community was trained for the longest time not to make backward-incompatible API updates so that helps quite a bit in consistency of dependencies across time

Not true for Go 1.22 toolchains. When you use Go 1.21-, 1.22 and 1.23+ toolchains to build the following Go code, the outputs are not consistent:

    //go:build go1.21

    package main

    import "fmt"

    func main() {
        for counter, n := 0, 2; n >= 0; n-- {
            defer func(v int) {
                fmt.Print("#", counter, ": ", v, "\n")
                counter++
            }(n)
        }
    }


You're bringing up exceptions rather than a rule. Sure you can find things they mess up. The whole premise of a lot of the "AI" stuff is approximately solving hard problems rather than precisely solving easy ones.


The opposite is true: they sometimes guess correctly. Even a broken watch is right twice a day.


I believe future AI systems will give correct answers. The rule is clearly specified in the Go specification.

BTW, I haven't found an AI system that can get the correct output for the following Go code:

    package main

    import "fmt"

    func main() {
        for counter, n := 0, 2; n >= 0; n-- {
            defer func(v int) {
                fmt.Print("#", counter, ": ", v, "\n")
                counter++
            }(n)
        }
    }


What do you base that prediction on? Without a fundamental shift in the underlying technology, they will still just be guessing.


Because I am indeed seeing AI systems do better and better.


It can easily explain it with a little nudge.

Not sure why you feel smug about knowing such a small piece of trivia; 'gofmt' would rewrite it with a semicolon anyway.


I write code in Notepad++ and never format my code. :D


Go is a great target for LLM because it needs so much boilerplate and LLMs are good at generating that.


AFAIK the borrow checker is not strictly needed to compile Rust. I think one of the GCC Rust projects started with only a compiler and deferred adding borrow checking later.


The borrow checker does not change behavior, so any correct program will be fine without borrow checking. The job of borrow checking is to reject programs only.

mrustc also does not implement a borrow checker.


Not that much different than a type checker in any language (arguably it is the same thing).

I have been using various LLMs extensively with Rust. It's not just borrow checker. The dependencies are ever-changing too. Go and Python seem to be the RISC of LLM targets. Comparatively, the most problematic thing about generated Go code is the requirement of using every imported package and declared symbol.


> And a surprising level of speedup.

Not surprising at all; I keep pointing out that the language benchmarking game is rarely, if at all, reflective of real-world usage.

Any time you point out how slow JS is someone always jumps up with a link to some benchmark showing that it is only 2x slower than Go (or Java, or whatever).

The benchmarks game, especially in GC'ed languages, is not at all indicative of real-world usage of the language. Real-world usage (i.e. idiomatic usage) of language $FOO is substantially different from the code written for the benchmarks games.


Perhaps "real-world usage" is "… rarely, if at all, reflective of [other] real-world usage …".

Perhaps when you write "idiomatic usage" you mean un-optimized.


It doesn't surprise me at all.

Idiomatic Go leans on value types and simple loops and conditionals, gives you just enough tools to avoid unnecessary allocations, doesn't default to passing around pointers for everything, and gives you more control over memory layout.

JS runtimes have to do a lot of work in order to spit out efficient code. It also requires more specialized knowledge from programmers to write fast JS.

I think esbuild and hugo are two programs that showcase this pretty well. Esbuild specifically made a splash in the JS world.


A tooling team selects language widely used in tooling circles - wild, shocking.


What surprises me is that TypeScript is so slow. I have never used it, and now I think I never will.


At the risk of feeling silly for not knowing this ... why is TypeScript considered a programming language, and how can you make it "faster"?

I have used it since it came out, so I do know what it is, but I have had people ask if they should write their new program in TypeScript, thinking it is something they can write in and then run.

My usage of it is limited to JavaScript, so I see it as adding a static typing layer to JavaScript, so that development is easier and more logical, and this typing information is stripped out when transpiled, resulting in pure JavaScript which is all the browser understands.

The industry calls it a programming language, so I do too just because this is not some semantic battle I want to get into. But in my mind it's not.

There's probably a word for what it is, I just can't think of it.

Type system?

And I don't understand a "10x speedup" on TypeScript, because it doesn't execute.

I can understand language services for things like VS Code that handle the TypeScript types getting 10x faster, but not TypeScript itself. I assume that is what they are talking about in most cases. But if this logic isn't right, let me know.


The "10x speedup" is for the compilation step from TS to JS, eg how much faster the new Typescript compiler is, not the runtime performance of the JS output.

Theoretically(!) using TS over JS may indirectly result in slightly better perf though, because it nudges you towards not mutating the shape of runtime objects (which may cause the JS engine to re-JIT your code). The extra compilation step might also allow some source-level optimizations, like constant folding. But I think TS doesn't do this (other JS minifiers/optimizers do though).


I suspect the particular use-case of parsing/compiling is pathologically bad for JavaScript runtimes. That said, they are still leaps faster than reference Python and Ruby interpreters.


Depends what you mean by slow. The Typescript code was 3x slower than the Go code, and a 3x overhead is pretty much the best you can do for a dynamically typed language.

Languages like Python and Ruby are much much slower than that (CPython is easily 10x slower than V8) and people don't seem to care.


Technically Typescript can't really be slow, since it's just a preprocessor for Javascript, and the speed of its programs will depend on which Javascript implementation you use.

Typescript's sweet spot is making existing Javascript codebases more manageable.

It can also be fine in I/O-heavy scenarios where Javascript's async model allows it to perform well despite not having the raw execution performance of lower-level languages.


I thought that (for example) deno executed typescript natively?


It executes typescript without you compiling it to JavaScript first, it doesn’t make code execution any faster.


Here's the FAQ, where they explain the decision to go with Go and not, say, Rust.

https://github.com/microsoft/typescript-go/discussions/categ...

Hejlsberg also says in this video, about 3.3x performance is from going native and the other 2-3x is by using multithreading. https://www.youtube.com/watch?v=pNlq-EVld70&t=51s


This is a good take, so thank you for sharing. Can you please rewrite it in Go?


Yes and no. Rust is suited to solving a certain set of problems in a certain way. Or you could say: Rust brings certain qualities to the table that may or may not suit your project.

A hacker blog I read regularly ran a challenge for the fastest tokenizer in any language. I had just learned basic Rust and decided, why the heck not. I spent 15 minutes on a naive/lazy approach and entered the result. It took second place; third place was a C implementation and first place was highly optimized assembler.

This is not nothing and if I had written this in my main language (python) I wouldn't even have made the top 10.

So if you want a language where the result is comparably performant while giving you some confidence in how it is not behaving, Rust is a good choice. But that isn't the only metric. People understanding the language is also important, and that is where other languages shine. Everything is about trade-offs.


OK, but you phrase it like it's something bad, while also saying it's very, very good in practice. But maybe for reasons other than the language change.

We should state: "this is true and really works, just remember language was likely only part of that".

It's rather "embrace" instead of "beware".


I'm a little confused, why beware of this?


I know of at least one YouTuber who took a homemade treatment that altered his GI system's DNA to produce lactase so he could eat pizza again:

https://youtu.be/J3FcbFqSoQY


That's not gene editing, but you could call it a gene therapy: he's introducing new fragments of DNA/RNA into his cells, which then just float around and cause the right enzyme to get made. Sometimes called upregulation. This is different from actually editing the existing DNA in the cell.


Good point - to be honest I didn't realize the difference, which is very significant from a treatment perspective.

