Here's an article that talks about the switch to canvas for VS Code. The "5 to 45 times faster" part really sticks out to me. Kind of surprised it took Google this long to do the same with Docs.
I personally didn't even bother checking out VS Code because it was based on Electron, so I figured the performance just wouldn't be there: it would be doing all of this awkward web stuff while trying to be an IDE.
I was completely wrong, though. Using it, it really doesn't feel like a web app at all; it feels like a text editor. It's really shocking and impressive. Perhaps I should learn more about what they're doing.
The major problem I have with Electron apps is that they eat memory for breakfast, lunch, and dinner.
This happens with JVM applications as well, but there you can limit the max heap size (e.g. with java -Xmx512m) and force the garbage collector to work harder, trading off speed for the ability to run more apps side by side.
AFAIK, you can't limit the memory used by Electron apps, and they don't share a heap with their child processes. With enough extensions to make it usable, VS Code easily eats gigabytes of memory.
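The closest thing I know of is passing V8 flags through to Chromium, which caps only the JS heap in processes that see the flag, not the process's total memory, so it doesn't really solve this. A sketch (the 512 MB figure is arbitrary):

    // main.js, before the app 'ready' event fires
    const { app } = require('electron');

    // Caps V8's old-generation heap (in MB). Total process memory
    // (native buffers, GPU, Chromium internals) can still grow past this.
    app.commandLine.appendSwitch('js-flags', '--max-old-space-size=512');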
I like LSP. But I don't need VS Code to do the rendering.
The majority of the problems with "Electron" are actually just problems with the development style used by the types of people who publish and consume packages from NPM.
We've gone from a world where JS wasn't particularly fast but powered apps like Netscape, Firefox, and Thunderbird just fine (despite the machines of the era being nothing like what we have today), and most people didn't even know it, to a V8-era world where JS became crazy fast, to the world we're in now, where people think web-related tech is inherently slow just because of how poorly most apps written in JS are implemented.
If you want to write fast code, including for Electron, then the first step is to keep your wits about you as a programmer, and the second step is to ignore pretty much everything that anyone associated with NPM and contemporary Electron development is doing, or anything that would lead you to think you're supposed to be emulating it.
I agree, and this is something I have also witnessed in my own Electron project, where care was taken to write fast and memory-efficient code. It really doesn't use much memory compared to native applications when running; I've done comparisons.
I also feel that the problem is more with the style of JavaScript development rampant these days, where not a lot of care is taken to write memory-efficient, or even just efficient, code.
Of course, a lot of this has to do with the sharp rise in people studying to become (mostly) web developers without any deeper CS degree or understanding of how computers really work.
This isn't entirely the fault of Electron, though, but of the convenient data types exposed in a web environment. Beyond the baseline memory of running Chromium, you can use various tricks to keep memory very low, such as minimizing GC pressure (e.g. declaring variables up front, not within loops), using array buffers extensively, and using shared array buffers to share memory with workers; see the sketch below.
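A minimal sketch of the shared-array-buffer approach (file names and buffer size are made up, and real pages also need cross-origin isolation before SharedArrayBuffer is available):

    // main.js: preallocate one shared buffer instead of many per-item objects
    const shared = new SharedArrayBuffer(1024 * 1024); // 1 MiB, allocated once
    const samples = new Float64Array(shared);          // zero-copy typed view
    const worker = new Worker('worker.js');
    worker.postMessage(shared);                        // shares memory, no structured clone

    // worker.js: reads and writes the same memory the main thread sees
    self.onmessage = (event) => {
      const view = new Float64Array(event.data);
      for (let i = 0; i < view.length; i++) {
        view[i] = Math.sqrt(i);                        // no per-iteration allocation
      }
    };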
Behaviorally they aren't the same thing, so it's not a straightforward mechanical translation. For example, if the variable is an object instance and escape analysis can prove it doesn't outlive the loop, it can be put on the stack, and then yes, there would be no benefit to the suggested change. Deoptimization makes stack allocation more complicated, though, so JS engines are more conservative here than, say, JVM runtimes.
But it's really easy for escape analysis to fail; it has to be conservative. So you can quite easily end up heap-allocating a temporary object every loop iteration, as in the sketch below.
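A contrived sketch (function names are made up, and whether the engine actually elides the allocation depends on inlining):

    function totalDistance(points) {
      let total = 0;
      for (const p of points) {
        // A fresh temporary per iteration. If `magnitude` isn't inlined
        // (or gets deoptimized), escape analysis must assume `delta`
        // escapes, so each of these objects lands on the heap.
        const delta = { dx: p.x, dy: p.y };
        total += magnitude(delta);
      }
      return total;
    }

    function magnitude(delta) {
      return Math.hypot(delta.dx, delta.dy);
    }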
Not sure about any particular guide, but I learned a lot from the old #perfmatters push from Chrome. Getting a deeper understanding of what the JS engine does when you create an object, where it lives, and how it interacts with the garbage collector would be a good thing to learn about (one concrete example below). Also, it's generally only worth considering optimization for things that store a lot of data, like arrays/maps. I don't see why these techniques wouldn't hold up in the long term.
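One example of that kind of engine detail is V8's hidden classes: objects created with the same properties in the same order share a shape, which keeps the call sites that use them monomorphic. A sketch (names made up):

    // Same properties, same order: every point shares one hidden class,
    // so code that reads p.x / p.y stays monomorphic and fast.
    function makePoint(x, y) {
      return { x, y };
    }

    // Adding a property after construction forks the shape, so call
    // sites that touch these objects can become polymorphic and slower.
    function makeLabeledPoint(x, y, label) {
      const p = { x, y };
      if (label) p.label = label;
      return p;
    }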
I definitely agree that it's easier to make web apps that consume much more memory than it is with a lower-level language like C++, unless you're being careful.
I just upgraded my main work laptop in the past month from 16 GB to 40 GB (8 GB soldered + 32 GB SODIMM). So your point is granted, but on the other hand, DDR4 prices have collapsed ~50% since 2018 (I couldn't believe it either, given all of the other semiconductor issues).
I'd assume a significant cost-benefit tradeoff. For all its flaws, the DOM rendering algorithm is at least "document-like," so there's a lot of wheel-reinventing to do in going from simply using the DOM to a custom document-layout implementation underpinning a canvas-targeted renderer; even basic word wrapping has to be rebuilt by hand (see the sketch below).
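For instance, greedy line breaking, which the DOM gives you for free, has to be reimplemented on canvas. A minimal sketch (a real editor also needs bidi, shaping, hit testing, selection, and so on; assumes a <canvas> element exists on the page):

    // Break `text` into lines that fit `maxWidth`, then paint them.
    function drawWrappedText(ctx, text, x, y, maxWidth, lineHeight) {
      let line = '';
      for (const word of text.split(' ')) {
        const candidate = line ? line + ' ' + word : word;
        if (line && ctx.measureText(candidate).width > maxWidth) {
          ctx.fillText(line, x, y); // flush the finished line
          line = word;
          y += lineHeight;
        } else {
          line = candidate;
        }
      }
      if (line) ctx.fillText(line, x, y);
    }

    const ctx = document.querySelector('canvas').getContext('2d');
    ctx.font = '16px sans-serif';
    drawWrappedText(ctx, 'The DOM would have wrapped this for you.', 8, 24, 120, 20);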
Yes, at OrgPad, we are writing our own rich text editor, and if you want to use the DOM approach, you don't have much choice but to do it like this. You can see the WIP demo here (in Czech, but it is quite visual): https://www.youtube.com/watch?v=SkFJ1zcRjQY
It is also written in ClojureScript. Some of the reasoning is here (in English, but 3 hours long): https://www.youtube.com/watch?v=4UoIfeb31UU
https://code.visualstudio.com/blogs/2017/10/03/terminal-rend...