That would also avoid the problem with this syntax: it's not a valid Go file (it doesn't start with `package ...`, and I don't think a bare top-level string is valid), which lots of editors will be pretty unhappy about.
Definitely not minor feedback: there's no reason to write Go in a .js file. Vite/Rollup are perfectly able to "load" certain file types and parse them however you like (rough sketch of such a loader after this comment).
I stand firm that there's no reason to write Go in a .js file other than ragebaiting, especially with that "use" directive that everyone on Twitter clearly hates at the moment (due to Vercel, etc.).
To be clear, I'm fine with importing .go from JS; it's the "Go in file.js" thing I don't like.
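For what it's worth, such a loader doesn't take much. Here's a minimal sketch; the plugin name, the JS shim, and the omission of TinyGo's `wasm_exec.js` glue are all my assumptions (not how the actual project does it), and it only covers the build path:

```ts
// vite-plugin-go.ts — hypothetical sketch of a .go loader (build mode only).
import { execFileSync } from "node:child_process";
import { mkdtempSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import type { Plugin } from "vite";

export function goLoader(): Plugin {
  return {
    name: "go-loader",
    transform(_code, id) {
      if (!id.endsWith(".go")) return null;
      // Compile the imported file with TinyGo (assumed to be on PATH).
      const out = join(mkdtempSync(join(tmpdir(), "go-wasm-")), "main.wasm");
      execFileSync("tinygo", ["build", "-o", out, "-target", "wasm", id]);
      // Emit the .wasm as an asset and return a JS shim that loads it.
      const ref = this.emitFile({
        type: "asset",
        name: "module.wasm",
        source: readFileSync(out),
      });
      // NOTE: a real TinyGo module also needs the wasm_exec.js import
      // object; it's omitted here to keep the sketch short.
      return `export default async function init(imports = {}) {
  const url = import.meta.ROLLUP_FILE_URL_${ref};
  const { instance } = await WebAssembly.instantiateStreaming(fetch(url), imports);
  return instance.exports;
}`;
    },
  };
}
```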
> Scientific computing where you already have Go code
This is a really cool project, I must admit, and I'm also among those asking for something similar for Julia, since it has one of the strongest focuses on scientific computing. It would be really cool if you could build the same thing for Julia.
Coming back to my main point: what if the scientific computing project is too complicated and depends on features that aren't available in TinyGo? From what I remember, TinyGo and Go aren't 1:1 compatible.
How much impact could that have, though? I'm basically asking about the state of TinyGo and whether it could handle scientific computing as accurately as you describe. Still a great project nonetheless. Kudos.
Looks interesting, and a good use case for introducing folks to extending web apps with WASM functionality.
I used a similar technique with TinyGo WASM builds (without Vite, of course) on a toy project, where the WASM-based functionality acted as a fallback if the API wasn't available or the user was offline. I found it an interesting pattern.
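Roughly, the shape of that fallback pattern looks like this (all names here are made up for illustration, and string marshalling across the WASM boundary is elided):

```ts
// Prefer the HTTP API; fall back to a local WASM build of the same logic.
let wasmExports: { summarize(input: string): string } | null = null;

async function initWasm(): Promise<void> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("/summarize.wasm"),
    {} // a real TinyGo build needs the wasm_exec.js import object here
  );
  wasmExports = instance.exports as any;
}

export async function summarize(input: string): Promise<string> {
  if (navigator.onLine) {
    try {
      const res = await fetch("/api/summarize", { method: "POST", body: input });
      if (res.ok) return res.text();
    } catch {
      // network error: fall through to the WASM path
    }
  }
  if (!wasmExports) await initWasm();
  return wasmExports!.summarize(input);
}
```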
Just be careful with this backend-code-in-frontend stuff. If it's needed for some computationally expensive logic that is logically client side, then fine. But be wary of letting the client dictate business rules and having open-for-anything APIs (GraphQL is particularly prone to this).
I've seen teams do this in the wild more than once.
Better performance? For JavaScript code that calls into native platform APIs provided by the browser, it's already been shown that performance is an order of magnitude better than calling into WASM and doing all the shenanigans to move bytes to and from WASM.
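You can see the call-boundary cost for yourself with something like this (run as an ES module; an `add.wasm` exporting `add(i32, i32) -> i32` is assumed, and exact numbers vary by engine):

```ts
// Micro-benchmark: tiny JS function vs the same function exported from WASM.
const { instance } = await WebAssembly.instantiateStreaming(fetch("/add.wasm"));
const wasmAdd = instance.exports.add as (a: number, b: number) => number;
const jsAdd = (a: number, b: number) => a + b;

function bench(label: string, fn: (a: number, b: number) => number): void {
  const t0 = performance.now();
  let acc = 0;
  for (let i = 0; i < 10_000_000; i++) acc = fn(acc | 0, i | 0);
  console.log(label, (performance.now() - t0).toFixed(1), "ms", acc);
}

bench("js  ", jsAdd);   // the JIT can inline this: no boundary crossing
bench("wasm", wasmAdd); // every call pays the JS<->WASM transition cost
```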
I second that, having relatively recently used the native browser APIs for image processing. While it felt a bit awkward to use, it served its purpose pretty well.
If I needed more, I would probably not use Go anyways, but a sharper tool instead.
I don't think any of the use cases suggested really make sense though. For a compute-intense task like audio or video processing, or for scientific computing where there's likely to be a requirement to fetch a ton of data, the browser is the wrong place to do that work. Build a frontend and make an API that runs on a server somewhere.
As for cryptography, trusting that the WASM build of your preferred library hasn't introduced any problems demonstrates a level of risk tolerance that far exceeds what most people working in cryptography would accept. Besides, browsers have quite good cryptographic APIs built in. :)
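For example, hashing with the built-in SubtleCrypto API, no WASM build of a crypto library involved:

```ts
// SHA-256 digest of a string via the Web Crypto API (run as an ES module).
async function sha256Hex(message: string): Promise<string> {
  const data = new TextEncoder().encode(message);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

console.log(await sha256Hex("hello"));
// 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```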
The browser often runs on an immensely powerful computer. It's a waste to use all that power as just a dumb terminal. As a matter of fact, my laptop is 6 years old by now and considerably faster than the VPS our backend runs on.
I let the browser do things such as data summarizing/charting and image convolution (in JavaScript!). I'm also considering harnessing it for video pre-processing.
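Not my exact code, but canvas-based convolution is roughly this shape (3x3 sharpen kernel for illustration; border pixels are left untouched for brevity):

```ts
// Apply a 3x3 convolution kernel to a canvas in place.
function convolve(ctx: CanvasRenderingContext2D, w: number, h: number): void {
  const src = ctx.getImageData(0, 0, w, h);
  const out = ctx.createImageData(w, h);
  const k = [0, -1, 0, -1, 5, -1, 0, -1, 0]; // sharpen kernel
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      for (let c = 0; c < 3; c++) { // r, g, b channels
        let acc = 0;
        for (let ky = -1; ky <= 1; ky++)
          for (let kx = -1; kx <= 1; kx++)
            acc += src.data[((y + ky) * w + (x + kx)) * 4 + c] * k[(ky + 1) * 3 + (kx + 1)];
        out.data[(y * w + x) * 4 + c] = Math.min(255, Math.max(0, acc));
      }
      out.data[(y * w + x) * 4 + 3] = 255; // opaque alpha
    }
  }
  ctx.putImageData(out, 0, 0);
}
```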
> For a compute-intense task like audio or video processing, or for scientific computing where there's likely to be a requirement to fetch a ton of data, the browser is the wrong place to do that work.
... I mean... elaborate?
Every time I've heard somebody say this, it's some form of being stuck in the 90s/00s, with the notion that browsers showing GIFs is the ceiling and that real work can only happen on the server.
Idk how common this is now, but a few years ago (~2017) people would show projects like Figma that drew a few hundred things on screen, and people would be amazed. Which is crazy, because things like WebGL, WASM, WebRTC, and WebAudio are insanely powerful APIs that give pretty low-level access. A somewhat related idea is people clamoring for DOM access in WASM because, again, they have this idea that web = webpage/DOM, but that's a segue into a whole other thing.
I was playing around with WASM and WebGL a few years ago to see if they could be used to increase JS performance on certain computationally heavy tasks. I might be misremembering, but if I recall correctly the answer was generally always no, because of the overhead involved in crossing the JS -> WASM -> JS boundary.
Additionally, JIT optimisation means that even for very computationally heavy tasks, JavaScript is surprisingly performant unless they're one-offs or have a significant amount of computational variance (see the sketch after this comment).
So unless you need to compute something for several seconds and it's done as a one-off, there will typically be very little (if any) gain in trying to squeeze out additional performance this way.
However, this is all off the top of my head and from my own experimentation several years back. Someone please correct me if I'm wrong.
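To make the JIT point concrete, this is the kind of loop where JS tends to leave a WASM port little headroom: monomorphic, typed-array-backed, no allocation inside the loop. An illustrative sketch, numbers will vary by engine:

```ts
// Dot product over Float64Arrays: a JIT-friendly numeric hot loop.
function dot(a: Float64Array, b: Float64Array): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += a[i] * b[i];
  return sum;
}

const n = 1_000_000;
const a = new Float64Array(n).fill(1.5);
const b = new Float64Array(n).fill(2.0);

dot(a, b); // warm-up pass so the JIT compiles the loop to tight machine code
const t0 = performance.now();
console.log(dot(a, b), (performance.now() - t0).toFixed(2), "ms");
```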
I'm guessing this only works on the back end? If yes, then why not just write the back end in Go if you're so fond of the language? It's not like Go lacks the libraries to do web stuff. Or would this be for a shop that is all-in on React, Angular, or some other framework?
Seems like an unintuitive idea that could only have come from someone infected by React/Vercel. The natural way most would think about this is to just write Go in a .go file and use an import attribute or macro.
Fair take! Though, this was literally built as a joke in response to @ibuildthecloud's tweet. Sometimes the dumbest ideas are the most fun to prototype.