
How fast is Ruby, lately?

I haven't used it in about 10 years


Is this basically Greasemonkey 2.0?

Greasemonkey with vibe-coded user scripts, basically.

> If you’ve used Violentmonkey/Tampermonkey, Tweeks is like a next‑generation userscript manager

Here’s something I noticed: if you yell at them (all caps, cursing them out, etc.), they perform worse, much like a human would. So if disagreeable interaction tends to produce less correct output, it's plausible that some degree of “personable answering” contributes to better correctness, and you may just have to accept some personality.

Interesting. Codex just did the work once I swore at it. I wasted 3-4 prompts being nice, and the angry style made it do it.

Actually DeepSeek performs better for me in terms of prompt adherence.

The stats I've seen show a 10-20% loss in speed relative to natively-compiled code, which is effectively noise for all but the most critical paths.

It may get even closer with Wasm 3.0, released 2 months ago, since it adds things like 64-bit address support, more flexible vector instructions, typed references (which remove some runtime safety checks), basic GC, etc. https://webassembly.org/news/2025-09-17-wasm-3.0/


Unfortunately 64-bit address support does the opposite: it comes with a non-trivial performance penalty, because it breaks the tricks that were used to minimize sandboxing overhead in 32-bit mode.

https://spidermonkey.dev/blog/2025/01/15/is-memory64-actuall...
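To make the tradeoff concrete, here's a conceptual sketch of the trick being referenced (not actual engine code; the function names and layout are purely illustrative): with 32-bit addresses an engine can reserve a 4 GiB + guard-page region up front, so every possible 32-bit offset already lands inside the reservation, whereas 64-bit addresses force an explicit per-access check.

```c
#include <stdint.h>

/* memory32: the engine reserves 4 GiB (plus guard pages) of virtual address
 * space up front, so any 32-bit offset stays inside the reservation and an
 * out-of-bounds access hits a guard page instead of escaping the sandbox. */
uint8_t load_u8_mem32(uint8_t *reserved_4gib_base, uint32_t addr) {
    return reserved_4gib_base[addr];  /* no explicit bounds check needed */
}

/* memory64: a 64-bit offset can point far beyond any practical reservation,
 * so each access needs an explicit bounds check (or equivalent), which is
 * where the extra per-access cost comes from. */
uint8_t load_u8_mem64(uint8_t *base, uint64_t addr, uint64_t mem_size) {
    if (addr >= mem_size) {
        return 0;  /* a real engine would trap here; placeholder for the sketch */
    }
    return base[addr];
}
```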


1) This may be temporary.

2) The bounds checking argument is a problem, I guess?

3) This article makes no mention of type-checking, which is also a new feature; it moves some checks that would normally run at runtime to a single check at compile/validation time, and that may include bounds-style checks.


The choice to go to WebAssembly is an interesting one.

Wasm 3.0 especially (released just 2 months ago) is really gunning for a more general-purpose "assembly for everywhere" status (not just "compile to web"), and it looks like it's accomplishing that.

I hope they add some POSIXy stuff to it so I can write cross-platform command-line TUIs that do useful things without needing to be recompiled for different OS/chip combos (at the cost of a 10-20% slowdown versus native compilation, not a critical loss for all but the most demanding use cases), and that are likely to simply keep working on all future OS/chip combos (assuming you can run the wasm, of course).


> I hope they add some POSIXy stuff to it

Are you aware of WASI? WASI preview 1 provides a portable POSIXy interface, while WASI preview 2 is a more complex platform-abstraction beast.

(Keeping the platform separate from the assembly is normal and good - but having a common denominator platform like POSIX is also useful).
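For a sense of what "a portable POSIXy interface" means in practice, here's a minimal sketch: plain C using familiar libc calls, compiled to a WASI module and run in a standalone runtime. The build/run commands in the trailing comment assume wasi-sdk and wasmtime and may differ between versions, so treat them as a sketch rather than a recipe.

```c
/* hello_wasi.c - plain C using POSIXy calls that WASI preview 1
 * maps to host facilities (stdout, environment variables, etc.). */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *home = getenv("HOME");  /* env vars come through the WASI env */
    printf("hello from wasm, HOME=%s\n", home ? home : "(unset)");
    return 0;
}

/* Hypothetical build/run, assuming wasi-sdk and wasmtime are installed:
 *   $WASI_SDK_PATH/bin/clang --target=wasm32-wasi hello_wasi.c -o hello.wasm
 *   wasmtime --env HOME="$HOME" hello.wasm
 */
```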


I'd go a bit further. If you want full POSIX support, perhaps WASIX is the best alternative. It's WASI preview 1 + many missing features, such as: threads, fork, exec, dlopen, dlsym, longjmp, setjmp, ...

https://wasix.org/


My understanding of the wasm execution model was that it was fundamentally single-threaded?

I don't think that's accurate, although it's true that it needs extra work to function properly in JS-based environments.

You can already create threads in Wasm environments (we even got fork working in WASIX!). However, there is an upcoming Wasm proposal that adds threads support natively to the spec: https://github.com/WebAssembly/shared-everything-threads
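To illustrate what "threads in Wasm" means in practice, here's ordinary pthreads code. The code itself is unremarkable; the interesting part is that toolchains targeting wasi-threads or WASIX can compile this kind of source to Wasm largely unchanged (the exact target and flags depend on the toolchain, so treat the build step as an assumption).

```c
/* threads_demo.c - ordinary pthreads code that a threads-capable
 * Wasm toolchain/runtime can also target, not just native platforms. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("worker %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[4];
    int ids[4];

    for (int i = 0; i < 4; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < 4; i++) {
        pthread_join(threads[i], NULL);
    }
    return 0;
}
```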


What are the options for working with WASIX? (compiling to it, running it?)

Is this something that is expected to "one day" be part of Wasm proper in some form?


Whoa, this is actually a fascinating option. Unfortunately it's not in nixpkgs, probably due to licensing issues, but I bet I could put together a flake.nix for it.

but also: ooof, of COURSE it has encoding issues:

https://github.com/taviso/wpunix/issues/64


It's not a scam, because it does make you code faster, even if you must review everything and possibly correct some things (either manually or via further instruction).

As far as hallucinations go, it is useful as long as its reliability is above a certain (high) percentage.

I actually tried to come up with a "perceived utility" function of reliability; the best I came up with is U(r) = U_max · e^(−k·(100−r)^n) with k = 0.025 and n = 1.5, plotted here: https://imgur.com/gallery/reliability-utility-function-u-r-u...
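A minimal sketch of that curve at a few reliability values (normalizing U_max to 1, which is my assumption; the comment doesn't pin it down):

```c
/* Perceived-utility curve from the comment:
 * U(r) = U_max * exp(-k * (100 - r)^n), with k = 0.025 and n = 1.5. */
#include <math.h>
#include <stdio.h>

static double utility(double r, double u_max, double k, double n) {
    return u_max * exp(-k * pow(100.0 - r, n));
}

int main(void) {
    const double u_max = 1.0, k = 0.025, n = 1.5;
    /* Sample a few reliability values to show how quickly utility
     * falls off as reliability drops below ~95%. */
    for (double r = 80.0; r <= 100.0; r += 5.0) {
        printf("r = %5.1f%%  ->  U = %.3f\n", r, utility(r, u_max, k, n));
    }
    return 0;
}
```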


> In your quest to be critical of anything Tesla does

That conclusion does not logically follow, whatsoever, from the evidence presented


I didn't assert any conclusions.

You assumed he was on a "quest to be critical of anything Tesla does"; the person made only one criticism, and a valid one at that (having used FSD and experienced issues myself, I refuse to be gaslit).

The fact that Elon has blown the San Franistan EV market should surprise absolutely no one.

Sounds like a problem with all of AI.

Having driven with Tesla FSD and coded with Claude/Codex, I see the exact same issues in both: stellar performance in the most common contexts, but sometimes bizarrely nonsensical behavior outside them.

Which is why I call it "thunking" (clunky thinking) instead of "thinking". And also why it STILL needs constant monitoring by an expert.

