Optimizing Guile Scheme (dthompson.us)
168 points by avvvv on Sept 25, 2024 | 80 comments


I have such mixed feelings about dynamically typed languages. I've designed a whole pile of hobby programming languages, and dynamically typed languages are at least an order of magnitude simpler to design and for users to learn and start using.

At the same time, they inevitably seem to lead to user stories like this where a user really does know exactly what types they're working with and wants the language to know that too (for performance or correctness), and they end up jumping through all sorts of insane hoops to get the optimizer to do exactly what they want.


I think Common Lisp got it exactly right. Strongly typed dynamic language where you can optionally specify types for the extra performance/correctness if need be (especially with SBCL).

Honestly, I think weak typing is more of an issue than dynamic typing and people cry for static types when they suffer mostly from the former.

Dynamic typing is great because it allows you to have extremely complex types for basically free. It allows for insane expressiveness. It also makes prototyping much easier and does not force you into over-specifying your types early on. In a dynamic language, most of your types default to the most general type that would work, while static types force you to use very specific ones (especially when the language lacks structural typing).

If you want to allow just half the expressiveness of dynamic languages in your static language, you will quickly run into huge complexity: dependent types, long compile times, cryptic error messages, and whatnot.

Generally, I think gradual typing is rising in popularity for good reason. It allows for quick prototyping but also lets you drill down on your types when you want to. Best of both worlds.


I find much the same to be true. I'm a big fan of Racket's define/contract and Clojure's Malli/guardrails. You get one of the biggest benefits of static types (code that self-documents the expected shape of data) while enjoying all the benefits of a dynamic language (like creating types at runtime, and repl-driven development).
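
For illustration, a minimal define/contract sketch in Racket (assuming #lang racket; the function and contract here are made up):

    ;; Hypothetical example: the contract self-documents the expected shape of
    ;; the data and enforces it at runtime with a blame error on violation.
    (define/contract (scale-point pt factor)
      (-> (listof real?) positive? (listof real?))
      (map (lambda (c) (* c factor)) pt))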


I wish Common Lisp had integrated type declarations with TYPEP and CHECK-TYPE, instead of punting with "consequences are undefined if the value of the declared variable is not of the declared type," i.e., sucks to be you.


No doubt you know this, but Common Lisp's standardization gave leeway to implementations, so you could choose which implementation suits you better. Common Lisp is an umbrella for different Lisps to have some commonality. It was a process of negotiation, during a time when there were many design forks in the road and diverse use cases.

For example, type declarations can enable performance optimizations, compile-time type checking, runtime type checking, IDE autocompletion, etc. Or they can be ignored, if compiler simplicity is valued more. All these things have engineering tradeoffs. For example, runtime checks may have runtime costs at odds with performance optimization.

There might be higher-value improvements to Common Lisp, if higher-quality code is desired.


As someone who doesn't know much about types, do SBCL type declarations provide as good type-based development experience as OCaml and Rust?

And perhaps I won't fully get your answer, but what I mean is: is there something fundamentally inadequate in the way SBCL declares types? I think there is a phrase for it in CS theory.


> As someone who doesn't know much about types, do SBCL type declarations provide as good type-based development experience as OCaml and Rust?

First of all, the OP was talking about strongly typed languages. Asking whether they are as good as statically typed ones like Rust and OCaml is moving the goalposts quite a bit.

Second of all, a subset of your SBCL code can indeed be expressed with OCaml-like static types, see

https://github.com/coalton-lang/coalton


To add to the other comment, it is also important to understand the more interactive mode of developing in CL. You basically have the program always running, fixing bugs and adding features while it is running.

You simply don't have the problem of batch-style programming, where you have written a bunch of code and now you want to know if it works, so you run a lot of static analysis on it beforehand, because actually running it and getting it into the relevant state costs time.

In CL you don't end up with lots of code that has never been run. You have constantly run it during development and are much more confident about its behavior. So just having this interactive way of programming already leads to much more reliable software. It is not a replacement for static analysis or unit testing of course but another pillar to help you write more correct software.


I like dynamic languages too. But I don't like the idea of "optimization", and I would be super interested in a dynamic language that didn't attempt to divorce performance from correctness. The worst part about jumping through insane hoops to enchant the optimizer is that it can all go wrong with the tiniest change--a flag here, a different usage pattern there, a new version, etc., and suddenly your program doesn't do what you need it to, as though an operation taking 1000x longer is just a matter of degree.


I agree completely.

At the same time, no one wants their code to run 100x slower than it would in any typical statically typed language. Unoptimized dynamic languages are sloooooow.


RPython and Graal (and what else?) provide a JIT for free (or at least cheap).

Of course, this really only works for code that is (a) statically polymorphic but dynamically monomorphic, and (b) has hot loops, but qualitatively that conjunction does seem like it ought to cover a lot of low-hanging fruit.

Anyone have quantitative measures?


There aren't many people looking at these JITs at the moment. Stefan Marr[1]'s group[2] is, I believe, where most of the research is currently being done. A recent paper[3] compares the performance of interpreters in RPython and Graal. Their baseline performance is Java, and they achieve performance close to V8, which itself is about 2x slower than Java.

My summary is you can write fast interpreters + get JIT for free, but fast JIT for dynamic languages still means 2x slower than JIT for statically typed languages (and Java definitely leaves some performance on the table due to how it represents data).

[1]: https://stefan-marr.de/

[2]: https://research.kent.ac.uk/programming-languages-systems/

[3]: https://dl.acm.org/doi/10.1145/3622808


There is only so much a compiler can do when any operation can result in a complex function call or variable sized data.


I'm happy with dynamic languages for almost everything I do and generally do not want to sacrifice flexibility, which is the price to pay for a static type system. However, certain parts of a program become more crystalline over time, whether for performance or correctness reasons, and being able to express those parts using a static type system makes a lot of sense. PreScheme [0] is an example of a statically typed Scheme that composes with the host Scheme. I'd like to see more work in this direction as Scheme already works well as a multi-paradigm language.

[0] https://prescheme.org/


Yeah, PreScheme is really interesting; I really liked the SystemCrafters exploration of it:

https://www.youtube.com/watch?v=QqKuHylIqBs


As a long-time Python and JavaScript user, I've come to the conclusion that dynamic typing is just not a good idea for anything beyond exploratory or very small projects.

The problem is that you invariably have to think about types. If you mistakenly pass a string to a function expecting an integer, you better hope that that is properly handled, otherwise you risk having type errors at runtime, or worse—no errors, and silent data corruption. That function also needs to be very explicit about this, but often the only way to do that is via documentation, which is often not good enough, or flat out wrong. All of this amounts to a lot of risk and operational burden.

Python's answer has historically been duck typing, which doesn't guarantee correctness and so isn't a solution; more recently it has been addressing this with gradual typing, which has its own issues and limitations. Primarily that if specifying types is optional, most programmers will not bother, or will just default to `any` to silence the type checker. While for JS we had to invent entirely new languages that compile to it, and we've reached the point where nobody sane would be caught working with plain JS in 2024.

Static typing, in turn, gives you a compile time safety net. It avoids a whole host of runtime issues, reduces the amount of exhaustive and mechanical tests you need to write, while also serving as explicit documentation. Code is easier to reason about and maintain, especially on large projects. You do lose some of the expressiveness and succinctness of dynamic typing, but what you gain with static typing is far more helpful than these minor benefits.


Dynamic typing with deeply nested data forces you to put type bandaids all over the code. For example you end up defining Pydantic schemas and then validating the same thing more than once since you can't guarantee that the type of a thing was not changed somewhere in the middle.

Dynamic typing forces you to test behavior which could be tested much more thoroughly by a type checker, at compile time, with zero development time.

Dynamic typing does offer much faster time to early prototyping but then drags you down with each bug.

Static typing does force some early commitments to the structure of the data but it also allows faster iteration and refactoring.

Static typing with good type inference seems the best to me.


I was strongly in the static typing camp for a long time, with Haskell, Purescript, Idris, OCaml, and Typescript, but over time I realized that the costs mostly outweigh the benefits.

> The problem is that you invariably have to think about types. If you mistakenly pass a string to a function expecting an integer, you better hope that that is properly handled, otherwise you risk having type errors at runtime, or worse—no errors, and silent data corruption.

The silent data corruption is really only a problem with weak dynamic typing, that usually automatically coerces types. A lot of dynamically typed languages still have strong typing and will immediately error out. And usually you end up testing all of the code you're writing anyway, so this almost never happens in practice except when someone is refactoring without thoroughly testing what they did, which should be done regardless of whether there are types or not.

> Primarily that if specifying types is optional, most programmers will not bother, or will just default to `any` to silence the type checker.

They probably do bother when it's an important module, or an edge boundary that needs to be documented with a contract, or during times of significant refactoring. And these days LLMs can generate the specs, optional types, and tests pretty easily for any sort of self-contained, modular, reasonably well written code.

So now with LLMs I think there are even better reasons to use dynamic typing. And type completion in an IDE still exists for a bunch of dynamically typed languages anyway, like javascript.


> The silent data corruption is really only a problem with weak dynamic typing, that usually automatically coerces types.

That's not necessarily true. A function could serialize the passed value, which would work without type conversion, and it could still result in data corruption somewhere down the line. The point is that with dynamic typing there's no guarantee of correctness. It has nothing to do with strong vs. weak typing, which, incidentally, I don't find helpful to debate, since there's no single definition for those terms, and most languages can behave arbitrarily depending on the situation.

Furthermore, you ignored my primary point of runtime type errors. These are very common in Python, and there's really no solution to them besides doing offline type checking, which as I said, has its own problems and is not a silver bullet either.

> And usually you end up testing all of the code you're writing anyway, so this almost never happens in practice except when someone is refactoring without thoroughly testing what they did, which should be done regardless of whether there are types or not.

Assuming you were referring to data corruption, maybe. But type errors happen very often in practice, and no amount of testing can guarantee you won't run into them. Besides, most teams I've worked with weren't disciplined enough to achieve even 100% statement coverage, let alone branch coverage, or do more sophisticated testing like fuzzing. So while type errors are close to impossible to prevent by testing, even data corruption can easily fly under the radar.

Static typing gives you this safety net, _for free_. This alone is worth the minor inconvenience of having to specify type information, and think about types explicitly.

> They probably do bother when it's an important module, or an edge boundary that needs to be documented with a contract, or during times of significant refactoring.

This requires experience to know good practices, when to follow them, and the discipline to do so. IME very few developers are this diligent 100% of the time, and most, if given the option, will do the minimum amount of work necessary. I'm not just blaming others, I've been lazy about good practices myself many times. This is why gradual or optional typing is not a solution to these issues.

Looking at it from the other direction, most statically typed languages can do type inference. This avoids the tedium of having to be explicit all the time, while still giving you the benefits of type checking at compile time. This is a much safer solution.

> And these days LLMs can generate the specs, optional types, and tests pretty easily for any sort of self-contained, modular, reasonably well written code.

Seriously? LLMs have no place in a discussion about correctness. They're glorified autocomplete engines, which can be useful, but trusting them to give you correct output for these issues is incredibly risky. At best you would need to manually verify everything they do, and I trust myself to do a quicker job in most situations with macros and `sed`.

> And type completion in an IDE still exists for a bunch of dynamically typed languages anyway, like javascript.

I feel like we're talking about two different things, and you're ignoring the main issue of type errors at runtime.


I bounced off learning Haskell a few times (infinite regress to downloading category theory textbooks), but almost the moment I saw its type inference, I immediately wanted that in lisp.

Roc is looking pretty nice (especially once your editor can paste in the inferred type like they want to do), but I still think there's an empty space for an imperative language where type inference makes it feel as untyped (or at least as unceremonious) as (pre-annotation) Python.


Stanza (https://lbstanza.org/) is a very interesting experiment in designing for gradual types rather than retrofitting them onto a dynamic language.


Solid blog overall, and I think it is pitched at the right level of granularity for the topic. However, if I were offering criticism, the place where I think more detail would be super interesting is the 'Please Inline' section. In particular, I would really be interested in a slightly more detailed description of the optimizer's algorithm for inlining. I think the "define_inlinable" macro is a great example of macro usage, but it is clearly a way to circumvent the inliner's apparent shortcomings. I would like to understand what heuristic the optimizer is using, to see if there is a sensible default function construction style/idiom that is more appealing to the optimizer for inlining. However, I am reminded of the inlining discussion in Chandler Carruth's talk (Understanding Compiler Optimization) from a few years ago, where he discusses how obscure and seemingly arbitrary inlining heuristics and their optimization passes are in practice. [1]

1 - https://youtu.be/FnGCDLhaxKU?si=J3MhvJ-BmX5SG2N6&t=1550
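
For reference, Guile itself ships a `define-inlinable` form (whether or not it is the same thing the article builds on); a made-up accessor is the sort of thing it is typically used for:

    ;; Hypothetical example: ask the compiler to inline this tiny accessor at
    ;; each call site instead of emitting a procedure call.
    (define-inlinable (vec3-x v)
      (vector-ref v 0))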


A walkthrough of Guile's optimization passes and the inlining heuristics would be great. I've been meaning to do a "part two" here but you know how these things go.


If you want to read just an enormous amount of well-written bloggage about optimizing Guile Scheme, this is the spot: https://wingolog.org

Andy Wingo is the maintainer and I get a kick out of everything he posts.


Andy's blog is on another level. He's also leading the Hoot project to compile Guile to WebAssembly and I work with him on that and try to absorb whatever compiler knowledge I can while doing so.


These sorts of optimizations can and should be handled by a (sufficiently smart (tm)) compiler.

Common Lisp/SBCL is usually sufficiently smart. I know not everyone likes Common Lisp, but at the very least I would have tested this with something more performant than Guile, like Chicken Scheme (my favorite!), Chez Scheme, etc.

I like Guile and its purpose as a universal scripting language. However, its performance issues are well known, even compared to other scripting-first languages (Lua, Perl, Python, etc.).


I think that is why this blog is particularly interesting to me. As one of the other comments on this posting said, it is nice to see an analysis/detailed description of optimizing code where the first step is not to rewrite it in a language with a presumed better performance baseline. I also think some props are due for continuing to work within Guile's somewhat spartan tooling to do the optimization work, instead of switching to a language that may have more/better tooling for the task.

Not to take away from the general comparisons between various Lisp flavors and between various scripting languages (an activity I engage in quite often), but your lead-off line is more prescriptive than I find advisable. I don't think a blanket statement that optimization of code's runtime behavior "should" only be done via a compiler holds up. Some devs enjoy the work; others have varied reasons for doing performance-sensitive work in a given language/environment. At the end of the day, doing optimization is a valid use of developer effort and time if that developer judges it so.


Part of the problem is that raw Scheme is spectacularly underspecified.

It also doesn't help that Schemes like Guile are also interactive. The domains of an interactive language and a "compiled" language are quite different.

Given the entirety of the program made available to the compiler all at once, there are high-level derivations that can happen, notably through flow analysis, to let the compiler make better decisions. But do that in an interactive environment, where the rug can be pulled out from under any of the assumptions the compiler made, and things get messy quite quickly.

One interesting tidbit for Java is that the developers of the compiler advocate "idiomatic" Java. Simply, write Java like Java, and don't try to trick the compiler. Let the compiler developers trick the compiler.

That's evident here in this article when they wrote the function that tests the types of the parameters. Raw Scheme, naturally, doesn't allow you to specify parameter types, not the way you can in Common Lisp, for example. And either Guile does not have a specific extension to support this, or the compiler simply looks for this type-checking pattern at the top of a function to make optimization determinations. On the one hand, it could be more succinct with a specialized facility, but on the other, this is "just Scheme".

So, in effect by checking for the type of the variable, they're implicitly declaring the type of the variable for the "sufficiently smart" compiler to make better decisions.
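
A made-up Guile sketch of that pattern: the explicit check is both a runtime guard and, in effect, a type declaration the compiler can exploit in the body.

    ;; Hypothetical example: after the guard, the compiler may assume x and y
    ;; are real numbers in the arithmetic below and specialize it accordingly.
    (define (hypotenuse x y)
      (unless (and (real? x) (real? y))
        (error "hypotenuse: expected real numbers" x y))
      (sqrt (+ (* x x) (* y y))))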

The counterexample is the "define-inline" construct, which is thus not standard Scheme (though readily replaced by a no-op macro if one were interested in porting the code).


> But do that in an interactive environment, where the rug can be pulled out from under any of the assumptions the compiler made, and things get messy quite quickly.

https://bibliography.selflanguage.org/_static/dynamic-deopti...


> Given the entirety of the program made available to the compiler all at once, there are high-level derivations that can happen, notably through flow analysis, to let the compiler make better decisions. But do that in an interactive environment, where the rug can be pulled out from under any of the assumptions the compiler made, and things get messy quite quickly.

Haskell's GHC does quite well with its 'ghci' interactive environment. GHC is a compiler first and foremost, and as far as I can tell, ghci works by compiling each line you give it one by one? (But even in ghci, you have to abide by the type system of Haskell, so that might help.)

The Common Lisps were always pretty good at combining compiled and interpreted parts, even in the same program. And I think OCaml also does a good job of combining the two approaches?


In my experience, guile these days is often faster than chicken, even compiled (the csi interpreter is dog slow and not suitable for anything but interactive exploration). Chicken is just not a fast implementation.

Plus guile comes with a more comprehensive standard library; on the other hand, chicken's package manager and available packages do make up for that.


Guile has come a long way in the past decade or so! I think your info is quite out of date. Guile's compiler performs a number of state of the art optimizations. It compiles to bytecode so native code compilers like Chez win the race, naturally. The JIT is pretty good, though! Native compilation is on the roadmap for Guile, but maybe somewhat surprisingly we're getting AOT compilation to WebAssembly first. https://spritely.institute/hoot/


It's still much much slower than SBCL which is not surprising given the time & effort that went into the latter. User-guided optimizations (type declarations, stack allocation, intrinsics, machine code generation) in SBCL are also more flexible and better integrated.


Yup, SBCL is quite amazing!


I switched from Guile to SBCL because I really like having things such as (declare (inline my-function)) and (declare (type Double-Float x y z)). Now if only it had case-lambda, named let, and a better deftype which can specify members of classes and/or structs.



The language-hack of the "the" operator in Common Lisp, whereby `(the fixnum (+ 5 7))` signals that the result of (+ 5 7) should be an integer is so ... lispy.


Isn't Racket the 'default' Scheme? (Even though it's no longer called Scheme.)


Extremely easy interop with native code is the main selling point of guile IMO. You just link in guile as a library and can have C code call scheme code and vice versa. Makes it great for any native program that needs an embedded scripting language (much like Lua). Does Racket support that use-case?
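
For a sense of how direct it is, here is a made-up sketch of the other direction, reaching into native code from Guile via the (system foreign) module (assuming cos is already visible in the process's global symbol table):

    ;; Hypothetical example: bind the C function double cos(double) via the FFI.
    (use-modules (system foreign))
    (define c-cos
      (pointer->procedure double                          ; return type
                          (dynamic-func "cos" (dynamic-link))
                          (list double)))                 ; argument types
    (c-cos 0.0)  ; => 1.0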


I haven't done FFI with Racket, but https://docs.racket-lang.org/foreign/index.html looks reasonably approachable?


That seems to let you call C functions from Racket, but I don't see how it lets you embed Racket in a C program and call Racket functions from C. So it's at best half a solution, unless I'm missing something.



I don't think there's a real 'default' Scheme. Chez is probably the implementation that generates the fastest code, but if I'm not mistaken it only implements the R6RS spec. Guile is quite performant and supports both R6RS and R7RS Small. Chicken has a bunch of libraries (the 'eggs'), but I think it's R5RS (I may be wrong). And of course MIT/GNU Scheme is what you want to follow along with MIT publications working in Scheme (like Structure and Interpretation of Computer Programs, Structure and Interpretation of Classical Mechanics, and The Art of the Propagator). There's also Gauche, which I believe is the most conformant implementation of the various colour dockets for R7RS Large (Gerbil may also be fully conformant).


You can download a package to use R7RS Small in Racket. https://github.com/lexi-lambda/racket-r7rs


For SICP, the best option is probably Racket with the sicp language package.

I can't recommend MIT Scheme for anything these days; it's just missing too many things that I feel are required for real work in Scheme, and has too many idiosyncrasies and quirks. Even using it to run a standalone program written in Scheme is a pain.


Racket is now built around Chez.


Which was a great idea; bootstrapped languages take away the usual "but you depend on XYZ" objection that haters always bring into the discussion.

Additionally, they allow you to prove a point.

For example, is writing compilers systems programming or not?


Thanks for the context! It's been a while since I did serious work in the Lisps. (I've moved on to the ML family.)

> [...] MIT/GNU Scheme is what you want to follow along with MIT publications working in Scheme (like Structure and Interpretation of Computer Programs, Structure and Interpretation of Classical Mechanics, and The Art of the Propagator).

Definitely, though I suspect if you need a language that's exactly what's written in the text, you are probably missing the point? At least for SICP, I haven't looked into the others as closely. (Part of) the point being learning wider concepts.

I almost feel like you get more out of the book, if you do the exercises in a mix of JavaScript and Python. Not because those are better languages, just the opposite: because it forces you to understand the concepts well enough to translate them.


In SICM they frequently make use of a kind of implicit applicative lifting (I can't remember what they call it) where you apply a vector-of-functions as if it were a function itself. In pseudo-Haskell:

    lift :: Vec (a -> b) -> (a -> Vec b)
    lift []     _ = []
    lift (f:fs) a = f a : lift fs a
so that you can write natural-looking multidimensional physics expressions like

    ((fx fy fz) r)
without having to invoke macros or restructure the expression to please the compiler. I dearly wish you could do this in another scheme but so far I haven't found one. Iirc it's required for using the magnificent `scmutils` package too.

For example, `guile-scmutils`[0] says:

> Functionality not available in the port:

> Scheme extension to allow applying vectors/structures as procedures. For example rather than

    1 ]=> (pe ((up (literal-function 'x) (literal-function 'y)) 't))
    (up (x t) (y t))
> you must use

    guile> (pe ((lambda (t) (up ((literal-function 'x) t) ((literal-function 'y) t))) 't))
    (up (x t) (y t))
[0] https://www.cs.rochester.edu/~gildea/guile-scmutils/


Clojure has a really nice `juxt` function for that, which I have an implementation of in my `.guile` file.

  (define (juxt . fns)
    ;; Return a procedure that applies every fn to the same arguments and
    ;; collects the results in a list: ((juxt car cdr) '(1 2)) => (1 (2))
    (lambda args
      (map (lambda (fn) (apply fn args)) fns)))


lift = sequenceA @[] @(a ->)


Did you do that by hand? I guessed you used the classic `pointfree` program, but that gave

    lift = fix ((`ap` tail) . (. head) . flip ((.) . liftM2 (:)))
As far as I know it's not possible to get this functionality in Haskell even with clever instance magic, but I'd love to be proved wrong.


> As far as I know it's not possible to get this functionality in Haskell even with clever instance magic

It is possible to fill in basic function bodies based on their type, using ghc-justdoit (https://hackage.haskell.org/package/ghc-justdoit). That's maybe not what you meant; if you are looking to integrate pointfree into Haskell, it can be added to ghci or your development environment.

    foo :: ((a -> r) -> r) -> (a -> ((b -> r) -> r)) -> ((b -> r) -> r)
    foo = (…)
In this case I wrote it because I knew about the pattern. Your lift definition is just ($):

      flip \a -> ($ a)
    = flip (&)
    = flip (flip ($))
    = ($)


Maybe you meant to write

    (??) :: Functor f => f (a -> b) -> a -> f b
    funs ?? a = fmap ($ a) funs
from lens: https://hackage.haskell.org/package/lens-5.3.2/docs/Control-...

This is a valid definition of lift along a different, less interesting but more general axis.

    lift = (??) @[]


    lift = flip $ \a -> ($a)


At this point it's a separate language from Scheme. I think Chez Scheme tends to be the "default" recommendation.


No


Great overview on how to approach improving code performance, without going down the usual route of rewriting into something else.


The Guile source->source optimizer is such a nice tool for seeing what is going on, especially when writing macros. I really recommend Kent Dybvig's "The Macro Writer's Bill of Rights" to see how useful it can be.


I prefer Common Lisp with SBCL and Lem, but this is good too.

On SICP, Guile badly needs a module for the picture language from the book (and srfi-203 + srfi-216).


The monomorphic vs polymorphic argument is an interesting one. I think that you could explicitly get unboxing if you used something like CLOS-style multimethods to dispatch based on the type, so that (add <float> <float>) would dispatch to the function that uses fadd on those operands. I never realized that you could use this kind of functionality, multimethods or free monad interpreters, to write in-code optimizations that are conveniently abstracted away in actual code usage.
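
A made-up GOOPS sketch of what that dispatch looks like in Guile (as the edit below notes, it is still resolved dynamically):

    ;; Hypothetical example: type-based dispatch; each method could wrap a
    ;; representation-specific implementation internally.
    (use-modules (oops goops))
    (define-method (add (x <real>) (y <real>)) (+ x y))
    (define-method (add (x <string>) (y <string>)) (string-append x y))
    (add 1.5 2.5)      ; => 4.0
    (add "a" "b")      ; => "ab"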

Edit: nevermind, that's also dynamic dispatch. You'd have to add static dispatch via macros or some external transpilation step.


Makes me happy when I see Guile is alive and going.


The full numeric tower sounds like a great idea. But in retrospect you almost never want uint32 silently converting to bignum, or ending up with a low precision float.

Has anyone had a positive experience?


I would say the opposite: In a high-level language for "everyday programming", as opposed to systems programming or high-performance programming, arbitrary precision signed integers are the right choice. They let you do math on things like a large file size in bytes, or a nanosecond-precision timestamp, without having to think about integer widths. You only need to think about "is this an integer or is this floating-point", which takes less mental effort than using a language like C with its large selection of integer types.


I love the numeric tower most of the time. Not having to worry about integer overflow bugs is great. I like that I can express the fraction 1/3 exactly rather than approximately with a float. It's only in the cases of very sensitive code that I have to worry about the details of how numbers are represented at runtime.
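
A few illustrative one-liners at a Guile REPL (results as I would expect them, not taken from the article):

    (expt 2 100)          ; => 1267650600228229401496703205376, no overflow
    (/ 1 3)               ; => 1/3, an exact rational
    (+ 1/3 1/3 1/3)       ; => 1, still exact
    (exact->inexact 1/3)  ; => 0.3333333333333333, a float only when asked for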


That's funny, my impression was the opposite: that you'd almost always want a fixed-width integer to promote to bignum when it transcends the limit. It's a lot more sensible than adding a bunch of integers together and ending up with one which is smaller than any of the ones in the input.


As soon as I saw the title, I thought of the Street Fighter character, but this was actually an interesting read about a programming language I had never heard of before.


The slogan I've proposed for the language is "Guile goes with everything." Because Guile was designed from the outset to run embedded or standalone, and to transpile other extension languages to Scheme or a Scheme-compatible representation, I think that's fitting. See: https://knowyourmeme.com/memes/guiles-theme-goes-with-everyt...


I think the Guile community of yore would have no idea what Street Fighter was but now we should embrace it as long as Capcom doesn't get mad.


Originally, it was called GEL (GNU Extension Language) but was later renamed to GUILE.

https://wingolog.org/archives/2009/01/07/a-brief-history-of-...

There was a forum where Tom Lord, the creator of GEL, talked about the early history from his perspective. Unfortunately I cannot remember where it is. Sadly, Tom Lord passed away in 2022.


The other Scheme environment I use regularly is Gambit. So, reppin both Marvel and Capcom.


Should it be "Guile Scheme goes with everything" for the rhyme?


This is too good not to use.

(My favourite use of Guile's Theme is by Team Teamwork: https://www.youtube.com/watch?v=w5JuYmQ2_ns but I bet one of rms's many songs could be found to fit.)


Even better.


A prominent use of Guile is as the configuration language for Guix, GNU's version of Nix.


It's the official GNU extension language, so it's fairly widely used in the GNU world I think.

It's also the language of GNU Shepherd (a.k.a. dmd), the init system/service manager on GuixSD (the full OS distribution based on Guix), and IIRC their initrd runs a Guile program instead of a shell script.


You probably shouldn’t do those things. The point of a high level language is to not have to think about such details. If you can’t get the performance you need, you should use a different tool, instead of trying to circumvent implicit limitations.


Sometimes you can write mostly high-level code and add tricks and annotations for speed only to very hot loops.


There are two major problems with this approach. First of all, the intent is implicit, so it won't be clear to a new set of eyes. Second, by peeking behind the curtain you can get some gains, but only as long as everything behind the curtain stays the same. The author wrote about Guile 3, but is it also true of Guile 2 or 1? Will it hold true for Guile 4? Anybody's guess, really.

In contrast to this approach, I'd point at Numpy. It optimises specific cases in Python code, but does so in an explicit way and its interface is even sufficiently high level to match Python well.


Earlier this year I was suffering with @guvectorize in Python, so I don't disagree completely... Anyway:

> First of all, the intent is implicit, so it won't be clear for a new set of eyes.

Yep. Many times the change is obvious, like changing + to fx+. But if the change needs a big rewrite, it probably needs a good comment explaining the simple version and the tricks used to make it faster. Even better, have the function `something` and also `something_slow` with the simple slow implementation, so you can write a few tests and check that they give the same result. I've used that for big refactorings/rewrites: the moment I run the two functions and the results differ by more than 1E-10, I know I made a mistake and have to revert the last change (hopefully).
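
Something like this little made-up helper is all it takes to keep the two in sync while refactoring:

    ;; Hypothetical example: compare the optimized function against the slow
    ;; reference implementation on the same input.
    (define (check-against-slow something something-slow input)
      (when (> (abs (- (something input) (something-slow input))) 1e-10)
        (error "optimized version diverged from the reference" input)))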

> The author wrote about Guile 3, but is it also true of Guile 2 or 1? Will it hold true for Guile 4?

I don't know about the details of Guile, but I know about Racket. (I guess Guile has a similar culture.)

If the code is fast in the current version 8, then nobody is sure whether it's also fast in the previous versions 7 or 6 or ... The compiler gets a lot of tiny invisible improvements, and perhaps one of them made your code fast. It's difficult to know.

About version 9 ...

There is an informal, implicit promise to make idiomatic code faster. So I expect fast idiomatic code in version 8 to be fast in version 9. Moreover, I'd classify a big slowdown as an almost-bug and hope it gets fixed for the next release. (It happened in the 7 -> 8 transition, when the back end was changed completely, but the problems were rare.)

Non-idiomatic code is more problematic, for example if you use too many `set!`s to make the code faster. I don't expect code with `set!` to be slower in version 9, but perhaps the version without `set!` may be faster in the new release.

About the changes proposed in the article, I don't expect them to cause problems in the future. Perhaps the Guile compiler will be improved to make them unnecessary, but they don't look problematic.



