
Hopefully this will finally stop the continuing claims[1] that LLMs can only solve problems they have seen before!

If you listen carefully to the people who build LLMs, it is clear that post-training RL forces them to develop a world-model that goes well beyond the "fancy Markov chain" some seem to believe they are. The next step is building similar capabilities on top of models like Genie 3[2].

[1] eg https://news.ycombinator.com/item?id=45769971#45771146

[2] https://deepmind.google/discover/blog/genie-3-a-new-frontier...



Please read section 2 of the paper[1] cited in the blog post. LLMs are used as a mutation function in an evolutionary loop. LLMs are certainly an enabler, but IMO, evolutionary optimization is what deserves credit in this case.

[1]: https://arxiv.org/abs/2511.02864
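
To make the division of labour concrete, here is a minimal sketch of such a loop (all names are placeholders rather than the paper's actual API; llm_mutate stands in for the model call, evaluate for the problem-specific scorer):

    import random

    def evolve(seed, evaluate, llm_mutate, generations=100, pop_size=20):
        """Toy evolutionary loop: the LLM is only the mutation operator;
        selection pressure comes entirely from `evaluate`.

        evaluate(program) -> float score (higher is better)
        llm_mutate(program) -> new candidate program
        """
        population = [(evaluate(seed), seed)]
        for _ in range(generations):
            # Pick a parent from among the current best candidates.
            _, parent = random.choice(population[: max(1, len(population) // 4)])
            child = llm_mutate(parent)        # the LLM proposes a variation
            population.append((evaluate(child), child))
            # Keep the highest-scoring candidates for the next generation.
            population.sort(key=lambda pair: pair[0], reverse=True)
            population = population[:pop_size]
        return population[0]                  # (best score, best program)

The LLM never sees the objective directly; it only proposes variations, and the evaluator decides which survive.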


Yes, and that’s what a coding agent is too. Technically an agent is not (just) an LLM, but the two have become synonymous in discussions.


> that’s what a coding agent is too.

Not quite.


An AI agent mutates state (the repository) in order to get some metric (e.g. test suites) closer to passing. Same thing, really.


An AI agent isn't an evolutionary optimization loop.


It is when you’re using it right. The most effective way to use an AI agent is to give it a metric for success (ideally tests, but even a description of success), a state upon which it can effect changes (the repo) and it moves the state towards success by executing tool calls and evaluating the success criteria at each step.
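
As a rough sketch of that workflow (the helpers here are hypothetical stand-ins for real agent tooling, not any particular product's API):

    def agent_loop(repo, goal, llm_propose_edit, run_tests, apply_edit, max_steps=50):
        """Toy agent loop: mutate repository state until the metric passes.

        run_tests(repo) -> (passed: bool, feedback: str)
        llm_propose_edit(repo, goal, feedback) -> edit
        apply_edit(repo, edit) -> new repo state
        """
        passed, feedback = run_tests(repo)
        for _ in range(max_steps):
            if passed:
                return repo                                # success criteria met
            edit = llm_propose_edit(repo, goal, feedback)  # tool call
            repo = apply_edit(repo, edit)                  # mutate the state
            passed, feedback = run_tests(repo)             # re-evaluate
        raise RuntimeError("ran out of steps before meeting the success criteria")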


all mathematicians and scientists work with a feedback loop. that's what the scientific method is.


Not one that amounts to a literal, pre-supplied objective function that's run on a computer to evaluate their outputs.


That's exactly how a great deal of research level math is done.

In fact all open conjectures can be cast this way: the objective function is just the function that checks whether a written proof is a valid proof of the statement.

Is there a solution to this PDE? Is there a solution to this algebraic equation? Is there an optimal solution (i.e. we add an optimality condition to the objective function). Does there exist a nontrivial zero that is not equal to 1/2, etc.

I can't tell you how many talks I've seen from mathematicians, including Fields Medal winners, that are heavily driven by computations done in Mathematica notebooks, which are then cleaned up and formalized. That means that -- even for problems where we don't know the statement in advance -- the actual legwork is done via the evaluation of computable functions against an (explicit or implicit) objective function.


Existence problems are not optimisation problems and can't, AIUI, be tackled by AlphaEvolve. It needs an optimisation function that can be incrementally improved in order to work towards an optimal result, not a binary yes/no.

More importantly, a research mathematician is not trapped in a loop, mutating candidates for an evolutionary optimiser loop like the LLM is in AlphaEvolve. They have the agency to decide what questions to explore and can tackle a much broader range of tasks than well-defined optimisation problems, most of which (as the article says) can be approached using traditional optimisation techniques with similar results.


> Existence problems are not optimisation problems

Several of the problems were existence problems, such as finding geometric constructions.

> It needs an optimisation function that can be incrementally improved in order to work towards an optimal result, not a binary yes/no.

This is not correct. The evaluation function is arbitrary. To quote the AlphaEvolve paper:

> For example, when wishing to find largest possible graphs satisfying a given property, ℎ invokes the evolved code to generate a graph, checks whether the property holds, and then simply returns the size of the graph as the score. In more complicated cases, the function ℎ might involve performing an evolved search algorithm, or training and evaluating a machine learning model.

The evaluation function is a black box that outputs metrics. The feedback that you've constructed a graph of size K with some property does not tell you what you need to do to construct a graph of size K + M with the same property.
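
To illustrate, a toy version of such an evaluation function (triangle-freeness is a stand-in property of my choosing, not one from the paper):

    import itertools

    def evaluate(candidate_fn) -> float:
        """Black-box score for 'find the largest graph with property P'.

        candidate_fn is the evolved program: it returns (n, edges), with
        edges a set of frozenset pairs. The score reveals only the size
        of a valid graph, not how to construct a larger one.
        """
        n, edges = candidate_fn()
        has_triangle = any(
            frozenset((a, b)) in edges
            and frozenset((b, c)) in edges
            and frozenset((a, c)) in edges
            for a, b, c in itertools.combinations(range(n), 3)
        )
        return 0.0 if has_triangle else float(n)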

> a research mathematician is not trapped in a loop, mutating candidates for an evolutionary optimiser loop like the LLM is in AlphaEvolve.

Yes, they are in a loop called the scientific method or the research loop. They try things out and check them. This is a basic condition of anything that does research.

> They have the agency to decide what questions to explore

This is unrelated to the question of whether LLMs can solve novel problems

> most of which (as the article says) can be approached using traditional optimisation techniques with similar results.

This is a mischaracterization. The article says that an expert human working with an optimizer might achieve similar results. In practice that's how research is done by humans as I mentioned above: it is human plus computer program. The novelty here is that the LLM replaces the human expert.


> finding geometric constructions

Finding optimal geometric constructions. Every problem is an optimisation because AlphaEvolve is an optimiser.

> This is not correct. The evaluation function is arbitrary.

You say this and then show details of how the score is calculated. AlphaEvolve needs a number to optimise, because it is an optimiser. It can't optimise true/false.

> The feedback that you've constructed a graph of size K with some property does not tell you what you need to do to construct a graph of size K + M with the same property.

The feedback that you've constructed a graph of size K tells you that you've constructed a bigger graph than a competing solution that only constructed a graph of size K-1, and that your candidate is therefore a more promising starting point for the next round of mutation.

If you're trying to solve a "does there exist an X" problem, the information that none of your candidates found (or was) an X doesn't give you any information about which of them you should retain for mutation in the next step. You need a problem of the form "find the best X" (or, rather "find a good X") and for that you need a score of how well you've done. If you can find a score that actually improves steadily until you find the thing you're trying to prove the existence of then great, but generally these problems are "find the best X" where it's easy to come up with a load of competing Xs.

> The novelty here is that the LLM replaces the human expert.

That's not the claim at all. Tao said the benefits are scaling, robustness and interpretability, not that it can be operated by someone who doesn't know what they're doing.


> not that it can be operated by someone who doesn't know what they're doing.

It's operated by an LLM, not a human. There is no human in the loop.


The LLM doesn't write the objective function. It only generates candidate solutions to be evaluated by it. The human writes the objective function. That's how you define the optimisation to perform.


The objective function is the problem. The objective function is given in all problems.

In "find an X such that Y" the objective function is "is this an X? is Y satisfied?". In induction you have P and n to ratchet up the proof by increasing n such that P(n) holds.

Combining this reply with your previous one, it sounds like you're setting up a situation where the LLM can neither know the problem nor whether it's making progress on the problem. With those constraints no human could do mathematics either or really anything else.


What's the objective function for the Langlands Program?


It seems like the person you're responding to has decided a type checker that outputs 1 when the proof object is valid according to the rules & 0 otherwise should be considered a scoring function. The objective function in this case is not differentiable so the usual techniques from deep learning will not work & genetic search algorithms like AlphaEvolve can in theory be used in these cases. Someone still has to come up w/ the type system & verify the soundness of the rules b/c there is no finite codification of all valid & sound type systems like there are for problems in the linked blog post.


An evolutionary algorithm doesn't need a differentiable objective function, but it can't work with a simple "1 if you got it right, 0 otherwise". It is an iterative approach and at each step it needs to be able to distinguish "better" from "worse" candidates. You certainly can't just say to AlphaEvolve "off you go, I'll tell you when you got it right".

Taking the Collatz Conjecture I used as an example just now, you can trivially write an objective function that outputs 1 for a correct Lean proof of it and 0 for an incorrect one, but AlphaEvolve won't be able to work on that. It needs to be able to assess a collection of incorrect proofs to identify the most promising ones for the next step. I don't know how you'd even start on that, and it's certainly not what they've been doing with AlphaEvolve.
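
For concreteness, the trivially-writable objective in question might look like this (a sketch: it shells out to the Lean compiler, and a real harness would also pin the theorem statement so candidates can't just prove something easier):

    import subprocess
    import tempfile

    def score(candidate_lean_source: str) -> float:
        """1.0 if Lean accepts the file, 0.0 otherwise.

        A perfectly well-defined objective function, but useless for an
        evolutionary loop: every failed proof scores 0.0, so there is
        nothing by which to rank incorrect candidates against each other.
        """
        with tempfile.NamedTemporaryFile("w", suffix=".lean", delete=False) as f:
            f.write(candidate_lean_source)
            path = f.name
        result = subprocess.run(["lean", path], capture_output=True)
        return 1.0 if result.returncode == 0 else 0.0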


It can and it does work w/ such objective functions. Lots of people have used evolutionary algorithms to evolve chess playing neural networks¹ & they have been successful w/ very sparse reward signals where the final trajectory is scored w/ 0 or 1 according to a win condition. You can say this is not likely to work for proof search & I'd be inclined to agree, but the strategy has proven to work in simpler settings, so whether it can be used in more complex settings is yet to be determined. If Collatz is not independent of existing axiomatic foundations then a brute force search will find a solution, so any heuristics added on top that cut out paths to unsuccessful attempts will increase the probability of finding the proof object.

¹https://arxiv.org/abs/1711.08337
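
Worth spelling out how the sparse 0/1 signal becomes rankable in those chess experiments: an individual game returns only win or loss, but fitness is aggregated over many games, which yields a graded win-rate the population can be sorted by. A sketch (play_game is a placeholder for a real engine):

    def fitness(candidate, opponents, play_game, games_per_opponent=10):
        """Aggregate sparse game outcomes into a graded fitness score.

        play_game(candidate, opponent) -> 1 if candidate wins, else 0.
        Each game is a bare 0/1 signal, but the win-rate across many
        games gives the evolutionary loop something to rank by.
        """
        results = [
            play_game(candidate, opponent)
            for opponent in opponents
            for _ in range(games_per_opponent)
        ]
        return sum(results) / len(results)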


From that chess paper:

> Each position in the ECM test suite has a predetermined “best move”. Each chromosome processes all of the 879 positions, and for each position it attempts to find this predetermined best move as fast as possible.

> Instead of counting the number of correctly “solved” positions (number of positions for which the organism found the best move), we used the number of nodes the organism had to process in order to find the best move.

That isn't a 1-or-0 objective function, and the paper isn't an example of using an evolutionary loop in which the objective function doesn't give you any information on which candidates are better in a given iteration. Because that isn't possible.

Re brute-forcing by enumerating the countable set of proofs within a given formal system: people have been trying that since computers were invented, and it hasn't resulted in the magic proof factory yet. People continue to work on better and better heuristics for trimming the search, and I understand some of the stuff people have been doing in that direction with Lean is actually useful now, but there hasn't been a huge breakthrough, and nobody expects a system like that to spit out a proof of the Collatz Conjecture any time soon. More to the point of this discussion, it's not what AlphaEvolve does.

Anyway, I need to go to bed. It's been fun.


The same applies to proof search. Once you fix a finite foundational set of axioms the game proceeds exactly as in chess.


You can't use minimax with alpha-beta pruning for proof search, but that's sufficient to play chess at a high level. I don't see what you're seeing. Chess and mathematics are completely different kinds of problem.


In other words, proofs of existence are measure zero in the space of evolution?


> It seems like the person you're responding to has decided a type checker... should be considered a scoring function

To clarify, I didn't decide this; it is a valid scoring function in AlphaEvolve. The scoring function is generic and can even be an LLM writing prose giving feedback on the solution, followed by another LLM scoring that prose numerically. There needs to be a numeric score to rank solutions. Typically type checkers give more output than 1 or 0, though. For example, they'll often give you information about where the first error occurred.

That doesn't mean it's a great scoring function or even a good one. But it is a scoring function, and without any scoring function at all progress would be impossible. To the extent that math is about writing proofs, it's a valid and essential scoring function for any problem. In practice, to make progress you need more than just the ability to write a logical proof; you need to build on previous results, add extra conditions, compute examples, etc. But in the context of the discussion, the point is that there is always some way to measure progress, which is why AlphaEvolve includes this mechanism.
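
As a sketch of how richer checker output can be turned into a graded score (the check signature is hypothetical, and heuristics like this are easy to game, which is part of why it isn't a great scoring function on its own):

    def score(proof_source: str, check) -> float:
        """Graded score from type-checker output (toy sketch).

        check(source) -> list of (line_number, message) errors, [] if valid.
        Surviving further into the file before the first error, with fewer
        errors overall, scores higher; only a valid proof reaches 1.0.
        """
        errors = check(proof_source)
        if not errors:
            return 1.0
        total_lines = max(1, proof_source.count("\n") + 1)
        first_error_line = min(line for line, _ in errors)
        progress = first_error_line / total_lines
        penalty = 1.0 / (1.0 + len(errors))
        return 0.5 * progress * penalty   # strictly below any valid proof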

> Someone still has to come up w/ the type system & verify the soundness of the rules b/c there is no finite codification of all valid & sound type systems like there are for problems in the linked blog post.

This is true, but it's also true that typically mathematicians fix a logic or universe before getting down to work. So AlphaEvolve and human mathematicians are on equal footing in that respect.


> it's also true that typically mathematicians fix a logic or universe before getting down to work

This is true for programmers; it's not true for mathematicians. You can say programming is a subset of mathematics, but mathematics is more than programming, so proof search does not exhaust all the activities of a mathematician, though it does exhaust all the activities of a programmer.


> it's not true for mathematicians

It actually is true for mathematicians.

Most of us work in ZFC. Some people choose to work in constructive logic. Some people choose to only use countable choice. But we always know what logic we're working in and which theorems we can invoke. E.g. if you choose not to accept the axiom of choice you also have a sense of which results depend on Zorn's lemma and have to work around them. All of this is normal background for mathematicians.

So there's no need to allow the foundations of mathematics to vary unless you're working in logic and need to quantify over foundational systems or compare them. You would certainly never start a problem and then switch the logic midway through. That sort of thing wouldn't even be coherent.


You seem to have a very limited understanding of mathematics. You're welcome to stick to it but I'd recommend learning what mathematics is actually about before thinking you can reduce it all to proof search in a single & fixed finite axiomatic system.


> You seem to have a very limited understanding of mathematics. You're welcome to stick to it but I'd recommend learning what mathematics is actually about

You'd think I'd know what mathematics is about since I have a Ph.D. in it...

> fixed finite axiomatic system

ZFC is not a finite axiomatic system. Nowhere did we specify that axiomatic systems have to be finite.

> single...fixed

As I said above, there are people who study different logics. However, the topic of conversation is individual problems. An individual problem does not switch logics midstream, and anyone working on a problem works in the context of a logic.


You don't need a degree to understand mathematics, but plenty of people w/ degrees think they understand what it's all about. Seems a bit odd you have a degree in mathematics but lack a basic understanding of what counts as mathematical activity. I don't think you should be proud of that, but then again I don't know what they teach in PhD departments these days, so maybe that's what they told you you needed to think to get the degree.


I see. It sounds like you're coming at this from an outsider perspective as someone who hasn't formally studied math and doesn't know what research level mathematicians are taught.

There's nothing wrong with taking an interest in a subject as an amateur. I thoroughly support it. But knowing what is taught in PhD departments and what mathematicians do is a precondition for understanding whether what AlphaEvolve is doing is similar to it.

If I wanted to know whether some AI tool was doing what humans do for, say, song writing, the first thing I'd do is figure out how song writing is taught, how professionals think about song writing, etc. And this is true regardless of the fact that I enjoy, support and often prefer outsider musicians.


I think you're making too many assumptions. Something they should have taught you while you were getting your degree, but again, I'm not sure what they teach these days b/c it's been a while since I last listened to a graduate lecture on mathematics in a classroom.


The Langlands Program is a research program, not a single mathematical problem. It consists of many open conjectures with various levels of progress toward them. However, I think it's quite a victory that the goal posts have moved from "LLMs can't solve problems" to "LLMs probably can't solve all open conjectures in the Langlands Program in one shot."

But as I said way up above, if you have the statement of any particular problem you can just use the function that evaluates proofs as your objective function. If you were to do this in Lean, for example, you'd get compiler output that contains information you can use to see if you're on the right track.

In addition to the output of the proof program you'd probably want to score sub-computations in the proof as metrics. E.g. if you want to show that a map has finite fibers, you may want to record the size of the fibers, or a max over the size. If you need to know an element is not contained in some Levi subgroup then you may want to record information about the relevant Levi decomposition. This mimics things that humans know to score their progress as they're doing computations.
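
In AlphaEvolve terms, that amounts to having the evaluation function return a dictionary of metrics rather than a single score. A hypothetical sketch for the finite-fibers example (candidate_map and its methods are placeholders for problem-specific code):

    def evaluate(candidate_map, bound=10_000) -> dict[str, float]:
        """Score a candidate with auxiliary metrics, not a bare pass/fail.

        candidate_map.fiber_size(y, bound) counts points in the fiber
        over y, truncated at `bound` so an apparently-infinite fiber
        just maxes out the metric. These mimic the intermediate
        quantities a human would track by hand.
        """
        sizes = [candidate_map.fiber_size(y, bound)
                 for y in candidate_map.codomain]
        return {
            "max_fiber_size": float(max(sizes)),
            "fraction_within_bound": sum(s < bound for s in sizes) / len(sizes),
        }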


If you want a simple, well defined problem that a child can understand the statement of, how about giving me the objective function for the Collatz Conjecture?


I don't know, it feels exactly like how I work


I'll ask you the same question as ants_everywhere, then. What objective function do you give AlphaEvolve to ask it to attempt to prove or disprove the Collatz conjecture?


I don't see anything in what's presented here that refutes such claims. This mostly confirms that LLM-based approaches need some serious baby-sitting from experts, and that those experts can derive some value from them, but generally with non-trivial levels of effort and non-LLM-supported thinking.


Yes, applied research has yielded the modern expert system, which is really useful to experts who know what they are doing.


It's not the "modern expert system", unless you're throwing away the existing definition of "expert system" entirely, and re-using the term-of-art to mean "system that has something to do with experts".


I don't know what the parent was referring to, but IMO "expert system" is one of the more accurate and insightful ways of describing LLMs.

An expert system is generically a system of declarative rules, capturing an expert's knowledge, that can be used to solve problems.

Traditionally expert systems are symbolic systems, representing the rules in a language such as Prolog, with these rules having been laboriously hand derived, but none of this seems core to the definition.

A pre-trained LLM can be considered as an expert system that captures the rules of auto-regressive language generation needed to predict the training data. These rules are represented by the weights of a transformer, and were learnt by SGD rather than hand coded, but so what?
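
For comparison, the traditional shape being stretched here looks something like this toy forward-chaining sketch: an explicit fact base plus hand-written if-then rules (the medical rules are invented purely for illustration):

    facts = {"has_fever", "has_cough"}

    # Hand-written declarative rules: (premises, conclusion).
    rules = [
        ({"has_fever", "has_cough"}, "suspect_flu"),
        ({"suspect_flu", "short_of_breath"}, "refer_to_doctor"),
    ]

    def forward_chain(facts, rules):
        """Fire every rule whose premises hold until nothing new derives."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain(facts, rules))
    # {'has_fever', 'has_cough', 'suspect_flu'}

The claim above is that transformer weights play the role of the rule base, just learnt by SGD instead of written by hand.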


If you can extract anything resembling a declarative rule from the weights of a transformer, I will put you in for a Turing award.

Expert systems are a specific kind of thing (see https://en.wikipedia.org/wiki/Expert_system#Software_archite...): any definition you've read is a description. If the definition includes GPT models, the definition is imprecise.


Well, OK, perhaps not a declarative rule, more a procedural one (induction heads copying data around, and all that) given the mechanics of transformer layers, but does it really make a conceptual difference?

Would you quibble if an expert system was procedurally coded in C++ rather than in Prolog? "You see this pattern, do this".


Yes, it makes a conceptual difference. Expert systems make decisions according to an explicit, explicable world model consisting of a database of facts, which can be cleanly separated from the I/O subsystems. This does not describe a transformer-based generative language model. The mathematical approaches for bounding the behaviour of a language model are completely different to those involved in bounding the behaviour of an expert system. (And I do mean completely different: computer programs and formal logic are unified in fields like descriptive complexity theory, but I'm not aware of any way to sensibly unify mathematical models of expert systems and LLMs under the same umbrella – unless you cheat and say something like cybernetics.)

You could compile an expert system into C++, and I'd still call it an expert system (even if the declarative version was never written down), but most C++ programs are not expert systems. Heck, a lot of Prolog programs aren't! To the extent a C++ program representing GPT inference is an expert system, it's the trivial expert system with one fact.


AlphaEvolve isn't an LLM - it's an evolutionary coding agent that uses an LLM for code generation.

https://deepmind.google/blog/alphaevolve-a-gemini-powered-co...

This is part of Google/DeepMind's "Alpha" branding (AlphaGo, AlphaZero, AlphaFold) of bespoke machine learning solutions to tough problems.

It sounds like AlphaEvolve might do well on Chollet's ARC-AGI test, where this sort of program synthesis seems to be the most successful approach.

I find Tao's use of "extremize" vs "maximize" a bit jarring - maybe this is a more normal term in mathematics?


Sometimes you want to minimize


Read https://www.argmin.net/p/lore-laundering-machines

Given time, we may find out that the solutions in this paper were also in the literature, as was the case in the anecdotes from the linked article :)


Then its utility as a best search agent is even greater. It proves the statement: an LLM will find the needle in the haystack if the needle exists.


> best search agent

Except that, instead of telling you that the idea came from a particular source, it fools you into thinking it came up with the idea. In other words, it cannot be described as "search", but rather "laundering" as the article describes it.


That's not what "world-model" means: see https://en.wiktionary.org/wiki/world_model. Your [2] is equivocating in an attempt to misrepresent the state-of-the-art. Genie 3 is technically impressive, don't get me wrong, but it's strictly inferior to procedural generation techniques from the 20th century, physics simulation techniques from the 20th century, and PlayStation 2-era graphics engines. (Have you seen the character models in the 2001 PS2 port of Half-Life? That's good enough.)


Inferior in what sense? Genie 3 is addressing a fundamentally different problem to a physics sim or procgen: building a good-enough (and broad-enough) model of the real world to train agents that act in the real world. Sims are insufficient for that purpose, hence the "sim2real" gap that has stymied robotics development for years.


Genie 3 is inferior in the sense you just described: the sim2real gap would be greater, because it's a less accurate model of the aspects of the world that are relevant to robotics.


>.. that LLMs can only solve problems they have seen before!

This is a reductive argument. The set of problems they are solving are proposals that can be _verified_ quickly, where bad solutions can be easily pruned. Software development by a human — and even more so by teams — is not that kind of problem, because the context cannot efficiently hold (1) the design bias of individuals, (2) the slower evolution of the "correct" solution and its visibility over time, and (3) the difficulty of "testing" proposals: you can't build 5 different types of infrastructure proposals by an LLM — which are themselves dozens of small sub-proposals — _quickly_.


For the less mathematically inclined of us, what is in that discussion that qualifies as a problem that has not been seen before? (I don't mean this combatively, I'd like to have a more mundane explanation)


This is a useful summary given by another poster here

https://news.ycombinator.com/item?id=45833892

The novel results seem to be incremental improvements on some obscurely-named inequalities that I'm not personally familiar with, but I'm far from this field of maths


I had seen that post, and frankly it didn't pass my smell test of "LLM solves entirely new problems". But again, I'm not enough of a mathematician to know what "unsolved problems" means in the context of the ones they gave AlphaEvolve, and more than that, I'm entirely a skeptic on LLMs solving problems that don't exist in their training data, so I'm biased. However, the request for details is genuine. :)


It means something that is too far outside the training data. For example, if you try to make an LLM write a program in a strange or very new language, it will struggle with non-trivial tasks.


I understand what "a new problem for an LLM" is; my question is about what in the math discussion qualifies as one.

I see references to "improvements", "optimizing" and what I would describe as "iterating over existing solutions" work, not something that's "new". But as I'm not well versed in maths, I was hoping that someone who considers the thread definitive proof of that, like the parent seems to, is capable of offering a dumbed-down explanation for the five-year-olds among us. :)


> Hopefully this will finally stop the continuing claims[1] that LLMs can only solve problems they have seen before!

The AlphaEvolve paper has been out since May. I don't think the people making these claims are necessarily primarily motivated by the accuracy of what they're saying.


I think it's disingenuous to characterize these solutions as "LLMs solving problems", given the dependence on a hefty secondary apparatus to choose optimal solutions from the LLM proposals. And an important point here is that this tool does not produce any optimality proofs, so even if they do find the optimal result, you may not be any closer to showing that that's the case.


Well, there's the goal posts moved and a Scotsman denied. It's got an infrastructure in which it operates and "didn't show its work" so it takes an F in maths.


A random walk can do mathematics, with this kind of infrastructure.

Isabelle/HOL has a tool called Sledgehammer, which is the hackiest hack that ever hacked[0], basically amounting to "run a load of provers in parallel, with as much munging as it takes". (Plumbing them together is a serious research contribution, which I'm not at all belittling.) I've yet to see ChatGPT achieve anything like what it's capable of.

[0]: https://lawrencecpaulson.github.io/2022/04/13/Sledgehammer.h...


yeah but random walks can't improve upon the state of the art on many-dimensional numerical optimisation problems of the nature discussed here, on account of being easy enough to implement to have been tried already and had their usefulness exhausted; this does present a meaningful improvement over them in its domain.


When I see announcements that say "we used a language model for X, and got novel results!", I play a little game where I identify the actual function of the language model in the system, and then replace it with something actually suited for that task. Here, the language model is used as the mutation / crossover component of a search through the space of computer programs.

What you really want here is to represent the programs using an information-dense scheme, endowed with a pseudoquasimetric such that semantically-similar programs are nearby (and vice versa); then explore the vicinity of successful candidates. Ordinary compression algorithms satisfy "information-dense", but the metrics they admit aren't that great. Something that does work pretty well is embedding the programs into the kind of high-dimensional vector space you get out of a predictive text model: there may be lots of non-programs in the space, but (for a high-quality model) those are mostly far away from the programs, so exploring the neighbourhood of programs won't encounter them often. Because I'm well aware of the flaws of such embeddings, I'd add some kind of token-level fuzzing to the output, biased to avoid obvious syntax errors: that usually won't move the embedding much, but will occasionally jump further (in vector space) than the system would otherwise search.

So, an appropriate replacement for this generative language model would be some kind of… generative language model. Which is why I'm impressed by this paper.

There are enough other contributions in this paper that slotting a bog-standard genetic algorithm over program source in place of the language model could achieve comparable results; but I wouldn't expect it to be nearly as effective in each generation. If the language model is a particularly expensive part of the runtime (as the paper suggests might be the case), then I expect it's worth trying to replace it with a cruder-but-cheaper bias function; but otherwise, you'd need something more sophisticated to beat it.
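
A sketch of the search scheme described above (embed and decode are hypothetical stand-ins for a program-embedding model, and a simple Gaussian perturbation stands in for the token-level fuzzing):

    import numpy as np

    def neighbourhood_search(seeds, embed, decode, evaluate, rounds=50, step=0.1):
        """Toy exploration of the vicinity of successful candidates.

        embed(program) -> np.ndarray; decode(vector) -> program text.
        Assumes nearby vectors decode to semantically-similar programs,
        per the density argument above.
        """
        pool = [(evaluate(p), p) for p in seeds]
        for _ in range(rounds):
            pool.sort(key=lambda pair: pair[0], reverse=True)
            _, best = pool[0]                 # most promising program so far
            vec = embed(best)
            neighbour = vec + step * np.random.randn(*vec.shape)
            program = decode(neighbour)
            pool.append((evaluate(program), program))
        return max(pool, key=lambda pair: pair[0])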

(P.S.: props for trying to bring this back on-topic, but this subthread was merely about AI hype, not actually about the paper.)

Edit: Just read §3.2 of the paper. The empirical observations match the theory I've described here.


A random walk could not do the mathematics in this article, which was essentially the entire starting point for the article.


well, it produced not just the solutions to the problems but also programs that generate them, which can be reverse-engineered



