Hacker News

Do you understand the relevant fundamental difference between SAT and neural net approaches? One is a machine learning approach, the other is not. We know the computational complexity of SAT solvers; they're fixed algorithms. SAT doesn't learn from more data. It has performance limits, and that's the end of the story. BTW, as I mentioned in my other comment, people have been trying SAT solvers in the CASP competition for decades. They got blown away by transformer-based approaches.
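To make the "fixed algorithm" point concrete, here's a minimal sketch (not how production solvers like MiniSat work, which use CDCL, but illustrative): a brute-force CNF satisfiability check. Its worst case is 2^n assignments no matter how many formulas it has processed before; there is no training signal anywhere.

```python
from itertools import product

def sat_brute_force(clauses, n_vars):
    """Decide satisfiability of a CNF formula by exhaustive search.

    clauses: list of clauses; each clause is a list of ints, where
    literal k means variable k is true and -k means it is false.
    This is a fixed procedure: worst case 2^n assignments, and seeing
    more instances never changes its behavior -- nothing is "learned".
    """
    for assignment in product([False, True], repeat=n_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(lit_true(l) for l in clause) for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2 or x3) -- satisfiable
print(sat_brute_force([[1, 2], [-1, 2], [-2, 3]], 3))  # True
# x1 and not x1 -- unsatisfiable
print(sat_brute_force([[1], [-1]], 1))  # False
```

Modern solvers prune that search dramatically, but the contrast with ML stands: their per-instance performance is a property of the algorithm, not of accumulated data.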

Such approaches exist, and they've been found wanting, and no amount of compute is going to improve their performance limits, because they aren't ML approaches with scaling laws.

This is definitely not some unfair conspiracy against SAT, and probably not against the majority of pre-transformer approaches either. I am sympathetic to the concern that transformer-based research is getting too much attention at the expense of other approaches. However, I'd think the success of transformers makes it more likely than ever that demonstrably promising alternative approaches would get funding, as investors try to beat everyone to the next big thing. See quantum computing funding, or funding for way-out-there ASIC startups.

TL;DR: I don't know what is meant by the "same treatment" for SAT solvers. Funding is finite and goes toward promising approaches. If there are "at least as promising" approaches, go show clear evidence of that to a VC and I promise you'll get funding.




