It's worth noting that an issue with scrypt for proof-of-work cryptocurrencies is that it is expensive to check. The ideal problem is memory-hard to solve but can be checked cheaply. This is what Equihash (used by Zcash, https://www.internetsociety.org/sites/default/files/blogs-me...) attempts to do, though proving its memory hardness seems more arduous than in the case of scrypt.
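To see why scrypt checking is costly, here is a minimal sketch using Python's `hashlib.scrypt` (the `n=2**14, r=8` parameters and the target are illustrative assumptions): the verifier has to run the same memory-hard function once per claimed solution, so the check is cheap only relative to mining, not in absolute terms.

```python
import hashlib

# Illustrative parameters (assumption): n=2**14, r=8 needs 128*r*n = 16 MiB of RAM.
def scrypt_unit(nonce: int) -> int:
    digest = hashlib.scrypt(nonce.to_bytes(8, "big"), salt=b"block-header",
                            n=2**14, r=8, p=1, dklen=32)
    return int.from_bytes(digest, "big")

TARGET = 1 << 252  # easy target so the demo finishes in a few attempts

# Miner: repeat the memory-hard unit until the output falls below the target.
nonce = 0
while scrypt_unit(nonce) >= TARGET:
    nonce += 1

# Verifier: must also pay one full 16 MiB scrypt evaluation to check the claim;
# there is no shortcut that avoids the memory-hard computation.
assert scrypt_unit(nonce) < TARGET
```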
The status quo that SNARKs require a lot of memory is simply due to the current available constructions; optimisations and new constructions will certainly change that.
While building SNARK proofs is very memory-intensive in itself, it is far from optimization-free, which is one of the primary requirements for a PoW...
I did get your point. But if the onus to produce such a proof is on the miner before they can broadcast the next block, then the PoW is no longer optimization-free.
Ah I see. It's true, but there's a good chance these optimizations would be discovered independently, progressively, and percolate into the ecosystem. Turning to ASICs was an "optimization" that was replicated and copied. It's true that there is a lot more secret sauce to optimizing a SNARK, but if mining drives improvement in SNARK efficiency, I would call that a win.
Is this really a major pain point? You have to run billions of units of the work to find the proof. You run exactly one unit of the work to validate the proof.
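A quick sketch of that ratio (SHA-256 here stands in for the scheme's actual unit of work, an assumption for illustration): the miner evaluates the unit many thousands of times in expectation, while the validator evaluates it exactly once.

```python
import hashlib

def unit(nonce: int) -> int:
    # One "unit of the work": a single hash evaluation.
    h = hashlib.sha256(b"block-header" + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big")

TARGET = 1 << 240  # miner must find unit(nonce) below this (~65k tries expected)

# Miner: runs the unit over and over until the target is hit.
nonce, attempts = 0, 1
while unit(nonce) >= TARGET:
    nonce, attempts = nonce + 1, attempts + 1

# Validator: runs the unit exactly once to accept or reject the block.
assert unit(nonce) < TARGET
print(f"mined in {attempts} attempts; validated in 1")
```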
The units can be set somewhat arbitrarily. Although it is important for the unit to be memory-hard enough not to accelerate well on a GPU.
I would think that's a rather wide "target" which can be hit by scrypt at a certain difficulty.
Needless to say, it's still possible to get it wrong, e.g. Litecoin.
There's something to be said for both. It's true that there is a difficulty scaling intrinsic to the Bitcoin protocol (which it had to have, as it was based on fast functions) to allow arbitrary slowdown at the brute-force level. However, ideally it would be nice to have an asymmetry there.
Just think of it purely in the manner of "why run exactly one unit of the work? why not run a hundredth of that unit to validate the proof?" An asymmetry allows you to more quickly verify the ledger, which is both a startup cost and a lesser ongoing cost for everyone.
I'm wondering, though, if there is a connection with asymmetric encryption. Imagine a one-way function h and two functions f(x), g(x) such that f(g(x)) = x. Suppose the task is "find M such that g(h(M)) lies within some narrow subset." Then if g is much slower than f, you can verify a pair (M, g_h_M) not by verifying that g(h(M)) = g_h_M (slow) but rather by verifying that f(g_h_M) = h(M) (much faster). The importance of h is just that for the most obvious application (f(x) = x^E modulo N, g(x) = x^D modulo N), you also have that g(f(x)) = x, and this is a relatively common property of inverse functions.
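A toy sketch of this f/g/h construction with textbook RSA exponents (all parameters are illustrative assumptions; with realistic key sizes the gap between the short public exponent E and the long private exponent D makes f far faster than g):

```python
import hashlib

# Toy RSA trapdoor (assumption: tiny primes p=61, q=53 for illustration only).
N, E, D = 3233, 17, 2753  # E*D ≡ 1 (mod φ(N)=3120), so f(g(x)) = x mod N

def h(m: bytes) -> int:
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % N

def f(x: int) -> int: return pow(x, E, N)  # fast: short public exponent
def g(x: int) -> int: return pow(x, D, N)  # slow: long private exponent

SUBSET = 200  # "narrow subset": outputs below this threshold

# Miner: search for M with g(h(M)) in the subset, paying the slow g each try.
M = 0
while (g_h_M := g(h(str(M).encode()))) >= SUBSET:
    M += 1

# Verifier: instead of recomputing the slow g(h(M)), check the claimed pair
# (M, g_h_M) via the fast direction: f(g_h_M) == h(M).
assert f(g_h_M) == h(str(M).encode()) and g_h_M < SUBSET
```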
Validation needs to be as light as possible in absolute terms (not just relative to the computation) for DDoS protection. An attacker could spam you with fake blocks and force you to compute expensive hashes just to realize they are fake. Also consider the case of building a lightweight hardware device to act as an SPV client.