Scrypt is Maximally Memory-Hard (iacr.org)
147 points by cperciva on Oct 18, 2016 | hide | past | favorite | 74 comments


Wait a minute, there's a time memory tradeoff attack for scrypt. Argon2 was developed precisely to remedy that problem. And now there's a proof Scrypt is the best anyone can do? This would mean a similar attack is possible on Argon2 as well, even if we haven't found it yet.

This sounds like a significant result, so I'm a bit skeptical right now. Perhaps the paper doesn't mean what I think it means?


Time-memory tradeoffs don't reduce the cost, which is determined by the time-area product (or technically the integral of area over time).


After you perform the tradeoff, you can still use ASICs to reduce your costs. We regular users only have an x86_64 box. Ideally we want the ASIC to be just as expensive.

And by the way, I'm not sure all time-memory tradeoffs leave the time-area product unchanged: some don't reduce it, but I bet some do (which is why they're called "attacks").

That said, my knowledge is quite sketchy here. Any recommended reading?


Perhaps I was not clear. While memory costs about the same in all settings, processing power does not. Whatever part of an attack can be ASICified will be cheaper than it would have been if executed on stock hardware.

Time memory tradeoffs, even if they don't reduce the time-area product, do increase the amount of silicon that can be ASICified. The additional computations might very well cost less than the equivalent memory lookups.

I'd like to close the gap between stock hardware and ASICs. This paper's abstract suggests that we cannot. That's a bummer.


Ah, found it. I just had to read the damn paper. Turns out I was confusing memory hard functions, which allow for time/memory tradeoffs, and memory bound functions, which do not.

Scrypt is hard, but not bound. I believe Argon2d attempts to be bound as well. Argon2i however might just be hard, at least in theory, because its memory accesses do not depend on the secrets, making them predictable.


What's "area" here? Silicon size?


Yes. Square mm of silicon, number of transistors, words of memory -- they're all equivalent asymptotically.


I don't think they are equivalent. Routing area overhead scales as the square root (or at least the cube root, if you're willing to optimize the hell out of things) of the total memory on chip.


I don't think there is a time-memory attack on scrypt. There is a time-area attack.

AFAIK area is what's expensive for an ASIC, not just space. Thing is, memory × time is not an ideal measure for "area used for memory on chip" × time.

Routing wires to and from memory also require area on the chip, a lot of it if you want a lot of memory. Storing 1KB in a chip with 1KB of total memory takes a lot less area (and energy to read/write) than storing 1KB on a 1GB chip.

scrypt (probably in contrast to Argon2d) can be computed either with lots of memory in little time or with little memory in lots of time (and any variant in the middle too, as long as the product is n^2, as shown in that new paper). But the latter approach doesn't have all that routing overhead. So AFAIK that's what the ASICs in Litecoin miners use. The algorithm still has space-time n^2, so it doesn't contradict the paper.
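A toy version of scrypt's core loop (ROMix) shows this tradeoff concretely. This is only a sketch, not real scrypt: SHA-256 stands in for the actual mixing function and the parameters are tiny.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

def integerify(x: bytes, n: int) -> int:
    return int.from_bytes(x[:8], "little") % n

def romix_full(seed: bytes, n: int) -> bytes:
    # Lots of memory, little time: store all n intermediate values.
    v, x = [], H(seed)
    for _ in range(n):
        v.append(x)
        x = H(x)
    for _ in range(n):
        x = H(xor(x, v[integerify(x, n)]))
    return x

def romix_tmto(seed: bytes, n: int) -> bytes:
    # Little memory, lots of time: store nothing and recompute each V[j]
    # from the seed on demand -- O(1) space but O(n^2) hash calls, so the
    # space-time product stays around n^2 either way.
    def v(j: int) -> bytes:
        x = H(seed)
        for _ in range(j):
            x = H(x)
        return x
    x = v(n)
    for _ in range(n):
        x = H(xor(x, v(integerify(x, n))))
    return x
```

Both evaluations produce the same digest; only the space/time split differs.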

The other (IMHO much bigger) reason why we have Litecoin ASICs is that the scrypt parameters used in that protocol are just way too small to give any serious memory-hardness properties.


Argon2i has worse parallel time-memory tradeoff attacks than scrypt.

E.g. https://eprint.iacr.org/2016/115

Argon2d probably not though.


The point of scrypt is to utilize the commodity hardware to the max. Commodity hardware gives you a lot of memory IO for the buck. You can't spend the same dollars and get 100x more IO on an ASIC. But you can spend the same dollars and get 100x (actually even more) faster at typical hashes like SHA256.

The point is to cut down on an attacker's possible advantage by using specialized hardware. Scrypt uses memory IO so that you can "get your money's worth" by running it for 100ms on an Intel CPU.

This is old-school thinking and a defensive play at trying to prevent offline attacks. The idea is to impose a run-time cost on each attempt to verify a password. The defender spends it every time they try to log someone in. The attacker spends it for every guess at trying to crack a password.

The new approach (well I'm quite biased since I founded a company that does this) is to add a one-time cost which is unbounded and does not impose run-time latency.

We do this with a large amount of random data. Many TBs or perhaps as much as a PB. The salted hashes are entangled with the data pool with a simple API, in such a way that an attacker would need to steal the entire data pool to run an offline attack. Stealing only part of the data pool does not let you crack any passwords, even if they are trivially simple.

The bigger the data pool, the easier it is to defend, and the faster the system runs and more hashes/second it supports. So, in essence, it's high-performance scalable security that actually stops offline attacks even after the site is breached.

You use this new approach as an additive layer on top of the existing scrypt hashing that you are already doing.


>The bigger the data pool, the easier it is to defend, and the faster the system runs and more hashes/second it supports.

Which means these data pools will end up stored in the cloud. And in the end of the day only big players can afford sufficiently large pools. So your model encourages centralization and outsourcing of security, doesn't it?


I completely agree that very few companies can afford to run something like this on their own.

So with our cloud service, you keep control of your hashing, your users, your authentication framework. You do everything you can to protect your own network and prevent a breach.

But then add a, yes, centralized additional layer of security on top of that, which protects you even after a breach.

So I absolutely agree centralization is generally evil. But what we are is a common defense fund. Everyone shares the cost of the data pool, so you pay a fraction of the overall cost, while enjoying the full security benefit. You do this without turning over control or exposing any private data.

Compare this to, say, CloudFlare: we are not as worrisome because we don't see the username or the password, and we can't make an invalid login look valid.

Companies are turning away from passwords and looking at SSO (talk about centralization) because storing passwords is too much of a liability.

Our goal is to eliminate the liability. Eliminate password breaches, so that a simple, memorable password is secure. So it's safe for any company to store passwords and be responsible for authenticating their own users. It's a very lofty goal, and I agree it's not fully without any trade-offs, but we've been meticulously careful to design a protocol which minimizes those trade offs to the greatest degree possible.

So my hope, my goal is that this tech ultimately helps maintain and support decentralized password authentication, while providing an extremely cost-efficient way to secure those hashes.


I'm intrigued.

Do you have a link or something that I can read more about the "new approach" as you say?




This is snake oil. You are just doing 4 additional salted HMACs and pretending the data is more secure. This doesn't give you any additional security.

     H1 = HMAC(Salt1, Password)
     H2 = HMAC(Salt2, H1)
     ---send to Company---
     H3 = HMAC(Salt2, H2)
     H4 = HMAC(Salt3, H3)
     ---send back to customer---
     Compare H4 with known value
Ref to technical white paper: https://taplink.co/wp-content/uploads/2016/10/TapLink_Blind_... (I'm treating the AppID as a hash, as I'm ignoring the lookup stage for the massive salt).

Effectively your process can be described as:

     Hash = HMAC( Salt3, HMAC( Salt2, HMAC( Salt2, HMAC( Salt1, Password)))) 
But with network transmission?!?!
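Written out with Python's stdlib hmac module (a sketch of the chained view above; salt and password values are placeholders):

```python
import hashlib
import hmac

def blind_chain(password: bytes, salt1: bytes, salt2: bytes, salt3: bytes) -> bytes:
    # The critique's reading: the whole protocol collapses to nested HMACs,
    # with a network round-trip in the middle of the chain.
    h1 = hmac.new(salt1, password, hashlib.sha256).digest()
    h2 = hmac.new(salt2, h1, hashlib.sha256).digest()  # --- sent to company ---
    h3 = hmac.new(salt2, h2, hashlib.sha256).digest()  # computed by company
    h4 = hmac.new(salt3, h3, hashlib.sha256).digest()  # --- sent back ---
    return h4
```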

Multiple salts != more security

https://stackoverflow.com/questions/12753062/multiple-salts-...

https://programmers.stackexchange.com/questions/115406/is-it...

Also the step of shipping the final hash back to the customer is SO dangerous. TLS/SSH is secure, but one misconfiguration/bug/0-day and you leak dozens of credentials.

This is a really stupid model. Furthermore, salts larger than the final output size offer no additional security. If you have 256 bits of output, you only need 256 bits of input. Larger inputs technically risk reducing the entropy of the final output.

This whole security model assumes taplink.co will never get MITM. This can't be guaranteed. This also means your whole system goes down when taplink.co is out.


> Furthermore, salts larger than the final output size offer no additional security. If you have 256 bits of output, you only need 256 bits of input. Larger inputs technically risk reducing the entropy of the final output.

If I understand that correctly, then for a 256bit hash with an internal state of 1024 bit, initialized to a fixed and public nothing-up-my-sleeve set of values, increasing the salt from 256 bit to 1024 bit - going from 1 random bit for every 4 known bits to 1 for 1 - reduces entropy.

If from there we assume that the hash uses a Merkle-Damgård block construction, for which it is proven that the only loss of entropy comes from the compression function, this would - for me - mean that the chosen compression function loses more entropy on random bits than on the initialization vector, which makes it special, although it is supposedly chosen to be not special in any way.

This confuses me, but I'm just a sysadmin not a cryptographer, so that is ok. Do you have any links for me to follow up on this?


That's a lot to respond to at once. :-)

I will say it's important to understand how the Salt we return is generated. We use a massive pool of data as effectively the internal state of a hashing function. This means you need all the data in order to perform the same calculation.

The value sent to us is just the hash of a salted hash. By itself it is useless. The value we return is Salt2 -- also useless on its own. This is very purposeful. If the TLS channel is owned, nothing of value is lost. The Salt1 is still private on the site's server, and no passwords can be cracked without it.

Availability is a legitimate concern. Similar to SMS multifactor, this is an added 3rd party dependency. We direct peer to many of our customers and route over private IP space to avoid a potential DDoS.


      I will say it's important to understand how the Salt we return is generated. 
Not in the slightest. It is the HASH of a HASH of a HASH. There is no academic rigor to support that this gives you any additional security, and a lot of academic rigor that states it is a moot operation.

      The Salt1 is still private on the site's server, and no passwords can be cracked without it.
1) If you argue any SALT is private... Then WHY EVEN USE SALTS?!?! If something in the database is private/secure, why not use cleartext passwords? The whole idea of using SALT+HMAC is that NOTHING stored is private/secure. You cannot EVER assume SALT(s) are private.

2) Cracking the password is irrelevant. All that is needed is:

      SALT2 + Web transferred hash
Then the credentials are gained. Yes, learning somebody's password, or several million people's passwords, is a fun experiment, but gaining access to the website is also an attack vector, one your system is making easier.

For a traditional HMAC/Scrypt/Bcrypt recovering the password is a critical part of this as the authentication mechanism is a single black box program. Input is SALT+Password+Hash. Output is an auth token.

The system you describe breaks this dichotomy. No longer must the password be broken to gain access.

:.:.:

       Availability is a legitimate concern
An extreme one

       Similar to SMS multifactor
SMS multi-factor is deprecated. Faking SMS from an arbitrary phone number is extremely trivial in practice. Look into Dave Kennedy's work on social engineering.

Multi-Factor auth is best done via Tokens or TOTP.

       We direct peer to many of our customers and route over private IP space to avoid a potential DDoS.
So instead you want to just install your own box on the customers network? This opens up even more headaches about managing and authenticating your access to their private LAN. Then you get into patching agreements, possible OS limitations, security audits. This is a horrible solution for both parties.


I'm sorry if I'm not explaining it right, there's definitely something lost in transmission here.

We are keying the hash (HMAC), keyed by a value derived from the data pool. Please don't take the 4 line pseudo-code too literally.

The value stored in the database (Hash2) can only be used to verify a password if you can also complete the blind hash which is blinded by the data pool.

I can say this approach has been vetted by well-known cryptographers (Solar Designer, Sc00bz) at Passwords^15, as well as by industry crypto wonks at MITRE and elsewhere. It's certainly not snake oil.

The salt kept on the site's server means that the site must be breached in order for a successful attack to be mounted. It's why we call this a fully additive layer of security. You need the Salt from the site, and the entire data pool from us, to mount an offline attack.

2) Why would you say that? It's absolutely not the case that you can login with Hash1 and Salt2.

If you intercept Hash1 and Salt2 then you may know Hash2 but you still cannot login, and you cannot try to crack it without Salt1. Again, this is all assuming TLS is broken in which case you can just inject your own JavaScript onto the page and steal passwords in clear text.

By direct peering I mean programs such as AWS DirectConnect which gives us 10Gbps on their network and private IP access to our peers. Nothing to get too excited about.

I'll be back online in about an hour if this still doesn't answer the basic questions around the security of the construct.

EDIT: I do not mean to say solardiz or sc00bz have personally endorsed our product. Only that we have all worked on and published write ups on the same general approach (using large data pools with bounded network links) to solve the password security problem.


While security does not rely on simplicity, a simple system is much easier to reason about. What you add is a whole stack of complexity through multiple hashes, entropy generation (on your end), and network transport (TLS).

There's no way you could convince me to use your system over a basic KDF implementation. The only people you're going to convince to use this protocol is someone who doesn't have experience in the field, which is why I'd consider your solution snake oil.


I've talked to many cryptographers in the field. Universally they appreciate the solution for its simplicity. We don't use any new crypto - the whole construct is based on a CS-PRNG, hashing and HMAC.

You talk to this service just like you talk to any service over the LAN or WAN: through an encrypted channel. Gone are the days when you can just dispatch a request over the LAN and assume you're good. We are happy to set up dedicated machines with spiped as well as TLS.

But certainly you look at solutions like CloudFlare or even dare I say Firebase, and the industry has moved far beyond your level of paranoia.

I don't want you to use it instead of your basic KDF, but in addition to / after your KDF.

13 million Americans had over $15 Billion stolen last year in cyber-heists and almost 70% of those attacks were using a stolen password. The basic KDF is not working, and it's time to stop blaming the user for not having 69 bits of entropy on their password and start giving companies the tech they need to actually secure their passwords.


It occurs to me that another benefit of this approach is that it gives you ample time and warning that an attack is occurring, because unless the attacker has physical access to the machines and can just plug in a drive, you will know that a ton of data is crossing the wire.


Precisely! We monitor the network traffic very closely. The only data over the wire is API requests/responses which are counted. Every 5 minutes we compare that against the actual number of requests performed according to the application counters. If the traffic doesn't line up, it pages the whole ops team.

Every day we are looking at total network egress. It would take an attacker over 200 days to move the data pool at full line rate. It gives us time to detect and remediate the inevitable breaches without ever exposing any passwords to an attack.

The best part is we don't see the login, we don't see the password, we just see the hash. And without the salt it's just a random number. We don't even know if the login is valid or invalid, and there's nothing we could do to make an invalid login appear valid. So our customers effectively retain full control over their authentication process.

The hash determines where we read from the data pool. 64 bytes read from each of 64 uniformly distributed locations. The 4K result is hashed and the digest is sent back in the response.

On the customer side, they use the response to run one HMAC and save the result (we call it Hash2). The original hash (Hash1) can be discarded, or if you want a way to leave the service sometime in the future, encrypted using a public key with a semantically secure (non-deterministic) algorithm.
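A sketch of the described lookup. Only the shape comes from the comment (64 reads of 64 bytes each, hashed down to one digest); the way offsets are derived from the incoming hash is an assumption for illustration.

```python
import hashlib

def blind_hash(hash1: bytes, pool: bytes) -> bytes:
    reads = []
    for i in range(64):
        # Derive a pool offset for read i from the incoming hash
        # (hypothetical derivation; the real scheme is not published here).
        d = hashlib.sha256(hash1 + i.to_bytes(4, "big")).digest()
        offset = int.from_bytes(d[:8], "big") % (len(pool) - 64)
        reads.append(pool[offset:offset + 64])
    # 64 x 64 bytes = 4 KB of pool data, hashed into the returned Salt2.
    return hashlib.sha256(b"".join(reads)).digest()
```

Without the full pool, an attacker cannot reproduce the reads, so they cannot recompute Salt2.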

Large enterprises where we are doing pilots also tell us they like this because it defends against insider threats as well.


Ok but how do you associate Hash2 with a user account when the user logs in?


Does this help?

  login(username, password)
    User = GetUser(username)
    Hash1 = Hash(User.Salt + password)
    Salt2 = BlindHash(Hash1)
    Hash2 = Hash(Salt2 | Hash1)
    return Hash2 == User.Hash
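The pseudocode above, fleshed out as runnable Python. Here blind_hash is a local stand-in for the remote service call (an assumption, since the real Salt2 depends on the provider's data pool and cannot be computed locally):

```python
import hashlib
import hmac

def blind_hash(hash1: bytes) -> bytes:
    # Stand-in for the remote BlindHash API call.
    return hashlib.sha256(b"data-pool|" + hash1).digest()

def register(salt: bytes, password: str) -> dict:
    hash1 = hashlib.sha256(salt + password.encode()).digest()
    salt2 = blind_hash(hash1)
    hash2 = hmac.new(salt2, hash1, hashlib.sha256).digest()
    return {"salt": salt, "hash": hash2}  # only Salt1 and Hash2 are stored

def login(user: dict, password: str) -> bool:
    hash1 = hashlib.sha256(user["salt"] + password.encode()).digest()
    salt2 = blind_hash(hash1)
    hash2 = hmac.new(salt2, hash1, hashlib.sha256).digest()
    return hmac.compare_digest(hash2, user["hash"])
```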


Is it wrong to think of you as a really secure, off-site salt provider? At the end of the day you are just providing a really good source of secure random bytes that are stored for the user.

What is the security benefit of sending you the actual hash of the password? Why not just the hash of a unique user token?


That's a good way to think of it.

It's important to use the password as part of the entropy so that an attacker has to make a Blind Hashing request for each attempt to crack a password versus being able to make a single request to effectively unblind it.


Actually, I think a better way to describe this is user specific peppers instead of calling it a salt.


The thing sent to their service shouldn't be known by anyone but the user; then it can't be stolen from the website.


You store Hash2 in your database.


> We do this with a large amount of random data. Many TBs or perhaps as much as a PB.

What's the tradeoff in using a much smaller pool and severely rate limiting its network interface?


The protocol involves about 64 bytes in and 64 bytes response. Technically you could shrink it a bit if you needed, but let's start there.

If you want 1,000 logins per second you limit yourself to 64,000 bytes / sec egress. This is hand wavy - there is some overhead but I'm not counting front-end traffic, TLS handshakes, etc. just the request/response against the internal data pool server.

If you want it to take 100 days at full rate to steal the pool, then that works out to 64,000 * 60 * 60 * 24 * 100 = 552GB.

You can try to set the limit even lower, to allow an even smaller pool but that's the basic math. Just 10 logins/sec --> 5.5GB.
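The back-of-the-envelope math above as a quick sketch (64 bytes per response, per the comment):

```python
BYTES_PER_LOGIN = 64   # response size per BlindHash request
SECONDS_PER_DAY = 86_400

def min_pool_bytes(logins_per_sec: int, days_to_exfiltrate: int) -> int:
    # The pool must be at least as large as the data an attacker could pull
    # at the full permitted egress rate over the target exfiltration window.
    return BYTES_PER_LOGIN * logins_per_sec * SECONDS_PER_DAY * days_to_exfiltrate

# 1,000 logins/s over 100 days -> ~553 GB; 10 logins/s -> ~5.5 GB
```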

But if you depend too much on software limits and not PHY layer limits on the network speed you certainly run the risk of having your software limits blown away and the data pool slipping away much faster than you expected. You want a physical limitation to ensure the data will not get away from you over the network.

When you get to the point where a successful attack would require physically removing a hundred disks which are self-encrypted SSDs then you sleep more soundly at night!


What happens if someone is sniffing the line? It wouldn't even need to be your fault, it could be some logger on your customer's server... i.e. do you have any protection against someone including this in the client code:

    DEBUG("testing external service... " + Hash1)
    Salt2 = call_external_salt_service_api(Hash1)
    DEBUG("service returned: " + Salt2)
If someone was logging this traffic, wouldn't this allow your service to be treated as a black box and nullify any additional security?

They wouldn't need to steal the full pool in this case, as you are sending back the exact hash/salt values needed for every user that logs in.


It's worth noting that an issue with scrypt for proof-of-work cryptocurrencies is that it is expensive to check. The ideal problem is memory-hard to solve but can be checked cheaply. This is what equihash (used by Zcash: https://www.internetsociety.org/sites/default/files/blogs-me...) attempts to do, though proving the memory hardness seems more arduous than in the case of scrypt.


> proving the memory hardness seems more arduous

In fact, several submissions to the Zcash Open Miner Contest, at

https://zcashminers.org/submissions

exhibit time-memory tradeoffs that were not considered in that paper, invalidating some of its claims.

My own, memory latency bound, proof-of-work called Cuckoo Cycle

https://github.com/tromp/cuckoo

offers $5000 in bounties for invalidating its claims of near-optimal implementations and trade-off resistance.


Hopefully, in the near future, we can just slap a recursive SNARK on scrypt (or any memory hard hash function for that matter) and call it a day.


The status quo that SNARKs require a lot of memory is simply due to the currently available constructions; optimisations and new constructions will certainly change that.


While building snark proofs is very memory intensive in itself, it is far from optimization free, which is one of the primary requirements for a PoW...


Not my point. The idea is that you can obtain a cheaply checkable proof that you computed a memory bound function, such as a scrypt hash.


I did get your point. But if the onus to produce such a proof is on the miner before they can broadcast the next block, then the PoW is no longer optimization free.


Ah I see. It's true, but there's a good chance these optimizations would be discovered independently, progressively and percolate into the ecosystem. Turning to ASIC was an "optimization" that was replicated and copied. It's true that there is a lot more secret sauce to optimizing a SNARK, but if mining drives improvement in SNARK efficiency, I would call that a win.


Is this really a major pain point? You have to run billions of units of the work to find the proof. You run exactly one unit of the work to validate the proof.

The units can be set somewhat arbitrarily. Although it is important for the unit to be memory-hard-enough to not accelerate well on a GPU.

I would think that's a rather wide "target" which can be hit by scrypt at a certain difficulty.

Needless to say, it's still possible to get it wrong, e.g. Litecoin.


There's something to be said for both. It's true that there is a scaling intrinsic to the Bitcoin protocol (which it had to have, as it was based on fast functions) to allow arbitrary slowdown at the brute-force level. However ideally it would be nice to have an asymmetry there.

Just think of it purely in the manner of "why run exactly one unit of the work? why not run a hundredth of that unit to validate the proof?" An asymmetry allows you to more quickly verify the ledger, which is both a startup cost and a lesser ongoing cost for everyone.

I'm wondering, however, if there is a connection with asymmetric encryption. Imagine a one-way function h and two functions f(x), g(x) such that f(g(x)) = x. Suppose the task is "find M such that g(h(M)) lies within some narrow subset." Then if g is much slower than f, you can verify a pair (M, g_h_M) not by verifying that g(h(M)) = g_h_M (slow) but rather by verifying that f(g_h_M) = h(M) (much faster). The importance of h is just that for the most obvious application (f(x) = x^E modulo N, g(x) = x^D modulo N), you also have that g(f(x)) = x, and this is a relatively common property of inverse functions.
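The RSA-style instantiation can be checked with toy numbers. The textbook parameters N = 3233 = 61 × 53, E = 17, D = 2753 are an assumption, chosen only for illustration:

```python
# Toy RSA parameters: E*D ≡ 1 (mod lcm(60, 52) = 780), so f(g(x)) == x for all x.
N, E, D = 3233, 17, 2753

def h(m: int) -> int:    # stand-in one-way function
    return pow(m, 3, N)

def f(x: int) -> int:    # fast public operation
    return pow(x, E, N)

def g(x: int) -> int:    # slow private operation, the "unit of work"
    return pow(x, D, N)

m = 42
proof = g(h(m))          # prover does the slow exponentiation once
assert f(proof) == h(m)  # verifier checks with the fast exponentiation
```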


You get the same properties with a basic proof of work scheme. The difficulty would be to make g memory hard and f not memory hard.


> "why run exactly one unit of the work? why not run a hundredth of that unit to validate the proof?"

Because it is a solution in search of a problem in many cases?


Validation needs to be as light as possible in absolute terms (not just relative to the computation) for DDOS protection. An attacker could spam you with fake blocks and force you to compute expensive hashes to realize they are fake. Also consider the case of building a lightweight hardware device to act as an SPV client.


When we talk about the term 'computationally hard', we usually mean an NP-hard problem. I assume that here, 'memory-hard' means that no other hashing algorithm can have a greater lower bound on its memory complexity than Scrypt. Is that correct?

Edit: After a re-read, I realised that the answer is in the text:

"Memory-hard functions (MHFs) are hash algorithms whose evaluation cost is dominated by memory cost."


I was under the impression that there were scrypt ASICs. Can someone explain how, if true, this is compatible with this claim?


You can put anything into an ASIC. The only question is how much it costs.

But all the "scrypt" ASICs I'm familiar with are limited to computing the nerfed litecoin version anyway.


Take a normal CPU, delete the instructions that aren't used by scrypt, and bam, you have a cheaper and more power efficient chip.

In general, you can't prevent ASICs unless you maximally utilize commodity hardware.


The point of scrypt is to make ASICs expensive by requiring lots of fast memory. You can always build an ASIC, that doesn't mean the ASIC is cost-efficient.


> The point of scrypt is to make ASICs expensive by requiring lots of fast memory

But what is the point of that?


The point is just to utilize the commodity hardware to the max. Commodity hardware gives you a lot of memory IO for the buck. You can't spend the same dollars and get 100x more IO. But you can spend the same dollars and get 100x (or more) faster at typical hashes like SHA256.

Scrypt uses memory IO so that you can get "your money's worth" by running on an Intel CPU.


Not allowing anybody to have an advantage in computing it

(as opposed to bcrypt and other systems where an ASIC could be faster than a CPU/GPU)


Okay, but if I buy 10x the amount of hardware I will still be 10x more proficient in mining.

So rich people (the ones able to buy lots of ASICs and/or memory) will still have an advantage in mining.

So again, what is the point?


> Okay, but if I buy 10x the amount of hardware I will still be 10x more proficient in mining.

Sure.

> So rich people (the ones able to buy lots of ASICs and/or memory) will still have an advantage in mining.

> So again, what is the point?

With non-memory-hard hashes, ASICs or GPGPU yield superlinear gains (in hash/s/$) compared to general-purpose CPUs. A 1080 will get you orders of magnitude faster SHA hashing than the pair of i7s you'd get for the same price, and ASICs have an even larger difference.

Also scrypt was designed for password storage, not for workproofing dogecoin.


Because the alternative of not doing this means you could buy 10x the hardware with specialized ASICs and get 10000x more proficient in mining.


So, what is bad about that?


See the comment above: scrypt was not designed for cryptocurrency wankery, it was designed for password hashing. You don't want the attacker to test billions of hashed passwords per second, as that lets them trivially brute-force a DB leak or dump. You want costs to the attacker (hash/second/dollar) to be as similar as possible to those of the regular user, so that the regular user can crank up the difficulty as high as acceptable for their use case (e.g. >100ms/password for a web login).
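"Cranking up difficulty" looks like this with Python's hashlib.scrypt. The parameters n=2**14, r=8, p=1 (about 16 MiB per evaluation) are an assumed starting point; tune until one hash costs roughly 100 ms on your own hardware.

```python
import hashlib

def hash_password(password: bytes, salt: bytes) -> bytes:
    # Memory per evaluation is about 128 * n * r bytes (~16 MiB here).
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

digest = hash_password(b"hunter2", b"a-unique-per-user-salt")
```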


Okay, now it makes sense :) Thanks!


Everyone has access to commodity hardware by definition, which increases decentralization.

Specialized hardware goes in the opposite direction in both ways.


From a cryptocurrency perspective, if custom hardware (ASICs) gives a mining edge, then mining power will concentrate to those who build custom hardware.

If commodity hardware is competitive for mining, then mining power is more likely to be distributed amongst a network of users, rather than owned by a few big players.

You might ask why the network of users doesn't just buy the custom mining hardware. But the economics of it doesn't work out in the long run. If you can build a machine that prints money, why would you sell it to me for less than its lifetime money-printing value?


Significantly lower ROI for password-cracking ASICs: more money and die area going to memory means less of it going to compute, and a much smaller difference (in hash/s) between regular user and cracker. It all works towards the ultimate end of making cracking passwords too inefficient and expensive for "bad guys" to bother.


For some reason Dogecoin, Litecoin, and all the other scrypt-based cryptocurrencies use extremely low N and r parameters, such that computing a hash only needs 128K of memory.
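scrypt's big buffer is 128 * N * r bytes, so N=1024, r=1 works out to 128 KiB, which can be checked with Python's hashlib.scrypt. The header bytes below are placeholders; Litecoin actually uses the 80-byte block header as both password and salt.

```python
import hashlib

N, R, P = 1024, 1, 1
assert 128 * N * R == 131072  # 128 KiB: small enough to fit on-die in an ASIC

header = b"example block header"
digest = hashlib.scrypt(header, salt=header, n=N, r=R, p=P, dklen=32)
```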


They needed to, in order to make the proof-of-work cheap to verify. See my blog post

http://cryptorials.io/beyond-hashcash-proof-work-theres-mini...

on how asymmetric PoW avoids this handicap.


Maybe using a harder hash does not incentivize people mining it, reducing its growth and liquidity?


I'm under the impression that the Litecoin creators thought that scrypt would prevent ASIC miners, but they still wanted to be able to GPU mine, so they used low difficulty parameters. Which of course led to ASIC miners being implemented. Then Dogecoin just copied Litecoin directly.


> thought that scrypt would prevent ASIC miners,

I'm still amazed that people believed that.

We've been pushing general-purpose CPUs for decades because there is nothing novel or hard about making single-purpose processors.

I didn't realize you could just say random computing terms and people would believe you. "This is memory hard so therefore no ASIC (single purpose processor) could be made because memory is expensive"

I like cryptocurrency but 2012-2014 was seriously like being on crazy pills listening to the fans.


To be fair, memory hardness is a legitimate concept with proofs and everything; Litecoin just made a mistake in their design. ZCash is giving it a second try, taking into account improvements that have been made over the last few years.


scrypt's memory requirements are not hard-coded. Choose a high enough value and an ASIC would not be feasible.


That's fine; scrypt was designed to defend against the eventuality of scrypt ASICs.


At https://hashcash.io/ I am using the Litecoin hashing algo, which in turn uses scrypt.



