Breaking RSA with a quantum computer? (schneier.com)
259 points by barathr on Jan 3, 2023 | 152 comments


This is not my area, but I wanted to mention this before speculation got out of control: some quick feedback from colleagues is that the analysis in this paper seems to assume Schnorr's claims from 2021 [0] without detailed supporting evidence that these claims are true for large parameters like the ones needed to factor RSA-2048. But these claims are viewed skeptically, and this paper doesn't provide much evidence to change that.

[0] https://eprint.iacr.org/2021/933


I want to contrast this paper with Shor's factoring paper [1].

One of the things that stands out to me about Shor's paper is how meticulous he is. He is considering the various ways the algorithm might fail, and proving it doesn't fail in that way. For example, the algorithm starts by picking a random seed and you can show that some choices of seed simply don't work. He proves a lower bound on how many have to work. Also, given a working seed, sometimes the quantum sampling process can correctly return a useless result. He bounds how often that can occur as well. He never says "I think this problem is rare so it's probably fine", instead he says "this problem is at least this rare therefore it is fine". Essentially the only real problem not addressed by the paper was that it required arbitrarily good qubits... so he went and invented quantum error correction [2].
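For readers unfamiliar with that structure, here is a rough sketch (mine, not from the paper) of the classical wrapper around the quantum order-finding step, showing exactly where a random seed can fail. Everything here is textbook Shor, with a brute-force stand-in for the quantum part, so it only works for tiny N:

    import math
    import random

    def try_factor(N, find_order):
        """One attempt to split N, given an oracle find_order(a, N) -> multiplicative order of a mod N."""
        a = random.randrange(2, N - 1)
        g = math.gcd(a, N)
        if g > 1:
            return g                   # lucky seed: a already shares a factor with N
        r = find_order(a, N)           # the quantum step in the real algorithm
        if r % 2 == 1:
            return None                # bad seed: odd order tells us nothing
        x = pow(a, r // 2, N)
        if x == N - 1:
            return None                # bad seed: a^(r/2) = -1 mod N, only a trivial square root
        return math.gcd(x - 1, N)      # guaranteed nontrivial factor for a good seed

    def brute_force_order(a, N):       # classical stand-in, only viable for tiny N
        r, x = 1, a % N
        while x != 1:
            x, r = (x * a) % N, r + 1
        return r

    print(try_factor(15, brute_force_order))   # usually 3 or 5, occasionally None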

The paper being discussed here [3] does not strike me as meticulous. It strikes me as sloppy. They are getting good numbers by hoping potential problems are not problems. Instead of addressing the biggest potential showstoppers, they have throwaway sentences like "It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA".

How many shots are needed for each sample point fed into the classical optimization algorithm? How many steps does the optimization algorithm need? How do these scale as the problem size is increased? How big are they for the largest classically simulable size (RSA128 with 37 qubits according to their table)? These are absolutely critical questions!... and the paper doesn't satisfyingly address them.

Is there somewhere where I can bet money that this doesn't amount to anything?

1: https://arxiv.org/abs/quant-ph/9508027

2: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.52.R2...

3: https://arxiv.org/abs/2212.12372


The difference is that Shor is attempting to prove something. This article is by a security researcher who cares about staying ahead of threats. That is, a 10% chance that RSA-2048 has been broken means he's screaming about changing to RSA-4096 or another standard, because he is trying to make security systems reliable.

Or, to put it a different way, most papers focus on being right. To many publishers, "being right" means that what the paper is saying is true. In some other cases, "being right" means that the action you take is correct. Try reading it not as a paper on "is RSA-2048 cracked" but as a paper on "is RSA-2048 still safe".


It sounds like you are talking about Schneier (the blog author and well known security researcher) and the parent post is talking about the paper Schneier is blogging about. And comparing it to another paper by a different author.

I don't think there was a comparison between Schneier and Shor, or I missed it.


I am talking about Schneier. I may have misread the parent post


I agree that it makes sense for cryptographers to be jumpy around papers claiming improvements in quantum factoring, even if those papers are low quality and likely to be wrong. But that doesn't mean you stop calling the papers low quality and likely to be wrong.

I guess I'd also be a lot more sympathetic if the paper had a paragraph in the abstract, or at least the intro and conclusion, where they explicitly positioned the paper as a wild idea that could work but probably won't but is still worth considering because of the risks.


I think you are right that this paper may not amount to anything but it should also be a big wakeup call that maybe we need to switch or enhance our public key crypto starting now rather than later, in case similar ideas can work, or in case quantum computers get a little bit better faster than we thought, etc.


Let me know if you find such a betting venue... I'll take the same side.

I'm aware of both Microsoft and Google publishing papers claiming astounding quantum feats, then later retracting them (I have to assume there were similar instances with lesser known companies). I think your skepticism is valid.


For the Microsoft one I'm assuming you're referring to the retraction of the detection of Majoranas [1].

Do you have a reference for a Google quantum paper being retracted? I don't recall an instance of that (disclaimer: I am on the Google quantum team; my views do not represent them).

1: https://www.nature.com/articles/s41586-021-03373-x


The Google "supremacy" paper wasn't retracted per se, it was just shown to be wrong. They claimed to do a thing on a quantum computer (specifically, simulating a particular quantum circuit) that would take 40,000 years on a classical supercomputer, assuming a particular method for doing the computation.

The real number was 2.5 days, and the computing breakthrough involved in the huge speedup was... using SSDs to store your intermediate state instead of recomputing it every iteration.


Not a retraction, just that many of the claims about supercomputer performance were easily shown to be less than accurate, making any claims about supremacy less exciting.


Just deploy a smart contract on, say, Ethereum MainNet.

One side will publish the public keys for RSA and the private key signature can take the money.

The other side will likewise lock up some money, but that money can be moved by a smart contract method after a certain date, if the first account still has money in it. You can have multiple dates, for removing some or all of the money in the multiple bets against it. For example, "$200 that it's cracked by 2004".

The main problem with bets and contests is that the side which knows the private key can simply withdraw the money itself. That’s why you need the private key to be generated by all parties involved in a ceremony.


I want you to plan my next birthday party.


Funny you say that. We did it in 2013: https://qbix.com/docs/users.pdf


It is fair to say that in a security assessment you estimate a lower bound on the complexity of an attack.

Schor proved an upper bound.


There are two cryptographers being discussed here. Peter Shor, and (Claus) Peter Schnorr.

Neither are Schor.


Shor, Schnorr, Schor and Schneier.


They all are the same person - Satoshi.


You mean Schnatoshi?


Lenny Baum, Lloyd Welch, and their colleagues at IDA were using the EM algorithm for code cracking well before they were able to prove anything about its convergence.

EM worked in practice, so they spent a long time trying to prove convergence. Modern proofs are simpler.

Could be the case that this method also works in practice. I haven't the faintest idea whether it will.


You divide the number of physical qubits by 5 to get the number of fault tolerant error corrected qubits. [0]

If their algorithm works, they need an 1,860-qubit (372 × 5) computer to break 2048-bit RSA.

IBM expects to get there by 2025. [1]

[0] https://en.wikipedia.org/wiki/Five-qubit_error_correcting_co...

[1] https://www.ibm.com/quantum/roadmap


The amount of error correction you need is more than what the five qubit code provides. A more typical estimate is that you'd need 1000 physical qubits per logical qubit.

For example, using the surface code, a back of the envelope estimate would be that you need a code distance of d = ln(number_of_operations). Each logical qubit will use 2d^2 physical qubits. So for a million operations you'd need around 400 physical qubits per logical qubit and for a trillion operations you'd need around 1500 physical qubits per logical qubit. So, way more than 5.
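A minimal sketch of that back-of-the-envelope arithmetic (my own restatement of the numbers above, not a real resource estimate):

    import math

    def physical_per_logical(num_operations):
        d = math.ceil(math.log(num_operations))   # code distance ~ ln(#operations)
        return 2 * d * d                           # ~2*d^2 physical qubits per logical qubit

    for ops in (1e6, 1e12):
        print(f"{ops:.0e} operations -> ~{physical_per_logical(ops)} physical per logical qubit")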

(A major practical obstacle to using almost-anything-that-isn't-the-surface-code is that the surface code has forgiving connectivity and maximum-allowed-physical-noise requirements.)


The other issue when you start talking about very large numbers of qubits is that they're not independent. Building more qubits influences your noise environment substantially.

Expectations of a Moore's law type improvement rate are going to be left wanting.


So if the paper is correct then you need about a half a million qubits? Is this roughly a 40x improvement on the 20M qubits in 8 hours or is there more to it?


If the paper is correct then yes, it would be a huge improvement in the required space even accounting for the overhead of error correction.

Shor's algorithm requires performing a modular exponentiation under superposition. For an n-bit modulus this requires 2n or 3n qubits of storage, plus let's say 50% overhead for routing and gates. You end up needing 5n to 10n logical qubits for an n-bit number. So to factor a 2048-bit number you'd need on the order of ten thousand logical qubits. Improving that to a few hundred logical qubits would be a big improvement. Also, there are fewer operations, so the code distance can be lower.

...but don't forget that "if the paper is correct" bit.


A 5-qubit code corrects against single qubit errors. You could say a factor of 5 is a lower bound. In more technical terms, this scheme fixes all weight-1 errors.

More realistically [1], you'd have a factor of around 1,600 for a distance-27 code.

[1] https://arxiv.org/pdf/1905.09749.pdf


If this roadmap is accurate for the present, the qubits here are NISQ qubits, so not useful for any sort of generalized computation no matter how many they have.

Which is to say that Osprey has 433 qubits, so it should be capable of 86 fault-tolerant error-corrected qubits, so they should be able to factor (not bothering with the math) AT LEAST ONE NUMBER using Shor's algorithm, and yet they cannot.


If the paper is to be believed you are wrong in two different ways:

1) If the paper is right, they are claiming they need 300ish physical qubits that can sustain about 1000 gates before decohering. No need for scalable error correction.

2) Independently of the veracity of the paper, if you actually need logical error corrected (and fault tolerant) qubits, you need error correcting codes with much more severe overhead than the 5-qubit code. The 5-qubit code is a pedagogical example, not something that would actually work under realistic conditions. And even the 5-qubit code needs quite a few extra ancillary qubits for fault tolerance (which is more expensive than simple error correction).


What practical things can be done once RSA is broken?


Breaking RSA is a practical thing.

E.g. forge email (most DKIM keys are 1024-bit RSA), break SSH (depending on the key algorithm chosen), break PGP (depending on settings), MITM HTTPS connections, etc.


So if you were going to explain it to someone who's less technical, maybe saying "remove HTTPS" would be an oversimplified way to explain?

(I don't think my contacts are going to know what SSH or PGP is, if that helps.)


Not exactly.

For the vast majority of HTTPS (say, for example, Google or Hacker News) RSA is not used to agree the encryption keys. So although quantum computers would be a threat for other reasons, breaking RSA in particular doesn't just "remove HTTPS".

However, RSA is used to prove the identity of the server for most web sites even with a newer key agreement. So if an adversary can get on path between you and the server, they could get in the middle and masquerade successfully as the server - arranging key agreement with you, and then providing a convincing fake proof of identity, if they do so live.

In TLS 1.2 optionally, and to a greater extent in older versions (which are no longer used by popular web browsers), you can also use RSA to agree the encryption keys, and for sites using that, breaking RSA would allow an adversary to interpose in real time, or to decrypt communication after the fact, but I'd be astonished if anywhere important still does that when talking to a halfway modern browser.
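You can see this on a live connection. A minimal Python sketch (example.com is just a placeholder host) that prints what actually gets negotiated; in TLS 1.2 the cipher name shows the key agreement and the signature algorithm separately (e.g. ECDHE-RSA-...), while in TLS 1.3 the key exchange is always ephemeral (EC)DHE, so it isn't even part of the name:

    import socket
    import ssl

    host = "example.com"                        # placeholder host
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name, version, bits = tls.cipher()  # e.g. ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128)
            print(version, name, bits)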


The way I'd explain it to a non-technical person: it would (among other things) allow the person running the sketchy WiFi at the coffee shop to see all the sites you are visiting and any data you enter.


isn't most stuff elliptic curve now?


Decrypt decades of pre-PFS archived encrypted internet traffic.


If only someone archived it in a data center in the desert.


>> Decrypt decades of pre-PFS archived encrypted internet traffic.

> If only someone archived it in a data center in the desert.

Uh oh

https://en.wikipedia.org/wiki/Utah_Data_Center

Good thing I declared moral bankruptcy this year, all that is the old me :-)


A note to cast further suspicion on the immediate risk severity:

The researchers indicate in the abstract that they used a computer built with superconducting qubits. Superconducting qubits present barriers such as:

- limited coherence time due to common atmospheric muon events, and resulting phonons

- limited topological connectivity, further increasing needed coherence time.


Limited coherence time is not explained as simply as "atmospheric muon events". There are a variety of reasons, some environmental, others a result of imperfect fabrication, etc. that contribute to decoherence, gate error, etc.


I know that, but I recall specific literature reflecting the specific effect mentioned [0], and it seems intuitive that the surface area and orientation currently in use for these qubits may potentiate the effect. I think the other effects you list are barriers for other qubit platforms as well.

[0] https://ai.googleblog.com/2022/01/resolving-high-energy-impa...


I'm surprised nobody has written a thriller yet where a new mathematical algorithm is found (maybe something to do with primes?), all encryption suddenly collapses, and with that we go into a pseudo-apocalypse where nobody is sure whether anything is authentic anymore. Banks can't move cash, ID systems are useless, we've still got electricity but no functioning internet, hardware root of trust is shattered...


It's called Impagliazzo's Worlds Paper:

https://scholar.google.com/scholar?cluster=14678868687868063...

A classic paper that explores what happens if various scenarios come to pass. Would be worth exploring some of the updated versions and fictionalizing them.


"Summer Wars"[1] features breaking keys as a plot element, with a passing mention of Shor's Algorithm. The rest of the movie is mostly unrelated to math. Good movie though.

[1] https://en.wikipedia.org/wiki/Summer_Wars


Sneakers is the thriller movie you’re looking for



And it's a great movie!


Or The Net.


They have; usually this is called a skeleton key. NCIS has done at least two separate episodes with it as the MacGuffin.


All that would need to happen for this exact scenario to occur would be for anyone, anywhere, to find a case where P == NP. [1]

P != NP is a conjecture that has never been proven, so it very well could happen in reality.

This is one of those things that keeps me awake, like Carrington events [2].

[1] https://en.wikipedia.org/wiki/P_versus_NP_problem

[2] https://en.wikipedia.org/wiki/Carrington_Event


As someone who is only vaguely familiar with the P = NP problem, can someone explain to me: does proving P = NP automatically solve the numerous problems that can then be "quickly computed", or does it simply prove the existence of an algorithm for each problem?

To rephrase if this is not the case - what value does solving P = NP provide?


There are roughly four possibilities for P = NP:

* P != NP. In practical terms, nothing changes.

* Nonconstructive case. The resulting algorithm looks something like some primality test algorithms (which I'll describe): essentially, n is prime if and only if (X + a)^n ≡ X^n + a in (Z/nZ)[X] for every a (X is a formal variable here); if n is composite, some choice of a witnesses the failure (see the sketch after this list). If you test "enough" a's, then you can prove whether n is prime or composite. A nonconstructive case would mean we have a proof that you only need to test poly(lg n) a's to confirm the truth, without necessarily telling you which a's you have to test. In this world, there is no practical change to problems: the proof doesn't yield an effective algorithm to actually solve any NP-complete problem.

* Combinatorial algorithm for an NP-complete problem. The good example here is what has been done to prove SL = L. The result is "technically" in L, but the constant factors in the algorithm run very quickly into "more than the number of atoms in the universe" territory. The goal is to find a memory-less algorithm (you can't use a visit stack) that can prove there is a path between two points in an undirected graph, and it turns out that you can transform the graph into another one that guarantees you will visit every node in a certain amount of time. The resulting construction replaces every node with more nodes than there are atoms in the universe, so it technically meets the requirements but is completely and totally impractical. Sometimes people handwave this possibility by saying that once an algorithm is found, people will find better results, but this result hasn't been improved on in close to two decades.

* "Simple" algorithm for an NP-complete problem. This is the result that really, really changes things. Once you get a simple algorithm for one NP-complete problem, you can generally find analogous ways to get simple algorithms for other NP-complete problems. However, the research done to date does suggest that this is perhaps the least likely solution: looking at the "hard" instances of SAT problems, it does seem that there is a certain complexity that is at best reducible via some sort of combinatorical nightmare construction rather than the kinds of decompositions that yields "simple" algorithms.


It's only required to prove an algorithm that solves an NP-complete problem exists, not find it.

Even if that algorithm exists and was found, it could be that such an algorithm is O(n^123456789), which would not break RSA in any practical sense, though it would be mathematically asymptotically faster than O(2^n).


If P == NP, then it could be possible that a proof would show that an algorithm must exist without providing such an algorithm. But that approach wouldn't be necessary, actually showing an algorithm would of course also be a proof.

> To rephrase if this is not the case - what value does solving P = NP provide?

P vs NP is a question of enormous practical interest. But it's also a very interesting question of pure mathematics. A proof that P != NP, or a proof of P == NP that didn't provide an algorithm would still be a huge deal in the math and computer science world.


Think about cryptography for a second… In cryptography you need a problem that is fast to decode if you know the key but really slow if you don't, as in you would have to search all the possibilities one by one. Such problems are (basically…) called NP. P is all the problems that are fast on computers. If P = NP, then any problem you could use for cryptography could be decoded fast.


P does not mean fast.

P means you don't have to try every single possible answer.

But lots of algorithms fit that description while still being impractically slow. Keys might still be uncrackable.


It's been a little while - but I think it's because NP problems can be converted into each other, so if you can solve one of them in P you can solve all of them in P.


Some have speculated, though, that P doesn't necessarily have to mean a short period of time. If P == NP but the polynomial-time algorithm still takes hundreds of years to run, we may survive.


The Carrington event page was fascinating to read. Telegraph operators held an entire conversation without power! It is fairly unsettling though. I wonder if there are contingency plans that include thoroughly shielded or non-electronic communications equipment.



It took me a surprisingly long time to realize that I was reading fiction. The example SHA sums even work; it's just that they only start with 40 bits set to 0 (expect to require ~10^12 operations to generate each one randomly, about a second on an AntMiner) instead of 88 zero bits (multiple weeks of the full Bitcoin network to generate each one randomly).


Can a quantum computer fit in an answering machine? :) https://www.youtube.com/watch?v=F5bAa6gFvLs


One of the neater bits of world building/subplots in the book "A Fire Upon the Deep" is that their ship is a freighter with a cargo full of one-time-pad keys; that is, the encryption you would revert to in a world where factorization is easy.


Isn't that basically implied by the ending of Sneakers (1992)?

It would be fun fodder for a sequel, but I feel like it'd come across as histrionic disaster-porn.


Symmetric encryption should still be unbreakable even against quantum computers or any advances in prime numbers. Given we currently trust CAs they could switch to something like Kerberos.


This is now my new favorite apocalypse.


I'd be interested to speculate how cryptocoins would fare too, not that it isn't already a sh1t show XD


Agreed.

But how would the movie end?


Cloudflare and Meta team up to invent a proprietary crypto algorithm that fixes everything but routes all traffic through their servers, centralizing the internet completely. All speech is controlled and monitored by this new entity, which receives a national security letter thereby merging it with the US government for all intents and purposes, but, like, you know, for our safety.

Nobody notices or cares what has happened, and critics are met with "well it dOesN'T mATteR because they aren't using the information for anything bad."


How did Google not get a look in? But makes me think how would that work? Physical key swaps?


In the worst possible case we just switch to one-time-pad encryption, which makes things inconvenient but literally can't be cracked by any computer or algorithm (tl;dr: the key is as long as the message, so a ciphertext can decode into anything and you can't tell whether the text you decoded is the right one or not). So a scenario where literally no encryption is available seems far-fetched.
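For anyone who hasn't seen why that is, here's a tiny sketch (plain XOR over bytes): the same ciphertext decrypts to any equally long message under a different pad, so the ciphertext carries no information about which plaintext is the real one.

    import secrets

    def otp(data: bytes, pad: bytes) -> bytes:
        assert len(pad) >= len(data), "pad must be at least as long as the message"
        return bytes(d ^ k for d, k in zip(data, pad))   # XOR both encrypts and decrypts

    pad = secrets.token_bytes(32)                # must be truly random and never reused
    ct = otp(b"attack at dawn", pad)
    print(otp(ct, pad))                          # b'attack at dawn'

    # A different pad makes the same ciphertext "decrypt" to a different message:
    fake_pad = bytes(c ^ p for c, p in zip(ct, b"defend at dusk"))
    print(otp(ct, fake_pad))                     # b'defend at dusk'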


Yes, we simply change to one-time-pad encryption and ... distribute pre-shared keys to every user on earth for every service that modern civilization relies on? The failure of RSA in a significant way would mean the end of modern commerce, military balance of power, and the sharing of information. It would be a history-altering event in the best case and devolve into existential war in the worst case. OTPs would do exactly zero to prevent any of that.


>>OTPs would do exactly zero to prevent any of that.

So if it was literally the only remaining unbroken type of encryption on the planet, it would have no effect? How so, exactly? We would just go with no encryption whatsoever rather than bear the inconvenience of distributing OTP keys?


Not really. The failure of RSA would necessitate a rapid shift to, probably, lattice key exchange. It would be a fire drill, but it would be greater by degree and perhaps not by kind than previous fire drills we've run.


As a reminder, "mekmiditastigoat" is the internet shared secret for IPSec interoperability. In a pinch you can use it for SSH as well.


Isn't the problem with one time pads distributing the pad? Like, you would have to walk to a bank and have them hand you a piece of paper... and tellers could read the paper before handing it to you. So basically so ineffective in practice as to be unusable?


The distribution part has gotten a lot easier. It would be very possible for banks to give all their customers USB devices with a few GBytes of one-time pads and keep a copy of those same GBytes in their own systems.

And most banks still have brick-and-mortar branches where customers could come, identify themselves, and collect their one-time pad.

And a GByte of pad material would cover your banking needs for a long time.

The problem is more that this works for some one-to-many relationships, such as banks, but not for many-to-many relationships, such as email, websites, etc. There have to be second or third parties.

On the other hand, a lot of people seem to log in everywhere with Google Sign-In or similar anyway.

And we could instead all have Google or Amazon devices with GBytes of one time pads. And that could be used to set up symmetric encryption, which should be more resistant to quantum computer attacks.

The only drawback is that we would have to trust the third party and everyone who could compromise the third party and every government that could put pressure on the third party :-)


Quantum Key Distribution. You can prove that the key hasn't been intercepted. But since that means you need a direct point to point connection with no routers, switches, hubs, amplifiers/repeaters etc., it only works for a tiny fraction of cases. https://en.wikipedia.org/wiki/Quantum_key_distribution


I remember in the very early 2000s, a guy had a stopwatch-sized whos-it that kept flashing numbers on it, updated every 5 minutes.

Apparently it was some cesium-based list of numbers; again, this was 20 years ago.

Point is, it was a one-time pad...


You mean this? https://en.wikipedia.org/wiki/RSA_SecurID I'm not sure but I don't think this is OTP?


Indeed, they're not one-time pads - they are symmetric authenticators where both sides hold the same seed, and iterate a PRNG or similarly iterable function every N units of time (say, 30 seconds), to give you the same new output, based on the same starting seed. Think stream cipher output, with an initialisation vector.

They are often called OTPs though (i.e. one-time passcodes), just to cause confusion.
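For the curious, the open standards those tokens converged on (TOTP, RFC 6238) are easy to sketch; this is a simplified illustration of the idea, not SecurID's proprietary algorithm:

    import hashlib
    import hmac
    import struct
    import time

    def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
        counter = int(time.time() // step)                       # both sides derive this from the clock
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                                  # dynamic truncation, per RFC 4226
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    seed = b"seed-provisioned-out-of-band"   # the shared secret both sides hold
    print(totp(seed))                        # same 6-digit code on both ends while the clocks agree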


A Whatsit. A Thingamabob. A doohickey. I remember those - people still use them I think?


So print it via pressure into the inside of a sealed envelope? That's how they send me my PIN.


(not sure if replying to troll..)

The argument against OTP is that by securely distributing the key of the same length as your message, you ostensibly already have a secure messaging mechanism; why would you need the OTP?


Well no, that's not true - a bank could issue you a one-time pad long enough to encrypt the next 10,000 messages with you and use that over the years - they just need to tell you which line of the key is needed to decrypt your message. In that scenario you only need to guarantee security the first time the key is distributed (for example, a file sent to you when the account is opened).
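A toy sketch of that scheme, to make the "which line of the key" part concrete (the pad here is generated in place; in the real scenario it would be the file the bank hands you at account opening):

    import secrets

    PAD_LINES = [secrets.token_bytes(64) for _ in range(10_000)]   # shared once, held by both sides

    def encrypt(i, message):
        line = PAD_LINES[i]
        assert len(message) <= len(line)
        return i, bytes(m ^ k for m, k in zip(message, line))      # line i must never be reused

    def decrypt(i, ciphertext):
        return bytes(c ^ k for c, k in zip(ciphertext, PAD_LINES[i]))

    idx, ct = encrypt(42, b"statement for March")
    print(decrypt(idx, ct))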


You’d have to guarantee security every time a key is distributed, right? I guess practically it would look like going to the bank every few years… or maybe only once per account, you can fit a lot of bank statements in a couple gigabytes.

Actually this could be a nice service to offer now. We might worry that someday public key crypto will be broken, and we wouldn’t want all our old bank statements to become public at that point I guess.


I mean, years ago my local bank had an online banking system, where your PC had to dial-up their local server for any operations - and the authentication key was on a floppy. Which meant that yes, you needed to visit their branch once every year for a new key distributed on this physical medium.

Yes, it was inconvenient, but hardly an impossible thing to do. Banks manage to communicate the PIN for your card safely every time you open an account, I'm sure this could be done as well.


So now people have to keep a piece of paper around or somehow put it into some software and not lose access. You're right, it would work. In reality, people can't even be bothered to use a password manager or understand even the most simple of new security software, let alone even remember their password. That makes it completely intractable as a solution.


I mean, the scenario being discussed here is some kind of "world ending" situation where all known encryption is broken. So you either do it the way I described it, or you don't have any encryption whatsoever. I think under those conditions people would adjust. It isn't an alternative to our current arrangements.

Also: my bank access is done entirely through an app that obscures its internal implementation. It could already be using OTP and it wouldn't make any difference to me, nor would I be able to tell(my point is that the user wouldn't need to keep a piece of paper that they would need to type in anywhere - the internal implementation of tools we use every day would change, but most users wouldn't even notice)


I’m not clear on how we’d use a one-time-pad on a physical piece of paper (unless we want to do it like an old-timey spy novel character, using pencil and paper to combine the bits!)


Concur. I should have stated "an argument" not "the argument".


> we just switch to the one time pad encryption

"just". So how do you do the key exchange?


Depends on the security of the service you want to use - I'd expect my bank or my email provider to send me my encryption keys by a recorded letter. Hacker news can probably email me their key instead.

Again, we're talking about some "world ending" scenario that OP mentioned - where all "normal" forms of encryption are already broken. If OTP is the only unbreakable encryption around, them I'm sure we'd find a way to distribute keys.


Email it obviously


There's any number of less subtle and more important problems with this idea, but another problem you have here is that one-time pads only provide confidentiality, not integrity.


And people wonder why I don't upgrade my 1996-1998 webpages to https.

In the end, as the world burns, I will helpfully explain how I was right.


Huge, if true. However, many such claims have surfaced before and turned out to be duds.

For now, I am taking it with a spoonful of salt, but with interest in any follow-ups and peer review.


An update on the article reveals that the approach relies on an algorithm that breaks down with larger N for an unknown reason.

It seems like we're safe for now.


Even if it worked, QC itself is not massively deployable right now. Agencies will be sucking their thumbs while everyone migrates to something stronger or even to quantum-proof algorithms. No drama.


Yeah, I think this is a sign that while, as the post notes, it's not clear that this is a 100% airtight paper/algorithm, there are major advancements happening in this space.


I think history will show that the current state of the art in quantum cipher breaking is well ahead of what is being discussed here.


> Honestly, most of the paper is over my head—both the lattice-reduction math and the quantum physics.

Yeah, that alone is impressive. Schneier led a group that wrote Twofish, which was one of the AES finalists before losing to Rijndael.


It's not in fact that impressive. The math for all sorts of cryptography goes over Schneier's head (as he sort of infamously implied with elliptic curve, over a decade ago). That's normal! Cryptographers specialize. Having a really careful, fine-grained, up-to-date intuition for differential cryptanalysis is crucial for designing hashes and ciphers, but less so for a key exchange.

Not writing this to dunk on Schneier so much as to relate that cryptography is specialized, and that generally there aren't a lot of people that you'd expect to be ultra up on PQ key exchanges and modern block cryptography.


Here's an implementation of Schnorr's algorithm with an attempt to estimate the amount of work needed to factorize big numbers: https://github.com/lducas/SchnorrGate

It also contains some links to critiques of Schnorr's algorithm paper. It looks like either many more p_n-smooth integer pairs are needed, or the size of the p_n-smooth integers should be much bigger than estimated by Schnorr's original paper. Or both estimates are off.

As the paper Schneier discusses relies on the assumptions of the (classical) Schnorr algorithm, its calculations may be off as well.


so just increase to 4096 or 8192 bits or beyond?


Why is this downvoted? Ain't it exponentially harder to break RSA using qubits each time you double the key length?

Until we switch to quantum resistant algorithms, we can keep doubling the key length for some time no?

8192 bits should still be acceptable speed wise (if we consider that 2048 bits is broken, then I'll take slower operations over broken keys any day of the week).


No, if you have a scalable quantum computer (which no one has yet), then doubling the key size just requires doubling the number of logical qubits. That is for asymmetric encryption that uses "hidden subgroup problems", like RSA. We have newer asymmetric algorithms that are resistant to quantum attacks, they are just not well tested yet.


Judging by the current difficulties of building a quantum computer with a bigger number of qubits, I'd assume scaling linearly isn't trivial at all. So increasing the key size should definitely increase security; if not exponentially, at least buying a lot of time?


Because it isn't. This paper (allegedly) uses sublinearly many qubits (that's the whole point).


What isn't what?


The number of qubits isn't exponential in key length with this paper (if it's correct)


> A group of Chinese researchers have just published a paper claiming that they can—although they have not yet done so—break 2048-bit RSA.

If a quantum computer can break 2048-bit RSA, what about elliptic curves?


Elliptic curves are also vulnerable to Shor's algorithm. However, there are several new algorithms for asymmetric cryptography, such as NewHope (which Google has experimented with in Chrome):

https://en.wikipedia.org/wiki/Post-quantum_cryptography

https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...


I had the same question as OP and you seem to have misinterpreted the question. Specifically this article is about showing a meaningful reduction in the Quantum bits required for Shor's algorithm by pairing it with lattice reduction (whatever that means). Does this approach of combining classical lattice reduction technique + quantum Shor's hold for elliptic curves? Are there any other problems beside cracking crypto algorithms that this technique could be applied to that might actually be useful vs just a more complicated attack vector?


Who was the guy and what was the flawed and retracted paper mentioned in the below quote from Schneier's article?

>In email, Roger Grimes told me: “Apparently what happened is another guy who had previously announced he was able to break traditional asymmetric encryption using classical computers…but reviewers found a flaw in his algorithm and that guy had to retract his paper.


Schnorr's "This distroyes the RSA cryptosystem"?


I remember that some small countries, running DPI on their uplink, were just storing their whole encrypted traffic so that, when this day comes, they'd be able to decrypt the traffic, see who was conspiring against the government back in the day, and then start the witch hunt. Maybe this day is coming. Maybe it's already here to some degree.


In Russia, there has been a lobbied initiative for a few decades to store all of the country's traffic for some amount of time for law enforcement agencies. As far as I know, this has now been reborn as storing just connection-initiation data (TCP SYNs as in tcpdump, GSM call originator/receiver numbers, datetime stamp and duration).

In Ukraine, I hear from the inside, one law enforcement agency asked a big provider around 2000 to gather all emails. The co-owner said, "OK, you'll get it." He then spent a few weeks gathering old HDDs from everywhere, and sent the agency a small truck with a few hundred HDDs filled with data.

After that, for at least 10 years, nothing similar was asked.


There were/are countries in Africa with a small uplink; it's pretty easy to store everything in flows and pcaps, then extract encrypted attachments offline and run different tools and techniques to identify adversaries (who in that region are mostly seen as deadly enemies).


Yes, and Africa is not alone.

A colleague said he considered building out internet in Uzbekistan, and even visited the country. What he saw is that all the surrounding countries are very unfriendly or extremely conservative (Afghanistan), so he could not just buy an external internet channel.

Internet there is extremely slow, most of the time near impossible to use; only email works (but can be delayed by a few hours).

To send a tweet or a WhatsApp message, you start the app, push "send", and leave the phone/tablet alone and turned on, and in a few hours you will see "message delivered".


I just watched this yesterday about post-quantum cryptography.

https://youtu.be/_C5dkUiiQnw


It's high time to transition from RSA to the unbreakable TUPLEZZ (www.tuplezz.com)


Even if it's not true, it shows that RSA may be breakable in the near future.


"And there’s the nagging question of why the Chinese government didn’t classify this research"

Would like to hear informed thoughts on any state's pros and cons of sharing such obviously weaponizable discoveries.


As a guess, if you're the only one who knows about this, it's one hell of a zero-day. Once it's used, though, the cat is out of the bag and industry will race to patch it. Yes, it'll take time.

If I were a country that could easily just drop bombs on people to cause destruction, then I'd rather leak something I have no defense against, in the hope it gets patched, rather than save it as a tool to use.


There is more to diplomacy and war than destruction. If you can read your adversary's private messages, you can do a lot better than blowing their shit up.


Even if this particular scheme doesn't work at scale, the writing is on the wall for conventional crypto. If you are encrypting data that will be at rest for many more years, it's time to start thinking about migrating to post-quantum crypto so you don't end up one day discovering your entire corpus is vulnerable.


Post-quantum cryptography is not necessarily secure either. One of the four finalists in a NIST competition for post-quantum cryptography [SIDH] was suddenly, out of the blue, shattered with an algorithm that could break it in hours on a laptop. Turns out it was secure against quantum computers but insecure against classical computers.

If you want to be safe, you might almost consider layering standard cryptography on one of the three remaining post-quantum algorithms to be (hopefully) safer. Also, this might be bad news for Bitcoin in the ultra-long term...


There are open questions about whether any PQ algorithm is truly quantum-safe, given the limits of our knowledge about QC.

But it's also just the case that any "new" (for values of "new" that include "old but never deployed at a scale sufficient to attract scrutiny") encryption primitive, PQ or not, stands a decent chance of being breakable by a laptop, at least in its initial implementation parameters. That's what happened with the supersingular isogenies you're referring to. That's just to say: there's nothing special about the "PQ-ness" of these protocols that makes them risky; all cryptography is risky. It took a surprisingly long time --- well into the 2000s --- to figure out how to safely deploy RSA.

Virtually every serious PQ implementation proposal pairs the PQ key exchange with a "conventional" key exchange, for this reason.

If you believe that QC is going to break "conventional" cryptography, Bitcoin is toast; I don't think there are a lot of extra "ifs" to that. Smarter people than me think there might be a window of time where RSA falls and ECC survives; maybe you could hope that Bitcoin would react quickly enough inside that window.


My understanding is that Bitcoin, used correctly, is effectively quantum safe.

Since the "recipient" address of a UTXO is expressed as a hash, a user does not broadcast their public key until after they spend the funds. If you follow good practice, you make a single transaction, sending funds to the recipient, and the "change" to yourself, in a new wallet address (addressed by the hash of its public key). This means the public key is never visible to an attacker until its balance is zero.

Therefore, to attack this and steal funds through false transactions, you effectively need both a pre-image attack on SHA256 (so you have a valid public key to match the UTXO address), and a way to solve the discrete logarithm problem, breaking ECDSA (on the Secp256k1 curve), so you can sign using the private key corresponding to that public key.

SHA256 would come under Grover's algorithm, I believe, which would give you 128 bits of security under a quantum attack. That is still pretty good going.
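To make the "address is a hash" point concrete, here is a minimal sketch of the commitment a legacy P2PKH output stores, HASH160(pubkey) = RIPEMD160(SHA256(pubkey)); the pubkey bytes below are a throwaway stand-in, and hashlib's ripemd160 support depends on the OpenSSL build:

    import hashlib

    def hash160(pubkey: bytes) -> bytes:
        sha = hashlib.sha256(pubkey).digest()
        ripemd = hashlib.new("ripemd160")   # availability depends on the OpenSSL build
        ripemd.update(sha)
        return ripemd.digest()

    fake_pubkey = bytes.fromhex("02" + "11" * 32)   # stand-in for a compressed secp256k1 key
    print(hash160(fake_pubkey).hex())
    # Nothing about the public key (or ECDSA) is revealed until this output is spent.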


> This means the public key is never visible to an attacker until its balance is zero.

This part is just a tad questionable. To spend funds, you have to get the transaction recorded in a block. The usual way to do that is to broadcast it through the whole network until a miner picks it up.

So there's more than a bit of wiggle room between "I broadcast everything about this tx" and "the money is already spent and I'm safe".

It certainly does (in the usual case where the vast majority of the network and your connection to it is not under the attacker's control) limit the time an attacker has to compute, but it's not exactly pretty or reassuring.


It's worth noting that BTC could be quantum-resistant if enough of the network decided they wanted it to be. Unfortunately the BTC community is famously resistant to change so it wouldn't happen until it's too late, but in a rational world the problem could be fixed.


Tangentially to secant, BTC community resistance is in the spirit of maintaining a solid base layer that is not perpetually in startup mode given stakeholder motives. Use other coins for that purpose.

Regarding changes that still maintain or even enhance this goal of long-lived store of value, I bet they'd be fine with that, even if the work effort of BTC transactions increased.


> Tangentially to secant

I see what you did there. And I approve.


Alright. Who gets to confiscate the pre-quantum keyed Bitcoin? Miners or codebreakers?


SHA2 is not under threat from quantum computing; only the signatures used are.


And as long as you don't re-use wallet addresses, your public key is effectively not revealed until the balance is zero.

Since your wallet address is a sha256 hash of the public key, you would need to meaningfully break sha256 to be able to go after a public key or generate a false signature.

Once the public key is broadcast to spend the funds, that wallet shouldn't be reused, and a new wallet address should receive the change.


Yes, but the signatures are what's keeping your funds secure, not the hash.

To explain why, we need to talk about a type of nonstandard transaction people used to do as a sort of puzzle: hash-only outputs. This is a Bitcoin transaction whose outputs can only be spent by someone who knows the input that gives a particular SHA-256 output. Cute, right?

Problem is, this is a proof of knowledge, not a proof of identity. Anyone else can replicate it once the problem is solved and the solution does not restrict itself to whoever is identified as the first solver. Which means that the only thing that keeps people from claiming THEY were the first to solve it is the Sybil-resistance that keeps the blockchain from being reorg'd.

In Ethereum we call this the "dark forest" problem[0] - anything in the mempool that is not cryptographically locked down can and will be manipulated to the benefit of others. Smart contracts make it way more lucrative to scan the mempool for manipulable transactions, because instead of looking for the one guy solving old SHA-256 puzzles you now have potentially millions of arbitrage opportunities to find. But this isn't limited to Ethereum. It's whoever mines[1] the block wins.

So if quantum computing breaks RSA, that turns all existing coins into SHA-256 puzzles on a blockchain whose users haven't internalized the maximal extent of transaction malleability. The only way to avoid having your coins stolen would be to coordinate with trustworthy miners to include your signature transition transaction into a block - almost certainly with a very large fee, effectively a cooperative confiscation[2]. Those who rely on the kindness of strangers (read: the mempool) will find quantum computer owners and miners fighting to see who can steal their coins first. And the only protocol-level change to fix this would be to just invalidate all RSA signatures and treat all old wallets as burned.

[0] https://www.paradigm.xyz/2020/08/ethereum-is-a-dark-forest

[1] The Ethereum term for this is Miner Extractable Value, which reportedly has even incentivized miners to reorg blocks (i.e. a short-term 51% attack) if and when they can get away with it.

[2] This is, again, another thing Ethereum's ecosystem already thought of; they call it Flashbots.


> If you want to be safe

You use post-quantum with a vetted algorithm in a hybrid scheme, usually this just involves concatenation or hashing.


I would think most encryption at rest is done using AES-128/256, which is already quantum-resistant. It's mostly the key management that is at risk, due to the heavier use of RSA for asymmetric wrapping of keys.


AES-128 is effectively AES-64 when quantum computers exist which can run Grover's algorithm. That's not a huge exponent.


It's been pointed out that Grover's algorithm parallelizes very badly, so not AES-64. It will probably turn out that AES-128 is perfectly fine, as it has a tremendous amount of margin to start with.


for the time being it's fine (we're a long way away from having fast quantum computers where 2^64 is a significant problem), and AES-256 is already widely used


> If you are encrypting data that will be at rest for more many years it's time to start think about migrating to post-quantum crypto so you don't end up one day discovering you're entire corpus is vulnerable

Because Bitcoin is centralized, if it got cracked tomorrow, the major miners could decide to save everyone.


Another use case where you should be seriously considering using post-quantum techniques is update verification. If a piece of hardware needs to work 10 years from now, and it uses RSA or ECC public key crypto to verify proposed software updates, it may live long enough to see quantum computers break that.


No practical quantum supremacy or anything like it has been demonstrated, has it? Is it sort of expected that it will co-occur with QC breaking encryption? I'm just trying to gauge how "almost here" this actually is, or if it's still just talk.


Cloudflare estimates 15-40 years[1]. Cloudflare, Google [2], Amazon [3] and others are all at various phases of moving to post-quantum algorithms.

My own lesson from the Snowden revelations is that if we're close enough to a security break that the possibility is well understood, there's a very high chance someone is already doing it.

[1] https://blog.cloudflare.com/post-quantum-for-all/

[2] https://cloud.google.com/blog/products/identity-security/why...

[3] https://www.amazon.science/blog/preparing-today-for-a-post-q...


Quantum supremacy is a much, much, much lower bar than breaking RSA (at normal key strength).


I will admit that I have no idea what that would look like. If quantum computers can be as capable as the one envisioned in Jormungand, I am not sure if anything short of declaring them a weapon is warranted.


Quantum computers aren't magic; they have specific capabilities that can be planned around. Google 'post-quantum cryptography' for current work on quantum-resistant algorithms, some of which are already being deployed to production.


Any sufficiently advanced technology is indistinguishable from magic. I will openly say that I do not understand it beyond knowing that, in a very general sense, it is not based on ones and zeroes. If I do not understand it, it is difficult for me to wrap my head around what it can and cannot do.

It might not be magic, but it might as well be (and may end up being the next hype train).


HN discussion for the paper itself: https://news.ycombinator.com/item?id=34235964


It's not an HN discussion if there's no discussion there.



