Quantum Algorithm Implementations for Beginners (acm.org)
110 points by rg111 on June 17, 2022 | 46 comments


As someone who did most of IBM's quantum course, the difficulty didn't come from writing the code. It came from the applications. How do I use this to speed up database access? How do I use this to better compute graphics? Can you make a neural net better with this?

Giving people useful applications will generate more interest than you will need.


> How do I use this to speed up database access? How do I use this to better compute graphics? Can you make a neural net better with this?

It seems that you have missed some of the basics of quantum computing.

For quantum computing to work, the system must be isolated from the environment for the duration of the computation. You can't use it for anything interactive like graphics.

Any practical problem worth solving with a quantum computer takes something like years, or millions of years, to solve on a classical computer. Do you have a database index where a search takes years, or a graphics problem where producing the output takes years?

Also consider the number of qubits needed. For database search you need a quantum computer with a number of qubits on the same order of magnitude as the database index.

Practical problems for quantum computing are mostly scientific problems like simulating chemistry.


Useful applications don’t exist. They surely will, at some point, but not yet. That’s why they aren’t provided as examples.


There are/seem to be very specific scientific applications, mostly around simulating other systems whose Hamiltonians map nicely onto that of the computer. For almost everyone on the planet that isn't very useful, but for that small handful it can be a game changer.

… doesn’t contribute much here, but worth knowing! It’s exciting either way.


Protein folding (computational chemistry), and breaking everyone's crypto.


> breaking everyone's crypto

Symmetric encryption and hash functions with 256 bits are already quantum-proof.

Post-quantum algorithms will be here in a few years. Some of them are already in use in critical applications.


Importantly, Bitcoin and Ethereum are (inadvertently) almost quantum-proof (because the addresses are hashes of the public keys), just not quite (because the public keys still get exposed during transactions). But at least the latter should be fixable (because the community is quite used to changes to the protocol anyway).


I have to object: while not many algorithms exist, one of the most famous, Grover's algorithm [0], is also one of the most general and quite powerful. It reduces searching 2^n items to about 2^(n/2) operations while using n qubits. That's the square root of the classical running time.

[0] https://en.wikipedia.org/wiki/Grover%27s_algorithm
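
To make the quadratic speedup concrete, here is a toy state-vector simulation of Grover's iteration in plain NumPy (my own sketch of the textbook algorithm, not code from the article):

    import numpy as np

    def grover(n_qubits, marked):
        # Search for one marked index among N = 2**n_qubits items.
        N = 2 ** n_qubits
        state = np.full(N, 1 / np.sqrt(N))    # uniform superposition H^n |0...0>
        for _ in range(int(np.pi / 4 * np.sqrt(N))):
            state[marked] *= -1               # oracle: phase-flip the marked item
            state = 2 * state.mean() - state  # diffusion: inversion about the mean
        return state

    amps = grover(n_qubits=10, marked=123)    # N = 1024, so ~25 iterations
    print(abs(amps[123]) ** 2)                # ~0.999: the marked item dominates

A classical scan needs ~N oracle calls on average; here ~sqrt(N) suffice, which is exactly the 2^n -> 2^(n/2) reduction above.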


Of course they exist, we just don't have QCs with enough logical qubits yet.


Yeah, looking at quantum computing (QC) from a programmer's perspective, I see issues with abstraction at multiple levels. Basically, what's going on is that it's too narrow (tightly coupled to its implementation details) but also too broad (generalized over many disciplines) to easily transform into simpler domains that are easier for humans to reason about.

So for example, in computer science we can explain basic data structures like binary trees and hash tables from whichever context a student understands. We can use pointers and references from C-style imperative languages. We can use car and cdr from Lisp-style functional languages. We can even jump up to higher levels of abstraction with associative arrays in JavaScript. Or build the whole thing in a key-value store like Redis, or even a SQL database or x86 assembly, if that floats our boat.

But with QC, even its "assembly language" of Hilbert spaces and tensors is already so difficult to understand, much less apply, that the terminology feels like gatekeeping. There's no way to easily identify the smoking gun(s) which separate a QC simulation from qubits in the real world. It's like asking an assembly language teacher how a full adder works, but they throw up their hands and say that first we must understand Fermi levels and electron diffusion across a PN junction. Which while important, doesn't answer the student's question.

What's needed are simple transforms to go from any existing formula/algorithm to its "optimized" QC equivalent. I don't know if that can be done in a general way, like how we can switch from the time domain to the frequency domain with a Fourier transform, or take the derivative of an equation. Maybe it's more like an integral where the solution must be "found" by composing known integrals from a dictionary using heuristics.

But without those transforms, I see us being stuck in a QC winter for the foreseeable future. My words are imperfect, but I know precisely what I'm getting at here. I can't run with QC the way I would with any other programming language, and it really bothers me. To the point of, I'll probably have to actually learn it from first principles and write it up from a programmer's perspective! Someday.


“The remaining probability was distributed over 01 and 10, which should not be a part of the Bell state. As we discussed before, this phenomenon is due to errors inherent to the quantum processor. As the backend technology improves we expect to get better results.”

Debugging sounds like a real bear.
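
For reference, the ideal Bell-state arithmetic fits in a few NumPy lines (a hand-rolled sketch, not any particular SDK); on a noiseless machine the 01 and 10 outcomes have probability exactly zero, so any counts there are hardware noise:

    import numpy as np

    # Apply H to qubit 0 of |00>, then CNOT (control 0, target 1).
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],   # basis order: 00, 01, 10, 11
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    state = CNOT @ np.kron(H, np.eye(2)) @ np.array([1, 0, 0, 0])
    print(np.round(state ** 2, 3))   # [0.5 0.  0.  0.5]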


You're asking whether they can do things that classical computers can do reasonably well already marginally better, while their whole point is that they (one day, hopefully) can do things that current computers can't do at all (at scale, at least). Most of the speculative applications I've heard are in science, cryptography and AI/ML.


QPUs are effectively (and theoretically) high-speed, specialized coprocessors at best. There's no way they're going to be doing anything you mentioned any time soon because that's not what they're good at or built for.


There's some research on AI, e.g. at Microsoft Research.

https://arxiv.org/abs/1412.3489


IBM has (had?) a quantum course? Was it a free online course? And if one wants to take it out of curiosity, not expecting to use the skills (immediately, although maybe in the future), would you recommend it?


Factoring large integers in order to decrypt RSA. Quantum computations do not produce numbers, only probability distributions. This is fine if all you are trying to do is factor integers, since the answer can be easily verified. This is why massive amounts of money are being thrown at quantum computing.
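
To sketch why verification is easy even when the quantum output is probabilistic, here is the classical scaffolding of Shor's algorithm in Python, with the quantum order-finding step stubbed out by brute force (my illustration; a real quantum computer replaces only the order() call):

    import math, random

    def order(a, n):
        # Smallest r with a^r = 1 (mod n); the only step a quantum
        # computer speeds up. Brute-forced here for illustration.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def factor(n):
        while True:
            a = random.randrange(2, n)
            g = math.gcd(a, n)
            if g > 1:
                return g, n // g     # lucky guess already shares a factor
            r = order(a, n)
            if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
                p = math.gcd(pow(a, r // 2, n) - 1, n)
                return p, n // p     # verify by multiplying: p * (n // p) == n

    print(factor(15))                # (3, 5) or (5, 3)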


See the “Quantum Algorithm Zoo” for a comprehensive list of quantum algorithms.

https://quantumalgorithmzoo.org/

A massive amount of work is needed to make these comprehensible.


> Tequila is an Extensible Quantum Information and Learning Architecture where the main goal is to simplify and accelerate implementation of new ideas for quantum algorithms. It operates on abstract data structures allowing the formulation, combination, automatic differentiation and optimization of generalized objectives. Tequila can execute the underlying quantum expectation values on state of the art simulators as well as on real quantum devices.

https://github.com/tequilahub/tequila#quantum-backends

Quantum Backends currently supported by tequilahub/tequila: Qulacs, Qibo, Qiskit, Cirq (SymPy), PyQuil, QLM / myQLM

tequila-tutorials/Quantum_Calculator.ipynb https://github.com/tequilahub/tequila-tutorials/blob/main/Qu... :

> Welcome to the Tequila Calculator Tutorial. In this tutorial, you will learn how to create a quantum circuit that simulates addition using Tequila. We also compare the performance of various backends that Tequila uses.
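
For flavor, a minimal usage sketch based on Tequila's README (treat the exact signatures as assumptions taken from its docs, not verified here): circuits are abstract objects combined with +, and the backend is only chosen at simulation time.

    import tequila as tq

    # Build an abstract Bell-pair circuit, then simulate it on two backends.
    U = tq.gates.H(target=0) + tq.gates.CNOT(control=0, target=1)
    print(tq.simulate(U))                                 # default simulator
    print(tq.simulate(U, samples=100, backend="qiskit"))  # sampled via Qiskit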


A problem that could possibly be quantum-optimized, with an existing set of classical solutions: scheduling.

Optimize the scheduling problem better than e.g. SLURM and then generate a sufficient classical solution that executes in P-space on classical computers.

SLURM https://en.wikipedia.org/wiki/Slurm_Workload_Manager :

> Slurm uses a best fit algorithm based on Hilbert curve scheduling or fat tree network topology in order to optimize locality of task assignments on parallel computers.[2]

Additional applications and use cases: "Employee Scheduling" > "Ask HN: What algorithms should I research to code a conference scheduling app" https://news.ycombinator.com/item?id=22589911

> [Hilbert Curve Scheduling] https://en.wikipedia.org/wiki/Hilbert_curve_scheduling :

> [...] the Hilbert curve scheduling method turns a multidimensional task allocation problem into a one-dimensional space filling problem using Hilbert curves, assigning related tasks to locations with higher levels of proximity.[1] Other space filling curves may also be used in various computing applications for similar purposes.[2]
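
For reference, the classical core of Hilbert curve scheduling is tiny; this is the standard xy2d routine, adapted from the C code on the Wikipedia page above:

    def xy2d(n, x, y):
        # Map cell (x, y) on an n x n grid (n a power of two) to its
        # distance along the Hilbert curve; cells that are close in 2-D
        # get close 1-D indices, which is what preserves task locality.
        d, s = 0, n // 2
        while s > 0:
            rx = 1 if x & s else 0
            ry = 1 if y & s else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:              # rotate/flip quadrant to canonical form
                if rx == 1:
                    x, y = n - 1 - x, n - 1 - y
                x, y = y, x
            s //= 2
        return d

    # Sorting tasks by Hilbert index keeps spatial neighbors adjacent:
    print(sorted(((x, y) for x in range(4) for y in range(4)),
                 key=lambda c: xy2d(4, *c)))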


Also nice:

Quantum Country: A Free Introduction to Quantum Computing and Quantum Mechanics [0][1][2]

The content is okay (as far as I read), and it has spaced repetition built in.

[0]: https://quantum.country/

[1]: https://news.ycombinator.com/item?id=23561018

[2]: https://news.ycombinator.com/item?id=19426573


I'll copy-paste what I wrote before about Quantum Country:

This doesn't provide solutions to the problems and doesn't show you how to do the mathematical calculations. It assumes you know proof-based linear algebra, and if you're a math beginner, the level this is written at will be beyond you.

Instead I recommend

https://www.thomaswong.net/introduction-to-classical-and-qua...

which doesn't assume much prerequisite knowledge and contains solutions so you can check your work.


I did not know about this one. Thanks.

As someone who did a Physics BS at a solid uni, I didn't find QC difficult (I didn't finish it, though).


"... are shorthands for the vectors encoding the two basis states of a two dimensional vector space." How many beginner's know what a basis of a vector space is -- how about unitary operators -- Eigen stuff?

The word "beginner" does a disservice to the level you need to be at to start wadding into quantum stuff. At a minimum, you better not be a beginner of linear algebra!

There is, imo, no better way to discourage people than saying this stuff is for "beginners".
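
To be fair, the amount of linear algebra in that quoted sentence is small; a few NumPy lines unpack it (my illustration, not the paper's):

    import numpy as np

    # |0> and |1> are just the standard basis of a 2-D vector space,
    # and a gate is a unitary matrix acting on that space.
    ket0 = np.array([1, 0])
    ket1 = np.array([0, 1])
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
    print(H @ ket0)                                 # (|0> + |1>)/sqrt(2)
    print(np.allclose(H.conj().T @ H, np.eye(2)))   # unitary: H^dagger H = I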


Not trolling: is there really a need to train a cohort of quantum programmers? Can these computers actually do something better than their classical counterparts, right now?


> Can these computers actually do something better than their classical counterparts, right now?

You could say quantum computers do something better than classical computers, but you cannot say they do something useful better than classical computers. At present, “quantum supremacy” only applies to toy problems.

Your other question is not so easily answered.


> Can these computers actually do something better than their classical counterparts, right now?

The answer is yes. Sometimes this is referred to as 'quantum supremacy' and some researchers at Google declared it in 2019.

https://www.nature.com/articles/s41586-019-1666-5

https://ai.googleblog.com/2019/10/quantum-supremacy-using-pr...


Thanks, interesting idea, but it seems the practical applicability is still far away.

> One of the most celebrated results in quantum computing is the development of a quantum algorithm for factorization that works in time polynomial in n. This algorithm, due to Peter Shor and known as Shor’s algorithm, runs in O(n^3 log n) time and uses O(n^2 log n log log n) gates. The first experimental implementation of this algorithm on a quantum computer was reported in 2001, when the number 15 was factored. The largest integer factored by Shor’s algorithm so far is 21.


Quantum supremacy, while a very interesting experiment and a milestone in quantum computing, proves hardly anything about the usefulness of quantum computers. It "only" proves that quantum computers are the best quantum computers, and that classical ones can't efficiently simulate them. It says nothing about useful operations.


From a theoretical point of view, this is evidence that quantum computing is a more powerful model of computation. I think it would be hard to argue that a model of computation that can solve more problems is any less useful. Applications like quantum simulation do appear to be difficult on a classical computer yet efficiently computable on a quantum computer.


Basketball is hard to simulate on a classical computer, but easy to simulate by playing a game of basketball.


In what sense does the result describe a computation? And in what sense is the device described a computer?

The only thing it can do faster is to run itself.

It's as if I called a Boeing airliner an aerodynamic computer and each flight a computation.


They are programmable; that makes them drastically more interesting than a single-parameter aerodynamic analog computer like the Boeing, from the point of view of computational complexity. It does not make them "useful" yet, but it is a big milestone. See https://scottaaronson.blog/?p=5460


Programmable, or parametrizable?

In any test flight there are plenty of parameters set for the flight. Are aerodynamic computers programmable?

But in any case, the mere existence of "quantum supremacy" research is a clear indication these are pointless contraptions.


Could you elaborate on your last paragraph? Proofs of concept that cannot solve "useful" problems seem like incredibly important milestones to me. Or is your frustration that they are occasionally presented by overly enthusiastic engineers as more than "useless" technology demonstrators (with this frustration I would agree)?

The second half of the article I shared covers the parametrization question you raised.


> Could you elaborate on your last paragraph?

A successful technology does not need to show theoretical supremacy. Its proponents can simply show the useful services and products it makes.

> Proofs of concept that cannot solve "useful" problems seem like incredibly important milestones to me.

Sure, just don't claim supremacy until you can back it up with an actually reasonable definition of computation.

> Or is your frustration that they are occasionally presented by overly enthusiastic engineers as more than "useless" technology demonstrators (with this frustration I would agree)?

It is hard to find any news about quantum computing that is not wildly exaggerated.

> The second half of the article I shared covers the parametrization question you raised.

His argument may apply to the teapot example, but for, say, a test flight to fine-tune the autopilot behavior, there are plenty of inputs that can be set. I'd argue the aerodynamic-computer supremacy example still stands.


To your last point: I think I disagree, because your analog computer (like all analog computers) addresses one fixed-size instance of a problem, not a scalable family of problems. You cannot make a computational-complexity big-O statement about your analog computer, but you can about boson sampling and random circuit sampling. This is a crucial milestone in the creation of a physical realization of a computational device.

In your first paragraph, while I am sympathetic with your annoyance (pardon my imprecise choice of words), I disagree there too. Mostly because I can imagine the same argument used against Babbage, Lovelace, and Turing, who did their theory (and failed experiment) work decades to a century before a scalable classical computer existed.


I, for one, welcome any learning. So even if I never actually get to program a quantum computer, why would I not want to know how to do so? Why should the ability/knowledge be hidden from anybody?

But you'll never know, until you look at me.


You can program a quantum computer today; the IBM quantum computing cloud provides 5-qubit machines for free.
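
For example, a circuit for one of those 5-qubit machines is a few lines of Qiskit (circuit construction and local drawing shown; submitting to real hardware goes through IBM's provider package, whose API varies by Qiskit version):

    from qiskit import QuantumCircuit

    # A 3-qubit GHZ circuit: Hadamard plus two CNOTs entangles all three qubits.
    qc = QuantumCircuit(3, 3)
    qc.h(0)
    qc.cx(0, 1)
    qc.cx(1, 2)
    qc.measure(range(3), range(3))
    print(qc.draw())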


Also see Quantum Inspire for free, accessible quantum computing on the cloud:

https://www.quantum-inspire.com/


Unfortunately, every page of the document has the words “Just accepted” printed across it, making it almost unreadable (as in: it can be read, but it's painful).


You can download a version without the watermarks from arXiv: https://arxiv.org/abs/1804.03719


I have a copy without any watermarks. How should I share it here, or with you?


Not GP, but can you send it to me via email?

Email is in my bio.


Almost three dozen authors, yeah, for "beginners" :-)


I guess different people just wrote different parts.

I skimmed over some pages and the content did seem like it was for beginners.



