As the first paragraph says:
"In this post, we will understand the concept of FixMatch and also see it got 78% accuracy on CIFAR-10 with just 10 images."
Reporting the best performance on a method that deliberately uses just a small subset of the data is shady as heck.
Agreed. Also, this model fully uses the other images, just not in the way that traditional supervised learning would. Saying "with just 10 labels" would be more accurate. Impressive results, but this isn't some hyper-convergence technique that somehow trains on only ten images.
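For anyone curious how the unlabeled images actually get used: the core of FixMatch is pseudo-labeling with a confidence threshold. Here's a toy numpy sketch of the unlabeled-loss idea (function name and inputs are illustrative, not from the paper's code; 0.95 is the confidence threshold the paper reports using):

```python
import numpy as np

def fixmatch_unlabeled_loss(weak_probs, strong_probs, threshold=0.95):
    """Sketch of FixMatch's loss on unlabeled data.

    weak_probs, strong_probs: (N, C) softmax outputs for weakly and
    strongly augmented views of the same N unlabeled images.
    """
    pseudo_labels = weak_probs.argmax(axis=1)       # hard pseudo-labels from weak view
    mask = weak_probs.max(axis=1) >= threshold      # keep only confident predictions
    if not mask.any():
        return 0.0                                  # nothing confident this batch
    # Cross-entropy pushing the strong-view prediction toward the pseudo-label
    picked = strong_probs[mask, pseudo_labels[mask]]
    return float(-np.mean(np.log(picked + 1e-12)))

# Toy batch: first image is confidently predicted (pseudo-labeled class 0),
# second falls below the threshold and is ignored.
weak = np.array([[0.97, 0.02, 0.01],
                 [0.40, 0.35, 0.25]])
strong = np.array([[0.90, 0.05, 0.05],
                   [0.30, 0.30, 0.40]])
loss = fixmatch_unlabeled_loss(weak, strong)
```

So the ten labels only seed the process; every unlabeled image still contributes gradient whenever the model becomes confident about it.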
It seems like a big win for images and other domains where collecting the data is cheap but labelling it is expensive. Less great for (say) drug discovery, where running the experiments to generate the data points is the bottleneck.
I agree it's pretty sensationalistic, and I almost ignored it for that reason. But it turns out that it's actually well worth a read if you can get past that one flaw.
Ok, we've reverted the title to that of the page, in keeping with the site guidelines (https://news.ycombinator.com/newsguidelines.html). When changing titles, the idea is to make them less baity or misleading, not more!
(Submitted title was "Semi-Supervised Learning: 85% accuracy on CIFAR-10 with only 10 labeled images")