> I think it is by definition the most accurate system

By gum, an opportunity to quibble semantics on the internet. That is true if 'benchmarking' means 'only admit to knowing what has been measured' and 'accuracy' means 'must be numerically quantifiable given existing data'. It is false otherwise, especially if accuracy means 'conforming to truth' and we have a model of how the numbers are being generated.

Obviously if I generate a set of numbers by sampling a normal distribution then the most accurate model is a normal distribution, no matter what empirical data I use for benchmarking.

That is to say, if we know how the data was generated (sans noise), we can reject empirical distributions as the most accurate, because we already know the distribution of the data directly.
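A minimal sketch of that point (Python with NumPy/SciPy; the sample sizes and seed are hypothetical): build an 'empirical' density from a benchmark sample that we know was drawn from a normal distribution, then score both it and the true normal on fresh draws from the same process. The true model typically scores at least as well, even though the empirical fit was built specifically to match the benchmark data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    benchmark = rng.normal(loc=0.0, scale=1.0, size=200)   # the historical/benchmark sample
    future = rng.normal(loc=0.0, scale=1.0, size=10_000)   # fresh draws from the same process

    empirical = stats.gaussian_kde(benchmark)    # a density built purely from the sample
    true_model = stats.norm(loc=0.0, scale=1.0)  # the distribution we know generated the data

    # Average log-likelihood on new data (higher is better).
    print("empirical fit :", empirical.logpdf(future).mean())
    print("true model    :", true_model.logpdf(future).mean())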



Ok, that is a legitimate ... quibble. Let's assume that we don't already know the correct distribution. In that case we're going to judge each theoretical fit by how close it comes to the historical data. (Or else we're going to get that wrong, which is another common approach.) Elo is much more prestigious and credible than some guy who made a game, but it is less credible than the data itself, for some number of data points N. (Although I think a theory can be more prestigious than data almost independent of N.) A toy sketch of that trade-off is below.
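As an illustration (all ratings and game counts below are hypothetical), compare the standard Elo win expectancy against an empirical win fraction for the same pair of players:

    def elo_expected_score(rating_a, rating_b):
        # Standard Elo expectation for player A against player B.
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

    # Hypothetical historical record: A beat B in 61 of 100 games.
    wins, games = 61, 100
    empirical_rate = wins / games

    # Hypothetical ratings for the two players.
    predicted = elo_expected_score(1600, 1520)
    print(f"Elo prediction: {predicted:.3f}  empirical win rate: {empirical_rate:.3f}")

With a handful of games the Elo formula is plausibly the better estimate of the true win probability; with enough games, the empirical rate is hard to argue with.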



