
I don't see how people can doubt this. Positive results get published. Even without p-hacking, that means a meaningful share of experiments published at the p < 0.05 level are just the 1-in-20 chance results. Combine that with p-hacking and small samples with small-magnitude effects... well, replication is going to be a problem. Part of the solution here might be more impetus and incentive to publish negative results.
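To put rough numbers on that filter, here's a quick simulation. The 10% base rate of true hypotheses and 30% power are made-up but plausible assumptions, not measured values:

    import numpy as np

    rng = np.random.default_rng(0)
    n_studies = 100_000
    prior_true = 0.10   # assumed fraction of tested hypotheses that are actually true
    power = 0.30        # assumed power (small samples, small effects)
    alpha = 0.05

    is_true = rng.random(n_studies) < prior_true
    # "Significant" with prob = power when the effect is real,
    # prob = alpha when it isn't (the 1-in-20 chance results).
    significant = np.where(is_true,
                           rng.random(n_studies) < power,
                           rng.random(n_studies) < alpha)

    published = significant  # only positive results get written up
    false_share = (~is_true[published]).mean()
    print(f"share of published results that are false: {false_share:.2f}")
    # ~0.60 with these inputs: most of what gets published is noise.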


Agreed. I think all fields should require pre-registration, where studies are announced before they are run, to help counter the positive-results bias. It's still not a 100% fix (I can imagine a lot of studies that look like they'll show no effect quietly moving into the "oops, never mind, I knocked over the test tube" category), but it's certainly better than what we have now.


I’ve always thought that p < 0.05 is a ridiculously low bar, missing at least one zero after the dot.


It should be more than enough if the experiment's replicated enough times.


Yes, and with larger samples. There are too many small-sample experiments which, at most, should be considered "pilot" studies, and when their results are positive, even with p < 0.01, they should still only be considered "suggestive" until replicated, preferably with larger samples.
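For a sense of scale, here's a back-of-the-envelope power calculation for a small effect (Cohen's d = 0.2 is my assumption) using the usual normal approximation for a two-sample test:

    from scipy.stats import norm

    def approx_power(n_per_group, d=0.2, alpha=0.05):
        # Two-sample test, normal approximation: noncentrality = d * sqrt(n/2)
        z_crit = norm.ppf(1 - alpha / 2)
        return 1 - norm.cdf(z_crit - d * (n_per_group / 2) ** 0.5)

    for n in (20, 50, 200, 800):
        print(f"n = {n:3d} per group -> power ~ {approx_power(n):.2f}")
    # n = 20 gives power around 0.10; even 200 per group only reaches ~0.5,
    # so a "significant" small-n result says very little on its own.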


That's equivalent to having a lower p threshold in the first place though.
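A quick sanity check of that equivalence under the null (two independent replications, each at p < 0.05, should fire by chance about 0.05^2 = 0.25% of the time):

    import numpy as np

    rng = np.random.default_rng(1)
    alpha, k, trials = 0.05, 2, 1_000_000

    p = rng.random((trials, k))          # p-values are uniform under the null
    both = (p < alpha).all(axis=1).mean()
    print(f"simulated: {both:.4f}, alpha**k: {alpha**k:.4f}")
    # Both come out near 0.0025, i.e. demanding two replications at 0.05
    # is, false-positive-wise, the same as one test at p < 0.0025.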



