I don't see how people can doubt this. Positive results get published. Even without p-hacking, that filter guarantees a problem: when an effect isn't real, about 1 in 20 tests will still land under p < 0.05 by chance, and those are exactly the ones that make it into print. Combine that with p-hacking and small samples chasing small-magnitude effects... well, replication is going to be a problem. Part of the solution here might be more impetus and incentive to publish negative results.
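Just to put rough numbers on it, here's a quick back-of-the-envelope simulation of how a publish-only-the-positives filter concentrates chance findings in the literature. The base rate of true effects and the typical study power below are made-up assumptions for illustration, not estimates from any real field:

```python
# Sketch: if only "significant" results get published, the published
# literature contains far more than 5% false positives. The base rate of
# real effects and the study power are assumptions, not field estimates.
import numpy as np

rng = np.random.default_rng(0)

n_studies = 100_000
true_null_fraction = 0.5   # assumption: half of tested effects don't exist
power = 0.5                # assumption: 50% chance of detecting a real effect
alpha = 0.05

is_null = rng.random(n_studies) < true_null_fraction

# Null effects cross p < 0.05 by chance ~5% of the time;
# real effects are detected with probability equal to the study's power.
significant = np.where(is_null,
                       rng.random(n_studies) < alpha,
                       rng.random(n_studies) < power)

published = significant                    # "only positive results get published"
false_positive_share = is_null[published].mean()

print(f"Published studies: {published.sum()}")
print(f"Share of published results that are false positives: {false_positive_share:.1%}")
# With these assumptions, roughly 9% of the published literature is pure
# chance findings -- and that's before any p-hacking or small-sample bias.
```

Even with that fairly generous 50/50 base rate and no p-hacking at all, close to one in ten published results is noise under these made-up numbers; push the base rate of real effects down, or the power down, and it gets a lot worse.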
Agreed. I think all fields should require pre-registration, where studies are announced before they are run, to help counter the positive-results bias. Still not a 100% fix (I could imagine a lot of studies that look like they'll show no effect quietly moving into the "oops, never mind, I knocked over the test tube" category), but certainly better than what we have now.
Yes, and with larger samples. Too many small-sample experiments should, at most, be treated as pilot studies, and even a positive result at p < 0.01 should still only be considered "suggestive" until it's replicated, preferably with a larger sample.
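To illustrate with made-up numbers (a small true effect and two arbitrary sample sizes), here's roughly what happens when small studies chase small effects: significant results are rare, and the ones that do appear overstate the effect, which is exactly why a same-size replication tends to fail:

```python
# Sketch: with a small true effect, a small study rarely reaches
# significance, and when it does, the observed effect is inflated
# ("winner's curse"). Effect size and sample sizes are arbitrary
# assumptions chosen for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_effect = 0.3          # assumption: a small true effect (Cohen's d)
n_sims = 20_000

for n_per_group in (20, 200):
    sig_effects = []
    for _ in range(n_sims):
        a = rng.normal(true_effect, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        _, p = ttest_ind(a, b)
        if p < 0.05:
            sig_effects.append(a.mean() - b.mean())
    power = len(sig_effects) / n_sims
    print(f"n={n_per_group}: power ~{power:.0%}, "
          f"mean 'significant' effect ~{np.mean(sig_effects):.2f} "
          f"(true effect is {true_effect})")
# Typical output: the n=20 studies find the effect maybe ~15% of the time,
# and the ones that do report an effect roughly twice its true size.
```

That inflation is the real trap: a faithful replication of the small study is being asked to reproduce an effect that was never that big in the first place.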