> We already know from plenty of surveys that people will self-report a lot of benefits; and we also already know that adding in randomization, longitudinal tracking, or blinding makes most or all of the effects go away.
Do we know this or do we "know" this?
Having another study, even a self-reported one like this, doesn't degrade the scientific body of knowledge; it adds to it. The fact that there have been other studies doesn't make doing another worthless.
I also think a lot of people here are discounting the fact that this study would typically be very hard (or impossible) to administer at such a large scale, especially in such a short amount of time.
In my mind, the important questions here are:
Are the benefits of massive scale studies worth the trade-offs of self-reporting?
What can we do to reduce or eliminate those trade-offs?
Imagine the scientific value of massive studies that are easy to create, easy to administer, and accessible to any research group. If the research data can be made even a little more reliable, that's hugely valuable.
>doesn't degrade the scientific body of knowledge, it adds to it.
Yes. The simple corollary to Bayes' theorem: all observations, even biased ones, improve your knowledge of the world.
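A toy sketch of that corollary in Python (every number here is made up): if you can model the bias, even a biased self-report survey shifts your posterior between hypotheses.

```python
import math

def gauss_pdf(x, mu, sd=1.0):
    """Normal density; models a noisy observed mean under each hypothesis."""
    return math.exp(-((x - mu) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

# Two hypotheses about the true effect, with a 50/50 prior (all hypothetical).
prior = {"no effect": (0.0, 0.5), "real effect": (0.3, 0.5)}
SELF_REPORT_BIAS = 0.3   # assume we know roughly how inflated self-reports are
observed_mean = 0.35     # hypothetical observed mean self-reported improvement

unnorm = {name: p * gauss_pdf(observed_mean, effect + SELF_REPORT_BIAS)
          for name, (effect, p) in prior.items()}
z = sum(unnorm.values())
posterior = {name: v / z for name, v in unnorm.items()}
print(posterior)  # belief shifts toward "no effect": 0.35 is nearer 0.0+bias than 0.3+bias
```

The catch, of course, is that you rarely know the bias this precisely; that uncertainty is what the efficiency objection below is about.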
But it doesn't say anything about efficiency. A single photodiode mechanically scanned behind a pinhole can capture the same image as an image sensor behind a lens, but it takes millions of times longer. A self-report survey will say something, but it will be very noisy.

How many self-report surveys would it take to match an actual double-blinded study with double-digit participants? What if the answer is "hundreds"? Should we wait a century for all these surveys to be done before even attempting data analysis? (After all, surveying the same set of people over and over won't give you additional data; you need to wait for some of the survey respondents to die.) What about all the people in the preceding 99 years who, perhaps reasonably, look at a stack of fifty self-reported surveys that all say the same thing and come to an incorrect conclusion? Science is right eventually, not right right now.
You say that "millions" of participants should be useful, almost automatically. Why should that be the case? If you're off by an order of magnitude somewhere in your stack of assumptions, the data could be noisy enough that you'd need to sample from a population of a hundred billion. If the data is bad or biased enough, then even a large survey wouldn't be adequately powered without a world population ten times bigger.
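A quick simulation of the underlying worry (all numbers made up): more samples shrink random noise, but a systematic self-report bias never averages out, so the estimate converges to the wrong answer.

```python
import random

random.seed(0)
TRUE_EFFECT = 0.0   # suppose the intervention actually does nothing
REPORT_BIAS = 0.3   # hypothetical expectancy bias in each self-report
NOISE_SD = 1.0      # per-respondent random noise

def survey_mean(n):
    """Average of n self-reports; each report = truth + bias + noise."""
    return sum(random.gauss(TRUE_EFFECT + REPORT_BIAS, NOISE_SD) for _ in range(n)) / n

for n in (100, 10_000, 200_000):
    print(f"n={n:>7,}  estimated effect = {survey_mean(n):+.3f}")
# The estimate settles near +0.3, not the true 0.0: the error floor is the bias itself.
```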
> You say that "millions" of participants should be useful, almost automatically.
I don't disagree that there are issues with self-reported data, but I do disagree with saying that a large set of self-reported data is worthless.
I also didn't say millions, I said massive. For research like this, that could be the difference between 40k participants (through an app) and 100 participants (clinically). While the size of the data set isn't everything, it does matter: with 40k participants you could aggressively invalidate 30k of them and still have 100 times as many data points for statistical validity as that clinical study. The clinical study would also cost more and take much longer to run.
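A back-of-envelope sketch of why raw participant count still helps with the random (non-systematic) part of the error: assuming independent responses with the same spread, the standard error of the mean shrinks like 1/sqrt(n). The numbers below are illustrative, not from the study.

```python
import math

RESPONSE_SD = 1.0  # assumed spread of individual responses (arbitrary units)

def standard_error(n, sd=RESPONSE_SD):
    """Standard error of the mean for n independent responses."""
    return sd / math.sqrt(n)

app_n = 10_000     # 40k collected, 30k aggressively invalidated
clinic_n = 100     # typical small clinical sample
print(standard_error(clinic_n) / standard_error(app_n))  # ~10: 100x participants -> 10x tighter estimate
```

The caveat from upthread still applies: this only shrinks random noise; a systematic self-report bias stays the same size no matter how large n gets.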
> How many self-report surveys would it take to match an actual double-blinded study with double digit participants? What if the answer is "hundreds"?
I don't know, but how many more can be easily done now if technology is used like it was here?
What if doing 100s in the same amount of time is now feasible?
What if it takes far fewer than your arbitrary assumption of 100s of studies?
What if it only takes 1 study with 100x the participants?
No one here is advocating for stopping double-blinded studies. Why not both?
> If you're off by an order of magnitude somewhere in your stack of assumptions, the data could be noisy enough that you'd need to sample from a population of a hundred billion.
Why would you assume that it would be off by an order of magnitude? That seems about as likely as the data being perfectly accurate (that is, very unlikely).
And why 100 billion? It seems like you're just throwing out arbitrarily large numbers here.
> Should we wait a century for all these surveys to be done before even attempting data analysis?
This study took less than a year, and because it was automated, could likely be run concurrently with others. Why would it take a century?
> surveying the same set of people over and over won't give you additional data. You need to wait for some of the survey respondents to die.
Why would they survey the same set of people for the same study? There's a lot of people in the world.
I know you've decided for some reason that it would take 100 billion people, and that 100s of studies would need to be done to compare to even one clinical study. I know that for some reason you think those studies would take a century rather than being run in parallel.
Consider for a second that if you're wrong on any of those magically large and arbitrary numbers, this might actually have some value to the world, rather than being something to dismiss out of hand.
Again, no one is suggesting any fewer clinical, double blinded, etc. studies. Doing studies this way has some potentially very large upside, and it can be easily used in tandem with those other studies to speed up research and gain much larger data sets.
Yes it has trade-offs. We can be aware of those without throwing out the whole idea.
Yes, but it might make sense to do a very well-structured case-control study using mouse or rat models.
There are well over 200 highly diverse lines of mice that could be split into isogenic cases and controls by sex, age, and any other instrumental variable (diet, enrichment).
We have many objective ways to quantify rodent health and behavior. Admittedly, there is no “happiness” metric that is translationally relevant to humans, but at least with rodents we can critically evaluate what goes on in the CNS at molecular and synaptic levels with and without drug X.
Mice and rats are in the same superorder as monkeys and apes, Euarchontoglires (the Supraprimates). They are also fine for rigorous case-control studies of drug effects, IF and ONLY IF the study includes a high level of genetic variation among the rodents to model human genetic and biochemical diversity. Unfortunately, typical rodent studies of just a single inbred type of mouse or rat are of modest translational relevance to humans.
> Having another study, even a self reported one like this, doesn't degrade the scientific body of knowledge, it adds to it.
I really don't see how it's valuable. I don't even believe my friends when they talk about how much micro-dosing benefits them (and this is coming from someone who is a huge supporter of psychedelics). People are really good at making themselves believe what they want to believe.
Yeah, self-reported data should be taken with a grain of salt. On a large enough scale, though, that data says something. It might not be the original insight you're looking for, but it's still data.
There are also plenty of tests and measurements that can be given through an app that don't rely solely on subjective self-reporting: reflexive tapping tests, wearable health telemetry, memory tests, and so on.
I'm excited to see researchers taking advantage of tech to speed up studies and make research more feasible in general. More research means more insights and advancements in the long run for all of us.