Old 01-10-2011, 04:16 PM   #2
ChinoCoug
Senior Member
 
 
Join Date: Jan 2006
Location: NOVA
Posts: 3,005

Quote:
University and government research overseers rarely step in to directly enforce research quality, and when they do, the science community goes ballistic over the outside interference. The ultimate protection against research error and bias is supposed to come from the way scientists constantly retest each other’s results—except they don’t.
Do scientists have any incentive to trash each other's studies? You can get published that way, can't you? Or is it just boring compared to publishing your own study?

Quote:
Originally Posted by MikeWaters View Post
I'll give you an example of the kind of thing that can happen (and certainly does happen all the time). You have a research question, you collect your data, you decide on how to analyze it. You have an intervention, and an outcome. You adjust for certain other factors (like age). You get a result that indicates that your intervention "worked", but there is a 6% chance that the positive results you found are a matter of chance, and not truly efficacious. By convention, if there is a 5% chance or less that it is chance, then it is accepted as gospel truth. But if it's above 5% then it is a "negative study." The researcher is frustrated. He just knows that the intervention works, and the world needs to know about it. He thinks about it, and says, "you know what, I really should have adjusted for sex and race as well." He runs the analysis, and it comes back as only 4% chance of being random variation. Now he has a positive study. And he can make the argument, that he really indeed should have included race and sex.
In economics, peer reviewers would subject your study to a robustness test--see what happens to statistical significance if you tweak things here and there, add a variable in, etc. What happens if you expand the study across time, or space? In fact, sometimes when they publish studies they show a stepwise regression: they chart how statistical significance changes with the addition of each variable. If statistical significance can't hold up, your study is declared non-robust.
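
Here's roughly what I mean, as a toy sketch in Python (the statsmodels library and every variable name here are just my choices for illustration, not anything from an actual study): regress the outcome on the treatment alone, then re-fit while adding one control at a time, and watch whether the treatment's p-value survives each specification.

Code:
# Toy robustness check in the spirit of the stepwise tables described
# above: add one control at a time and track the treatment's p-value.
# All data here is fabricated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Fake data: age and income both move the outcome; the treatment's
# true effect is weak, so its significance is fragile.
age = rng.normal(40, 10, n)
income = rng.normal(50, 15, n)
treat = rng.binomial(1, 0.5, n)
y = 0.15 * treat + 0.05 * age + 0.03 * income + rng.normal(0, 1, n)

controls = [("age", age), ("income", income)]
X_cols, added = [treat], []

for step in range(len(controls) + 1):
    X = sm.add_constant(np.column_stack(X_cols))
    fit = sm.OLS(y, X).fit()
    # Index 1 is the treatment coefficient (index 0 is the constant).
    print(f"controls={added}: beta={fit.params[1]:.3f}, p={fit.pvalues[1]:.3f}")
    if step < len(controls):
        name, col = controls[step]
        X_cols.append(col)
        added.append(name)

If the coefficient flips in and out of significance as controls come and go, that's exactly the non-robustness the reviewers are screening for.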

I'm not familiar with clinical trials, but from what I've heard you use a rather small sample size and you don't need control variables: all you have to do is compare a randomly assigned treatment group against a randomly assigned control group, so in expectation the characteristics of both groups are roughly the same (which is what eliminates the need for more variables).
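
If I've got that right, the logic is something like this toy simulation (numpy/scipy, all numbers invented): randomize assignment, check that a covariate like age balances across arms, then estimate the effect with a plain difference in means.

Code:
# Toy simulation of why randomization works: assign treatment at random
# and covariates balance out on average, so an unadjusted comparison
# estimates the effect. Numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200  # modest sample, as in many trials

age = rng.normal(40, 10, n)
treat = rng.permutation(np.repeat([0, 1], n // 2))  # random assignment
y = 0.5 * treat + 0.05 * age + rng.normal(0, 1, n)  # true effect = 0.5

# Covariate balance: mean age should be similar in the two arms.
print("mean age (control, treated):",
      age[treat == 0].mean().round(1), age[treat == 1].mean().round(1))

# Unadjusted difference in means, with a two-sample t-test.
t, p = stats.ttest_ind(y[treat == 1], y[treat == 0])
print(f"difference in means = {y[treat == 1].mean() - y[treat == 0].mean():.2f}, p = {p:.3f}")

Because assignment is random, age comes out balanced between arms on average, so the unadjusted comparison recovers the true effect without any control variables.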

I think economists are more adept with quantitative methods than anybody (cf. Pelagius vs. Indy Coug).
__________________
太初有道 ("In the beginning was the Word")