cougarguard.com — unofficial BYU Cougars / LDS sports, football, basketball forum and message board  

01-06-2011, 09:55 PM   #1
MikeWaters

How much fraud is going on in academic research? Autism and Vaccines

http://www.bmj.com/content/342/bmj.c5347.full

http://www.npr.org/2011/01/05/132692...tism-was-fraud

http://www.telegraph.co.uk/health/he...-has-said.html

Academic fraud is really a spectrum, not a yes/no dichotomy. You have some people who are sloppy. You have some people who let their biases enter the picture and game the question and the methods, but what they actually do in terms of procedures is on the up and up. Then you have people who adjust things (toward their biases) but can justify it somewhat reasonably. Some of that bias is subconscious, some of it is conscious. Then you have people who adjust things without any real justification--i.e., changing the rules in the middle of the game. And of course there is just plain making up the data.

I'll give you an example of the kind of thing that can happen (and certainly does happen all the time). You have a research question, you collect your data, you decide how to analyze it. You have an intervention and an outcome, and you adjust for certain other factors (like age). You get a result that indicates your intervention "worked," but the p-value is 6%--meaning a result at least that strong would show up about 6% of the time even if the intervention did nothing. By convention, a p-value of 5% or less gets accepted as gospel truth; anything above 5% makes it a "negative study." The researcher is frustrated. He just knows the intervention works, and the world needs to know about it. He thinks about it and says, "You know what, I really should have adjusted for sex and race as well." He reruns the analysis, and now the p-value is 4%. Now he has a positive study. And he can make the argument that he really, indeed, should have included race and sex.
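Here's a rough back-of-the-envelope simulation (Python) of what that re-analysis habit does to the error rate. The data, the covariates, and the numbers are all made up; the only point is that once you allow yourself a second try, the nominal 5% false-positive rate isn't 5% anymore.

[code]
# Toy simulation of the "re-run it with more covariates" move.
# Everything here is invented; the intervention truly does nothing.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_studies, n = 5000, 200
false_positives = 0

for _ in range(n_studies):
    treat = rng.integers(0, 2, n)          # randomized intervention
    age = rng.normal(50, 10, n)
    sex = rng.integers(0, 2, n)
    race = rng.integers(0, 3, n)
    outcome = rng.normal(0, 1, n)          # no true treatment effect

    # First analysis: adjust for age only.
    X1 = sm.add_constant(np.column_stack([treat, age]))
    p1 = sm.OLS(outcome, X1).fit().pvalues[1]

    # "You know what, I really should have adjusted for sex and race."
    X2 = sm.add_constant(np.column_stack([treat, age, sex, race]))
    p2 = sm.OLS(outcome, X2).fit().pvalues[1]

    if p1 < 0.05:
        false_positives += 1               # significant on the first try
    elif p2 < 0.05:
        false_positives += 1               # "rescued" by the extra covariates

# Typically lands somewhat above the nominal 0.05.
print(false_positives / n_studies)
[/code]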

And then you have the guys who tweak their analyses 50 different ways. The first 49 come back insignificant, but the 50th is significant. Guess which one gets reported.
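To put a rough number on that: if, for the sake of argument, those 50 analyses were independent 5%-level tests of an intervention that does nothing, the chance that at least one comes back "significant" is about 92%. A quick back-of-the-envelope check:

[code]
# Probability that at least one of 50 independent 5%-level tests
# is "significant" when there is no real effect.
p_at_least_one = 1 - 0.95 ** 50
print(round(p_at_least_one, 2))   # ~0.92
[/code]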

My point is this: I bet if you do enough digging into studies--many of them seminal studies accepted as gospel truth--you will frequently find evidence of gamed results.

This kind of thing requires a level of personal integrity that may not be as common as needed. Because it involves examining your own personal biases, and kind of de-investing yourself from the results of your studies. And ignoring the professional and financial implications of your results. Sometimes a failed study means your career is over. Because there is only a Step B if Step A works.

It's only the really high-profile studies that come under the microscope. The cloning genetics guy in S. Korea is an example. There have also been some medical safety trials where lawyers got involved and found a lot of problems in studies. This is merely the stuff that percolates into the public consciousness.

But I have good news. Most medical studies don't matter anyway, no matter the results. And even when done non-fraudulently, the results are often wrong.

http://www.theatlantic.com/magazine/...-science/8269/
01-10-2011, 04:16 PM   #2
ChinoCoug

Quote:
University and government research overseers rarely step in to directly enforce research quality, and when they do, the science community goes ballistic over the outside interference. The ultimate protection against research error and bias is supposed to come from the way scientists constantly retest each other’s results—except they don’t.
Do scientists have any incentive to trash each other's studies? You can get published that way, can't you? Or is it just boring compared to publishing your own study?

Quote:
Originally Posted by MikeWaters View Post
I'll give you an example of the kind of thing that can happen (and certainly does happen all the time). You have a research question, you collect your data, you decide how to analyze it. [...] He reruns the analysis, and now the p-value is 4%. Now he has a positive study. And he can make the argument that he really, indeed, should have included race and sex.
In economics, peer reviewers would subject your study to a robustness test--see what happens to statistical significance if you tweak things here and there, add one variable in, etc. What would happen if you expanded the study across time, or space? In fact, sometimes when they publish studies they show a stepwise regression: they chart the changes in statistical significance as each variable is added. If the statistical significance can't hold up, your study is declared non-robust.
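Something like this toy sketch (Python, with made-up wage data and variable names, just to show the shape of the exercise): you watch whether the coefficient you care about survives as controls are added.

[code]
# Toy robustness check: does the "treatment" coefficient hold up as
# controls are added one at a time? All data invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
educ = rng.normal(12, 2, n)
exper = rng.normal(10, 5, n)
treat = rng.integers(0, 2, n)
wage = 1.0 + 0.5 * educ + 0.2 * exper + 0.3 * treat + rng.normal(0, 2, n)

specs = [("no controls", []), ("+ educ", [educ]), ("+ educ, exper", [educ, exper])]
for label, controls in specs:
    X = sm.add_constant(np.column_stack([treat] + controls))
    fit = sm.OLS(wage, X).fit()
    # index 1 is the treat coefficient (index 0 is the constant)
    print(f"{label:15s} beta = {fit.params[1]:.3f}   p = {fit.pvalues[1]:.3f}")
[/code]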

I'm not familiar with clinical trials, but from what I hear you use a rather small sample size and you don't need control variables: all you have to do is compare a randomly assigned treatment group with a randomly assigned control group, so the characteristics of the two groups should be roughly the same (which removes the need to adjust for other variables).
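E.g., a quick made-up check of that logic: assign a few hundred people to two groups at random, and the background characteristics come out roughly balanced, so a plain two-group comparison of the outcome is a fair test. All numbers invented.

[code]
# Randomization roughly balances background characteristics,
# so an unadjusted two-group comparison is a fair test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 400
age = rng.normal(45, 12, n)
female = rng.integers(0, 2, n)
group = rng.permutation(np.repeat([0, 1], n // 2))    # random assignment

for name, x in [("age", age), ("female", female)]:
    print(name, round(x[group == 1].mean(), 2), "vs", round(x[group == 0].mean(), 2))

outcome = rng.normal(0, 1, n) + 0.3 * group           # modest treatment effect
p = stats.ttest_ind(outcome[group == 1], outcome[group == 0]).pvalue
print("unadjusted t-test p-value:", round(float(p), 4))
[/code]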

I think economists are more adept with quantitative methods than anybody (cf. Pelagius vs. Indy Coug).
__________________
In the beginning was the Word (太初有道)
01-10-2011, 04:45 PM   #3
MikeWaters

While Randomized Controlled Trials are subject to less bias than observational studies, they are not immune, by a long shot.

Even if the main analysis doesn't use demographic variables as covariates, there is certainly a variety of statistical methods that can be employed, each giving a different result, and people cherry-pick the one that supports their claims. Not to mention changing the rules at the end: including or excluding certain participants, normalizing the data or not normalizing it, etc.
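A toy illustration of that flexibility (fake data, not from any real trial): the same two arms run through four defensible analyses, giving four different p-values to choose from.

[code]
# Same made-up trial data, four defensible analyses, four different
# p-values -- and only one of them has to make it into the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
treat = rng.lognormal(mean=0.1, sigma=0.8, size=60)   # skewed outcome, treated arm
ctrl = rng.lognormal(mean=0.0, sigma=0.8, size=60)    # control arm

analyses = {
    "t-test, raw data": stats.ttest_ind(treat, ctrl).pvalue,
    "t-test, log-transformed": stats.ttest_ind(np.log(treat), np.log(ctrl)).pvalue,
    "Mann-Whitney": stats.mannwhitneyu(treat, ctrl).pvalue,
    "t-test, 'outliers' dropped": stats.ttest_ind(
        treat[treat < np.percentile(treat, 95)],
        ctrl[ctrl < np.percentile(ctrl, 95)],
    ).pvalue,
}
for name, p in analyses.items():
    print(f"{name:28s} p = {p:.3f}")
[/code]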

Medical scientists are generally woeful at quantitative methods, because they are generally not trained in them. They usually rely on biostatisticians, and they exert their influence on those biostatisticians: "Is there a different way we can run it?"

Data collection can also be very problematic--when some underling doesn't follow procedures and makes up data, or leaves a huge gap that someone then has to decide what to do with. "Patch it up."

And no, there really isn't any incentive to trash someone else's work, other than promoting your own if it happens to differ. I can't apply for a grant to double-check another investigator's work. The peer review process is problematic as well: the manuscripts get sent to people who often lack methodological expertise.

01-12-2011, 02:51 AM   #4
MikeWaters

This article hits many points that I was trying to explain:

http://www.newyorker.com/reporting/2...fa_fact_lehrer
01-12-2011, 02:39 PM   #5
ChinoCoug

"cosmic habituation" That is one hokey explanation for why your results diminish when you attempt replication. But then again, this is the New Yorker. What do you expect from the outlet that claimed W. used the presidency as a stepping stone to baseball comissioner?
__________________
In the beginning was the Word (太初有道)
01-12-2011, 05:26 PM   #6
MikeWaters

I don't buy the cosmic stuff; I'd point instead to things like publication bias, nonrepresentative populations, and a system that rewards outliers.

Publish or perish. Leads to a lot of garbage.
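A made-up illustration of how publication bias alone produces a "decline effect": run lots of small, underpowered studies of a modest real effect, let only the ones that hit p < 0.05 get written up, and the published effect sizes overshoot the truth--so later replications look like the effect is fading.

[code]
# Toy decline-effect simulation: a modest true effect, lots of small
# studies, and a journal that only prints positive p < 0.05 results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_effect, n_per_arm, n_studies = 0.2, 30, 2000
published = []

for _ in range(n_studies):
    treat = rng.normal(true_effect, 1, n_per_arm)
    ctrl = rng.normal(0, 1, n_per_arm)
    res = stats.ttest_ind(treat, ctrl)
    if res.pvalue < 0.05 and res.statistic > 0:   # only "wins" get written up
        published.append(treat.mean() - ctrl.mean())

print("true effect:", true_effect)
print("average published effect:", round(float(np.mean(published)), 2))
print("share of studies published:", round(len(published) / n_studies, 2))
[/code]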