A Massachusetts anesthesiologist has been accused of faking data for a dozen years in 21 published studies that suggested after-surgery benefits from painkillers including Vioxx and Celebrex.
Dr. Scott Reuben, who is on leave from Springfield's Baystate Medical Center, studied the use of several drugs to relieve pain and speed recovery after surgery.
The hospital said a routine review in May found that some of Reuben's research was not approved by an internal hospital review board. Further investigation found 21 papers published in anesthesiology journals between 1996 and 2008 in which Reuben made up some or all data. Hospital officials said Reuben did not admit to the fabrications. The doctor couldn't be reached for comment.
"Dr. Reuben deeply regrets that this happened," said his attorney, Ingrid Martin. "Dr. Reuben cooperated fully with the peer review committee. There were extenuating circumstances that the committee fairly and justly considered."
The hospital asked the journals to retract the studies, some of which reported favorable results from painkillers including Pfizer Inc.'s Bextra, Celebrex and Lyrica and Merck & Co. Inc.'s Vioxx. His studies also claimed Wyeth's antidepressant Effexor could be used as a painkiller.
Vioxx and Bextra, among a class of painkillers known as Cox-2 inhibitors, were pulled from the market amid mounting evidence they raised the risk of heart attack, stroke and death. Celebrex is the only Cox-2 inhibitor still on the market. Lyrica is a treatment for fibromyalgia.
Pfizer gave Reuben five research grants between 2002 and 2007. He also was a member of the company's speakers bureau, giving talks about Pfizer drugs to colleagues.
Pfizer said in a statement it was "not involved in the conduct of any of these independent studies or in the interpretation or publication of the study results."
The investigation was first reported by the trade publication Anesthesiology News.
The journal Anesthesia & Analgesia retracted 10 of Reuben's studies last month. The journal Anesthesiology said it retracted three.
"Doctors have been using (his) findings very widely," said Dr. Steven Shafer, editor of Anesthesia & Analgesia. "His findings had a huge impact on the field."
Shafer said researchers would re-examine the literature and may be forced to repeat clinical trials.
A Chicago doctor said Reuben's research had made him feel more comfortable using some of the painkillers in certain patients after surgery but that he would be less likely to do so now.
Dr. Honorio Benzon said the drugs helped relieve pain so that smaller doses of powerful and potentially addictive morphine-related drugs were needed.
Specifically, Celebrex and its relatives have been shown in some studies to interfere with bone healing, but not in Reuben's research. Because of that, Benzon said he, and probably other doctors, too, began using those drugs more often in patients having bone-related operations including spinal surgery.
Benzon said he didn't know if any of his patients treated with those drugs developed bone-healing problems, but that he will be very reluctant to use these drugs in bone surgery patients "until some better studies" are done.
Benzon, the chief of pain medicine at Chicago's Northwestern Memorial Hospital, called the revelations about Reuben's research "very disappointing. We're talking about 12 to 13 years that this has been going on with over 20 publications."
In an editorial in Anesthesiology's April edition, editor-in-chief Dr. James Eisenach calls for new research "re-examining the questions that seemed to be answered by Reuben."
source: http://www.newser.com/article/d96s2kvo0/mass-doctor-accused-of-fabricating-data-in-21-painkiller-studies-over-dozen-years.html
Tuesday, March 31, 2009
MIT professor sacked for fabricating data
A high-flying researcher has been fired from the prestigious Massachusetts Institute of Technology in Cambridge, Massachusetts, for fabricating data. A New Scientist investigation can, however, reveal that serious doubts are also being expressed over the accuracy of data published by the same researcher much earlier in his career.
Luk Van Parijs, 35, was an associate professor of biology at MIT. On Wednesday, he was sacked by the institute after admitting to fabricating and falsifying research data in a published scientific paper and several manuscripts and grant applications.
MIT will not confirm which paper contains the faked data, but over the past 8 years, Van Parijs has co-authored 40 research papers, and was considered a rising star in the fields of immunology and RNA interference, a technology with promise for finding new genes and treating genetic disease. Van Parijs has been on administrative leave from MIT since August 2004 while the university conducted its investigation prompted by allegations made by members of his lab. MIT has made it clear that none of his co-authors or members of his research group at MIT were involved in the misconduct.
Prior to MIT's announcement, an investigation by New Scientist uncovered uncanny similarities between supposedly different results presented in at least three highly regarded immunology papers authored by Van Parijs in 1997, 1998 and 1999.
Five experts were asked to view results from some of these papers, which cast light on the detailed signals that prompt cells of the immune system to commit suicide. While they emphasise the need to see the underlying raw data, they say they have sufficient doubts about the authenticity of specific figures within the papers to merit an independent investigation.
Cell suicide
Two of the three papers, published in 1997 and 1998, report on Van Parijs' graduate work in the lab of Abul Abbas at Harvard Medical School in Boston. Of particular concern are results in a paper published in 1998 in the journal Immunity (vol 8, p 265), of which Van Parijs was the lead author. The paper studies two ways that cell suicide is controlled - the proteins Fas and FasL help trigger the process, while the protein Bcl-2 helps suppress it. The work was considered elegant and helped to establish Van Parijs as an expert on immune cell death.
Figure 1 of the paper contains 8 graphs made by flow cytometry, a common lab technique for measuring the level of different proteins on the surface of cells. Dots on the graphs represent cells and their positions show how much of each kind of protein the cells have.
Three graphs in the top row of the figure look very similar. Yet they are captioned as if they show data from three different mice - a basic control mouse used for the experiment, a mouse missing the protein FasL, and a mouse missing the protein Fas. In the bottom row, three graphs again look very similar. Yet they are again captioned as if they show data from three different mice.
"This is extremely distressing to me," says one immunologist, who asked to remain anonymous. He says it looks to him as if the three differently captioned graphs were, contrary to the information in the caption, made from a single graph from data from a single mouse.
Noisy dots
Michael Borowitz, at the Johns Hopkins University School of Medicine in Baltimore, Maryland, says: "The shapes of the major clusters are often similar but in any system there is noise, and those noisy dots are in the same place too. That's hard to explain by biology. It is very difficult for me to believe that these were independent experiments." Borowitz is an expert in interpreting flow cytometry graphs, which he regularly uses to identify abnormal populations of cells in the blood and bone marrow of leukaemia patients.
Three other experts contacted, including Paul Robinson, a professor of immunopharmacology and biomedical engineering and Director of the Flow Cytometry Labs at Purdue University in West Lafayette, say that the graphs appear concerningly alike.
Robinson emphasises that it is impossible to prove that the data underlying the supposedly different graphs is the same without seeing the raw data from which they were generated. But he and four members of his lab have conducted a simple experiment to gauge the likelihood that data taken from different mice would produce such similar graphs.
They ran a computer model that produced graphs from two sets of 93,000 cells and reduced the resolution to below that of Van Parijs' published figures. Their conclusion, detailed in a brief report seen by New Scientist, is that even graphs produced from samples taken from the same animal are unlikely to look the same, and that outlying data points are unlikely to appear in the same positions by random chance.
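Robinson's report is not public, and the article does not describe his model in detail, but the kind of check it describes, that outlying dots from genuinely independent samples rarely land in the same positions even at low resolution, can be sketched with a short Monte Carlo simulation. Everything in the sketch below (the normal distribution, the 64x64 grid, the 4-sigma outlier cutoff, the function names) is an illustrative assumption, not the actual Purdue model:

```python
import numpy as np

rng = np.random.default_rng(0)

def outlier_bins(cells, grid=64, threshold=4.0):
    """Bin a 2D sample onto a low-resolution grid (mimicking a printed
    flow-cytometry dot plot) and return the grid positions occupied by
    points lying far from the centre of the distribution."""
    z = np.abs((cells - cells.mean(axis=0)) / cells.std(axis=0))
    outliers = cells[z.max(axis=1) > threshold]
    lo, hi = cells.min(axis=0), cells.max(axis=0)
    idx = np.floor((outliers - lo) / (hi - lo + 1e-9) * grid).astype(int)
    return {tuple(b) for b in idx}

def outlier_overlap(n_cells=93_000, trials=50):
    """Draw two independent samples from the same distribution and
    measure what fraction of one sample's outlying dots reappear in
    exactly the same grid positions in the other sample."""
    matched, total = 0, 0
    for _ in range(trials):
        a = rng.normal(size=(n_cells, 2))
        b = rng.normal(size=(n_cells, 2))
        sa, sb = outlier_bins(a), outlier_bins(b)
        matched += len(sa & sb)
        total += max(len(sa), 1)
    return matched / total

print(outlier_overlap())
```

In runs of this sketch, the overlap fraction it prints is typically small: even when both samples come from the same underlying population, the sparse "noisy" dots at the edges of the plot rarely coincide, which is why identical noise in supposedly independent experiments is suspicious.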
However, PJ Utz, an immunologist consulted at Stanford University School of Medicine, cautions that in this case the figure that contains similar graphs is not crucial to the argument being made in the research paper. "It does not affect the scientific conclusions," he says. Utz is not one of the five experts previously mentioned. However, three other experts strongly disagreed. They emphasised that the authenticity of all data is crucial to the integrity and progress of science.
"Convincingly similar"
Further examples exist in a paper Van Parijs published in 1997 in the Journal of Experimental Medicine (vol 186, p 1119). In Figure 4, the lower left hand corners of at least three graphs contain almost identical clusters of data points. Yet they are captioned as if they come from three different mice. "The ones in the bottom left hand corner are the most convincingly similar," says Borowitz.
There are also additional concerns about data in a paper published by Van Parijs in 1999 after he left Harvard and became a post-doc in the lab of Nobel-prize winner David Baltimore at the California Institute of Technology (Caltech) in Pasadena. In Immunity (vol 11, p 281), two graphs in Figure 1C look very similar, especially if the graphs are printed out on transparent paper and superimposed. Yet one is captioned as if it comes from mouse cells infected with a human mutant gene, while the other is captioned as if it comes from mouse cells infected with a normal "wild type" human protein.
No aspersions have been cast on any of Van Parijs's co-authors on these three papers. Four of the co-authors who worked on the papers also say they do not believe that the papers formed part of the MIT investigation.
In an email to New Scientist following his sacking, Van Parijs said: "None of the data for the figures you mention have been falsified. I am collaborating fully with the inquiries and I cannot comment further at this time."
MIT will send the final report of its investigation to the US Office of Research Integrity, which will conduct its own confidential review of the matter and make the findings public when that review is complete.
Caltech inquiry
Prompted by New Scientist's queries on apparent duplication of data within the 1999 Immunity paper (vol 11, p 281), Caltech launched a separate inquiry into Van Parijs's work three weeks ago. The inquiry was instigated by Baltimore, who is Caltech's president and the senior and corresponding author on the Immunity paper, and is being run independently by Elliot Meyerowitz, head of the university's biology division.
Two institutes of the National Institutes of Health in Washington DC confirmed that in the past year MIT has returned over $2 million of federal research money granted to Van Parijs.
Following New Scientist's queries, Abbas, who moved in 1999 from Harvard to the University of California in San Francisco, says he is contacting the co-authors of the Immunity paper (vol 8, p 265) and Journal of Experimental Medicine paper (vol 186, p 1119), and the editors of the journals in which they are published, in a bid to determine what course of action is appropriate.
Lynne Herndon, the president and CEO of Cell Press, which publishes the journal Immunity, says in a statement to New Scientist: "We take all scientific misconduct very seriously and we will be looking into this case in detail before determining what actions, if any, may be necessary from the journal."
source: http://www.newscientist.com/article/dn8230-mit-professor-sacked-for-fabricating-data.html?full=true
Monday, March 30, 2009
Summary of Case
Overly Ambitious Researchers
For the past few years, several scientific organizations have been trying to agree on a definition of scientific misconduct. All of the organizations agree that it involves, at minimum, the fabrication of data. Scientific misconduct makes us wonder which published reports are true. The consequences of misconduct can be lengthy and serious, and they tend to set off a chain reaction of further consequences. These cases of fabrication aren't always easy to detect. John Darsee was a Research Fellow in the Cardiac Research Laboratory whose special area of research was the testing of heart drugs on dogs.
In May of 1981, three colleagues observed him labeling data recordings in a way that was not remotely realistic. Darsee admitted to the fabrication and suffered the consequences: his research fellowship was terminated and his faculty position was withdrawn.
In October, Darsee was caught forging data again; this time, the data showed no variation at all and looked "too good." As a result, he lost his researcher position at Harvard and all of his NIH funding.
Next, there was the Bruening case. Bruening fabricated data for an experiment on the effects of psychotropic medication on mentally retarded patients. Dr. Alan Poling, one of the psychologists involved in the investigation, pointed out that Bruening had contributed 34% of all published research on the treatment of mentally retarded people. The validity of that entire body of research is now in question because of Bruening's fabricated data.
Friday, March 27, 2009
Ethics ?'s
- 1. What kinds of reasons are offered for fabricating data?
- Under pressure
- Not enough time
- Knew how the experiment should’ve turned out; needed to support the right answer
- Needed to get an article published
- Needed a good grade; didn’t have time to do it right
- 2. Which, if any, of those reasons are good reasons--i.e., reasons that might justify fabricating data?
- None of these are good enough reasons to make up data. There is no good excuse for making up research that could change the world.
- 3. Who is likely to be harmed by fabricating data? Does actual harm have to occur in order for fabrication to be ethically wrong?
- Everyone is affected by research, especially those the research is meant to help. And no, actual harm does not have to occur; fabrication is ethically wrong in itself.
- 4. What responsibilities does a scientist have for checking on the trustworthiness of the work of other scientists?
- A scientist has to closely monitor other scientists, especially the new ones. They are not as well trained, and therefore need guidance.
- 5. What should a scientist do if he or she has reason to believe that another scientist has fabricated data?
- A scientist should report it to someone, anyone. Forging data for any reason is ethically wrong.
- 6. Why is honesty in scientific research important to the scientific community?
- Honesty is important to the scientific community because scientists bounce ideas off each other. If one idea is false, everyone else’s research could be false, and a whole chain of bad information could start. The community relies on the research and facts of the scientists.