
Gene-Guided Chemotherapy Research Questioned


gpawelski


Gene-Guided Chemotherapy Research Questioned as Three NCI Trials Are Halted

July 27, 2010

Three ongoing cancer trials funded by the National Cancer Institute have been suspended after the validity of the technology being used was called into question by a large group of US scientists.

Developed at Duke University, the technology now under question uses gene signatures to predict responses to chemotherapy. Two of the trials involve patients with non-small cell lung cancer (NCT00545948 and NCT00509366), and the third is in patients with breast cancer (NCT00636441).

The trials were suspended on July 22 and 23.

The move was made after a group of 31 scientists called on the National Cancer Institute to suspend the trials because of concerns over the prediction models that were being used. The models were developed on the basis of research reported by Anil Potti, MD, and Joseph Nevins, PhD, from Duke University, Durham, North Carolina, but the validity of those models has been questioned by peer-reviewed reanalyses of their work, the scientists note.

In a letter dated July 19 and addressed to the new National Cancer Institute director, Harold Varmus, MD, the group of researchers called for the trials to be suspended until a "fully independent review is conducted of both the clinical trials and of the evidence and predictive models being used to make cancer treatment decisions."

At the same time, one of the Duke scientists involved in developing the technology has been suspended from his place of work. Dr. Potti was placed on administrative leave while the university investigates allegations that he falsely claimed to be a Rhodes scholar, according to a report in the New York Times.

In addition, one of the published papers that reported this technology has now come under scrutiny. The Lancet Oncology has issued an "expression of concern" over a paper published in the journal in 2007, which described the validation of gene signatures to predict the response of breast cancer to neoadjuvant chemotherapy (Lancet Oncol. 2007;8:1071-1078).

That research was praised by an independent expert contacted by Medscape Medical News at the time, as it showed for the first time that gene signatures could predict responses to individual chemotherapy regimens.

However, since its publication in 2007, the methodology used to generate the response predictions has been questioned by statisticians from the M.D. Anderson Cancer Center in Houston, Texas, the journal notes.

The Lancet Oncology was contacted by senior author Richard Iggo, PhD, from the Swiss Institute for Experimental Cancer Research in Epalinges, Switzerland, and first author Hervé Bonnefoi, MD, from the Institut Bergonié, University of Bordeaux, France. They "expressed grave concerns about the validity of their report in light of evolving events," and said they had repeatedly tried to contact their coauthors at Duke University (including Dr. Potti) without success.

The journal notes that the 15 European coauthors of the paper concur with the "expression of concern" notice that the journal has posted online and said that the 4 coauthors from Duke University have been contacted separately.

Controversy Surrounding Dr. Anil Potti and Duke University

The controversy surrounding Dr. Potti and his team's research at Duke University is outlined in exhaustive detail in a report published in the July 16 issue of The Cancer Letter. The publication found that Dr. Potti had falsely claimed to be a Rhodes scholar in multiple grant applications, and notes that the claim was also featured in a Duke newsletter in January 2007. However, this credential "disappeared" from Dr. Potti's biography later in 2007. The publication also found mentions of 2 other awards that it was unable to verify.

In addition to questions about Dr. Potti's credentials, The Cancer Letter notes that research coming out of his group has been "marred by corrections and even corrections of corrections," and points out that "errors in genomics research could have direct implications for patients."

Dr. Potti is considered to be a pioneer of personalized medicine because of his team's work on using gene signatures to predict responses to chemotherapy, and he has been featured in Duke University commercials aimed at the general public, the publication notes.

However, this work has been questioned by other scientists, it points out.

Two biostatisticians at the M.D. Anderson Cancer Center, Keith Baggerly, PhD, and Kevin Coombes, PhD, attempted to verify this work but found a series of errors, including mislabeling and mismatching of gene probe identifiers. They published their findings in November 2009 in the Annals of Applied Statistics (2009;3:1309-1334) and concluded: "Unfortunately, poor documentation can shift from an inconvenience to an active danger when it obscures not just methods but errors."
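To see how damaging a probe-identifier error can be, consider a toy sketch (purely illustrative: the probe names, signature weights, and expression values below are invented, and this is not the Duke pipeline). Shifting the expression values by a single row relative to the probe labels is enough to turn a strongly "sensitive" prediction into a "resistant" one:

    # Purely illustrative: a 3-probe "signature" applied to one patient's profile.
    # All probe IDs, weights, and expression values are hypothetical.
    import numpy as np

    probes = [f"probe_{i}" for i in range(100)]                      # hypothetical probe IDs
    signature = {"probe_3": 1.0, "probe_42": -1.0, "probe_77": 1.0}  # invented weights

    values = np.zeros(100)                              # invented expression values
    values[3], values[42], values[77] = 3.0, -3.0, 3.0  # pattern that should read "sensitive"

    def predict(probe_ids, expression, sig):
        # Weighted sum over the signature probes; positive score => "sensitive".
        lookup = dict(zip(probe_ids, expression))
        score = sum(w * lookup[p] for p, w in sig.items())
        return "sensitive" if score > 0 else "resistant"

    print(predict(probes, values, signature))              # correct alignment -> "sensitive"
    print(predict(probes, np.roll(values, 1), signature))  # off-by-one shift  -> "resistant"

Nothing in the output flags the misalignment; the prediction simply comes out different, which is exactly how poor documentation can conceal an active danger.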

The biostatisticians also suggested that the errors they found in the technology, which was being used in ongoing clinical trials to allocate patients to treatment groups, may be putting patients at risk.

The Cancer Letter reports that as a result of that publication, Duke University temporarily suspended 3 clinical trials that were using gene signatures to assign patients to treatment; these are the same 3 trials that were suspended again just a few days ago.

However, even though Duke suspended those trials in October 2009, they were restarted in January 2010 after an internal investigation by Duke's Institutional Review Board confirmed the research and concluded that this approach was "viable and likely to succeed."

When contacted by The Cancer Letter and shown documents obtained under the Freedom of Information Act, the 2 statisticians from M.D. Anderson who had questioned the technology said they were not satisfied by the internal review. "Duke's statement implies that other members of the scientific community should be able to replicate the reported results with the data available," they told the publication. "Having tried, we can confidently state that this is not yet true."

The letter to the National Cancer Institute from the group of 31 scientists, which comprises many professors of statistics and biostatistics from prestigious US universities, including Johns Hopkins, Harvard, and Princeton, refers both to the Annals of Applied Statistics paper and The Cancer Letter reports.

"It is absolutely premature to use these prediction models to influence the therapeutic options open to cancer patients," the letter says, as independent experts have been unable to substantiate the researchers' claims using the researchers' own data. If the data and analysis can be validated, then it would be appropriate to reinitiate the trials, but until then, suspension of the ongoing trials is necessary, "given the potential of patients being assigned to improper treatment arms...[and] the associated potential risk posed to these patients."

Authors and Disclosures

Journalist

Zosia Chustecka

Zosia Chustecka is news editor for Medscape Hematology-Oncology and was previously news editor of jointandbone.org, a Web site acquired by WebMD. A veteran medical journalist based in London, UK, she has won a prize from the British Medical Journalists Association and is a pharmacology graduate. She has written for a wide variety of publications aimed at the medical and related health professions. She can be contacted at ZChustecka@webmd.net.

Zosia Chustecka has disclosed no relevant financial relationships.


The hope is that any patient with cancer could have their tumor biopsied and profiled. The profile would then be displayed as a unique genetic signature, which would in turn predict which therapy is most likely to work. However...

Gene-Expression Signatures in Lung Cancer: Not Ready Yet

Roxanne Nelson - Medscape Medical News

March 17, 2010 — The identification of prognostic markers could assist in the clinical management of non-small cell lung cancer (NSCLC). Although molecular profiling of tumors has led to the identification of gene-expression patterns, a new review has found "little evidence" that any of the signatures are ready for use in the clinical setting.

In addition, the researchers reported that they found "serious problems in the design and analysis of many of the studies" that were included in their review, published online March 16 in the Journal of the National Cancer Institute.

Even in its earliest stages, lung cancer has a very high recurrence rate and mortality, the authors note. Current clinical staging techniques have limitations in terms of predicting recurrence and guiding treatment, but the ability to identify new molecular targets using techniques such as microarray-based gene-expression profiling has the potential to improve patient care.

Inconclusive Results Thus Far

Studies have reported mixed results. As previously reported by Medscape Oncology, one recent review article found that gene-expression profiling failed to outperform standard histologic examinations. However, another study reported that a "5-gene signature" was closely associated with relapse-free and overall survival among patients with NSCLC.

More recently, at the 2010 Joint Conference on Molecular Origins of Lung Cancer, researchers reported that the mutated epidermal growth factor receptor (EGFR) gene was a validated therapeutic target in NSCLC, and suggested that an EGFR gene signature might provide "predictive value and biological insights" into EGFR inhibitor responses in lung adenocarcinomas.

For the current review, Jyothi Subramanian, PhD, and Richard Simon, DSc, from the Biometric Research Branch at the National Cancer Institute in Bethesda, Maryland, conducted a literature search of studies published from 2002 to 2009 to critically evaluate studies that reported prognostic gene-expression signatures in NSCLC.

Little Evidence of Gene Signatures

The authors selected 16 studies as being most relevant, and closely assessed them for a number of criteria, including the appropriateness of the study design, the statistical validation of the prognostic signature on independent datasets, the presentation of results in an unbiased manner, and the demonstration of medical utility for the new signature beyond that obtained using existing treatment guidelines.

They noted that one of the "striking findings" is that none of the studies succeeded in showing that gene-expression signatures had better predictive power "over and above known risk factors." In fact, they note, the majority of the risk factors outlined by the National Comprehensive Cancer Network (NCCN) guideline were not even considered by most of the studies they reviewed.

For example, the extent of residual tumor after resection is the most important variable, after stage, when making decisions about adjuvant chemotherapy, according to the NCCN guideline. But only 7 of the studies stated that completeness of resection was a criterion for patient selection.

Drs. Subramanian and Simon point out that "the most important medical question that needs to be answered by a new prognostic signature in NSCLC is whether it can identify the subset of stage IA patients who might benefit from adjuvant chemotherapy." But only 2 studies in their survey included validation results for this subpopulation.

The majority of papers presented overall validation results for stage I patients, and some of the signatures were successful in identifying high-risk stage I patients. However, whether the signature was better at predicting overall survival than tumor size or other standard risk factors was not adequately addressed and was unclear from most of these studies, the authors report. Only 1 study, they note, reported a marginal improvement in the predictive accuracy of its gene-expression signature, compared with tumor size, for stage I patients.

Another important medical need is the ability to identify the subset of stage IB and stage II patients who are at a low risk for disease recurrence without chemotherapy, the authors explain. But only one of the studies presented separate validation results for this subgroup of patients; a second study was the only one that reported the statistical significance of the prognostic signature for validation in stage II samples. The lack of predictiveness for stage II patients could be the result of the small number of such patients in the study samples, they note.

"Most of the studies presented validation results on data that were not used for developing the predictive signatures," they write; in addition, "none of the 16 studies reviewed adequately addressed the question of the predictive power that could be attained by using easily measurable clinicopathological factors for stage I samples."

On the basis of their observations and analyses, the authors suggest a set of guidelines to aid the design, analysis, and evaluation of prognostic gene-expression studies, with a focus on NSCLC.

"Clinical validity of a prognostic signature implies demonstrating that the test result correlates with clinical outcome," they write, whereas "medical utility of a prognostic signature means that the test result is actionable, leading to patient benefit."

Therefore, the ultimate test of clinical validity for a prognostic signature is how well it performs in a prospective clinical trial. Several such trials are currently underway, including the CALGB 30506 trial that was recently initiated to clinically test the lung metagene prognostic signature in lung cancer, the authors point out.

"Regardless of clinical validation, unless a new prognostic signature provides additional risk stratification within the stage and risk-factor groupings on which current treatment guidelines are based, its broad acceptance in medical practice is unlikely," the authors conclude.

J Natl Cancer Inst. Published online March 16, 2010


Gene expression (signature) assays are panels of markers that can predict the likelihood of cancer recurrence in various populations. A functional profiling assay is a test of drug activity against a tumor. Pharmacogenomic testing identifies patients who are likely to experience the most toxicity.

By testing a patient's gene expression markers, oncologists can distinguish those patients unlikely to benefit from adjuvant chemotherapy from those who would. If the patient needs adjuvant chemotherapy, then by testing the patient's tumor cells and the patient's toxicity tolerance, the oncologist can select drugs that have a higher probability of being effective for that individual patient rather than selecting drugs based on the average responses of many patients in large clinical trials.

Ideally, a cancer patient would like to know whether they would benefit from adjuvant chemotherapy and, if so, which active drugs have the highest probability of working while being relatively non-toxic.

Whether a patient would benefit from adjuvant therapy depends on two things: (1) whether the tumor is "destined" to come back in the first place and (2) whether the tumor is "sensitive" to drugs which might be used to keep it from coming back.

The gene expression (signature) marker assays actually could be calibrated to provide information both about the possibility of recurrence and also chemosensitivity. The problem is dissecting one from the other. Studies to date have just looked at whether people had a recurrence.

You can identify gene expression patterns (via assays) which correlate with this. But it can be hard and even impossible to tell what exactly you are measuring: is it intrinsic aggressiveness of the tumor? sensitivity to adriamycin? sensitivity to cyclophosphamide? sensitivity to taxol? sensitivity to tamoxifen? You find a gene expression panel which correlates with something, but picking apart the pieces is hard.

You can begin to do this if you combine gene expression studies (molecular profiling) with cell culture studies (functional tumor cell profiling). Use the functional profiling as the gold standard to define the difference between sensitivity and resistance. Then see which pattern correlates with which for individual tumors and individual drugs.
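As a rough sketch of what that correlation step might look like (all data, gene indices, and drug names here are invented for illustration, not any laboratory's actual method), one can correlate each gene's expression across tumors with the assay-measured sensitivity for each drug, then inspect the top-correlated genes drug by drug:

    # Invented data: expression for 200 genes across 30 tumors, plus a
    # functional-profiling readout (e.g., fraction of cells killed ex vivo)
    # per tumor for two hypothetical drugs.
    import numpy as np

    rng = np.random.default_rng(42)
    n_tumors, n_genes = 30, 200
    expression = rng.normal(size=(n_tumors, n_genes))

    # Pretend the assay's gold-standard sensitivity to each drug is driven
    # by one (different) gene, plus noise -- the "truth" we hope to recover.
    drivers = {"drugA": 10, "drugB": 55}
    sensitivity = {drug: expression[:, g] + 0.5 * rng.normal(size=n_tumors)
                   for drug, g in drivers.items()}

    for drug, y in sensitivity.items():
        # Correlate every gene's expression with the assay result for this drug.
        r = np.array([np.corrcoef(expression[:, g], y)[0, 1] for g in range(n_genes)])
        top = np.argsort(-np.abs(r))[:3]
        print(drug, "top correlated genes:", [(int(g), round(float(r[g]), 2)) for g in top])

With the functional assay as the anchor, the gene most strongly correlated with each drug's kill rate surfaces separately for each drug, which is precisely the dissection that recurrence-only studies cannot perform.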

When the decision is made to treat a patient with chemotherapy, most patients are treated with a combination of drugs. The "functional profiling" method differs from existing DNA and RNA tests in that it assesses a drug's activity upon the combined effect of all cellular processes, using several metabolic and morphologic endpoints. Other tests, such as those that identify DNA or RNA sequences or the expression of individual genes, often examine only one component of a much larger, interactive process.

No gene-based test can discriminate differing levels of anti-tumor activity occurring among different therapy drugs. Nor can available gene-based tests identify situations in which it is advantageous to combine the new "targeted" drugs with other types of cancer drugs. So far, only cell-based functional profiling has demonstrated this critical ability.

Not only is this an important predictive test, it is also a unique tool that can help to identify newer and better drugs, evaluate promising drug combinations, and serve as a "gold standard" correlative model with which to develop new DNA, RNA, and protein-based tests that better predict for drug activity.

Genomic testing is not the answer without cell "function" analysis. Functional tumor cell profiling has its own very sophisticated program to discover gene expression microarrays that predict responsiveness to drug therapy. The way to identify informative gene expression patterns is to have a gold standard, and cell-based functional profiling assays are by far the most powerful, efficient, and useful gold standard available. This approach grasps the potential value of the assays today to individualize therapy.

And then you come to the 1,000-pound gorilla of a question: What effect will the different individual drugs have in combination in different, individual tumors? This is where cell-based functional profiling assays will always be able to provide uniquely valuable information. But it's not one versus the other. The best thing is to combine these different tests in ways that make the most sense. One month's worth of Herceptin + Avastin costs $8,000. That's without any docetaxel, blood cell growth factors, or anti-emetics. If nothing else, we can't afford too much trial-and-error treatment.

There are hundreds of different therapeutic drug regimens, any one of which, alone or in combination, can help cancer patients. The system is overloaded with drugs and underloaded with the wisdom and expertise for using them. We have produced an entire generation of investigators in clinical oncology who believe that the only valid form of clinical research is to perform "well-designed," prospective, randomized trials in which patients are randomized to receive one empiric drug combination versus another empiric drug combination.

The problem is not with using the prospective, randomized trial as a research instrument. The problem comes from applying this time- and resource-consuming instrument to address hypotheses of trivial importance (do most cancers prefer Coke or Pepsi?). The failure of 30 years' worth of clinical trials research into "one-size-fits-all" therapy will eventually force a consideration of new approaches. All the more reason to "test the tumor" first - properly.


  • 1 month later...

Since the new millennium there has been increasing acceptance of the concept that cancer is a very heterogeneous disease and that it would be a good thing to try to individualize treatment. Oncologists are increasingly open to the concept of personalized therapy.

Driving this change has been the success of a few drugs that hit specific molecular targets within cancer cells: Gleevec in a relatively rare disease called chronic myelogenous leukemia (CML); Herceptin, which targets a receptor overexpressed in some breast cancers; and Iressa and Tarceva, which help some lung cancer patients whose tumors carry a particular mutation.

It has become routine to test breast cancer patients for the HER2 alteration conferring sensitivity to Herceptin. It is becoming routine to test lung cancer patients for the EGFR mutation conferring sensitivity to Iressa and Tarceva. When a tumor has certain KRAS mutations, the partially effective colon cancer drug Erbitux is very unlikely to work.

So we have HER2 testing to predict Herceptin activity in breast cancer, EGFR mutation testing to predict for Iressa and Tarceva (two flavors of the same type of drug) in lung cancer, and KRAS mutation testing to predict for Erbitux in colon cancer. Of course, this leaves out the three dozen other drugs and the myriad drug combinations, which may often be even more effective in each of these diseases, and it leaves out virtually all of the other forms of cancer.

Beyond this, there have been attempts to develop molecular-based tests that examine a broader range of chemotherapeutic drugs, using new technologies that measure the expression (biological activity) of hundreds to thousands of genes as part of a single test. There are two main technologies involved: RT-PCR (reverse transcription polymerase chain reaction) and DNA microarray.

Dr. Larry Weisenthal, one of the pioneers of functional profiling, has described the use of RT-PCR and DNA microarrays in personalized oncology as analogous to the introduction of the personal computer: dazzling hardware in search of a killer application. It was wonderful technology, and the geekiest of people bought the machines and played with them, but they really didn't start to do anything for a mass market until the introduction of the first killer application, a spreadsheet program called VisiCalc.

So what research scientists in universities and cancer centers have been doing for the past ten years is trying to figure out a way to use this dazzling technology to look for patterns of gene expression that correlate with and predict the activity of anticancer drugs. Hundreds of millions of dollars have been spent on this effort. Objectively speaking, it's like the emperor's new clothes: so far, a qualified failure.

Academics are beside themselves over the promise of the new technology. It seems so cool that it simply must be good for something. How about identifying drugs that will work in individual patients? There it has been a major bust by whatever standard you choose to apply. Objectively, if you compare the peer-reviewed medical literature supporting the use of functional profiling for personalizing drug selection with the corresponding literature supporting molecular profiling, the literature supporting functional profiling wins (big time!).

The scientist who reported the best results with molecular profiling (Dr. Anil Potti of Duke University) has recently been accused of fraud and his clinical trials have been suspended.


"The simple answer is that cancer isn’t simple," according to Dr. Robert Nagourney, one of the pioneers of functional profiling analysis.

Cancer dynamics are not linear. Cancer biology does not conform to the dictates of molecular biologists. Once again, we are forced to confront the realization that genotype does not equal phenotype.

The first chink in the armor of this argument came when scientific reviewers issued an “expression of concern” regarding the validity of the method. Further analyses revealed evidence that the technologies for the prediction of response in individual patients could not be reproduced. As the reviewers stated, “The scientific community should be able to replicate the results with the reported data available.”

They continued, “Having tried, we can confidently state that this is not yet true.” A group of 31 scientists then wrote to the NCI, concluding, “It is absolutely premature to use these prediction models to influence the therapeutic options open to cancer patients.”

While much attention has been given to the genomics field, the NCI has determined that - at this time - treatment selection results cannot be duplicated and the genomic methodology is not ready for clinical application.

In a nutshell, cancer cells utilize cross talk and redundancy to circumvent therapies. They back up, zig-zag and move in reverse, regardless of what the sign posts say. Using genomic signatures to predict response is like saying that Dr. Seuss and Shakespeare are truly the same because they use the same words. The building blocks of human biology are carefully constructed into the complexities that we recognize as human beings. However appealing gene profiling may appear to those engaged in this field (such as Response Genetics, Caris, the group from Duke and many others), it will be years, perhaps decades, before these profiles can approximate the vagaries of human cancer.

Functional profiling analyses, which measure biological signals rather than DNA indicators, will continue to provide clinically validated information and play an important role in cancer drug selection. The data that support functional profiling analyses are demonstrably greater and more compelling than any data currently generated from DNA analyses. Functional profiling remains the most validated technique for selecting effective therapies for cancer patients.


  • 10 months later...

In the May 27, 2010, issue of the New England Journal of Medicine, the leading lights of genomics offer an expectations-lowering retrospective on the genomics revolution's impact on health care. It is the first in a series of articles on genomic medicine in NEJM, occasioned by the tenth anniversary of the sequencing of the human genome.

The scientist in charge of that effort, Francis Collins, now heads the National Institutes of Health. He is one of three co-authors of a new review that notes:

Most SNPs (single nucleotide polymorphisms, or single-letter variations in the DNA sequence) associated with common diseases explain a small proportion of the observed contribution of heredity to the risk of disease - in many cases less than 5 to 10% - substantially limiting the use of these markers to predict risk. It thus comes as no surprise that as yet there are no evidence-based guidelines that recommend the use of SNP markers in assessing the risk of common diseases in clinical care.

http://www.nejm.org/doi/full/10.1056/NE ... =d1a2b3572

In an accompanying commentary, Harold Varmus, a former NIH director who is slated to become the new head of the National Cancer Institute, also seeks to lower expectations about the impact of genomics on health care. He specifically takes aim at mechanistic interpretations of "personalized" medicine, a term often used to refer to the use of an individual's genomic analysis to drive medication strategies.

The term "personalized medicine" has become nearly ubiquitous as a means of conveying how molecular tests can subdivide diagnostic categories and refine therapeutic choices. This phrase may also prove to be strategically successful - by preemptively warding off claims that an overreliance on genotypes in medical practice is deterministic and thus "impersonal," or that genetic approaches undermine more traditional approaches to "personalized" care that are based on knowledge of a patient's behavior, diet, social circumstances, and environment. Of course, both genetic and nongenetic information is important; the more we know about a patient - genes and physiology, character and context - the better we will be as physicians. By measuring the distance to a fuller integration of genomic knowledge into patient care, this new series of articles may encourage a more nuanced calibration of what it means to "personalize" medicine.

Most of the first article and the accompanying commentary are devoted to outlining the promise of genomics, of course. We'd expect nothing less from the scientists-turned-government-officials who are in charge of awarding billions of dollars annually to researchers pursuing population-based gene-disease correlation studies from their desktop computers. But the series marks an important milestone in its admission that genes in the vast majority of cases are not destiny and, with the exception of a few cancers that have been well studied (like breast cancer), provide limited guidance to care.

http://www.nejm.org/doi/full/10.1056/NE ... ?query=TOC


About a decade ago, scientists figured out how to transform genetic instructions into an electronic format. Gene profiling using a "microarray" - a chip of glass arrayed with thousands of gene fragments - was expected to revolutionize medicine by decoding the basis of disease.

"All human illness can be studied by microarray analysis, and the ultimate goal of this work is to develop effective treatments or cures for every human disease by 2050," wrote Mark Schena, an inventor of the technology.

But skepticism had set in. In an article in the Lancet, researchers reanalyzed the seven largest microarray studies on cancer prognosis. In five of the seven, the technology performed no better than flipping a coin; the other two barely beat horoscopes, according to John P. Ioannidis, a clinical epidemiologist with Tufts University School of Medicine, writing in an accompanying editorial.

To understand why, consider the fable about six blind men and an elephant. Each man feels a different part of the animal. One man argues that the creature is a snake, another a spear, another a wall, and so on. A little girl who can see the elephant says, "Each of you is right, but you are all wrong."

Depending on how researchers "feel" their molecular data - using computer analysis to massage, stroke and ignore certain parts - they may discover right answers that are all wrong.

David Ransohoff, a University of North Carolina epidemiologist, says results cannot be trusted unless they can be produced again and again: "Figuring out whether a result is real and not simply caused by chance is determined in part by validation - by reproducing the result in an independent set of samples." In other words, go feel another elephant.
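Ransohoff's point can be demonstrated with a small simulation (a sketch using deliberately random data, not any real study). With thousands of genes and only a couple dozen samples, a "signature" selected on the training samples looks impressive on those same samples, while an independent validation set reveals roughly coin-flip performance:

    # Deliberately random data: 5,000 "genes," 20 training and 20 validation
    # samples, labels assigned by coin flip -- there is nothing real to find.
    import numpy as np

    rng = np.random.default_rng(1)
    n, n_genes = 20, 5000
    X_train = rng.normal(size=(n, n_genes))
    y_train = rng.integers(0, 2, size=n)
    X_test = rng.normal(size=(n, n_genes))
    y_test = rng.integers(0, 2, size=n)

    # "Signature": the 10 genes most correlated with the training labels.
    r = np.array([np.corrcoef(X_train[:, g], y_train)[0, 1] for g in range(n_genes)])
    sig = np.argsort(-np.abs(r))[:10]
    weights = np.sign(r[sig])
    cutoff = np.median(X_train[:, sig] @ weights)

    def predict(X):
        # Sign-weighted score over the signature genes, thresholded at the
        # median training score.
        return (X[:, sig] @ weights > cutoff).astype(int)

    print("apparent accuracy on training data:", (predict(X_train) == y_train).mean())
    print("accuracy on independent samples:  ", (predict(X_test) == y_test).mean())

The training accuracy, well above chance, is an artifact of selecting the signature and evaluating it on the same samples; only the independent set, in effect, feels another elephant.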

But even that is not enough, Ransohoff and other experts say. The ultimate validation requires clinical studies in actual patients. A molecular diagnostic method must be as reliable as traditional tools such as imaging tests and surgical biopsy.

A related approach, proteomic pattern analysis, hunts for disease biomarkers among the proteins in a blood sample using a mass spectrometer. This analysis is tremendously manipulative, indirect, and often ambiguous. One problem is that trace proteins - the potential biomarkers - may be swamped by other proteins, despite techniques to concentrate the scarcest ones on the special chip that goes into the mass spectrometer.

Another problem is that the spectrometer's measurements - made after vaporizing the proteins and giving them a positive charge - are least reliable in the low range where biomarkers are presumed to exist.

Finally, the spectrometry results can be thrown off by countless variables, including machine miscalibration and handling of blood samples. All of which makes results difficult to reproduce, even in the same lab using the same blood samples.

Source: Lancet, February 7, 2002


By Defending Potti, Duke Officials Become Target Of Charges Of Institutional Failure. Duke Officials Decline To Provide Details Of Probe. Biostatisticians Write To Varmus Asking NCI To Investigate. The Lancet Oncology Issues “Expression Of Concern” Over Paper. Duke Insiders Allege Intimidation By Administration. Also in this issue: ODAC Votes To Strip Breast Cancer Indication From Avastin.

http://www.cancerletter.com/downloads/20100803_10

Ninety percent of biomarker studies are total crap. And this is so even if the logistical, study-conduct issues are carried out flawlessly. Sloppiness a la Potti/Nevins leads to 100 percent crap. But it's not just Potti/Nevins. The whole concept of using molecular signatures of any kind to do anything beyond the most straightforward of cases is so flawed that everyone should have seen the problems at the beginning. A beautiful biological technology is no different than a beautiful computer technology: it's not worth much without some very good apps, and personalized molecular medicine is still waiting for its first killer app.

“100 Percent Crap”

Donald Berry, chairman of the Department of Biostatistics and head of the Division of Quantitative Sciences at MD Anderson, said the Duke scandal [i.e. Potti] puts the entire field of genomics at risk.

Berry then said the following:

“About 10 years ago, I read in Newsweek that the high-paying, glamorous job of the new millennium was bioinformatics,” Berry, one of the statisticians who signed the letter to Varmus, said in an email. “We were going to cure diseases in the near time frame. (Francis Collins was at the forefront of pushing this attitude.) My reaction was that we didn’t know how to handle one gene (and we still don’t), never mind 20,000 genes.

“It was clear then, and it is clear now, that false-positive leads pop up all over the place and we have to keep banging them back down, as in ‘Whack-a-Mole.’ I say ‘we.’ Unfortunately, few people understand this, although the plethora of unconfirmable observations gets people asking, ‘Why?’ I’ve been saying for years that 90 percent of biomarkers studies are crap. And this is so even if the logistical, study conduct issues are carried out flawlessly. Sloppiness a la Potti/Nevins leads to 100 percent crap.”


When Juliet Jacobs found out she had lung cancer, she was terrified, but realized that her hope lay in getting the best treatment medicine could offer. So she got a second opinion, then a third. In February of 2010, she ended up at Duke University, where she entered a research study whose promise seemed stunning.

Doctors would assess her tumor cells, looking for gene patterns that would determine which drugs would best attack her particular cancer. She would not waste precious time with ineffective drugs or trial-and-error treatment. The Duke program — considered a breakthrough at the time — was the first fruit of the new genomics, a way of letting a cancer cell’s own genes reveal the cancer’s weaknesses.

But the research at Duke turned out to be wrong. Its gene-based tests proved worthless, and the research behind them was discredited. Ms. Jacobs died a few months after treatment, and her husband and other patients’ relatives are suing Duke.

http://www.nytimes.com/2011/07/08/healt ... genes.html

It's not just Potti, and it's not just microarrays. The whole concept of using molecular "signatures" of any kind to do anything beyond the most straightforward of cases (i.e. single gene mutations, etc.) is so flawed that everyone should have seen the problems at the beginning.

The reason no one seems to see it now is that the technology itself is so elegant and beautiful. But a beautiful biological technology is no different than a beautiful computer technology -- it's not worth much without some very good applications ("apps"), and personalized molecular medicine is still waiting for its first killer app.

Until such time as cancer patients are selected for therapies predicated upon their own unique biology (and not population studies), we will confront one targeted drug after another.

The solution to this problem has been to investigate the targeting agents in each individual patient's tissue culture, alone and in combination with other drugs, to gauge the likelihood that the targeting will favorably influence each patient's outcome.

Functional profiling results to date in patients with many types of cancer suggest this to be a highly productive direction.


Dr. Robert Nagourney, one of the pioneers of cell culture assays, has often described his personal misgivings surrounding the application of gene profiles for the prediction of response to therapeutics. His initial concerns regarded the oversimplification of biological processes and the attempt of analyte-driven investigators to ascribe linear pathways to non-linear events.

The complexities of human tumor biology took a turn toward the incomprehensible with the publication of a lead article in Nature by the group from Harvard under Dr. Pier Paolo Pandolfi. Dr. Nagourney sat in as Dr. Pandolfi reviewed his work during the Pezcoller Award lecture, held Monday, April 4, 2011, in Orlando at the AACR meeting.

What Dr. Pandolfi’s group found was that gene regulation is under the control of messenger RNAs (mRNAs) that are made from both coding and non-coding regions of the DNA. By competing for small regulatory RNAs (microRNAs), the gene and pseudogene mRNAs regulate one another. That is to say, RNA speaks to RNA and determines which genes will be expressed.

To put this in context, Dr. Pandolfi’s findings suggest that the 2 percent of the human genome that codes for known proteins (the part that everyone currently studies) represents only 1/20 of the whole story. One of the most important cancer-related genes (PTEN) is under the regulation of 250 separate, unrelated genes. Thus PTEN, KRAS and all genes are under the direct regulation and control of genetic elements that no one has ever studied.

This observation represents one more nail in the coffin of unidimensional thinkers who have attempted to draw straight lines from genes to functions. It further suggests that attempts on the part of gene profilers to characterize patients' likelihoods of response based on gene mutations are not only misguided but may actually be dishonest.

The need for phenotype analyses like the functional profiling performed at Rational Therapeutics has never been greater. As the systems biologists point out, complexity is the hallmark of biological existence. Attempts to oversimplify phenomena that cannot be simplified have led us, and will continue to lead us, in the wrong direction.

Literature Citation: Poliseno L, et al. A coding-independent function of gene and pseudogene mRNAs regulates tumour biology. Nature. 2010 Jun 24;465(7301):1033-1038.


  • 4 weeks later...

What some patients are experiencing is that there is no right mutation (positive) or wrong mutation (negative). There are the right drugs and the wrong drugs. There are "sensitive" drugs and there are "resistant" drugs. There is no lung cancer chemo, or breast cancer chemo, or ovarian cancer chemo. There is chemo that is sensitive (effective) or resistant (ineffective) for each and every "individual" cancer patient, not for populations. And there are chemos that are shared across tumor types.

Patients would certainly have a better chance of success had their cancer been "chemo-sensitive" rather than "chemo-resistant": in sensitive disease it is more apparent that chemotherapy improves patients' survival, and identifying the most effective chemotherapy would be more likely to improve survival.

Targeted therapy is supposed to halt the growth of certain cancers by zeroing in on a signaling molecule critical to the survival of those cancer cells. And although these targeted therapies are initially effective in a subset of patients, the drugs eventually stop working and the tumors begin to grow again. That is because of cross-talk: cancers share pathways across tumor types.

All that genetic mutation or amplification testing can tell us is whether or not the cells are potentially susceptible to a particular mechanism/pathway of attack. It only gives an indication based on the statistical likelihood that a certain drug will work in average populations (not individuals). It can't tell you whether one targeted drug is better or worse than another targeted drug that hits the same pathway. There are differences. The drug has to get inside the cells in order to target anything.

The "cell" is a system, an integrated, interacting network of genes, proteins and other cellular constituents that produce functions (genes and proteins do not operate alone within the cell). You need to analyze the systems' response to drug treatments, not just one or a few pathways. The cell "function" methodology measures the net effect of all processes within the cancer (the entire genome), acting with and against each other in real time, and it tests living cells actually exposed to drugs and drug combinations of interest.

The ultimate driver of the functional assay is the cell, composed of hundreds of complex molecules that regulate the pathways necessary for vital cellular functions. Because a targeted drug can perturb any one of these pathways, it is important to examine the effects of the drug within the context of the whole cell. The assay allows for testing of different drugs within the same class, and of drug combinations, to detect drug synergy and drug antagonism. The "forest" is looked at and not just the "trees." There are many (too many) targets contributing to altered cellular (forest) function, hence all the different "trees" that correlate in different situations.

