What are the sticking points between statistics and pharmacometrics?

I am interested in gathering perspectives from statistical and pharmacometric practitioners on where we have the most difficulty reaching alignment. Please share your experiences and some context around each topic. We can then use this discussion forum to dive more deeply into the issues.

When do statisticians and pharmacometricians disagree?

3 Likes

This was extensively discussed after Lewis Sheiner wrote his critical opinion in CPT (The intellectual health of clinical drug evaluation. Clin Pharmacol Ther. 50, 4–9, 1991). One of the best discussions of this issue, comparing statisticians and pharmacometricians (called pharmacokineticists at the time), is from Stephen Senn, who gave a lecture at the European Cooperation in the Field of Scientific and Technical Research (COST) meeting in Geneva in 1997 and later published it in CPT (Statisticians and pharmacokineticists: what they can still learn from each other. Clin Pharmacol Ther. 88, 328–34, 2010).

Senn's paper is a good start, I believe; things have evolved since… but not that much.

1 Like

Rene,

Thanks for the great references. They are both excellent reads. I would add Ken Kowalski’s paper in Statistics in Biopharmaceutical Research (May 2015, 7(2), pp. 148-159) to the mix as well.

Many of the points raised result from a lack of alignment on the purpose of the given study. Is it to explore what might happen and develop hypotheses as to why, is it to continue building on what we have learned so far and further our understanding, or is it to make decisions? Sheiner argues to “restore intellectual primacy to the questions we ask, not the methods by which we answer them.” Unfortunately, this is not an “either/or” choice. Scientists must recognize that both aspects are required. Asking the right questions with appropriate clarity is essential; however, the methods then need to be tailored to those questions for the intended application. Nor is it possible to eliminate judgments from the design, analysis, or interpretation of research. Good science demands transparency in these judgments and their implications. They cannot be abdicated to “decision-makers”.

Meaningful research cannot be conducted without appropriate consideration of the application of the research. The questions the research is designed to answer need to be relevant to the application, and the methods used to answer the questions then need to meet the specific needs. A set of pairwise comparisons cannot provide information about the time course of drug response, and an exposure-response model cannot determine whether the effect observed for a given dose is larger than the observed variability. Neither alone answers the question “Does the drug work?”

We should be able to align on which analyses make the most sense to use for which questions for which applications. Certain approaches are better suited for exploration, others for learning, and others for decision-making. There is no reason why data from a given study can’t be used for all, as long as we are transparent in our intentions.

Continued advances in innovative designs and model-informed drug development will require that we work together. We must synergistically apply the best approaches of both sciences with appropriate rigor to accelerate delivery of more effective and safer medicines to patients. Ideally, we can do this while increasing our confidence and transparency in the reliability of our conclusions.

This is a good discussion. There are two things I have often seen statisticians struggle with in PMX. One is that there is not much discussion of the limitations of PMX models. The limitations of well-known statistical models have been studied for a long time and are taught in schools. Some of these limitations are actually inherited by PMX models; others are corrected. It would be important for pharmacometricians to study the limitations of PMX models.
The other is attention to study design elements and the sources of variability in responses in PMX models. There are many sources of variability due to study design (such as the period effect in a crossover study). These should be adjusted for before evaluating the factors of interest; including or excluding them can lead to a different set of covariates in the population analysis.
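To make the crossover example concrete, here is a minimal sketch (simulated data; all effect sizes are hypothetical) showing how omitting the period term distorts the treatment estimate when the sequences are unbalanced:

```python
# 2x2 crossover with unbalanced sequences: omitting the period effect
# biases the within-subject treatment estimate.
# Simulated data; effect sizes are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_ab, n_ba = 30, 10                        # unbalanced sequence sizes
n = n_ab + n_ba
subj = np.repeat(np.arange(n), 2)          # two periods per subject
period = np.tile([0, 1], n)                # 0 = period 1, 1 = period 2
# Sequence AB is treated in period 1; sequence BA is treated in period 2
treat = np.concatenate([np.tile([1, 0], n_ab), np.tile([0, 1], n_ba)])
tau, pi = 0.5, 0.8                         # true treatment and period effects
subj_eff = np.repeat(rng.normal(0, 1.0, n), 2)
y = tau * treat + pi * period + subj_eff + rng.normal(0, 0.3, 2 * n)
df = pd.DataFrame({"y": y, "treat": treat, "period": period, "subj": subj})

adj = smf.ols("y ~ treat + period + C(subj)", df).fit()
unadj = smf.ols("y ~ treat + C(subj)", df).fit()
print(f"with period term:    {adj.params['treat']:.2f}")    # close to 0.5
print(f"without period term: {unadj.params['treat']:.2f}")  # pulled toward 0.1
```

(With balanced sequences the point estimate would survive, but the unmodeled period variance would still inflate the residual and hence the standard error.)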

Many thanks to @rbruno100 for the Sheiner piece. The problem of SxP is, I believe, fundamentally a philosophical one. So Sheiner’s unselfconscious and unaffected appeal to epistemology seems most salutary. I believe Sheiner ends up with an imprecise diagnosis, however, because he does not avail himself of important ideas from Karl Popper’s toolkit. Specifically, I am inclined to identify what Karl Popper called ‘justificationism’ [1] as the root cause of the problem. (Sheiner does seem to appreciate something to this effect in his discussion around “certainty” in the HOW DID THIS HAPPEN section. My aim in introducing justificationism is not to substitute a more posh-sounding word, but rather to provide an index to Popper’s comprehensive, far-ranging and devastating treatment in [1].)

The schism between EBM and Person-Centered Healthcare (PCH) bears some strong resemblance to that between (respectively) Biostatistics and Pharmacometrics. Both schisms originate in large part from divergent metaphysical commitments. I have advanced this case in a paper slated for the upcoming 2017 Philosophy Theme Issue of JECP, but already published early online [2]. My references #2 and #10 in the piece make excellent introductions to the philosophical problems (and the associated technical language), readily accessible to persons already steeped in the sciences.

Kind regards,
David

  1. Popper KR (Bartley WW, ed). Realism and the Aim of Science. London; New York: Routledge; 1993.
  2. Norris DC. Casting a realist’s eye on the real world of medicine: Against Anjum’s ontological relativism. Journal of Evaluation in Clinical Practice. 2017. doi:10.1111/jep.12689.

Apropos of @jingtao.wu’s comments, @andy.stein’s preprint on BioRxiv’s new “Drug Development and Clinical Therapeutics” channel shows an example of a pharmacometrician examining his models in a mode that statisticians would very much appreciate and understand. (The model limitations Andy explores are specifically those related to parameter identifiability.)
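For readers who want a concrete instance of the identifiability issue, here is a minimal sketch (mine, not from Andy's preprint; the model and numbers are illustrative) of the classic 'flip-flop' problem in a one-compartment oral-dose model, where two distinct parameter sets produce exactly the same concentration profile:

```python
# 'Flip-flop' non-identifiability: in a one-compartment oral-absorption
# model, (ka, ke, V) and (ke, ka, V*ke/ka) give identical concentrations,
# so concentration data alone cannot distinguish the two parameter sets.
# Illustrative values; not taken from the preprint under discussion.
import numpy as np

def conc(t, dose, ka, ke, v):
    """One-compartment, first-order absorption (bioavailability taken as 1)."""
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0.1, 24.0, 200)
c1 = conc(t, dose=100.0, ka=1.2, ke=0.3, v=10.0)
c2 = conc(t, dose=100.0, ka=0.3, ke=1.2, v=10.0 * 0.3 / 1.2)  # swapped, rescaled
print(np.max(np.abs(c1 - c2)))  # ~1e-15: the two curves are indistinguishable
```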

Following up on @rbruno100’s post: having at last been able to acquire and read Senn’s 2010 piece, “Statisticians and pharmacokineticists: what they can still learn from each other,” I now have a question about it.

I’m puzzled by Senn’s characterizations (p. 2, left column—quoting from his 1997 paper) of pharmaceutical statistics as “pragmatic in purpose” and pharmaco[metrics] as “explanatory in purpose.” I’m not sure I quite buy this characterization, nor that I truly know just what Senn might mean by this. Could anyone enlighten me? Some of this seems to loop back to @MattR’s commentary from Jan 16.

Great question! It would be lovely to have Professor Senn chime in. I often think of statistics as evaluating what happened and pharmacometrics as explaining why it happened. This is largely due to the differences in the structures of the models. Typically, the statistical models used to evaluate a clinical study are constructed to detect a difference between the treatment arms (sometimes assuming a structure for dose levels, sometimes not). They rely on randomization to provide a causal mechanism for inference, and seek to determine whether the observed differences are greater than would be expected if the treatment(s) had no effect. Pharmacometric models, on the other hand, typically incorporate exposure as a causal mechanism for response. Further, they leverage knowledge of pharmacology and physiology to provide explanations for variability in exposure and the subsequent impact on response. As is always the case, the appropriateness of the modeling approach depends on the question being asked, but typically the two approaches can be used synergistically to provide a robust understanding of clinical trial results. I’d be interested in others’ thoughts as well.
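To make that contrast concrete, here is a minimal sketch (simulated data; the Emax form and all values are illustrative) of the two modes applied to the same trial, a between-arm test asking whether a difference exists and an exposure-response fit asking how response tracks exposure:

```python
# Two views of one simulated trial:
# (1) statistical mode: is the observed arm difference larger than noise?
# (2) pharmacometric mode: how does response relate to exposure?
# Simulated data; the Emax form and parameter values are illustrative.
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
n = 60
# Roughly half the subjects get placebo (zero exposure), half get drug
exposure = np.where(rng.random(n) < 0.5, 0.0, rng.lognormal(1.0, 0.5, n))
response = 10.0 * exposure / (3.0 + exposure) + rng.normal(0, 2, n)

# Statistical mode: detect a treatment difference
placebo, treated = response[exposure == 0], response[exposure > 0]
print(stats.ttest_ind(treated, placebo))

# Pharmacometric mode: estimate the exposure-response relationship
def emax_model(c, emax, ec50):
    return emax * c / (ec50 + c)

(emax_hat, ec50_hat), _ = curve_fit(emax_model, exposure, response, p0=[5.0, 1.0])
print(f"Emax ~ {emax_hat:.1f}, EC50 ~ {ec50_hat:.1f}")
```

The first analysis supports a decision (the difference is unlikely to be chance); the second offers an explanation and a basis for extrapolating to other doses.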

1 Like

I think that MattR is broadly right in the distinctions given.

The distinction between explanatory and pragmatic purposes of clinical trials is due to Schwartz, D., & Lellouch, J. (1967). Explanatory and pragmatic attitudes in therapeutic trials. J Chronic Dis, 20, 637-648.
Another word for explanatory might be scientific.
Because the most visible part of drug development that statisticians are involved in is Phase III, where relatively assumption-free demonstrations of proof associated with intention-to-treat philosophies are regarded as particularly valuable, statisticians have tended to be involved with pragmatic trials. Designs are chosen that do not need modelling assumptions but can be justified in terms of randomisation, provided that data are not missing. The causes investigated are human: give a patient this drug and this happens.
Pharmacometricians are more interested in relating effects to underlying chemical and physiological processes.

However, my own view is that these two approaches need not be as different as sometimes suggested. For example, statistical analysis, even if strongly tied to the design employed and to randomisation, can be improved by using modelling approaches. See

Senn, S. J. (2005). An unreasonable prejudice against modelling? Pharmaceutical statistics, 4, 87-89
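By way of a concrete instance of that point (my own sketch, not from the paper): in a randomized parallel-group trial, adjusting for a prognostic baseline covariate leaves the treatment estimate unbiased while shrinking its standard error.

```python
# Modelling improving a randomisation-justified analysis: adjusting for a
# prognostic baseline covariate sharpens the treatment-effect estimate.
# Simulated data; effect sizes are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
treat = rng.permutation(np.repeat([0, 1], n // 2))  # 1:1 randomization
baseline = rng.normal(0, 1, n)                      # prognostic covariate
y = 0.4 * treat + 1.5 * baseline + rng.normal(0, 1, n)
df = pd.DataFrame({"y": y, "treat": treat, "baseline": baseline})

unadj = smf.ols("y ~ treat", df).fit()
adj = smf.ols("y ~ treat + baseline", df).fit()
print(unadj.params["treat"], unadj.bse["treat"])  # similar estimate, larger SE
print(adj.params["treat"], adj.bse["treat"])      # similar estimate, smaller SE
```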

On the other hand, randomised designs are rarely optimal but are often robust. I think it can be useful to consider carefully what has to be assumed when using alternatives, such as adaptive or dose-escalation designs. These can turn out to be extremely inefficient if secular trends are present, which shows that strong (if admittedly on occasion reasonable) assumptions are involved in using them. IMO these assumptions are best made explicit rather than overlooked as if of no consequence.
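Here is a toy simulation of the secular-trend point (mine; numbers are illustrative): in an escalation design, dose and calendar time are confounded, so an outcome drift masquerades as a dose effect even for a drug that does nothing.

```python
# Secular trend confounded with dose escalation: a null drug appears
# active because later (higher-dose) cohorts benefit from an outcome drift.
# Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(11)
cohorts, per_cohort = 5, 20
dose = np.repeat([1.0, 2.0, 4.0, 8.0, 16.0], per_cohort)  # escalating cohorts
time = np.repeat(np.arange(cohorts), per_cohort)          # cohort = calendar time
y = 0.0 * np.log(dose) + 0.5 * time + rng.normal(0, 1, cohorts * per_cohort)

slope = np.polyfit(np.log(dose), y, 1)[0]
print(f"apparent dose effect: {slope:.2f}")  # ~0.7, despite a true effect of 0
```

A randomized allocation of doses over the same calendar period would not suffer this confounding, which is exactly the kind of assumption worth making explicit.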

Nevertheless, these distinctions between the two schools are unimportant compared with what together they can teach the world: namely, that different sources of variation have to be understood and hence analysed appropriately using carefully designed experiments. Only then can we manage complexity appropriately. Widespread failure to appreciate this is responsible for the overblown claims we currently have regarding personalised medicine. See https://errorstatistics.com/2014/07/26/s-senn-responder-despondency-myths-of-personalized-medicine-guest-post/
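In the spirit of that post, a tiny simulation (mine, not Senn's; numbers are illustrative) of how apparent 'responders' arise from noise alone:

```python
# A constant treatment effect plus within-patient noise still yields an
# apparent mix of 'responders' and 'non-responders' in a parallel trial.
# Illustrative numbers; the labels are the artifact being demonstrated.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
constant_effect = 1.0                             # identical for every patient
change = constant_effect + rng.normal(0, 2, n)    # observed change from baseline
print(f"'responders' (change > 2):     {np.mean(change > 2):.0%}")
print(f"'non-responders' (change < 0): {np.mean(change < 0):.0%}")
# Separating true patient-by-treatment interaction from this noise requires
# designs, such as repeated crossovers, that estimate within-patient variation.
```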

2 Likes

Many thanks, @StephenSenn, for elaborating on this point. Seeking out Schwartz & Lellouch (1967), I learned that it was republished more recently in J Clin Epidemiol, alongside seven other articles revisiting the explanatory-pragmatic distinction [1-8]. One of these articles [8] even articulates a new mechanistic-practical contrast, advancing ‘practical’ trials as useful for individual-level (doctor-patient) decision making, as against ‘pragmatic’ trials that would appeal to policy-makers.

What that larger discussion helped me to appreciate, however, was that the explanatory-pragmatic continuum concerns the intent, design and interpretation of RCTs specifically. In that regard, this distinction places matters in an overly ‘biostatistical’ frame, and so may not be suited to bridge an ideological or methodological gap with pharmacometrics. Furthermore, as your example of covariate adjustment post randomization illustrates, the distinction seems to grate against common-sense word meanings—at least for pragmatic. A trialist who accepts that all human knowledge is conjectural, and that randomization is imperfect, may adopt a ‘pragmatic’ (in the common-sense meaning) approach to analyzing her trial, and undertake to explore what different estimates emerge from adjustment under a variety of conjectured models. (To make matters worse, some of those conjectured models may even have an ‘explanatory’ character!) This brand of ‘pragmatism’ (which I would identify more formally with the metaphysical position of representational realism [9]) seems in ample supply in Pharmacometrics, and indeed rather too scarce in a Biostatistics dominated by idealism (in the technical sense [9]) or by what Whitehead called fallacies of misplaced concreteness [10].

  1. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Clin Epidemiol. 2009;62(5):499-505. doi:10.1016/j.jclinepi.2009.01.012.
  2. Zwarenstein M, Treweek S. What kind of randomized trials do we need? J Clin Epidemiol. 2009;62(5):461-463. doi:10.1016/j.jclinepi.2009.01.011.
  3. Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic–explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 2009;62(5):464-475. doi:10.1016/j.jclinepi.2008.12.011.
  4. Oxman AD, Lombard C, Treweek S, Gagnier JJ, Maclure M, Zwarenstein M. Why we will remain pragmatists: four problems with the impractical mechanistic framework and a better solution. J Clin Epidemiol. 2009;62(5):485-488. doi:10.1016/j.jclinepi.2008.08.015.
  5. Oxman AD, Lombard C, Treweek S, Gagnier JJ, Maclure M, Zwarenstein M. A pragmatic resolution. J Clin Epidemiol. 2009;62(5):495-498. doi:10.1016/j.jclinepi.2008.08.014.
  6. Maclure M. Explaining pragmatic trials to pragmatic policymakers. J Clin Epidemiol. 2009;62(5):476-478. doi:10.1016/j.jclinepi.2008.06.021.
  7. Karanicolas PJ, Montori VM, Devereaux PJ, Schünemann H, Guyatt GH. The practicalists’ response. J Clin Epidemiol. 2009;62(5):489-494. doi:10.1016/j.jclinepi.2008.08.013.
  8. Karanicolas PJ, Montori VM, Devereaux PJ, Schünemann H, Guyatt GH. A new “Mechanistic-Practical” Framework for designing and interpreting randomized trials. J Clin Epidemiol. 2009;62(5):479-484. doi:10.1016/j.jclinepi.2008.02.009.
  9. Blackmore J. On the Inverted Use of the Terms “Realism” and “Idealism” Among Scientists and Historians of Science. Br J Philos Sci. 1979;30(2):125–134.
  10. Norris DC. Dose Titration Algorithm Tuning (DTAT) should supersede ‘the’ Maximum Tolerated Dose (MTD) in oncology dose-finding trials. F1000Research. 2017;6:112. doi:10.12688/f1000research.10624.2.