Join us this Friday, April 7, 2017, for an interesting discussion on Dose Titration Algorithm Tuning (DTAT) by David Norris. Please see below for background material, the YouTube link for the session, and timings. Spread the word!
A question from a viewer - What would be the basis for choosing the initial slope for dose titration?
How much overhead does DTAT carry compared to traditional EBE or Bayesian-probability-driven dose decisions?
Thanks for this question, Vijay. I’m glad to follow up now also with a written answer.
The initial slope mentioned here is just one of three tuning parameters in Table 3 of the DTAT paper. I think the question applies equally well to any such tuning parameter (like the starting dose or the relaxation constant), so let me answer it in that broader context here.
In the paper, the starting value of slope1 is derived from a simulation exercise. To the extent that animal studies or prior first-in-human studies yield models that can serve as a basis for similar simulations, you could conduct a similar exercise. At a philosophical level, DTAT embraces modeling and the fact that our models are always imperfect. By the time we’re willing to try an investigational new drug in humans, we ought by all means to have not just one model but several competing models that could address the question of initial tuning parameters. The principle might very well be to perform DTAT in silico before doing it in vivo.
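To make the idea of tuning a parameter in silico concrete, here is a deliberately minimal sketch in Python. Everything in it is hypothetical: the lognormal MTDi distribution, the toy geometric-escalation rule, and all numeric values are illustrative stand-ins, not the model or parameters from the DTAT paper. Each simulated patient has an individual maximum tolerated dose MTDi; doses escalate by a multiplicative factor (playing the role of slope1) until further escalation would exceed MTDi; candidate factors are then scored by the fraction of patients who end up near their own MTDi.

```python
import numpy as np

rng = np.random.default_rng(42)

def titrate_one(mtd_i, start_dose, slope1, n_cycles=10):
    """Toy titration: escalate dose geometrically by slope1 each cycle,
    stopping at the last dose that remains tolerated (<= mtd_i)."""
    dose = start_dose
    for _ in range(n_cycles):
        if dose * slope1 <= mtd_i:
            dose *= slope1
        else:
            break
    return dose

def fraction_adequately_dosed(start_dose, slope1, mtds, tol=0.8):
    """Score a candidate slope1 by the fraction of simulated patients
    whose final dose reaches at least `tol` of their individual MTDi."""
    final = np.array([titrate_one(m, start_dose, slope1) for m in mtds])
    return float(np.mean(final >= tol * mtds))

# Hypothetical population of individual MTDs (arbitrary lognormal spread).
mtds = rng.lognormal(mean=0.0, sigma=0.5, size=1000)

# Pick the best-scoring escalation factor from a small candidate grid.
best = max(
    ((s1, fraction_adequately_dosed(0.25, s1, mtds)) for s1 in (1.1, 1.2, 1.3, 1.5)),
    key=lambda pair: pair[1],
)
print(best)
```

In practice one would replace the toy escalation rule with the actual titration algorithm and the lognormal draw with simulations from whatever competing models the preclinical data support; the point is only that tuning-parameter selection reduces to an optimization over simulated populations.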
As to the question of (computational) overhead (please correct me if something else was meant), the calculations performed for this study were nearly instantaneous. The pomp package I used in this work employs deSolve under the hood to integrate my ODEs, and this works very quickly on models like these. I believe the DTAT paper makes it abundantly clear that even very modest titration algorithms that move us in the direction of MTDi can yield great benefits.
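For a sense of why such ODE-based calculations are so fast, here is a generic one-compartment PK model with first-order absorption, solved with SciPy's `solve_ivp` as a Python analogue of what deSolve does in R. The model structure and parameter values are illustrative only, not those of the DTAT paper.

```python
import time
import numpy as np
from scipy.integrate import solve_ivp

def one_compartment(t, y, ka=1.0, ke=0.1):
    """Generic one-compartment PK model: drug moves from the gut to the
    central compartment (rate ka) and is eliminated (rate ke)."""
    gut, central = y
    return [-ka * gut, ka * gut - ke * central]

t0 = time.perf_counter()
sol = solve_ivp(one_compartment, (0.0, 24.0), [100.0, 0.0])  # 24 h after a 100-unit dose
elapsed = time.perf_counter() - t0
print(f"solved in {elapsed * 1000:.2f} ms")  # a small fraction of a second on ordinary hardware
```

Systems of this size solve in milliseconds, so per-patient dose updates driven by ODE models impose essentially no overhead at the bedside.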
But I very much hope that practical DTAT applications of the future will increasingly involve very rich models (e.g., SDEs rather than ODEs) with many parameters and multivariate measurements to be assimilated. The best cumulative growth of our knowledge will come from applying intensely realistic, mechanistic models amenable to ‘multivariate’ criticism from multiple directions. (The more severe and multifaceted the criticism, the greater the improvement these models can enjoy, and the greater the growth in our knowledge!) Consequently, I anticipate that practical DTAT applications will increasingly use computationally demanding model formulations and estimation methods (like particle MCMC, which I’m exploring currently).
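To illustrate the SDE direction (again a generic sketch with arbitrary parameters, not anything from the paper or my current pMCMC work), a single simulated path for first-order drug elimination with multiplicative noise can be generated with the standard Euler-Maruyama scheme:

```python
import numpy as np

def euler_maruyama(drift, diffusion, y0, t_grid, rng):
    """Simulate one path of dY = drift(Y) dt + diffusion(Y) dW
    on the time grid t_grid, using the Euler-Maruyama scheme."""
    y = np.empty_like(t_grid)
    y[0] = y0
    for i in range(1, len(t_grid)):
        dt = t_grid[i] - t_grid[i - 1]
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        y[i] = y[i - 1] + drift(y[i - 1]) * dt + diffusion(y[i - 1]) * dw
    return y

rng = np.random.default_rng(0)
t = np.linspace(0.0, 24.0, 481)  # 24 h at 3-min resolution
# First-order elimination (rate 0.1/h) with multiplicative noise (arbitrary values).
path = euler_maruyama(lambda c: -0.1 * c, lambda c: 0.05 * c, 100.0, t, rng)
```

Whereas the ODE version above yields one deterministic trajectory, each call here yields a different realization; state estimation then means assimilating noisy measurements against an ensemble of such paths, which is exactly where particle methods (and their computational cost) enter.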
On the other hand, I expect these computational constraints will be severely binding only in our R&D work (with its multiple, nested simulations, etc.) and not in the dosing of individual patients in trials. (That is, I doubt the care team will ever be impatiently waiting for MCMC chains to mix as the next scheduled dose draws nigh. But we, on the other hand, ought to be pushing the limits of our patience in our modeling & simulation work, and telling our computational colleagues about our problems.)