Optimal PK sampling schedule

I was talking with some friends the other day about optimizing PK sampling designs and was a little bit surprised at how infrequently they used optimal design. That got me wondering whether this was particular to their situation or an opportunity that is being missed more generally.

When you’re designing a PK study, how often do you use optimal design software or simulation to help plan a PK sampling schedule? When you don’t use OD tools, why don’t you? Is it too much effort? Are the tools not readily available to you? Is the sampling schedule driven by other factors out of your control?


Hi Jonathan,

Thank you for bringing up this topic. There are several methods that I know of:

  1. Select PK sampling time points by eye: several time points placed to catch Cmax and the elimination phase.
  2. WinNonlin has an inflation factor function (WNL Classic 5, Modeling -> PK Model). It can compare two sets of PK sampling time points and indicate which one is better; the WinNonlin user guide has an example of this.
  3. PFIM can automatically select the optimal PK sampling time points for user-specified conditions.
  4. Simulation and bootstrap methods to manually select the optimal PK sampling time points.

In my opinion, the first and second methods are frequently used for NCA purposes, and the third and fourth for sparse sampling.
The sampling schedule is also driven by patient availability and convenience, and by the need to collect other blood samples for medical reasons at the same time.
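As a toy sketch of the comparison idea behind methods 2-4, here is a minimal Python example that scores two candidate schedules by the determinant of J'J (a D-optimality criterion under additive error). The one-compartment oral model, parameter values, dose, and sampling times are all hypothetical, chosen only for illustration; this is not the algorithm of any of the tools above.

```python
import numpy as np

def conc(t, ka, ke, V, dose=100.0):
    """One-compartment oral model; dose and parameters below are hypothetical."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def d_criterion(times, theta=(1.5, 0.2, 20.0)):
    """det(J'J): proportional to the D-optimality criterion under additive error."""
    times = np.asarray(times, dtype=float)
    J = np.empty((times.size, len(theta)))
    for j, p in enumerate(theta):  # central finite differences for dC/dtheta_j
        h = 1e-6 * p
        up, dn = list(theta), list(theta)
        up[j] += h
        dn[j] -= h
        J[:, j] = (conc(times, *up) - conc(times, *dn)) / (2 * h)
    return np.linalg.det(J.T @ J)

sparse = [1, 2, 4, 8]             # misses the absorption peak and the late tail
rich = [0.5, 1, 2, 4, 8, 12, 24]  # superset of the sparse schedule

print(d_criterion(sparse), d_criterion(rich))
```

Because the rich schedule contains every time in the sparse one plus extra points, its information matrix dominates the sparse one's, so its determinant is guaranteed to be at least as large.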

Best wishes,


Hi Jonathan and Michelle,

I find a few different things at play here:

In early development, sampling times are an educated guess based on the nonclinical data. Assuming a model structure at that stage is over-ambitious, so more samples should be taken to ensure information is learned early. In large Phase 2 trials, sampling times are often driven by logistics for the patient and site.

The time that I find optimal sampling most useful is for late-phase biocomparison or DDI studies where most of the information is known a priori.

That said, I don’t know how often optimal design is used there. I usually see those studies replicate the early phase 1 sampling scheme with minor tweaks.



Hi Jonathan, Michelle and Bill,

Thanks for bringing up such an interesting question!

There are several hindrances that often discourage users from immersing themselves in OD:

  1. The mathematical aspect of the theory itself (believe it or not, matrices scare people off!), and the fact that communication about OD at conferences and meetings often emphasizes equations rather than potential applications. That alone discourages people from even starting to try.
  2. Software that is not always the most user-friendly and is often under-advertised (and developed mainly by academics), although the PODE (Population Optimum Design of Experiments) group is trying to bring together a set of recommended tools through several publications in which they perform comparison tests.
  3. On the practical side, re-coding the final model obtained in NONMEM or Monolix for the OD tool can be a hurdle for many. Hopefully platforms like DDMoRe will help.
  4. The question of who should own decisions on design variables: is it the clinical pharmacologist or the statistician? That line is not always crystal clear in teams.
  5. The myth that OD is only about PK samples; therefore, for simple one- or two-compartment models, why bother optimizing when we can guess where samples should be taken?
  6. The assumption of a model (although OD has several optimality criteria that take model uncertainty into account, and as good practice any OD design should be tested using simulation and re-estimation approaches); this can indeed be problematic in early phases, when prior information is poor.
  7. The idea that OD will only give you implausible, unrealistic sampling points (there are actually ways to constrain the numeric values, for example to integers only).
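On the last point: one simple way to keep sampling times realistic is to optimize only over a grid of clinically feasible visit times. Here is a minimal exhaustive-search sketch under the same kind of assumptions (a hypothetical one-compartment oral model with made-up parameters); dedicated tools like PFIM or PopED handle this far more capably.

```python
import itertools
import numpy as np

def conc(t, ka, ke, V, dose=100.0):
    """Hypothetical one-compartment oral model used only for illustration."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def log_det_fim(times, theta=(1.5, 0.2, 20.0)):
    """Log-determinant of J'J (D-criterion up to a constant, additive error)."""
    times = np.asarray(times, dtype=float)
    J = np.empty((times.size, len(theta)))
    for j, p in enumerate(theta):  # central finite differences for dC/dtheta_j
        h = 1e-6 * p
        up, dn = list(theta), list(theta)
        up[j] += h
        dn[j] -= h
        J[:, j] = (conc(times, *up) - conc(times, *dn)) / (2 * h)
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return logdet if sign > 0 else -np.inf

# Clinically feasible visit times only (hours): the search can never return
# an "implausible" time because it only ever considers this grid.
grid = [0.5, 1, 2, 3, 4, 6, 8, 12, 24]
best = max(itertools.combinations(grid, 4), key=log_det_fim)
print("D-optimal 4-point schedule from the grid:", best)
```

Exhaustive search is fine for a handful of candidate times; for larger grids the real OD packages use smarter exchange or gradient-based algorithms.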

The limitations described above are just a few, but enough to explain why uptake of OD is slow.

That said, OD is definitely useful in multiple areas, even for PK sampling!
For instance, pediatric (or elderly, or other special-population) studies require minimal intervention: which sampling points from the adult profile do we take away? How do we account for each age group's profile (maturation, allometric scaling, etc.) and choose sampling points that, on average, describe clearance well? What about sampling windows?
Another situation could be DDI studies as Bill mentioned.

In my opinion, though, most of the benefit lies in the optimization of PK/PD or disease-progression models, and this has not yet been explored enough, either in academia or in industry.
For instance: expensive biomarker sampling or invasive interventions; composite-score endpoints that can be trimmed down to the subscores most informative about drug effect or disease progression; enrollment-criteria decision-making (in slowly progressing diseases, which age group is most informative?); dose-ranging or adaptive designs (which dose levels, how many subjects per group); etc.

There are definitely a lot of opportunities out there to explore for OD.

Best wishes,


That’s a great answer, @vongc Camille, to an interesting thread (thanks Jonathan), and it touches on a number of points that led me to propose the session on ‘optimal design in preclinical oncology’ at last year’s ACOP7 meeting. I think you’re absolutely right: if it’s only about monotherapy PK, it’s an uphill battle to go ‘optimal’.

For me (and I work mostly in the preclinical space, so it may not generalize), I think the real value is likely to come when there are multiple readouts to be assessed or multiple timescales. Examples might be combination studies using compounds with very different PK profiles (that might be the DDI example that @bdenney Bill proposed), or PD endpoints that show very different kinetics, perhaps target engagement AND cancer-pathway inhibition, or efficacy AND safety endpoints.

I’m working up some examples of these using some popular tools in R (PopED, RxODE). I have in mind to publish a tutorial at some point and would welcome input from interested parties/co-authors.



Optimal design seems to be employed in proportion to the cost of the samples and the consequences of not getting the right information. E.g., in the pediatric space, sampling is very “expensive” (patient burden). Davda 2014 provides a set of PK samples “optimal” for mAbs; that’s for a healthy-volunteer trial, so the sampling is “cheap”.

Where I think OD should be more often applied is dose selection in Phase 2. Andrew Hooker and France Mentré have been thinking about this area a lot, and I hope industry takes note and designs Phase 2 studies (efficiently) to capture the dose-response correctly.
