So after posting I did some more benchmarking - the 'bottleneck' when working in mapped volumes seems to be the initial creation of the swath of temp files. When running in a folder that was synced back to the host OS as a volume (Windows), the first run of a model was ~30% slower; on re-runs, Docker and native performance were identical (Docker was sometimes even faster than a Windows machine running different compilers). I'm not sure whether the extra I/O is blocking NONMEM from proceeding to the next step, or whether it's a side effect of those files landing in the host OS cache.
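If anyone wants to reproduce this, a rough way to compare is to time the same run against the host-synced volume versus the container's own filesystem. The image name and paths below are just placeholders, and `execute` is the usual PsN command:

```
# first run: model files live on the bind-mounted (host-synced) volume
time docker run --rm -v "$PWD":/data -w /data my-nonmem-image execute run1.mod

# comparison: copy the model into the container filesystem first, so the
# temp files NONMEM writes never touch the host-synced volume
time docker run --rm -v "$PWD":/data my-nonmem-image \
  sh -c 'cp -r /data /tmp/run && cd /tmp/run && execute run1.mod'
```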
To your point, the volatility and $$$ of a RAM disk seem like overkill IMO - as far as I can tell I haven't been bottlenecked by an SSD yet, even when writing out fort50 chains for Bayesian fits. Nevertheless, I do have my eye on some of the PCIe SSDs that offer over 1 GB/s of throughput.
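If anyone does suspect disk I/O, a free way to test before buying anything is a tmpfs (RAM-backed) mount for the scratch directory. This is just a sketch - image name and paths are made up, and anything left in the tmpfs disappears when the container exits:

```
# copy the model into a tmpfs (RAM-backed) scratch dir, run there, then copy
# the results back out before the container (and the tmpfs) goes away
docker run --rm -v "$PWD":/models --tmpfs /scratch:rw,size=2g my-nonmem-image \
  sh -c 'cp -r /models /scratch/run && cd /scratch/run && execute run1.mod && cp -r . /models/ramdisk-test'
```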
Anyway, I also pushed up some Dockerfiles that I'm using - https://github.com/dpastoor/dockerfiles
They are a bit simpler than yours (you need to manually put the NMCD and nonmem.lic file in the parent folder), but they are easy to follow. A bare NONMEM + PsN image comes out to ~850 MB.
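For a sense of the general shape, here is an illustrative sketch rather than the actual contents of that repo - the base image, paths, and installer arguments are assumptions you'd adjust for your NONMEM version, and the PsN layer is left out for brevity:

```dockerfile
# Illustrative only -- not the actual Dockerfile from dpastoor/dockerfiles.
# Assumes the NONMEM install media (NMCD/) and nonmem.lic sit next to this file.
FROM ubuntu:14.04

# compilers/tools NONMEM needs; trim or extend as required
RUN apt-get update && apt-get install -y gfortran perl \
    && rm -rf /var/lib/apt/lists/*

# pull the install media and license in from the build context
COPY NMCD /tmp/NMCD
COPY nonmem.lic /tmp/nonmem.lic

# placeholder installer step: the exact SETUP73/SETUP74 arguments depend on
# your NONMEM version and install notes
RUN cd /tmp/NMCD \
    && /bin/bash SETUP73 /tmp/NMCD /opt/nm73 gfortran y ar same rec q \
    && cp /tmp/nonmem.lic /opt/nm73/license/ \
    && rm -rf /tmp/NMCD /tmp/nonmem.lic
```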
@bdenney have you tried doing any cloud deployments with docker - e.g. packaging up a set of runs as a snapshot, pushing that snapshot to AWS/GCE/DigitalOcean, and running it there? Or are you mainly using it for isolation?
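Roughly what I have in mind is something like the following (host name, size, and image tag are made up, and docker-machine needs a provider access token set up beforehand):

```
# spin up a cloud host with docker-machine (needs DIGITALOCEAN_ACCESS_TOKEN set),
# then point the local docker client at it
docker-machine create --driver digitalocean --digitalocean-size 8gb nm-runner
eval "$(docker-machine env nm-runner)"

# build an image with the model files baked in and run it on the remote host
docker build -t nonmem-runs:run001 .
docker run --rm nonmem-runs:run001 execute run1.mod

# tear the host down when the runs are done
docker-machine rm nm-runner
```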