Athlete Biological Passport Standard Deviation?

When I drill down into data, it's no surprise when small errors and discrepancies pop up. But the thing that jumped out when drilling down into the Cobo ABP data http://veloclinic.com/cobo-athlete-biological-passport-visualization-and-discussion/ was that the Z-score model was apparently producing far wider cut offs than the actual ABP. Jeroen Swart pointed this out to me (hopefully I'm not getting him in trouble by dragging him into this).

Doing some quick cut off hacking, I found the Z-score model converged with the ABP output on the example he tweeted if I used 1 standard deviation cut offs rather than 2.3 standard deviation cut offs. See my Cobo post for the quick rationale for why the Z-score and ABP models should basically converge given sufficient data points.
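To make the cut off hacking concrete, here is a minimal sketch of the kind of Z-score cut off I mean: mean ± k standard deviations of the prior values, with k as the knob being compared (2.3 vs 1). The Hgb values are made up for illustration only; this is not the ABP software's actual calculation.

```python
from statistics import mean, stdev

def z_score_cutoffs(values, k=1.0):
    """Cut offs at mean ± k sample standard deviations of prior values.

    k = 2.3 roughly matches a one-sided 99th percentile under a normal
    model; k = 1.0 is the tighter band that matched the ABP output in
    the Cobo example.
    """
    m, s = mean(values), stdev(values)
    return m - k * s, m + k * s

# Hypothetical Hgb values (g/dL), for illustration only
hgb = [14.8, 15.1, 14.6, 15.3, 14.9]
lo1, hi1 = z_score_cutoffs(hgb, k=1.0)    # tighter band
lo23, hi23 = z_score_cutoffs(hgb, k=2.3)  # wider band
```

A new measurement falling outside (lo23, hi23) would be flagged under the 2.3 SD convention; the 1 SD band flags much more readily.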

Interest piqued, I grabbed some published examples. The first to take on is Figure 2 from Zorzoli and Rossi https://onlinelibrary.wiley.com/doi/full/10.1002/dta.173

Figure reproduced for educational purposes only.

So first off, the numbers are hard to read due to poor resolution and overlap, so I did my best to reproduce them. Not getting the numbers quite right can affect the work below. Then I plotted the figure data with the Z-score model (1 standard deviation cut offs) overlaid to see how they compare.

From the plots it's clear that the models converge very closely on the OFF score and Reticulocyte %, and fairly well on the Hgb.

This convergence is a problem for the paper because the paper uses this figure as an example of a likely doped profile:

ABP profile of an athlete considered as suspicious

And states:

In these profiles the Bayesian adaptive model has identified the Hb or Off‐hr score abnormal with a 99% probability (either for the single measurement as a function of previous results or for the complete sequence) or with normal or lower levels of probability.

Meaning that the figure is showing points that are outside the 99th percentile, i.e. beyond roughly 2.3 standard deviations.

Recall, however, that I am using 1 standard deviation (roughly the 84th percentile, one-sided) as the cut offs for the Z-score model, and that the Z-score and ABP models should converge.
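For reference, the mapping between SD cut offs and one-sided percentiles under a normal model can be checked directly with the standard normal distribution:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal, mean 0, SD 1

# One-sided percentile corresponding to a given z cut off
print(round(nd.cdf(2.326), 3))  # 0.99  -> 2.3 SD ~ 99th percentile
print(round(nd.cdf(1.0), 3))    # 0.841 -> 1 SD ~ 84th percentile

# z value for the 99th percentile
print(round(nd.inv_cdf(0.99), 3))  # 2.326
```

So a 1 SD band is a far weaker criterion than the 99% probability the paper describes, which is what makes the apparent convergence a problem.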

Given that the ABP software is not publicly available, I can't confirm what it does statistically to generate the figure used by Zorzoli and Rossi, but I can show my work for the Z-score model: https://drive.google.com/file/d/1YqcRHieehucKumXG9QhcjAsC34Vnd7OH/view?usp=sharing

The question is whether this is a one-off issue of stat hacking in a couple of figures used for "illustrative" purposes, or whether the ABP black-box output has not been sufficiently vetted/replicated.

Or is something else entirely going on? For example, the published literature on the ABP says the cut offs are based on specificity rather than probability. Is there some undisclosed doping "prevalence" being passed into the ABP model that happens to work out to probability cut offs with tighter bounds?

I don't know; either way it's interesting…

Thanks for paying attention, cheers.