# Rethinking Intermittent Modelling

Previous attempts have focused either on normalizing power to Critical Power/Functional Threshold Power:

nP = %CP^4
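As a minimal sketch of the fourth-power weighting above (function name and the per-sample framing are my own; the formula is the only thing taken from the text):

```python
def normalized_fraction(power_w, cp_w):
    """Fourth-power weighting of a power sample relative to CP,
    per the nP = %CP^4 rule above."""
    return (power_w / cp_w) ** 4

# Example: riding at 110% of a 300 W CP weighs in at ~1.46x
print(normalized_fraction(330, 300))
```

The fourth power is what makes the weighting so sensitive away from CP: at 110% of CP the weighted fraction is already about 1.46, and at 150% it balloons past 5.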

or

or on vessel approaches that dynamically track the W'/FRC balance:

W’bal = W’ – (P – CP)*t + (CP – P)*t*f(reconstitution)
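A per-second dead-reckoning sketch of that balance equation. The constant reconstitution factor `f_recon` is a hypothetical stand-in for f(reconstitution); published models (e.g. Skiba's) make recovery an exponential function of the depth below CP:

```python
def wprime_balance(power_series, cp, wprime, f_recon=0.3):
    """One-second dead-reckoning of W' balance.
    Above CP, W' is debited as (P - CP)*t; below CP it is credited
    as (CP - P)*t*f_recon, a toy constant reconstitution factor."""
    bal = wprime
    for p in power_series:              # one sample per second
        if p > cp:
            bal -= (p - cp)             # debit: (P - CP) * t
        else:
            bal += (cp - p) * f_recon   # credit: (CP - P) * t * f
        bal = min(bal, wprime)          # cannot recover past full W'
    return bal

# 60 s at 400 W against CP = 300 W, W' = 20 kJ drains 6 kJ -> 14000 J left
print(wprime_balance([400] * 60, 300, 20000))
```

Because each step only adds or subtracts from the running balance, any error in CP, W', or the reconstitution factor accumulates with time, which is exactly the cumulative-drift problem described below.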

The normalized power approach typically works reasonably well for power outputs close to CP/FTP. Outside that fairly narrow range, however, physiological variability makes it fairly useless for many riders.

The vessel approaches will potentially work better, as demonstrated by Skiba. However, any dead reckoning is going to be prone to cumulative drift, and the drift issues will only be compounded by trying to expand the model to include limiters that come into play above Super Critical Power and below Critical Power.

My thought is to consider a statistical approach where pacing/stress follows a simple rule of local and global intermittency.

The basic premise is that the best mean maximal power for any given duration can be achieved by a nearly constant effort when starting from a primed state. Any deviation from this mean will result in a penalty in terms of lost potential work.

The deviation from the mean, or intermittency, can be quantified as the relative standard error. The penalty associated with the intermittency can be normalized to the intermittency capacity at any given power.
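A sketch of that metric, reading "relative standard error" as the standard error of the mean divided by the mean (the coefficient of variation is a close cousin; the exact choice is my assumption, not the text's):

```python
import statistics

def intermittency(power_series):
    """Quantify deviation from the mean effort as relative standard
    error: standard error of the mean divided by the mean."""
    n = len(power_series)
    mean = statistics.fmean(power_series)
    sem = statistics.stdev(power_series) / n ** 0.5
    return sem / mean

# A perfectly steady effort scores 0; a spiky one scores higher
print(intermittency([300] * 10))           # steady
print(intermittency([100, 500] * 5))       # same mean power, very spiky
```

Both series average 300 W, so a mean-based metric can't tell them apart; the intermittency score is what separates the steady ride from the spiky one.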

For example, between CP and SCP, since work is debited against a fully available W’ regardless of the rate, there is little penalty for intermittency within this range. Note that while the intermittency envelope is going to have some structural overlap with the W’ envelope, it doesn’t have the weirdness of suddenly disappearing below CP.

Plotting power versus intermittency versus time gives us a three-dimensional illustration of performance capability.
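To make the surface idea concrete, here is a toy tabulation over a (duration, intermittency) grid. Everything here is illustrative: the two-parameter power-duration model is the standard CP + W'/t hyperbola, and the linear intermittency penalty is a placeholder I invented, not anything proposed in the text:

```python
def pd_model(t_s, cp=300.0, wprime=20000.0):
    """Classic two-parameter power-duration model: P(t) = CP + W'/t."""
    return cp + wprime / t_s

def capability_surface(durations, intermittencies, penalty=0.5):
    """Toy capability surface: mean maximal power at each duration,
    discounted by a hypothetical linear intermittency penalty."""
    return [[pd_model(t) * (1 - penalty * i) for i in intermittencies]
            for t in durations]

# Rows are durations (s), columns are intermittency levels
surface = capability_surface([60, 300, 1200], [0.0, 0.1, 0.2])
print(surface[0][0])  # 60 s, steady effort: ~633 W for CP=300, W'=20 kJ
```

The zero-intermittency column is just the familiar power-duration curve; the extra axis is what lets the "grown up" version express how much capability is forfeited as an effort becomes spikier.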

Hmm, it looks like a Power Duration Curve but now all grown up.