For a single time series, the distribution is the same as in the two-sample Kolmogorov-Smirnov test, namely the distribution of the maximum of the absolute value of a Brownian bridge. It is computed as follows (Durbin, 1973; van Mulbregt, 2018):
For \(t_n(x) < 1\):
$$
P(t_n(X) \le t_n(x)) =
\frac{\sqrt{2 \pi}}{t_n(x)} \, t \left(1 + t^8\left(1 + t^{16}\left(1 + t^{24}\left(1 + \ldots\right)\right)\right)\right),
$$
where \(t = \exp(-\pi^2 / (8 \, t_n(x)^2))\) and the nesting is truncated at \(t^{8 k_{max}}\) with
\(k_{max} = \lfloor \sqrt{2 - \log(tol)} \rfloor\);
else:
$$
P(t_n(X) \le t_n(x)) = 1 - 2 \sum_{k = 1}^{\infty} (-1)^{k - 1} \exp(-2 k^2 \, t_n(x)^2),
$$
where the sum is truncated at the first \(k\) for which
\(|2 (-1)^{k - 1} \exp(-2 k^2 \, t_n(x)^2) - 2 (-1)^{k - 2} \exp(-2 (k - 1)^2 \, t_n(x)^2)| \le tol\).
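Both branches can be sketched in a few lines of Python. This is a minimal illustration of the formulas above, not the implementation actually used; the function name `kolmogorov_cdf` and the default `tol` are chosen here only for the example.

```python
import math


def kolmogorov_cdf(x, tol=1e-12):
    """P(t_n(X) <= x) for a single time series, with x = t_n(x)."""
    if x <= 0:
        return 0.0
    if x < 1:
        # Small-x branch: sqrt(2*pi)/x * t * (1 + t^8 (1 + t^16 (1 + t^24 (...))))
        # with t = exp(-pi^2 / (8 x^2)), truncated at t^(8 * k_max).
        t = math.exp(-math.pi ** 2 / (8 * x ** 2))
        k_max = int(math.sqrt(2 - math.log(tol)))
        nested = 0.0
        for k in range(k_max, 0, -1):  # evaluate the nesting from the inside out
            nested = t ** (8 * k) * (1 + nested)
        return math.sqrt(2 * math.pi) / x * t * (1 + nested)
    # Large-x branch: 1 - 2 * sum_k (-1)^(k-1) * exp(-2 k^2 x^2),
    # stopping once consecutive terms differ by at most tol.
    total, prev, k = 0.0, math.inf, 1
    while True:
        term = 2 * (-1) ** (k - 1) * math.exp(-2 * k ** 2 * x ** 2)
        total += term
        if abs(term - prev) <= tol:
            break
        prev, k = term, k + 1
    return 1.0 - total
```

As a rough sanity check, `kolmogorov_cdf(1.36)` should come out close to 0.95, the familiar 5% point of the Kolmogorov distribution, and the two branches agree near \(t_n(x) = 1\).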
In the case of multiple time series, the distribution equals that of the maximum of a p-dimensional squared Bessel bridge. It can be computed as (Kiefer, 1959)
$$P(t_n(X) \le t_n(x)) =
\frac{4}{\Gamma(p / 2) \, 2^{p / 2} \, t_n(x)^p} \sum_{k = 1}^{\infty} \frac{\gamma_{(p - 2)/2,\, k}^{p - 2} \exp\left(-\gamma_{(p - 2)/2,\, k}^2 / (2 \, t_n(x)^2)\right)}{J_{p/2}(\gamma_{(p - 2)/2,\, k})^2},$$
where \(J_p\) is the Bessel function of the first kind and order \(p\), \(\Gamma\) is the gamma function, and \(\gamma_{p, k}\) denotes the \(k\)-th zero of \(J_p\).
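The multivariate series can be sketched similarly. For simplicity, the sketch below assumes \(p\) is even, so that the order \((p - 2)/2\) is an integer and `scipy.special.jn_zeros` can supply the zeros \(\gamma_{(p - 2)/2, k}\); the function name `bessel_bridge_cdf` and the fixed cutoff `n_terms` are illustrative choices, not part of the text above.

```python
import math

from scipy.special import jn_zeros, jv


def bessel_bridge_cdf(x, p, n_terms=100):
    """P(t_n(X) <= x) for p time series (p even), with x = t_n(x)."""
    if x <= 0:
        return 0.0
    order = (p - 2) // 2              # order of the Bessel function whose zeros enter the series
    zeros = jn_zeros(order, n_terms)  # gamma_{(p-2)/2, 1}, ..., gamma_{(p-2)/2, n_terms}
    series = sum(
        g ** (p - 2) * math.exp(-g ** 2 / (2 * x ** 2)) / jv(p / 2, g) ** 2
        for g in zeros
    )
    return 4.0 / (math.gamma(p / 2) * 2 ** (p / 2) * x ** p) * series
```

Truncating after a fixed number of zeros is usually harmless here, since the terms decay like \(\exp(-\gamma^2 / (2 t_n(x)^2))\) while the zeros grow roughly linearly.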