Leading strategies
in competitive online prediction
Abstract
We start from a simple asymptotic result for the problem of online regression with the quadratic loss function: the class of continuous limited-memory prediction strategies admits a “leading prediction strategy”, which not only asymptotically performs at least as well as any continuous limited-memory strategy but also satisfies the property that the excess loss of any continuous limited-memory strategy is determined by how closely it imitates the leading strategy. More specifically, for any class of prediction strategies constituting a reproducing kernel Hilbert space we construct a leading strategy, in the sense that the loss of any prediction strategy whose norm is not too large is determined by how closely it imitates the leading strategy. This result is extended to the loss functions given by Bregman divergences and by strictly proper scoring rules.
1 Introduction
Suppose $\mathcal{F}$ is a normed function class of prediction strategies (the “benchmark class”). It is well known that, under some restrictions on $\mathcal{F}$, there exists a “master prediction strategy” (sometimes also called a “universal strategy”) that performs almost as well as the best strategies in $\mathcal{F}$ whose norm is not too large (see, e.g., [9, 5]). The “leading prediction strategies” constructed in this paper satisfy a stronger property: the loss of any prediction strategy in $\mathcal{F}$ whose norm is not too large exceeds the loss of a leading strategy by the divergence between the predictions output by the two strategies. The leading strategy therefore implicitly serves as a standard for prediction strategies in $\mathcal{F}$ whose norm is not too large: such a strategy suffers a small loss to the degree that its predictions resemble the leading strategy’s predictions, and the only way to compete with the leading strategy is to imitate it.
We start the formal exposition with a simple asymptotic result (Proposition 1 in §2) asserting the existence of leading strategies in the problem of online regression with the quadratic loss function for the class of continuous limited-memory prediction strategies. To state a non-asymptotic version of this result (Proposition 2) we introduce several general definitions that are used throughout the paper. In the following two sections Proposition 2 is generalized in two directions, to the loss functions given by Bregman divergences (§3) and by strictly proper scoring rules (§4). Competitive online prediction typically avoids making any stochastic assumptions about the way the observations are generated, but in §5 we consider, mostly for comparison purposes, the case where observations are generated stochastically. That section contains most of the references to the related literature, although there are bibliographical remarks scattered throughout the paper. The proofs are gathered in §6. The final section, §7, discusses possible directions of further research.
There are many techniques for constructing master strategies, such as gradient descent, strong and weak aggregating algorithms, following the perturbed leader, defensive forecasting, to mention just a few. In this paper we will use defensive forecasting (proposed in [31] and based on [39, 32] and much earlier work by Levin, Foster, and Vohra). The master strategies constructed using defensive forecasting automatically satisfy the stronger properties required of leading strategies; on the other hand, it is not clear whether leading strategies can be constructed using other techniques.
2 Online quadratic-loss regression
Our general prediction protocol is:
Online prediction protocol
FOR $n = 1, 2, \dots$:
Reality announces $x_n \in \mathbf{X}$.
Predictor announces $\mu_n \in \Gamma$.
Reality announces $y_n \in \mathbf{Y}$.
END FOR.
At the beginning of each round $n$ Predictor is given some side information $x_n$ relevant to predicting the following observation $y_n$, after which he announces his prediction $\mu_n$. The side information is taken from the information space $\mathbf{X}$, the observations from the observation space $\mathbf{Y}$, and the predictions from the prediction space $\Gamma$. The error of prediction is measured by a loss function $\lambda: \Gamma \times \mathbf{Y} \to \mathbb{R}$, so that $\lambda(\mu_n, y_n)$ is the loss suffered by Predictor on round $n$.
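As a concrete illustration (ours, not the paper’s), the protocol with the quadratic loss function can be sketched in a few lines; the running-mean predictor is just a placeholder strategy:

```python
# Sketch of the online prediction protocol with quadratic loss
# lambda(gamma, y) = (y - gamma)^2.  The predictor below (running mean
# of the past observations) is a placeholder; any prediction strategy
# with the same interface fits.

def play(predictor, rounds):
    """Run the protocol on a list of (side_info, observation) pairs."""
    total_loss = 0.0
    history = []  # the growing situation (x_1, y_1, ..., x_n)
    for x_n, y_n in rounds:
        history.append(x_n)            # Reality announces x_n
        mu_n = predictor(history)      # Predictor announces mu_n
        history.append(y_n)            # Reality announces y_n
        total_loss += (y_n - mu_n) ** 2
    return total_loss

def running_mean_predictor(history):
    """Toy strategy: predict the average of the past observations."""
    ys = history[1::2]  # observations occupy the odd positions
    return sum(ys) / len(ys) if ys else 0.0
```

For instance, with two rounds of observations 1 and 0 (and trivial side information) this toy strategy suffers total loss 2.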
A prediction strategy is a strategy for Predictor in this protocol. More explicitly, each prediction strategy $D$ maps each sequence

(1) $\sigma = (x_1, y_1, \dots, x_{n-1}, y_{n-1}, x_n)$

to a prediction $D(\sigma) \in \Gamma$; we will call the set $\mathbf{S}$ of all such sequences the situation space and its elements situations. We will sometimes use the notation

(2) $\sigma_n := (x_1, y_1, \dots, x_{n-1}, y_{n-1}, x_n),$

where $x_1, x_2, \dots$ and $y_1, y_2, \dots$ are Reality’s moves in the online prediction protocol.
In this section we will always assume that $\mathbf{Y} = [-Y, Y]$ for some $Y > 0$, $\Gamma = \mathbb{R}$, and $\lambda(\gamma, y) = (y - \gamma)^2$; in other words, we will consider the problem of online quadratic-loss regression (with the observations bounded in absolute value by a known constant $Y$).
Asymptotic result
Let $k$ be a nonnegative integer. We say that a prediction strategy $D$ is order $k$ Markov if $D(\sigma_n)$ depends on (2) only via $x_{n-k}, y_{n-k}, \dots, x_{n-1}, y_{n-1}, x_n$. More explicitly, $D$ is order $k$ Markov if and only if there exists a function

$f: (\mathbf{X} \times \mathbf{Y})^k \times \mathbf{X} \to \Gamma$

such that, for all $n > k$ and all (2),

$D(\sigma_n) = f(x_{n-k}, y_{n-k}, \dots, x_{n-1}, y_{n-1}, x_n).$

A limited-memory prediction strategy is a prediction strategy which is order $k$ Markov for some $k$. (The expression “Markov strategy” will be reserved for “order 0 Markov strategy”.)
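In code, the definition can be pictured as follows (a hypothetical sketch; the wrapper and the order 1 example are our own, not the paper’s):

```python
# An "order k Markov" strategy ignores everything in the situation
# except the last k (side information, observation) pairs and the
# current side information x_n.

def order_k_markov(f, k):
    """Wrap a finite-memory rule f into a full prediction strategy.

    f receives (window, x_n), where window holds at most the last k
    (x, y) pairs, oldest first."""
    def strategy(situation):
        # situation = (x_1, y_1, ..., x_{n-1}, y_{n-1}, x_n)
        x_n = situation[-1]
        pairs = list(zip(situation[:-1:2], situation[1::2]))
        return f(pairs[-k:] if k > 0 else [], x_n)
    return strategy

# Order 1 example: predict the previous observation, if there is one.
last_obs = order_k_markov(lambda window, x: window[-1][1] if window else 0.0, 1)
```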
Proposition 1
Let $Y > 0$ and let $\mathbf{X}$ be a metric compact. There exists a strategy for Predictor that guarantees

(3) $\frac{1}{N} \left( \sum_{n=1}^N (y_n - \gamma_n)^2 - \sum_{n=1}^N (y_n - \mu_n)^2 - \sum_{n=1}^N (\gamma_n - \mu_n)^2 \right) \to 0$

as $N \to \infty$, where $\mu_n$ are Predictor’s predictions and $\gamma_n$ are the predictions output by any continuous limited-memory prediction strategy.
The strategy whose existence is asserted by Proposition 1 is a leading strategy in the sense discussed in §1: the average loss of a continuous limited-memory strategy $D$ is determined by how well it manages to imitate the leading strategy. And once we know the predictions made by $D$ and by the leading strategy, we can find the excess loss of $D$ over the leading strategy without needing to know the actual observations.
Leading strategies for reproducing kernel Hilbert spaces
In this subsection we will state a non-asymptotic version of Proposition 1. Since $\Gamma = \mathbb{R}$ is a vector space, the sum of two prediction strategies and the product of a scalar (i.e., real number) and a prediction strategy can be defined pointwise: $(D_1 + D_2)(\sigma) := D_1(\sigma) + D_2(\sigma)$ and $(cD)(\sigma) := c D(\sigma)$.
Let $\mathcal{F}$ be a Hilbert space of prediction strategies (with the pointwise operations of addition and multiplication by scalar). Its embedding constant is defined by

(4) $c_{\mathcal{F}} := \sup_{\sigma \in \mathbf{S}} \; \sup_{D \in \mathcal{F}: \|D\|_{\mathcal{F}} \le 1} |D(\sigma)|.$

We will be interested in the case $c_{\mathcal{F}} < \infty$ and will refer to $\mathcal{F}$ satisfying this condition as reproducing kernel Hilbert spaces (RKHS) with finite embedding constant. (More generally, $\mathcal{F}$ is said to be an RKHS if the internal supremum in (4) is finite for each $\sigma \in \mathbf{S}$.) In our informal discussions we will be assuming that $c_{\mathcal{F}}$ is a moderately large constant.
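For an RKHS with reproducing kernel $\mathbf{k}$, the reproducing property gives $\sup_{D: \|D\| \le 1} |D(\sigma)| = \sqrt{\mathbf{k}(\sigma, \sigma)}$, so the embedding constant is $\sup_\sigma \sqrt{\mathbf{k}(\sigma, \sigma)}$. A numerical sketch (the Gaussian kernel and the finite grid are our own choice of example, not the paper’s):

```python
import math

def gaussian_kernel(s, t, width=1.0):
    """A standard RKHS example; note that k(s, s) = 1 for every s."""
    return math.exp(-((s - t) ** 2) / (2 * width ** 2))

def embedding_constant(kernel, points):
    """sup of sqrt(k(s, s)) over a finite grid of situations."""
    return max(math.sqrt(kernel(s, s)) for s in points)
```

For the Gaussian kernel the diagonal is identically 1, so the embedding constant is 1.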
Proposition 2
Let $Y > 0$, $\Gamma = \mathbb{R}$, and let $\mathcal{F}$ be an RKHS of prediction strategies with finite embedding constant $c_{\mathcal{F}}$. There exists a strategy for Predictor that guarantees

(5) 

for all $N$ and all $D \in \mathcal{F}$, where $\mu_n$ are Predictor’s predictions and $\gamma_n$ are $D$’s predictions.

For a $D$ whose norm is not too large, (5) shows that the excess loss of $D$ over Predictor’s strategy is determined, to within a small error, by the cumulative squared difference $\sum_{n=1}^N (\gamma_n - \mu_n)^2$ between their predictions.
Proposition 1 is obtained by applying Proposition 2 to large (“universal”) RKHS. The details will be given in §6, and here we will only demonstrate this idea with a simple but nontrivial example. Let and be positive integer constants such that . A prediction strategy will be included in if its predictions satisfy
where is a function from the Sobolev space (see, e.g., [2] for the definition and properties of Sobolev spaces); is defined to be the Sobolev norm of . Every continuous function of can be arbitrarily well approximated by functions in , and so is a suitable class of prediction strategies if we believe that neither nor are useful in predicting .
Very large benchmark classes
Some interesting benchmark classes of prediction strategies are too large to be equipped with the structure of an RKHS [35]. However, an analogue of Proposition 2 can also be proved for some Banach spaces $\mathbf{U}$ of prediction strategies (with the pointwise operations of addition and multiplication by scalar) for which the constant defined by (4) is finite. The modulus of convexity of a Banach space $\mathbf{U}$ is defined as the function

$\delta(\epsilon) := \inf \left\{ 1 - \left\| \frac{u + v}{2} \right\| : u, v \in S_{\mathbf{U}}, \|u - v\| = \epsilon \right\}, \qquad \epsilon \in (0, 2],$

where $S_{\mathbf{U}}$ is the unit sphere in $\mathbf{U}$.
The existence of leading strategies (in a somewhat weaker sense than in Proposition 2) is asserted in the following result.
Proposition 3
Let , , and be a Banach space of prediction strategies having a finite embedding constant (see (4)) and satisfying
for some . There exists a strategy for Predictor that guarantees
(6) 
where $\gamma_n$ are $D$’s predictions.
The example of a benchmark class of prediction strategies given after Proposition 2 but with ranging over the Sobolev space , , is covered by this proposition. The parameter describes the “degree of regularity” of the elements of , and taking sufficiently large we can reach arbitrarily irregular functions in the Sobolev hierarchy.
3 Predictions evaluated by Bregman divergences
A predictable process is a function mapping the situation space $\mathbf{S}$ to $\mathbb{R}$. Notice that for any function $g: \Gamma \to \mathbb{R}$ and any prediction strategy $D$ the composition $g \circ D$ (mapping each situation $\sigma$ to $g(D(\sigma))$) is a predictable process; such compositions will be used in Theorems 1–3 below. A Hilbert space of predictable processes (with the usual pointwise operations) is called an RKHS with finite embedding constant if (4) is finite.
The notion of Bregman divergence was introduced in [8], and is now widely used in competitive online prediction (see, e.g., [18, 6, 19, 21, 10]). Suppose $\Gamma \subseteq \mathbb{R}$ (although it would be interesting to extend Theorem 1 to the case where $\mathbb{R}$ is replaced by any Euclidean, or even Hilbert, space). Let $F$ and $f$ be two real-valued functions defined on $\Gamma$. The expression

(7) $d_{F,f}(\gamma, \gamma') := F(\gamma) - F(\gamma') - f(\gamma')(\gamma - \gamma')$

is said to be the corresponding Bregman divergence if $d_{F,f}(\gamma, \gamma') > 0$ whenever $\gamma \ne \gamma'$. (Bregman divergence is usually defined for $\gamma$ and $\gamma'$ ranging over a Euclidean space.) In all our examples $F$ will be a strictly convex continuously differentiable function and $f := F'$ its derivative, in which case we abbreviate $d_{F,f}$ to $d_F$.
We will be using the standard notation

$\|g\|_A := \sup_{\gamma \in A} |g(\gamma)|,$

where $A$ is a subset of the domain of the function $g$.
Theorem 1
Suppose $\Gamma$ is a bounded subset of $\mathbb{R}$. Let $\mathcal{F}$ be an RKHS of predictable processes with finite embedding constant $c_{\mathcal{F}}$ and let $F$ and $f$ be real-valued functions on $\Gamma$. There exists a strategy for Predictor that guarantees, for all prediction strategies $D$ and all $N$,
(8) 
where $\gamma_n$ are $D$’s predictions.
The expression in (8) is interpreted as when ; in this case (8) holds vacuously. Similar conventions will be made in all following statements.
Two of the most important Bregman divergences are obtained from the convex functions $F(t) := t^2$ and $F(t) := t \ln t + (1 - t) \ln(1 - t)$ (negative entropy, defined for $t \in (0, 1)$); they are the quadratic loss function

(9) $d_F(\gamma, \gamma') = (\gamma - \gamma')^2$

and the relative entropy (also known as the Kullback–Leibler divergence)

(10) $d_F(\gamma, \gamma') = \gamma \ln \frac{\gamma}{\gamma'} + (1 - \gamma) \ln \frac{1 - \gamma}{1 - \gamma'},$

respectively. If we apply Theorem 1 to them, (9) leads (assuming $\Gamma = [-Y, Y]$) to a weaker version of Proposition 2, with the right-hand side of (8) twice as large as that of (5), and (10) leads to the following corollary.
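Both computations can be checked numerically from the definition of Bregman divergence, $d_{F,f}(\gamma, \gamma') = F(\gamma) - F(\gamma') - f(\gamma')(\gamma - \gamma')$; the snippet below is our own illustration, not code from the paper:

```python
import math

def bregman(F, f, a, b):
    """d_{F,f}(a, b) with f the derivative of F."""
    return F(a) - F(b) - f(b) * (a - b)

def neg_entropy(t):
    return t * math.log(t) + (1 - t) * math.log(1 - t)

def neg_entropy_deriv(t):
    # the log likelihood ratio ln(t / (1 - t))
    return math.log(t / (1 - t))

def kl(a, b):
    """Relative entropy between Bernoulli(a) and Bernoulli(b)."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))
```

For $F(t) = t^2$ the divergence at $(0.7, 0.2)$ is $(0.7 - 0.2)^2 = 0.25$, and the divergence of the negative entropy coincides with `kl`.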
Corollary 1
Let $\mathbf{Y} = \{0, 1\}$, $\Gamma = (0, 1)$, and the loss function be

$\lambda(\gamma, y) := d_F(y, \gamma)$

(defined in (10)). Let $\mathcal{F}$ be an RKHS of predictable processes with finite embedding constant $c_{\mathcal{F}}$. There exists a strategy for Predictor that guarantees, for all prediction strategies $D$,

where $\gamma_n$ are $D$’s predictions.

The log likelihood ratio appears because $f(\gamma) = \ln \frac{\gamma}{1 - \gamma}$ in this case.
Analogously to Proposition 2, Theorem 1 (as well as Theorems 2–3 in the next section) can easily be generalized to Banach spaces of predictable processes. One can also state asymptotic versions of Theorems 1–3 similar to Proposition 1; and the continuous limited-memory strategies of Proposition 1 could be replaced by the equally interesting classes of continuous stationary strategies (as in [34]) or Markov strategies (possibly discontinuous, as in [33]). We will have to refrain from pursuing these developments in this paper.
4 Predictions evaluated by strictly proper scoring rules
In this section we consider the case where $\mathbf{Y} = \{0, 1\}$ and $\Gamma = [0, 1]$. Every loss function $\lambda: \Gamma \times \mathbf{Y} \to \mathbb{R}$ will be extended to the domain $\Gamma \times [0, 1]$ by the formula

$\lambda(\gamma, p) := p\,\lambda(\gamma, 1) + (1 - p)\,\lambda(\gamma, 0);$

intuitively, $\lambda(\gamma, p)$ is the expected loss of the prediction $\gamma$ when the probability of $y = 1$ is $p$. Let us say that a loss function $\lambda$ is a strictly proper scoring rule if

$\lambda(\gamma, p) > \lambda(p, p) \quad \text{whenever } \gamma \ne p$

(it is optimal to give the prediction equal to the true probability of $y = 1$ when the latter is known and belongs to $\Gamma$). In this case the function

$d_\lambda(\gamma, \gamma') := \lambda(\gamma', \gamma) - \lambda(\gamma, \gamma)$

can serve as a measure of difference between predictions $\gamma$ and $\gamma'$: it is nonnegative and is zero only when $\gamma = \gamma'$. (Cf. [14], §4.)
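As a small numerical check (ours, not the paper’s), the log loss $\lambda(\gamma, 1) = -\ln\gamma$, $\lambda(\gamma, 0) = -\ln(1 - \gamma)$ is strictly proper in this sense: its expected loss under probability $p$ is uniquely minimised at $\gamma = p$:

```python
import math

def log_loss(gamma, y):
    return -math.log(gamma) if y == 1 else -math.log(1 - gamma)

def expected_loss(gamma, p):
    """Extension of the loss to probabilities p of the outcome y = 1."""
    return p * log_loss(gamma, 1) + (1 - p) * log_loss(gamma, 0)

def is_strictly_proper_at(p, grid):
    """Check that gamma = p beats every other prediction on the grid."""
    truth = expected_loss(p, p)
    return all(expected_loss(g, p) > truth for g in grid if g != p)
```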
The exposure of a loss function is defined as
Theorem 2
Let $\mathbf{Y} = \{0, 1\}$, $\Gamma = [0, 1]$, let $\lambda$ be a strictly proper scoring rule, and let $\mathcal{F}$ be an RKHS of predictable processes with finite embedding constant $c_{\mathcal{F}}$. There exists a strategy for Predictor that guarantees, for all prediction strategies $D$ and all $N$,
(11) 
where $\gamma_n$ are $D$’s predictions.
Two popular strictly proper scoring rules are the quadratic loss function $\lambda(\gamma, y) := (y - \gamma)^2$ and the log loss function

$\lambda(\gamma, y) := \begin{cases} -\ln \gamma & \text{if } y = 1, \\ -\ln(1 - \gamma) & \text{if } y = 0. \end{cases}$
Applied to the quadratic loss function, Theorem 2 becomes essentially a special case of Proposition 2. For the log loss function we have , and so we obtain the following corollary.
Corollary 2
Let $\mathbf{Y} = \{0, 1\}$, $\Gamma = [0, 1]$, let $\lambda$ be the log loss function, and let $\mathcal{F}$ be an RKHS of predictable processes with finite embedding constant $c_{\mathcal{F}}$. There exists a strategy for Predictor that guarantees, for all prediction strategies $D$,

where $\gamma_n$ are $D$’s predictions.
A weaker version (with the bound twice as large) of Corollary 2 would be a special case of Corollary 1 were it not for the restriction of the observation space to in the latter. Using methods of [31], it is even possible to get rid of the restriction in Corollary 2. Since the log loss function plays a fundamental role in information theory (the cumulative loss corresponds to the code length), we state this result as our next theorem.
Theorem 3
Let $\mathbf{Y} = \{0, 1\}$, let $\lambda$ be the log loss function, and let $\mathcal{F}$ be an RKHS of predictable processes with finite embedding constant $c_{\mathcal{F}}$. There exists a strategy for Predictor that guarantees, for all prediction strategies $D$,

where $\gamma_n$ are $D$’s predictions.
5 Stochastic Reality and Jeffreys’s law
In this section we revert to the quadratic regression framework of §2 and assume $\mathbf{Y} = [-Y, Y]$ and $\Gamma = \mathbb{R}$. (It will be clear that similar results hold for Bregman divergences and strictly proper scoring rules, but we stick to the simplest case since our main goal in this section is to discuss the related literature.)
Proposition 4
Suppose $\mathbf{Y} = [-Y, Y]$. Let $D$ be a prediction strategy and let the observations be generated as $y_n := D(\sigma_n) + \xi_n$ (remember that the $\sigma_n$ are defined by (2)), where the noise random variables $\xi_n$ have expected value zero given $\sigma_n$. For any other prediction strategy $E$, any $\delta > 0$, and any $N$,

(12) 

with probability at least $1 - \delta$, where $\gamma_n$ are $D$’s predictions and $\gamma_n'$ are $E$’s predictions.
Corollary 3
Suppose $\mathbf{Y} = [-Y, Y]$. Let $\mathcal{F}$ be an RKHS of prediction strategies with finite embedding constant $c_{\mathcal{F}}$, let $L$ be a prediction strategy whose predictions are guaranteed to satisfy (5) (a “leading prediction strategy”), let $D$ be a prediction strategy in $\mathcal{F}$, and let the observations be generated as $y_n := D(\sigma_n) + \xi_n$, where the noise random variables $\xi_n$ have expected value zero given $\sigma_n$. For any $\delta > 0$ and any $N$, the conjunction of

(13) 

and

(14) 

holds with probability at least $1 - \delta$, where $\mu_n$ are $L$’s predictions and $\gamma_n$ are $D$’s predictions.
We can see that if the “true” strategy (in the sense of outputting the true expectations) belongs to the RKHS and its norm is not too large, then not only will the loss of the leading strategy be close to that of the true strategy, but their predictions will be close as well.
Jeffreys’s law
In the rest of this section we will explain the connection of this paper with the phenomenon widely studied in probability theory and the algorithmic theory of randomness and dubbed “Jeffreys’s law” by Dawid [12, 15]. The general statement of “Jeffreys’s law” is that two successful prediction strategies produce similar predictions (cf. [12], §5.2). To better understand this informal statement, we first discuss two notions of success for prediction strategies.
As argued in [37], there are (at least) two very different kinds of predictions, which we will call “S-predictions” and “D-predictions”. Both S-predictions and D-predictions are elements of $\Gamma$ (in our current context), and the prefixes “S” and “D” refer to the way in which we want to evaluate their quality. S-predictions are Statements about Reality’s behaviour, and they are successful if they withstand attempts to falsify them; standard means of falsification are statistical tests (see, e.g., [11], Chapter 3) and gambling strategies ([28]; for a more recent exposition, see [25]). D-predictions do not claim to be falsifiable statements about Reality; they are Decisions, deemed successful if they lead to a good cumulative loss.
As an example, let us consider the two sequences of predictions in Proposition 4. The former are S-predictions; they can be rejected if (12) fails to happen for a small $\delta$ (the complement of (12) can be used as the critical region of a statistical test). The latter are D-predictions: we are only interested in their cumulative loss. If the S-predictions are successful ((12) holds for a moderately small $\delta$) and the D-predictions are successful (in the sense of their cumulative loss being close to the cumulative loss of the successful S-predictions; this is the best that can be achieved as, by (12), the latter cannot be much larger than the former), they will be close to each other. We can see that Proposition 4 implies a “mixed” version of Jeffreys’s law, asserting the proximity of S-predictions and D-predictions.
Similarly, Corollary 3 is also a mixed version of Jeffreys’s law: it asserts the proximity of the S-predictions (which are part of our falsifiable model) and the D-predictions (successful in the sense of leading to a good cumulative loss; cf. (5)).
Proposition 2 immediately implies two “pure” versions of Jeffreys’s law for D-predictions:

if a prediction strategy $D \in \mathcal{F}$ with $\|D\|_{\mathcal{F}}$ not too large performs well, in the sense that its loss is close to the leading strategy’s loss, then $D$’s predictions will be similar to the leading strategy’s predictions; more precisely,

therefore, if two prediction strategies $D_1$ and $D_2$ with $\|D_1\|_{\mathcal{F}}$ and $\|D_2\|_{\mathcal{F}}$ not too large perform well, in the sense that their losses are close to the leading strategy’s loss, their predictions will be similar.
It is interesting that the leading strategy can be replaced by a master strategy in the second version: if $D_1$ and $D_2$ gave very different predictions and both performed almost as well as the master strategy, the mixed strategy $(D_1 + D_2)/2$ would beat the master strategy; this immediately follows from

$\frac{(\gamma_1 - y)^2 + (\gamma_2 - y)^2}{2} - \left( \frac{\gamma_1 + \gamma_2}{2} - y \right)^2 = \frac{(\gamma_1 - \gamma_2)^2}{4},$

where $\gamma_1$ and $\gamma_2$ are $D_1$’s and $D_2$’s predictions, respectively, and $y$ is the observation.
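The gain from mixing two predictions is easy to verify numerically; this small check is our own illustration:

```python
def quadratic_gain(g1, g2, y):
    """Average loss of two predictions minus the loss of their mixture.

    For the quadratic loss this always equals (g1 - g2)^2 / 4,
    whatever the observation y."""
    avg_loss = ((g1 - y) ** 2 + (g2 - y) ** 2) / 2
    mixed_loss = ((g1 + g2) / 2 - y) ** 2
    return avg_loss - mixed_loss
```

In particular, the gain is zero exactly when the two predictions coincide.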
The usual versions of Jeffreys’s law are, however, statements about S-predictions. The quality of S-predictions is often evaluated using universal statistical tests (as formalized by Martin-Löf [23]) or universal gambling strategies (Levin [22], Schnorr [24]). For example, Theorem 7.1 of [13] and Theorem 3 of [29] state that if two computable S-prediction strategies are both successful, their predictions will asymptotically agree. Earlier, somewhat less intuitive, statements of Jeffreys’s law were given in terms of absolute continuity of probability measures: see, e.g., [7] and [20]. Solomonoff [26] proved a version of Jeffreys’s law that holds “on average” (rather than for individual sequences).
This paper is, to my knowledge, the first to state a version of Jeffreys’s law for D-predictions (although a step in this direction was made in Theorem 8 of [30]).
6 Proofs
In this section we prove, or give proof sketches of, Propositions 1–4 and Theorems 1–3. Proposition 2 is a special case of Theorem 1, but its proof is more intuitive and we give it separately (proving Proposition 3 along the way).
Proof of Propositions 2 and 3
Noticing that

(15) $(y - \gamma)^2 - (y - \mu)^2 - (\gamma - \mu)^2 = 2 (\mu - \gamma)(y - \mu),$
we can use the results of [36], §6, asserting the existence of a prediction strategy producing predictions $\mu_n$ that satisfy
(16) 
(see (24) in [36]; this is a special case of good calibration) and
(17) 
(see (25) in [36]; this is a special case of good resolution).
Replacing (16) and (17) with the corresponding statements for Banach function spaces ([35], (52) and (53)) we obtain the proof of Proposition 3.

In [36] we considered only prediction strategies for which $D(\sigma_n)$ depends on (2) only via $x_n$; in the terminology of this paper these are (order 0) Markov strategies. It is easy to see that considering only Markov strategies does not lead to a loss of generality: if we redefine the object $x_n$ as the whole situation $\sigma_n$, any prediction strategy becomes a Markov prediction strategy.
Proof of Proposition 1
Proposition 1 will follow from the following lemma, proved (without stating it explicitly) in [27] (proof of Theorem 2).
Lemma 1 ([27])
Let $A$ be a separable set of bounded real-valued functions. There exists an RKHS $\mathcal{F}$ with finite embedding constant such that every element of $A$ can be approximated arbitrarily well, in the uniform metric, by elements of $\mathcal{F}$.

Let $f_1, f_2, \dots$ be a dense (in the uniform metric) sequence of elements of $A$. Set
for ,
and let $\mathcal{F}$ be the unique RKHS with reproducing kernel $\mathbf{k}$ (see the Moore–Aronszajn theorem in [3], Theorem 2). It is clear that $c_{\mathcal{F}}$ is finite. By Lemma 2 below, each $f_i$ belongs to $\mathcal{F}$ since it can be represented as
where $w$ consists of all 0s except a 1 at the $i$th position. Therefore, $\mathcal{F}$ is dense in $A$.
The following lemma was used in the proof.
Lemma 2
Let $\Phi: Z \to \mathbf{H}$, where $\mathbf{H}$ is a Hilbert space. The RKHS corresponding to the reproducing kernel $\mathbf{k}(z, z') := \langle \Phi(z), \Phi(z') \rangle$ consists of all functions $f_w: z \mapsto \langle w, \Phi(z) \rangle$, $w \in \mathbf{H}$, with the inner product of $f_w$ and $f_{w'}$ equal to $\langle P w, P w' \rangle$, $P$ standing for the projection onto the closed span of $\Phi(Z)$.

By the Moore–Aronszajn theorem ([3], Theorem 2) there is a unique RKHS with reproducing kernel $\mathbf{k}$, so we only need to check that the function space defined in the statement of the lemma is an RKHS with $\mathbf{k}$ as reproducing kernel.
First we need to check that the inner product is well defined. This follows from the obvious fact that the equality of the functions $f_w$ and $f_{w'}$ for $w, w' \in \mathbf{H}$ implies $P w = P w'$. The continuity of each evaluation functional is also obvious.
The representer of $z \in Z$ is $f_{\Phi(z)}$ (in the sense that $f_w(z) = \langle f_w, f_{\Phi(z)} \rangle$ for each $w \in \mathbf{H}$), and so the reproducing kernel of this space indeed coincides with $\mathbf{k}$.
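A finite-dimensional toy example (our own, with an arbitrary feature map into $\mathbb{R}^2$) illustrates the lemma: the kernel is the inner product of features, and the representer of a point reproduces evaluation:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def phi(z):
    """An arbitrary feature map from R into R^2."""
    return (1.0, z)

def kernel(z, zp):
    """k(z, z') = <Phi(z), Phi(z')>."""
    return dot(phi(z), phi(zp))

def f(w, z):
    """The function f_w of the lemma: f_w(z) = <w, Phi(z)>."""
    return dot(w, phi(z))
```

In particular $f_{\Phi(z)}(z') = \mathbf{k}(z, z')$, which is the reproducing property in this toy setting.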
Now we can easily deduce Proposition 1 from Proposition 2. The set of all continuous order $k$ Markov strategies is a separable set in the Banach space of continuous prediction strategies with the sup metric (by [17], Corollary 4.2.18). Therefore, the set of all continuous limited-memory strategies is also separable in the sup metric.
Let $\mathcal{F}$ be the RKHS whose existence is asserted by Lemma 1; we will see that any strategy for Predictor satisfying (5) will satisfy (3) with the $\gamma_n$ output by a limited-memory strategy $D$. Indeed, for any $\epsilon > 0$ we can find $D' \in \mathcal{F}$ that is close in the sup metric to $D$. If $\gamma_n$ are $D$’s predictions and $\mu_n$ are Predictor’s predictions, (5) implies that
from some $N$ on. Since $\epsilon$ can be taken arbitrarily small, we have (3).
Proof of Theorem 1
The proof is based on the generalized law of cosines

(18) $d_F(y, \gamma) = d_F(y, \gamma') + d_F(\gamma', \gamma) + \bigl( f(\gamma') - f(\gamma) \bigr)(y - \gamma')$

(which follows directly from the definition (7)). From (18) we deduce
(19) $\sum_{n=1}^N d_F(y_n, \gamma_n) - \sum_{n=1}^N d_F(y_n, \mu_n) - \sum_{n=1}^N d_F(\mu_n, \gamma_n) = \sum_{n=1}^N \bigl( f(\mu_n) - f(\gamma_n) \bigr)(y_n - \mu_n).$
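The three-point identity $d(a, c) = d(a, b) + d(b, c) + (f(b) - f(c))(a - b)$, which is the standard form of the generalized law of cosines for Bregman divergences, can be checked numerically (our own sketch, not code from the paper):

```python
import math

def bregman(F, f, a, b):
    """d_{F,f}(a, b) = F(a) - F(b) - f(b) * (a - b)."""
    return F(a) - F(b) - f(b) * (a - b)

def three_point_gap(F, f, a, b, c):
    """Residual of the three-point identity; it vanishes identically."""
    d = lambda u, v: bregman(F, f, u, v)
    return d(a, c) - d(a, b) - d(b, c) - (f(b) - f(c)) * (a - b)

# Binary negative entropy and its derivative, as a second example.
neg_ent = lambda t: t * math.log(t) + (1 - t) * math.log(1 - t)
neg_ent_deriv = lambda t: math.log(t / (1 - t))
```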
From Theorem 3 in [38] we can see that there is a prediction strategy guaranteeing
(20) 
and from Theorem 4 in [38] we can see that there is a prediction strategy guaranteeing
(21) 
We need, however, a single strategy guaranteeing some versions of (20) and (21). Such a strategy can be obtained by merging a strategy guaranteeing (20) and a strategy guaranteeing (21) (as in [36], Corollaries 3 and 4).
Setting
(22) 
so that , and letting be output by the K29 algorithm based on (22), we obtain
(23) 
from Theorem 3 of [38], and we obtain
(24) 
from the proof of Theorem 4 and from Theorem 3 of [38].

As we mentioned earlier, the leading constant in the bound of Theorem 1 (and its corollary) is worse than those in the other results of this paper, in the intersection of their domains of application. The explanation is that Theorem 1 is based on the K29 algorithm, whereas all the other results are based on the more sophisticated K29$^*$ algorithm.
Proof sketch of Theorem 2
The proof is similar to that of Theorem 1, with the role of the generalized law of cosines (18) played by the equation
(25) 
for some $a$ and $b$. Since $y$ can take only two possible values, suitable $a$ and $b$ are easy to find: it suffices to solve the linear system
Subtracting these equations we obtain (abbreviating to ), which in turn gives . Therefore, (25) gives
(26) 
There are prediction strategies that guarantee
(27) 
(cf. [31], Theorem 2) and there are prediction strategies that guarantee
(28) 
(cf. [31], Theorem 3); merging such strategies as in [36], Corollaries 3 and 4, we can easily obtain (11) from (26), (27), and (28).
Proof sketch of Theorem 3
It is shown in [31] that there is a prediction strategy guaranteeing
(29) 
and
(30) 
(see (21), (22), and the subsection “Proof: Part II” in [31], the technical report), where is the reproducing kernel of . Comparing (29) and (30) with (26), we can see that Theorem 3 will follow from
which in turn will follow from
It remains to notice that and to calculate
Proof of Proposition 4
7 Conclusion
The existence of master strategies (strategies whose loss is less than or close to the loss of any strategy with not too large a norm) can be shown for a very wide class of loss functions. By contrast, leading strategies appear to exist only for a rather narrow class of loss functions. It would be very interesting to delineate the class of loss functions for which a leading strategy does exist. In particular, does this class contain any loss functions besides Bregman divergences and strictly proper scoring rules?
Even if a leading strategy does not exist, one might look for a strategy $L$ such that the loss of any strategy $D$ whose norm is not too large lies between the loss of $L$ plus some measure of difference between $L$’s and $D$’s predictions and the loss of $L$ plus another measure of difference between $L$’s and $D$’s predictions.
Acknowledgments
I am grateful to the anonymous referees of the conference version of this paper for their comments. This work was partially supported by MRC (grant S505/65).
References
 [1] Robert A. Adams. Sobolev Spaces, volume 65 of Pure and Applied Mathematics. Academic Press, New York, first edition, 1975.
 [2] Robert A. Adams and John J. F. Fournier. Sobolev Spaces, volume 140 of Pure and Applied Mathematics. Academic Press, Amsterdam, second edition, 2003. This new edition is not a superset of [1]: some less important material is deleted.
 [3] Nachman Aronszajn. La théorie générale des noyaux reproduisants et ses applications, première partie. Proceedings of the Cambridge Philosophical Society, 39:133–153 (additional note: p. 205), 1943. The second part of this paper is [4].
 [4] Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337–404, 1950.
 [5] Peter Auer, Nicolò Cesa-Bianchi, and Claudio Gentile. Adaptive and self-confident online learning algorithms. Journal of Computer and System Sciences, 64:48–75, 2002.
 [6] Katy S. Azoury and Manfred K. Warmuth. Relative loss bounds for online density estimation with the exponential family of distributions. Machine Learning, 43:211–246, 2001.
 [7] David Blackwell and Lester Dubins. Merging of opinions with increasing information. Annals of Mathematical Statistics, 33:882–886, 1962.
 [8] Lev M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Physics, 7:200–217, 1967.
 [9] Nicolò Cesa-Bianchi, Philip M. Long, and Manfred K. Warmuth. Worst-case quadratic loss bounds for online prediction of linear functions by gradient descent. IEEE Transactions on Neural Networks, 7:604–619, 1996.
 [10] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, Cambridge, 2006.
 [11] David R. Cox and David V. Hinkley. Theoretical Statistics. Chapman and Hall, London, 1974.
 [12] A. Philip Dawid. Statistical theory: the prequential approach. Journal of the Royal Statistical Society A, 147:278–292, 1984.
 [13] A. Philip Dawid. Calibration-based empirical probability (with discussion). Annals of Statistics, 13:1251–1285, 1985.
 [14] A. Philip Dawid. Proper measures of discrepancy, uncertainty and dependence, with applications to predictive experimental design. Technical Report 139, Department of Statistical Science, University College London, November 1994. This technical report was revised (and its title was slightly changed) in August 1998.
 [15] A. Philip Dawid. Probability, causality and the empirical world: a Bayes–de Finetti–Popper–Borel synthesis. Statistical Science, 19:44–57, 2004.
 [16] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31 of Applications of Mathematics. Springer, New York, 1996.
 [17] Ryszard Engelking. General Topology, volume 6 of Sigma Series in Pure Mathematics. Heldermann, Berlin, second edition, 1989.
 [18] David P. Helmbold, Jyrki Kivinen, and Manfred K. Warmuth. Relative loss bounds for single neurons. IEEE Transactions on Neural Networks, 10:1291–1304, 1999.
 [19] Mark Herbster and Manfred K. Warmuth. Tracking the best linear predictor. Journal of Machine Learning Research, 1:281–309, 2001.
 [20] Yury M. Kabanov, Robert Sh. Liptser, and Albert N. Shiryaev. To the question of absolute continuity and singularity of probability measures (in Russian). Matematicheskii Sbornik, 104:227–247, 1977.
 [21] Jyrki Kivinen and Manfred K. Warmuth. Relative loss bounds for multidimensional regression problems. Machine Learning, 45:301–329, 2001.
 [22] Leonid A. Levin. On the notion of a random sequence. Soviet Mathematics Doklady, 14:1413–1416, 1973.
 [23] Per Martin-Löf. The definition of random sequences. Information and Control, 9:602–619, 1966.
 [24] Claus P. Schnorr. Zufälligkeit und Wahrscheinlichkeit. Springer, Berlin, 1971.
 [25] Glenn Shafer and Vladimir Vovk. Probability and Finance: It’s Only a Game! Wiley, New York, 2001.
 [26] Ray J. Solomonoff. Complexity-based induction systems: comparisons and convergence theorems. IEEE Transactions on Information Theory, IT-24:422–432, 1978.
 [27] Ingo Steinwart, Don Hush, and Clint Scovel. Function classes that approximate the Bayes risk. In Gábor Lugosi and Hans Ulrich Simon, editors, Proceedings of the Nineteenth Annual Conference on Learning Theory, volume 4005 of Lecture Notes in Artificial Intelligence, pages 79–93, Berlin, 2006. Springer.
 [28] Jean Ville. Étude critique de la notion de collectif. Gauthier-Villars, Paris, 1939.
 [29] Vladimir Vovk. On a randomness criterion. Soviet Mathematics Doklady, 35:656–660, 1987.
 [30] Vladimir Vovk. Probability theory for the Brier game. Theoretical Computer Science, 261:57–79, 2001.
 [31] Vladimir Vovk. Competitive online learning with a convex loss function. Technical Report arXiv:cs.LG/0506041 (version 3), arXiv.org ePrint archive, September 2005.
 [32] Vladimir Vovk. Nonasymptotic calibration and resolution. Technical Report arXiv:cs.LG/0506004 (version 3), arXiv.org ePrint archive, August 2005.
 [33] Vladimir Vovk. Competing with Markov prediction strategies. Technical report, arXiv.org ePrint archive, July 2006.
 [34] Vladimir Vovk. Competing with stationary prediction strategies. Technical Report arXiv:cs.LG/0607067, arXiv.org ePrint archive, July 2006.
 [35] Vladimir Vovk. Competing with wild prediction rules. Technical Report arXiv:cs.LG/0512059 (version 2), arXiv.org ePrint archive, January 2006.
 [36] Vladimir Vovk. Online regression competitive with reproducing kernel Hilbert spaces. Technical Report arXiv:cs.LG/0511058 (version 2), arXiv.org ePrint archive, January 2006.
 [37] Vladimir Vovk. Predictions as statements and decisions. Technical Report arXiv:cs.LG/0606093, arXiv.org ePrint archive, June 2006.
 [38] Vladimir Vovk, Ilia Nouretdinov, Akimichi Takemura, and Glenn Shafer. Defensive forecasting for linear protocols. Technical Report arXiv:cs.LG/0506007 (version 2), arXiv.org ePrint archive, September 2005.
 [39] Vladimir Vovk, Akimichi Takemura, and Glenn Shafer. Defensive forecasting. Technical Report arXiv:cs.LG/0505083, arXiv.org ePrint archive, May 2005.