In mathematics, effective dimension is a modification of Hausdorff dimension and other fractal dimensions that places them in a computability-theoretic setting. There are several variations (various notions of effective dimension), of which the most common is effective Hausdorff dimension.

Dimension, in mathematics, is a particular way of describing the size of an object (contrasting with measure and other, different, notions of size). Hausdorff dimension generalizes the well-known integer dimensions assigned to points, lines, planes, etc., by allowing one to distinguish between objects of intermediate size between these integer-dimensional objects. For example, fractal subsets of the plane may have intermediate dimension between 1 and 2, as they are "larger" than lines or curves yet "smaller" than filled circles or rectangles.

Effective dimension modifies Hausdorff dimension by requiring that objects with small effective dimension be not only small but also locatable (or partially locatable) in a computable sense. As such, objects with large Hausdorff dimension also have large effective dimension, and objects with small effective dimension have small Hausdorff dimension, but an object can have small Hausdorff dimension yet large effective dimension. An example is an algorithmically random point on a line, which has Hausdorff dimension 0 (since it is a point) but effective dimension 1 (because, roughly speaking, it cannot be effectively localized any better than to a small interval, which has Hausdorff dimension 1).
Rigorous definitions
This article will define effective dimension for subsets of Cantor space 2^ω; closely related definitions exist for subsets of Euclidean space R^n. We will move freely between considering a set X of natural numbers, the infinite sequence given by the characteristic function of X, and the real number with binary expansion 0.X.
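For example, the set of even numbers X = {0, 2, 4, …} corresponds to the characteristic sequence 101010… and to the real number with binary expansion 0.101010…, which equals 2/3.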
Martingales and other gales
A martingale on Cantor space 2^ω is a function d: 2^{<ω} → [0, ∞) from finite binary strings to nonnegative reals which satisfies the fairness condition:

d(σ) = (d(σ0) + d(σ1)) / 2 for all finite binary strings σ.
A martingale is thought of as a betting strategy, and the function gives the capital of the bettor after seeing a sequence σ of 0s and 1s. The fairness condition then says that the capital after a sequence σ is the average of the capital after seeing σ0 and σ1; in other words, the martingale gives a betting scheme for a bookie with 2:1 odds offered on either of two "equally likely" options, hence the name fair.
(Note that this is subtly different from the probability theory notion of martingale.[1] That definition of martingale has a similar fairness condition, which also states that the expected value after some observation is the same as the value before the observation, given the prior history of observations. The difference is that in probability theory, the prior history of observations just refers to the capital history, whereas here the history refers to the exact sequence of 0s and 1s in the string.)
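As a concrete illustration (a sketch not drawn from the cited references), the following Python snippet implements a simple martingale that always wagers a fixed fraction of its capital on the next bit being 0, and checks the fairness condition numerically; all names in it are illustrative only.

```python
# Illustrative sketch only: a martingale that always bets a fixed
# fraction of its current capital on the next bit being 0.

BET_FRACTION = 0.5  # fraction of capital wagered on "the next bit is 0"

def martingale(sigma: str) -> float:
    """Capital after seeing the finite binary string sigma, starting from 1."""
    capital = 1.0
    for bit in sigma:
        stake = BET_FRACTION * capital
        capital += stake if bit == "0" else -stake  # win or lose the stake
    return capital

def is_fair(sigma: str) -> bool:
    """Check d(sigma) == (d(sigma0) + d(sigma1)) / 2."""
    return abs(martingale(sigma)
               - (martingale(sigma + "0") + martingale(sigma + "1")) / 2) < 1e-12

if __name__ == "__main__":
    for s in ["", "0", "01", "0010"]:
        print(repr(s), martingale(s), is_fair(s))
```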
A supermartingale on Cantor space is a function d as above which satisfies a modified fairness condition:

d(σ) ≥ (d(σ0) + d(σ1)) / 2 for all finite binary strings σ.
A supermartingale is a betting strategy where the expected capital after a bet is no more than the capital before the bet, in contrast to a martingale, where the two are always equal. This allows more flexibility and makes little difference in the non-effective case, since whenever a supermartingale d is given, there is a modified function d′ which wins at least as much money as d and which is actually a martingale. However, the additional flexibility is useful once one starts talking about actually giving algorithms to determine the betting strategy, as some algorithms lend themselves more naturally to producing supermartingales than martingales.
An s-gale is a function d as above of the form

d(σ) = e(σ) · 2^{(s−1)|σ|}

for e some martingale, where |σ| denotes the length of the string σ.
An s-supergale is a function d as above of the form

d(σ) = e(σ) · 2^{(s−1)|σ|}

for e some supermartingale.
An s-(super)gale is a betting strategy where some amount of capital is lost to inflation at each step. Note that for s ≤ 1, s-gales and s-supergales are examples of supermartingales, and the 1-gales and 1-supergales are precisely the martingales and supermartingales.
Collectively, these objects are known as "gales".
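In the same illustrative spirit as the earlier sketch, an s-gale can be obtained from a martingale by applying the inflation factor 2^{(s−1)|σ|}; for s < 1 this factor shrinks the capital at each step. The Python sketch below is self-contained but hypothetical: the martingale always_bet_zero and the helper s_gale are illustrative names, not constructions from the references.

```python
# Illustrative sketch: an s-gale obtained from a martingale e via the
# inflation factor 2**((s - 1) * |sigma|).

def always_bet_zero(sigma: str) -> float:
    """A martingale: wager half the capital on 0 at every step (capital starts at 1)."""
    capital = 1.0
    for bit in sigma:
        capital *= 1.5 if bit == "0" else 0.5
    return capital

def s_gale(sigma: str, s: float, e=always_bet_zero) -> float:
    """Value of the s-gale 2^((s-1)|sigma|) * e(sigma)."""
    return 2 ** ((s - 1) * len(sigma)) * e(sigma)

if __name__ == "__main__":
    # Even when every bet is won (an all-zeros prefix), the inflation factor
    # eats into the winnings; for small enough s the capital actually shrinks.
    for n in (5, 10, 20):
        print(n, always_bet_zero("0" * n), s_gale("0" * n, 0.75), s_gale("0" * n, 0.25))
```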
A gale d succeeds on a subset X of the natural numbers if lim sup_{n→∞} d(X↾n) = ∞, where X↾n denotes the n-digit string consisting of the first n digits of X.
A gale d succeeds strongly on X if lim inf_{n→∞} d(X↾n) = ∞.
As stated, these notions of gales have no effective content, and one must necessarily restrict oneself to a small class of gales, since for any given set some gale can be found which succeeds on it. After all, if one knows a sequence of coin flips in advance, it is easy to make money by simply betting on the known outcome of each flip. A standard way of doing this is to require the gales to be either computable or close to computable:
A gale d is called constructive, c.e., or lower semi-computable if the values d(σ) are uniformly left-c.e. reals (i.e. each can be written, uniformly in σ, as the limit of an increasing computable sequence of rationals).
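To give a sense of what "uniformly left-c.e." means, here is a toy Python sketch (an illustration only, not a construction from the references) that approximates a real number from below by an increasing computable sequence of rationals; the values d(σ) of a constructive gale are required to admit such approximations uniformly in σ. The particular real and the function name approximation are chosen purely for illustration.

```python
from fractions import Fraction

# Toy illustration: a left-c.e. real is the limit of an increasing computable
# sequence of rationals.  Here the real is simply sum_{n >= 1} 2^(-n^2); a
# constructive gale would enumerate such approximations for every d(sigma).

def approximation(stage: int) -> Fraction:
    """The first `stage` partial sums form an increasing sequence of rationals."""
    return sum(Fraction(1, 2 ** (n * n)) for n in range(1, stage + 1))

for t in range(1, 6):
    print(t, float(approximation(t)))
```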
The effective Hausdorff dimension of a set of natural numbers X is inf { s ≥ 0 : there is a constructive s-gale which succeeds on X }.[2]
The effective packing dimension of X is inf { s ≥ 0 : there is a constructive s-gale which succeeds strongly on X }.[3]
Kolmogorov complexity definition
Kolmogorov complexity can be thought of as a lower bound on the algorithmic compressibility of a finite sequence (of characters or binary digits). It assigns to each such sequence w a natural number K(w) that, intuitively, measures the minimum length of a computer program (written in some fixed programming language) that takes no input and will output w when run.
The effective Hausdorff dimension of a set of natural numbers X is lim inf_{n→∞} K(X↾n) / n.[4][5][6][7]
The effective packing dimension of a set X is lim sup_{n→∞} K(X↾n) / n.[3][4][5]
From this one can see that both the effective Hausdorff dimension and the effective packing dimension of a set are between 0 and 1, with the effective packing dimension always at least as large as the effective Hausdorff dimension. Every (Martin-Löf) random sequence has effective Hausdorff and packing dimensions equal to 1, although there are also nonrandom sequences with effective Hausdorff and packing dimensions of 1.
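Because K is not computable, the effective dimension of a concrete sequence cannot be computed directly; however, any lossless compressor gives a computable upper bound on the complexity of prefixes. The Python sketch below uses zlib in this heuristic spirit. It is only an illustration of the formulas above, not a definition; the sequences it uses are pseudorandom, and the function name compression_ratio is illustrative.

```python
import random
import zlib

# Heuristic illustration: bound K(X↾n)/n from above by the compressed size
# (in bits) of the first n digits, stored one character per digit.
# A sequence of fair coin flips should give a ratio near 1, while a "diluted"
# sequence with every other digit forced to 0 should give a noticeably
# smaller ratio (its effective dimension is intuitively about 1/2).

def compression_ratio(bits: str) -> float:
    """Compressed length in bits of the prefix, divided by the number of digits."""
    return 8 * len(zlib.compress(bits.encode("ascii"), 9)) / len(bits)

random.seed(0)
n = 1 << 16
random_bits = "".join(random.choice("01") for _ in range(n))
diluted_bits = "".join(b if i % 2 == 0 else "0" for i, b in enumerate(random_bits))

print("random :", compression_ratio(random_bits))
print("diluted:", compression_ratio(diluted_bits))
```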
Comparison to classical dimension
If Z is a subset of 2^ω, its Hausdorff dimension is inf { s ≥ 0 : there exists an s-gale d which succeeds on every element of Z }.[2]
The packing dimension of Z is inf { s ≥ 0 : there exists an s-gale d which succeeds strongly on every element of Z }.[3]
Thus the effective Hausdorff and packing dimensions of a set X are simply the classical Hausdorff and packing dimensions of {X} (respectively) when we restrict our attention to c.e. gales.
Define the following for α ∈ [0, 1]:

DIM^α = { X ∈ 2^ω : the effective Hausdorff dimension of X is α },

DIM_str^α = { X ∈ 2^ω : the effective packing dimension of X is α }.

A consequence of the above is that these all have Hausdorff dimension α.[8]

The classes DIM^α all have packing dimension 1.

The classes DIM_str^α all have packing dimension α.
References
- [1] John M. Hitchcock; Jack H. Lutz (2006). "Why computational complexity requires stricter martingales". Theory of Computing Systems.
- [2] Jack H. Lutz (2003). "Dimension in complexity classes". SIAM Journal on Computing. 32 (5): 1236–1259. arXiv:cs/0203016. doi:10.1137/s0097539701417723.
- [3] Krishna B. Athreya; John M. Hitchcock; Jack H. Lutz; Elvira Mayordomo (2007). "Effective strong dimension in algorithmic information and computational complexity". SIAM Journal on Computing. 37 (3): 671–705. arXiv:cs/0211025. doi:10.1137/s0097539703446912.
- [4] Jin-yi Cai; Juris Hartmanis (1994). "On Hausdorff and topological dimensions of the Kolmogorov complexity of the real line". Journal of Computer and System Sciences. 49 (3): 605–619. doi:10.1016/S0022-0000(05)80073-X.
- [5] Ludwig Staiger (1993). "Kolmogorov complexity and Hausdorff dimension". Information and Computation. 103 (2): 159–194. doi:10.1006/inco.1993.1017.
- [6] Elvira Mayordomo (2002). "A Kolmogorov complexity characterization of constructive Hausdorff dimension". Information Processing Letters. 84: 1–3. doi:10.1016/S0020-0190(02)00343-5.
- [7] Ludwig Staiger (2005). "Constructive dimension equals Kolmogorov complexity". Information Processing Letters. 93 (3): 149–153. doi:10.1016/j.ipl.2004.09.023.
- [8] Boris Ryabko (1994). "Coding of combinatorial sources and Hausdorff dimension". Soviet Mathematics – Doklady.
- J. H. Lutz (2005). "Effective fractal dimensions". Mathematical Logic Quarterly. 51 (1): 62–72. doi:10.1002/malq.200310127.
- J. Reimann (2004). Computability and Fractal Dimension. PhD thesis, Ruprecht-Karls-Universität Heidelberg.
- L. Staiger (2007). "The Kolmogorov complexity of infinite words". Theoretical Computer Science. 383 (2–3): 187–199. doi:10.1016/j.tcs.2007.04.013.