The implications are proper (1) ) Martin-L¨ of random 6( computably random A set C is called high if ; 00 T C 0 . Equivalently, C computes a function that dominates each computable function (Martin, 1966). Theorem (N., Stephan, Terwijn, 2005) Every high set C Turing computes a set Z that is computably random. I Let h L e i e 2 N be a list of all partial computable martingales, I Define Z so that the martingale L = P e 2 � e L e is bounded along Z . I Use highness of C to deal with partiality. 19/1
The implications are proper (2) On the other hand, if a computably enumerable set C is Turing above a random, then C is Turing equivalent to the halting problem ; 0 by the “Arslanov Completeness Critierion”. There is a high computably enumerable set C < T ; 0 . Therefore Martin-L¨ of random 6( computably random Another way to separate the ML and computable randomness: use the (prefix-free) Kolmogorov complexity of the initial segments. For ML-random Z we have K ( Z � n ) + O (1) � n . There is a computably random Y such that K ( Y � n ) = O (log n ). 20/1
The implications are proper (3) ) computably random 6( Schnorr random. I First proved by Yongge Wang. I It is shown by a direct construction (see e.g. N’s book “Computability and Randomness”, Ch. 7). I N., Stephan, Terwijn, 2005 separate the two notions in each high degree. Note that any separation has to occur within the high degrees: Theorem (N., Stephan, Terwijn, 2005) If Z is not high and Schnorr random, then Z is ML-random. 21/1
3. Effective versions of almost everywhere theorems
Effective almost everywhere theorems and randomness

The "almost everywhere" theorems didn't tell us whether the given object is well-behaved at a particular real. Now consider the case where the given object is algorithmic in some sense.
- How strong an algorithmic randomness notion for a real z is needed to make the theorem hold at z?
- Will the theorem in fact characterize the randomness notion?

Once this is settled, we can provide "concrete" examples of reals at which the nice behaviour occurs. For instance, Chaitin's Ω is ML-random.
Continuing the story of effective a.e. theorems, after Bishop (1967) and Demuth (1975)

Recall Birkhoff's 1939 theorem: Let (X, µ, T) be a measure preserving system, and let f : X → ℝ be measurable. For µ-almost every x, the limit as N → ∞ of the averages of f(T^i(x)) over 0 ≤ i < N exists.
- V'yugin, 1999 (TCS) shows that ML-randomness suffices for the effective Birkhoff theorem. (Note that T :⊆ X → X only needs to be defined µ-a.e.)
- He uses Bishop, Thm. 6 on page 236, which is closely related to his result on BV (Thm. 7).
- Hoyrup, Rojas, Galatolo 2010-13 develop effective ergodic theory.
Schnorr randomness and L¹-computability

Pathak (2009) and Pathak, Rojas, and Simpson (2012) proved an effective version of another of Lebesgue's theorems (but taking into account only the existence of limits, not the value):

  z ∈ [0,1]^d is Schnorr random ⟺ for every L¹-computable function g : [0,1]^d → ℝ,
  lim_{r→0+} (1/λ(B_r(z))) ∫_{B_r(z)} g exists.

The implication ⇐ is also due to Freer, Kjos-Hanssen, N., Stephan.
Effective form of the first Lebesgue theorem

A function f : [a,b] → ℝ is of bounded variation if

  V(f) = sup Σ_{i=1}^{n-1} |f(t_{i+1}) - f(t_i)| < ∞,

the sup taken over all collections t_1 ≤ t_2 ≤ ... ≤ t_n in [a,b].

Theorem (Brattka, Miller, N.; to appear in TAMS). Let f : [0,1] → ℝ be non-decreasing and computable. Then: z is computably random ⇒ f'(z) exists.
- Under the weaker hypothesis that f has bounded variation, f'(z) exists for each Martin-Löf random real z, but not necessarily for each computably random real (Demuth, 1975; Brattka, Miller, N., ta).
- There is some depth here that doesn't show in classical analysis: the Jordan decomposition f = g_0 - g_1 for nondecreasing g_i is not effective!
Functions-to-tests

To prove the first result: if f is computable and nondecreasing, we (uniformly in f) build a computable martingale M such that

  f'(z) fails to exist ⇒ M succeeds on z.

(I will give details later when I do the polynomial time computable case.)

Corollary. Each computable nondecreasing function f is differentiable at a (uniformly obtained) computable real.

Proof: Each computable martingale fails on some computable real, which can be obtained uniformly.
Converses (tests-to-functions)
- Both the nondecreasing and the bounded variation cases also have converses: if z is not random in the appropriate sense, then some computable function of the respective type fails to be differentiable at z (BMN, to appear).
- So one could take the differentiability properties for classes of effective functions as definitions of randomness notions!

  z is computably random ⟺ each computable nondecreasing function is differentiable at z.
  z is Martin-Löf random ⟺ each computable function of bounded variation is differentiable at z.
A new proof of Demuth's result

Here is the proof of Brattka/Miller/N. (TAMS, ta) of the result of Demuth on BV functions. We get a stronger form: Let r be a Martin-Löf random real. Suppose f is uniformly computable on the rationals, and f is of bounded variation. Then f'(r) exists.
- By Jordan's result, f = h_0 - h_1 for some nondecreasing functions h_0, h_1.
- One can show that r is Martin-Löf random (hence computably random) relative to some oracle set X encoding such a pair h_0, h_1.
- By the previous theorem, relativized to X, the h_i are both differentiable at r. Thus f'(r) = h_0'(r) - h_1'(r) exists. □
The strength of the Jordan decomposition theorem
- Note that the pairs h_0, h_1 with f = h_0 - h_1 (not necessarily continuous) can be seen as a Π⁰₁ class P.
- We obtain the decomposition because r is random relative to some member of P (the "low for r" basis theorem).

Results by Greenberg, N., Yokoyama, Slaman (see the 2013 Logic Blog and an upcoming project report by Marcus Triplett) show:
- Jordan decomposition of any continuous BV f into continuous functions h_0, h_1 is equivalent to ACA over RCA.
- Jordan decomposition of any continuous BV f into nondecreasing functions h_0, h_1 is equivalent to WKL over RCA.
Randomness notions given by function classes (BMN ta)
4. Polynomial time randomness and differentiability
Special Cauchy names
- A Cauchy name is a sequence of rationals (p_i)_{i∈ℕ} such that ∀k > i: |p_i - p_k| ≤ 2^{-i}.
- We represent a real x by a Cauchy name converging to x.

For feasible analysis, we use a compact set of Cauchy names: the signed digit representation of a real. Such Cauchy names, called special, have the form

  p_i = Σ_{k=0}^{i} b_k 2^{-k},

where b_k ∈ {-1, 0, 1}. (Also, b_0 = 0, b_1 = 1.) So they are given by paths through {-1, 0, 1}^ω, something a resource-bounded Turing machine can process.
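Here is a minimal Python sketch (my own illustration, not from the slides) of how a special Cauchy name can be extracted from any fast-converging rational approximation approx(i) of x with |approx(i) - x| ≤ 2^{-i}; the function name signed_digits is hypothetical, and the normalization of b_0, b_1 mentioned above is ignored.

from fractions import Fraction

def signed_digits(approx, n):
    # Return b_0, ..., b_n in {-1, 0, 1} with |x - sum_k b_k 2^-k| <= 2^-n,
    # where approx(i) is a rational with |approx(i) - x| <= 2^-i.
    digits, partial = [], Fraction(0)
    for k in range(n + 1):
        q = Fraction(approx(k + 2))            # error at most 2^-(k+2)
        d = (q - partial) * 2**k               # rescaled remaining gap
        b = 1 if d >= Fraction(1, 2) else (-1 if d <= Fraction(-1, 2) else 0)
        digits.append(b)
        partial += Fraction(b, 2**k)
    return digits

# example: x = 1/3, approximated by truncating its binary expansion
x = Fraction(1, 3)
name = signed_digits(lambda i: Fraction(int(x * 2**i), 2**i), 8)
assert abs(x - sum(Fraction(b, 2**k) for k, b in enumerate(name))) <= Fraction(1, 2**8)

One checks by induction that after choosing b_k the partial sum is within 2^{-k} of x; roughly, the digit -1 is there so that the algorithm can correct an earlier over- or undershoot without reading far ahead in the oracle.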
Polynomial time computable functions

The following has been formulated in equivalent ways by Ker-I Ko (1989), Weihrauch (2000), and Braverman (2008).

Definition. A function g : [0,1] → ℝ is polynomial time computable if there is a polynomial time TM turning every special Cauchy name for x ∈ [0,1] into a special Cauchy name for g(x).

This means that the first n symbols of a special Cauchy name for g(x) can be computed in time polynomial in n, using polynomially many symbols of the oracle tape that holds a special Cauchy name for x.
Examples of polynomial time computable functions
- Functions such as e^x and sin x are polynomial time computable.
- To see this one uses a rapidly converging approximation sequence, such as e^x = Σ_n x^n/n! (a small sketch follows this slide).
- As Braverman (2008) points out, e^x is computable in time O(n³).
- Namely, from O(n³) symbols of x we can in time O(n³) compute an approximation of e^x with error 2^{-n}.
- Better algorithms may exist (e.g. search the 1987 book by J. Borwein and P. Borwein, Pi and the AGM).
- Breutzmann, Juedes and Lutz (MLQ, 2004) have given an example of a polynomial time computable function that is nowhere differentiable. It is a variant of the Weierstrass function Σ_n 2^{-n} cos(5^n π x).
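The following Python sketch (my own, not from the cited sources, and without the bit-complexity bookkeeping behind the O(n³) bound) approximates e^x for rational x ∈ [0,1] to within 2^{-n} by summing the Taylor series with exact rational arithmetic. For x ≤ 1 the tail starting at the term x^m/m! is at most twice that term, which gives the stopping rule.

from fractions import Fraction

def exp_approx(x, n):
    # Approximate e**x for rational x in [0, 1] with error at most 2**-n.
    x = Fraction(x)
    total, term, m = Fraction(0), Fraction(1), 0     # term = x^m / m!
    while 2 * term > Fraction(1, 2**n):              # remaining tail <= 2 * term
        total += term
        m += 1
        term = term * x / m
    return total

print(float(exp_approx(Fraction(1, 2), 20)))         # about 1.6487212707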
Polynomial time randomness

Recall that a betting strategy, or martingale, is a function M : 2^{<ω} → ℝ^+_0 such that M(σ0) + M(σ1) = 2M(σ) for each string σ.

Definition. A betting strategy M : 2^{<ω} → ℝ is called polynomial time computable if from a string σ and an i ∈ ℕ we can, in time polynomial in |σ| + i, compute the i-th component of a special Cauchy name for M(σ). In this case we can compute a polynomial time martingale in base 2 dominating M (Schnorr / Figueira-N.).

Definition. We say Z ∈ 2^ℕ is polynomial time random if no polynomial time betting strategy succeeds on Z.
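To make the fairness condition M(σ0) + M(σ1) = 2M(σ) concrete, here is a tiny Python illustration (my own toy example, with no time bounds attached): the strategy that always wagers half of its current capital on the next bit being 0. It satisfies the martingale equality and succeeds, i.e. has unbounded capital, along the all-zero sequence.

from fractions import Fraction

def M(sigma):
    # Capital after betting half of the current capital on '0' at each step.
    c = Fraction(1)
    for bit in sigma:
        c = c * Fraction(3, 2) if bit == '0' else c * Fraction(1, 2)
    return c

# fairness: M(sigma 0) + M(sigma 1) = 2 M(sigma)
for sigma in ['', '0', '01', '110']:
    assert M(sigma + '0') + M(sigma + '1') == 2 * M(sigma)

print([M('0' * n) for n in range(6)])   # 1, 3/2, 9/4, ... unbounded along 000...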
Polynomial time randomness

Definition. We say Z ∈ 2^ℕ is polynomial time random if no polynomial time betting strategy succeeds on Z.
- This was first studied in Yongge Wang's 1992 thesis (Uni Heidelberg).
- Figueira, N. 2013 showed that the notion is base invariant: it is about reals rather than sequences of digits for a fixed base (such as 2).

Proposition (Existence in super-polynomial time classes). Suppose the function t(n) is time constructible and dominates all polynomials. Then there is a polynomial time random Z computable in time O(t(n)) (i.e. the language consisting of the initial segments of Z is O(t)-computable).
Lebesgue's Thm (A) and its converse in the polytime setting

Theorem (N., STACS 2014). The following are equivalent for a real z ∈ [0,1].
(I) z (written in binary expansion) is polynomial time random.
(II) f'(z) exists for each non-decreasing function f that is polynomial time computable.
- The same method works for primitive recursive randomness/functions, and even computable randomness/computable functions.
- So this also yields a new proof of Brattka/Miller/Nies.
Proof of the easy direction (II) → (I)

Suppose that f'(z) exists for each non-decreasing function f that is polynomial time computable. We want to show that z is polynomial time random.

Let S_g(σ) denote the slope of a non-decreasing function g at the basic dyadic interval given by the string σ. This is a betting strategy.

[Figure: graph of g, with Slope = M(1) over [0.1, 1.0] and Slope = M(10) over [0.10, 0.11].]

Essentially, each betting strategy M is of the form S_g for a nondecreasing g. If M is polynomial time then so is g. Since g'(z) exists, M is bounded along z.
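Here is a small Python sketch of the correspondence M(σ) = S_g(σ) (my own toy example; the choice g(x) = x² is just a stand-in for a nondecreasing computable function). The slope over the dyadic interval [σ] is the increment of g divided by the length 2^{-|σ|}, and the martingale equality holds because the two halves of [σ] split that increment.

from fractions import Fraction

def g(x):
    return x * x                       # a nondecreasing function on [0, 1]

def interval(sigma):
    # Left endpoint and length of the basic dyadic interval [sigma].
    left = sum(Fraction(int(b), 2**(i + 1)) for i, b in enumerate(sigma))
    return left, Fraction(1, 2**len(sigma))

def S(sigma):
    # Slope of g over [sigma]; for nondecreasing g this is a betting strategy.
    left, length = interval(sigma)
    return (g(left + length) - g(left)) / length

for sigma in ['', '0', '1', '01', '101']:
    assert S(sigma + '0') + S(sigma + '1') == 2 * S(sigma)

print(S('101'))                        # slope of x^2 over [5/8, 6/8] = 11/8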
Slopes and their limits

For a function f :⊆ ℝ → ℝ and a pair a, b of distinct reals let

  S_f(a, b) = (f(a) - f(b)) / (a - b).

For f defined on the rationals, the lower and upper (pseudo-)derivatives are

  D̲f(x) = liminf_{h→0+} {S_f(a, b) | a ≤ x ≤ b ∧ 0 < b - a ≤ h},
  D̄f(x) = limsup_{h→0+} {S_f(a, b) | a ≤ x ≤ b ∧ 0 < b - a ≤ h},

where a, b range over rationals in [0,1].

Example: f(x) = x sin(1/x). Then D̲f(0) = -1 and D̄f(0) = 1.

For f defined in a neighbourhood of x and continuous at x, f'(x) exists iff D̲f(x) = D̄f(x) < ∞.
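A worked check of the example (my computation, taking f(0) = 0 by continuity; it is not spelled out on the slide). Since the rationals must satisfy a ≤ 0 ≤ b, we have a = 0, so the relevant slopes are
\[
S_f(0,b)=\frac{f(b)-f(0)}{b}=\sin(1/b)\in[-1,1].
\]
Rationals b_j close to (2πj + π/2)^{-1} give slopes close to 1, and rationals close to (2πj - π/2)^{-1} give slopes close to -1, with b_j → 0. Hence D̲f(0) = -1 and D̄f(0) = 1, so f'(0) does not exist.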
Slopes at basic dyadic intervals

The subscript 2 indicates restriction to basic dyadic intervals [σ] containing z, where σ is a string:

  D̄₂f(x) = limsup_{|σ|→∞} {S_f(σ) | x ∈ [σ]}.

Recall: if f is non-decreasing then M(σ) = S_f(σ) is a betting strategy. We say that M converges on z if lim_n M(Z↾n) exists. We have the following basic connections:
- M succeeds on z ⟺ D̄₂f(z) = ∞.
- M converges on z ⟺ D̲₂f(z) = D̄₂f(z) < ∞.
Proof of the harder direction (I) → (II)

Now suppose that z = 0.Z ∈ [0,1] is polynomial time random. We want to show that f'(z) exists for each non-decreasing function f that is polynomial time computable.
- Consider the polynomial time computable betting strategy M(σ) = S_f(σ).
- lim_n M(Z↾n) exists and is finite for each polynomially random Z. This is an efficient version of Doob's martingale convergence theorem.
- Therefore D̲₂f(z) = D̄₂f(z) < ∞.
Porosity

Assume for a contradiction that f'(z) fails to exist. We have oscillation of slopes of f at arbitrarily small intervals around z. We want success of a betting strategy at basic dyadic intervals corresponding to prefixes of Z.
- First suppose that D̄₂f(z) < p < D̄f(z).
- Since D̄₂f(z) < p, there is a string σ* ≺ Z such that ∀σ [σ* ⪯ σ ≺ Z ⇒ S_f(σ) ≤ p].
- Choose k with p(1 + 2^{-k+1}) < D̄f(z).

Let ⪯ denote the prefix relation of strings. The next lemma says that [σ*] ∖ ⋃{[σ] : σ ⪰ σ* ∧ S_f(σ) > p} is porous at z.

Lemma (High slopes at dyadic intervals). There are arbitrarily large n such that S_f(τ_n) > p for some basic dyadic interval [τ_n] of length 2^{-n-k} which is contained in [z - 2^{-n+2}, z + 2^{-n+2}].
We may suppose σ* is the empty string, i.e., S_f(σ) ≤ p for all dyadic intervals [σ] containing z. By the lemma, there are arbitrarily large n such that S_f(τ_n) > p for some basic dyadic interval [τ_n] of length 2^{-n-k} which is contained in [z - 2^{-n+2}, z + 2^{-n+2}].

Good case: there are infinitely many n with η = Z↾(n-4) ⪯ τ_n. Then the strategy that from such η on bets everything on the strings of length n+k other than τ_n gains a fixed factor 2^{k+4}/(2^{k+4} - 1) on Z each time (a quick check follows below). Also, it never goes down on Z, so it succeeds.

Bad case: for almost all n we have Z↾(n-4) ⋠ τ_n. This means 0.τ_n is on the left side of z. So the strategy can't use τ_n, as it splits off from Z before η is read.
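A quick check of the gain factor in the good case (my computation, using the notation above): η = Z↾(n-4) has 2^{(n+k)-(n-4)} = 2^{k+4} extensions of length n+k, and the bet redistributes the capital M(η) evenly over all of them except τ_n:
\[
M(\rho)=
\begin{cases}
\dfrac{2^{k+4}}{2^{k+4}-1}\,M(\eta), & \eta\preceq\rho,\ |\rho|=n+k,\ \rho\neq\tau_n,\\[1ex]
0, & \rho=\tau_n .
\end{cases}
\]
The average over the 2^{k+4} extensions is M(η), so this is a fair bet. Since S_f(Z↾(n+k)) ≤ p < S_f(τ_n), the sequence Z avoids τ_n, so along Z the capital is indeed multiplied by 2^{k+4}/(2^{k+4} - 1).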
The shifting-by-1/3 trick

Fix m ∈ ℕ. For k ∈ ℤ consider an interval

  I = [k 2^{-m}, (k+1) 2^{-m}].

For r ∈ ℤ consider an interval

  J = 1/3 + [r 2^{-m}, (r+1) 2^{-m}].

The distance between an endpoint of I and an endpoint of J is at least 1/(3 · 2^m). To see this, assume that |k 2^{-m} - (r 2^{-m} + 1/3)| < 1/(3 · 2^m). This yields |3k - 3r - 2^m|/(3 · 2^m) < 1/(3 · 2^m), hence 3k - 3r = 2^m, so 3 | 2^m, a contradiction.
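A brute-force sanity check of this bound for small m, using exact rational arithmetic (my own check, not on the slide):

from fractions import Fraction

def min_endpoint_distance(m, bound=50):
    # Smallest distance between an endpoint k*2^-m of a dyadic interval and
    # an endpoint r*2^-m + 1/3 of a shifted one, over |k|, |r| <= bound.
    third = Fraction(1, 3)
    return min(abs(Fraction(k, 2**m) - (Fraction(r, 2**m) + third))
               for k in range(-bound, bound + 1)
               for r in range(-bound, bound + 1))

for m in range(1, 8):
    assert min_endpoint_distance(m) >= Fraction(1, 3 * 2**m)
    print(m, min_endpoint_distance(m))

In fact the printed minimum equals 1/(3 · 2^m): the distance is |3(k - r) - 2^m| / (3 · 2^m), and since 2^m is never divisible by 3 the integer |3(k - r) - 2^m| is at least 1, with value 1 attained for a suitable choice of k - r.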
Using this trick to finish the proof of (I) → (II)

We may assume that z > 1/2. In the "bad" case that Z↾(n-4) ⋠ τ_n for almost all n, we instead bet on the dyadic expansion Y of z - 1/3.
- Given η' = Y↾(n-4), look for an extension τ' ⪰ η' of length n+k+1 such that 1/3 + [τ'] ⊆ [τ] for a string τ with S_f(τ) > p. (Then Y ∉ [τ'].)
- If it is found, bet everything on the other extensions of η' of that length n+k+1.

This strategy gains a fixed factor 2^{k+5}/(2^{k+5} - 1) on Y each time n is as above. It never goes down on Y, so it succeeds. So we get a polytime martingale that wins on z - 1/3. By Figueira and N. (2013), polytime randomness is base invariant, so z - 1/3 is polynomially random. This yields a contradiction.

The case D̲f(z) < D̲₂f(z) is analogous, using a "low dyadic slopes" lemma instead.
Shifted dyadic versus full differentiability

For a rational q let D_q be the collection of intervals of the form q + [k 2^{-m}, (k+1) 2^{-m}], where k ∈ ℤ, m ∈ ℕ.

Question. Let f : [0,1] → ℝ be continuous and nondecreasing, and let z ∈ (0,1). Suppose that for each rational q,

  lim_{[a,b] ∈ D_q, z ∈ [a,b], b-a → 0} S_f(a, b) exists.

Is f already differentiable at z?
5. Differentiability of Lipschitz functions
Computable randomness and Lipschitz functions

Recall that f is Lipschitz if |f(x) - f(y)| ≤ C |x - y| for some C ∈ ℕ.

Theorem (Freer, Kjos-Hanssen, N., Stephan; Computability, 2014). A real z is computably random ⟺ each computable Lipschitz function f : [0,1] → ℝ is differentiable at z.

⇒: Write f(x) = (f(x) + Cx) - Cx. Then f(x) + Cx is computable and non-decreasing. From the monotone case (BMN), we obtain a test (martingale) for this function. If f'(z) does not exist, then z fails this test.

⇐: Turn success of a martingale on a real into oscillation of the slopes, around the real, of a Lipschitz function.
Rademacher's theorem

Theorem (Rademacher, 1920). Let f : [0,1]^n → ℝ be Lipschitz. Then the derivative Df(z) (an element of ℝ^n) exists for almost every vector z ∈ [0,1]^n.

To define computable randomness of a vector z ∈ [0,1]^n:
- Take the binary expansions of the n components of z.
- We can bet on the corresponding sequence of blocks of n bits.

Rute (2012) studies this notion, for instance invariance under computable measure preserving operators.
Effective form of Rademacher

Theorem (Galicki and Turetsky, arxiv.org/abs/1410.8578). z ∈ [0,1]^n is computably random ⇒ every computable Lipschitz function f : [0,1]^n → ℝ is differentiable at z.

For a vector v, the directional derivative Df(z; v) is the derivative of the function t ↦ f(z + tv) at 0. The proof has three steps:
- all partial derivatives exist at z
- all directional derivatives for computable directions exist
- Df(z; v) is linear on computable directions v

Since f is Lipschitz, this shows that f is Gâteaux-differentiable at z: all directional derivatives exist, and the value is linear in the direction. Again since f is Lipschitz, this yields the full differentiability of f at z. □
Other approaches to effective Lipschitz functions

The polynomial time case is open.

Question. Suppose z ∈ [0,1]^n is polynomially random. Is every polynomial time Lipschitz function f : [0,1]^n → ℝ differentiable at z?
- Abbas Edalat has developed an approach to differentiability of effective Lipschitz functions using domain theory. See his recent paper in TCS.
- It involves the Clarke gradient (a set-valued derivative) to get around the measure 0 set where the function is not classically differentiable.
6. Two more almost everywhere results: Carleson-Hunt (1966/68) and Weyl (1916)
Carleson-Hunt Thm (suggested by Manfred Sauter)

Theorem (Carleson, 1966 for p = 2; improved by Hunt 1968). Let f ∈ L^p[-π, π] be a periodic function. Then the partial sums of the Fourier series,

  c_N(z) = Σ_{|n| ≤ N} f̂(n) e^{inz},

converge for almost every z.

Question. Suppose f is L^p-computable for a computable p > 1. Which randomness property of z suffices to make the sequence c_N(z) converge?
- We say z is weakly 2-random if z is in no null Π⁰₂ set. This properly implies Martin-Löf randomness.
- As an easy consequence of the Carleson-Hunt theorem, weak 2-randomness of z suffices. For fixed rationals α < β, the statement that, say, Re c_N(z) oscillates between values < α and > β is Π⁰₂.
Effective Weyl Theorem

Theorem (Weyl, 1916). Let (a_i)_{i∈ℕ} be a sequence of distinct integers. Then for almost every real z, the sequence (a_i z mod 1) is uniformly distributed in [0,1].

Suppose now (a_i)_{i∈ℕ} is computable. Avigad (2012) shows that
- Schnorr randomness of z suffices to make the conclusion of Weyl's theorem hold;
- there is a z satisfying the conclusion of the theorem which is in some null effectively closed set (hence not even "Kurtz random").
7. Effective ergodic theory: multiple recurrence
Classical theory

A measurable operator T on a probability space (X, B, µ) is measure preserving if µT^{-1}(A) = µA for each A ∈ B. The following is Furstenberg's multiple recurrence theorem (1977); see Furstenberg's book on recurrence, 2014 edition, Thm. 7.15.

Theorem. Let (X, B, µ) be a probability space. Let T_1, ..., T_k be commuting measure preserving operators on X. For each P ∈ B with µP > 0, there is n > 0 such that µ(⋂_i T_i^{-n}(P)) > 0.

With a little measure theory one can easily strengthen this to an "almost everywhere" type result: a.e. z ∈ P ∃n [z ∈ ⋂_i T_i^{-n}(P)].
k-recurrence in Cantor space

Let X = 2^ℕ with the shift operator S : X → X that takes the first bit off a sequence.

Definition. Let P ⊆ 2^ℕ be measurable, and Z ∈ 2^ℕ. We say that Z is k-recurrent in P if S^n(Z), S^{2n}(Z), ..., S^{kn}(Z) ∈ P for some n ≥ 1, i.e. Z ∈ ⋂_{1 ≤ i ≤ k} S^{-ni}(P).

Theorem (Downey, Nandakumar, N., in preparation). Let P ⊆ 2^ℕ be a Π⁰₁ class of positive measure. Each Martin-Löf random Z is k-recurrent in P, for each k ≥ 1.

Martin-Löf randomness is necessary even for k = 1: if Z is not ML-random, no "tail" S^n(Z) is in the Π⁰₁ class of positive measure P = {Y : ∀r K(Y↾r) ≥ r - 1}, by the Levin-Schnorr Theorem.
General Conjecture

It is likely that an effective multiple recurrence theorem holds in full generality for ML-randomness and Π⁰₁ sets.

Conjecture. Let (X, µ) be a computable probability space. Let T_1, ..., T_n be computable measure preserving transformations that commute pairwise. Let P be a Π⁰₁ class with µP > 0. If z ∈ P is ML-random then ∃n ⋀_i T_i^n(z) ∈ P.
- By the classical result of Furstenberg, this holds for weakly 2-random z (i.e., z in no null Π⁰₂ class).
- Jason Rute has pointed out that if µP is computable, then Schnorr randomness of z is sufficient, also by the classical result.

A draft of this work is available on the 2015 Logic Blog.
Randomness and analysis: a tutorial
Part II: Lebesgue density and its applications to randomness
André Nies
CCC 2015, Kochel am See
Density

Let λ denote uniform (Lebesgue) measure.

Definition. Let E be a subset of [0,1]. The (lower) density of E at a real z is

  ϱ(E | z) = liminf_{z ∈ J, |J| → 0} λ(J ∩ E) / |J|,

where J ranges over intervals. This gauges how much, at least, of E is in intervals that zoom in on z.

The density of E at z is the corresponding limit over intervals containing z, when it exists. Clearly the density equals 1 ⟺ ϱ(E | z) = 1.
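A worked example (not on the slide) for E = [0, 1/2]:
\[
\varrho\bigl(E \mid \tfrac14\bigr)=1, \qquad \varrho\bigl(E \mid \tfrac12\bigr)=0 .
\]
Every interval J ∋ 1/4 with |J| < 1/4 lies inside E, so those ratios are 1; while the lopsided intervals J_h = [1/2 - h², 1/2 + h] containing 1/2 satisfy λ(J_h ∩ E)/|J_h| = h²/(h² + h) → 0 as h → 0.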
Lebesgue's Theorem: towards an effective version

Recall: ϱ(E | z) = liminf_{J interval, z ∈ J, |J| → 0} λ(J ∩ E) / |J|.

Theorem (Lebesgue Density Theorem, 1910). Let E ⊆ [0,1] be measurable. Then for almost every z ∈ [0,1]: if z ∈ E, then ϱ(E | z) = 1.
- For open E, this is immediate, and actually holds for all z ∈ [0,1].
- For closed E, this is the simplest case where there is something to prove.

E ⊆ [0,1] is effectively closed (or Π⁰₁) if there is an effective list of open intervals with rational endpoints that has union [0,1] ∖ E.

Definition (Main). We say that a real z is a density-one point if ϱ(E | z) = 1 for every effectively closed E ∋ z.
Martin-Löf randomness and density

Does Martin-Löf randomness ensure that an effectively closed E ⊆ [0,1] with z ∈ E has density one at z? Answer: NO!

Example.
- Let E ≠ ∅, E ⊆ [0,1] be an effectively closed set containing only Martin-Löf randoms.
- E.g., E = [0,1] ∖ S_1, where ⟨S_r⟩_{r∈ℕ} is a universal ML-test.
- Let z = min(E).
- Then ϱ(E | z) = 0 even though z is ML-random.
Density randomness

Definition. Let E be a measurable subset of 2^ℕ. The lower dyadic density of E at Z ∈ 2^ℕ is

  ϱ₂(E | Z) = liminf_{n→∞} 2^n λ([Z↾n] ∩ E).

Definition. We say that Z ⊆ ℕ is density random if Z is ML-random and ϱ₂(P | Z) = 1 for each Π⁰₁ class P ∋ Z.

For ML-random Z, one can equivalently require that the full density equals 1 in the setting of reals, by a result of Khan and Miller (2013).
Three characterisations of density randomness

Theorem. The following are equivalent for Z ∈ 2^ℕ, z = 0.Z.
- Z is density random.
- [Madison group, 2012] Each left-c.e. martingale M converges: lim_n M(Z↾n) exists. (M is left-c.e. if M(σ) is a left-c.e. real uniformly in the string σ.)
- [N., 2014] g'(z) exists for each interval-c.e. function g.
- [Miyabe, N., Zhang 2013] For each integrable lower semicomputable function f : [0,1] → [0, ∞], the "averaging" statement of the Lebesgue differentiation theorem holds at z.

For background and complete proofs see Miyabe, N., Zhang 2013. The continuous interval-c.e. functions with g(0) = 0 are precisely the variation functions of computable functions, by Freer et al. 2014.
2. Anti-random sequences

000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000
000000000000000000000001000000000000000001111111111111
111111111111111111111111111111111111111111111100000000
000000000000000000111000000000000000000000000000000000
...
Basic objects of computability theory
- The computable subsets of ℕ
- the halting problem ∅'
- Turing reducibility ≤_T
- the Δ⁰₂ sets (A ≤_T ∅')
- the computably enumerable sets

[Figure: a cone with the computable sets at the bottom, the c.e. sets and the Δ⁰₂ sets above them, and ∅' at the top.]
Adding the world of (anti-)randomness
- The Martin-Löf random sets Z, such as Chaitin's halting probability Ω. We have Ω ≡_T ∅'.
- The antirandom (K-trivial) sets. If A is K-trivial, then there is a c.e. K-trivial set D ≥_tt A. (N., 2005)

[Figure: the computable sets at the bottom, the K-trivial sets just above them, and the ML-random sets such as Ω off to the side.]
Kučera's theorem and the covering problem
- Let Z be a random Δ⁰₂ set. Then there is a c.e., incomputable set A ≤_T Z. (Kučera, 1986)
- Let Z be random with Z ≱_T ∅', and let A ≤_T Z be c.e. Then A is K-trivial. (Hirschfeldt, N., Stephan, 2007)

Covering problem (Stephan, 2004). Let A be a c.e. K-trivial set. Is there a ML-random Z ≥_T A with Z ≱_T ∅'?

We may omit the assumption that A is c.e.: if not, replace A by a c.e. K-trivial set D above A.

[Figure: Z below ∅', with the c.e. set A below Z and among the K-trivial sets.]
A strong solution to the covering problem

Theorem (5 + 2 authors). There is a ML-random set Z <_T ∅' above all the K-trivials.
- How random can Z be? Answer: not much more than Martin-Löf random.
- How close to ∅' must Z lie? Answer: Z is very close to ∅'.

[Figure: Z just below ∅', above the K-trivial sets.]
Background on antirandom sets
Descriptive string complexity K

Consider a partial computable function from binary strings to binary strings (called a machine). It is called prefix-free if its domain is an antichain under the prefix relation of strings.

There is a universal prefix-free machine U: for every prefix-free machine M, M(σ) = y implies U(τ) = y for some τ with |τ| ≤ |σ| + d_M, where the constant d_M only depends on M.

The prefix-free Kolmogorov complexity of a string y is the length of a shortest U-description of y:

  K(y) = min{|σ| : U(σ) = y}.
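A small Python illustration of the prefix-freeness condition (my own, not from the slides), together with the sum Σ_σ 2^{-|σ|} over a prefix-free set, which never exceeds 1 by Kraft's inequality; this is the combinatorial fact that lets 2^{-K(y)} behave like a probability.

from fractions import Fraction

def is_prefix_free(strings):
    # True if no string in the set is a proper prefix of another one.
    return not any(s != t and t.startswith(s) for s in strings for t in strings)

def kraft_sum(strings):
    # Sum of 2^-|s|; at most 1 for a prefix-free set (Kraft's inequality).
    return sum(Fraction(1, 2**len(s)) for s in strings)

domain = ['00', '010', '011', '10', '11']        # a toy prefix-free domain
assert is_prefix_free(domain)
print(kraft_sum(domain))                          # 1/4 + 1/8 + 1/8 + 1/4 + 1/4 = 1

assert not is_prefix_free(['0', '01'])            # '0' is a proper prefix of '01'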
Definition of K-triviality

In the following, we identify a natural number n with its binary representation (as a string). For a string τ, up to an additive constant we have K(|τ|) ≤ K(τ), since we can compute |τ| from τ.

Definition (going back to Chaitin, 1975). An infinite sequence of bits A is K-trivial if, for some b ∈ ℕ,

  ∀n [K(A↾n) ≤ K(n) + b],

namely, all its initial segments have minimal K-complexity.

It is not hard to see that K(n) ≤ 2 log₂ n + O(1).

  Z is random ⟺ ∀n [K(Z↾n) > n - O(1)]
  A is K-trivial ⟺ ∀n [K(A↾n) ≤ K(n) + O(1)]

Thus, being K-trivial means being far from random.
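One standard way to see the bound K(n) ≤ 2 log₂ n + O(1) (a routine argument, spelled out here for completeness): code n by doubling each bit of its binary representation and appending 01,
\[
n = b_1 b_2 \cdots b_\ell \;\longmapsto\; b_1 b_1\, b_2 b_2 \cdots b_\ell b_\ell\, 01 .
\]
A decoder reads its input in pairs and stops at the first pair 01, so the set of code words is prefix-free and the decoder is a prefix-free machine. By universality, K(n) ≤ 2ℓ + 2 + O(1) = 2 log₂ n + O(1).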
Connecting density and K-triviality

This is based on the following work:
[Oberwolfach] Bienvenu, Greenberg, Kučera, N., Turetsky 2012; JEMS, to appear.
[Berkeley] Day and Miller 2012; Math. Research Letters, to appear.
[Paris] Bienvenu, Miller, Hölzl and N. 2011; STACS 2012, JML 2014.
Turing incompleteness and positive density

Definition. We say that a real z is a positive density point if ϱ(E | z) > 0 for every effectively closed E ∋ z.

For a real z ∉ ℚ, let Z ∈ 2^ℕ denote its binary expansion: z = 0.Z.

Theorem (Paris). Let z be a Martin-Löf random real. Then: Z is NOT above the halting problem ∅' ⟺ z is a positive density point.
The main connection of density and K-trivials

Recall: ϱ(E | z) = liminf_{|J|→0, z∈J} λ(J ∩ E) / |J|.

Definition (Recall). We say that a real z is a density-one point if ϱ(E | z) = 1 for every effectively closed E ∋ z. In other words, z satisfies the Lebesgue Theorem for effectively closed sets.

Theorem (Oberwolfach). Let z be a Martin-Löf random real. Suppose z is NOT a density-one point. Then Z is above all the K-trivials.

[Figure: Z below ∅', above the K-trivial sets.]
The main connection of density and K-trivials

Theorem. Let z be a Martin-Löf random real. Suppose z is not a density-one point. Then Z is above all the K-trivials.

To solve the covering problem, we need to know: does Z as in the picture exist? That is, where do we get a ML-random set Z ≱_T ∅' that is not a density-one point?