
Nonlinear Expectations and Stochastic Calculus under Uncertainty
with a New Central Limit Theorem and G-Brownian Motion

Shige PENG
Institute of Mathematics, Shandong University
250100, Jinan, China
peng@sdu.edu.cn

Version: first edition


This part does not need the "σ-sub-additive" assumption, and readers need not even have a background in classical probability theory. In fact, in the whole of the first five chapters we use only very basic functional analysis, such as the Hahn-Banach Theorem (see Appendix A). A special situation arises when all the sublinear expectations in this book become linear: in that case the book can still be read as a new and very simple approach to the classical Itô stochastic calculus, since it does not require any knowledge of probability theory. This is an important advantage of using expectation as our basic notion.

The "authentic probabilistic parts", i.e., the pathwise analysis of our G-Brownian motion and of the corresponding random variables, viewed as functions of the G-Brownian path, are presented in Chapter VI. Here, just as in the classical "P-sure analysis", we introduce a "ĉ-sure analysis" for the G-capacity ĉ. Readers who are not interested in this deeper part of the stochastic analysis of G-Brownian motion may skip that chapter.

This book is based on the author's Lecture Notes [100] for several series of lectures: the 2nd Workshop on Stochastic Equations and Related Topics, Jena, July 23-29, 2006; graduate courses of the Yantai Summer School in Finance, Yantai University, July 06-21, 2007; graduate courses of the Wuhan Summer School, July 24-26, 2007; a mini-course of the Institute of Applied Mathematics, AMSS, April 16-18, 2007; mini-courses at Fudan University, May 2007 and August 2009; graduate courses of CSFI, Osaka University, May 15-June 13, 2007; the Minerva Research Foundation Lectures of Columbia University in the Fall of 2008; the mini-workshop on G-Brownian motion and G-expectations in Weihai, July 2009; and a series of talks in Hong Kong during my recent one-month visit to the Department of Applied Mathematics, Hong Kong Polytechnic University. The hospitality and encouragement of the above institutions and the enthusiasm of the audiences were the main engine for realizing these lecture notes. I am grateful for the many comments and suggestions given during those courses, especially to Li Juan and Hu Mingshang.

During the preparation of this book, a special reading group was organized, with members Hu Mingshang, Li Xinpeng, Xu Xiaoming, Lin Yiqing, Su Chen, Wang Falei and Yin Yue. They proposed very helpful suggestions for the revision of the book. Hu Mingshang and Li Xinpeng made a great effort on the final edition. Their efforts are decisively important to the realization of this book.

Contents

Chapter I  Sublinear Expectations and Risk Measures
  § 1 Sublinear Expectations and Sublinear Expectation Spaces
  § 2 Representation of a Sublinear Expectation
  § 3 Distributions, Independence and Product Spaces
  § 4 Completion of Sublinear Expectation Spaces
  § 5 Coherent Measures of Risk
  Notes and Comments

Chapter II  Law of Large Numbers and Central Limit Theorem
  § 1 Maximal Distribution and G-normal Distribution
  § 2 Existence of G-distributed Random Variables
  § 3 Law of Large Numbers and Central Limit Theorem
  Notes and Comments

Chapter III  G-Brownian Motion and Itô's Integral
  § 1 G-Brownian Motion and its Characterization
  § 2 Existence of G-Brownian Motion
  § 3 Itô's Integral with G-Brownian Motion
  § 4 Quadratic Variation Process of G-Brownian Motion
  § 5 The Distribution of ⟨B⟩
  § 6 G-Itô's Formula
  § 7 Generalized G-Brownian Motion
  Notes and Comments

Chapter IV  G-martingales and Jensen's Inequality
  § 1 The Notion of G-martingales
  § 2 On G-martingale Representation Theorem
  § 3 G-convexity and Jensen's Inequality for G-expectations
  Notes and Comments

Chapter V  Stochastic Differential Equations
  § 1 Stochastic Differential Equations
  § 2 Backward Stochastic Differential Equations
  § 3 Nonlinear Feynman-Kac Formula
  Notes and Comments

Chapter VI  Capacity and Quasi-Surely Analysis for G-Brownian Paths
  § 1 Integration theory associated to an upper probability
  § 2 G-expectation as an Upper Expectation
  § 3 G-capacity and Paths of G-Brownian Motion
  Notes and Comments

Appendix A  Preliminaries in Functional Analysis
  § 1 Completion of Normed Linear Spaces
  § 2 The Hahn-Banach Extension Theorem
  § 3 Dini's Theorem and Tietze's Extension Theorem

Appendix B  Preliminaries in Probability Theory
  § 1 Kolmogorov's Extension Theorem
  § 2 Kolmogorov's Criterion
  § 3 Daniell-Stone Theorem

Appendix C  Viscosity Solutions
  § 1 The Definition of Viscosity Solutions
  § 2 Comparison Theorem
  § 3 Perron's Method and Existence
  § 4 Krylov's Regularity Estimate for Parabolic PDE

Bibliography
Index of Symbols
Index

Chapter I  Sublinear Expectations and Risk Measures

The sublinear expectation is also called the upper expectation or the upper prevision, and this notion is used in situations where the probability model itself is uncertain. In this chapter, we present the basic notion of sublinear expectations and the corresponding sublinear expectation spaces. We give the representation theorem of a sublinear expectation and the notions of distributions and independence within the framework of sublinear expectations. We also introduce a natural Banach norm on a sublinear expectation space in order to obtain its completion, which is a Banach space. As a fundamentally important example, we introduce the notion of coherent risk measures in finance. A large part of the notions and results of this chapter will be used throughout this book.

§ 1 Sublinear Expectations and Sublinear Expectation Spaces

Let Ω be a given set and let H be a linear space of real valued functions defined on Ω. In this book we suppose that H satisfies: c ∈ H for each constant c, and |X| ∈ H if X ∈ H. The space H can be considered as the space of random variables.

Definition 1.1 A sublinear expectation E is a functional E : H → R satisfying

(i) Monotonicity: E[X] ≥ E[Y] if X ≥ Y.
(ii) Constant preserving: E[c] = c for c ∈ R.

(iii) Sub-additivity: For each X, Y ∈ H, E[X + Y] ≤ E[X] + E[Y].
(iv) Positive homogeneity: E[λX] = λE[X] for λ ≥ 0.

The triple (Ω, H, E) is called a sublinear expectation space. If (i) and (ii) are satisfied, E is called a nonlinear expectation and the triple (Ω, H, E) is called a nonlinear expectation space.

Definition 1.2 Let E_1 and E_2 be two nonlinear expectations defined on (Ω, H). E_1 is said to be dominated by E_2 if

E_1[X] − E_1[Y] ≤ E_2[X − Y]  for X, Y ∈ H.

Remark 1.3 From (iii), a sublinear expectation is dominated by itself. In many situations, (iii) is also called the property of self-domination. If the inequality in (iii) becomes an equality, then E is a linear expectation, i.e., E is a linear functional satisfying (i) and (ii).

Remark 1.4 (iii)+(iv) is called sublinearity. Sublinearity implies

(v) Convexity: E[αX + (1 − α)Y] ≤ αE[X] + (1 − α)E[Y] for α ∈ [0, 1].

If a nonlinear expectation E satisfies convexity, we call it a convex expectation.

The properties (ii)+(iii) imply

(vi) Cash translatability: E[X + c] = E[X] + c for c ∈ R.

In fact, we have

E[X] + c = E[X] − E[−c] ≤ E[X + c] ≤ E[X] + E[c] = E[X] + c.

For property (iv), an equivalent form is

E[λX] = λ⁺E[X] + λ⁻E[−X]  for λ ∈ R,

where λ⁺ = max{λ, 0} and λ⁻ = max{−λ, 0}.

In this book, we will systematically study sublinear expectation spaces. In the following chapters, unless otherwise stated, we consider a sublinear expectation space (Ω, H, E) such that if X_1, ..., X_n ∈ H then ϕ(X_1, ..., X_n) ∈ H for each ϕ ∈ C_l.Lip(R^n), where C_l.Lip(R^n) denotes the linear space of functions ϕ satisfying

|ϕ(x) − ϕ(y)| ≤ C(1 + |x|^m + |y|^m)|x − y|  for x, y ∈ R^n,

for some C > 0 and m ∈ N depending on ϕ. In this case X = (X_1, ..., X_n) is called an n-dimensional random vector, denoted by X ∈ H^n.

Remark 1.5 It is clear that if X ∈ H then |X|, X^m ∈ H. More generally, ϕ(X)ψ(Y) ∈ H if X, Y ∈ H and ϕ, ψ ∈ C_l.Lip(R). In particular, if X ∈ H then E[|X|^n] < ∞ for each n ∈ N.

Here we use C_l.Lip(R^n) in our framework only for technical convenience. In fact our essential requirement is that H contains all constants and, moreover, X ∈ H implies |X| ∈ H. In general, C_l.Lip(R^n) can be replaced by any one of the following spaces of functions defined on R^n:

• L^∞(R^n): the space of bounded Borel-measurable functions;
• C_b(R^n): the space of bounded and continuous functions;
• C_b^k(R^n): the space of bounded and k-times continuously differentiable functions with bounded derivatives of all orders less than or equal to k;
• C_unif(R^n): the space of bounded and uniformly continuous functions;
• C_b.Lip(R^n): the space of bounded and Lipschitz continuous functions;
• L^0(R^n): the space of Borel measurable functions.

Next we give two examples of sublinear expectations.

Example 1.6 In a game we select a ball from a box containing W white, B black and Y yellow balls. The owner of the box, who is the banker of the game, does not tell us the exact numbers of W, B and Y. He or she only informs us that W + B + Y = 100 and W = B ∈ [20, 25]. Let ξ be a random variable defined by

ξ = 1 if we get a white ball;  ξ = 0 if we get a yellow ball;  ξ = −1 if we get a black ball.

Problem: how should we measure a loss X = ϕ(ξ) for a given function ϕ on R?

We know that the distribution of ξ is

  values:        −1     0      1
  probabilities: p/2   1 − p   p/2     with uncertainty: p ∈ [μ, μ̄] = [0.4, 0.5].

Thus the robust expectation of X = ϕ(ξ) is

E[ϕ(ξ)] := sup_{P ∈ P} E_P[ϕ(ξ)]
         = sup_{p ∈ [μ, μ̄]} [ (p/2)(ϕ(1) + ϕ(−1)) + (1 − p)ϕ(0) ].

Here, ξ has distribution uncertainty.
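The robust expectation in Example 1.6 is a supremum of finitely parametrized linear expectations, so it can be computed directly. The following is a minimal numerical sketch (our own illustration, not part of the book; the function name and the grid over [0.4, 0.5] are arbitrary choices), evaluating E[ϕ(ξ)] for a few payoffs ϕ.

```python
import numpy as np

def robust_expectation(phi, mu_low=0.4, mu_high=0.5, n_grid=101):
    """Sublinear expectation E[phi(xi)] from Example 1.6.

    xi takes values -1, 0, 1 with probabilities p/2, 1-p, p/2, and p ranges
    over [mu_low, mu_high].  The sublinear expectation is the supremum of the
    classical expectations over p; since p -> E_p[phi(xi)] is affine, the
    supremum is attained at an endpoint, but we scan a grid for illustration.
    """
    ps = np.linspace(mu_low, mu_high, n_grid)
    values = ps / 2 * (phi(1.0) + phi(-1.0)) + (1.0 - ps) * phi(0.0)
    return values.max()

if __name__ == "__main__":
    # A convex loss and a concave loss are maximized by different values of p.
    print(robust_expectation(lambda x: x * x))    # worst case p = 0.5 -> 0.5
    print(robust_expectation(lambda x: -x * x))   # worst case p = 0.4 -> -0.4
    # No mean-uncertainty here: E[xi] = -E[-xi] = 0, since the distribution
    # of xi is symmetric for every admissible p.
    print(robust_expectation(lambda x: x), robust_expectation(lambda x: -x))
```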

Example 1.7 A more general situation is one in which the banker of a game can choose among a set of distributions {F(θ, A)}_{A ∈ B(R), θ ∈ Θ} of a random variable ξ. In this situation the robust expectation of a risk position ϕ(ξ), for ϕ ∈ C_l.Lip(R), is

E[ϕ(ξ)] := sup_{θ ∈ Θ} ∫_R ϕ(x) F(θ, dx).

Exercise 1.8 Prove that a functional E satisfies sublinearity if and only if it satisfies convexity and positive homogeneity.

Exercise 1.9 Suppose that all elements in H are bounded. Prove that the strongest sublinear expectation on H is

E^∞[X] := X* = sup_{ω ∈ Ω} X(ω).

Namely, all other sublinear expectations are dominated by E^∞[·].

§ 2 Representation of a Sublinear Expectation

A sublinear expectation can be expressed as a supremum of linear expectations.

Theorem 2.1 Let E be a functional defined on a linear space H satisfying sub-additivity and positive homogeneity. Then there exists a family of linear functionals {E_θ : θ ∈ Θ} defined on H such that

E[X] = sup_{θ ∈ Θ} E_θ[X]  for X ∈ H,

and, for each X ∈ H, there exists θ_X ∈ Θ such that E[X] = E_{θ_X}[X]. Furthermore, if E is a sublinear expectation, then each E_θ is a linear expectation.

Proof. Let Q = {E_θ : θ ∈ Θ} be the family of all linear functionals dominated by E, i.e., E_θ[X] ≤ E[X] for all X ∈ H, E_θ ∈ Q. We first prove that Q is nonempty. For a given X ∈ H, we set L = {aX : a ∈ R}, which is a subspace of H. We define I : L → R by I[aX] = aE[X] for all a ∈ R; then I[·] is a linear functional on L and I ≤ E on L. Since E[·] is sub-additive and positively homogeneous, by the Hahn-Banach theorem (see Appendix A) there exists a linear functional E_1 on H such that E_1 = I on L and E_1 ≤ E on H. Thus E_1 is a linear functional dominated by E with E_1[X] = E[X]; in particular Q is nonempty and the supremum is attained at X. We now define

E_Θ[X] := sup_{θ ∈ Θ} E_θ[X]  for X ∈ H.

It is clear that E_Θ = E. Furthermore, if E is a sublinear expectation, then for each nonnegative element X ∈ H we have E_θ[X] = −E_θ[−X] ≥ −E[−X] ≥ 0, so E_θ is monotone. For each c ∈ R, −E_θ[c] = E_θ[−c] ≤ E[−c] = −c and E_θ[c] ≤ E[c] = c, so we get E_θ[c] = c. Thus each E_θ is a linear expectation. The proof is complete. □
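Theorem 2.1 can be seen very concretely when Ω is a finite set: a sublinear expectation is then just a maximum of finitely many linear expectations given by probability vectors, and the defining properties follow from this representation. A minimal sketch, assuming a toy three-point Ω and a three-element family of our own choosing:

```python
import numpy as np

# Toy setup (ours, not from the book): Omega = {0, 1, 2}, and a sublinear
# expectation represented, as in Theorem 2.1 / Theorem 2.4, as a supremum of
# linear expectations given by a finite family of probability vectors.
P_family = np.array([
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
    [0.1, 0.3, 0.6],
])

def E(X):
    """Sublinear expectation E[X] = sup_theta E_theta[X] for X : Omega -> R."""
    return (P_family @ X).max()

rng = np.random.default_rng(0)
X, Y = rng.normal(size=3), rng.normal(size=3)

# (iii) Sub-additivity and (iv) positive homogeneity hold by construction:
assert E(X + Y) <= E(X) + E(Y) + 1e-12
assert abs(E(2.5 * X) - 2.5 * E(X)) < 1e-12
# (i) Monotonicity and (ii) constant preserving:
assert E(X) <= E(X + abs(Y))                     # X <= X + |Y| pointwise
assert abs(E(np.full(3, 7.0)) - 7.0) < 1e-12
print("E[X] =", E(X), "  -E[-X] =", -E(-X))      # upper vs lower expectation
```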

  8. 5 § 2 Representation of a Sublinear Expectation Remark 2.2 It is important to observe that the above linear expectation E θ is only “finitely additive”. A sufficient condition for the σ -additivity of E θ is to assume that E [ X i ] → 0 for each sequence { X i } ∞ i =1 of H such that X i ( ω ) ↓ 0 for each ω . In this case, it is clear that E θ [ X i ] → 0 . Thus we can apply the well-known Daniell-Stone Theorem (see Theorem 3.3 in Appendix B) to find a σ -additive probability measure P θ on (Ω , σ ( H )) such that � X ∈ H . E θ [ X ] = X ( ω ) dP θ , Ω The corresponding model uncertainty of probabilities is the subset { P θ : θ ∈ Θ } , and the corresponding uncertainty of distributions for an n -dimensional random vector X in H is { F X ( θ, A ) := P θ ( X ∈ A ) : A ∈ B ( R n ) } . In many situation, we may concern the probability uncertainty, and the probability maybe only finitely additive. So next we will give another version of the above representation theorem. Let P f be the collection of all finitely additive probability measures on (Ω , F ), we consider L ∞ 0 (Ω , F ) the collection of risk positions with finite val- ues, which consists risk positions X of the form N � X ( ω ) = x i I A i ( ω ) , x i ∈ R , A i ∈ F , i = 1 , · · · , N. i =1 It is easy to check that, under the norm �·� ∞ , L ∞ 0 (Ω , F ) is dense in L ∞ (Ω , F ). For a fixed Q ∈ P f and X ∈ L ∞ 0 (Ω , F ) we define � N N � � E Q [ X ] = E Q [ x i I A i ( ω )] := x i Q ( A i ) = X ( ω ) Q ( dω ) . Ω i =1 i =1 E Q : L ∞ 0 (Ω , F ) → R is a linear functional. It is easy to check that E Q satisfies (i) monotonicity and (ii) constant preserving. It is also continuous under � X � ∞ . | E Q [ X ] | ≤ sup | X ( ω ) | = � X � ∞ . ω ∈ Ω 0 is dense in L ∞ we then can extend E Q from L ∞ Since L ∞ 0 to a linear continuous functional on L ∞ (Ω , F ). Proposition 2.3 The linear functional E Q [ · ] : L ∞ (Ω , F ) → R satisfies (i) and (ii) . Inversely each linear functional η ( · ) : L ∞ (Ω , F ) → R satisfying (i) and (ii) induces a finitely additive probability measure via Q η ( A ) = η ( I A ) , A ∈ F . The corresponding expectation is η itself � η ( X ) = X ( ω ) Q η ( dω ) . Ω

  9. 6 Chap.I Sublinear Expectations and Risk Measures Theorem 2.4 A sublinear expectation E has the following representation: there exists a subset Q ⊂ P f , such that E [ X ] = sup E Q [ X ] for X ∈ H . Q ∈Q Proof. By Theorem 2.1, we have E [ X ] = sup E θ [ X ] for X ∈ H , θ ∈ Θ where E θ is a linear expectation on H for fixed θ ∈ Θ. We can define a new sublinear expectation on L ∞ (Ω , σ ( H )) by ˜ E θ [ X ] := inf { E θ [ Y ]; Y ≥ X, Y ∈ H} . It is not difficult to check that ˜ E θ is a sublinear expectation on L ∞ (Ω , σ ( H )), where σ ( H ) is the smallest σ -algebra generated by H . We also have E θ ≤ ˜ E θ on H , by Hahn-Banach theorem, E θ can be extended from H to L ∞ (Ω , σ ( H )), by Proposition 2.3, there exists Q ∈ P f , such that E θ [ X ] = E Q [ X ] for X ∈ H . So there exists Q ⊂ P f , such that E [ X ] = sup E Q [ X ] for X ∈ H . Q ∈Q � Exercise 2.5 Prove that ˜ E θ is a sublinear expectation. § 3 Distributions, Independence and Product Spaces We now give the notion of distributions of random variables under sublinear expectations. Let X = ( X 1 , · · · , X n ) be a given n -dimensional random vector on a sublin- ear expectation space (Ω , H , E ). We define a functional on C l.Lip ( R n ) by F X [ ϕ ] := E [ ϕ ( X )] : ϕ ∈ C l.Lip ( R n ) → R . The triple ( R n , C l.Lip ( R n ) , F X ) forms a sublinear expectation space. F X is called the distribution of X under E . In the σ -additive situation (see Remark 2.2), we have the following form: � F X [ ϕ ] = sup R n ϕ ( x ) F X ( θ, dx ) . θ ∈ Θ

Definition 3.1 Let X_1 and X_2 be two n-dimensional random vectors defined on sublinear expectation spaces (Ω_1, H_1, E_1) and (Ω_2, H_2, E_2), respectively. They are called identically distributed, denoted by X_1 =d X_2, if

E_1[ϕ(X_1)] = E_2[ϕ(X_2)]  for ϕ ∈ C_l.Lip(R^n).

It is clear that X_1 =d X_2 if and only if their distributions coincide. We say that the distribution of X_1 is stronger than that of X_2 if

E_1[ϕ(X_1)] ≥ E_2[ϕ(X_2)]  for each ϕ ∈ C_l.Lip(R^n).

Remark 3.2 In the case of sublinear expectations, X_1 =d X_2 implies that the uncertainty subsets of distributions of X_1 and X_2 are the same, e.g., in the framework of Remark 2.2,

{F_{X_1}(θ_1, ·) : θ_1 ∈ Θ_1} = {F_{X_2}(θ_2, ·) : θ_2 ∈ Θ_2}.

Similarly, if the distribution of X_1 is stronger than that of X_2, then

{F_{X_1}(θ_1, ·) : θ_1 ∈ Θ_1} ⊃ {F_{X_2}(θ_2, ·) : θ_2 ∈ Θ_2}.

The distribution of X ∈ H has the following four typical parameters:

μ̄ := E[X],  μ := −E[−X],  σ̄² := E[X²],  σ² := −E[−X²].

The intervals [μ, μ̄] and [σ², σ̄²] characterize the mean-uncertainty and the variance-uncertainty of X, respectively.

The following property is very useful in our sublinear expectation theory.

Proposition 3.3 Let (Ω, H, E) be a sublinear expectation space and let X, Y be two random variables such that E[Y] = −E[−Y], i.e., Y has no mean-uncertainty. Then we have

E[X + αY] = E[X] + αE[Y]  for α ∈ R.

In particular, if E[Y] = E[−Y] = 0, then E[X + αY] = E[X].

Proof. We have

E[αY] = α⁺E[Y] + α⁻E[−Y] = α⁺E[Y] − α⁻E[Y] = αE[Y]  for α ∈ R.

Thus

E[X + αY] ≤ E[X] + E[αY] = E[X] + αE[Y] = E[X] − E[−αY] ≤ E[X + αY]. □

Definition 3.4 A sequence of n-dimensional random vectors {η_i}_{i=1}^∞ defined on a sublinear expectation space (Ω, H, E) is said to converge in distribution (or converge in law) under E if for each ϕ ∈ C_b.Lip(R^n), the sequence {E[ϕ(η_i)]}_{i=1}^∞ converges.

  11. 8 Chap.I Sublinear Expectations and Risk Measures The following result is easy to check. Proposition 3.5 Let { η i } ∞ i =1 converge in law in the above sense. Then the mapping F [ · ] : C b.Lip ( R n ) → R defined by i →∞ E [ ϕ ( η i )] for ϕ ∈ C b.Lip ( R n ) F [ ϕ ] := lim is a sublinear expectation defined on ( R n , C b.Lip ( R n )) . The following notion of independence plays a key role in the sublinear ex- pectation theory. Definition 3.6 In a sublinear expectation space (Ω , H , E ) , a random vector Y ∈ H n is said to be independent from another random vector X ∈ H m under E [ · ] if for each test function ϕ ∈ C l.Lip ( R m + n ) we have E [ ϕ ( X, Y )] = E [ E [ ϕ ( x, Y )] x = X ] . Remark 3.7 In a sublinear expectation space (Ω , H , E ) , Y is independent from X means that the uncertainty of distributions { F Y ( θ, · ) : θ ∈ Θ } of Y does not change after the realization of X = x . In other words, the “conditional sublinear expectation” of Y with respect to X is E [ ϕ ( x, Y )] x = X . In the case of linear expectation, this notion of independence is just the classical one. Remark 3.8 It is important to note that under sublinear expectations the con- dition “ Y is independent from X ” does not imply automatically that “ X is independent from Y ”. Example 3.9 We consider a case where E is a sublinear expectation and X, Y ∈ H are identically distributed with E [ X ] = E [ − X ] = 0 and σ 2 = E [ X 2 ] > σ 2 = − E [ − X 2 ] . We also assume that E [ | X | ] = E [ X + + X − ] > 0 , thus E [ X + ] = 1 2 E [ | X | + X ] = 1 2 E [ | X | ] > 0 . In the case where Y is independent from X , we have E [ XY 2 ] = E [ X + σ 2 − X − σ 2 ] = ( σ 2 − σ 2 ) E [ X + ] > 0 . But if X is independent from Y , we have E [ XY 2 ] = 0 . The independence property of two random vectors X, Y involves only the “joint distribution” of ( X, Y ). The following result tells us how to construct random vectors with given “marginal distributions” and with a specific direction of independence. Definition 3.10 Let (Ω i , H i , E i ) , i = 1 , 2 be two sublinear expectation spaces. We denote H 1 ⊗ H 2 := { Z ( ω 1 , ω 2 ) = ϕ ( X ( ω 1 ) , Y ( ω 2 )) : ( ω 1 , ω 2 ) ∈ Ω 1 × Ω 2 , ( X, Y ) ∈ H m 1 × H n 2 , ϕ ∈ C l.Lip ( R m + n ) } ,

  12. 9 § 3 Distributions, Independence and Product Spaces and, for each random variable of the above form Z ( ω 1 , ω 2 ) = ϕ ( X ( ω 1 ) , Y ( ω 2 )) , ϕ ( x ) := E 2 [ ϕ ( x, Y )] , x ∈ R m . ( E 1 ⊗ E 2 )[ Z ] := E 1 [ ¯ ϕ ( X )] , where ¯ It is easy to check that the triple (Ω 1 × Ω 2 , H 1 ⊗ H 2 , E 1 ⊗ E 2 ) forms a sublinear expectation space. We call it the product space of sublinear expectation spaces (Ω 1 , H 1 , E 1 ) and (Ω 2 , H 2 , E 2 ) . In this way, we can define the product space n n n � � � ( Ω i , H i , E i ) i =1 i =1 i =1 of given sublinear expectation spaces (Ω i , H i , E i ) , i = 1 , 2 , · · · , n . In partic- ular, when (Ω i , H i , E i ) = (Ω 1 , H 1 , E 1 ) we have the product space of the form 1 , H ⊗ n 1 , E ⊗ n (Ω n 1 ) . Let X, ¯ X be two n -dimensional random vectors on a sublinear expectation d X is called an independent copy of X if ¯ ¯ = X and ¯ space (Ω , H , E ). X X is independent from X . The following property is easy to check. Proposition 3.11 Let X i be an n i -dimensional random vector on sublinear expectation space (Ω i , H i , E i ) for i = 1 , · · · , n , respectively. We denote Y i ( ω 1 , · · · , ω n ) := X i ( ω i ) , i = 1 , · · · , n. Then Y i , i = 1 , · · · , n , are random vectors on ( � n i =1 Ω i , � n i =1 H i , � n i =1 E i ) . d Moreover we have Y i = X i and Y i +1 is independent from ( Y 1 , · · · , Y i ) , for each i . d Furthermore, if (Ω i , H i , E i ) = (Ω 1 , H 1 , E 1 ) and X i = X 1 , for all i , then we d also have Y i = Y 1 . In this case Y i is said to be an independent copy of Y 1 for i = 2 , · · · , n . Remark 3.12 In the above construction the integer n can be also infinite. In this case each random variable X ∈ � ∞ i =1 H i belongs to ( � k i =1 Ω i , � k i =1 H i , � k i =1 E i ) for some positive integer k < ∞ and ∞ k � � E i [ X ] := E i [ X ] . i =1 i =1 Example 3.13 We consider a situation where two random variables X and Y in H are identically distributed and their common distribution is � F X [ ϕ ] = F Y [ ϕ ] = sup ϕ ( y ) F ( θ, dy ) for ϕ ∈ C l.Lip ( R ) , θ ∈ Θ R where for each θ ∈ Θ , { F ( θ, A ) } A ∈B ( R ) is a probability measure on ( R , B ( R )) . In this case, ” Y is independent from X ” means that the joint distribution of X and Y is � � � � F ( θ 1 , dx ) for ψ ∈ C l.Lip ( R 2 ) . F X,Y [ ψ ] = sup sup ψ ( x, y ) F ( θ 2 , dy ) θ 1 ∈ Θ θ 2 ∈ Θ R R

Remark 3.14 The situation "Y is independent from X" often appears when Y occurs after X; thus a robust expectation should take the information of X into account.

Exercise 3.15 Suppose X, Y ∈ H^d and Y is an independent copy of X. Prove that for each a ∈ R, b ∈ R^d, a + ⟨b, Y⟩ is an independent copy of a + ⟨b, X⟩.

Exercise 3.16 Let (Ω, H, E) be a sublinear expectation space. Prove that if E[ϕ(X)] = E[ϕ(Y)] for any ϕ ∈ C_b.Lip, then it still holds for any ϕ ∈ C_l.Lip. That is, we can replace ϕ ∈ C_l.Lip in Definition 3.1 by ϕ ∈ C_b.Lip.

§ 4 Completion of Sublinear Expectation Spaces

Let (Ω, H, E) be a sublinear expectation space. We first give the following well-known elementary inequalities.

Lemma 4.1 For r > 0 and 1 < p, q < ∞ with 1/p + 1/q = 1, we have

|a + b|^r ≤ max{1, 2^{r−1}}(|a|^r + |b|^r)  for a, b ∈ R,   (4.1)
|ab| ≤ |a|^p/p + |b|^q/q.   (4.2)

Proposition 4.2 For each X, Y ∈ H, we have

E[|X + Y|^r] ≤ 2^{r−1}(E[|X|^r] + E[|Y|^r]),   (4.3)
E[|XY|] ≤ (E[|X|^p])^{1/p} · (E[|Y|^q])^{1/q},   (4.4)
(E[|X + Y|^p])^{1/p} ≤ (E[|X|^p])^{1/p} + (E[|Y|^p])^{1/p},   (4.5)

where r ≥ 1 and 1 < p, q < ∞ with 1/p + 1/q = 1.

In particular, for 1 ≤ p < p′, we have (E[|X|^p])^{1/p} ≤ (E[|X|^{p′}])^{1/p′}.

Proof. The inequality (4.3) follows from (4.1). For the case E[|X|^p] · E[|Y|^q] > 0, we set

ξ = X / (E[|X|^p])^{1/p},  η = Y / (E[|Y|^q])^{1/q}.

By (4.2) we have

E[|ξη|] ≤ E[ |ξ|^p/p + |η|^q/q ] ≤ E[|ξ|^p]/p + E[|η|^q]/q = 1/p + 1/q = 1.

  14. 11 § 4 Completion of Sublinear Expectation Spaces Thus (4.4) follows. For the case E [ | X | p ] · E [ | Y | q ] = 0 , we consider E [ | X | p ] + ε and E [ | Y | q ] + ε for ε > 0 . Applying the above method and letting ε → 0 , we get (4.4). We now prove (4.5). We only consider the case E [ | X + Y | p ] > 0. E [ | X + Y | p ] = E [ | X + Y | · | X + Y | p − 1 ] ≤ E [ | X | · | X + Y | p − 1 ] + E [ | Y | · | X + Y | p − 1 ] ≤ ( E [ | X | p ]) 1 /p · ( E [ | X + Y | ( p − 1) q ]) 1 /q + ( E [ | Y | p ]) 1 /p · ( E [ | X + Y | ( p − 1) q ]) 1 /q . Since ( p − 1) q = p , we have (4.5). By(4.4), it is easy to deduce that ( E [ | X | p ]) 1 /p ≤ ( E [ | X | p ′ ]) 1 /p ′ for 1 ≤ p < p ′ . � For each fixed p ≥ 1, we observe that H p 0 = { X ∈ H , E [ | X | p ] = 0 } is a linear subspace of H . Taking H p 0 as our null space, we introduce the quotient space H / H p 0 . Observing that, for every { X } ∈ H / H p 0 with a representation X ∈ H , we can define an expectation E [ { X } ] := E [ X ] which is still a sublinear 1 p . By Proposition 4.2, it is easy to check expectation. We set � X � p := ( E [ | X | p ]) that �·� p forms a Banach norm on H / H p 0 . We extend H / H p 0 to its completion H p under this norm, then ( ˆ ˆ H p , �·� p ) is a Banach space. In particular, when p = 1 , we denote it by ( ˆ H , �·� ) . For each X ∈ H , the mappings X + ( ω ) : H → H X − ( ω ) : H → H and satisfy | X + − Y + | ≤ | X − Y | | X − − Y − | = | ( − X ) + − ( − Y ) + | ≤ | X − Y | . and Thus they are both contraction mappings under �·� p and can be continuously extended to the Banach space ( ˆ H p , �·� p ). We can define the partial order “ ≥ ” in this Banach space. Definition 4.3 An element X in ( ˆ H , �·� ) is said to be nonnegative, or X ≥ 0 , 0 ≤ X , if X = X + . We also denote by X ≥ Y , or Y ≤ X , if X − Y ≥ 0 . It is easy to check that X ≥ Y and Y ≥ X imply X = Y on ( ˆ H p , �·� p ). For each X, Y ∈ H , note that | E [ X ] − E [ Y ] | ≤ E [ | X − Y | ] ≤ || X − Y || p . Thus the sublinear expectation E [ · ] can be continuously extended to ( ˆ H p , �·� p ) on which it is still a sublinear expectation. Let (Ω , H , E 1 ) be a nonlinear expectation space. E 1 is said to be dominated by E if

  15. 12 Chap.I Sublinear Expectations and Risk Measures E 1 [ X ] − E 1 [ Y ] ≤ E [ X − Y ] for X, Y ∈ H . From this we can easily deduce that | E 1 [ X ] − E 1 [ Y ] | ≤ E [ | X − Y | ] , thus the nonlinear expectation E 1 [ · ] can be continuously extended to ( ˆ H p , �·� p ) on which it is still a nonlinear expectation. ˆ Remark 4.4 It is important to note that X 1 , · · · , X n ∈ H does not imply ϕ ( X 1 , · · · , X n ) ∈ ˆ H for each ϕ ∈ C l.Lip ( R n ) . Thus, when we talk about the no- tions of distributions, independence and product spaces on (Ω , ˆ H , E ) , the space C l.Lip ( R n ) is replaced by C b.Lip ( R n ) unless otherwise stated. Exercise 4.5 Prove that the inequalities (4.3) , (4.4) , (4.5) still hold for (Ω , ˆ H , E ) . § 5 Coherent Measures of Risk Let the pair (Ω , H ) be such that Ω is a set of scenarios and H is the collection of all possible risk positions in a financial market. If X ∈ H , then for each constant c , X ∨ c , X ∧ c are all in H . One typical example in finance is that X is the tomorrow’s price of a stock. In this case, any European call or put options with strike price K of forms ( S − K ) + , ( K − S ) + are in H . A risk supervisor is responsible for taking a rule to tell traders, securities companies, banks or other institutions under his supervision, which kind of risk positions is unacceptable and thus a minimum amount of risk capitals should be deposited to make the positions acceptable. The collection of acceptable positions is defined by A = { X ∈ H : X is acceptable } . This set has meaningful properties in economy. Definition 5.1 A set A is called a coherent acceptable set if it satisfies (i) Monotonicity: X ∈ A , Y ≥ X imply Y ∈ A . (ii) 0 ∈ A but − 1 �∈ A . (iii) Positive homogeneity X ∈ A implies λX ∈ A for λ ≥ 0 . (iv) Convexity: X, Y ∈ A imply αX + (1 − α ) Y ∈ A for α ∈ [0 , 1] .

Remark 5.2 (iii)+(iv) imply

(v) Sublinearity: X, Y ∈ A ⇒ μX + νY ∈ A for μ, ν ≥ 0.

Remark 5.3 If the set A only satisfies (i), (ii) and (iv), then A is called a convex acceptable set.

In this section we mainly study the coherent case. Once the rule of the acceptable set is fixed, the minimum requirement of risk deposit is then automatically determined.

Definition 5.4 Given a coherent acceptable set A, the functional ρ(·) defined by

ρ(X) = ρ_A(X) := inf{m ∈ R : m + X ∈ A},  X ∈ H,

is called the coherent risk measure related to A. It is easy to see that ρ(X + ρ(X)) = 0.

Proposition 5.5 ρ(·) is a coherent risk measure, i.e., it satisfies the following four properties:

(i) Monotonicity: If X ≥ Y then ρ(X) ≤ ρ(Y).
(ii) Constant preserving: ρ(1) = −ρ(−1) = −1.
(iii) Sub-additivity: For each X, Y ∈ H, ρ(X + Y) ≤ ρ(X) + ρ(Y).
(iv) Positive homogeneity: ρ(λX) = λρ(X) for λ ≥ 0.

Proof. (i) and (ii) are obvious. We now prove (iii). Indeed,

ρ(X + Y) = inf{m ∈ R : m + (X + Y) ∈ A}
         = inf{m + n : m, n ∈ R, (m + X) + (n + Y) ∈ A}
         ≤ inf{m ∈ R : m + X ∈ A} + inf{n ∈ R : n + Y ∈ A}
         = ρ(X) + ρ(Y).

To prove (iv), note that the case λ = 0 is trivial; when λ > 0,

ρ(λX) = inf{m ∈ R : m + λX ∈ A} = λ inf{n ∈ R : n + X ∈ A} = λρ(X),

where n = m/λ. □

Obviously, if E is a sublinear expectation and we define ρ(X) := E[−X], then ρ is a coherent risk measure. Conversely, if ρ is a coherent risk measure and we define E[X] := ρ(−X), then E is a sublinear expectation.

Exercise 5.6 Let ρ(·) be a coherent risk measure. We can then define A_ρ := {X ∈ H : ρ(X) ≤ 0}. Prove that A_ρ is a coherent acceptable set.
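The correspondence ρ(X) = E[−X] between coherent risk measures and sublinear expectations can be combined with the scenario representation of § 2. Below is a minimal sketch (our own toy example with a finite scenario set and a finite family of probability models, not taken from the book) showing the required capital, cash translatability and sub-additivity numerically.

```python
import numpy as np

# Toy setup (ours): 4 market scenarios, 3 candidate probability models.
P_family = np.array([
    [0.25, 0.25, 0.25, 0.25],
    [0.40, 0.30, 0.20, 0.10],
    [0.10, 0.20, 0.30, 0.40],
])

def sublinear_E(X):
    """E[X] = supremum over the model family of the linear expectations."""
    return (P_family @ X).max()

def rho(X):
    """Coherent risk measure induced by E via rho(X) = E[-X]:
    the smallest capital m such that m + X is acceptable under every model."""
    return sublinear_E(-X)

# A risk position: payoffs of a portfolio in the 4 scenarios.
X = np.array([3.0, 1.0, -2.0, -5.0])
m = rho(X)
print("required capital rho(X) =", m)
# Cash translatability: adding the required capital makes the position acceptable.
print("rho(X + rho(X)) =", rho(X + m))            # = 0 up to rounding
# Sub-additivity (diversification does not increase risk):
Y = np.array([-1.0, 2.0, 0.5, 1.0])
assert rho(X + Y) <= rho(X) + rho(Y) + 1e-12
```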

Notes and Comments

The sublinear expectation is also called the upper expectation (see Huber (1981) [59] in robust statistics), or the upper prevision in the theory of imprecise probabilities (see Walley (1991) [118] and the rich literature provided in the notes of that book). To our knowledge, the Representation Theorem 2.1 was first obtained for the case where Ω is a finite set by [59], and this theorem was rediscovered independently by Artzner, Delbaen, Eber and Heath (1999) [3] and then by Delbaen (2002) [35] for a general Ω. A typical example of a dynamic nonlinear expectation, called a g-expectation (small g), was introduced in Peng (1997) [90] in the framework of backward stochastic differential equations. Readers are referred to Briand, Coquet, Hu, Mémin and Peng [14], Chen [18], Chen and Epstein [19], Chen, Kulperger and Jiang [20], Chen and Peng [21] and [22], Coquet, Hu, Mémin and Peng [26], [27], Jiang [67], Jiang and Chen [68, 69], Peng [92] and [95], Peng and Xu [105] and Rosazza [110] for further developments of this theory. The notions of distributions and independence under nonlinear expectations appear to be new. We think that these notions are perfectly adapted for the further development of dynamic nonlinear expectations. For other types of related notions of distributions and independence under nonlinear expectations or non-additive probabilities, we refer to the notes of the book [118] and the references listed in Marinacci (1999) [81] and Maccheroni and Marinacci (2005) [82]. Coherent risk measures can also be regarded as sublinear expectations defined on the space of risk positions in a financial market. This notion was first introduced in [3]. Readers are also referred to the well-known book of Föllmer and Schied (2004) [51] for a systematic presentation of coherent and convex risk measures. For dynamic risk measures in continuous time, see [110] or [95], and Barrieu and El Karoui (2004) [9] using g-expectations. Super-hedging and super-pricing (see El Karoui and Quenez (1995) [43] and El Karoui, Peng and Quenez (1997) [44]) are also closely related to this formulation.

Chapter II  Law of Large Numbers and Central Limit Theorem

In this chapter, we first introduce two types of fundamentally important distributions in the theory of sublinear expectations, namely, the maximal distribution and the G-normal distribution. The former corresponds to constants and the latter corresponds to the normal distribution in classical probability theory. We then present the law of large numbers (LLN) and central limit theorem (CLT) under sublinear expectations. It is worth pointing out that the limit in the LLN is a maximal distribution and the limit in the CLT is a G-normal distribution.

§ 1 Maximal Distribution and G-normal Distribution

We first define a special type of very simple distributions which are frequently used in practice, known as the "worst case risk measure".

Definition 1.1 (maximal distribution) A d-dimensional random vector η = (η_1, ..., η_d) on a sublinear expectation space (Ω, H, E) is called maximal distributed if there exists a bounded, closed and convex subset Γ ⊂ R^d such that

E[ϕ(η)] = max_{y ∈ Γ} ϕ(y).

Remark 1.2 Here Γ gives the degree of uncertainty of η. It is easy to check that such a maximal distributed random vector η satisfies

aη + bη̄ =d (a + b)η  for a, b ≥ 0,

where η̄ is an independent copy of η. We will see later that this relation in fact characterizes a maximal distribution. The maximal distribution is also called the "worst case risk measure" in finance.

Remark 1.3 When d = 1 we have Γ = [μ, μ̄], where μ̄ = E[η] and μ = −E[−η]. The distribution of η is

F_η[ϕ] = E[ϕ(η)] = sup_{μ ≤ y ≤ μ̄} ϕ(y)  for ϕ ∈ C_l.Lip(R).

Recall a well-known characterization: X =d N(0, Σ) if and only if

aX + bX̄ =d √(a² + b²) X  for a, b ≥ 0,   (1.1)

where X̄ is an independent copy of X. The covariance matrix Σ is defined by Σ = E[XXᵀ]. We now consider the so-called G-normal distribution in the situation of probability model uncertainty. Its existence, uniqueness and characterization will be given later.

Definition 1.4 (G-normal distribution) A d-dimensional random vector X = (X_1, ..., X_d)ᵀ on a sublinear expectation space (Ω, H, E) is called (centralized) G-normal distributed if

aX + bX̄ =d √(a² + b²) X  for a, b ≥ 0,

where X̄ is an independent copy of X.

Remark 1.5 Noting that E[X + X̄] = 2E[X] and E[X + X̄] = E[√2 X] = √2 E[X], we then have E[X] = 0. Similarly, we can prove that E[−X] = 0. Namely, X has no mean-uncertainty.

The following property is easy to prove by the definition.

Proposition 1.6 Let X be G-normal distributed. Then for each A ∈ R^{m×d}, AX is also G-normal distributed. In particular, for each a ∈ R^d, ⟨a, X⟩ is a 1-dimensional G-normal distributed random variable; but the converse is not true (see Exercise 1.15).

We denote by S(d) the collection of all d × d symmetric matrices. Let X be G-normal distributed and η be maximal distributed d-dimensional random vectors on (Ω, H, E). The following function is very important for characterizing their distributions:

G(p, A) := E[ (1/2)⟨AX, X⟩ + ⟨p, η⟩ ],  (p, A) ∈ R^d × S(d).   (1.2)

It is easy to check that G is a sublinear function, monotone in A ∈ S(d), in the following sense: for each p, p̄ ∈ R^d and A, Ā ∈ S(d),

G(p + p̄, A + Ā) ≤ G(p, A) + G(p̄, Ā),
G(λp, λA) = λG(p, A) for all λ ≥ 0,   (1.3)
G(p, A) ≥ G(p, Ā) if A ≥ Ā.

  20. 17 § 1 Maximal Distribution and G -normal Distribution Clearly, G is also a continuous function. By Theorem 2.1 in Chap.I, there exists a bounded and closed subset Γ ⊂ R d × R d × d such that [1 2tr[ AQQ T ] + � p, q � ] for ( p, A ) ∈ R d × S ( d ) . G ( p, A ) = sup (1.4) ( q,Q ) ∈ Γ We have the following result, which will be proved in the next section. Proposition 1.7 Let G : R d × S ( d ) → R be a given sublinear and continuous function, monotonic in A ∈ S ( d ) in the sense of (1.3) . Then there exists a G - normal distributed d -dimensional random vector X and a maximal distributed d -dimensional random vector η on some sublinear expectation space (Ω , H , E ) satisfying (1.2) and � d a 2 + b 2 X, ( a 2 + b 2 ) η ) , ( aX + b ¯ X, a 2 η + b 2 ¯ η ) = ( for a, b ≥ 0 , (1.5) where ( ¯ X, ¯ η ) is an independent copy of ( X, η ) . Definition 1.8 The pair ( X, η ) satisfying (1.5) is called G -distributed . Remark 1.9 In fact, if the pair ( X, η ) satisfies (1.5) , then � d a 2 + b 2 X, aη + b ¯ d aX + b ¯ X = η = ( a + b ) η for a, b ≥ 0 . Thus X is G -normal and η is maximal distributed. The above pair ( X, η ) is characterized by the following parabolic partial differential equation (PDE for short) defined on [0 , ∞ ) × R d × R d : ∂ t u − G ( D y u, D 2 x u ) = 0 , (1.6) u | t =0 = ϕ , where G : R d × S ( d ) → R is defined by with Cauchy condition (1.2) and D 2 u = ( ∂ 2 x i x j u ) d i,j =1 , Du = ( ∂ x i u ) d i =1 . The PDE (1.6) is called a G -equation . In this book we will mainly use the notion of viscosity solution to describe the solution of this PDE. For reader’s convenience, we give a systematical intro- duction of the notion of viscosity solution and its related properties used in this book (see Appendix C, Section 1-3). It is worth to mention here that for the case where G is non-degenerate, the viscosity solution of the G -equation becomes a classical C 1 , 2 solution (see Appendix C, Section 4). Readers without knowledge of viscosity solutions can simply understand solutions of the G -equation in the classical sense along the whole book. Proposition 1.10 For the pair ( X, η ) satisfying (1.5) and a function ϕ ∈ C l.Lip ( R d × R d ) , we define √ tX, y + tη )] , ( t, x, y ) ∈ [0 , ∞ ) × R d × R d . u ( t, x, y ) := E [ ϕ ( x +

  21. 18 Chap.II Law of Large Numbers and Central Limit Theorem Then we have u ( t + s, x, y ) = E [ u ( t, x + √ sX, y + sη )] , s ≥ 0 . (1.7) We also have the estimates: for each T > 0 , there exist constants C, k > 0 such y ∈ R d , that, for all t, s ∈ [0 , T ] and x, ¯ x, y, ¯ y ) | ≤ C (1 + | x | k + | y | k + | ¯ x | k + | ¯ y | k )( | x − ¯ | u ( t, x, y ) − u ( t, ¯ x, ¯ x | + | y − ¯ y | ) (1.8) and | u ( t, x, y ) − u ( t + s, x, y ) | ≤ C (1 + | x | k + | y | k )( s + | s | 1 / 2 ) . (1.9) Moreover, u is the unique viscosity solution, continuous in the sense of (1.8) and (1.9) , of the PDE (1.6) . Proof. Since √ √ u ( t, x, y ) − u ( t, ¯ y ) = E [ ϕ ( x + tX, y + tη )] − E [ ϕ (¯ x, ¯ x + tX, ¯ y + tη )] √ √ ≤ E [ ϕ ( x + tX, y + tη ) − ϕ (¯ x + tX, ¯ y + tη )] ≤ E [ C 1 (1 + | X | k + | η | k + | x | k + | y | k + | ¯ x | k + | ¯ y | k )] × ( | x − ¯ x | + | y − ¯ y | ) ≤ C (1 + | x | k + | y | k + | ¯ x | k + | ¯ y | k )( | x − ¯ x | + | y − ¯ y | ) , we have (1.8). Let ( ¯ X, ¯ η ) be an independent copy of ( X, η ). By (1.5), √ u ( t + s, x, y ) = E [ ϕ ( x + t + sX, y + ( t + s ) η )] √ = E [ ϕ ( x + √ sX + t ¯ X, y + sη + t ¯ η )] √ = E [ E [ ϕ ( x + √ s � t ¯ x + X, y + s � y + t ¯ η )] ( e y )=( X,η ) ] x, e = E [ u ( t, x + √ sX, y + sη )] , we thus obtain (1.7). From this and (1.8) it follows that u ( t + s, x, y ) − u ( t, x, y ) = E [ u ( t, x + √ sX, y + sη ) − u ( t, x, y )] ≤ E [ C 1 (1 + | x | k + | y | k + | X | k + | η | k )( √ s | X | + s | η | )] , thus we obtain (1.9). Now, for a fixed ( t, x, y ) ∈ (0 , ∞ ) × R d × R d , let ψ ∈ C 2 , 3 ([0 , ∞ ) × R d × R d ) b be such that ψ ≥ u and ψ ( t, x, y ) = u ( t, x, y ). By (1.7) and Taylor’s expansion, it follows that, for δ ∈ (0 , t ), √ 0 ≤ E [ ψ ( t − δ, x + δX, y + δη ) − ψ ( t, x, y )] C ( δ 3 / 2 + δ 2 ) − ∂ t ψ ( t, x, y ) δ ≤ ¯ √ � � δ + � D y ψ ( t, x, y ) , η � δ + 1 D 2 + E [ � D x ψ ( t, x, y ) , X � x ψ ( t, x, y ) X, X δ ] 2 � � = − ∂ t ψ ( t, x, y ) δ + E [ � D y ψ ( t, x, y ) , η � + 1 C ( δ 3 / 2 + δ 2 ) ] δ + ¯ D 2 x ψ ( t, x, y ) X, X 2 C ( δ 3 / 2 + δ 2 ) , x ψ )( t, x, y ) + ¯ = − ∂ t ψ ( t, x, y ) δ + δG ( D y ψ, D 2

  22. 19 § 1 Maximal Distribution and G -normal Distribution from which it is easy to check that [ ∂ t ψ − G ( D y ψ, D 2 x ψ )]( t, x, y ) ≤ 0 . Thus u is a viscosity subsolution of (1.6). Similarly we can prove that u is a viscosity supersolution of (1.6). � Corollary 1.11 If both ( X, η ) and ( ¯ X, ¯ η ) satisfy (1.5) with the same G , i.e., � � G ( p, A ) := E [1 2 � AX, X � + � p, η � ] = E [1 A ¯ X, ¯ for ( p, A ) ∈ R d × S ( d ) , X + � p, ¯ η � ] 2 d d = ( ¯ then ( X, η ) X, ¯ η ) . In particular, X = − X . Proof. For each ϕ ∈ C l.Lip ( R d × R d ) , we set √ u ( t, x, y ) := E [ ϕ ( x + tX, y + tη )] , √ η )] , ( t, x, y ) ∈ [0 , ∞ ) × R d × R d . t ¯ u ( t, x, y ) := E [ ϕ ( x + ¯ X, y + t ¯ By Proposition 1.10, both u and ¯ u are viscosity solutions of the G -equation (1.6) with Cauchy condition u | t =0 = ¯ u | t =0 = ϕ . It follows from the uniqueness of the viscosity solution that u ≡ ¯ u . In particular, E [ ϕ ( X, η )] = E [ ϕ ( ¯ X, ¯ η )]. d = ( ¯ Thus ( X, η ) X, ¯ η ). � Corollary 1.12 Let ( X, η ) satisfy (1.5) . For each ψ ∈ C l.Lip ( R d ) we define √ tX + tη )] , ( t, x ) ∈ [0 , ∞ ) × R d . v ( t, x ) := E [ ψ (( x + (1.10) Then v is the unique viscosity solution of the following parabolic PDE: ∂ t v − G ( D x v, D 2 x v ) = 0 , v | t =0 = ψ. (1.11) Moreover, we have v ( t, x + y ) ≡ u ( t, x, y ) , where u is the solution of the PDE (1.6) with initial condition u ( t, x, y ) | t =0 = ψ ( x + y ) . Example 1.13 Let X be G -normal distributed. The distribution of X is char- acterized by √ ϕ ∈ C l.Lip ( R d ) . u ( t, x ) = E [ ϕ ( x + tX )] , In particular, E [ ϕ ( X )] = u (1 , 0) , where u is the solution of the following parabolic PDE defined on [0 , ∞ ) × R d : ∂ t u − G ( D 2 u ) = 0 , u | t =0 = ϕ, (1.12) where G = G X ( A ) : S ( d ) → R is defined by G ( A ) := 1 2 E [ � AX, X � ] , A ∈ S ( d ) .

The parabolic PDE (1.12) is called a G-heat equation.

It is easy to check that G is a sublinear function defined on S(d). By Theorem 2.1 in Chap. I, there exists a bounded, convex and closed subset Θ ⊂ S(d) such that

(1/2) E[⟨AX, X⟩] = G(A) = (1/2) sup_{Q ∈ Θ} tr[AQ],  A ∈ S(d).   (1.13)

Since G(A) is monotone, i.e., G(A_1) ≥ G(A_2) for A_1 ≥ A_2, it follows that

Θ ⊂ S_+(d) = {θ ∈ S(d) : θ ≥ 0} = {BBᵀ : B ∈ R^{d×d}},

where R^{d×d} is the set of all d × d matrices. If Θ is a singleton, Θ = {Q}, then X is classical zero-mean normal distributed with covariance Q. In general, Θ characterizes the covariance-uncertainty of X. We denote X =d N({0} × Θ) (recall equation (1.4); we can set (q, Q) ∈ {0} × Θ).

When d = 1, we have X =d N({0} × [σ², σ̄²]) (which we also denote by X =d N(0, [σ², σ̄²])), where σ̄² = E[X²] and σ² = −E[−X²]. The corresponding G-heat equation is

∂_t u − (1/2)( σ̄² (∂²_xx u)⁺ − σ² (∂²_xx u)⁻ ) = 0,  u|_{t=0} = ϕ.

For the case σ² > 0, this equation is also called the Barenblatt equation.

In the following two typical situations, the calculation of E[ϕ(X)] is very easy:

• For each convex function ϕ, we have

E[ϕ(X)] = (1/√(2π)) ∫_{−∞}^{∞} ϕ(σ̄ y) exp(−y²/2) dy.

Indeed, for each fixed t ≥ 0, it is easy to check that the function u(t, x) := E[ϕ(x + √t X)] is convex in x:

u(t, αx + (1 − α)y) = E[ϕ(αx + (1 − α)y + √t X)]
                    ≤ α E[ϕ(x + √t X)] + (1 − α) E[ϕ(y + √t X)]
                    = α u(t, x) + (1 − α) u(t, y).

It follows that (∂²_xx u)⁻ ≡ 0, and thus the above G-heat equation becomes

∂_t u = (σ̄²/2) ∂²_xx u,  u|_{t=0} = ϕ.

• For each concave function ϕ, we have

E[ϕ(X)] = (1/√(2π)) ∫_{−∞}^{∞} ϕ(σ y) exp(−y²/2) dy.
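In dimension one the G-heat equation can be solved by a standard explicit finite-difference scheme, which gives a direct way to compute E[ϕ(X)] for X =d N({0} × [σ², σ̄²]) and to check the two formulas above. The following sketch is our own illustration (grid sizes, domain truncation and boundary handling are ad hoc choices, not from the book).

```python
import numpy as np

def g_heat_expectation(phi, sig_low, sig_high, t=1.0, L=8.0, nx=801, nt=20000):
    """Approximate E[phi(sqrt(t)*X)] for X ~ N({0} x [sig_low^2, sig_high^2])
    by solving  du/dt = (sig_high^2 (u_xx)^+ - sig_low^2 (u_xx)^-)/2,
    u(0, x) = phi(x), with an explicit scheme on [-L, L]."""
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    dt = t / nt
    assert dt <= dx**2 / sig_high**2, "explicit-scheme stability condition"
    u = phi(x)
    for _ in range(nt):
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * 0.5 * (sig_high**2 * np.maximum(uxx, 0.0)
                            - sig_low**2 * np.maximum(-uxx, 0.0))
    return u[nx // 2]          # value at x = 0, i.e. E[phi(sqrt(t) X)]

if __name__ == "__main__":
    sig_low, sig_high = 0.5, 1.0
    # Convex payoff: matches the classical N(0, sig_high^2) value (~ 1.0).
    print(g_heat_expectation(lambda x: x**2, sig_low, sig_high))
    # Concave payoff: matches the classical N(0, sig_low^2) value (~ -0.25).
    print(g_heat_expectation(lambda x: -x**2, sig_low, sig_high))
    # Neither convex nor concave: E[X^3] comes out strictly positive
    # (cf. Exercise 2.2 below), while both classical normal values vanish.
    print(g_heat_expectation(lambda x: x**3, sig_low, sig_high))
```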

In particular,

E[X] = E[−X] = 0,  E[X²] = σ̄²,  −E[−X²] = σ²,  E[X⁴] = 3σ̄⁴,  −E[−X⁴] = 3σ⁴.

Example 1.14 Let η be maximal distributed. The distribution of η is characterized by the following parabolic PDE defined on [0, ∞) × R^d:

∂_t u − g(Du) = 0,  u|_{t=0} = ϕ,   (1.14)

where g = g_η(p) : R^d → R is defined by

g_η(p) := E[⟨p, η⟩],  p ∈ R^d.

It is easy to check that g_η is a sublinear function defined on R^d. By Theorem 2.1 in Chap. I, there exists a bounded, convex and closed subset Θ̄ ⊂ R^d such that

g(p) = sup_{q ∈ Θ̄} ⟨p, q⟩,  p ∈ R^d.   (1.15)

By this characterization, we can prove that the distribution of η is given by

F_η[ϕ] = E[ϕ(η)] = sup_{v ∈ Θ̄} ϕ(v) = sup_{v ∈ Θ̄} ∫_{R^d} ϕ(x) δ_v(dx),  ϕ ∈ C_l.Lip(R^d),   (1.16)

where δ_v is the Dirac measure concentrated at v. Namely, it is the maximal distribution whose uncertainty subset of probabilities consists of the Dirac measures concentrated on Θ̄. We denote η =d N(Θ̄ × {0}) (recall equation (1.4); we can set (q, Q) ∈ Θ̄ × {0}).

In particular, for d = 1,

g_η(p) := E[pη] = μ̄ p⁺ − μ p⁻,  p ∈ R,

where μ̄ = E[η] and μ = −E[−η]. The distribution of η is given by (1.16). We denote η =d N([μ, μ̄] × {0}).

Exercise 1.15 We consider X = (X_1, X_2), where X_1 =d N({0} × [σ², σ̄²]) with σ̄ > σ, and X_2 is an independent copy of X_1. Show that
(1) for each a ∈ R², ⟨a, X⟩ is a 1-dimensional G-normal distributed random variable;
(2) X is not G-normal distributed.

Exercise 1.16 Let X be G-normal distributed. For each ϕ ∈ C_l.Lip(R^d), we define a function

u(t, x) := E[ϕ(x + √t X)],  (t, x) ∈ [0, ∞) × R^d.

Show that u is the unique viscosity solution of the PDE (1.12) with Cauchy condition u|_{t=0} = ϕ.

  25. 22 Chap.II Law of Large Numbers and Central Limit Theorem For each ϕ ∈ C l.Lip ( R d ) , we Exercise 1.17 Let η be maximal distributed. define a function u ( t, y ) := E [ ϕ ( y + tη )] , ( t, y ) ∈ [0 , ∞ ) × R d . Show that u is the unique viscosity solution of the PDE (1.14) with Cauchy condition u | t =0 = ϕ. § 2 Existence of G -distributed Random Variables In this section, we give the proof of the existence of G-distributed random variables, namely, the proof of Proposition 1.7. Let G : R d × S ( d ) → R be a given sublinear function monotonic in A ∈ S ( d ) in the sense of (1.3). We now construct a pair of d -dimensional random vectors ( X, η ) on some sublinear expectation space (Ω , H , E ) satisfying (1.2) and (1.5). For each ϕ ∈ C l.Lip ( R 2 d ) , let u = u ϕ be the unique viscosity solution of the G -equation (1.6) with u ϕ | t =0 = ϕ . We take � Ω = R 2 d , � H = C l.Lip ( R 2 d ) ω = ( x, y ) ∈ R 2 d . The corresponding sublinear expectation � E [ · ] is defined and � by � E [ ξ ] = u ϕ (1 , 0 , 0), for each ξ ∈ � H of the form ξ ( � ω ) = ( ϕ ( x, y )) ( x,y ) ∈ R 2 d ∈ C l.Lip ( R 2 d ). The monotonicity and sub-additivity of u ϕ with respect to ϕ are known in the theory of viscosity solution. For reader’s convenience we provide a new and simple proof in Appendix C (see Corollary 2.4 and Corollary 2.5). The constant preserving and positive homogeneity of � E [ · ] are easy to check. Thus the functional � E [ · ] : � H → R forms a sublinear expectation. We now consider a pair of d -dimensional random vectors ( � X, � η )( � ω ) = ( x, y ). We have � E [ ϕ ( � η )] = u ϕ (1 , 0 , 0) for ϕ ∈ C l.Lip ( R 2 d ) . X, � In particular, just setting ϕ 0 ( x, y ) = 1 2 � Ax, x � + � p, y � , we can check that u ϕ 0 ( t, x, y ) = G ( p, A ) t + 1 2 � Ax, x � + � p, y � . We thus have � � E [1 ( p, A ) ∈ R d × S ( d ) . � A � X, � η � ] = u ϕ 0 (1 , 0 , 0) = G ( p, A ) , X + � p, � 2 We construct a product space (Ω , H , E ) = ( � Ω × � Ω , � H ⊗ � H , � E ⊗ � E ) , and introduce two pairs of random vectors ω 1 , ( ¯ ω 2 ) ∈ � Ω × � ( X, η )( � ω 1 , � ω 2 ) = � X, ¯ η )( � ω 1 , � ω 2 ) = � ω 2 , ( � ω 1 , � Ω . d = ( � η ) and ( ¯ By Proposition 3.11 in Chap.I, ( X, η ) X, � X, ¯ η ) is an independent copy of ( X, η ).

  26. 23 § 3 Law of Large Numbers and Central Limit Theorem We now want to prove that the distribution of ( X, η ) satisfies condition (1.5). For each ϕ ∈ C l.Lip ( R 2 d ) and for each fixed λ > 0, (¯ y ) ∈ R 2 d , since x, ¯ √ the function v defined by v ( t, x, y ) := u ϕ ( λt, ¯ x + λx, ¯ y + λy ) solves exactly the same equation (1.6), but with Cauchy condition √ v | t =0 = ϕ (¯ x + λ × · , ¯ y + λ × · ) . Thus √ y + λη )] = v (1 , 0 , 0) = u ϕ ( λ, ¯ E [ ϕ (¯ x + λX, ¯ x, ¯ y ) . By the definition of E , for each t > 0 and s > 0, √ √ tX + √ s ¯ tx + √ s ¯ E [ ϕ ( η )] = E [ E [ ϕ ( X, tη + s ¯ X, ty + s ¯ η )] ( x,y )=( X,η ) ] √ tX, tη )] = u u ϕ ( s, · , · ) ( t, 0 , 0) = E [ u ϕ ( s, = u ϕ ( t + s, 0 , 0) √ = E [ ϕ ( t + sX, ( t + s ) η )] . √ tX + √ s ¯ = ( √ t + sX, ( t + s ) η ). Thus the distribution of d Namely ( X, tη + s ¯ η ) ( X, η ) satisfies condition (1.5). Remark 2.1 From now on, when we mention the sublinear expectation space (Ω , H , E ) , we suppose that there exists a pair of random vectors ( X, η ) on (Ω , H , E ) such that ( X, η ) is G-distributed. d σ 2 ]) with σ 2 < ¯ Exercise 2.2 Prove that ˆ E [ X 3 ] > 0 for X = N ( { 0 } × [ σ 2 , ¯ σ 2 . It is worth to point that ˆ E [ ϕ ( X )] not always equal to sup σ 2 ≤ σ ≤ ¯ σ 2 E σ [ ϕ ( X )] for ϕ ∈ C l,Lip ( R ) , where E σ denotes the linear expectation corresponding to the normal distributed density function N (0 , σ 2 ) . § 3 Law of Large Numbers and Central Limit Theorem Theorem 3.1 ( Law of large numbers ) Let { Y i } ∞ i =1 be a sequence of R d - valued random variables on a sublinear expectation space (Ω , H , E ) . We assume d = Y i and Y i +1 is independent from { Y 1 , · · · , Y i } for each i = 1 , 2 , · · · . that Y i +1 Then the sequence { ¯ S n } ∞ n =1 defined by � n S n := 1 ¯ Y i n i =1 converges in law to a maximal distribution, i.e., n →∞ E [ ϕ ( ¯ S n )] = E [ ϕ ( η )] , lim (3.17)

Theorem 3.1 (Law of large numbers) Let {Y_i}_{i=1}^∞ be a sequence of R^d-valued random variables on a sublinear expectation space (Ω, H, E). We assume that Y_{i+1} =d Y_i and Y_{i+1} is independent from {Y_1, ..., Y_i} for each i = 1, 2, .... Then the sequence {S̄_n}_{n=1}^∞ defined by

S̄_n := (1/n) Σ_{i=1}^n Y_i

converges in law to a maximal distribution, i.e.,

lim_{n→∞} E[ϕ(S̄_n)] = E[ϕ(η)],   (3.17)

for all functions ϕ ∈ C(R^d) satisfying the linear growth condition |ϕ(x)| ≤ C(1 + |x|), where η is a maximal distributed random vector and the corresponding sublinear function g : R^d → R is defined by

g(p) := E[⟨p, Y_1⟩],  p ∈ R^d.

Remark 3.2 When d = 1, the sequence {S̄_n}_{n=1}^∞ converges in law to N([μ, μ̄] × {0}), where μ̄ = E[Y_1] and μ = −E[−Y_1]. In the general case, the sum (1/n) Σ_{i=1}^n Y_i converges in law to N(Θ̄ × {0}), where Θ̄ ⊂ R^d is the bounded, convex and closed subset defined in Example 1.14. If we take in particular ϕ(y) = d_Θ̄(y) = inf{|x − y| : x ∈ Θ̄}, then by (3.17) we have the following generalized law of large numbers:

lim_{n→∞} E[ d_Θ̄( (1/n) Σ_{i=1}^n Y_i ) ] = sup_{θ ∈ Θ̄} d_Θ̄(θ) = 0.   (3.18)

If Y_i has no mean-uncertainty, or in other words, Θ̄ is a singleton Θ̄ = {θ̄}, then (3.18) becomes

lim_{n→∞} E[ | (1/n) Σ_{i=1}^n Y_i − θ̄ | ] = 0.

Theorem 3.3 (Central limit theorem with zero-mean) Let {X_i}_{i=1}^∞ be a sequence of R^d-valued random variables on a sublinear expectation space (Ω, H, E). We assume that X_{i+1} =d X_i and X_{i+1} is independent from {X_1, ..., X_i} for each i = 1, 2, .... We further assume that E[X_1] = E[−X_1] = 0. Then the sequence {S̄_n}_{n=1}^∞ defined by

S̄_n := (1/√n) Σ_{i=1}^n X_i

converges in law to X, i.e.,

lim_{n→∞} E[ϕ(S̄_n)] = E[ϕ(X)]

for all functions ϕ ∈ C(R^d) satisfying the linear growth condition, where X is a G-normal distributed random vector and the corresponding sublinear function G : S(d) → R is defined by

G(A) := E[ (1/2)⟨AX_1, X_1⟩ ],  A ∈ S(d).

Remark 3.4 When d = 1, the sequence {S̄_n}_{n=1}^∞ converges in law to N({0} × [σ², σ̄²]), where σ̄² = E[X_1²] and σ² = −E[−X_1²]. In particular, if σ² = σ̄², then it becomes the classical central limit theorem.
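The law of large numbers can be explored numerically by letting an adversary choose the mean of each Y_i from [μ, μ̄]; every such choice is a classical probability model, and maximizing over the two extreme constant-mean strategies already gives a useful picture: it is exact for linear payoffs and it is enough to see the convergence in (3.18) for the distance payoff. A rough Monte Carlo sketch (our own illustration, with arbitrary toy parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_low, mu_high = -0.5, 1.0        # mean-uncertainty interval [mu, mu_bar] (toy values)

def sample_means(n, mu, n_paths=5000):
    """Classical sample means of n i.i.d. variables with mean mu (uniform noise)."""
    Y = mu + rng.uniform(-1.0, 1.0, size=(n_paths, n))
    return Y.mean(axis=1)

def robust_value(phi, n):
    """Lower bound for E[phi(S_n)]: maximize over the two extreme constant-mean
    strategies (for the payoffs below these are essentially the worst cases)."""
    return max(phi(sample_means(n, mu_low)).mean(),
               phi(sample_means(n, mu_high)).mean())

dist = lambda s: np.maximum(np.maximum(mu_low - s, s - mu_high), 0.0)

for n in (10, 100, 1000):
    upper = robust_value(lambda s: s, n)       # -> mu_high (upper mean)
    lower = -robust_value(lambda s: -s, n)     # -> mu_low  (lower mean)
    gap = robust_value(dist, n)                # -> 0, cf. (3.18)
    print(n, round(upper, 3), round(lower, 3), round(gap, 4))
```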

  29. 26 Chap.II Law of Large Numbers and Central Limit Theorem To prove Theorem 3.5, we need the following norms to measure the regularity of a given real functions u defined on Q = [0 , T ] × R d : � u � C 0 , 0 ( Q ) = | u ( t, x ) | , sup ( t,x ) ∈ Q d � � u � C 1 , 1 ( Q ) = � u � C 0 , 0 ( Q ) + � ∂ t u � C 0 , 0 ( Q ) + � ∂ x i u � C 0 , 0 ( Q ) , i =1 d � � � � ∂ x i x j u � � u � C 1 , 2 ( Q ) = � u � C 1 , 1 ( Q ) + C 0 , 0 ( Q ) . i,j =1 For given constants α, β ∈ (0 , 1), we denote | u ( s, x ) − u ( t, y ) | � u � C α,β ( Q ) = sup | r − s | α + | x − y | β , x,y ∈ R d , x � = y s,t ∈ [0 ,T ] ,s � = t d � � u � C 1+ α, 1+ β ( Q ) = � u � C α,β ( Q ) + � ∂ t u � C α,β ( Q ) + � ∂ x i u � C α,β ( Q ) , i =1 d � � � � ∂ x i x j u � � u � C 1+ α, 2+ β ( Q ) = � u � C 1+ α, 1+ β ( Q ) + C α,β ( Q ) . i,j =1 If, for example, � u � C 1+ α, 2+ β ( Q ) < ∞ , then u is said to be a C 1+ α, 2+ β -function on Q . We need the following lemma. Lemma 3.7 We assume the same assumptions as in Theorem 3.5. We further assume that there exists a constant β > 0 such that, for each A , ¯ A ∈ S ( d ) with A ≥ ¯ A , we have � ¯ � ] ≥ β tr [ A − ¯ E [ � AX 1 , X 1 � ] − E [ AX 1 , X 1 A ] . (3.20) Then our main result (3.19) holds. Proof. We first prove (3.19) for ϕ ∈ C b.Lip ( R d ). For a small but fixed h > 0, let V be the unique viscosity solution of ∂ t V + G ( DV, D 2 V ) = 0 , ( t, x ) ∈ [0 , 1 + h ) × R d , V | t =1+ h = ϕ. (3.21) Since ( X, η ) satisfies (1.5), we have V ( h, 0) = E [ ϕ ( X + η )] , V (1 + h, x ) = ϕ ( x ) . (3.22) Since (3.21) is a uniformly parabolic PDE and G is a convex function, by the interior regularity of V (see Appendix C), we have � V � C 1+ α/ 2 , 2+ α ([0 , 1] × R d ) < ∞ for some α ∈ (0 , 1) .
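Remark 3.4 together with Example 1.13 also suggests a numerical check of the one-dimensional CLT: for a convex test function ϕ, the limit E[ϕ(X)] with X =d N({0} × [σ², σ̄²]) coincides with the classical expectation under N(0, σ̄²), and the worst-case choice for the approximating sums is to keep the variance at its upper bound. A minimal Monte Carlo sketch (our own illustration; the uniform innovations and the sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
sig_low, sig_high = 0.5, 1.0        # variance-uncertainty [sig_low^2, sig_high^2]
phi = np.abs                        # a convex test function

def clt_value(sig, n, n_paths=100000):
    """Classical E[phi(S_n)] when every X_i has standard deviation sig
    (centred uniform innovations; only the variance matters in the limit)."""
    X = rng.uniform(-1.0, 1.0, size=(n_paths, n)) * (sig * np.sqrt(3.0))
    return phi(X.sum(axis=1) / np.sqrt(n)).mean()

for n in (4, 16, 64):
    # For convex phi the worst-case (sup) strategy is to stay at sig_high,
    # so E[phi(S_n)] is approximated by the classical value with sig_high.
    print(n, round(clt_value(sig_high, n), 4))

# G-normal limit for convex phi: the classical N(0, sig_high^2) value.
print("limit:", sig_high * np.sqrt(2.0 / np.pi))   # E|N(0, sig_high^2)| ~ 0.7979
```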

  30. 27 § 3 Law of Large Numbers and Central Limit Theorem We set δ = 1 n and S 0 = 0. Then n − 1 � V (1 , ¯ { V (( i + 1) δ, ¯ S i +1 ) − V ( iδ, ¯ S n ) − V (0 , 0) = S i ) } i =0 n − 1 � { [ V (( i + 1) δ, ¯ S i +1 ) − V ( iδ, ¯ S i +1 )] + [ V ( iδ, ¯ S i +1 ) − V ( iδ, ¯ = S i )] } i =0 n − 1 � � � I i δ + J i = δ i =0 with, by Taylor’s expansion, � � √ � � S i ) δ +1 J i δ = ∂ t V ( iδ, ¯ D 2 V ( iδ, ¯ DV ( iδ, ¯ S i ) X i +1 , X i +1 δ + S i ) , X i +1 δ + Y i +1 δ 2 � 1 [ ∂ t V (( i + β ) δ, ¯ S i +1 ) − ∂ t V ( iδ, ¯ S i +1 )] dβ + [ ∂ t V ( iδ, ¯ S i +1 ) − ∂ t V ( iδ, ¯ I i δ = δ S i )] δ 0 � � � � δ 3 / 2 + 1 D 2 V ( iδ, ¯ D 2 V ( iδ, ¯ δ 2 + S i ) X i +1 , Y i +1 S i ) Y i +1 , Y i +1 2 � 1 � 1 � � √ √ Θ i + βγ ( X i +1 δ + Y i +1 δ ) , X i +1 δ + Y i +1 δ γdβdγ 0 0 with √ βγ = D 2 V ( iδ, ¯ δ + Y i +1 δ )) − D 2 V ( iδ, ¯ Θ i S i + γβ ( X i +1 S i ) . Thus n − 1 n − 1 n − 1 n − 1 � � � � J i I i δ ] ≤ E [ V (1 , ¯ J i I i E [ δ ] − E [ − S n )] − V (0 , 0) ≤ E [ δ ] + E [ δ ] . (3.23) i =0 i =0 i =0 i =0 We now prove that E [ � n − 1 i =0 J i δ ] = 0. For J i δ , note that � � � � √ √ DV ( iδ, ¯ DV ( iδ, ¯ E [ S i ) , X i +1 δ ] = E [ − S i ) , X i +1 δ ] = 0 , then, from the definition of the function G , we have E [ J i δ ] = E [ ∂ t V ( iδ, ¯ S i ) + G ( DV ( iδ, ¯ S i ) , D 2 V ( iδ, ¯ S i ))] δ. Combining the above two equalities with ∂ t V + G ( DV, D 2 V ) = 0 as well as the independence of ( X i +1 , Y i +1 ) from { ( X 1 , Y 1 ) , · · · , ( X i , Y i ) } , it follows that n − 1 n − 2 � � J i J i E [ δ ] = E [ δ ] = · · · = 0 . i =0 i =0 Thus (3.23) can be rewritten as n − 1 n − 1 � � I i δ ] ≤ E [ V (1 , ¯ I i − E [ − S n )] − V (0 , 0) ≤ E [ δ ] . i =0 i =0

  31. 28 Chap.II Law of Large Numbers and Central Limit Theorem But since both ∂ t V and D 2 V are uniformly α 2 -h¨ older continuous in t and α - older continuous in x on [0 , 1] × R d , we then have h¨ δ | ≤ Cδ 1+ α/ 2 (1 + | X i +1 | 2+ α + | Y i +1 | 2+ α ) . | I i It follows that δ | ] ≤ Cδ 1+ α/ 2 (1 + E [ | X 1 | 2+ α + | Y 1 | 2+ α ]) . E [ | I i Thus − C ( 1 n ) α/ 2 (1 + E [ | X 1 | 2+ α + | Y 1 | 2+ α ]) ≤ E [ V (1 , ¯ S n )] − V (0 , 0) ≤ C ( 1 n ) α/ 2 (1 + E [ | X 1 | 2+ α + | Y 1 | 2+ α ]) . As n → ∞ , we have n →∞ E [ V (1 , ¯ lim S n )] = V (0 , 0) . (3.24) On the other hand, for each t, t ′ ∈ [0 , 1 + h ] and x ∈ R d , we have � | V ( t, x ) − V ( t ′ , x ) | ≤ C ( | t − t ′ | + | t − t ′ | ) . √ Thus | V (0 , 0) − V ( h, 0) | ≤ C ( h + h ) and, by (3.24), √ | E [ V (1 , ¯ S n )] − E [ ϕ ( ¯ S n )] | = | E [ V (1 , ¯ S n )] − E [ V (1 + h, ¯ S n )] | ≤ C ( h + h ) . It follows from (3.22) and (3.24) that √ n →∞ | E [ ϕ ( ¯ S n )] − E [ ϕ ( X + η )] | ≤ 2 C ( lim sup h + h ) . Since h can be arbitrarily small, we have n →∞ E [ ϕ ( ¯ S n )] = E [ ϕ ( X + η )] . lim � Remark 3.8 From the proof we can check that the main assumption of identical distribution of { X i , Y i } ∞ i =1 can be weaken to E [ � p, Y i � + 1 i = 1 , 2 , · · · , p ∈ R d , A ∈ S ( d ) . 2 � AX i , X i � ] = G ( p, A ) , Another essential condition is E [ | X i | 2+ δ ] + E [ | Y i | 1+ δ ] ≤ C for some δ > 0 . We do not need the condition E [ | X i | n ] + E [ | Y i | n ] < ∞ for each n ∈ N . We now give the proof of Theorem 3.5. Proof of Theorem 3.5 . For the case when the uniform elliptic condition (3.20) does not hold, we first introduce a perturbation to prove the above convergence for ϕ ∈ C b.Lip ( R d ). According to Definition 3.10 and Proposition 3.11 in Chap I,

  32. 29 § 3 Law of Large Numbers and Central Limit Theorem we can construct a sublinear expectation space (¯ Ω , ¯ H , ¯ E ) and a sequence of three d random vectors { ( ¯ X i , ¯ i =1 such that, for each n = 1 , 2 , · · · , { ( ¯ X i , ¯ κ i ) } ∞ Y i ) } n Y i , ¯ = i =1 i =1 and ( ¯ X n +1 , ¯ κ n +1 ) is independent from { ( ¯ X i , ¯ { ( X i , Y i ) } n κ i ) } n Y n +1 , ¯ Y i , ¯ i =1 and, moreover, � E [ ψ ( ¯ ¯ X i , ¯ R d E [ ψ ( X i , Y i , x )] e −| x | 2 / 2 dx for ψ ∈ C l.Lip ( R 3 × d ) . κ i )] = (2 π ) − d/ 2 Y i , ¯ We then use the perturbation ¯ i = ¯ X ε X i + ε ¯ κ i for a fixed ε > 0. It is easy to see that the sequence { ( ¯ i , ¯ X ε Y i ) } ∞ i =1 satisfies all conditions in the above CLT, in particular, � � � � ] = G ( p, A ) + ε 2 E [1 G ε ( p, A ) := ¯ A ¯ X ε 1 , ¯ X ε p, ¯ + Y 1 2 tr[ A ] . 1 2 Thus it is strictly elliptic. We then can apply Lemma 3.7 to n n n � ¯ ¯ � ¯ ¯ � X ε Y i X i Y i κ i ¯ ¯ S ε i √ n + √ n + √ n n := ( n ) = ( n ) + εJ n , J n = i =1 i =1 i =1 and obtain E [ ϕ ( ¯ ¯ n )] = ¯ E [ ϕ ( ¯ S ε lim X + ¯ η + ε ¯ κ )] , n →∞ where (( ¯ η, 0)) is ¯ G -distributed under ¯ X, ¯ κ ) , (¯ E [ · ] and � ¯ κ 1 ) T � � Y 1 , 0) T � E [1 κ 1 ) T , ( ¯ ¯ p, ¯ A ) := ¯ A ( ¯ p, ( ¯ ¯ p ∈ R 2 d . A ∈ S (2 d ) , G (¯ X 1 , ¯ X 1 , ¯ + ¯ ] , ¯ 2 By Proposition 1.6, it is easy to prove that ( ¯ X + ε ¯ κ, ¯ η ) is G ε -distributed and ( ¯ X, ¯ η ) is G -distributed. But we have | E [ ϕ ( ¯ S n )] − ¯ E [ ϕ ( ¯ n )] | = | ¯ E [ ϕ ( ¯ n − εJ n )] − ¯ E [ ϕ ( ¯ S ε S ε S ε n )] | ≤ εC ¯ E [ | J n | ] ≤ C ′ ε and similarly, | E [ ϕ ( X + η )] − ¯ E [ ϕ ( ¯ κ )] | = | ¯ E [ ϕ ( ¯ η )] − ¯ E [ ϕ ( ¯ κ )] | ≤ Cε . X + ¯ η + ε ¯ X +¯ X +¯ η + ε ¯ Since ε can be arbitrarily small, it follows that n →∞ E [ ϕ ( ¯ S n )] = E [ ϕ ( X + η )] for ϕ ∈ C b.Lip ( R d ) . lim On the other hand, it is easy to check that sup n E [ | ¯ S n | 2 ] + E [ | X + η | 2 ] < ∞ . We then can apply the following lemma to prove that the above convergence holds for ϕ ∈ C ( R d ) with linear growth condition. The proof is complete. � Lemma 3.9 Let (Ω , H , E ) and ( � Ω , � H , � E ) be two sublinear expectation spaces and let Y n ∈ H and Y ∈ � H , n = 1 , 2 , · · · , be given. We assume that, for a given p ≥ 1 , sup n E [ | Y n | p ] + � E [ | Y | p ] < ∞ . If the convergence lim n →∞ E [ ϕ ( Y n )] = � E [ ϕ ( Y )] holds for each ϕ ∈ C b.Lip ( R d ) , then it also holds for all functions ϕ ∈ C ( R d ) with the growth condition | ϕ ( x ) | ≤ C (1 + | x | p − 1 ) .

  33. 30 Chap.II Law of Large Numbers and Central Limit Theorem Proof. We first prove that the above convergence holds for ϕ ∈ C b ( R d ) with ϕ ∈ C b.Lip ( R d ) a compact support. In this case, for each ε > 0, we can find a ¯ ϕ ( x ) | ≤ ε such that sup x ∈ R d | ϕ ( x ) − ¯ 2 . We have | E [ ϕ ( Y n )] − � ϕ ( Y n )] | + | � E [ ϕ ( Y )] − � E [ ϕ ( Y )] | ≤ | E [ ϕ ( Y n )] − E [ ¯ E [ ¯ ϕ ( Y )] | ϕ ( Y n )] − � ϕ ( Y n )] − � + | E [ ¯ E [ ¯ ϕ ( Y )] | ≤ ε + | E [ ¯ E [ ¯ ϕ ( Y )] | . Thus lim sup n →∞ | E [ ϕ ( Y n )] − � E [ ϕ ( Y )] | ≤ ε . The convergence must hold since ε can be arbitrarily small. Now let ϕ be an arbitrary C ( R d )-function with growth condition | ϕ ( x ) | ≤ C (1+ | x | p − 1 ). For each N > 0 we can find ϕ 1 , ϕ 2 ∈ C ( R d ) such that ϕ = ϕ 1 + ϕ 2 where ϕ 1 has a compact support and ϕ 2 ( x ) = 0 for | x | ≤ N , and | ϕ 2 ( x ) | ≤ | ϕ ( x ) | for all x . It is clear that | ϕ 2 ( x ) | ≤ 2 C (1 + | x | p ) for x ∈ R d . N Thus | E [ ϕ ( Y n )] − � E [ ϕ ( Y )] | = | E [ ϕ 1 ( Y n ) + ϕ 2 ( Y n )] − � E [ ϕ 1 ( Y ) + ϕ 2 ( Y )] | ≤ | E [ ϕ 1 ( Y n )] − � E [ ϕ 1 ( Y )] | + E [ | ϕ 2 ( Y n ) | ] + � E [ | ϕ 2 ( Y ) | ] E [ ϕ 1 ( Y )] | + 2 C ≤ | E [ ϕ 1 ( Y n )] − � N (2 + E [ | Y n | p ] + � E [ | Y | p ]) ¯ C ≤ | E [ ϕ 1 ( Y n )] − � E [ ϕ 1 ( Y )] | + N , where ¯ C = 2 C (2+sup n E [ | Y n | p ]+ � E [ | Y | p ]) . We thus have lim sup n →∞ | E [ ϕ ( Y n )] − ¯ � C E [ ϕ ( Y )] | ≤ N . Since N can be arbitrarily large, E [ ϕ ( Y n )] must converge to � E [ ϕ ( Y )]. � Exercise 3.10 Let X i ∈ H , i = 1 , 2 , · · · , be such that X i +1 is independent from { X 1 , · · · , X i } , for each i = 1 , 2 , · · · . We further assume that E [ X i ] = − E [ − X i ] = 0 , i ] = σ 2 < ∞ , lim i →∞ E [ X 2 i →∞ − E [ − X 2 i ] = σ 2 , lim E [ | X i | 2+ δ ] ≤ M for some δ > 0 and a constant M. Prove that the sequence { ¯ S n } ∞ n =1 defined by n � 1 ¯ S n = √ n X i i =1 converges in law to X , i.e., n →∞ E [ ϕ ( ¯ S n )] = E [ ϕ ( X )] for ϕ ∈ C b,lip ( R ) , lim where X ∼ N ( { 0 } × [ σ 2 , σ 2 ]) . In particular, if σ 2 = σ 2 , it becomes a classical central limit theorem.
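Exercise 3.10 can also be approached numerically. As in the proofs above (and in the proof of Theorem 1.6 of Chapter III), $\mathbb E[\varphi(X)]$ for $X\sim N(\{0\}\times[\underline\sigma^2,\bar\sigma^2])$ equals $u(1,0)$, where $u$ solves the $G$-heat equation $\partial_tu-G(\partial^2_{xx}u)=0$, $u|_{t=0}=\varphi$, with $G(a)=\frac12(\bar\sigma^2a^+-\underline\sigma^2a^-)$. The following is a minimal explicit finite-difference sketch of that computation; the spatial truncation, grid sizes and test function are ad hoc illustrative choices, not prescriptions from the text.

```python
# Minimal explicit finite-difference sketch for the 1-d G-heat equation
#   du/dt = G(d^2u/dx^2),  u(0, x) = phi(x),  G(a) = (sig_hi2 * a^+ - sig_lo2 * a^-) / 2,
# so that u(1, 0) approximates E[phi(X)] for X ~ N({0} x [sig_lo2, sig_hi2]).
import numpy as np

sig_lo2, sig_hi2 = 0.25, 1.0            # sigma_lo^2, sigma_hi^2
phi = lambda x: np.maximum(x - 0.5, 0)  # a Lipschitz (convex) test function

L, nx = 8.0, 801                        # spatial domain [-L, L], number of grid points
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / sig_hi2              # explicit scheme: stability needs dt <= dx^2 / sig_hi2
nt = int(np.ceil(1.0 / dt))
dt = 1.0 / nt                           # land exactly on t = 1

G = lambda a: 0.5 * (sig_hi2 * np.maximum(a, 0) - sig_lo2 * np.maximum(-a, 0))

u = phi(x)
for _ in range(nt):
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * G(uxx)                 # endpoints stay at their initial values; the domain
                                        # is wide enough that this does not affect x = 0 by t = 1
print("E[phi(X)] ~ u(1, 0) =", np.interp(0.0, x, u))
# Sanity check: for this convex phi the G-heat equation reduces to the classical heat
# equation with the larger variance, so u(1, 0) should match E[phi(Z)], Z ~ N(0, sig_hi2).
```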

Notes and Comments

The contents of this chapter are mainly from Peng (2008) [103] (see also Peng (2007) [99]).

The notion of G-normal distribution was first introduced by Peng (2006) [98] for the 1-dimensional case, and by Peng (2008) [102] for the multi-dimensional case. In the classical situation, a distribution satisfying equation (1.1) is said to be stable (see Lévy (1925) [75] and (1965) [76]). In this sense, our G-normal distribution can be considered as the most typical stable distribution under the framework of sublinear expectations.

Marinacci (1999) [81] used different notions of distributions and independence via capacity and the corresponding Choquet expectation to obtain a law of large numbers and a central limit theorem for non-additive probabilities (see also Maccheroni and Marinacci (2005) [82]). But since a sublinear expectation cannot be characterized by the corresponding capacity, our results cannot be derived from theirs. In fact, our results show that the limit in the CLT, under uncertainty, is a G-normal distribution in which the distribution uncertainty is not just a parameter of the classical normal distributions (see Exercise 2.2).

The notion of viscosity solutions plays a basic role in the definition and properties of the G-normal distribution and the maximal distribution. This notion was initially introduced by Crandall and Lions (1983) [29]. It is a fundamentally important notion in the theory of nonlinear parabolic and elliptic PDEs. Readers are referred to Crandall, Ishii and Lions (1992) [30] for rich references on the beautiful and powerful theory of viscosity solutions. For books on the theory of viscosity solutions and the related HJB equations, see Barles (1994) [8], Fleming and Soner (1992) [49] as well as Yong and Zhou (1999) [122].

We note that, when the uniform ellipticity condition holds, the viscosity solution of (1.10) becomes a classical $C^{1+\alpha/2,2+\alpha}$-solution (see Krylov (1987) [74] and the recent works of Cabré and Caffarelli (1997) [17] and Wang (1992) [117]). In the 1-dimensional situation, when $\underline\sigma^2>0$, the G-equation becomes the following Barenblatt equation:
\[ \partial_tu+\gamma|\partial_tu|=\Delta u,\quad |\gamma|<1. \]
This equation was first introduced by Barenblatt (1979) [7] (see also Avellaneda, Levy and Paras (1995) [5]).


Chapter III  G-Brownian Motion and Itô's Integral

The aim of this chapter is to introduce the concept of G-Brownian motion, to study its properties and to construct Itô's integral with respect to G-Brownian motion. We emphasize that our definition of G-Brownian motion is consistent with the classical one: it reduces to classical Brownian motion when there is no volatility uncertainty. Our G-Brownian motion also has independent increments with identical G-normal distributions. G-Brownian motion has a very rich and interesting new structure which nontrivially generalizes the classical one. We thus can establish the related stochastic calculus, in particular Itô's integrals and the related quadratic variation process. A very interesting new phenomenon of our G-Brownian motion is that its quadratic variation process also has independent and identically distributed increments. The corresponding G-Itô's formula is obtained.

§ 1  G-Brownian Motion and its Characterization

Definition 1.1 Let $(\Omega,\mathcal H,\mathbb E)$ be a sublinear expectation space. $(X_t)_{t\ge0}$ is called a $d$-dimensional stochastic process if for each $t\ge0$, $X_t$ is a $d$-dimensional random vector in $\mathcal H$.

Let $G(\cdot):\mathbb S(d)\to\mathbb R$ be a given monotonic and sublinear function. By Theorem 2.1 in Chap. I, there exists a bounded, convex and closed subset $\Sigma\subset\mathbb S_+(d)$ such that
\[ G(A)=\frac12\sup_{B\in\Sigma}(A,B),\quad A\in\mathbb S(d). \]
By Section 2 in Chap. II, we know that the G-normal distribution $N(\{0\}\times\Sigma)$ exists. We now give the definition of G-Brownian motion.

Definition 1.2 A $d$-dimensional process $(B_t)_{t\ge0}$ on a sublinear expectation space $(\Omega,\mathcal H,\mathbb E)$ is called a G-Brownian motion if the following properties are satisfied:
(i) $B_0(\omega)=0$;
(ii) For each $t,s\ge0$, the increment $B_{t+s}-B_t$ is $N(\{0\}\times s\Sigma)$-distributed and is independent from $(B_{t_1},B_{t_2},\dots,B_{t_n})$, for each $n\in\mathbb N$ and $0\le t_1\le\dots\le t_n\le t$.

Remark 1.3 We can prove that, for each $t_0>0$, $(B_{t+t_0}-B_{t_0})_{t\ge0}$ is a G-Brownian motion. For each $\lambda>0$, $(\lambda^{-1/2}B_{\lambda t})_{t\ge0}$ is also a G-Brownian motion. This is the scaling property of G-Brownian motion, which is the same as that of the classical Brownian motion.

In the rest of this book we will denote, for each $a=(a_1,\dots,a_d)^T\in\mathbb R^d$,
\[ B^a_t:=\langle a,B_t\rangle. \]
By the above definition we have the following proposition, which is important in stochastic calculus.

Proposition 1.4 Let $(B_t)_{t\ge0}$ be a $d$-dimensional G-Brownian motion on a sublinear expectation space $(\Omega,\mathcal H,\mathbb E)$. Then $(B^a_t)_{t\ge0}$ is a 1-dimensional $G_a$-Brownian motion for each $a\in\mathbb R^d$, where
\[ G_a(\alpha)=\tfrac12\big(\sigma^2_{aa^T}\alpha^+-\sigma^2_{-aa^T}\alpha^-\big),\qquad
 \sigma^2_{aa^T}=2G(aa^T)=\mathbb E[\langle a,B_1\rangle^2],\qquad
 \sigma^2_{-aa^T}=-2G(-aa^T)=-\mathbb E[-\langle a,B_1\rangle^2]. \]
In particular, for each $t,s\ge0$, $B^a_{t+s}-B^a_t\overset d=N(\{0\}\times[s\sigma^2_{-aa^T},s\sigma^2_{aa^T}])$.

Proposition 1.5 For each convex function $\varphi$, we have
\[ \mathbb E[\varphi(B^a_{t+s}-B^a_t)]=\frac1{\sqrt{2\pi s\sigma^2_{aa^T}}}\int_{-\infty}^\infty\varphi(x)\exp\Big(-\frac{x^2}{2s\sigma^2_{aa^T}}\Big)dx. \]
For each concave function $\varphi$ and $\sigma^2_{-aa^T}>0$, we have
\[ \mathbb E[\varphi(B^a_{t+s}-B^a_t)]=\frac1{\sqrt{2\pi s\sigma^2_{-aa^T}}}\int_{-\infty}^\infty\varphi(x)\exp\Big(-\frac{x^2}{2s\sigma^2_{-aa^T}}\Big)dx. \]
In particular, we have
\[ \mathbb E[(B^a_t-B^a_s)^2]=\sigma^2_{aa^T}(t-s),\qquad \mathbb E[(B^a_t-B^a_s)^4]=3\sigma^4_{aa^T}(t-s)^2, \]
\[ \mathbb E[-(B^a_t-B^a_s)^2]=-\sigma^2_{-aa^T}(t-s),\qquad \mathbb E[-(B^a_t-B^a_s)^4]=-3\sigma^4_{-aa^T}(t-s)^2. \]

The following theorem gives a characterization of G-Brownian motion.

Theorem 1.6 Let $(B_t)_{t\ge0}$ be a $d$-dimensional process defined on a sublinear expectation space $(\Omega,\mathcal H,\mathbb E)$ such that
(i) $B_0(\omega)=0$;
(ii) For each $t,s\ge0$, $B_{t+s}-B_t$ and $B_s$ are identically distributed and $B_{t+s}-B_t$ is independent from $(B_{t_1},B_{t_2},\dots,B_{t_n})$, for each $n\in\mathbb N$ and $0\le t_1\le\dots\le t_n\le t$;
(iii) $\mathbb E[B_t]=\mathbb E[-B_t]=0$ and $\lim_{t\downarrow0}\mathbb E[|B_t|^3]t^{-1}=0$.
Then $(B_t)_{t\ge0}$ is a G-Brownian motion with $G(A)=\frac12\mathbb E[\langle AB_1,B_1\rangle]$, $A\in\mathbb S(d)$.

  38. 35 § 1 G -Brownian Motion and its Characterization √ d Proof. We only need to prove that B 1 is G -normal distributed and B t = tB 1 . We first prove that E [ � AB t , B t � ] = 2 G ( A ) t, A ∈ S ( d ) . For each given A ∈ S ( d ), we set b ( t ) = E [ � AB t , B t � ]. Then b (0) = 0 and | b ( t ) | ≤ | A | ( E [ | B t | 3 ]) 2 / 3 → 0 as t → 0. Since for each t, s ≥ 0, b ( t + s ) = E [ � AB t + s , B t + s � ] = ˆ E [ � A ( B t + s − B s + B s ) , B t + s − B s + B s � ] = E [ � A ( B t + s − B s ) , ( B t + s − B s ) � + � AB s , B s � + 2 � A ( B t + s − B s ) , B s � ] = b ( t ) + b ( s ) , we have b ( t ) = b (1) t =2 G ( A ) t . √ d We now prove that B 1 is G -normal distributed and B t = tB 1 . For this, we just need to prove that, for each fixed ϕ ∈ C b.Lip ( R d ), the function ( t, x ) ∈ [0 , ∞ ) × R d u ( t, x ) := E [ ϕ ( x + B t )] , is the viscosity solution of the following G -heat equation: ∂ t u − G ( D 2 u ) = 0 , u | t =0 = ϕ. (1.1) We first prove that u is Lipschitz in x and 1 2 -H¨ older continuous in t . In fact, for each fixed t , u ( t, · ) ∈ C b.Lip ( R d ) since | u ( t, x ) − u ( t, y ) | = | E [ ϕ ( x + B t )] − E [ ϕ ( y + B t )] | ≤ E [ | ϕ ( x + B t ) − ϕ ( y + B t ) | ] ≤ C | x − y | , where C is Lipschitz constant of ϕ . For each δ ∈ [0 , t ], since B t − B δ is independent from B δ , we also have u ( t, x ) = E [ ϕ ( x + B δ + ( B t − B δ )] = E [ E [ ϕ ( y + ( B t − B δ ))] y = x + B δ ] , hence u ( t, x ) = E [ u ( t − δ, x + B δ )] . (1.2) Thus | u ( t, x ) − u ( t − δ, x ) | = | E [ u ( t − δ, x + B δ ) − u ( t − δ, x )] | ≤ E [ | u ( t − δ, x + B δ ) − u ( t − δ, x ) | ] � √ ≤ E [ C | B δ | ] ≤ C 2 G ( I ) δ. To prove that u is a viscosity solution of (1.1), we fix ( t, x ) ∈ (0 , ∞ ) × R d and let v ∈ C 2 , 3 ([0 , ∞ ) × R d ) be such that v ≥ u and v ( t, x ) = u ( t, x ). From (1.2) b we have v ( t, x ) = E [ u ( t − δ, x + B δ )] ≤ E [ v ( t − δ, x + B δ )] .

  39. 36 Chap.III G -Brownian Motion and Itˆ o’s Integral Therefore by Taylor’s expansion, 0 ≤ E [ v ( t − δ, x + B δ ) − v ( t, x )] = E [ v ( t − δ, x + B δ ) − v ( t, x + B δ ) + ( v ( t, x + B δ ) − v ( t, x ))] = E [ − ∂ t v ( t, x ) δ + � Dv ( t, x ) , B δ � + 1 2 � D 2 v ( t, x ) B δ , B δ � + I δ ] ≤ − ∂ t v ( t, x ) δ + 1 2 E [ � D 2 v ( t, x ) B δ , B δ � ] + E [ I δ ] = − ∂ t v ( t, x ) δ + G ( D 2 v ( t, x )) δ + E [ I δ ] , where � 1 I δ = − [ ∂ t v ( t − βδ, x + B δ ) − ∂ t v ( t, x )] δdβ 0 � 1 � 1 � ( D 2 v ( t, x + αβB δ ) − D 2 v ( t, x )) B δ , B δ � αdβdα. + 0 0 With the assumption (iii) we can check that lim δ ↓ 0 E [ | I δ | ] δ − 1 = 0, from which we get ∂ t v ( t, x ) − G ( D 2 v ( t, x )) ≤ 0, hence u is a viscosity subsolution of (1.1). We can analogously prove that u is a viscosity supersolution. Thus u is a viscosity solution and ( B t ) t ≥ 0 is a G -Brownian motion. The proof is complete. � d Exercise 1.7 Let B t be a 1-dimensional Brownian motion, and B 1 = N ( { 0 } × [ σ 2 , σ 2 ]) . Prove that for each m ∈ N , � √ m 2( m − 1)!! σ m t 2 / 2 π m is odd , ˆ E [ | B t | m ] = m ( m − 1)!! σ m t m is even . 2 § 2 Existence of G -Brownian Motion In the rest of this book, we denote by Ω = C d 0 ( R + ) the space of all R d –valued continuous paths ( ω t ) t ∈ R + , with ω 0 = 0, equipped with the distance ∞ � ρ ( ω 1 , ω 2 ) := 2 − i [( max t ∈ [0 ,i ] | ω 1 t − ω 2 t | ) ∧ 1] . i =1 For each fixed T ∈ [0 , ∞ ) , we set Ω T := { ω ·∧ T : ω ∈ Ω } . We will consider the canonical process B t ( ω ) = ω t , t ∈ [0 , ∞ ), for ω ∈ Ω. For each fixed T ∈ [0 , ∞ ), we set L ip (Ω T ) := { ϕ ( B t 1 ∧ T , · · · , B t n ∧ T ) : n ∈ N , t 1 , · · · , t n ∈ [0 , ∞ ) , ϕ ∈ C l.Lip ( R d × n ) } . It is clear that L ip (Ω t ) ⊆ L ip (Ω T ), for t ≤ T . We also set ∞ � L ip (Ω) := L ip (Ω n ) . n =1
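The distance $\rho$ introduced above makes $\Omega=C^d_0(\mathbb R_+)$ a metric space of paths. The sketch below (Python; the example paths, the grid and the truncation level are illustrative assumptions) evaluates a truncated version of $\rho$ for two paths given on a discrete time grid; the example paths are treated as constant beyond the sampled horizon, so for $i$ larger than the horizon the supremum over $[0,i]$ coincides with the one over the sampled interval.

```python
# Sketch of the path-space distance
#   rho(w1, w2) = sum_{i >= 1} 2^{-i} [ (max_{t in [0, i]} |w1_t - w2_t|) /\ 1 ],
# truncated to i_max terms and evaluated on a discrete grid (illustrative only).
import numpy as np

def rho(w1, w2, t_grid, i_max=20):
    """w1, w2: arrays of shape (len(t_grid), d) with w(0) = 0."""
    diff = np.linalg.norm(w1 - w2, axis=1)          # |w1_t - w2_t| on the grid
    total = 0.0
    for i in range(1, i_max + 1):
        window = diff[t_grid <= i]                  # grid points in [0, i]
        total += 2.0 ** (-i) * min(window.max(), 1.0)
    return total

# Example: two smooth paths on [0, 3], sampled at 301 points, both starting from 0.
t = np.linspace(0.0, 3.0, 301)
w1 = np.column_stack([np.sin(t), 0.3 * t])
w2 = np.column_stack([np.sin(1.1 * t), 0.3 * t - 0.05 * t ** 2])
print("rho(w1, w2) ~", rho(w1, w2, t))
```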

  40. 37 § 2 Existence of G -Brownian Motion Remark 2.1 It is clear that C l.Lip ( R d × n ) , L ip (Ω T ) and L ip (Ω) are vector lat- Moreover, note that ϕ, ψ ∈ C l.Lip ( R d × n ) imply ϕ · ψ ∈ C l.Lip ( R d × n ) , tices. then X , Y ∈ L ip (Ω T ) imply X · Y ∈ L ip (Ω T ) . In particular, for each t ∈ [0 , ∞ ) , B t ∈ L ip (Ω) . Let G ( · ) : S ( d ) → R be a given monotonic and sublinear function. In the following, we want to construct a sublinear expectation on (Ω , L ip (Ω)) such that the canonical process ( B t ) t ≥ 0 is a G -Brownian motion. For this, we first construct a sequence of d -dimensional random vectors ( ξ i ) ∞ i =1 on a sublinear expectation space ( � Ω , � H , � E ) such that ξ i is G -normal distributed and ξ i +1 is independent from ( ξ 1 , · · · , ξ i ) for each i = 1 , 2 , · · · . We now introduce a sublinear expectation ˆ E defined on L ip (Ω) via the fol- lowing procedure: for each X ∈ L ip (Ω) with X = ϕ ( B t 1 − B t 0 , B t 2 − B t 1 , · · · , B t n − B t n − 1 ) for some ϕ ∈ C l.Lip ( R d × n ) and 0 = t 0 < t 1 < · · · < t n < ∞ , we set ˆ E [ ϕ ( B t 1 − B t 0 , B t 2 − B t 1 , · · · , B t n − B t n − 1 )] E [ ϕ ( √ t 1 − t 0 ξ 1 , · · · , � := � t n − t n − 1 ξ n )] . The related conditional expectation of X = ϕ ( B t 1 , B t 2 − B t 1 , · · · , B t n − B t n − 1 ) under Ω t j is defined by ˆ E [ X | Ω t j ] = ˆ E [ ϕ ( B t 1 , B t 2 − B t 1 , · · · , B t n − B t n − 1 ) | Ω t j ] (2.3) := ψ ( B t 1 , · · · , B t j − B t j − 1 ) , where � � ψ ( x 1 , · · · , x j ) = � E [ ϕ ( x 1 , · · · , x j , t j +1 − t j ξ j +1 , · · · , t n − t n − 1 ξ n )] . It is easy to check that ˆ E [ · ] consistently defines a sublinear expectation on L ip (Ω) and ( B t ) t ≥ 0 is a G -Brownian motion. Since L ip (Ω T ) ⊆ L ip (Ω), ˆ E [ · ] is also a sublinear expectation on L ip (Ω T ). Definition 2.2 The sublinear expectation ˆ E [ · ]: L ip (Ω) → R defined through the above procedure is called a G –expectation . The corresponding canonical process ( B t ) t ≥ 0 on the sublinear expectation space (Ω , L ip (Ω) , ˆ E ) is called a G –Brownian motion. In the rest of this book, when we talk about G –Brownian motion, we mean that the canonical process ( B t ) t ≥ 0 is under G -expectation. Proposition 2.3 We list the properties of ˆ E [ ·| Ω t ] that hold for each X, Y ∈ L ip (Ω) : (i) If X ≥ Y , then ˆ E [ X | Ω t ] ≥ ˆ E [ Y | Ω t ] .

  41. 38 Chap.III G -Brownian Motion and Itˆ o’s Integral ˆ E [ η | Ω t ] = η , for each t ∈ [0 , ∞ ) and η ∈ L ip (Ω t ) . (ii) ˆ E [ X | Ω t ] − ˆ E [ Y | Ω t ] ≤ ˆ E [ X − Y | Ω t ] . (iii) (iv) ˆ E [ ηX | Ω t ] = η + ˆ E [ X | Ω t ] + η − ˆ E [ − X | Ω t ] for each η ∈ L ip (Ω t ) . (v) ˆ E [ˆ E [ X | Ω t ] | Ω s ] = ˆ E [ X | Ω t ∧ s ] , in particular , ˆ E [ˆ E [ X | Ω t ]] = ˆ E [ X ] . For each X ∈ L ip (Ω t ) , ˆ E [ X | Ω t ] = ˆ E [ X ] , where L ip (Ω t ) is the linear space of random variables with the form ϕ ( B t 2 − B t 1 , B t 3 − B t 2 , · · · , B t n +1 − B t n ) , n = 1 , 2 , · · · , ϕ ∈ C l.Lip ( R d × n ) , t 1 , · · · , t n , t n +1 ∈ [ t, ∞ ) . Remark 2.4 (ii) and (iii) imply E [ X + η | Ω t ] = ˆ ˆ E [ X | Ω t ] + η for η ∈ L ip (Ω t ) . We now consider the completion of sublinear expectation space (Ω , L ip (Ω) , ˆ E ) . We denote by L p G (Ω), p ≥ 1, the completion of L ip (Ω) under the norm � X � p := (ˆ E [ | X | p ]) 1 /p . Similarly, we can define L p G (Ω T ), L p T ) and L p G (Ω t G (Ω t ). It is clear that for each 0 ≤ t ≤ T < ∞ , L p G (Ω t ) ⊆ L p G (Ω T ) ⊆ L p G (Ω). According to Sec.4 in Chap.I, ˆ E [ · ] can be continuously extended to ( L 1 G (Ω) , ||· || ). We now consider the extension of conditional G -expectation. For each fixed t ≤ T , the conditional G -expectation ˆ E [ ·| Ω t ] : L ip (Ω T ) → L ip (Ω t ) is a continuous mapping under �·� . Indeed, we have E [ X | Ω t ] − ˆ ˆ E [ Y | Ω t ] ≤ ˆ E [ X − Y | Ω t ] ≤ ˆ E [ | X − Y || Ω t ] , then | ˆ E [ X | Ω t ] − ˆ E [ Y | Ω t ] | ≤ ˆ E [ | X − Y || Ω t ] . We thus obtain � � � � � ˆ E [ X | Ω t ] − ˆ E [ Y | Ω t ] � ≤ � X − Y � . It follows that ˆ E [ ·| Ω t ] can be also extended as a continuous mapping ˆ E [ ·| Ω t ] : L 1 G (Ω T ) → L 1 G (Ω t ) . If the above T is not fixed, then we can obtain ˆ E [ ·| Ω t ] : L 1 G (Ω) → L 1 G (Ω t ). Remark 2.5 The above proposition also holds for X, Y ∈ L 1 G (Ω) . But in (iv) , η ∈ L 1 G (Ω t ) should be bounded, since X, Y ∈ L 1 G (Ω) does not imply X · Y ∈ L 1 G (Ω) . In particular, we have the following independence: E [ X | Ω t ] = ˆ ˆ ∀ X ∈ L 1 G (Ω t ) . E [ X ] , We give the following definition similar to the classical one:

  42. 39 § 2 Existence of G -Brownian Motion G (Ω)) n is said to be Definition 2.6 An n -dimensional random vector Y ∈ ( L 1 independent from Ω t for some given t if for each ϕ ∈ C b.Lip ( R n ) we have E [ ϕ ( Y ) | Ω t ] = ˆ ˆ E [ ϕ ( Y )] . Remark 2.7 Just as in the classical situation, the increments of G –Brownian motion ( B t + s − B t ) s ≥ 0 are independent from Ω t . The following property is very useful. G (Ω) be such that ˆ E [ Y | Ω t ] = − ˆ Proposition 2.8 Let X, Y ∈ L 1 E [ − Y | Ω t ] , for some t ∈ [0 , T ] . Then we have ˆ E [ X + Y | Ω t ] = ˆ E [ X | Ω t ] + ˆ E [ Y | Ω t ] . In particular, if ˆ E [ Y | Ω t ] = ˆ E G [ − Y | Ω t ] = 0 , then ˆ E [ X + Y | Ω t ] = ˆ E [ X | Ω t ] . Proof. This follows from the following two inequalities: E [ X + Y | Ω t ] ≤ ˆ ˆ E [ X | Ω t ] + ˆ E [ Y | Ω t ] , E [ X + Y | Ω t ] ≥ ˆ ˆ E [ X | Ω t ] − ˆ E [ − Y | Ω t ] = ˆ E [ X | Ω t ] + ˆ E [ Y | Ω t ]. � Example 2.9 For each fixed a ∈ R d , s ≤ t , we have ˆ ˆ E [ B a t − B a s | Ω s ] = 0 , E [ − ( B a t − B a s ) | Ω s ] = 0 , ˆ ˆ s ) 2 | Ω s ] = σ 2 s ) 2 | Ω s ] = − σ 2 E [( B a t − B a aa T ( t − s ) , E [ − ( B a t − B a − aa T ( t − s ) , ˆ ˆ s ) 4 | Ω s ] = 3 σ 4 aa T ( t − s ) 2 , s ) 4 | Ω s ] = − 3 σ 4 − aa T ( t − s ) 2 , E [( B a t − B a E [ − ( B a t − B a aa T = 2 G ( aa T ) and σ 2 − aa T = − 2 G ( − aa T ) . where σ 2 Example 2.10 For each a ∈ R d , n ∈ N , 0 ≤ t ≤ T, X ∈ L 1 G (Ω t ) and ϕ ∈ C l.Lip ( R ) , we have ˆ t ) | Ω t ] = X + ˆ t ) | Ω t ] + X − ˆ E [ Xϕ ( B a T − B a E [ ϕ ( B a T − B a E [ − ϕ ( B a T − B a t ) | Ω t ] = X + ˆ t )] + X − ˆ E [ ϕ ( B a T − B a E [ − ϕ ( B a T − B a t )] . In particular, we have ˆ t ) | Ω t ] = X + ˆ t )] + X − ˆ E [ X ( B a T − B a E [( B a T − B a E [ − ( B a T − B a t )] = 0 . This, together with Proposition 2.8, yields ˆ t ) | Ω t ] = ˆ Y ∈ L 1 E [ Y + X ( B a T − B a E [ Y | Ω t ] , G (Ω) .

  43. 40 Chap.III G -Brownian Motion and Itˆ o’s Integral We also have ˆ t ) 2 | Ω t ] = X + ˆ t ) 2 ] + X − ˆ t ) 2 ] E [ X ( B a T − B a E [( B a T − B a E [ − ( B a T − B a = [ X + σ 2 aa T − X − σ 2 − aa T ]( T − t ) and ˆ t ) 2 n − 1 | Ω t ] = X + ˆ t ) 2 n − 1 ] + X − ˆ t ) 2 n − 1 ] E [ X ( B a T − B a E [( B a T − B a E [ − ( B a T − B a = | X | ˆ T − t ) 2 n − 1 ] . E [( B a Example 2.11 Since ˆ s ) | Ω s ] = ˆ E [2 B a s ( B a t − B a E [ − 2 B a s ( B a t − B a s ) | Ω s ] = 0 , we have t ) 2 − ( B a s ) 2 − ( B a ˆ s ) 2 | Ω s ] = ˆ s ) 2 | Ω s ] E [( B a E [( B a t − B a s + B a s ) 2 + 2( B a = ˆ E [( B a t − B a t − B a s ) B a s | Ω s ] = σ 2 aa T ( t − s ) . Exercise 2.12 Show that if X ∈ Lip (Ω T ) and ˆ E [ X ] = − ˆ E [ − X ] , then ˆ E [ X ] = E P [ X ] , where P is a Wiener measure on Ω . Exercise 2.13 For each s, t ≥ 0 , we set B s t := B t + s − B s . Let η = ( η ij ) d i,j =1 ∈ L 1 G (Ω s ; S ( d )) . Prove that ˆ E [ � ηB s t , B s t �| Ω s ] = 2 G ( η ) t. § 3 Itˆ o’s Integral with G –Brownian Motion Definition 3.1 For T ∈ R + , a partition π T of [0 , T ] is a finite ordered subset π T = { t 0 , t 1 , · · · , t N } such that 0 = t 0 < t 1 < · · · < t N = T . µ ( π T ) := max {| t i +1 − t i | : i = 0 , 1 , · · · , N − 1 } . We use π N T = { t N 0 , t N 1 , · · · , t N N } to denote a sequence of partitions of [0 , T ] such that lim N →∞ µ ( π N T ) = 0 . Let p ≥ 1 be fixed. We consider the following type of simple processes: for a given partition π T = { t 0 , · · · , t N } of [0 , T ] we set N − 1 � η t ( ω ) = ξ k ( ω ) I [ t k ,t k +1 ) ( t ) , k =0 where ξ k ∈ L p G (Ω t k ), k = 0 , 1 , 2 , · · · , N − 1 are given. The collection of these processes is denoted by M p, 0 G (0 , T ).

Definition 3.2 For an $\eta\in M^{p,0}_G(0,T)$ with $\eta_t(\omega)=\sum_{k=0}^{N-1}\xi_k(\omega)I_{[t_k,t_{k+1})}(t)$, the related Bochner integral is
\[ \int_0^T\eta_t(\omega)\,dt:=\sum_{k=0}^{N-1}\xi_k(\omega)(t_{k+1}-t_k). \]
For each $\eta\in M^{p,0}_G(0,T)$, we set
\[ \tilde{\mathbb E}_T[\eta]:=\frac1T\hat{\mathbb E}\Big[\int_0^T\eta_t\,dt\Big]=\frac1T\hat{\mathbb E}\Big[\sum_{k=0}^{N-1}\xi_k(\omega)(t_{k+1}-t_k)\Big]. \]
It is easy to check that $\tilde{\mathbb E}_T:M^{p,0}_G(0,T)\to\mathbb R$ forms a sublinear expectation. We then can introduce a natural norm $\|\eta\|_{M^p_G(0,T)}$, under which $M^{p,0}_G(0,T)$ can be extended to $M^p_G(0,T)$, which is a Banach space.

Definition 3.3 For each $p\ge1$, we denote by $M^p_G(0,T)$ the completion of $M^{p,0}_G(0,T)$ under the norm
\[ \|\eta\|_{M^p_G(0,T)}:=\Big(\hat{\mathbb E}\Big[\int_0^T|\eta_t|^p\,dt\Big]\Big)^{1/p}. \]
It is clear that $M^p_G(0,T)\supset M^q_G(0,T)$ for $1\le p\le q$. We also use $M^p_G(0,T;\mathbb R^n)$ for all $n$-dimensional stochastic processes $\eta_t=(\eta^1_t,\dots,\eta^n_t)$, $t\ge0$, with $\eta^i_t\in M^p_G(0,T)$, $i=1,2,\dots,n$.

We now give the definition of Itô's integral. For simplicity, we first introduce Itô's integral with respect to 1-dimensional G-Brownian motion. Let $(B_t)_{t\ge0}$ be a 1-dimensional G-Brownian motion with $G(\alpha)=\frac12(\bar\sigma^2\alpha^+-\underline\sigma^2\alpha^-)$, where $0\le\underline\sigma\le\bar\sigma<\infty$.

Definition 3.4 For each $\eta\in M^{2,0}_G(0,T)$ of the form
\[ \eta_t(\omega)=\sum_{j=0}^{N-1}\xi_j(\omega)I_{[t_j,t_{j+1})}(t), \]
we define
\[ I(\eta)=\int_0^T\eta_t\,dB_t:=\sum_{j=0}^{N-1}\xi_j(B_{t_{j+1}}-B_{t_j}). \]

Lemma 3.5 The mapping $I:M^{2,0}_G(0,T)\to L^2_G(\Omega_T)$ is a continuous linear mapping and thus can be continuously extended to $I:M^2_G(0,T)\to L^2_G(\Omega_T)$. We have
\[ \hat{\mathbb E}\Big[\int_0^T\eta_t\,dB_t\Big]=0, \tag{3.4} \]
\[ \hat{\mathbb E}\Big[\Big(\int_0^T\eta_t\,dB_t\Big)^2\Big]\le\bar\sigma^2\,\hat{\mathbb E}\Big[\int_0^T\eta_t^2\,dt\Big]. \tag{3.5} \]

  45. 42 Chap.III G -Brownian Motion and Itˆ o’s Integral Proof. From Example 2.10, for each j , E [ ξ j ( B t j +1 − B t j ) | Ω t j ] = ˆ ˆ E [ − ξ j ( B t j +1 − B t j ) | Ω t j ] = 0 . We have � T � t N − 1 ˆ η t dB t ] = ˆ E [ E [ η t dB t + ξ N − 1 ( B t N − B t N − 1 )] 0 0 � t N − 1 = ˆ η t dB t + ˆ E [ E [ ξ N − 1 ( B t N − B t N − 1 ) | Ω t N − 1 ]] 0 � t N − 1 = ˆ E [ η t dB t ] . 0 Then we can repeat this procedure to obtain (3.4). We now give the proof of (3.5). Firstly, from Example 2.10, we have � T �� t N − 1 � 2 ˆ η t dB t ) 2 ] = ˆ E [( E [ η t dB t + ξ N − 1 ( B t N − B t N − 1 ) ] 0 0 �� t N − 1 � 2 = ˆ + ξ 2 N − 1 ( B t N − B t N − 1 ) 2 E [ η t dB t 0 �� t N − 1 � ξ N − 1 ( B t N − B t N − 1 )] + 2 η t dB t 0 �� t N − 1 � 2 = ˆ + ξ 2 N − 1 ( B t N − B t N − 1 ) 2 ] E [ η t dB t 0 N − 1 � = · · · = ˆ ξ 2 i ( B t i +1 − B t i ) 2 ] . E [ i =0 Then, for each i = 0 , 1 , · · · , N − 1, we have i ( B t i +1 − B t i ) 2 − σ 2 ξ 2 ˆ E [ ξ 2 i ( t i +1 − t i )] i ( B t i +1 − B t i ) 2 − σ 2 ξ 2 =ˆ E [ˆ E [ ξ 2 i ( t i +1 − t j ) | Ω t i ]] =ˆ E [ σ 2 ξ 2 i ( t i +1 − t i ) − σ 2 ξ 2 i ( t i +1 − t i )] = 0 . Finally, we have � T N − 1 � ˆ η t dB t ) 2 ] = ˆ ξ 2 i ( B t i +1 − B t i ) 2 ] E [( E [ 0 i =0 N − 1 N − 1 N − 1 � � � i ( B t i +1 − B t i ) 2 − ≤ ˆ ξ 2 σ 2 ξ 2 i ( t i +1 − t i )] + ˆ σ 2 ξ 2 E [ E [ i ( t i +1 − t i )] i =0 i =0 i =0 N − 1 N − 1 � � i ( B t i +1 − B t i ) 2 − σ 2 ξ 2 E [ ξ 2 ˆ i ( t i +1 − t i )] + ˆ σ 2 ξ 2 E [ ≤ i ( t i +1 − t i )] i =0 i =0 � T N − 1 � =ˆ σ 2 ξ 2 σ 2 ˆ η 2 E [ E [ i ( t i +1 − t i )] = ¯ t dt ] . 0 i =0

$\square$

Definition 3.6 We define, for a fixed $\eta\in M^2_G(0,T)$, the stochastic integral
\[ \int_0^T\eta_t\,dB_t:=I(\eta). \]
It is clear that (3.4) and (3.5) still hold for $\eta\in M^2_G(0,T)$.

We list some main properties of Itô's integral of G-Brownian motion. We denote, for some $0\le s\le t\le T$,
\[ \int_s^t\eta_u\,dB_u:=\int_0^TI_{[s,t]}(u)\eta_u\,dB_u. \]

Proposition 3.7 Let $\eta,\theta\in M^2_G(0,T)$ and let $0\le s\le r\le t\le T$. Then we have
(i) $\int_s^t\eta_u\,dB_u=\int_s^r\eta_u\,dB_u+\int_r^t\eta_u\,dB_u$;
(ii) $\int_s^t(\alpha\eta_u+\theta_u)\,dB_u=\alpha\int_s^t\eta_u\,dB_u+\int_s^t\theta_u\,dB_u$, if $\alpha$ is bounded and in $L^1_G(\Omega_s)$;
(iii) $\hat{\mathbb E}\big[X+\int_r^T\eta_u\,dB_u\,\big|\,\Omega_s\big]=\hat{\mathbb E}[X\,|\,\Omega_s]$ for $X\in L^1_G(\Omega)$.

We now consider the multi-dimensional case. Let $G(\cdot):\mathbb S(d)\to\mathbb R$ be a given monotonic and sublinear function and let $(B_t)_{t\ge0}$ be a $d$-dimensional G-Brownian motion. For each fixed $a\in\mathbb R^d$, we still use $B^a_t:=\langle a,B_t\rangle$. Then $(B^a_t)_{t\ge0}$ is a 1-dimensional $G_a$-Brownian motion with $G_a(\alpha)=\frac12(\sigma^2_{aa^T}\alpha^+-\sigma^2_{-aa^T}\alpha^-)$, where $\sigma^2_{aa^T}=2G(aa^T)$ and $\sigma^2_{-aa^T}=-2G(-aa^T)$. Similar to the 1-dimensional case, we can define Itô's integral by
\[ I(\eta):=\int_0^T\eta_t\,dB^a_t,\quad\text{for }\eta\in M^2_G(0,T). \]
We still have, for each $\eta\in M^2_G(0,T)$,
\[ \hat{\mathbb E}\Big[\int_0^T\eta_t\,dB^a_t\Big]=0,\qquad
 \hat{\mathbb E}\Big[\Big(\int_0^T\eta_t\,dB^a_t\Big)^2\Big]\le\sigma^2_{aa^T}\,\hat{\mathbb E}\Big[\int_0^T\eta_t^2\,dt\Big]. \]
Furthermore, Proposition 3.7 still holds for the integral with respect to $B^a_t$.

Exercise 3.8 Prove that, for a fixed $\eta\in M^2_G(0,T)$,
\[ \underline\sigma^2\,\hat{\mathbb E}\Big[\int_0^T\eta_t^2\,dt\Big]\le\hat{\mathbb E}\Big[\Big(\int_0^T\eta_t\,dB_t\Big)^2\Big]\le\bar\sigma^2\,\hat{\mathbb E}\Big[\int_0^T\eta_t^2\,dt\Big], \]
where $\bar\sigma^2=\hat{\mathbb E}[B_1^2]$ and $\underline\sigma^2=-\hat{\mathbb E}[-B_1^2]$.

Exercise 3.9 Prove that, for each $\eta\in M^p_G(0,T)$, we have
\[ \hat{\mathbb E}\Big[\int_0^T|\eta_t|^p\,dt\Big]\le\int_0^T\hat{\mathbb E}[|\eta_t|^p]\,dt. \]
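For a simple integrand, the Itô integral of Definition 3.4 is just a finite sum, and (3.4)–(3.5) can be checked by simulation under any fixed classical scenario with constant volatility $\sigma\in[\underline\sigma,\bar\sigma]$; the sublinear expectation itself is the supremum over all admissible scenarios. The sketch below (Python; the integrand, the scenario and all parameters are illustrative choices) simulates such a scenario and verifies that the scenario mean of $\int\eta\,dB$ is approximately $0$ and that its second moment is dominated by $\bar\sigma^2$ times the scenario value of $\int\eta_t^2\,dt$.

```python
# Sketch: Ito integral of a simple adapted process along simulated paths, under one
# constant-volatility scenario sigma in [sig_lo, sig_hi] (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
sig_lo, sig_hi = 0.5, 1.0
sigma = 0.8                      # one admissible scenario
T, N, n_paths = 1.0, 200, 50_000
dt = T / N

dB = sigma * np.sqrt(dt) * rng.standard_normal((n_paths, N))
B = np.concatenate([np.zeros((n_paths, 1)), dB.cumsum(axis=1)], axis=1)

# Simple adapted integrand: eta_t = xi_k on [t_k, t_{k+1}) with xi_k = B_{t_k},
# known at time t_k (non-anticipating).
eta = B[:, :-1]
ito_integral = (eta * dB).sum(axis=1)            # sum_k xi_k (B_{t_{k+1}} - B_{t_k})

mean_val = ito_integral.mean()
second_moment = (ito_integral ** 2).mean()
upper_bound = sig_hi ** 2 * (eta ** 2 * dt).sum(axis=1).mean()
print(f"scenario mean of int eta dB        ~ {mean_val:+.4f}   (Lemma 3.5: equals 0)")
print(f"scenario second moment             ~ {second_moment:.4f}")
print(f"sig_hi^2 * scenario E[int eta^2 dt] ~ {upper_bound:.4f}   (Lemma 3.5: an upper bound)")
```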

§ 4  Quadratic Variation Process of G-Brownian Motion

We first consider the quadratic variation process of 1-dimensional G-Brownian motion $(B_t)_{t\ge0}$ with $B_1\overset d=N(\{0\}\times[\underline\sigma^2,\bar\sigma^2])$. Let $\pi^N_t$, $N=1,2,\dots$, be a sequence of partitions of $[0,t]$. We consider
\[ B_t^2=\sum_{j=0}^{N-1}\big(B_{t^N_{j+1}}^2-B_{t^N_j}^2\big)
 =\sum_{j=0}^{N-1}2B_{t^N_j}\big(B_{t^N_{j+1}}-B_{t^N_j}\big)+\sum_{j=0}^{N-1}\big(B_{t^N_{j+1}}-B_{t^N_j}\big)^2. \]
As $\mu(\pi^N_t)\to0$, the first term on the right-hand side converges to $2\int_0^tB_s\,dB_s$ in $L^2_G(\Omega)$. The second term must therefore be convergent. We denote its limit by $\langle B\rangle_t$, i.e.,
\[ \langle B\rangle_t:=\lim_{\mu(\pi^N_t)\to0}\sum_{j=0}^{N-1}\big(B_{t^N_{j+1}}-B_{t^N_j}\big)^2=B_t^2-2\int_0^tB_s\,dB_s. \tag{4.6} \]
By the above construction, $(\langle B\rangle_t)_{t\ge0}$ is an increasing process with $\langle B\rangle_0=0$. We call it the quadratic variation process of the G-Brownian motion $B$. It characterizes the statistical-uncertainty part of G-Brownian motion. It is important to keep in mind that $\langle B\rangle_t$ is not a deterministic process unless $\underline\sigma=\bar\sigma$, i.e., unless $(B_t)_{t\ge0}$ is a classical Brownian motion. In fact we have the following lemma.

Lemma 4.1 For each $0\le s\le t<\infty$, we have
\[ \hat{\mathbb E}[\langle B\rangle_t-\langle B\rangle_s\,|\,\Omega_s]=\bar\sigma^2(t-s), \tag{4.7} \]
\[ \hat{\mathbb E}[-(\langle B\rangle_t-\langle B\rangle_s)\,|\,\Omega_s]=-\underline\sigma^2(t-s). \tag{4.8} \]

Proof. By the definition of $\langle B\rangle$ and Proposition 3.7 (iii),
\[ \hat{\mathbb E}[\langle B\rangle_t-\langle B\rangle_s\,|\,\Omega_s]=\hat{\mathbb E}\Big[B_t^2-B_s^2-2\int_s^tB_u\,dB_u\,\Big|\,\Omega_s\Big]
 =\hat{\mathbb E}[B_t^2-B_s^2\,|\,\Omega_s]=\bar\sigma^2(t-s). \]
The last step follows from Example 2.11. We then have (4.7). The equality (4.8) can be proved analogously by considering $\hat{\mathbb E}[-(B_t^2-B_s^2)\,|\,\Omega_s]=-\underline\sigma^2(t-s)$. $\square$

A very interesting point of the quadratic variation process $\langle B\rangle$ is that, just like the G-Brownian motion $B$ itself, the increment $\langle B\rangle_{s+t}-\langle B\rangle_s$ is independent from $\Omega_s$ and identically distributed with $\langle B\rangle_t$. In fact we have

  48. 45 § 4 Quadratic Variation Process of G –Brownian Motion Lemma 4.2 For each fixed s,t ≥ 0 , � B � s + t −� B � s is identically distributed with � B � t and independent from Ω s . Proof. The results follow directly from � s + t � s � B � s + t − � B � s = B 2 B r dB r − [ B 2 s + t − 2 s − 2 B r dB r ] 0 0 � s + t = ( B s + t − B s ) 2 − 2 ( B r − B s ) d ( B r − B s ) s = � B s � t , where � B s � is the quadratic variation process of the G –Brownian motion B s t = B s + t − B s , t ≥ 0. � We now define the integral of a process η ∈ M 1 G (0 , T ) with respect to � B � . We first define a mapping: � T N − 1 � ξ j ( � B � t j +1 − � B � t j ) : M 1 , 0 G (0 , T ) → L 1 Q 0 ,T ( η ) = η t d � B � t := G (Ω T ) . 0 j =0 Lemma 4.3 For each η ∈ M 1 , 0 G (0 , T ) , � T ˆ σ 2 ˆ E [ | Q 0 ,T ( η ) | ] ≤ ¯ E [ | η t | dt ] . (4.9) 0 Thus Q 0 ,T : M 1 , 0 G (0 , T ) → L 1 G (Ω T ) is a continuous linear mapping. Conse- quently, Q 0 ,T can be uniquely extended to M 1 G (0 , T ) . We still denote this map- ping by � T η t d � B � t := Q 0 ,T ( η ) for η ∈ M 1 G (0 , T ) . 0 We still have � T � T ˆ σ 2 ˆ | η t | dt ] for η ∈ M 1 E [ | E [ η t d � B � t | ] ≤ ¯ G (0 , T ) . (4.10) 0 0 Proof. Firstly, for each j = 1 , · · · , N − 1, we have E [ | ξ j | ( � B � t j +1 − � B � t j ) − σ 2 | ξ j | ( t j +1 − t j )] ˆ =ˆ E [ˆ E [ | ξ j | ( � B � t j +1 − � B � t j ) | Ω t j ] − σ 2 | ξ j | ( t j +1 − t j )] =ˆ E [ | ξ j | σ 2 ( t j +1 − t j ) − σ 2 | ξ j | ( t j +1 − t j )] = 0 .

  49. 46 Chap.III G -Brownian Motion and Itˆ o’s Integral Then (4.9) can be checked as follows: N − 1 N − 1 � � ˆ ξ j ( � B � t j +1 − � B � t j ) | ] ≤ ˆ E [ | E [ | ξ j | � B � t j +1 − � B � t j ] j =0 j =0 N − 1 � N − 1 � ≤ ˆ | ξ j | [( � B � t j +1 − � B � t j ) − σ 2 ( t j +1 − t j )]] + ˆ E [ σ 2 E [ | ξ j | ( t j +1 − t j )] j =0 j =0 N − 1 N − 1 � � E [ | ξ j | [( � B � t j +1 − � B � t j ) − σ 2 ( t j +1 − t j )]] + ˆ ˆ E [ σ 2 ≤ | ξ j | ( t j +1 − t j )] j =0 j =0 � T N − 1 � =ˆ E [ σ 2 | ξ j | ( t j +1 − t j )] = σ 2 ˆ E [ | η t | dt ] . 0 j =0 � Proposition 4.4 Let 0 ≤ s ≤ t , ξ ∈ L 2 G (Ω s ) , X ∈ L 1 G (Ω) . Then ˆ s )] = ˆ E [ X + ξ ( B 2 t − B 2 E [ X + ξ ( B t − B s ) 2 ] = ˆ E [ X + ξ ( � B � t − � B � s )] . Proof. By (4.6) and Proposition 3.7 (iii), we have � t ˆ s )] = ˆ E [ X + ξ ( B 2 t − B 2 E [ X + ξ ( � B � t − � B � s + 2 B u dB u )] s = ˆ E [ X + ξ ( � B � t − � B � s )] . We also have E [ X + ξ (( B t − B s ) 2 + 2( B t − B s ) B s )] ˆ s )] = ˆ E [ X + ξ ( B 2 t − B 2 = ˆ E [ X + ξ ( B t − B s ) 2 ] . � We have the following isometry. Proposition 4.5 Let η ∈ M 2 G (0 , T ) . Then � T � T ˆ η t dB t ) 2 ] = ˆ η 2 E [( E [ t d � B � t ] . (4.11) 0 0 We first consider η ∈ M 2 , 0 Proof. G (0 , T ) of the form N − 1 � η t ( ω ) = ξ j ( ω ) I [ t j ,t j +1 ) ( t ) j =0

  50. 47 § 4 Quadratic Variation Process of G –Brownian Motion � T 0 η t dB t = � N − 1 and then j =0 ξ j ( B t j +1 − B t j ) . From Proposition 3.7, we get ˆ E [ X + 2 ξ j ( B t j +1 − B t j ) ξ i ( B t i +1 − B t i )] = ˆ E [ X ] for X ∈ L 1 G (Ω), i � = j. Thus � T N − 1 N − 1 � � ˆ η t dB t ) 2 ] = ˆ ξ j ( B t j +1 − B t j )) 2 ] = ˆ ξ 2 j ( B t j +1 − B t j ) 2 ] . E [( E [( E [ 0 j =0 j =0 From this and Proposition 4.4, it follows that � T � T N − 1 � ˆ η t dB t ) 2 ] = ˆ j ( � B � t j +1 − � B � t j )] = ˆ ξ 2 η 2 E [( E [ E [ t d � B � t ] . 0 0 j =0 Thus (4.11) holds for η ∈ M 2 , 0 G (0 , T ). We can continuously extend the above equality to the case η ∈ M 2 G (0 , T ) and get (4.11). � We now consider the multi-dimensional case. Let ( B t ) t ≥ 0 be a d -dimensional G –Brownian motion. For each fixed a ∈ R d , ( B a t ) t ≥ 0 is a 1-dimensional G a – Brownian motion. Similar to 1-dimensional case, we can define � t N − 1 � j ) 2 = ( B a t ) 2 − 2 � B a � t := ( B a j +1 − B a B a s dB a lim s , t N t N µ ( π N t ) → 0 0 j =0 where � B a � is called the quadratic variation process of B a . The above results also hold for � B a � . In particular, � T � T ˆ aa T ˆ η t d � B a � t | ] ≤ σ 2 | η t | dt ] for η ∈ M 1 E [ | E [ G (0 , T ) 0 0 and � T � T ˆ t ) 2 ] = ˆ η t dB a η 2 t d � B a � t ] for η ∈ M 2 E [( E [ G (0 , T ) . 0 0 Let a = ( a 1 , · · · , a d ) T and ¯ a d ) T be two given vectors in R d . We a = (¯ a 1 , · · · , ¯ then have their quadratic variation processes of � B a � and � B ¯ a � . We can define their mutual variation process by � a � � a � � a � t := 1 B a + B ¯ B a − B ¯ B a , B ¯ t − 4[ t ] � a � � a � = 1 B a + ¯ B a − ¯ 4[ t − t ] . Since � B a − ¯ a − a � = �− B a − ¯ a � = � B ¯ a � , we see that � B a , B ¯ a � t = � B ¯ a , B a � t . In particular, we have � B a , B a � = � B a � . Let π N t , N = 1 , 2 , · · · , be a sequence of partitions of [0 , t ]. We observe that N − 1 � N − 1 � k ) = 1 ) 2 − ( B a − ¯ [( B a + ¯ t k +1 − B a + ¯ t k +1 − B a − ¯ ) 2 ] . ( B a k +1 − B a k )( B ¯ a k +1 − B ¯ a a a a a t N t N t N t N t k t k 4 k =0 k =0

  51. 48 Chap.III G -Brownian Motion and Itˆ o’s Integral Thus as µ ( π N t ) → 0 we have N − 1 � � a � ( B a k +1 − B a k )( B ¯ a k +1 − B ¯ a B a , B ¯ lim k ) = t . t N t N t N t N N →∞ k =0 We also have � a � � a � � a � t = 1 B a + ¯ B a − ¯ B a , B ¯ 4[ t − t ] � t � t = 1 ) 2 − 2 ) 2 + 2 4[( B a + ¯ B a + ¯ dB a + ¯ − ( B a − ¯ B a − ¯ dB a − ¯ a a a a a a ] t s s t s s 0 0 � t � t = B a t B ¯ t − a B a s dB ¯ a s − B ¯ s dB a a s . 0 0 Now for each η ∈ M 1 G (0 , T ), we can consistently define � T � T � T � a � � a � � a � t = 1 t − 1 B a + ¯ B a − ¯ B a , B ¯ η t d η t d η t d t . 4 4 0 0 0 Lemma 4.6 Let η N ∈ M 2 , 0 G (0 , T ) , N = 1 , 2 , · · · , be of the form N − 1 � η N ξ N t ( ω ) = k ( ω ) I [ t N k +1 ) ( t ) k ,t N k =0 T ) → 0 and η N → η in M 2 with µ ( π N G (0 , T ) , as N → ∞ . Then we have the following convergence in L 2 G (Ω T ) : � T N − 1 � � a � ξ N k ( B a k +1 − B a k )( B ¯ a k +1 − B ¯ a B a , B ¯ k ) → η t d t . t N t N t N t N 0 k =0 Proof. Since � a � � a � B a , B ¯ B a , B ¯ k = ( B a k +1 − B a k )( B ¯ a k +1 − B ¯ a k +1 − k ) t N t N t N t N t N t N � t N � t N k +1 k +1 − ( B a s − B a k ) dB ¯ a s − ( B ¯ s − B ¯ a a k ) dB a s , t N t N t N t N k k we only need to prove � t N N − 1 � k +1 ˆ ( ξ N k ) 2 ( s ) 2 ] → 0 . E [ ( B a s − B a k ) dB ¯ a t N t N k =0 k For each k = 1 , · · · , N − 1, we have � t N k +1 s ) 2 − C ( ξ N ˆ E [( ξ N k ) 2 ( k ) 2 ( t N k +1 − t N k ) 2 ] ( B a s − B a k ) dB ¯ a t N t N k � t N k +1 =ˆ E [ˆ E [( ξ N k ) 2 ( s ) 2 | Ω t N k ] − C ( ξ N k ) 2 ( t N k +1 − t N k ) 2 ] ( B a s − B a k ) dB ¯ a t N t N k k ) 2 − C ( ξ N ≤ ˆ E [ C ( ξ N k ) 2 ( t N k +1 − t N k ) 2 ( t N k +1 − t N k ) 2 ] = 0 ,

  52. 49 § 5 The Distribution of � B � σ 2 σ 2 where C = ¯ aa T ¯ a T / 2. ¯ a¯ Thus we have � t N N − 1 � k +1 ˆ ( ξ N k ) 2 ( s ) 2 ] E [ ( B a s − B a k ) dB ¯ a t N t N k =0 k � t N N − 1 � k +1 s ) 2 − C ( t N ≤ ˆ ( ξ N k ) 2 [( k +1 − t N k ) 2 ]] ( B a s − B a k ) dB ¯ a E [ t N t N k =0 k N − 1 � + ˆ C ( ξ N k ) 2 ( t N k +1 − t N k ) 2 ] E [ k =0 � t N N − 1 � k +1 s ) 2 − C ( t N ˆ E [( ξ N k ) 2 [( k +1 − t N k ) 2 ]] ≤ ( B a s − B a k ) dB ¯ a t N t N k =0 k N − 1 � + ˆ C ( ξ N k ) 2 ( t N k +1 − t N k ) 2 ] E [ k =0 � T N − 1 � ≤ ˆ C ( ξ N k ) 2 ( t N k +1 − t N k ) 2 ] ≤ Cµ ( π N T )ˆ | η N t | 2 dt ] , E [ E [ 0 k =0 As µ ( π N T ) → 0, the proof is complete. � Exercise 4.7 Let B t be a 1-dimensional G-Brownian motion and ϕ be a bounded and Lipschitz function on R . Show that N − 1 � k ) 2 − ( � B � t N ˆ E [ | lim ϕ ( B t N k )[( B t N k +1 − B t N k +1 − � B � t N k )] | ] = 0 , N →∞ k =0 where t N k = kT/N, k = 0 , 2 , · · · , N − 1 . Exercise 4.8 Prove that, for a fixed η ∈ M 1 G (0 , T ) , � T � T � T σ 2 ˆ | η t | dt ] ≤ ˆ | η t | d � B � t ] ≤ σ 2 ˆ E [ E [ E [ | η t | dt ] , 0 0 0 where σ 2 = ˆ 1 ] and σ 2 = − ˆ E [ B 2 E [ − B 2 1 ] . The Distribution of � B � § 5 In this section, we first consider the 1-dimensional G –Brownian motion ( B t ) t ≥ 0 d = N ( { 0 } × [ σ 2 , ¯ σ 2 ]). with B 1 The quadratic variation process � B � of G -Brownian motion B is a very in- teresting process. We have seen that the G -Brownian motion B is a typical process with variance uncertainty but without mean-uncertainty. In fact, � B � is

  53. 50 Chap.III G -Brownian Motion and Itˆ o’s Integral concentrated all uncertainty of the G -Brownian motion B . Moreover, � B � itself is a typical process with mean-uncertainty. This fact will be applied to measure the mean-uncertainty of risk positions. Lemma 5.1 We have E [ � B � 2 ˆ σ 4 t 2 . t ] ≤ 10¯ (5.12) Proof. Indeed, � t 2 − 2 ˆ E [ � B � 2 t ] = ˆ B u dB u ) 2 ] E [( B t 0 � t ≤ 2ˆ t ] + 8ˆ E [ B 4 B u dB u ) 2 ] E [( 0 � t σ 4 t 2 + 8¯ σ 2 ˆ 2 du ] ≤ 6¯ E [ B u 0 � t σ 4 t 2 + 8¯ ˆ σ 2 2 ] du E [ B u ≤ 6¯ 0 σ 4 t 2 . = 10¯ � Proposition 5.2 Let ( b t ) t ≥ 0 be a process on a sublinear expectation space (Ω , H , ˆ E ) such that (i) b 0 = 0 ; (ii) For each t, s ≥ 0 , b t + s − b t is identically distributed with b s and independent from ( b t 1 , b t 2 , · · · , b t n ) for each n ∈ N and 0 ≤ t 1 , · · · , t n ≤ t ; t ] t − 1 = 0 . (iii) lim t ↓ 0 ˆ E [ b 2 Then b t is N ([ µt, µt ] × { 0 } ) -distributed with µ = ˆ E [ b 1 ] and µ = − ˆ E [ − b 1 ] . Proof. We first prove that E [ b t ] = µt and − ˆ ˆ E [ − b t ] = µt. We set ϕ ( t ) := ˆ E [ b t ]. Then ϕ (0) = 0 and lim t ↓ 0 ϕ ( t ) =0. Since for each t, s ≥ 0 , ϕ ( t + s ) = ˆ E [ b t + s ] = ˆ E [( b t + s − b s ) + b s ] = ϕ ( t ) + ϕ ( s ) . Thus ϕ ( t ) is linear and uniformly continuous in t, which means that ˆ E [ b t ] = µt . Similarly − ˆ E [ − b t ] = µt . We now prove that b t is N ([ µt, µt ] × { 0 } )-distributed. By Exercise 1.17 in Chap.II, we just need to prove that for each fixed ϕ ∈ C b.Lip ( R ), the function u ( t, x ) := ˆ E [ ϕ ( x + b t )] , ( t, x ) ∈ [0 , ∞ ) × R

  54. 51 § 5 The Distribution of � B � is the viscosity solution of the following parabolic PDE: ∂ t u − g ( ∂ x u ) = 0 , u | t =0 = ϕ (5.13) with g ( a ) = µa + − µa − . We first prove that u is Lipschitz in x and 1 2 -H¨ older continuous in t . In fact, for each fixed t , u ( t, · ) ∈ C b.Lip ( R ) since | ˆ E [ ϕ ( x + b t )] − ˆ E [ ϕ ( y + b t )] | ≤ ˆ E [ | ϕ ( x + b t ) − ϕ ( y + b t ) | ] ≤ C | x − y | . For each δ ∈ [0 , t ], since b t − b δ is independent from b δ , we have u ( t, x ) = ˆ E [ ϕ ( x + b δ + ( b t − b δ )] = ˆ E [ˆ E [ ϕ ( y + ( b t − b δ ))] y = x + b δ ] , hence u ( t, x ) = ˆ E [ u ( t − δ, x + b δ )] . (5.14) Thus | u ( t, x ) − u ( t − δ, x ) | = | ˆ E [ u ( t − δ, x + b δ ) − u ( t − δ, x )] | ≤ ˆ E [ | u ( t − δ, x + b δ ) − u ( t − δ, x ) | ] √ ≤ ˆ E [ C | b δ | ] ≤ C 1 δ. To prove that u is a viscosity solution of the PDE (5.13), we fix a point ( t, x ) ∈ (0 , ∞ ) × R and let v ∈ C 2 , 2 ([0 , ∞ ) × R ) be such that v ≥ u and b v ( t, x ) = u ( t, x ). From (5.14), we have v ( t, x ) = ˆ E [ u ( t − δ, x + b δ )] ≤ ˆ E [ v ( t − δ, x + b δ )] . Therefore, by Taylor’s expansion, 0 ≤ ˆ E [ v ( t − δ, x + b δ ) − v ( t, x )] = ˆ E [ v ( t − δ, x + b δ ) − v ( t, x + b δ ) + ( v ( t, x + b δ ) − v ( t, x ))] = ˆ E [ − ∂ t v ( t, x ) δ + ∂ x v ( t, x ) b δ + I δ ] ≤ − ∂ t v ( t, x ) δ + ˆ E [ ∂ x v ( t, x ) b δ ] + ˆ E [ I δ ] = − ∂ t v ( t, x ) δ + g ( ∂ x v ( t, x )) δ + ˆ E [ I δ ] , where � 1 I δ = δ [ − ∂ t v ( t − βδ, x + b δ ) + ∂ t v ( t, x )] dβ 0 � 1 + b δ [ ∂ x v ( t, x + βb δ ) − ∂ x v ( t, x )] dβ. 0

With the assumption that $\lim_{t\downarrow0}\hat{\mathbb E}[b_t^2]t^{-1}=0$, we can check that
\[ \lim_{\delta\downarrow0}\hat{\mathbb E}[|I_\delta|]\delta^{-1}=0, \]
from which we get $\partial_tv(t,x)-g(\partial_xv(t,x))\le0$, hence $u$ is a viscosity subsolution of (5.13). We can analogously prove that $u$ is also a viscosity supersolution. It follows that $b_t$ is $N([\underline\mu t,\bar\mu t]\times\{0\})$-distributed. The proof is complete. $\square$

It is clear that $\langle B\rangle$ satisfies all the conditions of Proposition 5.2, thus we immediately have

Theorem 5.3 $\langle B\rangle_t$ is $N([\underline\sigma^2t,\bar\sigma^2t]\times\{0\})$-distributed, i.e., for each $\varphi\in C_{l.Lip}(\mathbb R)$,
\[ \hat{\mathbb E}[\varphi(\langle B\rangle_t)]=\sup_{\underline\sigma^2\le v\le\bar\sigma^2}\varphi(vt). \tag{5.15} \]

Corollary 5.4 For each $0\le t\le T<\infty$, we have
\[ \underline\sigma^2(T-t)\le\langle B\rangle_T-\langle B\rangle_t\le\bar\sigma^2(T-t)\quad\text{in }L^1_G(\Omega). \]

Proof. It is a direct consequence of
\[ \hat{\mathbb E}[(\langle B\rangle_T-\langle B\rangle_t-\bar\sigma^2(T-t))^+]=\sup_{\underline\sigma^2\le v\le\bar\sigma^2}(v-\bar\sigma^2)^+(T-t)=0 \]
and
\[ \hat{\mathbb E}[(\langle B\rangle_T-\langle B\rangle_t-\underline\sigma^2(T-t))^-]=\sup_{\underline\sigma^2\le v\le\bar\sigma^2}(v-\underline\sigma^2)^-(T-t)=0.\qquad\square \]

Corollary 5.5 We have, for each $t,s\ge0$ and $n\in\mathbb N$,
\[ \hat{\mathbb E}[(\langle B\rangle_{t+s}-\langle B\rangle_s)^n\,|\,\Omega_s]=\hat{\mathbb E}[\langle B\rangle_t^n]=\bar\sigma^{2n}t^n \tag{5.16} \]
and
\[ \hat{\mathbb E}[-(\langle B\rangle_{t+s}-\langle B\rangle_s)^n\,|\,\Omega_s]=\hat{\mathbb E}[-\langle B\rangle_t^n]=-\underline\sigma^{2n}t^n. \tag{5.17} \]

We now consider the multi-dimensional case. For notational simplicity, we denote by $B^i:=B^{e_i}$ the $i$-th coordinate of the G-Brownian motion $B$, under a given orthonormal basis $(e_1,\dots,e_d)$ of $\mathbb R^d$. We denote
\[ (\langle B\rangle_t)_{ij}:=\langle B^i,B^j\rangle_t. \]
Then $\langle B\rangle_t$, $t\ge0$, is an $\mathbb S(d)$-valued process. Since
\[ \hat{\mathbb E}[\langle AB_t,B_t\rangle]=2G(A)t\quad\text{for }A\in\mathbb S(d), \]

we have
\[ \hat{\mathbb E}[(\langle B\rangle_t,A)]=\hat{\mathbb E}\Big[\sum_{i,j=1}^da_{ij}\langle B^i,B^j\rangle_t\Big]
 =\hat{\mathbb E}\Big[\sum_{i,j=1}^da_{ij}\Big(B^i_tB^j_t-\int_0^tB^i_s\,dB^j_s-\int_0^tB^j_s\,dB^i_s\Big)\Big]
 =\hat{\mathbb E}\Big[\sum_{i,j=1}^da_{ij}B^i_tB^j_t\Big]=2G(A)t\quad\text{for }A\in\mathbb S(d), \]
where $(a_{ij})_{i,j=1}^d=A$. Now we set, for each $\varphi\in C_{l.Lip}(\mathbb S(d))$,
\[ v(t,X):=\hat{\mathbb E}[\varphi(X+\langle B\rangle_t)],\quad (t,X)\in[0,\infty)\times\mathbb S(d). \]
Let $\Sigma\subset\mathbb S_+(d)$ be the bounded, convex and closed subset such that
\[ G(A)=\frac12\sup_{B\in\Sigma}(A,B),\quad A\in\mathbb S(d). \]

Proposition 5.6 The function $v$ solves the following first-order PDE:
\[ \partial_tv-2G(Dv)=0,\qquad v|_{t=0}=\varphi, \]
where $Dv=(\partial_{x_{ij}}v)_{i,j=1}^d$. We also have
\[ v(t,X)=\sup_{\Lambda\in\Sigma}\varphi(X+t\Lambda). \]

Sketch of the Proof. We have
\[ v(t+\delta,X)=\hat{\mathbb E}[\varphi(X+\langle B\rangle_\delta+\langle B\rangle_{t+\delta}-\langle B\rangle_\delta)]=\hat{\mathbb E}[v(t,X+\langle B\rangle_\delta)]. \]
The rest of the proof is similar to the 1-dimensional case. $\square$

Corollary 5.7 We have
\[ \langle B\rangle_t\in t\Sigma:=\{t\times\gamma:\gamma\in\Sigma\}, \]
or equivalently, $d_{t\Sigma}(\langle B\rangle_t)=0$, where $d_U(X)=\inf\{\sqrt{(X-Y,X-Y)}:Y\in U\}$.

Proof. Since
\[ \hat{\mathbb E}[d_{t\Sigma}(\langle B\rangle_t)]=\sup_{\Lambda\in\Sigma}d_{t\Sigma}(t\Lambda)=0, \]
it follows that $d_{t\Sigma}(\langle B\rangle_t)=0$. $\square$

Exercise 5.8 Complete the proof of Proposition 5.6.
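Theorem 5.3 can be illustrated numerically in the 1-dimensional case: under a constant-volatility scenario with variance $v\in[\underline\sigma^2,\bar\sigma^2]$, the realized quadratic variation $\sum_j(B_{t^N_{j+1}}-B_{t^N_j})^2$ of a discretized path concentrates near $vt$, so maximizing the simulated scenario value of $\varphi$ over a grid of such scenarios approximately reproduces the right-hand side of (5.15). The sketch below (Python; the test function, the scenario grid and all sizes are illustrative choices) does exactly this; it covers only a finite grid of constant-variance scenarios, not the whole uncertainty set.

```python
# Sketch checking Theorem 5.3:  E[phi(<B>_t)] = sup_{v in [sig_lo2, sig_hi2]} phi(v t),
# by simulating realized quadratic variation under constant-variance scenarios.
import numpy as np

rng = np.random.default_rng(2)
sig_lo2, sig_hi2 = 0.25, 1.0
t, N, n_paths = 1.0, 400, 20_000
phi = lambda q: np.sin(3.0 * q)                  # an illustrative test function

best = -np.inf
for v in np.linspace(sig_lo2, sig_hi2, 16):      # a grid of constant-variance scenarios
    dB = np.sqrt(v * t / N) * rng.standard_normal((n_paths, N))
    qv = (dB ** 2).sum(axis=1)                   # realized quadratic variation, close to v * t
    best = max(best, phi(qv).mean())
rhs = max(phi(v * t) for v in np.linspace(sig_lo2, sig_hi2, 2001))
print("sup over simulated scenarios ~", round(best, 4))
print("sup_v phi(v t)               =", round(float(rhs), 4))
```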

  57. 54 Chap.III G -Brownian Motion and Itˆ o’s Integral § 6 G –Itˆ o’s Formula In this section, we give Itˆ o’s formula for a “ G -Itˆ o process” X . For simplicity, we first consider the case of the function Φ is sufficiently regular. Lemma 6.1 Let Φ ∈ C 2 ( R n ) with ∂ x ν Φ , ∂ 2 x µ x ν Φ ∈ C b.Lip ( R n ) for µ, ν = 1 , · · · , n . Let s ∈ [0 , T ] be fixed and let X = ( X 1 , · · · , X n ) T be an n –dimensional process on [ s, T ] of the form � B i , B j � � B i , B j � s ) + β νj ( B j X ν t = X ν s + α ν ( t − s ) + η νij ( t − B j t − s ) , where, for ν = 1 , · · · , n , i, j = 1 , · · · , d , α ν , η νij and β νj are bounded elements s ) T is a given random vector in L 2 in L 2 G (Ω s ) and X s = ( X 1 s , · · · , X n G (Ω s ) . Then we have, in L 2 G (Ω t ) , � t � t ∂ x ν Φ( X u ) β νj dB j ∂ x ν Φ( X u ) α ν du Φ( X t ) − Φ( X s ) = u + (6.18) s s � t � B i , B j � [ ∂ x ν Φ( X u ) η νij + 1 2 ∂ 2 x µ x ν Φ( X u ) β µi β νj ] d + u . s Here we use the , i.e., the above repeated indices µ, ν , i and j imply the sum- mation. Proof. For each positive integer N , we set δ = ( t − s ) /N and take the partition π N [ s,t ] = { t N 0 , t N 1 , · · · , t N N } = { s, s + δ, · · · , s + Nδ = t } . We have N − 1 � Φ( X t ) − Φ( X s ) = [Φ( X t N k +1 ) − Φ( X t N k )] (6.19) k =0 N − 1 � k )( X ν k +1 − X ν = { ∂ x ν Φ( X t N k ) t N t N k =0 + 1 2[ ∂ 2 k )( X µ k +1 − X µ k )( X ν k +1 − X ν k ) + η N x µ x ν Φ( X t N k ] } , t N t N t N t N where k )]( X µ k +1 − X µ η N k = [ ∂ 2 k )) − ∂ 2 k )( X ν k +1 − X ν x µ x ν Φ( X t N k + θ k ( X t N k +1 − X t N x µ x ν Φ( X t N k ) t N t N t N t N with θ k ∈ [0 , 1]. We have ˆ E [ | η N k | 2 ] = ˆ E [ | [ ∂ 2 k )) − ∂ 2 x µ x ν Φ( X t N k + θ k ( X t N k +1 − X t N x µ x ν Φ( X t N k )] × ( X µ k +1 − X µ k )( X ν k +1 − X ν k ) | 2 ] t N t N t N t N k | 6 ] ≤ C [ δ 6 + δ 3 ] , ≤ c ˆ E [ | X t N k +1 − X t N

  58. 55 § 6 G –Itˆ o’s Formula where c is the Lipschitz constant of { ∂ 2 x µ x ν Φ } n µ,ν =1 and C is a constant inde- pendent of k . Thus N − 1 � N − 1 � ˆ ˆ η N k | 2 ] ≤ N E [ | η N k | 2 ] → 0 . E [ | k =0 k =0 The rest terms in the summation of the right side of (6.19) are ξ N t + ζ N with t N − 1 � � B i , B j � � B i , B j � ξ N k )[ α ν ( t N k +1 − t N k ) + η νij ( { ∂ x ν Φ( X t N k +1 − t = k ) t N t N k =0 k )] + 1 + β νj ( B j k +1 − B j k )( B j k +1 − B j 2 ∂ 2 k ) β µi β νj ( B i k +1 − B i k ) } x µ x ν Φ( X t N t N t N t N t N t N t N and N − 1 � � B i , B j � � B i , B j � = 1 ζ N ∂ 2 k ) { [ α µ ( t N k +1 − t N k ) + η µij ( k +1 − x µ x ν Φ( X t N k )] t t N t N 2 k =0 � B l , B m � � B l , B m � × [ α ν ( t N k +1 − t N k ) + η νlm ( k +1 − k )] t N t N � B i , B j � � B i , B j � + 2[ α µ ( t N k +1 − t N k ) + η µij ( k )] β νl ( B l k +1 − B l k +1 − k ) } . t N t N t N t N We observe that, for each u ∈ [ t N k , t N k +1 ), N − 1 � ˆ k +1 ) ( u ) | 2 ] E [ | ∂ x ν Φ( X u ) − ∂ x ν Φ( X t N k ) I [ t N k ,t N k =0 = ˆ k ) | 2 ] E [ | ∂ x ν Φ( X u ) − ∂ x ν Φ( X t N ≤ c 2 ˆ k | 2 ] ≤ C [ δ + δ 2 ] , E [ | X u − X t N where c is the Lipschitz constant of { ∂ x ν Φ } n ν =1 and C is a constant independent of k . Thus � N − 1 k +1 ) ( · ) converges to ∂ x ν Φ( X · ) in M 2 k =0 ∂ x ν Φ( X t N k ) I [ t N G (0 , T ). k ,t N Similarly, � N − 1 k =0 ∂ 2 k +1 ) ( · ) converges to ∂ 2 x µ x ν Φ( X · ) in M 2 x µ x ν Φ( X t N k ) I [ t N G (0 , T ) . k ,t N From Lemma 4.6 as well as the definitions of the integrations of dt , dB t and d � B � t , the limit of ξ N in L 2 G (Ω t ) is just the right hand side of (6.18). By the t next Remark we also have ζ N → 0 in L 2 G (Ω t ). We then have proved (6.18). � t Remark 6.2 To prove ζ N → 0 in L 2 G (Ω t ) , we use the following estimates: for t = � N − 1 ψ N ∈ M 2 , 0 G (0 , T ) with ψ N k =0 ξ N k +1 ) ( t ) , and π N = { t N 0 , · · · , t N t k I [ t N N } k ,t N t T E [ � N − 1 T ) = 0 and ˆ such that lim N →∞ µ ( π N k =0 | ξ N t k | 2 ( t N k +1 − t N k )] ≤ C , for all N = E [ | � N − 1 1 , 2 , · · · , we have ˆ k =0 ξ N k ( t N k +1 − t N k ) 2 | 2 ] → 0 and, for any fixed a , ¯ a ∈ R d , N − 1 N − 1 � � ˆ k ) 2 | 2 ] ≤ C ˆ ξ N | ξ N k | 2 ( � B a � t N k ) 3 ] E [ | k ( � B a � t N k +1 − � B a � t N E [ k +1 − � B a � t N k =0 k =0 N − 1 � ≤ C ˆ | ξ N k | 2 σ 6 aa T ( t N k +1 − t N k ) 3 ] → 0 , E [ k =0

  59. 56 Chap.III G -Brownian Motion and Itˆ o’s Integral N − 1 � ˆ ξ N k )( t N k +1 − t N k ) | 2 ] E [ | k ( � B a � t N k +1 − � B a � t N k =0 N − 1 � ≤ C ˆ | ξ N k | 2 ( t N k +1 − t N k ) 2 ] k )( � B a � t N k +1 − � B a � t N E [ k =0 N − 1 � ≤ C ˆ | ξ N k | 2 σ 4 aa T ( t N k +1 − t N k ) 3 ] → 0 , E [ k =0 as well as N − 1 � ˆ ξ N k ( t N k +1 − t N k ) | 2 ] k )( B a k +1 − B a E [ | t N t N k =0 N − 1 � ≤ C ˆ | ξ N k | 2 ( t N k +1 − t N k | 2 ] E [ k ) | B a k +1 − B a t N t N k =0 N − 1 � ≤ C ˆ | ξ N k | 2 σ 2 aa T ( t N k +1 − t N k ) 2 ] → 0 E [ k =0 and N − 1 � ˆ ξ N k ) | 2 ] E [ | k ( � B a � t N k +1 − � B a � t N k )( B ¯ a k +1 − B ¯ a t N t N k =0 N − 1 � ≤ C ˆ | ξ N k | 2 ( � B a � t N k | 2 ] E [ k +1 − � B a � t N k ) | B ¯ a k +1 − B ¯ a t N t N k =0 N − 1 � ≤ C ˆ | ξ N k | 2 σ 2 aa T σ 2 a T ( t N k +1 − t N k ) 2 ] → 0 . E [ ¯ a¯ k =0 � We now consider a general form of G –Itˆ o’s formula. Consider � t � t � t � B i , B j � X ν t = X ν α ν η νij β νj s dB j 0 + s ds + d s + s . s 0 0 0 Proposition 6.3 Let Φ ∈ C 2 ( R n ) with ∂ x ν Φ , ∂ 2 x µ x ν Φ ∈ C b.Lip ( R n ) for µ, ν = 1 , · · · , n . Let α ν , β νj and η νij , ν = 1 , · · · , n , i, j = 1 , · · · , d be bounded processes in M 2 G (0 , T ) . Then for each t ≥ 0 we have, in L 2 G (Ω t ) � t � t ∂ x ν Φ( X u ) β νj u dB j ∂ x ν Φ( X u ) α ν Φ( X t ) − Φ( X s ) = u + u du (6.20) s s � t � B i , B j � + 1 [ ∂ x ν Φ( X u ) η νij 2 ∂ 2 x µ x ν Φ( X u ) β µi u β νj + u ] d u . u s

  60. 57 § 6 G –Itˆ o’s Formula Proof. We first consider the case where α , η and β are step processes of the form N − 1 � η t ( ω ) = ξ k ( ω ) I [ t k ,t k +1 ) ( t ) . k =0 From the above lemma, it is clear that (6.20) holds true. Now let � t � t � t � B i , B j � X ν,N = X ν α ν,N η νij,N β νj,N dB j 0 + ds + d s + s , t s s s 0 0 0 where α N , η N and β N are uniformly bounded step processes that converge to α , η and β in M 2 G (0 , T ) as N → ∞ , respectively. From Lemma 6.1, � t � t Φ( X N t ) − Φ( X N ∂ x ν Φ( X N u ) β νj,N dB j ∂ x ν Φ( X N u ) α ν,N s ) = u + du (6.21) u u s s � t � B i , B j � + 1 [ ∂ x ν Φ( X N u ) η νij,N 2 ∂ 2 x µ x ν Φ( X N u ) β µi,N β νj,N + ] d u . u u u s Since E [ | X ν,N ˆ − X ν t | 2 ] t � T s ) 2 + | η ν,N s | 2 + | β ν,N ≤ C ˆ [( α ν,N − α ν − η ν − β ν s | 2 ] ds ] , E [ s s s 0 where C is a constant independent of N , we can prove that, in M 2 G (0 , T ), ∂ x ν Φ( X N · ) η νij,N → ∂ x ν Φ( X · ) η νij , · · ∂ 2 x µ x ν Φ( X N · ) β µi,N β νj,N → ∂ 2 x µ x ν Φ( X · ) β µi · β νj · , · · ∂ x ν Φ( X N · ) α ν,N → ∂ x ν Φ( X · ) α ν · , · ∂ x ν Φ( X N · ) β νj,N → ∂ x ν Φ( X · ) β νj · . · We then can pass to limit as N → ∞ in both sides of (6.21) to get (6.20). � In order to consider the general Φ, we first prove a useful inequality. For the G -expectation ˆ E , we have the following representation (see Chap.VI): ˆ E P [ X ] for X ∈ L 1 E [ X ] = sup G (Ω) , (6.22) P ∈P where P is a weakly compact family of probability measures on (Ω , B (Ω)). G (0 , T ) with p ≥ 2 and let a ∈ R d be fixed. Then Proposition 6.4 Let β ∈ M p � T t ∈ L p 0 β t dB a we have G (Ω T ) and � T � T ˆ t | p ] ≤ C p ˆ β 2 t d � B a � t | p/ 2 ] . E [ | β t dB a E [ | (6.23) 0 0

  61. 58 Chap.III G -Brownian Motion and Itˆ o’s Integral Proof. It suffices to consider the case where β is a step process of the form N − 1 � β t ( ω ) = ξ k ( ω ) I [ t k ,t k +1 ) ( t ) . k =0 For each ξ ∈ L ip (Ω t ) with t ∈ [0 , T ], we have � T ˆ E [ ξ β s dB a s ] = 0 . t � T From this we can easily get E P [ ξ t β s dB a s ] = 0 for each P ∈ P , which implies � t 0 β s dB a that ( s ) t ∈ 0 ,T ] is a P -martingale. Similarly we can prove that � t � t s ) 2 − β 2 β s dB a s d � B a � s , t ∈ [0 , T ] M t := ( 0 0 is a P -martingale for each P ∈ P . By the Burkholder-Davis-Gundy inequalities, we have � T � T � T t | p ] ≤ C p E P [ | β 2 t d � B a � t | p/ 2 ] ≤ C p ˆ β 2 t d � B a � t | p/ 2 ] , β t dB a E P [ | E [ | 0 0 0 where C p is a universal constant independent of P . Thus we get (6.23). � We now give the general G –Itˆ o’s formula. Theorem 6.5 Let Φ be a C 2 -function on R n such that ∂ 2 x µ x ν Φ satisfy polyno- mial growth condition for µ, ν = 1 , · · · , n . Let α ν , β νj and η νij , ν = 1 , · · · , n , i, j = 1 , · · · , d be bounded processes in M 2 G (0 , T ) . Then for each t ≥ 0 we have in L 2 G (Ω t ) � t � t ∂ x ν Φ( X u ) β νj u dB j ∂ x ν Φ( X u ) α ν Φ( X t ) − Φ( X s ) = u + u du (6.24) s s � t � B i , B j � + 1 [ ∂ x ν Φ( X u ) η νij 2 ∂ 2 x µ x ν Φ( X u ) β µi u β νj + u ] d u . u s Proof. By the assumptions on Φ, we can choose a sequence of functions Φ N ∈ C 2 0 ( R n ) such that x µ x ν Φ( x ) | ≤ C 1 | Φ N ( x ) − Φ( x ) | + | ∂ x ν Φ N ( x ) − ∂ x ν Φ( x ) | + | ∂ 2 x µ x ν Φ N ( x ) − ∂ 2 N (1+ | x | k ) , where C 1 and k are positive constants independent of N . Obviously, Φ N satisfies the conditions in Proposition 6.3, therefore, � t � t ∂ x ν Φ N ( X u ) β νj u dB j ∂ x v Φ N ( X u ) α ν Φ N ( X t ) − Φ N ( X s ) = u + u du (6.25) s s � t � B i , B j � + 1 [ ∂ x ν Φ N ( X u ) η νij 2 ∂ 2 x µ x ν Φ N ( X u ) β µi u β νj + u ] d u . u s

For each fixed $T>0$, by Proposition 6.4, there exists a constant $C_2$ such that $\hat{\mathbb{E}}[|X_t|^{2k}]\le C_2$ for $t\in[0,T]$. Thus we can prove that $\Phi_N(X_t)\to\Phi(X_t)$ in $L^2_G(\Omega_t)$ and, in $M^2_G(0,T)$,
$$\partial_{x^\nu}\Phi_N(X_\cdot)\eta^{\nu ij}_\cdot\to\partial_{x^\nu}\Phi(X_\cdot)\eta^{\nu ij}_\cdot,\qquad \partial^2_{x^\mu x^\nu}\Phi_N(X_\cdot)\beta^{\mu i}_\cdot\beta^{\nu j}_\cdot\to\partial^2_{x^\mu x^\nu}\Phi(X_\cdot)\beta^{\mu i}_\cdot\beta^{\nu j}_\cdot,$$
$$\partial_{x^\nu}\Phi_N(X_\cdot)\alpha^{\nu}_\cdot\to\partial_{x^\nu}\Phi(X_\cdot)\alpha^{\nu}_\cdot,\qquad \partial_{x^\nu}\Phi_N(X_\cdot)\beta^{\nu j}_\cdot\to\partial_{x^\nu}\Phi(X_\cdot)\beta^{\nu j}_\cdot.$$
We can then pass to the limit as $N\to\infty$ on both sides of (6.25) to get (6.24). $\square$

Corollary 6.6 Let $\Phi$ be a polynomial and let $a,a_\nu\in\mathbb{R}^d$ be fixed for $\nu=1,\cdots,n$. Then we have
$$\Phi(X_t)-\Phi(X_s)=\int_s^t\partial_{x^\nu}\Phi(X_u)\,dB^{a_\nu}_u+\frac12\int_s^t\partial^2_{x^\mu x^\nu}\Phi(X_u)\,d\langle B^{a_\mu},B^{a_\nu}\rangle_u,$$
where $X_t=(B^{a_1}_t,\cdots,B^{a_n}_t)^T$. In particular, we have, for $k=2,3,\cdots$,
$$(B^a_t)^k=k\int_0^t(B^a_s)^{k-1}\,dB^a_s+\frac{k(k-1)}{2}\int_0^t(B^a_s)^{k-2}\,d\langle B^a\rangle_s.$$
If $\hat{\mathbb{E}}$ becomes a linear expectation, then the above $G$-Itô formula is the classical one.

§ 7 Generalized G-Brownian Motion

Let $G:\mathbb{R}^d\times\mathbb{S}(d)\to\mathbb{R}$ be a given continuous sublinear function monotonic in $A\in\mathbb{S}(d)$. Then by Theorem 2.1 in Chap.I, there exists a bounded, convex and closed subset $\Sigma\subset\mathbb{R}^d\times\mathbb{S}_+(d)$ such that
$$G(p,A)=\sup_{(q,B)\in\Sigma}\Big[\frac12\mathrm{tr}[AB]+\langle p,q\rangle\Big]\quad\text{for }(p,A)\in\mathbb{R}^d\times\mathbb{S}(d).$$
By Chapter II, we know that there exists a pair of $d$-dimensional random vectors $(X,Y)$ which is $G$-distributed. We now give the definition of the generalized $G$-Brownian motion.

Definition 7.1 A $d$-dimensional process $(B_t)_{t\ge0}$ on a sublinear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is called a generalized G-Brownian motion if the following properties are satisfied:
(i) $B_0(\omega)=0$;
(ii) For each $t,s\ge0$, the increment $B_{t+s}-B_t$ is identically distributed with $\sqrt{s}X+sY$ and is independent from $(B_{t_1},B_{t_2},\cdots,B_{t_n})$, for each $n\in\mathbb{N}$ and $0\le t_1\le\cdots\le t_n\le t$, where $(X,Y)$ is $G$-distributed.
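As a purely illustrative aside (not part of the text's development), the following Python sketch samples a single increment of a generalized $G$-Brownian motion scenario by scenario. It works in dimension $d=1$ and assumes a rectangular scenario set $\Sigma=[q_{\min},q_{\max}]\times[\sigma^2_{\min},\sigma^2_{\max}]$; under each constant scenario $(q,\sigma^2)\in\Sigma$ the increment $B_{t+s}-B_t$ is Gaussian with mean $sq$ and variance $s\sigma^2$, and the supremum of the scenario-wise expectations is in general only a lower approximation of $\hat{\mathbb{E}}[\varphi(B_s)]$ (the exact value is characterized by the fully nonlinear PDE appearing in the proof of the next theorem). All names and numerical values below are assumptions made for the illustration.

```python
# Minimal sketch, assuming d = 1 and Sigma = [q_min, q_max] x [sig2_min, sig2_max].
# Under a constant scenario (q, sig2) the increment B_s is N(s*q, s*sig2); the
# supremum over a finite grid of scenarios gives a lower approximation of the
# sublinear expectation E_hat[phi(B_s)].
import numpy as np

rng = np.random.default_rng(0)

def scenario_expectation(phi, s, q, sig2, n=200_000):
    """Monte Carlo estimate of E[phi(s*q + sqrt(s*sig2)*Z)] for standard normal Z."""
    z = rng.standard_normal(n)
    return float(phi(s * q + np.sqrt(s * sig2) * z).mean())

def lower_G_expectation(phi, s, q_range, sig2_range, grid=5):
    """sup over a finite grid of constant scenarios (q, sig2) in Sigma."""
    best = -np.inf
    for q in np.linspace(*q_range, grid):
        for sig2 in np.linspace(*sig2_range, grid):
            best = max(best, scenario_expectation(phi, s, q, sig2))
    return best

phi = lambda x: np.maximum(x, 0.0)   # a convex, nondecreasing test function
print(lower_G_expectation(phi, s=1.0, q_range=(-0.1, 0.1), sig2_range=(0.5, 1.0)))
```

For the convex, nondecreasing test function used here the extreme constant scenario is in fact optimal, so the lower approximation is already exact in this one-dimensional setting; for a general $\varphi$ it is only a lower bound.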

The following theorem gives a characterization of the generalized $G$-Brownian motion.

Theorem 7.2 Let $(B_t)_{t\ge0}$ be a $d$-dimensional process defined on a sublinear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ such that
(i) $B_0(\omega)=0$;
(ii) For each $t,s\ge0$, $B_{t+s}-B_t$ and $B_s$ are identically distributed and $B_{t+s}-B_t$ is independent from $(B_{t_1},B_{t_2},\cdots,B_{t_n})$, for each $n\in\mathbb{N}$ and $0\le t_1\le\cdots\le t_n\le t$;
(iii) $\lim_{t\downarrow0}\hat{\mathbb{E}}[|B_t|^3]\,t^{-1}=0$.
Then $(B_t)_{t\ge0}$ is a generalized $G$-Brownian motion with
$$G(p,A)=\lim_{\delta\downarrow0}\hat{\mathbb{E}}\Big[\langle p,B_\delta\rangle+\frac12\langle AB_\delta,B_\delta\rangle\Big]\,\delta^{-1}\quad\text{for }(p,A)\in\mathbb{R}^d\times\mathbb{S}(d).$$

Proof. We first prove that $\lim_{\delta\downarrow0}\hat{\mathbb{E}}[\langle p,B_\delta\rangle+\frac12\langle AB_\delta,B_\delta\rangle]\,\delta^{-1}$ exists. For each fixed $(p,A)\in\mathbb{R}^d\times\mathbb{S}(d)$, we set
$$f(t):=\hat{\mathbb{E}}\Big[\langle p,B_t\rangle+\frac12\langle AB_t,B_t\rangle\Big].$$
Since
$$|f(t+h)-f(t)|\le\hat{\mathbb{E}}\big[(|p|+2|A||B_t|)|B_{t+h}-B_t|+|A||B_{t+h}-B_t|^2\big]\to0,$$
we get that $f(t)$ is a continuous function. It is easy to prove that $\hat{\mathbb{E}}[\langle q,B_t\rangle]=\hat{\mathbb{E}}[\langle q,B_1\rangle]\,t$ for $q\in\mathbb{R}^d$. Thus for each $t,s>0$,
$$|f(t+s)-f(t)-f(s)|\le C\,\hat{\mathbb{E}}[|B_t|]\,s,\quad\text{where }C=|A|\,\hat{\mathbb{E}}[|B_1|].$$
By (iii), there exists a constant $\delta_0>0$ such that $\hat{\mathbb{E}}[|B_t|^3]\le t$ for $t\le\delta_0$. Thus for each fixed $t>0$ and $N\in\mathbb{N}$ such that $Nt\le\delta_0$, we have
$$|f(Nt)-Nf(t)|\le\frac34C(Nt)^{4/3}.$$
From this and the continuity of $f$, it is easy to show that $\lim_{t\downarrow0}f(t)t^{-1}$ exists. Thus we can get $G(p,A)$ for each $(p,A)\in\mathbb{R}^d\times\mathbb{S}(d)$. It is also easy to check that $G$ is a continuous sublinear function monotonic in $A\in\mathbb{S}(d)$.

We only need to prove that, for each fixed $\varphi\in C_{b.Lip}(\mathbb{R}^d)$, the function
$$u(t,x):=\hat{\mathbb{E}}[\varphi(x+B_t)],\quad (t,x)\in[0,\infty)\times\mathbb{R}^d,$$
is the viscosity solution of the following parabolic PDE:
$$\partial_tu-G(Du,D^2u)=0,\qquad u|_{t=0}=\varphi. \tag{7.26}$$

We first prove that $u$ is Lipschitz in $x$ and $\frac12$-Hölder continuous in $t$. In fact, for each fixed $t$, $u(t,\cdot)\in C_{b.Lip}(\mathbb{R}^d)$ since
$$|\hat{\mathbb{E}}[\varphi(x+B_t)]-\hat{\mathbb{E}}[\varphi(y+B_t)]|\le\hat{\mathbb{E}}[|\varphi(x+B_t)-\varphi(y+B_t)|]\le C|x-y|.$$
For each $\delta\in[0,t]$, since $B_t-B_\delta$ is independent from $B_\delta$,
$$u(t,x)=\hat{\mathbb{E}}[\varphi(x+B_\delta+(B_t-B_\delta))]=\hat{\mathbb{E}}\big[\hat{\mathbb{E}}[\varphi(y+(B_t-B_\delta))]_{y=x+B_\delta}\big].$$
Hence
$$u(t,x)=\hat{\mathbb{E}}[u(t-\delta,x+B_\delta)]. \tag{7.27}$$
Thus
$$|u(t,x)-u(t-\delta,x)|=|\hat{\mathbb{E}}[u(t-\delta,x+B_\delta)-u(t-\delta,x)]|\le\hat{\mathbb{E}}[|u(t-\delta,x+B_\delta)-u(t-\delta,x)|]\le\hat{\mathbb{E}}[C|B_\delta|]\le C\sqrt{G(0,I)+1}\,\sqrt{\delta}.$$
To prove that $u$ is a viscosity solution of (7.26), we fix $(t,x)\in(0,\infty)\times\mathbb{R}^d$ and let $v\in C^{2,3}_b([0,\infty)\times\mathbb{R}^d)$ be such that $v\ge u$ and $v(t,x)=u(t,x)$. From (7.27), we have
$$v(t,x)=\hat{\mathbb{E}}[u(t-\delta,x+B_\delta)]\le\hat{\mathbb{E}}[v(t-\delta,x+B_\delta)].$$
Therefore, by Taylor's expansion,
$$0\le\hat{\mathbb{E}}[v(t-\delta,x+B_\delta)-v(t,x)]=\hat{\mathbb{E}}[v(t-\delta,x+B_\delta)-v(t,x+B_\delta)+(v(t,x+B_\delta)-v(t,x))]$$
$$=\hat{\mathbb{E}}\Big[-\partial_tv(t,x)\delta+\langle Dv(t,x),B_\delta\rangle+\frac12\langle D^2v(t,x)B_\delta,B_\delta\rangle+I_\delta\Big]$$
$$\le-\partial_tv(t,x)\delta+\hat{\mathbb{E}}\Big[\langle Dv(t,x),B_\delta\rangle+\frac12\langle D^2v(t,x)B_\delta,B_\delta\rangle\Big]+\hat{\mathbb{E}}[I_\delta],$$
where
$$I_\delta=-\int_0^1[\partial_tv(t-\beta\delta,x+B_\delta)-\partial_tv(t,x)]\,\delta\,d\beta+\int_0^1\int_0^1\langle(D^2v(t,x+\alpha\beta B_\delta)-D^2v(t,x))B_\delta,B_\delta\rangle\,\alpha\,d\beta\,d\alpha.$$
With assumption (iii) we can check that $\lim_{\delta\downarrow0}\hat{\mathbb{E}}[|I_\delta|]\delta^{-1}=0$, from which we get $\partial_tv(t,x)-G(Dv(t,x),D^2v(t,x))\le0$; hence $u$ is a viscosity subsolution of (7.26). We can analogously prove that $u$ is a viscosity supersolution. Thus $u$ is a viscosity solution and $(B_t)_{t\ge0}$ is a generalized $G$-Brownian motion. $\square$
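To see the limit in Theorem 7.2 at work numerically, one can compare $G(p,A)$ with the small-time quotient $\hat{\mathbb{E}}[\langle p,B_\delta\rangle+\frac12\langle AB_\delta,B_\delta\rangle]\,\delta^{-1}$. The sketch below is an illustration only: it works in $d=1$ with an assumed rectangular $\Sigma$, replaces $\hat{\mathbb{E}}$ by the supremum over constant scenarios (which shares the same small-$\delta$ limit), and uses arbitrary grid sizes, Monte Carlo sample sizes and $(p,A)$ values.

```python
# Small-time check of Theorem 7.2 in dimension 1, over an assumed finite grid of
# constant scenarios (q, sig2): both G(p, A) and the quotient are computed as
# suprema over this grid, so agreement is up to Monte Carlo noise and O(delta) bias.
import numpy as np

rng = np.random.default_rng(1)
Q = np.linspace(-0.2, 0.3, 7)        # assumed drift-uncertainty interval
SIG2 = np.linspace(0.25, 1.0, 7)     # assumed variance-uncertainty interval

def G(p, A):
    # G(p, A) = sup over the grid of (p*q + 0.5*A*sig2)
    return max(p * q + 0.5 * A * s2 for q in Q for s2 in SIG2)

def small_time_quotient(p, A, delta=1e-2, n=200_000):
    # sup over constant scenarios of E[p*B_delta + 0.5*A*B_delta^2], divided by delta;
    # under the scenario (q, sig2) the increment B_delta is N(delta*q, delta*sig2)
    best = -np.inf
    for q in Q:
        for s2 in SIG2:
            b = delta * q + np.sqrt(delta * s2) * rng.standard_normal(n)
            best = max(best, float(np.mean(p * b + 0.5 * A * b**2)))
    return best / delta

for p, A in [(1.0, 1.0), (1.0, -1.0), (-0.5, 2.0)]:
    print((p, A), "G =", G(p, A), " small-time estimate =", small_time_quotient(p, A))
```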

Notes and Comments

Bachelier (1900) [6] proposed Brownian motion as a model for the fluctuations of the stock market, Einstein (1905) [42] used Brownian motion to give an experimental confirmation of the atomic theory, and Wiener (1923) [119] gave a mathematically rigorous construction of Brownian motion. Here we follow Kolmogorov's idea (1956) [72] to construct $G$-Brownian motion by introducing an infinite dimensional function space and the corresponding family of infinite dimensional sublinear distributions, instead of the linear distributions in [72].

The notions of $G$-Brownian motion and the related stochastic calculus of Itô's type were first introduced by Peng (2006) [98] for the 1-dimensional case and then in [102] for the multi-dimensional situation. It is very interesting that Denis and Martini (2006) [38] studied the super-pricing of contingent claims under model uncertainty of volatility. They introduced a norm on the space of continuous paths $\Omega=C([0,T])$ which corresponds to our $L^2_G$-norm and developed a stochastic integral. There is no notion of nonlinear expectation and the related nonlinear distribution, such as $G$-expectation, conditional $G$-expectation, the related $G$-normal distribution and the notion of independence, in their paper. On the other hand, powerful tools from capacity theory enable them to obtain pathwise results for random variables and stochastic processes through the language of "quasi-surely" (see e.g. Dellacherie (1972) [32], Dellacherie and Meyer (1978 and 1982) [33], Feyel and de La Pradelle (1989) [48]) in place of "almost surely" in classical probability theory.

A main motivation for $G$-Brownian motion was pricing and risk measurement under volatility uncertainty in financial markets (see Avellaneda, Levy and Paras (1995) [5] and Lyons (1995) [80]). It is well known that under volatility uncertainty the corresponding uncertain probabilities are mutually singular. This causes a serious problem for the related path analysis, e.g., for path-dependent derivatives, under a classical probability space. Our $G$-Brownian motion provides a powerful tool for such problems.

Our new Itô calculus for $G$-Brownian motion is of course inspired by Itô's groundbreaking work since 1942 [63] on stochastic integration, stochastic differential equations and stochastic calculus, and by the interesting books cited in Chap. IV. Itô's formula given by Theorem 6.5 is from Peng [98], [102]. Gao (2009) [54] proved a more general Itô formula for $G$-Brownian motion. An interesting problem is: can we get an Itô formula in which the conditions correspond to the classical ones? Recently Li and Peng have solved this problem in [77].

Using the nonlinear Markovian semigroup known as Nisio's semigroup (see Nisio (1976) [84]), Peng (2005) [96] studied processes with Markovian properties under a nonlinear expectation.

Chapter IV
G-martingales and Jensen's Inequality

In this chapter, we introduce the notion of $G$-martingales and the related Jensen's inequality for a new type of $G$-convex functions. Essentially different from the classical situation, "$M$ is a $G$-martingale" does not imply that "$-M$ is a $G$-martingale".

§ 1 The Notion of G-martingales

We now give the notion of $G$-martingales.

Definition 1.1 A process $(M_t)_{t\ge0}$ is called a G-martingale (respectively, G-supermartingale, G-submartingale) if for each $t\in[0,\infty)$, $M_t\in L^1_G(\Omega_t)$ and for each $s\in[0,t]$, we have
$$\hat{\mathbb{E}}[M_t|\Omega_s]=M_s\quad(\text{respectively, }\le M_s,\ \ge M_s).$$

Example 1.2 For each fixed $X\in L^1_G(\Omega)$, it is clear that $(\hat{\mathbb{E}}[X|\Omega_t])_{t\ge0}$ is a $G$-martingale.

Example 1.3 For each fixed $a\in\mathbb{R}^d$, it is easy to check that $(B^a_t)_{t\ge0}$ and $(-B^a_t)_{t\ge0}$ are $G$-martingales. The process $(\langle B^a\rangle_t-\sigma^2_{aa^T}t)_{t\ge0}$ is a $G$-martingale since
$$\hat{\mathbb{E}}[\langle B^a\rangle_t-\sigma^2_{aa^T}t|\Omega_s]=\hat{\mathbb{E}}[\langle B^a\rangle_s-\sigma^2_{aa^T}t+(\langle B^a\rangle_t-\langle B^a\rangle_s)|\Omega_s]$$
$$=\langle B^a\rangle_s-\sigma^2_{aa^T}t+\hat{\mathbb{E}}[\langle B^a\rangle_t-\langle B^a\rangle_s]=\langle B^a\rangle_s-\sigma^2_{aa^T}s.$$

Similarly we can show that $(-(\langle B^a\rangle_t-\sigma^2_{aa^T}t))_{t\ge0}$ is a $G$-submartingale. The process $((B^a_t)^2)_{t\ge0}$ is a $G$-submartingale since
$$\hat{\mathbb{E}}[(B^a_t)^2|\Omega_s]=\hat{\mathbb{E}}[(B^a_s)^2+2B^a_s(B^a_t-B^a_s)+(B^a_t-B^a_s)^2|\Omega_s]$$
$$=(B^a_s)^2+\hat{\mathbb{E}}[(B^a_t-B^a_s)^2|\Omega_s]=(B^a_s)^2+\sigma^2_{aa^T}(t-s)\ge(B^a_s)^2.$$
Similarly we can prove that $((B^a_t)^2-\sigma^2_{aa^T}t)_{t\ge0}$ and $((B^a_t)^2-\langle B^a\rangle_t)_{t\ge0}$ are $G$-martingales.

In general, we have the following important property.

Proposition 1.4 Let $M_0\in\mathbb{R}$, $\varphi=(\varphi^j)_{j=1}^d\in M^2_G(0,T;\mathbb{R}^d)$ and $\eta=(\eta^{ij})_{i,j=1}^d\in M^1_G(0,T;\mathbb{S}(d))$ be given and let
$$M_t=M_0+\int_0^t\varphi^j_u\,dB^j_u+\int_0^t\eta^{ij}_u\,d\langle B^i,B^j\rangle_u-\int_0^t2G(\eta_u)\,du\quad\text{for }t\in[0,T].$$
Then $M$ is a $G$-martingale. Here we still use the Einstein convention, i.e., repeated indices $i$ and $j$ imply summation.

Proof. Since $\hat{\mathbb{E}}[\int_s^t\varphi^j_u\,dB^j_u|\Omega_s]=\hat{\mathbb{E}}[-\int_s^t\varphi^j_u\,dB^j_u|\Omega_s]=0$, we only need to prove that
$$\bar M_t=\int_0^t\eta^{ij}_u\,d\langle B^i,B^j\rangle_u-\int_0^t2G(\eta_u)\,du,\quad t\in[0,T],$$
is a $G$-martingale. It suffices to consider the case where $\eta\in M^{1,0}_G(0,T;\mathbb{S}(d))$, i.e.,
$$\eta_t=\sum_{k=0}^{N-1}\eta_{t_k}I_{[t_k,t_{k+1})}(t).$$
We have, for $s\in[t_{N-1},t_N]$,
$$\hat{\mathbb{E}}[\bar M_t|\Omega_s]=\bar M_s+\hat{\mathbb{E}}[(\eta_{t_{N-1}},\langle B\rangle_t-\langle B\rangle_s)-2G(\eta_{t_{N-1}})(t-s)|\Omega_s]$$
$$=\bar M_s+\hat{\mathbb{E}}[(A,\langle B\rangle_t-\langle B\rangle_s)]_{A=\eta_{t_{N-1}}}-2G(\eta_{t_{N-1}})(t-s)=\bar M_s.$$
Then we can repeat this procedure backward to prove the result for $s\in[0,t_{N-1}]$. $\square$

Corollary 1.5 Let $\eta\in M^1_G(0,T)$. Then for each fixed $a\in\mathbb{R}^d$, we have
$$\sigma^2_{-aa^T}\,\hat{\mathbb{E}}\Big[\int_0^T|\eta_t|\,dt\Big]\le\hat{\mathbb{E}}\Big[\int_0^T|\eta_t|\,d\langle B^a\rangle_t\Big]\le\sigma^2_{aa^T}\,\hat{\mathbb{E}}\Big[\int_0^T|\eta_t|\,dt\Big]. \tag{1.1}$$
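Before turning to the proof of (1.1), here is a small deterministic illustration of Proposition 1.4 in dimension $d=1$ (not from the text; the volatility band and the integrand are assumptions). Under a scenario with volatility $\sigma^2_t$ taking values in $[\underline\sigma^2,\bar\sigma^2]$ one has $d\langle B\rangle_t=\sigma^2_t\,dt$, so the scenario-wise expectation of $\bar M_T=\int_0^T\eta_t\,d\langle B\rangle_t-\int_0^T2G(\eta_t)\,dt$ reduces to an ordinary integral. Constant-volatility scenarios give a strictly negative value for a sign-changing $\eta$, while the scenario aligned with the sign of $\eta$ attains $0=\hat{\mathbb{E}}[\bar M_T]$; in particular $\hat{\mathbb{E}}[-\bar M_T]>0$, which is a concrete way to see why $-\bar M$ fails to be a $G$-martingale (Remark 1.6 below).

```python
# Deterministic sketch, assuming d = 1, a volatility band [0.25, 1.0] and a
# deterministic sign-changing integrand eta; G(a) = 0.5*(sig2_high*a^+ - sig2_low*a^-).
import numpy as np

sig2_low, sig2_high = 0.25, 1.0
N = 2000
t = np.linspace(0.0, 1.0, N, endpoint=False)   # left Riemann grid on [0, 1]
dt = 1.0 / N
eta = np.sin(2 * np.pi * t)

G = lambda a: 0.5 * (sig2_high * np.maximum(a, 0) - sig2_low * np.maximum(-a, 0))
penalty = np.sum(2 * G(eta)) * dt              # int_0^1 2 G(eta_t) dt

for sig2 in [0.25, 0.5, 0.75, 1.0]:            # constant-volatility scenarios: all < 0
    print("constant sig2 =", sig2, "->", np.sum(eta * sig2) * dt - penalty)

sig2_star = np.where(eta >= 0, sig2_high, sig2_low)   # scenario aligned with sign(eta)
print("aligned scenario ->", np.sum(eta * sig2_star) * dt - penalty)   # equals 0
```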

Proof. For each $\xi\in M^1_G(0,T)$, by the above proposition, we have
$$\hat{\mathbb{E}}\Big[\int_0^T\xi_t\,d\langle B^a\rangle_t-\int_0^T2G^a(\xi_t)\,dt\Big]=0,$$
where $G^a(\alpha)=\frac12(\sigma^2_{aa^T}\alpha^+-\sigma^2_{-aa^T}\alpha^-)$. Letting $\xi=|\eta|$ and $\xi=-|\eta|$, we get
$$\hat{\mathbb{E}}\Big[\int_0^T|\eta_t|\,d\langle B^a\rangle_t-\sigma^2_{aa^T}\int_0^T|\eta_t|\,dt\Big]=0,\qquad
\hat{\mathbb{E}}\Big[-\int_0^T|\eta_t|\,d\langle B^a\rangle_t+\sigma^2_{-aa^T}\int_0^T|\eta_t|\,dt\Big]=0.$$
From the sub-additivity of the $G$-expectation, we can easily get the result. $\square$

Remark 1.6 It is worth mentioning that for a $G$-martingale $M$, in general, $-M$ is not a $G$-martingale. But in Proposition 1.4, when $\eta\equiv0$, $-M$ is still a $G$-martingale.

Exercise 1.7 (a) Let $(M_t)_{t\ge0}$ be a $G$-supermartingale. Show that $(-M_t)_{t\ge0}$ is a $G$-submartingale.
(b) Find a $G$-submartingale $(M_t)_{t\ge0}$ such that $(-M_t)_{t\ge0}$ is not a $G$-supermartingale.

Exercise 1.8 (a) Let $(M_t)_{t\ge0}$ and $(N_t)_{t\ge0}$ be two $G$-supermartingales. Prove that $(M_t+N_t)_{t\ge0}$ is a $G$-supermartingale.
(b) Let $(M_t)_{t\ge0}$ and $(-M_t)_{t\ge0}$ be two $G$-martingales. For each $G$-submartingale (respectively, $G$-supermartingale) $(N_t)_{t\ge0}$, prove that $(M_t+N_t)_{t\ge0}$ is a $G$-submartingale (respectively, $G$-supermartingale).

§ 2 On G-martingale Representation Theorem

How to give a $G$-martingale representation theorem is still a largely open problem. Xu and Zhang (2009) [120] have obtained a martingale representation for a special "symmetric" $G$-martingale process. A more general situation has been treated by Soner, Touzi and Zhang (preprint communicated privately). Here we present the formulation of this $G$-martingale representation theorem under a very strong assumption.

In this section, we consider a generator $G:\mathbb{S}(d)\to\mathbb{R}$ satisfying the uniformly elliptic condition, i.e., there exists a $\beta>0$ such that, for each $A,\bar A\in\mathbb{S}(d)$ with $A\ge\bar A$,
$$G(A)-G(\bar A)\ge\beta\,\mathrm{tr}[A-\bar A].$$
For each $\xi=(\xi^j)_{j=1}^d\in M^2_G(0,T;\mathbb{R}^d)$ and $\eta=(\eta^{ij})_{i,j=1}^d\in M^1_G(0,T;\mathbb{S}(d))$, we use the following notations:
$$\int_0^T\langle\xi_t,dB_t\rangle:=\sum_{j=1}^d\int_0^T\xi^j_t\,dB^j_t;\qquad\int_0^T(\eta_t,d\langle B\rangle_t):=\sum_{i,j=1}^d\int_0^T\eta^{ij}_t\,d\langle B^i,B^j\rangle_t.$$
We first consider the representation of $\varphi(B_T-B_{t_1})$ for $0\le t_1\le T<\infty$.

  69. 66 Chap.IV G -martingales and Jensen’s Inequality Lemma 2.1 Let ξ = ϕ ( B T − B t 1 ) , ϕ ∈ C b.Lip ( R d ) . Then we have the following representation: � T � T � T ξ = ˆ E [ ξ ] + � β t , dB t � + ( η t , d � B � t ) − 2 G ( η t ) dt. t 1 t 1 t 1 Proof. We know that u ( t, x ) = ˆ E [ ϕ ( x + B T − B t )] is the solution of the following PDE: ∂ t u + G ( D 2 u ) = 0 ( t, x ) ∈ [0 , T ] × R d , u ( T, x ) = ϕ ( x ) . For each ε > 0, by the interior regularity of u (see Appendix C), we have � u � C 1+ α/ 2 , 2+ α ([0 ,T − ε ] × R d ) < ∞ for some α ∈ (0 , 1) . Applying G -Itˆ o’s formula to u ( t, B t − B t 1 ) on [ t 1 , T − ε ], since Du ( t, x ) is uni- formly bounded, letting ε → 0 , we have � T � T ξ = ˆ E [ ξ ] + ∂ t u ( t, B t − B t 1 ) dt + � Du ( t, B t − B t 1 ) , dB t � t 1 t 1 � T + 1 ( D 2 u ( t, B t − B t 1 ) , d � B � t ) 2 t 1 � T � T � Du ( t, B t − B t 1 ) , dB t � + 1 = ˆ ( D 2 u ( t, B t − B t 1 ) , d � B � t ) E [ ξ ] + 2 t 1 t 1 � T G ( D 2 u ( t, B t − B t 1 )) dt. − t 1 � We now give the representation theorem of ξ = ϕ ( B t 1 , B t 2 − B t 1 , · · · , B t N − B t N − 1 ) . Theorem 2.2 Let ξ = ϕ ( B t 1 , B t 2 − B t 1 , · · · , B t N − B t N − 1 ) , ϕ ∈ C b.Lip ( R d × N ) , 0 ≤ t 1 < t 2 < · · · < t N = T < ∞ . Then we have the following representation: � T � T � T ξ = ˆ E [ ξ ] + � β t , dB t � + ( η t , d � B � t ) − 2 G ( η t ) dt. 0 0 0 Proof. We only need to prove the case ξ = ϕ ( B t 1 , B T − B t 1 ). We set, for each ( x, y ) ∈ R 2 d , u ( t, x, y ) = ˆ E [ ϕ ( x, y + B T − B t )]; ϕ 1 ( x ) = ˆ E [ ϕ ( x, B T − B t 1 )] . For each x ∈ R d , we denote ¯ ξ = ϕ ( x, B T − B t 1 ). By Lemma 2.1, we have � T � T � D y u ( t, x, B t − B t 1 ) , dB t � + 1 ¯ ( D 2 ξ = ϕ 1 ( x ) + y u ( t, x, B t − B t 1 ) , d � B � t ) 2 t 1 t 1 � T G ( D 2 − y u ( t, x, B t − B t 1 )) dt. t 1

  70. 67 § 3 G –convexity and Jensen’s Inequality for G –expectations By the definitions of the integrations of dt , dB t and d � B � t , we can replace x by B t 1 and get � T � D y u ( t, B t 1 , B t − B t 1 ) , dB t � ξ = ϕ 1 ( B t 1 ) + t 1 � T � T + 1 ( D 2 G ( D 2 y u ( t, B t 1 , B t − B t 1 ) , d � B � t ) − y u ( t, B t 1 , B t − B t 1 )) dt. 2 t 1 t 1 Applying Lemma 2.1 to ϕ 1 ( B t 1 ), we complete the proof. � We then immediately have the following G -martingale representation theo- rem. Theorem 2.3 Let ( M t ) t ∈ [0 ,T ] be a G -martingale with M T = ϕ ( B t 1 , B t 2 − B t 1 , · · · ,B t N − B t N − 1 ) , ϕ ∈ C b.Lip ( R d × N ) , 0 ≤ t 1 < t 2 < · · · < t N = T < ∞ . Then � t � t � t M t = ˆ E [ M T ] + � β s , dB s � + ( η s , d � B � s ) − 2 G ( η s ) ds, t ≤ T. 0 0 0 Proof. For M T , by Theorem 2.2, we have � T � T � T M T = ˆ E [ M T ] + � β s , dB s � + ( η s , d � B � s ) − 2 G ( η s ) ds. 0 0 0 Taking the conditional G -expectation on both sides of the above equality and by Proposition 1.4, we obtain the result. � § 3 G –convexity and Jensen’s Inequality for G – expectations A very interesting question is whether the well–known Jensen’s inequality still holds for G –expectations. First, we give a new notion of convexity. Definition 3.1 A continuous function h : R → R is called G –convex if for each bounded ξ ∈ L 1 G (Ω) , the following Jensen’s inequality holds: E [ h ( ξ )] ≥ h (ˆ ˆ E [ ξ ]) . In this section, we mainly consider C 2 -functions. Proposition 3.2 Let h ∈ C 2 ( R ) . Then the following statements are equivalent: (i) The function h is G –convex. (ii) For each bounded ξ ∈ L 1 G (Ω) , the following Jensen’s inequality holds: E [ h ( ξ ) | Ω t ] ≥ h (ˆ ˆ E [ ξ | Ω t ]) for t ≥ 0 .

  71. 68 Chap.IV G -martingales and Jensen’s Inequality (iii) For each ϕ ∈ C 2 b ( R d ) , the following Jensen’s inequality holds: ˆ E [ h ( ϕ ( B t ))] ≥ h (ˆ E [ ϕ ( B t )]) for t ≥ 0 . (iv) The following condition holds for each ( y, z, A ) ∈ R × R d × S ( d ) : G ( h ′ ( y ) A + h ′′ ( y ) zz T ) − h ′ ( y ) G ( A ) ≥ 0 . (3.2) To prove the above proposition, we need the following lemmas. Lemma 3.3 Let Φ : R d → S ( d ) be continuous with polynomial growth. Then � t + δ (Φ( B s ) , d � B � s )] δ − 1 = 2ˆ ˆ lim E [ E [ G (Φ( B t ))] . (3.3) δ ↓ 0 t Proof. If Φ is a Lipschitz function, it is easy to prove that � t + δ ˆ (Φ( B s ) − Φ( B t ) , d � B � s ) | ] ≤ C 1 δ 3 / 2 , E [ | t where C 1 is a constant independent of δ . Thus � t + δ (Φ( B s ) , d � B � s )] δ − 1 = lim ˆ E [(Φ( B t ) , � B � t + δ − � B � s )] δ − 1 ˆ lim E [ δ ↓ 0 δ ↓ 0 t = 2ˆ E [ G (Φ( B t ))] . Otherwise, we can choose a sequence of Lipschitz functions Φ N : R d → S ( d ) such that | Φ N ( x ) − Φ( x ) | ≤ C 2 N (1 + | x | k ) , where C 2 and k are positive constants independent of N . It is easy to show that � t + δ (Φ( B s ) − Φ N ( B s ) , d � B � s ) | ] ≤ C ˆ E [ | N δ t and E [ | G (Φ( B t )) − G (Φ N ( B t )) | ] ≤ C ˆ N , where C is a universal constant. Thus � t + δ (Φ( B s ) , d � B � s )] δ − 1 − 2ˆ | ˆ E [ E [ G (Φ( B t ))] | t � t + δ E [ G (Φ N ( B t ))] | + 3 C (Φ N ( B s ) , d � B � s )] δ − 1 − 2ˆ ≤| ˆ E [ N . t Then we have � t + δ E [ G (Φ( B t ))] | ≤ 3 C (Φ( B s ) , d � B � s )] δ − 1 − 2ˆ | ˆ E [ lim sup N . δ ↓ 0 t Since N can be arbitrarily large, we complete the proof. �

Lemma 3.4 Let $\Psi$ be a $C^2$-function on $\mathbb{R}^d$ such that $D^2\Psi$ satisfies a polynomial growth condition. Then we have
$$\lim_{\delta\downarrow0}\big(\hat{\mathbb{E}}[\Psi(B_\delta)]-\Psi(0)\big)\,\delta^{-1}=G(D^2\Psi(0)). \tag{3.4}$$

Proof. Applying the $G$-Itô formula to $\Psi(B_\delta)$, we get
$$\Psi(B_\delta)=\Psi(0)+\int_0^\delta\langle D\Psi(B_s),dB_s\rangle+\frac12\int_0^\delta(D^2\Psi(B_s),d\langle B\rangle_s).$$
Thus we have
$$\hat{\mathbb{E}}[\Psi(B_\delta)]-\Psi(0)=\frac12\,\hat{\mathbb{E}}\Big[\int_0^\delta(D^2\Psi(B_s),d\langle B\rangle_s)\Big].$$
By Lemma 3.3, we obtain the result. $\square$

Lemma 3.5 Let $h\in C^2(\mathbb{R})$ satisfy (3.2). For each $\varphi\in C_{b.Lip}(\mathbb{R}^d)$, let $u(t,x)$ be the solution of the $G$-heat equation:
$$\partial_tu-G(D^2u)=0,\quad(t,x)\in[0,\infty)\times\mathbb{R}^d,\qquad u(0,x)=\varphi(x). \tag{3.5}$$
Then $\tilde u(t,x):=h(u(t,x))$ is a viscosity subsolution of the $G$-heat equation (3.5) with initial condition $\tilde u(0,x)=h(\varphi(x))$.

Proof. For each $\varepsilon>0$, we denote by $u^\varepsilon$ the solution of the following PDE:
$$\partial_tu^\varepsilon-G_\varepsilon(D^2u^\varepsilon)=0,\quad(t,x)\in[0,\infty)\times\mathbb{R}^d,\qquad u^\varepsilon(0,x)=\varphi(x),$$
where $G_\varepsilon(A):=G(A)+\varepsilon\,\mathrm{tr}[A]$. Since $G_\varepsilon$ satisfies the uniformly elliptic condition, by Appendix C we have $u^\varepsilon\in C^{1,2}((0,\infty)\times\mathbb{R}^d)$. By a simple calculation, we have
$$\partial_th(u^\varepsilon)=h'(u^\varepsilon)\partial_tu^\varepsilon=h'(u^\varepsilon)G_\varepsilon(D^2u^\varepsilon)$$
and
$$\partial_th(u^\varepsilon)-G_\varepsilon(D^2h(u^\varepsilon))=f_\varepsilon(t,x),\qquad h(u^\varepsilon(0,x))=h(\varphi(x)),$$
where
$$f_\varepsilon(t,x)=h'(u^\varepsilon)G(D^2u^\varepsilon)-G(D^2h(u^\varepsilon))-\varepsilon h''(u^\varepsilon)|Du^\varepsilon|^2.$$
Since $h$ is $G$-convex, it follows that $f_\varepsilon\le-\varepsilon h''(u^\varepsilon)|Du^\varepsilon|^2$. We can also deduce that $|Du^\varepsilon|$ is uniformly bounded by the Lipschitz constant of $\varphi$. It is easy to show that $u^\varepsilon$ converges uniformly to $u$ as $\varepsilon\to0$. Thus $h(u^\varepsilon)$ converges uniformly to $h(u)$ and $h''(u^\varepsilon)$ is uniformly bounded. Then we get
$$\partial_th(u^\varepsilon)-G_\varepsilon(D^2h(u^\varepsilon))\le C\varepsilon,\qquad h(u^\varepsilon(0,x))=h(\varphi(x)),$$
where $C$ is a constant independent of $\varepsilon$. By Appendix C, we conclude that $h(u)$ is a viscosity subsolution. $\square$
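Since everything in this section reduces to the $G$-heat equation (3.5), a small finite-difference experiment can make the Jensen-type inequality visible. The sketch below is illustrative only; the volatility band, the truncated domain with frozen boundary values, the grid and the test functions are all assumptions. It solves the one-dimensional equation $\partial_tu=G(\partial^2_{xx}u)$ with $G(a)=\frac12(\bar\sigma^2a^+-\underline\sigma^2a^-)$ by an explicit scheme and checks that $h(u)$ stays below the solution started from $h(\varphi)$ for a convex $h$, in line with Lemma 3.5 and Exercise 3.8 (the $G$ used here is uniformly elliptic).

```python
# Explicit finite-difference sketch for the 1-d G-heat equation on an assumed
# truncated domain [-L, L] with frozen boundary values; parameters are illustrative.
import numpy as np

sig2_low, sig2_high = 0.25, 1.0
L, nx, T = 6.0, 481, 1.0
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / sig2_high                 # explicit-scheme stability
G = lambda a: 0.5 * (sig2_high * np.maximum(a, 0) - sig2_low * np.maximum(-a, 0))

def solve_G_heat(init_vals, T):
    u = init_vals.copy()
    for _ in range(int(T / dt)):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * G(lap)                  # boundary nodes stay frozen (lap = 0 there)
    return u

phi = np.sin(x)                              # bounded, neither convex nor concave
h = lambda y: y**2                           # convex, hence G-convex here (Exercise 3.8)

u   = solve_G_heat(phi, T)                   # u(T, x)   = E_hat[phi(x + B_T)]
u_h = solve_G_heat(h(phi), T)                # u_h(T, x) = E_hat[h(phi(x + B_T))]
print("max of h(u) - u_h:", np.max(h(u) - u_h))   # <= 0 up to discretization error
```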

  73. 70 Chap.IV G -martingales and Jensen’s Inequality Proof of Proposition 3.2. Obviously (ii) = ⇒ (i) = ⇒ (iii) . We now prove ⇒ (ii) . For ξ ∈ L 1 (iii) = G (Ω) of the form ξ = ϕ ( B t 1 , B t 2 − B t 1 , · · · , B t n − B t n − 1 ) , b ( R d × n ), 0 ≤ t 1 ≤ · · · ≤ t n < ∞ , by the definitions of ˆ where ϕ ∈ C 2 E [ · ] and ˆ E [ ·| Ω t ], we have E [ h ( ξ ) | Ω t ] ≥ h (ˆ ˆ E [ ξ | Ω t ]) , t ≥ 0 . We then can extend this Jensen’s inequality, under the norm || · || = ˆ E [ | · | ], to each bounded ξ ∈ L 1 G (Ω). b ( R d ), we have ˆ E [ h ( ϕ ( B t ))] ≥ h (ˆ ⇒ (iv) : for each ϕ ∈ C 2 (iii) = E [ ϕ ( B t )]) for each t ≥ 0. By Lemma 3.4, we know that E [ ϕ ( B δ )] − ϕ (0)) δ − 1 = G ( D 2 ϕ (0)) δ ↓ 0 (ˆ lim and E [ h ( ϕ ( B δ ))] − h ( ϕ (0))) δ − 1 = G ( D 2 h ( ϕ )(0)) . δ ↓ 0 (ˆ lim Thus we get G ( D 2 h ( ϕ )(0)) ≥ h ′ ( ϕ (0)) G ( D 2 ϕ (0)) . For each ( y, z, A ) ∈ R × R d × S ( d ), we can choose a ϕ ∈ C 2 b ( R d ) such that ( ϕ (0) , Dϕ (0) , D 2 ϕ (0)) = ( y, z, A ). Thus we obtain (iv) . b ( R d ), u ( t, x ) = ˆ ⇒ (iii) : for each ϕ ∈ C 2 (iv) = E [ ϕ ( x + B t )] (respectively, ¯ u ( t, x ) = ˆ E [ h ( ϕ ( x + B t ))]) solves the G -heat equation (3.5). By Lemma 3.5, h ( u ) is a viscosity subsolution of G -heat equation (3.5). It follows from the maximum principle that h ( u ( t, x )) ≤ ¯ u ( t, x ). In particular, (iii) holds. � Remark 3.6 In fact, (i) ⇐ ⇒ (ii) ⇐ ⇒ (iii) still hold without the assumption h ∈ C 2 ( R ) . Proposition 3.7 Let h be a G –convex function and X ∈ L 1 G (Ω) be bounded. Then Y t = h (ˆ E [ X | Ω t ]) , t ≥ 0 , is a G –submartingale. Proof. For each s ≤ t , E [ Y t | Ω s ] = ˆ ˆ E [ h (ˆ E [ X | Ω t ]) | Ω s ] ≥ h (ˆ E [ X | Ω s ]) = Y s . � Exercise 3.8 Suppose that G satisfies the uniformly elliptic condition and h ∈ C 2 ( R ) . Show that h is G -convex if and only if h is convex.

Notes and Comments

This chapter is mainly from Peng (2007) [100].

Peng (1997) [90] introduced a filtration-consistent (or time-consistent, or dynamic) nonlinear expectation, called g-expectation, via BSDE, and then in [92] obtained some basic properties of the g-martingale, such as the nonlinear Doob-Meyer decomposition theorem; see also Briand, Coquet, Hu, Mémin and Peng (2000) [14], Chen, Kulperger and Jiang (2003) [20], Chen and Peng (1998) [21] and (2000) [22], Coquet, Hu, Mémin and Peng (2001) [26] and (2002) [27], Peng (1999) [92], (2004) [95], Peng and Xu (2003) [105], Rosazza (2006) [110]. Our conjecture is that every property obtained for g-martingales has its correspondence for G-martingales, but this conjecture is still far from being confirmed. Here we present some properties of G-martingales.

The problem of a G-martingale representation theorem was raised in Peng (2007) [100]. In Section 2, we only give a result for very regular random variables. Some very interesting developments on this important problem can be found in Soner, Touzi and Zhang (2009) [112] and Song (2009) [114].

Under the framework of g-expectation, Chen, Kulperger and Jiang (2003) [20], Hu (2005) [58], and Jiang and Chen (2004) [68] investigated Jensen's inequality for g-expectation. Recently, Jia and Peng (2007) [66] introduced the notion of g-convex function and obtained many interesting properties. Certainly a G-convex function concerns fully nonlinear situations.


Chapter V
Stochastic Differential Equations

In this chapter, we consider stochastic differential equations and backward stochastic differential equations driven by $G$-Brownian motion. The conditions and proofs of existence and uniqueness of a stochastic differential equation are similar to the classical situation. However, the corresponding problems for backward stochastic differential equations are not that easy, and many of them are still open. We only give partial results in this direction.

§ 1 Stochastic Differential Equations

In this chapter, we denote by $\bar M^p_G(0,T;\mathbb{R}^n)$, $p\ge1$, the completion of $M^{p,0}_G(0,T;\mathbb{R}^n)$ under the norm $(\int_0^T\hat{\mathbb{E}}[|\eta_t|^p]\,dt)^{1/p}$. It is not hard to prove that $\bar M^p_G(0,T;\mathbb{R}^n)\subseteq M^p_G(0,T;\mathbb{R}^n)$. We consider all the problems in the space $\bar M^p_G(0,T;\mathbb{R}^n)$, and the sublinear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is fixed.

We consider the following SDE driven by a $d$-dimensional $G$-Brownian motion:
$$X_t=X_0+\int_0^tb(s,X_s)\,ds+\int_0^th_{ij}(s,X_s)\,d\langle B^i,B^j\rangle_s+\int_0^t\sigma_j(s,X_s)\,dB^j_s,\quad t\in[0,T], \tag{1.1}$$
where the initial condition $X_0\in\mathbb{R}^n$ is a given constant, and $b,h_{ij},\sigma_j$ are given functions satisfying $b(\cdot,x),h_{ij}(\cdot,x),\sigma_j(\cdot,x)\in\bar M^2_G(0,T;\mathbb{R}^n)$ for each $x\in\mathbb{R}^n$ and the Lipschitz condition, i.e., $|\phi(t,x)-\phi(t,x')|\le K|x-x'|$ for each $t\in[0,T]$, $x,x'\in\mathbb{R}^n$, $\phi=b$, $h_{ij}$ and $\sigma_j$, respectively. Here the horizon $[0,T]$ can be arbitrarily large. The solution is a process $X\in\bar M^2_G(0,T;\mathbb{R}^n)$ satisfying the SDE (1.1).

We first introduce the following mapping on a fixed interval $[0,T]$:
$$\Lambda_\cdot:\bar M^2_G(0,T;\mathbb{R}^n)\to\bar M^2_G(0,T;\mathbb{R}^n)$$

by setting $\Lambda_t$, $t\in[0,T]$, with
$$\Lambda_t(Y)=X_0+\int_0^tb(s,Y_s)\,ds+\int_0^th_{ij}(s,Y_s)\,d\langle B^i,B^j\rangle_s+\int_0^t\sigma_j(s,Y_s)\,dB^j_s.$$
We immediately have the following lemma.

Lemma 1.1 For each $Y,Y'\in\bar M^2_G(0,T;\mathbb{R}^n)$, we have the following estimate:
$$\hat{\mathbb{E}}[|\Lambda_t(Y)-\Lambda_t(Y')|^2]\le C\int_0^t\hat{\mathbb{E}}[|Y_s-Y'_s|^2]\,ds,\quad t\in[0,T], \tag{1.2}$$
where the constant $C$ depends only on the Lipschitz constant $K$.

We now prove that SDE (1.1) has a unique solution. Multiplying both sides of (1.2) by $e^{-2Ct}$ and integrating on $[0,T]$, it follows that
$$\int_0^T\hat{\mathbb{E}}[|\Lambda_t(Y)-\Lambda_t(Y')|^2]e^{-2Ct}\,dt\le C\int_0^Te^{-2Ct}\int_0^t\hat{\mathbb{E}}[|Y_s-Y'_s|^2]\,ds\,dt$$
$$=C\int_0^T\Big(\int_s^Te^{-2Ct}\,dt\Big)\hat{\mathbb{E}}[|Y_s-Y'_s|^2]\,ds=\frac12\int_0^T(e^{-2Cs}-e^{-2CT})\hat{\mathbb{E}}[|Y_s-Y'_s|^2]\,ds.$$
We then have
$$\int_0^T\hat{\mathbb{E}}[|\Lambda_t(Y)-\Lambda_t(Y')|^2]e^{-2Ct}\,dt\le\frac12\int_0^T\hat{\mathbb{E}}[|Y_t-Y'_t|^2]e^{-2Ct}\,dt. \tag{1.3}$$
We observe that the following two norms are equivalent on $\bar M^2_G(0,T;\mathbb{R}^n)$:
$$\Big(\int_0^T\hat{\mathbb{E}}[|Y_t|^2]\,dt\Big)^{1/2}\sim\Big(\int_0^T\hat{\mathbb{E}}[|Y_t|^2]e^{-2Ct}\,dt\Big)^{1/2}.$$
From (1.3) we can obtain that $\Lambda(Y)$ is a contraction mapping. Consequently, we have the following theorem.

Theorem 1.2 There exists a unique solution $X\in\bar M^2_G(0,T;\mathbb{R}^n)$ of the stochastic differential equation (1.1).

We now consider the following linear SDE. For simplicity, we assume that $d=1$ and $n=1$:
$$X_t=X_0+\int_0^t(b_sX_s+\tilde b_s)\,ds+\int_0^t(h_sX_s+\tilde h_s)\,d\langle B\rangle_s+\int_0^t(\sigma_sX_s+\tilde\sigma_s)\,dB_s,\quad t\in[0,T], \tag{1.4}$$
where $X_0\in\mathbb{R}$ is given, $b_\cdot,h_\cdot,\sigma_\cdot$ are given bounded processes in $\bar M^2_G(0,T;\mathbb{R})$ and $\tilde b_\cdot,\tilde h_\cdot,\tilde\sigma_\cdot$ are given processes in $\bar M^2_G(0,T;\mathbb{R})$. By Theorem 1.2, we know that the linear SDE (1.4) has a unique solution.
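The contraction argument above is non-constructive, but the solution of (1.1) can be explored numerically one scenario at a time. The following Euler-scheme sketch is an illustration, not part of the text: it takes $n=d=1$, the coefficients and the volatility band are assumptions, and under a fixed volatility scenario $\sigma_t$ the increments $dB_t$ are drawn as centred Gaussians with variance $\sigma^2_t\,dt$ while $d\langle B\rangle_t=\sigma^2_t\,dt$.

```python
# Euler scheme for (1.1) with n = d = 1 along one fixed volatility scenario
# (assumed band [0.5, 1.0]); coefficients b, h, sg below are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
sig_low, sig_high = 0.5, 1.0
b  = lambda t, x: -0.5 * x           # drift coefficient b(s, X_s)
h  = lambda t, x: 0.1 * x            # coefficient of d<B>_s  (h_11 in (1.1))
sg = lambda t, x: 0.3 * x + 0.2      # coefficient of dB_s    (sigma_1 in (1.1))

def euler_path(x0, T=1.0, n=500, scenario=lambda t: sig_high):
    dt = T / n
    x, t = x0, 0.0
    for _ in range(n):
        s = scenario(t)                                  # volatility chosen by the scenario
        dB = s * np.sqrt(dt) * rng.standard_normal()     # dB under this scenario
        x += b(t, x) * dt + h(t, x) * s**2 * dt + sg(t, x) * dB
        t += dt
    return x

print(np.mean([euler_path(1.0) for _ in range(2000)]))   # E_P[X_T] under the sig_high scenario
```

Repeating the computation over a family of scenarios and taking worst or best cases is a crude but instructive way to probe the behaviour of the solution under volatility uncertainty.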

Remark 1.3 The solution of the linear SDE (1.4) is
$$X_t=\Gamma_t^{-1}\Big(X_0+\int_0^t\tilde b_s\Gamma_s\,ds+\int_0^t(\tilde h_s-\sigma_s\tilde\sigma_s)\Gamma_s\,d\langle B\rangle_s+\int_0^t\tilde\sigma_s\Gamma_s\,dB_s\Big),\quad t\in[0,T],$$
where $\Gamma_t=\exp\big(-\int_0^tb_s\,ds-\int_0^t(h_s-\frac12\sigma_s^2)\,d\langle B\rangle_s-\int_0^t\sigma_s\,dB_s\big)$. In particular, if $b_\cdot,h_\cdot,\sigma_\cdot$ are constants and $\tilde b_\cdot,\tilde h_\cdot,\tilde\sigma_\cdot$ are zero, then $X$ is a geometric $G$-Brownian motion.

Definition 1.4 We call $X$ a geometric G-Brownian motion if
$$X_t=\exp(\alpha t+\beta\langle B\rangle_t+\gamma B_t), \tag{1.5}$$
where $\alpha,\beta,\gamma$ are constants.

Exercise 1.5 Prove that $\bar M^p_G(0,T;\mathbb{R}^n)\subseteq M^p_G(0,T;\mathbb{R}^n)$.

Exercise 1.6 Complete the proof of Lemma 1.1.

§ 2 Backward Stochastic Differential Equations

We consider the following type of BSDE:
$$Y_t=\hat{\mathbb{E}}\Big[\xi+\int_t^Tf(s,Y_s)\,ds+\int_t^Th_{ij}(s,Y_s)\,d\langle B^i,B^j\rangle_s\,\Big|\,\Omega_t\Big],\quad t\in[0,T], \tag{2.6}$$
where $\xi\in L^1_G(\Omega_T;\mathbb{R}^n)$ is given, and $f,h_{ij}$ are given functions satisfying $f(\cdot,y),h_{ij}(\cdot,y)\in\bar M^1_G(0,T;\mathbb{R}^n)$ for each $y\in\mathbb{R}^n$ and the Lipschitz condition, i.e., $|\phi(t,y)-\phi(t,y')|\le K|y-y'|$ for each $t\in[0,T]$, $y,y'\in\mathbb{R}^n$, $\phi=f$ and $h_{ij}$, respectively. The solution is a process $Y\in\bar M^1_G(0,T;\mathbb{R}^n)$ satisfying the above BSDE.

We first introduce the following mapping on a fixed interval $[0,T]$:
$$\Lambda_\cdot:\bar M^1_G(0,T;\mathbb{R}^n)\to\bar M^1_G(0,T;\mathbb{R}^n)$$
by setting $\Lambda_t$, $t\in[0,T]$, with
$$\Lambda_t(Y)=\hat{\mathbb{E}}\Big[\xi+\int_t^Tf(s,Y_s)\,ds+\int_t^Th_{ij}(s,Y_s)\,d\langle B^i,B^j\rangle_s\,\Big|\,\Omega_t\Big].$$
We immediately have

  79. 76 Chap.V Stochastic Differential Equations We now prove that BSDE (2.6) has a unique solution. By multiplying e 2 Ct on both sides of (2.7) and integrating them on [0 , T ], it follows that � T � T � T E [ | Λ t ( Y ) − Λ t ( Y ′ ) | ] e 2 Ct dt ≤ C ˆ E [ | Y s − Y ′ ˆ s | ] e 2 Ct dsdt 0 0 t � T � s ˆ E [ | Y s − Y ′ e 2 Ct dtds = C s | ] 0 0 � T = 1 s | ]( e 2 Cs − 1) ds E [ | Y s − Y ′ ˆ 2 0 � T ≤ 1 ˆ E [ | Y s − Y ′ s | ] e 2 Cs ds. (2.8) 2 0 We observe that the following two norms are equivalent on ¯ M 1 G (0 , T ; R n ), i.e., � T � T ˆ E [ | Y t | ] e 2 Ct dt. ˆ E [ | Y t | ] dt ∼ 0 0 From (2.8), we can obtain that Λ( Y ) is a contraction mapping. Consequently, we have the following theorem. Theorem 2.2 There exists a unique solution ( Y t ) t ∈ [0 ,T ] ∈ ¯ M 1 G (0 , T ; R n ) of the backward stochastic differential equation (2.6) . Let Y v , v = 1 , 2, be the solutions of the following BSDE: � T � T � B i , B j � E [ ξ v + t = ˆ Y v ( f ( s, Y v s ) + ϕ v ( h ij ( s, Y v s ) + ψ ij,v s ) ds + ) d s | Ω t ] . s t t Then the following estimate holds. Proposition 2.3 We have � T ˆ t | ] ≤ Ce C ( T − t ) (ˆ ˆ E [ | Y 1 t − Y 2 E [ | ξ 1 − ξ 2 | ]+ E [ | ϕ 1 s − ϕ 2 s | + | ψ ij, 1 − ψ ij, 2 | ] ds ) , (2.9) s s t where the constant C depends only on the Lipschitz constant K . Proof. Similar to Lemma 2.1, we have � T E [ | ξ 1 − ξ 2 | ] E [ | Y 1 ˆ t − Y 2 E [ | Y 1 ˆ s − Y 2 s | ] ds + ˆ t | ] ≤ C ( t � T ˆ E [ | ϕ 1 s − ϕ 2 s | + | ψ ij, 1 − ψ ij, 2 + | ] ds ) . s s t By the Gronwall inequality (see Exercise 2.5), we conclude the result. �

  80. 77 § 3 Nonlinear Feynman-Kac Formula Remark 2.4 In particular, if ξ 2 = 0 , ϕ 2 s = − f ( s, 0) , ψ ij, 2 = − h ij ( s, 0) , ϕ 1 s = s 0 , ψ ij, 1 = 0 , we obtain the estimate of the solution of the BSDE. Let Y be the s solution of the BSDE (2.6) . Then � T E [ | Y t | ] ≤ Ce C ( T − t ) (ˆ ˆ ˆ E [ | ξ | ] + E [ | f ( s, 0) | + | h ij ( s, 0) | ] ds ) , (2.10) t where the constant C depends only on the Lipschitz constant K . Exercise 2.5 (The Gronwall inequality) Let u ( t ) be a nonnegative function such that � t u ( t ) ≤ C + A u ( s ) ds for 0 ≤ t ≤ T, 0 where C and A are constants. Prove that u ( t ) ≤ Ce At for 0 ≤ t ≤ T . G (Ω T ; R n ) . Show that the process (ˆ Exercise 2.6 For each ξ ∈ L 1 E [ ξ | Ω t ]) t ∈ [0 ,T ] belongs to ¯ M 1 G (0 , T ; R n ) . Exercise 2.7 Complete the proof of Lemma 2.1. § 3 Nonlinear Feynman-Kac Formula Consider the following SDE: � � B i , B j � dX t,ξ = b ( X t,ξ s ) ds + h ij ( X t,ξ s + σ j ( X t,ξ s ) dB j s ) d s , s ∈ [ t, T ] , s (3.11) X t,ξ = ξ, t G (Ω t ; R n ) is given and b , h ij , σ j : R n → R n are given Lipschitz where ξ ∈ L 2 functions, i.e., | φ ( x ) − φ ( x ′ ) | ≤ K | x − x ′ | , for each x , x ′ ∈ R n , φ = b , h ij and σ j . We then consider associated BSDE: � T � T � B i , B j � = ˆ E [Φ( X t,ξ Y t,ξ f ( X t,ξ r , Y t,ξ g ij ( X t,ξ r , Y t,ξ T ) + ) dr + ) d r | Ω s ] , s r r s s (3.12) where Φ : R n → R is a given Lipschitz function and f , g ij : R n × R → R are given Lipschitz functions, i.e., | φ ( x, y ) − φ ( x ′ , y ′ ) | ≤ K ( | x − x ′ | + | y − y ′ | ), for each x , x ′ ∈ R n , y , y ′ ∈ R , φ = f and g ij . We have the following estimates: Proposition 3.1 For each ξ , ξ ′ ∈ L 2 G (Ω t ; R n ) , we have, for each s ∈ [ t, T ] , ˆ − X t,ξ ′ E [ | X t,ξ | 2 | Ω t ] ≤ C | ξ − ξ ′ | 2 (3.13) s s and ˆ E [ | X t,ξ s | 2 | Ω t ] ≤ C (1 + | ξ | 2 ) , (3.14) where the constant C depends only on the Lipschitz constant K .

  81. 78 Chap.V Stochastic Differential Equations Proof. It is easy to obtain � s | 2 | Ω t ] ≤ C 1 ( | ξ − ξ ′ | 2 + ˆ − X t,ξ ′ ˆ − X t,ξ ′ E [ | X t,ξ E [ | X t,ξ | 2 | Ω t ] dr ) . s s r r t By the Gronwall inequality, we obtain | 2 | Ω t ] ≤ C 1 e C 1 T | ξ − ξ ′ | 2 . ˆ E [ | X t,ξ − X t,ξ ′ s s Similarly, we can get (3.14). � Corollary 3.2 For each ξ ∈ L 2 G (Ω t ; R n ) , we have ˆ E [ | X t,ξ t + δ − ξ | 2 | Ω t ] ≤ C (1 + | ξ | 2 ) δ for δ ∈ [0 , T − t ] , (3.15) where the constant C depends only on the Lipschitz constant K . Proof. It is easy to obtain � t + δ E [ | X t,ξ ˆ t + δ − ξ | 2 | Ω t ] ≤ C 1 (1 + ˆ E [ | X t,ξ s | 2 | Ω t ]) ds. t By Proposition 3.1, we obtain the result. � Proposition 3.3 For each ξ , ξ ′ ∈ L 2 G (Ω t ; R n ) , we have − Y t,ξ ′ | Y t,ξ | ≤ C | ξ − ξ ′ | (3.16) t t and | Y t,ξ | ≤ C (1 + | ξ | ) , (3.17) t where the constant C depends only on the Lipschitz constant K . Proof. For each s ∈ [0 , T ], it is easy to check that � T − Y t,ξ ′ | ≤ C 1 ˆ E [ | X t,ξ − X t,ξ ′ − X t,ξ ′ − Y t,ξ ′ | Y t,ξ ( | X t,ξ | + | Y t,ξ | + | ) dr | Ω s ] . s s r r r r T T s Since ˆ || Ω t ] ≤ (ˆ E [ | X t,ξ − X t,ξ ′ E [ | X t,ξ − X t,ξ ′ | 2 | Ω t ]) 1 / 2 , s s s s we have � T E [ | Y t,ξ ˆ − Y t,ξ ′ || Ω t ] ≤ C 2 ( | ξ − ξ ′ | + E [ | Y t,ξ ˆ − Y t,ξ ′ || Ω t ] dr ) . s s r r s By the Gronwall inequality, we obtain (3.16). Similarly we can get (3.17). � We are more interested in the case when ξ = x ∈ R n . Define u ( t, x ) := Y t,x ( t, x ) ∈ [0 , T ] × R n . , (3.18) t By the above proposition, we immediately have the following estimates: | u ( t, x ) − u ( t, x ′ ) | ≤ C | x − x ′ | , (3.19) | u ( t, x ) | ≤ C (1 + | x | ) , (3.20) where the constant C depends only on the Lipschitz constant K .
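The function $u(t,x):=Y^{t,x}_t$ defined in (3.18) can be estimated by simulation in simple cases. The sketch below is illustrative only: it takes $n=d=1$, $f=g_{ij}=0$ and $h_{ij}=0$, so that $u(t,x)=\hat{\mathbb{E}}[\Phi(X^{t,x}_T)]$ with $dX=\mu X\,ds+\sigma X\,dB$. Each constant-volatility scenario $\sigma$ in an assumed band is then a classical lognormal expectation, and the supremum over such scenarios is a lower bound of $u$, which happens to be tight for the convex payoff used here. The coefficients, band and sample sizes are assumptions.

```python
# Monte Carlo sketch of u(t,x) = E_hat[Phi(X_T^{t,x})] in the special case
# f = g = h = 0, n = d = 1, via a supremum over constant-volatility scenarios
# in an assumed band [0.1, 0.3]; Phi, mu and all sizes are illustrative choices.
import numpy as np

rng = np.random.default_rng(3)
mu, sig_low, sig_high = 0.05, 0.1, 0.3
Phi = lambda x: np.maximum(x - 1.0, 0.0)       # a convex terminal payoff

def scenario_value(t, x, T, sigma, n_paths=200_000):
    z = rng.standard_normal(n_paths)
    # exact lognormal solution of dX = mu*X ds + sigma*X dB under this scenario
    xT = x * np.exp((mu - 0.5 * sigma**2) * (T - t) + sigma * np.sqrt(T - t) * z)
    return float(Phi(xT).mean())

def u_lower(t, x, T=1.0, grid=9):
    return max(scenario_value(t, x, T, s) for s in np.linspace(sig_low, sig_high, grid))

print(u_lower(0.0, 1.0))   # worst-case (here: highest-volatility) value of the payoff
```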

  82. 79 § 3 Nonlinear Feynman-Kac Formula Remark 3.4 It is important to note that u ( t, x ) is a deterministic function of ( t, x ) , because X t,x and Y t,x are independent from Ω t . s s Theorem 3.5 For each ξ ∈ L 2 G (Ω t ; R n ) , we have u ( t, ξ ) = Y t,ξ . (3.21) t Proposition 3.6 We have, for δ ∈ [0 , T − t ] , � t + δ � t + δ � B i , B j � u ( t, x ) = ˆ E [ u ( t + δ, X t,x f ( X t,x r , Y t,x g ij ( X t,x r , Y t,x t + δ )+ ) dr + ) d r ] . r r t t (3.22) t + δ,X t,x t + δ,X t,x for s ∈ [ t + δ, T ], we get Y t,x Proof. Since X t,x t + δ t + δ = X t + δ = Y . By s s t + δ Theorem 3.5, we have Y t,x t + δ = u ( t + δ, X t,x t + δ ), which implies the result. � For each A ∈ S ( n ), p ∈ R n , r ∈ R , we set F ( A, p, r, x ) := G ( B ( A, p, r, x )) + � p, b ( x ) � + f ( x, r ) , where B ( A, p, r, x ) is a d × d symmetric matrix with B ij ( A, p, r, x ) := � Aσ i ( x ) , σ j ( x ) � + � p, h ij ( x ) + h ji ( x ) � + g ij ( x, r ) + g ji ( x, r ) . Theorem 3.7 u ( t, x ) is a viscosity solution of the following PDE: � ∂ t u + F ( D 2 u, Du, u, x ) = 0 , (3.23) u ( T, x ) = Φ( x ) . Proof. We first show that u is a continuous function. By (3.19) we know that u is a Lipschitz function in x . It follows from (2.10) and (3.14) that for s ∈ [ t, T ] , ˆ E [ | Y t,x | ] ≤ C (1 + | x | ). Noting (3.15) and (3.22), we get | u ( t, x ) − u ( t + δ, x ) | ≤ s C (1 + | x | )( δ 1 / 2 + δ ) for δ ∈ [0 , T − t ]. Thus u is 1 2 -H¨ older continuous in t , which implies that u is a continuous function. We can also show, that for each p ≥ 2, E [ | X t,x ˆ t + δ − x | p ] ≤ C (1 + | x | p ) δ p/ 2 , (3.24) Now for fixed ( t, x ) ∈ (0 , T ) × R n , let ψ ∈ C 2 , 3 ([0 , T ] × R n ) be such that ψ ≥ u b and ψ ( t, x ) = u ( t, x ). By (3.22), (3.24) and Taylor’s expansion, it follows that, for δ ∈ (0 , T − t ) , � t + δ 0 ≤ ˆ E [ ψ ( t + δ, X t,x f ( X t,x r , Y t,x t + δ ) − ψ ( t, x ) + ) dr r t � t + δ � B i , B j � g ij ( X t,x r , Y t,x + ) d r ] r t ≤ 1 ˆ E [( B ( D 2 ψ ( t, x ) , Dψ ( t, x ) , ψ ( t, x ) , x ) , � B � t + δ − � B � t )] 2 + ( ∂ t ψ ( t, x ) + � Dψ ( t, x ) , b ( x ) � + f ( x, ψ ( t, x ))) δ + C (1 + | x | + | x | 2 + | x | 3 ) δ 3 / 2 ≤ ( ∂ t ψ ( t, x ) + F ( D 2 ψ ( t, x ) , Dψ ( t, x ) , ψ ( t, x ) , x )) δ + C (1 + | x | + | x | 2 + | x | 3 ) δ 3 / 2 ,

  83. 80 Chap.V Stochastic Differential Equations then it is easy to check that ∂ t ψ ( t, x ) + F ( D 2 ψ ( t, x ) , Dψ ( t, x ) , ψ ( t, x ) , x ) ≥ 0 . Thus u is a viscosity subsolution of (3.23). Similarly we can prove that u is a viscosity supersolution of (3.23). � Example 3.8 Let B = ( B 1 , B 2 ) be a 2 -dimensional G -Brownian motion with G ( A ) = G 1 ( a 11 ) + G 2 ( a 22 ) , where G i ( a ) = 1 i a + − σ 2 2( σ 2 i a − ) , i = 1 , 2 . In this case, we consider the following 1 -dimensional SDE: � B 1 � dX t,x = µX t,x s ds + νX t,x s + σX t,x s dB 2 X t,x s d s , = x, s t where µ , ν and σ are constants. The corresponding function u is defined by u ( t, x ) := ˆ E [ ϕ ( X t,x T )] . Then u ( t, x ) = ˆ E [ u ( t + δ, X t,x t + δ )] and u is the viscosity solution of the following PDE: ∂ t u + µx∂ x u + 2 G 1 ( νx∂ x u ) + σ 2 x 2 G 2 ( ∂ 2 xx u ) = 0 , u ( T, x ) = ϕ ( x ) . Exercise 3.9 For each ξ ∈ L p G (Ω t ; R n ) with p ≥ 2 , show that SDE (3.11) has a unique solution in ¯ M p G ( t, T ; R n ) . Furthermore, show that the following estimates hold. ˆ E [ | X t,x − X t,x ′ | p ] ≤ C | x − x ′ | p , s s ˆ E [ | X t,x s | p ] ≤ C (1 + | x | p ) , E [ | X t,x ˆ t + δ − x | p ] ≤ C (1 + | x | p ) δ p/ 2 . Notes and Comments This chapter is mainly from Peng (2007) [100]. There are many excellent books on Itˆ o’s stochastic calculus and stochastic differential equations founded by Itˆ o’s original paper [63], as well as on martin- gale theory. Readers are referred to Chung and Williams (1990) [25], Dellacherie and Meyer (1978 and 1982) [33], He, Wang and Yan (1992) [55], Itˆ o and McKean (1965) [64], Ikeda and Watanabe (1981) [61], Kallenberg (2002) [70], Karatzas and Shreve (1988) [71], Øksendal (1998) [85], Protter (1990) [108], Revuz and Yor (1999)[109] and Yong and Zhou (1999) [122].

Linear backward stochastic differential equations (BSDEs) were first introduced by Bismut in (1973) [12] and (1978) [13]. Bensoussan developed this approach in (1981) [10] and (1982) [11]. The existence and uniqueness theorem for a general nonlinear BSDE was obtained in 1990 in Pardoux and Peng [86]. The present version of the proof is based on El Karoui, Peng and Quenez (1997) [44], which is also a very good survey on BSDE theory and its applications, especially in finance. The comparison theorem for BSDEs was obtained in Peng (1992) [88] for the case when $g$ is a $C^1$-function and then in [44] when $g$ is Lipschitz. The nonlinear Feynman-Kac formula for BSDEs was introduced by Peng (1992) [89] and [87]. Here we obtain the corresponding Feynman-Kac formula under the framework of $G$-expectation.

We also refer to Yong and Zhou (1999) [122], as well as Peng (1997) [91] (in Chinese) and (2004) [93], for systematic presentations of BSDE theory. For contributions to the development of this theory, readers are referred to the literature listed in the Notes and Comments of Chap. I.


Chapter VI
Capacity and Quasi-Surely Analysis for G-Brownian Paths

In this chapter, we first present a general framework for an upper expectation defined on a metric space $(\Omega,\mathcal{B}(\Omega))$ and the corresponding capacity, in order to introduce quasi-surely analysis. The results are important for obtaining the pathwise analysis of $G$-Brownian motion.

§ 1 Integration theory associated to an upper probability

Let $\Omega$ be a complete separable metric space equipped with the distance $d$, let $\mathcal{B}(\Omega)$ be the Borel $\sigma$-algebra of $\Omega$ and let $\mathcal{M}$ be the collection of all probability measures on $(\Omega,\mathcal{B}(\Omega))$.
• $L^0(\Omega)$: the space of all $\mathcal{B}(\Omega)$-measurable real functions;
• $B_b(\Omega)$: all bounded functions in $L^0(\Omega)$;
• $C_b(\Omega)$: all continuous functions in $B_b(\Omega)$.
All along this section, we consider a given subset $\mathcal{P}\subseteq\mathcal{M}$.

1.1 Capacity associated to $\mathcal{P}$

We denote
$$c(A):=\sup_{P\in\mathcal{P}}P(A),\quad A\in\mathcal{B}(\Omega).$$
One can easily verify the following theorem.

  87. 84 Chap.VI Capacity and Quasi-Surely Analysis for G -Brownian Paths Theorem 1.1 The set function c ( · ) is a Choquet capacity, i.e. (see [24, 32]), 1. 0 ≤ c ( A ) ≤ 1 , ∀ A ⊂ Ω . 2. If A ⊂ B , then c ( A ) ≤ c ( B ) . n =1 is a sequence in B (Ω) , then c ( ∪ A n ) ≤ � c ( A n ) . 3. If ( A n ) ∞ 4. If ( A n ) ∞ n =1 is an increasing sequence in B (Ω) : A n ↑ A = ∪ A n , then c ( ∪ A n ) = lim n →∞ c ( A n ) . Furthermore, we have Theorem 1.2 For each A ∈ B (Ω) , we have c ( A ) = sup { c ( K ) : K compact K ⊂ A } . Proof. It is simply because c ( A ) = sup sup P ( K ) = sup sup P ( K ) = sup c ( K ) . P ∈P K compact K compact P ∈P K compact K ⊂ A K ⊂ A K ⊂ A � Definition 1.3 We use the standard capacity-related vocabulary: a set A is polar if c ( A ) = 0 and a property holds “ quasi-surely ” (q.s.) if it holds outside a polar set. Remark 1.4 In other words, A ∈ B (Ω) is polar if and only if P ( A ) = 0 for any P ∈ P . We also have in a trivial way a Borel-Cantelli Lemma. Lemma 1.5 Let ( A n ) n ∈ N be a sequence of Borel sets such that ∞ � c ( A n ) < ∞ . n =1 Then lim sup n →∞ A n is polar . Proof. Applying the Borel-Cantelli Lemma under each probability P ∈ P . � The following theorem is Prohorov’s theorem. Theorem 1.6 P is relatively compact if and only if for each ε > 0 , there exists a compact set K such that c ( K c ) < ε . The following two lemmas can be found in [60]. Lemma 1.7 P is relatively compact if and only if for each sequence of closed sets F n ↓ ∅ , we have c ( F n ) ↓ 0 .

Proof. We outline the proof for the convenience of the reader.

"$\Longrightarrow$" part: It follows from Theorem 1.6 that for each fixed $\varepsilon>0$, there exists a compact set $K$ such that $c(K^c)<\varepsilon$. Note that $F_n\cap K\downarrow\emptyset$, so there exists an $N>0$ such that $F_n\cap K=\emptyset$ for $n\ge N$, which implies $\lim_nc(F_n)<\varepsilon$. Since $\varepsilon$ can be arbitrarily small, we obtain $c(F_n)\downarrow0$.

"$\Longleftarrow$" part: For each $\varepsilon>0$, let $(A^k_i)_{i=1}^\infty$ be a sequence of open balls of radius $1/k$ covering $\Omega$. Observe that $(\cup_{i=1}^nA^k_i)^c\downarrow\emptyset$, so there exists an $n_k$ such that $c\big((\cup_{i=1}^{n_k}A^k_i)^c\big)<\varepsilon2^{-k}$. Set $K=\cap_{k=1}^\infty\cup_{i=1}^{n_k}A^k_i$. It is easy to check that $K$ is compact and $c(K^c)<\varepsilon$. Thus by Theorem 1.6, $\mathcal{P}$ is relatively compact. $\square$

Lemma 1.8 Let $\mathcal{P}$ be weakly compact. Then for each sequence of closed sets $F_n\downarrow F$, we have $c(F_n)\downarrow c(F)$.

Proof. We outline the proof for the convenience of the reader. For each fixed $\varepsilon>0$, by the definition of $c(F_n)$, there exists a $P_n\in\mathcal{P}$ such that $P_n(F_n)\ge c(F_n)-\varepsilon$. Since $\mathcal{P}$ is weakly compact, there exist a subsequence $(P_{n_k})$ and $P\in\mathcal{P}$ such that $P_{n_k}$ converges weakly to $P$. Thus
$$P(F_m)\ge\limsup_{k\to\infty}P_{n_k}(F_m)\ge\limsup_{k\to\infty}P_{n_k}(F_{n_k})\ge\lim_{n\to\infty}c(F_n)-\varepsilon.$$
Letting $m\to\infty$, we get $P(F)\ge\lim_{n\to\infty}c(F_n)-\varepsilon$, which yields $c(F_n)\downarrow c(F)$. $\square$

Following [60] (see also [35, 50]), the upper expectation of $\mathcal{P}$ is defined as follows: for each $X\in L^0(\Omega)$ such that $E_P[X]$ exists for each $P\in\mathcal{P}$,
$$\mathbb{E}[X]=\mathbb{E}^{\mathcal{P}}[X]:=\sup_{P\in\mathcal{P}}E_P[X].$$
It is easy to verify the following.

Theorem 1.9 The upper expectation $\mathbb{E}[\cdot]$ of the family $\mathcal{P}$ is a sublinear expectation on $B_b(\Omega)$ as well as on $C_b(\Omega)$, i.e.,
1. for all $X,Y$ in $B_b(\Omega)$, $X\ge Y\Longrightarrow\mathbb{E}[X]\ge\mathbb{E}[Y]$;
2. for all $X,Y$ in $B_b(\Omega)$, $\mathbb{E}[X+Y]\le\mathbb{E}[X]+\mathbb{E}[Y]$;
3. for all $\lambda\ge0$ and $X\in B_b(\Omega)$, $\mathbb{E}[\lambda X]=\lambda\mathbb{E}[X]$;
4. for all $c\in\mathbb{R}$ and $X\in B_b(\Omega)$, $\mathbb{E}[X+c]=\mathbb{E}[X]+c$.

Moreover, it is also easy to check the following.

Theorem 1.10 We have
1. Let $\mathbb{E}[X_n]$ and $\mathbb{E}[\sum_{n=1}^\infty X_n]$ be finite. Then $\mathbb{E}[\sum_{n=1}^\infty X_n]\le\sum_{n=1}^\infty\mathbb{E}[X_n]$.
2. Let $X_n\uparrow X$ and $\mathbb{E}[X_n]$, $\mathbb{E}[X]$ be finite. Then $\mathbb{E}[X_n]\uparrow\mathbb{E}[X]$.

Definition 1.11 The functional $\mathbb{E}[\cdot]$ is said to be regular if for each $\{X_n\}_{n=1}^\infty$ in $C_b(\Omega)$ such that $X_n\downarrow0$ on $\Omega$, we have $\mathbb{E}[X_n]\downarrow0$.
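The definitions above are easy to experiment with when $\mathcal{P}$ is a finite family on a finite sample space. The toy sketch below (not from the text; the family is randomly generated) checks the four properties of Theorem 1.9 for the upper expectation $\mathbb{E}[X]=\sup_{P\in\mathcal{P}}E_P[X]$ and evaluates the capacity $c(A)=\sup_{P\in\mathcal{P}}P(A)$ on a few sets.

```python
# Toy upper expectation over an assumed finite family of probability vectors on
# a six-point Omega; all names and the generated family are illustrative.
import numpy as np

rng = np.random.default_rng(4)
omega = np.arange(6)                                    # Omega = {0, ..., 5}
family = [rng.dirichlet(np.ones(6)) for _ in range(5)]  # finite family P

E = lambda X: max(float(p @ X) for p in family)         # upper expectation
c = lambda A: E(np.isin(omega, A).astype(float))        # capacity of a set A

X, Y = rng.normal(size=6), rng.normal(size=6)
print(E(X + Y) <= E(X) + E(Y) + 1e-12)                  # sub-additivity
print(abs(E(2.5 * X) - 2.5 * E(X)) < 1e-12)             # positive homogeneity
print(abs(E(X + 3.0) - (E(X) + 3.0)) < 1e-12)           # constant translation
print(c([0, 1]), c([0, 1, 2]), c(list(omega)))          # monotone in A, c(Omega) = 1
```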

  89. 86 Chap.VI Capacity and Quasi-Surely Analysis for G -Brownian Paths Similar to Lemma 1.7 we have: Theorem 1.12 E [ · ] is regular if and only if P is relatively compact. Proof. “= ⇒ ” part: For each sequence of closed subsets F n ↓ ∅ such that F n , n = 1 , 2 , · · · , are non-empty (otherwise the proof is trivial), there exists { g n } ∞ n =1 ⊂ C b (Ω) satisfying g n = 1 on F n and g n = 0 on { ω ∈ Ω : d ( ω, F n ) ≥ 1 0 ≤ g n ≤ 1 , n } . We set f n = ∧ n i =1 g i , it is clear that f n ∈ C b (Ω) and 1 F n ≤ f n ↓ 0. E [ · ] is regular implies E [ f n ] ↓ 0 and thus c ( F n ) ↓ 0. It follows from Lemma 1.7 that P is relatively compact. =” part: For each { X n } ∞ “ ⇐ n =1 ⊂ C b (Ω) such that X n ↓ 0, we have � ∞ � ∞ E [ X n ] = sup E P [ X n ] = sup P ( { X n ≥ t } ) dt ≤ c ( { X n ≥ t } ) dt. P ∈P P ∈P 0 0 For each fixed t > 0, { X n ≥ t } is a closed subset and { X n ≥ t } ↓ ∅ as n ↑ ∞ . � ∞ By Lemma 1.7, c ( { X n ≥ t } ) ↓ 0 and thus c ( { X n ≥ t } ) dt ↓ 0. Consequently 0 E [ X n ] ↓ 0. � 1.2 Functional spaces We set, for p > 0, • L p := { X ∈ L 0 (Ω) : E [ | X | p ] = sup P ∈P E P [ | X | p ] < ∞} ; • N p := { X ∈ L 0 (Ω) : E [ | X | p ] = 0 } ; • N := { X ∈ L 0 (Ω) : X = 0, c -q.s. } . It is seen that L p and N p are linear spaces and N p = N , for each p > 0. We denote L p := L p / N . As usual, we do not take care about the distinction between classes and their representatives. Lemma 1.13 Let X ∈ L p . Then for each α > 0 c ( {| X | > α } ) ≤ E [ | X | p ] . α p Proof. Just apply Markov inequality under each P ∈ P . � Similar to the classical results, we get the following proposition and the proof is omitted which is similar to the classical arguments. Proposition 1.14 We have 1. For each p ≥ 1 , L p is a Banach space under the norm � X � p := ( E [ | X | p ]) 1 p .

  90. 87 § 1 Integration theory associated to an upper probability 2. For each p < 1 , L p is a complete metric space under the distance d ( X, Y ) := E [ | X − Y | p ] . We set L ∞ := { X ∈ L 0 (Ω) : ∃ a constant M , s.t. | X | ≤ M, q.s. } ; L ∞ := L ∞ / N . Proposition 1.15 Under the norm � X � ∞ := inf { M ≥ 0 : | X | ≤ M, q.s. } , L ∞ is a Banach space. � � Proof. From {| X | > � X � ∞ } = ∪ ∞ | X | ≥ � X � ∞ + 1 we know that | X | ≤ n =1 n � X � ∞ , q.s., then it is easy to check that �·� ∞ is a norm. The proof of the completeness of L ∞ is similar to the classical result. � With respect to the distance defined on L p , p > 0, we denote by • L p b the completion of B b (Ω). • L p c the completion of C b (Ω). By Proposition 1.14, we have c ⊂ L p L p b ⊂ L p , p > 0 . The following Proposition is obvious and the proof is left to the reader. Proposition 1.16 We have q = 1 . Then X ∈ L p and Y ∈ L q implies 1. Let p, q > 1 , 1 p + 1 XY ∈ L 1 and E [ | XY | ] ≤ ( E [ | X | p ]) 1 1 p ( E [ | Y | q ]) q ; Moreover X ∈ L p c and Y ∈ L q c implies XY ∈ L 1 c ; 2. L p 1 ⊂ L p 2 , L p 1 b ⊂ L p 2 b , L p 1 c ⊂ L p 2 c , 0 < p 2 ≤ p 1 ≤ ∞ ; 3. � X � p ↑ � X � ∞ , for each X ∈ L ∞ . Proposition 1.17 Let p ∈ (0 , ∞ ] and ( X n ) be a sequence in L p which converges to X in L p . Then there exists a subsequence ( X n k ) which converges to X quasi- surely in the sense that it converges to X outside a polar set.

  91. 88 Chap.VI Capacity and Quasi-Surely Analysis for G -Brownian Paths Proof. Let us assume p ∈ (0 , ∞ ), the case p = ∞ is obvious since the conver- gence in L ∞ implies the convergence in L p for all p . One can extract a subsequence ( X n k ) such that E [ | X − X n k | p ] ≤ 1 /k p +2 , k ∈ N . We set for all k A k = {| X − X n k | > 1 /k } , then as a consequence of the Markov property (Lemma 1.13) and the Borel- Cantelli Lemma 1.5, c (lim k →∞ A k ) = 0. As it is clear that on (lim k →∞ A k ) c , ( X n k ) converges to X , the proposition is proved. � We now give a description of L p b . Proposition 1.18 For each p > 0 , b = { X ∈ L p : lim L p n →∞ E [ | X | p 1 {| X | >n } ] = 0 } . Proof. We denote J p = { X ∈ L p : lim n →∞ E [ | X | p 1 {| X | >n } ] = 0 } . For each X ∈ J p let X n = ( X ∧ n ) ∨ ( − n ) ∈ B b (Ω). We have E [ | X − X n | p ] ≤ E [ | X | p 1 {| X | >n } ] → 0, as n → ∞ . Thus X ∈ L p b . b , we can find a sequence { Y n } ∞ On the other hand, for each X ∈ L p n =1 in B b (Ω) such that E [ | X − Y n | p ] → 0. Let y n = sup ω ∈ Ω | Y n ( ω ) | and X n = ( X ∧ y n ) ∨ ( − y n ). Since | X − X n | ≤ | X − Y n | , we have E [ | X − X n | p ] → 0. This clearly implies that for any sequence ( α n ) tending to ∞ , lim n →∞ E [ | X − ( X ∧ α n ) ∨ ( − α n ) | p ] = 0. Now we have, for all n ∈ N , E [ | X | p 1 {| X | >n } ] = E [( | X | − n + n ) p 1 {| X | >n } ] � � ≤ (1 ∨ 2 p − 1 ) E [( | X | − n ) p 1 {| X | >n } ] + n p c ( | X | > n ) . The first term of the right hand side tends to 0 since E [( | X | − n ) p 1 {| X | >n } ] = E [ | X − ( X ∧ n ) ∨ ( − n ) | p ] → 0 . For the second term, since n p 2 p 1 {| X | >n } ≤ ( | X | − n 2 ) p 1 {| X | >n } ≤ ( | X | − n 2 ) p 1 {| X | > n 2 } , we have n p 2 p c ( | X | > n ) = n p 2 p E [ 1 {| X | >n } ] ≤ E [( | X | − n 2 ) p 1 {| X | > n 2 } ] → 0 . Consequently X ∈ J p . � Proposition 1.19 Let X ∈ L 1 b . Then for each ε > 0 , there exists a δ > 0 , such that for all A ∈ B (Ω) with c ( A ) ≤ δ , we have E [ | X | 1 A ] ≤ ε .

  92. 89 § 1 Integration theory associated to an upper probability Proof. For each ε > 0, by Proposition 1.18, there exists an N > 0 such that E [ | X | 1 {| X | >N } ] ≤ ε ε 2 . Take δ = 2 N . Then for a subset A ∈ B (Ω) with c ( A ) ≤ δ , we have E [ | X | 1 A ] ≤ E [ | X | 1 A 1 {| X | >N } ] + E [ | X | 1 A 1 {| X |≤ N } ] ≤ E [ | X | 1 {| X | >N } ] + Nc ( A ) ≤ ε . � It is important to note that not every element in L p satisfies the condition lim n →∞ E [ | X | p 1 {| X | >n } ] = 0. We give the following two counterexamples to show that L 1 and L 1 b are different spaces even under the case that P is weakly compact. Example 1.20 Let Ω = N , P = { P n : n ∈ N } where P 1 ( { 1 } ) = 1 and P n ( { 1 } ) = 1 − 1 1 n , P n ( { n } ) = n , for n = 2 , 3 , · · · . P is weakly compact. We consider a function X on N defined by X ( n ) = n , n ∈ N . We have E [ | X | ] = 2 but E [ | X | 1 {| X | >n } ] = 1 �→ 0 . In this case, X ∈ L 1 but X �∈ L 1 b . Example 1.21 Let Ω = N , P = { P n : n ∈ N } where P 1 ( { 1 } ) = 1 and 1 1 P n ( { 1 } ) = 1 − n 2 , P n ( { kn } ) = n 3 , k = 1 , 2 , . . . , n ,for n = 2 , 3 , · · · . P is weakly compact. We consider a function X on N defined by X ( n ) = n , n ∈ N . We have E [ | X | ] = 25 16 and n E [ 1 {| X |≥ n } ] = 1 n → 0 , but E [ | X | 1 {| X |≥ n } ] = 1 2 + 1 2 n �→ 0 . In this case, X is in L 1 , continuous and n E [ 1 {| X |≥ n } ] → 0 , but it is not in L 1 b . Properties of elements in L p 1.3 c Definition 1.22 A mapping X on Ω with values in a topological space is said to be quasi-continuous (q.c.) if ∀ ε > 0 , there exists an open set O with c ( O ) < ε such that X | O c is continuous . Definition 1.23 We say that X : Ω → R has a quasi-continuous version if there exists a quasi-continuous function Y : Ω → R with X = Y q.s.. Proposition 1.24 Let p > 0 . Then each element in L p c has a quasi-continuous version. Proof. Let ( X n ) be a Cauchy sequence in C b (Ω) for the distance on L p . Let us choose a subsequence ( X n k ) k ≥ 1 such that E [ | X n k +1 − X n k | p ] ≤ 2 − 2 k , ∀ k ≥ 1 , and set for all k , ∞ � {| X n i +1 − X n i | > 2 − i/p } . A k = i = k Thanks to the subadditivity property and the Markov inequality, we have � ∞ � ∞ 2 − i = 2 − k +1 . c ( | X n i +1 − X n i | > 2 − i/p ) ≤ c ( A k ) ≤ i = k i = k

  93. 90 Chap.VI Capacity and Quasi-Surely Analysis for G -Brownian Paths As a consequence, lim k →∞ c ( A k ) = 0, so the Borel set A = � ∞ k =1 A k is polar. As each X n k is continuous, for all k ≥ 1, A k is an open set. Moreover, for all k , ( X n i ) converges uniformly on A c k so that the limit is continuous on each A c k . This yields the result. � The following theorem gives a concrete characterization of the space L p c . Theorem 1.25 For each p > 0 , c = { X ∈ L p : X has a quasi-continuous version, L p n →∞ E [ | X | p 1 {| X | >n } ] = 0 } . lim Proof. We denote J p = { X ∈ L p : X has a quasi-continuous version, n →∞ E [ | X | p 1 {| X | >n } ] = 0 } . lim Let X ∈ L p c , we know by Proposition 1.24 that X has a quasi-continuous version. Since X ∈ L p b , we have by Proposition 1.18 that lim n →∞ E [ | X | p 1 {| X | >n } ] = 0. Thus X ∈ J p . On the other hand, let X ∈ J p be quasi-continuous. Define Y n = ( X ∧ n ) ∨ ( − n ) for all n ∈ N . As E [ | X | p 1 {| X | >n } ] → 0, we have E [ | X − Y n | p ] → 0. Moreover, for all n ∈ N , as Y n is quasi-continuous , there exists a closed set F n such that c ( F c 1 n ) < n p +1 and Y n is continuous on F n . It follows from Tietze’s extension theorem that there exists Z n ∈ C b (Ω) such that | Z n | ≤ n and Z n = Y n on F n . We then have n ) ≤ (2 n ) p E [ | Y n − Z n | p ] ≤ (2 n ) p c ( F c n p +1 . So E [ | X − Z n | p ] ≤ (1 ∨ 2 p − 1 )( E [ | X − Y n | p ] + E [ | Y n − Z n | p ]) → 0 , and X ∈ L p c . � c is different from L p We give the following example to show that L p b even under the case that P is weakly compact. Example 1.26 Let Ω = [0 , 1] , P = { δ x : x ∈ [0 , 1] } is weakly compact. It is c = C b (Ω) which is different from L p seen that L p b . c := { X ∈ L ∞ : X has a quasi-continuous version } , we have We denote L ∞ Proposition 1.27 L ∞ is a closed linear subspace of L ∞ . c Proof. For each Cauchy sequence { X n } ∞ n =1 of L ∞ under �·� ∞ , we can find � � c a subsequence { X n i } ∞ � X n i +1 − X n i � ∞ ≤ 2 − i . We may further i =1 such that assume that each X n is quasi-continuous. Then it is easy to prove that for each � � � X n i +1 − X n i � ≤ 2 − i ε > 0, there exists an open set G such that c ( G ) < ε and for all i ≥ 1 on G c , which implies that the limit belongs to L ∞ c . � As an application of Theorem 1.25, we can easily get the following results.

  94. 91 § 1 Integration theory associated to an upper probability Proposition 1.28 Assume that X : Ω → R has a quasi-continuous version and that there exists a function f : R + → R + satisfying lim t →∞ f ( t ) = ∞ and t p E [ f ( | X | )] < ∞ . Then X ∈ L p c . Proof. For each ε > 0, there exists an N > 0 such that f ( t ) t p ≥ 1 ε , for all t ≥ N . Thus E [ | X | p 1 {| X | >N } ] ≤ ε E [ f ( | X | ) 1 {| X | >N } ] ≤ ε E [ f ( | X | )]. Hence lim N →∞ E [ | X | p 1 {| X | >N } ] = 0. From Theorem 1.25 we infer X ∈ L p c . � Lemma 1.29 Let { P n } ∞ n =1 ⊂ P converge weakly to P ∈ P . Then for each X ∈ L 1 c , we have E P n [ X ] → E P [ X ] . Proof. We may assume that X is quasi-continuous, otherwise we can consider its quasi-continuous version which does not change the value E Q for each Q ∈ P . ε For each ε > 0, there exists an N > 0 such that E [ | X | 1 {| X | >N } ] < 2 . Set ε X N = ( X ∧ N ) ∨ ( − N ). We can find an open subset G such that c ( G ) < 4 N and X N is continuous on G c . By Tietze’s extension theorem, there exists Y ∈ C b (Ω) such that | Y | ≤ N and Y = X N on G c . Obviously, for each Q ∈ P , | E Q [ X ] − E Q [ Y ] | ≤ E Q [ | X − X N | ] + E Q [ | X N − Y | ] ≤ ε 2 + 2 N ε 4 N = ε . It then follows that lim sup n →∞ E P n [ X ] ≤ lim n →∞ E P n [ Y ] + ε = E P [ Y ] + ε ≤ E P [ X ] + 2 ε, and similarly lim inf n →∞ E P n [ X ] ≥ E P [ X ] − 2 ε . Since ε can be arbitrarily small, we then have E P n [ X ] → E P [ X ]. � Remark 1.30 For continuous X , the above lemma is Lemma 3.8.7 in [15]. Now we give an extension of Theorem 1.12. Theorem 1.31 Let P be weakly compact and let { X n } ∞ n =1 ⊂ L 1 c be such that X n ↓ X , q.s.. Then E [ X n ] ↓ E [ X ] . Remark 1.32 It is important to note that X does not necessarily belong to L 1 c . Proof. For the case E [ X ] > −∞ , if there exists a δ > 0 such that E [ X n ] > E [ X ] + δ , n = 1 , 2 , · · · , we then can find a P n ∈ P such that E P n [ X n ] > E [ X ] + δ − 1 n , n = 1 , 2 , · · · . Since P is weakly compact, we then can find a subsequence { P n i } ∞ i =1 that converges weakly to some P ∈ P . From which it follows that j →∞ E P nj [ X n i ] ≥ lim sup E P [ X n i ] = lim E P nj [ X n j ] j →∞ { E [ X ] + δ − 1 ≥ lim sup } = E [ X ] + δ , i = 1 , 2 , · · · . n j j →∞

  95. 92 Chap.VI Capacity and Quasi-Surely Analysis for G -Brownian Paths Thus E P [ X ] ≥ E [ X ] + δ . This contradicts the definition of E [ · ]. The proof for the case E [ X ] = −∞ is analogous. � We immediately have the following corollary. Corollary 1.33 Let P be weakly compact and let { X n } ∞ n =1 be a sequence in L 1 c decreasingly converging to 0 q.s.. Then E [ X n ] ↓ 0 . 1.4 Kolmogorov’s criterion Definition 1.34 Let I be a set of indices, ( X t ) t ∈ I and ( Y t ) t ∈ I be two processes indexed by I . We say that Y is a quasi-modification of X if for all t ∈ I , X t = Y t q.s.. Remark 1.35 In the above definition, quasi-modification is also called modifi- cation in some papers. We now give a Kolmogorov criterion for a process indexed by R d with d ∈ N . Theorem 1.36 Let p > 0 and ( X t ) t ∈ [0 , 1] d be a process such that for all t ∈ [0 , 1] d , X t belongs to L p . Assume that there exist positive constants c and ε such that E [ | X t − X s | p ] ≤ c | t − s | d + ε . Then X admits a modification ˜ X such that �� � p � | ˜ X t − ˜ X s | E sup < ∞ , | t − s | α s � = t for every α ∈ [0 , ε/p ) . As a consequence, paths of ˜ X are quasi-surely H¨ oder continuous of order α for every α < ε/p in the sense that there exists a Borel set N of capacity 0 such that for all w ∈ N c , the map t → ˜ X ( w ) is H¨ oder continuous of order α for every α < ε/p . Moreover, if X t ∈ L p c for each t , then we also have ˜ X t ∈ L p c . Proof. Let D be the set of dyadic points in [0 , 1] d : � � ( i 1 2 n , · · · , i d 2 n ); n ∈ N , i 1 , · · · , i d ∈ { 0 , 1 , · · · , 2 n } D = . Let α ∈ [0 , ε/p ). We set | X t − X s | M = sup | t − s | α . s,t ∈ D,s � = t Thanks to the classical Kolmogorov’s criterion (see Revuz-Yor [109]), we know that for any P ∈ P , E P [ M p ] is finite and uniformly bounded with respect to P so that E [ M p ] = sup E P [ M p ] < ∞ . P ∈P
