Princeton University, Department of Geosciences. Course on Inverse Problems. Albert Tarantola. Third Lesson: Probability (Elementary Notions)
Let u and v be two Cartesian parameters (then, volumetric probabilities and probability densities are identical). Given a probability density f(u, v), one defines the two marginal probability densities

    f_u(u) = \int_{-\infty}^{+\infty} dv \, f(u, v) ,    f_v(v) = \int_{-\infty}^{+\infty} du \, f(u, v)

and the two conditional probability densities

    f_{u|v}(u \mid v = v_0) = \frac{f(u, v_0)}{\int_{-\infty}^{+\infty} du \, f(u, v_0)} ,    f_{v|u}(v \mid u = u_0) = \frac{f(u_0, v)}{\int_{-\infty}^{+\infty} dv \, f(u_0, v)} .
[Figure: a joint probability density f(u, v) together with its two marginal densities and several conditional densities.]
Dropping the explicit reference to the particular values v_0 and u_0, one writes more simply

    f_{u|v}(u \mid v) = \frac{f(u, v)}{\int_{-\infty}^{+\infty} du \, f(u, v)} ,    f_{v|u}(v \mid u) = \frac{f(u, v)}{\int_{-\infty}^{+\infty} dv \, f(u, v)} .
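As a numerical illustration of these definitions (not part of the original lesson), the sketch below discretizes an example density on a (u, v) grid and computes a marginal and a conditional density from it. The grid bounds, the resolution, and the particular density are assumptions made only for illustration.

```python
# Minimal sketch: marginal and conditional densities on a grid (assumed example).
import numpy as np

u = np.linspace(-10.0, 10.0, 201)
v = np.linspace(-10.0, 10.0, 201)
du, dv = u[1] - u[0], v[1] - v[0]
U, V = np.meshgrid(u, v, indexing="ij")        # f[i, j] corresponds to f(u[i], v[j])

# An arbitrary correlated-Gaussian joint density, used only as an example.
f = np.exp(-0.5 * (U**2 + V**2 - 1.2 * U * V) / (1.0 - 0.36))
f /= f.sum() * du * dv                         # normalize so the integral of f is 1

# Marginal densities: integrate the joint density over the other variable.
f_u = f.sum(axis=1) * dv                       # f_u(u) = int dv f(u, v)
f_v = f.sum(axis=0) * du                       # f_v(v) = int du f(u, v)

# Conditional density of u given v = v0: the slice f(u, v0), renormalized.
j0 = int(np.argmin(np.abs(v - 3.0)))           # grid index closest to v0 = 3 (assumed)
f_u_given_v0 = f[:, j0] / (f[:, j0].sum() * du)

print(f_u.sum() * du, f_u_given_v0.sum() * du)  # both are approximately 1
```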
One has

    f_{u|v}(u \mid v) = \frac{f(u, v)}{f_v(v)} ,    f_{v|u}(v \mid u) = \frac{f(u, v)}{f_u(u)} ,

from where (a joint distribution can be expressed as a conditional distribution times a marginal distribution)

    f(u, v) = f_{u|v}(u \mid v) \, f_v(v) = f_{v|u}(v \mid u) \, f_u(u) ,

from where (Bayes theorem)

    f_{u|v}(u \mid v) = \frac{f_{v|u}(v \mid u) \, f_u(u)}{f_v(v)} .
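A small sketch can check these identities numerically; the example density and grid below are assumptions, not part of the lesson.

```python
# Minimal sketch: both factorizations of f(u, v), and Bayes' theorem, on a grid.
import numpy as np

u = np.linspace(-10.0, 10.0, 201)
v = np.linspace(-10.0, 10.0, 201)
du, dv = u[1] - u[0], v[1] - v[0]
U, V = np.meshgrid(u, v, indexing="ij")
f = np.exp(-0.5 * (U**2 + V**2 - 1.2 * U * V) / (1.0 - 0.36))   # assumed example
f /= f.sum() * du * dv                          # normalized joint density f(u, v)

f_u = f.sum(axis=1) * dv                        # marginal f_u(u)
f_v = f.sum(axis=0) * du                        # marginal f_v(v)

f_u_given_v = f / f_v[np.newaxis, :]            # f_{u|v}(u|v) = f(u, v) / f_v(v)
f_v_given_u = f / f_u[:, np.newaxis]            # f_{v|u}(v|u) = f(u, v) / f_u(u)

# The joint density factors both ways ...
assert np.allclose(f, f_u_given_v * f_v[np.newaxis, :])
assert np.allclose(f, f_v_given_u * f_u[:, np.newaxis])

# ... and the two factorizations combine into Bayes' theorem:
# f_{u|v}(u|v) = f_{v|u}(v|u) f_u(u) / f_v(v).
assert np.allclose(f_u_given_v,
                   f_v_given_u * f_u[:, np.newaxis] / f_v[np.newaxis, :])
```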
Recall: f(u, v) = f_{u|v}(u \mid v) f_v(v) = f_{v|u}(v \mid u) f_u(u). The two quantities u and v are said to have independent uncertainties if, in fact,

    f(u, v) = f_u(u) \, f_v(v)

(the joint distribution equals the product of the two marginal distributions). This implies (and is implied by)

    f_{u|v}(u \mid v) = f_u(u) ,    f_{v|u}(v \mid u) = f_v(v) .
[Figure: two quantities with independent uncertainties (the joint distribution is the product of the two marginal distributions).]
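As a hedged illustration, the sketch below builds a joint density directly as the product of two one-dimensional densities (an assumed example) and verifies that its conditionals coincide with its marginals.

```python
# Minimal sketch: independent uncertainties mean joint = product of marginals.
import numpy as np

u = np.linspace(-10.0, 10.0, 201)
v = np.linspace(-10.0, 10.0, 201)
du, dv = u[1] - u[0], v[1] - v[0]

# Build a joint density directly as a product of two 1-D densities (assumed example).
g_u = np.exp(-0.5 * u**2);          g_u /= g_u.sum() * du
g_v = np.exp(-0.5 * (v - 2.0)**2);  g_v /= g_v.sum() * dv
f = np.outer(g_u, g_v)                            # f(u, v) = f_u(u) f_v(v)

# Its marginals recover the two factors ...
f_u = f.sum(axis=1) * dv
f_v = f.sum(axis=0) * du
assert np.allclose(f_u, g_u) and np.allclose(f_v, g_v)

# ... and every conditional equals the corresponding marginal,
# f_{u|v}(u|v) = f_u(u), the defining property of independent uncertainties.
f_u_given_v = f / f_v[np.newaxis, :]
assert np.allclose(f_u_given_v, f_u[:, np.newaxis] * np.ones_like(f))
```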
Let u and v be two Cartesian parameters (then, volumetric probabilities and probability densities are identical). Let f(u, v) be a probability density that is not qualitatively different from a two-dimensional Gaussian. The mean values are

    \bar{u} = \int_{-\infty}^{+\infty} du \int_{-\infty}^{+\infty} dv \, u \, f(u, v) ,    \bar{v} = \int_{-\infty}^{+\infty} du \int_{-\infty}^{+\infty} dv \, v \, f(u, v) ,

the variances are

    c_{uu} = \sigma_u^2 = \int_{-\infty}^{+\infty} du \int_{-\infty}^{+\infty} dv \, (u - \bar{u})^2 f(u, v) ,    c_{vv} = \sigma_v^2 = \int_{-\infty}^{+\infty} du \int_{-\infty}^{+\infty} dv \, (v - \bar{v})^2 f(u, v) ,

and the covariance is

    c_{uv} = \int_{-\infty}^{+\infty} du \int_{-\infty}^{+\infty} dv \, (u - \bar{u})(v - \bar{v}) f(u, v) .
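These moments can be approximated by quadrature on a grid. The sketch below does so for an assumed two-dimensional Gaussian-like example, so the particular density, its mean values, and the grid are illustrative assumptions only.

```python
# Minimal sketch: means, variances, and covariance by quadrature on a grid.
import numpy as np

u = np.linspace(-10.0, 10.0, 201)
v = np.linspace(-10.0, 10.0, 201)
du, dv = u[1] - u[0], v[1] - v[0]
U, V = np.meshgrid(u, v, indexing="ij")

# Assumed example: a correlated Gaussian centered near (1, -2).
f = np.exp(-0.5 * ((U - 1)**2 + (V + 2)**2 - 1.2 * (U - 1) * (V + 2)) / (1.0 - 0.36))
f /= f.sum() * du * dv

w = f * du * dv                                   # density times integration weights

u_mean = (U * w).sum()                            # \bar{u} = int int du dv u f(u, v)
v_mean = (V * w).sum()                            # \bar{v}
c_uu = ((U - u_mean)**2 * w).sum()                # variance sigma_u^2
c_vv = ((V - v_mean)**2 * w).sum()                # variance sigma_v^2
c_uv = ((U - u_mean) * (V - v_mean) * w).sum()    # covariance

print(u_mean, v_mean)                             # about 1 and -2 for this example
print(c_uu, c_vv, c_uv)                           # about 1, 1, and 0.6 for this example
```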
The covariance matrix is

    C = \begin{pmatrix} c_{uu} & c_{uv} \\ c_{vu} & c_{vv} \end{pmatrix} = \begin{pmatrix} \sigma_u^2 & c_{uv} \\ c_{vu} & \sigma_v^2 \end{pmatrix} .

It is symmetric and positive definite (or, at least, non-negative). Note: the correlation, defined as

    \rho_{uv} = \frac{c_{uv}}{\sqrt{c_{uu}} \sqrt{c_{vv}}} = \frac{c_{uv}}{\sigma_u \sigma_v} ,

has the property -1 \le \rho_{uv} \le +1 .
The general form of a covariance matrix is

    C = \begin{pmatrix} c_{11} & c_{12} & c_{13} & \cdots \\ c_{21} & c_{22} & c_{23} & \cdots \\ c_{31} & c_{32} & c_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} = \begin{pmatrix} \sigma_1^2 & c_{12} & c_{13} & \cdots \\ c_{21} & \sigma_2^2 & c_{23} & \cdots \\ c_{31} & c_{32} & \sigma_3^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} .

The quantities with immediate interpretation are the standard deviations { \sigma_1, \sigma_2, \sigma_3, \ldots } and the correlation matrix

    R = \begin{pmatrix} 1 & \rho_{12} & \rho_{13} & \cdots \\ \rho_{21} & 1 & \rho_{23} & \cdots \\ \rho_{31} & \rho_{32} & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} .
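The passage from a covariance matrix to standard deviations and a correlation matrix is a one-line computation; the sketch below shows it for an assumed 3 x 3 covariance matrix.

```python
# Minimal sketch: standard deviations and correlation matrix from a covariance matrix.
import numpy as np

C = np.array([[ 4.0, 1.2, -0.6],
              [ 1.2, 1.0,  0.3],
              [-0.6, 0.3,  9.0]])            # assumed example: symmetric, positive definite

sigma = np.sqrt(np.diag(C))                  # standard deviations sigma_i
R = C / np.outer(sigma, sigma)               # correlations rho_ij = c_ij / (sigma_i sigma_j)

print(sigma)                                 # [2. 1. 3.]
print(R)                                     # unit diagonal, every |rho_ij| <= 1

# Going back: C = diag(sigma) R diag(sigma).
assert np.allclose(C, np.diag(sigma) @ R @ np.diag(sigma))
```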
The multidimensional Gaussian distribution is defined as

    f(x_1, x_2, \ldots, x_n) \equiv f(\mathbf{x}) = k \exp\!\left( -\tfrac{1}{2} (\mathbf{x} - \mathbf{x}_0)^t \, \mathbf{C}^{-1} (\mathbf{x} - \mathbf{x}_0) \right) .

Its mean is \mathbf{x}_0 and its covariance is \mathbf{C} (not obvious!).
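A quick sampling check makes the "not obvious" claim plausible: sample moments of draws from this density approach x_0 and C. The particular x_0, C, and sample size below are assumptions; the normalization constant k is the standard one for a Gaussian.

```python
# Minimal sketch: check by sampling that x_0 and C are the mean and covariance
# of the Gaussian k exp(-(x - x_0)^t C^{-1} (x - x_0) / 2).
import numpy as np

rng = np.random.default_rng(0)
x0 = np.array([1.0, -2.0])                       # assumed mean
C = np.array([[1.0, 0.6],
              [0.6, 2.0]])                       # assumed covariance

# Normalization constant k = 1 / sqrt((2 pi)^n det C).
n = len(x0)
k = 1.0 / np.sqrt((2 * np.pi)**n * np.linalg.det(C))
f = lambda x: k * np.exp(-0.5 * (x - x0) @ np.linalg.inv(C) @ (x - x0))
print(f(x0))                                     # density value at the mean

# Draw samples from this Gaussian and compare sample moments with x_0 and C.
samples = rng.multivariate_normal(x0, C, size=200_000)
print(samples.mean(axis=0))                      # approximately x_0
print(np.cov(samples, rowvar=False))             # approximately C
```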