Continuous models of computation: computability, complexity, universality
Amaury Pouly, Université de Paris, IRIF, CNRS, 27 January 2020

Analog computers: the comeback! (analog computers: hard to program; 1930: highly ...)


  1. Generable functions (total, univariate)
  Definition: f : R → R is generable if there exist d ∈ N (dimension), p ∈ R^d[R^d] (polynomial vector) and y_0 ∈ R^d such that the solution y : R → R^d of
      y'(x) = p(y(x)),   y(0) = y_0
  satisfies f(x) = y_1(x) for all x ∈ R.
  Example (hyperbolic tangent): f(x) = tanh(x). Take y' = 1 - y^2, y(0) = 0; then y(x) = tanh(x).
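  A quick numerical sanity check (a minimal sketch in Python, not from the talk; the step size and the restriction to x ≥ 0 are illustrative choices): integrate the polynomial ODE y' = 1 - y^2, y(0) = 0 with Euler steps and compare against tanh.

      import math

      # Euler-integrate the generable system y' = 1 - y^2, y(0) = 0 up to x >= 0.
      def generate_tanh(x, dt=1e-4):
          y, t = 0.0, 0.0
          while t < x:
              y += dt * (1.0 - y * y)   # polynomial right-hand side p(y) = 1 - y^2
              t += dt
          return y

      for x in (0.5, 1.0, 2.0):
          print(x, generate_tanh(x), math.tanh(x))

  The printed pairs agree to a few decimal places, which is all a forward Euler scheme with this step size can promise.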

  2. Generable functions (total, univariate): same definition as above.
  Example (rational function): f(x) = 1/(1 + x^2). Since f'(x) = -2x/(1 + x^2)^2 = -2x f(x)^2, take
      y_1' = -2 y_2 y_1^2,   y_1(0) = 1,   so that y_1(x) = 1/(1 + x^2),
      y_2' = 1,              y_2(0) = 0,   so that y_2(x) = x.

  3. Generable functions (total, univariate): same definition as above.
  Example (sum/difference): f = g ± h, using (g ± h)' = g' ± h'.

  4. Generable functions (total, univariate): same definition as above.
  Example (product): f = gh, using (gh)' = g'h + gh'.

  5. Generable functions (total, univariate): same definition as above.
  Example (inverse): f = 1/g, using f' = -g'/g^2 = -g' f^2.

  6. Generable functions (total, univariate): same definition as above.
  Example (integral): f = ∫ g, using f' = g.

  7. Generable functions (total, univariate): same definition as above.
  Example (derivative): f = g', where g = z_1 for some polynomial system z' = p(z); then f' = g'' = (p_1(z))' = ∇p_1(z) · z'.

  8. Generable functions (total, univariate): same definition as above.
  Example (composition): f = g ∘ h, where g = z_1 for some polynomial system z' = p(z); then (z ∘ h)' = (z' ∘ h) h' = p(z ∘ h) h'.

  9. Generable functions (total, univariate): same definition as above.
  Example (non-polynomial differential equation): f' = tanh ∘ f; then f'' = (tanh' ∘ f) f' = (1 - (tanh ∘ f)^2) f'.

  10. Generable functions (total, univariate): same definition as above.
  Example (Initial Value Problem, IVP): f(0) = f_0, f' = g ∘ f; then f'' = (g' ∘ f) f', and generability follows by combining the previous rules.

  11-12. Generable functions: a first summary
  Nice theory for the class of total, univariate generable functions:
  ◮ analytic
  ◮ contains polynomials, sin, cos, tanh, exp
  ◮ stable under ±, ×, /, ∘ and Initial Value Problems (IVPs)
  ◮ technicality on the field K of coefficients for stability under ∘
  Limitations:
  ◮ total functions only
  ◮ univariate only

  13. Generable functions (generalization)
  Definition: f : X ⊆ R^n → R (n = input dimension) is generable if X is open and connected and there exist d ∈ N (dimension), p ∈ K^{d×n}[R^d] (polynomial matrix), x_0 ∈ K^n, y_0 ∈ K^d and y : X → R^d such that
      y(x_0) = y_0,   J_y(x) = p(y(x)),
  and f(x) = y_1(x) for all x ∈ X, where J_y(x) is the Jacobian matrix of y at x.
  Notes:
  ◮ This is a partial differential equation!
  ◮ Uniqueness of the solution y ...
  ◮ ... but not existence (i.e. you have to show it exists).

  14. Generable functions (generalization): same definition as above.
  Example (monomial, n = 2, d = 3): f(x_1, x_2) = x_1 x_2^3. Take
      J_y = ( y_3^3   3 y_2 y_3^2 )
            ( 1       0           )
            ( 0       1           ),
      y(0, 0) = (0, 0, 0),   y(x) = (x_1 x_2^3, x_1, x_2).

  15. Generable functions (generalization): same definition as above.
  Example (monomial), coordinate by coordinate:
      y_1(0,0) = 0,   ∂_{x_1} y_1 = y_3^3,   ∂_{x_2} y_1 = 3 y_2 y_3^2,   y_1(x) = x_1 x_2^3
      y_2(0,0) = 0,   ∂_{x_1} y_2 = 1,       ∂_{x_2} y_2 = 0,             y_2(x) = x_1
      y_3(0,0) = 0,   ∂_{x_1} y_3 = 0,       ∂_{x_2} y_3 = 1,             y_3(x) = x_2
  This is tedious!
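  Checking the Jacobian equation by hand is indeed tedious, so here is a small symbolic sketch (illustrative only, assuming sympy is available) that verifies J_y(x) = p(y(x)) for the monomial example above.

      import sympy as sp

      x1, x2 = sp.symbols('x1 x2')
      y = sp.Matrix([x1 * x2**3, x1, x2])            # candidate solution y(x)
      J = y.jacobian(sp.Matrix([x1, x2]))            # 3x2 Jacobian in the x variables

      y1, y2, y3 = sp.symbols('y1 y2 y3')
      p = sp.Matrix([[y3**3, 3*y2*y3**2],            # right-hand side p, polynomial in y
                     [1, 0],
                     [0, 1]])

      # substitute y(x) into p and compare with the Jacobian
      assert sp.simplify(J - p.subs({y1: x1*x2**3, y2: x1, y3: x2})) == sp.zeros(3, 2)
      print("J_y(x) = p(y(x)) holds for this example")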

  16. Generable functions (generalization): same definition as above.
  Last example (inverse function): f(x) = 1/x for x ∈ (0, ∞). Take ∂_x y = -y^2, y(1) = 1; then y(x) = 1/x.

  17-19. Generable functions: summary
  Nice theory for the class of multivariate generable functions (over connected domains):
  ◮ analytic
  ◮ contains polynomials, sin, cos, tanh, exp
  ◮ stable under ±, ×, /, ∘ and Initial Value Problems (IVPs)
  ◮ technicality on the field K of coefficients for stability under ∘
  Natural questions:
  ◮ analytic → isn't that very limited?
  ◮ can we generate all analytic functions? No: Γ and Riemann's ζ are not generable.

  20-22. Why is this useful?
  Writing polynomial ODEs by hand is hard. Using generable functions, we can build complicated multivariate partial functions using other operations, and we know they are solutions to polynomial ODEs by construction.
  Example (almost rounding function): there exists a generable function round such that for any n ∈ Z, x ∈ R, λ > 2 and µ ≥ 0:
  ◮ if x ∈ [n - 1/2, n + 1/2] then |round(x, µ, λ) - n| ≤ 1/2,
  ◮ if x ∈ [n - 1/2 + 1/λ, n + 1/2 - 1/λ] then |round(x, µ, λ) - n| ≤ e^{-µ}.

  23. The theory of computable functions

  24. Computable function
  On input x ∈ R, run the system y(0) = q(x), y'(t) = p(y(t)) for t ∈ R_+; the output is f(x) = lim_{t→∞} y_1(t).
  [Figure: trajectories of y_1(t) converging to f(x); illustrated on inputs x, y ∈ [0, +∞) with output sign(x - y)?, in the three cases x > y, x = y, x < y.]
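  To make the limit-based definition concrete, here is a tiny numerical sketch (a toy system chosen for illustration, not one from the talk): the polynomial ODE y_1' = y_2^2 - y_1, y_2' = 0 with initial condition q(x) = (0, x) converges to f(x) = x^2.

      # Toy GPAC-style computation in the limit: y1(t) -> x^2 as t -> infinity.
      def compute_square(x, T=20.0, dt=1e-3):
          y1, y2 = 0.0, x               # y(0) = q(x) = (0, x)
          t = 0.0
          while t < T:
              y1 += dt * (y2 * y2 - y1)  # polynomial right-hand side
              t += dt                    # y2' = 0, so y2 stays equal to x
          return y1

      print(compute_square(3.0))         # close to 9 (the limit error is about e^-20)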

  25-31. The theory of computable functions
  Important facts (the slides add one item at a time):
  ◮ contains the generable functions
  ◮ computable functions are continuous
  ◮ stable under ±, ×, /
  ◮ stable under ∘
  ◮ stable under limits
  ◮ stable under iteration (with conditions)
  Enough to simulate a Turing machine! The proofs are too complicated to show here, but essentially it is all error management.

  32-35. Proof gem: iteration with differential equations
  Assume f is generable; can we iterate f with an ODE? That is, build a generable y such that y(x, n) ≈ f^[n](x) for all n ∈ N.
  Idea: alternate two phases on each unit time interval.
  ◮ First half: y' ≈ 0 and z' ≈ f(y) - z, so z converges towards f(y) while y barely moves.
  ◮ Second half: y' ≈ z - y and z' ≈ 0, so y converges towards z while z barely moves.
  [Figure: over t ∈ [0, 3], y steps from x to f(x) to f^[2](x), ...]
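  A rough numerical sketch of the two-phase idea (illustrative only: it uses hard 0/1 switching between the phases and arbitrary constants K and dt, whereas the actual proof uses generable, smooth switching functions and controls the error):

      import math

      def iterate_via_ode(f, x, n, K=60.0, dt=1e-4):
          """Approximate f^[n](x) by alternating the two phases on unit intervals."""
          y, z = x, x
          t = 0.0
          while t < n:
              phase = t - math.floor(t)        # position inside the current unit interval
              if phase < 0.5:                  # phase 1: copy f(y) into z, freeze y
                  z += dt * K * (f(y) - z)
              else:                            # phase 2: copy z into y, freeze z
                  y += dt * K * (z - y)
              t += dt
          return y

      f = lambda u: 0.5 * u + 1.0              # toy map, f^[n](0) = 2 * (1 - 2^-n)
      print(iterate_via_ode(f, 0.0, 5), "vs exact", 2 * (1 - 0.5**5))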

  36. Recap
  [Figure: trajectories y_1(t) that end up above +1 ("Yes") or below -1 ("No").]
  Theorem (Bournez et al., 2010): this is equivalent to a Turing machine.
  ◮ analog computability theory
  ◮ purely continuous characterization of classical computability

  37. The complexity theory of computable functions

  38-44. Complexity of analog systems
  ◮ Turing machines: T(x) = number of steps to compute on x.
  ◮ GPAC, tentative definition: for y' = p(y), y(0) = (x, 0, ..., 0), let T(x, µ) = the first time t such that |y_1(t) - f(x)| ≤ e^{-µ}.
  Time contraction: if y_1(t) converges to f(x), then z(t) = y(e^t) and w(t) = y(e^{e^t}) converge to the same limit ever faster, and they still satisfy polynomial ODEs.
  Something is wrong: with this tentative definition, all functions have constant time complexity. Because of this time contraction problem, defining GPAC time complexity remained an open problem.
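  A small numerical illustration of time contraction (a toy system, not from the talk): the system y_1' = y_2 - y_1, y_2' = 0, y(0) = (0, x) converges to f(x) = x; speeding it up via z(t) = y(e^t) is again a polynomial ODE, z_1' = (z_2 - z_1) w with an extra component w' = w, w(0) = 1, so the same output is reached much sooner while w = e^t blows up.

      # Compare the plain system with its time-contracted version.
      def simulate(x, T, contracted, dt=1e-4):
          y1, y2, w = 0.0, x, 1.0
          t = 0.0
          while t < T:
              scale = w if contracted else 1.0
              y1 += dt * scale * (y2 - y1)   # y1' = (y2 - y1), or (z2 - z1) * w when contracted
              if contracted:
                  w += dt * w                # w' = w, i.e. w(t) = e^t
              t += dt
          return y1, w

      x = 3.0
      print(simulate(x, 10.0, contracted=False))  # slow convergence, bounded components
      print(simulate(x, 3.0, contracted=True))    # much closer to 3, but w is about e^3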

  45-47. Time-space correlation of the GPAC
  Contracting time from y (with y' = p(y), y(0) = q(x)) to z(t) = y(e^t) requires adding an extra component w(t) = e^t to the system, and w grows as fast as the speed-up.
  Observation: time scaling costs "space" (the size of the components). Time complexity for the GPAC must involve both time and space!

  48-49. Complexity in the analog world
  Complexity measure: the length of the solution curve.
  ◮ Time acceleration: same curve, hence same complexity!
  ◮ Same time, different curves: different complexity!
  [Figures: the curve traced up to y(10) versus an accelerated run reaching the same point by time 1; two runs of the same duration tracing curves of very different length.]
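  A quick numerical check of the first point (toy system, not from the talk): reparametrising time changes how fast the curve is traced but not its length. Below, the second run integrates z(t) = y(e^t - 1), i.e. the same system sped up exponentially; both runs end near the same point and report almost the same curve length, in very different amounts of time.

      import math

      def length_of_run(T, speedup, x=3.0, dt=1e-5):
          """Integrate y1' = speedup(t) * (x - y1), y1(0) = 0; return (time, length, y1)."""
          y1, length, t = 0.0, 0.0, 0.0
          while t < T:
              d1 = speedup(t) * (x - y1)
              length += abs(d1) * dt          # length of the traced curve: integral of |y1'|
              y1 += d1 * dt
              t += dt
          return t, length, y1

      print(length_of_run(10.0, lambda t: 1.0))        # original system, time 10
      print(length_of_run(math.log(11.0), math.exp))   # sped-up run, time log(11) ~ 2.4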

  50-54. Characterization of polynomial time
  Definition: L ∈ ANALOG-PTIME ⇔ there is a polynomial p such that for every word w, the solution of
      y' = p(y),   y(0) = (ψ(w), |w|, 0, ..., 0),   where ψ(w) = Σ_{i=1}^{|w|} w_i 2^{-i},
  with ℓ(t) = length of y over [0, t], satisfies:
  1. if y_1(t) ≥ 1 then w ∈ L (accept),
  2. if y_1(t) ≤ -1 then w ∉ L (reject),
  3. if ℓ(t) ≥ poly(|w|) then |y_1(t)| ≥ 1 (the "still computing" band |y_1| < 1 is forbidden beyond polynomial length).
  Theorem: PTIME = ANALOG-PTIME.
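  The word encoding used in the initial condition is easy to compute; a minimal sketch (assuming a binary alphabet, as in the slides):

      def psi(word):
          """psi(w) = sum over i of w_i * 2^-i for a binary word w."""
          return sum(int(bit) * 2.0 ** -(i + 1) for i, bit in enumerate(word))

      w = "10110"
      y0 = (psi(w), float(len(w)), 0.0, 0.0)   # (psi(w), |w|, 0, ..., 0); the number of extra zeros depends on the system
      print(psi(w))                            # 0.6875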

  55-56. Summary
  [Figure: ANALOG-P_R (computing a real function f on [a, b]) next to ANALOG-PTIME (accepting or rejecting a word w within polynomial length).]
  Theorem:
  ◮ L ∈ PTIME if and only if L ∈ ANALOG-PTIME
  ◮ f : [a, b] → R is computable in polynomial time ⇔ f ∈ ANALOG-P_R
  ◮ analog complexity theory based on length
  ◮ time of the Turing machine ⇔ length of the GPAC solution
  ◮ purely continuous characterization of PTIME
  ◮ only rational coefficients needed

  57. Chemical Reaction Networks

  58-62. Chemical Reaction Networks
  Definition: a reaction system is a finite set of
  ◮ molecular species y_1, ..., y_n
  ◮ reactions of the form Σ_i a_i y_i → Σ_i b_i y_i with rate f (a_i, b_i ∈ N)
  Example (any resemblance to chemistry is purely coincidental):
      2H + O → H2O
      C + O2 → CO2
  Assumption (law of mass action): a reaction Σ_i a_i y_i → Σ_i b_i y_i with rate constant k has rate f(y) = k Π_i y_i^{a_i}.
  Semantics: discrete, differential or stochastic. The differential semantics reads
      y_i' = Σ_{reactions R} (b_i^R - a_i^R) f_R(y) = Σ_{reactions R} (b_i^R - a_i^R) k_R Π_j y_j^{a_j^R}.
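  A small sketch of the differential semantics (an illustrative helper, not code from the talk): given mass-action reactions, build the polynomial right-hand side y_s' = Σ_R (b_s^R - a_s^R) k_R Π_j y_j^{a_j^R}.

      from typing import Dict, List, Tuple

      # (reactants a, products b, rate constant k)
      Reaction = Tuple[Dict[str, int], Dict[str, int], float]

      def derivatives(reactions: List[Reaction], y: Dict[str, float]) -> Dict[str, float]:
          dy = {s: 0.0 for s in y}
          for a, b, k in reactions:
              flux = k
              for s, exp in a.items():
                  flux *= y[s] ** exp                       # mass action: k * prod_j y_j^(a_j)
              for s in y:
                  dy[s] += (b.get(s, 0) - a.get(s, 0)) * flux
          return dy

      # toy example in the spirit of the slide: 2H + O -> H2O with rate constant 1.0
      reactions = [({"H": 2, "O": 1}, {"H2O": 1}, 1.0)]
      print(derivatives(reactions, {"H": 1.0, "O": 0.5, "H2O": 0.0}))
      # {'H': -1.0, 'O': -0.5, 'H2O': 0.5}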

  63. Chemical Reaction Networks (CRNs)
  ◮ CRNs with the differential semantics and the mass action law = polynomial ODEs
  ◮ polynomial ODEs are Turing complete
