Modular Implementation of Programming Languages and a Partial Order Approach to Infinitary Rewriting
Patrick Bahr (paba@diku.dk), Department of Computer Science, University of Copenhagen
PhD Defence, 30 November 2012
The Two Big Pictures: Modular Implementation of Programming Languages / Partial Order Approach to Infinitary Rewriting
[Figure: a reduction sequence over the signature {f, g, a}, converging to an infinite term]
Modular Implementation of Programming Languages
Motivation: Implementation of a DSL-Based ERP System
Enterprise resource planning (ERP) systems integrate several software components that are essential for managing a business:
- Financial Management
- Supply Chain Management
- Manufacturing Resource Planning
- Human Resource Management
- Customer Relationship Management
- ...
What do ERP systems look like under the hood?
An Alternative Approach: POETS [Henglein et al. 2009]
The abstract picture:
- We have a number of domain-specific languages: UI Language, Contract Language, Rule Language, Ontology Language, Report Language, built around an ERP runtime system.
- Each pair of DSLs shares some common sublanguage.
- All of them share a common language of values.
- We have the same situation on the type level!
How do we implement this system without duplicating code?!
Piecing Together DSLs – Syntax
Library of language features:
- F1: basic data structures
- F2: reading and aggregating data from the database
- F3: arithmetic operations
- F4: contract clauses
- F5: type definitions
- F6: inference rules
Piecing Together DSLs – Syntax
Constructing the DSLs from the library of features F1, ..., F6:
- Report Language   = F1 + F2 + F3
- Contract Language = F1 + F4 + F3
- Ontology Language = F1 + F5
- Rule Language     = F1 + F6 + F3
Piecing Together Functions – Example: Pretty Printing
Goal: a function of type Program_L → String for each language L.
Define a "function" pp_i for each feature F_i, and combine them:
    pp1 : F1 String → String
  + pp2 : F2 String → String
  + pp3 : F3 String → String
Other combinations work the same way, reusing e.g. pp1 together with pp4, pp5, or pp6.
How does it work? Based on: Wouter Swierstra, Data types à la carte.
How does it work?
The monolithic recursive data type

    data Exp = Lit Int
             | Add Exp Exp
             | Mult Exp Exp

is decomposed along two axes. Decompose the signature into separate functors:

    data Lit e = Lit Int
    data Ops e = Add e e
               | Mult e e

Factor out the recursion:

    data Fix s = In (s (Fix s))

And combine:

    type Sig = Lit :+: Ops
    type Exp = Fix Sig
Combining Functions
Non-recursive functions:

    pp1 :: Lit String -> String
    pp1 (Lit i) = show i

    pp2 :: Ops String -> String
    pp2 (Add e1 e2)  = "(" ++ e1 ++ " + " ++ e2 ++ ")"
    pp2 (Mult e1 e2) = "(" ++ e1 ++ " * " ++ e2 ++ ")"

Fold:

    fold :: Functor f => (f a -> a) -> Fix f -> a
    fold f (In t) = f (fmap (fold f) t)

Applying fold:

    pp :: Fix (Lit :+: Ops) -> String
    pp = fold (pp1 :+: pp2)
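The slides elide the coproduct machinery. The following is a self-contained sketch of the same construction: `(:+:)` and `caseSum` are hand-rolled stand-ins for the à-la-carte library combinators (the slide writes the combined algebra as `pp1 :+: pp2`; `caseSum` and the smart constructors `lit`, `add`, `mult` are our names).

```haskell
{-# LANGUAGE TypeOperators #-}

-- Fixpoint of a signature functor
data Fix f = In (f (Fix f))

-- Two language features as separate signature functors
data Lit e = Lit Int
data Ops e = Add e e | Mult e e

-- Coproduct of signatures
data (f :+: g) e = Inl (f e) | Inr (g e)

instance Functor Lit where
  fmap _ (Lit i) = Lit i

instance Functor Ops where
  fmap f (Add x y)  = Add (f x) (f y)
  fmap f (Mult x y) = Mult (f x) (f y)

instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap h (Inl x) = Inl (fmap h x)
  fmap h (Inr y) = Inr (fmap h y)

fold :: Functor f => (f a -> a) -> Fix f -> a
fold f (In t) = f (fmap (fold f) t)

-- Algebras for the individual features
pp1 :: Lit String -> String
pp1 (Lit i) = show i

pp2 :: Ops String -> String
pp2 (Add e1 e2)  = "(" ++ e1 ++ " + " ++ e2 ++ ")"
pp2 (Mult e1 e2) = "(" ++ e1 ++ " * " ++ e2 ++ ")"

-- Combining two algebras by case analysis on the coproduct
caseSum :: (f a -> a) -> (g a -> a) -> (f :+: g) a -> a
caseSum f _ (Inl x) = f x
caseSum _ g (Inr y) = g y

type Exp = Fix (Lit :+: Ops)

pp :: Exp -> String
pp = fold (caseSum pp1 pp2)

-- Smart constructors for readability
lit :: Int -> Exp
lit = In . Inl . Lit

add, mult :: Exp -> Exp -> Exp
add x y  = In (Inr (Add x y))
mult x y = In (Inr (Mult x y))
```

For example, `pp (mult (lit 2) (add (lit 3) (lit 4)))` yields `"(2 * (3 + 4))"`. The real library additionally derives the `Functor` instances and provides automatic injections, so `lit`, `add`, and `mult` need not be written by hand.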
Our Contributions
Make compositional data types more useful in practice.
- Extend the class of definable types:
  - mutually recursive types, GADTs
  - abstract syntax trees with variable binders
- "Algebras with more structure":
  - algebras with effects
  - tree homomorphisms, tree automata, tree transducers
    - sequential composition → program optimisation (deforestation)
    - tupling → additional modularity
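To illustrate "algebras with effects": the carrier of a fold can itself be a monadic value, so the effect threads through the traversal. A minimal sketch, with an invented `Arith` feature (not a signature from the thesis) whose division algebra uses `Maybe` to signal division by zero:

```haskell
data Fix f = In (f (Fix f))

-- A small arithmetic feature with a partial operation
data Arith e = Lit Int | Div e e

instance Functor Arith where
  fmap _ (Lit i)   = Lit i
  fmap f (Div x y) = Div (f x) (f y)

fold :: Functor f => (f a -> a) -> Fix f -> a
fold f (In t) = f (fmap (fold f) t)

-- An algebra whose carrier is Maybe Int: failure propagates
evalAlg :: Arith (Maybe Int) -> Maybe Int
evalAlg (Lit i)     = Just i
evalAlg (Div mx my) = do
  x <- mx
  y <- my
  if y == 0 then Nothing else Just (x `div` y)

eval :: Fix Arith -> Maybe Int
eval = fold evalAlg
```

So `eval (In (Div (In (Lit 6)) (In (Lit 2))))` gives `Just 3`, while dividing by a subterm that evaluates to zero gives `Nothing` without any special plumbing.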
Compositionality
We may compose tree automata along 3 different dimensions.
- Input signature (the type of the AST):
    ⟦A1⟧ : μS1 → R  and  ⟦A2⟧ : μS2 → R   ⟹   ⟦A1 + A2⟧ : μ(S1 + S2) → R
- Sequential composition (a.k.a. deforestation):
    ⟦A1⟧ : μS1 → μS2  and  ⟦A2⟧ : μS2 → μS3   ⟹   ⟦A1 ◦ A2⟧ : μS1 → μS3
- Output type (tupling / product automaton construction):
    ⟦A1⟧ : μS → R1  and  ⟦A2⟧ : μS → R2   ⟹   ⟦A1 × A2⟧ : μS → R1 × R2
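The tupling dimension can be sketched directly in Haskell: two algebras over the same signature are paired into one whose carrier is the product, so a single traversal computes both results. The names (`Tree`, `prodAlg`, `sizeAndDepth`) are ours, chosen for illustration:

```haskell
data Fix f = In (f (Fix f))

data Tree e = Leaf | Node e e

instance Functor Tree where
  fmap _ Leaf       = Leaf
  fmap f (Node l r) = Node (f l) (f r)

fold :: Functor f => (f a -> a) -> Fix f -> a
fold f (In t) = f (fmap (fold f) t)

-- Two independent algebras over the same signature
sizeAlg :: Tree Int -> Int
sizeAlg Leaf       = 1
sizeAlg (Node l r) = l + r

depthAlg :: Tree Int -> Int
depthAlg Leaf       = 0
depthAlg (Node l r) = 1 + max l r

-- Product automaton construction: pair the carriers
prodAlg :: Functor f => (f a -> a) -> (f b -> b) -> f (a, b) -> (a, b)
prodAlg f g t = (f (fmap fst t), g (fmap snd t))

sizeAndDepth :: Fix Tree -> (Int, Int)
sizeAndDepth = fold (prodAlg sizeAlg depthAlg)
```

For the tree `Node Leaf (Node Leaf Leaf)` this returns `(3, 2)`: three leaves, depth two, in one pass.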
Contextuality
Tupling / product automaton construction:
    ⟦A1⟧ : μS → R1  and  ⟦A2⟧ : μS → R2   ⟹   ⟦A1 × A2⟧ : μS → R1 × R2
Mutumorphisms / dependent product automata:
    A1 : R2 ⇒ S → R1  and  A2 : R1 ⇒ S → R2   ⟹   ⟦A1 × A2⟧ : μS → R1 × R2
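In a mutumorphism each algebra may depend on the *other* algebra's result, yet the pair is still computed in a single fold. A sketch using the classic Fibonacci example over a natural-number signature (the example and all names are ours):

```haskell
data Fix f = In (f (Fix f))

data NatF e = Z | S e

instance Functor NatF where
  fmap _ Z     = Z
  fmap f (S n) = S (f n)

fold :: Functor f => (f a -> a) -> Fix f -> a
fold f (In t) = f (fmap (fold f) t)

-- A1: computes fib(n), using the other algebra's result (fib(n-1))
fibAlg :: NatF (Integer, Integer) -> Integer
fibAlg Z           = 0
fibAlg (S (fn, pn)) = fn + pn

-- A2: computes the "previous" Fibonacci number, using A1's result
prevAlg :: NatF (Integer, Integer) -> Integer
prevAlg Z           = 1
prevAlg (S (fn, _)) = fn

-- The dependent product: both algebras run over the paired carrier
fib :: Fix NatF -> Integer
fib = fst . fold (\t -> (fibAlg t, prevAlg t))

nat :: Int -> Fix NatF
nat 0 = In Z
nat n = In (S (nat (n - 1)))
```

Note that neither `fibAlg` nor `prevAlg` is a fold on its own; only their dependent product is, which is exactly the shape of the mutumorphism rule above.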
Discussion
Advantages:
- it's just a Haskell library
- uses well-known concepts (algebras, tree automata, functors etc.)
- high degree of modularity
- facilitates reuse
Drawbacks:
- it's just a Haskell library
- error messages are sometimes rather cryptic
- learning curve
- typical drawbacks of higher-order abstract syntax
Future work:
- reasoning about modular implementations (Meta-Theory à la Carte [Delaware et al. 2013])
- describing interactions between modules
- how well does modularity scale?
And now it’s time for something completely different.
Partial Order Approach to Infinitary Rewriting
[Figure: a reduction sequence over the signature {f, g, a}, converging to an infinite term]
Rewriting Systems
What are (term) rewriting systems?
- generalisation of (first-order) functional programs
- consist of directed symbolic equations of the form l → r
- semantics: any instance of a left-hand side may be replaced by the corresponding instance of the right-hand side

Example (Term rewriting system defining addition and multiplication)
  R+* = { x + 0 → x,          x ∗ 0 → 0,
          x + s(y) → s(x + y),  x ∗ s(y) → x + (x ∗ y) }
  s(s(0)) ∗ s(s(0)) →^7 s(s(s(s(0))))
R+* is terminating!
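The system R+* can be rendered as a small Haskell normaliser, one case per rewrite rule; this is an illustrative sketch of the rules' semantics, not machinery from the thesis:

```haskell
-- Ground terms over the signature {0, s, +, *}
data Term = Zero | S Term | Add Term Term | Mul Term Term
  deriving (Eq, Show)

-- Rewrite to normal form, applying the four rules of R+*
norm :: Term -> Term
norm Zero      = Zero
norm (S t)     = S (norm t)
norm (Add x y) = case norm y of
  Zero -> norm x                   -- x + 0    -> x
  S y' -> S (norm (Add x y'))      -- x + s(y) -> s(x + y)
  y'   -> Add (norm x) y'          -- stuck (cannot occur on ground terms)
norm (Mul x y) = case norm y of
  Zero -> Zero                     -- x * 0    -> 0
  S y' -> norm (Add x (Mul x y'))  -- x * s(y) -> x + (x * y)
  y'   -> Mul (norm x) y'

two, four :: Term
two  = S (S Zero)
four = S (S (S (S Zero)))
```

Running `norm (Mul two two)` rewrites s(s(0)) ∗ s(s(0)) to s(s(s(s(0)))), matching the seven-step reduction on the slide; since R+* is terminating, `norm` always halts on ground terms.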
Non-Terminating Rewriting Systems
Termination: repeated rewriting eventually reaches a normal form.
Non-terminating systems can be meaningful:
- modelling reactive systems, e.g. by process calculi
- approximation algorithms which enhance the accuracy of the approximation with each iteration, e.g. computing π
- specification of infinite data structures, e.g. streams

Example (Infinite lists)
  R_nats = { from(x) → x : from(s(x)) }
  from(0) →^6 0 : 1 : 2 : 3 : 4 : 5 : from(6) → ...
Intuitively this converges to the infinite list 0 : 1 : 2 : 3 : 4 : 5 : ...
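The rule of R_nats transcribes directly into a lazy Haskell definition, where laziness plays the role of the infinitary limit (a standard observation; the transcription below is ours, using machine integers in place of the s/0 terms):

```haskell
-- from(x) -> x : from(s(x)), with s rendered as (+ 1)
from :: Integer -> [Integer]
from x = x : from (x + 1)
```

Although `from 0` denotes an infinite list, any finite prefix is computable: `take 6 (from 0)` yields `[0,1,2,3,4,5]`, mirroring the six-step reduction on the slide.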
Infinitary Term Rewriting – The Metric Approach
When does a rewrite sequence converge?
- Rewrite rules are applied at increasingly deeply nested subterms.
What is the result of a converging rewrite sequence?
- A converging rewrite sequence t0 → t1 → t2 → ... approximates a uniquely determined term t arbitrarily well: for each depth d ∈ ℕ there is some n ∈ ℕ such that t_n, t_{n+1}, ... do not differ from t up to depth d.
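The metric underlying this notion of convergence can be spelled out explicitly; the following is the standard definition from the infinitary rewriting literature (not verbatim from the slides):

```latex
d(s, t) =
  \begin{cases}
    0       & \text{if } s = t, \\
    2^{-k}  & \text{otherwise, where } k \text{ is the least depth at which } s \text{ and } t \text{ differ.}
  \end{cases}
```

Convergence of $t_0 \to t_1 \to t_2 \to \dots$ to $t$ is then Cauchy convergence in this metric; *strong* m-convergence additionally requires the depths of the contracted redexes to tend to infinity, which is the "increasingly deeply nested subterms" condition above.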
Example: Convergence of a Reduction
  R = { a → g(a) }
[Figure: a reduction applying a → g(a) at increasingly deep positions under f, converging to an infinite term]
Example: Non-Convergence of a Reduction
  R = { a → g(a), h(x) → h(g(x)) }
[Figure: a reduction in which the rule h(x) → h(g(x)) is applied at the same shallow position over and over; the sequence does not converge in the metric sense]
Issues of the Metric Approach
- Notion of convergence is too restrictive (no notion of local convergence)
- May still not reach a normal form
- Orthogonal TRSs are not infinitarily confluent

Infinitary confluence: for every t, t1, t2 ∈ T∞(Σ, V) with t1 ↞ t ↠ t2 there is a t′ ∈ T∞(Σ, V) with t1 ↠ t′ ↞ t2.
Partial Order Approach to Infinitary Term Rewriting
Partial order on terms:
- partial terms: terms with an additional constant ⊥ (read as "undefined")
- partial order ≤⊥ reads as "is less defined than"
- ≤⊥ is a complete semilattice (= bounded complete cpo)
Convergence formalised by the limit inferior:
  liminf_{ι→α} t_ι = ⨆_{β<α} ⨅_{β≤ι<α} t_ι
- intuition: eventual persistence of nodes of the terms
- convergence: limit inferior of the contexts of the reduction
An Example
[Figure: the reduction from the non-convergence example; in the partial order model it p-converges, with the position that is rewritten infinitely often at finite depth becoming ⊥ in the limit]
Properties of the Partial Order Approach
Benefits:
- reduction sequences always converge (but the result may contain ⊥s)
- more fine-grained than the metric approach
- subsumes the metric approach, i.e. both approaches agree on total reductions

Theorem (total p-convergence = m-convergence). For every reduction S in a TRS, we have S : s ↠p t with t total ⟺ S : s ↠m t.

Theorem (confluence, normalisation). Every orthogonal TRS is normalising and confluent w.r.t. p-convergent reductions, i.e. every term has a unique normal form.
Sharing – From Terms to Term Graphs
Lazy evaluation and infinitary rewriting: lazy evaluation consists of two things:
- non-strict evaluation
- sharing (avoids duplication)

Example: from(x) → x : from(s(x)) — read as a term graph rule, the variable x is shared between its two occurrences on the right-hand side.
Example
[Figure: term graph reduction of from(0), unfolding to the stream 0 : s(0) : s(s(0)) : ... with shared subterms]
Properties of Infinitary Term Graph Rewriting

Theorem (total p-convergence = m-convergence). For every reduction S in a GRS, we have S : g ↠p h with h total ⟺ S : g ↠m h.

Theorem (soundness). For every left-linear, left-finite GRS R: if g ↠p h in R, then U(g) ↠p U(h) in the unravelled system U(R).
Completeness

Theorem (Completeness). p-convergence in an orthogonal, left-finite GRS R is complete: every reduction U(g) ↠p t′ in U(R) can be extended by a reduction t′ ↠p t such that t = U(h) for some reduction g ↠p h in R.

This does not hold for metric convergence! Completeness of m-convergence holds only for normalising reductions: if U(g) ↠m t in U(R) with t ∈ NF, then there is a reduction g ↠m h in R with U(h) = t.