  1. Learning Equivalence Structures. Luca San Mauro (Vienna University of Technology). Logic Colloquium 2018. Joint work with Ekaterina Fokina and Timo Koetzing.

  2–5. Computational Learning Theory
  Computational Learning Theory (CLT) is a vast research program that comprises different models of learning in the limit. It deals with the question of how a learner, provided with more and more data about some environment, is eventually able to achieve systematic knowledge about it.
  • (Gold, 1967): language identification.
  More recently, researchers have applied the machinery of CLT to algebraic structures:
  • (Stephan, Ventsov, 2001): learning ring ideals of commutative rings.
  • (Harizanov, Stephan, 2002): learning subspaces of V∞.

  6–11. Our framework
  • Let K be a class of structures with some uniform effective enumeration {C_i}_{i ∈ ω} of the computable structures from K, up to isomorphism.
  • A learner M is a total function which takes as inputs finite substructures of a given structure S from K.
  • If M(S_i)↓ = n, for finite S_i ⊆ S, then n represents M's conjecture as to an index for S in the above enumeration.
  • M InfEx≅-learns S if, for all T ≅ S, there exists n ∈ ω such that T ≅ C_n and M(T_i)↓ = n for all but finitely many i.
  • A family of structures A is InfEx≅-learnable if there is an M that learns all A ∈ A.
  • InfEx≅(K) denotes the class of families of K-structures that are InfEx≅-learnable.
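  The definitions above can be rendered as a small toy program. Everything below is our own illustration, not from the talk: the stage-wise approximations C0 and C1 stand in for a two-element enumeration {C_i}, and infex_learns checks convergence only on finitely many stages, whereas the real InfEx≅ condition is a limit condition and is not decidable.

```python
# Toy sketch of the InfEx framework (names C0, C1, learner, infex_learns are
# ours, not the paper's). A finite substructure of an equivalence structure is
# represented as a list of its equivalence classes.

# Two "computable structures", given by their finite approximation at stage t:
# C0 has one single class that keeps growing; C1 has only singleton classes.
def C0(t): return [list(range(t + 1))]
def C1(t): return [[x] for x in range(t + 1)]

def learner(partition):
    """Conjecture index 0 while all elements seen are equivalent, else 1."""
    return 0 if len(partition) <= 1 else 1

def infex_learns(M, approx, truth, stages=50):
    """Approximate check that M converges to the index `truth`: we only
    inspect the last few of finitely many stages, since the genuine
    'for all but finitely many i' condition cannot be tested directly."""
    guesses = [M(approx(t)) for t in range(stages)]
    return all(g == truth for g in guesses[-10:])

print(infex_learns(learner, C0, 0), infex_learns(learner, C1, 1))  # True True
```

  On C1 the learner's first conjecture (one element seen, hence one class) is wrong, but a single mind change is allowed: only the limit of the conjectures matters.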

  12–13. Our framework, continued
  Our notation comes from classical CLT: Inf (short for informant) means that we receive both positive and negative information about S; Ex (short for explanatory) means that M shall converge to a single conjecture for S. At the end, we will discuss learning classes obtained by choosing natural alternatives to Inf, Ex, and ≅.
  Remark: our model shares many analogies with the First-order Framework introduced in (Martin, Osherson, 1998).

  14–16. Equivalence structures
  • Denote by E the class of equivalence structures. Our main focus is on InfEx≅(E).
  • (Downey, Melnikov, Ng, 2016): there is a Friedberg enumeration of computable equivalence structures.
  • A non-Friedberg enumeration is of course much easier to define: e.g., for all n, let the sizes of the E_n-classes be the cardinalities of the columns of W_n.

  17–26. Example of learnability, 1
  [Animation: two equivalence structures A and B, with a growing finite substructure S highlighted. Frame by frame the learner conjectures M(S) = ⟨A⟩; on the final frame the data forces a mind change to M(S) = ⟨B⟩.]
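  The original animation is not recoverable from this transcript, so here is an illustrative pair in its spirit; the structures A and B and the learner M below are our assumptions, not the talk's. Take A with only singleton classes and B with, in addition, exactly one class of size 2: the learner conjectures ⟨A⟩ until a pair of equivalent elements shows up, and then switches to ⟨B⟩ for good.

```python
# Hypothetical learnable pair {A, B}: A = singletons only, B = singletons plus
# one class of size 2. The learner only needs one mind change.

def M(partition):
    """partition: the equivalence classes of the finite data seen so far."""
    if any(len(block) >= 2 for block in partition):
        return "B"   # a non-singleton class rules A out permanently
    return "A"       # data is still consistent with A

# A growing sequence of finite substructures of (a copy of) B:
frames = [
    [[0]],
    [[0], [1]],
    [[0], [1], [2]],
    [[0, 3], [1], [2]],        # the size-2 class finally appears
    [[0, 3], [1], [2], [4]],
]
print([M(f) for f in frames])  # ['A', 'A', 'A', 'B', 'B']
```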

  27–35. Example of learnability, 2
  [Animation: structures A and B as before, with a growing finite substructure S. The learner's conjectures change with the data: M(S) = ⟨A⟩ for four frames, then ⟨B⟩ for three frames, then ⟨A⟩ again.]

  36–37. Example of nonlearnability, 1
  Strategy:
  • Assume that M learns {A, B};
  • Construct by stages a structure S ∈ {T : T ≅ A ∨ T ≅ B} such that M fails to learn S.

  38–47. [Animation of the diagonalization: whenever M's conjecture matches the current isomorphism type of S (first S ≅ B with M(S) = ⟨B⟩, then S ≅ A with M(S) = ⟨A⟩), the construction extends S to the other type, so M changes its mind infinitely often and fails to learn S.]

  48. Example of nonlearnability, 2
  Recall that the character c_A of A is c_A = {⟨k, i⟩ : A has at least i equivalence classes of size k}. Define A = {A_i}_{i ∈ ω} such that, for all A_i, c_{A_i} = {⟨k, 1⟩ : k ≠ i}.
  Proposition. A ∉ InfEx≅(E).
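  For finite structures the character is directly computable. The sketch below is ours (the helper name character is not from the talk) and follows the "at least i classes of size k" reading of the definition, which makes characters monotone under taking substructures.

```python
from collections import Counter

def character(partition):
    """c_A = {<k, i> : A has at least i classes of size k}, for finite A."""
    counts = Counter(len(block) for block in partition)   # size -> #classes
    return {(k, i) for k, n in counts.items() for i in range(1, n + 1)}

A = [[0, 1], [2], [3], [4, 5, 6]]   # two size-1 classes, one size-2, one size-3
print(sorted(character(A)))         # [(1, 1), (1, 2), (2, 1), (3, 1)]
```

  With this set representation, the containment c_S ⊆ c_A appearing in the finite-separability definition is a plain set inclusion.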

  49–51. Finite separability
  • S is a limit of a finite family A if there is A ∈ A such that A ↪_fin S, A ≇ S, and c_S ⊆ c_A.
  • S is a limit of an infinite family A if (∀A ∈ A)(A ↪_fin S ∧ A ≇ S).
