x_1(k+1) = max{ x_1(k) + 2, x_2(k) + 5 }
x_2(k+1) = max{ x_1(k) + 3, x_2(k) + 3 }

x_1(0) = x_2(0) = 0   ⇒   x(k): (0, 0), (5, 3), (8, 8), (13, 11), (16, 16), ...
x_1(0) = 1, x_2(0) = 0   ⇒   x(k): (1, 0), (5, 4), (9, 8), (13, 12), (17, 16), ...

The second schedule is more regular, isn't it?
Is it possible to construct a better schedule? The cycle S_1 → S_2 → S_1 takes 8 hours, hence the interval between trains cannot be less than 8/2 = 4. So this is the optimal schedule!
x_1(k+1) = max{ x_1(k) + 2, x_2(k) + 5 }                ( 2  5 )
x_2(k+1) = max{ x_1(k) + 3, x_2(k) + 3 }   ,      A  =  ( 3  3 )

Replacing max by ⊕ and + by ⊗:   x(k+1) = A ⊗ x(k).

x(1) = A ⊗ x(0)
x(2) = A ⊗ x(1) = A ⊗ (A ⊗ x(0)) = A^⊗2 ⊗ x(0)
Similarly, x(k) = A^⊗k ⊗ x(0).
x_1(0) = x_2(0) = 0   ⇒   x(k): (0, 0), (5, 3), (8, 8), (13, 11), (16, 16), ...
x_1(0) = 1, x_2(0) = 0   ⇒   x(k): (1, 0), (5, 4), (9, 8), (13, 12), (17, 16), ...

Let A ∈ M_n and let A ⊗ v = λ ⊗ v with v ≢ −∞, where λ ⊗ v := (λ ⊗ v_i)_{i=1,...,n} = (λ + v_i)_{i=1,...,n}.
Then v is an eigenvector and λ is an eigenvalue.

If x(0) is an eigenvector with eigenvalue λ, then x(k) = λ^⊗k ⊗ x(0) (= kλ + x(0)).

Initial conditions that are eigenvectors ⇒ a regular timetable.
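To make the recurrence concrete, here is a minimal sketch (plain Python; the helper names are mine, not part of the lecture) that iterates x(k+1) = A ⊗ x(k) and checks that x(0) = (1, 0) is an eigenvector with eigenvalue 4.

```python
# Max-plus matrix-vector product: (A ⊗ x)_i = max_j (a_ij + x_j).
def mp_matvec(A, x):
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

A = [[2, 5],
     [3, 3]]

# Iterate x(k+1) = A ⊗ x(k) for both initial conditions above.
for x0 in ([0, 0], [1, 0]):
    x, orbit = x0, [tuple(x0)]
    for _ in range(4):
        x = mp_matvec(A, x)
        orbit.append(tuple(x))
    print(orbit)
# [(0, 0), (5, 3), (8, 8), (13, 11), (16, 16)]
# [(1, 0), (5, 4), (9, 8), (13, 12), (17, 16)]

# Eigenvector check: A ⊗ (1, 0) = (5, 4) = 4 ⊗ (1, 0), so λ = 4 and this
# timetable advances by exactly 4 hours at every step.
assert mp_matvec(A, [1, 0]) == [1 + 4, 0 + 4]
```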
Graphs and matrices

A directed graph G is a pair (V, E), where V is the set of vertices and E ⊆ V × V is the set of edges.
G is weighted if a weight function w : E → R is given.

A path from i to j in the weighted digraph G(A) of a matrix A is p = ((i_k, i_{k+1}), k = 1, ..., m), where each (i_k, i_{k+1}) is an edge, i_1 = i and i_{m+1} = j.
Its length is |p|_l = m and its weight is |p|_w = Σ_{k=1}^{m} a_{i_{k+1} i_k}.
A circuit is a closed path: i = j. It is elementary if i_k ≠ i_l for all k ≠ l.

Theorem. If A is indecomposable, then the eigenvalue is
   λ = max_γ |γ|_w / |γ|_l,
where γ runs over the elementary circuits of G(A).

Theorem. If A is indecomposable, then for every j
   λ = max_{i=1,...,n} min_{k=0,...,n−1} [ (A^⊗n)_{i,j} − (A^⊗k)_{i,j} ] / (n − k).
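A brute-force sketch of the circuit formula above (my own code, only sensible for small n): enumerate the elementary circuits of G(A) and take the maximal average weight.

```python
from itertools import permutations

def max_cycle_mean(A):
    """Maximal average weight of an elementary circuit of G(A);
    the edge i_k -> i_{k+1} carries the weight a_{i_{k+1}, i_k}."""
    n = len(A)
    best = float("-inf")
    for length in range(1, n + 1):
        for cycle in permutations(range(n), length):
            w = sum(A[cycle[(k + 1) % length]][cycle[k]] for k in range(length))
            best = max(best, w / length)
    return best

A = [[2, 5],
     [3, 3]]
print(max_cycle_mean(A))   # 4.0: the circuit 1 -> 2 -> 1 has weight 5 + 3 = 8
```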
A^+ = ⊕_{k=1}^{∞} A^⊗k

Theorem. If A is indecomposable, then an eigenvector is any i-th column of A_λ^+, where the vertex i lies on an elementary circuit of maximal average weight, A_λ = A − λJ (λ is subtracted from every entry of A), and λ is the eigenvalue.
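A sketch of this eigenvector recipe for the train example (assumptions of mine: A_λ is A with λ subtracted from every entry, and for an irreducible n × n matrix it is enough to sum the first n powers of A_λ).

```python
def mp_matmul(A, B):
    """Max-plus matrix product: (A ⊗ B)_ij = max_k (a_ik + b_kj)."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kleene_plus(A):
    """A^+ truncated at n terms: A ⊕ A^⊗2 ⊕ ... ⊕ A^⊗n."""
    n = len(A)
    power, result = A, [row[:] for row in A]
    for _ in range(n - 1):
        power = mp_matmul(power, A)
        result = [[max(r, p) for r, p in zip(rr, pr)]
                  for rr, pr in zip(result, power)]
    return result

A, lam = [[2, 5], [3, 3]], 4
A_lam = [[a - lam for a in row] for row in A]
print(kleene_plus(A_lam))   # [[0, 1], [-1, 0]]
# Both vertices lie on the critical circuit 1 -> 2 -> 1, so both columns are
# eigenvectors: A ⊗ (0, -1) = (4, 3) = 4 ⊗ (0, -1) and A ⊗ (1, 0) = (5, 4) = 4 ⊗ (1, 0).
```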
Our systems:

Problem 1:  A ⊗ x = b          Problem 2:  A ⊗ x = λ ⊗ x

General system:  A ⊗ x ⊕ c = B ⊗ x ⊕ d    ( A ⊗ x ⊕ c ≤ B ⊗ x ⊕ d )
To deal with linear systems we would like to develop tropical linear algebra:
1. LINEAR INDEPENDENCE.
2. RANK.
3. DETERMINANT.
Linear independence over semirings

Definition. A system of elements m_1, ..., m_k in a semimodule M over a semiring S is linearly dependent in the Gondran-Minoux sense if there exist two subsets I, J ⊆ K := {1, ..., k}, I ∩ J = ∅, I ∪ J = K, and scalars α_1, ..., α_k ∈ S, (α_1, ..., α_k) ≠ (0, ..., 0), such that
   Σ_{i ∈ I} α_i m_i = Σ_{j ∈ J} α_j m_j.
Theorem [Gondran, Minoux]. Any n + 1 vectors of size n are linearly dependent.

Minus: often a semimodule has no linearly independent generating set.
Example [Butkovič, Cuninghame-Green].

v_1 = (1, 0, −1)^t,  v_2 = (2, 0, −2)^t,  v_3 = (3, 0, −3)^t,  v_4 = (4, 0, −4)^t

are linearly dependent over R_max, since componentwise

max{ 1 − 1, 3 + 0 } = max{ 2 + 0, 4 − 1 }
max{ 0 − 1, 0 + 0 } = max{ 0 + 0, 0 − 1 }
max{ −1 − 1, −3 + 0 } = max{ −2 + 0, −4 − 1 }

V = ⟨ v_1, v_2, v_3, v_4 ⟩ contains no linearly independent generating set:
no 3 of these vectors generate V, and any 4 vectors from V are linearly dependent.
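A quick numeric check of the relation above (the partition I = {1, 3}, J = {2, 4} and the coefficients (−1, 0, 0, −1) are read off from the displayed identity; the code is my own illustration).

```python
vectors = {1: (1, 0, -1), 2: (2, 0, -2), 3: (3, 0, -3), 4: (4, 0, -4)}
alpha   = {1: -1, 2: 0, 3: 0, 4: -1}
I, J = {1, 3}, {2, 4}

def combine(indices):
    # Max-plus linear combination, taken componentwise: max_i (alpha_i + v_i).
    return tuple(max(alpha[i] + vectors[i][c] for i in indices) for c in range(3))

print(combine(I), combine(J))   # (3, 0, -2) (3, 0, -2): both sides agree
```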
Are there better variants of linear dependence?

Definition. A subset P of elements in a semimodule M is called weakly linearly dependent if there is an element of P that can be expressed as a linear combination of the other elements of P.

Plus: any finitely generated semimodule has a finite weakly linearly independent generating set.
Indeed, if {x_1, ..., x_k} is weakly linearly dependent, then x_i = Σ_{j ≠ i} λ_j x_j for some i, so {x_j | j ≠ i} still generates ⟨x_1, ..., x_k⟩, and so on.
Minus: there exist infinite systems of weakly linearly independent 3-vectors.

[Butkovič, Cuninghame-Green] The vectors (x_i, 0, −x_i)^t ∈ R_max^3, i = 1, 2, ..., m, are weakly linearly independent for any m and for pairwise different x_i.

Idea: because of the max, any linear combination disturbs either the 0 in the middle or the symmetry of x and −x.

Any 4 of these vectors are Gondran-Minoux linearly dependent!
weak linear dependence  =⇒  Gondran-Minoux linear dependence
weak linear dependence  ⇍   Gondran-Minoux linear dependence
Definition [Izhakian]. A system of elements m_1, ..., m_k, m_i = [m_i^1, ..., m_i^n]^t, i = 1, ..., k, in a semimodule M is strongly linearly dependent if there exist two series of subsets I_l, J_l ⊆ K := {1, ..., k}, I_l ∩ J_l = ∅, I_l ∪ J_l = K, l = 1, ..., n, and scalars α_1, ..., α_k ∈ S, (α_1, ..., α_k) ≠ (0, ..., 0), such that for every l
   Σ_{i ∈ I_l} α_i m_i^l = Σ_{j ∈ J_l} α_j m_j^l.
Gondran-Minoux linear dependence  =⇒  strong linear dependence
Gondran-Minoux linear dependence  ⇍   strong linear dependence:

(−1, 0, 0)^t,  (0, −1, 0)^t,  (0, 0, −1)^t  ∈ R_max^3

are strongly linearly dependent (with coefficients 0, 0, 0; note that in max-algebra 0 is not the neutral element for addition, −∞ is), but linearly independent in the Gondran-Minoux sense.
Since 0 ⊗ m = 0 + m = m, we have

x_1 = (−1, 0, 0)^t,  x_2 = (0, −1, 0)^t,  x_3 = (0, 0, −1)^t.

Consider I_1 = {1, 2}, J_1 = {3}. Then x_1^1 ⊕ x_2^1 = max{ −1, 0 } = 0 = x_3^1.
Similarly, for I_2 = I_3 = {1}, J_2 = J_3 = {2, 3}:
x_2^2 ⊕ x_3^2 = max{ −1, 0 } = 0 = x_1^2   and   x_2^3 ⊕ x_3^3 = max{ −1, 0 } = 0 = x_1^3.
Definition. The row rank r(A) of A ∈ M_{m,n}(S) is the minimal cardinality of a weakly linearly independent generating set of the linear span of the rows of A.

It is useless to consider analogs of this function for the other types of linear independence, since they either coincide with it or do not exist.
Example.

      ( 0 1 0 )   e_2                 ( 1 0 0 )
      ( 0 0 1 )   e_3                 ( 0 1 0 )
X  =  ( 1 1 0 )   e_1 + e_2   ,   Y = ( 0 0 1 )
      ( 1 0 1 )   e_1 + e_3           ( 1 1 0 )
                                      ( 1 0 1 )

Hence r(Y) = 3 < 4 = r(X).
Example.

      ( 0 1 0 )
      ( 0 0 1 )
X  =  ( 1 1 0 )
      ( 1 0 1 )

Then c(X) = 3 ≠ 4 = r(X), so the column rank c and the row rank r may differ.
Definition. A ∈ M_{m,n}(S) is of maximal row rank k (mr_GM(A) = k, mr_S(A) = k, or mr_w(A) = k) if A contains k linearly independent rows and any k + 1 of its rows are linearly dependent, for the corresponding type of linear dependence.

Lemma 1. r(A) ≤ mr_w(A).

Example. A = (3 − √7, √7 − 2)^t ∈ M_{2,1}(Z[√7]_+). Then mr_w(A) = 2, since
3 − √7 ≠ α(√7 − 2)  and  α(3 − √7) ≠ √7 − 2  in Z[√7]_+,
while r(A) = 1, since 1 = (3 − √7) + (√7 − 2) generates the row space of A.
Definition. A matrix A ∈ M_{m,n}(S) is of factor rank k, f(A) = k, if k is the smallest integer such that there exist B ∈ M_{m,k}(S) and C ∈ M_{k,n}(S) with A = BC; f(A) = 0 iff A = 0.

Factor rank over R_+ differs from the usual rank over R:

      ( 0 1 1 1 )
A  =  ( 1 0 1 1 )   ⇒   f_R(A) = 3 but f_{R_+}(A) = 4.
      ( 1 1 0 1 )
      ( 1 1 1 0 )
       ( −1   0   0  ...   0 )
       (  0  −1   0  ...   0 )
D_n =  (  0   0  −1  ...   0 )  ∈ M_n(R_max)
       (  :   :   :  ...   : )
       (  0   0   0  ...  −1 )

Theorem [Develin, Santos, Sturmfels]. f(D_n) is the smallest integer r such that n ≤ C(r, ⌊r/2⌋).

Example: f(D_6) = 4, f(D_36) = 8.
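A two-line sketch of the Develin-Santos-Sturmfels bound (the function name is mine):

```python
from math import comb

def factor_rank_Dn(n):
    """Smallest r with n <= C(r, floor(r/2))."""
    r = 1
    while comb(r, r // 2) < n:
        r += 1
    return r

print(factor_rank_Dn(6), factor_rank_Dn(36))   # 4 8, as in the example above
```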
Lemma. mr_S(A) ≤ mr_GM(A) ≤ f(A) ≤ r(A) ≤ mr_w(A).

Examples (with D_n as above):
2 = mr_S(D_3) < mr_GM(D_3) = 3;     3 = mr_GM(D_4) < f(D_4) = 4.
The sum of any 2 columns of D_n with coefficients 0 is (0, ..., 0)^t, hence any 4 columns are linearly dependent.
      ( 1  0  −1 )
A  =  ( 2  0  −2 )  ∈ M_{4,3}(R_max),     3 = f(A) < r(A) = 4.
      ( 3  0  −3 )
      ( 4  0  −4 )
Rank via determinant?

There is no reasonable definition of the classical determinant over a semiring. Instead it is usual to consider the following invariant.

Definition. The bi-determinant of A = [a_ij] ∈ M_n(S) is the pair (|A|^+, |A|^-), where
   |A|^+ = Σ_{σ ∈ A_n} a_{1σ(1)} · ... · a_{nσ(n)},
   |A|^- = Σ_{σ ∈ S_n \ A_n} a_{1σ(1)} · ... · a_{nσ(n)},
S_n is the permutation group of order n and A_n ⊂ S_n is the subgroup of even permutations.
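A small sketch of the bi-determinant in the special case S = R_max (so the product is + and the sum is max); splitting S_n by parity is all the computation needs. The function names are mine.

```python
from itertools import permutations

def parity(sigma):
    """0 for an even permutation, 1 for an odd one (count inversions)."""
    return sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
               if sigma[i] > sigma[j]) % 2

def bideterminant(A):
    n = len(A)
    plus = minus = float("-inf")
    for sigma in permutations(range(n)):
        term = sum(A[i][sigma[i]] for i in range(n))
        if parity(sigma) == 0:
            plus = max(plus, term)
        else:
            minus = max(minus, term)
    return plus, minus

print(bideterminant([[1, 2], [3, 4]]))              # (5, 5): |A|+ = |A|-
print(bideterminant([[-1, 0, 0],
                     [0, -1, 0],
                     [0, 0, -1]]))                  # (0, -1): |D_3|+ ≠ |D_3|-
```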
Some properties. Let A = [a_ij].

1. bid(A) = bid(A^t).
2. Multiplying the i-th row of A by α multiplies the bi-determinant by α:
   bid(a_11, ..., a_1n; ...; αa_i1, ..., αa_in; ...; a_n1, ..., a_nn) = α bid(A).
3. The same holds for columns: multiplying the j-th column by α gives α bid(A).
4. If A is invertible, then |A|^+ ≠ |A|^-. The converse does not hold:
   A = ( 1 2 ; 3 4 ) ∈ M_2(Q_+, max, ·).
Then |A|^+ = 4 ≠ 6 = |A|^-, but there is no B with AB = I.
Definition. A ∈ M_n(S) is semiinvertible if there exist A_1, A_2 ∈ M_n(S) such that
   I + A A_1 = A A_2   and   I + A_1 A = A_2 A.

5. If S is a semifield and |A|^+ ≠ |A|^-, then A is semiinvertible. The converse does not hold:
   A = ( 1 2 ; 2 4 ) ∈ M_2(R_+, max, ·).
Then A is semiinvertible with A_1 = A_2 = I, but |A|^+ = |A|^- = 4.
6. Multiplicativity:
   |AB|^+ = |A|^+ |B|^+ + |A|^- |B|^- + r,
   |AB|^- = |A|^+ |B|^- + |A|^- |B|^+ + r
for some r ∈ S.
What is an analog of the classical "determinantal" definition of rank?

Definition. The determinantal rank rk_det(A) is the biggest k such that there exists a k × k submatrix B of A with |B|^+ ≠ |B|^-.
Another way!

Definition. The permanent of A = [a_ij] ∈ M_n(S) is defined by
   per(A) = Σ_{σ ∈ S_n} a_{1σ(1)} · ... · a_{nσ(n)};
if S = R_max, this is max_{σ ∈ S_n} { a_{1σ(1)} + ... + a_{nσ(n)} },
where S_n is the permutation group on the set {1, ..., n}.
Definition. A matrix A ∈ M_n(R_max) is said to be tropically singular if the maximum in per(A) is attained at least twice.

Definition. The tropical rank trop(A) is the biggest k such that A has a tropically non-singular k × k submatrix.
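A direct sketch of this test over R_max (my own code). It confirms that D_3 defined above is tropically singular even though |D_3|^+ ≠ |D_3|^-, which is exactly why rk_det(D_3) > trop(D_3).

```python
from itertools import permutations

def is_tropically_singular(A):
    n = len(A)
    terms = [sum(A[i][s[i]] for i in range(n)) for s in permutations(range(n))]
    return terms.count(max(terms)) >= 2

D3 = [[-1, 0, 0],
      [0, -1, 0],
      [0, 0, -1]]
print(is_tropically_singular(D3))                    # True: the maximum 0 is attained twice
print(is_tropically_singular([[-1, 0], [0, -1]]))    # False: a non-singular 2x2 submatrix
                                                     # of D_3, so trop(D_3) = 2
```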
What about general semirings?

Definition. A matrix A ∈ M_n(S) is said to be tropically singular if there exists T ⊆ S_n such that
   Σ_{σ ∈ T} a_{1σ(1)} · ... · a_{nσ(n)} = Σ_{σ ∈ S_n \ T} a_{1σ(1)} · ... · a_{nσ(n)}.

Definition. The tropical rank trop(A) is the biggest k such that A has a tropically non-singular k × k submatrix.
More differences between the rank functions over R_max

      (  0  −∞   0   0  −∞  −∞ )
      (  0  −∞  −∞  −∞   0   0 )
A  =  ( −∞  −∞   0   0   0   0 )  ∈ M_{5,6}(B) or M_{5,6}(R_max).
      ( −∞   0  −∞   0  −∞   0 )
      ( −∞   0   0  −∞   0  −∞ )

Theorem [Ya. Shitov]. mr_GM(A) = 5, mc_GM(A) = rk_det(A) = 4, trop(A) = 3.
A is the minimal example distinguishing mr_GM from mc_GM, and mr_GM from rk_det.
Factor rank? Tropical rank? For D_n as above:

(1) rk_det(D_3) = 3 > 2 = trop(D_3): the maximum in per(D_3) is attained twice, but both times on even permutations;
(2) f(D_4) = 4 > 3 = rk_det(D_4).
Hierarchy of the rank functions (a line means that the lower function does not exceed the upper one):

   mr_w(A)             mc_w(A)
      |                   |
    r(A)                c(A)
        \               /
             f(A)
        /               \
  mr_GM(A)             mc_GM(A)
        \               /
           rk_det(A)
               |
  mr_S(A) = trop(A) = mc_S(A)
Theorem [Izhakian, Rowen]. If S = R_max, then mr_S(A) = mc_S(A) = trop(A).

[Akian, Gaubert, Guterman]: another proof, based on the game-theory approach.
Mean payoff games: G = (V, E) is a directed bipartite graph, and a_ij, b_kl are the weights of its arcs. Players Max and Min move a pawn; payments correspond to the moves.
Player Max is the maximizer; player Min is the minimizer.
States = vertices: I ∪ J, disjoint, I = {1, ..., m}, J = {1, ..., n}.
Moves:
   j• --(a_ij)--> i    : Min plays and receives a_ij from Max;
   k --(b_kl)--> •l    : Max plays and Min pays b_kl to Max.
Example (figure omitted: a bipartite game graph with weight matrices A and B).

Min starts at 1•. If Min plays 1• → 1, the play closes a cycle of value −2 + 1 = −1.
But if Min plays 1• → 2, then Max can answer 2 → •2 and Min has to play 2• → 3; the resulting cycle has value 0 + 5 = 5.
If Min starts at 2•, she has no choice.
So 1• is a winning state for Min, and 2• is not.
Natural assumptions:
∀ j ∈ J there exists i ∈ I such that a_ij ≠ −∞;
∀ i ∈ I there exists j ∈ J such that b_ij ≠ −∞.
Theorem [AGG]. Let A, B ∈ M_{m,n}(R_max). The tropical cone { x : A ⊗ x ≤ B ⊗ x } is non-trivial if and only if the mean payoff game with matrices A and B has an initial state winning for Max.
Corollary. Let A = (a_ij) ∈ M_{m,n}(R_max), m ≥ n. Then the columns of A are strongly independent iff A contains a tropically non-singular n × n submatrix.
How big can the difference between the rank functions be?
Method of tropical matrix patterns

Definition. The tropical pattern of A = (a_ij) ∈ M_{n,m}(R_max) is P(A) = (b_ij) ∈ M_{n,m}(B) defined by
   b_uv = 1   if a_uv = max_{i=1,...,n} { a_iv } > −∞,
   b_uv = 0   if either a_uv = −∞ or a_uv < max_{i=1,...,n} { a_iv }.
Examples.

A_1 = ( 1 2 ; 2 4 ) ∈ M_2(R_max)   ↦   P(A_1) = ( 0 0 ; 1 1 ) ∈ M_2(B)
A_2 = ( 2 3 ; 2 4 ) ∈ M_2(R_max)   ↦   P(A_2) = ( 1 0 ; 1 1 ) ∈ M_2(B)
A_3 = ( 1 2 ; 3 4 ) ∈ M_2(R_max)   ↦   P(A_3) = ( 0 0 ; 1 1 ) ∈ M_2(B)
A_4 = ( 3 4 ; 3 4 ) ∈ M_2(R_max)   ↦   P(A_4) = ( 1 1 ; 1 1 ) ∈ M_2(B)
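A sketch of the pattern map P (column-wise maxima; the function name is mine) that reproduces the four examples above.

```python
NEG_INF = float("-inf")

def pattern(A):
    """b_uv = 1 if a_uv attains the (finite) maximum of its column, else 0."""
    colmax = [max(col) for col in zip(*A)]
    return [[1 if (A[u][v] == colmax[v] != NEG_INF) else 0
             for v in range(len(A[0]))] for u in range(len(A))]

for A in ([[1, 2], [2, 4]], [[2, 3], [2, 4]], [[1, 2], [3, 4]], [[3, 4], [3, 4]]):
    print(pattern(A))
# [[0, 0], [1, 1]]
# [[1, 0], [1, 1]]
# [[0, 0], [1, 1]]
# [[1, 1], [1, 1]]
```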
Gondran-Minoux linear dependence of rows:

A_1 = ( 1 2 ; 2 4 )        rows l.in.   ↦   P(A_1) = ( 0 0 ; 1 1 )   rows l.d.
A_2 = ( 1+1 2+1 ; 2 4 )    rows l.in.   ↦   P(A_2) = ( 1 0 ; 1 1 )   rows l.in.
A_3 = ( 1 2 ; 3 4 )        rows l.d.    ↦   P(A_3) = ( 0 0 ; 1 1 )   rows l.d.
A_4 = ( 1+2 2+2 ; 3 4 )    rows l.d.    ↦   P(A_4) = ( 1 1 ; 1 1 )   rows l.d.
Theorem. Let A ∈ M_{n,m}(R_max) have rows a_1·, ..., a_n·. The rows of A are GM-independent over R_max iff there exist λ_1, ..., λ_n ∈ R_max such that the rows of P( λ_1 ⊗ a_1· ; ... ; λ_n ⊗ a_n· ) are GM-independent over B.
Easy part:

Lemma. Let A = (a_ij) ∈ M_{n,m}(R_max) and P(A) = (w_ij) ∈ M_{n,m}(B). If the rows of A are GM-dependent, then the rows of P(A) are GM-dependent.

Proof. Assume there exist I, J ⊂ {1, ..., n}, I ∩ J = ∅, I ∪ J = {1, ..., n}, and (λ_1, ..., λ_n) ≠ (−∞, ..., −∞) such that
   max_{i ∈ I} { λ_i + a_ik } = max_{j ∈ J} { λ_j + a_jk }   for all k.
Set μ_t = 1 if λ_t = max_{u=1,...,n} { λ_u } and μ_t = 0 if λ_t < max_{u=1,...,n} { λ_u }. Then
   max_{i ∈ I} { μ_i w_ik } = max_{j ∈ J} { μ_j w_jk }   for all k.
Hard part:

Theorem. Let A = (a_ij) ∈ M_{n,m}(R_max) with a_ij ≠ −∞ for all i, j. Assume the rows of A are GM-independent. Then there exists A' ∈ M_{n,m}(R_max), obtained from A by multiplying its rows by suitable positive numbers, such that the rows of P(A') are GM-independent.
Theorem. Let A ∈ M_{n,m}(R_max) and P(A) ∈ M_{n,m}(B). Then trop(A) ≥ trop(P(A)).

Examples. 1. trop(I_n) = trop(P(I_n)) = trop(I_n(B)) = n.