Authors Markus Bl{\"a}ser, Gorav Jindal, Anurag Pandey,
License CC-BY-3.0
Theory of Computing, Volume 14 (3), 2018, pp. 1–21
www.theoryofcomputing.org
Special Issue: CCC 2017
A Deterministic PTAS for the
Commutative Rank of Matrix Spaces
Markus Bläser Gorav Jindal Anurag Pandey
Received July 15, 2017; Revised December 22, 2017; Published April 15, 2018
Abstract: We consider the problem of computing the commutative rank of a given matrix
space B ⊆ F^{n×n}, that is, given a basis of B, find a matrix of maximum rank in B. This
problem is fundamental, as it generalizes several computational problems from algebra
and combinatorics. For instance, checking if the commutative rank of the space is n subsumes
problems such as testing perfect matching in graphs and identity testing of algebraic
branching programs. Finding an efficient deterministic algorithm for the commutative rank
is a major open problem, although there is a simple and efficient randomized algorithm for
it. Recently, there has been a series of results on computing the non-commutative rank of
matrix spaces in deterministic polynomial time. Since the non-commutative rank of any
matrix space is at most twice the commutative rank, one immediately gets a deterministic
1/2-approximation algorithm for the commutative rank. It is a natural question whether this
approximation ratio can be improved. In this paper, we answer this question affirmatively.
We present a deterministic polynomial-time approximation scheme (PTAS) for comput-
ing the commutative rank of a given matrix space. More specifically, given a matrix space
B ⊆ F^{n×n} and a rational number ε > 0, we give an algorithm that runs in time O(n^{4+3/ε}) and
A conference version of this paper appeared in the Proceedings of the 32nd Computational Complexity Conference (CCC),
2017 [1].
ACM Classification: F.2.2, I.1.2
AMS Classification: 68Q25, 68W25, 68W30
Key words and phrases: approximation algorithm, algebraic complexity, commutative rank, matrix
spaces, PTAS, Wong sequences
© 2018 Markus Bläser, Gorav Jindal, and Anurag Pandey
Licensed under a Creative Commons Attribution License (CC-BY) DOI: 10.4086/toc.2018.v014a003
computes a matrix A ∈ B such that the rank of A is at least (1 − ε) times the commutative
rank of B. The algorithm is the natural greedy algorithm: it repeatedly takes the first set of k
matrices that increases the rank of the matrix constructed so far, until it finds no further
improvement, where the size k of the set depends on ε.
1 Introduction
Given a matrix space B by a basis B_1, B_2, ..., B_m over some underlying field F, what is the maximum
rank of any matrix contained in B? This maximum rank is also called the commutative rank of the
matrix space B = ⟨B_1, B_2, ..., B_m⟩. This problem was introduced by Edmonds [4] and is now known as
Edmonds' Problem.
With a given basis B_1, B_2, ..., B_m, we can associate a matrix B with homogeneous linear forms as
entries by setting
$$B := \sum_{i=1}^{m} x_i B_i,$$
where x_1, ..., x_m are indeterminates. Every matrix in B is the homomorphic image of B under some
substitution that assigns values from F to the variables. The commutative rank of the space B is the same
as the rank of the matrix B over the field of rational functions F(x_1, x_2, ..., x_m), provided that F is large
enough. For this reason, this problem is also sometimes called the symbolic matrix rank problem.
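To make the symbolic matrix rank concrete, here is a small example of our own (not from the paper), computed with SymPy; the matrices B1 and B2 are illustrative choices:

```python
# Illustrative sketch (not from the paper): the commutative rank of
# <B1, B2> equals the rank of B = x1*B1 + x2*B2 over Q(x1, x2).
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
B1 = sp.Matrix([[1, 0], [0, 0]])
B2 = sp.Matrix([[0, 1], [0, 0]])

B = x1 * B1 + x2 * B2   # symbolic matrix with linear forms as entries

# SymPy computes the rank over the field of rational functions.
print(B.rank())  # 1: every matrix in <B1, B2> has a zero second row
```

Here the commutative rank is 1, matching the fact that every matrix in this particular space has rank at most 1.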
The commutative rank problem is a fundamental problem, which generalizes several algorithmic
problems in algebraic complexity theory and graph theory. The maximum matching problem in bipartite
and general graphs is a special case of the commutative rank problem [15]. A graph has a perfect matching
if and only if its Tutte matrix, which is a matrix with homogeneous linear forms as entries, has full rank.
Even the so-called linear matroid parity problem is a special case of the commutative rank problem [19].
Valiant [23] showed that a formula of size s can be written as a determinant of an (s + 2) × (s + 2)
matrix having affine linear forms as entries. (In fact, these entries are of a very special form, they are
either constants or variables.) It follows that checking if a given matrix space has full rank is as hard
as polynomial identity testing of formulas. (Note that we here have affine linear forms, but this is no
problem, since the symbolic rank of B1 + x2 B2 + · · · + xm Bm and x1 B1 + x2 B2 + · · · + xm Bm is the same.)
It is even known that polynomials computed by algebraic branching programs can be written as the
determinant of a polynomial-size matrix with affine linear forms as entries, see [24, 22, 18]. So the
problem of deciding whether a given matrix space is of full rank is as hard as polynomial identity testing
of arithmetic branching programs.
The rank of B is the size of the largest non-vanishing minor of B. Any minor of B is a polynomial of
degree at most n in the variables x1 , x2 , . . . , xm . Therefore, for almost all substitutions, the homomorphic
image of B will be a matrix of maximum rank. The Schwartz-Zippel lemma [25, 20] tells us that we can
choose such a substitution uniformly at random from a large enough set. In this way, one immediately gets a very
simple randomized algorithm for computing the commutative rank of B, provided that the order of the
field F is large enough. It is a major open problem in complexity theory to find an efficient deterministic
algorithm for computing the commutative rank.
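The simple randomized algorithm just described can be written in a few lines; the following is our illustrative rendering (the function name and example matrices are ours), substituting random values and relying on the Schwartz–Zippel guarantee:

```python
# Hedged sketch of the simple randomized algorithm: a random substitution
# attains the maximum rank with high probability (Schwartz-Zippel).
import random
import numpy as np

def randomized_rank(basis, trials=5, bound=10**6):
    """Estimate rank(<basis>) by evaluating at random points."""
    best = 0
    for _ in range(trials):
        coeffs = [random.randint(1, bound) for _ in basis]
        A = sum(c * B for c, B in zip(coeffs, basis))
        best = max(best, np.linalg.matrix_rank(A))
    return best

B1 = np.array([[1, 0], [0, 0]])
B2 = np.array([[0, 0], [0, 1]])
print(randomized_rank([B1, B2]))  # 2: B1 + B2 already has full rank
```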
We remark that if the underlying field F is not large enough, then this problem is hard. Buss et al. [2]
proved that the problem is NP-hard when the field F is of constant order.
1.1 Previous work
Since the general case of computing the commutative rank is as hard as identity testing for polynomials
given as algebraic branching programs, several special cases of matrix spaces have been considered.
Maybe the simplest case is when all the matrices Bi are of rank 1 [16, 12, 13]. This case subsumes
the matching problem in bipartite graphs, for instance. For this case, deterministic polynomial-time
algorithms are known [12, 13]. Recently, it was even demonstrated that computing the commutative
rank in this case is in quasi-NC, that is, there are (almost) efficient deterministic parallel algorithms
for this case [6, 10]. The next case is when the matrices Bi are skew-symmetric of rank 2. This case
is also of special interest as the linear matroid parity problem reduces to it [15]. Several deterministic
polynomial-time algorithms have been found for this case, see [21, 8, 17].
Analogous to the notion of commutative rank of a matrix space, there is also a notion of non-
commutative rank (see the next section for a precise definition). The commutative rank of a matrix space
is always a lower bound for the non-commutative rank. Matrix spaces for which the commutative rank
and non-commutative rank are equal are called compression spaces [5]. A deterministic polynomial-
time algorithm for checking if a compression space is of full rank (over fields of characteristic zero)
was discovered by Gurvits [11]. Gurvits’ algorithm was analyzed more carefully by Garg et al. [9]
to demonstrate that the algorithm is actually a deterministic polynomial-time algorithm to check if a
given matrix space has full non-commutative rank. Moreover, Garg et al. [9] devise a deterministic
polynomial-time algorithm which can compute the non-commutative rank exactly, even when it is not
maximum. This algorithm works for C and its subfields. Ivanyos et al. [14] extended these results to
arbitrary fields, using a totally different algorithm.
Fortin and Reutenauer [7] show that the non-commutative rank of any matrix space is at most twice
the commutative rank. So the algorithms in [9, 14] are deterministic polynomial-time algorithms that
can compute a 1/2-approximation of the commutative rank. Approximating the commutative rank of a
matrix space can be seen as a relaxation of the polynomial identity testing problem. Improving on the
1/2-approximation was formulated as an open problem by Garg et al. [9].
1.2 Our results
Our main result is an improvement on this approximation performance. We give a deterministic
polynomial-time approximation scheme (PTAS) for approximating the commutative rank. In other
words, given a basis B1 , . . . , Bm of a matrix space B of n × n-matrices and some rational number ε > 0,
our algorithm outputs a matrix A ∈ B whose rank is at least (1 − ε) · r, where r = max{rank(B) | B ∈ B},
provided that the order of the underlying field is larger than n. Our algorithm performs O(n^{4+3/ε})
many arithmetic operations; the size of each operand is linear in the sizes of the entries of the matrices
B_1, ..., B_m. So for fixed ε, the running time is polynomial in the input size.
Our algorithm is very simple: it is the natural greedy algorithm. Assume we have constructed a matrix
A so far. Then the algorithm tries all subsets B_{i_1}, ..., B_{i_k} of B_1, ..., B_m of size k, where k depends on ε, and
tests whether we can increase the rank of A by adding an appropriate linear combination of B_{i_1}, ..., B_{i_k}.
The main difficulty is to prove that when this algorithm stops, A is a (1 − ε)-approximation. Our
analysis uses so-called Wong sequences.
For polynomial identity testing, one has to test whether a given matrix space has full rank or rank
≤ n − 1. Therefore, our PTAS does not seem to help getting a polynomial-time algorithm for polynomial
identity testing.
1.3 Organization of the paper
Section 2 describes the basic setup of the problem. It describes the basic notation, definitions, and
related known lemmas and theorems. In Section 3, we first present the most basic variant of our greedy
algorithm, which computes a 1/2-approximation of the commutative rank in deterministic polynomial
time. It describes the basic ideas but is much easier to analyze. This motivates our final algorithm which
can compute arbitrary approximations to the commutative rank in deterministic polynomial time. To
extend this 1/2-approximation to arbitrary approximation, we introduce the notion of Wong sequences
and Wong index in Section 4. Section 5 studies the relation between the commutative rank and the
Wong index; there we prove that the higher the Wong index of a given matrix is, the closer its
rank is to the commutative rank of the given matrix space. This allows us to extend Algorithm 1 to
arbitrary approximation by considering larger subsets. The algorithm for arbitrary approximation of
the commutative rank and its proof of correctness and desired running time are given in Section 6. We
conclude by giving some examples in Section 7 which demonstrate that the analysis of our algorithm is
tight.
2 Preliminaries
Here, we introduce the basic definitions and notation needed to fully describe our algorithm. Henceforth
we assume that the order of the underlying field is greater than n, the size of the input matrices, i. e.,
|F| > n.
1. If V and W are vector spaces, then we use the notation V ≤ W to denote that V is a subspace of W .
2. We use Fn×n to denote the set of all n × n matrices over a field F.
3. Im(A) is used to denote the image of a matrix A ∈ Fn×n .
4. ker(A) is used to denote the kernel of a matrix A ∈ Fn×n .
5. dim(V ) is used to denote the dimension of a vector space V .
6. For any subset S of a vector space U, ⟨S⟩ denotes the linear span of S.
7. For A ∈ Fn×n and a vector space U ≤ Fn , the image of U under A is A(U) = AU = {Au | u ∈ U}.
8. The preimage of W ≤ F^n under A ∈ F^{n×n} is defined as A^{-1}(W) = {v ∈ F^n | Av ∈ W}.
9. The set {0, 1, 2, . . . , n} is denoted by [n].
10. We use the notation Ir to denote the r × r identity matrix.
Below are some of the basic definitions which we shall need.
Definition 2.1 (Matrix space). A vector space B ≤ Fn×n is called a matrix space.
We usually deal with matrix spaces whose generating set is given as the input. More precisely, we are
given a matrix space
$$B = \langle B_1, B_2, \ldots, B_m \rangle \leq F^{n \times n},$$
where we get the matrices B_1, B_2, ..., B_m as the input. Note that, without loss of generality, one can assume
that m ≤ n^2.
Definition 2.2 (Commutative rank). The maximum rank of any matrix in a matrix space B is called the
commutative rank of B. We write rank(B) to denote this quantity.
We shall use the same notation rank(A) for the usual rank of a matrix. Note that
the rank of a matrix A is the same as the commutative rank of the matrix space generated by A, that is,
rank(A) = rank(⟨A⟩).
Definition 2.3 (Product of a matrix space and a vector space). The image of a vector space U under a
matrix space A is the span of the images of U under every A ∈ A, that is,
$$\mathcal{A}(U) := \mathcal{A}U := \Big\langle \bigcup_{A \in \mathcal{A}} A(U) \Big\rangle.$$
We also call this image AU the product of the matrix space A and the vector space U.
Definition 2.4 (c-shrunk subspace). A vector space V ≤ F^n is a c-shrunk subspace of a matrix space B
if dim(BV) ≤ dim(V) − c.
Definition 2.5 (Non-commutative rank). Given a matrix space B ≤ F^{n×n}, let r be the maximum non-negative
integer such that there exists an r-shrunk subspace of the matrix space B. Then n − r is called the
non-commutative rank of B. We use the notation nc-rank(B) to denote this quantity.
From the definition above, it is not clear why we call this quantity non-commutative rank. It can be
shown that the quantity above equals the rank of the corresponding symbolic matrix when the variables
x1 , . . . , xm do not commute. For more natural and equivalent definitions as well as more background on
non-commutative rank, we refer the reader to [9, 7].
Lemma 2.6. For all fields F and for all matrix spaces B ≤ F^{n×n}, rank(B) ≤ nc-rank(B).
Proof. Let r = nc-rank(B). This means that there exists V ≤ F^n such that dim(BV) ≤ dim(V) − (n − r).
Therefore, for every single matrix B ∈ B, dim(BV) ≤ dim(BV) ≤ dim(V) − (n − r). Since F^n is the sum of V
and a direct complement of dimension n − dim(V), we get
$$\operatorname{rank}(B) \leq (n - \dim(V)) + \dim(BV) \leq n - (n - r) = r = \operatorname{nc\text{-}rank}(\mathcal{B}).$$
The above lemma states that the non-commutative rank is at least as large as the commutative rank.
But how large can it be compared to the commutative rank? The following theorem states that it is always
at most twice the commutative rank.
Theorem 2.7 ([7, 3]). For infinite fields F and all matrix spaces B ≤ F^{n×n}, we have
$$\operatorname{nc\text{-}rank}(\mathcal{B}) \leq 2 \cdot \operatorname{rank}(\mathcal{B}).$$
Derksen and Makam also gave a family of examples where the ratio of non-commutative rank and
commutative rank gets arbitrarily close to 2, hence showing that the bound above is sharp (see [3, Theorem
1.15]). To bound the commutative rank of a matrix space, we also need the following easy fact from
linear algebra.
Fact 2.8. Let M be a matrix of the following form, where L is an r × r block, B is r × (n − r), A is
(n − r) × r, and the zero block is (n − r) × (n − r):
$$M = \begin{pmatrix} L & B \\ A & 0 \end{pmatrix}. \qquad (2.1)$$
Also, let rank(A) = a and rank(B) = b. Then rank(M) ≤ r + min{a, b}.
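A quick numeric sanity check of Fact 2.8; the block matrices below are our own illustrative choices, not from the paper:

```python
# Check rank(M) <= r + min(rank(A), rank(B)) for M = [[L, B], [A, 0]].
import numpy as np

r = 2
L = np.array([[1, 2], [3, 4]])
Bblk = np.array([[1, 0], [0, 0]])        # rank(B) = 1
Ablk = np.array([[1, 1], [2, 2]])        # rank(A) = 1
M = np.block([[L, Bblk], [Ablk, np.zeros((2, 2))]])

a = np.linalg.matrix_rank(Ablk)
b = np.linalg.matrix_rank(Bblk)
print(np.linalg.matrix_rank(M), r + min(a, b))  # 3 3
```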
3 1/2-approximation algorithm for the commutative rank
Here we present the simplest variant of the greedy algorithm which also achieves a 1/2-approximation
for the commutative rank. This algorithm looks for the first matrix that increases the rank of the current
matrix and stops if it does not find such a matrix. Its analysis is much easier than the general case.
Algorithm 1 Greedy algorithm for 1/2-approximating the commutative rank
Input: A matrix space B = ⟨B_1, B_2, ..., B_m⟩ ≤ F^{n×n}, given as a list of matrices B_1, B_2, ..., B_m.
Output: A matrix A ∈ B such that rank(A) ≥ (1/2) · rank(B).
  Initialize A = 0 ∈ F^{n×n} to the zero matrix.
  while the rank is increasing do
    for each 1 ≤ i ≤ m do
      Check whether there exists a λ ∈ F such that rank(A + λ B_i) > rank(A).
      if such a λ exists then
        Update A = A + λ B_i.
  return A.
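Algorithm 1 can be sketched directly in code. The following is our illustrative rendering (function name ours) with exact arithmetic via SymPy over Q, trying n + 1 candidate values for λ, as the proof of Lemma 3.1 requires:

```python
# Hedged sketch of Algorithm 1 (greedy 1/2-approximation).
import sympy as sp

def greedy_half_approx(basis):
    n = basis[0].rows
    A = sp.zeros(n, n)
    improved = True
    while improved:                       # at most n rounds: rank grows each time
        improved = False
        for Bi in basis:
            for lam in range(1, n + 2):   # n + 1 candidate values of lambda
                if (A + lam * Bi).rank() > A.rank():
                    A = A + lam * Bi
                    improved = True
                    break
    return A

B1 = sp.Matrix([[1, 0], [0, 0]])
B2 = sp.Matrix([[0, 0], [0, 1]])
print(greedy_half_approx([B1, B2]).rank())  # 2
```

On this easy instance the greedy algorithm even attains the full commutative rank; in general, Lemma 3.1 only guarantees half of it.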
Lemma 3.1. If | F | > n, then Algorithm 1 runs in polynomial time and returns a matrix A ∈ B such that
rank(A) ≥ (1/2) · rank(B).
Proof. Let A be the matrix returned by Algorithm 1. Assume that A has rank r. We know that there exist
non-singular matrices P and Q such that
$$PAQ = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}, \qquad (3.1)$$
where I_r is the r × r identity matrix. Now consider the matrix space
$$P\mathcal{B}Q := \langle PB_1Q, PB_2Q, \ldots, PB_mQ \rangle.$$
This does not change anything with respect to the rank. Also, the way the algorithm works is not changed
by such a transformation. So for the analysis, we can replace B by PBQ. Consider a general matrix
A + x_1B_1 + x_2B_2 + ... + x_mB_m in B. We decompose it as
$$A + x_1 B_1 + x_2 B_2 + \cdots + x_m B_m = \begin{pmatrix} M_1 & M_2 \\ M_3 & M_4 \end{pmatrix}. \qquad (3.2)$$
Here M_1 is an r × r matrix, M_2 is an r × (n − r) matrix, M_3 is an (n − r) × r matrix, and M_4 is an
(n − r) × (n − r) matrix. M_1, M_2, M_3, and M_4 have (affine) linear forms in the variables
x = (x_1, x_2, ..., x_m) as their entries.
Now we claim that the bottom right block M_4 is the zero matrix. Assume otherwise, that is, the
(s,t)-entry of the above matrix is non-zero with s, t > r. Consider the (r + 1) × (r + 1) submatrix of
A + x_1B_1 + x_2B_2 + ... + x_mB_m obtained by adding the sth row (from M_3) and the tth column (from M_2) to
M_1. We shall denote this submatrix by C, which looks like
$$C = \begin{pmatrix}
1 + \ell_{11}(x) & \ell_{12}(x) & \cdots & \ell_{1r}(x) & a_1(x) \\
\ell_{21}(x) & 1 + \ell_{22}(x) & \cdots & \ell_{2r}(x) & a_2(x) \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\ell_{r1}(x) & \ell_{r2}(x) & \cdots & 1 + \ell_{rr}(x) & a_r(x) \\
b_1(x) & b_2(x) & \cdots & b_r(x) & c(x)
\end{pmatrix}. \qquad (3.3)$$
The \ell_{ij}, a_i, b_j, and c are homogeneous linear forms in x. By our choice, c(x) ≠ 0. It is not hard to see
that
$$\det(C) = c(x) + \text{terms of degree at least 2}. \qquad (3.4)$$
Thus there are λ ∈ F and i ∈ [m] such that det(C(α)) ≠ 0, where α is the assignment to the variables
x = (x_1, x_2, ..., x_m) obtained by setting x_k = 0 for k ≠ i and x_i = λ. Since the degree of det(C) is at
most n, by the Schwartz–Zippel lemma it follows that we have to check at most n + 1 values for λ.
These choices of i ∈ [m] and λ ∈ F would allow Algorithm 1 to find a matrix A of larger rank. Thus
Algorithm 1 would keep finding a matrix A of larger rank when the matrix M4 is non-zero. Hence it can
only stop when M4 is the zero matrix. If M4 is the zero matrix, then by using Fact 2.8 we know that
rank(B) ≤ 2r. Thus when Algorithm 1 stops, it outputs a matrix A such that rank(A) ≥ (1/2) · rank(B).
The running time is obviously polynomial since the while loop is executed at most n times and
we have to check at most n + 1 values for λ . The size of the numbers that occur in the rank check is
polynomial in the size of the entries of B1 , . . . , Bm .
4 Wong sequences and Wong index
In this section, we introduce the notion of Wong sequences which is crucially used in our proofs. For a
more comprehensive exposition, we refer the reader to [12].
Definition 4.1 (Second Wong sequence). Let B ≤ F^{n×n} be a matrix space and A ∈ B. The sequence of
subspaces (W_i)_{i∈[n]} of F^n with W_0 = {0} and W_{i+1} = B A^{-1}(W_i) is called the second Wong
sequence of (A, B).
In [12], first Wong sequences are also introduced. But for our purpose, just the notion of second Wong
sequence is enough. It is easy to see that W0 ≤ W1 ≤ W2 ≤ · · · ≤ Wn , see [12].
Next, we introduce the notion of pseudo-inverses. They are helpful in computing Wong sequences.
We remark that we need the notion of Wong sequences only for the analysis; our algorithm is completely
oblivious to Wong sequences.
Definition 4.2 (Pseudo-inverse). A non-singular matrix A′ ∈ F^{n×n} is called a pseudo-inverse of a linear
map A ∈ F^{n×n} if the restriction of A′ to Im(A) is the inverse of the restriction of A to a direct complement
of ker(A).
Unlike the usual inverse of a non-singular matrix, a pseudo-inverse of a matrix is not necessarily
unique. But it always exists, and if A is non-singular, then it is unique and coincides with the usual inverse.
The following lemma demonstrates the role of pseudo-inverses in computing Wong sequences. This
lemma and its proof are implicit in the proof of Lemma 10 in [12]. We prove it here for the sake of
completeness. The lemma essentially states that we can replace the preimage computation in the Wong
sequence by multiplication with a pseudo-inverse.
Lemma 4.3. Let F be any field, let B ≤ F^{n×n} be a matrix space, let A ∈ B, let A′ be a pseudo-inverse of A,
and let (W_i)_{i∈[n]} be the second Wong sequence of (A, B). Then for all 1 ≤ i ≤ n, we have
$$W_i = (\mathcal{B} A')^i(\ker(A A'))$$
as long as W_{i−1} ⊆ Im(A).
Proof. We prove the statement by induction on i. Since ker(AA′) = (A′)^{-1}(ker(A)), we get that
$$(\mathcal{B}A')(\ker(AA')) = \mathcal{B}A'(A')^{-1}(\ker(A)) = \mathcal{B}\ker(A) = W_1.$$
This proves the base case i = 1. To prove that W_i = (BA′)^i(ker(AA′)), we need to prove the two inclusions
(BA′)^i(ker(AA′)) ⊆ W_i and W_i ⊆ (BA′)^i(ker(AA′)). By the induction hypothesis, it suffices to prove
that (BA′)(W_{i−1}) ⊆ W_i and W_i ⊆ (BA′)(W_{i−1}).
First we prove the easy direction, that is, (BA′)(W_{i−1}) ⊆ W_i. Since W_{i−1} ⊆ Im(A), we have
A′(W_{i−1}) ⊆ A^{-1}(W_{i−1}). Thus (BA′)(W_{i−1}) ⊆ BA^{-1}(W_{i−1}) = W_i.
Now we prove that W_i ⊆ (BA′)(W_{i−1}). Since W_{i−1} ⊆ Im(A), we get that A^{-1}(W_{i−1}) = A′W_{i−1} + ker(A).
Thus W_i = BA^{-1}(W_{i−1}) ⊆ BA′W_{i−1} + B ker(A). We have B ker(A) = W_1 ⊆ W_{i−1}, which implies that
W_i ⊆ BA′W_{i−1} + W_{i−1}. Since A ∈ B and W_{i−1} = AA′W_{i−1}, we get that W_{i−1} ⊆ BA′W_{i−1}. This in turn
implies that W_i ⊆ BA′W_{i−1} + BA′W_{i−1} = (BA′)(W_{i−1}).
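Lemma 4.3 can be checked numerically. Below is our own illustrative sketch (not from the paper), computing one step of the second Wong sequence over Q for A = diag(I_r, 0), for which the identity matrix is a pseudo-inverse:

```python
# Hedged sketch: one step W1 = (B A')(ker(A A')) of the second Wong
# sequence, with A' = I_n a pseudo-inverse of A = diag(I_r, 0).
import sympy as sp

def wong_step(basis, Aprime, U):
    """Span of B * A' * u over all B in the space and columns u of U."""
    cols = []
    for B in basis:
        cols.extend((B * Aprime * U).columnspace())
    return sp.Matrix.hstack(*cols) if cols else sp.zeros(U.rows, 1)

n, r = 3, 2
A = sp.diag(sp.eye(r), sp.zeros(n - r, n - r))
Aprime = sp.eye(n)                               # pseudo-inverse of this A
B = sp.Matrix([[0, 0, 1], [0, 0, 0], [0, 1, 0]])

U = sp.Matrix.hstack(*(A * Aprime).nullspace())  # ker(A A') = ker(A)
W1 = wong_step([A, B], Aprime, U)
print(W1.rank())  # 1: here W1 = span{e1}, which lies inside Im(A)
```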
Given a matrix space B and a matrix A ∈ B, how can one check that A is of maximum rank in B, i. e.,
rank(A) = rank(B)? The following lemma in [12] gives a sufficient condition for A to be of maximum
rank in B.
Lemma 4.4 (Lemma 10 in [12]). Assume that |F| > n. Let A ∈ B ≤ F^{n×n}, and let A′ be a pseudo-inverse
of A. If for all i ∈ [n] we have
$$W_i = (\mathcal{B}A')^i(\ker(AA')) \subseteq \operatorname{Im}(A), \qquad (4.1)$$
then A is of maximum rank in B.
Thus, the above lemma shows that if A is not of maximum rank in B, then we have W_i ⊈ Im(A) for
some i ∈ [n]. For our purposes, we need to quantify when exactly this happens. Therefore we define:
Definition 4.5 (Wong index). Let B ≤ F^{n×n} be a matrix space, A ∈ B, and let (W_i)_{i∈[n]} be the second Wong
sequence of (A, B). Let k ∈ [n] be the maximum integer such that W_k ⊆ Im(A). Then k is called the Wong
index of (A, B). We shall denote it by w(A, B).
Using the above definition, another way to state Lemma 4.4 is that if the Wong index w(A, B) of
(A, B) is n, then A is of maximum rank in B. But can one say more? In the next section, we explore
this connection. We shall prove that the closer w(A, ⟨A, B⟩) is to n, the closer the rank of A is to the
commutative rank of ⟨A, B⟩.
The converse of Lemma 4.4 is not true in general. But the converse is true in the special case when B
is spanned by just two matrices. Fortunately, for our algorithm we only require the converse to be true in
this special case. The following fact from [12] formally states this idea.
Fact 4.6 (Restatement of Fact 11 in [12]). Assume that |F| > n and let A, B ∈ F^{n×n}. If A is of maximum
rank in ⟨A, B⟩, then the Wong index w(A, ⟨A, B⟩) of (A, ⟨A, B⟩) is n.
In order to extend the simple greedy algorithm for rank increment described in Section 3 for arbitrary
approximation of the commutative rank, we use the Wong index defined above. To achieve that, we need
the relation between the commutative rank and Wong index, which we establish in the next section.
5 Relation between rank and Wong index
We prove that the natural greedy strategy works, essentially by showing that one of the following happens:
1. The Wong index of the matrix obtained by the greedy algorithm at a given step is high enough, in
which case, we show that the matrix already has the desired rank. Lemma 5.4 formalizes this.
2. We can increase the rank by a greedy step. Lemma 5.5 formalizes this.
In the above spirit, we quantify the connection between the commutative rank and Wong index in this
section, using a series of lemmas. First we need the following lemma which demonstrates that the second
Wong sequence remains “almost” the same under invertible linear maps.
Lemma 5.1. Let F be any field, A ∈ B ≤ F^{n×n}, and let (W_i)_{i∈[n]} be the second Wong sequence of (A, B).
If P ∈ F^{n×n} and Q ∈ F^{n×n} are invertible matrices, then the second Wong sequence of (PAQ, PBQ) is
(PW_i)_{i∈[n]}. In particular, w(A, B) = w(PAQ, PBQ).
Proof. Consider the ith entry W′_i in the second Wong sequence of (PAQ, PBQ). We prove that W′_i = PW_i
for all i ∈ [n], by induction on i. The statement is trivially true for i = 0. By the induction hypothesis,
we have
$$W'_i = P\mathcal{B}Q(PAQ)^{-1}PW_{i-1} = P\mathcal{B}QQ^{-1}A^{-1}P^{-1}PW_{i-1} = P\mathcal{B}A^{-1}(W_{i-1}) = PW_i.$$
The following technical lemma relates the Wong index with a sequence of vanishing matrix products.
Lemma 5.2. Let F be any field and A, B ∈ F^{n×n}. Assume
$$A = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}$$
and express the matrix B in blocks as
$$B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, \qquad (5.1)$$
where B_{11} is r × r and B_{22} is (n − r) × (n − r). Let ℓ ≤ n be the maximum integer such that the first ℓ
elements of the sequence of matrices
$$B_{22},\ B_{21}B_{12},\ B_{21}B_{11}B_{12},\ \ldots,\ B_{21}B_{11}^{i}B_{12},\ \ldots \qquad (5.2)$$
are equal to the zero matrix. Then ℓ = w(A, ⟨A, B⟩).
Proof. Notice that I_n is a pseudo-inverse of A, so we can take A′ = I_n in the statement of Lemma 4.3.
Consider the second Wong sequence of (A, ⟨A, B⟩). By Lemma 4.3, its ith entry equals
(⟨A, B⟩A′)^i(ker(AA′)). Since A′ = I_n, this is (⟨A, B⟩)^i(ker(A)). Now ker(A) ≤ F^n consists exactly of the
vectors whose first r entries are zero, and Im(A) consists exactly of the vectors whose last n − r entries
are zero. Let k = w(A, ⟨A, B⟩); we want to show that k = ℓ.
First we show that ℓ ≥ k. For this, we need to show that
$$B_{22} = B_{21}B_{12} = B_{21}B_{11}B_{12} = \cdots = B_{21}B_{11}^{k-2}B_{12} = 0.$$
If k = 0, then we do not need to show anything. Otherwise k > 0. Consider the first entry W_1 of the second
Wong sequence of (A, ⟨A, B⟩). By Lemma 4.3, we know that W_1 = ⟨A, B⟩ ker(A). As ker(A) ≤ F^n consists
exactly of the vectors whose first r entries are zero, if B_{22} were not zero, then B ker(A) would contain a
vector with a non-zero entry in the last n − r coordinates. This would violate the assumption W_1 ⊆ Im(A).
Thus B_{22} = 0. Now we use induction on the length of the sequence
$$B_{22},\ B_{21}B_{12},\ B_{21}B_{11}B_{12},\ \ldots,\ B_{21}B_{11}^{i}B_{12}.$$
Our induction hypothesis assumes that for i ≥ 1,
$$B^i = \begin{pmatrix} B_{11}^i + \sum_{j=0}^{i-2} B_{11}^j B_{12} B_{21} B_{11}^{i-2-j} & B_{11}^{i-1}B_{12} \\ B_{21}B_{11}^{i-1} & 0 \end{pmatrix} \qquad (5.3)$$
and
$$B_{22} = B_{21}B_{12} = B_{21}B_{11}B_{12} = \cdots = B_{21}B_{11}^{i-2}B_{12} = 0.$$
We just proved the base case i = 1. Consider the following evaluation of the product B^{i+1} = B · B^i:
$$B^{i+1} = \begin{pmatrix} B_{11}^{i+1} + \sum_{j=0}^{i-2} B_{11}^{j+1} B_{12} B_{21} B_{11}^{i-2-j} + B_{12}B_{21}B_{11}^{i-1} & B_{11}^{i}B_{12} \\ B_{21}B_{11}^{i} + \sum_{j=0}^{i-2} B_{21}B_{11}^{j} B_{12} B_{21} B_{11}^{i-2-j} & B_{21}B_{11}^{i-1}B_{12} \end{pmatrix} \qquad (5.4)$$
Since i + 1 ≤ k, we must have B_{21}B_{11}^{i-1}B_{12} = 0, as otherwise we would have W_{i+1} ⊈ Im(A). Also,
we know by the induction hypothesis that
$$B_{22} = B_{21}B_{12} = B_{21}B_{11}B_{12} = \cdots = B_{21}B_{11}^{i-2}B_{12} = 0,$$
which implies that
$$B^{i+1} = B \cdot B^i = \begin{pmatrix} B_{11}^{i+1} + \sum_{j=0}^{i-1} B_{11}^j B_{12} B_{21} B_{11}^{i-1-j} & B_{11}^{i}B_{12} \\ B_{21}B_{11}^{i} & 0 \end{pmatrix} \qquad (5.5)$$
Now we show that k ≥ ℓ. Since k = w(A, ⟨A, B⟩), for all 1 ≤ i ≤ k, B^i can be written as
$$B^i = \begin{pmatrix} B_{11}^i + \sum_{j=0}^{i-2} B_{11}^j B_{12} B_{21} B_{11}^{i-2-j} & B_{11}^{i-1}B_{12} \\ B_{21}B_{11}^{i-1} & 0 \end{pmatrix} \qquad (5.6)$$
Note that ⟨A, B⟩^i is spanned by all matrices of the form M_1M_2 ⋯ M_i with M_j = A or M_j = B, 1 ≤ j ≤ i.
Since we have W_k ⊆ Im(A), we know that M_1M_2 ⋯ M_k ker(A) ⊆ Im(A) for any product M_1M_2 ⋯ M_k
as above. Now let us see what condition is needed for W_{k+1} ⊈ Im(A) to hold. Since A is the identity
on Im(A), only B^{k+1} can take ker(A) out of Im(A). By a similar argument as above, this happens only
when B_{21}B_{11}^{k-1}B_{12} ≠ 0; thus ℓ ≤ k.
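Lemma 5.2 can be tested on a small instance; the blocks below are our own illustrative example, and we simply count the leading zero matrices in the sequence (5.2):

```python
# Hedged check of the sequence in Lemma 5.2: count leading zero matrices
# in B22, B21*B12, B21*B11*B12, B21*B11^2*B12, ...
import numpy as np

B11 = np.array([[0, 1], [1, 0]])
B12 = np.array([[0], [1]])
B21 = np.array([[1, 0]])
B22 = np.zeros((1, 1))

seq = [B22, B21 @ B12]
for i in range(1, 3):
    seq.append(B21 @ np.linalg.matrix_power(B11, i) @ B12)

l = 0
for M in seq:           # leading zeros = Wong index w(A, <A, B>)
    if np.any(M):
        break
    l += 1
print(l)  # 2: B22 = 0 and B21*B12 = 0, but B21*B11*B12 != 0
```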
Now, having established the connection between Wong index and the sequence of vanishing matrix
products, we prove another technical lemma establishing the relation between the length of this sequence
and the commutative rank.
Lemma 5.3. Let F be any field, B ∈ F^{n×n}, and
$$B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, \qquad (5.7)$$
where B_{11} is r × r and B_{22} is (n − r) × (n − r). Consider the sequence of matrices
$$B_{22},\ B_{21}B_{12},\ B_{21}B_{11}B_{12},\ \ldots,\ B_{21}B_{11}^{j}B_{12},\ \ldots$$
If the first k ≥ 1 matrices in this sequence are equal to the zero matrix and B_{11} is non-singular, then
rank(B) ≤ r(1 + 1/k).
Proof. If rank(B_{12}) ≤ r/k, then we are done by Fact 2.8. So we can assume without loss of
generality that rank(B_{12}) > r/k. Now suppose that
$$\dim\langle \operatorname{Im}(B_{12}) \cup \operatorname{Im}(B_{11}B_{12}) \cup \cdots \cup \operatorname{Im}(B_{11}^{k-2}B_{12}) \rangle \geq (k - 1)\operatorname{rank}(B_{12}).$$
We note that Im(B_{12}), Im(B_{11}B_{12}), ..., Im(B_{11}^{k-2}B_{12}) are subspaces of ker(B_{21}). Using the
rank–nullity theorem, we get
$$\operatorname{rank}(B_{21}) < r - \frac{r(k-1)}{k} = \frac{r}{k}.$$
By Fact 2.8, we again get that rank(B) ≤ r(1 + 1/k).
In the above discussion, we assumed that
$$\dim\langle \operatorname{Im}(B_{12}) \cup \operatorname{Im}(B_{11}B_{12}) \cup \cdots \cup \operatorname{Im}(B_{11}^{k-2}B_{12}) \rangle \geq (k - 1)\operatorname{rank}(B_{12}).$$
What if this is not the case? We still want to use the same idea as above, but we need to ensure this
assumption. For this purpose, we use a series of elementary column operations on B to transform it into
a new matrix B^*, which satisfies the above assumption. Since the rank of a matrix is invariant under
elementary column operations, we obtain the desired rank bound. Now we show how to obtain this matrix
B^* using a series of elementary column operations on B. Whenever we apply these elementary column
operations on B, we shall maintain the invariant that
$$B_{22} = B_{21}B_{12} = B_{21}B_{11}B_{12} = \cdots = B_{21}B_{11}^{k-2}B_{12} = 0.$$
Suppose
$$\dim\langle \operatorname{Im}(B_{12}) \cup \operatorname{Im}(B_{11}B_{12}) \cup \cdots \cup \operatorname{Im}(B_{11}^{k-2}B_{12}) \rangle < (k - 1)\operatorname{rank}(B_{12}). \qquad (5.8)$$
Let ρ := rank(B_{12}). First, we can assume that B_{12} has exactly ρ non-zero columns. This can be achieved
by performing elementary column operations on the last n − r columns. This does not change the matrix
B_{22} = 0. Furthermore, these column operations correspond to replacing B_{12} by B_{12} · S for some invertible
(n − r) × (n − r) matrix S. Since
$$B_{22} = B_{21}B_{12} = B_{21}B_{11}B_{12} = \cdots = B_{21}B_{11}^{k-2}B_{12} = 0$$
implies
$$B_{21}B_{12}S = B_{21}B_{11}B_{12}S = \cdots = B_{21}B_{11}^{k-2}B_{12}S = 0,$$
we keep our invariant. We will call the new matrix again B_{12}.
Note that the image of a matrix is its column span. Since every matrix B_{11}^i B_{12} has exactly ρ non-zero
columns (B_{12} has ρ non-zero columns and B_{11} is non-singular), the assumption in equation (5.8) means
that there is a linear dependence between these columns. That means there are vectors y_0, y_1, ..., y_{k-2} ∈
F^{n-r}, not all equal to zero, such that
$$\sum_{i=0}^{k-2} B_{11}^i B_{12} \cdot y_i = 0.$$
Moreover, we can assume that these vectors only have non-zero entries in the places that correspond to
non-zero columns of B_{12}. First we show that we can assume y_0 ≠ 0. Suppose 0 ≤ j ≤ k − 2 is the least
integer such that y_j ≠ 0. So we left-multiply the equation
$$\sum_{i=0}^{k-2} B_{11}^i B_{12} \cdot y_i = 0$$
by (B_{11}^j)^{-1}, giving us
$$(B_{11}^j)^{-1} \sum_{i=0}^{k-2} B_{11}^i B_{12} \cdot y_i = \sum_{i=j}^{k-2} B_{11}^{i-j} B_{12} \cdot y_i = 0.$$
By renumbering the indices, this can be rewritten as
$$\sum_{i=0}^{k-2-j} B_{11}^i B_{12} \cdot y_i = 0.$$
Thus we can assume that y_0 ≠ 0. (The new sum runs only up to k − 2 − j; for the missing summands, we
choose the corresponding y_i to be zero.)
By writing
$$\sum_{i=0}^{k-2} B_{11}^i B_{12} \cdot y_i = 0 \quad\text{as}\quad B_{12} \cdot y_0 + B_{11} \cdot \sum_{i=1}^{k-2} B_{11}^{i-1} B_{12}\, y_i = 0,$$
we see that there is a linear dependence between the columns of B_{12} and B_{11}. Let ℓ ∈ [n − r] be such that
the ℓth entry of y_0 is non-zero. Therefore, we can make the ℓth column of B_{12} zero by adding a multiple of
$$\sum_{i=1}^{k-2} B_{11}^i B_{12} \cdot y_i$$
and possibly adding multiples of some other columns of B_{12} to it. This will decrease the rank of B_{12}
by 1.
We claim that our invariant is still fulfilled. First, we add
\[
B_{11} \sum_{i=1}^{k-2} B_{11}^{i-1} B_{12}\, y_i
\]
to the ℓth column of B12, and this also adds
\[
B_{21} \sum_{i=1}^{k-2} B_{11}^{i-1} B_{12}\, y_i
\]
to the ℓth column of B22. Since the invariant was fulfilled before the operation, B22 will stay zero. As seen before, column operations within the last n − r columns do not change B22. Thus, one of the n − r columns on the right-hand side (i. e., the side composed of B12 and B22) of B became zero. We can remove this column from our consideration. Let B′ and B′12 be the matrices obtained from B and B12 by removing this zero column. Since the columns of B′12 are a subset of the columns of B12,
\[
B_{21}B_{12} = B_{21}B_{11}B_{12} = \cdots = B_{21}B_{11}^{k-2}B_{12} = 0
\]
implies that
\[
B_{21}B'_{12} = B_{21}B_{11}B'_{12} = \cdots = B_{21}B_{11}^{k-2}B'_{12} = 0\,.
\]
Therefore, our invariant is still valid.
We repeat this process until inequality (5.8) no longer holds. Note that this certainly happens when rank(B12) = 0. At the end of this process we get a matrix B∗ such that
\[
\dim\langle \operatorname{Im}(B^{*}_{12}) \cup \operatorname{Im}(B_{11}B^{*}_{12}) \cup \cdots \cup \operatorname{Im}(B_{11}^{k-2}B^{*}_{12})\rangle \ge (k-1)\operatorname{rank}(B^{*}_{12})\,.
\]
Now the rank bound follows from the argument given above.
Finally, combining the above three lemmas, the following lemma gives the desired quantitative relation between the commutative rank and the Wong index, which is essential to the analysis of our algorithm. It shows that the higher the Wong index of the given matrix, the better its rank approximates the rank of the space.
Lemma 5.4. Let F be any field, let A ∈ B = ⟨B1, B2, ..., Bm⟩ ≤ F^{n×n}, and let B = ∑_{i=1}^m x_i B_i. Then
\[
\operatorname{rank}(B) = \operatorname{rank}(\langle A, B\rangle) \le \operatorname{rank}(A)\left(1 + \frac{1}{w(A, \langle A, B\rangle)}\right). \tag{5.9}
\]
Proof. Let rank(A) = r. We use C to denote the matrix space ⟨A, B⟩; note that this space is considered over the rational function field F(x_1, x_2, ..., x_m).
We know that there exist matrices P, Q ∈ F^{n×n} such that
\[
PAQ = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}. \tag{5.10}
\]
Notice that Im(PAQ) = P Im(A). Thus by Lemma 5.1, w(A, C) = w(PAQ, PCQ). Also, it is easy to see that rank(A) = rank(PAQ) and rank(C) = rank(PCQ). Hence it is enough to show that
\[
\operatorname{rank}(PCQ) \le \operatorname{rank}(PAQ)\left(1 + \frac{1}{w(PAQ, PCQ)}\right). \tag{5.11}
\]
For the sake of simplicity, we just write PCQ as C and PAQ as A. Thus we have
\[
A = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}. \tag{5.12}
\]
We write B as
\[
B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, \tag{5.13}
\]
where B11 has r rows and r columns and B22 has n − r rows and n − r columns. We get that B11 is non-singular over the field F(x_1, x_2, ..., x_m) since A ∈ B. Also, we get by Lemma 5.2 that the first w(A, C) entries of the sequence of matrices
\[
B_{22},\; B_{21}B_{12},\; B_{21}B_{11}B_{12},\; \ldots,\; B_{21}B_{11}^{i}B_{12},\; \ldots
\]
are zero matrices. Now we apply Lemma 5.3 to obtain
\[
\operatorname{rank}(B) = \operatorname{rank}(C) \le \operatorname{rank}(A)\left(1 + \frac{1}{w(A, C)}\right). \tag{5.14}
\]
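The quantity rank(B) in Lemma 5.4 is the rank of the generic matrix B = ∑ x_i B_i over F(x_1, ..., x_m), i. e., the commutative rank of the space. As noted in the abstract, this quantity can be computed with high probability by substituting random values for the x_i. The following stdlib-only Python sketch (our illustration, not part of the paper; the function names are ours) does exactly that over the rationals:

```python
import random
from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows) via exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def randomized_commutative_rank(basis, trials=5):
    """Substitute random values for x_1, ..., x_m in B = sum_i x_i B_i.
    By Schwartz-Zippel, the maximum rank observed equals rank(B) with
    high probability when the values come from a large enough range."""
    n = len(basis[0])
    best = 0
    for _ in range(trials):
        xs = [random.randint(1, 10 * n) for _ in basis]
        M = [[sum(x * B[i][j] for x, B in zip(xs, basis)) for j in range(n)]
             for i in range(n)]
        best = max(best, rank(M))
    return best
```

Derandomizing this one-line procedure is exactly the open problem the paper approximates.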
Lemma 5.5. If |F| > n, A ∈ B = ⟨B1, B2, ..., Bm⟩ ≤ F^{n×n}, B = ∑_{i=1}^m x_i B_i, and w(A, ⟨A, B⟩) < k for some k ∈ [n], then there exist 1 ≤ i_1, i_2, ..., i_k ≤ m and λ_1, λ_2, ..., λ_k ∈ F such that w(A, ⟨A, C⟩) < k, where C = λ_1 B_{i_1} + λ_2 B_{i_2} + ⋯ + λ_k B_{i_k}.
Proof. Let rank(A) = r. We know that there exist matrices P, Q ∈ F^{n×n} such that
\[
PAQ = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}. \tag{5.15}
\]
Let A′ = PAQ, let B′ = PBQ be the transformed matrix space, and let B′ = ∑_{i=1}^m x_i P B_i Q. We write B′ as
\[
B' = \begin{pmatrix} B'_{11} & B'_{12} \\ B'_{21} & B'_{22} \end{pmatrix}, \tag{5.16}
\]
where B′11 has r rows and r columns and B′22 has n − r rows and n − r columns.
By using Lemma 5.1, we know that w(A, ⟨A, B⟩) = w(A′, ⟨A′, B′⟩) < k. By using Lemma 5.2, we get that there exists t ≤ k such that B′_{21}(B′_{11})^{t−2}B′_{12} ≠ 0 and
\[
(B')^{t} = \begin{pmatrix} B''_{11} & B''_{12} \\ B''_{21} & B'_{21}(B'_{11})^{t-2}B'_{12} \end{pmatrix} \tag{5.17}
\]
for some matrices B″11, B″12, B″21, where the bottom-right block again has n − r rows and columns. Since the entries of the matrix B′_{21}(B′_{11})^{t−2}B′_{12} are polynomials in the variables x_1, x_2, ..., x_m of degree at most k, there exists an assignment of field constants to these variables, giving non-zero values to at most k of them, such that B′_{21}(B′_{11})^{t−2}B′_{12} evaluates to a non-zero matrix. By using Lemma 5.2 again, this assignment gives us a matrix C′ ∈ B′ such that w(A′, ⟨A′, C′⟩) < k. By using Lemma 5.1, the same assignment of the variables gives us a matrix C ∈ B such that w(A, ⟨A, C⟩) < k.
6 Final algorithm
Suppose we have a matrix space B = ⟨B1, B2, ..., Bm⟩ ≤ F^{n×n}, B = ∑_{i=1}^m x_i B_i, and a matrix A ∈ B. Our goal is to find a matrix D in B whose rank is "close" to the commutative rank of B. If the Wong index w(A, ⟨A, B⟩) of A in ⟨A, B⟩ is "large," then we know by Lemma 5.4 that the rank of A is "close" to the commutative rank of B, which is equal to the commutative rank of ⟨A, B⟩. What if this Wong index w(A, ⟨A, B⟩) is "small"? Then we know by Lemma 5.5 that by trying out a small number (namely, m^{w(A,⟨A,B⟩)+1}) of combinations of the B_i, we can find a matrix C ∈ B such that the Wong index w(A, ⟨A, C⟩) of A in ⟨A, C⟩ is also "small." Using Fact 4.6, we obtain that the rank of A is not maximal in ⟨A, C⟩. Thus there exists λ ∈ F such that rank(A + λC) > rank(A), and we can find this λ quite efficiently. Also, A + λC ∈ B. Thus we can efficiently find a matrix of larger rank if we are given a matrix of "small" Wong index. This idea is formalized in Algorithm 2 below.
Algorithm 2 Greedy algorithm for (1 − ε)-approximating the commutative rank
Input: A matrix space B = ⟨B1, B2, ..., Bm⟩ ≤ F^{n×n}, given as a list of basis matrices B1, B2, ..., Bm. An approximation parameter 0 < ε < 1.
Output: A matrix A ∈ B such that rank(A) ≥ (1 − ε) · rank(B).

Initialize A = 0 ∈ F^{n×n} to the zero matrix.
Assign ℓ = ⌈1/ε − 1⌉.
while the rank is increasing do
    for each ℓ-element subset {i_1, i_2, ..., i_ℓ} of [m] ∪ {0} do
        /* This means we try all combinations of matrices B_{i_1}, B_{i_2}, ..., B_{i_ℓ}; here B_0 denotes the zero matrix */
        Check if there exist λ_1, λ_2, ..., λ_ℓ ∈ F such that rank(A + λ_1 B_{i_1} + λ_2 B_{i_2} + ⋯ + λ_ℓ B_{i_ℓ}) > rank(A).
        if rank(A + λ_1 B_{i_1} + λ_2 B_{i_2} + ⋯ + λ_ℓ B_{i_ℓ}) > rank(A) then
            Update A = A + λ_1 B_{i_1} + λ_2 B_{i_2} + ⋯ + λ_ℓ B_{i_ℓ}.
return A.
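The greedy loop of Algorithm 2 can be sketched in Python over exact rationals. This is an illustrative sketch under our own conventions, not the authors' implementation: the helper `rank` (Gaussian elimination over `Fraction`) and the enumeration of at most n + 1 candidate values per λ follow the Schwartz–Zippel argument in the proof of Theorem 6.1 below.

```python
import math
from fractions import Fraction
from itertools import combinations, product

def rank(M):
    """Rank of a matrix (list of rows of Fractions/ints) via Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def greedy_rank(basis, eps):
    """Sketch of Algorithm 2: returns A in the span of `basis` with
    rank(A) >= (1 - eps) * (commutative rank of the space)."""
    n = len(basis[0])
    ell = math.ceil(Fraction(1) / Fraction(eps) - 1)   # ell = ceil(1/eps - 1)
    A = [[Fraction(0)] * n for _ in range(n)]
    improved = True
    while improved:                      # outer loop: while the rank increases
        improved = False
        r = rank(A)
        # try every combination of ell basis matrices (fewer if m < ell) ...
        for idx in combinations(range(len(basis)), min(ell, len(basis))):
            # ... and at most (n+1)^ell coefficient tuples (Schwartz-Zippel)
            for lams in product(range(n + 1), repeat=len(idx)):
                C = [row[:] for row in A]
                for lam, i in zip(lams, idx):
                    C = [[c + lam * b for c, b in zip(rc, rb)]
                         for rc, rb in zip(C, basis[i])]
                if rank(C) > r:
                    A, improved = C, True
                    break
            if improved:
                break
    return A
```

For instance, with the basis {E_{11}, E_{22}} ⊆ F^{2×2} and ε = 1/2 (so ℓ = 1), the sketch reaches the full rank 2. Passing ε as a `Fraction` avoids floating-point trouble when computing ℓ.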
The following theorem proves the correctness of Algorithm 2. Let s be an upper bound on the bit size of the entries of B1, ..., Bm.
Theorem 6.1. Assume that |F| > n. Algorithm 2 runs in time O((mn)^{1/ε} · M(n, s + log n) · n) and returns a matrix A ∈ B such that rank(A) ≥ (1 − ε) · rank(B), where M(n, t) is the time required to compute the rank of an n × n matrix with entries of bit size at most t.
Proof. Let B = ∑_{i=1}^m x_i B_i and let A be the rank-r matrix returned by Algorithm 2. Let k be the Wong index w(A, ⟨A, B⟩). By Lemma 5.4, we know that rank(B) ≤ r(1 + 1/k). Thus
\[
r \ge \left(1 - \frac{1}{k+1}\right)\operatorname{rank}(B)\,.
\]
If ε ≥ 1/(k + 1), then we are done. Otherwise we have ε < 1/(k + 1), i. e., k < 1/ε − 1. Since ℓ = ⌈1/ε − 1⌉, we also have w(A, ⟨A, B⟩) < ℓ. By using Lemma 5.5, we get that there exist 1 ≤ i_1, i_2, ..., i_ℓ ≤
m and λ_1, λ_2, ..., λ_ℓ ∈ F such that w(A, ⟨A, C⟩) < ℓ, where C = λ_1 B_{i_1} + λ_2 B_{i_2} + ⋯ + λ_ℓ B_{i_ℓ}. By using Fact 4.6, we get that A is not of maximum rank in ⟨A, C⟩. Thus there exists λ ∈ F such that rank(A + λC) > rank(A), and we shall detect this in Algorithm 2 since we try all possible choices of i_1, i_2, ..., i_ℓ.
The desired running time can be proved easily. The outer while loop runs at most n times, so the total running time is at most n times the running time of one iteration. One iteration of the outer loop has
\[
\binom{m+1}{\ell} = O\!\left(m^{1/\varepsilon}\right)
\]
iterations of the inner for loop (one per ℓ-element subset of [m] ∪ {0}). By using the Schwartz–Zippel Lemma [25, 20], one iteration of the inner for loop needs to try at most (n + 1)^ℓ = O(n^{1/ε}) possible values of λ_1, λ_2, ..., λ_ℓ ∈ F, and for each of them we perform two rank computations. The stated running time follows.
Remark 6.2. Algorithm 2 runs in time O((mn)^{1/ε} · n · M(n)) in the algebraic RAM model. Here M(n) is the time required to compute the rank of an n × n matrix in the algebraic RAM model. It is known that M(n) = O(n^ω), with ω being the exponent of matrix multiplication. Since one can assume that m ≤ n², Algorithm 2 runs in time O(n^{3/ε+ω+1}) in the algebraic RAM model.
Together with the trivial fact that ω ≤ 3, the above remark gives the running time O(n^{4+3/ε}) stated in the abstract.
Remark 6.3. With a more refined analysis, it can be seen that Algorithm 2 uses
\[
O\!\left((mn)^{1/\varepsilon} \cdot n \cdot M(n, s + \log n)\right)
\]
bit operations if the entries of the input matrices B1, B2, ..., Bm have bit size at most s. Here M(n, t) is the bit complexity of computing the rank of a matrix whose entries have bit size at most t. The additional log n in the bit size comes from the fact that the entries of the final matrix A are larger than the entries of the B_i by a factor polynomial in n, due to the update steps.
7 Tight examples
We conclude by giving some tight examples, which show that the analysis of the approximation performance of the greedy approximation scheme cannot be improved. Consider the following matrix space of n × n-matrices:
\[
\begin{pmatrix}
* & \cdots & 0 & * & \cdots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\
0 & \cdots & * & 0 & \cdots & *\\
0 & \cdots & 0 & * & \cdots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\
0 & \cdots & 0 & 0 & \cdots & *
\end{pmatrix} \tag{7.1}
\]
Each block has size n/2 × n/2. This space consists of all matrices obtained by substituting arbitrary values for the ∗'s, and its basis consists of all matrices in which exactly one ∗ is replaced by 1 and all others are set to 0. Assume that ε = 1/2, that is, the greedy algorithm only looks at sets of size ℓ = 1. Furthermore, assume that the matrix A constructed so far is
\[
A = \begin{pmatrix} 0 & I_{n/2} \\ 0 & 0 \end{pmatrix}. \tag{7.2}
\]
Any single basis matrix cannot improve the rank of A, since either its non-zero column is contained in the
column span of A or its non-zero row is contained in the row span of A. On the other hand, the matrix
space contains a matrix of full rank n, namely, the identity matrix.
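The ℓ = 1 obstruction can be checked numerically. The following stdlib-only Python sketch (our illustration; all names are ours) builds the space (7.1) for n = 6, verifies that the matrix A from (7.2) has rank n/2, that no single basis matrix with any coefficient raises that rank, and that the space nevertheless contains the full-rank identity:

```python
from fractions import Fraction

def rank(M):
    """Rank via exact Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

n = 6
h = n // 2
# Basis of the space (7.1): the diagonal positions of the three *-blocks.
positions = ([(i, i) for i in range(h)]             # top-left diagonal
             + [(i, h + i) for i in range(h)]       # top-right diagonal
             + [(h + i, h + i) for i in range(h)])  # bottom-right diagonal

def elem(i, j):
    E = [[0] * n for _ in range(n)]
    E[i][j] = 1
    return E

# A from (7.2): identity in the top-right block, rank n/2.
A = [[1 if (i < h and j == h + i) else 0 for j in range(n)] for i in range(n)]
assert rank(A) == h

# No single basis matrix improves the rank, for any coefficient lam:
for (i, j) in positions:
    for lam in range(1, n + 2):       # n + 1 non-zero values suffice
        E = elem(i, j)
        C = [[A[a][b] + lam * E[a][b] for b in range(n)] for a in range(n)]
        assert rank(C) <= h

# Yet the space contains a full-rank matrix: the identity.
I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
assert all((i, i) in positions for i in range(n))
assert rank(I) == n
```

The check mirrors the column-span/row-span argument above: adding any multiple of a single basis matrix never leaves the column span (or row span) of A.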
The next space, for the case ℓ = 2, looks like this:
\[
\begin{pmatrix}
* & \cdots & 0 & * & \cdots & 0 & 0 & \cdots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\
0 & \cdots & * & 0 & \cdots & * & 0 & \cdots & 0\\
0 & \cdots & 0 & * & \cdots & 0 & * & \cdots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\
0 & \cdots & 0 & 0 & \cdots & * & 0 & \cdots & *\\
0 & \cdots & 0 & 0 & \cdots & 0 & * & \cdots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\
0 & \cdots & 0 & 0 & \cdots & 0 & 0 & \cdots & *
\end{pmatrix} \tag{7.3}
\]
with blocks of size n/3 × n/3, and the corresponding matrix A is
\[
A = \begin{pmatrix} 0 & I_{2n/3} \\ 0 & 0 \end{pmatrix}. \tag{7.4}
\]
By an argument similar to the one above, it is easy to see that we need at least three basis matrices to improve the rank of A, so the algorithm gets stuck with a 2/3-approximation. The above scheme generalizes to arbitrary values of ℓ in the obvious way.
Acknowledgments. We would like to thank Ankit Garg and Rafael Oliveira for pointing out some
inaccuracies in an earlier draft of the paper. We would also like to thank the anonymous reviewers for
their helpful and constructive comments that have helped us to improve the final version of the paper.
References
[1] Markus Bläser, Gorav Jindal, and Anurag Pandey: Greedy strikes again: A deterministic PTAS for commutative rank of matrix spaces. In Proc. 32nd Computational Complexity Conf. (CCC'17), volume 79 of LIPIcs, pp. 33:1–33:16. DROPS, 2017. [doi:10.4230/LIPIcs.CCC.2017.33]

[2] Jonathan F. Buss, Gudmund Skovbjerg Frandsen, and Jeffrey O. Shallit: The computational complexity of some problems of linear algebra. J. Comput. System Sci., 58(3):572–596, 1999. Preliminary version in STACS'97. [doi:10.1006/jcss.1998.1608]

[3] Harm Derksen and Visu Makam: On non-commutative rank and tensor rank. Linear and Multilinear Algebra, pp. 1–16, 2017. [doi:10.1080/03081087.2017.1337058, arXiv:1606.06701]

[4] Jack R. Edmonds: Systems of distinct representatives and linear algebra. J. Res. Nat. Bur. Standards Sect. B, 71(4):241–245, 1967. [doi:10.6028/jres.071B.033]

[5] David Eisenbud and Joe Harris: Vector spaces of matrices of low rank. Advances in Mathematics, 70(2):135–155, 1988. [doi:10.1016/0001-8708(88)90054-0]

[6] Stephen A. Fenner, Rohit Gurjar, and Thomas Thierauf: Bipartite perfect matching is in quasi-NC. In Proc. 48th STOC, pp. 754–763. ACM Press, 2016. [doi:10.1145/2897518.2897564, arXiv:1601.06319]

[7] Marc Fortin and Christophe Reutenauer: Commutative/noncommutative rank of linear matrices and subspaces of matrices of low rank. Séminaire Lotharingien de Combinatoire, 52:B52f, 2004. Available from EuDML.

[8] Harold N. Gabow and Matthias F. M. Stallmann: An augmenting path algorithm for linear matroid parity. Combinatorica, 6(2):123–150, 1986. Preliminary version in FOCS'84. [doi:10.1007/BF02579169]

[9] Ankit Garg, Leonid Gurvits, Rafael Mendes de Oliveira, and Avi Wigderson: A deterministic polynomial time algorithm for non-commutative rational identity testing. In Proc. 57th FOCS, pp. 109–117. IEEE Comp. Soc. Press, 2016. [doi:10.1109/FOCS.2016.95, arXiv:1511.03730]

[10] Rohit Gurjar and Thomas Thierauf: Linear matroid intersection is in quasi-NC. In Proc. 49th STOC, pp. 821–830. ACM Press, 2017. [doi:10.1145/3055399.3055440]

[11] Leonid Gurvits: Classical complexity and quantum entanglement. J. Comput. System Sci., 69(3):448–484, 2004. Preliminary version in STOC'03. [doi:10.1016/j.jcss.2004.06.003]

[12] Gábor Ivanyos, Marek Karpinski, Youming Qiao, and Miklos Santha: Generalized Wong sequences and their applications to Edmonds' problems. J. Comput. System Sci., 81(7):1373–1386, 2015. Preliminary version in STACS'14. [doi:10.1016/j.jcss.2015.04.006, arXiv:1307.6429]

[13] Gábor Ivanyos, Marek Karpinski, and Nitin Saxena: Deterministic polynomial time algorithms for matrix completion problems. SIAM J. Comput., 39(8):3736–3751, 2010. [doi:10.1137/090781231, arXiv:0907.0774]
[14] Gábor Ivanyos, Youming Qiao, and K. Venkata Subrahmanyam: Constructive non-commutative rank computation in deterministic polynomial time, 2015–2018. [arXiv:1512.03531]

[15] László Lovász: On determinants, matchings, and random algorithms. In Proc. 2nd Internat. Conf. Fundamentals of Computation Theory (FCT'79), pp. 565–574, 1979.

[16] László Lovász: Singular spaces of matrices and their application in combinatorics. Boletim da Sociedade Brasileira de Matemática – Bulletin/Brazilian Mathematical Society, 20(1):87–99, 1989. [doi:10.1007/BF02585470]

[17] László Lovász and Michael D. Plummer: Matching Theory. Volume 367. AMS Chelsea Publ., 2009.

[18] Meena Mahajan and V. Vinay: Determinant: Combinatorics, algorithms, and complexity. Chicago J. Theor. Comput. Sci., (5), 1997. [doi:10.4086/cjtcs.1997.005]

[19] James B. Orlin: A fast, simpler algorithm for the matroid parity problem. In Proc. 13th Internat. Conf. on Integer Programming and Combinatorial Optimization (IPCO'08), volume 5035 of Lecture Notes in Comp. Sci., pp. 240–258. Springer, 2008. [doi:10.1007/978-3-540-68891-4_17]

[20] Jacob T. Schwartz: Fast probabilistic algorithms for verification of polynomial identities. J. ACM, 27(4):701–717, 1980. Preliminary version in EUROSAM'79. [doi:10.1145/322217.322225]

[21] F. Bruce Shepherd and Adrian Vetta: The demand-matching problem. Math. Oper. Res., 32(3):563–578, 2007. Preliminary version in IPCO'02. [doi:10.1287/moor.1070.0254]

[22] Seinosuke Toda: Counting problems computationally equivalent to computing the determinant. Technical Report CSIM 91-07, 1991.

[23] Leslie G. Valiant: Completeness classes in algebra. In Proc. 11th STOC, pp. 249–261. ACM Press, 1979. [doi:10.1145/800135.804419]

[24] V. Vinay: Counting auxiliary pushdown automata and semi-unbounded arithmetic circuits. In Proc. 6th IEEE Conf. on Structure in Complexity Theory (SCT'91), pp. 270–284. IEEE Comp. Soc. Press, 1991. [doi:10.1109/SCT.1991.160269]

[25] Richard Zippel: Probabilistic algorithms for sparse polynomials. In Proc. Symbolic and Algebraic Comput. (EUROSAM'79), volume 72 of Lecture Notes in Comp. Sci., pp. 216–226. Springer, 1979. [doi:10.1007/3-540-09519-5_73]
AUTHORS
Markus Bläser
Department of Computer Science, Saarland University
Saarland Informatics Campus, Saarbrücken, Germany
mblaeser cs uni-saarland de
http://www-cc.cs.uni-saarland.de/mblaeser/
Gorav Jindal
Max-Planck-Institut für Informatik & Saarbrücken Graduate School of Computer Science
Saarland Informatics Campus, Saarbrücken, Germany
gjindal mpi-inf mpg de
http://people.mpi-inf.mpg.de/~gjindal/
Anurag Pandey
Max-Planck-Institut für Informatik & Saarbrücken Graduate School of Computer Science
Saarland Informatics Campus, Saarbrücken, Germany
apandey mpi-inf mpg de
http://people.mpi-inf.mpg.de/~apandey/
ABOUT THE AUTHORS
Markus Bläser is a full professor of computer science at Saarland University, Saarbrücken, Germany. Before joining Saarbrücken, he was an assistant professor at ETH Zurich. He obtained his Ph. D. from the University of Bonn under the supervision of Arnold Schönhage and his Habilitation from the University of Lübeck under the supervision of Rüdiger Reischuk.
Gorav Jindal did his undergraduate studies at the Computer Science department of IIT Delhi. He did his master's studies at the Computer Science department of Saarland University. Since 2014, he has been a Ph. D. student at the Max-Planck-Institut für Informatik under the supervision of Markus Bläser. Gorav's research interests lie in algebraic complexity theory, computer algebra and lower bounds in general.
Anurag Pandey did his bachelor's and master's studies at the Electrical Engineering department of IIT Kanpur. His master's thesis was supervised by Nitin Saxena. Since 2015, he has been a Ph. D. student at Max-Planck-Institut für Informatik under the supervision of Markus Bläser. Anurag's research interests mostly revolve around algebraic complexity theory.