                           Theory of Computing, Volume 15 (3), 2019, pp. 1–24
                                        www.theoryofcomputing.org




    Explicit Polynomial Sequences with
 Maximal Spaces of Partial Derivatives and a
         Question of K. Mulmuley
                         Fulvio Gesmundo∗                           Joseph M. Landsberg †
              Received June 18, 2017; Revised December 31, 2018; Published September 12, 2019




       Abstract: We answer a question of K. Mulmuley. Efremenko et al. (Math. Comp., 2018)
       have shown that the method of shifted partial derivatives cannot be used to separate the padded
       permanent from the determinant. Mulmuley asked if this “no-go” result could be extended
       to a model without padding. We prove this is indeed the case using the iterated matrix
       multiplication polynomial. We also provide several examples of polynomials with maximal
       space of partial derivatives, including the complete symmetric polynomials. We apply
       Koszul flattenings to these polynomials to obtain the first explicit sequence of polynomials
       with symmetric border rank lower bounds higher than the bounds attainable via partial
       derivatives.
ACM Classification: F.1.3
AMS Classification: 68Q15, 15A69
Key words and phrases: computational complexity, Waring border rank, shifted partial derivatives,
Koszul flattening

1     Introduction
Let Sd CN denote the space of homogeneous polynomials of degree d in N variables and let p ∈ Sd CN .
Let Se CN∗ denote the space of homogeneous differential operators of order e with constant coefficients,
    ∗ Supported by the European Research Council (ERC Grant Agreement no.     337603) and VILLUM FONDEN via the
QMATH Centre of Excellence (Grant no. 10059).
  † Supported by NSF grants DMS-1405348 and CCF-1814254.



© 2019 Fulvio Gesmundo and Joseph M. Landsberg
Licensed under a Creative Commons Attribution License (CC-BY)                      DOI: 10.4086/toc.2019.v015a003

which acts on Sd CN when e ≤ d. The e-th partial derivative map of p (or e-th flattening of p) is

                                          p_{e,d−e} : S^e C^{N∗} → S^{d−e} C^N                                        (1.1)
                                                       D ↦ D(p).

We call the image of pe,d−e the e-th space of partial derivatives of p; it is straightforward to verify that
rank(pe,d−e ) = rank(pd−e,e ) and that given e′ ≤ e ≤ d/2, if pe,d−e has full rank then pe′,d−e′ has full rank.
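
As an illustration (a minimal sketch, ours and not from the paper; the helper name catalecticant_rank is hypothetical), the rank of p_{e,d−e} can be computed symbolically by expressing each order-e derivative of p in the monomial basis. The expected value 6 in the example reflects the maximal-rank property of (x_1^2 + x_2^2 + x_3^2)^2 proved in Section 2.

```python
from itertools import combinations_with_replacement
import sympy as sp

def catalecticant_rank(p, variables, e):
    """Rank of the e-th partial derivative map D |-> D(p), in monomial bases."""
    d = sp.total_degree(p)
    basis = [sp.Mul(*m) for m in combinations_with_replacement(variables, d - e)]
    rows = []
    for mono in combinations_with_replacement(variables, e):
        D = p
        for v in mono:                      # apply the order-e operator d^e/dx^mono
            D = sp.diff(D, v)
        poly = sp.Poly(sp.expand(D), *variables)
        rows.append([poly.coeff_monomial(b) for b in basis])
    return sp.Matrix(rows).rank()

x1, x2, x3 = sp.symbols('x1 x2 x3')
p = sp.expand((x1**2 + x2**2 + x3**2)**2)       # f_{3,2} from Section 2, degree 4
print(catalecticant_rank(p, [x1, x2, x3], 2))   # expect 6 = dim S^2 C^3
```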
    Let M ≥ N. Choose a linear inclusion CN ⊆ CM , so that polynomials in N variables can be regarded
as polynomials in M variables which happen to use only N of them. A polynomial p ∈ Sd CN is a
degeneration of q ∈ Sd CM , if p ∈ \overline{GL_M · q} ⊆ Sd CM , where the overline denotes closure, equivalently
in the usual (Euclidean) topology or in the Zariski topology. Similarly, p is a specialization of q if
p ∈ EndM ·q ⊆ Sd CM . Notice that if p is a specialization of q then it is a degeneration of q. In complexity
theory, one is interested in finding obstructions to specialization of a polynomial p to a polynomial q.
    A common strategy to determine obstructions to degeneration (hence, to specialization) of a polyno-
mial q to a polynomial p is to find closed conditions that q and every element of GLM · q satisfy, but p
does not satisfy. Typical examples of this method are flattening techniques (see, e. g., [4]), which exploit
the semi-continuity of matrix rank, namely that {X ∈ Mat_{ℓ×ℓ} : rank(X) ≤ r} is a closed subset of Mat_{ℓ×ℓ}
(both in the usual and in the Zariski topology).
    The method of partial derivatives is one such example: since the entries of qe,d−e (as a matrix
expressing the map Se CN∗ → Sd−e CN ) are continuous in the coefficients of q, whenever p is a degeneration
of q, semi-continuity of matrix rank guarantees that rank(pe,d−e ) ≤ rank(qe,d−e ) for all e. Therefore,
by exhibiting an e for which rank(pe,d−e ) > rank(qe,d−e ), one can prove that p is not a
degeneration of q (and hence that p is not a specialization of q).
    The method of partial derivatives dates back to Sylvester in 1852 [32], who called the maps (1.1)
catalecticants. These maps have been used to obtain lower bounds on the Waring rank, Waring border
rank and cactus border rank of polynomials (see, e. g., [2, 16]). The symmetric or Waring rank of a
polynomial p ∈ Sd CN is the smallest r such that p = ∑_{j=1}^{r} ℓ_j^d where the ℓ_j ∈ CN are linear forms. One
writes R_S(p) = r. The symmetric or Waring border rank of p is the smallest r such that p is a limit of
polynomials of Waring rank r, and one writes R̲_S(p) = r. The ranks of the partial derivatives maps give
lower bounds for the symmetric border rank of p: R̲_S(p) ≥ max_e{rank(pe,d−e )}. From a complexity
theory perspective, Waring rank captures the complexity of a polynomial in the model of depth-three
powering circuits, or ΣΛΣ circuits, namely depth-three arithmetic circuits whose first and third layers
consist of addition gates and whose middle layer consists of powering gates, sending z ↦ z^d for some d.
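
For instance (a standard toy example, not from the paper): the quadric xy satisfies

        xy = (1/4)[(x + y)^2 − (x − y)^2],

so R_S(xy) ≤ 2, and since rank((xy)_{1,1}) = 2 one gets R_S(xy) = R̲_S(xy) = 2; the displayed decomposition is exactly a ΣΛΣ formula with two powering gates.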
    Green [13] and Iarrobino and Emsalem [15] showed that for a general polynomial p all the maps
pe,d−e are of maximal rank. When the second author was preparing [20], he asked several experts if they
knew of an explicit sequence of polynomials (e. g., in the complexity class VNP) with partial derivatives
of maximal rank, as the standard references [16] in mathematics and [4] in computer science did not have
one. Those asked did not furnish any example, so we wrote down several; see below. One example we
found surprised us: the polynomial (x_1^2 + · · · + x_n^2)^k, because it is in the complexity class VPe of sequences
of polynomials admitting polynomial size formulas. It turns out this example had been discovered
by Reznick in 1991 [28, Thm. 8.15], and in the same memoir he describes an explicit sequence that
essentially dates back to Bierman [1] (the proof, if not the statement, appeared in 1903); see below.


    Let p_{n,d} = x_1^d + · · · + x_n^d denote the power sum polynomial of degree d in n variables and
h_{n,d} = ∑_{|α|=d} x_1^{α_1} · · · x_n^{α_n} the complete symmetric polynomial of degree d in n variables.
    For the following polynomial sequences, all partial derivatives maps have full rank:

    • PBier,n,d := ∑|α|=d (α1 x1 + · · · + αn xn )d where α ranges over all multi-indices (α1 , . . . , αn ) of non-
      negative integers such that α1 + · · · + αn = d, i. e., exponents of monomials of degree d in n
      variables (Bierman [1], Reznick [28]);

    • fn,k := (pn,2 )k ∈ S2k Cn    (Reznick [28], a proof is given in §2);

    • f̃n,k := pn,1 fn,k ∈ S2k+1 Cn     (Theorem 2.5);

    • hn,d ∈ Sd Cn    (Theorem 5.6).

    When n, d are polynomially related, hn,d ∈ VPs , the complexity class determined by the determinant,
because the complete symmetric functions can be expressed as a determinant of a matrix whose entries
are power sum functions. On the other hand, the polynomials fn,k and f̃n,k belong to the complexity class
VPe of polynomials admitting a polynomial size formula. The fact that there are elements in VPe having
partial derivative maps of full rank suggests that the ΣΛΣ model is quite weak.
    The method of shifted partial derivatives is a variant of the method of partial derivatives. Kayal
introduced it in [18] and Gupta et al. [14] exploited it to prove super-polynomial complexity lower bounds
for depth-four circuits for the permanent (and determinant). In the same paper the authors ask if the
method could be used to approach Valiant’s conjecture, that is to separate the class VP from the class
VNP.
    For p ∈ Sd CN the method of shifted partials is based on the study of the following maps (for
judiciously chosen e and τ):

        p_{(e,d−e)[τ]} : S^e C^{N∗} ⊗ S^τ C^N → S^{d−e+τ} C^N
                         D ⊗ q ↦ q D(p).

Notice that if τ = 0, then p_{(e,d−e)[τ]} is the partial derivative map defined in (1.1). Let

        ⟨∂^{=e} p⟩_{=τ} := p_{(e,d−e)[τ]}(S^e C^{N∗} ⊗ S^τ C^N).

    Again, semi-continuity of matrix rank guarantees that if p is a degeneration of q, then dim⟨∂^{=e} q⟩_{=τ} ≥
dim⟨∂^{=e} p⟩_{=τ} for all τ, and the method of shifted partials can be used to prove that p is not a degeneration
of q by showing that dim⟨∂^{=e} p⟩_{=τ} > dim⟨∂^{=e} q⟩_{=τ} for some τ.
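
The dimension dim⟨∂^{=e} p⟩_{=τ} can likewise be computed symbolically for small examples (a rough sketch, ours; the helper name shifted_partials_dim is hypothetical): span the products of degree-τ monomials with the order-e partials of p and take the rank of the resulting coefficient matrix.

```python
from itertools import combinations_with_replacement
import sympy as sp

def shifted_partials_dim(p, variables, e, tau):
    d = sp.total_degree(p)
    partials = []
    for mono in combinations_with_replacement(variables, e):
        D = p
        for v in mono:
            D = sp.diff(D, v)
        partials.append(sp.expand(D))
    shifts = [sp.Mul(*m) for m in combinations_with_replacement(variables, tau)]
    gens = [sp.expand(q * D) for q in shifts for D in partials]
    basis = [sp.Mul(*m) for m in combinations_with_replacement(variables, d - e + tau)]
    M = sp.Matrix([[sp.Poly(g, *variables).coeff_monomial(b) for b in basis] for g in gens])
    return M.rank()

x, y, z = sp.symbols('x y z')
# e = 1, tau = 1 for x^3 + y^3 + z^3: the image misses only the monomial xyz,
# so the dimension is 9 out of dim S^3 C^3 = 10.
print(shifted_partials_dim(x**3 + y**3 + z**3, [x, y, z], 1, 1))
```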
     There is a geometric interpretation of the image of the shifted partial derivative map: given p ∈ Sd Cn ,
let V (p) ⊆ P(Cn )∗ be the hypersurface of degree d cut out by p. The image of pe,d−e generates an ideal
in Sym(Cn ) that we denote by Je (p); it cuts out a subvariety of V (p) that is called the e-th Jacobian
locus of p. The image of the shifted partials map p(e,d−e)[τ] is the component of degree d − e + τ of Je (p),
in other words, the value of the Hilbert function of Je (p) in degree d − e + τ. In particular, the study of the
ranks of the shifted partials maps of p is equivalent to the study of the growth of the ideals Je (p) and
more precisely of the growth of their Hilbert functions.


     The results of [6] show that the method of shifted partials cannot be used to separate VPs from VNP
in the classical formulation of Valiant's conjecture, where one seeks a super-polynomial lower bound
on the so-called determinantal complexity of the padded permanent polynomial, namely on the smallest
possible n(m) such that z^{n(m)−m} perm_m is a specialization of det_{n(m)}. Informally, the proof of [6] exploits
the padding z^{n−m} to prove that the shifted partial spaces of z^{n−m} perm_m do not grow fast enough to give a
super-polynomial separation from the shifted partial spaces of det_n.
     This motivates the question of whether the method of shifted partials can be used to achieve a
super-polynomial lower bound in a model which does not require padding.

Definition 1.1. Given n, d, let X_α = ((ξ_α)^i_j)_{i,j=1,...,n} be n × n matrices of indeterminates for α = 1, . . . , d.
The (n, d)-iterated matrix multiplication polynomial IMM^d_n ∈ S^d(C^{dn²}) is

        IMM^d_n : (X_1, . . . , X_d) ↦ trace(X_1 · · · X_d).

    Let (ξ^i_j)_{i,j=1,...,n} be an n × n matrix of indeterminates. The (n, d)-matrix powering polynomial
Pow^d_n ∈ S^d C^{n²} is

        Pow^d_n : X ↦ trace(X^d).
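
A concrete way to see these polynomials (a small sketch, ours; the function names imm and pow_poly are hypothetical) is to build them with symbolic matrices; the last line checks that Pow^d_n is the specialization of IMM^d_n obtained by setting all the matrices X_α equal.

```python
import sympy as sp

def imm(n, d):
    """trace(X_1 ... X_d) with d independent n x n symbolic matrices."""
    Xs = [sp.Matrix(n, n, lambda i, j, a=a: sp.Symbol(f'x{a}_{i}{j}'))
          for a in range(1, d + 1)]
    P = sp.eye(n)
    for X in Xs:
        P = P * X
    return sp.expand(P.trace()), Xs

def pow_poly(n, d):
    """trace(X^d) for a single n x n symbolic matrix X."""
    X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'y_{i}{j}'))
    return sp.expand((X**d).trace()), X

n, d = 2, 3
p_imm, Xs = imm(n, d)
p_pow, Y = pow_poly(n, d)
subs = {Xs[a][i, j]: Y[i, j] for a in range(d) for i in range(n) for j in range(n)}
print(sp.expand(p_imm.subs(subs) - p_pow) == 0)   # True: Pow specializes IMM
```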

    By Nisan [24], the polynomials IMM^d_n and Pow^d_n can be used to define VPs-complete sequences
without the use of padding. More precisely, a sequence of homogeneous polynomials { f_m }_{m∈N} with
f_m ∈ S^{d_m} C^{M_m} (with d_m, M_m growing polynomially in m) is in VPs if and only if either of the following
two equivalent conditions holds:

    • there exists a function n(m) growing polynomially in m, such that f_m is a specialization of IMM^{d_m}_{n(m)};

    • there exists a function n′(m) growing polynomially in m, such that f_m is a specialization of Pow^{d_m}_{n′(m)}.

Remarkably, in this case, the model does not require padding and allows one to compare IMM^d_n and Pow^d_n
directly with the sequence of polynomials.
    In particular, Valiant's conjecture VPs ≠ VNP can be rephrased by stating that there is no polynomially
bounded function n(m) such that the permanent polynomial perm_m is a specialization of IMM^m_{n(m)} (or of
Pow^m_{n(m)}). Note that Pow^m_n is a specialization of IMM^m_n.
    We prove that the method of shifted partials cannot be used to achieve a super-polynomial separation
between perm_m and IMM^m_n:


Theorem 1.2. If n > m^5, then perm_m cannot be separated from IMM^m_n by the method of shifted partial
derivatives. More precisely, given any linear inclusion C^{m²} ⊆ C^{mn²}, considering perm_m ∈ S^m C^{mn²} as a
polynomial that just involves m² of the mn² variables, then for all choices of e, τ,

        dim⟨∂^{=e} (perm_m ∈ S^m C^{mn²})⟩_{=τ} ≤ dim⟨∂^{=e} IMM^m_n⟩_{=τ}.



Additional results
We give a priori upper bounds for the utility of Koszul flattenings, another variant of the partial derivatives
map, in comparing the complexity of polynomials (Proposition 6.1). We show that these bounds are sharp
for the first Koszul flattenings in low dimensions and degree (Remark 6.6). We obtain explicit (but not
sharp) lower bounds for the Koszul flattenings of f̃n,k , showing that one obtains better Waring border rank
lower bounds with this method than by the method of partial derivatives (Proposition 6.4). Ironically, now
the simple polynomial f̃n,k has the highest Waring border rank lower bound of all explicit polynomials of
odd degree.

Related work
Let en,d = ∑1≤i1 <i2 <···<id ≤n xi1 · · · xid ∈ Sd Cn denote the elementary symmetric polynomial of degree d in
n variables. The complexity of en,d has been well studied: its symmetric border rank is bounded below by
binom(n, ⌊d/2⌋), because this is the rank of (en,d )_{⌊d/2⌋,⌈d/2⌉} (see, e. g., [25]); Lee [22] proved that its symmetric
rank is ∑_{i=0}^{⌊d/2⌋} binom(n, i) when d is odd, and there is a similar formula providing a lower bound when d is even.

Its padded version can be computed by a homogeneous depth-three (ΣΠΣ) circuit of size n² (due to
Ben-Or), and when log(n) ≤ d ≤ 2n/3, one has the lower bound of max{Ω(n²/d), Ω(nd)} by Shpilka
and Wigderson [29] for its depth-three circuit size. The lower bounds appear to translate to complete
symmetric functions; however the upper bound relies on the generating function for the elementary
symmetric functions being a product of linear forms, whereas the complete symmetric functions have
generating function ∏_{i=1}^{n} (1 − x_i t)^{−1}. The gap between the padded and unpadded depth-three circuit
complexity may have led researchers to think the results of [6] might fail in a model without padding. For
this reason, K. Mulmuley suggested that we investigate the barriers of the method of shifted partials in
the unpadded setting (although Mulmuley himself anticipated our answer). Similar concerns arose
after the results by Ikenmeyer and Panova [17] and Bürgisser, Ikenmeyer and Panova [3] on the Geometric
Complexity Theory program, and were partially addressed in [10], exploiting the same homogenization
result of Nisan [24] that we use in this work.
     The shifted partial derivative complexity of elementary symmetric polynomials is studied by Fournier
et al. [7], where strong lower bounds are proved, which in turn give complexity lower bounds for
depth-four circuits.

Acknowledgements
We thank Catherine Yan for discussions on the Gessel-Viennot method, Bruce Reznick and Zach Teitler
for historical information, and Noah Stein and Mathoverflow for the proof of Lemma 5.3. We thank the
Santa Fe Institute and the organizers of the working group in Geometric Complexity Theory in December
2016 that inspired this work. We also thank the anonymous referees for numerous suggestions to improve
the exposition and on the estimate in Case 2 of the proof of Theorem 1.2.


2    Proofs that f_{n,k}, f̃_{n,k} have maximal partial derivatives
Introduce the notation q_n := p_{n,2} = x_1^2 + · · · + x_n^2 and ℓ_n := p_{n,1} = x_1 + · · · + x_n.


    In this section, we prove that f_{n,k} = (p_{n,2})^k and f̃_{n,k} = p_{n,1} f_{n,k} have partial derivative maps of
maximal rank, or equivalently that their spaces of partial derivatives have the maximal possible dimension.
Although the result for f_{n,k} already appeared in [28, Thm. 8.15], we include our proof in this section
because it seems less involved and more accessible than the original one; moreover, the proof of the result
for f̃_{n,k} relies on the method that we use to prove the result for f_{n,k} (see Remark 2.3).

Proposition 2.1. The space of first derivatives of homogeneous degree d + 2 polynomials of the form
hqn , where h runs over the space of homogeneous degree d polynomials, equals the space of all
homogeneous polynomials of degree d + 1. In symbols, setting L = {∂ (hqn ) : ∂ ∈ (Cn )∗ , h ∈ Sd Cn } , we
have L = Sd+1 Cn .

Proof. If d = 0, the statement holds as ∂q_n/∂x_i = 2x_i.
    Let d ≥ 1. Consider a monomial x^α of degree d − 1, where α is a multi-index, |α| = d − 1. For
i ∈ {1, . . . , n}, define

        g_i := ∂/∂x_i (q_n x_i x^α) = 2x_i^2 x^α + (α_i + 1) q_n x^α = (2x_i^2 + (α_i + 1) q_n) x^α.

    For every i, g_i ∈ L. Let g = (g_1, . . . , g_n)^T and p = (x_1^2, . . . , x_n^2)^T be column vectors. We have

        g = [(2Id + A)p] x^α

where A is the n × n rank-one matrix whose entries in the i-th row are all equal to α_i + 1, for i = 1, . . . , n.
Let 1 = (1, . . . , 1)^T and a = (α_1 + 1, . . . , α_n + 1)^T and notice A = a1^T. In particular, by the Sylvester
determinant identity (see, e. g., [27, Chap. 1, Prob. 3.1]), det(2Id + A) = 2^{n−1}(2 + ∑_i(α_i + 1)) = 2^{n−1}(n + d + 1) ≠ 0,
so 2Id + A is invertible.
    Therefore x_i^2 x^α ∈ (C^n)^∗ · (q_n S^d C^n) for every monomial x^α ∈ S^{d−1} C^n. This shows that every non-
square-free monomial belongs to (C^n)^∗ · (q_n S^d C^n).
    Now let x^β ∈ S^{d+1} C^n be a square-free monomial; suppose β_1 = 1 and let γ be the multi-index with
γ_1 = 0 and γ_j = β_j for j ≥ 2, so that x^γ = x^β/x_1. Then

        2x^β = ∂/∂x_1 (q_n x^γ).
This concludes the proof.

Theorem 2.2 (Reznick, [28]). The space of e-th partial derivatives of f_{n,k} consists of all multiples of
q_n^{k−e} of degree 2k − e. In symbols, for all n, k and e ≤ k, ( f_{n,k})_{e,2k−e}(S^e(C^n)^∗) = q_n^{k−e} S^e C^n. In particular,
for all n, k, e, the flattening ( f_{n,k})_{e,2k−e} has full rank.

Proof. We proceed by induction on e. For e = 0 there is nothing to prove. Let e ≥ 1. By the induction
hypothesis, the (e − 1)-st flattening surjects onto q_n^{k−e+1} S^{e−1} C^n. It suffices to show that

        (C^n)^∗ · (q_n^{k−e+1} S^{e−1} C^n) = q_n^{k−e} S^e C^n.



Notice that, for a monomial x^α ∈ S^{e−1} C^n,

        ∂/∂x_i (q_n^{k−e+1} x^α) = 2(k − e + 1) x_i q_n^{k−e} x^α + q_n^{k−e+1} ∂x^α/∂x_i
                                 = q_n^{k−e} (2(k − e + 1) x_i x^α + q_n ∂x^α/∂x_i).

Up to rescaling q_n and the differential operators, the term in parentheses is ∂/∂x_i (x^α q_n). These terms span
S^e C^n by Proposition 2.1.

Remark 2.3. The results of Proposition 2.1 and Theorem 2.2 hold for every non-degenerate quadratic
form g ∈ S^2 C^n. Indeed every such quadric can be diagonalized by the action of GL_n and the proof only uses
the fact that q_n is diagonal. In particular, for every non-degenerate g ∈ S^2 C^n and every e ≤ k, we have

        (S^e C^n)^∗ · (g^k) = g^{k−e} S^e C^n.
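
As a quick numerical check of Remark 2.3 (ours, not from the paper), one can verify the statement for a non-diagonal non-degenerate quadric in three variables; the middle catalecticant of g^2 should again have the maximal rank 6.

```python
from itertools import combinations_with_replacement
import sympy as sp

x = sp.symbols('x1 x2 x3')
g = x[0]**2 + x[0]*x[1] + x[1]**2 + x[2]**2      # a non-degenerate quadric
p = sp.expand(g**2)
basis = [sp.Mul(*m) for m in combinations_with_replacement(x, 2)]
rows = []
for op in combinations_with_replacement(x, 2):
    D = p
    for v in op:
        D = sp.diff(D, v)
    rows.append([sp.Poly(sp.expand(D), *x).coeff_monomial(b) for b in basis])
print(sp.Matrix(rows).rank())   # expect 6 = dim S^2 C^3
```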

     In order to prove the analog of Proposition 2.1 for f̃_{n,k}, we will exploit the decomposition of S^d C^n
into Specht modules under the action of the symmetric group S_n which permutes the variables. We have
C^n = [n] ⊕ [n − 1, 1], where [n] = ⟨ℓ_n⟩ is an invariant subspace and [n − 1, 1] = ⟨x_i − x_j : i, j = 1, . . . , n⟩
is isomorphic to the standard representation of S_n. In particular, a homogeneous polynomial f ∈ S^d C^n
can be written as

        f = ∑_{e=0}^{d} ℓ_n^e g_{d−e}                                                 (2.1)

where g_{d−e} ∈ S^{d−e}[n − 1, 1]; this is the decomposition of f as a sum of bi-homogeneous terms according to
the decomposition C^n = [n] ⊕ [n − 1, 1].
    We redefine our bases of C^n and of C^{n∗} in accordance with the splitting [n] ⊕ [n − 1, 1]: consider
[n] = ⟨ℓ_n⟩ and [n − 1, 1] = ⟨x_i − x_1 : i = 2, . . . , n⟩, and in the dual space

        ℓ_n^∗ = (1/n)(∂/∂x_1 + · · · + ∂/∂x_n)     and     ∆_i = (1/2)(∂/∂x_i − ∂/∂x_1).

In particular, ⟨∆_i : i = 2, . . . , n⟩ ≅ [n − 1, 1] and ⟨ℓ_n^∗⟩ ≅ [n]. Since ∆_i(ℓ_n) = 0 and ℓ_n^∗ · [n − 1, 1] = 0, this
gives the splitting C^{n∗} = ⟨ℓ_n⟩^∗ ⊕ [n − 1, 1]^∗, where we identify ⟨ℓ_n⟩^∗ and [n − 1, 1]^∗ as subspaces of C^{n∗}.
Remark 2.4. It will be useful to write q_n = x_1^2 + · · · + x_n^2 and its powers as in Eqn. (2.1). Notice that
S^2 C^n contains a two-dimensional space of S_n-invariants. To see this, consider the decomposition
(see, e. g., [19, Eqn. (6.7.1)]) S^2 C^n = S^2(⟨ℓ_n⟩ ⊕ [n − 1, 1]) = S^2⟨ℓ_n⟩ ⊕ ⟨ℓ_n⟩ ⊗ [n − 1, 1] ⊕ S^2[n − 1, 1]: the
subspace S^2⟨ℓ_n⟩ is one-dimensional and S_n acts trivially on it, so this is a space of invariants generated
by ℓ_n^2; the space ⟨ℓ_n⟩ ⊗ [n − 1, 1] is isomorphic to [n − 1, 1], which is irreducible, so it contains no
invariants; the subspace S^2[n − 1, 1] contains a one-dimensional subspace of invariants, generated by
g_n = (1/2) ∑_{i,j} (x_i − x_j)^2. The uniqueness of the invariant g_n in S^2[n − 1, 1] follows by Schur's Lemma:
indeed S^2[n − 1, 1] ⊆ [n − 1, 1] ⊗ [n − 1, 1] ≅ [n − 1, 1]^∗ ⊗ [n − 1, 1] = End([n − 1, 1]) because Specht
modules are self-dual (see, e. g., [8, Ch. 4]); by Schur's Lemma, the only S_n-equivariant endomorphism
of the irreducible representation [n − 1, 1] is the identity (up to scale); we deduce that S^2[n − 1, 1] contains
at most a one-dimensional space of invariants.


    Since q_n is an S_n-invariant, we deduce that q_n is a linear combination of g_n and ℓ_n^2: indeed,
q_n = (1/n)(ℓ_n^2 + g_n).
    We can now prove the following result.
Theorem 2.5. For every n, k, e, the flattening ( f̃_{n,k})_{e,2k+1−e} has full rank.
Proof. As before, write C^n = ⟨ℓ_n⟩ ⊕ [n − 1, 1]. Up to rescaling q_n, write q_n = ℓ_n^2 + g_n as in Remark 2.4.
We have

        f̃_{n,k} = ℓ_n q_n^k = ℓ_n (ℓ_n^2 + g_n)^k = ℓ_n ∑_{i=0}^{k} binom(k,i) ℓ_n^{2i} g_n^{k−i} = ∑_{i=0}^{k} binom(k,i) ℓ_n^{2i+1} g_n^{k−i}.

    Let e ≤ k. In order to calculate rank(( f̃_{n,k})_{e,2k+1−e}), we study the image of ( f̃_{n,k})_{e,2k+1−e} as an
S_n-module. We have S^e C^{n∗} = S^e(⟨ℓ_n^∗⟩ ⊕ [n − 1, 1]^∗) = ⊕_{j=0}^{e} ⟨(ℓ_n^∗)^j⟩ ⊗ S^{e−j}[n − 1, 1]^∗, so

        S^e(⟨ℓ_n^∗⟩ ⊕ [n − 1, 1]^∗) · (ℓ_n q_n^k) = [⊕_{j=0}^{e} ⟨(ℓ_n^∗)^j⟩ ⊗ S^{e−j}[n − 1, 1]^∗] · (ℓ_n q_n^k)
                                                  = ∑_{j=0}^{e} ⟨(ℓ_n^∗)^j⟩ (S^{e−j}[n − 1, 1]^∗ · (ℓ_n q_n^k)).        (2.2)

To conclude, we show that the last summation is in fact a direct sum. In the j-th summand, we have

        ⟨(ℓ_n^∗)^j⟩ · (S^{e−j}[n − 1, 1]^∗ · (ℓ_n q_n^k)) = S^{e−j}[n − 1, 1]^∗ · [(ℓ_n^∗)^j · ∑_{i=0}^{k} binom(k,i) ℓ_n^{2i+1} g_n^{k−i}]
                                                         = S^{e−j}[n − 1, 1]^∗ · [∑_{i=⌈(j−1)/2⌉}^{k} binom(k,i) ℓ_n^{2i+1−j} g_n^{k−i}].        (2.3)


    If ∆ ∈ S^{e−j}[n − 1, 1]^∗, we have

        ∆(∑_{i=⌈(j−1)/2⌉}^{k} binom(k,i) ℓ_n^{2i+1−j} g_n^{k−i}) = ∑_{i=⌈(j−1)/2⌉}^{k} binom(k,i) ℓ_n^{2i+1−j} ∆(g_n^{k−i}).

In particular the summands with i > k − e + j are 0; moreover, the i-th term of the summation, as ∆ varies,
ranges over the entire ℓ_n^{2i+1−j} g_n^{k−i−(e−j)} S^{e−j}[n − 1, 1] by Theorem 2.2 and Remark 2.3. We deduce
that the leading term (in ℓ_n) of the last line of (2.3) ranges over the whole ℓ_n^{2(k−e+j)+1−j} S^{e−j}[n − 1, 1] =
ℓ_n^{2k−2e+j+1} S^{e−j}[n − 1, 1]. This shows that the summands in (2.2) are linearly independent because their
leading terms have different degrees, the j-th one having degree in ℓ_n equal to 2k − 2e + j + 1.
    From (2.2), we deduce that the rank of ( f̃_{n,k})_{e,2k+1−e} is

        dim(S^e C^{n∗} · f̃_{n,k}) = ∑_{j=0}^{e} dim S^{e−j}[n − 1, 1]
                                   = ∑_{j=0}^{e} binom(e − j + n − 2, e − j)
                                   = ∑_{j=0}^{e} binom(n − 2 + j, j) = binom(e + n − 1, e) = dim S^e C^{n∗};



hence, ( f̃_{n,k})_{e,2k+1−e} is injective and this concludes the proof when e ≤ k. Since ( f̃_{n,k})_{e,2k+1−e} is the
transpose of ( f̃_{n,k})_{2k+1−e,e}, the proof is complete.


3    Two auxiliary results
Proposition 3.1. Let n(m), k(m) be polynomially bounded functions of m. Then the sequences { f_{n,k} }_m
and { f̃_{n,k} }_m are in the algebraic complexity class VPe of sequences admitting polynomial size formulas.

Proof. The standard expression

        f_{n,k} = q_n · · · q_n   (k times),   with   q_n = x_1 · x_1 + · · · + x_n · x_n,

gives a formula of size k(3n − 1) for f_{n,k}. Adding ℓ_n = x_1 + · · · + x_n and setting f̃_{n,k} = ℓ_n · f_{n,k} gives a
formula of size k(3n − 1) + n for f̃_{n,k}.

Proposition 3.2. If n = 2m then the polynomial f_{n,k} is a specialization of the matrix powering polynomial
Pow^{2k}_{m+1} and the polynomial f̃_{n,k} is a specialization of the iterated matrix multiplication polynomial
IMM^{2k+1}_{m+2}. If n = 2m + 1 then the polynomial f_{n,k} is a specialization of the matrix powering polynomial
Pow^{2k}_{m+2} and the polynomial f̃_{n,k} is a specialization of the iterated matrix multiplication polynomial
IMM^{2k+1}_{m+3}.
Proof. Let n = 2m + 1 and set y_j^± = x_{2j−1} ± √−1 x_{2j} for j = 1, . . . , m. Consider the specialization of
Pow^{2k}_{m+2} to the (m + 2) × (m + 2) matrix Q_m whose first row is (0, y_1^+, . . . , y_m^+, x_n), whose first column
is (0, y_1^−, . . . , y_m^−, x_n)^T, and whose remaining entries are 0.
    We show that Pow^{2k}_{m+2}(Q_m) = f_{n,k}, up to scale. The characteristic polynomial of Q_m is

        det(Q_m − t Id_{m+2}) = (−1)^{m+2} t^m (t^2 − ∑_j y_j^+ y_j^− − x_n^2) = (−1)^{m+2} t^m (t^2 − q_n).

Thus, the nonzero eigenvalues of Q_m (as functions of x_1, . . . , x_n) are ±√q_n. In particular, Pow^d_{m+2}(Q_m) =
(√q_n)^d + (−√q_n)^d is 0 if d is odd and it is 2q_n^k = 2 f_{n,k} if d = 2k is even.
    If n = 2m is even, apply the same argument to the matrix obtained from Q_m by removing the last row
and the last column.
    Similarly, f̃_{n,k} is a specialization of IMM^{2k+1}_{m+2} or IMM^{2k+1}_{m+3} (depending on the parity of n) by making
the first matrix ℓ_n Id and specializing the remaining matrices to the matrix above.
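
A small sanity check of the computation above (ours, not from the paper), for n = 3 (so m = 1) and k = 1: the trace of Q_1^2 equals 2(x_1^2 + x_2^2 + x_3^2) = 2 q_3.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
yp, ym = x1 + sp.I * x2, x1 - sp.I * x2          # y_1^+ and y_1^-
Q = sp.Matrix([[0,  yp, x3],
               [ym,  0,  0],
               [x3,  0,  0]])
print(sp.expand((Q**2).trace() - 2*(x1**2 + x2**2 + x3**2)) == 0)   # True
```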


4    Proof of Theorem 1.2
Since n ≥ m^5, we may choose a linear inclusion C^{m²} ⊆ C^{mn²} and regard perm_m ∈ S^m C^{mn²}. Our goal is to
show that for every s, τ

        dim⟨∂^{=s} IMM^m_n⟩_{=τ} ≥ dim⟨∂^{=s} perm_m⟩_{=τ}.

    We split the proof into three cases. In the first and in the second case, we degenerate IMM^m_n to f_{n,k} if
m = 2k is even and to f̃_{n,k} if m = 2k + 1 is odd. This is possible by Proposition 3.2. Write F_{m,n} for either f_{n,k}
or f̃_{n,k} in what follows. Since IMM^m_n degenerates to F_{m,n}, we have dim⟨∂^{=s} IMM^m_n⟩_{=τ} ≥ dim⟨∂^{=s} F_{m,n}⟩_{=τ}.
    In the third case, we specialize IMM^m_n to the power sum polynomial of degree m in m² variables,
y_1^m + · · · + y_{m²}^m, by specializing every argument of IMM^m_n to the diagonal matrix of size n × n with y_1, . . . , y_{m²}
in the first m² diagonal entries and 0 elsewhere.
    Case 1: s ≥ ⌈m/2⌉. We show that dim⟨∂^{=s} F_{m,n}⟩_{=τ} ≥ dim⟨∂^{=s} perm_m⟩_{=τ} when s ≥ ⌈m/2⌉. Up to the
action of GL_{mn²}, assume C^{m²} ⊆ C^n ⊆ C^{mn²}, where C^{m²} is the space spanned by the variables of perm_m
and C^n is the space spanned by the variables of F_{m,n}. It will suffice to prove

        dim⟨∂^{=s} (perm_m ∈ S^m C^n)⟩_{=τ} ≤ dim⟨∂^{=s} (F_{m,n} ∈ S^m C^n)⟩_{=τ}

because the remaining mn² − n variables will contribute the same growth to the ideals J_s(perm_m) and
J_s(F_{m,n}). Since s ≥ ⌈m/2⌉, (F_{m,n})_{s,m−s} surjects onto S^{m−s} C^n by Theorem 2.2 and Theorem 2.5, and
thus, for every shift τ, the shifted partial derivative map surjects onto S^{m−s+τ} C^n. This shows
⟨∂^{=s} (F_{m,n} ∈ S^m C^n)⟩_{=τ} = S^{m−s+τ} C^n and proves this case.
    Case 2: s < ⌈m/2⌉ and τ < 2m³. Again, it suffices to prove

        dim⟨∂^{=s} (perm_m ∈ S^m C^n)⟩_{=τ} ≤ dim⟨∂^{=s} (F_{m,n} ∈ S^m C^n)⟩_{=τ}.

Since s ≤ ⌈m/2⌉, (F_{m,n})_{s,m−s} is injective, and its partials have image of dimension binom(n+s−1, s). Corollary 2.4
of [6] states that any subspace of S^{m−s} C^n of dimension binom(n+s−1, s) generates an ideal that in degree m − s + τ
has dimension at least binom(n+s+τ−1, s+τ); this is a consequence of a general result on the growth of ideals known
as Macaulay's Theorem (see, e. g., [30]). Thus

        dim⟨∂^{=s} (F_{m,n} ∈ S^m C^n)⟩_{=τ} ≥ binom(n+s+τ−1, s+τ)

(and equality holds in the case m is even).
    We compare this with the crude estimate for perm_m that ignores syzygies of its s-th Jacobian ideal.
The space ⟨∂^{=s} (perm_m ∈ S^m C^n)⟩_{=0} has dimension binom(m,s)², because s-th partial derivatives of perm_m are
subpermanents of size m − s, so there is one for every choice of s rows and s columns of the matrix.
Ignoring syzygies of J_s(perm_m),

        binom(m,s)² binom(n+τ−1, τ) ≥ dim⟨∂^{=s} (perm_m ∈ S^m C^n)⟩_{=τ}.

We will conclude that

        binom(n+s+τ−1, s+τ) > binom(m,s)² binom(n+τ−1, τ)


in the range we consider. This is equivalent to

        [(n + s + τ − 1)(n + s + τ − 2) · · · (n + τ)] / [(τ + s)(τ + s − 1) · · · (τ + 1)] > binom(m,s)².        (4.1)

The left hand side is bounded from below by (n + τ)^s/(τ + s)^s and the right hand side is bounded from
above by m^{2s}, so that a sufficient condition for (4.1) is

        (n + τ)/(τ + s) > m²,

which holds when n ≥ m^5 and τ ≤ 2m³ as s ≤ m.
    Case 3: s < m/2 and τ > m³. Here set all the matrices (X_1, . . . , X_m) equal to a matrix that is zero except
for the first m² entries on the diagonal, call them y_1, . . . , y_{m²}. The resulting degeneration of IMM^m_n is
y_1^m + · · · + y_{m²}^m. As in Case 1, it will suffice to prove the result for the shifted partials of both polynomials
in m² variables, because the remaining mn² − m² variables will contribute the same growth to both ideals.
The space of partial derivatives of order s is ⟨y_1^{m−s}, . . . , y_{m²}^{m−s}⟩. The image of the τ-th shifted partial map
consists of all polynomials in m² variables of degree m − s + τ as soon as m − s + τ > m²(m − s − 1) + 1,
so that every monomial of degree m − s + τ is divisible by at least one power of order m − s. In particular,
the shifted partial derivative map is surjective whenever τ > m³.


5    Complete symmetric functions
Recall that hn,d is the complete symmetric function of degree d in n variables:

        h_{n,d} = ∑_{|α|=d} x^α,

where the summation is over all multi-indices α = (α_1, . . . , α_n) with α_1 + · · · + α_n = d.

Proposition 5.1. For every monomial x^β with |β| = e ≤ d, we have

        ∂^e/∂x^β (h_{n,d}) = β! · h_{n+e,d−e}(x_1, . . . , x_n, x^{(β)}),

where

        x^{(β)} = (x_1, . . . , x_1, . . . , x_n, . . . , x_n)   (x_i repeated β_i times)

and β! = β_1! · · · β_n!. In particular the image of the flattening (h_{n,d})_{e,d−e} is the space

        ⟨ h_{n+e,d−e}(x_1, . . . , x_n, x^{(β)}) : |β| = e ⟩ ⊆ S^{d−e} C^n.
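
For instance (a small illustration, not in the paper): for n = 2, d = 2 and β = (1, 0), one has β! = 1 and

        ∂/∂x_1 (h_{2,2}) = ∂/∂x_1 (x_1^2 + x_1 x_2 + x_2^2) = 2x_1 + x_2 = h_{3,1}(x_1, x_2, x_1).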



Proof. We proceed by induction on e. If e = 1, suppose x^β = x_n and write

        h_{n,d} = ∑_{j=0}^{d} x_n^j h_{n−1,d−j}(x_1, . . . , x_{n−1}),

so that

        ∂/∂x_n (h_{n,d}) = ∑_{j=0}^{d} j x_n^{j−1} h_{n−1,d−j}(x_1, . . . , x_{n−1})
                         = ∑_{ℓ=0}^{d−1} (ℓ + 1) x_n^ℓ h_{n−1,d−ℓ−1}(x_1, . . . , x_{n−1})
                         = [∑_{ℓ=0}^{d−1} x_n^ℓ h_{n−1,d−ℓ−1}(x_1, . . . , x_{n−1})] + x_n [∑_{ℓ=0}^{d−2} (ℓ + 1) x_n^ℓ h_{n−1,d−ℓ−2}(x_1, . . . , x_{n−1})],

where the first summation in the last line contains one term from each summand in the previous line, and
the second summation contains the remaining terms (with shifted indices). The first summation adds up
to h_{n,d−1}(x_1, . . . , x_n); by repeating this on the second summation we obtain

        h_{n,d−1}(x_1, . . . , x_n) + x_n h_{n,d−2}(x_1, . . . , x_n) + x_n^2 [∑_{ℓ=0}^{d−3} (ℓ + 1) x_n^ℓ h_{n−1,d−ℓ−3}(x_1, . . . , x_{n−1})],

and iterating this process we obtain ∑_{j=0}^{d} x_n^j h_{n,d−1−j}(x_1, . . . , x_n) = h_{n+1,d−1}(x_1, . . . , x_n, x_n), proving the
base case.
    Let e ≥ 1 and suppose β_1 ≥ 1. Let γ = (β_1 − 1, β_2, . . . , β_n). We have

        ∂^e/∂x^β (h_{n,d}) = ∂/∂x_1 (∂^{e−1}/∂x^γ (h_{n,d})) = γ! · ∂/∂x_1 (h_{n+e−1,d−e+1}(x_1, . . . , x_n, x^{(γ)})).

By the chain rule and by symmetry,

        ∂/∂x_1 (h_{n+e−1,d−e+1}(x_1, . . . , x_n, x^{(γ)})) = (γ_1 + 1) (∂h_{n+e−1,d−e+1}/∂y_1)(x_1, . . . , x_n, x^{(γ)})
                                                           = β_1 · h_{n+e,d−e}(x_1, . . . , x_n, x^{(γ)}, x_1),

where we used the case e = 1 again. Since γ! · β_1 = β!, we conclude.

Proposition 5.2. For any choice of multi-indices β, γ with |β| = p and |γ| = e, the coefficient of x^γ in
h_{n+p,e}(x_1, . . . , x_n, x^{(β)}) is

        ∏_{i=1}^{n} binom(β_i + γ_i, γ_i).


Proof. Write [f]_γ for the coefficient of x^γ in the polynomial f.
    We use induction on p. If p = 0, then h_{n+p,e}(x_1, . . . , x_n, x^{(β)}) = h_{n,e} and for every γ we have

        [h_{n,e}]_γ = 1 = ∏_{i=1}^{n} binom(γ_i, γ_i).

    Let p ≥ 1 and suppose β_1 ≥ 1. Write h_{n+p,e}(y) = ∑_{j=0}^{e} y_1^j h_{n+p−1,e−j}(y_2, . . . , y_{n+p}). Let η_1 =
(1, 0, . . . , 0) ∈ Z^n_{≥0}. We have

        [h_{n+p,e}(x_1, . . . , x_n, x^{(β)})]_γ = ∑_{j=0}^{γ_1} [h_{n+p−1,e−j}(x_1, . . . , x_n, x^{(β−η_1)})]_{γ−jη_1}.

Apply the inductive hypothesis to the summands of the right hand side to get

        [h_{n+p,e}(x_1, . . . , x_n, x^{(β)})]_γ = ∑_{j=0}^{γ_1} [binom(γ_1 − j + β_1 − 1, γ_1 − j) · ∏_{i=2}^{n} binom(β_i + γ_i, γ_i)]
                                                 = [∑_{j=0}^{γ_1} binom(β_1 − 1 + j, j)] ∏_{i=2}^{n} binom(β_i + γ_i, γ_i)
                                                 = ∏_{i=1}^{n} binom(β_i + γ_i, γ_i).
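
Proposition 5.2 is easy to test numerically (a hedged check, ours, for one choice of β): expand h_{n+p,e}(x_1, . . . , x_n, x^{(β)}) and compare each coefficient with the product of binomial coefficients.

```python
from itertools import combinations_with_replacement
from math import comb, prod
import sympy as sp

n, e = 2, 3
beta = (2, 1)                                    # |beta| = p = 3
x = sp.symbols('x1 x2')
ys = list(x) + [x[i] for i in range(n) for _ in range(beta[i])]   # (x_1, x_2, x^(beta))
h = sp.expand(sp.Add(*[sp.Mul(*m) for m in combinations_with_replacement(ys, e)]))
hp = sp.Poly(h, *x)
checks = []
for g1 in range(e + 1):
    gamma = (g1, e - g1)
    lhs = hp.coeff_monomial(x[0]**gamma[0] * x[1]**gamma[1])
    rhs = prod(comb(beta[i] + gamma[i], gamma[i]) for i in range(n))
    checks.append(lhs == rhs)
print(all(checks))   # True
```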

      Proposition 5.2 shows that the entries of the matrix representing the partial derivatives map of hn,d in
the monomial basis are products of binomial coefficients with a special combinatorial structure. Matrices
with this structure are the object of study of the Lindström-Gessel-Viennot theory on totally nonnegative
matrices (see, e. g., [11]). We will provide some results on matrices with this structure which will be used
to prove that the matrices described in Proposition 5.2 have full rank.
    Let a_1, . . . , a_N ∈ Z_{≥0} be nonnegative integers and let G(a_1, . . . , a_N) be the N × N symmetric matrix
whose (i, j)-th entry is binom(a_i + a_j, a_i). The Lindström-Gessel-Viennot Lemma (see [11, §2]) guarantees that
G(a_1, . . . , a_N) is a totally nonnegative matrix (in the sense that every minor is nonnegative), and its rank
is equal to the number of distinct a_i's. In particular, G(a_1, . . . , a_N) is always positive semidefinite and it is
positive definite if and only if the a_i's are distinct. Moreover, if a_{i_1} = a_{i_2} for some i_1, i_2, then the i_1-th and
i_2-th rows are equal.
    Given two matrices A, B of the same size, define A ⊙ B to be the Hadamard product of A and B. For
vectors a_1, . . . , a_N ∈ Z^m_{≥0}, with a_i = (a_{i,j})_{j=1,...,m}, define

        G(a_1, . . . , a_N) := G(a_{1,1}, . . . , a_{N,1}) ⊙ · · · ⊙ G(a_{1,m}, . . . , a_{N,m}).

Our goal is to prove that G(a1 , . . . , aN ) is positive definite if the ai are distinct.
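
The statement can be probed numerically (a small check, ours, assuming Proposition 5.5 below): build G as the Hadamard product of the coordinate-wise Lindström-Gessel-Viennot matrices and compare its rank with the number of distinct tuples.

```python
from math import comb
import sympy as sp

def G1(a):                    # LGV matrix with entries binom(a_i + a_j, a_i)
    return sp.Matrix(len(a), len(a), lambda i, j: comb(a[i] + a[j], a[i]))

def G(tuples):                # Hadamard product over the coordinates
    m = len(tuples[0])
    A = sp.ones(len(tuples), len(tuples))
    for c in range(m):
        A = A.multiply_elementwise(G1([t[c] for t in tuples]))
    return A

tuples = [(0, 2), (1, 1), (2, 0), (1, 1), (3, 1)]     # one repeated pair
print(G(tuples).rank(), len(set(tuples)))             # both are 4
```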
    We will need the following two technical results. Given a matrix A we denote by A_{i•} the i-th row and
by A_{•i} the i-th column of A, respectively, and by A^I_J the submatrix consisting of rows in the set I of indices
and columns in the set J of indices.
    The following observation was contributed by Noah Stein [31].


Lemma 5.3 (Stein). Let A be symmetric, positive semidefinite. Let I = {i_1, . . . , i_r} be a set of indices such
that the r vectors {A_{•i}}_{i∈I} are linearly independent. Then the principal submatrix A^I_I of A has full rank.

Proof. Without loss of generality, suppose I = {1, . . . , r} and let R = A^I_I. We want to prove that R has full
rank, namely that Ru = 0 for some u ∈ R^r implies u = 0. Let v ∈ R^n be such that v_i = u_i if i ≤ r and v_i = 0
if i > r. Since A is positive semidefinite, write A = B^T B. We have

        0 = u^T R u = v^T A v = v^T B^T B v = ‖Bv‖².

In particular Bv = 0, therefore Av = 0; since the first r columns of A are linearly independent, we deduce
v = 0, so that u = 0 and R is nonsingular.

Lemma 5.4. Let A, B be symmetric N × N matrices such that A is positive definite and B is positive
semidefinite with strictly positive diagonal entries. Then A ⊙ B is positive definite.

Proof. Given a symmetric matrix C, denote by C^{(k)} the k-th leading principal submatrix of C, namely the
k × k submatrix consisting of the first k rows and k columns of C. Then C is positive definite if and only
if det(C^{(k)}) is positive for every k. Therefore it suffices to show that the leading principal minors of A ⊙ B
are positive.
    Let k_0 be the smallest k such that det(B^{(k)}) = 0. From Schur's Product Theorem (see, e. g., [27,
Ex. 36.2.1]), the Hadamard product of positive definite matrices is positive definite. For every k < k_0, we
have that B^{(k)} is positive definite, so (A ⊙ B)^{(k)} = A^{(k)} ⊙ B^{(k)} is positive definite as well and in particular
it has positive determinant.
    If k ≥ k_0, from [23, Eqn. 1.11], we have

        det(A^{(k)} ⊙ B^{(k)}) + det(A^{(k)} B^{(k)}) ≥ det(A^{(k)}) ∏_{i=1}^{k} b_{ii} + det(B^{(k)}) ∏_{i=1}^{k} a_{ii}.

Now, for k ≥ k_0, B^{(k)} is a positive semidefinite matrix and B^{(k_0)} is a principal submatrix with det(B^{(k_0)}) = 0,
so det(B^{(k)}) = 0 as well. Therefore

        det((A ⊙ B)^{(k)}) ≥ det(A^{(k)}) ∏_{i=1}^{k} b_{ii} > 0,

proving that (A ⊙ B)^{(k)} has positive determinant.

Proposition 5.5. Let a_1, . . . , a_N ∈ Z^m_{≥0}. Then the rank of G(a_1, . . . , a_N) equals the number of distinct
m-tuples a_1, . . . , a_N.
Proof. We proceed by induction on m. For m = 1, the statement follows from the Lindström-Gessel-
Viennot Lemma.
    If m ≥ 2, for every i = 1, . . . , N, write a_i = (a′_i, a_{i,m}). Let A = G(a′_1, . . . , a′_N) and B = G(a_{1,m}, . . . , a_{N,m}).
By the induction hypothesis A is positive semidefinite and its rank is equal to the number of distinct a′_i's.
Similarly for B.
    Let C = G(a_1, . . . , a_N) = A ⊙ B. If two tuples a_i = a_j coincide, then A_{•i} = A_{•j} and B_{•i} = B_{•j}, so that the
corresponding two columns of C are equal. Conversely, if two columns C_{•i}, C_{•j} are equal, we show that the
corresponding m-tuples a_i and a_j are equal. Consider the principal 2 × 2 submatrix obtained from these
two columns:

        C^{ij}_{ij} = A^{ij}_{ij} ⊙ B^{ij}_{ij}.


If A_{•i} ≠ A_{•j}, then they are linearly independent by the induction hypothesis, and by Lemma 5.3 the
submatrix A^{ij}_{ij} is positive definite. The submatrix B^{ij}_{ij} is positive semidefinite and has strictly positive
diagonal entries, therefore by Lemma 5.4 the submatrix C^{ij}_{ij} is positive definite, in contradiction with the
assumption. This shows that if C_{•i} = C_{•j}, then A_{•i} = A_{•j} and therefore B_{•i} = B_{•j}. In particular a_i = a_j.
    Therefore, we may assume that C has distinct columns and our goal is to show that C has full rank.
Suppose by contradiction that C does not have full rank and let

        0 = α_1 C_{•1} + · · · + α_N C_{•N}                                       (5.1)

be a vanishing linear combination of the columns of C.
    Up to conjugation by a permutation matrix, suppose there exist 0 = k_0 < k_1 < · · · < k_r = N such
that a′_i = a′_j if k_s < i, j ≤ k_{s+1} and a′_i ≠ a′_j otherwise. Notice that if the a′_i's are distinct, then A has full
rank and so does C, by Lemma 5.4, because B is positive semidefinite with strictly positive entries.
Therefore, suppose k_1 ≥ 2 and, up to reducing to a principal submatrix, suppose that α_{k_1} ≠ 0. Since the
first k_1 columns (and rows) of A are equal, the first k_1 columns (and rows) of B are linearly independent:
otherwise two of them would be equal, implying that two m-tuples a_i and a_j for i, j ≤ k_1 would be equal.
By Lemma 5.3, the principal submatrix B^{1,...,k_1}_{1,...,k_1} is positive definite.
    The linear combination (5.1) can be written as

        0 = A_{•1} ⊙ (α_1 B_{•1} + · · · + α_{k_1} B_{•k_1}) + ∑_{i>k_1} α_i (A_{•i} ⊙ B_{•i}).

    Define Ã to be the matrix obtained from A by removing the first k_1 − 1 rows and columns, that is,
Ã = G(a′_{k_1}, . . . , a′_N); Ã has the same rank as A. Let B′ = P^T B P, where P is the matrix that agrees with
the identity except in its k_1-th column, which equals (α_1, . . . , α_{k_1}, 0, . . . , 0)^T; notice that B′ is obtained
from B by performing row and column operations, and P is invertible because α_{k_1} ≠ 0. In particular B′
has the same signature as B; moreover, from the block structure of P, we deduce that the submatrix
B′^{1,...,k_1}_{1,...,k_1} has the same signature as B^{1,...,k_1}_{1,...,k_1}, namely it is positive definite. This shows that the
k_1-th diagonal entry of B′ is strictly positive. Define B̃ to be the submatrix obtained from B′ by removing
the first k_1 − 1 rows and columns. Define C̃ = Ã ⊙ B̃. The linear combination (5.1) induces a vanishing
linear combination among the columns of C̃.
    By repeating this procedure at most r times, we find a singular r × r matrix of the form Ã ⊙ B̃ with Ã
positive semidefinite and of full rank (so positive definite) and B̃ positive semidefinite with strictly positive
diagonal entries. By Lemma 5.4, we obtain a contradiction. This concludes the proof.


    Using these results, we can finally prove:

Theorem 5.6. For every n, d, e, the flattening (hn,d )e,d−e : Se (Cn )∗ → Sd−e Cn of the complete symmetric
function hn,d has full rank.

Proof. First, we consider the case d = 2k even.
   It suffices to prove the result for e = k.
                      (hn,d)k,k(Sk(Cn)∗) = ⟨ ∂|β| hn,d / ∂xβ : |β| = k ⟩
                                         = ⟨ hn+k,k(x1, . . . , xn, x(β)) : |β| = k ⟩.

Define two column vectors
                         h = ( hn+k,k(x1, . . . , xn, x(β)) : |β| = k )T,
                         b = ( xβ : |β| = k )T.

From Proposition 5.1 and Proposition 5.2, we have h = Ab, where the (β, γ)-th entry of A is

                         Aβ,γ = ∏_{i=1}^{n} ( βi + γi choose γi ),

namely A = G(β : |β| = k). Since the multi-indices β are all distinct, by applying Proposition 5.5 we deduce that A is nonsingular, and therefore the entries of b are linear combinations of the entries of h. This shows that (hn,d)k,k has full rank.
    Now consider d = 2k + 1 odd. It suffices to prove the result for e = k + 1. Let

                         g = ∂hn,d/∂x1 = hn+1,d−1(x1, x1, x2, . . . , xn) ∈ S2k Cn.

The image of the flattening gk,k : Sk(Cn)∗ → Sk Cn is contained in the image of (hn,d)k+1,k. To conclude, we will show that gk,k has full rank.
    Let h = hn+1,d−1(y1, . . . , yn+1). By the result in the case of even degree, we know that (h)k,k has full rank, namely
                      ⟨ hn+1+k,k(y1, . . . , yn+1, y(β)) : |β| = k ⟩ = Sk Cn+1.
In particular, the image of this space under the specialization (y1, . . . , yn+1) = (x1, . . . , xn, x1) is Sk Cn.
    On the other hand, notice that

                 hn+1+k,k(y1, . . . , yn+1, y(β))|(y1,...,yn+1)=(x1,...,xn,x1) = hn+k+1,k(x1, . . . , xn, x(γ)),

where γ1 = β1 + βn+1 + 1 and γi = βi for i = 2, . . . , n (so that |γ| = k + 1). This shows that the image of gk,k is Sk Cn, and therefore (hn,d)k+1,k has full rank.
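   For readers who like to experiment, the key step of the proof is easy to check numerically. The short Python/SymPy script below is our own illustration, with helper names of our choosing (multi_indices, G), not code from the paper: it builds the matrix A = G(β : |β| = k) with entries ∏i (βi + γi choose γi) for a few small (n, k) and verifies that it is nonsingular, as Proposition 5.5 guarantees.

    # Sanity check (illustrative, not from the paper): the matrix with entries
    # prod_i binom(beta_i + gamma_i, gamma_i) over distinct multi-indices is nonsingular.
    from itertools import combinations_with_replacement
    import sympy as sp

    def multi_indices(n, k):
        """All exponent vectors beta in N^n with |beta| = k."""
        out = []
        for c in combinations_with_replacement(range(n), k):
            beta = [0] * n
            for i in c:
                beta[i] += 1
            out.append(tuple(beta))
        return out

    def G(betas):
        """Matrix whose (beta, gamma) entry is prod_i binom(beta_i + gamma_i, gamma_i)."""
        return sp.Matrix([[sp.prod(sp.binomial(b + c, c) for b, c in zip(beta, gamma))
                           for gamma in betas] for beta in betas])

    for n, k in [(2, 3), (3, 2), (3, 3)]:
        print(n, k, G(multi_indices(n, k)).det() != 0)   # expected: True (Proposition 5.5)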

    Theorem 5.6 implies


Corollary 5.7. For every n, d,

                         RS(hn,d) ≥ R̲S(hn,d) ≥ ( n + ⌊d/2⌋ − 1 choose ⌊d/2⌋ ).
   For readers familiar with cactus rank and cactus border rank, by [16, Thm. 5.3D], we obtain the same
lower bounds for cactus rank and cactus border rank.
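   The bound is also easy to check numerically for small parameters. The sketch below is our own Python/SymPy illustration (the helper names complete_symmetric and flattening_rank are ours, not from the paper): it computes the rank of the middle flattening of hn,d directly and compares it with the binomial in Corollary 5.7.

    # Sketch: compute rank of (h_{n,d})_{k,k} by applying all order-k monomial
    # differential operators to h_{n,d} and taking the rank of the coefficient matrix.
    from itertools import combinations_with_replacement
    import sympy as sp

    def complete_symmetric(xs, d):
        """h_{n,d}: the sum of all degree-d monomials in the variables xs."""
        return sum(sp.prod(c) for c in combinations_with_replacement(xs, d))

    def flattening_rank(p, xs, e):
        """Rank of the e-th partial derivative map p_{e,d-e}."""
        d = sp.Poly(p, *xs).total_degree()
        monos = [sp.prod(m) for m in combinations_with_replacement(xs, d - e)]
        rows = []
        for J in combinations_with_replacement(xs, e):
            g = p
            for v in J:
                g = sp.diff(g, v)
            rows.append([sp.Poly(sp.expand(g), *xs).coeff_monomial(m) for m in monos])
        return sp.Matrix(rows).rank()

    n, d = 3, 4
    xs = sp.symbols(f'x1:{n + 1}')
    k = d // 2
    print(flattening_rank(complete_symmetric(xs, d), xs, k),
          sp.binomial(n + k - 1, k))     # expected: 6 6 (full rank, matching Corollary 5.7)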


6     Koszul flattenings
We recall the definition of Koszul flattening introduced in [21]. A Koszul flattening map of p ∈ Sd CN depends on two parameters s, q: it is obtained by tensoring the s-th partial derivative map (1.1) with the identity map on the space Λq CN and then applying the exterior derivative map δq,d−s. In symbols:

         Λq CN ⊗ Ss(CN)∗  −−(id ⊗ ps,d−s)−→  Λq CN ⊗ Sd−s CN  −−(δq,d−s)−→  Λq+1 CN ⊗ Sd−s−1 CN,
               X ⊗ D   ↦   X ⊗ D(p),                X ⊗ g   ↦   ∑_{i=1}^{N} (xi ∧ X) ⊗ ∂g/∂xi.

     Let p∧q_{s,d−s} : Λq CN ⊗ Ss CN∗ → Λq+1 CN ⊗ Sd−s−1 CN denote the composition of the two maps. Explicitly, writing xI = xi1 ∧ · · · ∧ xiq for a q-tuple I = (i1, . . . , iq) and

                              ∂s/∂xJ = ∂/∂xj1 · · · ∂/∂xjs

for an s-tuple J = (j1, . . . , js), the map is

                              p∧q_{s,d−s} : xI ⊗ ∂s/∂xJ ↦ ∑_k xk ∧ xI ⊗ ∂s+1 p/(∂xk ∂xJ).
    Then, by [21, Prop. 4.1.1], one obtains the border rank lower bound

          R̲S(p) ≥ rank(p∧q_{s,d−s}) / rank((ℓd)∧q_{s,d−s}) = rank(p∧q_{s,d−s}) / ( N−1 choose q ).                (6.1)
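   To make the construction concrete, here is a minimal Python/SymPy sketch (the helper koszul_flattening is our own illustrative name, not the authors' implementation) that writes out the matrix of p∧q_{s,d−s} in the monomial/wedge bases and evaluates the bound (6.1) for a small example; the choice of "generic enough" prime coefficients below is an assumption made only for illustration.

    # Minimal sketch of the Koszul flattening matrix:
    #   x_I (x) d/dx_J  |->  sum_k sign(k, I) * x_k ^ x_I (x) d^{s+1} p / (dx_k dx_J).
    from itertools import combinations, combinations_with_replacement
    import sympy as sp

    def koszul_flattening(p, xs, q, s):
        """Matrix of p^{wedge q}_{s, d-s} in the monomial/wedge bases."""
        N = len(xs)
        dom = [(I, J) for I in combinations(range(N), q)
                      for J in combinations_with_replacement(range(N), s)]
        col_index, entries = {}, {}
        for c, (I, J) in enumerate(dom):
            dJp = p
            for j in J:
                dJp = sp.diff(dJp, xs[j])
            for k in range(N):
                if k in I:
                    continue                      # x_k wedge x_I = 0
                K = tuple(sorted(I + (k,)))
                sign = (-1) ** K.index(k)         # reorder x_k ^ x_I into increasing order
                g = sp.expand(sp.diff(dJp, xs[k]))
                for mono, coef in sp.Poly(g, *xs).terms():
                    r = col_index.setdefault((K, mono), len(col_index))
                    entries[(r, c)] = entries.get((r, c), 0) + sign * coef
        M = sp.zeros(len(col_index), len(dom))
        for (r, c), v in entries.items():
            M[r, c] = v
        return M

    # Example: a cubic in 3 variables with 'generic enough' (prime) coefficients, q = s = 1.
    xs = sp.symbols('x0:3')
    cubic_monomials = [sp.prod(c) for c in combinations_with_replacement(xs, 3)]
    p = sum(sp.prime(i + 1) * m for i, m in enumerate(cubic_monomials))
    r = koszul_flattening(p, xs, q=1, s=1).rank()
    print(r, sp.ceiling(sp.Rational(r, sp.binomial(len(xs) - 1, 1))))   # rank and the bound (6.1)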

    When ps,d−s is of maximal rank for all s, Koszul flattenings can only give a better Waring border rank lower bound than the partial derivative maps when d = 2k + 1 is odd and s = k. For example, if d = 2k is even, then the rank of the Koszul flattening is bounded above by dim(Λq CN ⊗ Sk−1 CN), so the Waring border rank lower bound from Koszul flattenings is bounded above by ( N+k−2 choose k−1 ) · N/(N − q), whereas from flattenings alone one already gets the larger lower bound of ( N+k−1 choose k ). Similarly, if d is odd but N ≠ 2q + 1, the bound from Koszul flattenings is lower than the one from standard flattenings.
    Suppose d = 2k + 1 and pk,k+1 is of maximal rank. Viewed naïvely, one might think Koszul flattenings could prove Waring border rank lower bounds of up to

              dim(Λq CN ⊗ Sk CN∗) / ( N−1 choose q ) = ( N+k−1 choose k ) · N/(N − q),


but this is not possible because the exterior derivative map is a GLN -module map that is not surjective
unless q = N, N − 1. Indeed, we have the decompositions

                               Λq CN ⊗ Sk+1 CN = S(k+1,1q) CN ⊕ S(k+2,1q−1) CN,
                               Λq+1 CN ⊗ Sk CN = S(k,1q+1) CN ⊕ S(k+1,1q) CN,

so, by Schur's Lemma, S(k+2,1q−1) CN = ker(δ) and image(δ) = S(k+1,1q) CN.
    The following result gives an a priori upper bound for the rank of the Koszul flattening, by determining a lower bound for the dimension of ker(p∧q_{k,k+1}).

Proposition 6.1. Let p ∈ S2k+1 CN. Then for every q,

                  rank(p∧q_{k,k+1}) ≤ ∑_{j=0}^{k} (−1)^j dim(Λq−j CN ⊗ Sk−j CN∗).                (6.2)

Proof. We will prove that

                  dim image(p∧(q−1)_{e−1,d−e+1}) ≤ dim ker(p∧q_{e,d−e})

and conclude that the estimate holds for every q via an induction argument. Indeed,

                  image(p∧(q−1)_{e−1,d−e+1}) ⊆ image(IdΛq CN ⊗ pe,d−e)

because the exterior derivative map takes derivatives on the factor Sd−e+1 CN. Moreover, since δ2 = 0, we have image(p∧(q−1)_{e−1,d−e+1}) ⊆ ker(δq,d−e). Therefore, passing to dimensions,

        dim image(p∧(q−1)_{e−1,d−e+1}) ≤ dim ker(δq,d−e |image(IdΛq CN ⊗ pe,d−e)) ≤ dim ker(p∧q_{e,d−e}).

Now,

        rank(p∧q_{k,k+1}) = dim(Λq CN ⊗ Sk CN∗) − dim ker(p∧q_{k,k+1})
                          ≤ dim(Λq CN ⊗ Sk CN∗) − rank(p∧(q−1)_{k−1,k+2}),

and we conclude by induction.
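   For orientation, the right-hand side of (6.2) is easy to evaluate. The one-off helper below is our own illustration (the name koszul_rank_bound is ours):

    # Evaluate the bound (6.2): an a priori upper bound on rank(p^{wedge q}_{k,k+1}).
    import sympy as sp

    def koszul_rank_bound(N, k, q):
        return sum((-1) ** j * sp.binomial(N, q - j) * sp.binomial(N + k - j - 1, k - j)
                   for j in range(k + 1))

    # 15 for a quintic in 3 variables with q = 1; by Remark 6.2 below, this odd value
    # cannot actually be attained, since here N = 2q + 1 with q odd.
    print(koszul_rank_bound(N=3, k=2, q=1))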

Remark 6.2. This is still not the end of the story: when N = 2q + 1 with q odd, the linear map p∧q_{k,k+1} : Λq CN ⊗ Sk CN∗ → Λq+1 CN ⊗ Sk CN ≅ Λq CN∗ ⊗ Sk CN was observed in [21], at least in certain cases, to be skew-symmetric. In particular, in this case, if the bound in (6.2) is odd, it cannot be attained.

Remark 6.3. Since the border rank bound is obtained by dividing rank(p∧q_{k,k+1}) by ( N−1 choose q ), the best potential lower bound is obtained when N = 2q + 1, and there the limit of the method is twice the bound obtained via flattenings, minus lower-order terms. This improvement is irrelevant for complexity. It is known more generally that, for any determinantal method, the improvement in the best possible lower bounds beyond the best possible bounds from partial derivatives is limited. This was observed independently by Efremenko, Garg, Oliveira and Wigderson [5] and by Gałązka [9], for completely different reasons.
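   As a numerical illustration of Remark 6.3 (again our own throwaway script, with a name of our choosing), the ceiling discussed above, dim(Λq CN ⊗ Sk CN∗)/( N−1 choose q ) = ( N+k−1 choose k ) · N/(N − q), exceeds the flattening bound exactly by the factor N/(N − q), which for N = 2q + 1 tends to 2 as q grows:

    # Compare the maximal conceivable Koszul bound with the plain flattening bound.
    import sympy as sp

    def naive_koszul_ceiling(N, k, q):
        return sp.binomial(N, q) * sp.binomial(N + k - 1, k) / sp.binomial(N - 1, q)

    for q in (1, 3, 10):
        N, k = 2 * q + 1, 5
        # ratio is N/(N - q) = (2q + 1)/(q + 1), approaching 2
        print(q, naive_koszul_ceiling(N, k, q) / sp.binomial(N + k - 1, k))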


   We now show Koszul flattenings can indeed give border rank lower bounds beyond the best lower
bound attainable via the method of partial derivatives.

Proposition 6.4. For all n > 2 and all q < n/2, the Koszul flattening (f̃n,k)∧q_{k,k+1} has rank at least

                         ( n−1 choose q ) ( ( n+k−1 choose k ) + q − 1 ).

In particular, R̲S(f̃n,k) ≥ ( n+k−1 choose k ) + q − 1, which is greater than the lower bound obtainable by flattenings.

Proof. For fixed n, recall the unique Sn-invariant g := gn ∈ S2 [n − 1, 1] from the proof of Theorem 2.5. Let [n] be the span of ℓ := ℓn. For every s, write ps := (ℓ∗)s−1 · (ℓqs−1), which from Eqn. (2.3) is a polynomial of degree s with non-zero projection onto [n]s. From Eqn. (2.2), the image of the k-th flattening map of f̃n,k is

                      image(f̃n,k)k,k+1 = ⊕_{s=0}^{k} ps+1 Sk−s [n − 1, 1] ⊆ Sk+1 Cn.


     We will give a lower bound for the dimension of the image of Λq Cn ⊗ image(f̃n,k)k,k+1 under the exterior derivative map. We have Λq Cn = Λq [n − 1, 1] ⊕ [n] ∧ Λq−1 [n − 1, 1] as an Sn-module.
     Consider the image of ([n] ∧ Λq−1 [n − 1, 1]) ⊗ (ps+1 Sk−s [n − 1, 1]) under the exterior derivative. The
Sn -equivariant projection of this space onto ([n] ∧ Λq−1 [n − 1, 1]) ⊗ ([n]s+1 ⊗ Sk−s [n − 1, 1]) commutes
with the exterior derivative, which is GLn -equivariant and therefore Sn -equivariant. The image of ([n] ∧
Λq−1 [n − 1, 1]) ⊗ ([n]s+1 ⊗ Sk−s [n − 1, 1]) under the exterior derivative is (when s ≤ k − 1 and after reorder-
ing the factors) the subspace S(k−s,1q−1 ) [n − 1, 1] ⊗ [n] ⊗ [n]s+1 ⊆ [n] ∧ Λq [n − 1, 1] ⊗ [n]s+1 Sk−s−1 [n − 1, 1].
     Now consider the image of Λq [n − 1, 1] ⊗ ps+1 Sk−s [n − 1, 1]. Again, consider its projection onto
Λq [n − 1, 1] ⊗ ([n]s+1 ⊗ Sk−s [n − 1, 1]). By applying the exterior derivative map, we obtain a subspace of

        (([n] ∧ Λq [n − 1, 1]) ⊗ ([n]s ⊗ Sk−s [n − 1, 1])) ⊕ (Λq+1 [n − 1, 1] ⊗ ([n]s+1 ⊗ Sk−s−1 [n − 1, 1]));

consider its projection to the second summand Λq+1 [n − 1, 1] ⊗ [n]s+1 Sk−s−1 [n − 1, 1] when s ≤ k − 1. For the same reason as above, the image of this projection is S(k−s,1q) [n − 1, 1] ⊗ [n]s+1 up to reordering the factors.
     Note that S(k−t+1,1q−1 ) [n − 1, 1] ⊕ S(k−t,1q ) [n − 1, 1] = Sk−t [n − 1, 1] ⊗ Λq [n − 1, 1] as a GL([n − 1, 1])-
module. Considering the summands for s ranging from 0 to k − 2 in the first case and from 1 to k − 1 in the second, we obtain components Sk−t [n − 1, 1] ⊗ Λq [n − 1, 1] for t from 1 to k − 1. We obtain a subspace in the image of the Koszul flattening that is isomorphic as a GL([n − 1, 1])-module to

                          ⊕_{t=1}^{k−1} ( Sk−t [n − 1, 1] ⊗ Λq [n − 1, 1] ).

The direct sum ⊕_{t=1}^{k−1} Sk−t [n − 1, 1] of the first factors above has dimension dim(Sk−1 Cn) − dim(S0 [n − 1, 1]) = dim(Sk−1 Cn) − 1.
    So far we have a contribution to the rank of

                          ( n−1 choose q ) ( ( n+k−2 choose k−1 ) − 1 ).


   Next consider the s = 0 contribution that one obtains by applying the exterior derivative to the
component with the factor Λq [n − 1, 1]. The exterior derivative map is

Λq [n − 1, 1] ⊗ ([n] ⊗ Sk [n − 1, 1]) → ((Λq [n − 1, 1] ∧ [n]) ⊗ Sk [n − 1, 1]) ⊕ (Λq [n − 1, 1] ⊗ ([n] ⊗ Sk−1 [n − 1, 1])).

This projects isomorphically onto the first term in the target, and (Λq [n − 1, 1] ∧ [n]) ⊗ Sk [n − 1, 1] does not intersect the image of any other term that we considered so far, so we obtain an additional contribution to the image of dimension ( n+k−2 choose k )( n−1 choose q ). We now have a contribution of

            ( n−1 choose q ) ( ( n+k−2 choose k−1 ) − 1 + ( n+k−2 choose k ) ) = ( n−1 choose q ) ( ( n+k−1 choose k ) − 1 )

to the rank.
     Finally consider the s = k and s = k − 1 terms in the summand with a factor [n] ∧ Λq−1 [n − 1, 1]: the sources are respectively ([n] ∧ Λq−1 [n − 1, 1]) ⊗ ⟨pk+1⟩ and ([n] ∧ Λq−1 [n − 1, 1]) ⊗ pk [n − 1, 1]. Consider the projections respectively to ([n] ∧ Λq−1 [n − 1, 1]) ⊗ ℓk−1 S2 [n − 1, 1] (the second factor is ℓk−1 g) and ([n] ∧ Λq−1 [n − 1, 1]) ⊗ ℓk−2 S3 [n − 1, 1] (the second factor is ℓk−2 g[n − 1, 1]). Now, applying the exterior derivative, these spaces map injectively to ([n] ∧ Λq [n − 1, 1]) ⊗ [n]k−1 [n − 1, 1] and ([n] ∧ Λq [n − 1, 1]) ⊗ ℓk−2 S2 [n − 1, 1], respectively. These targets do not appear in the other terms analyzed above, so we pick up

            ( n−1 choose q−1 ) ≥ (q/n) ( n−1 choose q )        and        (n − 1) ( n−1 choose q−1 ) ≥ (q(n − 1)/n) ( n−1 choose q )

additional contributions to the rank.
   Collecting all the contributions together, we conclude.

Remark 6.5. A more careful analysis of the q = 1 case shows it also improves the flattening lower bound.

Remark 6.6. Computer experiments performed in Macaulay2 [12], using a variant of the code of [26], indicate that the situation may be significantly better.
    For n = 3, . . . , 6 and k = 1, . . . , 6, let p ∈ S2k+1 Cn be generic. The Koszul flattening p∧1_{k,k+1} has rank equal to the bound in (6.2), except if n = 3 and k is even (in accordance with Remark 6.2).
    For n = 3, . . . , 6 and k = 1, . . . , 6, let p = hn,2k+1 or p = f̃n,k. The Koszul flattening p∧1_{k,k+1} has rank equal to the bound in (6.2) if k is odd, and one less than the bound in (6.2) if k is even.

    Notice that in the cases n = 4, 5, 6 with k even, the Koszul flattenings give a border rank lower bound for hn,2k+1 and f̃n,k that is one less than the bound for a generic polynomial.
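   Readers without Macaulay2 at hand can reproduce a tiny instance of these experiments by combining the illustrative helpers sketched earlier (complete_symmetric, koszul_flattening and koszul_rank_bound; all three are our own names, not the authors' Macaulay2 scripts), e.g. for hn,2k+1 with n = 3, k = 1 and q = 1:

    # Reuses the hypothetical helpers defined in the earlier sketches.
    import sympy as sp

    n, k, q = 3, 1, 1
    xs = sp.symbols(f'x1:{n + 1}')
    h = complete_symmetric(xs, 2 * k + 1)            # h_{3,3}
    rank = koszul_flattening(h, xs, q=q, s=k).rank()
    print(rank, koszul_rank_bound(n, k, q))          # Remark 6.6 reports equality when k is odd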

Question 6.7. What are the ranks of the Koszul flattenings for f̃n,k and hn,2k+1 in general?



References
 [1] OTTO B IERMANN: Über näherungsweise Cubaturen. Monatshefte für Mathematik und Physik,
     14(1):211–225, 1903. [doi:10.1007/BF01706869] 2, 3


 [2] W ERONIKA B UCZY ŃSKA AND JAROSŁAW B UCZY ŃSKI: Secant varieties to high degree Veronese
     reembeddings, catalecticant matrices and smoothable Gorenstein schemes. J. Algebraic Geom.,
     23(1):63–90, 2014. [doi:10.1090/S1056-3911-2013-00595-0, arXiv:1012.3563] 2

 [3] P ETER B ÜRGISSER , C HRISTIAN I KENMEYER , AND G RETA PANOVA: No occurrence obstructions
     in geometric complexity theory. J. Amer. Math. Soc., 32(1):163–193, 2019. [doi:10.1090/jams/908,
     arXiv:1604.06431] 5

 [4] X I C HEN , N EERAJ K AYAL , AND AVI W IGDERSON: Partial derivatives in arithmetic complexity
     and beyond. Found. Trends Theor. Comput. Sci., 6(1-2):1–138, 2010. [doi:10.1561/0400000043] 2

 [5] K LIM E FREMENKO , A NKIT G ARG , R AFAEL O LIVEIRA , AND AVI W IGDERSON: Barriers for
     rank methods in arithmetic complexity. In Proc. 9th Innovations in Theoret. Comp. Sci. conf.
     (ITCS’18), pp. 1:1–1:19. Leibniz Zentrum für Informatik, 2018. [doi:10.4230/LIPIcs.ITCS.2018.1,
     arXiv:1710.09502] 18

 [6] K LIM E FREMENKO , J OSEPH M. L ANDSBERG , H AL S CHENCK , AND J ERZY W EYMAN: The
     method of shifted partial derivatives cannot separate the permanent from the determinant. Mathe-
     matics of Computation, 87(312):2037–2045, 2018. [doi:10.1090/mcom/3284, arXiv:1609.02103] 4,
     5, 10

 [7] H ERVÉ F OURNIER , N UTAN L IMAYE , M EENA M AHAJAN , AND S RIKANTH S RINIVASAN: The
     shifted partial derivative complexity of elementary symmetric polynomials. Theory of Computing,
     13(9):1–34, 2017. Preliminary version in MFCS’15. [doi:10.4086/toc.2017.v013a009] 5

 [8] W ILLIAM F ULTON AND J OE H ARRIS: Representation Theory: A First Course. Springer, 1991.
     [doi:10.1007/978-1-4612-0979-9] 7

 [9] M ACIEJ G AŁĄZKA: Vector bundles give equations of cactus varieties. Lin. Alg. Appl., 521:254–262,
     2017. [doi:10.1016/j.laa.2016.12.005, arXiv:1605.08005] 18

[10] F ULVIO G ESMUNDO , C HRISTIAN I KENMEYER , AND G RETA PANOVA: Geometric complexity the-
     ory and matrix powering. Diff. Geom. Appl., 55:106–127, 2017. [doi:10.1016/j.difgeo.2017.07.001,
     arXiv:1611.00827] 5

[11] I RA G ESSEL AND G ÉRARD V IENNOT: Binomial determinants, paths, and hook length formulae.
     Adv. in Math., 58(3):300–321, 1985. [doi:10.1016/0001-8708(85)90121-5] 13

[12] DANIEL R. G RAYSON AND M IKE E. S TILLMAN: Macaulay2, a software system for research in
     algebraic geometry. Available from the Macaulay2 website. 20

[13] E DWARD L. G REEN: Complete intersections and Gorenstein ideals. J. Algebra, 52(1):264–273,
     1978. [doi:10.1016/0021-8693(78)90274-0] 2

[14] A NKIT G UPTA , P RITISH K AMATH , N EERAJ K AYAL , AND R AMPRASAD S APTHARISHI: Arith-
     metic circuits: A chasm at depth 3. SIAM J. Comput., 45(3):1064–1079, 2016. Preliminary version
     in FOCS’13. [doi:10.1137/140957123] 3


[15] A NTONY I ARROBINO AND JACQUES E MSALEM: Some zero-dimensional generic singularities;
     finite algebras having small tangent space. Compositio Math., 36(2):145–188, 1978. NUMDAM. 2

[16] A NTONY I ARROBINO AND VASSIL K ANEV: Power Sums, Gorenstein Algebras, and Determinantal
     Loci. Springer, 1999. [doi:10.1007/BFb0093426] 2, 17

[17] C HRISTIAN I KENMEYER AND G RETA PANOVA: Rectangular Kronecker coefficients and plethysms
     in geometric complexity theory. Adv. Math., 319:40–66, 2017. Preliminary version in FOCS’16.
     [doi:10.1016/j.aim.2017.08.024, arXiv:1512.03798] 5

[18] N EERAJ K AYAL: An exponential lower bound for the sum of powers of bounded degree polynomials.
     Electr. Coll. Comp. Compl. (ECCC), 81, 2012. 3

[19] J OSEPH M. L ANDSBERG: Tensors: Geometry and Applications. Amer. Math. Soc., 2012.
     [doi:10.1090/gsm/128] 7

[20] J OSEPH M. L ANDSBERG: Geometry and Complexity Theory. Cambridge University Press, 2017.
     [doi:10.1017/9781108183192] 2

[21] J OSEPH M. L ANDSBERG AND G IORGIO OTTAVIANI: Equations for secant varieties of
     Veronese and other varieties. Annali di Matematica Pura ed Applicata, 192(4):569–606, 2013.
     [doi:10.1007/s10231-011-0238-6, arXiv:1111.4567] 17, 18

[22] H WANGRAE L EE: Power sum decompositions of elementary symmetric polynomials. Lin. Alg.
     Appl., 492:89–97, 2016. [doi:10.1016/j.laa.2015.11.018, arXiv:1508.05235] 5

[23] M. S TUART LYNN: On the Schur product of H-matrices and non-negative matri-
     ces, and related inequalities. Math. Proc. Cambridge Phil. Soc., 60(3):425–431, 1964.
     [doi:10.1017/S0305004100037932] 14

[24] N OAM N ISAN: Lower bounds for non-commutative computation. In Proc. 23rd STOC, pp. 410–418.
     ACM Press, 1991. [doi:10.1145/103418.103462] 4, 5

[25] N OAM N ISAN AND AVI W IGDERSON: Lower bounds on arithmetic circuits via partial
     derivatives. Comput. Complexity, 6(3):217–234, 1996. Preliminary version in FOCS’95.
     [doi:10.1007/BF01294256] 5

[26] L UKE O EDING AND G IORGIO OTTAVIANI: Eigenvectors of tensors and algorithms for Waring
     decomposition. J. Symb. Comput., 54:9–35, 2013. [doi:10.1016/j.jsc.2012.11.005, arXiv:1103.0203]
     20

[27] V IKTOR V. P RASOLOV: Problems and Theorems in Linear Algebra. Amer. Math. Soc., 1994.
     [doi:10.1090/mmono/134] 6, 14

[28] B RUCE R EZNICK: Sums of even powers of real linear forms. Mem. Amer. Math. Soc., 96(463),
     1992. [doi:10.1090/memo/0463] 2, 3, 6


[29] A MIR S HPILKA AND AVI W IGDERSON: Depth-3 arithmetic circuits over fields of char-
     acteristic zero. Comput. Complexity, 10(1):1–27, 2001. Preliminary version in CCC’99.
     [doi:10.1007/PL00001609] 5

[30] R ICHARD P. S TANLEY: Hilbert functions of graded algebras. Adv. in Math., 28(1):57–83, 1978.
     [doi:10.1016/0001-8708(78)90045-2] 10

[31] N OAH S TEIN: Full rank submatrices of positive semidefinite matrix, 1917. MathOverflow. 13

[32] JAMES J. S YLVESTER: On the principles of the calculus of forms. Cambridge and Dublin Math. J.,
     7:52–97, 1852. 2


AUTHORS

      Fulvio Gesmundo
      Postdoc at QMath
      University of Copenhagen
      Denmark
      fulges@math.ku.dk


      Joseph M. Landsberg
      Professor
      Dept. of Mathematics
      Texas A&M University, College Station, TX
      jml@math.tamu.edu
      http://www.math.tamu.edu/~jml


ABOUT THE AUTHORS

      F ULVIO G ESMUNDO works in algebraic geometry and representation theory, focusing on
         problems originating in theoretical computer science and quantum information theory.
         He obtained his M. Sc. in Mathematics in 2012 at the University of Florence (Italy) under
         the supervision of Giorgio Ottaviani, and his Ph. D. in Mathematics in 2017 at Texas
         A&M University under the supervision of Joseph M. Landsberg. His Master’s Thesis
         deals with the classical tensor rank problem in algebraic geometry. His Ph. D. thesis
         concerns the study of geometric and representation theoretic aspects of matrix rigidity,
         a problem originally proposed by L. Valiant in the 70s to prove lower bounds on the
         complexity of the discrete Fourier transform. Gesmundo is currently a postdoc at the
         University of Copenhagen, where his research focuses on tensor decomposition problems
         related to quantum information.





J OSEPH M. L ANDSBERG works on questions at the interface of algebraic geometry, differ-
    ential geometry and representation theory. His current research is focused on geometric
    problems originating in complexity theory. Landsberg obtained his Ph. D. in 1990
    from Duke University under the direction of Robert Bryant on minimal submanifolds
    and a generalization of calibrations. He has directed ten Ph. D. theses, is the author
    of three monographs, Tensors: Asymptotic Geometry and Developments 2016-2018
    (AMS-CBMS), Geometry and Complexity Theory (Cambridge), and Tensors: Geometry
    and Applications (AMS), a coauthor of the monograph Cartan for Beginners (AMS),
    and has published over 75 research articles. In Fall 2014 he co-organized the semester
    Algorithms and Complexity in Algebraic Geometry at the Simons Institute for the Theory
    of Computing, Berkeley. At the same time he was also the Chancellor’s Professor at U. C.
    Berkeley and gave a course on geometry and complexity theory.



