> <\body> |>|<\doc-note> This paper is part of a project that has received funding from the French \PAgence de l'innovation de défense\Q. ||<\author-affiliation> CNRS (UMI 3069, PIMS) Department of Mathematics Simon Fraser University 8888 University Drive Burnaby, British Columbia V5A 1S6, Canada |<\author-affiliation> CNRS, École polytechnique, Institut Polytechnique de Paris Laboratoire d'informatique de l'École polytechnique (LIX, UMR 7161) 1, rue Honoré d'Estienne d'Orves Bâtiment Alan Turing, CS35003 91120 Palaiseau, France |>>||<\author-affiliation> CNRS, École polytechnique, Institut Polytechnique de Paris Laboratoire d'informatique de l'École polytechnique (LIX, UMR 7161) 1, rue Honoré d'Estienne d'Orves Bâtiment Alan Turing, CS35003 91120 Palaiseau, France |>>||<\doc-note> > The efficient evaluation of multivariate polynomials at many points is an important operation for polynomial system solving. Kedlaya and Umans have recently devised a theoretically efficient algorithm for this task when the coefficients are integers or when they lie in a finite field. In this paper, we assume that the set of points where we need to evaluate is fixed and \Psufficiently generic\Q. Under these restrictions, we present a quasi-optimal algorithm for multi-point evaluation over general fields. We also present a quasi-optimal algorithm for the opposite interpolation task. > Let > be an effective field, so that we have algorithms for the field operations. Given a polynomial \,\,x|]>> and a tuple =,\,\|)>>\|)>> of points, the computation of |)>=|)>,\,P|)>|)>\\> is called the problem of multi-point evaluation. The converse problem is called interpolation and takes a candidate support of as input. This problem naturally relates to several areas of applied algebra, including polynomial system solving, since we may use it to verify whether all points in a given set are solutions to a system of polynomial equations.
In, it has even been shown that efficient algorithms for multi-point evaluation lead to efficient algorithms for polynomial system solving. As another, more specific, application, bivariate polynomial evaluation arises in the computation of generator matrices of geometric error correcting codes. An important particular case of interpolation concerns polynomials with prescribed supports that vanish at a given set of points. This happens in list decoding algorithms for Reed\USolomon error correcting codes; see recent complexity bounds in. In the bivariate case, this task also occurs in the Brill\UNoether strategy for computing Riemann\URoch spaces; see recent advances in. Further applications can be found in. In the univariate case when , it is well known that one may use so-called \Premainder trees\Q to compute multi-point evaluations in quasi-optimal time. More precisely, if > stands for the cost to multiply two univariate polynomials of degree d> (in terms of the number of field operations in >), then the multi-point evaluation of a polynomial of degree d> at points can be computed in time *log d|)>>. A similar complexity bound holds for the opposite operation of interpolation. The constants hidden in the latter \P\Q have been studied in. Recently, and under suitable technical assumptions, it has been shown that the cost of univariate multi-point evaluation (and interpolation) drops to *log d/log log d|)>> if the set of evaluation points is fixed. In this case, we do not count the cost of certain precomputations that only depend on the evaluation points and not on the polynomials to evaluate. We may also speak about the \Pamortized cost\Q of multi-point evaluation. In the multivariate case when 2>, no quasi-optimal algorithms are currently known for multi-point evaluation. From a theoretical point of view, the best bit-complexity bounds are due to Kedlaya and Umans, in the special cases when the coefficients of our polynomials are modular integers or when they lie in a finite field.
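Since the univariate remainder-tree technique recalled above underlies everything that follows, here is a minimal self-contained sketch of the basic fact it exploits. The code uses naive quadratic polynomial arithmetic (not the quasi-optimal *log d|)>> algorithm): P rem (x - a) equals P(a), and reducing P modulo the product of all (x - a_i) first does not change any of the values.

```python
# Minimal sketch with naive quadratic arithmetic (not the quasi-optimal
# algorithm of the paper): P rem (x - a) equals P(a), and reducing P modulo
# the product of all (x - a_i) first does not change any of the values.
from fractions import Fraction
from functools import reduce

def poly_mul(a, b):                      # coefficient lists, low degree first
    c = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

def poly_rem(a, b):                      # remainder of a modulo b
    a = [Fraction(x) for x in a]
    while len(a) >= len(b) and any(a):
        q = a[-1] / b[-1]
        for i, y in enumerate(b):
            a[len(a) - len(b) + i] -= q * y
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return a

def eval_poly(a, x):
    return sum(c * x**i for i, c in enumerate(a))

P = [5, -3, 0, 2, 1]                     # 5 - 3x + 2x^3 + x^4
pts = [1, 2, 3, 4]
A = reduce(poly_mul, [[-a, 1] for a in pts])   # (x-1)(x-2)(x-3)(x-4)
R = poly_rem(P, A)                       # deg R < 4, same values at the points
assert all(eval_poly(R, a) == eval_poly(P, a) for a in pts)
assert all(poly_rem(P, [-a, 1]) == [eval_poly(P, a)] for a in pts)
```

The point values P and pts are arbitrary illustrative data; the identity holds for any field and any distinct points.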
They also related the problem to other important operations such as modular composition and power projection. Their results have recently been refined in. Over general fields and for 2>, the best known bound is > P*> P|)>|)>>, due to Nüsken and Ziegler. The multivariate interpolation problem turns out to be more intricate than evaluation, and fewer efficient algorithms are known. Using naive linear algebra, one may design polynomial time algorithms, but with costs that are higher than quadratic in. To our knowledge, no interpolation method has been designed in the vein of the Kedlaya\UUmans evaluation algorithms. Several methods are surveyed in for instance, but, here, we will focus on symbolic approaches and their asymptotic complexity bounds. Concerning the computation of polynomials that vanish at a given set of points, with suitable constraints on the supports, early methods can be found in: Gröbner bases of the ideal made of these polynomials are obtained with cost at least cubic in. After a generic change of coordinates, the lexicographic basis satisfies the \Pshape lemma\Q and it can be computed in softly linear time by means of univariate interpolations. In other words, our set of points is the solution set of a system |)>=0> and =v|)>> for ,n>, where =d> and \d>. From > and the >, a Gröbner basis for any other monomial ordering can be recovered with >|)>> field operations thanks to the algorithm designed in. In the bivariate case, with generic coordinates, from the latter univariate parametrization, and given \d>, we may compute a basis of the |]>>-module of polynomials of degree \> in > that vanish at ,\,\>. Indeed, this reduces to computing a basis of the kernel of the map <\eqnarray*> |]>|]>\>>|>||]>/|)>|)>>>||)>+\+a-1>|)>*x-1>>|>||)>+\+a-1>|)>*v|)>-1>.>>>> This kernel is a free |]>>-module of rank >. The basis in Popov form can be computed with -1>*d|)>> operations in >; this complexity bound is due to Neiger.
Taking \>, the latter cost rewrites +1|)>/2>|)>>. The bivariate interpolation problem can be solved with the same kind of complexity bound by means of linear algebra algorithms that exploit displacement ranks. In a more general setting, one may consider evaluation at \Pmultisets of points\Q in the sense that values of a polynomial and of certain of its derivatives are involved. The converse problem is usually called Hermite interpolation: values for the polynomial and its derivatives are prescribed. We will not address these extended problems in the present paper. In this paper, we turn our attention to the amortized cost of multivariate multi-point evaluation. Given a \Psufficiently generic\Q tuple >> of evaluation points, we first construct a suitable \Pevaluation tree\Q that is similar to the remainder trees from the univariate case. After this precomputation, we next show how to evaluate polynomials at > in quasi-optimal time. For instance, for a fixed dimension , a sufficiently large base field >, and a polynomial of partial degrees > P\>> for ,n>>, we show how to compute |)>>> in time *log d|)>>. We also show how to do the opposite operation of interpolation with a similar complexity. It can be shown that the \Pconstant factors\Q in the big-Oh hide a factorial dependence in . Our algorithms rely on suitable genericity hypotheses that ensure the existence of Gröbner bases of a specific shape for the vanishing ideal >> at the evaluation points. This will be detailed in section. Another key ingredient is an algorithm from for the relaxed reduction of polynomials with respect to such Gröbner bases or more general sets of polynomials. In this paper, all reductions will be done with respect to so-called \Paxial bases\Q, which provide sufficiently good approximations of the actual Gröbner bases of >>. This will be detailed in section. Having an efficient algorithm for reducing polynomials with respect to >>, we may use it as a generalization of Euclidean division.
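As a concrete illustration of reduction as a generalization of Euclidean division, here is a toy sketch on hypothetical bivariate data: a dictionary-based multivariate division by a set whose leading monomials are the pure powers x^2 and y^2 (the axial shape used later in the paper), always privileging the lowest index. This is naive quadratic-time code, not the relaxed reduction algorithm the paper relies on; the divisors and the graded ordering with unit weights are illustrative choices.

```python
# Toy extended reduction by an "axial" set (hypothetical bivariate data):
# divisors with leading monomials x^2 and y^2, lowest index always privileged,
# under the graded lexicographic ordering (unit weights).  Naive, not relaxed.
def key(e):                               # graded lexicographic comparison key
    return (sum(e), e)

def lead(p):
    return max(p, key=key)

def mul(p, q):                            # sparse product of exponent dicts
    r = {}
    for e, a in p.items():
        for f, b in q.items():
            g = tuple(x + y for x, y in zip(e, f))
            r[g] = r.get(g, 0) + a * b
    return {e: c for e, c in r.items() if c}

def reduce_axial(P, divisors):            # returns (Q_1, ..., Q_m), R
    P = {e: c for e, c in P.items() if c}
    Q = [dict() for _ in divisors]
    leads = [lead(A) for A in divisors]
    while True:
        cand = None
        for e in sorted(P, key=key, reverse=True):
            for i, l in enumerate(leads):
                if all(e[k] >= l[k] for k in range(len(e))):
                    cand = (e, i)         # lowest index privileged
                    break
            if cand:
                break
        if cand is None:
            return Q, P                   # R = P is now reduced
        e, i = cand
        c, l = P[e], leads[i]
        s = tuple(e[k] - l[k] for k in range(len(e)))
        Q[i][s] = Q[i].get(s, 0) + c
        for m, a in divisors[i].items():  # P -= c * x^s * A_i
            g = tuple(s[k] + m[k] for k in range(len(m)))
            P[g] = P.get(g, 0) - c * a
            if P[g] == 0:
                del P[g]

A1 = {(2, 0): 1, (0, 1): -1}              # x^2 - y
A2 = {(0, 2): 1, (1, 0): -1}              # y^2 - x
P = {(2, 2): 1, (1, 0): 1}                # x^2 y^2 + x
Q, R = reduce_axial(P, [A1, A2])
check = dict(R)                           # verify P = Q1*A1 + Q2*A2 + R
for Qi, Ai in zip(Q, [A1, A2]):
    for e, c in mul(Qi, Ai).items():
        check[e] = check.get(e, 0) + c
assert {e: c for e, c in check.items() if c} == P
```

The remainder R contains only monomials divisible by neither x^2 nor y^2, exactly as in the univariate Euclidean division by a single monic polynomial.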
In sections and, this allows us to generalize the remainder tree technique to higher dimensions, and establish our complexity bounds for amortized multi-point evaluation and interpolation. It is also noteworthy that the necessary precomputations can be done using linear algebra in time >|)>>, where \2> is a feasible exponent for matrix multiplication. When comparing our new amortized bound *log d|)>> with the traditional bound *log d|)>> in the univariate case, we note that we lose a factor d>. This is due to the fact that we rely on relaxed computations for our polynomial reductions. With a bit of work, a factor might be removed by exploiting the fact that our polynomials are not really sparse, so that we may take =O|)>> in the proof of Theorem (under our assumption that is fixed and with the notation > defined in). It is plausible that the second logarithmic factor can be further reduced using techniques from or a clever Newton iteration. Our genericity assumptions can also be weakened significantly: with the notations from section, it is essentially sufficient to assume that |\|>=O>. Another interesting question is whether the cost >|)>> of the precomputations can be lowered. Under additional genericity assumptions, this is indeed possible when : using the probabilistic algorithm of Las Vegas type from, the precomputations take an expected number of +1|)>/2+o>|)>> field operations. For complexity analyses, we will only consider algebraic complexity models (such as computation trees). In other words, we will simply count the number of arithmetic operations and zero-tests in >. Throughout this paper, we assume > to be a feasible exponent for matrix multiplication with \2>. This means that two n> matrices in n>> can be multiplied in time >|)>>. Le Gall has shown in that one may take \2.373>. We denote by > the time that is needed to compute a product of two polynomials \> of degree d>. We make the usual assumption that /d>> is nondecreasing as a function of .
It is also convenient to assume that =O|)>> for >. Using a variant of Schönhage\UStrassen's algorithm, it is well known that =O>. If we restrict our attention to fields > of positive characteristic, then we may take =O> n>|)>>. In order to use the extended multivariate reduction algorithms from, sparse polynomial arithmetic is needed. A sparse representation of a polynomial in ,\,x|]>> is a data structure that stores the set of the non-zero terms of . Each such term is a pair made of a coefficient and a degree vector. In an algebraic complexity model, the bit size of the exponents counts for free, so the relevant size of such a polynomial is the cardinality of its support. Consider two polynomials and of ,\,x|]>> in sparse representation. An extensive literature exists on the general problem of multiplying and ; see for a recent survey. For the purposes of this paper, a superset > for the support of will always be known. Then we define > to be the cost to compute , where is the maximum of the sizes of such a superset > and the supports of and . Under suitable assumptions, the following proposition will allow us to take =O*log s|)>> in our multivariate evaluation and interpolation algorithms. <\proposition> Let ,\,\> be positive integers and let > in > be of multiplicative order at least \\*\*\>. <\enumerate-roman> The set > made of the products >*\*\>*\*\*\*\*\>> for ,\,e|)>\,\|}>> can be computed using |)>> operations in >. Let and be in ,\,x|]>>, in sparse representation, and let > be a superset of the support of . Assume that >\\> for ,n>, and that > has been precomputed. Then the product > can be computed using *log s|)>> operations in >, where denotes the maximum of the sizes of > and the supports of and . <\proof> The first statement is straightforward. The second one is simply adapted from 6>. Computing an element of sufficiently large multiplicative order depends on the ground field >. In characteristic zero, any integer different from and will do.
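The following sketch is not the algorithm of the proposition (which works with genuine powers of the element of large order in the field), but it illustrates the combinatorial principle behind it: a mixed-radix (Kronecker-style) encoding of exponent vectors is injective as soon as the radices exceed the partial degrees of the product, so a sparse multivariate product collapses to a sparse univariate one. The polynomials and radices below are hypothetical illustrative data.

```python
# Kronecker-style mixed-radix encoding of exponent vectors (illustrative
# principle only, not the proposition's algorithm): injective when radices
# exceed the partial degrees of the product P*Q.
def encode(e, radices):
    code, stride = 0, 1
    for ei, ri in zip(e, radices):
        code += ei * stride
        stride *= ri
    return code

def decode(code, radices):
    e = []
    for ri in radices:
        e.append(code % ri)
        code //= ri
    return tuple(e)

def sparse_mul(P, Q, radices):           # dicts: exponent vector -> coefficient
    p1 = {encode(e, radices): c for e, c in P.items()}
    q1 = {encode(e, radices): c for e, c in Q.items()}
    r1 = {}
    for i, a in p1.items():              # sparse univariate product
        for j, b in q1.items():
            r1[i + j] = r1.get(i + j, 0) + a * b
    return {decode(k, radices): c for k, c in r1.items() if c}

P = {(0, 0): 1, (2, 1): 3}               # 1 + 3 x^2 y
Q = {(1, 0): 2, (0, 2): -1}              # 2 x - y^2
radices = (5, 4)                         # exceed partial degrees of P*Q
R = sparse_mul(P, Q, radices)
assert R == {(1, 0): 2, (0, 2): -1, (3, 1): 6, (2, 3): -3}
```

Since code sums correspond to exponent sums exactly when no mixed-radix carries occur, the bound on the radices plays the same role as the lower bound on the multiplicative order of the element in the proposition.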
For finite fields > of characteristic 0>, working in an algebraic extension >>> yields elements of order up to >-1>. In the context of Proposition, we may take =/log q|)>>>. In infinite fields > of positive characteristic 0>, elements of arbitrarily large orders exist, but they may be hard to access without prior knowledge. However, this is mostly a theoretical problem: in practice, we usually either know a transcendental element of infinite order or a way to construct arbitrarily large finite fields inside >. For variables =,\,x|)>> and exponents =,\,i|)>\\>, we define >\x>*\*x>>, \>\\\\|}>>, and |]>\\,\,x|]>>. The monoid > comes with a natural partial ordering > that is defined by <\equation*> \>\\>\i\j\\\i\j. We also assume that we have fixed a total admissible ordering > on > that extends > and which turns > into a totally ordered monoid. Given a polynomial <\equation*> P=\\>P>*\\\, we call \\\P>\0|}>> its support. If 0>, then we define <\equation*> \\max> supp f to be the dominant monomial of . Let =,\,\|}>> be a finite set of monomials. Given a polynomial *\+\+c*\> and a finite tuple of points =,\,\|)>\|)>>, we have <\eqnarray*> |)>>>|>>||)>>>>>>>|||)>>|>||)>>>|>||>>||)>>|>||)>>>>>>*>>|>>|>>>>>.>>>> The set of tuples of points for which the matrix <\eqnarray*> ,\>>|>||)>>|>||)>>>|>||>>||)>>|>||)>>>>>>>>>> is non-invertible forms a proper Zariski closed subset of |)>>. If > is algebraically closed, then its open complement is actually non-empty: <\lemma> Assume that > is algebraically closed. Then there exists a tuple of points \|)>> for which ,\>> is invertible. <\proof> Let ,\,t|)>\\> be a point such that ,\,\>> are pairwise distinct; such a point exists since > is algebraically closed. Now take =,\,t|)>>> for ,d>>. Then we have <\eqnarray*> ,\>>|||)>>|>||)>>>|>||>>||)>>|>||)>>>>>>,>>>> whence ,\>> is an invertible Vandermonde matrix. The lemma shows that ,\>> is invertible for a \Pgeneric\Q tuple of points \|)>>.
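The lemma's construction can be checked exactly on toy data. Below, t = (2, 3) and the monomial set {1, x, y, xy} are hypothetical choices: the monomial values at t are the pairwise distinct integers 1, 2, 3, 6, and taking the k-th point to be the k-th coordinatewise power of t turns the matrix into a Vandermonde matrix in these values, hence invertible.

```python
# Exact toy check of the lemma's construction (hypothetical data): with
# t = (2, 3), the monomial values at t for {1, x, y, xy} are the pairwise
# distinct u = 1, 2, 3, 6; taking alpha_k = (t1^k, t2^k) makes the evaluation
# matrix a Vandermonde matrix in the u_j.
from itertools import combinations

E = [(0, 0), (1, 0), (0, 1), (1, 1)]     # monomials 1, x, y, xy
t = (2, 3)
u = [t[0]**i * t[1]**j for (i, j) in E]  # 1, 2, 3, 6
alphas = [(t[0]**k, t[1]**k) for k in range(len(E))]

M = [[a**i * b**j for (i, j) in E] for (a, b) in alphas]
V = [[uj**k for uj in u] for k in range(len(E))]
assert M == V                            # the matrix is Vandermonde in the u_j

def det(m):                              # cofactor expansion, exact integers
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**c * m[0][c] * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

vprod = 1                                # Vandermonde determinant formula
for i, j in combinations(range(len(u)), 2):
    vprod *= u[j] - u[i]
assert det(M) == vprod != 0
```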
Assume from now on that this is indeed the case and let us write >> for the ideal of polynomials \|]>> such that |)>=\=P|)>=0>. For any other monomial \\\\> and -*\+\+c*\|)>>, we have <\eqnarray*> |)>>>|>>||)>>>>>>>|||)>>>|>>||)>>>>>>-M,\>*>>|>>|>>>>>,>>>> whence I>> if and only if <\eqnarray*> >>|>>|>>>>>>||,\>*|)>>>|>>||)>>>>>>.>>>> This shows that +I>,\,\+I>> form a basis for the quotient algebra |]>/I>> and it provides us with a formula to express +I>> in terms of these basis elements. Let \\\\\\> be such that \,\,\|}>> is the set of smallest elements of > for each \> and with respect to the total ordering >. We define > to be the set of smallest elements of the complement \\> for >. By Dickson's lemma, this set is finite and it forms an antichain for >. Let \|)>> be a generic tuple of points in the sense that ,\>> is invertible. <\proposition> For each \\>, let <\eqnarray*> ,\>>||-*\+\+c*\|)>,>>>> where ,\,c> satisfy > for =\>. Then >\,\>\\\\|}>> forms the reduced Gröbner basis of >> with respect to >. <\proof> Let \\> be the set of dominant monomials of non-zero elements of >>. Given ,d|}>>, we have \\>: otherwise, +c*\+\+c*\\I>> for certain ,\,c\\>>, which is impossible since ,\>> is invertible. Conversely, implies that \\> for all \\\\>, whence =\\\>. Now it is well known that > is a finite segment of > for > and that each minimal element > corresponds to exactly one polynomial >> in the reduced Gröbner basis with dominant monomial >>=\>. Now such a reduced polynomial is by definition of the form >=\+\> with \\\\=\>. But shows how to compute the unique polynomial >=G,\>> of that form. <\example> Let > be a graded ordering for 2>. For any n+1>, we have =1> and =x> for ,n>. The >-th column of ,\>> contains the -th coordinates of the points of >. If these columns are not linearly independent, then ,\>> is not invertible.
If is any n> invertible matrix over >, and if |)>> denotes ,\,A*\>, then the columns from to of |)>,\>> are linearly dependent, so |)>,\>> is not invertible. This example illustrates that the genericity of a tuple of points \|)>> cannot necessarily be recovered after a linear change of coordinates. Let =>\\\\|}>> be as in the previous section, with a total admissible ordering >,> and let \|)>> be a generic tuple of points. We need a way to efficiently reduce polynomials \|]>> with respect to the Gröbner basis >>. Since the entire Gröbner basis can be voluminous to store, we will only reduce with respect to a special subset >> of \Paxial\Q basis elements. This also requires us to work with respect to a weighted total degree ordering >. For ,d>, we define <\eqnarray*> >|>|\\x\\|}>,>>>> so that >> contains a unique element ,i>\G,\>> with dominant monomial \x>>. We define >\,i>\i=1,\,n|}>> to be the axial basis of >>. Although this set is not a \Pbasis\Q of >>, it forms a sufficiently good approximation of >> for the purposes of this paper. We define <\eqnarray*> >|>|>\i\\\\\i\\|}>>>>> to be the set of monomials that are not divisible by any of the monomials ,\,\>. We also define > to be the >-vector space spanned by the elements of >. In all that follows, we assume that the ordering is graded by a weighted total degree. This means that there exist positive real weights ,\,w\0> such that for all >,\>\\>, we have <\eqnarray*> \\\\\\>|>|>\\>.>>>> Here =,\,w|)>> and \P>\Q stands for the dot product: \\=w*i+\+w*i>. Let <\eqnarray*> |>|\\\\>\\|}>.>>>> Then <\equation*> >\\\\\s|}>\\\>\\\\\s|}>. Geometrically speaking, the exponents > with >\\> correspond to lattice points inside or on the boundary of the simplex with vertices ,0|)>>, ,0,\,0|)>>, ,0,\,0|)>>,>, ,0,s/w|)>>. For fixed and large (whence ), it follows that <\eqnarray*> |>||n!*w*\*w>>>|>|>|>\*\*w|n>|w>,n|)>>>||\|>>|>||w*\*w>\n!*d,>>>> where |\|>> stands for the cardinality of >.
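The lattice-point count in the weighted simplex can be checked numerically. Here is a quick sanity check in two variables with unit weights w = (1, 1): the simplex then contains exactly (d+1)(d+2)/2 points, in line with the leading term d^n divided by n! times the product of the weights, which here equals d^2/2.

```python
# Sanity check of the lattice-point count in two variables with unit weights
# w = (1, 1): the simplex w.i <= d contains exactly (d+1)(d+2)/2 points,
# matching the leading term d^2 / 2 up to lower-order terms.
def simplex_points(w, s):
    w1, w2 = w
    return [(i, j) for i in range(int(s // w1) + 1)
                   for j in range(int((s - w1 * i) // w2) + 1)]

d = 20
pts = simplex_points((1, 1), d)
assert len(pts) == (d + 1) * (d + 2) // 2
assert abs(len(pts) - d**2 / 2) <= 2 * d     # within lower-order terms
```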
In the remainder of this paper, we assume that and ,\,w> have been fixed once and for all. Our asymptotic complexity estimates hold for large and under this assumption. <\remark> If =\=w=1>, then one may take > to be the usual graded reverse lexicographic ordering. In that case, the > are of the same order of magnitude and > is approximately a hypercube. Considering general weights gives us more flexibility in the choice of > and allows us to deal with more general hyperrectangular supports. Given a polynomial \|]>>, an extended reduction of with respect to ,1>,\,A,n>> is a relation <\eqnarray*> ||*A,1>+\+Q*A,n>+R,>>>> with ,\,Q\\|]>> and Red>. Extended reductions are computed by reducing with respect to ,i>> as long as there exists an index for which ,i>>> divides >. There are many ways to compute extended reductions of depending on the way we choose the index in case of multiple options. If we always privilege the lowest possible index, then the process is deterministic and we call \Pthe\Q extended reduction of with respect to >>. In that case, we define >\R> and call it the remainder of with respect to >>. Using relaxed evaluation, this remainder can be computed efficiently: <\theorem> Let \\,\,\\\> be integers, let \\*\*\>, and let \|]>> be such that > P\\> for ,n>. Then an extended reduction > can be computed in time <\equation*> O|)>*log \|)>, whenever an element of multiplicative order !*\> is given in >. <\proof> Without loss of generality, we may assume that <\equation*> w*\\\\w*\. We use the reduction algorithm from with the reduction strategy that always reduces with respect to the axial element ,k>> for which is minimal. Then we claim that the supports of the successive reductions of are all contained in the set <\equation*> \\>\\j\,n|}>,w*i+\+w*i\w*\+\+w*\+w*\|}>. Indeed, let >\\>, let be minimal such that >> divides >>, and consider a monomial >> in the support of x>*\>*A,k>>. It suffices to show that >\\>. Let > be such that >=x>*\>>.
For ,k-1>, the relations >\\>> and -\>\x>> imply <\eqnarray*> *-e|)>+\+w*-e|)>>||*-e|)>+\+w*-e|)>>>||>|*-e|)>+\+w*-e|)>\w*\.>>>> Now \\\\,\,\\>\\>, whence <\equation*> w*c+\+w*c\w*e+\+w*e+w*\\w*\+\+w*\+w*\. For ,n>, we directly have <\equation*> w*c+\+w*c\w*e+\+w*e\w*\+\+w*\+w*\. Having shown our claim, we next observe that *e\k*w*\+w*\> for all >\\> and ,n>, whence \*\> and *\*e\!*\>. In particular, the size of the support of the *A,i>> and in the extended reduction is bounded by !*\>. The theorem now becomes a consequence of 4> and Proposition, by taking \O*log s|)>>. <\remark> If > is the finite field > and if !*\>, then we can apply Theorem over an algebraic extension >>> of degree <\equation*> \\|log !*\|)>/log q|\>. Constructing this extension can be done using >|)>> operations in > by 3.2>. The primitive root of >>>> can be obtained in negligible expected time using a probabilistic algorithm of Las Vegas type. Then the complexity bound in Theorem becomes <\equation*> O|)>**\|log q>|)>. The fast algorithms for multi-point evaluation of univariate polynomials make use of the technique of remainder trees. For instance, the remainder tree for four points ,\,\,\\\>> is the labeled binary tree given by <\equation> |)>*|)>*|)>*|)>||)>*|)>|x-\|x-\>||)>*|)>|x-\|x-\>> Given a polynomial \> to evaluate at these four points, we compute the remainders of with respect to each of the polynomials at the nodes, in a top-down fashion. For instance, the value at > is obtained as <\equation*> P|)>=|)>*|)>*|)>*|)>|)> rem |)>*|)>|)> rem |)>. Using Theorem, we may use a similar technique for multivariate polynomials and points ,\,\,\\\>: we replace each polynomial |)>*\*|)>> by the axial basis ,\,\|)>>>: <\equation> ,\,\,\|)>>|,\|)>>|\|)>>|\|)>>>|,\|)>>|\|)>>|\|)>>>> We may then compute the evaluation of at > using <\equation*> P|)>=,\,\,\|)>>|)> rem \,\|)>>|)> rem \|)>>. We will call such a labeled tree an evaluation tree.
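In the univariate special case, the evaluation tree degenerates to the classical subproduct tree and evaluation to top-down remaindering. Here is a naive self-contained sketch (quadratic arithmetic, not the softly linear version); the points and polynomial are arbitrary illustrative data.

```python
# Univariate specialization (the classical subproduct tree; naive quadratic
# arithmetic): each node stores the product of (x - a_i) over its points,
# and evaluation reduces the polynomial top-down before recursing.
from fractions import Fraction

def poly_mul(a, b):                      # coefficient lists, low degree first
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

def poly_rem(a, b):
    a = [Fraction(x) for x in a]
    while len(a) >= len(b) and any(a):
        q = a[-1] / b[-1]
        for i, y in enumerate(b):
            a[len(a) - len(b) + i] -= q * y
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return a

def eval_tree(pts):                      # returns (node polynomial, left, right)
    if len(pts) == 1:
        return ([-pts[0], 1], None, None)
    mid = len(pts) // 2
    lo, hi = eval_tree(pts[:mid]), eval_tree(pts[mid:])
    return (poly_mul(lo[0], hi[0]), lo, hi)

def eval_all(P, tree):                   # top-down remaindering
    A, lo, hi = tree
    R = poly_rem(P, A)
    if lo is None:
        return [R[0]]
    return eval_all(R, lo) + eval_all(R, hi)

pts = [1, 2, 3, 4, 5]
P = [3, 0, -2, 1, 0, 4]                  # 3 - 2x^2 + x^3 + 4x^5
vals = eval_all(P, eval_tree(pts))
assert vals == [sum(c * a**i for i, c in enumerate(P)) for a in pts]
```

The tree construction corresponds to the precomputation phase: it depends only on the points, and many polynomials can then be evaluated against the same tree.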
In order to specify our general algorithms for multi-point evaluation and the computation of evaluation trees, it is convenient to introduce a few more notations. Given a vector =,\,v|)>>, we will write <\eqnarray*> >>|>|,\,v|d/2|\>>|)>,>>|>>|>||d/2|\>+1>,\,v|)>,>>>> where |a|\>> represents the largest integer smaller than or equal to . The least integer larger than or equal to is written |a|\>>. Given vectors =,\,u|)>> and =,\,v|)>>, we also write <\eqnarray*> \\>|>|,\,u,v,\,v|)>>>>> for their concatenation. Given \|)>>, we may use the following recursive algorithm to compute the evaluation tree for >: <\named-specified-algorithm|EvalTree(>>)> a vector of points \|)>>. an evaluation tree for >. <|named-specified-algorithm> Compute the axial basis >> for >>. If , then return a leaf, labeled by >>. Let >\>|)>> and >\>|)>>. Return the tree with root labeled by >> and children >>, >>. We regard the computation of an evaluation tree as a precomputation. Given an evaluation tree for >, we may efficiently evaluate polynomials \|]>> at all points ,\,\>> using the following algorithm: <\named-specified-algorithm|Eval()> a polynomial \|]>> and an evaluation tree for \|)>>. the vector |)>=|)>,\,P|)>|)>>. <|named-specified-algorithm> Let >> be the axial basis attached to the root of . Let P rem \>>. If , then return >. Let >> and >> be the two children of the root of . Return >|)>\>|)>>. The algorithms > and > actually work for arbitrary point vectors \|)>>: it suffices to use a general purpose algorithm to compute Gröbner bases for the ideals >>, from which we can then extract the required axial bases. We say that > is hereditarily generic if ,\,\|)>,\>> is invertible for all i\j\d>. In that case, each of the required axial bases during recursive calls can be computed using Proposition. <\theorem> The algorithms > and > are correct. Let \|]>> with > P\\> and \\> for ,n>, and \\*\*\>.
If > is hereditarily generic, then the running times of > and > are respectively bounded by <\eqnarray*> >>||>|)>>>|>,\,\|)>>||*log d+|)>*log \|)>,>>>> whenever an element of multiplicative order !*\> is given in >. <\proof> By induction on , the algorithms > and > are clearly correct. Using Proposition and linear algebra, we see that the computation of >> in the first step of > can be done in time >>, for some constant . The running time > therefore satisfies <\eqnarray*> >>|>|>|>>|>|>+>|d/2|\>|)>+>|d/2|\>|)>,d\2.>>>> By induction on , this yields >\|\>>>, where <\eqnarray*> |\>>>||>||\>>>||>+|\>>|d/2|\>|)>+|\>>|d/2|\>|)>,d\2.>>>> Again using induction on , we also note that |\>>> is increasing. It follows that |\>>\|\>>>|)>> for >=2|log d/log 2|\>>\2*d>. Unrolling the equation <\eqnarray*> |\>>>||>||\>>>|)>>||>>+2*|\>>>/2|)>,>\2,>>>> we find <\equation*> |\>>>|)>\C*>>+2*C*>|2>|)>>+4*C*>|4>|)>>+\=>>*>>=O>|)>. As to the running time >> of >, we recall that |\|>\n!*d>, whence |\|>\K*d> for some constant that depends on . Theorem also implies the existence of a constant such that <\eqnarray*> >,\,\|)>>|>||)>*log \+>,\,\|)>>>|>,1|)>>|>|>>|>,\,\|)>>|>|*log+>|d/2|\>,\,\,\|)>>>|||+>|d/2|\>,\,\,\|)>,d\2.>>>> Modulo a further increase of , the first and third relations may be combined so as to replace the third relation by <\eqnarray*> >,\,\|)>>|>|*log+>|d/2|\>,\|d/2|\>,1>,\,\|d/2|\>,n>|)>>>|||+>|d/2|\>,\|d/2|\>,1>,\,\|d/2|\>,n>|)>,d\2.>>>> By unrolling the latter inequality in a similar way as above for powers of two, we obtain |\>>>,\,\,\|)>\C*>|)>*>|)>+1|)>>, while using our assumption that /d> is non-decreasing. <\remark> If and the first coordinates of the evaluation points are pairwise distinct, then precomputations can be handled more efficiently as follows. The annihilating polynomial can be computed using *log d|)>> operations in >, via the formula <\equation*> \|)>\-\|)>. With a similar cost, we can also interpolate the polynomial |)>> of degree d> satisfying =v|)>> for ,d>.
Setting \max> \\deg> \=j,i=1,\,n|)>+1> for ,\-1>, we note that +\+\-1>=d>. Determining the coordinates of \\\\> in the basis > of the quotient ring of >> reduces to computing polynomials ,\,g-1>> in |]>> of respective degree less than ,\,\-1>> and a scalar \> such that <\equation*> g|)>+g*v|)>+\+g-1>|)>*v|)>-1>+c*\,v|)>|)>=0 mod \|)>. Assuming that |\|>\2*d>, a non-trivial solution can be computed in expected time -1>**log d|)>=+1|)>/2>|)>> using the probabilistic algorithm of Las Vegas type underlying 1>. Since ,\>> is invertible, the value found for cannot be zero, and we obtain the solution of in this way. In the univariate case, the technique of remainder trees can also be used to efficiently reconstruct a polynomial \> from its values at distinct points ,\,\>. This basically amounts to reversing the evaluation process, while using the Chinese remainder theorem to reconstruct >*A>|)>> from >> and >> for coprime polynomials >> and >>. For such reconstructions using the Chinese remainder theorem, it is useful to precompute the \Pcofactors\Q >,C>\\> with >*A> rem A>=1>, >*A> rem A>=1>, >\deg A>>, and >\deg A>>, after which <\equation*> P rem >*A>|)>=>|)>*C>*A>+>|)>*C>*A>|)> rem >*A>|)>. These cofactors are typically stored in an upgraded version of the remainder tree. In our multivariate setting, a similar idea works, modulo some additional precautions when computing the cofactors. The cofactors are most easily constructed from their evaluations. Indeed, consider a tuple of points \|)>> with 2> and its decomposition =\>\\>>. Then the first element >\A>,1>> of the axial basis of >>> certainly vanishes at >>, and we wish to let it play the same role as >> above. The corresponding cofactor >> should satisfy >*A>|)>>|)>=1>, so we may compute it through interpolation from its evaluation >>|)>> at >>. However, this requires >> not to vanish at any of the entries of >>.
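A single Chinese-remaindering step of the kind just described can be sketched in the univariate case, with the cofactors obtained, as suggested above, by interpolating the inverse values at the opposite point set. The data below (points 1, 2, 3, 4 and the test polynomial) are hypothetical, and the arithmetic is naive:

```python
# One univariate Chinese-remaindering step (hypothetical data, naive
# arithmetic): cofactors C_hi, C_lo with C_hi*A_hi = 1 mod A_lo and
# C_lo*A_lo = 1 mod A_hi, obtained by interpolating inverse values,
# recombine the two remainders of P into P rem (A_lo * A_hi).
from fractions import Fraction

def pmul(a, b):                          # coefficient lists, low degree first
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

def prem(a, b):
    a = [Fraction(x) for x in a]
    while len(a) >= len(b) and any(a):
        f = a[-1] / b[-1]
        for i, y in enumerate(b):
            a[len(a) - len(b) + i] -= f * y
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return a

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def ev(p, x):
    return sum(c * Fraction(x)**i for i, c in enumerate(p))

def lin_interp(x0, y0, x1, y1):          # unique interpolant of degree <= 1
    s = (y1 - y0) / (x1 - x0)
    return [y0 - s * x0, s]

A_lo = pmul([-1, 1], [-2, 1])            # (x-1)(x-2), points 1, 2
A_hi = pmul([-3, 1], [-4, 1])            # (x-3)(x-4), points 3, 4
A = pmul(A_lo, A_hi)

# cofactors via interpolation of inverse values at the opposite point set
C_hi = lin_interp(1, 1 / ev(A_hi, 1), 2, 1 / ev(A_hi, 2))
C_lo = lin_interp(3, 1 / ev(A_lo, 3), 4, 1 / ev(A_lo, 4))
assert prem(pmul(C_hi, A_hi), A_lo) == [1]

P = [2, -5, 0, 7]                        # any polynomial with deg < 4
R_lo, R_hi = prem(P, A_lo), prem(P, A_hi)
P_rec = prem(padd(pmul(R_lo, pmul(C_hi, A_hi)),
                  pmul(R_hi, pmul(C_lo, A_lo))), A)
assert P_rec == P
```

The interpolation of the cofactor from its inverse values is exactly the step that requires the node polynomial not to vanish at any point of the opposite half, which motivates the genericity condition introduced next.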
Now the set of tuples of points \|)>> for which one of the entries of >>|)>> vanishes forms a Zariski closed subset of dimension ; for a \Pgeneric\Q tuple of points, both >>|)>> and >>|)>> (where >\A>,1>>) are therefore invertible. Given \|]>>, this allows us to reconstruct >> from >>> and >>> using <\equation*> P rem \>=>>|)>*C>*A>+>>|)>*C>*A>|)> rem \>. More generally, we define > to be super-generic if it is hereditarily generic and, for all i\j\d> and ,n|}>\,j|}>>, we have ,\,\|)>,1>|)>\0>. Let us now detail the interpolation algorithm that was summarized and explained above. Similarly to the case of multi-point evaluation, it relies on the construction of an interpolation tree, which contains the required cofactors. Unlike before, the construction of this tree recursively involves interpolations (in order to reconstruct the cofactors from their evaluations). <\named-specified-algorithm|IpolTree()> an evaluation tree for a super-generic vector \|)>>. an interpolation tree for >. <|named-specified-algorithm> If , then return a leaf. Let >> be the axial Gröbner basis attached to the root of . Recursively apply the algorithm to the children >>, >> of , yielding >>, >>. Let >>, >> be the first entries of the axial bases associated to the roots of >>, >>. Compute >\>,T>|)>> and >\>,T>|)>>. Compute >\>,U>|)>> and >\>,U>|)>>. Return the tree with root labeled by >,C>,C>|)>> and children >>, >>. <\named-specified-algorithm|Ipol(,U>)> \\> and the interpolation tree for \|)>>. Red> with |)>=\>. <|named-specified-algorithm> If , then return >. Let >>, >> be the first entries of the axial bases associated to the roots of >>, >>. Let >>, >> be the children of . Let >,C>,C>|)>> be the label of the root of . Let >\>,U>|)>> and >\>,U>|)>>. Return >*C> rem \>>|)>*A>+>*C> rem \>>|)>*A>|)> rem \>>. <\theorem> The algorithms > and > are correct.
If > is super-generic, then the running times of > and > are respectively bounded by <\eqnarray*> >>||*log d|)>>>|>>||*log d|)>,>>>> whenever an element of multiplicative order !*2*\> is given in >, where \\*\*\>. <\proof> By induction on , the algorithms > and > are clearly correct. Let us start with the complexity bound for >. We have >>*C>|)>\2*\>, >>*C>|)>\2*\>, >>*C> rem \>>|)>*A>|)>\2*\>, and >>*C> rem \>>|)>*A>|)>>, for ,n>, where |)>*\*|)>=O*n!*d|)>=O>. Theorem therefore implies the existence of constants and with <\eqnarray*> >>|>|>>|>>|>|*log+>|d/2|\>|)>+>|d/2|\>|)>,d\2.>>>> By unrolling the latter inequality in a similar way as in the proof of Theorem for powers of two, we obtain |\>>>|)>\C*>|)>*>|)>+1|)>>, while using our assumption that /d>> is non-decreasing. As to the construction of the interpolation tree, Theorem, together with the bound for >> of Theorem, and >> imply the existence of other constants and with <\eqnarray*> >>|>|>>|>>|>|*log+>|d/2|\>|)>+>|d/2|\>|)>,d\2.>>>> Unrolling the latter inequality yields |\>>>|)>\C*>|)>*>|)>+1|)>>. <\bibliography|bib|tm-plain|> <\bib-list|28> J.Abbott, A.Bigatti, M.KreuzerL.Robbiano. Computing ideals of points. Symbolic Comput.>, 30(4):341\U356, 2000. S.Abelard, A.CouvreurG.Lecerf. Sub-quadratic time for Riemann\URoch spaces. The case of smooth divisors over nodal plane projective curves. , HAL, 2020. . A.BorodinR.T.Moenck. Fast modular transforms. Comput. System Sci.>, 8:366\U386, 1974. A.Bostan, C.-P.JeannerodÉ.Schost. Solving structured linear systems with large displacement rank. , 407(1):155\U181, 2008. A.Bostan, G.LecerfÉ.Schost. Tellegen's principle into practice. , ISSAC '03, 37\U44. New York, NY, USA, 2003. ACM. D.G.CantorE.Kaltofen. On fast multiplication of polynomials over arbitrary algebras. , 28:693\U701, 1991. M.F.I.Chowdhury, C.Jeannerod, V.Neiger, É.SchostG.Villard. Faster algorithms for multivariate interpolation with multiplicities and simultaneous polynomial approximations.
, 61(5):2370\U2387, 2015. J.-C.Faugère, P.Gaudry, L.HuotG.Renault. Sub-cubic change of ordering for Gröbner basis: a probabilistic approach. , ISSAC '14, 170\U177. New York, NY, USA, 2014. ACM. C.M.Fiduccia. Polynomial evaluation via the division algorithm: the fast Fourier transform revisited. , STOC '72, 88\U93. New York, NY, USA, 1972. ACM. M.GascaT.Sauer. Polynomial interpolation in several variables. , 12(4):377, 2000. D.HarveyJ.vander Hoeven. Faster polynomial multiplication over finite fields using cyclotomic coefficient rings. Complexity>, 54:101404, 2019. J.vander Hoeven. Faster relaxed multiplication. , ISSAC '14, 405\U412. New York, NY, USA, 2014. ACM. J.vander Hoeven. Faster Chinese remaindering. , CNRS & École polytechnique, 2016. . J.vander Hoeven. On the complexity of multivariate polynomial division. I.S.KotsireasE.Martínez-Moro, , 198, 447\U458. Cham, 2017. Springer International Publishing. J.vander HoevenG.Lecerf. On the bit-complexity of sparse polynomial multiplication. , 50:227\U254, 2013. J.vander HoevenG.Lecerf. On the complexity exponent of polynomial system solving. , HAL, 2018. . J.vander HoevenG.Lecerf. Fast multivariate multi-point evaluation revisited. Complexity>, 56:101405, 2020. J.vander Hoeven etal. GNU TeXmacs. , 1998. K.S.KedlayaC.Umans. Fast polynomial factorization and modular composition. , 40(6):1767\U1802, 2011. D.Le BrigandJ.-J.Risler. Algorithme de Brill\UNoether et codes de Goppa. , 116(2):231\U253, 1988. F.Le Gall. Powers of tensors and fast matrix multiplication. K.Nabeshima, , ISSAC '14, 296\U303. New York, NY, USA, 2014. ACM. M.G.Marinari, H.M.MöllerT.Mora. Gröbner bases of ideals defined by functionals with an application to ideals of projective points. , 4(2):103\U145, 1993. R.T.MoenckA.Borodin. Fast modular transforms via division. , 90\U96. USA, 1972. IEEE. H.M.MöllerB.Buchberger. The construction of multivariate polynomials with preassigned zeros. J.Calmet, , 144, 24\U31. Berlin, Heidelberg, 1982.
Springer Berlin Heidelberg.

V. Neiger. Fast computation of shifted Popov forms of polynomial matrices via systems of modular polynomial equations. In ISSAC '16, pages 365–372. New York, NY, USA, 2016. ACM.

M. Nüsken and M. Ziegler. Fast multipoint evaluation of bivariate polynomials. In S. Albers and T. Radzik, editors, volume 3221, pages 544–555. Springer Berlin Heidelberg, 2004.

D. S. Roche. What can (and can't) we do with sparse polynomials? In C. Arreche, editor, pages 25–30. ACM Press, 2018.

A. Schönhage. Schnelle Multiplikation von Polynomen über Körpern der Charakteristik 2. Acta Inform., 7:395–398, 1977.

A. Schönhage and V. Strassen. Schnelle Multiplikation großer Zahlen. Computing, 7:281–292, 1971.

V. Shoup. New algorithms for finding irreducible polynomials over finite fields. Math. Comp., 54(189):435–447, 1990.
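The divide-and-conquer recurrences in the proof above have the same shape as those satisfied by the classical univariate remainder tree of Borodin and Moenck (cited in the bibliography). As a minimal illustrative sketch, not the multivariate algorithm of this paper: the function names and the coefficient-list data layout below are our own, exact rationals stand in for a general effective field, and the quadratic `poly_mul`/`poly_rem` would be replaced by fast arithmetic of cost $\mathsf{M}(d)$ in a real implementation.

```python
# Classical univariate remainder-tree multi-point evaluation.
# A polynomial is a list of coefficients, lowest degree first.
from fractions import Fraction

def poly_mul(a, b):
    # Schoolbook product; a fast algorithm would give the M(d) cost.
    res = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    return res

def poly_rem(a, b):
    # Remainder of a modulo b (leading coefficient of b nonzero).
    a = list(a)
    db, lb = len(b) - 1, b[-1]
    while len(a) - 1 >= db and any(a):
        if a[-1] == 0:
            a.pop()
            continue
        q = a[-1] / lb
        shift = len(a) - 1 - db
        for j, bj in enumerate(b):
            a[shift + j] -= q * bj
        a.pop()
    return a or [Fraction(0)]

def subproduct_tree(points):
    # Leaves are (x - a_i); each inner node is the product of its children.
    level = [[-Fraction(a), Fraction(1)] for a in points]
    tree = [level]
    while len(level) > 1:
        level = [poly_mul(level[i], level[i + 1]) if i + 1 < len(level)
                 else level[i]
                 for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def multipoint_eval(p, points):
    # Going-down phase: reduce p modulo the root, then modulo each child,
    # so that the leaf remainders are the values p(a_i).  This realizes the
    # recurrence E(d) <= B*M(d)*log d + E(ceil(d/2)) + E(floor(d/2)).
    tree = subproduct_tree(points)
    rems = [poly_rem(p, tree[-1][0])]
    for level in reversed(tree[:-1]):
        rems = [poly_rem(rems[k // 2], node) for k, node in enumerate(level)]
    return [r[0] for r in rems]
```

For instance, `multipoint_eval([Fraction(c) for c in [1, 2, 0, 1]], [0, 1, 2, 3])` evaluates $P(x) = x^3 + 2x + 1$ at the four points at once; the subproduct tree only depends on the points, so in the amortized setting of this paper it would be precomputed.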