This paper is part of a project that has received funding from the French "Agence de l'innovation de défense".

Author affiliation (both authors): CNRS, École polytechnique, Institut Polytechnique de Paris, Laboratoire d'informatique de l'École polytechnique (LIX, UMR 7161), 1, rue Honoré d'Estienne d'Orves, Bâtiment Alan Turing, CS35003, 91120 Palaiseau, France

Abstract. The evaluation of a polynomial at several points is called the problem of multi-point evaluation. Sometimes, the set of evaluation points is fixed and several polynomials need to be evaluated at this set of points. Efficient algorithms for this kind of "amortized" multi-point evaluation were recently developed for the special case when the set of evaluation points is sufficiently generic. In this paper, we design a new algorithm for arbitrary sets of points, while restricting ourselves to bivariate polynomials.

1. Introduction

Let $\mathbb{K}$ be an effective field, so that we have algorithms for the field operations. Given a polynomial $P \in \mathbb{K}[x, y]$ and a tuple $\alpha = (\alpha_1, \ldots, \alpha_n) \in (\mathbb{K}^2)^n$ of points, the computation of $P(\alpha) := (P(\alpha_1), \ldots, P(\alpha_n)) \in \mathbb{K}^n$ is called the problem of multi-point evaluation. The converse problem is called interpolation and takes a candidate support of $P$ as input. These problems naturally occur in several areas of applied algebra. When solving a polynomial system, multi-point evaluation can for instance be used to check whether all points in a given set are indeed solutions of the system. In [14], we have shown that fast algorithms for multi-point evaluation actually lead to efficient algorithms for polynomial system solving. The more specific problem of bivariate multi-point evaluation appears for example in the computation of generator matrices of algebraic geometry error correcting codes [1, 18].
The general problem of multivariate multi-point evaluation is notoriously hard. If $\mathbb{K} = \mathbb{Q}$ or $\mathbb{K}$ is a field of finite characteristic, then theoretical algorithms of Kedlaya and Umans [17, 13] achieve a complexity exponent $1 + \varepsilon$, where $\varepsilon > 0$ represents a constant that can be taken arbitrarily close to zero. Unfortunately, to our best knowledge, these algorithms do not seem suitable for practical purposes. The best known bound for multi-point evaluation over general fields is due to Nüsken and Ziegler [22]: the evaluation of $P$ at $\deg_x P \cdot \deg_y P$ points can be done with $\tilde{O}(\deg_x P \cdot (\deg_y P)^{\varpi})$ operations in $\mathbb{K}$. This bound is based on the Paterson–Stockmeyer technique for modular composition [23]. Here, the constant $\varpi > 1.5$ is a real value such that the product of a $\sqrt{N} \times \sqrt{N}$ matrix by a $\sqrt{N} \times N$ matrix takes $O(N^{\varpi})$ operations; one may take $\varpi < 1.667$; see [16]. We further cite [15] for an efficient algorithm in the case of special sets of points $\alpha$.

Last year, new softly linear algorithms were proposed for multi-point evaluation and interpolation in the case when $\alpha$ is a fixed generic tuple of points [12, 20]. These algorithms are amortized in the sense that potentially expensive precomputations as a function of $\alpha$ are allowed. When the dimension is arbitrary but fixed, the algorithms from [12] take softly linear time: they generalize the classical univariate "divide and conquer" approach, as presented for instance in [6, Chapter 10]. The results in [20] are restricted to the bivariate case. They take into account the partial degrees of $P$ and are based on changes of polynomial bases that are similar to the ones of [11, §6.2].

In the present paper, we turn our attention to arbitrary (i.e. possibly non-generic) tuples of evaluation points $\alpha$, while restricting ourselves to the bivariate case and to polynomials of total degree $O(\sqrt{n})$. Combining ideas from [11] and [12], we present a new softly linear algorithm for amortized multi-point evaluation. For the sake of simplicity, we have not optimized all constant factors involved in the cost analysis of our new algorithm, so our complexity bound is mostly of theoretical interest for the moment.
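The classical univariate divide-and-conquer approach to multi-point evaluation mentioned above, which the present paper generalizes to two variables, can be sketched as follows. This is an illustrative transcription with our own function names; naive quadratic subroutines stand in for the fast products and divisions that the softly linear complexity bounds require.

```python
# Sketch of univariate divide-and-conquer multi-point evaluation:
# to evaluate p at points, reduce p modulo the "subproduct" polynomials
# of the left and right halves of the point set, and recurse.
# Polynomials are coefficient lists [p0, p1, ...]; names are ours.

def poly_mul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def poly_rem(a, b):
    # remainder of a modulo b (b is monic)
    a = a[:]
    while len(a) >= len(b):
        q = a[-1] / b[-1]
        for i in range(len(b)):
            a[len(a) - len(b) + i] -= q * b[i]
        a.pop()
    return a if a else [0]

def eval_at_points(p, points):
    if len(points) == 1:
        # p mod (x - point) is the value p(point)
        return [poly_rem(p, [-points[0], 1])[0]]
    mid = len(points) // 2
    left, right = points[:mid], points[mid:]
    m_left = [1]
    for x in left:
        m_left = poly_mul(m_left, [-x, 1])
    m_right = [1]
    for x in right:
        m_right = poly_mul(m_right, [-x, 1])
    return (eval_at_points(poly_rem(p, m_left), left)
            + eval_at_points(poly_rem(p, m_right), right))
```

With fast multiplication and division, the recursion yields the well-known softly linear cost for the univariate problem.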
The opposite task of interpolation is more subtle: since interpolants of total degree $O(\sqrt{n})$ do not necessarily exist, the very problem needs to be stated with additional care. For this reason, we do not investigate interpolation in this paper.

2. Weighted bivariate polynomials

Our bivariate multi-point evaluation makes use of polynomial arithmetic with respect to weighted graded monomial orderings. This section is devoted to the costs of products and divisions in this context.

2.1. Complexity model

For complexity analyses, we will only consider algebraic complexity models like computation trees, for which elements in $\mathbb{K}$ are freely at our disposal. The time complexity simply measures the number of arithmetic operations and zero-tests in $\mathbb{K}$. We denote by $\mathsf{M}(d)$ the time that is needed to compute a product of two polynomials $A, B \in \mathbb{K}[x]$ of degree $< d$. We make the usual assumption that $\mathsf{M}(d)/d$ is nondecreasing as a function of $d$. Using a variant of the Schönhage–Strassen algorithm [4], it is well known that we may take $\mathsf{M}(d) = O(d \log d \log\log d)$. If we restrict our attention to fields $\mathbb{K}$ of positive characteristic, then we may even take $\mathsf{M}(d) = O(d \log d\, 4^{\log^* d})$ [7].

2.2. Monomial orderings

General monomial orderings, that are suitable for Gröbner basis computations, have been classified in [24]. For the purpose of this paper, we focus on the following specific family of bivariate monomial orderings.

Definition 1. Let $k \in \mathbb{N}_{>0}$. We define the $k$-degree of a monomial $x^a y^b$ with $a, b \in \mathbb{N}$ by
$$\deg_k (x^a y^b) := a + k b.$$
We define the $k$-ordering to be the monomial ordering $\preccurlyeq_k$ such that
$$x^a y^b \prec_k x^u y^v \iff a + k b < u + k v, \text{ or } (a + k b = u + k v \text{ and } b < v).$$

Let us mention that the idea of using such kinds of families of monomial orderings in the design of fast algorithms for bivariate polynomials previously appeared in [11].

2.3. Multiplication

Consider the product $C = A B$ of two non-zero bivariate polynomials $A, B \in \mathbb{K}[x, y]$ and the obvious bound
$$t := (\deg_x A + \deg_x B + 1)(\deg_y A + \deg_y B + 1)$$
for the number of terms of $C$. Then it is well known that Kronecker substitution allows for the computation of the product $C$ using $O(\mathsf{M}(t))$ operations in $\mathbb{K}$; see [6], for instance.
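Definition 1 can be transcribed directly: the $k$-ordering compares monomials first by $k$-degree, with ties broken by the exponent of $y$. The helper names below are ours.

```python
# Monomials x^a y^b are represented as exponent pairs (a, b).

def k_degree(monomial, k):
    a, b = monomial
    return a + k * b

def k_less(m1, m2, k):
    """Return True if m1 is strictly smaller than m2 for the k-ordering:
    compare k-degrees first, then the y-exponents."""
    return (k_degree(m1, k), m1[1]) < (k_degree(m2, k), m2[1])
```

For instance, for $k = 2$ we have $y \prec_2 x^3$ (2-degrees $2 < 3$) and $x^2 \prec_2 y$ (equal 2-degrees, but a smaller $y$-exponent).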
Writing $\operatorname{val}_x A$ for the valuation of $A$ in $x$, the number of non-zero terms of $C$ is more accurately bounded by
$$t' := (\deg_x A + \deg_x B - \operatorname{val}_x A - \operatorname{val}_x B + 1)(\deg_y A + \deg_y B + 1).$$
Via the appropriate multiplications and divisions by powers of $x$, we observe that $C$ can be computed using $O(\mathsf{M}(t'))$ operations in $\mathbb{K}$.

Let us next show that a similar bound holds for slices of polynomials that are dense with respect to the $k$-ordering. More precisely, let $\operatorname{val}_k A$ denote the minimum of $\deg_k(x^i y^j)$ over the monomials $x^i y^j$ occurring in $A$. By convention we set $\operatorname{val}_k 0 := +\infty$.

Lemma 2. Let $A, B \in \mathbb{K}[x,y] \setminus \{0\}$. The product $C = A B$ can be computed using $O(\mathsf{M}(s))$ operations in $\mathbb{K}$, where
$$s := (\deg_k A + \deg_k B - \operatorname{val}_k A - \operatorname{val}_k B + 1)(\deg_y A + \deg_y B + 1).$$

Proof. From the monomial identity
$$x^i (x^k y)^j = x^{i + k j} y^j$$
we observe that the monomials of $k$-degree $d$ are in one-to-one correspondence with the monomials of degree $d$ in $x$ and degree $j$ in $y$ with $j \in \{0, \ldots, \lfloor d/k \rfloor\}$. It also follows that the number of terms of a non-zero polynomial $A$ is bounded by $(\deg_k A - \operatorname{val}_k A + 1)(\deg_y A + 1)$, and that
$$\deg_x(A(x, x^k y)) = \deg_k A, \qquad \operatorname{val}_x(A(x, x^k y)) = \operatorname{val}_k A.$$
In addition, the number of non-zero terms in the product is bounded by $s$. So it suffices to compute the formula
$$C(x, x^k y) = A(x, x^k y) \, B(x, x^k y)$$
in order to obtain the claimed complexity bound. □

2.4. Division

Let $B$ be a polynomial in $\mathbb{K}[x,y]$ of $k$-degree $e$ and of leading monomial written $x^{b_x} y^{b_y}$. Without loss of generality we may assume that the coefficient of this monomial is $1$. We examine the cost of the division of $A$ of $k$-degree $d$ by $B$ with respect to $\preccurlyeq_k$:
$$A = Q B + R,$$
where $Q$ and $R$ are in $\mathbb{K}[x,y]$, and such that no monomial in $R$ is divisible by $x^{b_x} y^{b_y}$. Such a division does exist: this is a usual fact from the theory of Gröbner bases. In this context, a polynomial is said to be reduced with respect to $B$ when none of its terms is divisible by $x^{b_x} y^{b_y}$. If $A = \tilde{Q} B + \tilde{R}$ for polynomials $\tilde{Q}$ and $\tilde{R}$ such that $\tilde{R}$ is reduced with respect to $B$, then $(Q - \tilde{Q}) B = \tilde{R} - R$; the left-hand side is either zero or has a leading monomial divisible by $x^{b_x} y^{b_y}$, whereas the right-hand side is reduced with respect to $B$, so $\tilde{Q} = Q$ and $\tilde{R} = R$. In other words, $Q$ and $R$ are unique, so we may write $\operatorname{quo}_k(A, B) := Q$ for the quotient of $A$ by $B$ with respect to $\preccurlyeq_k$.
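The Kronecker substitution used in section 2.3 can be sketched concretely: the bivariate product is mapped to a univariate product via $y \mapsto x^N$, with $N$ larger than the $x$-degree of the result, and then read back off. The representation and helper names below are ours, and a naive univariate product stands in for a fast $\mathsf{M}(t)$-time one.

```python
def kronecker_product(a, b):
    """Product of bivariate polynomials given as dicts {(i, j): coeff}."""
    # choose n so that the substitution y -> x^n is injective
    # on the support of the product
    n = max(i for i, _ in a) + max(i for i, _ in b) + 1
    ua = {}
    for (i, j), c in a.items():
        ua[i + n * j] = ua.get(i + n * j, 0) + c
    ub = {}
    for (i, j), c in b.items():
        ub[i + n * j] = ub.get(i + n * j, 0) + c
    # univariate product (fast multiplication would be used in practice)
    uc = {}
    for e1, c1 in ua.items():
        for e2, c2 in ub.items():
            uc[e1 + e2] = uc.get(e1 + e2, 0) + c1 * c2
    # read the bivariate result back off
    return {(e % n, e // n): c for e, c in uc.items() if c != 0}
```

For example, $(x + y)(x - y)$ is recovered as $x^2 - y^2$. The substitution $y \mapsto x^k y$ in the proof of Lemma 2 plays an analogous packing role for slices that are dense with respect to the $k$-ordering.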
In the remainder of this section, we assume that $k$ has been fixed once and for all. Given $A = \sum_{i,j} A_{i,j} x^i y^j \in \mathbb{K}[x,y]$, we define the slices
$$A_{[b]} := \sum_{i + k j = b} A_{i,j} x^i y^j, \qquad A_{[b_1, b_2]} := \sum_{b_1 \le i + k j \le b_2} A_{i,j} x^i y^j.$$

The naive division algorithm proceeds as follows: if $A$ has a term $c\, x^u y^v$ that is divisible by $x^{b_x} y^{b_y}$, then we pick a maximal such term for $\preccurlyeq_k$ and compute
$$A_1 := A - c\, x^{u - b_x} y^{v - b_y} B.$$
Then $\deg_k A_1 \le \deg_k A$ and the largest term of $A_1$ that is divisible by $x^{b_x} y^{b_y}$ is strictly smaller than $x^u y^v$ for $\preccurlyeq_k$. This division step is repeated for $A_1$ and for its successive reductions, until $Q$ and $R$ are found. During this naive division process, we note that $\operatorname{quo}_k(A, B)_{[d - e - l + 1,\, d - e]}$ only depends on $A_{[d - l + 1,\, d]}$ and $B_{[e - l + 1,\, e]}$, for $l = 1, \ldots, d - e + 1$. When $d < e$ nothing needs to be computed.

Let us now describe a more efficient way to handle the case $l = 1$, when we need to compute the quasi-homogeneous component of $Q$ of maximal $k$-degree $d - e$:
$$A_{[d]} = Q_{[d - e]} B_{[e]} + R_{[d]}.$$

Lemma 3. We may compute $Q_{[d - e]}$ and $R_{[d]}$ using $O(\mathsf{M}(d/k))$ operations in $\mathbb{K}$.

Proof. We first decompose $A_{[d]} = H + T$, where $H$ is the sum of the terms of $A_{[d]}$ that are divisible by $x^{b_x} y^{b_y}$ and $T := A_{[d]} - H$, and note that $T$ is reduced with respect to $B$. In particular, the division of $H$ by $B_{[e]}$ yields the same quotient as the division of $A_{[d]}$ by $B_{[e]}$, so
$$H = Q_{[d - e]} B_{[e]} + U, \qquad R_{[d]} = U + T,$$
for some quasi-homogeneous polynomial $U$ of $k$-degree $d$ that is reduced with respect to $B$. After the substitution $x := 1$, quasi-homogeneous polynomials of $k$-degree $d$ correspond to univariate polynomials in $y$ of degree at most $\lfloor d/k \rfloor$, and the above relation dehomogenizes into a Euclidean division in $\mathbb{K}[y]$. Consequently, the computation of $Q_{[d - e]}$ and $R_{[d]}$ takes $O(\mathsf{M}(d/k))$ operations in $\mathbb{K}$, using a fast algorithm for Euclidean division in $\mathbb{K}[y]$; see [6] or [8], for instance. □

For higher values of $l$, the following "divide and conquer" division algorithm is more efficient than the naive algorithm:

Algorithm 1
Input: $A, B \in \mathbb{K}[x,y]$ and an integer $l \in \{1, \ldots, d - e + 1\}$, where $d := \deg_k A$ and $e := \deg_k B$.
Output: $\operatorname{quo}_k(A, B)_{[d - e - l + 1,\, d - e]}$.
1. If $d < e$ or $A = 0$ then return $0$.
2. If $l = 1$ then compute and return $Q_{[d - e]}$ using the method from Lemma 3.
3. Let $h := \lceil l/2 \rceil$.
4. Recursively compute $Q_1 := \operatorname{quo}_k(A, B)_{[d - e - h + 1,\, d - e]}$.
5. Let $\tilde{A} := (A - Q_1 B_{[e - l + 1,\, e]})_{[d - l + 1,\, d - h]}$.
6. Recursively compute $Q_2 := \operatorname{quo}_k(\tilde{A}, B)_{[d - e - l + 1,\, d - e - h]}$.
7. Return $Q_1 + Q_2$.
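The naive division process of section 2.4 can be transcribed literally (dict representation and helper names as before, $B$ monic): repeatedly cancel the largest term, for the $k$-ordering, that is divisible by the leading monomial of $B$.

```python
def leading_monomial(p, k):
    # largest exponent pair for the k-ordering
    return max(p, key=lambda m: (m[0] + k * m[1], m[1]))

def divide(a, b, k):
    """Return (q, r) with a = q*b + r and no monomial of r divisible by
    the leading monomial of b.  Polynomials are dicts {(i, j): coeff}."""
    bx, by = leading_monomial(b, k)
    a = dict(a)
    q = {}
    while True:
        divisible = [m for m in a if m[0] >= bx and m[1] >= by]
        if not divisible:
            return q, a          # a is now the reduced remainder
        # maximal divisible term for the k-ordering
        u, v = max(divisible, key=lambda m: (m[0] + k * m[1], m[1]))
        c = a[u, v]
        q[u - bx, v - by] = q.get((u - bx, v - by), 0) + c
        # subtract c * x^(u-bx) y^(v-by) * b
        for (i, j), cb in b.items():
            m = (u - bx + i, v - by + j)
            a[m] = a.get(m, 0) - c * cb
            if a[m] == 0:
                del a[m]
```

Each step strictly decreases the largest divisible term for $\preccurlyeq_k$, which guarantees termination; Algorithm 1 obtains the same quotient slices much faster.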
Proposition 4. Algorithm 1 is correct and takes $O(\mathsf{M}(l d / k) \log l)$ operations in $\mathbb{K}$.

Proof. Let us prove the correctness by induction on $l$. If $d < e$, then $Q = 0$ and the result of the algorithm is correct. If $A = 0$, then $\operatorname{quo}_k(A, B)_{[d - e - l + 1,\, d - e]} = 0$ and the result is also correct. The case $l = 1$ has been treated in Lemma 3. Now assume that $l \ge 2$ and $d \ge e$, so $l > h \ge 1$. The induction hypothesis implies that $(A - Q_1 B)_{[d - h + 1,\, d]}$ is reduced with respect to $B$ and that $(\tilde{A} - Q_2 B)_{[d - l + 1,\, d - h]}$ is reduced with respect to $B$. After noting that $\tilde{A} = (A - Q_1 B)_{[d - l + 1,\, d - h]}$, we verify that
$$(A - (Q_1 + Q_2) B)_{[d - l + 1,\, d]} = (A - Q_1 B)_{[d - h + 1,\, d]} + (\tilde{A} - Q_2 B)_{[d - l + 1,\, d - h]},$$
using the fact that $\deg_k (Q_2 B) \le d - h$. Consequently, $(A - (Q_1 + Q_2) B)_{[d - l + 1,\, d]}$ is reduced with respect to $B$, whence
$$Q_1 + Q_2 = \operatorname{quo}_k(A, B)_{[d - e - l + 1,\, d - e]}.$$
This completes the induction and our correctness proof.

Concerning the complexity, step 2 takes $O(\mathsf{M}(d/k))$ operations in $\mathbb{K}$, by Lemma 3. In step 5, the computation of $\tilde{A}$ takes $O(\mathsf{M}(l d / k))$ operations in $\mathbb{K}$, by the product bound of section 2.3. Let $\mathsf{T}(l)$ stand for the maximum of the costs of Algorithm 1 for fixed $A$ and $B$. We have shown that $\mathsf{T}(1) = O(\mathsf{M}(d/k))$ and that
$$\mathsf{T}(l) \le \mathsf{T}(\lceil l/2 \rceil) + \mathsf{T}(\lfloor l/2 \rfloor) + O(\mathsf{M}(l d / k)).$$
Unrolling this inequality, we obtain the claimed complexity bound. □

Corollary 5. Let $A, B \in \mathbb{K}[x,y]$ be of $k$-degree $\le d$. The remainder in the division of $A$ by $B$ with respect to $\preccurlyeq_k$ can be computed using $O(\mathsf{M}(d^2/k) \log d)$ operations in $\mathbb{K}$.

Proof. By Proposition 4, the computation of $Q = \operatorname{quo}_k(A, B)$ takes $O(\mathsf{M}(d^2/k) \log d)$ operations in $\mathbb{K}$. The computation of the corresponding remainder $R = A - Q B$ takes $O(\mathsf{M}(d^2/k))$ further operations in $\mathbb{K}$ by the product bound of section 2.3. □

Remark. The complexity bound from Proposition 4 is also a consequence of [9] by taking $O(\mathsf{M}(t))$ for the cost of sparse polynomial products of size $t$. This cost is warranted by the observation that all sparse bivariate polynomial products occurring within the algorithm underlying [9] are either univariate products or products of slices of polynomials that are dense with respect to the $k$-ordering.
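The dehomogenization step at the heart of Lemma 3 can be sketched as follows: a quasi-homogeneous polynomial of $k$-degree $d$, i.e. a sum of terms $c_j\, x^{d - k j} y^j$, is encoded by the univariate polynomial $\sum_j c_j z^j$, and conversely. The dict representation and helper names are ours.

```python
def dehomogenize(slice_d, d, k):
    """Encode a quasi-homogeneous {(i, j): c} with i + k*j = d
    as a univariate dict {j: c}."""
    assert all(i + k * j == d for (i, j) in slice_d)
    return {j: c for (i, j), c in slice_d.items()}

def rehomogenize(u, d, k):
    """Inverse operation: {j: c} -> {(d - k*j, j): c}."""
    return {(d - k * j, j): c for j, c in u.items()}
```

Since both encodings are bijective on monomials of $k$-degree $d$, the division of quasi-homogeneous components reduces to a Euclidean division of univariate polynomials of degree at most $\lfloor d/k \rfloor$, as used in the proof of Lemma 3.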
We have seen in section 2.3 how to compute such products efficiently.

3. General position

Let $\alpha = (\alpha_1, \ldots, \alpha_n) \in (\mathbb{K}^2)^n$ be a tuple of pairwise distinct points. We define the vanishing ideal for $\alpha$ by
$$I_\alpha := \{ P \in \mathbb{K}[x,y] : P(\alpha) = (0, \ldots, 0) \},$$
where we recall that $P(\alpha) := (P(\alpha_1), \ldots, P(\alpha_n))$. A monic polynomial $B \in I_\alpha$ is said to be axial if its leading monomial with respect to $\preccurlyeq_1$ is of the form $y^{\deg B}$. The goal of this section is to prove the existence of such a polynomial modulo a sufficiently generic change of variables of the form
$$x \mapsto x + \lambda y.$$
This change of variables transforms $\alpha$ into a new tuple $\tilde{\alpha} \in (\mathbb{K}^2)^n$ with
$$\tilde{\alpha}_i := (\alpha_{i,1} - \lambda \alpha_{i,2},\, \alpha_{i,2}), \qquad \tilde{P}(x, y) := P(x + \lambda y,\, y),$$
so that $P(\alpha_i) = \tilde{P}(\tilde{\alpha}_i)$, and the ideal $I_\alpha$ into
$$I_{\tilde{\alpha}} := \{ \tilde{P} \in \mathbb{K}[x,y] : \tilde{P}(\tilde{\alpha}) = (0, \ldots, 0) \}.$$

For any degree $d \in \mathbb{N}$, we define
$$\mathbb{K}[x,y]_{\le d} := \{ P \in \mathbb{K}[x,y] : \deg P \le d \}.$$
Given a polynomial $P \in \mathbb{K}[x,y]_{\le d}$ such that $\deg P = d$ we may decompose
$$P = D + E,$$
where $D$ is homogeneous of degree $d$ and $E \in \mathbb{K}[x,y]_{\le d - 1}$. The change of variables transforms $P$ into
$$\tilde{P}(x, y) = \tilde{D}(x, y) + \tilde{E}(x, y),$$
where
$$\tilde{E}(x, y) := E(x + \lambda y, y) \in \mathbb{K}[x,y]_{\le d - 1}, \qquad \tilde{D}(x, y) := D(x + \lambda y, y).$$
The coefficient of the monomial $y^d$ in $\tilde{D}(x, y)$ is $c(\lambda) := D(\lambda, 1)$.

The $\mathbb{K}$-vector space $I_\alpha \cap \mathbb{K}[x,y]_{\le d}$ is the solution set of a linear system consisting of $n$ equations and $\binom{d+2}{2}$ unknowns, that are the unknown coefficients of a polynomial in $\mathbb{K}[x,y]_{\le d}$. Such a system admits a non-zero solution whenever $\binom{d+2}{2} > n$. Now assume that $P$ is a non-zero element of $I_\alpha$ of minimal total degree $d$ and let $Z$ denote the set of roots of $c(\lambda)$ in $\mathbb{K}$. Since $d$ is minimal we have
$$\frac{d (d + 1)}{2} \le n,$$
which implies
$$d \le \sqrt{2 n}.$$
On the other hand, we have $|Z| \le d$. And if $\lambda \in \mathbb{K} \setminus Z$, then $y^d$ is the leading monomial of $\tilde{P}$ for $\preccurlyeq_1$. Assuming that $|\mathbb{K}| \ge n$, this proves the existence of an axial polynomial $\tilde{B}$ of degree $d$ after a suitable change of variables of the above form. We say that $\alpha$ is in general position if there exists a polynomial of minimal degree in $I_\alpha$ that is axial.

4. Polynomial reduction

Let $\alpha = (\alpha_1, \ldots, \alpha_n) \in (\mathbb{K}^2)^n$ be a tuple of points in general position, as defined in the previous section. In this section, we describe a process for reducing a polynomial $P \in \mathbb{K}[x,y]$ modulo $I_\alpha$.
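The key observation of section 3, namely that the coefficient of $y^d$ in $D(x + \lambda y, y)$ equals $D(\lambda, 1)$ for homogeneous $D$ of degree $d$, can be verified numerically (representation and helper names are ours): a term $c\, x^i y^j$ contributes $c\, \lambda^i\, y^{i+j} = c\, \lambda^i\, y^d$ to the top coefficient.

```python
def shift_coeff_yd(D, t):
    """D: dict {(i, j): c}, homogeneous.  Coefficient of y^d in
    D(x + t*y, y): each term x^i y^j contributes t^i to it."""
    return sum(c * t ** i for (i, j), c in D.items())

def eval_poly(D, a, b):
    """Plain evaluation of D at the point (a, b)."""
    return sum(c * a ** i * b ** j for (i, j), c in D.items())
```

So choosing $\lambda$ outside the root set $Z$ of $D(\lambda, 1)$ makes the transformed polynomial axial, as claimed in the text.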
The reduction of $P$ is a polynomial whose support is controlled in a way that will be made precise.

4.1. Heterogeneous bases

Since $\alpha$ is assumed to be in general position, thanks to the bound $d \le \sqrt{2n}$ from section 3, we first precompute an axial polynomial $B_0 \in I_\alpha$ for $\preccurlyeq_1$ of degree
$$h_0 := \deg B_0 \le \sqrt{2 n}.$$

For $k \ge 1$, let $N_k(d)$ denote the number of monomials of $2^k$-degree $\le d$. We have
$$N_k(d) = (d + 1) + (d + 1 - 2^k) + \cdots + (d + 1 - \lfloor d/2^k \rfloor\, 2^k).$$
The $\mathbb{K}$-vector space of the polynomials in $I_\alpha$ with $2^k$-degree $\le d$ is the solution set of a linear system consisting of $n$ equations and $N_k(d)$ unknowns. Consequently, if $N_k(d) > n$, then there exists a non-zero polynomial in $I_\alpha$ of $2^k$-degree $\le d$. Let $B_k \in I_\alpha$ be a monic polynomial whose leading monomial is minimal for $\preccurlyeq_{2^k}$ and set
$$d_k := \deg_{2^k} B_k.$$
We may precompute $B_k$ by extracting it from a Gröbner basis for $I_\alpha$ with respect to $\preccurlyeq_{2^k}$. By the minimality of the $2^k$-degree $d_k$ of $B_k$, we have
$$N_k(d_k - 1) \le n.$$
Now write $d_k - 1 = q\, 2^k + r$ with $q \in \mathbb{N}$ and $r \in \{0, \ldots, 2^k - 1\}$. Then
$$N_k(d_k - 1) = (q\, 2^k + r + 1) + ((q - 1)\, 2^k + r + 1) + \cdots + (r + 1) = (q + 1)(r + 1) + q (q + 1)\, 2^{k - 1} \ge \frac{d_k^2}{2^{k + 2}},$$
using $d_k \le (q + 1)\, 2^k$. Consequently, $d_k^2 \le 2^{k + 2} n$ and
$$d_k \le 2^{(k + 2)/2} \sqrt{n}.$$
We let $m$ be the smallest integer such that $2^m \ge n$, hence $2^{m - 1} < n$ and $m \le \lceil \log_2 n \rceil$. There exists a monic non-zero polynomial $Q \in \mathbb{K}[x] \cap I_\alpha$ of minimal degree $\le n$. Since $2^m \ge n$, we may take $B_k := Q$ for $k \ge m$. We call $(B_k)_{0 \le k \le m}$ a heterogeneous basis for $\alpha$. We further define
$$h_k := \deg_y B_k \le \big\lfloor \sqrt{2^{2 - k}\, n} \big\rfloor.$$
Note that $k = m + 3$ is the first integer for which this upper bound is guaranteed to vanish, although $h_k = 0$ already holds for all $k \ge m$.

Remark. If the $\alpha_i$ are pairwise distinct and if the cardinality of $\mathbb{K}$ is sufficiently large, then, after a sufficiently generic linear change of coordinates, a Gröbner basis for $I_{\tilde{\alpha}}$ with respect to the lexicographic ordering induced by $y \prec x$ can be computed in softly linear time: see [1], for instance. Then, a Gröbner basis for $I_{\tilde{\alpha}}$ with respect to $\preccurlyeq_{2^k}$ can be deduced with $O(n^3)$ operations in $\mathbb{K}$ thanks to the FGLM algorithm [5].
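The counting function $N_k(d)$ of section 4.1 can be checked by direct enumeration against the summation formula (helper names are ours): it counts the exponent pairs $(i, j)$ with $i + 2^k j \le d$.

```python
def count_monomials(d, k):
    """Number of monomials x^i y^j with i + 2^k * j <= d, by enumeration."""
    w = 2 ** k
    return sum(1 for j in range(d // w + 1) for i in range(d - w * j + 1))

def count_monomials_sum(d, k):
    """Same quantity via the summation formula for N_k(d)."""
    w = 2 ** k
    return sum(d + 1 - w * j for j in range(d // w + 1))
```

For instance, $N_1(5)$ counts $6 + 4 + 2 = 12$ monomials.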
This bound has recently been lowered to $O(n^{\omega} \log n)$ in [21, Theorem 1.7, Example 1.3], where $\omega$ is a feasible exponent for matrix multiplication.

4.2. Elementary reductions

Given $A = \sum_{i,j} A_{i,j} x^i y^j \in \mathbb{K}[x,y]$ and $i \ge 0$, we may use the division procedure from section 2.4 to reduce $A$ with respect to $B_i$ and the ordering $\preccurlyeq_{2^i}$. This yields a relation
$$A = Q B_i + R,$$
where $\deg_{2^i} R \le \deg_{2^i} A$ and such that none of the monomials in the support of $R$ is divisible by the leading monomial of $B_i$. We write $\operatorname{red}_i(A) := R$ and recall that $\operatorname{red}_i$ is a $\mathbb{K}$-linear map. We also define, for a threshold $h \in \mathbb{N}$, the projections
$$\pi_h(A) := \sum_{j < h} A_{i,j} x^i y^j, \qquad \bar{\pi}_h(A) := \sum_{j \ge h} A_{i,j} x^i y^j.$$

For $d \ge 1$, we let $S_d$ denote the set of tuples of polynomials $(P_0, \ldots, P_m) \in \mathbb{K}[x,y]^{m+1}$ such that
- $m$ is the first integer such that $2^{m/2} > d$;
- $\deg_{2^i} P_i \le 2^{i/2} d$, for $i = 0, \ldots, m$.
Intuitively speaking, such a tuple will represent a sum $P_0 + \cdots + P_m$ modulo $I_\alpha$. Note that $\deg_y P_m \le \lfloor 2^{-m/2} d \rfloor = 0$, so $P_m \in \mathbb{K}[x]$.

Given $(P_0, \ldots, P_m) \in S_d$, we define three new sequences of polynomials by $\Delta_{-1} := 0$ and, for $i = 0, \ldots, m$,
$$R_i := \operatorname{red}_i(P_i + \Delta_{i-1}), \qquad \bar{R}_i := \pi_{h_i}(R_i), \qquad \Delta_i := \bar{\pi}_{h_i}(R_i).$$

Lemma 6. With the above notations, let $\lambda \ge 16$ and let $\rho := \sqrt{1/2} + \sqrt{4/\lambda}$. If $d \ge \sqrt{\lambda n}$, then
$$\deg_{2^i} R_i \le 2^{i/2} d, \qquad \deg_{2^i} \Delta_i \le 2^{i/2} \rho\, d,$$
for $i = 1, \ldots, m - 1$.

Proof. Since $\sqrt{n} \le d / \sqrt{\lambda}$, we have
$$2^{(i-1)/2} d + 2^{(i+2)/2} \sqrt{n} \le 2^{i/2} \rho\, d$$
for all $i \ge 1$. Since $B_0$ is axial, every monomial of $y$-degree $\ge h_0$ is divisible by its leading monomial $y^{h_0}$, so $\Delta_0 = 0$ and $\deg_2 R_1 \le \deg_2 P_1 \le 2^{1/2} d$. The stated bounds then follow by induction on $i$, by combining the above inequality with the bounds on $d_i$ and $h_i$ from section 4.1. □

Lemma 7. Under the assumptions of Lemma 6, we further have
$$\bar{R}_0 + \cdots + \bar{R}_{m-1} + R_m \equiv P_0 + \cdots + P_m \mod I_\alpha.$$
Assume moreover that $\rho < 1$, let $\tilde{d} := \rho\, d$, and let $\tilde{m}$ be the first integer such that $2^{\tilde{m}/2} > \tilde{d}$. If $\tilde{m} = m$, then
$$(\bar{R}_0, \ldots, \bar{R}_{m-1}, R_m) \in S_{\tilde{d}};$$
otherwise, $\tilde{m} = m - 1$, the polynomial $\bar{R}_{m-1} + R_m$ belongs to $\mathbb{K}[x]$, and
$$(\bar{R}_0, \ldots, \bar{R}_{m-2}, \bar{R}_{m-1} + R_m) \in S_{\tilde{d}}.$$
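The projections of section 4.2 split a polynomial according to a $y$-degree threshold $h$ (dict representation and helper names as before): $\pi_h$ keeps the terms of $y$-degree $< h$ and $\bar{\pi}_h$ the remaining ones, so that the two parts always recombine into the original polynomial.

```python
def pi(a, h):
    """Terms of y-degree < h."""
    return {(i, j): c for (i, j), c in a.items() if j < h}

def pi_bar(a, h):
    """Terms of y-degree >= h."""
    return {(i, j): c for (i, j), c in a.items() if j >= h}
```

In the reduction process, $\pi_{h_i}$ extracts the part $\bar{R}_i$ that is kept at level $i$, while $\bar{\pi}_{h_i}$ extracts the part $\Delta_i$ that is passed on to the next level.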
Proof. By construction, we have
$$R_0 + \cdots + R_m \equiv (P_0 + \Delta_{-1}) + \cdots + (P_m + \Delta_{m-1}) \mod I_\alpha,$$
whence
$$\bar{R}_0 + \cdots + \bar{R}_{m-1} + R_m = (R_0 - \Delta_0) + \cdots + (R_{m-1} - \Delta_{m-1}) + R_m \equiv P_0 + \cdots + P_m \mod I_\alpha.$$
Since $B_0$ is axial, we have $\Delta_0 = 0$ and $\deg_y \bar{R}_0 < h_0 \le \sqrt{2 n}$. The membership of the reduced tuple in $S_{\tilde{d}}$ follows from the degree bounds of the previous lemma, combined with the bounds on $d_i$ and $h_i$ from section 4.1 and with $\sqrt{n} \le d / \sqrt{\lambda}$: on the one hand these give $\deg_{2^i} \bar{R}_i \le 2^{i/2} \rho\, d = 2^{i/2} \tilde{d}$ for $i = 0, \ldots, m - 1$; on the other hand they show that $\deg_y (P_m + \Delta_{m-1}) = 0$, so that $R_m \in \mathbb{K}[x]$ with $\deg R_m < \deg B_m \le n$. Finally, $\tilde{d} \ge d/2$ implies $\tilde{m} \in \{m - 1, m\}$, and in the case $\tilde{m} = m - 1$ the polynomial $\bar{R}_{m-1} + R_m$ belongs to $\mathbb{K}[x]$. □

We call the resulting tuple the reduction of $(P_0, \ldots, P_m)$ with respect to $(B_i)_{0 \le i \le m}$.

Lemma 8. With the notation and assumptions of Lemma 6, the reduction of $(P_0, \ldots, P_m)$ with respect to $(B_i)_{0 \le i \le m}$ takes $O(\mathsf{M}(d^2) \log^2 d)$ operations in $\mathbb{K}$.

Proof. First note that $\deg (P_0 + \Delta_{-1}) \le d$ and $\deg B_0 = O(\sqrt{n})$. By the corollary of section 2.4, the computation of $\operatorname{red}_0(P_0)$ therefore takes $O(\mathsf{M}(d^2) \log d)$ operations in $\mathbb{K}$. For $i = 1, \ldots, m$, the previous lemmas imply
$$\deg_{2^i} (P_i + \Delta_{i-1}) \le 2^{i/2} d, \qquad \deg_{2^i} B_i \le 2^{(i+2)/2} \sqrt{n} \le 2^{i/2} d.$$
By the corollary of section 2.4, we deduce that the computation of $\operatorname{red}_i(P_i + \Delta_{i-1})$ takes
$$O\big(\mathsf{M}\big((2^{i/2} d)^2 / 2^i\big) \log d\big) = O(\mathsf{M}(d^2) \log d)$$
operations in $\mathbb{K}$. Summing the costs to compute $R_i$ for $i = 0, \ldots, m = O(\log d)$, we obtain the claimed bound. □

4.3. Compound reductions

Proposition 9. Let $(P_k)_{0 \le k \le m} \in S_d$ with $d = O(\sqrt{n})$. Then a sequence $(Q_k)_{0 \le k \le \tilde{m}} \in S_{\tilde{d}}$ with $\tilde{d} \le 8 \sqrt{n}$ and
$$Q_0 + \cdots + Q_{\tilde{m}} \equiv P_0 + \cdots + P_m \mod I_\alpha$$
can be computed using $O(\mathsf{M}(n) \log^2 n)$ operations in $\mathbb{K}$.

Proof. We set $\lambda := 64$, so that $\rho = \sqrt{1/2} + \sqrt{4/64} \le 0.958 < 1$. We set $(P_k^{(0)}) := (P_k)$ and define $(P_k^{(j+1)})$ recursively as the reduction of $(P_k^{(j)})$ with respect to $(B_i)_{0 \le i \le m}$, for $j \ge 0$. Let $s$ be the first integer such that $\rho^s d < \sqrt{\lambda n} = 8 \sqrt{n}$. We finally take $(Q_k) := (P_k^{(s)})$. Lemma 7 implies that $(Q_k)$ belongs to $S_{\tilde{d}}$ for some $\tilde{d} \le 8 \sqrt{n}$ and that the stated congruence holds. Since the degree parameters $\rho^j d$ decrease geometrically, the complexity bound follows from Lemma 8 and $d^2 = O(n)$. □
5. Multi-point evaluation

Let $\alpha \in (\mathbb{K}^2)^n$ be a tuple of pairwise distinct points and consider the problem of fast multi-point evaluation of a polynomial $P \in \mathbb{K}[x,y]$ of total degree $O(\sqrt{n})$ at $\alpha$. For simplicity of the exposition, it is convenient to first consider the case when $n$ is a power of two and $\alpha$ is in general position. The core of our method is based on the usual "divide and conquer" paradigm. We say that $\alpha$ is in recursive general position if $\alpha$ is in general position and if $\alpha' := (\alpha_1, \ldots, \alpha_{n/2})$ and $\alpha'' := (\alpha_{n/2+1}, \ldots, \alpha_n)$ are in recursive general position. An empty sequence is in recursive general position. With the notation of section 3, a recursive general position is ensured if the cardinality of $\mathbb{K}$ is strictly larger than $\nu_n$, which is recursively defined by
$$\nu_n := n + 2 \nu_{n/2}$$
for $n \ge 4$ and with $\nu_2 := 2$. Consequently $\nu_n = n \log_2 n$. Whenever $|\mathbb{K}| > n \log_2 n$, we know from section 3 that we can reduce to the case where $\alpha$ is in recursive general position. In this case, we can compute a recursive heterogeneous basis, that is made of a heterogeneous basis for $\alpha$ and recursive heterogeneous bases for $\alpha'$ and $\alpha''$.

Algorithm 2
Input: $\alpha \in (\mathbb{K}^2)^n$ with $n$ a power of two and $(P_k)_{0 \le k \le m} \in S_d$ with $d = O(\sqrt{n})$.
Output: $\big(\sum_{0 \le k \le m} P_k\big)(\alpha)$.
Prerequisite: a recursive heterogeneous basis for $\alpha$; $\alpha$ is in recursive general position.
1. If $n = 1$, then return $\big(\big(\sum_{0 \le k \le m} P_k\big)(\alpha_1)\big)$.
2. Compute the reduction $(Q_k)_{0 \le k \le \tilde{m}} \in S_{\tilde{d}}$ of $(P_k)_{0 \le k \le m}$ with respect to the heterogeneous basis $(B_k)_{0 \le k \le m}$ for $\alpha$, via the proposition of section 4.3.
3. Recursively apply the algorithm to $\alpha'$ and $(Q_k)_{0 \le k \le \tilde{m}}$.
4. Recursively apply the algorithm to $\alpha''$ and $(Q_k)_{0 \le k \le \tilde{m}}$.
5. Return the concatenation of the results of the recursive evaluations.

Theorem 10. Algorithm 2 is correct and runs in time $O(\mathsf{M}(n) \log^3 n)$.

Proof. The algorithm is clearly correct if $n = 1$. If $n > 1$, then we first observe that both $\alpha'$ and $\alpha''$ are in recursive general position by definition. Furthermore, the proposition of section 4.3 ensures that
$$\Big(\sum_{0 \le k \le m} P_k\Big)(\alpha) = \Big(\sum_{0 \le k \le \tilde{m}} Q_k\Big)(\alpha).$$
The concatenation of the results of the recursive applications of the algorithm therefore yields the correct result. As to the complexity bound, the cost of step 2 is bounded by $O(\mathsf{M}(n) \log^2 n)$ according to the proposition of section 4.3.
Hence, the total execution time $\mathsf{C}(n)$ satisfies
$$\mathsf{C}(n) \le 2\, \mathsf{C}(n/2) + O(\mathsf{M}(n) \log^2 n).$$
The desired complexity bound follows by unrolling this recurrence inequality. □

Corollary 11. Consider an arbitrary effective field $\mathbb{K}$ and $\alpha \in (\mathbb{K}^2)^n$, where $n$ is not necessarily a power of two. Modulo precomputations that only depend on $n$ and $\alpha$, we can evaluate any polynomial in $\mathbb{K}[x,y]$ of total degree $O(\sqrt{n})$ at $\alpha$ in time $O(\mathsf{M}(n) \log^3 n)$, up to an additional logarithmic factor when $\mathbb{K}$ is a small finite field.

Proof. Modulo the repetition of at most $n$ more points, we may assume without loss of generality that $n$ is a power of two. If $\mathbb{K}$ is finite then we build an algebraic extension $\mathbb{L}$ of $\mathbb{K}$ of degree $O(\log n)$, so that $|\mathbb{L}| > n \log_2 n$. Multiplying two polynomials in $\mathbb{L}[x]_{< n}$ takes
$$O(\mathsf{M}(n \log n)) = O(\mathsf{M}(n) \log n)$$
operations in $\mathbb{K}$. Consequently, up to introducing an extra logarithmic factor in our complexity bounds, we may assume that $|\mathbb{K}| > n \log_2 n$. Modulo a change of variables of the form of section 3, we may then assume that the transformed tuple $\tilde{\alpha}$ is in recursive general position, and compute a recursive heterogeneous basis.

Given $P \in \mathbb{K}[x,y]$ of total degree $d = O(\sqrt{n})$, we claim that we may compute $\tilde{P}(x, y) := P(x + \lambda y, y)$ using $O(\mathsf{M}(n) \log n)$ operations in $\mathbb{K}$. Indeed, we first decompose
$$P = p_0 + \cdots + p_d,$$
where each $p_i$ is zero or homogeneous of degree $i$. Computing $p_i(x + \lambda y, y)$ then reduces to computing $p_i(x + \lambda, 1)$. This corresponds in turn to a univariate Taylor shift, which takes $O(\mathsf{M}(i) \log i)$ operations in $\mathbb{K}$; see [6], for instance. Finally, we apply the theorem of section 5 to $(\tilde{P}, 0, \ldots, 0)$ and $\tilde{\alpha}$. □

6. Conclusion

We have designed a softly linear time algorithm for bivariate multi-point evaluation, modulo precomputations that only depend on the evaluation points. This result raises several new questions. First, it would be useful to optimize the constant factors involved in the cost analysis, and to study the efficiency of practical implementations. Second, one may investigate extensions of Corollary 11 that take into account the partial degrees of the input polynomial, by applying the theorem of section 5 to inputs of the form $(0, \ldots, 0, P, 0, \ldots, 0)$. A final challenge concerns the possibility to perform all precomputations in sub-quadratic time.
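The change of variables in the corollary above boils down to univariate Taylor shifts $p(u) \mapsto p(u + c)$. A quadratic-time Horner version is sketched below (our own helper name); a fast $O(\mathsf{M}(i) \log i)$ divide-and-conquer variant would be used in the actual algorithm.

```python
def taylor_shift(p, c):
    """p: coefficient list [p0, p1, ...] of p(u); return p(u + c),
    by Horner's rule on polynomials: res <- res * (u + c) + coeff."""
    res = []
    for coeff in reversed(p):
        res = [0] + res                  # multiply by u
        for k in range(len(res) - 1):
            res[k] += c * res[k + 1]     # add c * (previous res)
        res[0] += coeff
    return res
```

For instance, shifting $u^2$ by $1$ yields $(u + 1)^2 = 1 + 2u + u^2$.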
For this, one might use techniques from [2, 19], as in [20]. The problem of achieving an almost linear cost appears to be even harder.

Acknowledgments. We thank the anonymous referees for their useful comments.

Bibliography

[1] J. Berthomieu, G. Lecerf, and G. Quintin. Polynomial root finding over local rings and application to error correcting codes. Appl. Algebra Engrg. Comm. Comput., 24(6):413–443, 2013.
[2] A. Bostan, C.-P. Jeannerod, and É. Schost. Solving structured linear systems with large displacement rank. Theoret. Comput. Sci., 407(1):155–181, 2008.
[3] P. Bürgisser, M. Clausen, and M. A. Shokrollahi. Algebraic Complexity Theory, Grundlehren der mathematischen Wissenschaften 315. Springer-Verlag, 1997.
[4] D. G. Cantor and E. Kaltofen. On fast multiplication of polynomials over arbitrary algebras. Acta Inform., 28:693–701, 1991.
[5] J.-C. Faugère, P. Gianni, D. Lazard, and T. Mora. Efficient computation of zero-dimensional Gröbner bases by change of ordering. J. Symbolic Comput., 16(4):329–344, 1993.
[6] J. von zur Gathen and J. Gerhard. Modern Computer Algebra. Cambridge University Press, 2nd edition, 2002.
[7] D. Harvey and J. van der Hoeven. Faster polynomial multiplication over finite fields using cyclotomic coefficient rings. J. Complexity, 54:101404, 2019.
[8] J. van der Hoeven. Newton's method and FFT trading. J. Symbolic Comput., 45(8):857–878, 2010.
[9] J. van der Hoeven. On the complexity of multivariate polynomial division. In I. S. Kotsireas and E. Martínez-Moro, editors, Applications of Computer Algebra, Springer Proceedings in Mathematics & Statistics 198, pages 447–458. Springer International Publishing, Cham, 2017.
[10] J. van der Hoeven. The Jolly Writer. Your Guide to GNU TeXmacs. Scypress, 2020.
[11] J. van der Hoeven and R. Larrieu. Fast reduction of bivariate polynomials with respect to sufficiently regular Gröbner bases. In C. Arreche, editor, Proc. ISSAC '18, pages 199–206. ACM, New York, NY, USA, 2018.
[12] J. van der Hoeven and G. Lecerf. Fast amortized multi-point evaluation. Technical report, HAL, 2020.
[13] J. van der Hoeven and G. Lecerf. Fast multivariate multi-point evaluation revisited. J. Complexity, 56:101405, 2020.
[14] J. van der Hoeven and G. Lecerf. On the complexity exponent of polynomial system solving. Found. Comput. Math., 21:1–57, 2021.
[15] J. van der Hoeven and É. Schost. Multi-point evaluation in higher dimensions. Appl. Algebra Engrg. Comm. Comput., 24(1):37–52, 2013.
[16] X. Huang and V. Y. Pan. Fast rectangular matrix multiplication and applications. J. Complexity, 14(2):257–299, 1998.
[17] K. S. Kedlaya and C. Umans. Fast polynomial factorization and modular composition. SIAM J. Comput., 40(6):1767–1802, 2011.
[18] D. Le Brigand and J.-J. Risler. Algorithme de Brill–Noether et codes de Goppa. Bull. Soc. Math. France, 116(2):231–253, 1988.
[19] V. Neiger. Fast computation of shifted Popov forms of polynomial matrices via systems of modular polynomial equations. In Proc. ISSAC '16, pages 365–372. ACM, New York, NY, USA, 2016.
[20] V. Neiger, J. Rosenkilde, and G. Solomatov. Generic bivariate multi-point evaluation, interpolation and modular composition with precomputation. In A. Mantzaflaris, editor, Proc. ISSAC '20, pages 388–395. ACM, New York, NY, USA, 2020.
[21] V. Neiger and É. Schost. Computing syzygies in finite dimension using fast linear algebra. J. Complexity, 60:101502, 2020.
[22] M. Nüsken and M. Ziegler. Fast multipoint evaluation of bivariate polynomials. In S. Albers and T. Radzik, editors, ESA 2004, Lecture Notes in Computer Science 3221, pages 544–555. Springer Berlin Heidelberg, 2004.
[23] M. S. Paterson and L. J. Stockmeyer. On the number of nonscalar multiplications necessary to evaluate polynomials. SIAM J. Comput., 2(1):60–66, 1973.
[24] L. Robbiano. Term orderings on the polynomial ring. In B. F. Caviness, editor, EUROCAL '85. European Conference on Computer Algebra. Linz, Austria, April 1–3, 1985. Proceedings. Volume 2: Research Contributions, Lecture Notes in Computer Science 204, pages 513–517. Springer-Verlag Berlin Heidelberg, 1985.