> <\body> <\hide-preamble> >> >> \\\>> <\surround||> <\render-specified-algorithm| >> <|render-specified-algorithm> > |<\doc-note> This paper is part of a project that has received funding from the French \PAgence de l'Innovation de Défense\Q. |>|||<\author-affiliation> CNRS, École polytechnique, Institut Polytechnique de Paris Laboratoire d'informatique de l'École polytechnique (LIX, UMR 7161) 1, rue Honoré d'Estienne d'Orves Bâtiment Alan Turing, CS35003 91120 Palaiseau, France |>>||<\author-affiliation> CNRS, École polytechnique, Institut Polytechnique de Paris Laboratoire d'informatique de l'École polytechnique (LIX, UMR 7161) 1, rue Honoré d'Estienne d'Orves Bâtiment Alan Turing, CS35003 91120 Palaiseau, France |>>|> We prove that the resultant of two \Psufficiently generic\Q bivariate polynomials over a finite field can be computed in quasi-linear expected time, using a randomized algorithm of Las Vegas type. A similar complexity bound is proved for the computation of the lexicographical Gröbner basis for the ideal generated by the two polynomials. |> The efficient computation of resultants is a fundamental problem in elimination theory and for the algebraic resolution of systems of polynomial equations. Given an effective field >, it is well known11> that the resultant of two univariate polynomials \>> of respective degrees e> can be computed using > field operations in>. Here the notation (E)> is an abbreviation for >>, for any expression . Given two bivariate polynomials \> of respective total degrees e>, their resultant > in can be computed in time *e|)>>; see25> and references in that paper. If>, then this corresponds to a complexity exponent of in terms of input/output size. An important open question in algebraic complexity theory is whether this exponent can be lowered. In the present paper, we consider the case when and are \Psufficiently generic\Q. 
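As a concrete point of reference, the classical Euclidean approach to the univariate resultant mentioned above can be sketched in a few lines of Python over a prime field. This is a hypothetical illustration with our own naming, and it uses the quadratic-time remainder sequence, not the softly linear half-gcd algorithm that the text refers to; polynomials are coefficient lists, low degree first.

```python
def poly_mod(a, b, p):
    """Remainder of a modulo b over F_p (b nonzero, p prime)."""
    a = a[:]
    db = len(b) - 1
    inv = pow(b[-1], -1, p)                  # inverse of the leading coefficient of b
    while len(a) - 1 >= db:
        c = a[-1] * inv % p
        if c:
            shift = len(a) - 1 - db
            for j in range(db + 1):
                a[shift + j] = (a[shift + j] - c * b[j]) % p
        a.pop()                              # top coefficient is now zero
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def resultant(a, b, p):
    """Res(a, b) over F_p via Res(a, b) = (-1)^(da*db) * lc(b)^(da-dr) * Res(b, r)."""
    res = 1
    while True:
        da, db = len(a) - 1, len(b) - 1
        if db == 0:                          # b is a nonzero constant
            return res * pow(b[0], da, p) % p
        r = poly_mod(a, b, p)
        if r == [0]:
            return 0                         # a and b share a non-trivial factor
        dr = len(r) - 1
        res = res * pow(b[-1], da - dr, p) % p
        if (da * db) % 2:
            res = (p - res) % p              # sign (-1)^(da*db)
        a, b = b, r
```

For instance, `resultant([100, 0, 1], [99, 1], 101)` computes Res(x^2 - 1, x - 2) over F_101.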
If the coefficients of and are chosen at random in a finite field =\> with sufficiently many elements, then this will be the case with high probability. Under a suitable hypothesis of \Pgrevlex-lex-generic position\Q (defined below) and assuming the (RAM) bit complexity model, our main result is the following theorem: <\theorem> Let \0> be a fixed rational number. Let \> be two polynomials of respective total degrees e> over a finite field =\>. If and are in grevlex-lex-generic position, then > can be computed in expected time <\equation*> O>|)>+*log q|)>, using a randomized algorithm of Las Vegas type. A major result in a similar direction has recently been obtained by Villard. For a general effective field >, and under different genericity assumptions, he proposed an algorithm that computes the resultant in of two polynomials \> of degree> in and degree > in using *d>|)>>> operations in>. Here> is the usual exponent for matrix multiplication (such that two n>> matrices over > can be multiplied using >|)>> operations in >). Le Gall has shown in that one may take\2.373>. Resultants are related to elimination theory. Throughout the paper, we assume that the reader is familiar with the basic theory of Gröbner bases, as found in standard textbooks, so we only briefly recall basic terminology. Let > still be a general effective field. 
Two common monomial orderings on the polynomial ring > are the lexicographical ordering > and the reverse graded lexicographical ordering > defined by <\eqnarray*> *y\x*y>|>|l\i\k|)>>>|*y\x*y>|>|k+l\j\l|)>|)>.>>>> We say that and are in ( ) position if the leading monomials of the reduced Gröbner basis of with respect to > ( >) coincide with the ones that we would obtain when taking symbolic parameters for the coefficients of and ; see Figure<\float|float|tbh> <\big-figure|[two Gröbner-stair diagrams; drawing data omitted]> Illustration of the Gröbner stairs for and in generic position with respect to > (left) and > (right) in the case when and . . We say that and are in when they are both in lex-generic and grevlex-generic position. Notice that we do not require the ideal > to be radical over the algebraic closure of >. The relationship between resultants and Gröbner bases is the following: if and are in lex-generic position, then the reduced Gröbner basis of with respect to > consists of the minimal polynomial of , which is a constant multiple of >, and the polynomial > for some \> with d*e>; see section. Assuming that and are in grevlex-generic position, the recent result from is an algorithm to compute a 14> of the Gröbner basis of with respect to> using |)>> operations in>, while preserving its main properties; see section. Note that the Gröbner basis in its usual representation generically requires *e|)>> storage, so its computation is too expensive for our purposes. 
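In concrete terms, the two monomial orderings defined at the beginning of this section can be realized as sort keys. The following sketch is ours (not taken from the paper); it uses the convention x > y and represents the monomial x^i y^j by the exponent pair (i, j).

```python
def lex_key(m):
    """Lexicographic order with x > y: compare the exponent of x first."""
    return m                          # (i, j) tuples compare componentwise

def grevlex_key(m):
    """Graded reverse lexicographic order with x > y: compare total degree
    first; among equal degrees, a smaller power of y means a larger monomial."""
    i, j = m
    return (i + j, -j)

# all bivariate monomials of total degree <= 2, in each order
mons = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
by_lex = sorted(mons, key=lex_key)
by_grevlex = sorted(mons, key=grevlex_key)
```

Sorting with these keys reproduces the two Gröbner-stair pictures of the figure: grevlex enumerates monomials degree by degree, while lex exhausts all powers of y before increasing the power of x.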
If we were able to rapidly convert a Gröbner basis for > into a new one for>, then this would allow us to compute resultants in softly linear time. Unfortunately, traditional \Pchange-of-ordering\Q algorithms such as the FGLM algorithm rely on linear algebra, and do not run in softly linear time. For our proof of Theorem in section, we will instead rely on a bivariate counterpart of Kedlaya\UUmans' algorithm for modular composition. This technique does not work for general effective fields >, which explains the restriction to the case when =\> is a finite field in Theorem. For=\>, Mehrabi and Schost showed how to compute a basis of > with respect to > by means of a probabilistic algorithm of Monte Carlo type with a nearly optimal bit complexity bound. Their bound is optimal when the coefficients grow as expected in the worst case and their method depends on Kedlaya\UUmans as well. This approach has been generalized to higher dimensions in. The probabilistic aspect comes from the need of a sufficiently generic linear change of the variables. The best known deterministic complexity bound can be found in. Over finite fields, Poteaux and Schost1.2> achieved the computation of bivariate lexicographical bases with bit complexity >*> in the special case when or belongs to >, provided that the underlying characteristic is greater than, and that is radical, yet without genericity assumption. Their method extends to an arbitrary number of variables. The proof of Theorem relies on a sequence of reductions, using a novel combination of classical and more recent techniques. In sections and, we first recall basic notations and the required results from about concise Gröbner bases. Assuming from there on that and are in grevlex-lex-generic position, we recall in section how to reduce the computation of the resultant to the computation of the minimal polynomial of the multiplication endomorphism by in \\/I>>. 
This minimal polynomial can be computed with high probability using Wiedemann's algorithm, provided that we have an algorithm for the transposed map of evaluating a univariate polynomial at in>. This kind of strategy has been used several times before in computer algebra; see also12, section4>. The evaluation of a univariate polynomial at in > can be regarded as a bivariate modular composition problem. Exploiting the fact that multiplication in > is fast (thanks to the concise Gröbner basis representation), we show in section how to reduce this problem to multivariate multipoint evaluation. For this reduction, we mostly follow Kedlaya and Umans, along the same lines as in the proof of3.1> for univariate modular composition; refinements can be found in. At that point, we restrict ourselves to the case when > is a finite field, so as to benefit from Kedlaya and Umans' fast algorithm for multipoint evaluation and its transpose. In section, this allows us to conclude the proof of Theorem. The final section contains a few further notes and directions for future research. In particular, we show that the full lexicographical Gröbner basis can be computed in a similar time as the resultant, with ideas similar to and2>. Finally, we quantify the genericity hypotheses in a more precise manner. Throughout this paper, > is an effective field. Most of our algorithms work in the algebraic complexity models of straight-line programs (SLPs) or computation trees, in which execution times correspond to the required number of field operations in >. The genericity assumptions imply that non-trivial zero tests always fail, so the straight-line program framework actually suffices. In section, where we prove Theorem, we specialize> to become a finite field>. From that point on, we assume a RAM bit complexity model and recall that field operations in> can be performed in softly linear time>. 
Given a finite dimensional >-vector space of ,\,z>|]>> that admits z>*\*z>>> as a basis, it is convenient to mentally represent elements of as column vectors with respect to this basis and linear forms :V\\> as row vectors. Linear maps between two vector spaces of this type correspond to matrices. Writing >> for the set of linear forms :V\\>, the of a linear map W>> is the linear map >:W>\V>> such that >|)>=\|)>> for all V>. If can be computed by a linear SLP over > of length >, then it is well-known13.20> that>> can be computed by an SLP of length +O> V+dim> W|)>>. This \Ptransposition principle\Q is in general easy to put into practice on concrete programs, as exemplified in. Roughly speaking, the program is regarded as a composition of individual steps that are easy to transpose. For the transposition of a composition K>, where V> is another >-linear map, we next apply the usual formula K|)>>>=K>\L>>. Given indeterminates ,\,z>> and positive integers ,\,n>>>, we define <\eqnarray*> ,\,z>|]>,\,n>>>|>|\,\,z>|]>\deg> A\n,\,deg>> A\n>|}>.>>>> Consider a Gröbner basis of an ideal \,\,z>|]>> for some term ordering on the set of monomials >*\*z>>\>*\*z>>>\i,\,i>\\|}>>. We write ,\,z>|]>> for the >-vector space of polynomials \,\,z>|]>> that are reduced with respect to. The reduced monomials in\\,\,z>|]>\z>*\*z>>> form a basis for ,\,z>|]>> and correspond to the monomials \Punder the Gröbner stairs\Q. In other words, > consists of the monomials that are not divisible by the leading monomial of an element in . We also write <\equation*> \:\,\,z>|]>\,\,z>|]>> for the map that computes the normal form of a polynomial \,\,z>|]>> with respect to , the unique polynomial > in ,\,z>|]>> such that \I>. In the remainder of this paper, let \> be two polynomials of total degrees e>, in grevlex-lex-generic position. We write > for the ideal generated by and, and \\/I> for the corresponding quotient algebra. Let us start by recalling several facts from. 
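The transposition formula (L∘K)^T = K^T∘L^T recalled above can be checked on a toy example in matrix form. This is our own minimal illustration, not part of the paper; the matrices are arbitrary.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# K : V -> U and L : U -> W as matrices (small arbitrary example)
K = [[1, 2], [0, 1], [3, 0]]          # 3 x 2
L = [[1, 0, 2], [4, 1, 0]]            # 2 x 3

# transposing the composition reverses the order of the factors
lhs = transpose(matmul(L, K))
rhs = matmul(transpose(K), transpose(L))
```

In the SLP setting, the same reversal applies step by step: each elementary linear operation is transposed individually and the steps are replayed in reverse order, which is why the transposed program has essentially the same length.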
The reduced Gröbner basis >> of with respect to > consists of polynomials >,G>,\,G>\\> with leading monomials ,x*y,x*y,\,x>; see, 2>, and Figure. 4 and Theorem28> Using |)>> operations in >, one may compute the concise representation of the Gröbner basis ,G,\,G|}>> of with respect to >. The leading monomials of > and >> coincide for ,e>, but ,\,G> are not necessarily reduced. Furthermore, ,\,G> are not explicitly written out (since this typically requires *e|)>> coefficients in >); this is why we need to represent in a concise way, while ensuring that no essential information is lost. 5 and Proposition31> Given a polynomial \\> with \\s> and \\t\s>, we may compute its normal form |)>\>> with respect to using *|)>>> operations in >. Recall that|)>> is the unique element in =K>>> with -\|)>\I>>; in particular, > and >>> coincide. By means of4, Theorem28, and Proposition31>, the condition that and are indeed in grevlex-generic position can be checked using|)>> operations in> by running the basis computation and aborting when an irregularity occurs. 6.2 and Theorem33> Assume that the concise Gröbner basis of has been computed. We represent elements in the quotient algebra=\/I> by normal forms in >. Given ,\\\>, we may now compute *\\\> using > operations in >, since *\|)>\2*> and *\|)>\2*>. By what precedes, we may therefore compute *\|)>\>> in time >. In other words, products in > can be computed in softly linear time. As above, and are polynomials in > in grevlex-lex-generic position, >, and \\/I>. Consider the >-linear multiplication map :\\\;a\*a>. It is known that the characteristic polynomial \\> of this map equals *Res,Q|)>>> for some \\>; see for instance2.7> applied with and . When all the zeros of are regular, > is separable and the latter property follows more straightforwardly by comparing the sets of roots of > and of ,Q|)>> via the Stickelberger eigenvalue theorem. 
On the other hand, since and are in lex-generic position, we have <\equation*> deg \=dim> \=d*e. We introduce the minimal polynomial \\> of > as the monic polynomial of minimal degree such that |)>=0>, or, equivalently, \I>. In particular, > coincides with the unique element of the reduced Gröbner basis of for > that belongs to >. <\lemma> With and in grevlex-lex-generic position, we have =\>. In addition, if |\|>\d*e>, then > can be recovered from > using |)>> operations in >. <\proof> We always have\\>. The polynomials > and > coincide whenever =deg \=d*e>; this is the case if and only if and are in lex-generic position. Once > is known, since |\|>\d*e>, we may find a \\> such that |)>\0> using > operations in >, by means of fast multipoint evaluation. Then the above value > can be computed using |)>> further operations, as <\equation*> \=|)>|Res,y|)>,Q,y|)>|)>>. From > and >, we deduce ,Q|)>=\/\>. The above discussion shows that the computation of > reduces to the determination of >. We use Wiedemann's algorithm and the transposition principle for the computation of>, as follows: <\itemize> We first select a random linear form :\\\>. More precisely, assuming that |\|>\4*d*e>, we select a finite subset \> of size \4*d*e> and take > to be a row vector with random entries from . Taking 2*d*e>, we define the map <\eqnarray*> :\>|>|>>|>|>||)>.>>>> We will explain how to evaluate > efficiently in section. Then we compute the sequence <\equation> \E|)>,\E|)>,\,\E|)>|)>>. This task is an extension of the usual \Ppower projection\Q problem to the bivariate case, since it corresponds to one evaluation of the transposed map of >: <\eqnarray*> >:\>>|>|>>>|>|>|\E|)>,\E|)>,\,\E|)>|)>>|)>.>>>> Using the fast variant of the Berlekamp\UMassey algorithm12, Algorithm12.9 combined with the extended half-gcd algorithm>, we determine the linear recurrence relation of smallest order d*e> satisfied by the sequence(). 
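The last step above determines the minimal recurrence of the projected sequence. As a hypothetical illustration of that step, here is the classical quadratic-time Berlekamp–Massey algorithm over F_p (the text uses the fast half-gcd variant instead); the demo sequence and all names are ours.

```python
def berlekamp_massey(s, p):
    """Minimal linear recurrence of the sequence s over F_p (p prime).

    Returns (C, L) with C = [1, c1, ..., cL] such that
    s[n] + c1*s[n-1] + ... + cL*s[n-L] = 0 (mod p) for all n >= L.
    """
    C, B = [1], [1]        # current and previous connection polynomials
    L, m, b = 0, 1, 1      # m: steps since B changed, b: last discrepancy
    for n in range(len(s)):
        # discrepancy of the current recurrence at position n
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, -1, p) % p
        if len(B) + m > len(C):
            C = C + [0] * (len(B) + m - len(C))
        if 2 * L <= n:                     # the recurrence must grow
            T = C[:]
            for i in range(len(B)):
                C[i + m] = (C[i + m] - coef * B[i]) % p
            L, B, b, m = n + 1 - L, T, d, 1
        else:                              # fix C without growing L
            for i in range(len(B)):
                C[i + m] = (C[i + m] - coef * B[i]) % p
            m += 1
    del C[L + 1:]                          # deg C <= L; drop padding zeros
    return C, L

# demo: s_n = 3*s_{n-1} + 2*s_{n-2} over F_101, so C encodes x^2 - 3x - 2
p = 101
seq = [1, 1]
for _ in range(8):
    seq.append((3 * seq[-1] + 2 * seq[-2]) % p)
C, L = berlekamp_massey(seq, p)
```

Reversing C gives the monic minimal polynomial of the recurrence, which is the role played by the polynomial of degree at most d·e in the text.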
Stated otherwise, this means that we compute the monic polynomial>> of minimal degree d*e>> such that <\equation*> \E|)>>|)>=\E|)>>|)>=\=\E|)>*\>|)>=0. The set of polynomials > for which \E|)>*\|)>=0> for ,N-1-deg \> is closed under gcds and clearly contains >. This implies that we always have >\\>. If >=d*e>>, then we are sure that >=\=\=\*Res,Q|)>>. The next subsection recalls why this happens with high probability. The above polynomials > and>> coincide if, and only if, /\|)>|)>\0> for any irreducible factor > of >. Now given an irreducible factor > of >, we have /\|)>\0>. A random linear form :\\\> as above annihilates a fixed non-zero element of > with probability at most >. The probability that > annihilates /\|)>> is therefore bounded by >. We conclude that the probability > that none of the d*e> irreducible factors > of > annihilates /\|)>> is at least <\eqnarray*> >|>|>|)>\|)>\.>>>> The algorithm of the previous subsection and the present probability analysis are summarized in the following lemma. <\lemma> Assume that and are in grevlex-lex-generic position and that |\|>\4*d*e>. Then the computation of> takes an expected number of > operations in > plus an expected number of > computations of sequences> for different values of >. In this section, we show how to efficiently reduce the evaluation of > to multivariate multipoint evaluation. We mostly follow the same Kronecker segmentation strategy as in for modular composition. Given an integer > that will be specified later, let \|>|\>> be the smallest integer such that >\2*d*e>. We define the Kronecker map <\eqnarray*> ,\,z>|]>,\,\>>|>|>>>>|>|>|>,i=1,\,\,>>>> as the restriction to ,\,z>|]>,\,\>> of the unique morphism >:\,\,z>|]>\\> of >-algebras that sends > to >> for ,\>. Notice that is bijective and that both and its inverse can be computed in linear time with respect to the monomial bases. Let \d+e-2> and \e-1> be upper bounds for the degrees in and of elements in >. 
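The Kronecker splitting of exponents just defined, together with the coordinate-by-coordinate grid evaluation used in the next subsection, can be sketched in a toy model. This is our own pure-Python illustration (names, conventions, and the F_p arithmetic are ours): exponents are written in base 2^β with σ digits, and a multivariate polynomial is evaluated on a grid one coordinate at a time.

```python
def kronecker_split(coeffs, beta, sigma):
    """Map a univariate polynomial (coefficient list, low degree first)
    to a sigma-variate one by writing each exponent in base 2**beta."""
    mask = (1 << beta) - 1
    multi = {}
    for i, c in enumerate(coeffs):
        if c == 0:
            continue
        digits, n = [], i
        for _ in range(sigma):
            digits.append(n & mask)
            n >>= beta
        assert n == 0, "exponent does not fit in sigma digits"
        multi[tuple(digits)] = c
    return multi

def kronecker_merge(multi, beta):
    """Inverse map: substitute z_k = z**(2**(beta*(k-1)))."""
    out = {}
    for digits, c in multi.items():
        e = sum(d << (beta * k) for k, d in enumerate(digits))
        out[e] = out.get(e, 0) + c
    return [out.get(i, 0) for i in range(max(out) + 1)]

def eval_grid(A, pts1, pts2, p):
    """Evaluate a bivariate polynomial A = {(i, j): coeff} at all points
    of the grid pts1 x pts2 over F_p, one coordinate at a time."""
    d2 = max(j for _, j in A) + 1
    values = {}
    for a in pts1:
        row = [0] * d2                      # partial evaluation z1 = a
        for (i, j), c in A.items():
            row[j] = (row[j] + c * pow(a, i, p)) % p
        for b in pts2:                      # then Horner evaluation in z2
            v = 0
            for coeff in reversed(row):
                v = (v * b + coeff) % p
            values[(a, b)] = v
    return values

u = [5, 0, 7, 1, 0, 2]                      # 5 + 7 z^2 + z^3 + 2 z^5
m = kronecker_split(u, beta=1, sigma=3)     # partial degrees < 2**beta
vals = eval_grid({(0, 0): 1, (1, 0): 2, (1, 1): 3}, [1, 2], [0, 3], 7)
```

The splitting is exactly a re-indexing of coefficients, which is why both K and its inverse cost linear time; the grid evaluation proceeds by batches of univariate evaluations, mirroring the evaluation-interpolation scheme of the text.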
We may compute <\equation*> g\\>|)>\\+1,D+1>i=1,\,\ using binary powering. By what has been said in section, this requires *log \|)>> operations in >. For any \\>>> and |)>>, we notice that <\eqnarray*> |)>>||,\,g>|)>|)>.>>>> Let \\*-1|)>*D+1>, \ \\*-1|)>*D+1>, and <\eqnarray*> :\,\,z>|]>,\,\>>|>|,N>>>||>|,\,g>|)>.>>>> Note that ,\,g>|)>|)>\N> and ,\,g>|)>|)>\N> indeed hold for \,\,z>|]>,\,\>>. It follows that <\eqnarray*> >||\E\K.>>>> We will compute the map > using evaluation-interpolation. Assume for the time being that |\|>\N> and let ,\,\>\\> be pairwise distinct points. Define \\> for ,N>>. Setting \,\,\>|}>>, \,\,\>|}>>, consider the evaluation map <\eqnarray*> \\>:\,N>>|>|\\>>>||>|,\|)>|)>,\|)>\\\\>,>>>> which is a >-linear bijection. Using traditional univariate evaluation-interpolation in each coordinate10>, both \\>> and its inverse \\>> can be evaluated using SLPs of length *N|)>> over >. In particular, we can compute <\equation> \\E\\>|)>\\\\>i=1,\,\ in time *N*N|)>>. We next define the map <\eqnarray*> >:\,\,z>|]>,\,\>>|>|\\>>>||>||)>,\|)>>,\,>|)>,\|)>>|)>|)>,\|)>\\\\>.>>>> Then we have <\eqnarray*> >||\\>\E>.>>>> Combined with, this yields <\eqnarray*> >||\E\\>\E>\K.>>>> In section, we have reduced the computation of bivariate resultants to the evaluation of the transposed map >> of >. In section, the evaluation of > has been reduced to multivariate multipoint evaluation. We first recall how to perform the latter evaluation using algorithms by Kedlaya and Umans. We next combine the above reductions with the transposition principle and prove our main result. Kedlaya and Umans designed various algorithms for modular composition and multipoint evaluation. They also gave algorithms for the transposed operations. For the computation of > and its transpose, we will rely on the following result, which is a direct consequence of4.5 and Theorem7.6>: <\theorem> Let \0> be a fixed rational number. 
Given \,\,z>|]>,\,\>> and evaluation points ,\,\>\\>> such that =\>>, there exists an algorithm that outputs |)>> for ,\>>, and that runs in time <\equation*> O>+\|)>*log q|)>>|)>. The transpose of the linear map |)>|)>i\\>> can be computed with the same complexity. Note that4.5 and Theorem7.6> are actually stated in a more precise manner and that we deliberately simplified the presentation. We also refer to for some recent refinements. > and >>> Assume from now on that =\>. Before we prove our main result, let us first study the complexity of evaluating the transpose >> of >. Recall that the computation of a sequence as in reduces to one such evaluation. <\proposition> Let \0> be a constant, thought to be small. Assume that max|)>> and that the concise representation of and the sets ,\,\>> of have been precomputed. Then one evaluation of> or of its transpose>> takes time >|)>>. <\proof> We let <\equation*> \\|log log |\> and verify the following bounds: <\eqnarray*> >|||)>>>|>>|>|>|)>>=2+1>*d*e=>>>|=|\|>*|\|>>|>|*\*D*D=O*|)>=>.>>>> Theorem therefore implies that>> and >>> can be computed in time >>. In the previous sections, we have already shown that >, \\>> and > can be computed using >|)>> operations over >. Combining this with(), it follows that one evaluation of > takes time >|)>>. Using our genericity assumptions, we also observed that these computations can be carried out by linear SLPs over >. In view of the transposition principle (see section), it follows that >>, \\>|)>>> and |)>>> can also be computed using >|)>> operations over>. Combining this with(), we conclude that <\eqnarray*> >>|||)>>\E>>\\\>|)>>\\>>>>> can be computed in time >|)>> as well. > Let us first reduce to the case when 4*d*e> and N>. Let =O|)>> be the smallest power of such that \4*d*e> and \N>. Whenever \q>, we replace> by the extension field >>. 
Since =O>, the construction of >> takes bit complexity >*>, by using14.39>, and the overhead involved by this extension only concerns hidden logarithmic factors in the complexity bounds. Consequently, from now on, 4*d*e> and N> are satisfied. The concise representation of the Gröbner basis of for > takes|)>> operations in >, as recalled in section. With > as in the proof of Proposition, we compute ,\,g>> in time >, and then ,\,\>> in time >|)>>, as seen in section. The minimal polynomial > of can therefore be obtained in expected time >|)>>, thanks to the combination of Lemma and Proposition. We finally deduce > from > using Lemma and |)>> further operations in >. Recall from section that > coincides with the unique element in > of the Gröbner basis > of with respect to > and up to a constant multiple with the resultant >>. It is an interesting question whether the complete basis > can actually be computed fast. Now if and are in lex-generic position, then > contains exactly one other element besides >, which is of the form > with =U+U*x+\+U*x>. Let us show how to recover this parametrization > in expected time <\equation*> O>|)>+*log q|)>. One technique for doing this goes back to Kronecker: it performs a first-order deformation in order to compute the minimal polynomial of |)>> modulo ; see for instance in2>. In the present paper, we appeal to another known method relying once more on power projections; as in2>, for instance. For this purpose, let > be the linear form that has successfully led to the computation of> and let |~>\z*\|)>> be the reciprocal of >. We compute the polynomial \\> of degree d*e> such that > and |~>> are coprime and <\equation*> 0>\E|)>|)>*z=||~>>. Now let :\\\> be the linear form that sends to |)>>. This form> can be computed in softly linear time by the transposition principle. Note that the sequence <\equation> \E|)>,\E|)>,\,\E|)>|)>> may be obtained in the same way as, in time >|)>>, thanks to Proposition. 
The sequences and() are related by the identity <\eqnarray*> 0>\E|)>|)>*z>||0>U*\E|)>|)>*z>>||||)>*0>\E|)>|)>*z+z*V|)>,>>>> where \>. It follows that <\equation*> W\z*0>\E|)>|)>*z|)>*|~>=z*U|)>*\+z*V|)>*|~> is a polynomial of degree 2*d*e-1>. Since > and |~>> are coprime, we finally obtain the reciprocal \z*U|)>> of using <\equation*> =\*W mod |~>. This takes > further operations in>. We already noted in section that the grevlex-generic position can be checked using|)>> operations in>. Let us now quantify the genericity of the grevlex-generic position. Write d>P*x*y> and e>Q*x*y>. As in, we define P*z> and Q*z>, which both belong to>. By4> and thanks to the relationship between the Euclidean remainder sequence and the subresultant polynomials (see for instance3>), and are in grevlex-generic position if and only if the following conditions are satisfied: <\enumerate> \0> and \0>, The subresultant coefficient > in degree of and is non-zero for ,e-1>. This subresultant is the determinant of a square matrix of size whose entries are coefficients of and ; see for instance6>. The system admits exactly solutions counting multiplicities, or equivalently the coefficient > of > in \Res,Q|)>> is non-zero; see for instance2.7> applied with and . This coefficient > has total degree d+e> in the coefficients of and . Overall, there exists a polynomial > in |)>d>,|)>e>|]>> of total degree at most <\equation*> 2++d+e=2+e*-2*|2>=d*e+e+2, such that |)>d>,|)>e>|)>\0> implies the grevlex-generic position of and . In order to show that> is not the zero polynomial, we construct the auxiliary sequence of polynomials recursively by =z*A+A>, starting with \0> and \1>, so =z>, =z+1>, =z+2*z>, etc. We verify by induction that =i> for -1>. Taking <\equation> Q\x*AP\y*Q+x*A, contains >, contains >, *Diag Q+A>, and >. In this way ,\,A> is the Euclidean remainder sequence of and . By3> all the subresultant coefficients of and are non-zero. 
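The recursive witness sequence just constructed is easy to check by machine. In the following sketch (ours, not the paper's), the initial values A_{-1} = 0 and A_0 = 1 are inferred from the displayed terms A_1 = z, A_2 = z^2 + 1, A_3 = z^3 + 2z, since the original initial values are garbled in the source; polynomials are coefficient lists in z, low degree first.

```python
def aux_sequence(n):
    """A_0, ..., A_n with A_{i+1} = z*A_i + A_{i-1}, A_{-1} = 0, A_0 = 1."""
    seq, prev = [[1]], []          # A_0 = 1, A_{-1} = 0 (empty list)
    for _ in range(n):
        cur = seq[-1]
        nxt = [0] + cur            # multiply A_i by z
        for k, c in enumerate(prev):
            nxt[k] += c            # add A_{i-1}
        prev = cur
        seq.append(nxt)
    return seq

A = aux_sequence(5)
```

In particular `len(A[i]) == i + 1` for every i, which is the property deg A_i = i used in the induction.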
In addition, since and are homogeneous, it is easy to verify that is a non-zero multiple of >. Consequently > is not identically zero. A sufficient (but not necessary) condition to ensure that and are in lex-generic position is that is separable. The coefficients of have degree d+e> in the coefficients of and . Consequently the discriminant of , written\\|)>d>,|)>e>|]>>, has total degree *>. When |)>d>,|)>e>|)>\0>, the lex-generic position holds. To see that > is not identically zero, it suffices to take *y|)>-1> and y-1>: then is separable for sufficiently generic values of >. Our method can be extended in several directions, as we will briefly outline now. <\itemize> With more work, we expect that the lex-genericity assumption can be relaxed somewhat, to the case when the Gröbner basis for > consists of polynomials of degree > in . Indeed, using linear algebra techniques inspired by Wiedemann's algorithm, the idea would be to recover the characteristic polynomial from the minimal polynomial by determining the multiplicities of the square-free factors. Similarly, and as already noticed in, it might be possible to relax the grevlex-genericity assumption somewhat, to the case when the Gröbner basis for > consists of and polynomials with leading monomials of the form >*y>. In6>, Kedlaya and Umans proposed an algebraic algorithm for multivariate multipoint evaluation (and its transpose, by virtue of the transposition principle). These algorithms are mainly interesting in small characteristic, in which case they can be used instead of the ones from Theorem. These algebraic algorithms can also be generalized to more general finite fields, provided that one has an operation for the Frobenius map and its inverse. Unfortunately, we are not aware of any efficient implementations of Kedlaya\UUmans' algorithms; see for a discussion about the (very large) input sizes for which the complexity bounds would become competitive. 
For the time being, we therefore do not expect Theorem to induce faster practical implementations of bivariate resultants. Nevertheless, it should be noticed that the maps > and >> can both be regarded as black boxes for our algorithm: whenever a faster algorithm for one of these maps does become available, our method might be relevant for practical applications. Using Chinese remaindering and rational reconstruction, the approach of this paper to compute lexicographical Gröbner bases should extend to the case when and have coefficients in >. In the generic case, this would provide an interesting, more direct alternative of Las Vegas type for and. We thank the anonymous referees for their useful comments. <\bibliography|bib|tm-plain|resultant.bib> <\bib-list|25> A.Bostan, G.LecerfÉ.Schost. Tellegen's principle into practice. Hoon Hong, , 37\U44. New York, NY, USA, 2003. ACM. Y.Bouzidi, S.Lazard, G.Moroz, M.Pouget, F.RouillierM.Sagraloff. Solving bivariate systems using rational univariate representations. Complexity>, 2016. P.Bürgisser, M.ClausenM.A.Shokrollahi. , 315. Springer-Verlag, 1997. D.Cox, J.LittleD.O'Shea. . Springer Science & Business Media, 2nd, 2013. C.DurvyeG.Lecerf. A concise proof of the Kronecker polynomial system solver from scratch. , 26(2), 2007. J.-C.Faugère, P.Gaudry, L.HuotG.Renault. Sub-cubic change of ordering for Gröbner basis: a probabilistic approach. K.Nabeshima, , 170\U177. New York, NY, USA, 2014. ACM. J.-C.Faugère, P.Gianni, D.LazardT.Mora. Efficient computation of zero-dimensional Gröbner bases by change of ordering. Symbolic Comput.>, 16(4):329\U344, 1993. J.-C.FaugèreC.Mou. Sparse FGLM algorithms. Symbolic Comput.>, 80:538\U569, 2017. A.Galligo. A propos du théorème de préparation de Weierstrass. F.Norguet, , 409, 543\U579. Springer, Berlin, Heidelberg, 1974. J.vonzur GathenJ.Gerhard. . Cambridge University Press, 3rd, 2013. B.Grenet, J.vander HoevenG.Lecerf. Deterministic root finding over finite fields using Graeffe transforms. 
, 27(3):237\U257, 2016. J.vander HoevenR.Larrieu. Fast Gröbner basis computation and polynomial reduction for generic bivariate ideals. , 30:509\U539, 2019. J.vander HoevenG.Lecerf. Fast multivariate multi-point evaluation revisited. Complexity>, 56:101405, 2020. J.vander HoevenG.Lecerf. On the complexity exponent of polynomial system solving. , 2020. . E.KaltofenV.Shoup. Subquadratic-time factoring of polynomials over finite fields. , 67(223):1179\U1197, 1998. K.S.KedlayaC.Umans. Fast polynomial factorization and modular composition. , 40(6):1767\U1802, 2011. L.Kronecker. Grundzüge einer arithmetischen Theorie der algebraischen Grössen. , 92:1\U122, 1882. D.Lazard. Résolution des systèmes d'équations algébriques. , 15(1):77\U110, 1981. F.Le Gall. Powers of tensors and fast matrix multiplication. K.Nabeshima, , 296\U303. New York, NY, USA, 2014. ACM. G.Lecerf. On the complexity of the Lickteig\URoy subresultant algorithm. Symbolic Comput.>, 2019. E.MehrabiÉ.Schost. A softly optimal Monte Carlo algorithm for solving bivariate polynomial systems over the integers. Complexity>, 34:78\U128, 2016. A.PoteauxÉ.Schost. Modular composition modulo triangular sets and applications. , 22(3):463\U516, 2013. V.Shoup. Fast construction of irreducible polynomials over finite fields. Symbolic Comput.>, 17(5):371\U391, 1994. V.Shoup. Efficient computation of minimal polynomials in algebraic extensions of finite fields. , ISSAC '99, 53\U58. New York, NY, USA, 1999. ACM. G.Villard. On computing the resultant of generic bivariate polynomials. C.Arreche, , 391\U398. New York, NY, USA, 2018. ACM.